OBIEE 11g User Interface (UI) Performance Is Extremely Slow With Internet Explorer 8


Welcome, my friends… welcome back…

I have been struggling for the last couple of weeks with OBIEE 11g (11.1.1.5) and IE 8. The experience was awesome!! …pathetic, actually… It is extremely slow, starting from the login page and continuing through browsing reports, changing report views/layouts, and general ad-hoc use of the application.

The slowness was not observed for standalone desktop/laptop users (at least I have no issue on my personal desktop), but it hit VPN and WAN users over the network 😦 😦

My initial assumption was that it was down to OBIEE 11g and all its extras (animations, Flash plugins, the special chart rendering engine, etc.). I also thought it could be a database problem, as in my case the database was not particularly powerful either, so the overall performance issue could simply be the new OBIEE 11g. It was so irritating to develop reports in that environment that it could easily frustrate any developer. Even the Console and EM windows and other navigation options were painfully slow.

Later, after installing Mozilla Firefox, we saw a significant performance improvement, and the application ran like a charm!! So the main culprit for the entire problem was everyone's favourite browser, "IE 8"!! (Nowadays I don't even open IE unless there is a very specific need.) Switching browsers at least meant I no longer spent 20 minutes just to build a report… huh… A sigh of relief after those painful weeks…

Now, what is the reason behind IE 8's slowness when it interacts with the OBIEE engine via the web server? There is no fighting game between the two killer giants, "Microsoft" and "Oracle", I believe… 🙂

As per Oracle, "IE 8 generates many HTTP 304 return calls and this causes the 11g (11.1.1.3 and later) UI to be slower when compared to the Mozilla Firefox browser."

Hmmm… so HTTP 304 "errors" appear when the poor developer doesn't respect the HTTP protocols completely… not bad… 🙂

So what is this HTTP 304? Let's have a look…

HTTP 304 doesn't really indicate an error; rather, it indicates that the resource at the requested URL has not changed since it was last accessed or cached. The 304 status code should only be returned if the client allows it, which the client specifies in the HTTP request sent to the Web server, e.g. via the If-Modified-Since header.

Systems that cache or index Web resources (such as search engines) often use the 304 response to determine if the information they previously gathered for a particular URL is now out-of-date.
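To make the mechanics concrete, here is a minimal Python sketch (purely illustrative, not OBIEE code) of how a well-behaved server decides between a full 200 response and an empty 304, based on the client's If-Modified-Since header:

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

def respond(last_modified, if_modified_since):
    """Return the status a well-behaved server sends: 304 when the
    client's cached copy is still current, 200 otherwise."""
    if if_modified_since:
        try:
            cached_at = parsedate_to_datetime(if_modified_since)
        except (TypeError, ValueError):
            return 200  # unparsable header: send the full resource
        if last_modified <= cached_at:
            return 304  # Not Modified: empty body, client reuses its cache
    return 200          # full response, body included

modified = datetime(2011, 6, 1, tzinfo=timezone.utc)
fresh_client = format_datetime(datetime(2011, 7, 1, tzinfo=timezone.utc))
print(respond(modified, fresh_client))   # 304
print(respond(modified, None))           # 200
```

IE 8's chattiness is essentially this exchange repeated for every static file on every page load, each one costing a network round trip even though the reply body is empty.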

+++++++++++++++

§  304 errors in the HTTP cycle

Any client (e.g. your Web browser or our CheckUpDown robot) goes through the following cycle when it communicates with the Web server:

  • Obtain an IP address from the IP name of the site (the site URL without the leading ‘http://’). This lookup (conversion of IP name to IP address) is provided by domain name servers (DNSs).
  • Open an IP socket connection to that IP address.
  • Write an HTTP data stream through that socket.
  • Receive an HTTP data stream back from the Web server in response. This data stream contains status codes whose values are determined by the HTTP protocol. Parse this data stream for status codes and other useful information.

This error occurs in the final step above when the client receives an HTTP status code that it recognises as ‘304’.
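That final step, pulling the status code out of the raw HTTP response, can be sketched in a few lines of Python (again purely illustrative):

```python
def parse_status(raw_response):
    """Extract the numeric status code from the first line of an HTTP
    response, e.g. b'HTTP/1.1 304 Not Modified\r\n...' -> 304."""
    status_line = raw_response.split(b"\r\n", 1)[0]   # b'HTTP/1.1 304 Not Modified'
    version, code, *reason = status_line.split(b" ", 2)
    return int(code)

reply = b"HTTP/1.1 304 Not Modified\r\nDate: Mon, 27 Jun 2011 10:00:00 GMT\r\n\r\n"
print(parse_status(reply))  # 304
```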

§  Fixing HTTP 304 errors – General

You should never see this error in your Web browser. It should simply present the Web page from its cache – because it believes the page has not changed since it was last cached. If your client is not a Web browser, then it should equally be able to present the page from a cache. If unable to do so, it is not using the If-Modified-Since or related headers correctly.

§  Fixing HTTP 304 errors

You should never see this error at all for the CheckUpDown service. It indicates defective programming by us or the developers of the Web server software. Either we or they are not respecting HTTP protocols completely.

The 304 status code should only be returned if we allow it in the HTTP data stream we send to the Web server. Because we keep no records of the actual content of your URL Web page, we specifically disallow the 304 response in the HTTP data stream we send.

So if the Web server implements the HTTP protocol properly, it should never send a 304 status code back to us. This response is not what we expect, so we actively report it as an error even though it does not necessarily mean that the Web site is down.

Please contact us directly (email preferred) whenever you encounter 304 errors. Only we can resolve them for you. Unfortunately this may take some time, because we have to analyse the underlying HTTP data streams and may have to liaise with your ISP and the vendor of the Web server software to agree the exact source of the error.

++++++++++++++++

So, enough about HTTP 304… let's see how Oracle proposes to resolve the problem using the "HTTP compression and caching" technique.

Why use Web Server Compression / Caching for OBIEE?

  • Bandwidth savings: Enabling HTTP compression can dramatically improve response latency. Compressing static files and dynamic application responses significantly reduces response times for remote (high-latency) users.

  • Improved request/response latency: Caching makes it possible to suppress the payload of the HTTP reply using the 304 status code. Minimizing round trips over the Web to re-validate cached items can make a huge difference in browser page load times.
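The bandwidth point is easy to demonstrate with the standard Python gzip module. Repetitive text content (HTML/JS/CSS of the kind the analytics UI serves) compresses extremely well; the payload below is made up for illustration:

```python
import gzip

# Markup-like text repeats heavily, so DEFLATE/gzip crushes it.
# (Images are already compressed, which is why the httpd.conf rules
# exclude gif/jpeg/png from compression.)
payload = b"<div class='dashboard'>OBIEE report markup repeats a lot...</div>\n" * 200
compressed = gzip.compress(payload)
print(len(payload), len(compressed), round(len(compressed) / len(payload), 3))
```

For a payload like this the compressed size is a small fraction of the original, which is exactly the saving a high-latency WAN/VPN user feels on every page load.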

This screen shot depicts the flow and where the compression and decompression occur.

The Solution:

1.      To implement HTTP compression/caching, install and configure Oracle HTTP Server (OHS) 11.1.1.x for the bi_serverN Managed Servers (refer to the "OBIEE Enterprise Deployment Guide for Oracle Business Intelligence" document for details).

2.      On the OHS machine, open the HTTP Server configuration file (httpd.conf) for editing.

This file is located in the OHS installation directory.
For example: ORACLE_HOME/Oracle_WT1/instances/instance1/config/OHS/ohs1

3.      In the httpd.conf file, verify that the following directives are included and not commented out:

LoadModule expires_module "${ORACLE_HOME}/ohs/modules/mod_expires.so"

LoadModule deflate_module "${ORACLE_HOME}/ohs/modules/mod_deflate.so"

4.      Add the following lines to the httpd.conf file, below the LoadModule directives section, and restart OHS:

Note: For the Windows platform, you will need to enclose any paths in double quotes (“)

For example:

Alias /analytics "ORACLE_HOME/bifoundation/web/app"

#Please replace ORACLE_HOME with your actual BI ORACLE_HOME path
<Directory "ORACLE_HOME/bifoundation/web/app">
#We don't generate proper cross-server ETags so disable them
FileETag none
SetOutputFilter DEFLATE
# Don't compress images
SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
<FilesMatch "\.(gif|jpeg|png|js|x-javascript|javascript|css)$">
#Enable future expiry of static files
ExpiresActive on
ExpiresDefault "access plus 1 week"
#1 week; this stops the HTTP 304 calls generated by IE 8
Header set Cache-Control "max-age=604800"
</FilesMatch>
DirectoryIndex default.jsp
</Directory>

#Restrict access to WEB-INF
<Location /analytics/WEB-INF>
Order Allow,Deny
Deny from all
</Location>

Note: Make sure you replace the placeholder "ORACLE_HOME" above with the correct path to your BI ORACLE_HOME. For example, my BI Oracle Home path is /Oracle/BIEE11g/Oracle_BI1/bifoundation/web/app.
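As a sanity check on the two patterns used above, here is an illustrative Python translation of the SetEnvIfNoCase and FilesMatch regexes (the sample URLs are made up):

```python
import re

# The two patterns from the httpd.conf above, rewritten as Python regexes.
# SetEnvIfNoCase excludes images from DEFLATE; FilesMatch picks the static
# files that get the one-week Expires/Cache-Control headers.
no_gzip = re.compile(r"\.(?:gif|jpe?g|png)$", re.IGNORECASE)
static  = re.compile(r"\.(gif|jpeg|png|js|x-javascript|javascript|css)$")

assert no_gzip.search("/analytics/res/logo.PNG")       # image: skip compression
assert not no_gzip.search("/analytics/res/common.js")  # script: compress it
assert static.search("/analytics/res/blafp.css")       # cache for one week

# "access plus 1 week" and max-age=604800 describe the same lifetime:
print(7 * 24 * 60 * 60)  # 604800 seconds
```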

 Important Notes:

  • The caching rules above are restricted to the static files found inside the /analytics directory (/web/app).
  • This approach is safer than enabling static-file caching globally.
  • In some customer environments you may not get the full performance gain in IE 8.0; in that case, extend the caching rules to other directories containing static file content.
  • If OHS is installed on a separate dedicated machine, make sure the static files in your BI ORACLE_HOME (../Oracle_BI1/bifoundation/web/app) are accessible to the OHS instance.

Summary

The following screen shot summarizes the before-and-after results and the improvements in response times (in IE7 and IE8) after enabling compression and caching:

So now get your enthusiasm back and play with OBIEE 11g… 😀

But from my observations so far, OBIEE 11g is still quite slow, especially in developer mode when you are editing views in compound layouts, playing with measures, and switching columns across different sections! It is still not the best bet in terms of performance and response time…!

Portions of the above information are excerpted from Oracle Support Doc ID 1312299.1.

In an upcoming post I will walk through a load of OBIEE 11g bugs/issues and unpredictable behaviour, plus a couple of non-working and missing treasures…

Stay tuned until then …..

Yours (:))> DxP

Caching – A Cacophony in OBIEE Tuning !!!


Well! A very interesting and controversial topic indeed, and a typical one for group discussions. OBIEE is not yet very popular in the OLAP caching arena, and there is a hell of a lot of debate humming around BI Server caching and, consequently, around the performance tuning it enables. There are real performance benefits to be had, though at the cost of a couple of trade-offs. At the end of the day the basic rule is: do enable caching, but do it judiciously, and make sure it will not get you into trouble later – configure caching where the benefits substantially outweigh the trade-offs. Here I focus only on OBI Server (SAS) level caching rather than OBAW server (SAW) caching.

Introduction

OBIEE has the ability to cache query results, so that previously processed requests do not pass through to the database; they are served from the file system instead.

Caching Framework

OBIEE Caching Concept

Benefits

Caching provides significant performance improvements: it frees up database resources for other tasks, so the database can serve other users' queries.

The native file system should process and return cached results faster than shuttling data to and from the database over the network.

It also conserves network resources by avoiding connections to the database server, so there is noticeably less network traffic.

Finally, it reduces the BI Server engine's processing overhead across the queue of user requests.

However, all of the above is easy to appreciate and hard to manage: the flexibility of faster responses can leave users with an inconsistent view of the data if caching is not properly managed and planned.

This management and planning, with its many glitches and dependencies, is called cache purging, as opposed to its counterpart, cache seeding.

Cache Management Techniques

Start by adjusting the cache configuration parameters. Note that if the disk space for cache files exceeds its limit, the least recently used (LRU) entries are discarded and replaced automatically to make room for new ones. The techniques below can be broadly distinguished as manual, automatic, and programmatic ways of managing the cache:
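For reference, these parameters live in the [CACHE] section of NQSConfig.INI. The fragment below is an illustrative sketch – the path and sizes are made up, so check your own file for the exact parameter set your version supports:

```ini
[CACHE]
ENABLE = YES;
# Illustrative path and quota; on large platforms, spread storage
# across multiple mount points for better I/O
DATA_STORAGE_PATHS = "/obiee/cache" 500 MB;
MAX_ROWS_PER_CACHE_ENTRY = 100000;
MAX_CACHE_ENTRY_SIZE = 1 MB;
MAX_CACHE_ENTRIES = 1000;
```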

Cache Seeding

There are several ways to go about this:

1) Set the global cache parameter on – this caches queries running against any physical table. By default, every table in the repository is cacheable.

2) Switch on the Cacheable property – this provides table-level control over which physical tables participate in query caching. E.g., a user may care more about caching a giant fact table than tiny dimension tables.

3) Scheduling an iBot – an iBot can be configured for cache seeding. It silently builds the cache without any manual intervention, typically triggered in a time window after the daily ETL load finishes; it can also be further customised and automated based on the result of another iBot in a chained request, where the second iBot pings the DB to confirm the database update (the ETL) has finished before triggering its chained counterpart. This builds the data and query cache for a dashboard request, not for the entire set of tables.

4) Running the nqcmd utility:

Another good way to handle query caching, with no dependency on the ETL load. The overhead is accumulating the superset of the physical queries actually fired for each request/report and putting them in a single file to pass as a parameter to nqcmd.exe. This should be invoked after the ETL run, ideally by the ETL job itself; it can be done by logging in remotely to the BI server and triggering nqcmd automatically, which avoids the iBot scheduling time dependency.
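A sketch of that invocation, wrapped in Python so it can be called from an ETL post-step. The DSN, credentials, and file path are all hypothetical – adjust them for your environment:

```python
import shutil
import subprocess

# Hypothetical values -- replace with your own environment's details.
SEED_FILE = "/obiee/scripts/seed_queries.sql"  # logical SQL gathered from report requests

def build_seed_command():
    """Compose the nqcmd call that replays every statement in SEED_FILE
    against the BI Server, warming the query cache."""
    return ["nqcmd",
            "-d", "AnalyticsWeb",   # BI Server ODBC DSN (assumed name)
            "-u", "weblogic",       # placeholder credentials
            "-p", "password",
            "-s", SEED_FILE]        # script file of logical SQL to run

cmd = build_seed_command()
if shutil.which("nqcmd"):           # only execute where the utility exists
    subprocess.run(cmd, check=True)
else:
    print(" ".join(cmd))            # dry run: show what would be invoked
```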

Cache Purging

A very important and crucial mechanism, which must be proven good and perfect to make caching a success story:

1) Manual purging – usually a dedicated, thankless administration job, and an overhead for a company. It can be done simply by deleting the existing cache TBL files, or by firing a purge from the BI Admin Tool. Purging can be done by category (repository, subject area, user, or physical table) from the Admin Tool in online mode.

2) Calling ODBC extensions – bundled ODBC extension functions such as SAPurgeCacheByDatabase(), SAPurgeCacheByQuery(), SAPurgeCacheByTable(), and SAPurgeAllCache() can be called to free the cache for specific queries, tables, databases, or everything; see the Oracle documentation for details. These should be called via the nqcmd utility, just after the ETL load and before cache seeding, so there is no timing gap or dependency.

3) Event polling table – a nice and robust concept, but not so nice to manage, and it brings extra overhead. It is a good technique for making the BI server aware that a DB update has occurred so that it can proceed with its purging. The cache polling frequency is an important setting and should be chosen carefully. The polling table is populated by an automatic insert DB trigger each time a target DB table is updated; the Analytics server polls that table at set intervals and invalidates the cache entries corresponding to the updated tables.

4) Calling iBots to purge the cache – this can be done by calling a custom script, which in turn calls nqcmd and the ODBC extensions to free the cache. The catch is again that the iBots must be scheduled just after the ETL run and before cache seeding, so you cannot be sure about stale data if the ETL misses its SLA. This too can be handled by chaining iBots to trigger the purge at the proper time. The ground rule: never rely on the iBot's scheduled time alone; pick up the ETL status from the DB to trigger it.
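The ODBC-extension purge from point 2 can be sketched as follows: generate a one-statement script file containing the purge call and hand it to nqcmd with -s. The DSN, credentials, and exact call syntax are assumptions to verify against your environment and the Oracle docs:

```python
import os
import tempfile

# Hypothetical purge statement invoking a bundled ODBC extension function.
PURGE_CALL = "{call SAPurgeAllCache()};\n"

# Write the script file that nqcmd will execute.
fd, script_path = tempfile.mkstemp(suffix=".sql", text=True)
with os.fdopen(fd, "w") as f:
    f.write(PURGE_CALL)

# Then run, just after the ETL load and before re-seeding (placeholders):
#   nqcmd -d AnalyticsWeb -u weblogic -p password -s <script_path>
print(script_path)
```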

Trade-off

Failing to purge outdated caches, known as stale caches, can return inaccurate results over time. Think of a situation where your cache is not purged on time after the ETL load finishes: although the database has been updated, the change is not reflected in your cached data, because the seeded cache on the file system holds outdated data. The result is stale, inconsistent, and wrong output, which can cause huge confusion for users. Cache retention and refresh timing are therefore important.

Not only that: on a large platform, the cache should be separated and spread across multiple folders/mount points for better I/O utilisation across the file systems. Query cache storage should sit on local, high-performance, highly reliable storage devices. The disk space consumed by cache files can itself become a bottleneck, so use a proper cache replacement algorithm along with the maximum-cache-entries policy defined in the NQSConfig.INI file.

A potential cache-related problem arises in clustered environments, where cache files built on one host are not shared with the others. This leads to the same cache being built on every cluster participant, since it cannot be guaranteed which BI server will handle which request; until a request is processed, a cluster participant cannot determine that another clustered BI server already has a cache hit for the generated query and that the request need not be processed again. Purging likewise needs to be done on all clustered servers. The cluster-aware cache is propagated across clustered servers only when the cache is seeded via iBots, not through ordinary dashboard requests or Answers-based queries.

So, finally, if you understand your business and requirements, you can succeed with cache management using any of the above techniques. But again, be cautious: caching is not the most reliable candidate, at least in my experience. I hope Oracle will look into making it robust and non-debatable. Wish you the best, and good luck with your cache implementation 🙂 🙂