OBIEE 11g – A Couple of Bugs/Issues

Oracle released the first major version of OBIEE 11g in August 2010, quite some time after taking the product over from Siebel and after OBIEE 10g had matured. However, there are a couple of issues I identified while working in the UI layer of OBIEE 11g. Some people might notice them, while others might ignore them and expect an intermediate patch to correct these issues later.

I am attaching screenshots of the UI bugs that I truly believe are bugs! I will keep this thread up to date whenever I feel something is not right and on track 🙂







I would rather expect an intelligent cross-hair here, so that one cross-hair on Products would move all the respective presentation columns under that presentation table together, while another cross-hair on an individual column would allow moving it (drag and drop) on its own across the criteria selection pane.


The problem with the mapviewer is that after you click Logout and confirm the logout, the next page still redirects you to the previous page. This is really annoying!


Clicking the "collapsible" link on the prompt layout shifts the Display section to the bottom of the page. This seems to be a scripting issue, and the problem exists in both Internet Explorer and Mozilla Firefox.

Just click on '-' beside "Display" and look at the resulting layout. This is an issue! The extra white space between the prompt definition and the display is not acceptable at all.


I was trying to move a presentation column from the Excluded section to the Slider section in the dashboard layout of the Graph view of a report, and unfortunately I was not able to do it. This is weird and irritating! Ideally, when you drag a column from the Excluded section towards the Sections or Slider region, the Layout pane should auto-scroll to the top to let you drop it there, but that is not the case!! Hence I think this is an issue. I have found it in both Mozilla Firefox 4.0 and IE8.

The current workaround is to drag about halfway up the layout (e.g. to the Measures / Bars and Lines section), scroll a bit, and then drag again. Another workaround is simply to enlarge the browser window ("Full Screen View") and drag the column directly. The ideal fix would probably be to let you expand the layout region with the up-and-down mouse arrow on the section separator just above "Layout" 🙂


When I put two master reports and one detail report side by side on a single dashboard page, the column sort-order indicator spills over into the next attribute in the display, and for the last attribute it falls outside the display region. This makes it a problem to set the sort order while column heading interaction is set to On.

See below:

However, I found the issue only in Mozilla Firefox 4.0; it does not cause any concern in IE8.


Ohh… Oracle did a very good job delivering a highly demanded product carrying loads of expectations, yet plenty of end-user and developer friendliness has been overlooked! Anyway, in today's finding I noticed a "wow" factor… see below! Attribute/column movement around the Slider and Graph Prompt sections is uni-directional 🙂 i.e. you can move columns from left to right but not from right to left!! That essentially means that if you have five attributes in the Graph Prompt section and want the last-added attribute to be first in the list, you have to juggle all the columns! So ridiculous… :)(:

P.S.: This issue has not been observed when moving attributes across the Measures section, nor when moving them in the criteria section.


OBIEE 11g Design Tips – The Best Practices

I had already compiled OBIEE design guidelines and best practices in a quite old previous post, which mostly covered the approaches and strategies adopted for earlier releases (Siebel Analytics 7.8.5 and OBIEE 10g). Refer to my previous post:

Here are the design tips and guidelines to adhere to, as per the Oracle-recommended Metadata Repository Builder's Guide for OBIEE 11g. The excerpts are compiled directly from it below:

Design Tips for the Physical Layer

  • It is recommended that you use table aliases frequently in the Physical layer to
    eliminate extraneous joins, including the following:
    – Eliminate all physical joins that cross dimensions (inter-dimensional circular
    joins) by using aliases.
    – Eliminate all circular joins (intra-dimensional circular joins) in a logical table
    source in the Physical Model by creating physical table aliases.
    For example, say you have a Customer table that can be used to look up
    ship-to addresses, and using a different join, to look up bill-to addresses.
    Avoid the circular joins by aliasing the table in the Physical layer so that there
    is one instance for each purpose, with separate joins.
  • You might import some tables into the Physical layer that you might not use right
    away, but that you do not want to delete. To identify tables that you do want to
    use right away in the Business Model and Mapping layer, you can assign aliases to
    physical tables before mapping them to the business model layer.
  • An opaque view (a Physical layer table that consists of a SELECT statement)
    should be used only if there is no other solution to your modeling problem.
    Ideally, a physical table should be created, or alternatively a materialized view.
    Opaque views prevent the Oracle BI Server from generating its own optimized
    SQL, because they contain fixed SQL statements that are sent to the underlying
    data source.

Design Tips for the Business Model and Mapping Layer

  • Create the business model with one-to-many logical joins between logical
    dimension tables and the fact tables wherever possible. The business model
    should ideally resemble a simple star schema in which each fact table is joined
    directly to its dimensions.
     Every logical fact table must join to at least one logical dimension table. Note that
    when the source is a fully denormalized table or flat file, you must map its
    physical fact columns to one or more logical fact tables, and its physical dimension
    columns to logical dimension tables.
  • Every logical dimension table should have a dimensional hierarchy associated
    with it. This rule holds true even if the hierarchy has only one level, such as a
    scenario dimension {actual, forecast, plan}.
  • Make sure that all appropriate fact sources map to the proper level in the
    hierarchy using aggregation content. You set up aggregation content in the Levels
    tab of the Logical Column dialog for the measure. Note that this is different from
    the Content tab of the Logical Table Source dialog, which is used to specify the
    grain of the source tables to which it maps.
  • Typically, logical fact tables should not contain any keys. The only exception is
    when you need to send Logical SQL queries against the Oracle BI Server from a
    client that requires keys. In this case, you need to expose those keys in both the
    logical fact tables, and in the Presentation layer.
  • Normally, all columns in logical fact tables are aggregated measures, except for
    keys required by external clients, or dummy columns used as a divider. Other
    non-aggregated columns should instead exist in a logical dimension table.
  • In some situations, you might want to have multiple logical fact tables in a single
    business model. For Logical SQL queries, the multiple logical fact tables behave as
    if they are one table. 
  • You can define calculations in either of the following ways:
    – Before the aggregation, in the logical table source. For example:
    sum(col_A * col_B)
    – After the aggregation, in a logical column derived from two other logical
    columns. For example:
    sum(col_A) * sum(col_B)
    You can also define post-aggregation calculations in Answers or in Logical SQL.
  • If you plan to use Oracle Scorecard and Strategy Management, it is a best practice
    to implement at least one time dimension in the Oracle BI repository you are using
    for your KPIs. This action is necessary because you use KPIs in scorecards to
    measure progress and performance over time. Note that an individual scorecard
    automatically picks up any dimension used by KPIs in that scorecard.
  • Aggregate sources should be created as separate logical table sources. For fact
    aggregates, use the Content tab of the Logical Table Source dialog to assign the
    correct logical level to each dimension.
  • Each dimension level in a hierarchy must have a unique level key. Also, each
    logical dimension table must have a unique primary key. Normally, this key is
    also used as the level key for the lowest hierarchy level.
  • Renaming columns in the Business Model and Mapping layer automatically
    creates aliases (synonyms) for Presentation layer columns that have the property
    Use Logical Column Name selected.
  • To prevent problems with aggregate navigation, ensure that each logical level of a
    dimension hierarchy contains the correct value in the field named Number of
    elements at this level. Fact sources are selected on a combination of the fields
    selected as well as the levels in the dimensions to which they map. By adjusting
    these values, you can alter the fact source selected by the Oracle BI Server. See
    “Creating Logical Levels in a Dimension” for more information about setting this
    value.

Modeling Outer Joins
The following guidelines provide tips on how to model outer joins:

  • Due to the nature of outer joins, queries that use them are usually slower. Because
    of this, define outer joins only when necessary. Where possible, use ETL
    techniques to eliminate the need for outer joins in the reporting SQL.
  • Outer joins are always defined in the Business Model and Mapping layer. Physical
    layer joins do not specify inner or outer.
  • You can define outer joins by using logical table joins, or in logical table sources.
    Which type of outer join you use is determined by whether the physical join maps
    to a business model join, or to a logical table source join.
  • Be aware that outer joins in logical table sources are always included in a query,
    even if none of the columns in one of the mapped physical tables are used.
    For example, assume that a logical table source is mapped to physical tables A and
    B. When no outer joins are defined, if physical table A is not required to satisfy a
    request, it is not included in the physical query. However, if the logical table
    source has an outer join defined to table A, table A is still included in the physical
    query, even if only physical table B is required to satisfy the request.
  • If you must define an outer join, try to create two separate dimensions, one that
    uses the outer join and one that does not. Make sure to name the dimension with
    the outer join in a way that clearly identifies it, so that client users can use it as
    little as possible.

Design Tips for the Presentation Layer

  • You can show fewer columns than exist in the Business Model and Mapping layer.
    For example, you can exclude the key columns because they have no business
    meaning.
  • You can organize columns using a different structure from the table structure in
    the Business Model and Mapping layer.
  • You can display column names that are different from the column names in the
    Business Model and Mapping layer.
  • You can set permissions to grant or deny users access to individual subject areas,
    tables, and columns.
  • You can export logical keys to ODBC-based query and reporting tools.
  • You can create multiple subject areas for a single business model.
  • You can create a list of aliases (synonyms) for presentation objects that can be used
    in Logical SQL queries. This feature lets you change presentation column names
    without breaking existing reports.

The following is a list of tips to use when designing the Presentation layer:
  • Because there is no automatic way to synchronize all changes between the
    Business Model and Mapping layer and the Presentation layer, it is best to wait
    until the Business Model and Mapping layer is relatively stable before adding
    customizations in the Presentation layer.
  • There are many ways to create subject areas, such as dragging and dropping the
    entire business model, dragging and dropping incremental pieces of the model, or
    automatically creating subject areas based on logical stars or snowflakes. See
    “Creating Subject Areas” for information about each of these methods. Dragging
    and dropping incrementally works well if certain parts of your business model are
    still changing.
  • It is a best practice to rename objects in the Business Model and Mapping layer
    rather than the Presentation layer, for better maintainability. Giving user-friendly
    names to logical objects rather than presentation objects ensures that the names
    can be reused in multiple subject areas. Also, it ensures that the names persist
    even when you need to delete and re-create subject areas to incorporate changes to
    your business model.
  • Be aware that members in a presentation hierarchy are not visible in the
    Presentation layer. Instead, you can see hierarchy members in Answers.
  • You can use the Administration Tool to update Presentation layer metadata to give
    the appearance of nested folders in Answers. See “Nesting Folders in Answers” for
    more information.
  • When setting up data access security for a large number of objects, consider
    setting object permissions by role rather than setting permissions for individual
    columns. See Chapter 13, “Applying Data Access Security to Repository Objects”
    for details.
  • When setting permissions on presentation objects, you can change the default
    permission by setting the DEFAULT_PRIVILEGES configuration setting in the
    NQSConfig.INI file. 

OBIEE 11g – Important Applications/Services and Directory Paths

With the introduction of OBIEE 11g there is a huge change in the overall navigation of the BI components. As it is hard to remember them all, I am trying to compile everything together in one place, so you can always refer back here 🙂

  • Start BI Services (parameter:  start_all)

..\\<Fusion Middleware Home>\instances\instance1\bifoundation\OracleBIApplication\coreapplication\StartStopServices.cmd start_all

  • Stop BI Services (parameter:  stop_all)

..\\<Fusion Middleware Home>\instances\instance1\bifoundation\OracleBIApplication\coreapplication\StartStopServices.cmd stop_all

  • Sample Application Data and Repository

..\\<Fusion Middleware Home>\instances\instance1\bifoundation\OracleBIServerComponent\coreapplication_obis1\sample

  • BI Repository(RPD)

..\\<Fusion Middleware Home>\instances\instance1\bifoundation\OracleBIServerComponent\coreapplication_obis1\repository

  • Usage tracking and Marketing Segmentation Scripts

..\\<Fusion Middleware Home>\instances\instance1\bifoundation\OracleBIServerComponent\coreapplication_obis1\schema

  • BI Server Cache file

..\\<Fusion Middleware Home>\instances\instance1\bifoundation\OracleBIServerComponent\coreapplication_obis1\cache

  • BI Presentation Catalog

..\\<Fusion Middleware Home>\instances\instance1\bifoundation\OracleBIPresentationServicesComponent\coreapplication_obips1\catalog

  • Clustered Component

..\\<Fusion Middleware Home>\instances\instance1\bifoundation\OracleBIClusterControllerComponent\coreapplication_obiccs1

  • Scheduler Components (Scheduler schema etc.)

..\\<Fusion Middleware Home>\instances\instance1\bifoundation\OracleBISchedulerComponent\coreapplication_obisch1

  • BI ODBC , Javahost and other components

..\\<Fusion Middleware Home>\instances\instance1\bifoundation

  • Start Weblogic Server (startWebLogic.cmd)

..\\<Fusion Middleware Home>\user_projects\domains\bifoundation_domain

  • BI Config Files

..\\<Fusion Middleware Home>\instances\instance1\config

  • Repository Config files (NQSConfig.INI,DBFeatures.INI etc)

..\\<Fusion Middleware Home>\instances\instance1\config\OracleBIServerComponent\coreapplication_obis1

  • Catalog config files(instanceconfig.xml,credentialstore.xml etc)

..\\<Fusion Middleware Home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1

  • Scheduler Catalog config files (instanceconfig.xml)

..\\<Fusion Middleware Home>\instances\instance1\config\OracleBISchedulerComponent\coreapplication_obisch1

  • Cluster Config files

..\\<Fusion Middleware Home>\instances\instance1\config\OracleBIApplication\coreapplication

  • OPMN Startup (parameter:  startall)

..\\<Fusion Middleware Home>\instances\instance1\bin\opmnctl startall

  • OPMN Stop (parameter:  stopall)

..\\<Fusion Middleware Home>\instances\instance1\bin\opmnctl stopall
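As an aside, opmnctl also has a `status` verb that lists each BI system component and its state; a quick session from the instance `bin` directory might look like the sketch below (the component names shown in the comment are the usual defaults, not something from this post):

```
cd <Fusion Middleware Home>\instances\instance1\bin
:: show the state of each BI system component (coreapplication_obis1, _obips1, _obijh1, ...)
opmnctl status
opmnctl startall
opmnctl stopall
```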

  • BI PATH Initialisation

..\\<Fusion Middleware Home>\instances\instance1\bifoundation\OracleBIApplication\coreapplication\setup\bi-init.cmd

  • Weblogic Admin and Managed Server start/stop scripts(startWebLogic.cmd,startManagedWebLogic.cmd etc.)

..\\<Fusion Middleware Home>\user_projects\domains\bifoundation_domain\bin

  • BIPublisher repository and Config files

..\\<Fusion Middleware Home>\user_projects\domains\bifoundation_domain\config\bipublisher

  • FMW Security Config files (system-jazn-data.xml)

..\\<Fusion Middleware Home>\user_projects\domains\bifoundation_domain\config\fmwconfig

  • BI Components Log files

..\\<Fusion Middleware Home>\instances\instance1\diagnostics\logs

  • BI Server Log files (NQserver.log , NQQuery.log etc.)

..\\<Fusion Middleware Home>\instances\instance1\diagnostics\logs\OracleBIServerComponent\coreapplication_obis1

  • BI Presentation Server Log files (sawlog)

..\\<Fusion Middleware Home>\instances\instance1\diagnostics\logs\OracleBIPresentationServicesComponent\coreapplication_obips1

  • Installation Log files

..\\<Fusion Middleware Home>\logs

Playing with CAF (OBIEE Content Accelerator Framework)

I am back after a hectic day-to-day schedule! Now I will try to explore a new OBIEE feature called the Content Accelerator Framework (CAF), version 1. It is a free utility available on OTN and not yet part of the default OBIEE 10g installation. A possible reason could be that this utility arrived midway through the transition between 10g and the upcoming 11g. Hopefully Oracle will provide a polished, automated CAF release bundled with the 11g version to thrash out the legacy web replication method, and will provide a fool-proof, intelligent migration tool.

What is CAF? The Objective and Purpose

It is an RPD and catalog migration utility to replicate/clone and synchronize RPD objects and existing reports from a source OBIEE environment to a target one. It also helps to create duplicate reports and duplicate logical RPD constructs from one OBIEE environment to another, even if the source and target environments have nothing in common beyond a basic logical model at the target. This in turn helps to propagate changes made to the RPD/catalog from one environment to another, and it can even be done per individually selected object.

Overcoming Legacy Web Replication

Webcat replication seems like an old-school method, while CAF sets out to make a revolutionary change in the migration space. The tool is simply amazing, with a wide variety of features. As we all know, webcat replication is not a very robust synchronization method, and there are lots of glitches for anybody who tries to adopt it. Not only that, the configuration and network-based setup needed for a successful webcat replication is critical, and can be a headache for a configuration manager. It is, however, quite good in the cluster-based synchronization arena, just not typically for migration. With the CAF approach, the migration task becomes much simpler and less cumbersome to handle. Since the entire catalog concept changed from the legacy .webcat file after Oracle took over the product, it is predictable that the replication strategy needs to change as well. Let's see what OBIEE 11g comes with!

Why this Blog Thread

Lots of so-called OBIEE gurus have already blogged about CAF. However, my intention is to blog about its nitty-gritty after a deep dive, so that readers can feel how it works across various proof-of-concept approaches.

Installation Prerequisite

1) OBIEE and later (not OBIEE 11g !)

2) JDK 1.6 or later (Note : Make sure the JDK version is correct and correct PATH entry is there as Environment variable parameter)

Installation and Configuration

Download CAF and extract it under the path below. Note: do not rename the temporary subdirectory "C:\OracleBI\TEMP_CAFV1_Install", and the extracted content should sit under the mentioned path and directory to avoid any issues.

Now make sure the OBI Catalog Manager is not running and that all its processes (catalogmanager.exe, java.exe, javaw.exe) are killed; otherwise an unwanted JAR error will be thrown.

Navigate to the directory ..\OracleBI\web\catalogmanager\plugins.

Create a backup of the file and move the backup outside of the OBIEE directory structure.
Delete the original file from ..\OracleBI\web\catalogmanager\plugins.
Now try to open Catalog Manager by selecting Start > All Programs > Oracle Business Intelligence > Catalog Manager. Catalog Manager should not open, and you should receive an error message similar to the following:

Now run the batch file from the extracted directory and continue by pressing "Y".

Now try to open Catalog Manager once again, and you will see two new options added, 'Synchronize' and 'Clone'. This indicates that CAF has been installed and configured successfully.

So let's discuss how it works functionally, from several perspectives, through a step-by-step set of different approaches.

Approach 1 – Cloning Catalog Object from Dev to Test Environment

CAF Cloner Framework

I have created an RPD called dev_rpd, duplicated it, and renamed the copy test_rpd. dev_rpd will be used as the source RPD and test_rpd as the target RPD. Both RPDs have the same subject area and the same metadata objects, configured on the default 'EMP' and 'DEPT' table joins. A report and a prompt were created in 'dev_catalog'; this will be used as the source for the target catalog 'test_catalog'. The online RPD is dev_rpd, and it has been copied to "\OracleBI\server\Repository\dev" for offline access. The other offline RPD is test_rpd, kept under "\OracleBI\server\Repository\test".

Let's configure instanceconfig.xml to point to "test_catalog" and start Presentation Services. Initially it will be a blank catalog. We will now see how to synchronize dev_catalog items into the online test_catalog.
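For reference, in OBIEE 10g the catalog location is driven by the `<CatalogPath>` element in instanceconfig.xml; a minimal sketch, where the path below is just my assumed location for test_catalog:

```xml
<ServerInstance>
  <!-- Point Presentation Services at the target catalog;
       a fresh directory here gives the blank catalog described above -->
  <CatalogPath>C:/OracleBIData/web/catalog/test_catalog</CatalogPath>
</ServerInstance>
```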

Open the Catalog Manager, open dev_catalog offline, and click Clone. That means I want only the Emp report to be cloned into the target blank web catalog, i.e. test_catalog.

This pops up the Content Accelerator wizard. If the target catalog is not online, it will throw the error below:

Otherwise, move to the next screen to select the source and target offline RPD locations and the online target catalog URL prefix (the BI Presentation Services plugin path 'analytics' should be excluded).

Now choose the target subject area into which you want the items migrated.

Click Next. This parses the Emp request/report XML and extracts its columns for display in the left pane, while the right pane shows the presentation tables from the target RPD.

Below, I clicked EMPNO in the left pane and double-clicked EMPNO from the EMP table in the right pane to get the final mapped values. As you can see, hovering the mouse shows the data type of the target mapped column; this data type is extracted and identified from the target RPD automatically.

Any incorrect mapping will throw the error below:

Do all the mappings as below:

Let's change the target request name from the default "/shared/Cloned" to keep it in sync with the source catalog. Check the "Create Dashboard" option to create a new dashboard "emp_dashboard" in the target catalog. (You can skip the dashboard creation option; in that case only the Answers request will be cloned at the target.) Also note that we have a dashboard of the same name, "emp_dashboard", in "dev_catalog", with the same emp report added under it, which we want cloned under the same name into the target catalog "test_catalog".

You can also edit the target request name. Finally, click Finish. This produces the successful cloning log below:

Now let's log in to the online 'test_catalog': I can see a new dashboard created with the same report, and it is also available in Answers.

So the cloning has succeeded from the source development environment to the target test environment.

You can notice two things: the target RPD test_rpd was renamed to test_rpd_backup before the change, and the target catalog report description was changed (marked in red in the image above). How pretty and careful CAF is!!!

To be Continued ……….

Updated as of 17-Feb-2011: I am not resuming this thread anymore, as the new OBIEE 11g comes with enhanced migration capabilities that do not need CAF.

OBIEE Magic Trick !!!

Has anybody ever faced the typical enhancement requirement on existing dashboard reports of enabling column sorting for all tabular reports? I'm sure somebody has. I faced this kind of new requirement from one of my recent clients.

Dashboard Column Sorting

Think of it: you have 500 reports already in place, and it is a very tedious job to go through all of them individually and tick the "Enable column sorting in dashboards" option. Though this is very simple and straightforward, it is too irritating and hectic for a developer, who has to modify each report, clicking 7-8 times from editing through saving each request. Altogether you would be saving some 4000 clicks and a substantial number of minutes 🙂 Don't go for a rapid-fire round!

So what would it be like if we had a tweak that freed developers from this boring, monotonous activity? Yes, I know, some of you are already thinking of a smart change to the stylesheet/msgdb XML templates, or some tweak in instanceconfig.xml.

Fortunately, I came to know that it is achievable through "global sorting of dashboards by changing XML", i.e. by modifying the answerstemplate.XML file under \\OracleBI\Web\msgdb\messages.

Dashboard Column Sorting2

The only thing you have to do is search for the string below in answerstemplate.XML (before the change):

<WebMessage name="kuiCriteriaDefaultViewElements" translate="no">
Dashboard Column Sorting3

And replace it with the below (after the change):

<WebMessage name="kuiCriteriaDefaultViewElements" translate="no"><HTML><view signature="tableView" sortable="true" /></HTML></WebMessage>

Dashboard Column Sorting4

This has been well documented by Oracle; see this link. After making the change and restarting your services, your reports should be sorted automatically, and if you edit a request, the sorting-enable option will be checked by default.

Unfortunately, this is not the end of the story, and that is why this is:

—————— xxxxxx A MAGIC THREAD  xxxxxx ——————-

I implemented the above solution in my OBIEE version and unfortunately it does not work well. So I raised an Oracle SR, and it revealed that this is a product bug, and hence, I presume, a documentation bug. An Enhancement Request (ER) 6619910 is currently with Oracle Support (P3/P4 Bug 6619910) to be considered for review by the development team in a future bug-fix release.

Alas! What about sorting my 500 reports in a moment, then? Yes, there is a way, and no need to keep your fingers crossed 🙂

Open your shared folders using Catalog Manager, in offline mode, and do an "XML Search and Replace" as below:

Replace name="tableView!1" rptViewVers="200510010" with
name="tableView!1" rptViewVers="200510010" sortable="true", then just restart Presentation Services.

Dashboard Column Sorting5
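For those without Catalog Manager handy, the same sweep can be sketched from a shell against an offline copy of the catalog. Everything here is illustrative, not the documented method: GNU sed is assumed, the directory and file names are stand-ins, and the one file created simulates a saved request containing the table-view signature.

```shell
#!/bin/sh
# Work on a backup copy of the catalog's shared folder, never the live one.
CAT_COPY=/tmp/catalog_copy
mkdir -p "$CAT_COPY"

# Stand-in for a saved request containing a table view signature
printf '%s\n' '<view name="tableView!1" rptViewVers="200510010">' > "$CAT_COPY/emp_report"

# Append sortable="true" to every matching table view across the copy
find "$CAT_COPY" -type f -exec sed -i \
  's/name="tableView!1" rptViewVers="200510010"/name="tableView!1" rptViewVers="200510010" sortable="true"/g' {} +

grep -c 'sortable="true"' "$CAT_COPY/emp_report"   # prints 1
```

Run it against the copy, verify a few reports, and only then swap the copy in and restart Presentation Services.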

Voilà… it's all magic, see below! Yes, all reports were sorted dynamically in the dashboard within a couple of seconds. Note that the above method is not documented, so do it at your own risk; the suggestion is to keep a backup of the catalog before performing these steps.

Dashboard Column Sorting6

Now, just think of how much effort you have saved to enable sorting for your dashboard reports… like magic, and less tricky so far! 🙂 🙂 🙂

Caching – A Cacophony in OBIEE Tuning !!!

Well! A very interesting and controversial topic indeed, typically for group discussions. OBIEE is not yet very popular in the OLAP caching arena. There is a hell of a lot of debate humming around BI Server caching and, in effect, the performance-tuning aspects of it. There are real benefits achievable in the performance arena, though at the cost of a couple of trade-offs. At the end of the day the basic rule is: do enable caching, but do it judiciously, and make sure it will not get you into trouble later. So configure caching by weighing the substantial benefits against the trade-offs. Here I focus only on OBI Server (SAS) level caching rather than the OBAW server (SAW).


OBI has the ability to cache query results, such that previously processed requests do not pass through to the database but are served from the filesystem.

Caching Framework

OBIEE Caching Concept


It provides significant performance improvements, as it frees up database resources to perform other tasks, so the DB can entertain other sets of user queries.

The native filesystem should process queries and retrieve results faster than the to-and-fro communication with the database over the network.

Moreover, it conserves network resources by avoiding connections to the database server, so noticeably less network traffic is engaged.

It also reduces the BI Server engine's processing overhead over queues of user requests.

However, all of the above is nice to comprehend but difficult to manage: providing the flexibility of faster responses can leave users with an inconsistent view of the data if it is not properly managed and planned.

This management and planning has several glitches and lots of dependencies; it is called cache purging, as opposed to its counterpart, cache seeding.

Cache Management Techniques

Change the cache-configurable parameters first. Note that if the disk space for cache files exceeds its limit, the least recently used (LRU) entries are discarded and replaced automatically to make room for new ones. The techniques below can be broadly distinguished as manual, automatic, and programmatic ways to manage caching:
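The parameters in question live in the [CACHE] section of NQSConfig.INI. The sketch below shows the usual 10g-style shape with illustrative values; verify the names and defaults against the NQSConfig.INI shipped with your own install.

```ini
# [CACHE] section of NQSConfig.INI -- illustrative values only
[CACHE]
ENABLE = YES;
# cache file location and disk quota; LRU eviction kicks in past the quota
DATA_STORAGE_PATHS = "C:\OracleBIData\cache" 500 MB;
# 0 means unlimited rows per cache entry
MAX_ROWS_PER_CACHE_ENTRY = 100000;
MAX_CACHE_ENTRY_SIZE = 1 MB;
MAX_CACHE_ENTRIES = 1000;
```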

Cache Seeding

There are several means to do this business :

1) Set the global cache parameter on – this will cache queries running against any physical table. By default, every table in the repository is cacheable.

2) Switch on the Cacheable property – this provides a table-level benefit and the extra customisation of choosing which physical tables participate in generating the query cache. E.g. sometimes a user is more interested in caching a giant fact table than tiny dimension tables.

3) Scheduling an iBot – an iBot can be configured and used for cache-seeding purposes. It silently builds the cache without any manual intervention, possibly triggered in a time window after the daily ETL load finishes, or further customised and automated based on the result retrieved from another iBot in a chained request. The second iBot pings the DB to identify whether a database update has been done (after the ETL finishes) before triggering its chained counterpart. This builds the data and query cache for a dashboard request, not for the entire set of tables.

4)  Running nQCmd utility :

Another fantastic way to handle query caching, with no dependency on the ETL load. The overhead is accumulating the superset of the physical queries fired for each request/report and putting them into a single file to pass as a parameter to nQCmd.exe. This should be invoked after the ETL run, ideally by the ETL job itself; it can be done via a remote login to the BI server that triggers nQCmd automatically, so the iBot scheduling-time dependency is avoided altogether.
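A minimal sketch of such a seeding script: the file holds the logical SQL of the reports to warm (the "Sales" subject area and its columns are placeholders for your own), and nqcmd replays it against the BI Server's ODBC DSN. The DSN and credentials in the comment are assumptions:

```sql
-- seed.sql : logical SQL captured from the reports to be cached.
-- "Sales", "Products" and "Measures" are placeholder names.
SELECT Products."Product Name", Measures.Revenue
FROM "Sales";

-- Replayed after the ETL finishes, for example:
--   nqcmd -d AnalyticsWeb -u weblogic -p <password> -s seed.sql -o seed.log
```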

Cache Purging

A very important and crucial mechanism, which must be proven solid to make caching a success story:

1) Manual purging – usually a dedicated, thankless administrative job, and a kind of overhead for a company. It can be done simply by deleting the existing cache TBL files, or by firing a purge from the BI Admin Tool. From the Admin Tool in online mode, purging can be done by category: repository, subject area, user, or physical table.

2) Calling ODBC extension functions – bundled ODBC extension functions such as SAPurgeCacheByDatabase(), SAPurgeCacheByQuery(), SAPurgeCacheByTable(), and SAPurgeAllCache() can be called to free the cache for specific queries, tables, databases, or everything. See the Oracle documentation for details. These should be called via the nQCmd utility, just after the ETL load and before cache seeding, so that no time-window gap or dependency remains.
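Such a purge script is just another logical SQL file run through nqcmd. A sketch, with the database/schema/table names as placeholders for whatever your physical layer actually uses:

```sql
-- purge.sql : run via nqcmd just after the ETL load, before seeding.

-- Purge everything:
Call SAPurgeAllCache();

-- Or be selective; arguments are the physical database, catalog,
-- schema and table names as they appear in the repository
-- ('orcl', 'SALES_DW', 'W_SALES_F' are placeholders):
Call SAPurgeCacheByDatabase( 'orcl' );
Call SAPurgeCacheByTable( 'orcl', NULL, 'SALES_DW', 'W_SALES_F' );
```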

3) Event polling table – a nice, robust concept, but not so nice to manage, and it brings extra overhead. Still, it is a genuinely good technique for making the BI Server aware that a database update has happened so it can carry on with the business of purging. The polling frequency is an important setting and should be predefined carefully to make it a success. The poll table is populated by an automatic database trigger each time a target table is updated; the Analytics server polls the table at set intervals and invalidates the cache entries corresponding to the updated tables.
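Oracle ships DDL for the standard polling table (S_NQ_EPT). A trigger-based population sketch in Oracle PL/SQL is below; W_SALES_F and the 'orcl'/'SALES_DW' names are placeholders, and they must match the physical-layer names in the repository. In practice a single insert at the end of each ETL batch is usually preferable to a trigger that fires on every DML statement:

```sql
-- Statement-level trigger populating S_NQ_EPT each time the
-- (placeholder) fact table W_SALES_F is reloaded.
CREATE OR REPLACE TRIGGER trg_ept_w_sales_f
AFTER INSERT OR UPDATE OR DELETE ON w_sales_f
BEGIN
  INSERT INTO s_nq_ept
    ( update_type, update_ts, database_name,
      catalog_name, schema_name, table_name, other_reserved )
  VALUES
    ( 1, SYSDATE, 'orcl', NULL, 'SALES_DW', 'W_SALES_F', NULL );
END;
/
```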

4) Calling iBots to purge the cache – this can be done by calling a custom script, which in turn calls nQCmd and the ODBC extension functions to free the cache. The catch is that, once again, the iBot needs to be scheduled just after the ETL run and before cache seeding, so you cannot be sure of avoiding stale data if the ETL misses its SLA. This can be mitigated by setting up chained iBots that trigger the purging activity at the proper time. So the ground rule is: never rely on the iBot's scheduled time alone; pick up the status from the database to trigger it.


Failing to purge outdated caches, known as stale caches, can return inaccurate results over time. Think of a situation where your cache is not purged on time after the ETL load finishes: the database has been updated, but the change is never reflected in your cached data, because the seeded cache on the filesystem still holds the old rows. The result is stale, inconsistent, wrong output, which can cause huge confusion in users' minds. Cache retention and refresh timing are therefore important.

Not only that: on a large platform, the cache should be separated and spread across multiple folders/mount points for better I/O utilisation across filesystems. Query cache storage should sit on local, high-performance, highly reliable storage devices. The disk space consumed by cache files can itself become a bottleneck, so use an appropriate cache replacement algorithm and set a sensible policy for the maximum number of cache entries in NQSConfig.ini.

A potential cache-related problem appears in clustered environments: cache files built on one host are not shared with the others, so the same cache gets built on every cluster participant, since there is no guarantee of which BI Server will handle a given request. Until a request is actually processed, a cluster participant cannot determine that another node already has a cache hit for the generated query and that the request need not be processed again. Purging, likewise, must be done on every clustered server. Cluster-aware cache is propagated across clustered servers only when the cache is seeded via iBots, not via ordinary dashboard requests or Answers-based queries.

So, finally: if you understand your business and requirements, you can make a success of cache management using any of the techniques above. But again, be cautious — caching is not the most reliable candidate, at least in my experience. I hope Oracle will keep looking at it to make it robust and non-debatable. Wish you the best and good luck with your cache implementation 🙂 🙂



Presentation Catalog/Services Caching

As we know, two kinds of caching are available in OBIEE. One is the most popular and familiar, BI Server caching; the other is BI web server (Presentation Services) caching. Presentation catalog caching is rarely used, and it rarely attracts attention for performance. However, its most significant use is bypassing the default report-level caching in a real-time BI application, i.e., a typical transactional application where the back-end data changes moment to moment and needs to be reflected in Analytics reports (even though BI is not built for OLTP queries). In such a scenario it makes sense for developers to force a bypass of the web query cache and let Analytics hit the database once again to fetch updated data, if any. This can be done manually by pressing the Refresh link at the bottom left of the browser window.

However, if there are multiple dashboards and, say, user1 wants dashboard1's data updated in real time after the back-end ETL populates, then suggesting that the user click Refresh each time is not the best option: a fickle-minded user1 could see stale data and raise a hue and cry unnecessarily over old numbers. That is easily avoided by using the Bypass Web Server Cache feature on the Advanced tab of the corresponding reports.

Bypass NQQuery Cache

This feature not only gives users the flexibility to customise exactly which reports need real-time data, but also avoids a global change to the cache parameters in instanceconfig.xml, which would apply to the entire platform and might well not be the right choice at the right time!
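On the BI Server side, the per-report equivalent is a request variable entered in the Prefix field of the report's Advanced tab. A minimal sketch using the standard cache-control session variables:

```sql
-- Prefix field on the report's Advanced tab:
-- DISABLE_CACHE_HIT skips the BI Server cache lookup for this request;
-- DISABLE_CACHE_SEED additionally prevents it from creating a new entry.
SET VARIABLE DISABLE_CACHE_HIT = 1, DISABLE_CACHE_SEED = 1;
```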

If you are running multiple instances of Oracle BI Presentation Services (through either replication or clustering), you need to add the following entries to the Oracle BI Presentation Services configuration file (instanceconfig.xml) to control when the Presentation Catalog cache is updated from disk.