Displaying items by tag: OBIEE

This article continues our previous article, Adding Parameters and List of Values.

Make a copy of OrderDM as OrderDetailDM by clicking the 'Save As' icon.

Add a new SQL data set for the Order Items data using the following SQL query.
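The query would look something along these lines; the ORDER_ITEMS table and its columns are assumptions and may differ in your schema:

```sql
-- Illustrative Order Items query; table and column names are assumed.
SELECT ORDER_ID,
       PRODUCT_ID,
       QUANTITY,
       UNIT_PRICE
  FROM ORDER_ITEMS
 ORDER BY ORDER_ID
```

Because the ORDERS data set already contains an ORDER_ID element, the data model editor exposes this data set's column as ORDER_ID1, which is the name used in the link step later in this article.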

 

Rename the group G1 to ORDERITEM from the Structure tab.

Navigate to the Structure tab and click the Output section to view the XML structure. There is no relationship between the ORDERS and ORDERITEM groups in the XML; these are Multipart Unrelated Data Sets.


In real-world scenarios, we often need a master-detail (parent-child) relationship between two data sets. This relationship is defined through a data link between the two data sets, where one data set is defined as the master/parent and the other as the detail/child; this is referred to as Multipart Related Data Sets.

There are two ways to define a data link: an element-level link and a group-level link.

Element Level Links: An element-level link creates the join from a column of one query to a column of the other query. This is the preferred way of defining the master-detail relationship.

In this example, Orders is the master data set and OrderItems is the detail data set. Both are linked by OrderID; there are multiple order items for a given order.

Click the >> icon on the ORDER_ID element of the ORDERS data set. This opens the action window. Click the Create Link action; in the Create Link dialog, select the ORDER_ID1 element from the list and click OK to complete the operation.

This creates a data link on the OrderID column between Orders and OrderItems. Hover the mouse over the i icon to view the relationship.


Navigate to the Structure tab and click the Output section to view the XML structure. The ORDERITEM group now appears one level below ORDERS.



View the sample XML to verify the master-detail relationship: there are multiple ORDERITEM nodes for each ORDER node.
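A fragment of the linked output would look roughly like this (element and group names are illustrative; the generated names depend on your data model):

```xml
<DATA_DS>
  <ORDERS>
    <ORDER_ID>101</ORDER_ID>
    <ORDERITEM>
      <PRODUCT_ID>10</PRODUCT_ID>
      <QUANTITY>2</QUANTITY>
    </ORDERITEM>
    <ORDERITEM>
      <PRODUCT_ID>20</PRODUCT_ID>
      <QUANTITY>1</QUANTITY>
    </ORDERITEM>
  </ORDERS>
</DATA_DS>
```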

 

Creating Subgroups

This is another way to regroup the data at a higher level, or to add the required hierarchy to flat data.

Look at the SQL query of the Orders data set. It lists the orders for the customers; there is a 1:M relationship between Customers and Orders.

Let's add a higher-level group at the Customer level, but before this we need to add a proper ORDER BY clause to sort the data on CUSTOMER_ID and ORDER_ID.
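As a sketch (the CUSTOMERS and ORDERS table and column names are assumptions based on this example), the Orders query with the required sort could look like:

```sql
-- Illustrative Orders query with the sort required for regrouping;
-- table and column names are assumptions.
SELECT c.CUSTOMER_ID,
       c.NAME,
       o.ORDER_ID,
       o.ORDER_TOTAL
  FROM CUSTOMERS c
  JOIN ORDERS o ON o.CUSTOMER_ID = c.CUSTOMER_ID
 ORDER BY c.CUSTOMER_ID, o.ORDER_ID
```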

Select the ORDERS data set in the Diagram pane and click the >> icon at the CUSTOMER_ID element level. Click the Group by action link from the action popup. This adds a new group with the CUSTOMER_ID element at a higher level.


Click the >> icon at the NAME element level and, from the action menu, click Move selected element to Parent Group.

This moves the customer name into the Customer group. Update the tag name from the Structure tab to make it more readable.

 


Observe the new hierarchy. There are three levels:


==> CUSTOMERS

      ==> ORDER

            ==> ORDERITEM

 

Generate the Sample XML and review the XML hierarchy.



That's all for now. We will cover a few more Data Model features in the next article.

Happy Reporting.

This article is part of the BI Publisher for Beginners series. You will see many more articles on Data Model design, template design, administration, web services, and the scheduler. The contents are drawn from BI Publisher training documents from Adiva Consulting. If you have any questions regarding these articles or the training program, please contact us.

Published in BI Publisher
Tuesday, 07 August 2012 15:10

RTF Template : Working with variables

Let's see how we can use variables to store temporary data or use them in calculations. This is achieved using the "xdoxslt:" functions, which are BI Publisher extensions of the standard XSLT functions.

Use the xdoxslt:set_variable() function to set/initialize a variable and the xdoxslt:get_variable() function to get the variable's value. $_XDOCTX is the system variable that holds the XDO context.

 

 

/* initialize a variable */

<?xdoxslt:set_variable($_XDOCTX, 'counter', 0)?>

/* update the variable's value by adding MY_CNT (an XML element) to the current value */

<?xdoxslt:set_variable($_XDOCTX, 'counter', xdoxslt:get_variable($_XDOCTX, 'counter') + MY_CNT)?>

/* access the variable */

<?xdoxslt:get_variable($_XDOCTX, 'counter')?>

/* working in a loop */

<?xdoxslt:set_variable($_XDOCTX, 'counter', 0)?>

<?for-each:G1?>

/* increment the counter */

<?xdoxslt:set_variable($_XDOCTX, 'counter', xdoxslt:get_variable($_XDOCTX, 'counter') + 1)?>

<?end for-each?>

<?xdoxslt:get_variable($_XDOCTX, 'counter')?>
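The same pattern can keep a running total. Assuming each G1 row has an AMOUNT element (an assumption for illustration), a sequence of form fields like this accumulates and prints it on every row:

```
<?xdoxslt:set_variable($_XDOCTX, 'total', 0)?>
<?for-each:G1?>
<?xdoxslt:set_variable($_XDOCTX, 'total', xdoxslt:get_variable($_XDOCTX, 'total') + AMOUNT)?>
<?AMOUNT?>  running total: <?xdoxslt:get_variable($_XDOCTX, 'total')?>
<?end for-each?>
```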

 

Hope this helps in understanding these variable tricks.

Need help with RTF template design or a BI Publisher implementation? Please contact us.

 

  

Published in BI Publisher
Tuesday, 07 August 2012 14:33

How to re-group XML data in RTF Template

If we need grouping, it is always advisable to generate the XML in the required hierarchy; this keeps the RTF design simple and performs better. But there are situations where we do not have control over the XML structure and need to work with flat XML. Let's see how we can create multiple nested groupings in an RTF template.

I am using the following simple flat XML, generated from the scott.emp and scott.dept tables.
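A representative fragment of such flat XML (values from the classic scott schema; element names are illustrative) would be:

```xml
<ROWSET>
  <ROW>
    <EMPNO>7369</EMPNO>
    <ENAME>SMITH</ENAME>
    <JOB>CLERK</JOB>
    <SAL>800</SAL>
    <DEPTNO>20</DEPTNO>
    <DNAME>RESEARCH</DNAME>
  </ROW>
  <ROW>
    <EMPNO>7499</EMPNO>
    <ENAME>ALLEN</ENAME>
    <JOB>SALESMAN</JOB>
    <SAL>1600</SAL>
    <DEPTNO>30</DEPTNO>
    <DNAME>SALES</DNAME>
  </ROW>
</ROWSET>
```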

Let's first design a simple table with the default group.

It generates a simple listing report.

 

Let's add a group based on DEPTNO. This is achieved by adding the following loop

  <?for-each-group:ROW;./DEPTNO?> --- <?end for-each-group?>

and iterating over the detail records within the current group.

This introduces the Dept group.
Let’s add another nested group for Job within the department group.
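Putting both groups together, the template loops would look roughly like this (DNAME, JOB, ENAME, and SAL are elements from the sample XML; the exact fields depend on your layout):

```
<?for-each-group:ROW;./DEPTNO?>
<?DNAME?>
  <?for-each-group:current-group();./JOB?>
  <?JOB?>
    <?for-each:current-group()?>
    <?ENAME?>  <?SAL?>
    <?end for-each?>
  <?end for-each-group?>
<?end for-each-group?>
```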
Hope this will help in designing those nasty nested groups based on flat XML data.
Need help with RTF template design or a BI Publisher implementation? Please contact us.
Published in BI Publisher
Saturday, 28 July 2012 10:00

BI Apps (7.9.6.3)

BI Apps Technical Training

Adiva Consulting specializes in Business Intelligence and Fusion Middleware solutions. Its online training model enables international participants to receive BI Apps training without leaving their desks. Adiva Consulting offers training in BI Apps Technical and other related concepts through real-time, instructor-led live online training.

Duration: 36 Hours (Weekend training on Saturday and Sunday).

Fee: $599.
Please contact us.

Course Content:
-----------------------------------------

BI Apps 7.9.6.3 overview

  • Architecture
  • Overview of each of the architecture components
  • Overview of available functionalities

ETL Layer (Informatica Mappings)

  • Informatica in BI Apps Ecosystem
  • Setups
  • Mapping/Workflow types in BI Apps
  • Mapping Methodology
  • Full load and incremental loads using update strategy

ETL Layer ( Datawarehouse Application Console)

  • DAC Metadata
  • Importing BI Apps Metadata
  • Configuring a Datawarehouse
  • Maintaining Tasks, Task Groups, Tables, Indices, Subject Areas and Execution Plans
  • Running Informatica workflows from DAC using Execution Plans
  • Controlling Full and Incremental loads from execution plans

Reporting Layer

  • Overview of out of the box rpd
  • RPD Settings
  • Data and object based security
  • Dashboards Overview

Customization

  • Adding new facts and dimensions in warehouse
  • Extending existing facts and dimensions in warehouse
  • Adding or extending new mappings
  • Customizing DAC to run new fact and dimension mappings
  • Adding new facts and dimensions in rpd and dashboards

Overview of functional areas (Star Data Model and Subject Areas)

  • Financials
  • Supply Chain
  • Human Resources
Published in Training
Saturday, 28 July 2012 09:55

BI Publisher Training

Oracle BI Publisher 11g (11.1.1.6.0)

Oracle BI Publisher is an enterprise reporting solution for production reporting. It is available as part of the OBIEE suite and as a standalone enterprise release, and it is integrated with most Oracle enterprise applications, such as Oracle Fusion Applications, Oracle E-Business Suite (EBS 11i, R12), PeopleSoft, Siebel CRM, JD Edwards, and many more.

During this course, participants get an overview of the BI Publisher product, its setup, configuration, administration, report development, and report scheduling.

Course Content

PART I: Oracle BI Publisher 11g Overview 

1.    Overview of BI Publisher Enterprise Edition
2.    Describing the Oracle BI Publisher Applications Architecture and components
3.    Overview of Different Report Component (Layout, Data, Translation)
4.    BI Publisher Desktop (Word Template Designer)
5.    Designing First Report using Word Template Builder

PART II: Data Model and Report Designer Editor

1.    Overview of Data Model , Data Model Editor
2.    Data Model Components.
3.    Working with different Data Source options.
4.    Designing  Simple and Master Detail Data Model 
5.    Working with Parameters, LOVs
6.    Working with  Event Triggers, Substitute variables, Dynamic SQL query source
7.    Understanding Data Model Properties.
8.    Understanding Groups, Functions, Links, Aggregation
9.    Understanding Bursting Definition.    
10.  Overview of Report Designer and Report creation process.
11.  Understanding different Layout types (RTF, PDF, EText, Excel, Interactive)
12.  Setting Run Time Properties
13.  Style Templates & Sub Templates
14.  Understanding Translation

PART III : Designing RTF  Template using Word Template Designer

1.    Create Simple Listing Report
2.    Working with Groups
3.    Working with Tables, Forms and Charts
4.    Implementing conditional logic to handle complex layout
5.    Formatting Numbers, Date and Currencies.
6.    Defining Headers and Footers
7.    Inserting Images and Charts
8.    Implementing XPATH Commands in RTF
9.    Working with Sub-Templates
10.  Advance Report Layout 

PART IV : Design Interactive Reports Using Layout Builder

1.    Overview of BI Publisher Layouts
2.    Designing Simple interactive Report
3.    Designing a Master Detail Report Using Charts

PART V : Scheduling the Reports

1.    Overview of BI Publisher Scheduler
2.    Creating Report Jobs
3.    Viewing and Managing Report Jobs
4.    Viewing and Managing Report History

PART VI :BI Publisher Administration

1.    Introduction to Oracle BI Publisher Administration
2.    Understanding the Security Model
3.    Setting Up Data Sources
4.    Setting Up Delivery Destinations
5.    Defining Run-Time Configurations
6.    Diagnostics and Performance Monitoring
7.    Moving Catalog Objects Between Environments
8.    Overview of WebLogic Console and Enterprise Manager (EM) to administer BI Publisher

Who can benefit?

  • BI Consultants
  • Oracle Application Consultants [Technical & Functional]
  • Data Warehouse consultants
  • Application Developers

Prerequisites:

  • Basic understanding of SQL
  • Basic understanding of XML
Published in Training
Wednesday, 18 April 2012 10:42

OBIEE Server Administration

OBIEE developers tend to make small changes in the rpd file and unit test them, and restarting the whole BI server stack (WebLogic Server + BI Server) is a time-consuming process. OBIEE server administration can be done in a quick and simple way using the "opmnctl" command-line utility. This is mainly for developers, not for administrators.


Steps to make a rpd change and load it to BI Server for testing:

·         Make changes in the rpd file

·         Go to  D:\OBIEE11G\instances\instance1\config\OracleBIServerComponent\coreapplication_obis1

·         Open NQSConfig.INI and replace

[REPOSITORY]
Star = older.rpd, DEFAULT;

with

[REPOSITORY]
Star = new.rpd, DEFAULT;

·         Save the NQSConfig.INI and close

·         Start > run > type cmd

·         Go to the path D:\OBIEE11G\instances\instance1\bin

·         Issue the command "opmnctl status". If any of the services is alive, issue "opmnctl stopall" followed by "opmnctl startall". This restarts the services with the new rpd loaded into the BI Server.

·         If "opmnctl status" shows the services as stopped, just issue "opmnctl startall" to start all the BI services.
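The NQSConfig.INI swap above can be scripted. A minimal sketch using sed, shown here against a throwaway copy in /tmp since the real path is install-specific:

```shell
# Create a throwaway copy of the [REPOSITORY] section for illustration.
printf '[REPOSITORY]\nStar = older.rpd, DEFAULT;\n' > /tmp/NQSConfig.INI
# Point the Star entry at the new rpd file.
sed -i 's/^Star = older\.rpd, DEFAULT;/Star = new.rpd, DEFAULT;/' /tmp/NQSConfig.INI
cat /tmp/NQSConfig.INI
```

After editing the real file, run the opmnctl stopall/startall sequence described above to load the new rpd.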


Published in OBIEE
Wednesday, 18 April 2012 10:17

OBIEE Variables

OBIEE variable types :

·        Repository Variables

o   Static Variables

o   Dynamic Variables

·        Session Variables

o   System

§  Security

§  Others

o   Non-System


To create any variable, click Manage > Variables.

This opens the Variable Manager, as shown below.

Repository Variable (Static)

·         Initialized only when the BI Server is started

·         Holds a hard-coded value such as a string or number

·         The value can only be changed by editing the rpd file

Repository Variable (Dynamic)

·         Initialized when the BI Server is started

·         Assigned a value dynamically, based on the result of a query

·         The value depends on the SQL provided in the Initialization Block

Create a dynamic variable and provide a Name and Default Initializer. Create a new initialization block by clicking the New button in the above screen.

Provide the Initialization Block variable name.

Click the "Edit Data Source" button and provide the SQL to be used for the variable, e.g.

select lower(sys_context('USERENV','SESSION_USER'))||
       '@'||
       lower(sys_context('USERENV','DB_NAME'))
  from dual;

Provide the connection pool name. Note: a separate connection pool should be created for initialization blocks to execute the SQL used for fetching the variable's data.

Test the SQL by clicking the Test button.

Session Variable (System)

·         Initialized when an Analytics web user logs in (creates a new session)

·         Initialization depends on an initialization block, similar to a dynamic repository variable

·         Only system reserved variables can be created; the list follows. Some of these, such as USER, ROLES, and PERMISSIONS, are security-related session variables.

USER: Holds the value the user enters as his or her logon name. This variable is typically populated from the LDAP profile of the user.

PROXY: Holds the name of the proxy user. A proxy user is a user that has been authorized to act for another user.

GROUP: Contains the groups to which the user belongs. Exists only for compatibility with previous releases. Legacy groups are mapped to application roles automatically.

WEBGROUPS: Specifies the Catalog groups (Presentation Services groups) to which the user belongs, if any. Note that the recommended practice is to use application roles rather than Catalog groups.

USERGUID: Contains the global unique identifier (GUID) of the user, typically populated from the LDAP profile of the user.

ROLES: Contains the application roles to which the user belongs.

ROLEGUIDS: Contains the global unique identifiers (GUIDs) for the application roles to which the user belongs. GUIDs for application roles are the same as the application role names.

PERMISSIONS: Contains the permissions held by the user, such as oracle.bi.server.impersonateUser or oracle.bi.server.manageRepository.

DISPLAYNAME: Used for Oracle BI Presentation Services. It contains the name that is displayed to the user in the greeting in the Oracle BI Presentation Services user interface. It is also saved as the author field for catalog objects. This variable is typically populated from the LDAP profile of the user.

PORTALPATH: Used for Oracle BI Presentation Services. It identifies the default dashboard the user sees when logging in (the user can override this preference after logging on).

LOGLEVEL: The value of LOGLEVEL (a number between 0 and 5) determines the logging level that the Oracle BI Server uses for user queries. This system session variable overrides a variable defined in the Users object in the Administration Tool. If the administrator user (defined upon install) has a logging level defined as 4 and the session variable LOGLEVEL defined in the repository has a value of 0 (zero), the value of 0 applies.

REQUESTKEY: Used for Oracle BI Presentation Services. Any users with the same nonblank request key share the same Oracle BI Presentation Services cache entries. This tells Oracle BI Presentation Services that these users have identical content filters and security in the Oracle BI Server. Sharing Oracle BI Presentation Services cache entries is a way to minimize unnecessary communication with the Oracle BI Server.

SKIN: Determines certain elements of the look and feel of the Oracle BI Presentation Services user interface. The user can alter some elements of the user interface by picking a style when logged on to Oracle BI Presentation Services. The SKIN variable points to an Oracle BI Presentation Services folder that contains the nonalterable elements (for example, figures such as GIF files). Such directories begin with sk_. For example, if a folder were called sk_companyx, the SKIN variable would be set to companyx.

DESCRIPTION: Contains a description of the user, typically populated from the LDAP profile of the user.

USERLOCALE: Contains the locale of the user, typically populated from the LDAP profile of the user.

DISABLE_CACHE_HIT: Used to enable or disable Oracle BI Server result cache hits. This variable has a possible value of 0 or 1.

DISABLE_CACHE_SEED: Used to enable or disable Oracle BI Server result cache seeding. This variable has a possible value of 0 or 1.

DISABLE_SUBREQUEST_CACHE: Used to enable or disable Oracle BI Server subrequest cache hits and seeding. This variable has a possible value of 0 or 1.

SELECT_PHYSICAL: Identifies the query as a SELECT_PHYSICAL query.

DISABLE_PLAN_CACHE_HIT: Used to enable or disable Oracle BI Server plan cache hits. This variable has a possible value of 0 or 1.

DISABLE_PLAN_CACHE_SEED: Used to enable or disable Oracle BI Server plan cache seeding. This variable has a possible value of 0 or 1.

TIMEZONE: Contains the time zone of the user, typically populated from the LDAP profile of the user.

Session Variable (Non-System)

·         Initialized when an Analytics web user logs in (creates a new session)

·         Initialization depends on an initialization block, similar to a dynamic repository variable

Row wise initialization of variables

If a variable is marked for row-wise initialization, it returns an array of values. Below are the steps. E.g., if we want a variable to store the last 10 years, here is how we create it:

Create a variable and click New to create a new initialization block.

Provide a name for the initialization block and click “Edit Data Source”

Provide a SQL query that returns multiple values. Set the connection pool and test the SQL. Save this and exit the Variable Manager.
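For the "last 10 years" example, one possible Oracle query (illustrative) is:

```sql
-- Returns one row per year for the last 10 years (Oracle syntax).
SELECT EXTRACT(YEAR FROM SYSDATE) - LEVEL + 1 AS YEAR_VALUE
  FROM dual
CONNECT BY LEVEL <= 10
```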

Reopen the Variable Manager and open the initialization block. Next, click "Edit Data Target".

Select the variable, check "Row wise initialization", and click OK.

 

On the initialization block page, click Test to check that the variable array is initialized and returns values.

Access methods for the variable types
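The access methods can be sketched as follows (variable names here are examples):

```
-- Repository variable (static or dynamic), e.g. in a filter or Expression Builder:
VALUEOF("CurrentYear")

-- Session variable (system or non-system):
VALUEOF(NQ_SESSION."USER")
```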


Published in OBIEE
Wednesday, 18 April 2012 08:56

Cache Management

Cache is the most important feature for obtaining optimized report and dashboard performance. However, when managed poorly, it can lead to stale data appearing in reports. Hence the cache must be purged at appropriate points, e.g. after the ETL has completed the data load to the warehouse tables.

Cache management in OBIEE is done from multiple points, determined by who is purging it.


Cache Access Point: OBIEE Role

Online RPD file: Developer

Analysis Page: Report end users and developers

Scripts: Administrators

Enabling cache

Caching must be enabled before it can be managed. To do this, open the NQSConfig file and edit the following:

###############################################################################
#
#  Query Result Cache Section
#
###############################################################################
[CACHE]
ENABLE = YES;  # This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager

As this option is controlled from Enterprise Manager, restarting the BI service will reset it to whatever is defined in Enterprise Manager. Hence, to set this option to YES/NO permanently, it must be done in EM. Navigation: Open EM > Business Intelligence > Coreapplication > Capacity Management > Performance.

See Screenshot

  Deleting cache from RPD

Open the rpd in online mode.

Navigate to Manage > Cache.

The Cache Manager screen shows the various SQL statements stored in the cache.

One may view the SQL related to each cache entry using "Show SQL".

The "Show SQL" option opens a window with the logical SQL of the cached query.

"Purge" can be used to delete one or more cache entries.

  Deleting cache from the Analysis page

Create a connection in the rpd with the Database type set to ODBC Basic.

Create a connection pool called AnalyticsWeb with the call interface set to ODBC 2.0.

Create a new Analysis

Select  “Create Direct Database Request”

Specify the connection pool created in the first step (this connection pool is created in the rpd exclusively for cache management; don't use the one used for reports).

SQL Statement: specify the OBIEE command for deleting cache; it can be any of the following:

Call SAPurgeAllCache()  -- deletes all cache entries

Call SAPurgeCacheByTable( 'Datwarehouse', '', 'DWH', 'XW_SALES_F');  -- deletes the cache entries for a specific table

Call SAPurgeCacheByDatabase( 'Datwarehouse' );  -- deletes all cache entries related to a specific database

Call SAPurgeCacheByQuery('SELECT 0 s_0, "Sales Subject Area"."Time - Dimension"."MONTH_NAME" s_1, "Sales Subject Area"."Fact Sales"."COST_PRICE" s_2 FROM "Sales Subject Area" ORDER BY 1, 2 ASC NULLS LAST ; ')  -- deletes the cache entry for a specific logical SQL query

Validate SQL and Retrieve Columns: this validates the command issued in the previous step.

Click on the Results tab. The message in the RESULT_MESSAGE column indicates the success of the cache deletion operation.

 Deleting Cache using a script

One can create a simple .txt file and specify one or more of the following cache deletion commands:

Call SAPurgeAllCache()  -- deletes all cache entries

Call SAPurgeCacheByTable( 'Datwarehouse', '', 'DWH', 'XW_SALES_F');  -- deletes the cache entries for a specific table

Call SAPurgeCacheByDatabase( 'Datwarehouse' );  -- deletes all cache entries related to a specific database

Call SAPurgeCacheByQuery('SELECT 0 s_0, "Sales Subject Area"."Time - Dimension"."MONTH_NAME" s_1, "Sales Subject Area"."Fact Sales"."COST_PRICE" s_2 FROM "Sales Subject Area" ORDER BY 1, 2 ASC NULLS LAST ; ')  -- deletes the cache entry for a specific logical SQL query

 e.g. c:\del_cache.txt has the command Call SAPurgeAllCache() 

This script can be run using the following command :

D:\OBIEE11G\Oracle_BI1\bifoundation\server\bin\nqcmd -d coreapplication_OH833456789 -u admin -p weblogic -s c:\del_cache.txt

-d: OBIEE driver name (copy from the Windows DSN names)

-u: Administrator user name

-p: Administrator password

-s: script path
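The script setup can be sketched as follows; the nqcmd invocation is shown commented out because the DSN, credentials, and install path are environment-specific examples:

```shell
# Write the purge command to a script file.
printf 'Call SAPurgeAllCache()\n' > /tmp/del_cache.txt
cat /tmp/del_cache.txt
# Run it through nqcmd (requires an OBIEE install; values below are examples):
# nqcmd -d coreapplication_OH833456789 -u admin -p weblogic -s /tmp/del_cache.txt
```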

 NQSConfig.INI file parameters related to cache management

 Query Result Cache Section Parameters

The parameters in the Query Result Cache Section provide configuration information for Oracle BI Server caching. The query cache is enabled by default. After deciding on a strategy for flushing outdated entries, you should configure the cache storage parameters in Fusion Middleware Control and in the NQSConfig.INI file.

Note that query caching is primarily a run-time performance improvement capability. As the system is used over a period of time, performance tends to improve due to cache hits on previously executed queries. The most effective and pervasive way to optimize query performance is to use the Aggregate Persistence wizard and aggregate navigation.

This section describes only the parameters that control query caching. For information about how to use caching in Oracle Business Intelligence, including how to use agents to seed the Oracle BI Server cache, refer to the Oracle Business Intelligence documentation.

ENABLE

Note:

The ENABLE parameter is centrally managed by Fusion Middleware Control and cannot be changed by manually editing NQSConfig.INI, unless all configuration through Fusion Middleware Control has been disabled (not recommended).

The Cache enabled option on the Performance tab of the Capacity Management page in Fusion Middleware Control corresponds to the ENABLE parameter.

 Specifies whether the cache system is enabled. When set to NO, caching is disabled. When set to YES, caching is enabled. The query cache is enabled by default.

Example: ENABLE = YES;

DATA_STORAGE_PATHS

Specifies one or more paths where the cached query results data is stored and accessed when a cache hit occurs, along with the maximum capacity of each in bytes, kilobytes, megabytes, or gigabytes. The maximum capacity for each path is 4 GB. For optimal performance, the paths specified should be on high-performance storage systems.

Each path listed must be an existing, writable path name, with double quotation marks ( " ) surrounding the path name. Specify mapped directories only. UNC path names and network mapped drives are allowed only if the service runs under a qualified user account.

You can specify either fully qualified paths, or relative paths. When you specify a path that does not start with "/" (on UNIX) or "<drive>:" (on Windows), the Oracle BI Server assumes that the path is relative to the local writable directory. For example, if you specify the path "cache," then at run time, the Oracle BI Server uses the following:

ORACLE_INSTANCE/bifoundation/OracleBIServerComponent/coreapplication_obisn/cache

Note:

Multiple Oracle BI Servers across a cluster do not share cached data. Therefore, the DATA_STORAGE_PATHS entry must be unique for each clustered server. To ensure this unique entry, enter a relative path so that the cache is stored in the local writable directory for each Oracle BI Server, or enter different fully qualified paths for each server.

Specify multiple directories with a comma-delimited list. When you specify multiple directories, they should reside on different physical drives. (If you have multiple cache directory paths that all resolve to the same physical disk, then both available and used space might be double-counted.)

Syntax: DATA_STORAGE_PATHS = "path_1" sz[, "path_2" sz{, "path_n" sz}];

Example: DATA_STORAGE_PATHS = "cache" 256 MB;

Note:

Specifying multiple directories for each drive does not improve performance, because file input and output (I/O) occurs through the same I/O controller. In general, specify only one directory for each disk drive. Specifying multiple directories on different drives might improve the overall I/O throughput of the Oracle BI Server internally by distributing I/O across multiple devices.

The disk space requirement for the cached data depends on the number of queries that produce cached entries, and the size of the result sets for those queries. The query result set size is calculated as row size (or the sum of the maximum lengths of all columns in the result set) times the result set cardinality (that is, the number of rows in the result set). The expected maximum should be the guideline for the space needed.

This calculation gives the high-end estimate, not the average size of all records in the cached result set. Therefore, if the size of a result set is dominated by variable length character strings, and if the length of those strings are distributed normally, you would expect the average record size to be about half the maximum record size.

Note:

It is a best practice to use a value that is less than 4 GB. Otherwise, the value might exceed the maximum allowable value for an unsigned 32-bit integer, because values over 4 GB cannot be processed on 32-bit systems. It is also a best practice to use values less than 4 GB on 64-bit systems.

Create multiple paths if you have values in excess of 4 GB.

MAX_ROWS_PER_CACHE_ENTRY

Specifies the maximum number of rows in a query result set to qualify for storage in the query cache. Limiting the number of rows is a useful way to avoid consuming the cache space with runaway queries that return large numbers of rows. If the number of rows a query returns is greater than the value specified in the MAX_ROWS_PER_CACHE_ENTRY parameter, then the query is not cached.

When set to 0, there is no limit to the number of rows per cache entry.

Example: MAX_ROWS_PER_CACHE_ENTRY = 100000;

MAX_CACHE_ENTRY_SIZE

Note:

The MAX_CACHE_ENTRY_SIZE parameter is centrally managed by Fusion Middleware Control and cannot be changed by manually editing NQSConfig.INI, unless all configuration through Fusion Middleware Control has been disabled (not recommended).

The Maximum cache entry size option on the Performance tab of the Capacity Management page in Fusion Middleware Control corresponds to the MAX_CACHE_ENTRY_SIZE parameter.

Specifies the maximum size for a cache entry. Potential entries that exceed this size are not cached. The default size is 20 MB.

Specify GB for gigabytes, KB for kilobytes, MB for megabytes, and no units for bytes.

Example: MAX_CACHE_ENTRY_SIZE = 20 MB;

MAX_CACHE_ENTRIES

Note:

The MAX_CACHE_ENTRIES parameter is centrally managed by Fusion Middleware Control and cannot be changed by manually editing NQSConfig.INI, unless all configuration through Fusion Middleware Control has been disabled (not recommended).

The Maximum cache entries option on the Performance tab of the Capacity Management page in Fusion Middleware Control corresponds to the MAX_CACHE_ENTRIES parameter.

Specifies the maximum number of cache entries allowed in the query cache to help manage cache storage. The actual limit of cache entries might vary slightly depending on the number of concurrent queries. The default value is 1000.

Example: MAX_CACHE_ENTRIES = 1000;

POPULATE_AGGREGATE_ROLLUP_HITS

Specifies whether to aggregate data from an earlier cached query result set and create a new entry in the query cache for rollup cache hits. The default value is NO.

Typically, if a query gets a cache hit from a previously executed query, then the new query is not added to the cache. A user might have a cached result set that contains information at a particular level of detail (for example, sales revenue by ZIP code). A second query might ask for this same information, but at a higher level of detail (for example, sales revenue by state). The POPULATE_AGGREGATE_ROLLUP_HITS parameter overrides this default when the cache hit occurs by rolling up an aggregate from a previously executed query (in this example, by aggregating data from the first result set stored in the cache). That is, sales revenue for all ZIP codes in a particular state can be added to obtain the sales revenue by state. This is referred to as a rollup cache hit.

Normally, a new cache entry is not created for queries that result in cache hits. You can override this behavior specifically for cache rollup hits by setting POPULATE_AGGREGATE_ROLLUP_HITS to YES. Nonrollup cache hits are not affected by this parameter. If a query result is satisfied by the cache—that is, the query gets a cache hit—then this query is not added to the cache. When this parameter is set to YES, then when a query gets an aggregate rollup hit, then the result is put into the cache. Setting this parameter to YES might result in better performance, but results in more entries being added to the cache.

Example: POPULATE_AGGREGATE_ROLLUP_HITS = NO;

USE_ADVANCED_HIT_DETECTION

When caching is enabled, each query is evaluated to determine whether it qualifies for a cache hit. A cache hit means that the server was able to use cache to answer the query and did not go to the database at all. The Oracle BI Server can use query cache to answer queries at the same or later level of aggregation.

The parameter USE_ADVANCED_HIT_DETECTION enables an expanded search of the cache for hits. The expanded search has a performance impact, which is not easily quantified because of variable customer requirements. Customers that rely heavily on query caching and are experiencing misses might want to test the trade-off between better query matching and overall performance for high user loads. 

MAX_SUBEXPR_SEARCH_DEPTH

Lets you configure how deep the hit detector looks for an inexact match in an expression of a query. The default is 5.

For example, at level 5, a query on the expression SIN(COS(TAN(ABS(ROUND(TRUNC(profit)))))) misses on profit, which is at level 7. Changing the search depth to 7 opens up profit for a potential hit.

DISABLE_SUBREQUEST_CACHING

When set to YES, disables caching at the subrequest (subquery) level. The default value is NO.

Caching subrequests improves performance and the cache hit ratio, especially for queries that combine real-time and historical data. In some cases, however, you might disable subrequest caching, such as when other methods of query optimization provide better performance.

Example: DISABLE_SUBREQUEST_CACHING = NO;
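Taken together, the per-server query cache parameters described above live in the [CACHE] section of NQSConfig.INI. A sketch of how that section might look, using the default values quoted in this article (the ENABLE and DATA_STORAGE_PATHS lines are illustrative additions; adjust the path and size to your environment):

```ini
[CACHE]
ENABLE = YES;
# Illustrative cache location and size; your path will differ
DATA_STORAGE_PATHS = "C:\OBIEE\cache" 500 MB;
MAX_CACHE_ENTRIES = 1000;
POPULATE_AGGREGATE_ROLLUP_HITS = NO;
USE_ADVANCED_HIT_DETECTION = NO;
MAX_SUBEXPR_SEARCH_DEPTH = 5;
DISABLE_SUBREQUEST_CACHING = NO;
```

Remember that parameters managed by Fusion Middleware Control should be changed there rather than by hand-editing this file.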

GLOBAL_CACHE_STORAGE_PATH

Note:

The GLOBAL_CACHE_STORAGE_PATH parameter is centrally managed by Fusion Middleware Control and cannot be changed by manually editing NQSConfig.INI, unless all configuration through Fusion Middleware Control has been disabled (not recommended).

The Global cache path and Global cache size options on the Performance tab of the Capacity Management page in Fusion Middleware Control correspond to the GLOBAL_CACHE_STORAGE_PATH parameter.

In a clustered environment, Oracle BI Servers can be configured to access a shared cache that is referred to as the global cache. The global cache resides on a shared file system storage device and stores seeding and purging events and the result sets that are associated with the seeding events.

This parameter specifies the physical location for storing cache entries shared across clustering. This path must point to a network share. All clustering nodes share the same location.

You can specify the size in KB, MB, or GB, or enter a number with no suffix to specify bytes.

Syntax: GLOBAL_CACHE_STORAGE_PATH = "directory name" SIZE;

Example: GLOBAL_CACHE_STORAGE_PATH = "C:\cache" 250 MB;

MAX_GLOBAL_CACHE_ENTRIES

The maximum number of cache entries stored in the location that is specified by GLOBAL_CACHE_STORAGE_PATH.

Example: MAX_GLOBAL_CACHE_ENTRIES = 1000;

CACHE_POLL_SECONDS

The interval in seconds that each node polls from the shared location that is specified in GLOBAL_CACHE_STORAGE_PATH.

Example: CACHE_POLL_SECONDS = 300;

CLUSTER_AWARE_CACHE_LOGGING

Turns on logging for the cluster caching feature. Used only for troubleshooting. The default is NO.

Example: CLUSTER_AWARE_CACHE_LOGGING = NO;
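The cluster-aware (global) cache parameters described above also belong to the [CACHE] section of NQSConfig.INI. A sketch using the example values quoted in this article (the UNC share name is illustrative; all cluster nodes must point at the same shared location):

```ini
# Global cache settings for a clustered deployment (illustrative share name)
GLOBAL_CACHE_STORAGE_PATH = "\\fileserver\obiee_global_cache" 250 MB;
MAX_GLOBAL_CACHE_ENTRIES = 1000;
CACHE_POLL_SECONDS = 300;
CLUSTER_AWARE_CACHE_LOGGING = NO;
```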


Published in OBIEE
Wednesday, 18 April 2012 07:33

Aggregate Persistence

NOTE:

This exercise must be performed on an RPD that is deployed, with the BI Server running.

Aggregate persistence automates the creation and initial population of aggregate tables, persists them in the back-end data warehouse schema, and configures the RPD file to use them wherever appropriate. It is the best way to deal with performance issues in data warehouse tables without needing the intervention of ETL professionals.


Here are the steps to create an aggregate for a frequently fired query on SALE_PRICE, MONTH and MODEL:

Open the RPD in online mode, open Utilities, and choose “Aggregate Persistence”

Provide a name and location for the script file to be created

Select the metrics required to be persisted

Select the dimension levels of the attributes that need to be persisted. Check the “Use Surrogate Keys” option.

Select the database, the schema and the connection pool, and then specify the name of the aggregate table.

The generated script now appears on screen, and the script file is created at the same time.

Open the script file and check its contents.
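For reference, the script the wizard generates is BI Server logical SQL built around a create aggregates statement. A hedged sketch of what the script might contain for the SALE_PRICE by MONTH and MODEL aggregate (every object name below is hypothetical; the wizard fills in the names from your own RPD):

```sql
-- Hypothetical business model, dimension, connection pool and schema names
create aggregates
"ag_sales_month_model"
for "Sales"."Fact Sales"("SALE_PRICE")
at levels ("Sales"."Time"."Month", "Sales"."Product"."Model")
using connection pool "orcl"."Connection Pool"
in "orcl".."AGG_SCHEMA";
```

The statement names the aggregate, the fact measures to persist, the dimension levels to aggregate to, and the physical database/schema where the table is created.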

Navigate to D:\OBIEE11G\Oracle_BI1\bifoundation\server\bin

Check that nqcmd.exe is present. This is a utility from OBIEE to execute scripts.

cd to D:\OBIEE11G\Oracle_BI1\bifoundation\server\bin in a command window.

Issue the command nqcmd -d coreapplication_ON863956460 -u admin -p mypassword123 -s c:\agg_persistence.sql

Note that the entry after the -d switch is the name of the OBIEE DSN (to check this: Start > Run > type odbcad32).

-u is the administrator username

-p is the administrator password

-s is script path

 Execute this command.
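Putting the steps above together, the sequence in the command window looks roughly like this (the DSN, username, password and script path are the example values used in this article; substitute your own):

```
cd /d D:\OBIEE11G\Oracle_BI1\bifoundation\server\bin
nqcmd -d coreapplication_ON863956460 -u admin -p mypassword123 -s c:\agg_persistence.sql
```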

Once the script executes, the command window shows the statement as Processed. Any errors at this stage are usually due to one of two reasons:

• BI Server is not running (run opmnctl startall), OR

• The script was taken from an RPD that is not deployed (take the script from an RPD in online mode)

Once the script completes, the changes made by it are visible:

• New tables are created in the physical layer

• The same tables are created in the database as well

• New logical table sources are created in the BMM layer and mapped to the respective columns

We can verify that the column mapping to the new LTS is done by opening the LTSs created by the Aggregate Persistence Wizard.

Notice that each of the objects created by the Aggregate Persistence Wizard in the RPD has a small “gear” symbol.


Published in OBIEE

Objective: Import the Essbase cube data from the Essbase Administration Services console (EAS) into OBIEE and generate dashboards, with the presumption that the reader has a basic understanding of the Hyperion and OBIEE tools.

1. Overview of Essbase and OBIEE Integration

2. Create an Essbase Data Source in the Repository

3. Create Business Model and Presentation for the Essbase Cubes

4. Generate Reports in OBIEE Analytics Tool


Overview of Essbase and OBIEE Integration

Hyperion Essbase is MOLAP, whereas OBIEE is ROLAP. Because MOLAP stores the cube data, analysis is much faster than with ROLAP.

Many users work with Essbase data using an Excel add-in as their interface. Hyperion places the entire cube in front of you in Excel: you can drill down, add or remove dimensions, apply filters, pivot data, slice and dice, and much more, along with write-back capability, which makes it the first choice for budgeting and forecasting applications. With Essbase as a middle layer, OBIEE makes it very easy for end users to generate reports and design dashboards in a short span of time.

From OBIEE 10g onward, Essbase can be one of the data sources for the OBIEE repository, and the Oracle BI repository can likewise be a data source for the newly emerging Essbase Studio tool (11.x).

Supported Versions of Essbase Cubes: 9.x, 11.x

When Essbase is used as an OBIEE data source, you can combine Essbase sources with other OBIEE Server supported data sources and present relevant information to your users in one place, from multiple sources. OBIEE is a relational query tool, so it makes Essbase behave that way too.

 Pre-Requisites:

The Essbase metadata import and its related queries require installation of the Essbase Client API on the BI Server for OBIEE 10g. However, I observed that for 11g it is not mandatory. Let’s try to import the Sample.Basic cube, which comes with the Essbase 11.x installation, into the OBIEE repository.

Integration Architecture:

Also ensure that all the Hyperion services are up and running, along with the OBIEE services.

Steps that need to be followed:

1. Importing the multidimensional Essbase data source into the OBIEE repository

Open BI Administration to create a new RPD:

Start -> All Programs ->Oracle Business Intelligence-> BI Administration

Before proceeding further, we need to define some user-defined options.

Since Essbase cube Gen1 names contain only dimension names, we can skip those levels before importing the cube, as follows.

Go to Tools-> Options and select the required checkbox

Click on file-> New Repository to create a repository

Provide a user-defined name and the repository password

Path: C:\OBIEE\instances\instance4\bifoundation\OracleBIServerComponent\coreapplication_obis1\repository

This is the default location where the OBIEE repository is stored. Click Next.

Select the connection type as “Essbase”, and provide a valid server name, username and password.

In this scenario it is my local system, hence the Essbase server is “localhost”.

A best practice is to use the system name or IP address during Essbase installation to make the server name unique.

Once you click Next, it displays the Essbase cubes under the server.

Click on the Sample application and click the “Import Selected” button in the middle pane to import the Essbase cube into the OBIEE repository, as in the following image.

2. Changes performed in the Physical Layer

Once the cube is imported, the dimensions and facts appear as follows in the physical layer.

To display all the measures or facts in the Essbase cube, right-click the database [Basic] of the Sample application and choose “Convert measure dimension to flat measures”.

One more advantage here is that there is no need to create the star schema again; OBIEE brings all the join relationships along with the tables. (This is where I was impressed, and of course a lot of rework is reduced for developers.)

As you can see from the screenshot below, the Market physical dimension has three UDAs associated with it: Major Market, Small Market and New Market.

Another good point: even if the hierarchy is ragged or has skipped levels, Essbase + OBIEE can handle it.

We can also observe here that it displays the Introdate and Caffeinated attribute dimensions as hierarchies.

3. Now drag the Sample cube from the Physical to the BMM layer

Double-click the Essbase cube symbol and uncheck the “Disabled” checkbox so that it displays in the subject area of the OBIEE Analytics tool.

4. Create a new Subject area in Presentation layer by dragging the logical cube from BMM to presentation layer

Now click on “Check Global Consistency” to verify warnings or errors.

If there are no errors, it will report that the RPD build is successful, as follows.

5. Before uploading, ensure that the OBIEE services are up and running. This can be verified through the Enterprise Manager link.

Link: http://ServerName:7001/em/

Logon with valid credentials

Here is the path where we can observe the OBIEE services status

Now upload the repository to BI Server

From the left window, Click on 

Farm_bifoundation_domain->Business Intelligence-> Coreapplication

In the middle window, Click on “Deployment tab”.

Click on “Lock and Edit Configuration”

Then it displays the message as follows

Now click on “Browse” button to upload the repository

Path: C:\OBIEE\instances\instance4\bifoundation\OracleBIServerComponent\coreapplication_obis1\repository

Provide the repository password and click on “Activate changes”

Now click on “Restart to apply recent changes” to upload rpd

For any RPD change, we need to restart the services to move the RPD from offline to online mode.

It automatically takes you to the “Overview” tab; click the “Restart” button and confirm with “Yes”.

It will take a while for the services to come back up.

Once the services started, Open the Analytics link to generate the reports

Link: http://servername:7001/analytics/

Provide valid credentials.

Click on New-> Analysis

Select the subject area as “Sample”

With this we can create reports and dashboards

Published in Hyperion