Oracle OLAP Tips and Oracle BI 10g Product Demos
February 23, 2006
Mark Rittman

I've been over in Ireland for the last week, and in the evenings working on a new article for OTN on data profiling using OWB "Paris", so I've not had a chance to update the blog since the UKOUG conference. In the meantime though, here's a couple of Oracle OLAP tips that I picked up during the week.

The first one comes from the Oracle OLAP Reference Programme newsletter and looks at how you can logically and physically partition your analytic workspaces:

"One way to improve OLAP performance is to take advantage of AW (logical) partitioning and relational (physical) partitioning. Since this can deliver terrific benefits, by enabling parallel update, we thought we'd provide some additional information on it below.

1. How should I partition the AW?

You can use AWM to logically partition your AW. Or, you can partition manually by creating DIMENSION, DIMENSION CONCAT and PARTITION TEMPLATE objects to define the partitions and partition templates. See the OLAP DML Guide for further instructions.

Logically partitioned data is stored in different logical AW partitions. At build time, when more than one process is specified for the maintenance job (via the Oracle job queue), loads and updates are parallelized and performance should improve.

To determine how to logically partition your AW, keep in mind the following: which dimension and levels you should partition on depends on your model, although the best advice is to partition at the lowest level of a dimension (this is likely to be the month level of the time dimension), since this results in:

  • More partitions
  • More opportunity for parallelization
  • Smaller composites
  • More granular roll-off of unused partitions

Note though that this can result in less pre-summarization, since any levels above the partitioned level of that dimension, and any other hierarchies for that dimension, do not get aggregated. So, for these non-aggregated levels, you might see poorer query performance unless you pre-aggregate them manually.

2. How can I relationally partition my AW?

If you have the 'Partitioning' option installed on your RDBMS, 10g will automatically create the partitions for your AW, implementing 'physical' relational partitions for any 'logical' AW partitions you have created. Having multiple partitions allows parallel updates, since there are multiple LOB indexes that can be updated, dramatically improving load performance.

Note, you can also physically partition your AW manually as the example below shows:

SQL> exec dbms_aw.execute('aw create SCOTT.SALES_AW partitions 60');

This example creates 60 partitions for the SALES AW. Note that physical relational partitioning requires a licence for the Partitioning option.

Or, to add an additional partition, do as follows:

SQL> alter table scott.aw$sales add partition test_p61 update indexes;"
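If you want to confirm what was created, the relational partitions behind the AW can be listed from the data dictionary (a quick sketch, assuming the AW storage table from the example above, SCOTT.AW$SALES):

SQL> select partition_name, tablespace_name from user_tab_partitions where table_name = 'AW$SALES';

Each row returned corresponds to one physical partition of the AW's storage table, which is a handy way to check that the automatic partitioning actually took place.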

Some interesting points there, about how the logical partitioning of an analytic workspace means that you can load and aggregate in parallel (sort of: using multiple concurrent jobs rather than PQ as we know it), and how, if you've installed the Partitioning option, your logical AW partitions get turned into physically partitioned tables automatically. Note, however, the comment about pre-summarization: that sounds like an important limitation to be aware of. I've not really played around with AW partitioning yet so can't comment on the advice, but I've heard good things from other people and I'll need to put together some test cases soon.

Another tip from the newsletter:

"To continue with the theme of partitioning, our DBA tip is to recommend that you store partitions (LOBs) in different tablespaces, as this reduces I/O and so also improves performance.

How can I distribute partitioned LOBs across tablespaces?

In the example above, you have 60 relational partitions. Now you want to store these data partitions or LOBs in different tablespaces.

To move a partition's tablespace in 10.1, do as follows:

SQL> alter table scott.aw$sales move partition sys_p48 lob (awlob) store as (tablespace scott_mnth_02) update indexes;

Or to add a new partition and specify tablespace, do:

SQL> alter table scott.aw$sales add partition test_p10 lob (awlob) store as (tablespace scott_mnth_02) update indexes;

For 10.2, you must add a new subpartition with the tablespace specified. You cannot move partitions as above, since you cannot move LOBs that are subpartitioned by HASH.

To do this, do as follows:

SQL> alter table scott.aw$sales modify partition ptnn add subpartition ptnn_010 lob (awlob) store as (tablespace scott_mnth_02) update indexes; "
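To verify where each LOB partition actually ended up after these moves, you can query the data dictionary (a sketch; USER_LOB_PARTITIONS is the standard dictionary view, and AW$SALES is the table from the example above):

SQL> select lob_name, partition_name, tablespace_name from user_lob_partitions where table_name = 'AW$SALES';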

One bit to point out on this tip: splitting your LOB (the physical container for the AW) over different tablespaces doesn't by itself reduce I/O. That only happens if the datafiles that make up the different tablespaces are stored on different physical disk units, and even then you need to bear in mind factors such as the available disk controller bandwidth. Most of the time nowadays I find that the tablespace datafiles are in fact stored on a SAN, which itself takes care of striping datafiles across different disk units, so in most cases this would be a redundant technique, but it's good to know how it's done anyway.
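If you want to check whether those tablespaces really sit on separate devices, look at their datafiles in the data dictionary (a sketch; the SCOTT_MNTH% name pattern follows the tablespace naming in the example above):

SQL> select tablespace_name, file_name from dba_data_files where tablespace_name like 'SCOTT_MNTH%';

The file paths returned tell you which mount points, and hence (SAN striping aside) which disk units, each tablespace lives on.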

The other bit of interesting info that I came across was in this OTN OLAP Forum posting by Brad O'Hare. Brad was having an issue with the OLAP_TABLE feature, and it turned out to be caused by his use of a datamap (via the FETCH command) to specify his OLAP query, rather than the more usual limitmap. Brad asked why limitmaps are now preferred over datamaps/FETCH and got this reply from Chris Chiappa:

"The big reason [for not using datamaps] historically has been ease of use - it's a lot easier to mess up with them than with LIMITMAPs. In 10.1 and beyond however they also disable the AW optimizations in SQL MODEL clauses. This can be a big win in terms of throughput (generally in 10.1 and 10.2 you probably want to have MODEL clauses around your table function whether you're using the MODEL features or not) so it's probably the case that you want to avoid datamaps so that you get this optimization. If there's something you think you need a datamap for, maybe post a description of what you're trying to do?"

with a subsequent follow-up from 'cwelton':

"Adding to what Chris mentions; limitmaps do allow the table function to make a number of different optimizations including select list pruning and where clause filtering that are unavailable if the table function is being driven by a datamap."
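For reference, a limitmap-driven OLAP_TABLE query looks roughly like this (a sketch from memory rather than from the posting; the AW, measure and dimension names are hypothetical, and the limitmap grammar is indicative only - see the OLAP documentation for the full syntax):

SQL> select * from table(olap_table('SCOTT.SALES_AW duration session', null, null, 'measure sales from sales dimension time from time with hierarchy time_parentrel'));

Everything the table function exposes is declared in that fourth argument, the limitmap, which is what lets the optimizer do the select-list pruning and WHERE-clause filtering that cwelton mentions.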

Anyway, it'll probably be another week or so before I post again, but in the meantime, if you didn't get a chance to come along to the SolStonePlus stand at the UKOUG event, or to one of my talks, there are a couple of software demo videos that have just been put up on the company website where you can see me demonstrating the Oracle BI 10g product stack. They're in Windows Media format and can be accessed from these URLs:

  • Building an Oracle 10g OLAP Cube using Analytic Workspace Manager
  • Analyzing OLAP Data using OracleBI Discoverer 10g
  • Building a Discoverer data mart using Oracle Warehouse Builder 10g
  • Using OracleAS Portal and OracleBI Discoverer to build a business intelligence portal
