One of the interesting new
features in OWB "Paris" is the way in which source data is
loaded into dimensions and cubes. With previous versions of
Warehouse Builder, you loaded data into a dimension or cube in
exactly the same way as you loaded tables: source columns were
mapped onto dimension columns, and surrogate keys required for
cube loads were retrieved from dimensions using key lookups; in
short, there was no difference between loading a dimensional object
and loading a regular table. All of that changes with OWB "Paris".
In this example, I'm loading a dimension called PRODUCT_DIM
that has three "regular" levels, PRODUCT, PRODUCT_GROUP and
PRODUCT_CATEGORY, and a top level called TOTAL_PRODUCT to
which all child levels roll up. It's a relational OLAP
dimension, although with Paris it could just as well be a
MOLAP dimension; they're loaded in exactly the same way. My
source data is in a table called PRODUCTS_SRC that has natural
key and DESCRIPTION columns for each of these levels.
SQL> select * from products_src;
PROD_ID PROD_NAME PROD_GROUP_ID PROD_GROUP_NAME PROD_CATEGORY_ID PROD_CATEGORY_N
---------- -------------------------- ------------- --------------- ---------------- ---------------
1 Fixed-Income Bond 10 Bonds 100 Investments
2 Variable-Income Bond 10 Bonds 100 Investments
3 Pension AVC 11 Pension Top-Ups 101 Pensions
4 Level Term Life Insurance 12 Term Assurance 102 Insurance
5 Critical Illness Insurance 12 Term Assurance 102 Insurance
When you put these objects onto a mapping canvas and start
to join source to target, this is what you end up with:

There's quite a bit that's different here from what you
get with previous versions of OWB; the main thing to bear in
mind is that with Paris, when you map to a dimension object, you're
not directly loading source columns into the underlying table;
you're loading data into a nested, system-generated mapping
that takes your source data and does a number of things to it
"behind the scenes" to maintain the dimension.
What you're loading into is what Paris
terms a nested mapping "signature": a kind of interface to a
nested program where all you have to worry about is what goes
in and what comes out. If you expand the nested mapping,
here's what actually goes on behind the scenes:

which isn't too dissimilar to what you had to
do with earlier OWB versions to maintain a dimension, when you
were working with slowly changing dimensions (which "Paris"
now supports natively). The good bit, though, is that all of this
is generated automatically for you; all you have to do is map the
source data onto the dimension logical object. So what does
the dimension object do?
Looking back at the original mapping, you'll
notice that there are sections for each of the dimension
levels, each of which has the attributes for that level, plus
attributes that link through to the level above. In each
level, the ID attribute is a system-generated surrogate key
that you don't have to maintain: OWB automatically creates a
sequence for you (one per dimension, not per level) and uses it
to populate the dimension ID attributes, without you needing to
include it explicitly in the mapping. Using a single sequence per
dimension ensures that dimension member IDs at every level are
unique.
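To make the one-sequence-per-dimension point concrete, here's a little Python sketch (nothing to do with OWB's generated code; the function and column names are made up) showing how a single shared counter hands out IDs that stay unique across all three levels:

```python
import itertools

# One counter per dimension stands in for the single Oracle sequence
# OWB generates; sharing it across levels keeps member IDs unique
# dimension-wide.
dim_seq = itertools.count(1)

def assign_surrogate_keys(rows, levels):
    # levels maps a level name to the source column holding its natural key
    seen = {}   # (level, natural key) -> surrogate ID already issued
    out = []
    for row in rows:
        keyed = dict(row)
        for level, nk_col in levels.items():
            nk = row[nk_col]
            if (level, nk) not in seen:
                seen[(level, nk)] = next(dim_seq)
            keyed[level + "_SK"] = seen[(level, nk)]
        out.append(keyed)
    return out

loaded = assign_surrogate_keys(
    [{"PROD_ID": 1, "PROD_GROUP_ID": 10, "PROD_CATEGORY_ID": 100},
     {"PROD_ID": 2, "PROD_GROUP_ID": 10, "PROD_CATEGORY_ID": 100}],
    {"PRODUCT": "PROD_ID",
     "PRODUCT_GROUP": "PROD_GROUP_ID",
     "PRODUCT_CATEGORY": "PROD_CATEGORY_ID"})
```

Note how the second row gets a fresh key for its new PRODUCT member but reuses the keys already issued for the Bonds group and Investments category.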
The NAME attribute is where you put the
natural key from the source system. Both the surrogate and the
natural key are stored in the dimension, as OWB "Paris" uses
the natural key to determine whether to write a new SCD2
dimension record due to a trigger attribute changing. The NAME
becomes the SHORT_DESC for the level, and the DESCRIPTION
field becomes the LONG_DESC, and these are what appears in BI
Beans or Discoverer for OLAP when you use the dimension member
selector.
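The SCD2 decision described above can be sketched in a few lines of Python. This is purely illustrative (it's not the code OWB generates, and the CURRENT flag and record layout are invented); the point is just the logic of matching on the natural key and versioning when a trigger attribute changes:

```python
def scd2_apply(dim_rows, incoming, natural_key, trigger_attrs):
    # Index the current (open) version of each member by its natural key.
    current = {r[natural_key]: r for r in dim_rows if r["CURRENT"]}
    for rec in incoming:
        nk = rec[natural_key]
        if nk not in current:
            dim_rows.append({**rec, "CURRENT": True})      # brand-new member
        elif any(current[nk][a] != rec[a] for a in trigger_attrs):
            current[nk]["CURRENT"] = False                 # expire the old version
            dim_rows.append({**rec, "CURRENT": True})      # write the new SCD2 version
        # otherwise no trigger attribute changed: leave the row alone
    return dim_rows

dim = []
scd2_apply(dim, [{"NAME": 1, "LONG_DESC": "Fixed-Income Bond"}],
           "NAME", ["LONG_DESC"])
scd2_apply(dim, [{"NAME": 1, "LONG_DESC": "Fixed Rate Bond"}],
           "NAME", ["LONG_DESC"])
```

After the second load the dimension holds two versions of member 1: the expired original and the current renamed one.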
So when you do your dimension mapping, all you
do is drag across the source data fields, and you can forget
about surrogate key maintenance. Note also the other sections
for error records: these are used when you build business
rules into your mapping to reject records that fail certain
criteria. Again, all you do is set the rules up, and Paris deals
with putting rejected records into a holding
table for you to deal with later. Looks good.
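The rule-based routing is easy to picture with a small Python sketch (the rule names and predicates here are invented for illustration; this isn't Paris's implementation): rows failing any business rule go to the error side, along with the names of the rules they broke, while clean rows carry on to the dimension load.

```python
def split_on_rules(rows, rules):
    # rules maps a rule name to a predicate that must hold for a good row
    good, errors = [], []
    for row in rows:
        failed = [name for name, check in rules.items() if not check(row)]
        if failed:
            errors.append((row, failed))   # bound for the error/holding table
        else:
            good.append(row)               # carries on into the dimension
    return good, errors

rules = {
    "has_name": lambda r: bool(r.get("PROD_NAME")),
    "valid_group": lambda r: r.get("PROD_GROUP_ID", 0) > 0,
}
good, errors = split_on_rules(
    [{"PROD_NAME": "Fixed-Income Bond", "PROD_GROUP_ID": 10},
     {"PROD_NAME": "", "PROD_GROUP_ID": 10}],
    rules)
```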
With cube loading it's a similar story. The
cube object you drop onto the mapping canvas is actually a
"signature" for a larger cube loading process, and all you
have to do is drag your source data onto the cube, and Paris
will go away and retrieve the surrogate keys for the
dimensions without you having to put together lots of key
lookups.

The BRANCH_DIM and PRODUCT_DIM attributes in
the cube object are placeholders where Paris will fill in
the surrogate keys, based on the natural keys that you feed in
from the transactions source table. Again, there's automatic
handling of error records without you having to code the
handler yourself.
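A sketch of that idea in Python (the lookup tables, column names, and key values are all invented for illustration): the cube load swaps each incoming natural key for the surrogate key already held in the corresponding dimension, which is the work the key lookups used to do by hand.

```python
# dimension attribute -> {natural key: surrogate key}, standing in for
# the lookups Paris performs against the loaded dimensions
dim_lookup = {
    "PRODUCT_DIM": {1: 1001, 2: 1002},
    "BRANCH_DIM": {"LDN": 2001},
}

def load_cube_row(txn, key_map):
    # key_map maps a dimension attribute on the cube to the source
    # column carrying that dimension's natural key
    row = dict(txn)
    for dim_attr, nk_col in key_map.items():
        row[dim_attr] = dim_lookup[dim_attr][row.pop(nk_col)]
    return row

fact = load_cube_row(
    {"PROD_ID": 1, "BRANCH_CODE": "LDN", "AMOUNT": 500.0},
    {"PRODUCT_DIM": "PROD_ID", "BRANCH_DIM": "BRANCH_CODE"})
```

The natural key columns are consumed on the way through, so the fact row that lands in the cube carries only surrogate keys and measures.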