PL/SQL Trigger Enhancements in 11g

Oracle 11g New Features Tips by Donald Burleson, July 17, 2015

One longstanding issue with triggers (still true by default) is that when multiple triggers of the same type exist on a statement, the order in which they fire is undefined.  Previously, the programmer had to combine the trigger bodies into a single trigger to ensure the firing order.  In 11g, Oracle provides a FOLLOWS clause that allows multiple triggers of the same type on the same table to fire in a specified order.

create or replace trigger first_fire_tr
before insert on table1 for each row
begin
  null;  -- first trigger body goes here
end;
/

create or replace trigger second_fire_tr
before insert on table1 for each row
follows first_fire_tr
begin
  null;  -- second trigger body goes here
end;
/

The programmer can also create a trigger in a disabled state.  This is useful because it allows the trigger to be created and compiled without enabling it.  Once the trigger compiles correctly, the programmer can enable it.

create or replace trigger disabled_tr
before insert on table1 for each row
disable
begin
  null;  -- trigger body goes here
end;
/

alter trigger disabled_tr enable; 

alter table table1 enable all triggers;

By creating a trigger in a disabled state, the programmer has the flexibility to add the trigger without having it immediately impact the table data.
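Before enabling a trigger, its state can be confirmed from the data dictionary.  A quick sketch (TABLE1 and DISABLED_TR are the example names used above; the USER_TRIGGERS view and its STATUS column are standard):

```sql
-- Check the status (ENABLED/DISABLED) of all triggers on the example table
select trigger_name, status
  from user_triggers
 where table_name = 'TABLE1';
```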

Cross-Session PL/SQL Function Result Cache

The Result Cache is a powerful new feature that allows the programmer to specify that a function's results be stored in the SGA for future use.  The use of the Result Cache with SQL statements is equally powerful and is covered in the next chapter.  Basically, the 11g database can save the result of a PL/SQL function call in the SGA and reuse it whenever the same function is called with the same parameters.  The capability is called cross-session because once the results are in the SGA, they are available for any session to reuse.  The performance gain comes from the fact that once a function is executed, all future calls to that function with the same parameters return the result stored in the SGA rather than re-executing the function.  For example, if a call to a function takes 10 seconds to return a value, future calls with the same parameters return immediately because the result has been saved.  This is a two-edged performance sword: once a result is stored, the database will continue to serve it, even if the underlying data changes and a new call to the function would return a different value.  More on invalidating the result cache in a moment; first, let's look at how the result cache works.

In the following example, we need to know whether a store has above-average sales of books.  Below is a function that is passed a store_key and returns TRUE if the store has above-average sales in the sales table.

create or replace function above_avg(str_ky in varchar2)
  return boolean
  result_cache
as
  n_avg number := 0;
  n_total number := 0;
begin
  select avg(st_total) into n_avg
  from (select   sum(quantity) st_total
        from sales
        group by store_key)
  ;

  select sum(quantity) into n_total
  from sales
  where sales.store_key = str_ky; 

  if n_total > n_avg then return true;
  else return false;
  end if;
end;
/

Notice the result_cache clause after the function signature.  This tells the database to save the result for reuse by any session that calls this function with the same store_key.
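Once the function has executed at least once, the cached result and its dependencies can be observed in the V$RESULT_CACHE_OBJECTS view (a sketch; viewing V$ views requires the appropriate privileges, and the output will vary by system):

```sql
-- One 'Result' row per cached call, plus 'Dependency' rows for the objects it relies on
select type, status, name
  from v$result_cache_objects
 where name like '%ABOVE_AVG%';
```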

Now we will use the anonymous block below to run through the 10 stores and see which store_keys represent above-average sales.

declare
  t_begin number;
begin
  t_begin := DBMS_UTILITY.get_time;
  for r_c1 in (select distinct(store_key)
               from sales
               order by store_key) loop
    if (above_avg(r_c1.store_key)) then
      dbms_output.put_line('Store_key: '||r_c1.store_key||
                           ' Above Average');
    else
      dbms_output.put_line('Store_key: '||r_c1.store_key||
                           ' Below Average');
    end if;
  end loop;
  dbms_output.put_line('Time: '||(DBMS_UTILITY.get_time - t_begin));
end;
/

Store_key: S101 Below Average
Store_key: S102 Above Average
Store_key: S103 Below Average
Store_key: S104 Above Average
Store_key: S105 Above Average
Store_key: S106 Below Average
Store_key: S107 Above Average
Store_key: S108 Below Average
Store_key: S109 Above Average
Store_key: S110 Below Average

Time: 1

The actual time was 0.14 seconds on average, as reported by SQL Developer 1.5.  (DBMS_UTILITY.get_time reports elapsed time in hundredths of a second, so Time: 1 is 0.01 seconds.)

When the anonymous block is run again, the time drops to 0.  That is because, instead of running the function and calculating the results, the database is returning the results cached from the first run.

Store_key: S101 Below Average
Store_key: S102 Above Average
Store_key: S103 Below Average
Store_key: S104 Above Average
Store_key: S105 Above Average
Store_key: S106 Below Average
Store_key: S107 Above Average
Store_key: S108 Below Average
Store_key: S109 Above Average
Store_key: S110 Below Average

Time: 0

This is a real performance boost.  Instead of calculating the average sale for each call to the function, the database simply checks to see if the result is cached and returns that cached result.
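When testing result-cached code, it is sometimes useful to inspect or clear the cache by hand.  The standard-supplied DBMS_RESULT_CACHE package provides this; a brief sketch (output varies by system):

```sql
-- Print a summary of result cache memory consumption to the server output
set serveroutput on
execute DBMS_RESULT_CACHE.MEMORY_REPORT;

-- Flush the entire result cache, forcing every cached function to re-execute
execute DBMS_RESULT_CACHE.FLUSH;
```

Flushing the cache is a blunt instrument, since it discards every cached result in the instance, but it guarantees the next run measures real execution time.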

However, there is a problem when the underlying data changes.  The sales table has only 100 rows by default, so we need to jump that number up to see the impact.  The code below adds rows to the sales table.  Note that it only adds rows for the S110 store_key.

begin
  for indx in 1..100 loop
    insert into sales (select * from sales
                       where store_key='S110');
    commit;
  end loop;
end;
/

When we re-run the anonymous block, we get the same results as before.

Store_key: S101 Below Average
Store_key: S102 Above Average
Store_key: S103 Below Average
Store_key: S104 Above Average
Store_key: S105 Above Average
Store_key: S106 Below Average
Store_key: S107 Above Average
Store_key: S108 Below Average
Store_key: S109 Above Average
Store_key: S110 Below Average

Time: 0

Again, the database used the results cached from the first run.  The problem is that the data has changed: the key S110 previously had below-average sales, but now has almost all the sales.  We can tell the database that the function depends on a table's data, and that it should not use the result cache if the table data has changed since the cache was created.  To establish this dependency, add the RELIES_ON clause to the function.

create or replace function above_avg(str_ky in varchar2)
  return boolean
  result_cache relies_on (pubs.sales)
as
  n_avg number := 0;
  n_total number := 0;
begin
  select avg(st_total) into n_avg
  from (select   sum(quantity) st_total
        from sales
        group by store_key)
  ;

  select sum(quantity) into n_total
  from sales
  where sales.store_key = str_ky;

  if n_total > n_avg then return true;
  else return false;
  end if;
end;
/

Now, before using the result cache, the dependent table is checked, and if the data in the table has changed, the database will re-execute the function and update the result cache.  Below, the anonymous block is run twice after reloading the function and reloading the additional rows into the sales table.

Store_key: S101 Below Average
Store_key: S102 Below Average
Store_key: S103 Below Average
Store_key: S104 Below Average
Store_key: S105 Below Average
Store_key: S106 Below Average
Store_key: S107 Below Average
Store_key: S108 Below Average
Store_key: S109 Below Average
Store_key: S110 Above Average

Time: 11

PL/SQL procedure successfully completed.

SQL> /

Store_key: S101 Below Average
Store_key: S102 Below Average
Store_key: S103 Below Average
Store_key: S104 Below Average
Store_key: S105 Below Average
Store_key: S106 Below Average
Store_key: S107 Below Average
Store_key: S108 Below Average
Store_key: S109 Below Average
Store_key: S110 Above Average

Time: 1

PL/SQL procedure successfully completed.

Note that this run took significantly more time to complete and S110 is now the only store with above average sales.  Running the block again will show the result cache again being used.  After entering another row into the sales table the anonymous block is run to show that the result cache was invalidated.

SQL> INSERT INTO SALES VALUES ('S102', 'B115', 'O189',
  2  to_date('05-10-2004 16:30','MM-DD-YYYY HH24:MI'), 8800);

1 row created.

SQL> commit;
Commit complete. 

SQL> declare
  2    t_begin number;
  3  begin
  ...
 17  end;
 18  /

Store_key: S101 Below Average
Store_key: S102 Below Average
Store_key: S103 Below Average
Store_key: S104 Below Average
Store_key: S105 Below Average
Store_key: S106 Below Average
Store_key: S107 Below Average
Store_key: S108 Below Average
Store_key: S109 Below Average
Store_key: S110 Above Average

Time: 12

PL/SQL procedure successfully completed.

The block ran in 12 hsecs (0.12 seconds), showing that the result cache was properly invalidated.

 

This is an excerpt from the new book Oracle 11g New Features: Expert Guide to the Important New Features by John Garmany, Steve Karam, Lutz Hartmann, V. J. Jain, Brian Carr.

You can buy it direct from the publisher for 30% off.




Copyright © 1996 -  2020

All rights reserved by Burleson

Oracle ® is the registered trademark of Oracle Corporation.