Export/Import and the Oracle Scheduler
Oracle Tips by Burleson Consulting
Export/Import and the Scheduler
Jobs defined using the dbms_job package can be
exported and imported using the exp and imp utilities at both the
schema and full database level.
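For instance, a schema-level export of legacy job definitions with the exp utility might look like the following sketch (the file paths here are hypothetical):
exp system/password owner=JOB_USER file=/tmp/job_user.dmp log=/tmp/exp_job_user.log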
The import and export of scheduler objects
defined using the dbms_scheduler package is only supported via the
Data Pump utilities (expdp and impdp), which are also capable of
transferring legacy job definitions. The export process
generates Data Definition Language (DDL) that is used to recreate
the scheduler objects as they were originally defined, including
time zone information. The following simple example shows how
these utilities work.
First, a directory object for the expdp and
impdp utilities to work with must be created.
conn system/password
create or replace directory export_dir AS '/tmp/';
grant read, write on directory export_dir to job_user;
Then, a basic job to be exported by the expdp
utility is created.
conn job_user/job_user

BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'test_expdp_job_1',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_LOCK.sleep(10); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'freq=hourly;',
    end_date        => SYSTIMESTAMP + 1,
    enabled         => TRUE,
    comments        => 'Job to test expdp.');
END;
/
From the operating system prompt, run the expdp
utility to export the job_user schema. The following listing
shows both the export command and the export log output:
expdp system/password schemas=JOB_USER directory=EXPORT_DIR dumpfile=JOB_USER.dmp logfile=expdpJOB_USER.log
Export: Release 10.1.0.2.0 - Production on Saturday, 11 September, 2004 17:47

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** schemas=JOB_USER directory=EXPORT_DIR dumpfile=JOB_USER.dmp logfile=expdpJOB_USER.log
Estimate in progress using BLOCKS method...
Total estimation using BLOCKS method: 0 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/SE_PRE_SCHEMA_PROCOBJACT/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/SE_POST_SCHEMA_PROCOBJACT/PROCOBJ
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
********************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/tmp/JOB_USER.DMP
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 17:48
On completion of the export, the impdp utility
can be run with the sqlfile parameter set to create a DDL script
containing all the object creation code. The following listing
shows both the import command and the import log output:
impdp system/password sqlfile=JOBS.sql directory=EXPORT_DIR dumpfile=JOB_USER.dmp logfile=impdpJOB_USER.log
Import: Release 10.1.0.2.0 - Production on Saturday, 11 September, 2004 17:48

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "SYSTEM"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_SQL_FILE_FULL_01": system/******** sqlfile=JOBS.sql directory=EXPORT_DIR dumpfile=JOB_USER.dmp logfile=impdpJOB_USER.log
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/SE_PRE_SCHEMA_PROCOBJACT/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/SE_POST_SCHEMA_PROCOBJACT/PROCOBJ
Job "SYSTEM"."SYS_SQL_FILE_FULL_01" successfully completed at 17:48
The resulting sqlfile (JOBS.sql) contains
creation DDL for all the job_user schema objects, including the
following job creation code.
BEGIN
  dbms_scheduler.create_job('"TEST_EXPDP_JOB_1"',
    job_type=>'PLSQL_BLOCK',
    job_action=>'BEGIN DBMS_LOCK.sleep(10); END;',
    number_of_arguments=>0,
    start_date=>'11-SEP-04 17.43.18.552000 +01:00',
    repeat_interval=>'freq=hourly;',
    end_date=>'12-SEP-04 17.43.18.000000 +01:00',
    job_class=>'"DEFAULT_JOB_CLASS"',
    enabled=>FALSE,
    auto_drop=>TRUE,
    comments=>'Job to test expdp.');
  dbms_scheduler.enable('"TEST_EXPDP_JOB_1"');
  COMMIT;
END;
/
It can be helpful to keep scheduler object
definitions as text files in a source control system. That way,
the object definitions can be loaded from the original text files
rather than transferred between databases.
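Where the definition of a single scheduler object is needed as text, one possible approach (a sketch, assuming the job created earlier still exists) is to pull its DDL with the dbms_metadata package, which exposes scheduler objects under the PROCOBJ object type:
SET LONG 10000

-- Extract the creation DDL for one scheduler job as text.
SELECT DBMS_METADATA.get_ddl('PROCOBJ', 'TEST_EXPDP_JOB_1', 'JOB_USER')
FROM   dual;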
Now that methods for importing and exporting
jobs to and from the scheduler have been introduced, the focus of
the following section will shift to information on the use of
services and instance stickiness in Real Application Cluster (RAC)
environments.
Services and Instance Stickiness
Services allow the classification or grouping
of applications within a database, which allows application
priorities and resource allocation to be managed more effectively.
Services can be defined and utilized in both single-node and Real
Application Cluster (RAC) environments, but RAC environments are
where they are most useful, as they facilitate the coordination of
grid computing.
A job class can be assigned to a service, which
affects how jobs associated with the job class are executed.
When using RAC, jobs belonging to a job class will only run in a RAC
instance that is assigned to the specific service. The
following rules apply to job classes in relation to services:
* All job classes are assigned to a service.
If a service is not explicitly specified, the job class is assigned
to the default service, meaning it can run on any RAC instance in
the cluster.
* Dropping a service will cause any dependent
job classes to be reassigned to the default service.
* Specifying a non-existent service will cause
the job class creation to fail.
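As a sketch of the first rule (the job class name here is hypothetical), omitting the service parameter assigns the job class to the default service:
BEGIN
  -- No service specified, so this job class is assigned to the
  -- default service and its jobs can run on any RAC instance.
  DBMS_SCHEDULER.create_job_class (
    job_class_name => 'any_node_job_class');
END;
/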
Services can be configured using the Database
Configuration Assistant (dbca), the srvctl utility, or the
dbms_service package. The dbms_service package is limited to
service administration on a single node, while the dbca and srvctl
utilities can perform cluster-wide configuration and administration.
Examples of administering services with the dbms_service package are
shown below.
BEGIN
  -- Create a new service associated with the specified TNS service name.
  DBMS_SERVICE.create_service (
    service_name => 'test_service',
    network_name => 'DB10G.MYDOMAIN.COM');

  -- Start the specified service.
  DBMS_SERVICE.start_service (
    service_name => 'test_service');

  -- Disconnect all sessions associated with the specified service.
  DBMS_SERVICE.disconnect_session (
    service_name => 'test_service');

  -- Stop the specified service.
  DBMS_SERVICE.stop_service (
    service_name => 'test_service');

  -- Delete the specified service.
  DBMS_SERVICE.delete_service (
    service_name => 'test_service');
END;
/
Some examples of using the srvctl utility to
perform similar actions are listed below.

# Create the service on two nodes.
srvctl add service -d DB10G -s TEST_SERVICE -r DB10G1,DB10G2

# Stop and start the service on one or more nodes.
srvctl stop service -d DB10G -s TEST_SERVICE -i DB10G1,DB10G2
srvctl start service -d DB10G -s TEST_SERVICE -i DB10G1

# Disable and enable the service on one or more nodes.
srvctl disable service -d DB10G -s TEST_SERVICE -i DB10G1,DB10G2
srvctl enable service -d DB10G -s TEST_SERVICE -i DB10G1

# Display the current status of the service.
srvctl status service -d DB10G -s TEST_SERVICE -v

# Remove the service from both nodes.
srvctl remove service -d DB10G -s TEST_SERVICE -i DB10G1,DB10G2
Once a service is present, it can be assigned
to a job class during creation or subsequently using the
set_attribute procedure, as shown below.
BEGIN
  DBMS_SCHEDULER.create_job_class (
    job_class_name => 'test_job_class',
    service        => 'test_service');

  DBMS_SCHEDULER.set_attribute (
    name      => 'test_job_class',
    attribute => 'service',
    value     => 'admin_service');
END;
/
The following scenario explains more
specifically how services can be used to partition applications in a
three-node RAC environment.
For services to function correctly, the Global
Services Daemon (GSD) must be running on each node in the cluster.
The GSDs are started using the gsdctl utility, which is part of the
Cluster Ready Services (CRS) installation, so they must be started
from that environment.
# Set environment.
export ORACLE_HOME=/u01/app/oracle/product/10.1.0/crs
export PATH=$ORACLE_HOME/bin:$PATH

# Start the GSD daemon.
gsdctl start
Once the GSDs are running, check that the
cluster configuration is correct. The following command and
output show the expected configuration for a three-node database
called ORCL.
srvctl config database -d ORCL
server01 ORCL1 /u01/app/oracle/product/10.1.0/db_1
server02 ORCL2 /u01/app/oracle/product/10.1.0/db_1
server03 ORCL3 /u01/app/oracle/product/10.1.0/db_1
This configuration is typically performed
during the cluster database creation, but it can be performed
subsequently using the following commands.
srvctl add database -d ORCL -o /u01/app/oracle/product/10.1.0/db_1
srvctl add instance -d ORCL -i ORCL1 -n server01
srvctl add instance -d ORCL -i ORCL2 -n server02
srvctl add instance -d ORCL -i ORCL3 -n server03
Assume that two applications should run in the
following way:
* OLTP - Should run on nodes one and two of the
RAC, but is able to run on node three if nodes one and two are not
available.
* BATCH - Should run on node three, but is able
to run on nodes one and two if node three is not available.
To meet this requirement, the following
services can be created:
# Set environment.
export ORACLE_HOME=/u01/app/oracle/product/10.1.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH

# Create services.
srvctl add service -d ORCL -s OLTP_SERVICE -r ORCL1,ORCL2 -a ORCL1,ORCL2,ORCL3
srvctl add service -d ORCL -s BATCH_SERVICE -r ORCL3 -a ORCL1,ORCL2,ORCL3
The OLTP_SERVICE is able to run on all RAC
nodes, indicated by the -a option, but will run in preference on
nodes one and two, indicated by the -r option. The
BATCH_SERVICE is able to run on all RAC nodes, indicated by the -a
option, but will run in preference on node three, indicated by the
-r option.
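The resulting configuration can be checked with srvctl (a sketch; the exact output format varies between versions):
# Display the preferred and available instances for each service.
srvctl config service -d ORCL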
The services can be started and stopped using
the following commands.
srvctl start service -d ORCL -s OLTP_SERVICE
srvctl start service -d ORCL -s BATCH_SERVICE

srvctl stop service -d ORCL -s OLTP_SERVICE
srvctl stop service -d ORCL -s BATCH_SERVICE
The Oracle 10g scheduler allows jobs to be
assigned to job classes, which in turn can be linked to services,
allowing jobs to run on specific nodes in a RAC environment. To
meet the requirements above, two job classes can be created as
follows.
-- Create OLTP and BATCH job classes.
BEGIN
  DBMS_SCHEDULER.create_job_class(
    job_class_name => 'OLTP_JOB_CLASS',
    service        => 'OLTP_SERVICE');

  DBMS_SCHEDULER.create_job_class(
    job_class_name => 'BATCH_JOB_CLASS',
    service        => 'BATCH_SERVICE');
END;
/

-- Make sure the relevant users have access to the job classes.
GRANT EXECUTE ON sys.oltp_job_class TO job_user;
GRANT EXECUTE ON sys.batch_job_class TO job_user;
These job classes can then be assigned during
job creation or to existing jobs.
-- Create a job associated with a job class.
BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'my_user.oltp_job_test',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN NULL; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;',
    job_class       => 'SYS.OLTP_JOB_CLASS',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'Job linked to the OLTP_JOB_CLASS.');
END;
/

-- Assign a job class to an existing job.
EXEC DBMS_SCHEDULER.set_attribute ('MY_BATCH_JOB', 'JOB_CLASS', 'BATCH_JOB_CLASS');
The use of services is not restricted to
scheduled jobs. Services can also be used in the tnsnames.ora
file to influence which nodes are used by each application.
Example tnsnames.ora entries are shown below.
OLTP =
  (DESCRIPTION =
    (LOAD_BALANCE = ON)
    (FAILOVER = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = server01)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = server02)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = server03)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = OLTP_SERVICE)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 20)
        (DELAY = 1)
      )
    )
  )

BATCH =
  (DESCRIPTION =
    (LOAD_BALANCE = ON)
    (FAILOVER = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = server01)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = server02)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = server03)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = BATCH_SERVICE)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 20)
        (DELAY = 1)
      )
    )
  )
Provided applications use the appropriate
connection identifier, they should only connect to the nodes
associated with the service.
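For example, a client session started as follows (using the job_user credentials from earlier in this chapter) would be directed to one of the instances currently running OLTP_SERVICE:
sqlplus job_user/job_user@OLTP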
Although not directly related to services, it
is sensible to discuss the concept of instance stickiness at this
point. Services can be used to associate jobs with one or more
RAC instances in the cluster, but having a job run on a different
instance each time can cause performance issues. For example, a
job may execute on the first node of the cluster, during which time
all the data necessary to perform the job is read from disk into
that node's buffer cache. On the second execution, the job runs
on the second node.
Since most of the data needed by the job is
already in the first node's cache, it must be passed across the
cluster's interconnect before the second run can proceed. If the
job had executed on the first node again, this network transfer
would not have been necessary. This is the rationale for instance
stickiness.
The instance_stickiness parameter for an
individual job defaults to TRUE, meaning that the job will run
repeatedly on the same node, assuming it is available and not
severely overloaded. The default value can be modified using
the set_attribute procedure as shown below.
BEGIN
  DBMS_SCHEDULER.set_attribute (
    name      => 'test_stickiness_job',
    attribute => 'instance_stickiness',
    value     => FALSE);
END;
/
With the instance_stickiness parameter set to
FALSE, the job can run on any available node in the cluster.
Now that information has been presented on how
services can be used in a RAC environment, it would be prudent to
move on to the subject of the security issues related to the
scheduler.
This is an excerpt from the book "Oracle Job Scheduling" by Dr. Tim Hall. You can buy it direct from the publisher for 30% off and get instant access to the code depot of Oracle job scheduling scripts.