
INTRODUCTION TO 9i BULK PL/SQL ENHANCEMENTS

Author: Adrian Billington


Date: 18-SEP-2003

INTRODUCTION
In December 2001, I wrote a paper entitled "Introduction to 8i Bulk PL/SQL Processing"
( http://mysite.wanadoo-members.co.uk/adriansoraclepages/papers/8i_BULK_PLSQL.pdf ). This
demonstrated the (then new) bulk PL/SQL processing enhancements introduced in Oracle 8i. Oracle
has continued to improve and extend the possibilities for bulk PL/SQL processing through its 9i
releases One and Two. This paper introduces these enhancements.

It is assumed that readers are familiar with the bulk PL/SQL fetching and binding constructs available
in Oracle 8i ( i.e. BULK COLLECT and FORALL ). If you require some background, follow the link to
my 8i-specific paper above.

ENHANCEMENTS
We will be covering the following enhancements in this paper :-

 BULK COLLECT support for collections of records;
 BULK COLLECT support for Native Dynamic SQL;
 BULK COLLECT support for objects;
 FORALL SAVE EXCEPTIONS clause;
 FORALL support for record-based DML;
 FORALL support for Native Dynamic SQL.

BULK COLLECTING RECORDS


A well-documented limitation of BULK COLLECT in 8i was that each column fetched from a SQL
statement or cursor required its own index-by table or collection. This meant that many types needed
to be declared and many index-by tables or collections maintained, which could be very code-
intensive. There were workarounds available to overcome this "annoyance", but they often performed
less well.
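
To illustrate the point, the following is a minimal sketch of the 8i-style approach for just two
columns of USER_TABLES ( the type and variable names are purely illustrative ). Scaling this up to a
wide table quickly becomes unwieldy:-

DECLARE

   /* One type and one array per fetched column... */
   TYPE typ_tab_name IS TABLE OF user_tables.table_name%TYPE
      INDEX BY BINARY_INTEGER;
   TYPE typ_tab_rows IS TABLE OF user_tables.num_rows%TYPE
      INDEX BY BINARY_INTEGER;

   aa_names typ_tab_name;
   aa_rows  typ_tab_rows;

BEGIN

   /* Each fetched column needs its own target array... */
   SELECT table_name, num_rows
   BULK COLLECT INTO aa_names, aa_rows
   FROM   user_tables;

END;
/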

Oracle 9i Release Two introduces BULK COLLECT support for collections of records. This means
that from 9.2.0.1 onwards, we can declare a single associative array ( new name for index-by tables
in 9i ) or collection type based on one of the following :-

 table%ROWTYPE;
 view%ROWTYPE;
 user-defined record type;
 cursor%ROWTYPE ( 9.2.0.3 onwards only ).

A short example of this simple yet powerful new feature is as follows:-


920> DECLARE
2
3 /*
4 * Declare an associative array type of
5 * USER_TABLES structure...
6 */
7 TYPE typ_aa_tables IS TABLE OF user_tables%ROWTYPE
8 INDEX BY PLS_INTEGER;
9 aa_tables typ_aa_tables;
10
11 BEGIN
12
13 /* Fetch all of my USER_TABLES data in one pass... */
14 SELECT *
15 BULK COLLECT INTO aa_tables
16 FROM user_tables;
17
18 DBMS_OUTPUT.PUT_LINE (
19 'Fetched ' || TO_CHAR ( aa_tables.COUNT ) ||
20 ' records from USER_TABLES.' );
21
22 END;
23 /
Fetched 21 records from USER_TABLES.

PL/SQL procedure successfully completed.

Some points to note are:-

Lines 7-8 our associative array type is based on a table rowtype. We have indexed the array by
PLS_INTEGER ( new in 9i ) for even greater efficiency than BINARY_INTEGER (
pre-9i ).

Line 9 we only need to declare one associative array variable, irrespective of the number of
columns to be fetched ( USER_TABLES in 9.2.0.3 has 46 columns - that's 45 fewer
types and 45 fewer variables required in 9i as opposed to 8i ).

Line 14 we can feel quite comfortable with selecting * from the source table as our associative
array has been defined as being of exactly the same structure.

Line 15 a simple BULK COLLECT into a single associative array of records is all that is
required.

This feature in 9iR2 dramatically reduces the amount of code required to utilise bulk fetching, which
results in performance gains as well as improved legibility and simpler maintenance.

BULK COLLECT WITH NATIVE DYNAMIC SQL


Oracle 9i now enables us to bulk fetch from Native Dynamic SQL statements. Prior to 9i we had
DBMS_SQL ( with its reasonably complex, low-level interface ) or NDS workarounds using dynamic
PL/SQL. However, it is now much simpler in 9i. In the following example, we will simulate a process
whereby we bulk fetch a collection of rows from a table based on a set of criteria passed in as a
parameter ( our parameter in this case will be represented by a SQL*Plus bind variable ).

920> VAR where_clause VARCHAR2(256)
920>
920> exec :where_clause := ' created > TRUNC(SYSDATE) - 150 ';

PL/SQL procedure successfully completed.

920>
920>
920> DECLARE
2
3 TYPE typ_aa_object IS TABLE OF user_objects%ROWTYPE
4 INDEX BY PLS_INTEGER;
5 aa_objects typ_aa_object;
6
7 v_predicates VARCHAR2(256) := :where_clause;
8
9 BEGIN
10
11 /* Execute the statement and bulk fetch the results... */
12 EXECUTE IMMEDIATE ' SELECT *
13 FROM user_objects
14 WHERE ' || v_predicates
15 BULK COLLECT INTO aa_objects;
16
17 /* How many did we get ? */
18 DBMS_OUTPUT.PUT_LINE ( aa_objects.COUNT ||
19 ' records fetched.' );
20
21 END;
22 /
54 records fetched.

PL/SQL procedure successfully completed.

Some points to note are:-

Line 7 our predicates are being assigned from our "parameter" to be appended to a dynamic
SQL statement at runtime.

Line 15 the BULK COLLECT extension to EXECUTE IMMEDIATE..INTO in 9i, combined with
the enhancement to bulk fetch into an associative array / collection of records.
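
As an aside, when the dynamic element is a value rather than a whole predicate, the BULK COLLECT
INTO clause of EXECUTE IMMEDIATE can be combined with a USING bind. The following is a minimal
sketch ( the v_days variable is purely illustrative ):-

DECLARE

   TYPE typ_aa_object IS TABLE OF user_objects%ROWTYPE
      INDEX BY PLS_INTEGER;
   aa_objects typ_aa_object;

   v_days PLS_INTEGER := 150;

BEGIN

   /* Bind the value rather than concatenating it into the statement... */
   EXECUTE IMMEDIATE ' SELECT *
                       FROM   user_objects
                       WHERE  created > TRUNC(SYSDATE) - :days '
      BULK COLLECT INTO aa_objects
      USING v_days;

   DBMS_OUTPUT.PUT_LINE ( aa_objects.COUNT || ' records fetched.' );

END;
/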

Note that the support for BULK COLLECT with Native Dynamic SQL also includes the OPEN FOR
statement. The following is a re-work of the above example for completeness.

920> VAR where_clause VARCHAR2(256)
920>
920> exec :where_clause := ' created > TRUNC(SYSDATE) - 150 ';

PL/SQL procedure successfully completed.

920>
920>
920> DECLARE
2
3 TYPE typ_aa_object IS TABLE OF user_objects%ROWTYPE
4 INDEX BY PLS_INTEGER;
5 aa_objects typ_aa_object;
6
7 v_predicates VARCHAR2(256) := :where_clause;
8
9 cv SYS_REFCURSOR;
10
11 BEGIN
12
13 /* Open a cursor variable for dynamic SQL... */
14 OPEN cv FOR ' SELECT *
15 FROM user_objects
16 WHERE ' || v_predicates;
17
18 /* Fetch from the ref cursor... */
19 FETCH cv BULK COLLECT INTO aa_objects;
20
21 /*
22 * Not passing the ref cursor to a client
23 * in this example so close it...
24 */
25 CLOSE cv;
26
27 /* How many did we get ? */
28 DBMS_OUTPUT.PUT_LINE ( aa_objects.COUNT ||
29 ' records fetched.' );
30
31 END;
32 /
54 records fetched.

PL/SQL procedure successfully completed.

This approach is useful when passing a pointer to a resultset (REF CURSOR) to a client program. Some
points to note are :-

Line 9 we declare a weak cursor variable of 9i type SYS_REFCURSOR.

Lines 14-16 we explicitly open a cursor variable for our dynamic SQL statement.

Line 19 we bulk fetch the cursor into our associative array of records.

Line 25 usually we use this approach to pass a cursor variable to a client program, so we would
not need to close the ref cursor in the server code. As this is a self-contained
example, we explicitly close the cursor.
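
For completeness, the following is a minimal sketch of how the ref cursor might instead be handed
back to a client rather than fetched locally ( the function name and parameter are hypothetical ).
The client, or another PL/SQL routine, would then perform the FETCH..BULK COLLECT and the CLOSE:-

CREATE OR REPLACE FUNCTION get_objects (
   p_predicates IN VARCHAR2
) RETURN SYS_REFCURSOR IS

   cv SYS_REFCURSOR;

BEGIN

   /* Open the cursor variable and return it unfetched... */
   OPEN cv FOR ' SELECT *
                 FROM   user_objects
                 WHERE ' || p_predicates;

   RETURN cv;

END get_objects;
/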

BULK COLLECT SUPPORT FOR EXTENDED OBJECT TYPES


Oracle release 8.0 gave us the opportunity to define our own data types and create tables with
columns based on these user-defined types. When bulk PL/SQL processing was introduced in 8i, it
provided a very limited scope for bulk fetching columns of user-defined types from tables into index-by
tables or collections. Until 9i, we could only BULK COLLECT columns of scalar types or single
instances of object types ( i.e. we couldn't bulk fetch collections of records ). In keeping with 9i's
extended support for record-based operations, we can now BULK COLLECT a collection of records
stored in a table into either an associative array or collection variable.

In the following example we are going to denormalise the way we store information on our tables and
columns. We will create a table with an "embedded" nested table of column information. The objects
required for this are :-

 an object type to define the structure of our column information;
 a collection type ( nested table ) based on this object to hold multiple records of column information;
 a relational table with just two columns - a table name and a collection of column information records.

When we have set up our objects, we will then bulk fetch the embedded collection of records from our
source table. First the object type:-

920> CREATE TYPE typ_obj_colinfo AS OBJECT
2 ( column_name VARCHAR2(30)
3 , column_id INTEGER
4 );
5 /

Type created.

Our object type defines the information we will store in one "row" of column information. Next we
create a collection type to hold multiple "rows" of this column information :-

920> CREATE TYPE typ_nt_colinfo
2 AS
3 TABLE OF typ_obj_colinfo;
4 /

Type created.

We can now create a table to hold information on all of our tables and their column attributes. Each of
our tables will occupy one record in this table. The column attributes are "embedded" as nested tables :-

920> CREATE TABLE objrel_table
2 ( table_name VARCHAR2(30)
3 , table_columns typ_nt_colinfo
4 )
5 NESTED TABLE table_columns
6 STORE AS objrel_table_nt;

Table created.

920>
920> INSERT INTO objrel_table
2 SELECT a.table_name
3 , CAST (
4 MULTISET (
5 SELECT b.column_name, b.column_id
6 FROM user_tab_columns b
7 WHERE a.table_name = b.table_name
8 ) AS typ_nt_colinfo
9 ) AS table_columns
10 FROM user_tables a;

22 rows created.

Now for the 9i enhancement - we can directly BULK COLLECT a stored collection of records into a
collection of records variable.

920> DECLARE
2
3 TYPE typ_nt_tabcol IS TABLE OF objrel_table%ROWTYPE;
4 nt_tabcols typ_nt_tabcol;
5
6 BEGIN
7
8 SELECT table_name, table_columns
9 BULK COLLECT INTO nt_tabcols
10 FROM objrel_table;
11
12 /* How many did we get ? */
13 DBMS_OUTPUT.PUT_LINE ( nt_tabcols.COUNT ||
14 ' records fetched.' );
15
16 END;
17 /
22 records fetched.

PL/SQL procedure successfully completed.

Some points to note are :-

Lines 3-4 we have declared a nested table type based on our table ROWTYPE. This will
comprise a VARCHAR2 attribute and a collection of records attribute, so we end up
with a multi-level collection.

Line 8 we request two columns from the database. To PL/SQL, they are just columns - the
fact that the "TABLE_COLUMNS" column is based on a user-defined type is complex
only to us !

The ability to bulk fetch data of a range of structures is an important addition to PL/SQL's processing
capabilities, especially as the object enhancements in 9i have made object-oriented programming
methods viable in PL/SQL for the first time. It is just as important to be efficient when processing
complex data as when processing "simple" scalar data.
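
As an aside, once fetched, the multi-level collection noted above can be traversed with nested loops.
The following is a minimal sketch, reusing the OBJREL_TABLE declarations from the example:-

DECLARE

   TYPE typ_nt_tabcol IS TABLE OF objrel_table%ROWTYPE;
   nt_tabcols typ_nt_tabcol;

BEGIN

   SELECT table_name, table_columns
   BULK COLLECT INTO nt_tabcols
   FROM   objrel_table;

   /* Outer loop over the tables, inner loop over each nested column record... */
   FOR i IN 1 .. nt_tabcols.COUNT LOOP
      FOR j IN 1 .. nt_tabcols(i).table_columns.COUNT LOOP
         DBMS_OUTPUT.PUT_LINE (
            nt_tabcols(i).table_name || '.' ||
            nt_tabcols(i).table_columns(j).column_name );
      END LOOP;
   END LOOP;

END;
/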

FORALL EXCEPTION HANDLING


Bulk binding in PL/SQL was introduced in Oracle 8i as the FORALL statement. It provided us with the
ability to submit "batches" of data for DML activity to the SQL engine, removing the need to switch
context between PL/SQL and SQL on a row-by-row basis. The performance improvements were
significant, although on the "downside" each FORALL statement either succeeded or failed in its
entirety. Oracle 9i has addressed this issue and now enables us to skip any records that would have
previously caused FORALL to fail. This is known as the SAVE EXCEPTIONS clause.

In the following example we will UPDATE a CUSTOMERS table in our warehouse by passing in a
divisor to reduce our customers' credit limits. Due to a lack of validation on our front-end, we've
received a file to process that contains a zero divisor for two customers. This will cause two iterations
of our FORALL statement to error but rather than fail the entire transaction, we will report the errors (
in a production system you would probably store these off in an exceptions table ) and allow the
"clean" data to process through.

First we'll create a CUSTOMERS table from a few USER_OBJECTS rows :-

920> CREATE TABLE customers
2 NOLOGGING
3 AS
4 SELECT object_id AS customer_id
5 , ROWNUM * 1000 AS credit_limit
6 FROM user_objects
7 WHERE ROWNUM <= 5;

Table created.

Now we'll create a staging table for the new credit limit data of our existing customers:-

920> CREATE TABLE credit_limits
2 NOLOGGING
3 AS
4 SELECT customer_id
5 , DECODE(MOD(ROWNUM,2),0,0,1.25) AS limit_adjustor
6 FROM customers;

Table created.

Now we'll run a PL/SQL block to fetch the staging data and bulk bind an UPDATE to the
CUSTOMERS table:-

920> DECLARE
2
3 TYPE typ_aa_id IS TABLE OF credit_limits.customer_id%TYPE
4 INDEX BY PLS_INTEGER;
5
6 TYPE typ_aa_divisor IS TABLE OF credit_limits.limit_adjustor%TYPE
7 INDEX BY PLS_INTEGER;
8
9 aa_ids typ_aa_id;
10 aa_divisors typ_aa_divisor;
11
12 x_bad_iteration EXCEPTION;
13 PRAGMA EXCEPTION_INIT (
14 x_bad_iteration, -24381 );
15
16
17 BEGIN
18
19 SELECT customer_id, limit_adjustor
20 BULK COLLECT INTO aa_ids, aa_divisors
21 FROM credit_limits;
22
23 FORALL i IN aa_ids.FIRST .. aa_ids.LAST SAVE EXCEPTIONS
24 UPDATE customers
25 SET credit_limit = credit_limit/aa_divisors(i)
26 WHERE customer_id = aa_ids(i);
27
28 DBMS_OUTPUT.PUT_LINE (
29 'Do we make it here ' ||
30 ' if we hit any exceptions ?' );
31
32 EXCEPTION
33
34 WHEN x_bad_iteration THEN
35
36 DBMS_OUTPUT.PUT_LINE (
37 'Successful UPDATE of ' ||
38 TO_CHAR(SQL%ROWCOUNT) || ' records.' );
39
40 DBMS_OUTPUT.PUT_LINE (
41 'Failed on ' ||
42 TO_CHAR(SQL%BULK_EXCEPTIONS.COUNT) || ' records.' );
43
44
45 FOR i IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
46
47 DBMS_OUTPUT.PUT_LINE (
48 'Error occurred on iteration ' ||
49 SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
50 ' due to ' ||
51 SQLERRM (
52 -1 * SQL%BULK_EXCEPTIONS(i).ERROR_CODE) );
53
54 DBMS_OUTPUT.PUT_LINE (
55 'Customer = ' ||
56 TO_CHAR (
57 aa_ids (
58 SQL%BULK_EXCEPTIONS(i).ERROR_INDEX )) ||
59 ' Value = ' ||
60 TO_CHAR (
61 aa_divisors (
62 SQL%BULK_EXCEPTIONS(i).ERROR_INDEX )) );
63
64 END LOOP;
65
66 END;
67 /
Successful UPDATE of 3 records.
Failed on 2 records.
Error occurred on iteration 2 due to ORA-01476: divisor is equal to zero
Customer = 1414002 Value = 0
Error occurred on iteration 4 due to ORA-01476: divisor is equal to zero
Customer = 1482845 Value = 0

PL/SQL procedure successfully completed.

Some points to note are:-

Lines 9-10 we are using two associative arrays for our fetched data - one for each column
fetched. Note that this is despite the fact that in 9iR2 we can now BULK COLLECT
into associative arrays / collections of records. This is because, even in the later
versions of 9i, we still cannot use associative arrays / collections of records with the
FORALL statement if we wish to reference individual attributes of the record. We
receive a " PLS-00436: implementation restriction: cannot reference fields of BULK
In-BIND table of records" error. So for this purpose, we are caught somewhere
between 8i and 9i PL/SQL methods ! Note, however, that PL/SQL in 9i supports
record-based operations, so we can combine FORALL with entire-record inserts or
updates, but as stated we cannot reference any attributes of the records (see BULK
BINDING AND RECORD-BASED OPERATIONS below).

Lines 12-14 we declare our own exception and tie it to the new Oracle error -24381 using the
EXCEPTION_INIT pragma. This is the error code that tells us we have encountered
some bad iterations. We will use this exception to trap them.

Line 23 we use the new SAVE EXCEPTIONS syntax to tell Oracle that we don't want the
entire FORALL statement to fail in the event of any bad records.

Lines 28-30 we have some output after the FORALL statement to demonstrate the behaviour of
FORALL when SAVE EXCEPTIONS is invoked. Note that we do not see this message
in the output following the script. This means that the failed iterations have
thrown us into the EXCEPTION block, which is what we would expect. If we wanted
to use SAVE EXCEPTIONS to overlook any errors and continue processing after the
FORALL, we would nest the FORALL statement and the x_bad_iteration EXCEPTION
trap in an anonymous BEGIN..EXCEPTION..END block ( a sketch of this pattern
follows these notes ).

Line 34 we trap the ORA-24381 error.

Lines 36-38 some output to report how many successful UPDATEs we managed via FORALL.

Lines 40-42 some output to report how many "bad" records we had, using the count of elements
in the SQL%BULK_EXCEPTIONS associative array.

Line 45 as we know we have at least one error (else we wouldn't be in this section), we now
LOOP through the SQL%BULK_EXCEPTIONS associative array to determine both
the cause of each error and those elements in our associative arrays which failed (
which we can use to peek back at the offending values ). Note that interestingly we
cannot use the FIRST and LAST methods defined in the PLITBLM package for the
SQL%BULK_EXCEPTIONS associative array - instead we are limited to COUNT,
which we hope Oracle guarantees to be synonymous with LAST in this instance.

Lines 47-52 we are displaying both the element in our associative array that failed and the reason
for the failure. The ERROR_INDEX attribute of the SQL%BULK_EXCEPTIONS
associative array of records contains the offsets in our bind arrays that failed within
the FORALL statement. The ERROR_CODE attribute contains the Oracle error code. We
can use this as an argument (multiplied by -1) to the SQLERRM function to retrieve
the associated error message.

Lines 54-62 we use the ERROR_INDEX attribute of SQL%BULK_EXCEPTIONS to look back at
our associative array of credit limit data to determine what the values were that
caused the errors. In the example, of course, the offending values were both zero,
hence the zero divisor messages. You would probably store this information in an
exceptions table in an audited production system.
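
Returning to the point made against Lines 28-30, the following is a minimal sketch of the nesting
pattern that allows processing to continue after a partially-failed FORALL. It reuses the
CUSTOMERS and CREDIT_LIMITS tables from the example above:-

DECLARE

   TYPE typ_aa_id IS TABLE OF credit_limits.customer_id%TYPE
      INDEX BY PLS_INTEGER;
   TYPE typ_aa_divisor IS TABLE OF credit_limits.limit_adjustor%TYPE
      INDEX BY PLS_INTEGER;

   aa_ids      typ_aa_id;
   aa_divisors typ_aa_divisor;

   x_bad_iteration EXCEPTION;
   PRAGMA EXCEPTION_INIT ( x_bad_iteration, -24381 );

BEGIN

   SELECT customer_id, limit_adjustor
   BULK COLLECT INTO aa_ids, aa_divisors
   FROM   credit_limits;

   /* Nested block: trap ORA-24381 here so that the outer block continues... */
   BEGIN
      FORALL i IN aa_ids.FIRST .. aa_ids.LAST SAVE EXCEPTIONS
         UPDATE customers
         SET    credit_limit = credit_limit / aa_divisors(i)
         WHERE  customer_id = aa_ids(i);
   EXCEPTION
      WHEN x_bad_iteration THEN
         DBMS_OUTPUT.PUT_LINE (
            'Failed on ' || SQL%BULK_EXCEPTIONS.COUNT || ' records.' );
   END;

   /* Control reaches here whether or not any iterations failed... */
   DBMS_OUTPUT.PUT_LINE ( 'Continuing after the FORALL...' );

END;
/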

BULK BINDING AND RECORD-BASED OPERATIONS


A useful PL/SQL feature in Oracle 9i is the ability to perform record-based operations without having
to code references to the individual attributes. For example, if we wanted to INSERT a record into a
table pre-9i, we would have to build a potentially lengthy VALUES statement containing all the
relevant "record.attribute" listings. Now in 9i we can simply supply the record, assuming it meets with
the INSERT target's column definition. We can utilise this feature in conjunction with FORALL to
workaround part of the implementation restriction that prevents us from using FORALL with
associative arrays / collections of records as noted in the previous section.
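
Before combining this with FORALL, the following minimal sketch shows the record-based INSERT
syntax in isolation, reusing the CREDIT_LIMITS table from the previous section ( the values assigned
are purely illustrative ):-

DECLARE

   rec_limit credit_limits%ROWTYPE;

BEGIN

   rec_limit.customer_id    := 100001;
   rec_limit.limit_adjustor := 1.25;

   /* Pre-9i we would have to list every "record.attribute" in the VALUES
      clause; in 9i we simply supply the whole record ( note that there
      are no parentheses around the record )... */
   INSERT INTO credit_limits VALUES rec_limit;

END;
/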

In the following example, we will imagine that ALL_OBJECTS is a source staging table providing us
with database information to be replicated elsewhere. Using record-based PL/SQL we can now BULK
COLLECT into one associative array and FORALL INSERT the entire record into a target table
without ever having to reference either a column from the staging or target tables or an attribute from
a record element in the associative array.

First we'll create a target table:-

920> CREATE TABLE our_objects
2 NOLOGGING
3 AS
4 SELECT *
5 FROM all_objects
6 WHERE ROWNUM < 1;

Table created.

Next we'll write the PL/SQL to utilise the full power of 9i bulk processing while keeping code volumes
to a bare minimum:-

920> DECLARE
2
3 TYPE typ_aa_allobj IS TABLE OF all_objects%ROWTYPE
4 INDEX BY PLS_INTEGER;
5 aa_allobjs typ_aa_allobj;
6
7 BEGIN
8
9 SELECT * BULK COLLECT INTO aa_allobjs
10 FROM all_objects;
11
12 FORALL i IN aa_allobjs.FIRST .. aa_allobjs.LAST
13 INSERT INTO our_objects
14 VALUES aa_allobjs(i);
15
16 DBMS_OUTPUT.PUT_LINE (
17 TO_CHAR(SQL%ROWCOUNT) ||
18 ' records inserted.' );
19
20 END;
21 /
28590 records inserted.

PL/SQL procedure successfully completed.

Obviously this type of operation would usually be best implemented via SQL as an INSERT..SELECT,
but in the real world we might have many transformations and business rules to apply to the data
between the initial fetch and the final DML. This example provides the structure for such an
implementation. Some points to note are :-

Lines 3-5 we declare an associative array type and variable based on the ROWTYPE of
ALL_OBJECTS. This will hold all of our data between bulk fetch and bulk bind.

Lines 12-14 we combine a FORALL INSERT with PL/SQL's new record-based operations. Note in
line 14 we supply an entire record to the VALUES clause of our INSERT statement.
Note also that for a record-based INSERT we do not use parentheses with the
VALUES clause.

This method is a good workaround to the stated implementation restriction that prevents us from
referencing individual attributes of associative arrays / collections of records within a FORALL
statement. The reality is, however, that this method will only be suited to a small number of
implementations, especially as a large proportion of the DML we perform in PL/SQL is controlled by
predicates, such as primary key joins.

For example, if we are processing thousands of customers and we need to bulk update their base
information from new source data, we need to target their respective records via their customer IDs.
PL/SQL in 9i supports record-based updates in addition to the INSERT example above, but we need
yet more workarounds to make this work for us in a bulk bind implementation. The following is an
example of how we can use the record-based UPDATE with FORALL - we'll examine the code first
and then consider the implications of such an approach.

In the following example, we will create a CUSTOMERS table based on the data in USER_OBJECTS,
using OBJECT_ID as a surrogate key. We will then create a source table called
CUSTOMER_SOURCE, also based on USER_OBJECTS, but with modified data to represent
changes to our customer records on the source system. We will then UPDATE our target data from
the modified source.

First our CUSTOMERS table:-

920> CREATE TABLE customers
2 NOLOGGING
3 AS
4 SELECT object_id AS customer_id
5 , object_name AS customer_name
6 , TRUNC(created) - 100 AS start_date
7 , object_type AS branch_name
8 FROM user_objects;

Table created.

Next our source table. Note how we modify some of the data simply to make it differ from the data in
our CUSTOMERS table:-

920> CREATE TABLE customer_source
2 NOLOGGING
3 AS
4 SELECT object_id AS customer_id
5 , LOWER(object_name) AS customer_name
6 , TRUNC(created) - 100 AS start_date
7 , INITCAP(object_type) AS branch_name
8 FROM user_objects;

Table created.

Now for the PL/SQL combining a FORALL UPDATE with an associative array of records:-

920> DECLARE
2
3 TYPE typ_aa_customer IS TABLE OF customer_source%ROWTYPE
4 INDEX BY PLS_INTEGER;
5 aa_customers typ_aa_customer;
6
7 TYPE typ_aa_id IS TABLE OF customer_source.customer_id%TYPE
8 INDEX BY PLS_INTEGER;
9 aa_ids typ_aa_id;
10
11 BEGIN
12
13 SELECT * BULK COLLECT INTO aa_customers
14 FROM customer_source;
15
16 FOR i IN aa_customers.FIRST .. aa_customers.LAST LOOP
17 aa_ids(i) := aa_customers(i).customer_id;
18 END LOOP;
19
20 FORALL i IN aa_customers.FIRST .. aa_customers.LAST
21 UPDATE customers
22 SET ROW = aa_customers(i)
23 WHERE customer_id = aa_ids(i);
24
25 DBMS_OUTPUT.PUT_LINE (
26 TO_CHAR(SQL%ROWCOUNT) ||
27 ' records updated.' );
28
29 END;
30 /
94 records updated.

PL/SQL procedure successfully completed.

Some points to note are:-

Lines 3-5 a standard associative array type and variable based on the ROWTYPE of our source
table. We have now seen this several times throughout this paper.

Lines 7-9 we declare an additional associative array based on the primary key of our target
data - in this example just one column, CUSTOMER_ID.

Lines 13-15 we BULK COLLECT into our associative array of customer-based records.

Lines 16-18 as an additional step to make this implementation work, we need to copy all the
CUSTOMER_ID values from our customer-based associative array to their
corresponding positions in our scalar associative array of CUSTOMER_IDs.

Line 22 we use the new SET ROW clause in PL/SQL to UPDATE the CUSTOMERS table
using an entire record. As with the record-based INSERT, the SET ROW clause
works by position, not by name, so care must be taken to ensure our record definition
matches the target table, not only in data type, but also in intended values ( for
example we don't put CUSTOMER_NAMEs in the BRANCH_NAME column because
we incorrectly ordered our record definition ). Using ROWTYPE declarations protects
us from this to some degree.

Line 23 we need to make sure each element in our associative array is only used to UPDATE
its relevant customer record. To do this, we bulk bind an additional associative array
of primary key values, in this case our array of CUSTOMER_IDs. This is the
workaround to the restriction that prevents us from coding WHERE customer_id =
aa_customers(i).customer_id.

A key point to note when using record-based UPDATE syntax is that we will be updating the primary
key on our target table. This is generally accepted to be bad practice as it usually entails extra index
updates, additional resource usage, foreign key verification etc. In addition to this, we also need to
code extra arrays to replicate the primary key information, so potentially we could have several extra
associative arrays depending on how many columns comprise our primary keys.

On the face of it, however, we now appear to have the restrictions on combining FORALL with
associative arrays / collections of records covered by workarounds. Personally, I'm not sold on the
SET ROW feature as yet, though the INSERT workaround is very useful and saves a lot of typing ! All
of this contributes towards PL/SQL being a powerful and reasonably flexible bulk processing language
for the database.

BULK BINDING AND NATIVE DYNAMIC SQL


Oracle 9i now enables us to bulk bind Native Dynamic SQL statements using FORALL. Prior to 9i we
had DBMS_SQL or NDS workarounds using dynamic PL/SQL. The following example is slightly more
elaborate than the previous examples in this paper, but demonstrates a good use for FORALL in a
dynamic SQL context. We will build an archiving application that replicates data from main warehouse
tables to archived versions, based on specific customer criteria. We are supplied a condition to search
our customer base and all customers that satisfy the criteria will have their data moved to an archive
area. First we will create our customer database ( represented by just two tables for brevity ). Note
that one in five of our customers joined us a long time ago:-

920> CREATE TABLE customers
2 NOLOGGING
3 AS
4 SELECT object_id AS customer_id
5 , object_name AS customer_name
6 , ROWNUM * 1000 AS credit_limit
7 , CASE MOD(ROWNUM,5)
8 WHEN 0
9 THEN created - 10000
10 ELSE created
11 END AS date_joined
12 FROM user_objects;

Table created.

920> CREATE TABLE addresses
2 NOLOGGING
3 AS
4 SELECT customer_id
5 , TO_CHAR(ROWNUM) ||
6 ' ACACIA AVENUE ' AS add_1
7 , 'SOME DISTRICT' AS add_2
8 , 'SOME TOWN' AS add_3
9 , 'SOME COUNTY' AS add_4
10 , 'XY1 2ZA' AS post_code
11 FROM customers;

Table created.

Next we will need copies of these base tables to store our archived data:-

920> CREATE TABLE archived_customers
2 AS
3 SELECT *
4 FROM customers
5 WHERE ROWNUM < 1;

Table created.

920> CREATE TABLE archived_addresses
2 AS
3 SELECT *
4 FROM addresses
5 WHERE ROWNUM < 1;

Table created.

As we are in a dynamic SQL context and don't want to write a PL/SQL package for every table we
need to archive, we'll drive the whole process from a reference "jobs" table. We have just two "jobs" in
our example, to archive CUSTOMERS and ADDRESSES:-

920> CREATE TABLE customer_tables_to_archive
2 (
3 base_table VARCHAR2(30)
4 , archive_table VARCHAR2(30)
5 );

Table created.

920> INSERT INTO customer_tables_to_archive
2 VALUES ( 'CUSTOMERS', 'ARCHIVED_CUSTOMERS' );

1 row created.

920> INSERT INTO customer_tables_to_archive
2 VALUES ( 'ADDRESSES', 'ARCHIVED_ADDRESSES' );

1 row created.

As we are using dynamic SQL, we will need a collection type to make use of bind variables in our
PL/SQL implementation ( more about this later ). We will create a scalar nested table type of
numbers.

920> CREATE TYPE typ_nt_customer_id
2 AS
3 TABLE OF NUMBER;
4 /

Type created.

Our business community determines which customers are archived and this is passed as a dynamic
WHERE clause ( we are using sqlplus variables in this anonymous PL/SQL example to simulate
parameter passing ). For a reason unknown to us IT staff, our business community has decided that
we are to archive all customers who joined more than 100 days ago.

920> VAR where_clause VARCHAR2(256);
920> exec :where_clause := ' WHERE date_joined < SYSDATE-100 ';

PL/SQL procedure successfully completed.

We now run our single archiving process to determine the customers to archive, store their keys and
"sweep" the database.

920> DECLARE
2
3 nt_ids typ_nt_customer_id;
4
5 BEGIN
6
7 EXECUTE IMMEDIATE ' SELECT customer_id
8 FROM customers ' ||
9 :where_clause
10 BULK COLLECT INTO nt_ids;
11
12 IF nt_ids.COUNT > 0 THEN
13
14 FOR rec_tables IN ( SELECT base_table, archive_table
15 FROM customer_tables_to_archive )
16 LOOP
17
18 FORALL i IN nt_ids.FIRST .. nt_ids.LAST
19 EXECUTE IMMEDIATE
20 ' INSERT INTO ' || rec_tables.archive_table ||
21 ' SELECT * FROM ' || rec_tables.base_table ||
22 ' WHERE customer_id = :bv '
23 USING IN nt_ids(i);
24
25 DBMS_OUTPUT.PUT_LINE (
26 ' Added ' || TO_CHAR(SQL%ROWCOUNT) ||
27 ' records to ' || rec_tables.archive_table );
28
29 FORALL i IN nt_ids.FIRST .. nt_ids.LAST
30 EXECUTE IMMEDIATE
31 ' DELETE FROM ' || rec_tables.base_table ||
32 ' WHERE customer_id = :bv '
33 USING IN nt_ids(i);
34
35 DBMS_OUTPUT.PUT_LINE (
36 ' Removed ' || TO_CHAR(SQL%ROWCOUNT) ||
37 ' records from ' || rec_tables.base_table );
38
39 END LOOP;
40 END IF;
41 END;
42 /
Added 71 records to ARCHIVED_CUSTOMERS
Removed 71 records from CUSTOMERS
Added 71 records to ARCHIVED_ADDRESSES
Removed 71 records from ADDRESSES

PL/SQL procedure successfully completed.

Some points to note are:-

Line 3 we use a nested table based on the collection type we created in our schema. We will
be using this because we will want to make use of bind variables in our dynamic SQL
later in the program. Native Dynamic SQL can only bind variables of user-defined
types known to SQL ( i.e. not declared locally in PL/SQL but created on the database
using the CREATE TYPE... syntax ).

Lines 7-10 another example of combining BULK COLLECT with a Native Dynamic SQL statement,
as discussed in BULK COLLECT WITH NATIVE DYNAMIC SQL.

Line 12 we ensure we have some keys to process before attempting to reference the nested
table.

Lines 14-16 we loop through the "jobs" tables to fetch the names of the tables we wish to archive.
Note that we do not bother with BULK COLLECT here as we are collecting so few
records that the performance gains would be negligible.

Lines 18-23 we pass a dynamic SQL statement through FORALL, new in Oracle 9i. Note that the
FORALL statement precedes the Native Dynamic SQL, unlike the BULK COLLECT
example where the NDS leads. Note also that we can make full use of the SAVE
EXCEPTIONS clause, as detailed in FORALL EXCEPTION HANDLING, should we
require it ( a brief sketch follows these notes ).

Lines 22-23 we make use of bind variables. We include a placeholder (prefixed by a colon) in our
WHERE clause and bind each value in our collection one at a time with the USING
clause.

Line 26 we still get a row count in the same way as any other DML statement.

Lines 29-33 another example of bulk binding with NDS - this time a DELETE statement.
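
As noted against Lines 18-23, SAVE EXCEPTIONS combines directly with the dynamic FORALL. The
following is a minimal sketch, hard-coding the table names for brevity and reusing the
TYP_NT_CUSTOMER_ID type created above:-

DECLARE

   nt_ids typ_nt_customer_id;

   x_bad_iteration EXCEPTION;
   PRAGMA EXCEPTION_INIT ( x_bad_iteration, -24381 );

BEGIN

   SELECT customer_id
   BULK COLLECT INTO nt_ids
   FROM   customers;

   /* A dynamic FORALL that skips, rather than fails on, bad iterations... */
   FORALL i IN nt_ids.FIRST .. nt_ids.LAST SAVE EXCEPTIONS
      EXECUTE IMMEDIATE
         ' INSERT INTO archived_customers
           SELECT * FROM customers WHERE customer_id = :bv '
         USING IN nt_ids(i);

EXCEPTION

   WHEN x_bad_iteration THEN
      DBMS_OUTPUT.PUT_LINE (
         'Failed on ' || SQL%BULK_EXCEPTIONS.COUNT || ' iterations.' );

END;
/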

The combination of FORALL and Native Dynamic SQL is very powerful and the above example
provides us with all the elements required to harness this power. We could take this example a stage
further and develop an implementation which combines Native Dynamic PL/SQL and FORALL to
make more use of the RETURNING clause ( we could DELETE from the base table, RETURNING the
removed records into a collection, then FORALL INSERT the returned collection into the archive table
). This to my mind would be a slightly more elegant approach than the traditional "save first, delete
second" method as above, but it is more complex and beyond the scope of this paper.

CONCLUSIONS
Oracle 9i further extends the capabilities of the bulk PL/SQL processing introduced in 8i. When
processing large volumes of data using PL/SQL ( when SQL just cannot fulfil our requirements ), bulk
fetching and binding should be essential components of our coding arsenals. At the time of writing,
Oracle 10g is being launched and who knows what further improvements are included ? Perhaps a
solution to the "implementation restriction" that prevents FORALL from accessing individual attributes
in collections of records ? Perhaps a BULK COLLECT into an associative array indexed by
VARCHAR2 ? It is understood that FORALL will be able to process non-contiguous arrays for the first
time. Needless to say, much of the work in enabling us to bulk up our PL/SQL has been done up to
and including 9i, so anything from now on will be a bonus !

EXAMPLES
For more examples of bulk processing enhancements in Oracle 9i, visit the following URL :-
http://mysite.wanadoo-members.co.uk/adriansoraclepages/demos/9i/9i.htm. Please feel free to e-mail
me any comments / corrections / modifications to billiauk@yahoo.co.uk.

*************** END ***************

Copyright © Adrian Billington 2003. All Rights Reserved.