introduction to bulk pl/sql enhancements in 9i

In December 2001, I wrote a paper entitled Introduction to Bulk PL/SQL Processing in 8i. This demonstrated the (then new) bulk PL/SQL processing enhancements introduced in Oracle 8i. Oracle has continued to improve and extend the possibilities for bulk PL/SQL processing through its 9i releases One and Two. This article introduces these enhancements.

It is assumed that readers are familiar with the bulk PL/SQL fetching and binding constructs available in Oracle 8i (i.e. BULK COLLECT and FORALL). If you require some background, follow the link to my 8i-specific article above.

enhancements

We will be covering the following enhancements in this article:

- bulk collecting records;
- bulk collect with native dynamic sql;
- bulk collect support for extended object types;
- forall exception handling;
- bulk binding and record-based operations;
- bulk binding and native dynamic sql.

bulk collecting records

A well-documented limitation of BULK COLLECT in 8i was that each column fetched from a SQL statement or cursor required its own index-by table or collection. This meant that many types needed to be declared and many index-by tables or collections maintained, which could be very code-intensive. There were workarounds available to overcome this limitation, but they often performed less well.
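To illustrate the limitation, the following is a minimal sketch (not from the original article) of the 8i-style approach, assuming a simple two-column fetch from USER_TABLES:

```sql
DECLARE

   /* Pre-9.2 we needed one index-by table type per fetched column... */
   TYPE typ_aa_name IS TABLE OF user_tables.table_name%TYPE
      INDEX BY BINARY_INTEGER;
   TYPE typ_aa_rows IS TABLE OF user_tables.num_rows%TYPE
      INDEX BY BINARY_INTEGER;

   aa_names typ_aa_name;
   aa_rows  typ_aa_rows;

BEGIN

   /* ...and a separate target in the INTO list for each column. */
   SELECT table_name, num_rows
   BULK COLLECT INTO aa_names, aa_rows
   FROM   user_tables;

END;
/
```

With more than a handful of columns, the declarations and the INTO list grow quickly, which is precisely the code overhead the 9.2 record-based syntax removes.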

Oracle 9i Release Two introduces BULK COLLECT support for collections of records. This means that from 9.2.0.1 onwards, we can declare a single associative array (the new name for index-by tables in 9i) or collection type based on one of the following:

- a table or view structure (using %ROWTYPE);
- a cursor structure (using %ROWTYPE);
- a user-defined record type (declared with TYPE ... IS RECORD).

An example of this simple, yet powerful new feature is as follows.

SQL> DECLARE
  2  
  3     /*
  4      * Declare an associative array type of
  5      * USER_TABLES structure...
  6      */
  7     TYPE typ_aa_tables IS TABLE OF user_tables%ROWTYPE
  8        INDEX BY PLS_INTEGER;
  9     aa_tables typ_aa_tables;
 10  
 11  BEGIN
 12  
 13     /* Fetch all of my USER_TABLES data in one pass... */
 14     SELECT *
 15     BULK COLLECT INTO aa_tables
 16     FROM user_tables;
 17  
 18     DBMS_OUTPUT.PUT_LINE (
 19        'Fetched ' || TO_CHAR ( aa_tables.COUNT ) ||
 20           ' records from USER_TABLES.' );
 21  
 22  END;
 23  /
Fetched 21 records from USER_TABLES.

PL/SQL procedure successfully completed.

Some points to note are:

This feature in 9iR2 dramatically reduces the amount of code required to utilise bulk fetching, which results in performance gains as well as improved legibility and simpler maintenance.

bulk collect with native dynamic sql

Oracle 9i now enables us to bulk fetch from Native Dynamic SQL statements. Prior to 9i we had DBMS_SQL (with its reasonably complex, low-level interface) or NDS workarounds using dynamic PL/SQL. In 9i it is much simpler. In the following example, we will simulate a process whereby we fetch a collection of keys from a table based on a set of criteria passed in as a parameter (the parameter in this case is represented by a sqlplus variable).

SQL> VAR where_clause VARCHAR2(256)
 
SQL> exec :where_clause := ' created > TRUNC(SYSDATE) - 150 ';

PL/SQL procedure successfully completed.
 
SQL> DECLARE
  2  
  3     TYPE typ_aa_object IS TABLE OF user_objects%ROWTYPE
  4        INDEX BY PLS_INTEGER;
  5     aa_objects typ_aa_object;
  6  
  7     v_predicates VARCHAR2(256) := :where_clause;
  8  
  9  BEGIN
 10  
 11     /* Execute the statement and bulk fetch the results... */
 12     EXECUTE IMMEDIATE ' SELECT *
 13                         FROM   user_objects
 14                         WHERE  ' || v_predicates
 15     BULK COLLECT INTO aa_objects;
 16  
 17     /* How many did we get ? */
 18     DBMS_OUTPUT.PUT_LINE ( aa_objects.COUNT ||
 19                              ' records fetched.' );
 20  
 21  END;
 22  /
54 records fetched.

PL/SQL procedure successfully completed.

Some points to note are:

Note that the support for BULK COLLECT with Native Dynamic SQL also includes the OPEN FOR statement. The following is a re-work of the above example for completeness.

SQL> VAR where_clause VARCHAR2(256)

SQL> exec :where_clause := ' created > TRUNC(SYSDATE) - 150 ';

PL/SQL procedure successfully completed.

SQL> DECLARE
  2  
  3     TYPE typ_aa_object IS TABLE OF user_objects%ROWTYPE
  4        INDEX BY PLS_INTEGER;
  5     aa_objects typ_aa_object;
  6  
  7     v_predicates VARCHAR2(256) := :where_clause;
  8  
  9     cv SYS_REFCURSOR;
 10  
 11  BEGIN
 12  
 13     /* Open a cursor variable for dynamic SQL... */
 14     OPEN cv FOR ' SELECT *
 15                   FROM   user_objects
 16                   WHERE  ' || v_predicates;
 17  
 18     /* Fetch from the ref cursor... */
 19     FETCH cv BULK COLLECT INTO aa_objects;
 20  
 21     /*
 22      * Not passing the ref cursor to a client
 23      * in this example so close it...
 24      */
 25     CLOSE cv;
 26  
 27     /* How many did we get ? */
 28     DBMS_OUTPUT.PUT_LINE ( aa_objects.COUNT ||
 29                              ' records fetched.' );
 30  
 31  END;
 32  /
54 records fetched.

PL/SQL procedure successfully completed.

This approach is useful if passing a pointer to a resultset (REF CURSOR) to a client program. Some points to note are:

bulk collect support for extended object types

Oracle release 8.0 gave us the opportunity to define our own data types and create tables with columns based on these user-defined types. When bulk PL/SQL processing was introduced in 8i, it provided a very limited scope for bulk fetching columns of user-defined types from tables into index-by tables or collections. Until 9i, we could only BULK COLLECT columns of scalar types or single instances of object types (i.e. we couldn't bulk fetch collections of records). In keeping with 9i's extended support for record-based operations, we can now BULK COLLECT a collection of records stored in a table into either an associative array or collection variable.

In the following example we are going to denormalise the way we store information on our tables and columns. We will create a table with an "embedded" nested table of column information. For this we need three objects: an object type describing a single "row" of column information, a collection type based on it, and a table that stores the collection as a nested table.

When we have set up our objects, we will then bulk fetch the embedded collection of records from our source table. First the object type:

SQL> CREATE TYPE typ_obj_colinfo AS OBJECT
  2  (   column_name VARCHAR2(30)
  3  ,   column_id   INTEGER
  4  );
  5  /

Type created.

Our object type defines the information we will store in one "row" of column information. Next we create a collection type to hold multiple "rows" of this column information:

SQL> CREATE TYPE typ_nt_colinfo
  2  AS
  3     TABLE OF typ_obj_colinfo;
  4  /

Type created.

We can now create a table to hold information on all of our tables and their column attributes. Each of our tables will occupy one record in this table. The column attributes are "embedded" as nested tables as follows.

SQL> CREATE TABLE objrel_table
  2  (   table_name    VARCHAR2(30)
  3  ,   table_columns typ_nt_colinfo
  4  )
  5  NESTED TABLE table_columns
  6     STORE AS objrel_table_nt;

Table created.

SQL> INSERT INTO objrel_table
  2  SELECT a.table_name
  3  ,      CAST (
  4            MULTISET (
  5               SELECT b.column_name, b.column_id
  6               FROM   user_tab_columns b
  7               WHERE  a.table_name = b.table_name
  8               ) AS typ_nt_colinfo
  9         ) AS table_columns
 10  FROM   user_tables a;

22 rows created.

Now for the 9i enhancement - we can directly BULK COLLECT a stored collection of records into a collection of records variable.

SQL> DECLARE
  2  
  3     TYPE typ_nt_tabcol IS TABLE OF objrel_table%ROWTYPE;
  4     nt_tabcols typ_nt_tabcol;
  5  
  6  BEGIN
  7  
  8     SELECT table_name, table_columns
  9     BULK COLLECT INTO nt_tabcols
 10     FROM objrel_table;
 11  
 12     /* How many did we get ? */
 13     DBMS_OUTPUT.PUT_LINE ( nt_tabcols.COUNT ||
 14                              ' records fetched.' );
 15  
 16  END;
 17  /
22 records fetched.

PL/SQL procedure successfully completed.

Some points to note are:

The ability to bulk fetch data of a range of structures is an important addition to PL/SQL's processing capabilities, especially as object enhancements in 9i have made object-oriented programming methods viable in PL/SQL for the first time. It is equally important to be efficient when processing complex data as it is with "simple" scalar data.

forall exception handling

Bulk binding in PL/SQL was introduced in Oracle 8i as the FORALL statement. It provided us with the ability to submit "batches" of data for DML activity to the SQL engine, removing the need to switch context between PL/SQL and SQL on a row-by-row basis. The performance improvements were significant, although on the "downside" each FORALL statement either succeeded or failed in its entirety. Oracle 9i has addressed this issue and now enables us to skip any records that would have previously caused FORALL to fail. This is known as the SAVE EXCEPTIONS clause.

In the following example we will UPDATE a CUSTOMERS table in our warehouse by passing in a divisor to reduce our customers' credit limits. Due to a lack of validation on our front-end, we've received a file to process that contains a zero divisor for two customers. This will cause two iterations of our FORALL statement to error but rather than fail the entire transaction, we will report the errors (in a production system you would probably store these off in an exceptions table) and allow the "clean" data to process through.

First we'll create a CUSTOMERS table from a few USER_OBJECTS rows:

SQL> CREATE TABLE customers
  2  NOLOGGING
  3  AS
  4     SELECT object_id AS customer_id
  5     ,      ROWNUM * 1000 AS credit_limit
  6     FROM   user_objects
  7     WHERE  ROWNUM <= 5;

Table created.

Now we'll create a staging table for the new credit limit data of our existing customers:

SQL> CREATE TABLE credit_limits
  2  NOLOGGING
  3  AS
  4     SELECT customer_id
  5     ,      DECODE(MOD(ROWNUM,2),0,0,1.25) AS limit_adjustor
  6     FROM   customers;

Table created.

Now we'll run a PL/SQL block to fetch the staging data and bulk bind an UPDATE to the CUSTOMERS table:

SQL> DECLARE
  2  
  3     TYPE typ_aa_id IS TABLE OF credit_limits.customer_id%TYPE
  4        INDEX BY PLS_INTEGER;
  5  
  6     TYPE typ_aa_divisor IS TABLE OF credit_limits.limit_adjustor%TYPE
  7        INDEX BY PLS_INTEGER;
  8  
  9     aa_ids      typ_aa_id;
 10     aa_divisors typ_aa_divisor;
 11  
 12     x_bad_iteration EXCEPTION;
 13     PRAGMA EXCEPTION_INIT (
 14        x_bad_iteration, -24381 );
 15  
 16  
 17  BEGIN
 18  
 19     SELECT customer_id, limit_adjustor
 20     BULK COLLECT INTO aa_ids, aa_divisors
 21     FROM credit_limits;
 22  
 23     FORALL i IN aa_ids.FIRST .. aa_ids.LAST SAVE EXCEPTIONS
 24        UPDATE customers
 25        SET credit_limit = credit_limit/aa_divisors(i)
 26        WHERE customer_id = aa_ids(i);
 27  
 28     DBMS_OUTPUT.PUT_LINE (
 29        'Do we make it here ' ||
 30           ' if we hit any exceptions ?' );
 31  
 32  EXCEPTION
 33  
 34     WHEN x_bad_iteration THEN
 35  
 36        DBMS_OUTPUT.PUT_LINE (
 37           'Successful UPDATE of ' ||
 38              TO_CHAR(SQL%ROWCOUNT) || ' records.' );
 39  
 40        DBMS_OUTPUT.PUT_LINE (
 41           'Failed on ' ||
 42              TO_CHAR(SQL%BULK_EXCEPTIONS.COUNT) || ' records.' );
 43  
 44  
 45        FOR i IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
 46  
 47           DBMS_OUTPUT.PUT_LINE (
 48              'Error occurred on iteration ' ||
 49                 SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
 50                    ' due to ' ||
 51                       SQLERRM (
 52                          -1 * SQL%BULK_EXCEPTIONS(i).ERROR_CODE) );
 53  
 54           DBMS_OUTPUT.PUT_LINE (
 55              'Customer = ' ||
 56                 TO_CHAR (
 57                    aa_ids (
 58                       SQL%BULK_EXCEPTIONS(i).ERROR_INDEX )) ||
 59              ' Value = ' ||
 60                 TO_CHAR (
 61                    aa_divisors (
 62                       SQL%BULK_EXCEPTIONS(i).ERROR_INDEX )) );
 63  
 64        END LOOP;
 65  
 66  END;
 67  /
Successful UPDATE of 3 records.
Failed on 2 records.
Error occurred on iteration 2 due to ORA-01476: divisor is equal to zero
Customer = 1414002 Value = 0
Error occurred on iteration 4 due to ORA-01476: divisor is equal to zero
Customer = 1482845 Value = 0

PL/SQL procedure successfully completed.

Some points to note are:

bulk binding and record-based operations

A useful PL/SQL feature in Oracle 9i is the ability to perform record-based operations without having to code references to the individual attributes. For example, if we wanted to INSERT a record into a table pre-9i, we would have to build a potentially lengthy VALUES clause containing all the relevant "record.attribute" listings. Now in 9i we can simply supply the record, assuming it matches the INSERT target's column definition. We can use this feature in conjunction with FORALL to work around part of the implementation restriction that prevents us from referencing individual record attributes within a FORALL statement, as noted in the previous section.

In the following example, we will imagine that ALL_OBJECTS is a source staging table providing us with database information to be replicated elsewhere. Using record-based PL/SQL we can now BULK COLLECT into one associative array and FORALL INSERT the entire record into a target table without ever having to reference either a column from the staging or target tables or an attribute from a record element in the associative array.

First we'll create a target table.

SQL> CREATE TABLE our_objects
  2  NOLOGGING
  3  AS
  4     SELECT *
  5     FROM   all_objects
  6     WHERE  ROWNUM < 1;

Table created.

Next we'll write the PL/SQL to utilise the full power of 9i bulk processing while keeping code volumes to a bare minimum.

SQL> DECLARE
  2  
  3     TYPE typ_aa_allobj IS TABLE OF all_objects%ROWTYPE
  4        INDEX BY PLS_INTEGER;
  5     aa_allobjs typ_aa_allobj;
  6  
  7  BEGIN
  8  
  9     SELECT * BULK COLLECT INTO aa_allobjs
 10     FROM all_objects;
 11  
 12     FORALL i IN aa_allobjs.FIRST .. aa_allobjs.LAST
 13        INSERT INTO our_objects
 14        VALUES aa_allobjs(i);
 15  
 16     DBMS_OUTPUT.PUT_LINE (
 17        TO_CHAR(SQL%ROWCOUNT) ||
 18           ' records inserted.' );
 19  
 20  END;
 21  /
28590 records inserted.

PL/SQL procedure successfully completed.

Obviously this type of operation would usually be best implemented in SQL as an INSERT..SELECT, but in the real world we might have many transformations and business rules to apply to the data between the initial fetch and the final DML. This example provides the structure for such an implementation. Some points to note are:

This method is a good workaround to the stated implementation restriction that prevents us from referencing individual attributes of associative arrays / collections of records within a FORALL statement. The reality is, however, that this method will only be suited to a small number of implementations, especially as a large proportion of the DML we perform in PL/SQL is controlled by predicates, such as primary key joins.

For example, if we are processing thousands of customers and we need to bulk update their base information from new source data, we need to target their respective records via their customer IDs. PL/SQL in 9i supports record-based updates in addition to the INSERT example above, but we need yet more workarounds to make this work for us in a bulk bind implementation. The following is an example of how we can use the record-based UPDATE with FORALL - we'll examine the code first and then consider the implications of such an approach.

In the following example, we will create a CUSTOMERS table based on the data in USER_OBJECTS, using OBJECT_ID as a surrogate key. We will then create a source table called CUSTOMER_SOURCE, also based on USER_OBJECTS, but with modified data to represent changes to our customer records on the source system. We will then UPDATE our target data from the modified source.

First our CUSTOMERS table.

SQL> CREATE TABLE customers
  2  NOLOGGING
  3  AS
  4     SELECT object_id            AS customer_id
  5     ,      object_name          AS customer_name
  6     ,      TRUNC(created) - 100 AS start_date
  7     ,      object_type          AS branch_name
  8     FROM   user_objects;

Table created.

Next our source table. Note how we modify some of the data simply to make it differ from the data in our CUSTOMERS table.

SQL> CREATE TABLE customer_source
  2  NOLOGGING
  3  AS
  4     SELECT object_id            AS customer_id
  5     ,      LOWER(object_name)   AS customer_name
  6     ,      TRUNC(created) - 100 AS start_date
  7     ,      INITCAP(object_type) AS branch_name
  8     FROM   user_objects;

Table created.

Now for the PL/SQL combining a FORALL UPDATE with an associative array of records.

SQL> DECLARE
  2  
  3     TYPE typ_aa_customer IS TABLE OF customer_source%ROWTYPE
  4        INDEX BY PLS_INTEGER;
  5     aa_customers typ_aa_customer;
  6  
  7     TYPE typ_aa_id IS TABLE OF customer_source.customer_id%TYPE
  8        INDEX BY PLS_INTEGER;
  9     aa_ids typ_aa_id;
 10  
 11  BEGIN
 12  
 13     SELECT * BULK COLLECT INTO aa_customers
 14     FROM customer_source;
 15  
 16     FOR i IN aa_customers.FIRST .. aa_customers.LAST LOOP
 17        aa_ids(i) := aa_customers(i).customer_id;
 18     END LOOP;
 19  
 20     FORALL i IN aa_customers.FIRST .. aa_customers.LAST
 21        UPDATE customers
 22        SET ROW = aa_customers(i)
 23        WHERE customer_id = aa_ids(i);
 24  
 25     DBMS_OUTPUT.PUT_LINE (
 26        TO_CHAR(SQL%ROWCOUNT) ||
 27           ' records updated.' );
 28  
 29  END;
 30  /
94 records updated.

PL/SQL procedure successfully completed.

Some points to note are:

A key point to note when using record-based UPDATE syntax is that we will be updating the primary key on our target table. This is generally accepted to be bad practice as it usually entails extra index updates, additional resource usage, foreign key verification etc. In addition to this, we also need to code extra arrays to replicate the primary key information, so potentially we could have several extra associative arrays depending on how many columns comprise our primary keys.

On the face of it, however, we now have workarounds for the restrictions on combining FORALL with associative arrays / collections of records. Personally, I'm not sold on the SET ROW feature as yet, though the INSERT workaround is very useful and saves a lot of typing! All of which contributes towards PL/SQL being a powerful and reasonably flexible bulk processing language for the database.

bulk binding and native dynamic sql

Oracle 9i now enables us to bulk bind Native Dynamic SQL statements using FORALL. Prior to 9i we had DBMS_SQL or NDS workarounds using dynamic PL/SQL. The following example is slightly more elaborate than the previous examples in this paper, but demonstrates a good use for FORALL in a dynamic SQL context. We will build an archiving application that replicates data from main warehouse tables to archived versions, based on specific customer criteria. We are supplied a condition to search our customer base and all customers that satisfy the criteria will have their data moved to an archive area. First we will create our customer database (represented by just two tables for brevity). Note that one in five of our customers joined us a long time ago.

SQL> CREATE TABLE customers
  2  NOLOGGING
  3  AS
  4     SELECT object_id          AS customer_id
  5     ,      object_name        AS customer_name
  6     ,      ROWNUM * 1000      AS credit_limit
  7     ,      CASE MOD(ROWNUM,5)
  8               WHEN 0
  9               THEN created - 10000
 10               ELSE created
 11            END                AS date_joined
 12     FROM   user_objects;

Table created.

SQL> CREATE TABLE addresses
  2  NOLOGGING
  3  AS
  4     SELECT customer_id
  5     ,      TO_CHAR(ROWNUM) ||
  6               ' ACACIA AVENUE ' AS add_1
  7     ,      'SOME DISTRICT'      AS add_2
  8     ,      'SOME TOWN'          AS add_3
  9     ,      'SOME COUNTY'        AS add_4
 10     ,      'XY1 2ZA'            AS post_code
 11     FROM   customers;

Table created.

Next we will need copies of these base tables to store our archived data.

SQL> CREATE TABLE archived_customers
  2  AS
  3     SELECT *
  4     FROM   customers
  5     WHERE  ROWNUM < 1;

Table created.

SQL> CREATE TABLE archived_addresses
  2  AS
  3     SELECT *
  4     FROM   addresses
  5     WHERE  ROWNUM < 1;

Table created.

As we are in a dynamic SQL context and don't want to write a PL/SQL package for every table we need to archive, we'll drive the whole process from a reference "jobs" table. We have just two "jobs" in our example, to archive CUSTOMERS and ADDRESSES.

SQL> CREATE TABLE customer_tables_to_archive
  2  (
  3      base_table    VARCHAR2(30)
  4  ,   archive_table VARCHAR2(30)
  5  );

Table created.

SQL> INSERT INTO customer_tables_to_archive
  2  VALUES ( 'CUSTOMERS', 'ARCHIVED_CUSTOMERS' );

1 row created.

SQL> INSERT INTO customer_tables_to_archive
  2  VALUES ( 'ADDRESSES', 'ARCHIVED_ADDRESSES' );

1 row created.

As we are using dynamic SQL, we will need a collection type to make use of bind variables in our PL/SQL implementation. We will create a scalar nested table type of numbers.

SQL> CREATE TYPE typ_nt_customer_id
  2  AS
  3     TABLE OF NUMBER;
  4  /

Type created.

Our business community determines which customers are archived and this is passed as a dynamic WHERE clause (we are using sqlplus variables in this anonymous PL/SQL example to simulate parameter passing). For a reason unknown to us IT staff, our business community has decided that we are to archive all customers who joined more than 100 days ago.

SQL> VAR where_clause VARCHAR2(256);
SQL> exec :where_clause := ' WHERE date_joined < SYSDATE-100 ';

PL/SQL procedure successfully completed.

We now run our single archiving process to determine the customers to archive, store their keys and "sweep" the database.

SQL> DECLARE
  2  
  3     nt_ids typ_nt_customer_id;
  4  
  5  BEGIN
  6  
  7     EXECUTE IMMEDIATE ' SELECT customer_id
  8                         FROM customers ' ||
  9                         :where_clause
 10     BULK COLLECT INTO nt_ids;
 11  
 12     IF nt_ids.COUNT > 0 THEN
 13  
 14        FOR rec_tables IN ( SELECT base_table, archive_table
 15                            FROM   customer_tables_to_archive )
 16        LOOP
 17  
 18           FORALL i IN nt_ids.FIRST .. nt_ids.LAST
 19              EXECUTE IMMEDIATE
 20                 ' INSERT INTO ' || rec_tables.archive_table ||
 21                 ' SELECT * FROM ' || rec_tables.base_table ||
 22                 ' WHERE customer_id = :bv '
 23              USING IN nt_ids(i);
 24  
 25           DBMS_OUTPUT.PUT_LINE (
 26              ' Added ' || TO_CHAR(SQL%ROWCOUNT) ||
 27                 ' records to ' || rec_tables.archive_table );
 28  
 29           FORALL i IN nt_ids.FIRST .. nt_ids.LAST
 30              EXECUTE IMMEDIATE
 31                 ' DELETE FROM ' || rec_tables.base_table ||
 32                 ' WHERE customer_id = :bv '
 33              USING IN nt_ids(i);
 34  
 35           DBMS_OUTPUT.PUT_LINE (
 36              ' Removed ' || TO_CHAR(SQL%ROWCOUNT) ||
 37                 ' records from ' || rec_tables.base_table );
 38  
 39        END LOOP;
 40     END IF;
 41  END;
 42  /
Added 71 records to ARCHIVED_CUSTOMERS
Removed 71 records from CUSTOMERS
Added 71 records to ARCHIVED_ADDRESSES
Removed 71 records from ADDRESSES

PL/SQL procedure successfully completed.

Some points to note are:

The combination of FORALL and Native Dynamic SQL is very powerful and the above example provides us with all the elements required to harness this power. We could take this example a stage further and develop an implementation which combines Native Dynamic PL/SQL and FORALL to make more use of the RETURNING clause (we could DELETE from the base table, RETURNING the removed records into a collection, then FORALL INSERT the returned collection into the archive table). This to my mind would be a slightly more elegant approach than the traditional "save first, delete second" method as above, but it is more complex and beyond the scope of this article.
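The RETURNING-based variant described above can be sketched for the statically-known CUSTOMERS table from this example (archiving a dynamic list of tables, as in the article's implementation, would additionally need dynamic SQL or dynamic PL/SQL). This is an illustrative sketch only, not the author's implementation:

```sql
DECLARE

   TYPE typ_nt_id    IS TABLE OF customers.customer_id%TYPE;
   TYPE typ_nt_name  IS TABLE OF customers.customer_name%TYPE;
   TYPE typ_nt_limit IS TABLE OF customers.credit_limit%TYPE;
   TYPE typ_nt_date  IS TABLE OF customers.date_joined%TYPE;

   nt_ids    typ_nt_id;
   nt_names  typ_nt_name;
   nt_limits typ_nt_limit;
   nt_dates  typ_nt_date;

BEGIN

   /* Delete first, capturing the removed rows in a single pass... */
   DELETE FROM customers
   WHERE  date_joined < SYSDATE - 100
   RETURNING customer_id, customer_name, credit_limit, date_joined
   BULK COLLECT INTO nt_ids, nt_names, nt_limits, nt_dates;

   /* ...then bulk insert the captured rows into the archive table. */
   IF nt_ids.COUNT > 0 THEN
      FORALL i IN nt_ids.FIRST .. nt_ids.LAST
         INSERT INTO archived_customers
         VALUES ( nt_ids(i), nt_names(i), nt_limits(i), nt_dates(i) );
   END IF;

END;
/
```

Note that this reverses the "save first, delete second" ordering, so the whole block must run as a single transaction to avoid losing the deleted rows on failure.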

conclusions

Oracle 9i further extends the capabilities of the bulk PL/SQL processing introduced in 8i. When processing large volumes of data using PL/SQL (when SQL just cannot fulfil our requirements), bulk fetching and binding should be essential components of our coding arsenals. At the time of writing, Oracle 10g is being launched and who knows what further improvements are included? Perhaps a solution to the "implementation restriction" that prevents FORALL from accessing individual attributes in collections of records? Perhaps a BULK COLLECT into an associative array indexed by VARCHAR2? It is understood that FORALL will be able to process non-contiguous arrays for the first time. Needless to say, much of the work in enabling us to bulk up our PL/SQL has been done up to and including 9i, so anything from now on will be a bonus!

source code

The source code for the examples in this article can be downloaded from here.

Adrian Billington, September 2003
