PL-SQL Functions
Comparison Operators
Select Operators
Single Row Functions for Numbers, Chars and Dates
Conversion Functions
Miscellaneous Single Row Functions
Aggregate Functions
Analytical Functions
Object Reference Functions
Date Format Models
Date Prefixes and Suffixes
Number Format Models
Comparison Operators
Table 1-1. Comparison Operators
Operator                   What it does
!= ^= <>                   Not equal to
>                          Greater than
>=                         Greater than or equal to
<                          Less than
<=                         Less than or equal to
IN                         True if the value matches any member of the list of expressions or subquery
NOT IN                     True if the value does not match any member of the list of expressions or subquery
ANY, SOME                  True if one or more of the values in the list of expressions or subquery satisfies the condition
ALL                        True if all of the values in the list of expressions or subquery satisfy the condition
BETWEEN x AND y            True if greater than or equal to x and less than or equal to y (can be reversed in meaning with NOT)
EXISTS                     True if the subquery returns at least one row (can be reversed in meaning with NOT)
LIKE pattern [ESCAPE 'c']  True if the value matches pattern, where '%' matches any string and '_' matches any single character; ESCAPE designates an escape character (can be reversed in meaning with NOT)
IS NULL                    True if the value is null (can be reversed in meaning with NOT)
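Several of these operators combined in one query; a minimal sketch against the EMP demo table (the literal values are illustrative):

```sql
SELECT ename, sal, deptno
FROM   emp
WHERE  sal BETWEEN 1000 AND 3000
AND    deptno IN (10, 30)
AND    comm IS NOT NULL
AND    ename LIKE 'S%';
```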
Select Operators
Also called SET operators
Operator    What it does
UNION       Combines the results of two queries and returns the set of distinct rows returned by either query
UNION ALL   Combines the results of two queries and returns all rows returned by either query, including duplicates
INTERSECT   Combines the results of two queries and returns the set of distinct rows returned by both queries
MINUS       Combines the results of two queries and returns the distinct rows that were in the first query, but not in the second
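The difference between UNION and UNION ALL in practice; a minimal sketch assuming two tables with compatible columns (the table names are illustrative):

```sql
-- distinct customer ids across both tables
SELECT cust_id FROM orders_2001
UNION
SELECT cust_id FROM orders_2002;

-- every row from both tables, duplicates kept (faster: no sort to deduplicate)
SELECT cust_id FROM orders_2001
UNION ALL
SELECT cust_id FROM orders_2002;
```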
Other Operators
Operator    What it does
(+)         Indicates an outer join; placed after the column of the table that may be missing matching rows
PRIOR       Refers to the parent row of the current row in a hierarchical (CONNECT BY) query
ALL         Retains duplicate rows in the result (the opposite of DISTINCT)
Numeric Functions
Function                       What it does
ABS(n)                         Returns the absolute value of n
ACOS(n)                        Returns the arc cosine of n
ASIN(n)                        Returns the arc sine of n
ATAN(n)                        Returns the arc tangent of n
ATAN2(n,m)                     Returns the arc tangent of n/m
BITAND(n,m)                    Returns the bitwise AND of n and m
CEIL(n)                        Returns the smallest integer greater than or equal to n
COS(n)                         Returns the cosine of n
COSH(n)                        Returns the hyperbolic cosine of n
EXP(n)                         Returns e raised to the nth power
FLOOR(n)                       Returns the largest integer less than or equal to n
LN(n)                          Returns the natural logarithm of n
LOG(m,n)                       Returns the base-m logarithm of n
MOD(m,n)                       Returns the remainder of m divided by n
POWER(m,n)                     Returns m raised to the nth power
ROUND(m[,n])                   Returns m rounded to n decimal places (default 0)
SIGN(n)                        Returns -1, 0, or 1 according to the sign of n
SIN(n)                         Returns the sine of n
SINH(n)                        Returns the hyperbolic sine of n
SQRT(n)                        Returns the square root of n
TAN(n)                         Returns the tangent of n
TANH(n)                        Returns the hyperbolic tangent of n
TRUNC(m[,n])                   Returns m truncated to n decimal places (default 0)
WIDTH_BUCKET(exp,min,max,num)  Assigns exp to one of num equal-width buckets spanning min to max and returns the bucket number
Character Functions
Table 1-7. Character Single Row Functions
Function                         What it does
CHR(n)                           Returns the character with code n in the database character set
CONCAT(char1,char2)              Returns char1 concatenated with char2
INITCAP(char)                    Returns char with the first letter of each word capitalized
LOWER(char)                      Returns char in lowercase
LPAD(char1,n[,char2])            Returns char1 left-padded to length n with char2 (default blank)
LTRIM(char[,set])                Returns char with characters in set (default blank) removed from the left
NLS_INITCAP(char[,nlsparam])     INITCAP using the specified NLS sort sequence
NLS_LOWER(char[,nlsparam])       LOWER using the specified NLS sort sequence
NLSSORT(char[,nlsparam])         Returns the string of bytes used to sort char under the specified NLS sort sequence
NLS_UPPER(char[,nlsparam])       UPPER using the specified NLS sort sequence
RTRIM(char[,set])                Returns char with characters in set (default blank) removed from the right
SOUNDEX(char)                    Returns the phonetic representation of char
SUBSTR(string,n[,m])             Returns the substring of string starting at position n, m characters long; also: SUBSTRB (bytes), SUBSTRC (Unicode), SUBSTR2 (UCS2 codepoints), SUBSTR4 (UCS4 codepoints)
TRANSLATE(char,from,to)          Returns char with each occurrence of a character in from replaced by the corresponding character in to
TRIM([[LEADING|TRAILING|BOTH] [trimchar] FROM] source)  Returns source with leading and/or trailing trimchar characters removed
UPPER(char)                      Returns char in uppercase
ASCII(char)                      Returns the code of the first character of char in the database character set
INSTR(str,substr[,pos[,occur]])  Returns the position of the occur-th occurrence of substr in str, starting the search at pos; also: INSTRB (bytes), INSTRC (Unicode)
LENGTH(char)                     Returns the length of char in characters; also: LENGTHB (bytes), LENGTHC (Unicode), LENGTH2 (UCS2 codepoints), LENGTH4 (UCS4 codepoints)
Date Functions
Table 1-8. Date Single Row Functions
Function                              What it does
ADD_MONTHS(d,n)                       Returns d plus n months
CURRENT_DATE                          Returns the current date in the session time zone
CURRENT_TIMESTAMP [(precision)]       Returns the current TIMESTAMP WITH TIME ZONE in the session time zone
DBTIMEZONE                            Returns the database time zone
FROM_TZ(timestamp, time_zone)         Converts timestamp to a TIMESTAMP WITH TIME ZONE value in time_zone
LAST_DAY(date)                        Returns the last day of the month containing date
LOCALTIMESTAMP [(precision)]          Returns the current TIMESTAMP in the session time zone
MONTHS_BETWEEN(date1, date2)          Returns the number of months between date1 and date2
NEW_TIME(date,zone1,zone2)            Converts date from time zone zone1 to time zone zone2
NEXT_DAY(date,weekday)                Returns the first weekday later than date
NUMTODSINTERVAL(n,char)               Converts n to an INTERVAL DAY TO SECOND literal. char can be 'DAY', 'HOUR', 'MINUTE', or 'SECOND', or an expression that resolves to one of those
ROUND(date[,fmt])                     Returns date rounded to the unit specified by fmt (default day)
SESSIONTIMEZONE                       Returns the session time zone
SYS_EXTRACT_UTC(datetz)               Returns the UTC equivalent of datetz
SYSDATE                               Returns the current date and time of the database server
SYSTIMESTAMP                          Returns the current TIMESTAMP WITH TIME ZONE of the database server
TO_TIMESTAMP_TZ(char[,fmt[nlsparm]])  Converts char to a TIMESTAMP WITH TIME ZONE value
TO_YMINTERVAL(char)                   Converts char to an INTERVAL YEAR TO MONTH literal
TRUNC(date[,fmt])                     Returns date truncated to the unit specified by fmt (default day)
TZ_OFFSET(tzname | SESSIONTIMEZONE | DBTIMEZONE | '+|-hh:mi')  Returns the time zone offset of the argument
Conversion Functions
Table 1-9. Conversion Single Row Functions
Function                              What it does
ASCIISTR(string)                      Converts string in any character set to ASCII, escaping non-ASCII characters
BIN_TO_NUM(expr[,expr])               Converts a list of bits to a number
CHARTOROWID(char)                     Converts char to a ROWID value
COMPOSE('string')                     Returns string as a fully normalized Unicode string
CONVERT(char, dest_set [,source_set]) Converts char from source_set to dest_set
DECOMPOSE(string [CANONICAL | COMPATIBILITY])  Returns string decomposed into its Unicode code points
HEXTORAW(char)                        Converts hexadecimal digits in char to a RAW value
RAWTOHEX(raw)                         Converts raw to its hexadecimal representation
RAWTONHEX(raw)                        Converts raw to an NVARCHAR2 hexadecimal representation
ROWIDTOCHAR(rowid)                    Converts rowid to a VARCHAR2 value
ROWIDTONCHAR(rowid)                   Converts rowid to an NVARCHAR2 value
TO_CLOB(lob_col|char)                 Converts the argument to a CLOB value
TO_LOB(long_col)                      Converts a LONG or LONG RAW column to a LOB value
TO_MULTI_BYTE(char)                   Converts single-byte characters in char to their multibyte equivalents
TO_NCHAR(char [,fmt[nlsparm]])        Converts a character value to NVARCHAR2
TO_NCHAR(datetime | interval[,fmt[nlsparm]])  Converts a datetime or interval value to NVARCHAR2
TO_NCHAR(n [,fmt[nlsparm]])           Converts a number to NVARCHAR2
TO_NUMBER(char[,fmt[nlsparm]])        Converts char to a NUMBER value
TO_SINGLE_BYTE(char)                  Converts multibyte characters in char to their single-byte equivalents
TO_YMINTERVAL(char [nlsparm])         Converts char to an INTERVAL YEAR TO MONTH literal
TRANSLATE(text USING CHAR_CS | NCHAR_CS)  Converts text into the database (CHAR_CS) or national (NCHAR_CS) character set
UNISTR(string)                        Converts string, which may contain Unicode escape sequences, to NCHAR
Miscellaneous Single Row Functions
Function                              What it does
BFILENAME('dir','fname')              Returns a BFILE locator for file fname in directory dir
COALESCE(expr[,expr,...])             Returns the first non-null expression in the list
DECODE(expr,search,result [,search,result...][,default])  Compares expr to each search value and returns the corresponding result, or default if no search value matches
DEPTH(correlation_int)                Returns the depth of the path specified by an UNDER_PATH or EQUALS_PATH condition
DUMP(expr[,return_fmt [,start[,length]]])  Returns the datatype, length, and internal representation of expr
EMPTY_BLOB()                          Returns an empty BLOB locator
EMPTY_CLOB()                          Returns an empty CLOB locator
EXISTSNODE(XML_Instance, path [expr]) Walks the XML tree and returns 1 if nodes matching the specified path are found, 0 otherwise
EXTRACTVALUE(XML_Instance, path [expr])  Walks the XML tree and, if nodes are found that match the specified path, returns the scalar value of those nodes
GREATEST(expr[,expr,...])             Returns the greatest value in the list
LEAST(expr[,expr,...])                Returns the least value in the list
NLS_CHARSET_DECL_LEN(bytes,set_id)    Returns the declaration width (in characters) of an NCHAR column
NLS_CHARSET_ID(text)                  Returns the character set ID for the character set name text
NLS_CHARSET_NAME(num)                 Returns the character set name for character set ID num
NULLIF(expr1,expr2)                   Returns NULL if expr1 equals expr2, otherwise expr1
NVL(expr1,expr2)                      Returns expr2 if expr1 is null, otherwise expr1
NVL2(expr1,expr2,expr3)               Returns expr2 if expr1 is not null, otherwise expr3
PATH(correlation_int)                 Returns the relative path to a resource in an UNDER_PATH or EQUALS_PATH condition
SYS_CONNECT_BY_PATH(column,char)      Returns the path from root to node in a hierarchical query, with column values separated by char
SYS_CONTEXT('namespace', 'param'[,len])  Returns the value of param in the application context namespace
SYS_DBURIGEN(col|attr [rowid][,col|attr [rowid],...] [,'text()'])  Returns a DBUriType URL pointing to the specified columns
SYS_EXTRACT_UTC(time)                 Returns the UTC equivalent of time
SYS_GUID()                            Returns a globally unique identifier as 16 bytes of RAW
SYS_TYPEID(obj_val)                   Returns the type ID of the most specific type of obj_val
SYS_XMLAGG(expr [fmt])                Aggregates all XML documents or fragments in expr into a single XML document
SYS_XMLGEN(expr [fmt])                Returns an XMLType instance generated from expr
UID                                   Returns the integer ID of the session user
UPDATEXML(XML_instance, path, expr)   Returns XML_instance with the node matching path replaced by expr
USER                                  Returns the name of the session user
USERENV(param)                        Returns information about the current session (deprecated in favor of SYS_CONTEXT)
VSIZE(expr)                           Returns the number of bytes in the internal representation of expr
XMLAGG(XML_instance [ORDER BY sortlist])  Aggregates a collection of XML fragments into a single XML document
XMLCOLATTVAL                          Generates a fragment of XML elements named "column", with name attributes identifying each source column
XMLCONCAT(XML_instance [, XML_instance,...])  Concatenates the XML instances into a single fragment
XMLFOREST                             Converts each argument to XML of the form <column name>column value</column name>
XMLSEQUENCE                           Splits a multi-value XML result into a varray of XMLType, one element per top-level node
XMLTRANSFORM                          Applies an XSL style sheet to an XMLType instance
Aggregate Functions
All of the aggregate functions described below can have an analytical clause appended to them using the
OVER (analytical_clause) syntax. For space considerations, we've omitted this from the Function column.
Table 1-11. Aggregate Functions
Function                          What it does
AVG([DISTINCT|ALL] expr)          Returns the average value of expr
GROUP_ID()                        Distinguishes duplicate groups resulting from a GROUP BY specification
GROUPING(expr)                    Returns 1 for a superaggregate row created by ROLLUP or CUBE, otherwise 0
GROUPING_ID(expr[,expr...])       Returns the number corresponding to the GROUPING bit vector of the row
MAX([DISTINCT|ALL] expr)          Returns the maximum value of expr
MIN([DISTINCT|ALL] expr)          Returns the minimum value of expr
STDDEV([DISTINCT|ALL] expr)       Returns the sample standard deviation of expr
STDDEV_POP([DISTINCT|ALL] expr)   Returns the population standard deviation of expr
STDDEV_SAMP([DISTINCT|ALL] expr)  Returns the cumulative sample standard deviation of expr
SUM([DISTINCT|ALL] expr)          Returns the sum of the values of expr
VAR_POP(expr)                     Returns the population variance of expr
VAR_SAMP(expr)                    Returns the sample variance of expr
VARIANCE([DISTINCT|ALL] expr)     Returns the variance of expr

Linear Regression Functions
Function                   What it does
REGR_SLOPE(expr,expr2)     Returns the slope of a least squares regression line of the set of number pairs defined by (expr,expr2)
REGR_INTERCEPT(expr,expr2) Returns the Y intercept of a least squares regression line of the set of number pairs defined by (expr,expr2)
REGR_COUNT(expr,expr2)     Returns the number of NOT NULL pairs used to fit the least squares regression line of the set of number pairs defined by (expr,expr2)
REGR_R2(expr,expr2)        Returns the coefficient of determination (R-squared) of the regression line
REGR_AVGX(expr,expr2)      Returns the average of expr2 (the independent variable), null pairs eliminated
REGR_AVGY(expr,expr2)      Returns the average of expr (the dependent variable), null pairs eliminated
REGR_SXX(expr,expr2)       Returns the sum of squares of expr2 (a diagnostic statistic)
REGR_SYY(expr,expr2)       Returns the sum of squares of expr (a diagnostic statistic)
REGR_SXY(expr,expr2)       Returns the sum of products of expr and expr2 (a diagnostic statistic)
Analytical Functions
All of the aggregate functions described above can also have analytic functionality, using the OVER
(analytical_clause) syntax. For space considerations, we've declined to list them twice. Note that you
cannot nest analytic functions.
Table 1-13. Analytical Functions
Function                                                What it does
LAG(expr[,offset][,default]) OVER (analytical_clause)   Provides access to a row a given offset prior to the current row in the window
LEAD(expr[,offset][,default]) OVER (analytical_clause)  Provides access to a row a given offset beyond the current row in the window
NTILE(expr) OVER ([partition_clause] order_by_clause)   Divides an ordered data set into the specified number of buckets and assigns each row its bucket number
RATIO_TO_REPORT(expr) OVER (analytical_clause)          Returns the ratio of expr to the sum of expr over the window
ROW_NUMBER() OVER ([partition_clause] order_by_clause)  Assigns a unique sequential number to each row in the ordering, starting with 1
Object Reference Functions
Function                            What it does
DEREF(expr)                         Returns the object referenced by the REF expr
MAKE_REF(table|view,key [,key...])  Creates a REF to a row of an object view or object table using the given key(s)
REF(correlation_var)                Returns the REF of the object bound to correlation_var
REFTOHEX(expr)                      Converts the REF expr to its hexadecimal representation
VALUE(correlation_var)              Returns the object instance bound to correlation_var
Date Format Models
Element          Value Returned
AM A.M. PM P.M.  Meridian indicator
BC B.C.          BC indicator (AD/A.D. for AD dates)
CC SCC           Century (SCC prefixes BC dates with a minus sign)
DAY              The name of the day of the week (Monday, Tuesday, etc.). Padded to 9 characters.
DD               Day of the month (1-31)
DDD              Day of the year (1-366)
DY               Abbreviated name of the day of the week
E                Abbreviated era name (for Japanese Imperial, ROC Official, and Thai Buddha calendars)
EE               Full era name
FF [1..9]        Fractional seconds, optionally to the specified number of digits
HH HH12          Hour of the day (1-12)
HH24             Hour of the day (0-23)
IW               Week of the year (1-52 or 1-53) under the ISO standard
IYY IY I         Last 3, 2, or 1 digits of the ISO year
IYYY             Four-digit year under the ISO standard
MI               Minute (0-59)
MM               Month (01-12)
MON              Abbreviated name of the month
MONTH            Name of the month, padded to 9 characters
RM               Roman numeral month (I-XII)
RR               Last two digits of the year, for years in the previous or next century (previous if the current year is <=50, next if the current year is >50)
RRRR             Rounded year; accepts either 4-digit or 2-digit input (2-digit input is treated as RR)
SS               Seconds (0-59)
SSSSS            Seconds past midnight (0-86399)
TZD              Time zone daylight-saving information
TZH              Time zone hour
TZM              Time zone minute
WW               Week of the year (1-53), where week 1 starts on the first day of the year
Y,YYY            Year with a comma in this position
YEAR SYEAR       Year spelled out (SYEAR prefixes BC dates with a minus sign)
Y                Last digit of the year
YY               Last two digits of the year
YYY              Last three digits of the year
Date Prefixes and Suffixes
Element          Value Returned
SP               Suffix that spells out the number (e.g. DDSP returns the day of the month spelled out)
Number Format Models
Element  Example  Value Returned
,        9,999    Returns a comma in the specified position
.        99.99    Returns a decimal point in the specified position
$        $9999    Returns the value with a leading dollar sign
0        0999     Returns leading zeros
0        9990     Returns trailing zeros
9        9999     Returns the value with the specified number of digits, with a leading space if positive or a leading minus sign if negative
B        B9999    Returns blanks for the integer part of a fixed-point number when the integer part is zero
C        C999     Returns the ISO currency symbol in the specified position
D        99D99    Returns the NLS decimal character in the specified position
EEEE     9.9EEEE  Returns the value in scientific notation
FM       FM90.9   Returns the value with no leading or trailing blanks
G        9G999    Returns the value with the NLS group separator in the specified position
L        L999     Returns the value with the NLS local currency symbol in the specified position. Negative values have a trailing minus sign (-), positive values a trailing blank.
PR       9999PR   Returns negative values in angle brackets
RN rn    RN rn    Returns the value as upper- or lowercase Roman numerals
S        S9999    Returns a leading plus or minus sign
S        9999S    Returns a trailing plus or minus sign
TM       TM       Returns the value in text-minimum format (the smallest number of characters possible)
U        U9999    Returns the Euro (or other) NLS dual currency symbol in the specified position
V        999V99   Returns the value multiplied by 10^n, where n is the number of 9s after the V
X        XXXX     Returns the hexadecimal representation of the value
Advanced cursors
Ref Cursor
BULK COLLECT INTO
SET SERVEROUTPUT ON
DECLARE
-- EMP_CURSOR will retrieve all columns and all rows from the EMP table
CURSOR emp_cursor IS
SELECT *
FROM emp;
emp_record emp_cursor%ROWTYPE;
BEGIN
OPEN emp_cursor;
LOOP
--Advance the pointer in the result set, assign row values to EMP_RECORD
FETCH emp_cursor INTO emp_record;
--Test to see if no more results
EXIT WHEN emp_cursor%NOTFOUND;
DBMS_OUTPUT.PUT_LINE(emp_record.ename||' [' ||emp_record.empno||']');
END LOOP;
CLOSE emp_cursor;
END;
/
DECLARE
-- EMP_CURSOR will retrieve all columns and all rows from the EMP table
CURSOR emp_cursor IS
SELECT *
FROM emp;
BEGIN
FOR emp_record IN emp_cursor LOOP
DBMS_OUTPUT.PUT_LINE(emp_record.ename||' ['||emp_record.empno||']');
END LOOP;
END;
/
You can use a cursor for loop without a declared cursor by including a query in the FOR statement.
This can enable very compact code.
BEGIN
FOR emp_record IN (SELECT * FROM emp) LOOP
DBMS_OUTPUT.PUT_LINE(emp_record.ename||' ['||emp_record.empno||']');
END LOOP;
END;
/
While you can use an EXIT statement within a cursor FOR loop, you should not use a cursor FOR loop
if you may need to exit the loop prematurely. Use a basic or WHILE loop instead.
Cursors can use variables to adjust which rows they select when opened. Instead of hard-coding a
value into the WHERE clause of a query, you can use a variable as a placeholder for a literal value.
The variable placeholder will be substituted with the value of the variable when the cursor is opened.
This makes a query more flexible.
DECLARE
v_deptno NUMBER;
v_job VARCHAR2(15);
v_sum_sal NUMBER;
/* Since v_deptno and v_job are declared above, they are in scope,
* and can be referenced in the cursor body. They will be used as
* placeholders until the cursor is opened, at which point their
* current values are substituted into the query. */
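The example above is truncated; a complete sketch of the pattern it describes (the department and job values assigned before the OPEN are illustrative):

```sql
DECLARE
v_deptno NUMBER;
v_job VARCHAR2(15);
v_sum_sal NUMBER;
-- v_deptno and v_job act as placeholders; their values are
-- captured at the moment the cursor is opened
CURSOR emp_stats_cursor IS
SELECT SUM(sal) sum_sal
FROM emp
WHERE deptno = v_deptno
AND job = v_job;
BEGIN
v_deptno := 30;
v_job := 'SALESMAN';
OPEN emp_stats_cursor;
FETCH emp_stats_cursor INTO v_sum_sal;
CLOSE emp_stats_cursor;
DBMS_OUTPUT.PUT_LINE(v_deptno||' : '||v_job||' : '||v_sum_sal);
END;
/
```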
This method works, but there is a better way. You can declare a cursor using parameters; then
whenever you open the cursor, you pass in appropriate parameters. This technique is just as
flexible, but is easier to maintain and debug. The above example adapted to use a parameterized
cursor:
DECLARE
v_sum_sal NUMBER;
/* The parameters are declared in the cursor declaration.
* Parameters have a datatype, but NO SIZE; that is, you
* declare VARCHAR2, not VARCHAR2(15). */
Parameterized cursors are often easier to debug in larger PL/SQL blocks. This is because the
declaration of the cursor body is often far from where the cursor is opened, but processing of the
cursor's result set is usually close to where the cursor is opened.
o When opening a cursor which uses variables, you must assign appropriate values to those
variables before opening the cursor. So when you are debugging how the cursor is opened,
you must confirm the appropriate variable names where the cursor is declared. This is
often inconvenient.
o When using PL/SQL variables, it's difficult to confirm the values of the variables when the
cursor is opened because the values could be set at any point from the declaration on.
Parameterized cursors eliminate both these problems because the values used in the cursor can be
determined in one place, the OPEN statement. And you don't need to know the names of the
cursor parameters. (Though you do need to know the order of the parameters.)
An example which combines a cursor FOR loop with a parameterized query:
DECLARE
v_sum_sal NUMBER;
CURSOR emp_stats_cursor(cp_deptno NUMBER, cp_job VARCHAR2) IS
SELECT SUM(sal) sum_sal
FROM emp
WHERE deptno=cp_deptno
AND job=cp_job;
BEGIN
FOR dept_job_rec IN (SELECT DISTINCT deptno,job FROM emp) LOOP
OPEN emp_stats_cursor(dept_job_rec.deptno, dept_job_rec.job);
FETCH emp_stats_cursor INTO v_sum_sal;
CLOSE emp_stats_cursor;
DBMS_OUTPUT.PUT_LINE(dept_job_rec.deptno ||' : '||dept_job_rec.job||' : '||v_sum_sal);
END LOOP;
END;
/
Ref Cursor
A ref cursor is the standard method for returning result sets to client applications (C, VB, etc.).
You cannot define ref cursors outside of a procedure or function in a package specification or body. Ref
cursors can only be processed in the defining procedure or returned to a client application. Also, a ref
cursor can be passed from subroutine to subroutine, which a static cursor cannot. To share a static
cursor like that, you would have to define it globally in a package specification or body. Because using
global variables is not a very good coding practice in general, ref cursors can be used to share a cursor
in PL/SQL without having global variables getting into the mix.
Last, using static cursors with static SQL (and not using a ref cursor) is much more efficient than using
ref cursors. In short, use static SQL first and use a ref cursor only when you absolutely have to.
An example of a ref cursor:
create or replace function sp_ListEmp return types.cursortype
as
l_cursor types.cursorType;
begin
open l_cursor for select ename, empno from emp order by ename;
return l_cursor;
end;
/
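The function above depends on a package named types that declares the ref cursor type; that package is not shown in the text. A minimal sketch of what it presumably contains:

```sql
create or replace package types
as
    type cursorType is ref cursor;
end;
/
```

A client, or another PL/SQL block, can then fetch from the cursor returned by sp_ListEmp just like any other cursor variable.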
Explicit Cursors
Concepts
Working with explicit cursors
Cursor Attributes
Records and %ROWTYPE
You may need to process more general result sets which return more or less than one row
You may need to process rows in a specific order
You may need to control the execution of your program depending on the result set.
DECLARE
v_empno emp.empno%TYPE;
v_ename emp.ename%TYPE;
CURSOR ename_cursor IS
SELECT ename
FROM emp
WHERE empno=v_empno;
BEGIN
OPEN ename_cursor;
FETCH ename_cursor INTO v_ename;
CLOSE ename_cursor;
END;
/
Examining each statement in turn:
DECLARE
...
CURSOR ename_cursor IS
SELECT ename
FROM emp
WHERE empno=v_empno;
BEGIN
OPEN ename_cursor;
FETCH ename_cursor INTO v_ename;
CLOSE ename_cursor;
END;
/
Cursor Attributes
Use cursor attributes to determine whether the row was found and what number the row is.
Attribute           Description
cur%ISOPEN          TRUE if the cursor is currently open
cur%NOTFOUND        TRUE if the most recent FETCH did not return a row
cur%FOUND           TRUE if the most recent FETCH returned a row
cur%ROWCOUNT        The number of rows fetched so far
SQL%BULK_ROWCOUNT   Returns the number of rows processed for each execution of the bulk DML operation.
Example using cursor attributes:
DECLARE
v_empno emp.empno%TYPE;
v_ename emp.ename%TYPE;
CURSOR emp_cursor IS
SELECT empno, ename
FROM emp;
BEGIN
OPEN emp_cursor;
LOOP
FETCH emp_cursor INTO v_empno, v_ename;
EXIT WHEN emp_cursor%ROWCOUNT>10 or emp_cursor%NOTFOUND;
DBMS_OUTPUT.PUT_LINE(INITCAP(v_ename)||' ['||v_empno||']');
END LOOP;
CLOSE emp_cursor;
END;
/
Records and %ROWTYPE
Instead of fetching values into a collection of variables, you could fetch the entire row into a record like so.
DECLARE
CURSOR emp_cursor IS
SELECT empno, ename, sal, job, deptno
FROM emp
WHERE deptno=30;
-- This creates a record named emp_row
-- based on the structure of the cursor emp_cursor
emp_row emp_cursor%ROWTYPE;
BEGIN
OPEN emp_cursor;
LOOP
FETCH emp_cursor
INTO emp_row;
EXIT WHEN emp_cursor%NOTFOUND;
DBMS_OUTPUT.PUT_LINE(emp_row.ename||' ['
||emp_row.empno||'] makes '||TO_CHAR(emp_row.sal*12,'$99,990.00'));
END LOOP;
CLOSE emp_cursor;
END;
/
You can reference the fields of a record using the syntax record_name.field_name.
In addition to basing a record on a cursor, you can also define records based on tables like so.
DECLARE
CURSOR emp_cursor IS
SELECT *
FROM emp
WHERE deptno=30;
emp_row emp%ROWTYPE; -- This creates a record named EMP_ROW
-- based on the structure of the EMP table
BEGIN
OPEN emp_cursor;
LOOP
FETCH emp_cursor
INTO emp_row;
EXIT WHEN emp_cursor%NOTFOUND;
DBMS_OUTPUT.PUT_LINE(emp_row.ename||' ['||emp_row.empno||']');
END LOOP;
CLOSE emp_cursor;
END;
/
Type          Storage  Range/Length               Comments
------------- -------- -------------------------- -------------------
NUMBER        16       40 digit floating point
FLOAT         16       40 digit floating point
NUMERIC       16       40 digit floating point
NUMBER(a,b)   varies   a digits, b precision
FLOAT(a,b)    varies   a digits, b precision
DECIMAL       16       40 digit
INTEGER       16       40 digits
INTEGER(a)    varies   a digits
CHAR(a)       a        1 - 255
VARCHAR(a)    varies   1 - 255
VARCHAR2(a)   varies   1 - 2000
DATE                   1/1/4712BC - 12/31/4712AD  precision to minutes
LONG          varies   0 - 2 GB
LONG RAW      varies   0 - 2 GB
LONG VARCHAR  varies   0 - 2 GB
BLOB          varies   0 - 4 GB
CLOB          varies   0 - 4 GB
NCLOB         varies   0 - 4 GB
BFILE         ??       ??
ROWID         n/a
* Long datatypes are discouraged in Oracle 8. Note that LONG and BLOB
datatypes are incompatible.
====== PL-SQL data types (differences) ======
Type       Storage    Range/Length   Comments
---------- ---------- -------------- ----------------------------
NUMERIC
VARCHAR
VARCHAR2
BLOB
CLOB
NCLOB
Creating a table
PCTFREE = Amount of space to leave in block during insert operations. Allows room for records to grow
within the same area.
PCTUSED = The threshold at which the block is placed back on the free block list.
INITIAL/NEXT = The initial disk allocated, and the next extent size.
LOGGING = Indicates whether operations are written to the redo logs.
CREATE TABLE EMPLOYEE (
EMP_ID   NUMBER(8),
LNAME    VARCHAR2(30),
FNAME    VARCHAR2(15),
HIRE_DT  DATE,
SALARY   NUMBER(8,2) )
PCTFREE 20
PCTUSED 50
STORAGE (
INITIAL 200K NEXT 200K
PCTINCREASE 0 MAXEXTENTS 50 )
TABLESPACE ts01
LOGGING ;
Creating indexes
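The index examples for this section were lost in formatting; a minimal sketch against the EMPLOYEE table created above (the index names and tablespace are illustrative):

```sql
-- B-tree index on a single column
CREATE INDEX EMP_LNAME_IDX
  ON EMPLOYEE (LNAME)
  TABLESPACE ts01 ;

-- Composite unique index
CREATE UNIQUE INDEX EMP_NAME_UX
  ON EMPLOYEE (LNAME, FNAME, HIRE_DT) ;
```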
Creating constraints
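The constraint examples for this section were lost in formatting; a minimal sketch against the EMPLOYEE table created above (the constraint names are illustrative):

```sql
-- Primary key constraint
ALTER TABLE EMPLOYEE
  ADD CONSTRAINT EMPLOYEE_PK PRIMARY KEY (EMP_ID) ;

-- Check constraint
ALTER TABLE EMPLOYEE
  ADD CONSTRAINT EMP_SALARY_CK CHECK (SALARY >= 0) ;
```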
Creating triggers
The example below illustrates versioning of the EMP_RES table, which contains a blob field.
CREATE OR REPLACE TRIGGER EMP_RES_INS_TR
AFTER INSERT ON EMP_RES
FOR EACH ROW
DECLARE
VER1 NUMBER ;
EBLOB BLOB ;
VBLOB BLOB ;
BEGIN
EBLOB := EMPTY_BLOB();
SELECT (COUNT(*) + 1) INTO VER1
FROM VEMP_RES
WHERE EMP_ID =:NEW.EMP_ID ;
VBLOB := :NEW.RESUME ;
INSERT INTO VEMP_RES
( EMP_ID, DOC_URL,
A_USERID, D_MODIFIED, VER_NO, RESUME)
VALUES (
:NEW.EMP_ID, :NEW.DOC_URL,
USER, SYSDATE, VER1, EBLOB ) ;
SELECT RESUME
INTO EBLOB
FROM VEMP_RES
WHERE EMP_ID =:NEW.EMP_ID AND
VER_NO = VER1
FOR UPDATE ;
UPDATE VEMP_RES
SET RESUME = VBLOB
WHERE EMP_ID =:NEW.EMP_ID AND
VER_NO = VER1 ;
END;
Renaming a table
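The renaming example itself was lost in formatting; the standard statements, sketched with illustrative table names (ALTER TABLE ... RENAME TO is available in 9i Release 2 and later):

```sql
-- rename a table you own
RENAME COMPANY TO COMPANY_OLD ;
-- or, equivalently:
ALTER TABLE COMPANY_OLD RENAME TO COMPANY ;
```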
-- Synonym Creation
GRANT SELECT ON USER5.COMPANY TO USER6 ;
CREATE SYNONYM USER6.COMPANY5 FOR USER5.COMPANY ;
-- Database Link
CREATE DATABASE LINK ARCHIVE_DATA CONNECT TO USER5 IDENTIFIED BY TIGER USING 'SERVER5' ;
/* users within this system can now reference tables using ARCHIVE_DATA.tablename */
SQL-Plus is a query / command line utility which has some powerful formatting capabilities.
Getting Started
;                    Command line terminator
/                    Execute the current batch of commands
SET SERVEROUTPUT ON  Allow messages from PL-SQL to be displayed
SHOW ERRORS          Show errors from last batch
EDIT                 Run editor, and load buffer
CLEAR BUFFER         Clear buffer commands
&                    Prompt for value
@filename            Run commands in filename
/**** Examples ****/
/* prompt for process id, and kill */
alter system kill session '&Victim'
/
Displaying output
All SELECT statements in PL-SQL must have an INTO clause; therefore another method is needed to
display output to the console.
DBMS_OUTPUT.PUT_LINE('TEST OUTPUT');
salary := 24000;
dbms_output.put_line(salary);
Output variables
Output variables are used to return data to another procedure, or to an external application which has
invoked the stored procedure.
/* sample procedure header using output variables.
Note: to be usable as a parameter type, the table type
must be declared in a package specification. */
TYPE INV_ARRAY IS TABLE OF NUMBER(8)
INDEX BY BINARY_INTEGER ;
CREATE OR REPLACE PROCEDURE PROC_GET_INV_NOS
( USERID1 IN VARCHAR2, INV_IDS OUT INV_ARRAY)
AS
...
Arrays and structures
Arrays and structures are implemented through the use of "tables" and "records" in PL-SQL.
/* EXAMPLE OF A SIMPLE RECORD TYPE */
TYPE INVOICE_REC_TYPE IS RECORD
(INV_ID INVOICE.INV_ID%TYPE,
INV_DT INVOICE.INV_DT%TYPE ) ;
/* ARRAY DECLARATION */
TYPE NAME_TABLE_TYPE IS TABLE OF VARCHAR2(20)
INDEX BY BINARY_INTEGER ;
NAME_TABLE NAME_TABLE_TYPE ;
/* ARRAY SUBSCRIPTING */
I := I + 1;
NAME_TABLE(I) := 'JSMITH';
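A complete runnable block combining the declaration and subscripting snippets above (the second name and the FIRST/NEXT traversal are illustrative additions):

```sql
DECLARE
  TYPE NAME_TABLE_TYPE IS TABLE OF VARCHAR2(20)
    INDEX BY BINARY_INTEGER ;
  NAME_TABLE NAME_TABLE_TYPE ;
  I BINARY_INTEGER := 0 ;
BEGIN
  -- populate the PL/SQL table
  I := I + 1 ;
  NAME_TABLE(I) := 'JSMITH' ;
  I := I + 1 ;
  NAME_TABLE(I) := 'MJONES' ;
  -- iterate using the FIRST/NEXT collection methods
  I := NAME_TABLE.FIRST ;
  WHILE I IS NOT NULL LOOP
    DBMS_OUTPUT.PUT_LINE(NAME_TABLE(I)) ;
    I := NAME_TABLE.NEXT(I) ;
  END LOOP ;
END ;
/
```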
Conditionals
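The examples for this section were lost in formatting; a minimal sketch of PL-SQL's IF/ELSIF/ELSE syntax (the variable and thresholds are illustrative):

```sql
DECLARE
  SALARY NUMBER := 24000 ;
BEGIN
  IF SALARY > 50000 THEN
    DBMS_OUTPUT.PUT_LINE('HIGH') ;
  ELSIF SALARY > 20000 THEN
    DBMS_OUTPUT.PUT_LINE('MEDIUM') ;
  ELSE
    DBMS_OUTPUT.PUT_LINE('LOW') ;
  END IF ;
END ;
/
```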
Cursors
The first example depicts dBASE-style row processing; the second, a more traditional "fetch" approach.
PROCEDURE PROC_SCAN_INVOICES (EXPIRE_DT IN DATE)
IS
CURSOR INVOICE_CUR IS
SELECT INV_ID, INV_DT FROM INVOICE ;
BEGIN
-- the cursor FOR loop implicitly declares INVOICE_REC
FOR INVOICE_REC IN INVOICE_CUR
LOOP
IF INVOICE_REC.INV_DT < EXPIRE_DT THEN
DELETE FROM INVOICE
WHERE INV_ID = INVOICE_REC.INV_ID ;
DBMS_OUTPUT.PUT_LINE('INVOICE DELETED:');
DBMS_OUTPUT.PUT_LINE(INVOICE_REC.INV_ID);
END IF;
END LOOP;
END;
/* ======================================= */
CREATE OR REPLACE PROCEDURE PROC_DOCEXPIRE_RPT
( RPT_BODY OUT LONG RAW )
IS
RPT_LINE      VARCHAR2(1900);
RPT_PART      VARCHAR2(1900);
RPT_LEAD      VARCHAR2(200);
GLIB_ID1      NUMBER ;
GLIB_ID2      VARCHAR(12);
ORIG_LOC_CD1  VARCHAR2(12);
AUTHOR_ID1    VARCHAR2(30);
CONTRIBUTORS1 VARCHAR2(80);
TOPIC1        VARCHAR2(80);
NBR_ACCESS1   NUMBER ;
NBR_ACCESS2   VARCHAR2(12);
TOT_EXPIRED1  NUMBER ;
TOT_EXPIRED2  VARCHAR2(12);
COUNT1        NUMBER ;
RPT_BODY_PART LONG ;
CURSOR CUR1 IS
SELECT GLIB_ID, ORIG_LOC_CD, AUTHOR_ID, CONTRIBUTORS, TOPIC, NBR_ACCESS
FROM GEN_DOC
WHERE EXPIRE_DT < (SYSDATE + 30)
ORDER BY ORIG_LOC_CD, GLIB_ID ;
BEGIN
SELECT COUNT(*)
INTO TOT_EXPIRED1
FROM GEN_DOC
WHERE STAT_CD='90';
TOT_EXPIRED2 := TO_CHAR(TOT_EXPIRED1);
RPT_LEAD := '<H5>TOTAL EXPIRED DOCUMENT COUNT TO DATE: ... ' ||
TOT_EXPIRED2 || '</H5><HR>' ;
RPT_LINE := '<HTML><BODY BGCOLOR=#FFFFFF>' ||
'<H6>ABC Corporation</H6>' ||
'<H2>Gen Doc System - Documents Expiring Within 30 Days</H2><HR>' ||
RPT_LEAD ;
COUNT1 := 0;
OPEN CUR1;
RPT_LINE := RPT_LINE || '<TABLE>' ||
'<TD><U>No. Accesses</U></TD>' ||
'<TD><U>Document #</U></TD>' ||
'<TD><U>Topic</U></TD>' ||
'<TD><U>Author</U></TD>' ;
RPT_BODY := UTL_RAW.CAST_TO_RAW(RPT_LINE);
RPT_LINE := '';
LOOP
COUNT1 := COUNT1 + 1;
EXIT WHEN (COUNT1 > 500);
EXIT WHEN (UTL_RAW.LENGTH(RPT_BODY) > 32000);
FETCH CUR1 INTO
GLIB_ID1, ORIG_LOC_CD1, AUTHOR_ID1, CONTRIBUTORS1, TOPIC1, NBR_ACCESS1 ;
EXIT WHEN CUR1%NOTFOUND ;
RPT_PART := '<TR><TD>';
NBR_ACCESS2 := TO_CHAR(NBR_ACCESS1);
RPT_PART := CONCAT(RPT_PART,NBR_ACCESS2);
RPT_PART := CONCAT(RPT_PART,'</TD><TD>');
GLIB_ID2 := TO_CHAR(GLIB_ID1);
RPT_PART := RPT_PART || ORIG_LOC_CD1 || '-' || GLIB_ID2 ||
'</TD><TD>' || TOPIC1 || '</TD><TD>' ||
AUTHOR_ID1 || '</TD><TR>' ;
RPT_LINE := CONCAT(RPT_LINE, RPT_PART);
RPT_BODY_PART := UTL_RAW.CAST_TO_RAW(RPT_LINE);
RPT_BODY := UTL_RAW.CONCAT(RPT_BODY,RPT_BODY_PART);
-- RPT_BODY := RPT_BODY || RPT_LINE;
RPT_LINE := '';
END LOOP;
CLOSE CUR1 ;
RPT_LINE := '</TABLE></BODY></HTML>';
RPT_BODY_PART := UTL_RAW.CAST_TO_RAW(RPT_LINE);
RPT_BODY := UTL_RAW.CONCAT(RPT_BODY, RPT_BODY_PART);
EXCEPTION
WHEN OTHERS THEN
BEGIN
DBMS_OUTPUT.PUT_LINE('ERROR: PROC_DOCSTAT_RPT');
GLIB_ID1 := UTL_RAW.LENGTH(RPT_BODY);
DBMS_OUTPUT.PUT_LINE(GLIB_ID1);
END;
END;
Packages
A package is a construct which bundles related procedures and functions together. Variables declared in
the declaration section of a package can be shared among the procedures/functions in the body of the
package.
/* package */
CREATE OR REPLACE PACKAGE INVPACK
IS
FUNCTION COUNTINV (SALESREP IN VARCHAR2) RETURN INTEGER;
PROCEDURE PURGEINV (INV_ID1 IN INTEGER) ;
END INVPACK;
/* package body */
CREATE OR REPLACE PACKAGE BODY INVPACK
IS
COUNT1 NUMBER;
FUNCTION COUNTINV (SALESREP IN VARCHAR2) RETURN INTEGER
IS
BEGIN
SELECT COUNT(*)
INTO COUNT1
FROM INVOICE
WHERE SALES_REP_ID = SALESREP ;
RETURN COUNT1 ;
END COUNTINV;
PROCEDURE PURGEINV (INV_ID1 IN INTEGER)
IS
BEGIN
DELETE FROM INVOICE
WHERE INV_ID = INV_ID1 ;
END PURGEINV;
/* initialization section for package */
BEGIN
COUNT1 := 0 ;
END INVPACK;
Exception Handling
The following block could appear at the end of a stored procedure:
EXCEPTION
WHEN NO_DATA_FOUND THEN
DBMS_OUTPUT.PUT_LINE('End of data !!');
WHEN OTHERS THEN
BEGIN
DBMS_OUTPUT.PUT_LINE('OTHER CONDITION OCCURRED !');
END;
Using Blobs
Blob variables require special handling in PL-SQL. When reading from a file to a blob, only one statement
is required. When reading from a blob field to a PL-SQL variable, only 32k blocks can be processed, thus
necessitating a loop construct.
/*---------------------------------------*/
/* Read a blob from a file, and write */
/* it to the database.
*/
/*---------------------------------------*/
set serveroutput on size 500000 ;
truncate table image_test ;
create or replace directory image_dir as '/apps/temp/images' ;
create or replace procedure proc_imp_jpg
(fname1 in varchar2, image_id1 in numeric) is
file1  bfile ;
lblob  blob ;
len    int ;
e_blob blob ;
begin
file1 := bfilename('IMAGE_DIR',fname1);
e_blob := empty_blob();
insert into image_test (image_id, image_data)
values (image_id1, e_blob )
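The procedure above is cut off after the INSERT; a complete sketch of how this pattern is typically finished using DBMS_LOB (the RETURNING clause and the single-call load are assumptions, not recovered text):

```sql
create or replace procedure proc_imp_jpg
  (fname1 in varchar2, image_id1 in numeric) is
  file1 bfile ;
  lblob blob ;
  len   integer ;
begin
  file1 := bfilename('IMAGE_DIR', fname1) ;
  -- insert an empty blob and grab its locator in one statement
  insert into image_test (image_id, image_data)
  values (image_id1, empty_blob())
  returning image_data into lblob ;
  -- load the whole file into the blob in a single call
  dbms_lob.fileopen(file1, dbms_lob.file_readonly) ;
  len := dbms_lob.getlength(file1) ;
  dbms_lob.loadfromfile(lblob, file1, len) ;
  dbms_lob.fileclose(file1) ;
  commit ;
end ;
/
```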
Version information
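The queries for this section were lost in formatting; the standard sources for version information:

```sql
SELECT * FROM V$VERSION ;
SELECT * FROM PRODUCT_COMPONENT_VERSION ;
```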
SELECT FILE#,T1.NAME,STATUS,ENABLED,BYTES,CREATE_BYTES,T2.NAME
FROM V$DATAFILE T1, V$TABLESPACE T2
WHERE T1.TS# = T2.TS# ;
Extent information
SELECT segment_name, extent_id, blocks, bytes
FROM dba_extents
WHERE segment_name = TNAME ;
set pagesize 0;
select 'TABLE:',table_name,'current' from user_tables
union
select 'SYNONYM:',synonym_name,table_owner from user_synonyms
order by 1,2 ;
Constraint columns
SELECT constraint_name, table_name, column_name
FROM dba_cons_columns
WHERE table_name = TNAME
ORDER BY table_name, constraint_name, position ;
Constraint listing
SELECT constraint_name, table_name,
constraint_type, validated, status
FROM dba_constraints;
Indexed column listing
select
b.uniqueness, a.index_name, a.table_name, a.column_name
from user_ind_columns a, user_indexes b
where a.index_name=b.index_name
order by a.table_name, a.index_name, a.column_position;
Trigger listing
SELECT trigger_name, status
FROM dba_triggers ;
Tuning: buffer cache
Calculation:
buffer cache hit ratio = 1 - (physical reads / (db block gets + consistent gets))
Goal:
get hit ratio in the range 85 - 90%
Tuning parm:
adjust DB_BLOCK_BUFFERS in the initxx.ora file, increasing by small increments
SELECT NAME, VALUE
FROM V$SYSSTAT WHERE NAME IN
('db block gets','consistent gets','physical reads');
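The ratio above can also be computed directly in one statement; a sketch (the column alias is illustrative):

```sql
SELECT 1 - (phy.value / (db.value + con.value)) AS buffer_cache_hit_ratio
FROM   v$sysstat phy, v$sysstat db, v$sysstat con
WHERE  phy.name = 'physical reads'
AND    db.name  = 'db block gets'
AND    con.name = 'consistent gets' ;
```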
Tuning: sorts
Goal:
Increase number of memory sorts vs disk sorts
Tuning parm:
adjust SORT_AREA_SIZE in the initxx.ora file, increasing by small increments
SELECT NAME, VALUE
FROM V$SYSSTAT
WHERE NAME LIKE '%sort%';
Killing Sessions
Runaway processes can be killed on the UNIX side, or within server manager.
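Within the server, a session is killed with ALTER SYSTEM; a minimal sketch (the username filter and the '12,345' sid/serial# pair are illustrative and come from the lookup query):

```sql
-- find the session to kill
SELECT sid, serial#, username, status
FROM   v$session
WHERE  username = 'SCOTT' ;

-- kill it using the sid and serial# returned above
ALTER SYSTEM KILL SESSION '12,345' ;
```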
Recovering an Instance
An incomplete recovery is the only option if backups are run periodically on a cold instance. Complete
recovery is possible if archive logging is enabled, and backups are run while the database is active.
/* diagnose data file problem */
select * from v$recover_file ;
/* diagnose data file problem, by displaying tablespace info */
select file_id, file_name, tablespace_name, status
from dba_data_files ;
/* find archive log files */
select * from v$recovery_log ;
/* incomplete recovery #1 */
svrmgrl> shutdown abort
[[ In Unix copy data files from backup area to data directory(s). ]]
svrmgrl> connect;
svrmgrl> startup;
/* incomplete recovery #2 */
svrmgrl> shutdown abort;
svrmgrl> connect;
connect;
startup mount;
recover database until time '2002-03-04:15:00:00' ;
alter database open resetlogs;
connect;
startup mount;
recover database ;
recover datafile '/data4/ts03.dbf' ;
startup open;
connect;
startup mount;
set autorecovery on ;
recover tablespace ts03 ;
recover datafile 4 ;
startup open;
Connection Errors
------------------------------------------------------ORA-01034: ORACLE not available
------------------------------------------------------TNS-12564: TNS:connection refused
------------------------------------------------------TNS-12530: Unable to start a dedicated server process
Connection errors can crop up out of nowhere; the error messages tend to be vague, and not useful at all.
Here's a plan of attack which will solve many connection issues. Try each step, and proceed if the problem
persists.
1) Check your environment ; verify the variables depicted below are set.
( NT: check the registry )
The example below details a Solaris/CSH environment.
Note the TWO_TASK setting ...
setenv ORACLE_BASE /apps/oracle
setenv ORACLE_HOME ${ORACLE_BASE}
setenv ORACLE_SID db22
setenv TWO_TASK $ORACLE_SID
setenv LD_LIBRARY_PATH $ORACLE_HOME/lib:/usr/lib/X11
setenv ORACLE_PATH $ORACLE_HOME/bin:/usr/bin:/usr/local/bin
setenv ORA_CLIENT_LIB shared
set path = ($ORACLE_HOME/bin /bin /usr/bin /usr/local/bin /sbin /usr/sbin /usr/bin/X11 .)
2) Try to ping the instance:
tnsping db22
If there's an error, check $ORACLE_HOME/network/admin/tnsnames.ora
3) Restart the TNS service.
Solaris:
1) kill the process, running the tnslsnr binary
2) nohup $ORACLE_HOME/bin/tnslsnr start &
NT:
1) restart the service, in the control panel
4) SQL-Plus / ServerMgr
Try using this syntax:
sqlplus user/password@instance
sqlplus $1 $2 $3
# OR
#sqlplus $1@$ORACLE_SID
Also verify the oracle user owns the oracle directory tree.
6) Check the pfile, verify the settings detailed below. For this example,
the machine should have at least 512mb of memory, to handle the OS and
other processes.
# 100 MB shared pool memory
shared_pool_size = 104857600
# 65 processes need 130 MB of additional memory
processes = 65
sessions = 65
Solaris: check the "shared memory" and "semaphores" settings
also, in the /etc/system file.
8) Verify the Oracle version, SQLNet version, and patched OS are all compatible.
Oracle 9i Articles
Performance Enhancements In Oracle9i - Read about some of the new performance features in Oracle9i.
Persistent Initialization Parameters - Make database parameter changes persist between shutdowns.
Real Application Clusters - A brief introduction to the Oracle9i replacement for Oracle Parallel Server.
Recovery Enhancements In Oracle9i - Reduce unplanned downtime by using the new crash, instance and
media recovery features of Oracle9i.
Recovery Manager (RMAN) Enhancements In Oracle9i - Use the latest RMAN features which make backup
and recovery quicker and more reliable.
Resource Manager Enhancements In Oracle9i - Manage system resources more precisely using the
Resource Manager enhancements in Oracle9i.
Resumable Space Allocation - Make long running operations suspend rather than abort if they encounter
space errors.
Scalable Session Management - Learn about the new session management features in Oracle9i.
Security Enhancements In Oracle9i - A run through the new security features available in Oracle9i,
focusing on those relevant for the Oracle9i Database: New Features For Administrators OCP exam.
SQL New Features In Oracle9i - Check out the new SQL features with emphasis on those relevant for the
Oracle9i Database: New Features For Administrators OCP exam.
Workspace Management In Oracle9i - Allow multiple transactionally consistent environments to exist
within one database.
Oracle9i Database Release 2: New Features
Associative Arrays - Oracle9i Release 2 allows you to index-by string values using this renamed collection.
Bulk Binds and Record Processing in Oracle9i Release 2 - Take advantage of bulk binds for performance
improvements whilst using record structures.
DBNEWID Utility - Change the internal DBID and the database name using this new utility without
rebuilding your controlfile.
DBMS_XPLAN - Easily format the output of an explain plan with this replacement for the utlxpls.sql script.
Export BLOB Contents Using UTL_FILE - Use the new UTL_FILE functionality to write binary data to files.
FTP From PL/SQL - A description of two methods for triggering FTP jobs directly from PL/SQL.
InterMedia - Import-Export Of Images - Use Oracle interMedia to store and retrieve images without using
Java stored procedures.
Renaming Columns And Constraints - Oracle9i Release 2 now allows the renaming of columns and
constraints. Check out the syntax here.
SQL/XML - Oracle9i Release 2 includes functionality to support the emerging SQL/XML standard to simplify
XML generation from SQL queries.
STATISTICS_LEVEL - Let Oracle9i Release 2 control the collection of statistics and advisories with a single
parameter.
Streams - Based on Advanced Queuing and LogMiner, Oracle Streams form a distributed messaging
technology that can be used for a variety of tasks including messaging, replication and ETL processes.
UTL_FILE Enhancements - Oracle9i Release 2 includes some long overdue enhancements including basic
file handling and support for NCHAR and RAW data.
UTL_FILE - Random Access of Files - Use the UTL_FILE package for random access of files from PL/SQL.
XML DB - Store and retrieve XML documents from the Oracle XML DB repository using HTTP, FTP and
WebDAV in seconds.
XMLSEQUENCE - Use this operator to split multi-value results from XMLTYPE queries into multiple rows.
Oracle9i (9.2.0.1.0) Installation On RedHat 9.0 Linux - A brief guide to installing Oracle9i (9.2.0.1.0) on
RedHat 9.0 Linux.
Oracle9i (9.2.0.4.0) Installation On RedHat Advanced Server 2.1 Linux - A brief guide to installing Oracle9i
(9.2.0.4.0) on RedHat Advanced Server 2.1 Linux.
Oracle9i (9.2.0.4.0) Installation On RedHat Advanced Server 3.0 Linux - A brief guide to installing Oracle9i
(9.2.0.4.0) on RedHat Advanced Server 3.0 Linux.
Oracle9i (9.2.0.1.0) Installation On Tru64 5.1b - A brief guide to installing Oracle9i (9.2.0.1.0) on Tru64
5.1b.
Oracle9i RAC Installation On Tru64 - A brief guide to installing Oracle9i (9.2.0.4.0) Real Application
Clusters (RAC) on Tru64 5.1b.
Manual Oracle Uninstall - Having trouble removing all Oracle software using the OUI? Try these methods.
Oracle9i XML Articles
Load XMLTYPE From File - A simple method to load XMLTYPE data from a file.
Parse XML Documents - Explode unstructured XML documents into relational tables using the new
integrated XDB packages.
XML DB - Store and retrieve XML documents from the Oracle XML DB repository using HTTP, FTP and
WebDAV in seconds.
SQL/XML - Oracle9i Release 2 includes functionality to support the emerging SQL/XML standard to simplify
XML generation from SQL queries.
XMLSEQUENCE - Use this operator to split multi-value results from XMLTYPE queries into multiple rows.
XMLType Datatype - Store XML documents in tables and query them using SQL.
XML Generation In Oracle9i Using DBMS_XMLQuery, DBMS_XMLGen, Sys_XMLGen And Sys_XMLAgg -
Generate XML and perform XSL transformations with ease using the new XML features of Oracle9i.
XML-Over-HTTP - XML-over-HTTP was the precursor to web services allowing easy access to XML via HTTP
GETs and POSTs.
XSQL Servlet and XSQL Pages - Publish dynamic XML documents through HTTP using the XSQL Servlet
utility.
Oracle9i Web Articles
Consuming Web Services - Access web services directly from PL/SQL using this simple API.
Email From PL/SQL In Oracle9i - Email from PL/SQL rather than using external procedures or Java.
File Upload and Download Procedures - Upload and download files directly from the database using
Database Access Descriptors.
FTP From PL/SQL - A description of two methods for triggering FTP jobs directly from PL/SQL.
Images from Oracle Over HTTP - Retrieve images directly from the database over HTTP.
Java Server Pages - Use Java as a scripting language to interact with the database from web pages.
PL/SQL Server Pages - Use PL/SQL as a scripting language to generate web pages directly from the
database.
PL/SQL Web Toolkit - Generate web pages directly from the database using this simple toolkit.
SQL*Plus Web Reports - Generate HTML reports directly from SQL*Plus.
Stateless Locking Methods - Learn how to avoid data loss in stateless environments.
Oracle9i Miscellaneous Articles
Advanced Queuing In Oracle9i - Get to grips with the basics of advanced queuing in Oracle9i.
Archivelog Mode On RAC - The differences between resetting the archive log mode on a single node
instance and a Real Application Cluster (RAC).
CASE Expressions And Statements - Learn how to use CASE expressions in both SQL and PL/SQL. In
addition, learn how to use the CASE statement in PL/SQL.
Complete Data Audit - A simple and generic solution for auditing before and after snapshots of data.
Compressed Tables - Compress whole tables or individual table partitions to reduce disk space
requirements.
Duplicate a Database Using RMAN - Use RMAN to create a duplicate, or clone, of a database from a recent
backup.
DBMS_LDAP - Accessing LDAP From PL/SQL - Use the DBMS_LDAP package to query and modify LDAP
entries from PL/SQL.
DBMS_LIBCACHE - Warm up the library cache of an instance by compiling the SQL and PL/SQL statements
from the library cache of another instance.
DBMS_PROFILER - Profile the run-time behaviour of PL/SQL code to identify potential bottlenecks.
DBMS_TRACE - Trace the run-time behaviour of PL/SQL code to identify potential bottlenecks.
Dynamic Binds Using Contexts - Simplify dynamic variable binds within dynamic SQL using contexts.
External Tables - Query the contents of flat files as if they were regular tables.
Full Text Indexing using Oracle Text - Efficiently query free text and produce document classification
applications using Oracle Text.
Generating CSV Files - A simple example of using the UTL_FILE package to generate CSV extract files.
Heterogeneous Services - Generic Connectivity In Oracle9i - Query non-Oracle datasources using ODBC.
Java Native Compilation - Improve the performance of Java procedural code by compiling it to native
shared libraries.
Mutating Table Exceptions - A simple method to prevent triggers producing mutating table exceptions.
Oracle Internet Directory - Use the Oracle Internet Directory to replace local Oracle Net configuration files
and Oracle Names Server.
Oracle Label Security - Configure row-level security with this out-of-the-box solution.
Pipelined Table Functions - Improve performance of ETL processes by pipelining all transformation
functions.
PL/SQL Native Compilation - Improve the performance of PL/SQL procedural code by compiling it to native
shared libraries.
RANK, DENSE_RANK, FIRST and LAST Analytic Functions - Simple examples of how to use these analytic
functions.
Recovery Manager (RMAN) - Explanation of RMAN's basic backup, recovery and reporting functionality.
Storing Passwords In The Database - Store passwords securely in the database using this simple hashing
technique.
Transportable Tablespaces - Copy tablespaces to new instances in the time it takes to copy the datafiles.
Unregister a Database From RMAN - A step-by-step guide to unregistering unwanted databases from the
RMAN catalog.
Universal Unique Identifier (UUID) - Reduce data migration and replication issues by replacing sequence
generated IDs with UUIDs.
Useful Procedures And Functions - Procedures and functions you may have overlooked which can come in
useful during development.
Oracle9i Application Server Articles
Oracle9iAS Backup and Recovery - Simplify backup and recovery of Oracle9i Application Server using this
Oracle supplied Perl utility.
Oracle9iAS dcmctl Utility - Speed up 9iAS administration by avoiding Enterprise Manager.
Oracle9iAS (9.0.3.0.0) Installation On RedHat Advanced Server 2.1 - A brief guide to installing Oracle9iAS
(9.0.3.0.0) on RedHat Advanced Server 2.1.
Oracle9iAS (9.0.2.0.1) Portal Installation On Tru64 5.1b - A brief guide to installing Oracle9iAS (9.0.2.0.1)
Portal on Tru64 5.1b.
Oracle9iAS (9.0.2.0.1) Portal Installation On Windows 2000 - A brief guide to installing Oracle9iAS
(9.0.2.0.1) Portal on Windows 2000.
SQL*Loader
Maximizing SQL*Loader Performance
Use Direct Path Loads - The conventional path loader essentially loads the data by using standard
insert statements. The direct path loader (direct=true) loads directly into the Oracle data files and creates
blocks in Oracle database block format. There are certain cases, however, in which direct path loads
cannot be used (clustered tables). To prepare the database for direct path loads, the script
$ORACLE_HOME/rdbms/admin/catldr.sql must be executed.
Disable Indexes and Constraints. For conventional data loads only, the disabling of indexes and
constraints can greatly enhance the performance.
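For a conventional path load, the usual pattern is to disable constraints and mark indexes unusable before the load, then re-enable and rebuild them afterwards. A sketch only; the table, constraint and index names below are hypothetical:

```sql
-- Before a large conventional path load (names are illustrative):
ALTER TABLE big_table DISABLE CONSTRAINT big_table_fk;
ALTER INDEX big_table_idx UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;

-- ... run the SQL*Loader job here ...

-- After the load, rebuild the index and re-enable the constraint:
ALTER INDEX big_table_idx REBUILD;
ALTER TABLE big_table ENABLE CONSTRAINT big_table_fk;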
Use a Larger Bind Array. For conventional data loads only, larger bind arrays limit the number of
calls to the database and increase performance. The size of the bind array is specified using the bindsize
parameter. The bind array's size is equivalent to the number of rows it contains (rows=) times the
maximum length of each row.
Use ROWS=n to Commit Less Frequently. For conventional data loads only, the rows parameter
specifies the number of rows per commit. Issuing fewer commits will enhance performance.
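The bindsize and rows parameters can be supplied on the command line or in the control file's OPTIONS clause. A minimal sketch, assuming a hypothetical emp.dat datafile and comma-delimited fields:

```sql
-- Control file sketch: a 512 KB bind array, committing every 10,000 rows
OPTIONS (BINDSIZE=512000, ROWS=10000)
LOAD DATA
INFILE 'emp.dat'
INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename, job)
```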
Use Parallel Loads. Available with direct path data loads only, this option allows multiple
SQL*Loader jobs to execute concurrently.
$ sqlldr control=first.ctl parallel=true direct=true
$ sqlldr control=second.ctl parallel=true direct=true
Use Fixed Width Data. Fixed width data format saves Oracle some processing when parsing the
data. The savings can be tremendous.
Disable Archiving During Load. While this may not be feasible in certain environments, disabling
database archiving can increase performance considerably.
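Switching the database to NOARCHIVELOG mode for the duration of the load might look like the following sketch. It requires a clean shutdown and a SYSDBA connection, and the change should be reversed (and a backup taken) once the load completes:

```sql
-- Run as SYSDBA; a sketch only -- check the backup implications first
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;
-- ... perform the load, then reverse the change the same way ...
```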
Use unrecoverable. The unrecoverable option (unrecoverable load data) disables the writing of the
data to the redo logs. This option is available for direct path loads only.
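In the control file, the option appears before the LOAD DATA keywords. A sketch; the datafile and field layout are hypothetical:

```sql
-- Direct path only: skip redo generation for the loaded data
UNRECOVERABLE
LOAD DATA
INFILE 'emp.dat'
INTO TABLE emp
(empno POSITION(1:4)  INTEGER EXTERNAL,
 ename POSITION(6:15) CHAR)
```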
Using the table table_with_one_million_rows, the following benchmark tests were performed with the
various SQL*Loader options. The table was truncated after each test.
SQL*Loader Option                              Elapsed Time (Seconds)  Time Reduction
direct=false rows=64                           135                     -
direct=false bindsize=512000 rows=10000        92                      32%
direct=false bindsize=512000 rows=10000
  (database in noarchivelog)                   85                      37%
direct=true                                    47                      65%
direct=true unrecoverable                      41                      70%
direct=true unrecoverable fixed width data     41                      70%
The results above indicate that conventional path loads take the longest. However, the bindsize and rows
parameters can aid the performance under these loads. The test involving the conventional load didn't
come close to the performance of the direct path load with the unrecoverable option specified.
It is also worth noting that the fastest import time achieved for this table (earlier) was 67 seconds,
compared to 41 seconds for SQL*Loader direct path, a 39% reduction in execution time. This shows that
SQL*Loader can load the same data faster than import. These tests did not compensate for indexes. All
database load operations will execute faster when indexes are disabled.
SQL*Loader Control File
The control file is a text file written in a language that SQL*Loader understands. The control file describes
the task that the SQL*Loader is to carry out. The control file tells SQL*Loader where to find the data, how
to parse and interpret the data, where to insert the data, and more. See Chapter 4, "SQL*Loader Case
Studies" for example control files.
Although not precisely defined, a control file can be said to have three sections:
1. The first section contains session-wide information, for example:
o global options such as bindsize, rows, records to skip, etc.
o INFILE clauses to specify where the input data is located
o data character set specification
2. The second section consists of one or more "INTO TABLE" blocks. Each of these blocks contains
information about the table into which the data is to be loaded, such as the table name and the
columns of the table.
3. The third section is optional and, if present, contains input data.
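Putting the three sections together, a minimal control file might look like this (all names and data are illustrative):

```sql
-- Section 1: session-wide information
OPTIONS (SKIP=1, ERRORS=10)
LOAD DATA
INFILE *
-- Section 2: one or more INTO TABLE blocks
INTO TABLE dept
FIELDS TERMINATED BY ','
(deptno, dname, loc)
-- Section 3: optional input data
BEGINDATA
10,ACCOUNTING,NEW YORK
20,RESEARCH,DALLAS
```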
Examples
Case 1: Loading Variable-Length Data
Loads stream format records in which the fields are delimited by commas and may be
enclosed by quotation marks. The data is found at the end of the control file.
Case 2: Loading Fixed-Format Fields:
Loads a datafile with fixed-length fields, stream-format records, all records the same length.
Case 3: Loading a Delimited, Free-Format File
Loads data from stream format records with delimited fields and sequence numbers. The
data is found at the end of the control file.
Case 4: Loading Combined Physical Records
Combines multiple physical records into one logical record corresponding to one database
row
Case 5: Loading Data into Multiple Tables
Loads data into multiple tables in one run
Case 6: Loading Using the Direct Path Load Method
Loads data using the direct path load method
Case 7: Extracting Data from a Formatted Report
Extracts data from a formatted report
Case 8: Loading Partitioned Tables
Loads partitioned tables.
Case 9: Loading LOBFILEs (CLOBs)
Adds a CLOB column called RESUME to the table emp, uses a FILLER field (RES_FILE), and
loads multiple LOBFILEs into the emp table.
Case 10: How to use TRIM, TO_NUMBER, TO_CHAR, User Defined Functions with SQL*Loader
How to use the functions TRIM, TO_CHAR/TO_NUMBER, and user defined functions in
connection with SQL*Loader
Case 11: Calling Stored Functions
How to call a Function from SQL*Loader
OPTIONS Clause
Continue Interrupted Load
Identifying Data Files
Loading into Non-Empty Tables
Loading into Multiple Tables
Case 1: Loading Variable-Length Data
A simple control file identifying one table and three columns to be loaded.
Including data to be loaded from the control file itself, so there is no separate datafile.
Loading data in stream format, with both types of delimited fields -- terminated and enclosed.
Control File
1) LOAD DATA
2) INFILE *
3) INTO TABLE dept
4) FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
5) (deptno, dname, loc)
6) BEGINDATA
12,RESEARCH,"SARATOGA"
10,"ACCOUNTING",CLEVELAND
11,"ART",SALEM
13,FINANCE,"BOSTON"
21,"SALES",PHILA.
22,"SALES",ROCHESTER
42,"INT'L","SAN FRAN"
Notes:
1. The LOAD DATA statement is required at the beginning of the control file.
2. INFILE * specifies that the data is found in the control file and not in an external file.
3. The INTO TABLE statement is required to identify the table to be loaded (DEPT) into. By default,
SQL*Loader requires the table to be empty before it inserts any records.
4. FIELDS TERMINATED BY specifies that the data is terminated by commas, but may also be
enclosed by quotation marks. Datatypes for all fields default to CHAR.
5. Specifies that the names of columns to load are enclosed in parentheses. Since no datatype is
specified, the default is a character of length 255.
Case 2: Loading Fixed-Format Fields
A separate datafile.
Data conversions.
In this case, the field positions and datatypes are specified explicitly.
Control File
1) LOAD DATA
2) INFILE 'ulcase2.dat'
3) INTO TABLE emp
4) (empno        POSITION(01:04) INTEGER EXTERNAL,
    ename        POSITION(06:15) CHAR,
    job          POSITION(17:25) CHAR,
    mgr          POSITION(27:30) INTEGER EXTERNAL,
    sal          POSITION(32:39) DECIMAL EXTERNAL,
    comm         POSITION(41:48) DECIMAL EXTERNAL,
5)  deptno       POSITION(50:51) INTEGER EXTERNAL,
6)  modifieddate "SYSDATE",
7)  customerid   CONSTANT "0"
   )
Notes:
1. The LOAD DATA statement is required at the beginning of the control file.
2. The name of the file containing data follows the keyword INFILE.
3. The INTO TABLE statement is required to identify the table to be loaded into.
4. Lines 4 and 5 identify a column name and the location of the data in the datafile to be loaded into
that column. EMPNO, ENAME, JOB, and so on are names of columns in table EMP. The datatypes
(INTEGER EXTERNAL, CHAR, DECIMAL EXTERNAL) identify the datatype of data fields in the file,
not of corresponding columns in the EMP table.
5. Note that the set of column specifications is enclosed in parentheses.
6. This clause inserts the current SYSDATE into the MODIFIEDDATE column.
7. This clause loads the constant value "0" into the CUSTOMERID column.
Datafile
Below are a few sample data lines from the file ULCASE2.DAT. Blank fields are set to null automatically.
7782 CLARK      MANAGER   7839 2572.50            10
7839 KING       PRESIDENT      5500.00            10
7934 MILLER     CLERK     7782 920.00             10
7566 JONES      MANAGER   7839 3123.75            20
7499 ALLEN      SALESMAN  7698 1600.00  300.00    30
7654 MARTIN     SALESMAN  7698 1312.50  1400.00   30
.
Case 3: Loading a Delimited, Free-Format File
Control File
This control file loads the same table as in Case 2, but it loads three additional columns (HIREDATE,
PROJNO, LOADSEQ). The demonstration table EMP does not have columns PROJNO and LOADSEQ. So if
you want to test this control file, add these columns to the EMP table with the command:
ALTER TABLE EMP ADD (PROJNO NUMBER, LOADSEQ NUMBER)
The data is in a different format than in Case 2. Some data is enclosed in quotation marks, some is set off
by commas, and the values for DEPTNO and PROJNO are separated by a colon.
1) -- Variable-length, delimited and enclosed data format
LOAD DATA
2) INFILE *
3) APPEND
INTO TABLE emp
4) FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
(empno, ename, job, mgr,
5) hiredate DATE(20) "DD-Month-YYYY",
sal, comm, deptno CHAR TERMINATED BY ':',
projno,
6) loadseq SEQUENCE(MAX,1))
7) BEGINDATA
8) 7782, "Clark", "Manager", 7839, 09-June-1981, 2572.50,, 10:101
7839, "King", "President", , 17-November-1981,5500.00,,10:102
7934, "Miller", "Clerk", 7782, 23-January-1982, 920.00,, 10:102
7566, "Jones", "Manager", 7839, 02-April-1981, 3123.75,, 20:101
7499, "Allen", "Salesman", 7698, 20-February-1981, 1600.00,
(same line continued)
300.00, 30:103
7654, "Martin", "Salesman", 7698, 28-September-1981, 1312.50,
(same line continued)
1400.00, 3:103
7658, "Chan", "Analyst", 7566, 03-May-1982, 3450,, 20:101
Notes:
1. Comments may appear anywhere in the command lines of the file, but they should not appear in
data. They are preceded with a double dash that may appear anywhere on a line.
2. INFILE * specifies that the data is found at the end of the control file.
3. Specifies that the data can be loaded even if the table already contains rows. That is, the table
need not be empty.
4. The default terminator for the data fields is a comma, and some fields may be enclosed by double
quotation marks (").
5. The data to be loaded into column HIREDATE appears in the format DD-Month-YYYY. The length of
the date field is specified as a maximum of 20. If a length is not specified, then the length depends
on the length of the date mask.
6. The SEQUENCE function generates a unique value in the column LOADSEQ. This function finds the
current maximum value in column LOADSEQ and adds the increment (1) to it to obtain the value
for LOADSEQ for each row inserted.
7. BEGINDATA specifies the end of the control information and the beginning of the data.
8. Although each physical record equals one logical record, the fields vary in length so that some
records are longer than others. Note also that several rows have null values for COMM.
Case 4: Loading Combined Physical Records
Combining multiple physical records to form one logical record with CONTINUEIF
Inserting negative numbers.
Indicating with REPLACE that the table should be emptied before the new data is inserted
Specifying a discard file in the control file using DISCARDFILE
Specifying a maximum number of discards using DISCARDMAX
Rejecting records due to duplicate values in a unique index or due to invalid data values
Control File
LOAD DATA
INFILE 'ulcase4.dat'
1) DISCARDFILE 'ulcase4.dsc'
2) DISCARDMAX 999
3) REPLACE
4) CONTINUEIF THIS (1) = '*'
INTO TABLE emp
(empno    POSITION(1:4)   INTEGER EXTERNAL,
 ename    POSITION(6:15)  CHAR,
 job      POSITION(17:25) CHAR,
 mgr      POSITION(27:30) INTEGER EXTERNAL,
 sal      POSITION(32:39) DECIMAL EXTERNAL,
 comm     POSITION(41:48) DECIMAL EXTERNAL,
 deptno   POSITION(50:51) INTEGER EXTERNAL,
 hiredate POSITION(52:60) INTEGER EXTERNAL)
Data File
The datafile for this case, ULCASE4.DAT, is listed below. Note the asterisks in the first position and, though
not visible, a new line indicator is in position 20 (following "MA", "PR", and so on). Note that CLARK's
commission is -10, and SQL*Loader loads the value, converting it to a negative number.
*7782 CLARK      MANAGER   7839 2572.50   -10       2512-NOV-85
*7839 KING       PRESIDENT      5500.00             2505-APR-83
*7934 MILLER     CLERK     7782 920.00              2508-MAY-80
*7566 JONES      MANAGER   7839 3123.75             2517-JUL-85
*7499 ALLEN      SALESMAN  7698 1600.00   300.00    25 3-JUN-84
*7654 MARTIN     SALESMAN  7698 1312.50   1400.00   2521-DEC-85
*7658 CHAN       ANALYST   7566 3450.00             2516-FEB-84
*     CHEN       ANALYST   7566 3450.00             2516-FEB-84
*7658 CHIN       ANALYST   7566 3450.00             2516-FEB-84
Rejected Records
The last two records are rejected, given two assumptions. If there is a unique index created on column
EMPNO, then the record for CHIN will be rejected because his EMPNO is identical to CHAN's. If EMPNO is
defined as NOT NULL, then CHEN's record will be rejected because it has no value for EMPNO.
Case 5: Loading Data into Multiple Tables
Control File
-- Loads EMP records from first 23 characters
-- Creates and loads PROJ records for each PROJNO listed
-- for each employee
LOAD DATA
INFILE 'ulcase5.dat'
BADFILE 'ulcase5.bad'
DISCARDFILE 'ulcase5.dsc'
1) REPLACE
2) INTO TABLE emp
(empno POSITION(1:4)
INTEGER EXTERNAL,
ename POSITION(6:15) CHAR,
deptno POSITION(17:18) CHAR,
mgr
POSITION(20:23) INTEGER EXTERNAL)
2) INTO TABLE proj
-- PROJ has two columns, both not null: EMPNO and PROJNO
3) WHEN projno != ' '
(empno POSITION(1:4)
INTEGER EXTERNAL,
3) projno POSITION(25:27) INTEGER EXTERNAL) -- 1st proj
3) INTO TABLE proj
4) WHEN projno != ' '
(empno POSITION(1:4)
INTEGER EXTERNAL,
4) projno POSITION(29:31) INTEGER EXTERNAL) -- 2nd proj
2) INTO TABLE proj
5) WHEN projno != ' '
(empno POSITION(1:4) INTEGER EXTERNAL,
5) projno POSITION(33:35) INTEGER EXTERNAL) -- 3rd proj
Notes:
REPLACE specifies that if there is data in the tables to be loaded (EMP and PROJ), SQL*Loader
should delete the data before loading new rows.
Multiple INTO clauses load two tables, EMP and PROJ. The same set of records is processed three
times, using different combinations of columns each time to load table PROJ.
WHEN loads only rows with non-blank project numbers. When PROJNO is defined as columns
25...27, rows are inserted into PROJ only if there is a value in those columns.
When PROJNO is defined as columns 29...31, rows are inserted into PROJ only if there is a value in
those columns.
When PROJNO is defined as columns 33...35, rows are inserted into PROJ only if there is a value in
those columns.
Data File
1234 BAKER      10 9999 101 102 103
1234 JOKER      10 9999 777 888 999
2664 YOUNG      20 2893 425 abc 102
5321 OTOOLE     10 9999 321  55  40
2134 FARMER     20 4555 236 456
2414 LITTLE     20 5634 236 456  40
6542 LEE        10 4532 102 321  14
2849 EDDS       xx 4555 294  40
4532 PERKINS    10 9999  40
1244 HUNT       11 3452 665 133 456
 123 DOOLITTLE  12 9940         132
1453 MACDONALD  25 5532     200
Case 6: Loading Using the Direct Path Load Method
Use of the direct path load method to load and index data
How to specify the indexes for which the data is pre-sorted.
Loading all-blank numeric fields as null
The NULLIF clause
Note: Specify the name of the table into which you want to load data; otherwise, you will see
SQL*Loader-927. Specifying DIRECT=TRUE as a command-line parameter is not an option when loading
into a synonym for a table.
The SORTED INDEXES clause identifies the indexes on which the data is sorted. This clause indicates
that the datafile is sorted on the columns in the EMPIX index. It allows SQL*Loader to optimize index
creation by eliminating the sort phase for this data when using the direct path load method.
The NULLIF...BLANKS clause specifies that the column should be loaded as NULL if the field in the
datafile consists of all blanks.
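As a sketch, the two clauses described above might appear in a direct path control file like this (the datafile name and field layout are illustrative):

```sql
LOAD DATA
INFILE 'ulcase6.dat'
INSERT
INTO TABLE emp
-- Direct path only: the datafile is pre-sorted on the EMPIX index columns
SORTED INDEXES (empix)
(empno  POSITION(1:4)   INTEGER EXTERNAL NULLIF empno=BLANKS,
 ename  POSITION(6:15)  CHAR,
 deptno POSITION(17:18) INTEGER EXTERNAL NULLIF deptno=BLANKS)
```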
Case 7: Extracting Data from a Formatted Report
Note: This example creates a trigger that uses the last value of unspecified fields.
Data File
The following listing of the report shows the data to be loaded:
Today's Newly Hired Employees

Dept  Job       Manager  MgrNo  Emp Name  EmpNo  Salary     (Comm)
----  --------  -------  -----  --------  -----  ---------  ------
20    Salesman  Blake    7698   Shepard   8061   $1,600.00   (3%)
                                Falstaff  8066   $1,250.00   (5%)
                                Major     8064   $1,250.00  (14%)
30    Clerk     Scott    7788   Conrad    8062   $1,100.00
                Ford     7369   DeSilva   8063   $  800.00
      Manager   King     7839   Provo     8065   $2,975.00
Insert Trigger
In this case, a BEFORE INSERT trigger is required to fill in the department number, job name, and
manager's number when these fields are not present on a data line. When values are present, they are
saved in global variables; when they are not present, the saved values are used.
The INSERT trigger and the package defining the global variables are:
CREATE OR REPLACE PACKAGE uldemo7 AS -- Global Package Variables
  last_deptno NUMBER(2);
  last_job    VARCHAR2(9);
  last_mgr    NUMBER(4);
END uldemo7;
/
CREATE OR REPLACE TRIGGER uldemo7_emp_insert
BEFORE INSERT ON emp
FOR EACH ROW
BEGIN
IF :new.deptno IS NOT NULL THEN
uldemo7.last_deptno := :new.deptno; -- save value for later
ELSE
:new.deptno := uldemo7.last_deptno; -- use last valid value
END IF;
IF :new.job IS NOT NULL THEN
uldemo7.last_job := :new.job;
ELSE
:new.job := uldemo7.last_job;
END IF;
IF :new.mgr IS NOT NULL THEN
uldemo7.last_mgr := :new.mgr;
ELSE
:new.mgr := uldemo7.last_mgr;
END IF;
END;
/
Note: The phrase FOR EACH ROW is important. If it were not specified, the INSERT trigger would fire only
once for each array of inserts, because SQL*Loader uses the array interface.
Control File
LOAD DATA
INFILE 'ULCASE7.DAT'
APPEND
INTO TABLE emp
1)  WHEN (57) = '.'
2)  TRAILING NULLCOLS
3)  (hiredate SYSDATE,
4)   deptno   POSITION(1:2)   INTEGER EXTERNAL(3)
5)            NULLIF deptno=BLANKS,
     job      POSITION(7:14)  CHAR TERMINATED BY WHITESPACE
6)            NULLIF job=BLANKS "UPPER(:job)",
     mgr      POSITION(28:31) INTEGER EXTERNAL
              TERMINATED BY WHITESPACE NULLIF mgr=BLANKS,
7)   ename    POSITION(34:41) CHAR
              TERMINATED BY WHITESPACE "UPPER(:ename)",
     empno    POSITION(45)    INTEGER EXTERNAL
              TERMINATED BY WHITESPACE,
8)   sal      POSITION(51)    CHAR TERMINATED BY WHITESPACE
              "TO_NUMBER(:sal,'$99,999.99')",
9)   comm     INTEGER EXTERNAL ENCLOSED BY '(' AND '%'
              ":comm * 100"
)
Notes:
1. The decimal point in column 57 (the salary field) identifies a line with data on it. All other lines in
the report are discarded.
2. The TRAILING NULLCOLS clause causes SQL*Loader to treat any fields that are missing at the end
of a record as null. Because the commission field is not present for every record, this clause says to
load a null commission instead of rejecting the record when only six fields are found instead of the
expected seven.
3. The employee's hire date is filled in using the current system date.
4. This specification generates a warning message because the specified length does not agree with
the length determined by the field's position. The specified length (3) is used.
5. Because the report only shows department number, job, and manager when the value changes,
these fields may be blank. This control file causes them to be loaded as null, and an RDBMS insert
trigger fills in the last valid value.
6. The SQL string changes the job name to uppercase letters.
7. It is necessary to specify a starting position here. If the job field and the manager field were both
blank, then the job field's TERMINATED BY BLANKS clause would cause SQL*Loader to scan
forward to the employee name field. Without the POSITION clause, the employee name field would
be mistakenly interpreted as the manager field.
8. Here, the SQL string translates the field from a formatted character string into a number. The
numeric value takes less space and can be printed with a variety of formatting options.
9. In this case, different initial and trailing delimiters pick the numeric value out of a formatted field.
The SQL string then converts the value to its stored form.
Case 8: Loading Partitioned Tables
Partitioning of data
Explicitly defined field positions and datatypes.
Loading using the fixed record length option
Control File
LOAD DATA
1) INFILE 'ulcase10.dat' "fix 129"
BADFILE 'ulcase10.bad'
TRUNCATE
INTO TABLE lineitem
PARTITION (ship_q1)
2) (l_orderkey      position (1:6)    char,
    l_partkey       position (7:11)   char,
    l_suppkey       position (12:15)  char,
    l_linenumber    position (16:16)  char,
    l_quantity      position (17:18)  char,
    l_extendedprice position (19:26)  char,
    l_discount      position (27:29)  char,
    l_tax           position (30:32)  char,
    l_returnflag    position (33:33)  char,
    l_linestatus    position (34:34)  char,
    l_shipdate      position (35:43)  char,
    l_commitdate    position (44:52)  char,
    l_receiptdate   position (53:61)  char,
    l_shipinstruct  position (62:78)  char,
    l_shipmode      position (79:85)  char,
    l_comment       position (86:128) char)
Notes:
1. Specifies that each record in the datafile is of fixed length (129 characters in this example). See
Input Data and Datafiles.
2. Identifies the column name and location of the data in the datafile to be loaded into each column.
Table Creation
In order to partition the data, the lineitem table is created using four (4) partitions according to the
shipment date:
Create table lineitem
(l_orderkey      number,
 l_partkey       number,
 l_suppkey       number,
 l_linenumber    number,
 l_quantity      number,
 l_extendedprice number,
 l_discount      number,
 l_tax           number,
 l_returnflag    char,
 l_linestatus    char,
 l_shipdate      date,
 l_commitdate    date,
 l_receiptdate   date,
 l_shipinstruct  char(17),
 l_shipmode      char(7),
 l_comment       char(43)
)
partition by range (l_shipdate)
(
partition ship_q1 values less than (TO_DATE('01-APR-1996', 'DD-MON-YYYY'))
tablespace p01,
partition ship_q2 values less than (TO_DATE('01-JUL-1996', 'DD-MON-YYYY'))
tablespace p02,
partition ship_q3 values less than (TO_DATE('01-OCT-1996', 'DD-MON-YYYY'))
tablespace p03,
partition ship_q4 values less than (TO_DATE('01-JAN-1997', 'DD-MON-YYYY'))
tablespace p04
)
Input Data File
The datafile for this case, ulcase10.dat, is listed below. Each record is 129 characters in length. Note that
five (5) blanks precede each record in the file.
1 151978511724386.60 7.04.0NO09-SEP-6412-FEB-9622-MAR-96DELIVER IN PERSONTRUCK
iPBw4mMm7w7kQ zNPL i261OPP
Case 9: Loading LOBFILEs (CLOBs)
Control File
LOAD DATA
INFILE *
INTO TABLE EMP
REPLACE
FIELDS TERMINATED BY ','
( EMPNO  INTEGER EXTERNAL,
  ENAME  CHAR,
  JOB    CHAR,
  MGR    INTEGER EXTERNAL,
  SAL    DECIMAL EXTERNAL,
  COMM   DECIMAL EXTERNAL,
  DEPTNO INTEGER EXTERNAL,
1) RES_FILE FILLER CHAR,
2) "RESUME" LOBFILE (RES_FILE) TERMINATED BY EOF NULLIF RES_FILE = 'NONE'
)
BEGINDATA
7782,CLARK,MANAGER,7839,2572.50,,10,ulcase91.dat
7839,KING,PRESIDENT,,5500.00,,10,ulcase92.dat
7934,MILLER,CLERK,7782,920.00,,10,ulcase93.dat
7566,JONES,MANAGER,7839,3123.75,,20,ulcase94.dat
7499,ALLEN,SALESMAN,7698,1600.00,300.00,30,ulcase95.dat
7654,MARTIN,SALESMAN,7698,1312.50,1400.00,30,ulcase96.dat
7658,CHAN,ANALYST,7566,3450.00,,20,NONE
Notes:
1. RES_FILE is a filler field. A filler field is assigned values from the data field to which it is mapped,
but it is not itself loaded into the table.
2. RESUME is loaded as a CLOB. The LOBFILE clause specifies the field (RES_FILE) that holds the
name of the file containing the data for the LOB column.
Case 10: How to use TRIM, TO_NUMBER, TO_CHAR, User Defined Functions with SQL*Loader
Data File
";"xxxxxxxSmithxxx";CLERK;2459,25
";"xxxxxxxAllenxxx";SALESMAN;4563,9
";"xxxxxxxWardxxxx";SALESMAN;4815,81
";"xxxxxxxJonesxxx";MANAGER;9765,33
";"xxxxxxxMartinxx";SALESMAN;4214,56
";"xxxxxxxBlakexxx";MANAGER;10333,87
";"xxxxxxxGablexxx";MANAGER;11011,11
";"xxxxxxxTigerxxx";ANALYST;6865,88
";"xxxxxxxKingxxxx";PRESIDENT;18955,45
";"xxxxxxxTurnerxx";SALESMAN;5324,44
";"xxxxxxxAdamsxxx";CLERK;1899,48
";"xxxxxxxJamesxxx";CLERK;2288,99
";"xxxxxxxFordxxxx";ANALYST;7564,83
";"xxxxxxxMillerxx";CLERK;1865,93
1) TRIM deletes the leading/trailing blanks in the column FIRST_NAME (i.e. " Martin " becomes
"Martin").
2) TRIM deletes the leading/trailing 'x' characters in the column LAST_NAME (i.e. "xxxxxxxSmithxxx"
becomes "Smith").
3) TO_NUMBER shows that the format of the numbers in the column SALARY is in the form 99999D99,
that is, a maximum of 5 integer digits with a maximum of 2 decimal places. The decimal separator is ','.
If the format is not specified, then the records are not loaded (ORA-1722 invalid number, if
NLS_NUMERIC_CHARACTERS = '.,').
4) The column BONUS is calculated with the user-defined function GET_BONUS. The function expects an
input parameter, DEPARTMENT (VARCHAR2), and returns the value BONUS (NUMBER(2,2)).
5) The column DESCRIPTION is a composition of the information from the previous columns. The function
DECODE checks if a bonus is available to the department. If no bonus is available, then the message 'No
bonus' will be printed. The new thing here is the function TO_CHAR. This function modifies the format of
the BONUS in this form: sign, 2 integer digits with leading zeros, decimal separator, 2 post-decimal
positions with trailing zeros.
6) The column TOTAL is calculated with the user-defined function CALC_SAL (the BONUS, if available, is
applied to the SALARY).
The result after the loading procedure looks like this in the table TEST:
SQL> select * from test;

        ID FIRST_NAME           LAST_NAME            DEPARTMENT               SALARY
---------- -------------------- -------------------- -------------------- ----------
         1 Martin               Smith                CLERK                   2459.25
         2 David                Allen                SALESMAN                 4563.9
         3 Brad                 Ward                 SALESMAN                4815.81
         4 Marvin               Jones                MANAGER                 9765.33
         5 Dean                 Martin               SALESMAN                4214.56
         6 John                 Blake                MANAGER                10333.87
         7 Clark                Gable                MANAGER                11011.11
         8 Scott                Tiger                ANALYST                 6865.88
         9 Ralph                King                 PRESIDENT              18955.45
        10 Tina                 Turner               SALESMAN                5324.44
        11 Bryan                Adams                CLERK                   1899.48
        12 Jesse                James                CLERK                   2288.99
        13 John                 Ford                 ANALYST                 7564.83
        14 John                 Miller               CLERK                   1865.93
SKIP = n     -- Number of logical records to skip (DEFAULT 0)
LOAD = n     -- Number of logical records to load (DEFAULT all)
ERRORS = n   -- Number of errors to allow (DEFAULT 50)
ROWS = n     -- Number of rows in conventional path bind array (DEFAULT 64)
BINDSIZE = n -- Size of conventional path bind array in bytes
SILENT = {HEADER | FEEDBACK | ERROR | DISCARDS | ALL}
             -- Suppress messages during run
For example:
OPTIONS (BINDSIZE=10000, SILENT=(ERRORS, FEEDBACK) )
Values specified on the command line override values specified in the control file. With this precedence,
the OPTIONS keyword in the control file establishes default values that are easily changed from the
command line.
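For instance, assuming a control file example.ctl whose OPTIONS clause specifies ERRORS=10 (the file name and parameter values here are illustrative), the value given on the command line wins:

SQLLDR USERID=scott/tiger CONTROL=example.ctl ERRORS=25

This run allows 25 errors, not the 10 set in the control file's OPTIONS clause.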
Continuing Interrupted Loads
If SQL*Loader runs out of space for data rows or index entries, the load is discontinued. (For example, the
table might reach its maximum number of extents.) Discontinued loads can be continued after more space
is made available.
When a load is discontinued, any data already loaded remains in the tables, and the tables are left in a
valid state. SQL*Loader's log file tells you the state of the tables and indexes and the number of logical
records already read from the input data file. Use this information to resume the load where it left off.
For example:
SQLLOAD / CONTROL=FAST1.CTL SKIP=345
The CONTINUE_LOAD DATA statement is used to continue a discontinued direct path load involving multiple
tables with a varying number of records to skip. For more information on this command, see Chapter 6 of
the "Oracle7 Server Utilities User's Guide".
The INTO TABLE clause allows you to tell which table you want to load data into. To load multiple tables,
you would include one INTO TABLE clause for each table you wish to load.
The INTO TABLE clause may continue with some options for loading that table. For example, you may
specify different options (INSERT, APPEND, REPLACE) for each table in order to tell SQL*Loader what to do
if data already exists in the table.
The WHEN clause appears after the table name and is followed by one or more field conditions. For
example, the following clause indicates that any record with the value 'q' in the fifth column position
should be loaded:
WHEN (5) = 'q'
A WHEN clause can contain several comparisons as long as each is preceded by AND. Parentheses are
optional but should be used for clarity with multiple comparisons joined by AND. For example:
WHEN (DEPTNO = '10') AND (JOB = 'SALES')
To evaluate the WHEN clause, SQL*Loader first determines the values of all the fields in the record. Then
the WHEN clause is evaluated. A row is inserted into the table only if the WHEN clause is true.
When the control file specifies more fields for a record than are present in the record, SQL*Loader must
determine whether the remaining (specified) columns should be considered null, or whether an error
should be generated. The TRAILING NULLCOLS clause tells SQL*Loader to treat any relatively positioned
columns that are not present in the record as null columns. For example, if the following data
10 Accounting
is read with the following control file
INTO TABLE dept
TRAILING NULLCOLS
( deptno CHAR TERMINATED BY " ",
dname CHAR TERMINATED BY WHITESPACE,
loc CHAR TERMINATED BY WHITESPACE )
and the record ends after DNAME, then the remaining LOC field is set to null. Without the TRAILING
NULLCOLS clause, an error would be generated, due to missing data.
Oracle datatypes
Datatype summary for Oracle 7, 8 & 9
VARCHAR2(size)
  Variable-length character string.
  Max size: Oracle 7 - 2000 bytes; Oracle 8 - 4000 bytes; Oracle 9 - 4000 bytes. Minimum is 1.

NVARCHAR2(size)
  Variable-length character string of the national character set.
  Max size: Oracle 7 - N/A; Oracle 8 - 4000 bytes; Oracle 9 - 4000 bytes. Minimum is 1.

VARCHAR
  Currently a synonym for VARCHAR2; use VARCHAR2 instead, as the meaning of VARCHAR
  may change in a future release.

CHAR(size)
  Fixed-length character data.
  Max size: Oracle 7 - 255 bytes; Oracle 8 - 2000 bytes; Oracle 9 - 2000 bytes.
  Default and minimum size is 1 byte.

NCHAR(size)
  Fixed-length character data of the national character set.
  Max size: Oracle 7 - N/A; Oracle 8 - 2000 bytes; Oracle 9 - 2000 bytes.

NUMBER(p,s)
  Numeric data. The precision p can range from 1 to 38; the scale s can range from
  -84 to 127 (all versions).

PLS_INTEGER
  Signed integers, PL/SQL only. PLS_INTEGER values require less storage and provide
  better performance than NUMBER values, so use PLS_INTEGER where you can!

BINARY_INTEGER
  Signed integers, PL/SQL only (largely superseded by PLS_INTEGER).

LONG
  Character data of variable length.
  Max size: Oracle 7 - 2 gigabytes; Oracle 8 - 2 gigabytes; Oracle 9 - 2 gigabytes,
  but now deprecated.

DATE
  Valid date range: Oracle 7 - from January 1, 4712 BC to December 31, 4712 AD;
  Oracle 8 and 9 - from January 1, 4712 BC to December 31, 9999 AD.

TIMESTAMP (fractional_seconds_precision)
  Date with fractional seconds (Oracle 9).
  Accepted values of fractional_seconds_precision are 0 to 9 (default = 6).

TIMESTAMP (fractional_seconds_precision) WITH {LOCAL} TIME ZONE
  As above, with a time zone displacement value (Oracle 9).
  Accepted values of fractional_seconds_precision are 0 to 9 (default = 6).

INTERVAL YEAR (year_precision) TO MONTH
  Period of time in years and months (Oracle 9).

INTERVAL DAY (day_precision) TO SECOND (fractional_seconds_precision)
  Period of time in days, hours, minutes and seconds (Oracle 9).
  day_precision may be 0 to 9 (default = 2);
  fractional_seconds_precision may be 0 to 9 (default = 6).

RAW(size)
  Raw binary data.
  Max size: Oracle 7 - 255 bytes; Oracle 8 - 2000 bytes; Oracle 9 - 2000 bytes.

LONG RAW
  Raw binary data of variable length.
  Max size: Oracle 7 - 2 gigabytes; Oracle 8 - 2 gigabytes; Oracle 9 - 2 gigabytes,
  but now deprecated.

ROWID
  Hexadecimal string representing the unique address of a row in its table.

UROWID
  Universal rowid: the logical or physical address of a row (Oracle 8 and 9).
  The maximum size and default is 4000 bytes.

MLSLABEL
  Binary format of an operating system label. This datatype is used with
  Trusted Oracle7.

CLOB
  Character large object. Max size: Oracle 7 - N/A; Oracle 8 - 4 gigabytes;
  Oracle 9 - 4 gigabytes.

NCLOB
  Character large object in the national character set. Max size: Oracle 7 - N/A;
  Oracle 8 - 4 gigabytes; Oracle 9 - 4 gigabytes.

BLOB
  Binary large object. Max size: Oracle 7 - N/A; Oracle 8 - 4 gigabytes;
  Oracle 9 - 4 gigabytes.

BFILE
  Pointer to a binary file stored outside the database. Max size: Oracle 7 - N/A;
  Oracle 8 - 4 gigabytes; Oracle 9 - 4 gigabytes.
CHAR:
Over time, updated VARCHAR2 columns will sometimes create chained rows. Because CHAR columns are
fixed width, they are not affected by this, so less DBA effort is required to maintain performance.
NUMBER
When retrieving data for a NUMBER column, consider (if you can) using the PL/SQL datatype:
PLS_INTEGER for better performance.
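As a minimal sketch (the loop bound is arbitrary), a PL/SQL counter can be declared as PLS_INTEGER instead of NUMBER:

SET SERVEROUTPUT ON
DECLARE
  v_total PLS_INTEGER := 0;   -- less storage and faster arithmetic than NUMBER
BEGIN
  FOR i IN 1 .. 1000 LOOP
    v_total := v_total + i;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('Total: ' || v_total);
END;
/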
LONG
You should start using CLOB (or BLOB for binary data) instead of LONG
Equivalent datatypes across databases (the first generic type and some cells were lost in the source and are marked "?"):

                   ?              int6         int1         char(n)       blob
Oracle 8           NUMBER(10)     NUMBER(6)    NUMBER(1)    VARCHAR2(n)   BLOB
Sybase System 10   NUMERIC(10)    NUMERIC(6)   NUMERIC(1)   VARCHAR(n)    IMAGE
MS Access 97       ?              Single       Byte         TEXT(n)       LONGBINARY
TERADATA           INTEGER        DECIMAL(6)   DECIMAL(1)   VARCHAR(n)    VARBYTE(20480)
DB2                INTEGER        DECIMAL(6)   DECIMAL(1)   VARCHAR(n)    VARCHAR(255)
RDB                INTEGER        DECIMAL(6)   DECIMAL(1)   VARCHAR(n)    LONG VARCHAR
INFORMIX           INTEGER        DECIMAL(6)   DECIMAL(1)   VARCHAR(n)    BYTE
SYBASE             NUMERIC(10)    NUMERIC(6)   NUMERIC(1)   VARCHAR(n)    IMAGE
?                  NUMERIC(10)    NUMERIC(6)   NUMERIC(1)   VARCHAR(n)    IMAGE
RedBrick           integer        int          int          char(n)       char(1024)
INGRES             INTEGER        INTEGER      INTEGER      VARCHAR(n)    VARCHAR(1500)
Type of Indexes
Oracle8i also allows you to rebuild your indexes online. In the past, creating or rebuilding an index
required a full lock on the table; on a large table, this could mean that an application is unusable for
several hours.
Now, however, Oracle allows you to create or rebuild the index while users can still perform the full range
of data operations. To do this, Oracle creates the index structure before populating it. While the index is
being populated, all changes to the table are recorded in a journal table. As the index is completed, the
journal table changes are then merged in.
Brief table locks are taken while the index structure is created and the journal table is brought into the
index. To build an index online, you use the following syntax:
CREATE INDEX my_index ON my_table (my_field) ONLINE;
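An existing index can likewise be rebuilt without blocking DML by adding the same keyword (index name as in the example above):

ALTER INDEX my_index REBUILD ONLINE;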
Specify a higher PCTFREE for OLTP applications if the table is a popular one, incurring a lot of DML
changes (via interactive user screens).
If the index-creation time is critical, specify a lower PCTFREE. This will pack more rows into each leaf
block, thereby avoiding the need for further splitting at creation time. This is of paramount significance to
shops that have 24x7 availability requirements. Index creation in most cases requires considerable
downtime (especially if the table is a multi-million-row table). The less index-creation time needed, the
smaller the maintenance window can be. I have seen this tiny, often unnoticed parameter save around
20% of index-creation time. At a high-availability site, an index on a table containing around 11 million
rows took me about 80 minutes to build using a PCTFREE of 30 and a parallel degree of 4. The same index
on the same table, with 13.5 million rows, took me around 90 minutes to create with a PCTFREE of 0
(without any hardware/software enhancements). NOLOGGING (minimal redo) was on during both creations.
For any column where the values are constantly increasing, it is probably a good idea to set a very low
PCTFREE (even zero). This is because only the rightmost leaf block will ever be inserted into, making the
tree grow towards the right; the leftmost leaves remain static. So there is no sense in leaving any part of
those blocks empty with a non-zero PCTFREE.
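PCTFREE is given directly in the CREATE INDEX statement; a sketch for a sequence-fed key (the table, index and parallel-degree values are illustrative, mirroring the scenario described above):

CREATE INDEX orders_id_ix ON orders (order_id)
PCTFREE 0      -- values only ever increase, so pack the leaf blocks full
PARALLEL 4
NOLOGGING;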
1- Partitioned Indexes
Like tables, indexes can also be partitioned; but with indexes you have more options because the
underlying table might or might not also be partitioned. The objective of this type of index is to separate
the index into smaller partitions, just as we do now for a database table. There are essentially two
different types of partitioned indexes available:
Global indexes--These are created in a manner different from the underlying partitioning of the table
that is indexed.
Local indexes--These are partitioned in the same manner as the underlying table partitioning.
Global Indexes
To create a global partitioned index, use the CREATE INDEX parameter GLOBAL. This specifies that the
index will be a global index. Further partitioning of the index is accomplished by using the following
parameters:
GLOBAL--This parameter specifies a global partitioned index.
PARTITION part_name--This parameter is used to identify the partition. If you do not specify the
partition name, a default name will be provided. It is not usually necessary to provide the partition
name.
VALUES LESS THAN--This parameter is used to specify the range that is allocated for that particular
partition, in the same way as the partition was specified in the CREATE TABLE statement (discussed
earlier).
NOTE: The last partition should contain the keyword MAXVALUE for its range.
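A sketch of a global partitioned index (the table name and range boundaries are invented for illustration; note the MAXVALUE keyword on the last partition):

CREATE INDEX sales_amt_gx ON sales (amount)
GLOBAL PARTITION BY RANGE (amount)
  (PARTITION p_low  VALUES LESS THAN (1000),
   PARTITION p_mid  VALUES LESS THAN (10000),
   PARTITION p_high VALUES LESS THAN (MAXVALUE));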
Local Indexes
In contrast to the global index, a local partitioned index is individually created on each partition. If you
specify a local partitioned index, Oracle automatically maintains the index's partitioning along with that of
the underlying table.
Local partitioned indexes are created through the use of the LOCAL parameter with the CREATE INDEX
statement. It is unnecessary to provide partitioning information because the underlying table partitioning
will be used. A local index can be created with the following syntax:
CREATE INDEX "ETW".dogs_ix1
ON DOGS (ID)
LOCAL;
Because the index is local, all partition changes to the table will be automatically reflected on the index
partitions as well.
Local partitioned indexes have some inherent advantages that are similar to the advantages you get from
partitioned tables. These advantages include the following:
Because the index exists entirely on one partition, any maintenance operations affect only that one
partition.
The Oracle optimizer can use the local index to generate better query plans based on the fact that a
local index is used.
If a partition is lost and must be recovered, only the data and index for that particular partition needs
to be recovered. With a global index, the entire index would need recovery.
As you can see, there are many advantages of using both global and local partitioned indexes.
2- Index-Only Tables
Many systems contain several small tables (1 to 3 columns) where all of the columns together form the
primary key. This is typically the case for tables created to physically implement conceptual relations of
the type (0,n)-(0,n). However, there exists an extremely efficient way to create such tables, by using a
B*-tree structure. In Oracle 8 an index-organized table, that is, a table with the same physical structure
as an index, allows us to do exactly that. In an index-organized table, the database engine places the
data values in a table segment, but with a B*-tree structure.
An index-only table is a schema object introduced in Oracle8. An index-only table is similar to an index,
but whereas an index contains the primary key value and a ROWID pointing to where the data is kept, the
index-only table stores the column data in the leaf block of the index.
Because the leaf blocks of the Oracle index are traditionally very small and tightly packed, there can be
some drawbacks to having large rows stored there. Oracle has developed a way to compensate for this: If
rows become too large (by a set threshold), the row data is stored in an overflow area as specified in the
CREATE TABLE statement. This creates storage more like the traditional index and table relationship.
An index-only table contains the same structure as the Oracle B*-tree index. Only the leaf blocks have
changed. Index-only tables have many of the attributes of both indexes and tables, but there are a few
exceptions:
Because it is part index and part table, no other indexes can be added to the index-only table.
Tables that are not accessed via the primary key value are not good candidates for index-only tables. Also,
tables whose primary key values are updated and tables that have frequent insertions are not good
candidates for index-only tables.
How to Create Index-Only Tables
Index-only tables are created with the CREATE TABLE command; the ORGANIZATION INDEX qualifier is
used to identify the table as index-only. The following qualifier is used in creating index-only tables:
ORGANIZATION INDEX--This qualifier specifies an index-only table organization.
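A minimal sketch of an index-only table (the names are invented; note that the keyword Oracle actually accepts is ORGANIZATION INDEX, and an optional OVERFLOW segment catches rows that exceed the threshold):

CREATE TABLE states
( code CHAR(2) PRIMARY KEY,    -- an index-organized table must have a primary key
  name VARCHAR2(30) )
ORGANIZATION INDEX
PCTTHRESHOLD 20                -- rows using more than 20% of the block spill over
OVERFLOW TABLESPACE users;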
possible. Other column types that might have low cardinality include
Marital status
Account status (good or bad)
Sales region (if there are only a few)
Rank (if there are only a few)
Special notes (whether there is a note)
With columns that have low cardinality, the bitmap index can greatly improve performance. Columns with
high cardinality are not candidates for bitmap indexes.
Locking issues affect data manipulation operations in Oracle. As a result, bitmapped indexes are not
appropriate for OLTP applications that have a high level of concurrent insert, update and delete operations.
Concurrency is usually not an issue in a data-warehousing environment where the data is maintained by
bulk loads, inserts and updates.
In addition, bitmapped index maintenance is deferred until the end of the bulk DML operation. If 100 rows
are inserted into a table, the inserted rows are placed into a sort buffer and the updates of all 100 index
entries are applied as a group. As a result, bitmapped indexes are appropriate for most decision-support
applications (even those that have bulk updates applied on a regular basis).
Mass updates, inserts and deletes will run faster if you drop the bitmapped indexes, execute the DML and
re-create the bitmapped indexes when the DML completes. Run timings using the straight DML and
compare them to the total time consumed by the drop bitmapped index / execute DML / re-create
bitmapped index process.
How to Create Bitmapped Indexes
A bitmap index is created with the CREATE INDEX command with the BITMAP qualifier. For example, to
create a bitmap index on the SEX column in the DOGS table, you can use the following syntax:
CREATE BITMAP INDEX "ETW".dogs_bx1
ON DOGS (SEX);
This simple statement will create the bitmap index on the column specified. At this time, bitmap indexes
cannot be created with the graphical tools.
and full index scans can be performed on a reverse-key index. Therefore, it is not a good idea to build a
reverse-key index on a column that might use range-scans.
the index. This can be accomplished by using the ANALYZE INDEX VALIDATE STRUCTURE command.
Normally, the ANALYZE INDEX command creates either computed or estimated statistics for the index that
can be seen in the DBA_INDEXES view. This action may produce unintentional side effects, especially if
the index has not previously been analyzed. The VALIDATE STRUCTURE command can be safely executed
without affecting the optimizer. The VALIDATE STRUCTURE command populates the SYS.INDEX_STATS
table only. The SYS.INDEX_STATS table can be accessed with the public synonym INDEX_STATS. The
INDEX_STATS table will only hold validation information for one index at a time. You will need to query
this table before validating the structure of the next index.
Below is an example of ANALYZE INDEX VALIDATE STRUCTURE and sample output from INDEX_STATS:
ANALYZE INDEX shopping_basket_pk VALIDATE STRUCTURE;
SELECT name,height,lf_rows,lf_blks,del_lf_rows,distinct_keys,used_space
FROM INDEX_STATS;
NAME                      HEIGHT LF_ROWS LF_BLKS DEL_LF_ROW DISTINCT_K USED_SPACE
------------------------- ------ ------- ------- ---------- ---------- ----------
SHOPPING_BASKET_PK             2       1       3          1          1         65
I have the information, now what?
There are two rules of thumb to help determine if the index needs to be rebuilt. If it is determined that the
index needs to be rebuilt, this can easily be accomplished by the ALTER INDEX REBUILD command.
Although not necessarily recommended, this command could be executed during normal operating hours.
Rebuilding the index uses the existing index as a basis. The alternative is to drop and re-create the index.
Creating an index uses the base table as its data source, which requires a lock on the table, and the index
is also unavailable during creation.
First rule of thumb is if the index has height greater than four, rebuild the index. For most indexes, the
height of the index will be quite low, i.e. one or two. I have seen an index on a 3 million-row table that
had height three. An index with height greater than four may need to be rebuilt as this might indicate a
skewed tree structure. This can lead to unnecessary database block reads of the index. It is helpful to
know the data structure for the table and index. Most times, the index height should be two or less, but
there are exceptions.
The second rule of thumb is that the deleted leaf rows should be less than 20% of the total number of leaf
rows. An excessive number of deleted leaf rows indicates that a high number of deletes or updates have
occurred to the index column(s). The index should be rebuilt to better balance the tree. The INDEX_STATS
table can be queried to determine if there are excessive deleted leaf rows in relation to the total number
of leaf rows. Let's look at an example:
ANALYZE INDEX item_basket_pk VALIDATE STRUCTURE;
SELECT name,height,lf_rows,del_lf_rows,
(del_lf_rows/lf_rows)*100 as ratio
FROM INDEX_STATS;
NAME                               HEIGHT    LF_ROWS DEL_LF_ROW      RATIO
------------------------------ ---------- ---------- ---------- ----------
ITEM_BASKET_PK                          1        235         74 31.4893617
In this example, the ratio of deleted leaf rows to total leaf rows is clearly above 20%. This is a good
candidate for rebuilding. Let's rebuild the index and examine the results.
ALTER INDEX item_basket_pk REBUILD;
ANALYZE INDEX item_basket_pk VALIDATE STRUCTURE;
SELECT name,height,lf_rows,del_lf_rows,
(del_lf_rows/lf_rows)*100 as ratio
FROM INDEX_STATS;
NAME                               HEIGHT    LF_ROWS DEL_LF_ROW      RATIO
------------------------------ ---------- ---------- ---------- ----------
ITEM_BASKET_PK                          1        161          0          0
The index is rebuilt and validated once again. Examining the INDEX_STATS table shows that the 74
deleted leaf rows were dropped from the index. Notice that the total number of leaf rows went from 235 to
161, which is a difference of 74 leaf rows. This index should provide better performance for the
application.
validate_idx.sql
This script will check indexes to find candidates for rebuilding.
Run this script in SQL*Plus as a user with SELECT ANY TABLE
privileges.
This script can be used and modified without permission. Run this
script at your own risk! The script author is not responsible for
any problems that may arise from running this script.
vCursor := DBMS_SQL.OPEN_CURSOR;
/* Set up dynamic string to validate structure */
vAnalyze := 'ANALYZE INDEX ' || vOwner || '.' || vIdxName || ' VALIDATE STRUCTURE';
DBMS_SQL.PARSE(vCursor,vAnalyze,DBMS_SQL.V7);
vNumRows := DBMS_SQL.EXECUTE(vCursor);
/* Close DBMS_SQL cursor */
DBMS_SQL.CLOSE_CURSOR(vCursor);
/* Does index need rebuilding? */
/* If so, then generate command */
SELECT height,lf_rows,del_lf_rows INTO vHeight,vLfRows,vDLfRows
FROM INDEX_STATS;
IF vDLfRows = 0 THEN
/* handle case where div by zero */
vDLfPerc := 0;
ELSE
vDLfPerc := (vDLfRows / vLfRows) * 100;
END IF;
IF (vHeight > vMaxHeight) OR (vDLfPerc > vMaxDel) THEN
DBMS_OUTPUT.PUT_LINE('ALTER INDEX ' || vOwner || '.' || vIdxName || ' REBUILD;');
END IF;
END LOOP;
CLOSE cGetIdx;
END;
/
Database Comparison
Table of Contents
1. Introduction
2. Operational Concerns
a. Scalability
b. Platform Availability
c. Networking & Internet Readiness
3. Vendor Related Issues
a. Licensing
Oracle's products are definitely not the cheapest on the market. If an evaluation of the application
necessitates a high need for reliability, scalability, security and performance, then Oracle should be
considered. Oracle is the world's leading supplier of software for information management, holding 27%
of the database market share across all platforms. Oracle is the undisputed database leader on UNIX
platforms, commanding 60.9% of the market share according to Dataquest.[iii],[iv]
2. Operational Concerns
a. Scalability
Scalability in the context of database software is defined as the software's ability to continue to perform
at a similar level with a larger amount of data and a growing number of users and transactions.
Amount of Data
Both SQL Server 7.0 and Oracle 8i are designed to be client-server database products that can take
advantage of distributed database architecture.
A distributed database is a network of databases managed by multiple database servers that appears to a
user as a single database. This means the database could be distributed across several disks and servers
with multiple processors. The data of all databases in the distributed database can be simultaneously
accessed and modified. The database architecture on the server will dictate how fast the transaction
response time is. The speed of transactions can vary greatly based on the database design as well as
server hardware configurations, including RAM and the number and speed of the CPUs.
SQL Server 7.0 can grow up to 1,048,516 terabytes. Microsoft uses SMP (symmetric multiprocessing,
on systems with 4 processors) technology to distribute databases. Other maximum sizes and numbers can
be referenced in Appendix B, which outlines other technical specifications of SQL Server 7.0.
Oracle 8i is scalable up to hundreds of terabytes to manage very large databases (VLDB). Oracle takes
advantage of distributed processing and storage capabilities through architectural features that use more
than one processor to divide the processing for a set of related jobs. This distributed architecture is a good
example of the expression "the sum of the parts is greater than the whole": as individual processors work
on a subset of related tasks, performance of the whole system is improved.
Server Engine
On the server-engine criteria, including support for multiple CPUs, both SQL Server 7.0 and Oracle 8i
were rated between Good and Excellent.
SQL Server 7.0 still lags behind in its ability to support multimedia data and in programmability, which
are necessary for many Internet applications. Third-party software will have to be used to store special
images, sound, video or geographic data. SQL Server 7.0 doesn't support Java, which is an industry
standard for developing network applications.
Oracle 8i is the best product for companies wanting to move their database applications to the Web.
Oracle leads the market in handling of multimedia objects. Multimedia support is particularly relevant
when building Web-based applications like online stores that include multimedia items such as pictures or
video clips of items for sale. Oracle uses a product called JServer, which brings Java and relational
databases together. It allows for controlling the database through Java and supports the creation of
JavaBeans. JavaBeans are the basic building blocks for Java-based Internet applications, and are (or will
be) supported by just about every high-end Internet application server on the market.
In the ZDNet Scorecard (Appendix C), SQL Server 7.0 and Oracle 8i were rated as follows with respect to
their Internet-readiness features:

                                          SQL Server 7.0   Oracle 8i
Web connectivity                          Poor             Excellent
Support for sound, media, video, images   Poor             Excellent
Full text search                          Good             Excellent
Oracle 8i
Oracle's pricing structure is different from Microsoft's, in that it doesn't charge per server license or client
access license. Rather, Oracle charges by licensing units: named user, concurrent device and power unit.
A named user is defined as an individual who is authorized by his/her company to use the Oracle
Software programs, regardless of whether the individual is actively using these programs at any given
time.[vii]
A concurrent device is defined as an input device accessing the program on the designated system at any
given point in time. The number of "Concurrent Devices" you are licensed for is the maximum number of
input devices accessing the programs on the Designated System at any given point in time. If multiplexing
hardware or software (e.g., a TP Monitor or a Web server product) is used, this number must be measured
at the multiplexing front-end.vii
A power unit is defined as one MHz of power in any Intel compatible or RISC processor in any computer of
the Designated Systems on the Order Confirmation page on which the Oracle software programs are
installed and operating. (Intel refers to Intel Solaris, Linux, and Windows NT; RISC refers to Sun SPARC
Solaris, HP-UX, and IBM/AIX. A "Processor" license shall be for the actual number of processors installed
in the licensed computer and running the Oracle program(s), regardless of the number of processors
which the computer is capable of running.)vii
A named user licensing unit costs $600, a concurrent device costs $1,495 and a power unit costs $200 for
the Oracle 8i Enterprise Edition. These prices are 5 times more than what Oracle charges for the Standard
Edition.
The Enterprise Edition includes these advanced features on top of the Standard Edition: large-database
partitioning (which helps you keep monster gigabyte-size databases under control), flexible security
features, and speed features such as bitmapped indexes, summary tables, and parallelism.
Two other modules that Oracle offers for enhanced Web integration and multimedia handling are Oracle
JServer Standard Edition and WebDB, which, if necessary, add to the total cost of the Oracle solution.
They are also priced based on the licensing units discussed above.
b. Support and Maintenance
Availability of qualified database administrators (DBAs) and programmers is one issue that cannot be
overlooked in considering which database will be best for the given organization and application. Due to
the relationship between supply, demand and cost, the shortage of Oracle DBAs and programmers can
mean only one thing: they are hard to find, and when one is available, they command a very high salary.
The nature of Oracle is that it can be more difficult to program and administer, so it requires specially
trained personnel. SQL Server, on the other hand, is an easier product to learn and administer, so the
number of available programmers is higher and it is less expensive to staff a database project.
No cost information could be obtained on the annual maintenance fees to remain current on the licensing
agreements with either Oracle or SQL Server.
4. User Considerations
a. Database Administrator (DBA) Concerns
Recovery & Backup
In every database system, the possibility of a system or hardware failure always exists. Should a failure
occur and affect the database, the database must be recovered. The goals after a failure are to ensure
that the effects of all committed transactions are reflected in the recovered database and to return to
normal operations as quickly as possible while insulating users from problems caused by the failure.
Databases can fail due to power outages, operating system crashes, disk failure or operator error.
Both SQL Server 7.0 and Oracle rely on a two-phase commit approach, which allows users to control a
logical set of SQL statements so they all either succeed or fail as a unit. This two-phase mechanism
guarantees that no matter what type of system or network failure might occur, a distributed transaction
either commits on all involved nodes or rolls back on all involved nodes, to maintain data consistency
across the global distributed database.
Monitoring & Tuning Capabilities
SQL Server 7.0 is an exceptionally easy product to administer and is more forgiving than previous SQL
Server versions. SQL Server 7.0 has an auto-tuning feature that allows memory to be self-managed, and
there are several new wizards that simplify advanced tasks such as creating databases, scheduling
backups, importing and exporting data and configuring replication. This should make the training of
database administrators much easier.
Oracle 8i databases can be administered and controlled very tightly, but it is a complex product and
requires trained database administrators to do so proficiently. Oracle8i tools are Java-based and can even
be run from a Web browser. They provide all the essentials for designing and setting up a database,
including some advanced features like letting you selectively delegate authority to users of its Enterprise
Manager administration console. This is a handy tool for branch-office deployment. Like previous releases
of Enterprise Manager, though, this one is a version behind the database, and it doesn't know a thing
about new Oracle8i features such as Java stored procedures.[viii]
b. Programmability
There are languages supported within the database software for programming and controlling the
database. For example, since PL/SQL can be stored in the database, network traffic between applications
and the database is reduced, thereby increasing application and system performance.
SQL Server 7.0 comes with an internal programming language called Transact-SQL, which has received a
poor rating in several reviews. While everyone else in the SQL database market is moving (or has already
moved) to a modern programming language like Java, SQL Server customers are still stuck in the
programming Dark Ages: no object-oriented development, no big class libraries to use, and no code
interoperability with anything else.[ix] The programming can be done, but it will require a lot more work.
Oracle gets an excellent rating for its internal language offerings, which include Java and PL/SQL.
5. Conclusion
In comparing these two database products, it became apparent they each hold a different place and
purpose in the market. They don't compete in the same niche. Microsoft SQL Server, a client-server
database, continues to make strides toward the enterprise database market, but is still most appropriate
for a departmental or small to mid-sized company whose database doesn't have such high scalability,
reliability and availability needs. SQL Server's greatest weakness is that the Windows NT platform it
operates on is not mature enough to provide the kind of availability that enterprise-worthy systems
require. In the small-business market, the differentiating factors are ease of database administration,
Web connectivity, the speed and features of the database server engine, branch-office and mobile
support, and the ability to warehouse data efficiently. SQL Server 7.0 shines in all of these areas except
Web connectivity. Its administration tools include many wizards and self-tuning settings that make it the
only database we reviewed that might not require a specially trained administrator.[x]
Oracle, also a client-server database, operates on the high end of the database market and is also
reaching out to start-ups and small to medium-sized businesses who have a need for a complete,
integrated platform for critical Internet applications. Oracle is harder to administer and is an expensive
choice, unless the application being developed requires its Java or multimedia features. Another selling
point for Oracle is that it is sold on a multitude of platforms, in comparison to SQL Server 7.0, which may
be appealing to some customers who are seeking a more mature platform.
Appendix A
System Requirements for Microsoft SQL Server 7.0*
Client Access Licenses required
Server
Microsoft Internet Explorer 4.01 with Service Pack 1 or later (both included)
32 MB of RAM
CD-ROM drive
Note: SQL Server 7.0 can utilize up to four processors. Additional processor support is available with SQL
Server 7.0 Enterprise Edition.
Desktop
Identical to Server requirements with the following exceptions:
95
Networking Support Windows 95, Windows 98, or Windows NT built-in network software (additional
network software is not required unless you are using Banyan VINES or AppleTalk ADSP; Novell NetWare
client support is provided by NWLink)
Clients Supported Windows 95, Windows 98, or Windows NT Workstation, UNIX,** Apple Macintosh,**
and OS/2**
*Actual requirements will vary based on your system configuration and the features you choose to install.
**Requires ODBC client software from a third-party vendor.
Appendix B
Maximum Sizes and Numbers of SQL Server 7.0
This table specifies the maximum sizes and numbers of various objects defined in Microsoft SQL Server
databases, or referenced in Transact-SQL statements.
Object: SQL Server 7.0 maximum
Batch size: 128 * TDS packet size
Bytes per character or binary column: 8000
Bytes per text or image column: 2 GB-2
Bytes per index, foreign key, or primary key: 900
Bytes per row: 8060
Bytes per GROUP BY or ORDER BY: 8060
Clustered indexes per table: 1
Columns per GROUP BY or ORDER BY: limited only by number of bytes
Columns per index, foreign key, or primary key: 16
Columns per base table: 1024
Columns per SELECT statement: 4096
Columns per INSERT statement: 1024
Connections per client: max. value of configured connections
Database size: 1,048,516 TB
Files per database: 32,767
File size (data): 32 TB
File size (log): 4 TB
FOREIGN KEY constraints per table: 63
FOREIGN KEY table references per table: 63
Identifier length: 128 characters
Locks per connection: max. value of locks configured
Nested subqueries: 32
Nested trigger levels: 32
Nonclustered indexes or constraints per table: 250
Objects in a database*: 2,147,483,647
Parameters per stored procedure: 1024
Rows per table: limited by available storage
Tables per database: limited by number of objects in a database
Tables per SELECT statement: 256
Triggers per table: limited by number of objects in a database
* Database objects include all tables, views, stored procedures, extended stored procedures, triggers,
rules, defaults, and constraints.
Appendix C
Scorecard of Microsoft's SQL Server & Oracle's 8i

Server Administration: graphical tools; ease of maintenance
Server Engine: support for multiple CPUs; join and index selection; degree of concurrency
Multimedia Data Handling: Web connectivity; support for sound, media, video, images; full text search
Interoperability: links with other databases; single log-on; operating-system support
Programmability: stored procedures and triggers; internal programming language
Database Design: SQL language support; object-oriented design
Branch Office Support: replication; distributed transactions; remote administration
Data Warehousing and Reporting: loading tools

(Each row was rated Poor, Fair, Good, or Excellent for each product; the per-product ratings lost their row alignment in extraction.)
Appendix D
Summary of Features of Microsoft's SQL Server & Oracle's 8i

Microsoft SQL Server 7.0:
Price: $1,399 for 5 users (named or concurrent)
Price per additional named / concurrent user: $127 / $127
Platforms: Windows NT, Windows 9x
Database page size: 8K
Processors supported: 4
Internal programming language: Transact-SQL
Data access interfaces: ODBC, OLE DB

Oracle 8i Standard Edition:
Price: $3,925 per CPU for 5 concurrent users
Price per additional named / concurrent user: $785 / $392.50
Database block size: 2K or 8K
Processors supported: 4
Programming languages: Java, OS commands, PL/SQL, SQL, TCL
Data access interfaces: CORBA, Enterprise JavaBeans, JDBC, OCI, ODBC, Oracle Objects for OLE

(The remaining feature-by-feature Yes/No/Optional entries of the original matrix lost their row labels and column alignment in extraction.)
Appendix E
Oracle 8 and Oracle 8i Standard Edition Platform Availability
Operating System: Chip (Hardware)

Digital Unix: Alpha
Hewlett-Packard HP-UX: PA-RISC
Bull/Motorola AIX: PowerPC
IBM OS/2: Intel (any hardware, up to 4 CPUs*)
Microsoft Windows NT: Alpha; Intel (any hardware, up to 4 CPUs*)
NCR MP-RAS: Intel
Novell NetWare: Intel (any hardware, up to 4 CPUs*)
SCO UnixWare: Intel (any hardware, up to 4 CPUs*)
SGI IRIX: MIPS
Siemens Nixdorf SINIX/Reliant UNIX: MIPS
(the operating system labels for the remaining Intel and SPARC entries were lost in extraction)
[i]
Dyck, Timothy. "SQL Server makes enterprise inroads." PC Week. November 10, 1998. Accessed November 6, 1999. <http://www.zdnet.com/pcweek/stories/news/0,4153,372285,00.html>
[iii]
O'Neill, Paige. "Oracle Trumps Microsoft in Battle for NT Database Marketshare." March 29, 1999. Accessed October 31, 1999. <www.oracle.com/cgi-bin/press/printpr.cgi?file=199903290500.29144.html&mode=corp>
[iv]
"Oracle Charts Landmark Year for Oracle 8; Sets the Stage for the Next Release, Code-Named Emerald." July 2, 1999. Accessed November 5, 1999. <www.uk.oracle.com/info/news/emerald.html>
[v]
Deck, Stewart. "SQL users turn to Oracle 8 for bulk." Computerworld. May 10, 1999. Accessed October 15, 1999. <www.computerworld.com/home/print.nsf/all/990510A506>
(the column-name column of this DESCRIBE output was lost in extraction)

Null?     Type           Comment
--------  -------------  -----------------
NOT NULL  NUMBER         Primary key
NOT NULL  DATE           Non-unique index
NOT NULL  NUMBER
NOT NULL  VARCHAR2(12)
          VARCHAR2(25)
during the index build. NOLOGGING has significant impacts on recoverability and standby databases, so
do your homework before using the NOLOGGING keyword.
I modified the application used in the last section to disable the primary key on the CALLS table and drop
its one non-unique index before loading the data, putting both back after the load was complete. In this
example, the CALLS table was empty before the data load. Factoring in the amount of time required to
recreate the two indexes, elapsed time for the load dropped from 172 seconds to 130 seconds. CPU time
used by the database server process dropped from 52 seconds to 35 seconds.
Dropping and rebuilding indexes before and after a data load can speed up the load and yield more
efficient indexes. Some drawbacks include the added complexity and potential embedding of schema
design information into the application code. (When you add another index to the table being loaded, will
you have to update your application code?) Dropping indexes before a load could also have significant
performance impacts if users need to be able to query the target table while the load is taking place.
Finally, dropping or disabling primary or unique key constraints could cause difficulties if foreign key
constraints reference them.
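The drop-and-rebuild approach described above can be sketched as follows; the constraint name, index name, and indexed column here are hypothetical, chosen only to illustrate the sequence of steps:

```sql
-- Sketch of the load strategy described above.
-- calls_pk and calls_ix1 are hypothetical names for the
-- primary key constraint and the non-unique index on CALLS.
ALTER TABLE calls DISABLE CONSTRAINT calls_pk;
DROP INDEX calls_ix1;

-- ... perform the bulk data load into CALLS here ...

-- Rebuild the index and re-enable the constraint afterward
CREATE INDEX calls_ix1 ON calls (call_date);
ALTER TABLE calls ENABLE CONSTRAINT calls_pk;
```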
Conclusion
There are many different ways to load data into Oracle. Each technique offers its own balance between
speed, simplicity, scalability, recoverability, and data availability. To recap, here are all of the timing figures in one place:
Data Loading Method            Elapsed Seconds    Database Server CPU Seconds
(method labels lost)                172                 52
(method labels lost)                130                 35
(method labels lost)                 14                  -
(method labels lost)                 15                  -
(method labels lost)                 81                 12
Please keep in mind that I did not even touch on the subject of parallelism in data loads. (Inserts with the APPEND hint can use parallelism in the Enterprise Edition of the Oracle software.)
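As one illustration of the direct-path technique mentioned above (not one of the timed tests), an insert with the APPEND hint writes above the high-water mark and can skip most redo generation when the target is in NOLOGGING mode; the staging-table name here is hypothetical:

```sql
-- Direct-path (APPEND) insert; redo is minimized when the
-- target table is in NOLOGGING mode (mind the recoverability
-- caveats discussed earlier). calls_stage is a hypothetical
-- staging table holding the rows to load.
ALTER TABLE calls NOLOGGING;

INSERT /*+ APPEND */ INTO calls
SELECT * FROM calls_stage;

COMMIT;
```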
Oracle XML
What is XML and what is it used for?
XML (eXtensible Markup Language) is a W3C initiative that allows information and services to be encoded
with meaningful structure and semantics that both computers and humans can understand. XML is great
for information exchange, and can easily be extended to include user-specified and industry-specified
tags. Look at this simple example defining a FAQ:
<?xml version="1.0"?>
<!DOCTYPE question-list SYSTEM "faq.dtd">
<?xml-stylesheet type="text/xml" href="faq.xsl"?>
<FAQ-LIST>
<QUESTION>
<QUERY>Question goes here</QUERY>
<RESPONSE>Answer goes here.</RESPONSE>
</QUESTION>
<QUESTION>
<QUERY>Another question goes here.</QUERY>
<RESPONSE>The answer goes here.</RESPONSE>
</QUESTION>
</FAQ-LIST>
What is a DTD and what is it used for?
A Document Type Definition (DTD) defines the elements, or record structure, of an XML document. A DTD allows your XML files to carry a description of their format with them. The DTD for the above XML example looks like this:
<?xml version="1.0"?>
<!ELEMENT FAQ-LIST (QUESTION+)>
<!ELEMENT QUESTION (QUERY*, RESPONSE)>
<!ELEMENT QUERY (#PCDATA)>
<!ELEMENT RESPONSE (#PCDATA)>
Notes:
#PCDATA (parsed character data) means that the element contains character data that will be parsed by the
parser (much as an HTML parser parses element content)
The + sign in the example above declares that the "QUESTION" element must occur one or more
times inside the "FAQ-LIST" element.
The * sign in the example above declares that the "QUERY" element can occur zero or more times
inside the "QUESTION" element.
The W3C also formulated a new standard, called XML Schemas, that superseded DTDs. Schemas allow for more complex data types within your tags and better ways to constrain (validate) the data within these tags.
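As an illustration only (this schema is not part of the original FAQ), the FAQ structure above could be described by a minimal XML Schema like this:

```xml
<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="FAQ-LIST">
    <xs:complexType>
      <xs:sequence>
        <!-- QUESTION must occur one or more times -->
        <xs:element name="QUESTION" maxOccurs="unbounded">
          <xs:complexType>
            <xs:sequence>
              <!-- QUERY may occur zero or more times -->
              <xs:element name="QUERY" type="xs:string"
                          minOccurs="0" maxOccurs="unbounded"/>
              <xs:element name="RESPONSE" type="xs:string"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

Unlike the DTD, the schema can also constrain the content of QUERY and RESPONSE to specific data types.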
XMLDB
A standard option that ships with the Oracle 9i database (from release 9.2.0). It was previously called
Project XDB.
If you're using Oracle 8i, use the Java-based DBMS_XMLQUERY and DBMS_XMLSAVE packages. For Oracle
9i, use the C-based package DBMS_XMLGEN.
Look at the following Oracle 9i code example:
connect scott/tiger
set serveroutput on
DECLARE
  ctx     DBMS_XMLGEN.ctxHandle;
  xml     CLOB;
  emp_no  NUMBER := 7369;
  xmlc    VARCHAR2(4000);
  off     INTEGER := 1;
  len     INTEGER := 4000;
BEGIN
  -- Generate an XML document for one employee and print
  -- the first 4000 bytes of the result
  ctx := DBMS_XMLGEN.newContext('SELECT * FROM emp WHERE empno = ' || emp_no);
  xml := DBMS_XMLGEN.getXML(ctx);
  DBMS_XMLGEN.closeContext(ctx);
  DBMS_LOB.READ(xml, len, off, xmlc);
  DBMS_OUTPUT.put_line(xmlc);
END;
/
The same results can be achieved using SQLX (see http://sqlx.org/). Some of the SQLX functions are
XMLElement(), XMLForest(), XMLSequence(), etc. Look at this example.
set long 32000
SELECT XMLELEMENT("EMP_TABLE",
(select XMLELEMENT("EMP_ROW",
XMLFOREST(empno, ename, job, mgr, hiredate, sal, deptno)
)
from
emp
where empno = 7369))
from dual;
How does one store and extract XML data from Oracle?
XML data can be stored in Oracle (9.2.0 and above) using the XMLType data type. Look at this example:
connect scott/tiger
create table XMLTable (doc_id number, xml_data XMLType);
insert into XMLTable values (1,
XMLType('<FAQ-LIST>
<QUESTION>
<QUERY>Question 1</QUERY>
<RESPONSE>Answer goes here.</RESPONSE>
</QUESTION>
</FAQ-LIST>'));
select extractValue(xml_data, '/FAQ-LIST/QUESTION/RESPONSE') -- XPath expression
from XMLTable
where existsNode(xml_data, '/FAQ-LIST/QUESTION[QUERY="Question 1"]') = 1;
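To retrieve the complete stored document rather than a single node, the XMLType method getStringVal() can be applied to the column; this is a sketch against the same XMLTable created above:

```sql
-- Return the whole stored XML document as text
SELECT t.doc_id, t.xml_data.getStringVal() AS xml_text
  FROM XMLTable t
 WHERE t.doc_id = 1;
```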
Hello! Could you elaborate on the example of serializing XML into a table?
Let's say the XML file has data which corresponds to 2 tables (repetitive tags
for a detail table); is it possible?
Followup:
Sean here...
This is pretty easy to accomplish. There is no automated XML utility in Oracle
to insert a single XML document into two different tables... What you could do,
however, is create a join view on the two tables, then write an INSTEAD OF
trigger on the join view. Insert the XML document into the join view. The
INSTEAD OF trigger's job would be to insert rows into the appropriate tables
based on the values of the parent key found in each ROWSET of the XML document.
As an example, I have an XML document that looks like so:
<?xml version = "1.0"?>
<ROWSET>
<ROW num="1">
<DEPTNO>10</DEPTNO>
<DNAME>SALES</DNAME>
<EMPNO>100</EMPNO>
<ENAME>MARK JOHNSON</ENAME>
</ROW>
<ROW num="2">
<DEPTNO>20</DEPTNO>
<DNAME>TECHNOLOGY</DNAME>
<EMPNO>200</EMPNO>
<ENAME>TOM KYTE</ENAME>
</ROW>
<ROW num="3">
<DEPTNO>20</DEPTNO>
<DNAME>TECHNOLOGY</DNAME>
<EMPNO>300</EMPNO>
<ENAME>SEAN DILLON</ENAME>
</ROW>
</ROWSET>
So you can see... the department data and the employee data co-mingled. We want
to normalize this into two tables... so here's what I'd do:
----------------------------
system@SLAP> create table dept (
  2    deptno number primary key,
  3    dname  varchar2(30));

Table created.

system@SLAP> create table emp (
  2    empno  number primary key,
  3    deptno number,
  4    ename  varchar2(30));

Table created.

system@SLAP> create view deptemp as
  2  select d.deptno, d.dname, e.empno, e.ename
  3    from dept d, emp e
  4   where d.deptno = e.deptno;

View created.
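A minimal sketch of such an INSTEAD OF trigger on the DEPTEMP join view follows; the trigger name and the duplicate-department check are illustrative, not Sean's original code:

```sql
-- Hypothetical INSTEAD OF trigger: route each row inserted
-- into the join view to the DEPT and EMP base tables.
CREATE OR REPLACE TRIGGER deptemp_insert
   INSTEAD OF INSERT ON deptemp
   FOR EACH ROW
BEGIN
   -- Insert the department only if it is not already present
   INSERT INTO dept (deptno, dname)
   SELECT :NEW.deptno, :NEW.dname
     FROM dual
    WHERE NOT EXISTS
          (SELECT 1 FROM dept WHERE deptno = :NEW.deptno);

   -- Every view row carries one employee
   INSERT INTO emp (empno, deptno, ename)
   VALUES (:NEW.empno, :NEW.deptno, :NEW.ename);
END;
/
```

With the trigger in place, each ROWSET row from the XML document can be inserted into DEPTEMP and lands in the appropriate base tables.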
How can I handle this? One row for the department table and 3 rows for the
employee table in the same XML row.
<?xml version = "1.0"?>
<ROWSET>
<ROW num="1">
<DEPTNO>10</DEPTNO>
<DNAME>SALES</DNAME>
<EMPLOYEE>
<EMPNO>100</EMPNO>
<ENAME>MARK JOHNSON</ENAME>
</EMPLOYEE>
<EMPLOYEE>
<EMPNO>200</EMPNO>
<ENAME>VICTOR JAEN</ENAME>
</EMPLOYEE>
<EMPLOYEE>
<EMPNO>300</EMPNO>
<ENAME>JHON SMITH</ENAME>
</EMPLOYEE>
</ROW>
</ROWSET>
Thanks a lot!
Followup:
1* select dbms_xmlgen.getxml( 'select deptno, dname, cursor( select empno,
ename from emp where emp.deptno = dept.deptno ) employee from dept where deptno
= 10' ) from dual
scott@ORA920> /
DBMS_XMLGEN.GETXML('SELECTDEPTNO,DNAME,CURSOR(SELECTEMPNO,ENAMEFROMEMPWHEREEMP.D
--------------------------------------------------------------------------------
<?xml version="1.0"?>
<ROWSET>
<ROW>
<DEPTNO>10</DEPTNO>
<DNAME>ACCOUNTING</DNAME>
<EMPLOYEE>
<EMPLOYEE_ROW>
<EMPNO>7782</EMPNO>
<ENAME>CLARK</ENAME>
</EMPLOYEE_ROW>
<EMPLOYEE_ROW>
<EMPNO>7839</EMPNO>
<ENAME>KING</ENAME>
</EMPLOYEE_ROW>
<EMPLOYEE_ROW>
<EMPNO>7934</EMPNO>
<ENAME>MILLER</ENAME>
</EMPLOYEE_ROW>
</EMPLOYEE>
</ROW>
</ROWSET>
Process Limits

Job Queue Processes: maximum per instance: 36
I/O Slave Processes: maximum per background process (DBWR, LGWR, etc.): 15; maximum per Backup session: 15
LCK Processes (Oracle8i only): maximum per instance: 10
MTS Servers: maximum per instance: unlimited within the PROCESSES parameter
Dispatchers: maximum per instance: unlimited within the PROCESSES parameter
Parallel Execution Slaves: maximum per instance: unlimited within the PROCESSES parameter
Backup Sessions: maximum per instance: unlimited within the PROCESSES parameter
Sessions: maximum per instance: 32K; limited by the PROCESSES and SESSIONS initialization parameters
Oracle 9i Limits
Datatype Limits
VARCHAR2: maximum size 4000 bytes
NVARCHAR2: maximum size 4000 bytes
NUMBER(p,s): precision p from 1 to 38; scale s from -84 to 127
LONG: maximum size 2 GB - 1
DATE: valid range January 1, 4712 BC to December 31, 9999 AD
TIMESTAMP(fractional_seconds_precision): precision 0 to 9 (default 6)
RAW(size): maximum size 2000 bytes
LONG RAW: maximum size 2 GB
ROWID: 10 bytes
UROWID [(size)]: maximum size 4000 bytes
NCHAR(size): maximum size 2000 bytes
CLOB: maximum size 4 GB
NCLOB: maximum size 4 GB
BLOB: maximum size 4 GB
BFILE: maximum size 4 GB; maximum size of file name 255 characters
Physical Database Limits

Database blocks: minimum size 2 KB; maximum size operating system dependent, never more than 32 KB; minimum in initial extent of a segment: 2 blocks
Controlfiles: number of control files: 1 minimum; 2 or more (on separate devices) strongly recommended
Database files: maximum blocks per datafile: operating system dependent
Database extents: maximum: unlimited; MAXEXTENTS default value: derived from the tablespace default storage
Redo log files: maximum number limited by the value of the MAXLOGFILES parameter in the CREATE DATABASE statement; the control file can be resized to allow more entries; ultimately an operating system limit. Maximum number of logfiles per group: unlimited
Redo log file size: minimum size 50 KB; maximum size: operating system limit, typically 2 GB
Tablespaces: maximum 64 K per database; the number of tablespaces cannot exceed the number of database files, as each tablespace must include at least one file
Schema Object Limits

Indexes: maximum per table: unlimited
Columns: per table: 1000; per index (or clustered index): 32; per bitmapped index: 30
Constraints: maximum per column: unlimited
Subqueries: maximum levels of nesting: 255
Partitions: maximum per table or index: 64 K-1 partitions
Rollback Segments: maximum per database: no limit
Rows: maximum per table: unlimited
SQL Statement Length: maximum length of statements: 64 K; particular tools may impose lower limits
Stored Packages: maximum size: limited by the calling environment; typically 2000 to 3000 lines of code
Users and Roles: maximum: 2,147,483,638
Tables: maximum per database: unlimited
Trigger cascade limit: operating system-dependent, typically 32
Locks: row-level: unlimited; Distributed Lock Manager: operating system dependent
SGA size: maximum value: operating system dependent

Process Limits

I/O Slave Processes: maximum per background process: 15; maximum per Backup session: 15
LCK Processes: maximum per instance: 10
Sessions: maximum per instance: 32K; limited by the PROCESSES and SESSIONS initialization parameters
Shared Servers: maximum per instance: unlimited within the PROCESSES parameter
Dispatchers: maximum per instance: unlimited within the PROCESSES parameter
Backup Sessions: maximum per instance: unlimited within the PROCESSES parameter
Oracle 8i Limits
Datatype Limits
Datatypes and their limits:

BFILE: maximum size 4 GB; maximum size of file name 255 characters; maximum number of open BFILEs: see comments
BLOB: maximum size 4 GB
CHAR: maximum size 2000 bytes
CHAR VARYING: maximum size 4000 bytes
CLOB: maximum size 4 GB
Literals (characters or numbers in SQL or PL/SQL): maximum size 4000 characters
LONG: maximum size 2 GB - 1
NCHAR: maximum size 2000 bytes
NCHAR VARYING: maximum size 4000 bytes
NCLOB: maximum size 4 GB
NUMBER: maximum precision 38 significant digits
RAW: maximum size 2000 bytes
VARCHAR: maximum size 4000 bytes
VARCHAR2: maximum size 4000 bytes

Comments:
The maximum number of open BFILEs is limited by SESSION_MAX_OPEN_FILES, which is itself limited by the maximum number of open files the operating system will allow.
The number of LOB columns per table is limited only by the maximum number of columns per table (i.e., 1000).
Physical Limits

Database blocks: minimum 2048 bytes, and must be a multiple of the O/S physical block size; maximum O/S-dependent, never more than 32 KB; minimum in initial extent of a segment: 2 blocks; maximum blocks per datafile: platform dependent, typically 2 to the power of 22 blocks
Controlfiles: minimum 1; 2 or more (on separate devices) strongly recommended; size of controlfile: dependent on O/S and database creation options; maximum of 20,000 x (database block size)
Database files: maximum per tablespace: O/S dependent, usually 1022; maximum per database: 65533; may be less on some operating systems; limited also by the size of database blocks, and by the DB_FILES init parameter for a particular instance; maximum file size: O/S dependent, limited by the maximum O/S file size; typically 2 to the power of 22 or 4M blocks
Database extents: MAXEXTENTS default value: derived from the tablespace default storage or DB_BLOCK_SIZE; maximum: unlimited
Redo log files: maximum number: limited by the LOG_FILES initialization parameter, or MAXLOGFILES in CREATE DATABASE; the controlfile can be resized to allow more entries; ultimately an O/S limit. Maximum number of logfiles per group: unlimited
Redo log file size: minimum 50 KB; maximum: O/S limit, typically 2 GB
Tablespaces: maximum 64 K per database; the number of tablespaces cannot exceed the number of database files, as each tablespace must include at least one file
Schema Object Limits

GROUP BY clause: maximum length: the group-by expression and all of the non-distinct aggregates (e.g., SUM, AVG) need to fit within a single database block
Indexes: maximum per table: unlimited; maximum total size of indexed columns: 40% of the database block size minus some overhead
Columns: per table: 1000 columns maximum; per index (or clustered index): 32 columns maximum; per bitmapped index: 30 columns maximum
Constraints: maximum per column: unlimited
Nested Queries: maximum number: 255
Partitions: maximum length of linear partitioning key: 4 KB minus overhead; maximum number of columns in the partition key: 16 columns; maximum number of partitions allowed per table or index: 64 K-1 partitions
Rollback Segments: maximum number per database: no limit; limited within a session by the MAX_ROLLBACK_SEGMENTS init parameter
Rows: maximum per table: no limit
SQL Statement Length: 64 K maximum; particular tools may impose lower limits
Stored Packages: maximum size: PL/SQL and Developer/2000 may have limits on the size of stored procedures they can call; consult your PL/SQL or Developer/2000 documentation for details; the limits typically range from 2000 to 3000 lines of code
Trigger cascade limit: O/S dependent, typically 32
Tables: maximum per clustered table: 32 tables; maximum per database: unlimited
Users and Roles: maximum: 2,147,483,638
Object Name Lengths: database name: 8 bytes maximum; database link name: 128 bytes maximum; all other object names: 30 bytes maximum

Miscellaneous Limits

Instances: maximum number of OPS instances per database: O/S dependent
Locks: row-level: unlimited; Distributed Lock Manager: O/S dependent
SGA size: maximum value: O/S dependent; typically 2 to 4 GB for a 32-bit O/S, and more than 4 GB for a 64-bit O/S
Reserved words are the words, such as WHILE, IF, and BEGIN, and the names of built-ins, such as TO_CHAR, that are part of the PL/SQL language itself.
Application-specific identifiers are the names that you give to data and program structures that are
specific to your application and that vary from system to system. The compiler treats these two kinds of
text very differently. You can improve the readability of your code greatly by reflecting this difference in
the way the text is displayed. Many developers make no distinction between reserved words and
application-specific identifiers. Consider the following lines of code:
if to_number(the_value)>22 and num1 between lval and hval then
newval := 100;
elsif to_number(the_value) < 1 then
calc_tots(to_date('12-jan-95'));
else
clear_vals;
end if;
While the use of indentation makes it easier to follow the logical flow of the IF statement, all the words in
the statements tend to blend together. It is difficult to separate the reserved words and the application
identifiers in this code. Changing entirely to uppercase also will not improve matters. Indiscriminate, albeit
consistent, use of upper- or lowercase for your code reduces its readability. The distinction between
reserved words and application-specific identifiers is ignored in the formatting. This translates into a loss
of information and comprehension for a developer.
Right-align the reserved words for the clauses against the DML statement. I recommend that you visually separate the SQL reserved words which identify the separate clauses from the application-specific column and table names. The following table shows how I use right-alignment on the reserved words to create a vertical border between them and the rest of the SQL statement:
The SELECT, INSERT, UPDATE, and DELETE statements and their right-aligned clause keywords:

  SELECT
    FROM
   WHERE
     AND
      OR
GROUP BY
  HAVING
     AND
      OR
ORDER BY

INSERT INTO
     VALUES

INSERT INTO
     SELECT
       FROM
      WHERE

UPDATE
   SET
 WHERE

DELETE
  FROM
 WHERE
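Applied to an actual query, the right-aligned style produces a vertical border down the statement; the table and column names below are illustrative:

```sql
SELECT last_name, first_name
  FROM employee
 WHERE department_id = 10
   AND salary > 50000
 ORDER BY last_name;
```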
Use a blank line before each section, as I do above, for the executable section (before BEGIN) and the exception
section (before EXCEPTION). I usually place the IS keyword on its own line to clearly differentiate between
the header of a module and its declaration section.
Maintain Indentation
Inline commentary should reinforce the indentation and therefore the logical structure of the program. For
example, it is very easy to find the comments in the make_array procedures shown below.
PROCEDURE make_array (num_rows_in IN INTEGER, num_columns_in IN INTEGER)
/* Create an array with the specified numbers of rows and columns */
IS
   /* Handles to Oracle Forms structures */
   col_id GROUPCOLUMN;
   rg_id  RECORDGROUP;
BEGIN
   /* Create new record group and column */
   rg_id  := CREATE_GROUP ('array');
   col_id := ADD_GROUP_COLUMN ('col');
   /*
   || Use a loop to create the specified number of rows and
   || set the value in each cell.
   */
   FOR row_index IN 1 .. num_rows_in
   LOOP
      /* Create a row at the end of the group to accept data */
      ADD_GROUP_ROW (rg_id, END_OF_GROUP);
      FOR col_index IN 1 .. num_columns_in
      LOOP
         /* Set the initial value in the cell */
         SET_GROUP_NUMBER_CELL (col_id, row_index, 0);
      END LOOP;
   END LOOP;
END;
Documenting the Entire Package
A package is often a complicated and long construct. It is composed of many different types of objects,
any of which may be public (visible to programs and users outside of the package) or private (available
only to other objects in the package). You can use some very simple documentation guidelines to clarify
the structure of the package. As usual when discussing packages, one must consider the specification
separately from the body. As a meta-module or grouping of modules, the specification should have a
standard header. This header needn't be as complicated as that of a specific module, because you do not
want to repeat in the package header any information which also belongs in specific modules. I suggest
using the template header shown in the following example. In the "Major Modifications" section of the
header, do not include every change made to every object in the package. Instead note significant
changes to the package as a whole, such as an expansion of scope, a change in the way the package and
global variables are managed, etc. Place this header after the package name and before the IS statement:
PACKAGE package_name
/*
|| Author:
||
|| Overview:
||
|| Major Modifications (when, who, what)
||*/
IS
...
END package_name;
Document the Package Specification
The package specification is, in essence, a series of declaration statements. Some of those statements
declare variables, while others declare modules. Follow the same recommendation in commenting a
package as you do in commenting a module's declaration section: provide a comment for each
declaration. In addition to the comments for a specific declaration, you may also find it useful to provide a
banner before a group of related declarations to make that connection obvious to the reader. Surround the
banner with whitespace (blank lines for the start/end of a multiline comment block). While you can use
many different formats for this banner, use the simplest possible design that gets the point across.
Everything else is clutter. The package specification below illustrates the header and declaration-level
comment styles, as well as group banners:
PACKAGE rg_select
/*
|| Author: Diego Pafumi
||
|| Overview: Manage a list of selected items correlated with a
|| block on the screen.
||
|| Major Modifications (when, who, what)
|| 12/94 - DP - Create package
|| 3/95 - IS - Enhance to support coordinated blocks
||*/
IS
/*----------------- Modules to Define the List -------------------*/
/* Initialize the list/record group. */
PROCEDURE init_list (item_name_in IN VARCHAR2);
/* Delete the list */
PROCEDURE delete_list;
/*------------------ Modules to Manage Item Selections -----------*/
/* Mark item as selected */
PROCEDURE select_item (row_in IN INTEGER);
/* De-select the item from the list */
PROCEDURE deselect_item (row_in IN INTEGER);
END rg_select;
Views
Atlantis views will have a prefix of "av_". A project-specific view should have a prefix of "v_" or "view_".
Following the prefix, the view name should contain some sort of descriptive reference, in mixed upper and
lowercase. If the view contains a simple join of two tables, then include the table names. For example: v_Table1Table2.
Indexes
Index names should have an "in_" prefix. The rest of the name is in mixed upper and lowercase and contains some
meaningful text about the nature of the index.
Example: in_EmployeeID.
Constraints
Primary keys are to be prefixed with "pk_", unique keys with "uk_" or "unique_" and foreign keys start with "fk_".
The remainder of the name is upper and lowercase and usually contains the name of the field(s) included in the key.
Example: pk_FormID, fk_ImageType.
Sequence
Sequence names begin with "s_", followed by the field name (i.e. s_Field). If the field name is
ambiguous, then precede the field name with the table name: s_TableField.
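Taken together, the conventions above might look like this in DDL; the table and column names are illustrative, not part of any project standard:

```sql
-- Tables with "pk_" primary keys and an "fk_" foreign key
CREATE TABLE Employee (
   EmployeeID NUMBER CONSTRAINT pk_EmployeeID PRIMARY KEY,
   Name       VARCHAR2(30)
);

CREATE TABLE Form (
   FormID     NUMBER CONSTRAINT pk_FormID PRIMARY KEY,
   EmployeeID NUMBER CONSTRAINT fk_FormEmployee REFERENCES Employee
);

-- Index named per the "in_" convention
CREATE INDEX in_EmployeeID ON Form (EmployeeID);

-- Sequence named per the "s_" convention
CREATE SEQUENCE s_FormID;

-- Simple two-table join view named per the "v_" convention
CREATE VIEW v_FormEmployee AS
   SELECT f.FormID, e.EmployeeID, e.Name
     FROM Form f, Employee e
    WHERE f.EmployeeID = e.EmployeeID;
```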