Oracle Important Notes
Summary :-
Detail :-
Are you willing to move only part of the data? If yes, transportable tablespaces cannot be used (whole tablespaces are transported, not subsets).
Svrmgrl> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK(TS_LIST=>'<TABLESPACE1>,<TABLESPACE2>', incl_constraints=><TRUE/FALSE>);

SQL> execute dbms_tts.transport_set_check(TS_LIST=>'STablespace1,STablespace2', incl_constraints=>TRUE);
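The "no rows selected" result below presumably comes from checking the violations view after the call, along the lines of:

SQL> SELECT * FROM transport_set_violations;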
no rows selected
Tablespace altered.
Tablespace altered.
Step 3:- Grab the information about the data files belonging to the desired tablespaces.
TABLESPACE_NAME      FILE_NAME
-------------------- ---------------------------------------------------------------------
SourceTablespace1 F:\OR8I\ORADATA\SourceTablespace11ORSV.DBF
SourceTablespace2 F:\OR8I\ORADATA\INDX01ORSV.DBF
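A sketch of the kind of dictionary query behind this listing (the tablespace names are the ones used in this example):

SQL> SELECT tablespace_name, file_name
  2  FROM dba_data_files
  3  WHERE tablespace_name IN ('SOURCETABLESPACE1', 'SOURCETABLESPACE2');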
C:\>set ORACLE_SID=SAMDB
Step 5:- Physically copy the tablespace datafiles from the source to the target location. Note that the datafile names can be changed in this process.
Step 6:- Physically copy the export file created in step 4 from the source to the destination location.
Step 7:- Import the tablespace metadata into the target database, using the names the datafiles were given on the target system.
Step 8:- The tablespace can be returned to read-write mode on either or both nodes.
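For step 8 the statement is simply (tablespace name as used in this example):

SQL> ALTER TABLESPACE SourceTablespace1 READ WRITE;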
The query below can be used to find out which users have allocated the most space in the temporary tablespace:
select se.username
,se.sid
,su.extents
,su.blocks * to_number(rtrim(p.value)) as Space
,tablespace
,segtype
from v$sort_usage su
,v$parameter p
,v$session se
where p.name = 'db_block_size'
and su.session_addr = se.saddr
order by se.username, se.sid
The following can be used to find chained rows (Oracle 10g only).
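The original query was not captured; a sketch using the dictionary (the tables must have been analyzed first) is:

SQL> SELECT owner, table_name, chain_cnt
  2  FROM dba_tables
  3  WHERE chain_cnt > 0
  4  ORDER BY chain_cnt DESC;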
My query was fine last week and now it is slow. Why?
The likely cause is that the execution plan has changed. Generate a current explain plan of the offending query and compare it to a previous one that was taken when the query was performing well. Usually the previous plan is not available.
• Which tables are currently analyzed? Were they previously analyzed? (i.e. was the query using the RBO before and the CBO now?)
• Has OPTIMIZER_MODE been changed in INIT.ORA?
• Has the DEGREE of parallelism been defined/changed on any table?
• Have the tables been re-analyzed? Were the tables analyzed using estimate
or compute? If estimate, what percentage was used?
• Have the statistics changed?
• Has the INIT.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT been
changed?
• Has the INIT.ORA parameter SORT_AREA_SIZE been changed?
• Have any other INIT.ORA parameters been changed?
What do you think the plan should be? Run the query with hints to see if this
produces the required performance.
====================================================
If you want to get some idea of which users spend most time and consume most
resources on the system, you don’t necessarily have to do anything subtle and
devious to find out what’s been happening. There has been a (simple) audit trail
built into the database for as long as I can remember. (The 8.1 – 10.1 in the banner
simply covers the fact that I’ve only recently checked the following comments
against those versions)
The init.ora file contains an audit_trail parameter. This can take the values true,
false, none, os, db (the true/false options are for backwards compatibility). If you
set the value to db (or true), then you have enabled auditing in the database. Once
you have restarted the database (the parameter is not modifiable online), you can
decide what events you want to audit.
The simplest option is to audit connections:
audit connect;
If you need to turn this audit off, the corresponding command is:
noaudit connect;
With this level of audit turned on, every session that logs on (except the SYS
sessions) will insert a row into the table sys.aud$ giving various details of who they
are and what time they connected. When the session ends, its last action is to
update this row with various session-related details, such as log-off time, and the
amount of work done. To make the results more readable, Oracle has superimposed
the view dba_audit_session on top of the aud$ table; the 9.2 version of this view
is as follows:
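A quick way to read the connection audit data through that view, as a sketch:

SQL> SELECT username, os_username, terminal, timestamp,
  2         logoff_time, logoff_lread, logoff_lwrite
  3  FROM dba_audit_session
  4  ORDER BY timestamp;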
=========================================================================
1. shutdown immediate
2. startup mount
Log Miner
Log Miner enables the analysis of the contents of archived redo logs. It can be used to
provide a historical view of the database without the need for point-in-time recovery.
It can also be used to undo operations, allowing repair of logical corruption.
BEGIN
DBMS_LOGMNR.add_logfile (
options => DBMS_LOGMNR.addfile,
logfilename => 'C:\Oracle\Oradata\TSH1\Archive\TSH1\T001S00007.ARC');
END;
/
Starting LogMiner
At this point LogMiner can be started using the overloaded START_LOGMNR
procedure. The analysis range can be narrowed using time or SCN.
BEGIN
-- Start using all logs
DBMS_LOGMNR.start_logmnr (
dictfilename => 'C:\Oracle\Oradata\TSH1\Archive\TSH1dict.ora');
END;
/
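Once LogMiner is started, the redo contents can be queried from V$LOGMNR_CONTENTS and the session closed with END_LOGMNR; for example:

SQL> SELECT scn, timestamp, username, sql_redo, sql_undo
  2  FROM v$logmnr_contents;

SQL> EXECUTE DBMS_LOGMNR.end_logmnr;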
PRIVILEGE
----------------------------------------
CREATE VIEW
CREATE TABLE
ALTER SESSION
CREATE CLUSTER
CREATE SESSION
CREATE SYNONYM
CREATE SEQUENCE
CREATE DATABASE LINK
8 rows selected.
SQL>
ORACLE 10G
PRIVILEGE
----------------------------------------
CREATE SESSION
SQL>
Exchange partition :
It is used to move the data of a non-partitioned table into a partition of a partitioned table.
SQL> SQL>
1 row created.
SQL>
1 row created.
SQL>
1 row created.
SQL>
1 row created.
Commit;
Table created.
Step 4: After the above step, all the my_table data has been moved to the my_table_2 partitioned table.
no rows selected
ID DESCRIPTION
---------- --------------------------------------------------
1 One
2 Two
3 Three
4 Four
In the output above, the records were originally inserted only into my_table, but after the exchange all of the data has moved to the my_table_2 partitioned table.
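The DDL used in the example above was not reproduced; a minimal sketch of the exchange-partition sequence it describes (table, partition and column names are illustrative) is:

SQL> CREATE TABLE my_table (
  2    id          NUMBER,
  3    description VARCHAR2(50));

SQL> -- rows 1 to 4 inserted into my_table and committed here

SQL> CREATE TABLE my_table_2 (
  2    id          NUMBER,
  3    description VARCHAR2(50))
  4  PARTITION BY RANGE (id)
  5    (PARTITION p_all VALUES LESS THAN (MAXVALUE));

SQL> ALTER TABLE my_table_2
  2    EXCHANGE PARTITION p_all WITH TABLE my_table
  3    WITHOUT VALIDATION;

After the exchange, the segment that held my_table's rows belongs to partition p_all of my_table_2, and my_table is empty.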
Optimize Oracle UNDO Parameters
Overview
Starting in Oracle9i, rollback segments were renamed undo segments. Traditionally, transaction undo
information was stored in rollback segments until a commit or rollback statement was issued,
at which point it was made available for overwriting.
Best of all, automatic undo management allows the DBA to specify how long undo
information should be retained after commit, preventing "snapshot too old" errors on long
running queries.
This is done by setting the UNDO_RETENTION parameter. The default is 900 seconds (15 minutes), and you can set this parameter to guarantee that Oracle keeps undo logs for extended periods of time.
Rather than having to define and manage rollback segments, you can simply define an Undo
tablespace and let Oracle take care of the rest. Turning on automatic undo management is
easy. All you need to do is create an undo tablespace and set UNDO_MANAGEMENT = AUTO.
You can choose to allocate a specific size for the UNDO tablespace and then set the
UNDO_RETENTION parameter to an optimal value according to the UNDO size and the database
activity. If your disk space is limited and you do not want to allocate more space than necessary
to the UNDO tablespace, this is the way to proceed. The following formula will help you to optimize the UNDO_RETENTION parameter:

Optimal UNDO_RETENTION = Actual undo tablespace size / (DB_BLOCK_SIZE * UNDO_BLOCK_PER_SEC)
Because the following queries use the V$UNDOSTAT statistics, run them only after the database has been running with automatic undo for a significant and representative time!
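The UNDO_SIZE figure below is the total size of the online undo tablespace datafiles; a sketch of a query that returns it:

SQL> SELECT SUM(a.bytes) AS undo_size
  2  FROM v$datafile a, v$tablespace b, dba_tablespaces c
  3  WHERE c.contents = 'UNDO'
  4  AND c.status = 'ONLINE'
  5  AND b.name = c.tablespace_name
  6  AND a.ts# = b.ts#;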
UNDO_SIZE
----------
209715200
SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
"UNDO_BLOCK_PER_SEC"
FROM v$undostat;
UNDO_BLOCK_PER_SEC
------------------
3.12166667
DB Block Size
DB_BLOCK_SIZE [Byte]
--------------------
4096
If you are not limited by disk space, then it would be better to choose the UNDO_RETENTION
time that is best for you (for FLASHBACK, etc.). Allocate the appropriate size to the UNDO
tablespace according to the database activity:
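The query referred to below was not reproduced in these notes; a sketch that compares the actual undo size with the size needed for the configured UNDO_RETENTION (needed size = UNDO_RETENTION x DB_BLOCK_SIZE x UNDO_BLOCK_PER_SEC) is:

SQL> SELECT d.undo_size / (1024 * 1024) AS "ACTUAL UNDO SIZE [MB]",
  2         SUBSTR(e.value, 1, 25) AS "UNDO RETENTION [Sec]",
  3         (TO_NUMBER(e.value) * TO_NUMBER(f.value) * g.undo_block_per_sec) / (1024 * 1024)
  4           AS "NEEDED UNDO SIZE [MB]"
  5  FROM (SELECT SUM(a.bytes) undo_size
  6        FROM v$datafile a, v$tablespace b, dba_tablespaces c
  7        WHERE c.contents = 'UNDO' AND c.status = 'ONLINE'
  8        AND b.name = c.tablespace_name AND a.ts# = b.ts#) d,
  9       v$parameter e,
 10       v$parameter f,
 11       (SELECT MAX(undoblks / ((end_time - begin_time) * 3600 * 24)) undo_block_per_sec
 12        FROM v$undostat) g
 13  WHERE e.name = 'undo_retention'
 14  AND f.name = 'db_block_size';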
The previous query may return a "NEEDED UNDO SIZE" that is less than the "ACTUAL UNDO
SIZE". If this is the case, you may be wasting space. You can choose to resize your UNDO
tablespace to a lesser value or increase your UNDO_RETENTION parameter to use the
additional space.
- Stripe Width—Striping can be fine grained as in Redo Log Files (128K for
faster transfer rate) and coarse for datafiles (1MB for transfer of a large
number of blocks at one time).
Mirroring
The Oracle-specific nature of ASM means that Oracle can mirror segment extents on separate disks in a disk group. There will be a primary extent on one disk and a mirrored extent on a different disk in the same disk group. Failure groups allow you to group disks so that primary and mirrored extents reside in separate failure groups. This would be necessary to mitigate the loss of a disk controller or I/O channel. There are 3 levels of redundancy: external (no ASM mirroring), normal (two-way mirroring) and high (three-way mirroring).
The above error occurs when the libodm9.so file is corrupted or misplaced in the $ORACLE_HOME/lib path.
SOLUTION :
In Solaris, from the command line (you don’t have to be root in most cases) run this
command:
/usr/bin/isainfo -kv
If you are running Linux, you can check your distribution with the uname command:
uname -m
The output will read x86_64 for 64-bit and i686 or similar for 32-bit.
The question here is whether your Oracle binaries are 64-bit. While some of the binaries associated with Oracle may be 32-bit, the important ones will be 64-bit. To check those, follow these steps from the command line:
cd $ORACLE_HOME/bin
file oracl*
This will display the file type of your oracle binaries. If you are running 64-bit
binaries, the output should look like this:
oracle: ELF 64-bit MSB executable SPARCV9 Version 1, dynamically linked, not
stripped
oracleO: ELF 64-bit MSB executable SPARCV9 Version 1, dynamically linked, not
stripped
If your binaries are 32-bit, the output will look like this:
oracle: ELF 32-bit MSB executable SPARC Version 1, dynamically linked, not stripped
If you find you are running 32-bit and decide to go to 64-bit, be careful. The switch can be a bit tricky. Read the documentation closely and make sure your service contract is paid up!
# Oracle Parameters
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.shmall = 32000
kernel.sem = 500 32000 100 128
fs.file-max = 65536
Following our cookbook we installed CRS and RAC software on OCFS2 on our linux5
server.
See our PoC MAA- Blogs 5,6 and 7 about OCFS2, CRS and RAC install respectively.
[oracle@linux5 ~]$ df -k
/dev/mapper/VolGroup00-LogVol00
We followed the steps outlined in Chapter 3 of the Oracle Data Guard Concepts and
Administration Guide to create our physical standby database.
Ensure log file sizes are identical on the primary and standby databases:
The primary database has 8 redolog groups of 50Mb each (All 4 nodes have two
groups each).
Because we used a backup to set up the standby database, the redo log files of the standby database are de facto the same size.
Use: alter database backup controlfile to trace; and then check the trace file in the user_dump_dest directory. From 10.2 onwards the controlfile automatically expands for the MAX* settings. Be sure that 8 + 12 = 20 redo log files can be created.
If that is not the case, re-create the controlfile on the primary database (for 10.1 databases only).
Add 12 standby redolog groups on the primary database all of the same size with:
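The command itself was not included here; adding a standby redo log group of the same 50Mb size looks something like this, repeated once per group with a new group number (the file path is illustrative):

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 9
  2    ('/u02/oradata/prac/stby_redo09.log') SIZE 50M;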
Verify that the standby redo log file groups were created with:
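The verification query is presumably against V$STANDBY_LOG, matching the columns shown below:

SQL> SELECT group#, thread#, sequence#, archived, status
  2  FROM v$standby_log;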
9 0 0 YES UNASSIGNED
10 0 0 YES UNASSIGNED
11 0 0 YES UNASSIGNED
12 0 0 YES UNASSIGNED
13 0 0 YES UNASSIGNED
14 0 0 YES UNASSIGNED
15 0 0 YES UNASSIGNED
16 0 0 YES UNASSIGNED
17 0 0 YES UNASSIGNED
18 0 0 YES UNASSIGNED
19 0 0 YES UNASSIGNED
20 0 0 YES UNASSIGNED
12 rows selected.
Use the following alter system commands to change the necessary parameter
settings:
And the most important one: specify the online redo log file locations for the primary and the standby.
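The commands themselves were not reproduced; for a typical physical standby configuration they are along these lines (destination, DB_UNIQUE_NAME and convert values are illustrative and must match your own setup):

SQL> ALTER SYSTEM SET log_archive_config='DG_CONFIG=(prac,stdb)' SCOPE=SPFILE;
SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE=stdb VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stdb' SCOPE=SPFILE;
SQL> ALTER SYSTEM SET standby_file_management='AUTO' SCOPE=SPFILE;
SQL> ALTER SYSTEM SET db_file_name_convert='stdb','prac' SCOPE=SPFILE;
SQL> ALTER SYSTEM SET log_file_name_convert='stdb','prac' SCOPE=SPFILE;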
Note that the scope is only spfile, so you have to restart the instance to make the
parameter active.
shutdown immediate;
startup mount;
shutdown immediate;
(This might take a while, depending on the size of your database and network speed).
Step 5 Create a Control File and Parameter File for the Standby Database
startup mount;
*.db_file_name_convert='prac','stdb'
*.log_archive_dest_1='LOCATION=/u02/oradata/prac_arch
VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=stdb'
*.fal_client='stdb'
*.fal_server='prac'
Save the file and copy it with the standby controlfile to the standby database server.
Now we have on the standby database server in the /u02/oradata/prac directory the
Cold Backup, the Standby Control File and the initstdb.ora file.
Create a password file for the “prac” standby database on the standby
server:
Configure the Oracle Net Services names on the primary and standby
server:
Add the following two tnsnames.ora entries on the primary server and standby
server.
prac =
(DESCRIPTION =
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = prac) ) )
stdb =
(DESCRIPTION =
(ADDRESS_LIST =
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = prac) ) )
Check if the listeners are configured on both the primary and standby
database servers.
lsnrctl stop
lsnrctl start
Specify the correct listener name if needed, check the listener.ora file for the name.
Copy the standby control file to the controlfile names mentioned in the initstdb.ora
file:
cp stdb.ctl /u02/oradata/prac/control01.ctl
cp stdb.ctl /u02/oradata/prac/control02.ctl
cp stdb.ctl /u02/oradata/prac/control03.ctl
Database mounted.
SQL> alter database recover managed standby database disconnect from session;
Database altered.
YES!!!
In the APPLIED column you can see up to which log file the standby database has caught up.
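A sketch of the query that shows the APPLIED column:

SQL> SELECT sequence#, first_time, next_time, applied
  2  FROM v$archived_log
  3  ORDER BY sequence#;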
To see the role of the standby database do:
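Presumably something like:

SQL> SELECT database_role, open_mode FROM v$database;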
You can also check the file system to see that the archived log files really arrive on
the standby destination site.
Troubleshoot tips
Always check the alert file on both the primary and the standby database
site.
On Unix systems use tail -f alertprac.log to see the new messages appear at the end of the file.
OR
ORA-12545: Connect failed because target host or object does not exist
==> These are Oracle Net Configuration errors, fix in listener.ora, tnsnames.ora or
sqlnet.ora.
Test the connectivity from the primary database to the standby with:
[oracle@linux1 admin]$ sqlplus system/arnhem@stdb
ERROR:
Enter user-name:
THIS IS THE CORRECT MESSAGE YOU SHOULD GET SINCE THE STANDBY DATABASE
IS IN MOUNT MODE!
Additional information: 3
Note that you have to recycle the instance because the scope is spfile only!
So parameter settings are KEY!
$ export ORACLE_SID=+ASM
$ sqlplus "/ as sysdba"
You can get most of the ASM details & implementation details from OTN here:
http://www.oracle.com/technology/asm/index.html
Now we start to really see the power of the find command. It has identified files not
only in the working directory but in a subdirectory as well! Let’s verify the findings
with some ls commands:
$ ls -alt
total 56
drwxrwxr-x 2 tclark authors 4096 Feb 3 17:45 examples
-rw------- 1 tclark tclark 8793 Feb 3 14:04 .bash_history
drwx------ 4 tclark tclark 4096 Feb 3 11:17 .
-rw------- 1 tclark tclark 1066 Feb 3 11:17 .viminfo
-rw-rw-r-- 1 tclark tclark 0 Feb 3 09:00 example1.fil
-rw-r--r-- 1 tclark authors 0 Jan 27 00:22 umask_example.fil
drwxr-xr-x 8 root root 4096 Jan 25 22:16 ..
-rw-rw-r-- 1 tclark tclark 0 Jan 13 21:13 example2.xxx
-rw-r--r-- 1 tclark tclark 120 Aug 24 06:44 .gtkrc
-rw-r--r-- 1 tclark tclark 24 Aug 18 11:23 .bash_logout
-rw-r--r-- 1 tclark tclark 191 Aug 18 11:23 .bash_profile
-rw-r--r-- 1 tclark tclark 124 Aug 18 11:23 .bashrc
-rw-r--r-- 1 tclark tclark 237 May 22 2003 .emacs
-rw-r--r-- 1 tclark tclark 220 Nov 27 2002 .zshrc
drwxr-xr-x 3 tclark tclark 4096 Aug 12 2002 .kde
$ cd examples
$ ls -alt
total 20
drwxrwxr-x 2 tclark authors 4096 Feb 3 17:45 .
-rw-rw-r-- 1 tclark tclark 0 Feb 3 17:45 other.txt
-rw-rw-r-- 1 tclark authors 360 Feb 3 17:44 preamble.txt
drwx------ 4 tclark tclark 4096 Feb 3 11:17 ..
-rw-r--r-- 1 tclark authors 2229 Jan 13 21:35 declaration.txt
-rw-rw-r-- 1 tclark presidents 1310 Jan 13 17:48 gettysburg.txt
So we see that find has turned up what we were looking for. Now we will refine our
search even further.
Sometimes we are only concerned with specific files in the directory. For example, say you wrote a text file sometime in the past couple of days and now you can't remember what you called it or where you put it. Here's one way you could find that text file without having to go through your entire system:
$ find . -name '*.txt' -mtime -3
./preamble.txt
./other.txt
Now you’ve got even fewer files than in the last search and you could easily identify
the one you’re looking for.
If a user is running short of disk space, they may want to find some large files and
compress them to recover space. The following will search from the current directory
and find all files larger than 10,000KB. The output has been abbreviated.
Similarly, a - could be used in this example to find all files smaller than 10,000KB. Of course there would be quite a few of those on a Linux system.
The find command is quite flexible and accepts numerous options. We have only
covered a couple of the options here but if you want to check out more of them take
a look at find’s man page.
Most of find’s options can be combined to find files which meet several criteria. To do
this we can just continue to list criteria like we did when finding .txt files which had
been modified in the past three days.
# date
Fri Nov 30 12:01:52 IST 2007
# pwd
/
# find / -mtime -7
/oracle/ora10gr2/oracle/product/10.2.0/oradata/oracle/control02.ctl
/oracle/ora10gr2/oracle/product/10.2.0/oradata/oracle/control03.ctl
/oracle/ora10gr2/oracle/product/10.2.0/oradata/oracle/control01.ctl
# cd /oracle/ora10gr2/oracle/product/10.2.0/oradata/oracle
# ls -al
total 1438020
drwxr-x--- 3 oracle dba 4096 Oct 16 12:48 .
drwxr-x--- 3 oracle dba 4096 Oct 9 16:56 ..
-rw-r--r-- 1 oracle dba 7258112 Nov 30 12:01 control01.ctl
-rw-r--r-- 1 oracle dba 7258112 Nov 30 12:01 control02.ctl
-rw-r--r-- 1 oracle dba 7258112 Nov 30 12:01 control03.ctl
# cd /
/var/log/boot.log
Finding files of type .trc which have been changed in the past 7 days:
$ id
uid=501(oracle) gid=501(dba) groups=501(dba)
$ cd $ORACLE_HOME
$ pwd
/oracle/ora10gr2/oracle/product/10.2.0/db_1
./admin/oracle/bdump/oracle_arc3_4232.trc
./admin/oracle/bdump/oracle_arc1_4228.trc
./admin/oracle/bdump/oracle_arc2_4230.trc
./admin/oracle/bdump/oracle_arc0_4226.trc
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
NAME
---------
KNIGHTS
SQL> select ses.sid SID,sqa.SQL_TEXT SQL from v$session ses, v$sqlarea sqa,
v$process proc
where ses.paddr=proc.addr and ses.sql_hash_value=sqa.hash_value and proc.spid=
2972;
SID
----------
SQL
--------------------------------------------------------------------------------
967
select /*+ use_merge(a b) index(b PK_ELMNTRY_PDT_ELMNTRY_PDT_ID) */
b.ELMNTRY_
PDT_ID,b.ELMNTRY_PDT_NAME, b.ELMNTRY_PDT_ACTIV_FRM_DT,
b.ELMNTRY_PDT_ACTIV_TO_DT
, b.ELMNTRY_PDT_TYP_FLG, b.ELMNTRY_PDT_DESC, b.NRML_PDT_FLG,
b.MS_PDT_ID,b.IMPRT
_SEQ, a.ms_pdt_name, a.ms_pdt_desc, b.EXTN_FLG, b.rv_flg from
UC_ELMNTRY_PRODUCT
_MS b,(SELECT owner_id FROM PM_VIEW WHERE PRVDR_ID =:"SYS_B_00" AND
TAB_DESC = :
"SYS_B_01" UNION (SELECT XYZ.ELMNTRY_PDT_ID FROM(SELECT
A.ELMNTRY_PDT_ID, A.MS_P
DT_ID FROM UC_ELMNTRY_PRODUCT_MS A WHERE A.ELMNTRY_PDT_ID NOT IN
(SELECT OWNER_
ID FROM PM_VIEW WHERE TAB_DESC = :"SYS_B_02" )) XYZ, (SELECT
A.MS_PDT_ID FROM UC
SID
----------
SQL
--------------------------------------------------------------------------------
_MASTER_PRODUCT_MS A WHERE A.MS_PDT_ID NOT IN ( SELECT OWNER_ID FROM
PM_VIEW WHE
RE TAB_DESC = :"SYS_B_03" )) ABC WHERE XYZ.MS_PDT_ID = ABC.MS_PDT_ID
UNION SELEC
T XYZ.ELMNTRY_PDT_ID FROM( SELECT A.ELMNTRY_PDT_ID, A.MS_PDT_ID FROM
UC_ELMNTRY_
PRODUCT_MS A WHERE A.ELMNTRY_PDT_ID NOT IN (SELECT OWNER_ID FROM
PM_VIEW WHERE T
AB_DESC = :"SYS_B_04" )) XYZ, (SELECT A.
Main impact of this architecture is that the maximum amount of memory that can be
used for both SGA and PGA is limited by a maximum amount of memory available to
single process. On Windows (we assume a 32-bit platform in this article), the default
maximum user addressable memory is 2GB. It is a limitation imposed by a 32-bit
addressing (4GB) and fact that Windows reserves 2GB out of user space for kernel
memory.
So, this means that sum of SGA and PGA cannot be over 2GB... Not a very pleasant
situation, given that nowadays it is rather common to have a cheap Intel server
running Windows with Oracle with 4GB of RAM.
What can be done to get around this? Well, good news is that you can do
something...
First, you can use a /3GB switch in boot.ini file. Just put it like this:
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(1)partition(2)\WINNT
[operating systems]
multi(0)disk(0)rdisk(1)partition(2)\WINNT="Microsoft Windows 2000 Server"
/fastdetect /3GB
What does it do? Basically, what it looks like: it pushes the limit from 2GB to 3GB. How? Windows reserves not 2GB but only 1GB for kernel use, which gives us an extra 1GB for the user process.
If you have over 4GB of RAM, you can also use another mechanism in Windows to leverage this memory. It is called Address Windowing Extensions (AWE) and is enabled by the /PAE switch in boot.ini:
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(1)partition(2)\WINNT
[operating systems]
multi(0)disk(0)rdisk(1)partition(2)\WINNT="Microsoft Windows 2000 Server"
/fastdetect /3GB /PAE
But just setting the switch is not enough. Now you need to configure Oracle to use
the extra memory you've got in there. How Oracle does it is that it "maps" the extra
memory through a "window" in the usual memory space. One limitation here is that
Oracle can do it only for buffer cache, not shared pool.
So, what you need to set in Oracle is:
· USE_INDIRECT_DATA_BUFFERS=TRUE in init.ora
· increase DB_BLOCK_BUFFERS in init.ora (Note: if you use Oracle 9.2 and use
DB_CACHE_SIZE instead of DB_BLOCK_BUFFERS, you will get an error starting
instance. Comment out DB_CACHE_SIZE and use DB_BLOCK_BUFFERS instead)
· You might want to adjust a registry setting in HKLM\Software\Oracle\Homex called AWE_WINDOW_MEMORY. It specifies the size of the "window" that Oracle will use for accessing the extra memory. See Metalink Note 225349.1 for details on calculating the minimum size of this setting.
The one behavior that I quickly realized was distinctly different was that Trusted
Oracle transparently filtered data records. I found out that the DoD security
requirements dictated mandatory separation of records based on a user’s
authorizations. In this case the users were authorized for access at different
sensitivity levels—SECRET, CONFIDENTIAL, and UNCLASSIFIED. The data was
intermingled within tables at various sensitivity levels. One user accessing the data
would see one set of records, and a different user with different authorizations would
see a different set of records.
The interesting part was that the security was implemented so that it was
transparent and could not be subverted. The manner in which Trusted Oracle
behaved and the requirements from customers in other industries gave Oracle the
idea of abstracting the row-level security features from Trusted Oracle into a
framework that would support practically any data model and security policy. This
was the genesis of the Virtual Private Database technology.
Officially, the phrase “Virtual Private Database (VPD)” refers to the use of row-level
security (RLS) and the use of application contexts. (Application contexts were
discussed in detail in Chapter 9.) However, the term “VPD” is commonly used when
discussing the use of the row-level security features irrespective of implementation.
Many examples you see using VPD involve the use of application contexts and/or
several data tables with esoteric column names and complicated referential integrity
constraints. I find that these elements, while truthful in their representation of many
database schemas, tend to confuse and mislead the reader about how the row-level
security technology works and precisely what is needed to enable it. Using RLS is
easy, and the purpose of this section is to prove this very point.
VPD’s row-level security allows you to restrict access to records based on a security
policy implemented in PL/SQL. A security policy, as used here, simply describes the
rules governing access to the data rows. This process is done by creating a PL/SQL
function that returns a string. The function is then registered against the tables,
views, or synonyms you want to protect by using the DBMS_RLS PL/SQL package.
When a query is issued against the protected object, Oracle effectively appends the
string returned from the function to the original SQL statement, thereby filtering the
data records.
This example will focus on the process required to enable RLS. The intention is to
keep the data and security policy simple so as not to distract from how to enable an
RLS solution.
The RLS capability in Oracle requires a PL/SQL function. The function accepts two
parameters, as shown next. The database will call this function automatically and
transparently. The string value returned from the function (called the predicate) will
be effectively added to the original SQL. This results in an elimination of rows and
thus provides the row-level security.
The security policy for this example will exclude Department 10 records from queries
on SCOTT.EMP. The PL/SQL function to implement this will look as follows:
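The function body itself is not reproduced in these notes; a minimal sketch consistent with the policy registration below (which names the function NO_DEPT10) might be:

sec_mgr@KNOX10g> CREATE OR REPLACE FUNCTION no_dept10 (
  2    p_schema IN VARCHAR2,
  3    p_object IN VARCHAR2)
  4    RETURN VARCHAR2
  5  AS
  6  BEGIN
  7    -- predicate appended to queries on the protected object
  8    RETURN 'deptno != 10';
  9  END;
 10  /

Function created.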
To protect the SCOTT.EMP table, simply associate the preceding PL/SQL function to
the table using the DBMS_RLS.ADD_POLICY procedure:
sec_mgr@KNOX10g> BEGIN
2 DBMS_RLS.add_policy
3 (object_schema => 'SCOTT',
4 object_name => 'EMP',
5 policy_name => 'quickstart',
6 policy_function => 'no_dept10');
7 END;
8 /
PL/SQL procedure successfully completed.
That’s it; you are done! To test this policy, log on as a user with access to the
SCOTT.EMP table and issue your DML. The following shows all the department
numbers available in the table. Department 10 is no longer seen because the RLS
policy transparently filters out those records:
scott@KNOX10g> -- Show department numbers.
scott@KNOX10g> -- There should be no department 10.
scott@KNOX10g> SELECT DISTINCT deptno FROM emp;
DEPTNO
---------
20
30
NOTE
Changing the security implementation is trivial, too. Suppose the security policy is
changed so that no records should be returned for the user SYSTEM:
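A sketch of how the function might be rewritten for that policy, checking the invoking user via SYS_CONTEXT:

sec_mgr@KNOX10g> CREATE OR REPLACE FUNCTION no_dept10 (
  2    p_schema IN VARCHAR2,
  3    p_object IN VARCHAR2)
  4    RETURN VARCHAR2
  5  AS
  6  BEGIN
  7    IF SYS_CONTEXT ('USERENV', 'SESSION_USER') = 'SYSTEM'
  8    THEN
  9      RETURN '1=0';   -- never true: SYSTEM sees no rows
 10    ELSE
 11      RETURN NULL;    -- no filtering for other users
 12    END IF;
 13  END;
 14  /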
Notice that the security policy implemented by the function can change without
requiring any re-registration with the DBMS_RLS package.
rem -----------------------------------------------------------------------
rem Filename: sga_stat.sql
rem Purpose:  Display database SGA statistics
rem Date:     14-Jun-2001
rem Author:   Anjan Roy (AnjanR@innotrex.com)
rem -----------------------------------------------------------------------
DECLARE
libcac number(10,2);
rowcac number(10,2);
bufcac number(10,2);
redlog number(10,2);
spsize number;
blkbuf number;
logbuf number;
BEGIN
select value into redlog from v$sysstat
where name = 'redo log space requests';
select 100*(sum(pins)-sum(reloads))/sum(pins) into libcac from v$librarycache;
select 100*(sum(gets)-sum(getmisses))/sum(gets) into rowcac from v$rowcache;
select 100*(cur.value + con.value - phys.value)/(cur.value + con.value) into bufcac
from v$sysstat cur,v$sysstat con,v$sysstat phys,v$statname ncu,v$statname
nco,v$statname nph
where cur.statistic# = ncu.statistic#
and ncu.name = 'db block gets'
and con.statistic# = nco.statistic#
and nco.name = 'consistent gets'
and phys.statistic# = nph.statistic#
and nph.name = 'physical reads';
select value into spsize from v$parameter where name = 'shared_pool_size';
select value into blkbuf from v$parameter where name = 'db_block_buffers';
select value into logbuf from v$parameter where name = 'log_buffer';
dbms_output.put_line('> SGA CACHE STATISTICS');
dbms_output.put_line('> ********************');
dbms_output.put_line('> SQL Cache Hit rate = '||libcac);
dbms_output.put_line('> Dict Cache Hit rate = '||rowcac);
dbms_output.put_line('> Buffer Cache Hit rate = '||bufcac);
dbms_output.put_line('> Redo Log space requests = '||redlog);
dbms_output.put_line('> ');
dbms_output.put_line('> INIT.ORA SETTING');
dbms_output.put_line('> ****************');
dbms_output.put_line('> Shared Pool Size = '||spsize||' Bytes');
dbms_output.put_line('> DB Block Buffer = '||blkbuf||' Blocks');
dbms_output.put_line('> Log Buffer = '||logbuf||' Bytes');
dbms_output.put_line('> ');
if libcac < 99 then
dbms_output.put_line('*** HINT: Library Cache too low! Increase the Shared Pool Size.');
end if;
if rowcac < 85 then
dbms_output.put_line('*** HINT: Row Cache too low! Increase the Shared Pool Size.');
end if;
if bufcac < 90 then
dbms_output.put_line('*** HINT: Buffer Cache too low! Increase the DB Block Buffer value.');
end if;
if redlog > 100 then
dbms_output.put_line('*** HINT: Log Buffer value is rather low!');
end if;
END;
/
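To run the script, remember to enable server output first, e.g.:

SQL> set serveroutput on size 100000
SQL> @sga_stat.sql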
To set the scene: each mview refresh used to take some 18-20 mins, which was totally against the business requirement. We then tried to figure out why the mview refresh was taking so much time, in spite of having dropped all the bitmap indexes on the mview (generally bitmap indexes are not good for inserts/updates).
The 10046 trace (level 12) highlighted that there were many “db file sequential
reads” on mview because of optimizer using “I_SNAP$_mview” to fetch the rows
from mview and merge the rows with that of master table to make the aggregated
data for the mview.
The good part of the story is that access to the master table was quite fast, because we used direct load (sqlldr direct=y) to insert the data into it. When you use direct load to insert data, Oracle maintains the list of rowids added to the table in a view called SYS.ALL_SUMDELTA. So while doing a fast mview refresh, the newly inserted rows are picked directly from the table using the rowids given by the ALL_SUMDELTA view and not from the mview log, and this saves time.
The concern was that Oracle was still using the I_SNAP$ index while fetching the data from the mview; there were many "db file sequential read" waits, and it was clearly visible that Oracle waited on sequential reads the most. By running a simple test against the table, we figured out that a full table scan (which uses scattered reads and the multiblock read count) was very fast in comparison to index access. Also, the master table and the dependent mviews hold data only for the current day: at the end of the day the master table's and mviews' data is pushed to historical tables, and the master table and mviews are empty again after midnight.
I gathered the stats of mview and then re-ran the mview refresh, and traced the
session, and this time optimizer didn’t use the index which was good news.
Now the challenge was to run the mview stats gathering job every half an hour or
induce wrong stats to table/index to ensure mview refresh never uses index access
or may be to lock the stats using DBMS_STATS.LOCK_TABLE_STATS.
But we found another solution, by creating the mview with the "USING NO INDEX" clause. This way the "I_SNAP$" index is not created by the "CREATE MATERIALIZED VIEW" command. As per Oracle, the "I_SNAP$" index is good for fast refresh, but it proved to be the reverse for us because our environment is different and the data changes quite frequently.
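A sketch of the clause in context; the mview name is the one mentioned later in this note, while the master table, columns and refresh options are illustrative assumptions, not the actual definition:

SQL> CREATE MATERIALIZED VIEW LOG ON sf_env_data
  2    WITH ROWID, SEQUENCE (env_id, reading)
  3    INCLUDING NEW VALUES;

SQL> CREATE MATERIALIZED VIEW sf_env_data_mv
  2    USING NO INDEX
  3    REFRESH FAST ON DEMAND
  4  AS
  5  SELECT env_id,
  6         COUNT(*)       cnt,
  7         COUNT(reading) cnt_reading,
  8         SUM(reading)   total_reading
  9  FROM sf_env_data
 10  GROUP BY env_id;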
Now, we ran the tests again, we loaded 48 slices of data (24 hrs x 2 times within
hour) and the results were above expectations. We could load the data with max 3
mins per load of data.
This is not the end of story. In the trace we could see the mview refresh using
“MERGE” command and using full table scan access to mview (which we wanted) and
rowid range access to master table.
Interesting twist in the story is when I saw the wait events in trace file.
Again, even when we are doing full table scan, there are “db file sequential reads”?
To confirm I opened the raw trace file (before tkprof), and checked the obj# on
sequential read wait event, it was the mview (SF_ENV_DATA_MV) !! and there were
many. To further investigate I checked if there were any scattered reads to mview or
not. I found there were scattered reads but there were many sequential reads also
on which Oracle waited more than that of scattered read which did most of the data
fetching.
After giving some thought, I realized that we created the mviews without storage
clause, which means Oracle created the mview with default storage clause.
So assuming there are 17 blocks in an mview (container table) extent and Multi
block read count is 16, Oracle will use scattered read mechanism (multiple blocks) to
read the first 16 blocks and for the rest 1 it will use sequential read mechanism (one
block), so you will find many sequential reads wait events sandwiched between
scattered reads. To overcome this we created the mview with larger extent
sizes and also multiple of MBCR (multi block read count).
Also, another cause of sequential reads is chained or migrated rows: if your mview (or table) rows are migrated, the pointer to the row's new location is maintained in the old (original) block, which will always be read by a single I/O call, i.e. by a sequential read. You can check the count of chained rows using DBA_TABLES.CHAIN_CNT after analysing the table. So to overcome this, we created the mview with a sensible PCTFREE, so that when the merge runs (as part of the mview refresh) and updates a few rows, the rows are not moved to a different block, thereby avoiding sequential reads.
Conclusion:
1. Mview creation with "USING NO INDEX" does not create the "I_SNAP$" index, which sometimes helps fast refresh when the data changes are quite frequent and you cannot afford to collect stats every few minutes.
2. Create the mview with a storage clause suited to your environment. Default extent sizes may not always be good.
3. PCTFREE can be quite handy for avoiding sequential reads and extra block reads.
Out of all the Oracle RDBMS modules, the optimizer code is actually the most complicated, and the different optimizer modes are like the jack you reach for when your car has a puncture.
This paper focuses on how the optimizer behaves differently when you have the optimizer mode set to ALL_ROWS or FIRST_ROWS.
FIRST_ROWS and ALL_ROWS are both cost based optimizer features. You may use them according to your requirements.
FIRST_ROWS/ FIRST_ROWS[n]
In simple terms, it ensures the best response time for the first few rows (n rows).
This mode is good for an interactive client-server environment where the server serves the first few rows and, by the time the user scrolls down for more rows, it fetches the others. So the user feels that he has been served the data he requested, but in reality the request is still pending and the query is still fetching the data in the background.
The best example of this is TOAD: if you click on the data tab, it instantaneously starts showing you data and you feel TOAD is faster than SQL*Plus, but the fact is that if you scroll down, you will see the query is still running.
Table created.
Index created.
COUNT(*)
----------
37944
COUNT(*)
----------
14927
You see that out of almost 38k records, 15k are of the JAVA class type. So if you now select the rows having object_type='JAVA_CLASS', the optimizer should not use the index, as almost half of the rows are JAVA_CLASS. It would be foolish of the optimizer to read the index first and then go to the table.
Execution Plan
----------------------------------------------------------
Plan hash value: 1357081020
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1001 | 94094 | 10 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| TEST | 1001 | 94094 | 10 (0)| 00:00:01 |
--------------------------------------------------------------------------
As you see above, optimizer has not used Index we created on this table.
Execution Plan
----------------------------------------------------------
Plan hash value: 3548301374
---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 14662 | 1345K| 536 (1)| 00:00:07 |
| 1 | TABLE ACCESS BY INDEX ROWID| TEST | 14662 | 1345K| 536 (1)| 00:00:07 |
|* 2 | INDEX RANGE SCAN | TEST_IN | 14662 | | 43 (3)| 00:00:01 |
---------------------------------------------------------------------------------------
Q> Why?
Ans> Because you asked to see the first few rows quickly. So, following your instructions, Oracle delivered the first few rows quickly using the index and delivered the rest later.
See the difference in cost: although the (partial) response time of the second query was faster, its resource consumption was higher.
But that does not mean that this optimizer mode is bad. As I said, this mode may be good for an interactive client-server model. In most OLTP systems, where users want to see data appear quickly on their screens, this optimizer mode is very handy.
1. It gives preference to index scans over full scans (even when an index scan is not good).
2. It prefers nested loops over hash joins, because a nested loop returns data as it is selected (and compared), whereas a hash join first hashes one input into a hash table, which takes time.
3. The cost of the query is not the only criterion for choosing the execution plan; it chooses the plan which helps in fetching the first rows fast.
4. It may be a good option in an OLTP environment where the user wants to see data as early as possible.
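A sketch of how the mode can be switched at session level for testing (FIRST_ROWS_10 and ALL_ROWS are literal values of the OPTIMIZER_MODE parameter):

SQL> alter session set optimizer_mode = first_rows_10;
SQL> alter session set optimizer_mode = all_rows;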
ALL_ROWS
While FIRST_ROWS may be good at returning the first few rows quickly, ALL_ROWS ensures optimum resource consumption and throughput for the query as a whole. In other words, ALL_ROWS optimizes the time to retrieve the last row.
In the example above, while explaining FIRST_ROWS, you have already seen how efficient ALL_ROWS is.
1. ALL_ROWS considers both index scans and full scans and uses them based on their contribution to the overall query. If the selectivity of a column predicate is low (for example 'where employee_code=7712'), the optimizer may use an index to fetch the data, but if the selectivity of the predicate is quite high ('where deptno=10'), the optimizer may consider doing a full table scan. With ALL_ROWS, the optimizer has more freedom to do its job at its best.
2. Good for OLAP systems, where work happens in batches/procedures. (Some reports may still use FIRST_ROWS depending upon the anxiety level of the report reviewers.)
3. It likes hash joins over nested loops for larger data sets.
Conclusion
Cost based optimizer gives you flexibility to choose response time or throughput. So
use them based on your business requirement.
In this algorithm, an outer loop is formed which consists of a few entries, and then for each entry an inner loop is processed.
Ex:
It is processed like:
NESTED LOOPS
outer_loop
inner_loop
The optimizer uses a nested loop when we are joining tables containing a small number of rows with an efficient driving condition. It is important to have an index on the join column of the inner table, as this table is probed every time for a new value from the outer table.
Note: You will see more use of nested loops when using the FIRST_ROWS optimizer mode, as it works on the model of showing instantaneous results to the user as they are fetched. There is no need to cache any data before it is returned to the user. In the case of a hash join such caching is needed, as explained below.
Hash join
Hash joins are used when joining large tables. The optimizer uses the smaller of the two tables to build a hash table in memory and then scans the larger table, comparing the hash value (of rows from the large table) against this hash table to find the joined rows.
In simpler terms it works like
Build phase
Probe Phase
The optimizer uses a hash join when joining big tables, or big fractions of small tables.
Unlike a nested loop, the output of a hash join is not instantaneous, as the join is blocked until the hash table has been built.
Note: You may see more hash joins used with the ALL_ROWS optimizer mode, because it works on the model of showing results once all the rows of at least one of the tables have been hashed into the hash table.
A sort merge join is used to join two independent data sources. It performs better than a nested loop when the volume of data is big, but not as well as a hash join in general.
It performs better than a hash join when the join condition columns are already sorted or no sorting is required.
The important point to understand is that, unlike a nested loop where the driven (inner) table is read as many times as there are input rows from the outer table, in a sort merge join each of the tables involved is accessed at most once. So it proves to be better than a nested loop when the data set is large.
Note: A sort merge join can be seen with both the ALL_ROWS and FIRST_ROWS optimizer goals, because it works on the model of first sorting both data sources and then starting to return the results. So if the data set is large and you have FIRST_ROWS as the optimizer goal, the optimizer may still prefer a sort merge join over a nested loop because of the large data volume. And if you have ALL_ROWS as the optimizer goal and an inequality condition is used in the SQL, the optimizer may use a sort merge join rather than a hash join.
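If you need to test a specific join method on a given query, the standard hints can be used; a sketch against the SCOTT demo tables:

SQL> select /*+ use_nl(e d) */    * from emp e, dept d where e.deptno = d.deptno;
SQL> select /*+ use_hash(e d) */  * from emp e, dept d where e.deptno = d.deptno;
SQL> select /*+ use_merge(e d) */ * from emp e, dept d where e.deptno = d.deptno;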
With Oracle 9i, the CBO is equipped with many more features, one of them being the "index skip scan". This means that even if you have a composite index on more than one column and you use only a non-prefix (non-leading) column in your SQL, it may still use the index.
I said "may" because the CBO will calculate the cost of using the index, and if it is more than that of a full table scan, it may not use the index.
An index skip scan works differently from a normal index (range) scan.
A normal range scan works from top to bottom first and then moves horizontally.
A skip scan, however, consists of several range scans: since the query lacks the leading column, the optimizer effectively rewrites the query into smaller queries, each doing a range scan for one value of the leading column.
Ex:
SQL> create table test (a number, b number, c number);
Table created.
Index created.
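(The CREATE INDEX statement was not captured; given the skip-scan on TEST_I in the plan below, it was presumably a composite index with the low-cardinality column leading, e.g. create index test_i on test(a, b);)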
SQL> begin
2 for i in 1 .. 100000
3 loop
4 insert into test values(mod(i, 5), i, 100);
5 end loop;
6 commit;
7 end;
8 /
Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=22 Card=1 Bytes=10)
1    0   TABLE ACCESS (BY INDEX ROWID) OF 'TEST' (TABLE) (Cost=22 Card=1 Bytes=10)
2    1     INDEX (SKIP SCAN) OF 'TEST_I' (INDEX) (Cost=21 Card=1)
In the above example, "select * from test where b=95267" was broken down into several small range scan queries. It was effectively equivalent to the following:
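A sketch of that logical decomposition (column a holds the values 0..4 generated by mod(i, 5)):

SQL> select * from test where a = 0 and b = 95267
  2  union all
  3  select * from test where a = 1 and b = 95267
  4  union all
  5  select * from test where a = 2 and b = 95267
  6  union all
  7  select * from test where a = 3 and b = 95267
  8  union all
  9  select * from test where a = 4 and b = 95267;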
To be precise, a skip scan is not as efficient as a normal single range scan, but it saves disk space and the overhead of maintaining another index.
These two are considered to be the most important parameters for shared pool tuning, but I guess most of us generally don't use them, or sometimes use them incorrectly.
The idea of putting them here is to understand what they do, when to use them, how to use them, and finally to see the impact.
SESSION_CACHED_CURSORS
In most environments there are many SQLs which are re-fired many times within a session, and every time one is issued the session searches the shared pool for the parsed state; if it doesn't find the parsed version it does a "hard parse", and if it does exist in the shared pool it still does a "soft parse".
As we know, a "hard parse" is a costly operation, and even a "soft parse" requires a library cache latch and CPU overhead, which, aggregated, is a significant number.
This parameter, if set to a non-zero value (the default is 50), improves "soft parse" performance by doing a softer soft parse.
How?
If enabled, Oracle maintains a local session cache which stores recently closed cursors of a session.
To avoid this space getting misused or overused, Oracle only caches cursors for which there have been three parse calls in the past, so not all the SQLs issued by a session end up here. Remember that each cursor pinned here is not freeable, and hence you may require a larger shared pool.
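The original monitoring queries were not reproduced; they were presumably against V$SESSTAT/V$STATNAME for the session cursor cache statistics, e.g.:

SQL> select max(a.value) "MAX(VALUE)"
  2  from v$sesstat a, v$statname b
  3  where a.statistic# = b.statistic#
  4  and b.name = 'session cursor cache count';

SQL> select sum(a.value) "CACHE HITS"
  2  from v$sesstat a, v$statname b
  3  where a.statistic# = b.statistic#
  4  and b.name = 'session cursor cache hits';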
MAX(VALUE)
----------
100
A value near 100 is considered very good. But you may still consider increasing this parameter if MAX(VALUE) in the first query shows the same number of cached cursors as the value you have set (i.e. the session cursor cache is full).
Conclusion: In an OLTP application, where the same set of SQL is issued a number of times, one should configure this parameter to more than its default value (50).
Also, increasing this parameter means extra memory is required for the shared pool, so you must increase your shared pool size when you use this parameter.
CURSOR_SPACE_FOR_TIME
b) When the cursor is open: Oracle requires the parsed state of the SQL at the PARSE and EXECUTE phases. If Oracle parses (soft or hard) a statement, there is a likelihood that Oracle may age your SQL out of the shared pool after the PARSE stage if it needs the space to accommodate a new SQL coming its way. So at the EXECUTE stage, there is a possibility that the parsed information has been lost and Oracle has to parse it again.
CURSOR_SPACE_FOR_TIME if set to TRUE, ensures that SQL is not aged out before
the cursor is closed, so in EXECUTE phase, you will have the PARSE information.
But this is generally a rare case and happens in a very highly active environment, because to accommodate a new SQL Oracle first checks the free space; if it doesn't find enough, it checks the closed cursors to see if any cursor can be aged out, and only when no space can be reclaimed that way does Oracle come to open cursors which have not been EXECUTED.
This generally happens when the shared pool is too small.
Conclusion:
As I said, I don't suggest setting this parameter to TRUE in most cases. An alternative to setting this parameter is to increase the shared pool size and/or check your code for how many cursors you are opening and closing. That will be a better approach. Setting this parameter is like taking paracetamol without knowing the cause of the fever.
I have seen many developers getting confused about index usage with the LIKE operator. A few feel that the index will be used, and a few feel it will not.
Table created.
Index created.
The above example shows that using the % wildcard character at the end of the pattern still allows an index search.
But if it is used at the beginning, the index will not be used. And sensibly so, because Oracle doesn't know where to start searching; the value could begin with anything from 'A' to 'Z', 'a' to 'z' or even any number.
See this.
SQL> select * from sac where object_type like '%ABLE';
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=148 Card=1004 Byte
s=128512)
So how can an index be used for LIKE searches with a leading wildcard? The answer is domain indexes (Oracle Text).
SQL>
SQL> drop index sac_indx;
Index dropped.
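A sketch of the domain-index alternative, assuming the Oracle Text option is installed (index name is illustrative):

SQL> create index sac_text_indx on sac(object_type)
  2  indextype is ctxsys.context;

SQL> select * from sac where contains(object_type, '%ABLE') > 0;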
One of the important tasks of the DBA is to know what the high CPU consuming processes on the database server are.
In my last organization, we used to get a number of requests saying that the DB server was running slow.
Now the problem is that this server was hosting 86 databases, and finding out the culprit process and database sounds like a daunting task (but it isn't).
See this:
You may use TOP (or ps aux) or any other utility to find the top cpu consuming
process.
bash-3.00$ top
PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
17480 oracle 11 59 0 1576M 1486M sleep 0:09 23.51% oracle
9172 oracle 258 59 2 1576M 1476M sleep 0:43 1.33% oracle
9176 oracle 14 59 2 1582M 1472M sleep 0:52 0.43% oracle
17550 oracle 1 59 0 3188K 1580K cpu/1 0:00 0.04% top
9178 oracle 13 59 2 1571M 1472M sleep 2:30 0.03% oracle
You can see the highlighted line: process 17480 is consuming 23.51% CPU.
Now this process can belong to any one of the many instances on this server.
To find out which instance this process belongs to:
Connected to:
Oracle Database 10g Release 10.2.0.2.0 - Production
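The query used to map the OS process back to its session and SQL is presumably the same one shown earlier in these notes; a sketch (17480 is the PID reported by top):

SQL> select ses.sid SID, sqa.sql_text SQL
  2  from v$session ses, v$sqlarea sqa, v$process proc
  3  where ses.paddr = proc.addr
  4  and ses.sql_hash_value = sqa.hash_value
  5  and proc.spid = 17480;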
SID SQL
--------- -----------------
67 delete from test
Now you have the responsible SQL behind 23% CPU using process.
In my case it was a deliberate DELETE statement used to set up this test, but in your case it can be a query worth tuning.
Often, knowing what the problem is is most of the solution (at least you know what the issue is).
Whether the issue should be addressed right away or handed over to the development team is a subjective matter on which I don't want to comment.
Actually, a lot of us don't use it at all. Let's first understand this feature and then implement it in our systems.
CURSOR_SHARING=EXACT (the default)
Definition: Share the plan only if the text of the SQL matches exactly the text of a SQL statement already lying in the shared pool.
Table created.
1 row created.
1 row created.
SQL> commit;
Commit complete.
T1
----------
1
SQL_TEXT
-----------------------------------------------------
As you see, there were 2 statements in V$SQL, so it generated 2 plans. Oracle had to do the same work again to generate the plan even though the only difference between the two SQLs was the literal value.
CURSOR_SHARING=FORCE
Definition: Share the plan (forcibly) of a SQL if the text of the SQL matches (except for the literal values) the text of a SQL in the shared pool.
This means that if 2 SQLs are the same except for their literal values, the plan is shared.
I’m using the same table and data which is used in case of above example.
System altered.
Session altered.
T1
----------
1
T1
----------
2
SQL> select sql_text
2 from v$sql
3 where sql_text like 'select * from test1%'
4 order by sql_text;
SQL_TEXT
---------------------------------------------------
You can see that for both statements there was only one entry in V$SQL. This means that for the second occurrence Oracle did not generate a new plan.
This not only saves DB server engine time in generating the plan, but also reduces the number of plans the shared pool has to hold.
Important note:
CURSOR_SHARING=FORCE can have some flip-side behavior as well, so you must be careful using it. With it we are forcing Oracle to use the same plan for 2 (or more) SQLs even when using the same plan may not be good for all of those similar SQLs.
Example: "where t1=2" may be a good candidate for an index scan while "where t1=10" should use a full table scan, because 90% of the rows in the table have t1=10 (assumption).
CURSOR_SHARING=SIMILAR
Definition: SIMILAR causes statements that may differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect either the meaning of the statement or the degree to which the plan is optimized. (Source: Oracle documentation)
To avoid 2 statements using the same plan when the same plan is not good for one of them, we have CURSOR_SHARING=SIMILAR.
System altered.
SQL> drop table test1;
Table dropped.
Table created.
SQL>
1  begin
2  for i in 1 .. 100 loop
3  insert into test1 values(1,i);
4  end loop;
5  commit;
6  update test1 set t1=2 where rownum < 2;
7  end;
8  /
In this case t1 has the value 2 in the first row and 1 in the remaining 99 rows.
SQL> create index tt_indx on test1(t1);
Index created.
Session altered.
1 row selected.
99 rows selected.
SQL_TEXT
----------------------------------------------------
This tells us that even though the 2 statements were similar, Oracle opted for a
different plan. Now even if you put t1=30 (0 rows), Oracle will create another plan.
SQL_TEXT
---------------------------------------------------
This is because the first time the SQL ran, the optimizer found the literal value "unsafe", because using the same plan for a different literal value could produce a bad plan for other similar SQLs. So along with the plan, the optimizer stored the literal value as well. This ensures the reusability of the plan only when the same literal is provided; in case of any change, the optimizer generates a new plan.
But this doesn’t mean that SIMILAR and EXACT are same.
See this:
System altered.
no rows selected
no rows selected
SQL_TEXT
--------------------------------------------------------------
Conclusions:
1. Use CURSOR_SHARING=SIMILAR only when you have library cache misses and/or most of the SQL statements differ only in literal values.
2. CURSOR_SHARING=FORCE/SIMILAR significantly reduces the number of plans in the shared pool.
Note:
I was reading a wonderful article on Tom Kyte's blog about the repercussions of ill-defined data types.
- Varchar2(40) vs Varchar2(4000)
- Date vs varchar2
Varchar2(40) Vs Varchar2(4000)
Generally developers ask for this to avoid issues in the application; they always want the uppermost limit to avoid any application errors. Their argument is that the varchar2 data type does not reserve 4000 characters of storage, so disk space is not an issue. But what they don't realize is how costly this can be.
Repercussions
- Generally an application does an "array fetch" from the database, i.e. it selects 100 (maybe more) rows in one go. So if you are selecting 10 varchar2 columns, the effective RAM (not storage) usage can be 4000 (chars) x 10 (cols) x 100 (rows) = 4 MB of RAM per fetch. On the contrary, had these columns been defined with a 40-character size, the usage would have been 40 x 10 x 100 ~ 40KB (approx)!! See the difference; also, we didn't multiply by the number of sessions. That could be another shock!!
- Later on, it will be difficult to know what the column was made for. E.g. for first_name, if you define varchar2(4000), it's confusing for a new developer to know what this column was meant to hold.
Date vs varchar2
Again, lots of developers define date columns as varchar2 (or char) for their convenience. But what they forget is not only data integrity (a date could be stored as 01/01/03, and later you don't know which part was dd, mm or yy, i.e. what you actually stored) but also performance.
This feature has been introduced in 9i rel 2 and is most useful in a warehouse
environment (for fact tables).
Table altered.
Table compression can significantly reduce disk and buffer cache requirements for
database tables while improving query performance. Compressed tables use fewer
data blocks on disk, reducing disk space requirements.
First create the following function which will get you the extent of compression
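The function itself was not reproduced in these notes; a simplified sketch that returns a comparable uncompressed-to-compressed block ratio (it copies the whole table twice, so use it only on reasonably small test data) is:

create or replace function compression_ratio (p_tabname in varchar2)
return number
as
v_blkcnt_uncomp pls_integer;
v_blkcnt_comp pls_integer;
begin
-- copy the data once uncompressed and once compressed
execute immediate
'create table compression_test_uncomp as select * from ' || p_tabname;
execute immediate
'create table compression_test_comp compress as select * from ' || p_tabname;
-- count the blocks actually holding rows in each copy
execute immediate
'select count(distinct dbms_rowid.rowid_block_number(rowid)) from compression_test_uncomp'
into v_blkcnt_uncomp;
execute immediate
'select count(distinct dbms_rowid.rowid_block_number(rowid)) from compression_test_comp'
into v_blkcnt_comp;
execute immediate 'drop table compression_test_uncomp';
execute immediate 'drop table compression_test_comp';
return v_blkcnt_uncomp / v_blkcnt_comp;
end compression_ratio;
/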
1 declare
2 a number;
3 begin
4 a:=compression_ratio('TEST');
5 dbms_output.put_line(a);
6 end
7 ;
8 /
2.91389728096676737160120845921450151057
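The before/after sizes below were presumably obtained with a segment-size query around the compression step, along these lines (segment name taken from the ratio call above):

SQL> select bytes/1024/1024 "Size in MB" from user_segments where segment_name = 'TEST';
SQL> alter table test move compress;
SQL> select bytes/1024/1024 "Size in MB" from user_segments where segment_name = 'TEST';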
Size in MB
----------
18
Table altered.
Size in MB
----------
6
After compressing the table, you need to rebuild the indexes because the rowids have changed.
Notes:
- This feature can be best utilized in a warehouse environment where there are lots of duplicate values (fact tables). In fact a larger block size is more efficient, because duplicate values will be stored only once within a block.
- This feature has no negative effect; in fact it accelerates the performance of queries accessing large amounts of data.
- I suggest you read the following white paper by Oracle, which explains the whole algorithm in detail along with industry-recognized TPC test cases.
http://www.vldb.org/conf/2003/papers/S28P01.pdf
I wrote the above article after reading Oracle Magazine. I suggest you read the full article on the Oracle site.
The clustering factor is a number which represents the degree to which data is randomly distributed in a table.
In simple terms, it is the number of "block switches" required while reading a table using an index.
Figure: Bad clustering factor
The above diagram illustrates how scattered the rows of the table are. The first index entry (from the left of the index) points to the first data block and the second index entry points to the second data block. So while doing an index range scan or a full index scan, the optimizer has to switch between blocks and revisit the same block more than once, because the rows are scattered. The number of times the optimizer makes these block switches is what is termed the "clustering factor".
Figure: Good clustering factor
The second figure represents a good CF. In an index range scan, the optimizer will not have to jump to the next data block as often, because most of the adjacent index entries point to the same data block. This helps significantly in reducing the cost of your SELECT statements.
Clustering factor is stored in data dictionary and can be viewed from dba_indexes (or
user_indexes)
Table created.
Index created.
CLUSTERING_FACTOR
-----------------
545
COUNT(*)
----------
38956
SQL> select blocks from user_segments where segment_name='OBJ_ID_INDX';
BLOCKS
----------
96
The above example shows that the index would have to switch blocks 545 times if you
were to read the entire table through the index.
Note:
- A good CF is equal (or near) to the values of number of blocks of table.
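A quick way to compare the clustering factor with the table blocks (a sketch using the index from the example above):
SELECT i.index_name, i.clustering_factor, t.blocks, t.num_rows
  FROM user_indexes i, user_tables t
 WHERE t.table_name = i.table_name
   AND i.index_name = 'OBJ_ID_INDX';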
Myth:
- Rebuilding the index can improve the CF.
Fact:
- To improve the CF, it is the table that must be rebuilt (and reordered).
- If the table has multiple indexes, careful consideration needs to be given to which
index to order the table by.
Labels: Tuning
Important point
–When joining 2 views that themselves select from other views, check that the 2
views that you are using do not join the same tables!
–Avoid NOT IN or != on indexed columns. They prevent the optimizer from using
indexes. Use where amount > 0 instead of where amount != 0.
- Avoid writing WHERE column IS NOT NULL. NULLs can prevent the optimizer from using an index.
- Avoid calculations on indexed columns. Write WHERE amount > 26000*3 instead of
WHERE amount/3 > 26000.
- The query below will return any record where bmm_code = cORE, Core, CORE,
COre, etc.
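Presumably the query applied a function to the indexed column, something like this sketch (the table name is a placeholder):
select * from bmm_table where upper(bmm_code) like 'CORE%';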
But this query can be very inefficient as it results in a full table scan. It cannot make
use of the index on bmm_code.
You can also make this more efficient by using 2 characters instead of just one:
where ((bmm_code like 'CO%' or bmm_code like 'Co%' or bmm_code like 'cO%' or
bmm_code like 'co%') and upper(bmm_code) LIKE 'CORE%')
Inviting Experts
Friends, feel free to correct me. I will appreciate if you can add your comments also.
Labels: Tuning
People sometimes do not bother to define columns as NOT NULL in the data
dictionary, even though these columns should not contain nulls, and indeed never do
contain nulls because the application ensures that a value is always supplied. You
may think that this is a matter of indifference, but it is not. The optimizer sometimes
needs to know that a column is not nullable, and without that knowledge it is
constrained to choose a less than optimal execution plan.
1. An index on a nullable column cannot be used to drive access to a table unless the
query contains one or more predicates against that column that exclude null values.
Of course, it is not normally desirable to use an index based access path unless the
query contains such predicates, but there are important exceptions.
For example, if a full table scan would otherwise be required against the table and
the query can be satisfied by a fast full scan (scan for which table data need not be
read) against the index, then the latter plan will normally prove more efficient.
Index created.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=4 Card=1046 Bytes=
54392)
Table altered.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=4 Card=1046 Bytes=
54392)
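A sketch of such a test (table, column and index names are assumptions):
create index t1_owner_idx on t1(owner);   -- OWNER is nullable at this point
select count(*) from t1;                  -- the index alone cannot answer this: NULLs are not in it
alter table t1 modify owner not null;
select count(*) from t1;                  -- an INDEX (FAST FULL SCAN) now becomes possible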
2. If you are calling a sub-query from a parent query using the NOT IN predicate, the
index on the column (in the where clause of the parent query) will not be used.
As far as the optimizer is concerned, rows of the parent query qualify only when
there is no matching value from the sub-query. If the sub-query can potentially
return a NULL value (UNKNOWN, incomparable), the parent query has no value to
compare with NULL, so it cannot use the index.
Test-case for the above Reasoning
Index created.
Index created.
SQL> select * from emp where sal not in (select sal from emp where
ename='JONES');
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=17 Card=13 Bytes=4
81)
1 0 FILTER
2 1 TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=3 Card=14 Bytes=518) --> you can see a full table scan even when an index exists on SAL
Table altered.
SQL> select * from emp where sal not in (select sal from emp where
ename='JONES');
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=5 Card=12 Bytes=56
4)
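A sketch of the statements behind this test-case (assuming the standard SCOTT.EMP table):
create index emp_sal_idx on emp(sal);
create index emp_ename_idx on emp(ename);
select * from emp where sal not in (select sal from emp where ename='JONES');  -- full scan of EMP
alter table emp modify sal not null;
select * from emp where sal not in (select sal from emp where ename='JONES');  -- the index can now be used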
The above article was inspired by an article on ixora.
That article was missing some of the test cases, so I thought of adding a few
for newbies to relate to it.
Labels: Tuning
IN Vs Exist in SQL
IN Vs EXISTS
IN Clause
select *
from t1, ( select distinct y from t2 ) t2
where t1.x = t2.y;
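The EXISTS form of the same query, referred to below, is along these lines:
select * from t1 where exists ( select null from t2 where y = x );
-- processed conceptually as:
-- for x in ( select * from t1 ) loop
--   if ( exists ( select null from t2 where y = x.x ) ) then output the record; end if;
-- end loop;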
It always results in a full scan of T1 whereas the first query can make use of
an index on T1(x).
So, when is EXISTS appropriate and when is IN appropriate?
Let's say the result of the subquery is "huge" and takes a long time, but the table T1 is relatively small and
executing ( select null from t2 where y = x.x ) is fast (nice index on
t2(y)). Then EXISTS will be faster, as the time to full scan T1 and do the
index probe into T2 could be less than the time to simply full scan T2 to build
the subquery we need to distinct on.
Let's say the result of the subquery is small -- then IN is typically more
appropriate.
If both the subquery and the outer table are huge -- either might work as well
as the other -- depends on the indexes and other factors.
In this algorithm, an outer loop is formed which consists of a few entries, and then for
each entry an inner loop is processed.
Ex:
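A typical nested-loop candidate (a sketch with placeholder tables) would be:
select e.ename, d.dname
  from dept d, emp e
 where e.deptno = d.deptno
   and d.deptno = 10;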
It is processed like:
NESTED LOOPS
outer_loop
inner_loop
The optimizer uses a nested loop when we are joining tables containing a small number of
rows with an efficient driving condition. It is important to have an index on the join column of the
inner table, as this table is probed every time for a new value from the outer table.
Note: You will see more use of nested loops when using the FIRST_ROWS optimizer mode,
as it works on the model of showing instantaneous results to the user as they are fetched.
There is no need to cache any data before it is returned to the user. In the case of a
hash join this is needed, as explained below.
Hash join
Hash joins are used when joining large tables. The optimizer uses the smaller of the
two tables to build a hash table in memory, and then scans the larger table and
compares the hash values (of rows from the larger table) against this hash table to find the
joined rows.
Build phase
For each row RW1 in small (left/build) table loop
Calculate hash value on RW1 join key
Insert RW1 in appropriate hash bucket.
End loop;
Probe Phase
For each row RW2 in large (right/probe) table loop
Calculate hash value on RW2 join key
Probe the hash table for that hash value; output the joined rows for any matches.
End loop;
The optimizer uses a hash join while joining big tables or big fractions of small tables.
Unlike nested loop, the output of a hash join is not instantaneous, as hash joining
is blocked on building up the hash table.
Note: You may see more hash joins used with the ALL_ROWS optimizer mode, because
it works on the model of showing results after all the rows of at least one of the tables
are hashed into the hash table.
Sort merge join is used to join two independent data sources. It performs better
than a nested loop when the volume of data in the tables is big, but not as good as a hash
join in general.
It performs better than a hash join when the join condition columns are already
sorted or no sorting is required.
An important point to understand is that, unlike a nested loop, where the driven (inner)
table is read as many times as there are rows from the outer table, in a sort merge join each
of the tables involved is accessed at most once. So it proves to be better than a
nested loop when the data set is large.
Note: A sort merge join can be seen with both the ALL_ROWS and FIRST_ROWS optimizer
goals, because it works on a model of first sorting both data sources and then starting to
return the results. So if the data set is large and you have FIRST_ROWS as the
optimizer goal, the optimizer may prefer a sort merge join over a nested loop because of the
large data volume. And if you have ALL_ROWS as the optimizer goal and an inequality
condition is used in the SQL, the optimizer may use a sort-merge join over a hash join.
3 comments:
Sachin said...
I wanted to put some examples in the post itself, but missed it earlier.
Here it is:
Table created.
Table created.
SQL> create index e_deptno on e(deptno);
Index created.
Gather D stats as it is
A) With less number of rows(100 in E), you will see Nested loop getting used.
Execution Plan
----------------------------------------------------------
Plan hash value: 3204653704
-----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 100 | 2200 | 6 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| E | 25 | 225 | 1 (0)| 00:00:01 |
| 2 | NESTED LOOPS | | 100 | 2200 | 6 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL | D | 4 | 52 | 3 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | E_DEPTNO | 33 | | 0 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------
B) Let us set some more artificial stats to see which plans is getting used:
SQL> exec dbms_stats.set_table_stats(ownname => 'SCOTT', tabname =>
'E', numrows => 1000000, numblks => 10000, avgrlen => 124);
Now we have 1,000,000 rows in both the E and D tables, and the index on
E(DEPTNO) reflects the same.
The plan changes !!
Execution Plan
----------------------------------------------------------
Plan hash value: 51064926
-----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
-----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 250G| 5122G| | 3968K(100)| 13:13:45 |
|* 1 | HASH JOIN | | 250G| 5122G| 20M| 3968K(100)| 13:13:45 |
| 2 | TABLE ACCESS FULL| E | 1000K| 8789K| | 2246 (3)| 00:00:27 |
| 3 | TABLE ACCESS FULL| D | 1000K| 12M| | 2227 (2)| 00:00:27 |
-----------------------------------------------------------------------------------
C) Now to test MERGE JOIN, we set a moderate number of rows and do some
ordering business.
Execution Plan
----------------------------------------------------------
Plan hash value: 915894881
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2500K| 52M| 167 (26)| 00:00:02 |
| 1 | MERGE JOIN | | 2500K| 52M| 167 (26)| 00:00:02 |
| 2 | TABLE ACCESS BY INDEX ROWID| E | 10000 | 90000 | 102 (1)| 00:00:02 |
| 3 | INDEX FULL SCAN | E_DEPTNO | 10000 | | 100 (0)| 00:00:02 |
|* 4 | SORT JOIN | | 1000 | 13000 | 25 (4)| 00:00:01 |
| 5 | TABLE ACCESS FULL | D | 1000 | 13000 | 24 (0)| 00:00:01 |
------------------------------------------------------------------------------------------
clear breaks
clear computes
Dynamic Memory:
SELECT name,
value
FROM v$parameter
WHERE SUBSTR(name, 1, 1) = '_'
ORDER BY name;
Job Programs?
SELECT owner,
program_name,
program_type,
program_action,
number_of_arguments,
enabled,
comments
FROM dba_scheduler_programs
ORDER BY owner, program_name;
SELECT window_name,
resource_plan,
enabled,
active,
comments
FROM dba_scheduler_windows
ORDER BY window_name;
NAME NETWORK_NAME
--------------------------------------------------- ------------------------------------
SYS$BACKGROUND
SYS$USERS
haf haf
hafXDB hafXDB
seeddata.regress.rdbms.dev.us.oracle.com
seeddata.regress.rdbms.dev.us.oracle.com
seeddataXDB seeddataXDB
It will display the currently connected user together with the instance name; in my case it is
displayed as
ddo@haf
SQL> SELECT LPAD(' ', (level-1)*2, ' ') || NVL(s.username, '(oracle)') AS username,
s.osuser,
s.sid,
s.serial#,
s.lockwait,
s.status,
s.module,
s.machine,
s.program,
TO_CHAR(s.logon_Time,'DD-MON-YYYY HH24:MI:SS') AS logon_time
FROM v$session s
CONNECT BY PRIOR s.sid = s.blocking_session
START WITH s.blocking_session IS NULL;
dbms_stats:
The dbms_stats utility does a far better job of estimating CBO statistics than the old
ANALYZE command. It works especially well for large partitioned tables, and the better
statistics result in faster SQL execution plans.
exec dbms_stats.gather_schema_stats( -
degree => 34 -
options clause:
When the options clause is specified you may specify GATHER options. When
GATHER AUTO is specified, the only additional valid parameters are ownname,
stattab, statid, objlist and statown; all other parameter settings are ignored.
exec dbms_stats.gather_schema_stats( -
)
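A complete invocation might look like the following sketch (parameter values are examples, not recommendations):
exec dbms_stats.gather_schema_stats( -
   ownname          => 'SCOTT', -
   options          => 'GATHER AUTO', -
   estimate_percent => dbms_stats.auto_sample_size, -
   method_opt       => 'for all columns size repeat', -
   degree           => 7 -
)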
There are several values for the options parameter:
- gather empty: Only analyze tables that have no existing statistics.
- gather stale: Only re-analyze tables with more than 10% modifications (inserts, updates, deletes). gather stale requires monitoring.
- gather auto: Re-analyze objects which currently have no statistics and objects with stale statistics. Using gather auto is like combining gather stale and gather empty. Both gather stale and gather auto require monitoring.
If you issue the alter table xxx monitoring command, Oracle tracks changed tables
with the dba_tab_modifications view. Below we see that the exact number of
inserts, updates and deletes are tracked since the last analysis of statistics.
Name Type
------------------------------------------------------
TABLE_OWNER VARCHAR2(30)
TABLE_NAME VARCHAR2(30)
PARTITION_NAME VARCHAR2(30)
SUBPARTITION_NAME VARCHAR2(30)
INSERTS NUMBER
UPDATES NUMBER
DELETES NUMBER
TIMESTAMP DATE
TRUNCATED VARCHAR2(3)
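For example (a sketch; the table name is a placeholder):
alter table emp monitoring;
select table_name, inserts, updates, deletes, timestamp, truncated
  from dba_tab_modifications
 where table_owner = 'SCOTT';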
Because all statistics will become stale quickly in a robust OLTP database, we must
remember the rule for gather stale is > 10% row change (based on num_rows at
statistics collection time).
Almost every table except read-only tables will be re-analyzed with the gather stale
option. Hence, the gather stale option is best for systems that are largely read-only.
For example, if only 5% of the database tables get significant updates, then only 5%
of the tables will be re-analyzed with the “gather stale” option.
CASCADE option:
When analyzing specific tables, the cascade option can be used to analyze all related
objects based on foreign-key constraints. For example, stats$snapshot has foreign
key referential integrity into all subordinate tables ( stats$sysstat , etc.), so a single
analyze can invoke an analyze of all subordinate tables:
exec dbms_stats.gather_table_stats( -
degree => 7 -
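Written out in full, such a call might look like this sketch (owner and table names are examples):
exec dbms_stats.gather_table_stats( -
   ownname => 'PERFSTAT', -
   tabname => 'STATS$SNAPSHOT', -
   cascade => TRUE, -
   degree  => 7 -
)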
DEGREE Option:
Statistics collection performs full-table and full-index scans, and hence you can also parallelize the
collection of statistics. When you set degree=x , Oracle will invoke parallel query
slave processes to speed up table access. Degree is usually about equal to the
number of CPUs, minus 1 (for the OPQ query coordinator).
estimate_percent argument:
You can specify the sample size for dbms_stats . The estimate_percent argument
allows Oracle's dbms_stats to automatically estimate the best percentage of a
segment to sample when gathering statistics:
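For example (a sketch):
exec dbms_stats.gather_schema_stats( -
   ownname          => 'SCOTT', -
   estimate_percent => dbms_stats.auto_sample_size -
)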
You can verify the accuracy of the automatic statistics sampling by looking at the
dba_tables sample_size column. Oracle chooses between 5% to 20% for a
sample_size when using automatic sampling.
=================================================
The following script creates a table, inserts a row, then sets the table to read-
only.
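A minimal sketch of such a script (assuming a table called TEST):
create table test ( id number, description varchar2(50) );
insert into test values (1, 'initial row');
commit;
alter table test read only;
-- the statements below now fail with ORA-12081
update test set description = 'changed' where id = 1;
delete from test;
-- switch back to normal when finished
alter table test read write;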
Any DML statements that affect the table data and SELECT ... FOR UPDATE
queries result in an ORA-12081 error message.
ERROR at line 1:
ORA-12081: update operation not allowed on table "DDO"."Test"
ERROR at line 1:
ORA-12081: update operation not allowed
on table "ddo"."test"
ERROR at line 1:
ORA-12081: update operation not allowed
on table "ddo"."test"
Operations on indexes associated with the table are unaffected by the read-only
state.
DML and DDL operations return to normal once the table is switched back to
read-write mode.
Unused indexes waste space and have overhead for DML and hence they should be
removed from the database. Tracking unused indexes is tricky and should be done
with caution. Oracle 10g provides an easy approach for tracking unused indexes but
for Oracle 9i and Oracle 8i, this is not an easy task. In this article I will discuss some
ways to determine unused indexes in oracle databases.
One of the great features of Oracle9i is the ability to easily locate and remove
unused indexes. When an index is not used by SQL queries with the cost-based
optimizer, the unused indexes waste space and cause INSERT statements to run
slower.
When you issue the alter index <index_name> monitoring usage command, Oracle
places an entry in the v$object_usage view so you can see if the index is used. This
is just a bit-flag that is set to “1” when the index is accessed by any SQL statement.
Below is a simple SQL*Plus script to track all index usage in all Oracle schemas:
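A minimal sketch of the script body that generates the MONITORING USAGE commands, finishing with the spool off and @run_monitor lines below:
set heading off
set feedback off
set pages 999
spool run_monitor.sql
select 'alter index '||owner||'.'||index_name||' monitoring usage;'
  from dba_indexes
 where owner not in ('SYS','SYSTEM');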
spool off;
@run_monitor
After a significant amount of SQL has executed on the database you can query the
new v$object_usage view:
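For example, the following lists each monitored index and whether it has been used:
select index_name, table_name, monitoring, used
  from v$object_usage;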
The v$object_usage view has a column called USED which will be set to YES or NO.
This approach tracks whether indexes are used but does not tell how many times an index
has been used.
select
io.name, t.name,
decode(bitand(i.flags, 65536), 0, 'NO', 'YES'),
decode(bitand(ou.flags, 1), 0, 'NO', 'YES'),
ou.start_monitoring,
ou.end_monitoring
from
sys.obj$ io,
sys.obj$ t,
sys.ind$ i,
sys.object_usage ou
where io.owner# = userenv('SCHEMAID') and i.obj# = ou.obj# and
io.obj# = ou.obj# and t.obj# = i.bo#;
select
u.name "owner",
io.name "index_name",
t.name "table_name",
decode(bitand(i.flags, 65536), 0, 'no', 'yes') "monitoring",
decode(bitand(nvl(ou.flags,0), 1), 0, 'no', 'yes') "used",
ou.start_monitoring "start_monitoring",
ou.end_monitoring "end_monitoring"
from sys.obj$ io, sys.obj$ t, sys.ind$ i, sys.object_usage ou,
sys.user$ u
where t.obj# = i.bo# and io.owner# = u.user# and io.obj# =
i.obj# and u.name not in ('SYS','SYSTEM') and i.obj# = ou.obj#(+);
Oracle 10g enables one to easily see which indexes are used, when they are used
and the context in which they are used. Below is a simple AWR query to plot index
usage.
The below script tracks unused indexes, shows the invocation count of all indexes and the
columns referenced for multi-column indexes:
break on c1 skip 2
select begin_interval_time c1, count(*) c3 from dba_hist_sqltext
natural join dba_hist_snapshot
where lower(sql_text) like lower('%cust_name_idx%')
)
where
dup!=1
start with seq=1
connect by prior seq+1=seq
and prior index_owner=index_owner
and prior index_name=index_name
)) a,
(
select
table_owner,
table_name,
index_owner,
index_name,
substr(SYS_CONNECT_BY_PATH(column_name, ','),2) column_name_list
from
(
select index_owner, index_name, table_owner, table_name, column_name,
count(1) OVER ( partition by index_owner, index_name) cnt,
ROW_NUMBER () OVER ( partition by index_owner, index_name order by
column_position) as seq
from sys.dba_ind_columns
where index_owner not in ('SYS', 'SYSTEM'))
where seq=cnt
start with seq=1
connect by prior seq+1=seq
and prior index_owner=index_owner
and prior index_name=index_name
) b, dba_indexes i
where
a.dup=a.dup_mx
and a.index_owner=b.index_owner
and a.index_name=b.index_name
and a.index_owner=i.owner
and a.index_name=i.index_name
order by
a.table_owner, a.table_name, column_name_list_dup;
• Set analyze=n and analyze with dbms_stats after the load has completed.
• Many large companies use partitioned tables, and keep the current partition
on SSD for fast imports.
• The recordlength needs to be a multiple of your I/O chunk size and
db_block_size (or the non-default block size of the tablespace, if one is used).
• commit=n should be set for tables that can afford not to commit until the end
of the load. However larger tables may not be suitable for this option due to
the required rollback/undo space.
• A single large rollback segment can be created, taking all others offline
during the import.
• Index creation can be postponed until after the import completes by specifying
indexes=n . Setting indexes=n eliminates the index maintenance overhead during the
load. You can also use the indexfile parameter to generate a script to rebuild all the
indexes once the data is loaded.
• By using a larger buffer setting, import can do more work before disk access
is performed.
• You can also use the hidden parameter _disable_logging = true to reduce
redo, but beware that the resulting import will be unrecoverable.
The exact performance gain achieved depends upon the following factors:
• DB_BLOCK_SIZE
• The types of columns in your table
• The I/O layout
Oracle SQL*Loader is flexible and offers many options that should be considered to
maximize the speed of data loads. These include:
• The direct path loader ( direct=true ) loads directly into the Oracle data files
and creates blocks in Oracle database block format. To prepare the database
for direct path loads, the script $ORACLE_HOME/rdbms/admin/catldr.sql
must be executed.
• Disabling of indexes and constraints for conventional data loads can greatly
enhance the performance of SQL*Loader.
• Larger bind arrays limit the number of calls to the database and increase
performance for conventional data loads only.
• rows specifies the number of rows per commit. Issuing fewer commits
enhances performance for conventional data loads.
• Parallel Loads option allows multiple SQL*Loader jobs to execute concurrently.
• Fixed width data format saves Oracle some processing when parsing the
data.
• Disabling Archiving During Load option may not be feasible in certain
environments; however disabling database archiving can increase
performance considerably.
• The unrecoverable load data option disables the writing of the data to the
redo logs. This option is available for direct path loads only.
When using standard SQL statements to load Oracle data tables, there are several
tuning approaches:
Oracle Parsing:
Parsing is the first step in the processing of any statement in an Oracle database. The statement
is broken down into its component parts, the type of statement (DML, DDL, etc.) is
determined, and various checks are performed on it. A statement must be evaluated and validated
before execution. Oracle evaluates the statement for syntax, validity of the referenced objects and the
privileges assigned to the user.
The Oracle parsing process follows the steps below in order to execute the SQL statement and arrive at the
output.
Syntactical check:
The statement is checked for correct SQL syntax.
Semantic check:
In semantic check the query is checked for the validity of the objects being referred in the
statement and the privileges available to the user firing the statement. This is a data dictionary
check.
Allocation:
This step includes the allocation of private SQL area in the memory for the statement.
Generating Parsed Representation and allocation Shared SQL area:
In this step a parsed representation of the statement is generated and shared SQL area is
allocated. This involves finding an optimal execution path for the statement. Oracle first checks if
the same statement is already parsed and exists in the memory. If yes then soft parse will be done
in which the parsed representation will be picked up and the statement will be executed
immediately. However if the statement is not found then hard parsing will be done where the
parsed representation is generated and stored in a shared SQL area and then the statement is
executed.
Oracle does the following in order to decide on a soft parse or hard parse.
When a new statement is fired, a hash value is generated for the text string. Oracle checks if this
new hash value matches with any existing hash value in the shared pool.
In this step the text string of the new statement is compared with the hash value matching
statements.
If a match is found, the objects referred in the new statement are compared with the matching
statement objects. The bind variable types of the new statement should be of same type as the
identified matching statement.
If all of the above is satisfied, Oracle performs a soft parse, re-using the existing parsed
representation. However if a match is not found, Oracle performs a hard parse and goes through
the process of parsing the statement, generating the parsed representation and putting it in the
shared pool.
Check for statements with a lot of executions. Avoid PARSE_CALLS value close to the
EXECUTIONS value.
select parse_calls, executions,
substr(sql_text, 1, 300)
from v$sqlarea
where command_type in (2, 3, 6, 7);
The below code identifies sessions that involve a lot of re-parsing. Query these sessions from V$SESSION
and then locate the program that is being executed, resulting in so much parsing.
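A sketch of such a query, joining the session statistics to their names:
select s.sid, s.username, st.value parse_count
  from v$session s, v$sesstat st, v$statname sn
 where st.sid = s.sid
   and st.statistic# = sn.statistic#
   and sn.name = 'parse count (total)'
 order by st.value desc;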
Provide enough private SQL area to accommodate all of the SQL statements for a session.
Depending on the requirement, the parameter OPEN_CURSORS may need to be reset to a higher
value. Set the SESSION_CACHED_CURSORS to a higher value to allow more cursors to be
cached at session level and to avoid re-parsing.
The below code will help in identifying the open cursors for a session and how near the count is to
the OPEN_CURSORS parameter value. If the margin is very small, consider increasing the
OPEN_CURSORS parameter.
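A sketch of such a check:
select s.sid, s.username, st.value open_cursors,
       (select value from v$parameter where name = 'open_cursors') max_open_cursors
  from v$session s, v$sesstat st, v$statname sn
 where st.sid = s.sid
   and st.statistic# = sn.statistic#
   and sn.name = 'opened cursors current'
 order by st.value desc;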
The CACHE_CNT ('session cursor cache hits') of a session should be compared to the
PARSE_CNT ('parse count (total)'), if the difference is high, consider increasing the
SESSION_CACHED_CURSORS parameter.
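A sketch comparing the two statistics per session:
select c.sid, c.value cache_cnt, p.value parse_cnt
  from v$sesstat c, v$sesstat p, v$statname cn, v$statname pn
 where cn.name = 'session cursor cache hits'
   and pn.name = 'parse count (total)'
   and c.statistic# = cn.statistic#
   and p.statistic# = pn.statistic#
   and c.sid = p.sid
 order by c.sid;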
The shared SQL area can be further utilized for identical as well as somewhat similar queries by
setting the initialization parameter CURSOR_SHARING to FORCE. The default value is EXACT.
Try out this parameter for your application in test mode before making changes in production.
Pinning:
Pin frequent objects in memory using the DBMS_SHARED_POOL package. Use it to pin most
frequently used objects that should be in memory while the instance is up. Pin objects when the
instance starts to avoid memory fragmentation. Below code provides a list of frequently used and
re-loaded objects
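A sketch of such a query against the library cache:
select owner, name, type, executions, loads, kept
  from v$db_object_cache
 where loads > 1
   and type in ('PACKAGE','PACKAGE BODY','PROCEDURE','FUNCTION','TRIGGER')
 order by loads desc, executions desc;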
In order to pin a package in memory and to view the list of pinned objects, use below syntax
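For example (the package name is a placeholder; DBMS_SHARED_POOL must be installed via dbmspool.sql):
execute dbms_shared_pool.keep('SYS.STANDARD');
select owner, name, type from v$db_object_cache where kept = 'YES';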
The size of the shared pool can be increased by setting the parameter SHARED_POOL_SIZE in
the initialization file. Increasing the shared pool size is an immediate solution, but the above steps
need to be carried out to optimize the database in the long run.
SELECT
e.SID,
e.username,
e.status,
a.UGA_MEMORY,
b.PGA_MEMORY
FROM
-- Current UGA size for the session.
(select y.SID, TO_CHAR(ROUND(y.value/1024),99999999) || ' KB' UGA_MEMORY
from v$sesstat y, v$statname z where y.STATISTIC# = z.STATISTIC# and NAME =
'session uga memory') a,
-- Current PGA size for the session.
(select y.SID, TO_CHAR(ROUND(y.value/1024),99999999) || ' KB' PGA_MEMORY
from v$sesstat y, v$statname z where y.STATISTIC# = z.STATISTIC# and NAME =
'session pga memory') b,
v$session e
WHERE e.sid=a.sid
AND e.sid=b.sid
ORDER BY
e.status,
a.UGA_MEMORY desc
Pre-requisites
A valid full database backup of the target database
RMAN> list backup summary;
List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Tag
------- -- -- - ----------- --------------- ------- ------- ---
14 B A A DISK 03-NOV-04 1 1 TAG20041103T163334
15 B F A DISK 03-NOV-04 1 1 TAG20041103T163336
16 B A A DISK 03-NOV-04 1 1 TAG20041103T163651
17 B F A DISK 03-NOV-04 1 1
Target database must be mounted or open
$ sqlplus "/ as sysdba"
Steps Required
$ orapwd file=/u01/app/oracle/product/9.2.0/dbs/orapwTESTDB
password=change_on_install
Copy the initialization parameter from the target database and make the
necessary changes for the duplicated database.
$ export ORACLE_SID=ORA920
$ sqlplus "/ as sysdba"
SQL> create
pfile='/u01/app/oracle/product/9.2.0/dbs/initTESTDB.ora' from
spfile;
File created.
After creating the initialization parameter for the duplicate database, change
at least the following parameters:
db_file_name_convert = ('/u06/app/oradata/ORA920',
'/u06/app/oradata/TESTDB')
log_file_name_convert = ('/u03/app/oradata/ORA920',
'/u03/app/oradata/TESTDB',
'/u04/app/oradata/ORA920', '/u04/app/oradata/TESTDB',
'/u05/app/oradata/ORA920', '/u05/app/oradata/TESTDB')
control_files = '/u03/app/oradata/TESTDB/control01.ctl'
, '/u04/app/oradata/TESTDB/control02.ctl'
, '/u05/app/oradata/TESTDB/control03.ctl'
db_name = 'TESTDB'
instance_name = 'TESTDB'
audit_file_dest = '/u01/app/oracle/admin/TESTDB/adump'
background_dump_dest = '/u01/app/oracle/admin/TESTDB/bdump'
core_dump_dest = '/u01/app/oracle/admin/TESTDB/cdump'
user_dump_dest = '/u01/app/oracle/admin/TESTDB/udump'
service_names = 'TESTDB.IDEVELOPMENT.INFO'
dispatchers = '(PROTOCOL=TCP) (SERVICE=TESTDBXDB)'
log_archive_dest_1 = 'location=/u06/app/oradata/TESTDB/archive
mandatory'
$ mkdir /u01/app/oracle/admin/TESTDB
$ mkdir /u01/app/oracle/admin/TESTDB/adump
$ mkdir /u01/app/oracle/admin/TESTDB/bdump
$ mkdir /u01/app/oracle/admin/TESTDB/cdump
$ mkdir /u01/app/oracle/admin/TESTDB/create
$ mkdir /u01/app/oracle/admin/TESTDB/pfile
$ mkdir /u01/app/oracle/admin/TESTDB/scripts
$ mkdir /u01/app/oracle/admin/TESTDB/udump
$ mkdir /u03/app/oradata/TESTDB
$ mkdir /u04/app/oradata/TESTDB
$ mkdir /u05/app/oradata/TESTDB
$ mkdir /u06/app/oradata/TESTDB
$ mkdir /u06/app/oradata/TESTDB/archive
$ export ORACLE_SID=TESTDB
Modify both the listener.ora and tnsnames.ora file to be able to connect to the
auxiliary database. After making changes to the networking files, test the
connection keeping in mind that you must be able to connect to the auxiliary
instance with SYSDBA privileges, so a valid password file must exist.
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.5.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production
SQL>
$ export ORACLE_SID=ORA920
$ sqlplus "/ as sysdba"
SQL> startup open
Ensure You Have the Necessary Backups and Archived Redo Log Files
List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Tag
------- -- -- - ----------- --------------- ------- ------- ---
14 B A A DISK 03-NOV-04 1 1
TAG20041103T163334
15 B F A DISK 03-NOV-04 1 1
TAG20041103T163336
16 B A A DISK 03-NOV-04 1 1
TAG20041103T163651
17 B F A DISK 03-NOV-04 1 1
The following RUN block can be used to fully duplicate the target database
from the latest full backup.
Note that you can duplicate the database to a specific date/time using the
UNTIL TIME '<DATE>' clause. For example, to duplicate the new database to
yesterdays date/time, use the following:
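A sketch of such a RUN block (the channel name and the UNTIL TIME value are examples):
$ rman target sys/change_on_install@ORA920 auxiliary /
RMAN> run {
  set until time "to_date('02-NOV-2004 23:59:00','DD-MON-YYYY HH24:MI:SS')";
  allocate auxiliary channel ch1 device type disk;
  duplicate target database to TESTDB;
}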
database dismounted
Oracle instance shut down
database opened
Finished Duplicate Db at 03-NOV-04
RMAN> exit
In almost all cases, you will need to create the tempfiles for your temporary
tablespace:
$ export ORACLE_SID=TESTDB
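The command would be something like this sketch (path and size are examples):
SQL> alter tablespace temp add tempfile '/u06/app/oradata/TESTDB/temp01.dbf' size 500m;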
Tablespace altered.
Put the following line in init.ora. It will enable trace for all sessions and the
background
processes
sql_trace = TRUE
to disable trace:
sql_trace = FALSE
- or -
to enable tracing without restarting database run the following command in sqlplus
to start trace:
to stop trace:
- or -
- or -
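Commonly used session-level tracing commands (a sketch; sid and serial# are placeholders):
ALTER SESSION SET sql_trace = TRUE;                                   -- start tracing the current session
ALTER SESSION SET sql_trace = FALSE;                                  -- stop tracing the current session
EXECUTE dbms_session.set_sql_trace(TRUE);                             -- same, from PL/SQL
EXECUTE dbms_system.set_sql_trace_in_session(<sid>, <serial#>, TRUE); -- trace another session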
EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;
to start trace:
to stop trace:
- or -
which will prevent corruption from getting to your disks (at the cost of a
database crash).
For tracing of a MAX_CURSORS exceeded error:
For ORA-04030 errors: Take a dump by setting this event in your INIT file and
analyze the trace file. This will clearly pinpoint the problem.
alter session set events 'immediate trace name CONTROLF level 10'
alter session set events 'immediate trace name FILE_HDRS level 10'
alter session set events 'immediate trace name REDOHDR level 10'
alter session set events 'immediate trace name SYSTEMSTATE level 10'
You should be noticing a pattern here for tracing events related to error codes: the
first argument in the EVENT is the error code followed by the action you want to take
upon receiving the code.
Events are also used as the SESSION level using the ALTER SESSION command or
calls to the DBMS_SYSTEM.SET_EV() procedure. The general format for the ALTER
SESSION command is:
where:
ALTER SESSION SET EVENTS '10046 trace name context forever level NN'
where NN is the trace level: 1 (basic), 4 (includes bind variable values), 8 (includes wait events), 12 (binds and waits).
ALTER SESSION SET EVENTS 'immediate trace name coalesce level XX'
where:
ALTER SESSION SET EVENTS 'immediate trace name drop_segments level &x';
where:
To get the information out of the db block buffers regarding order of LRU chains:
ALTER SESSION SET EVENTS 'immediate trace name buffers level x';
where:
x is 1-3 for buffer header order or 4-6 for LRU chain order.
ALTER SESSION SET EVENTS '10297 trace name context forever, level 1';
To cause "QKA Disable GBY sort elimination". This affects how Oracle will process
sorts:
* You can disable the Index FFS using the event 10156. In this case, CBO will lean
toward FTS or Index scan.
* You can set the event 10092 if you want to disable the hash joins completely.
It is very easy to see how SMON cleans up rollback entries by using the event 10015.
You can use event 10235 to check how the memory manager works internally.
CBO is definitely not a mystery. Use event 10053 to give the detail of the various
plans considered, depending on the statistics available; be careful using this for large
multi-table joins, as the report can be quite lengthy! The data density, sparse
characteristics, index availability, and index depth all lead the optimizer to make its
decisions. You can see the running commentary in trace files generated by the 10053
event.
Virtual Indexes are an undocumented feature used by Oracle. These are pseudo-
indexes that will not behave the same way that normal indexes behave, and are
meant for a very specific purpose.
A virtual index is created in a slightly different manner than the normal indexes. A
virtual index has no segment attached to it, i.e., the DBA_SEGMENTS view will not
show an entry for this. Oracle handles such indexes internally and few required
dictionary tables are updated so that the optimizer can be made aware of its
presence and generate an execution plan considering such indexes.
As per Oracle, this functionality is not intended for standalone usage. It is part of the
Oracle Enterprise Manger Tuning Pack (Virtual Index Wizard). The virtual index
wizard functionality allows the user to test a potential new index prior to actually
building the new index in the database. It allows the CBO to assess the potential new
index for a selected SQL statement by building an explain plan that is aware of the
potential new index. This allows the user to determine if the optimizer would use the
index, once implemented.
Therefore, the feature is here to be supported from Enterprise Manager and not for
standalone usage. I went a bit further and actually tested it using SQL*Plus,
basically, trying to use the same feature but without the enterprise manager.
I do not see much use of Virtual Indexes in a development area where we can create
and drop indexes while testing. However, this feature could prove handy if a query or
group of queries has to be tested in production (for want of simulation or urgency!),
to determine if a new index will improve the performance, without impacting existing
or new sessions.
Their creation will not have an effect on existing and new sessions. Only sessions
marked for Virtual Index usage will become aware of their existence.
Hidden Parameter:
The hidden parameter _use_nosegment_indexes controls whether the optimizer will consider
virtual (NOSEGMENT) indexes; it must be set to TRUE in the session that wants to use them.
The Rule based optimizer did not recognize Virtual Indexes when I tested, however,
CBO recognizes them. In all of my examples, I have used CBO. However, I did not
carry out rigorous testing in RBO and you may come across exceptions to this view.
Dictionary View:
Dictionary view DBA_SEGMENTS will not show an entry for Virtual Indexes. The views
DBA_INDEXES and DBA_OBJECTS have an entry for them in Oracle 8i; from Oracle
9i onwards, DBA_INDEXES no longer shows Virtual Indexes.
Alteration:
ANALYZE Command:
Creating a Virtual Index can be achieved by using the NOSEGMENT clause with the
CREATE INDEX command.
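A sketch of creating one and enabling the session to use it (note that _use_nosegment_indexes is a hidden parameter; the exact statements from the original were not preserved):
SQL> create index am301_n1 on am301(col1) nosegment;
SQL> alter session set "_use_nosegment_indexes" = true;
SQL> set autotrace on
SQL> select * from am301 where col1 = 1234;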
no rows selected
Execution Plan
----------------------------------------------------------
As the Virtual Index has an entry in some of the dictionary tables, it will prevent the
creation of an object with the same name. The alternative is to drop and recreate the
Virtual Index as a real index.
Virtual Index: Drop and Recreate
SQL> drop index am301_n1;
Index dropped.
SQL> create index am301_n1 on am301(col1);
Index created.
However, a Virtual Index will not prevent the creation of an index on the same
column(s).
In the example below, a Virtual Index is created with name DUMMY, afterwards a
new index with a different name is created with the same column and structure. Both
of the indexes will show in the DBA_OBJECTS listing.
SQL> create index dummy on am310(col1, col2, col3) nosegment;
Index created.
SQL> create index am310_n1 on am310(col1, col2, col3);
Index created.
Conclusion
As I mentioned earlier, this is undocumented, so use it at your own risk. The above
feature may not be a must-use option, but is a good-to-know fact. Drop the index
once you are done with it, without fail! Its presence can baffle some of the regular
scripts that are run to monitor the databases.
================================================================================
Chris Marquez
Oracle DBA
====================================
Drop Rollback or UNDO Tablespace With Active / Corrupt / "NEEDS RECOVERY"
Segments
====================================
------------------------------------
The Issue:
------------------------------------
---SQL*PLUS
SQL> alter database mount;
Database altered.
__OR__
---alert.log
Errors in file /o01/app/oracle/admin/report/bdump/report_smon_1295.trc:
ORA-01578: ORACLE data block corrupted (file # 2, block # 192423)
ORA-01110: data file 2: '/o01/oradata/report/undotbs01.dbf'
*OR*
Tue May 31 13:56:41 2005
Errors in file /o01/app/oracle/admin/report/bdump/report_smon_1646.trc:
ORA-01595: error freeing extent (16) of rollback segment (4))
ORA-00607: Internal error occurred while making a change to a data block
ORA-00600: internal error code, arguments: [4193], [1088], [992], [],
[], [], [], []
*OR EVEN*
Sun Jul 17 01:25:56 2005
Errors in file /oracle//bdump/orcl_j001_115070.trc:
ORA-00603: ORACLE server session terminated by fatal error
ORA-00600: internal error code, arguments: [kteuPropTime-2], [], [],
[], [], [], [], []
+++++++++++++++++++++++++++++++++++++
A. IF YOU CAN STILL OPEN THE DATABASE
+++++++++++++++++++++++++++++++++++++
------------------------------------
UNDO/RBS Seem OK!?
------------------------------------
col segment_name format a15
select segment_name, status from dba_rollback_segs;
SEGMENT_NAME STATUS
--------------- ------------------------------------------------
SYSTEM ONLINE
_SYSSMU1$ ONLINE
_SYSSMU2$ ONLINE
...
------------------------------------
Edit init.ora to Comment UNDO/RBS parameters
------------------------------------
---vi init.ora
#undo_management=AUTO
#undo_tablespace=UNDOTBS
#undo_retention = 18000
------------------------------------
UNDO/RBS Issue Obvious now!
------------------------------------
shutdown
startup
col segment_name format a15
select segment_name, status from dba_rollback_segs;
SEGMENT_NAME STATUS
--------------- ------------------------------------------------
SYSTEM ONLINE
_SYSSMU1$ PARTLY AVAILABLE
_SYSSMU2$ OFFLINE
...
+++++++++++++++++++++++++++++++++++++
B. IF YOU CAN *NOT* OPEN THE DATABASE
+++++++++++++++++++++++++++++++++++++
------------------------------------
Edit init.ora to Comment UNDO/RBS parameters & ADD "_smu_debug_mode", event
10015
------------------------------------
---vi init.ora
#undo_management=AUTO
#undo_tablespace=UNDOTBS
#undo_retention = 18000
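The additional lines would look roughly like this (the _smu_debug_mode value is an assumption; confirm it against the relevant support note):
event = "10015 trace name context forever, level 10"
_smu_debug_mode = 4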
------------------------------------
startup Again
------------------------------------
SQL> startup nomount pfile=/.../init.ora.UNOD_PARAM;
ORACLE instance started.
SQL> alter database mount;
Database altered.
SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
------------------------------------
View the event 10015 trace file for corrupted rollback/undo segments
------------------------------------
udump/> more orcl_ora_815334.trc
....
Recovering rollback segment _SYSSMU2$
UNDO SEG (BEFORE RECOVERY): usn = 2 Extent Control Header
-----------------------------------------------------------------
+++++++++++++++++++++++++++++++++++++
NOW FIX CORRUPTED SEGMENTS
+++++++++++++++++++++++++++++++++++++
------------------------------------
Edit init.ora to "force" Rollback or UNDO offline
------------------------------------
SQL>select '"'||segment_name||'"'||',' from sys.dba_rollback_segs where
tablespace_name = 'UNDOTBS'
---vi init.ora
For example TRADITIONAL ROLLBACK SEGMENTS:
_OFFLINE_ROLLBACK_SEGMENTS=(rbs1,rbs2)
_CORRUPTED_ROLLBACK_SEGMENTS=(rbs1,rbs2)
------------------------------------
Drop Rollback or UNDO Segments:
------------------------------------
SQL>select 'drop rollback segment '||'"'||segment_name||'"'||';' from
sys.dba_rollback_segs where tablespace_name = 'UNDOTBS1'
1 rows selected.
------------------------------------
Drop The Rollback or UNDO Tablespace
------------------------------------
col FILE_NAME for a60
col BYTES for 999,999,999,999,999
select FILE_ID, BYTES, FILE_NAME from dba_data_files where TABLESPACE_NAME
='UNDOTBS';
FILE_ID BYTES FILE_NAME
---------- --------------------
------------------------------------------------------------
2 6,291,456,000 /o01/oradata/report/undotbs01.dbf
------------------------------------
RE-Create The Rollback or UNDO Tablespace
------------------------------------
SQL> CREATE UNDO TABLESPACE "UNDOTBS" DATAFILE
'/o01/oradata/orcl920/undotbs01.dbf' SIZE 500M REUSE AUTOEXTEND OFF;
Tablespace created.
------------------------------------
Edit init.ora to Comment _OFFLINE_ROLLBACK_SEGMENTS= and UNcomment
"undo_",
"rbs" parameters.
------------------------------------
---vi init.ora
#_OFFLINE_ROLLBACK_SEGMENTS
undo_management=AUTO
undo_tablespace=UNDOTBS
undo_retention = 18000
---alert.log
Mon May 16 17:50:02 2005
Database Characterset is WE8ISO8859P1
replication_dependency_tracking turned off (no async multimaster replication
found)
Completed: ALTER DATABASE OPEN
-----------------------------------
DOCS
-----------------------------------
Doc ID: Note:1013221.6
Subject: RECOVERING FROM A LOST DATAFILE IN A ROLLBACK TABLESPACE
Don Burleson
In rare cases (usually DBA error) the Oracle UNDO tablespace can become corrupted.
This manifests with this error: ORA-00376: file xx cannot be read at this time
Dropping the corrupt UNDO tablespace can be tricky and you may get the message:
ORA-00376: file string cannot be read at this time
select
segment_name,
status
from
dba_rollback_segs
where
tablespace_name = 'UNDOTBS_CORRUPT'
and
status = 'NEEDS RECOVERY';
SEGMENT_NAME STATUS
------------------------------ ----------------
_SYSSMU22$ NEEDS RECOVERY
_OFFLINE_ROLLBACK_SEGMENTS=_SYSSMU22$