Oracle Performance Checks
Below are the ways you can start with your investigation:
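A starting point is to query V$SQL for statements with heavy disk I/O. The original script is not shown in this text, so the following is a sketch: the column list and the 10,000-read threshold are illustrative, though the 0.01 in the denominator matches the note below.

SELECT Disk_Reads DISK, Executions EXEC,
       Disk_Reads / (Executions + 0.01) RATIO,
       SQL_Text TEXT
FROM V$SQL
WHERE Disk_Reads > 10000
ORDER BY Disk_Reads DESC;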
Result:
DISK    EXEC   TEXT
79204   5      SELECT Ename FROM Emp;
Explanation: The SQL retrieves every single employee name from the EMP table. The problem is that
each of the five executions required about 16,000 disk reads (79204/5).
SQL with high DISK_READS generates a disk-I/O-intensive load (a high number of physical reads).
Note: The small decimal 0.01 is added to the denominator to prevent a divide-by-zero error when a
statement has zero executions. You can use any small number of your choice.
Besides looking for excessive disk reads, it is wise to look at the number of logical reads as well. Although
not as expensive as disk reads, there is nevertheless a cost associated with each logical read.
Logical reads are reported in the column called BUFFER_GETS in the V$SQL view.
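A corresponding query for logical reads might look like the following. Again, this is a sketch rather than the author's original script; the 20,000 gets-per-execution threshold follows the explanation below.

SELECT Buffer_Gets GETS, Executions EXEC,
       Buffer_Gets / (Executions + 0.01) RATIO,
       SQL_Text TEXT
FROM V$SQL
WHERE Buffer_Gets / (Executions + 0.01) > 20000
ORDER BY Buffer_Gets DESC;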
Result:
GETS     TEXT
121305
Explanation: Excessive logical reads frequently indicate a problem SQL statement. This script uses a
threshold of 20,000 buffer gets per execution, versus the 10,000 disk reads used in the previous query.
When searching for troublesome SQL statements, be cautious with criteria based on disk reads. Once the
data is cached, the number of disk reads will probably fall sharply on subsequent executions, especially if
the query is run several times in quick succession. Logical reads, on the other hand, will be nearly
identical on subsequent executions.
Result:
EXEC      DISK      TEXT
1231421   3518243
Explanation: In this SQL, each individual execution runs fine, consuming only about 3 disk reads per
execution (3518243/1231421).
The problem here is not a poorly tuned SQL statement; rather, it is the huge number of executions
(1231421).
One should now be curious to know why this statement has been executed over one million times, or why
its result was not cached.
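Statements like this can be surfaced by sorting V$SQL on execution count rather than on reads. The following is a sketch; the 100,000-execution cutoff is an arbitrary illustration.

SELECT Executions EXEC, Disk_Reads DISK,
       Disk_Reads / (Executions + 0.01) RATIO,
       SQL_Text TEXT
FROM V$SQL
WHERE Executions > 100000
ORDER BY Executions DESC;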
When searching for bad SQL, the lack of bind variables presents a slight difficulty: the query of V$SQL
might return hundreds or even thousands of different SQL statements. Each SQL statement differs only
slightly; nevertheless, the shared SQL area (shared pool) in memory treats each statement as unique.
In these scenarios, the question is: how do we group all the similar statements together?
There are several ways to accomplish this, but one very simple way is to group the SQL statements by the
amount of memory they consume. This tactic works because SQL statements that are identical except for
one parameter typically consume exactly the same amount of memory. In the V$SQL view, this value is
called PERSISTENT_MEM.
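A grouping query along these lines might look like the following sketch; the HAVING threshold is illustrative, chosen only to filter out low-impact groups.

SELECT Persistent_Mem MEM, SUM(Disk_Reads) DISK
FROM V$SQL
GROUP BY Persistent_Mem
HAVING SUM(Disk_Reads) > 100000
ORDER BY SUM(Disk_Reads);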
Result:
MEM       DISK
422       516
1014157   1084906
163       2004713
682       5719359
Explanation: We see from this query that the SQL statements having a memory usage of 682 bytes are
responsible for over 5 million disk reads. You might think there is only one such statement, but adding
COUNT(*) to the query would show how many SQL statements (whether the same or different) use this
amount of memory.
Having identified this row, the next step is to list some of the individual SQL statements having the
shown value for PERSISTENT_MEM. When using this method, note that there will occasionally be other,
innocent SQL statements that happen to have exactly the same value for PERSISTENT_MEM.
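Listing the individual statements for one memory value might look like this; 682 is simply the value taken from the result above.

SELECT SQL_Text
FROM V$SQL
WHERE Persistent_Mem = 682;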
This is a minor inconvenience that can be rectified by using the SUBSTR function to select only the first
few characters, as given below:
SELECT SUBSTR(sql_text, 1, 50) similar_sql, COUNT(*)
FROM V$SQL
GROUP BY SUBSTR(sql_text, 1, 50)
HAVING COUNT(*) > 1000
ORDER BY COUNT(*);
This code lists and counts the occurrences of SQL statements that are identical for at least the first 50
characters.
V$SQL LIMITATIONS
When querying the V$SQL view, remember that the statistics are not kept in memory forever; depending
on the size of the shared pool, statistics may soon be aged out, making them useless.
In some cases, it is also helpful to flush the shared pool prior to running the application in question. This
resets the statistics in the view, so that there is no confusion about which statistics came from prior
operations. Of course, flushing the shared pool should be done very carefully in production, as it may
further degrade performance: every SQL statement received will have to be parsed again, because each
one will be seen as new.
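If you do decide to flush, the command itself is simple; for the reasons above, run it only on a test system or during a quiet period.

ALTER SYSTEM FLUSH SHARED_POOL;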
The vast majority of sessions are inactive; that is, they are not really doing anything. The user is still
connected to the database, but no queries are being run at present.
2) Activating SQL_TRACE
SQL tracing is a very powerful tool for finding out exactly what an application is doing. You have two
choices for starting the trace: either trace a particular session or trace the entire database. Both have their
uses, and it is important to understand clearly how to activate each method.
If tracing is desired at the entire-database level, simply change one parameter in the init.ora file and then restart the database:
SQL_TRACE = TRUE
For session-level tracing, simply issue the following command in SQL*Plus:
ALTER SESSION SET SQL_Trace = TRUE;
Similarly, to disable tracing for your own session, simply issue this command:
ALTER SESSION SET SQL_Trace = FALSE;
However, you might ask how an application could issue the above statements. The answer is that it
probably cannot, but of course there are ways to achieve the same effect.
In order to activate tracing for another session, it is first necessary to obtain the SID and SERIAL# for that
session. These are easily retrieved from the V$SESSION dynamic view. Once these two values are
known, you can issue the following command:
EXECUTE SYS.DBMS_SYSTEM.set_sql_trace_in_session(SID, SERIAL#, TRUE);
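The two identifiers can be looked up first with a query such as the following; filtering on a username is just one option, and 'SCOTT' is a placeholder.

SELECT SID, Serial#, Username, Status
FROM V$SESSION
WHERE Username = 'SCOTT';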
Use a SQL hint to place flags in the SQL trace file. If many different trace files are being generated, this
will assist the analyst in identifying which trace is which.
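One simple way to leave such a flag is to run a statement whose only purpose is to deposit a recognizable marker in the trace; the comment text here is, of course, arbitrary.

SELECT /* TRACE MARKER: start of payroll batch */ 1 FROM DUAL;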
To obtain the maximum benefit from the trace file, the timing flag should be turned on; by default,
timing is turned off.
Thus it is good practice to activate timing by including the following init.ora parameter:
Timed_Statistics = True
For an individual session, timing is easily enabled with the following command:
ALTER SESSION SET Timed_Statistics = True;
For the database as a whole, statistics may be activated with the following command:
ALTER SYSTEM SET Timed_Statistics = True;
TIMED_STATISTICS produces only a very slight performance degradation; the benefits far outweigh the cost.