How To Retrieve Entire SQL
select
  e.hash_value                                         "E.HASH_VALUE"
, e.module                                             "Module"
, e.buffer_gets - nvl(b.buffer_gets,0)                 "Buffer Gets"
, e.executions  - nvl(b.executions,0)                  "Executions"
, Round( decode( (e.executions - nvl(b.executions,0)), 0, to_number(NULL)
       , (e.buffer_gets - nvl(b.buffer_gets,0)) /
         (e.executions  - nvl(b.executions,0)) ), 3)   "Gets / Execution"
, Round( 100*(e.buffer_gets - nvl(b.buffer_gets,0))
       / sp920.getGets(:pDbId,:pInstNum,:pBgnSnap,:pEndSnap,'NO'), 3) "Percent of Total"
, Round( (e.cpu_time     - nvl(b.cpu_time,0))/1000000, 3)             "CPU (s)"
, Round( (e.elapsed_time - nvl(b.elapsed_time,0))/1000000, 3)         "Elapsed (s)"
, Round( e.fetches - nvl(b.fetches,0) )                               "Fetches"
, sp920.getSQLText( e.hash_value, 400 )                               "SQL Statement"
from stats$sql_summary e
   , stats$sql_summary b
where b.snap_id(+)         = :pBgnSnap
  and b.dbid(+)            = e.dbid
  and b.instance_number(+) = e.instance_number
  and b.hash_value(+)      = e.hash_value
  and b.address(+)         = e.address
  and b.text_subset(+)     = e.text_subset
  and e.snap_id            = :pEndSnap
  and e.dbid               = :pDbId
  and e.instance_number    = :pInstNum
order by 3 desc
Show SQL Stmts where SQL_TEXT like '%&lt;search string&gt;%'
select
  e.hash_value                                         "E.HASH_VALUE"
, e.module                                             "Module"
, e.buffer_gets - nvl(b.buffer_gets,0)                 "Buffer Gets"
, e.executions  - nvl(b.executions,0)                  "Executions"
, Round( decode( (e.executions - nvl(b.executions,0)), 0, to_number(NULL)
       , (e.buffer_gets - nvl(b.buffer_gets,0)) /
         (e.executions  - nvl(b.executions,0)) ), 3)   "Gets / Execution"
, Round( 100*(e.buffer_gets - nvl(b.buffer_gets,0))
       / sp920.getGets(:pDbId,:pInstNum,:pBgnSnap,:pEndSnap,'NO'), 3) "Percent of Total"
, Round( (e.cpu_time     - nvl(b.cpu_time,0))/1000000, 3)             "CPU (s)"
, Round( (e.elapsed_time - nvl(b.elapsed_time,0))/1000000, 3)         "Elapsed (s)"
, Round( e.fetches - nvl(b.fetches,0) )                               "Fetches"
, sp920.getSQLText( e.hash_value, 400 )                               "SQL Statement"
from stats$sql_summary e
   , stats$sql_summary b
where b.snap_id(+)         = :pBgnSnap
  and b.dbid(+)            = e.dbid
  and b.instance_number(+) = e.instance_number
  and b.hash_value(+)      = e.hash_value
  and b.address(+)         = e.address
  and b.text_subset(+)     = e.text_subset
  and e.snap_id            = :pEndSnap
  and e.dbid               = 2863128100
  and e.instance_number    = :pInstNum
  and sp920.getSQLText( e.hash_value, 400 ) like '%ZPV_DATA%'
order by 3 desc
Locate Server Workload from Statspack for Days in the Past
Change a.statistic# to the respective value (see the statistic# list further down).
Stats for Working Hours
select to_char(trunc(b.snap_time),'DD-MM-YYYY'), statistic#, name, sum(value)
  from stats$sysstat  a
     , stats$snapshot b
 where a.snap_id = b.snap_id
   and trunc(b.snap_time) > trunc(sysdate - 30)
   and to_char(b.snap_time,'HH24') > 8
   and to_char(b.snap_time,'HH24') < 18
   and a.statistic# = 54
 group by trunc(b.snap_time), statistic#, name
 order by trunc(b.snap_time)
Locate the kind of stats you want to pull from Statspack:

select * from stats$sysstat where name like '%XXX%';
9 session logical reads
Physical Reads
54 Physical Reads
56 Physical reads direct
58 physical read bytes
39 physical read total bytes
42 physical write total bytes
66 physical write bytes
66 physical writes
CPU Related
355 OS Wait-cpu (latency) time
328 parse time cpu
8 recursive cpu usage
Summary report of ASM disk groups and Space Utilised
PURPOSE : Provide a summary report of all disk groups.
SET LINESIZE 145
SET PAGESIZE 9999
SET VERIFY off
+----------------------------------------------------------------------------+
| Jeffrey M. Hunter |
|----------------------------------------------------------------------------|
| PURPOSE : Provide a summary report of all disks contained within all ASM |
| disk groups along with their performance metrics. |
| NOTE : As with any code, ensure to test this script in a development |
| environment before attempting to run it in production. |
+----------------------------------------------------------------------------+
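The query body for this first script is not included in the listing above; a minimal sketch against the I/O columns of v$asm_disk (reads, writes, errors, I/O times, bytes) might look like the following - the column choice is an assumption, not the original script:

SELECT
    a.name            disk_group_name
  , b.name            disk_file_name
  , b.path            disk_file_path
  , b.reads           reads
  , b.writes          writes
  , b.read_errs       read_errs
  , b.write_errs      write_errs
  , b.read_time       read_time
  , b.write_time      write_time
  , b.bytes_read      bytes_read
  , b.bytes_written   bytes_written
FROM
    v$asm_diskgroup a JOIN v$asm_disk b USING (group_number)
ORDER BY
    a.name
/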
+----------------------------------------------------------------------------+
| From : Jeffrey M. Hunter |
| PURPOSE : Provide a summary report of all disks contained within all disk |
| groups. This script is also responsible for querying all |
| candidate disks - those that are not assigned to any disk |
| group. |
+----------------------------------------------------------------------------+
SELECT
    NVL(a.name, '[CANDIDATE]')                     disk_group_name   -- restored: NULL group means a candidate disk
  , b.path                                         disk_file_path    -- restored: these first two columns were cut off in the listing
  , b.name                                         disk_file_name
  , b.failgroup                                    disk_file_fail_group
  , b.total_mb                                     total_mb
  , (b.total_mb - b.free_mb)                       used_mb
  , ROUND((1 - (b.free_mb / b.total_mb))*100, 2)   pct_used
FROM
    v$asm_diskgroup a RIGHT OUTER JOIN v$asm_disk b USING (group_number)
ORDER BY
    a.name
/
Mastering ASMCMD
cd Changes the current directory to the specified directory.
du Displays the total disk space occupied by ASM files in the specified ASM directory and all its
subdirectories, recursively.
exit Exits ASMCMD.
find Lists the paths of all occurrences of the specified name (with wildcards) under the specified directory.

ASMCMD> find +dgroup1 undo*
+dgroup1/SAMPLE/DATAFILE/UNDOTBS1.258.555341963
+dgroup1/SAMPLE/DATAFILE/UNDOTBS1.272.557429239

The following example returns the absolute path of all the control files in the +dgroup1/sample directory:

ASMCMD> find -t CONTROLFILE +dgroup1/sample *
+dgroup1/sample/CONTROLFILE/Current.260.555342185
+dgroup1/sample/CONTROLFILE/Current.261.555342183
ls Lists the contents of an ASM directory, the attributes of the specified file, or the names and attributes
of all disk groups.
lsct Lists information about current ASM clients.
lsdg Lists all disk groups and their attributes.
mkalias Creates an alias for a system-generated filename.
mkdir Creates ASM directories.
pwd Displays the path of the current ASM directory.
rm Deletes the specified ASM files or directories.
rmalias Deletes the specified alias, retaining the file that the alias points to.
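A few of these commands strung together in a quick session - list the disk groups, change into a datafile directory, take a long listing, check the space used, and confirm the current path. The +DATA/ORCL names are only illustrative, not from the original post:

ASMCMD> lsdg
ASMCMD> cd +DATA/ORCL/DATAFILE
ASMCMD> ls -l
ASMCMD> du
ASMCMD> pwd
+DATA/ORCL/DATAFILE
ASMCMD> exit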
List the Connected Users and Machines on the Database
SELECT s.username, s.logon_time, s.machine, s.osuser, s.program
FROM v$session s, v$process p, sys.v_$sess_io si
WHERE s.paddr = p.addr(+) AND si.sid(+) = s.sid
AND s.machine like '%otau157%' order by 3;
SELECT s.username, s.logon_time, s.machine, s.osuser, s.program
FROM v$session s, v$process p, sys.v_$sess_io si
WHERE s.paddr = p.addr(+) AND si.sid(+) = s.sid
AND s.type='USER';
Display tablespace usage
set linesize 1000
set trimspool on
set pagesize 32000
set verify off
set feedback off
PROMPT
PROMPT *************************
PROMPT *** TABLESPACE STATUS ***
PROMPT *************************
SELECT df.tablespace_name tsname
, round(sum(df.bytes)/1024/1024) tbs_size_mb
, round(nvl(sum(e.used_bytes)/1024/1024,0)) used
, round(nvl(sum(f.free_bytes)/1024/1024,0)) avail
, rpad(' '||rpad('X',round(sum(e.used_bytes)
*10/sum(df.bytes),0), 'X'),11,'-') used_visual
, nvl((sum(e.used_bytes)*100)/sum(df.bytes),0) pct_used
FROM sys.dba_data_files df
, (SELECT file_id
, sum(nvl(bytes,0)) used_bytes
FROM sys.dba_extents
GROUP BY file_id) e
, (SELECT max(bytes) free_bytes
, file_id
FROM dba_free_space
GROUP BY file_id) f
WHERE e.file_id(+) = df.file_id
AND df.file_id = f.file_id(+)
GROUP BY df.tablespace_name
ORDER BY 6;
This will produce results like:
XYZ Live Database
=================

                                     Size       Used       Free
Tablespace Name                      (MB)       (MB)       (MB)  Used        % Used
------------------------------ ---------- ---------- ---------- ----------- ------
STATSPACK                           2,048          0      2,047  ----------       0
TOOLS                               1,024          0      1,024  ----------       0
ACF_XYZ                             2,048          0      2,048  ----------       0
ACF_IABC                            2,048          3      2,045  ----------       0
UNDOTBS1                            1,024        337        449  XXX-------      33
SYSTEM                              1,024        557        467  XXXXX-----      54
SYSAUX                              5,000      2,738      1,032  XXXXX-----      55
USERS                              14,000      9,210      2,678  XXXXXXX---      66
UNDOTBS2                            1,024        703         20  XXXXXXX---      69
UNDOTBS3                            1,024        740          5  XXXXXXX---      72
Locate deprecated parameters
You can determine deprecated parameters using the column "isdeprecated" in the v$parameter view.
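For example, a quick check (assuming the column holds the strings 'TRUE'/'FALSE', as the other IS* columns in v$parameter do):

select name, value
  from v$parameter
 where isdeprecated = 'TRUE'
 order by name;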
Enabling ArchiveLog Mode in a RAC Environment
Log in to one of the nodes (e.g. linux1) and disable the cluster database parameter by setting
cluster_database to FALSE for the current instance:
$ sqlplus "/ as sysdba"
SQL> alter system set cluster_database=false scope=spfile sid='orcl1';
Shut down all instances accessing the clustered database:
$ srvctl stop database -d orcl
Using the local instance, MOUNT the database:
$ sqlplus "/ as sysdba"
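The listing stops after the connect; the remaining commands in the usual sequence (sketched here, reusing the orcl / orcl1 names from the example above) would be:

SQL> startup mount
SQL> alter database archivelog;
SQL> alter system set cluster_database=true scope=spfile sid='orcl1';
SQL> shutdown immediate
SQL> exit

$ srvctl start database -d orcl
$ sqlplus "/ as sysdba"
SQL> archive log list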
Gathering Optimizer Statistics with DBMS_STATS

GATHER_SCHEMA_STATS (ownname, estimate_percent, block_sample, method_opt, degree, granularity,
                     cascade, stattab, statid, options, statown, no_invalidate, gather_temp, gather_fixed);

SQL> begin
       dbms_stats.gather_schema_stats
         (ownname          => 'DOTCOM',
          estimate_percent => 100,
          method_opt       => 'for all indexed columns',
          degree           => 16,
          cascade          => TRUE);
     end;
     /

GENERATE_STATS (ownname, objname, organized);

GATHER_SYSTEM_STATS (gathering_mode, interval, stattab, statid, statown);

GATHER_TABLE_STATS (ownname, tabname, partname, estimate_percent, block_sample, method_opt,
                    degree, granularity, cascade, stattab, statid, statown, no_invalidate, stattype);
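A single-table equivalent of the schema-level example above (the schema and table names here are placeholders, not from the original post):

SQL> begin
       dbms_stats.gather_table_stats
         (ownname          => 'DOTCOM',    -- placeholder schema
          tabname          => 'ORDERS',    -- placeholder table
          estimate_percent => 100,
          method_opt       => 'for all indexed columns',
          cascade          => TRUE,        -- gather index statistics as well
          degree           => 4);
     end;
     /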
How to Backup/Export Oracle Optimizer Statistics into a Table
Exporting and Importing Statistics
Caveat: always run the export/import (and the exp/imp utility) as the schema user who owns the tables. I wasted
a week exporting as DBA on behalf of the XYZ user and then importing into a different system under a different
username.
Statistics can be exported and imported from the data dictionary to user-owned tables. This enables you
to create multiple versions of statistics for the same schema. It also enables you to copy statistics from
one database to another database. You may want to do this to copy the statistics from a production
database to a scaled-down test database.
Note:
Exporting and importing statistics is a distinct concept from the EXP and IMP utilities of the database. The
DBMS_STATS export and import procedures do not use EXP and IMP dump files; they read and write a user statistics table.
Before exporting statistics, you first need to create a table for holding the statistics. This statistics table
is created using the procedure DBMS_STATS.CREATE_STAT_TABLE. After this table is created, then you
can export statistics from the data dictionary into your statistics table using the
DBMS_STATS.EXPORT_*_STATS procedures. The statistics can then be imported using the
DBMS_STATS.IMPORT_*_STATS procedures.
Note that the optimizer does not use statistics stored in a user-owned table. The only statistics used by
the optimizer are the statistics stored in the data dictionary. In order to have the optimizer use the
statistics in a user-owned table, you must import those statistics into the data dictionary using the
statistics import procedures.
In order to move statistics from one database to another, you must first export the statistics on the first
database, then copy the statistics table to the second database, using the EXP and IMP utilities or other
mechanisms, and finally import the statistics into the second database.
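For the copy itself, something along these lines works with the original export utilities (the usernames, passwords and file names below are placeholders):

$ exp dba_admin/password tables=STATS_TABLE file=stats_table.dmp log=exp_stats.log
# transfer stats_table.dmp to the second database server, then:
$ imp dba_admin/password tables=STATS_TABLE file=stats_table.dmp log=imp_stats.log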
Note:
The EXP and IMP utilities export and import optimizer statistics from the database along with the table.
One exception is that statistics are not exported with the data if a table has columns with system-generated names.
Restoring Statistics Versus Importing or Exporting Statistics
The functionality for restoring statistics is similar in some respects to the functionality of importing and
exporting statistics. In general, you should use the restore capability when:
* You want to recover older versions of the statistics, for example, to restore the optimizer behavior to an earlier date.
* You want the database to manage the retention and purging of statistics histories.

You should use the EXPORT/IMPORT_*_STATS procedures when:
* You want to experiment with multiple sets of statistics and change the values back and forth.
* You want to move the statistics from one database to another database. For example, moving statistics
from a production system to a test system.
* You want to preserve a known set of statistics for a longer period of time than the desired retention
date for restoring statistics.
1. Create the statistics table.

exec DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCHEMA_NAME', stat_tab => 'STATS_TABLE', tblspace => 'STATS_TABLESPACE');

>>>>>>>> For 10g
exec DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SYSTEM', stat_tab => 'STATS_TABLE');

>>>>>>>> For 9i and earlier
begin
  DBMS_STATS.CREATE_STAT_TABLE('dba_admin','STATS_TABLE');
end;
/

2. Export statistics to the statistics table.

exec DBMS_STATS.EXPORT_SCHEMA_STATS('ORIGINAL_SCHEMA','STATS_TABLE',NULL,'SYSTEM');

3. Import statistics into the data dictionary.

exec DBMS_STATS.IMPORT_SCHEMA_STATS('NEW_SCHEMA','STATS_TABLE',NULL,'SYSTEM');