Oracle File Structure
We ran into a problem caused by a badly designed database file layout: the SAP application, the Oracle data files, and the redo logs were all stored on a single disk.
A higher-than-normal load on the SAP system caused the entire system to crawl. A quick check of a few transactions, including ST06, showed that disk response times were very high, at more than 500 ms.
Before planning to redistribute the data files and redo logs to a new disk, someone wanted to try a database reorganization first. Reorganizing the database reduces data fragmentation, so a read requires fewer blocks and the load on the disk drops.
After the reorganization, the problem was gone!
Labels: basis, oracle, performance, sap
COMPATIBLE
Defines the Oracle version whose features can be used to the greatest extent
As a rule, it must not be reset to an earlier release (see SAP Note 598470).
A value with three parts (such as 10.2.0) rather than five parts (such as 10.2.0.2.0) is
recommended to avoid changing the parameter as part of a patch set installation.
If an ORA-00201 error occurs when you try to change the five-part value 10.2.0.2.0 to 10.2.0, you can leave the value at 10.2.0.2.0 (independent of the patch set used).
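The recommended three-part value can be derived mechanically from a five-part patch-set version; a minimal sketch (the version string is an example from the text):

```shell
# Sketch: derive the recommended three-part COMPATIBLE value from a
# five-part patch-set version. 10.2.0.2.0 is the example from the note.
FULL_VERSION=10.2.0.2.0
COMPATIBLE=$(echo "$FULL_VERSION" | cut -d. -f1-3)   # keep first three parts
echo "COMPATIBLE = $COMPATIBLE"
```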
CONTROL_FILES
CONTROL_FILE_RECORD_KEEP_TIME
Defines how many days of historical data are retained in the control files
CORE_DUMP_DEST
DB_BLOCK_SIZE
Can be set to a value higher than 8K in well-founded individual cases after it has been
approved by SAP Support (see Note 105047)
DB_CACHE_SIZE
Optimal size depends on the available memory (see Notes 789011 and 617416)
DB_FILES
DB_NAME
DB_WRITER_PROCESSES
EVENT
FILESYSTEMIO_OPTIONS
Activation of file system functions (see Note 999524 and Note 793113)
If you previously used a large file system cache (>= 2 * Oracle buffer pool), performance may degrade after direct I/O is activated by setting FILESYSTEMIO_OPTIONS to SETALL. It is therefore important to enlarge the Oracle buffer pool to replace the file system cache that is no longer available.
HPUX_SCHED_NOAGE
The privileges RTSCHED and RTPRIO must be assigned to the dba group to enable you
to use the functions (see Note 1285599).
LOG_ARCHIVE_DEST
LOG_ARCHIVE_DEST_1
LOG_ARCHIVE_FORMAT
To avoid the problems described in Note 132551, it must be set explicitly, at least on Windows.
LOG_BUFFER
Oracle internally determines the buffer's actual size, so it is normal for "SHOW
PARAMETER LOG_BUFFER" or a SELECT on V$PARAMETER to return values
between 1MB and 16MB.
LOG_CHECKPOINTS_TO_ALERT
MAX_DUMP_FILE_SIZE
A limitation is useful to avoid file system overflows and to reduce the duration of the
dump generation.
OPEN_CURSORS
OPTIMIZER_DYNAMIC_SAMPLING
Level 2 (the default setting for Oracle 10g): dynamic sampling is performed only if tables do not have any statistics.
Level 6: as level 2, plus dynamic sampling of 128 table blocks when literals are used instead of bind variables.
OPTIMIZER_INDEX_COST_ADJ
Adjusts the calculated index costs; a value of 20 (percent), for example, reduces index costs by a factor of 5.
A value lower than 100 is advisable so that index accesses are preferred instead of full
table scans.
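The scaling is linear, as this small sketch illustrates (the cost figure is a made-up example, not a real CBO output):

```shell
# Sketch: how OPTIMIZER_INDEX_COST_ADJ scales the CBO's index costs.
# ORIGINAL_INDEX_COST is a hypothetical value for illustration only.
OPTIMIZER_INDEX_COST_ADJ=20     # percent
ORIGINAL_INDEX_COST=1000        # hypothetical cost computed by the CBO
ADJUSTED_COST=$(( ORIGINAL_INDEX_COST * OPTIMIZER_INDEX_COST_ADJ / 100 ))
echo "adjusted index cost: $ADJUSTED_COST"   # 20% of 1000, i.e. a factor of 5
```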
PARALLEL_EXECUTION_MESSAGE_SIZE
Defines the size of the memory area for parallel query messages (in bytes)
PARALLEL_MAX_SERVERS
Defines the maximum number of parallel execution processes (see Note 651060)
The number of CPU Cores generally corresponds to the default value for the Oracle
parameter CPU_COUNT. If you are unsure in individual cases, you can use the value of
the parameter CPU_COUNT (for example, in transaction DB26).
If the database shares the server with other software (for example, an SAP central instance or other Oracle instances), you can also select a lower value (for example, with 8 CPU cores and the SAP central instance and Oracle database sharing resources 50:50 -> PARALLEL_MAX_SERVERS = 8 * 0.5 * 10 = 40).
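The sizing rule above can be sketched as a small calculation; the core count and the 50:50 share are the example values from the text, not recommendations:

```shell
# Sketch of the PARALLEL_MAX_SERVERS rule of thumb from the text:
# cores * Oracle's share of the server * 10. Example values only.
CPU_CORES=8
ORACLE_SHARE_PERCENT=50   # database and SAP central instance share 50:50
PARALLEL_MAX_SERVERS=$(( CPU_CORES * ORACLE_SHARE_PERCENT * 10 / 100 ))
echo "PARALLEL_MAX_SERVERS = $PARALLEL_MAX_SERVERS"   # 8 * 0.5 * 10 = 40
```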
PARALLEL_THREADS_PER_CPU
Defines the number of parallel query processes that can be executed in parallel for each
CPU
Influences the DEFAULT level of parallel processing during a parallel execution (see
Note 651060).
PGA_AGGREGATE_TARGET
Checks the available PGA memory (see Notes 789011 and 619876)
PROCESSES
The component relating to ABAP work processes is only relevant in systems with ABAP
stacks. The component relating to J2EE server processes is only relevant in systems with
Java stacks.
<max-connections> indicates the maximum number of connections (also called pool size)
of the J2EE system DataSource (sysDS.maximumConnections). You can set the value of
this parameter using the VisualAdmin tool or other J2EE administration tools.
QUERY_REWRITE_ENABLED
Defines whether query transformations are also factored in when the access path is
determined
RECYCLEBIN
REMOTE_OS_AUTHENT
Defines whether TCP database access via OPS$ users is allowed (see Note 400241)
REPLICATION_DEPENDENCY_TRACKING
Defines whether the system has to create replication information when the database is
accessed
SESSIONS
Defines the maximum number of Oracle sessions that exist in parallel - must be
configured larger than PROCESSES, since single processes can serve several sessions
(for example, in the case of multiple database connections from work processes)
SHARED_POOL_SIZE
Defines the size of the Oracle shared pool (see Notes 690241 and 789011)
STAR_TRANSFORMATION_ENABLED
UNDO_MANAGEMENT
UNDO_TABLESPACE
USER_DUMP_DEST
_B_TREE_BITMAP_PLANS
Defines whether data of a B*TREE index can be converted into a bitmap representation during a database access.
_BLOOM_FILTER_ENABLED
_DB_BLOCK_NUMA
_ENABLE_NUMA_OPTIMIZATION
_FIX_CONTROL
Note 1454675 describes a problem whereby the _FIX_CONTROL values do not work
despite being displayed correctly in V$PARAMETER.
_CURSOR_FEATURES_ENABLED
A value of 10, in connection with fix 6795880, prevents sporadic hangs during parsing.
_FIRST_SPARE_PARAMETER
This is a generic parameter that can be used for different purposes in certain cases.
With Oracle 10.2.0.4 and fix 6904068, this parameter introduces a pause of 1/100 second between two "cursor: pin S" mutex requests instead of issuing them continuously. This may help to avoid critical CPU bottlenecks.
_INDEX_JOIN_ENABLED
Controls whether index joins can be used or not; within an index join, two indices of a
table are directly linked together.
_IN_MEMORY_UNDO
_OPTIM_PEEK_USER_BINDS
Defines whether Oracle takes the contents of the bind variables into account during
parsing
May cause various problems (Notes 755342, 723879) if not set to FALSE.
_OPTIMIZER_BETTER_INLIST_COSTING
If the parameter is set to OFF, long IN lists are costed too favorably. With the value ALL, the CBO performs a reasonable cost calculation for IN lists. Therefore, you should always use the default value ALL (that is, do not set the parameter at all). If the CBO makes incorrect decisions in individual cases, those decisions must be analyzed and corrected individually.
_OPTIMIZER_MJC_ENABLED
_PUSH_JOIN_UNION_VIEW
Controls whether join predicates may be used in a UNION ALL construct beyond the
view boundaries.
_SORT_ELIMINATION_COST_RATIO
Controls rule-based CBO decision in connection with the FIRST_ROWS hint and
ORDER BY (see Note 176754).
_TABLE_LOOKUP_PREFETCH_SIZE
Controls whether table prefetching is used (a value of zero means no table prefetching).
Labels: basis, oracle, sap, sap netweaver
exit
When you execute this from the SQL prompt, the number of employees is returned.
When a script executes the SQL file, it first enters sqlplus, which prints a banner and the initial SQL prompt; we don't need either of those, so sqlplus is called "silently" using the -S option.
Now we assign the total number of employees to a UNIX variable using command substitution:
SUMEMP=`sqlplus -S user/pass @getemployeecount.sql`
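The pattern can be exercised without a live database by substituting a stub for sqlplus; the stub function and its padded output are assumptions for illustration:

```shell
# Sketch: capture a scalar query result into a UNIX variable via command
# substitution. sqlplus is stubbed out here; in a real script the call
# would be: SUMEMP=`sqlplus -S user/pass @getemployeecount.sql`
sqlplus_stub() { printf '\n   1234\n\n'; }   # emulates sqlplus -S output

SUMEMP=$(sqlplus_stub | tr -d '[:space:]')   # strip padding and newlines
echo "Total employees: $SUMEMP"
```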
Labels: basis, oracle, unix
Now the following SQL lists the corrupt segments, including indexes (V$DATABASE_BLOCK_CORRUPTION is populated by RMAN validation runs such as BACKUP VALIDATE):
select distinct b.file#, a.segment_name, a.segment_type, a.tablespace_name, a.partition_name
from dba_extents a, v$database_block_corruption b
where a.file_id = b.file#
  and a.block_id <= b.block#
  and a.block_id + a.blocks >= b.block#;
Labels: oracle
One of the most important steps in an Oracle DB restore/recovery is control file creation on the target system, as the file locations and SID of the database change. Here are the steps to create the control file:
Generate the control file trace on the source system:
1. Ensure that the source DB is in OPEN or MOUNTED mode by running the following command:
select open_mode from v$database;
The output should be MOUNTED or READ WRITE.
2. Write the control file to trace by running the following command
alter database backup controlfile to trace;
3. Find out where the trace is written by running the following:
show parameter dump;
For an SAP Oracle database, the location is most likely /oracle/<SID>/saptrace/diag/rdbms/<sid>/<SID>/trace. Check the latest trace file.
4. Open the file and copy the section resembling the following into a new file (for example, createcontrolfile.sql), removing all lines above STARTUP NOMOUNT, changing REUSE to SET (because the SID is changing), and replacing the production SID with the QA SID:
ARCHIVELOG
'/oracle/<SID>/sapdata3/sr3700_10/sr3700.data10',
'/oracle/<SID>/sapdata3/sr3700_11/sr3700.data11',
'/oracle/<SID>/sapdata3/sr3700_12/sr3700.data12',
'/oracle/<SID>/sapdata4/sr3700_13/sr3700.data13',
'/oracle/<SID>/sapdata4/sr3700_14/sr3700.data14',
'/oracle/<SID>/sapdata4/sr3700_15/sr3700.data15',
'/oracle/<SID>/sapdata4/sr3700_16/sr3700.data16',
'/oracle/<SID>/sapdata1/sr3usr_1/sr3usr.data1'
CHARACTER SET UTF8
;
5. Adjust the SID and datafile locations as per the target system.
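The mechanical part of steps 4 and 5 can be scripted; this sketch assumes placeholder SIDs PRD (source) and QAS (target) and a two-line stand-in for the real trace-derived script. Datafile paths still need manual review:

```shell
# Sketch: scripted REUSE->SET and SID substitutions on the trace-derived
# script. PRD and QAS are placeholder SIDs; the input below stands in
# for the real createcontrolfile.sql copied from the trace file.
cat > createcontrolfile.sql <<'EOF'
CREATE CONTROLFILE REUSE DATABASE "PRD" ARCHIVELOG
'/oracle/PRD/sapdata1/sr3usr_1/sr3usr.data1'
EOF

SRC_SID=PRD
TGT_SID=QAS
sed -e 's/REUSE DATABASE/SET DATABASE/' \
    -e "s/${SRC_SID}/${TGT_SID}/g" \
    createcontrolfile.sql > createcontrolfile_target.sql
cat createcontrolfile_target.sql
```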
Run control file on the target system after datafiles are restored:
1. Start the database on NOMOUNT mode
startup nomount;
2. Run the createcontrolfile.sql file created in step 4 on the source system:
@createcontrolfile.sql
3. Check that the database is in MOUNTED state by running the SQL command:
select status from v$instance;
4. Recover the database using one of the following options (until a specific time, or until all redo logs in oraarch are applied):
recover database using backup controlfile until time '2013-08-17:11:56:00';
recover database until cancel using backup controlfile;
You will get the following prompt:
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
Choose AUTO.
Once the redo logs are applied, you will get the prompt again; this time choose CANCEL.
5. Open the database
alter database open resetlogs;
6. Create the temporary tablespace. The trace file created on the source system carries the command to recreate the temporary tablespace; it will resemble the following syntax:
ALTER TABLESPACE PSAPTEMP ADD TEMPFILE
'/oracle/<SID>/sapdata3/temp_1/temp.data1' SIZE 4000M REUSE
AUTOEXTEND ON NEXT 20971520 MAXSIZE 10000M;
Labels: basis basics, Basis interviews, oracle, r3, sap, sap netweaver
between 3000 and 30000 (assuming an employee belongs to at least one department and can clock for multiple departments).
The CBO is not intelligent enough to know these relations, and this assumption can have a serious performance impact on join operations.
In order to calculate better statistics, we can use extended statistics from Oracle 11g onwards.
SAP has provided these statistics for AUSP, BKPF, MSEG and HRP1001 tables as part of SAP
note 1020260.
You can run the following commands to define the extended statistics on the tables listed above in an SAP ERP system.
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SAPR3', 'AUSP',
  '(MANDT, KLART, ATINN)') FROM DUAL;
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SAPR3', 'BKPF',
  '(MANDT, BUKRS, BSTAT)') FROM DUAL;
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SAPR3', 'HRP1001',
  '(RELAT, SCLAS, OTYPE, PLVAR)') FROM DUAL;
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SAPR3', 'MSEG',
  '(MANDT, MATNR, WERKS, LGORT)') FROM DUAL;
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SAPR3', 'MSEG',
  '(MANDT, MBLNR, MJAHR)') FROM DUAL;
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SAPR3', 'MSEG',
  '(MANDT, WERKS, BWART)') FROM DUAL;
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SAPR3', 'MSEG',
  '(MANDT, WERKS, BWART, LGORT)') FROM DUAL;
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SAPR3', 'MSEG',
  '(MANDT, WERKS, LGORT)') FROM DUAL;
If you are aware of similar relationships, you can use the following syntax.
To define extended statistics:
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('<owner>',
'<table_name>', '(<col1>, ..., <colN>)') FROM DUAL;
To define and gather extended statistics in one step:
EXEC DBMS_STATS.GATHER_TABLE_STATS('<owner>', '<table_name>',
METHOD_OPT => 'FOR COLUMNS (<col1>, ..., <colN>) SIZE 1');
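For other tables, the statement can be generated mechanically; this helper function is illustrative only (its name is made up, and the owner/table/columns are the MSEG example from above):

```shell
# Sketch: build a CREATE_EXTENDED_STATS statement for a given owner,
# table and column group. make_ext_stats_sql is a made-up helper name.
make_ext_stats_sql() {
  printf "SELECT DBMS_STATS.CREATE_EXTENDED_STATS('%s', '%s', '(%s)') FROM DUAL;\n" \
         "$1" "$2" "$3"
}

STMT=$(make_ext_stats_sql SAPR3 MSEG "MANDT, MBLNR, MJAHR")
echo "$STMT"
```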
Labels: basis, ecc6, oracle, r3, sap
areas that are relevant to database usage with an SAP application, as SAP does not make use of all the available memory areas.
There are two broad memory areas:
1. Memory shared by all processes - the System Global Area (SGA) in Oracle
2. Memory assigned to exactly one process - the Program Global Area (PGA) in Oracle
Dirty Buffer - holds data that has been modified but not yet moved to the Write List
Write List - holds data that has been modified and is ready to be written to disk
Free, pinned and dirty buffers form the LRU list of the buffer pool: free buffers sit at the LRU end and dirty buffers at the MRU end. Within the list, buffers are again ordered by last use.
When a process requires data, it first looks for it in the data buffer. If it finds the data, it is a cache hit; otherwise it is a cache miss. In the event of a cache miss, the Oracle process has to copy the data from the datafile into the LRU list.
Before copying, the process looks for free space in the LRU list, starting from the LRU end. If it hits a dirty buffer while scanning for free space, it moves that buffer to the Write List.
It continues until it has found enough free space or until it hits a search threshold. If it hits the threshold, it asks the DBWn process to write some of the blocks from the Write List to the datafiles and free up space. Once it has the required free space, it reads the data into the MRU end of the LRU list.
Whenever an Oracle process accesses data in the LRU list (a cache hit), that data is moved to the MRU end. Over time, older data (except for full table scans) drifts toward the LRU end of the list.
The size of data buffer is defined by DB_BLOCK_BUFFERS (in blocks) or by
DB_CACHE_SIZE (in bytes) if using a dynamic SGA.
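The two parameters describe the same cache in different units, as this sketch shows (the block size is the usual SAP default, and the buffer count is an arbitrary example, not a recommendation):

```shell
# Sketch: DB_BLOCK_BUFFERS (blocks) and DB_CACHE_SIZE (bytes) describe
# the same buffer cache. Values below are examples only.
DB_BLOCK_SIZE=8192        # 8K, the usual SAP block size
DB_BLOCK_BUFFERS=65536    # example: 64K buffers
DB_CACHE_SIZE=$(( DB_BLOCK_BUFFERS * DB_BLOCK_SIZE ))
echo "DB_CACHE_SIZE = $DB_CACHE_SIZE bytes"   # 536870912 bytes = 512 MB
```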
Redo Buffer
The Redo Buffer is a circular buffer. Its contents are periodically written to the active online redo log files by the LGWR process. Database operations such as INSERT, UPDATE, DELETE, CREATE, ALTER and DROP are logged into this buffer. These entries make it possible to redo changes to tables and are therefore essential for database recovery in the event of a crash.
The size of the redo buffer is defined by LOG_BUFFER (in bytes). SAP recommends setting it to 1 MB or less.
Shared Pool
Shared Pool is made up of various memory areas of which Dictionary Cache and Shared SQL
Area are of high importance.
Dictionary cache (or Row Cache) contains meta information about database tables, views and
users. As the meta information is stored in the form of tables or views (principally rows),
Dictionary cache is also known as Row cache.
Shared SQL Area (or Shared Cursor Cache) is a part of Library Cache. Before any SQL
statement can be executed, the statement is first parsed and stored in the Library Cache along
with its execution plan. Each SQL statement has parts that are common to two or more users
executing similar statement and bind variables that are private to each user. The Shared SQL
Area stores the shared part of the statement.
The size of the shared pool is set by SHARED_POOL_SIZE (in bytes). SAP recommends setting this value to 600 MB or above.
The SGA comprises other areas, such as the Java Pool, Large Pool and Streams Pool, which are not utilized by SAP. However, these areas cannot be set to 0 if you are using Oracle utilities (RMAN, Oracle VM, etc.).
It is recommended to limit the SGA to a quarter of the RAM.
Control-M by BMC
Cronacle by Redwood
Labels: ecc6, job, oracle, r3, sap, sap tools
need to add more CPU (and memory) can be avoided, which in turn reduces the licensing cost. SAP buffers are the part of the application memory that realizes this concept.
Buffer Synchronization
SAP buffers are local to each instance. When a change is made, the application instance where
the transaction ran can be made aware of the change easily. However, it is very important to
ensure that the changes are communicated to other application instances to ensure validity of the
buffered information. This is realized by using the table DDLOG to centrally log and read the
changes. The inevitable need to synchronize data across the application instances is the biggest
challenge and a limitation to the type of data that is synchronized.
A change operation is first executed at the database level (but not committed yet), and if it is
successful, the change is applied to the buffer. All the changes made to buffered objects are
registered by the database interface of the work process in a main memory structure known as a
"notebook". At the end of the transaction, these registered synchronization requests are inserted into the NOTEBOOK field of the DDLOG table. When the insert on DDLOG completes, the database transaction is committed. The changes are not always triggered by a transaction; they can also be triggered by transports (tp and R3trans), but the technical realization of logging changes to DDLOG remains the same. The newly created DDLOG record is identified by a sequence number (SEQNUMBER field) that the database assigns automatically in ascending order. In the case of an Oracle database, the DDLOG_SEQ sequence takes care of the sequencing.
The other application servers note the change as follows:
The dispatcher of the instance triggers buffer synchronization, which reads the new
synchronization requests (since the last synchronization) from the table DDLOG. Only those
sequence numbers that are higher than the previous sequence number are read to fetch the new
entries from DDLOG. The new synchronization requests are then applied to the buffers. The delay between two buffer synchronizations is determined by the parameter rdisp/bufreftime. This parameter (default: 120 seconds) should not be changed. If synchronization is required at a more frequent interval, buffering should be disabled for the specific objects, or special access commands should be used to bypass the buffer.
Due to the delay in buffer synchronization (and performance reasons) transactional data should
never be buffered.
The chief parameters to control buffer synchronization are:
rdisp/bufrefmode - controls read and write on DDLOG table
rdisp/bufreftime (already discussed above) - controls the frequency of buffer synchronization
rdisp/bufrefmode defines (a) whether synchronization records are written to the table DDLOG (possible values "sendon" or "sendoff") and (b) whether periodic buffer invalidation occurs by reading synchronization records from the table DDLOG (possible values "exeauto" or "exeoff"). The parameter should be set to sendoff,exeauto if only one instance (the central instance) is configured. If there is more than one instance (i.e., at least one dialog instance is installed in the system), it should be set to sendon,exeauto.
You can confirm which flag is being used with the following SQL command:
select * from user_sequences where sequence_name = 'DDLOG_SEQ';
The fix to this problem is same as the previous fix (3b).
4. Dispatcher or work process problems
Buffer synchronization is triggered by the dispatcher and executed by a dialog work process. If the dispatcher does not get enough CPU time, or there are not enough dialog work processes, buffer synchronization does not happen. In such cases, you have to increase the trace level, reproduce the problem and engage SAP.
To increase the trace level:
Add the following parameter using RZ11:
rdisp/TRACE_LOGGING = on, 150 m
Go to SM50 on each server and increase the trace level from the menu: Process --> Trace --> Dispatcher --> Increase level
Keep an eye on this blog for more posts on SAP buffers!
Labels: basis, basis basics, oracle, r3, sap, trace
4. Click the JDBC/ODBC Connection type entry and provide the port number used by the listener.