Best Practices for EBS
The concurrent program "Purge Debug Log and System Alerts" in Release 11i ("Purge Logs and Closed System Alerts" in Release 12) is the recommended way to purge messages. This program purges all messages up to the specified date, except messages for active transactions (new or open alerts, active ICX sessions, concurrent requests, and so on). By default this program is scheduled to run daily and purge messages older than 7 days. Internally, the concurrent program invokes the FND_LOG_ADMIN APIs.
Purge Debug Log and System Alerts is scheduled to run every day at 06:25 PM.
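As a rough check that the purge is keeping up, a count of old debug messages can be taken directly against the standard FND_LOG_MESSAGES table; the 7-day cutoff below simply mirrors the default retention:

    -- Count debug log messages older than the default 7-day retention window.
    SELECT COUNT(*) AS purgeable_messages
    FROM   applsys.fnd_log_messages
    WHERE  timestamp < SYSDATE - 7;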
Purge Signon Audit data (FNDSCPRG)
Submit the "Purge Signon Audit Data" program and specify the Audit Date
parameter. This program deletes all Sign-On Audit information created
before that date.
Purge Signon Audit Data takes one parameter, Audit Date. Enter a date in
DD-MON-RR format; all sign-on audit data older than this date will be deleted
from the sign-on audit tables (FND_LOGINS, FND_LOGIN_RESPONSIBILITIES,
FND_LOGIN_RESP_FORMS, and FND_UNSUCCESSFUL_LOGINS).
Purge Signon Audit Data is scheduled to run every day at 5:30 AM.
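A quick sanity check before and after the run, assuming the standard FND_LOGINS table, shows how much sign-on audit history is being retained:

    -- Oldest remaining sign-on audit record and total row count.
    SELECT MIN(start_time) AS oldest_login,
           COUNT(*)        AS login_rows
    FROM   applsys.fnd_logins;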
Purge Concurrent Request and/or Manager Data (FNDCPPUR)
Of all the tables that occupy a very large amount of space within the APPLSYSD and
APPLSYSX tablespaces of an Oracle Applications instance, FND_LOBS is usually
in the top 10. This is because it stores all the attachments that have been
uploaded to Oracle Applications. The table contains a LOB column called
FILE_DATA; the corresponding LOB segment (e.g.,
APPLSYS.SYS_LOB0000680397C00004$$) is where the actual attachment
data is stored, and it is usually very large.
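The actual segment name differs per instance (the SYS_LOB... name above is only an example); a query along these lines reports the current size of the LOB segment behind FND_LOBS.FILE_DATA:

    -- Size of the LOB segment that stores attachment data for FND_LOBS.FILE_DATA.
    SELECT l.segment_name,
           ROUND(s.bytes / 1024 / 1024 / 1024, 2) AS size_gb
    FROM   dba_lobs     l,
           dba_segments s
    WHERE  s.owner        = l.owner
    AND    s.segment_name = l.segment_name
    AND    l.owner        = 'APPLSYS'
    AND    l.table_name   = 'FND_LOBS'
    AND    l.column_name  = 'FILE_DATA';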
Purge Inactive Sessions (ICXDLTMP)
Gather Schema Stats
How Often Should Gather Schema Statistics Program be Run? (Doc ID 168136.1)
Gather Schema Stats - Gather
If GSS is run with the Gather option, it takes more than 24 hours to
complete.
The system becomes slow while it is running in the background.
It cannot be run more frequently because of the time it takes.
Recommendation: run it every 3 days, and one day before CTB.
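Outside the scheduled concurrent request, statistics for a single schema can also be gathered from SQL*Plus with the seeded FND_STATS API; the schema name below is only an example and the sample percentage is left at its default:

    -- One-off gather for a single schema (schema name is an example only).
    BEGIN
       fnd_stats.gather_schema_stats('GL');
    END;
    /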
Re-Org of Tables
I have identified more than 30 objects that are each fragmented by more than 1 GB.
We can re-organize those tables to help improve performance.
The top 10 tables are mostly Financials objects with known performance problems,
such as the XLA and GL tables.
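A rough way to spot candidates, once statistics are current, is to compare the allocated segment size with the space the rows actually need; the 1 GB threshold matches the objects identified above, and the GL/XLA schema list is only an example:

    -- Estimated reclaimable space per table; 1 GB threshold, GL/XLA schemas as examples.
    SELECT t.owner,
           t.table_name,
           ROUND(s.bytes / 1073741824, 2)                                AS allocated_gb,
           ROUND((t.num_rows * t.avg_row_len) / 1073741824, 2)           AS estimated_used_gb,
           ROUND((s.bytes - t.num_rows * t.avg_row_len) / 1073741824, 2) AS estimated_free_gb
    FROM   dba_tables   t,
           dba_segments s
    WHERE  s.owner        = t.owner
    AND    s.segment_name = t.table_name
    AND    s.segment_type = 'TABLE'
    AND    t.owner IN ('GL', 'XLA')
    AND    (s.bytes - t.num_rows * t.avg_row_len) > 1073741824
    ORDER  BY estimated_free_gb DESC;

The re-org itself is typically ALTER TABLE ... MOVE (or online redefinition) followed by rebuilding the table's indexes, and should be done in a maintenance window.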
Concurrent Processing - Best Practices for Performance for
Concurrent Managers in E-Business Suite (Doc ID 1057802.1)
1. Increase the cache size (number of requests cached) to at least twice the
number of target processes. For example, if a manager's work shift has 1
target process and a cache value of 3, it will read three requests and try to
run those three requests before reading any new requests.
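The seeded FND_CONCURRENT_QUEUES table holds both settings, so managers that do not meet this guideline can be listed with a query such as:

    -- Managers whose request cache is smaller than twice the number of target processes.
    SELECT concurrent_queue_name,
           max_processes,
           cache_size
    FROM   applsys.fnd_concurrent_queues
    WHERE  NVL(cache_size, 1) < 2 * max_processes
    ORDER  BY concurrent_queue_name;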
We are taking backups of views, package bodies, and package specs in the database
by renaming the script to the ITR number. This increases the number of
invalid objects in the database and creates an ad-hoc task for the DBAs to delete them
on a quarterly basis.
I would suggest that we create directories named after the ITR numbers under TCL_TOP,
keep the migration files in them, and compile from there into the database. By doing this
we retain backups of previous ITRs as well, and file management becomes easier.
Copy the files from the ITR directory to the corresponding directory under TCL_TOP.
If we need to roll back, we can go to the previous ITR directory and roll back from it.
Tables Backup
We take table backups in the APPS schema, which increases the size of the
APPS_TS_TX_DATA tablespace. We can instead have a dedicated backup schema and take
the backups there, so that we can clean it up every 90 days. This will also avoid
unnecessary consumption of the APPS_TS_TX_DATA tablespace.
There are 157 backup objects at present in TCLERP in the APPS_TS_TX_DATA tablespace.
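A minimal sketch of the suggested approach, assuming a dedicated backup schema and tablespace (the XXBKP names and the table being copied are examples only):

    -- Take the backup copy in a dedicated schema instead of APPS
    -- (XXBKP schema/tablespace and the source table are example names).
    CREATE TABLE xxbkp.gl_je_lines_itr_bkp
      TABLESPACE xxbkp_data
      AS SELECT * FROM gl.gl_je_lines;

    -- The 90-day cleanup then only touches the backup schema, e.g.:
    -- DROP TABLE xxbkp.gl_je_lines_itr_bkp PURGE;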
Thank You