Oracle Developer Ref Guide
1. When parallel calculation is enabled, Essbase by default uses the last sparse dimension in
the outline to identify tasks that can be performed concurrently. But the distribution of data may cause
one or more tasks to be empty; that is, there are no blocks to be calculated in the part of the database
identified by a task. This situation can lead to uneven load balancing, reducing the effectiveness of
parallel calculation.
2. To resolve this situation, you can enable Essbase to use additional sparse dimensions in the
identification of tasks for parallel calculation. For example, if you have a FIX statement on a member
of the last sparse dimension, you can include the next-to-last sparse dimension from the outline as
well. Because each unique member combination of these two dimensions is identified as a potential
task, more and smaller tasks are created, increasing the opportunities for parallel processing and
improving load balancing.
3. Add or modify CALCTASKDIMS in the essbase.cfg file on the server, or use the calculation
script command SET CALCTASKDIMS at the top of the script.
This enables the last two sparse dimensions to be included in task identification, which can significantly
improve calculation performance. (416-3025810)
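As a sketch, assuming the Sample application and Basic database (the values are illustrative), the
essbase.cfg entries might look like this:
CALCPARALLEL Sample Basic 3
CALCTASKDIMS Sample Basic 2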
1. We can enable parallel calculation at the system level in the essbase.cfg file, or at the calculation
script level. Sample code (the values are illustrative):
SET CALCPARALLEL 3;
SET CALCTASKDIMS 2;
3. There is a risk that parallel calculation may hang the Essbase server.
4. Use FIX commands so that only the specific data blocks you need are calculated; in most cases,
avoid the cross-dimensional operator (->).
Sample:
SET UPDATECALC OFF;
SET CLEARUPDATESTATUS AFTER;
CALC TWOPASS;
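A minimal FIX sketch (assuming Sample Basic; the member names are illustrative):
FIX ("Jan", "Budget")
    CALC DIM ("Measures");
ENDFIX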
By default, the time dimension is set to be dense. But if you load data incrementally in a MaxL script,
with the data loaded at the end of every month, you can set the time dimension as a sparse dimension;
then, if you have Intelligent Calculation enabled, only the data blocks marked as dirty are
recalculated. This can significantly improve data loading performance.
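The 1st and 2nd approaches discussed next can be sketched as follows (assuming Sample Basic; the
member names and factor are illustrative):
1st, using FIX:
FIX ("Jan")
    "Sales" = "Sales" * 1.05;
ENDFIX
2nd, using IF inside a member block:
"Sales" (
    IF (@ISMBR("Jan"))
        "Sales" = "Sales" * 1.05;
    ENDIF;
);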
The 2nd is not efficient: it cycles through the entire time dimension even though only Jan is calculated.
The 1st one calculates only Jan for the Sales blocks, which is more efficient.
2. The data block size setting. Block size should be between 10 KB and 100 KB. If the block size is too
big (>100 KB), Intelligent Calculation does not work well. If the block size is too small (near the 10 KB
end), the index may become very large, and this affects calculation speed.
Under uncommitted access, Essbase locks blocks for write access only until it finishes updating the block.
Under committed access, Essbase holds locks until a transaction completes. With uncommitted access,
blocks are released more frequently than with committed access, so Essbase performance is better with
uncommitted access. Besides, parallel calculation works only with uncommitted access.
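A MaxL sketch for switching the isolation level (the database name is illustrative; uncommitted access,
the default, corresponds to committed_mode disabled):
alter database Sample.Basic enable committed_mode;
alter database Sample.Basic disable committed_mode;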
Database performance:
Uncommitted access always yields better database performance than committed access. When using
uncommitted access, Essbase does not create locks that are held for the duration of a transaction but
commits data based on short-term write locks.
Data consistency:
Committed access provides a higher level of data consistency than uncommitted access. Retrievals from a
database are more consistent. Also, only one transaction at a time can update data blocks when the isolation
level is set to committed access. This factor is important in databases where multiple transactions attempt to
update the database simultaneously.
Data concurrency:
Uncommitted access provides better data concurrency than committed access. Blocks are released more
frequently than during committed access. With committed access, deadlocks can occur.
Database rollbacks:
If a server crash or other server interruption occurs during active transactions, the Essbase kernel rolls back
the transactions when the server is restarted. With committed access, rollbacks return the database to its
state before transactions began. With uncommitted access, rollbacks may result in some data being
committed and some data not being committed.
Essbase Restructure
There are three kinds of restructure: dense restructure, sparse restructure, and outline-only restructure. If a
member of a dense dimension is changed, the restructure operation performs a dense restructure. A dense
restructure takes a long time, because data blocks must be re-created. A sparse restructure happens only if a
sparse member is changed; it restructures only the index, so it should not take much time. An outline-only
restructure changes neither the data blocks nor the index, so no data block or index restructuring happens and
it takes almost no time.
• Outline-only restructure: If a change affects only the database outline, Essbase does not restructure
the index or data files. Member name changes, creation of aliases, and dynamic calculation formula
changes are examples of changes that affect only the database outline.
Using VALIDATE to Check Integrity. The VALIDATE command performs many structural and data integrity
checks:
• Compares the data block key in the index page with the data block key in the corresponding data block.
• The Essbase index contains an entry for every data block. For every read operation, VALIDATE
automatically compares the index key in the index page with the index key in the corresponding data block
and checks other header information in the block. If it encounters a mismatch, VALIDATE displays an error
message and continues processing until it checks the entire database.
• Restructures data blocks whose restructure was deferred with incremental restructuring.
• Checks every block in the database to make sure each value is a valid floating point number.
Note:
When you issue the VALIDATE command, we recommend placing the database in read-only mode.
As Essbase encounters mismatches, it records error messages in the VALIDATE error log. You can specify a file
name for error logging; Essbase prompts you for this information if you do not provide it. The VALIDATE utility
runs until it has checked the entire database.
You can use the VALIDATE command in ESSCMD to perform these structural integrity checks.
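A sketch of such an ESSCMD session (the host, credentials, and error file name are illustrative):
LOGIN "localhost" "admin" "password";
SELECT "Sample" "Basic";
VALIDATE "validate.err";
EXIT;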
During index free space validation, the VALIDATE command verifies the structural integrity of free space
information in the index. If integrity errors exist, Essbase records them in the VALIDATE log. The file that you
specified on the VALIDATE command holds the error log.
If VALIDATE detects integrity errors regarding the index free space information, the database must be rebuilt.
You can rebuild in three ways:
• Restore the data by exporting data from the database; creating an empty database; and loading the
exported data into the new database.
Point 1:
Don't use too many nested levels; it is fine to have a lot of members in one level, but it is not wise to have
deeply nested levels where each level has only a few members. This is a very important principle when we
design a cube.
The outline sequence: time, accounts, the other dense dimensions, the sparse dimension that has the fewest
members, the other sparse dimensions, and then the attribute dimensions.
Point 2:
Calculation performance may be affected if a database outline has multiple flat dimensions. A flat
dimension has very few parents, and each parent has many thousands of children; in other words, flat
dimensions have many members and few levels. You can improve performance for outlines with multiple
flat dimensions by adding intermediate levels to the database outline.
The above two points are from different sources; they look somewhat different, even opposite. My
understanding is: we should have fewer levels anyway, but the huge number of members should appear at
the parent level. That means we have thousands of parents, but each parent has few children.
Many companies load data incrementally. For example, a company may load data each month for that
month. To optimize calculation performance when you load data incrementally, make the dimension tagged
as TIME a SPARSE dimension. If the time dimension is sparse, the database contains a datablock for each
time period.
When you load data by time period, Essbase accesses fewer data blocks because fewer blocks contain the
relevant time period. Thus, if you have Intelligent Calculation enabled, only the data blocks marked as dirty
are recalculated.
For example, if you load data for March, only the data blocks for March and the dependent parents of March
are updated. However, making the time dimension sparse when it is naturally dense may significantly
increase the size of the index, creating possibly slower performance due to more physical I/O activity to
accommodate the large index.
If the dimension tagged as time is dense, you still receive some benefit from Intelligent Calculation when
you do a partial data load for a sparse dimension. For example, if Product is sparse and you load data for
one product, Essbase recalculates only the blocks affected by the partial load, although time is dense and
Intelligent Calculation is enabled.
Simulated Calculation
You can simulate a calculation using SET MSG ONLY in a calculation script. A simulated calculation produces
results that help you analyze the performance of a real calculation that is based on the same data and outline. By
running a simulated calculation with a command such as SET NOTICE HIGH, you can mark the relative amount of
time each sparse dimension takes to complete. Then, by performing a real calculation on one or more dimensions, you
can estimate how long the full calculation will take, because the time a simulated calculation takes to run is
proportional to the time that the actual calculation takes to run.
For example, if the calculation starts at 9:50:00 AM, and the first notice is time-stamped at 09:50:10 AM and the
second is time-stamped at 09:50:20 AM, you know that each part of the calculation took 10 seconds. If you then
run a real calculation on only the first portion and note that it took 30 seconds to run, you know that the other portion
also will take 30 seconds. If there were two messages total, then you would know that the real calculation will take
approximately 60 seconds (20 / 10 * 30 = 60 seconds). Use the following topics to learn how to perform a simulated
calculation and how to use a simulated calculation to estimate calculation time.
1. Create a data model that uses all dimensions and all levels of detail about which you want information.
2. Load all data. This procedure calculates only data loaded in the database.
If you are using dynamic calculations on dense dimensions, substitute the CALC ALL command with the specific
dimensions that you need to calculate; for example, CALC DIM EAST.
Note:
If you try to validate the script, Essbase reports an error. Disregard the error.
5. Find the first sparse calculation message in the application log and note the time in the message.
7. Calculate the dense dimensions of the model that are not being dynamically calculated:
CALC DIM (DENSE_DIM1, DENSE_DIM2, …);
9. Project the intervals at which notices will occur, and then verify against sparse calculation results. You can
then estimate calculation time.
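A minimal simulated-calculation script for the steps above (substitute CALC DIM for CALC ALL if dense
dimensions are dynamically calculated):
SET MSG ONLY;
SET NOTICE HIGH;
CALC ALL;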
1. If you do not have too much data in Essbase, you do not need to change any compression
setting. By default, Essbase uses bitmap compression, which is the best choice
in most cases.
2. If your database is about 90% dense, you may use zlib as the compression method.
3. If your database is sparse and has huge runs of repeated non-missing data cells, you should use RLE as
the compression method.
By the way, the "Index Value Pair" compression is selected automatically by the Essbase system.
Index Value Pair addresses compression on databases with larger block sizes, where the blocks are
highly sparse. This compression algorithm is not selectable but is automatically used whenever
appropriate by the database. The user must still choose between the compression types None,
bitmap, RLE, and zlib through Administration Services.
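A MaxL sketch for changing the compression method (the database name is illustrative):
alter database Sample.Basic set compression rle;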
The Essbase Spreadsheet Add-in can update Essbase at the data cell level. In case of an Essbase disaster, Essbase
can be restored from the last backup. Suppose the last backup was yesterday night, and the Essbase disaster
happens at lunch time today; the data from this morning is not restored by default. How do you restore all of the
detail data, including the data from one second before the Essbase disaster? Here is the method:
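1. Enable the spreadsheet update log by adding the SSAUDIT setting to essbase.cfg; a sketch, assuming
the Sample application and Basic database:
SSAUDIT Sample Basic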
2. After adding the above setting to essbase.cfg, restart Essbase Server. You will see the following entry
in C:\Hyperion\logs\essbase\app\Sample\Sample.log:
[Sun Nov 15 22:22:50 2009]Local/Sample///Info(1002088)
Starting Spreadsheet Log
[C:\Hyperion\products\Essbase\EssbaseServer\APP\Sample\Basic\Basic.alg] For Database [Basic]
This means the setting was successful.
3. You can go on making updates to Sample.Basic using the Excel Spreadsheet Add-in; everything is
logged now.
4. Now, suppose your Essbase database has just been restored from a backup; furthermore, you can recover
transactions from the update log. To do so, use the Essbase command-line facility, ESSCMD, from
the server console. The following ESSCMD command sequence loads the update log:
LOGIN hostnode username password
SELECT appname dbname //Example: Select Sample Basic
LOADDATA 3 filepath:appname.ATX
//LOADDATA 3 C:\Hyperion\products\Essbase\EssbaseServer\APP\Sample\Basic\Basic.atx
EXIT
Steps:
1. Back up any related components, including Shared Services relational database and the OpenLDAP
database. Note: The Shared Services relational database and the OpenLDAP database must be
backed up at the same time. Ensure that the administrator does not register a product application or
create an application group at backup time.
2. Recover the Shared Services relational database with RDBMS tools, using the backup with the same
date as the OpenLDAP backup.
3. If you use OpenLDAP as Native Directory, recover the OpenLDAP database by running the recovery
script. Examples:
Windows noncatastrophic recovery:
C:/Hyperion/products/Foundation/server/scripts/recover.bat c:/HSS_backup
UNIX catastrophic recovery:
/home/username/Hyperion/products/Foundation/server/scripts/recover.sh /home/username/HSS_backup catRecovery
Note: Backups can be physical or logical. A physical backup can be hot or cold:
*. Hot backup—Users can make changes to the database during a hot backup. Log files of changes made
during the backup are saved, and the logged changes are applied to synchronize the database and the
backup copy. A hot backup is used when a full backup is needed and the service level does not allow
system downtime for a cold backup.
*. Cold backup—Users cannot make changes to the database during a cold backup, so the database and
the backup copy are always synchronized. Cold backup is used only when the service level allows for
the required system downtime.
Note: A cold full physical backup is recommended.
* Full—Creates a copy of data that can include parts of a database such as the control file, transaction
files (redo logs), archive files, and data files. This backup type protects data from application error and
safeguards against unexpected loss by providing a way to restore original data. Perform this backup
weekly, or biweekly, depending on how often your data changes. Making full backups cold, so that users
cannot make changes during the backups, is recommended.
Note: The database must be in archive log mode for a full physical backup.
* Incremental—Captures only changes made after the last full physical backup. The files differ for
databases, but the principle is that only transaction log files created since the last backup are archived.
Incremental backup can be done hot, while the database is in use, but it slows database performance.
In addition to backups, consider the use of clustering or log shipping to secure database content.
Logical Backup
A logical backup copies data, but not physical files, from one location to another. A logical backup is
used for moving or archiving a database, tables, or schemas and for verifying the structures in a
database.
2. Back up the Shared Services directory from the file system. Shared Services files are in
HYPERION_HOME/deployments and HYPERION_HOME/products/Foundation.
3. Optional:
* Windows—Back up these Windows registry entries using REGEDIT and export:
HKLM/SOFTWARE/OPENLDAP
HKLM/SOFTWARE/Hyperion Solutions
* UNIX—Back up these items:
.hyperion.* files in the home directory of the user name used for configuring the product
user profile (.profile or equivalent) file for the user name used for configuring the product.
4. Shut down the Shared Services relational database and perform a cold backup using RDBMS tools.
2. Using Oracle Hyperion Enterprise Performance Management System Installer, Fusion Edition,
install Shared Services binaries. Note: Do not configure the installation.
OpenLDAP Services is created during installation.
3. Restore the Shared Services cold backup directory from the file system.
4. Restore the cold backup of the Shared Services relational database using database tools.
5. Optional: Restore the Windows registry entries from the cold backup.
Note: In the essbase.cfg file, set the SPLITARCHIVEFILE configuration to TRUE. This splits the archive
file into smaller files (<2 GB each).
MaxL Sample:
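A sketch of an archive-and-restore pair, assuming Sample Basic and an illustrative backup path:
alter database Sample.Basic archive to file '/backup/samplebasic.arc';
alter database Sample.Basic restore from file '/backup/samplebasic.arc';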
You should clear the transaction log files and the replay files from time to time. They are located in
directories such as:
/Hyperion/trlog/Sample/Basic
ARBORPATH/app/appname/dbname/Replay
-------------------------------------------------------------------------------
1. Places the database in read-only mode, protecting the database from updates during the archive
process while allowing requests to query the database.
2. Writes a copy of the database files to an archive file that resides on the Essbase Server computer.
The files include:
essxxxxx.pag -- Essbase data files
essxxxxx.ind -- Essbase index files
dbname.esm -- Essbase Kernel file
dbname.tct -- Transaction control table
dbname.ind -- Free fragment file
dbname.otl -- Outline file
dbname.otl.keep -- Temporary backup of dbname.otl
essx.lro -- Linked reporting objects
dbname.otn -- Temporary outline file
dbname.db -- Database file containing database settings
dbname.ddb -- Partition definition file
dbname.ocl -- Outline change log created during incremental dimension build.
essxxxx.chg -- Outline synchronization change log
dbname.alg -- Spreadsheet update log that stores spreadsheet update transactions
dbname.atx -- Spreadsheet update log that contains historical transaction information
essbase.sec* -- Essbase security file
essbase.bak -- Backup of the Essbase security file
essbase.cfg -- Essbase Server configuration file
dbname.app -- Application file containing application settings
.otl,.csc,.rul,.rep,.eqd,.sel
ESSCMD or MaxL scripts
Method 1:
Method 2:
Note: Partition commands (for example, synchronization commands) are not logged and,
therefore, cannot be replayed. When recovering a database, you must replay logged
transactions and manually make the same partition changes in the correct chronological order.
When using partitioned databases or using the @XREF function in calculation scripts, you must selectively
replay logged transactions in the correct chronological order between the source and target databases.
2. Use the file system to copy the contents of the application directory
(ARBORPATH/app/appname), excluding the temp directory
-----------------------------------------------------------------------------
Set Essbase in read-only mode when backing up (a sketch, assuming Sample Basic):
alter database Sample.Basic begin archive to file 'samplebasic.arc';
Set Essbase back to read/write after backing up:
alter database Sample.Basic end archive;
----------------------------------------------------------------------------
Using the export command to export data is also a simple option for keeping a text-format data backup;
level-0 data is usually enough.
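A MaxL sketch (the file path is illustrative):
export database Sample.Basic level0 data to data_file '/backup/sample_lev0.txt';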
1. Perform a complete backup of the Oracle Essbase Integration Services catalog repository.
3. Create and save a list of all source Open Database Connectivity (ODBC) Data Source Names
(DSNs) that were set up.
4. Keep a current copy of installed software, along with all property files such as ais.cfg.
• If Integration Services installation files are lost because of hardware failure, you must
reinstall Integration Services.
• If the database containing the catalog is corrupted, you must restore it and then create an
ODBC DSN to the catalog and use it to retrieve models and metaoutlines.
• If the backup catalog database is also corrupted, then, from Oracle Essbase Integration
Services Console, create an empty catalog and import each model and metaoutline using XML files.
Application Library—A summary of applications that have been created and/or deployed to Financial
Management, Planning, Profitability and Cost Management, Essbase Aggregate Storage Option (ASO), or
Essbase Block Storage Option (BSO).
Calculation Manager— Enables you to create, validate, and deploy business rules and business rule sets.
Application Upgrade—Enables upgrades from previous Financial Management and Planning releases.
Library Job Console—Provides a summary, including status, of Dimension library and application
activities, including imports, deployments, and data synchronizations.
To use Performance Management Architect for application administration, you can move applications currently
managed with Financial Management or Planning Classic administration. After you upgrade a Classic
application to Performance Management Architect through Workspace, you cannot move it back.
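A typical set of performance-related SET commands at the top of a calculation script: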
SET CALCPARALLEL 4;
SET CALCTASKDIMS 3;
SET UPDATECALC OFF;
SET AGGMISSG ON;
SET FRMLBOTTOMUP ON;
1. When Intelligent Calculation is turned off and CREATENONMISSINGBLK is ON, within the
scope of the calculation script, all blocks are calculated, regardless of whether they are marked clean or dirty.
2. Cross-dimensional operators can be reduced: include more members inside FIX statements instead of
using "->", as sketched below.
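A sketch (assuming Sample Basic; the factor is illustrative). Instead of:
"Sales" -> "Budget" = "Sales" -> "Actual" * 1.05;
use:
FIX ("Budget")
    "Sales" = "Sales" -> "Actual" * 1.05;
ENDFIX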
How can we create bulk users in Essbase? Say 500 users need to be created at a time. What is the
technique?
Suppose you have 500 user names in c:\user.csv; you can create a batch of MaxL create user statements with
the following JScript code. Save the code below as gm.js and execute it in Windows (it uses the Windows
Script Host FileSystemObject; the output file name is illustrative).
var fso = new ActiveXObject("Scripting.FileSystemObject");
var rs = fso.OpenTextFile("c:\\user.csv", 1);              // 1 = ForReading
var ws = fso.CreateTextFile("c:\\create_users.mxl", true); // overwrite if it exists
for (var i = 1; i <= 500; i++)
{
    var username = rs.ReadLine();
    // Note the quotes and spaces around the name; the original concatenation omitted them.
    var x = "create user '" + username + "' identified by 'password' member of group 'testgroup';";
    ws.WriteLine(x);
}
rs.Close();
ws.Close();
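Run it with cscript gm.js; each generated line should look like:
create user 'user1' identified by 'password' member of group 'testgroup';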
• Minimize the number of hierarchies. (For example, each additional stored hierarchy slows down
view selection and potentially increases the size of the aggregated data).
• If a hierarchy is a small subset of the first hierarchy, consider making the small hierarchy a dynamic
hierarchy. Considerations include how often the hierarchy data is queried and the query performance
impact when it is dynamically queried at the time of retrieval.
• The performance of attributes is the same as for members on a stored hierarchy.
• The higher the association level of an attribute to the base member, the faster the retrieval query.
• Convert BSO to ASO where appropriate: ASO has much faster aggregation performance than BSO, but
ASO cannot include calculation scripts. You must use the migration wizard for this conversion; besides,
as soon as a database is converted to ASO, it cannot be converted back to BSO.
Principles:
• The candidate for compression dimension should not have too many "Stored level 0 members".
• Average bundle fill: the average number of values stored in the groups. It is between 1 and 16; 16 is
the best, 1 is the worst. The candidate dimension should have the greatest value compared to the
other dimensions.
• Expected level 0 size: This field indicates the estimated size of the compressed database. A smaller
expected level 0 size indicates that choosing this dimension is expected to enable better
compression.
Aggregate storage database outlines are pageable; therefore, ASO can handle outlines with huge numbers of
members without keeping the whole outline in memory, which can significantly improve performance.
Depending on how you want to balance memory usage and data retrieval time, you can customize outline
paging for aggregate storage outlines by using one or more of the following settings in the essbase.cfg file:
• Compact the outline from time to time. When a member is deleted, it is not really deleted inside the
file; it is only marked as 'deleted'. If you don't compact the outline, the file grows larger and larger;
see the MaxL sketch below.
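A MaxL sketch, assuming the AsoSamp.Sample database used elsewhere in this guide:
alter database AsoSamp.Sample compact outline;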
BSO has "write back" functionality to any level, and BSO can perform complex calculations. ASO is
for large, aggregation-focused databases with many dimensions and many members. Sometimes we want to
use the advantages of both ASO and BSO.
In the world where I can now make an ASO database the source of a partition, I can take advantage of the
BSO strengths (write back to any level, powerful calculation engine) and then source this information to a
consolidated ASO database that maybe has the volumes of detail from other sources.
Note - the new Hyperion Profitability and Cost Management solution uses this model: BSO for allocation
calcs and loads to an ASO cube for reporting.
Steps:
• Create the BSO database in a separate application from the one in which the ASO database is
located. Typically, the block storage database contains a subset of the dimensions in the aggregate
storage database.
• Create a transparent partition based on where you want the data to be stored. Make the block storage
database the target and the aggregate storage database the source.
Amit Sharma aloo_a2@yahoo.com Contact for Hyperion Training and consultancies
Note: It is better to delete the dimension and use the Create Date-Time Dimension wizard to re-create it with
the changes built in by the wizard, particularly if changes involve adding or removing members. It is risky to
simply delete or add a member of a date-time dimension by hand.
Linked attribute dimensions can be associated only with the date-time dimension. Linked attribute
dimensions can be built during the process of creating the date-time dimension with the wizard.
If you use multiple import database data MaxL statements to load data values to aggregate storage
databases, you can significantly improve performance by loading values to a temporary Data Load Buffer
first, with a final write to storage after all data sources have been read.
2. Import data into the data load buffer; many import processes can run at the same time.
MaxL Code:
import database AsoSamp.Sample data from server data_file 'file_1.txt' to load_buffer with buffer_id 1
on error abort;
import database AsoSamp.Sample data from server data_file 'file_2.dat' using server rules_file 'rule' to
load_buffer with buffer_id 1 on error abort;
import database AsoSamp.Sample data from server excel data_file 'file_3.xls' to load_buffer with
buffer_id 1 on error abort;
To create a data load buffer that combines duplicate cells by accepting the value of the cell that was
loaded last into the load buffer, use the alter database MaxL statement with the aggregate_use_last
grammar.
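A sketch of that statement (the buffer ID is illustrative):
alter database AsoSamp.Sample initialize load_buffer with buffer_id 1 property aggregate_use_last;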
For example, to create a slice by overriding values (the default), use this statement:
import database AsoSamp.Sample data from load_buffer with buffer_id 1 override values create slice;
WITH
SET [Lowest 5% products] AS
'BottomPercent (
{ [Product].members },
5,
([Measures].[Sales], [Year].[Qtr2])
)'
MEMBER
[Product].[Sum of all lowest prods] AS
'Sum ( [Lowest 5% products] )'
SELECT
{[Year].[Qtr2].children}
on columns,
{
[Lowest 5% products],
[Product].[Sum of all lowest prods],
[Product],
[Product].[Percent that lowest sellers hold of all product sales]
}
on rows
FROM Sample.Basic
WHERE ([Measures].[Sales])
WITH
MEMBER [Measures].[Starting Inventory] AS
'
IIF (
IsLeaf (Year.CurrentMember),
[Measures].[Opening Inventory],
([Measures].[Opening Inventory],
OpeningPeriod (
[Year].Levels(0),
[Year].CurrentMember
))
)'
SELECT
CrossJoin (
{ [100-10] },
{ [Measures].[Starting Inventory], [Measures].[Closing Inventory] }
)
ON COLUMNS,
Hierarchize ( [Year].Members , POST)
ON ROWS
FROM Sample.Basic
The following is the sequence to correctly start EAS for version 11:
1. Start the relational database that holds the repositories for OpenLDAP, Shared Services, and Essbase
Server. On a Windows system, this corresponds to the following three services:
OracleOraDb11g_home1ConfigurationManager
OracleServiceORCL
OracleOraDb11g_home1TNSListener
ASO databases always have a lot of members. The most popular build methods are the generation reference
method and the parent-child reference method.
2. Create some data files; the data files are not for data loading, but for member creation. Check the
consistency between the data files and the rule files.
3. If there are too many members, you may have to use a scripting language (such as JavaScript)
to generate properly formatted data files from a more basic original data file.
4. Use MaxL import commands to load data into buffers from the data files in parallel.
import database AsoSamp.Sample data connect as TBC identified by 'password' using multiple
rules_file 'rule1','rule2' to load_buffer_block starting with buffer_id 100 on error write to "error.txt";
import database AsoSamp.Sample data from load_buffer with buffer_id 1, 2;