
T24 Archiving and Data Lifecycle Management - User Guide: Release R15.000


This guide provides more details about archiving historic data and moving it to a separate database. It details the configuration setups and the processing involved.

This user guide is intended for internal Temenos users and clients.

T24 Archiving and Data Lifecycle Management - User Guide
Release R15.000
June 2015

©2015 Temenos Headquarters SA - all rights reserved.

Warning: This document is protected by copyright law and international treaties. Unauthorised reproduction of this document, or any portion of it, may
result in severe civil and criminal penalties, and will be prosecuted to the maximum extent possible under law.
Table of Contents
Introduction
Purpose of this Guide
Intended Audience
Overview
Setup
DL Setup
Pre-requisite
Features
Configuration
Changes in T24
Mechanism
Data Lifecycle Management Setup
Configuring ARCHIVE application
Setting up ARC.GENERIC.REQUEST
Data Lifecycle Management Processing
ARC.GENERIC.REQUEST
Post Separation Actions
Reports
Appendix
Customisation

T24 Archiving and Data Lifecycle Management - User Guide - Release R15.000 - Page 2 of 20
Introduction

Purpose of this Guide


This document provides more details about archiving historic data and moving it to a separate database. It details the configuration setups
and the processing involved.

Intended Audience
This user guide is intended for internal Temenos users and clients.

Overview

Archiving is the process of separating historic non-volatile data from the live data. The existing T24 archival model moves the non-volatile data
from the live data tables into $ARC tables or files. In relational deployments these $ARC tables remain in the same database as the
live data, so archiving has no impact on the overall size of the database; without removal of these tables the database simply continues to grow.

Removal of the historic non-volatile data is often not an option as data is required to be retained and kept available online according to various
regulatory as well as business requirements.

Data Lifecycle Management has been introduced to address these issues and enhance the existing T24 Archive model by provision of mech-
anisms to not only safely separate the historic non-volatile data from the live data but also to move the non-volatile data into a separate data-
base.

The separated data is then subject to a lifecycle management process whereby the non-volatile data can be retained seamlessly according to requirements,
regulatory or otherwise, and remains available online to the application.

The non-volatile data is placed into $RO (read only) tables rather than $ARC tables, which remain “associated” with the live database tables
such that queries will automatically span tables in both the live database and the associated $RO table in the non-volatile database.

The removal of the historic non-volatile data greatly improves the performance of the live database as the database memory buffers have more
room for the recent or volatile data required for processing transactions.

The lifecycle retention period is configurable but would normally be configured for a number of years. The tables are partitioned by month
so that data can be moved without fragmentation.

Once data exceeds the specified retention period, it is archived properly and moved into $ARC archive tables. The $ARC tables can
be moved to cheaper storage, where they remain available for historic queries, or be taken offline completely.

The Data Separation and Lifecycle Management enhancement is enabled by installing the DL product.

The archival mechanism creates a $RO file that is stored in a separate database. The tables corresponding to these $RO files have three columns,
namely RECID, XMLRECORD and PDATE, in the "RO" database.

Data Life Cycle Management

The $RO tables are partitioned based on the value in the PDATE column. PDATE is a real column in the $RO table and cannot be set to
null; every record must have a date value because the records are partitioned on the PDATE column. This date value is extracted from within the file, or
from a related file, for that application file.

There is a date field for each application in FILE.CONTROL, and the PDATE extracted from this field is used for partitioning data.

CONTRACT.DATE, that is, the date of maturity, is used for PDATE when:

1. No date field is specified in FILE.CONTROL

2. The date value in the specified field does not match the YYYYMMDD format

As per the archival process, every archived file, such as FBNK.FUNDS.TRANSFER$HIS, has a corresponding $RO file, in this case
FBNK.FUNDS.TRANSFER$HIS$RO
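The derivation rules above can be sketched in Python (an illustrative model only; the real logic runs inside T24, and the helper name is hypothetical):

```python
from datetime import datetime

def derive_pdate(dl_date_value, contract_date):
    """Return the PDATE used to partition a $RO record.

    dl_date_value: value of the field named in DL.DATE.FIELD of
    FILE.CONTROL (None when no field is configured).
    contract_date: fallback CONTRACT.DATE (date of maturity).
    """
    if dl_date_value:
        try:
            # Rule 2: the value must be a valid YYYYMMDD date.
            datetime.strptime(dl_date_value, "%Y%m%d")
            return dl_date_value
        except ValueError:
            pass
    # Rule 1 (no field specified) or rule 2 (bad format): fall back.
    return contract_date

print(derive_pdate("20141231", "20150630"))    # configured field wins
print(derive_pdate(None, "20150630"))          # rule 1: no field specified
print(derive_pdate("31/12/2014", "20150630"))  # rule 2: wrong format
```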

Setup

This section details the archival setup in T24 when Data Lifecycle Management (DL) is installed.

DL Setup
This section details the various steps required to set up the DL product.

DL is an optional product which brings additional benefits over the functionality of the original Archiving capability.

Note: The DL product is not part of the T24 core.

Pre-requisite
The following are the pre-requisites for using the DL product:

l TAFC PB201312 or above and T24 R14 or above must be installed
l Deploy and run the DF.SEPERATION package. Refer to the "Table list generation User Guide" for more details.
l SQL Server 2012, DB2 9.7 and Oracle 11g and upwards are supported

Features

Separation of read only data from live data


T24 data is separated into "live" data and "archival" data. The "live" data is frequently updated and is stored in the "live" database. The
"archival" tables are never changed and are moved into the "RO" database. The processed data is removed from the "live" database.

The following tables support this separating process.

l RO.COPY.KEYLIST – Stores the keys of the records that are to be moved to the "RO" database.
l RO.PURGE.KEYLIST – Stores the keys of the records that were successfully moved to the "RO" database.
l RO.ERROR.KEYLIST – Stores the keys of the records that could not be moved to the "RO" database.

RO.COPY.KEYLIST

RO.PURGE.KEYLIST

RO.ERROR.KEYLIST

Provision of Data Life Cycle Management


In Data Lifecycle Management (DLM), historic data is organised by month into multiple partitions, depending on the retention period
specified. Once the retention period has elapsed, data can be archived properly and moved to much cheaper disk through the lifecycle solution.
The old partition is recycled for the next partition, providing a sliding-window filtering technique.

Configuration
l The DL product must be installed in the company record for which the service is running.
l If the DL product is configured, the date value is fetched from the DL.DATE.FIELD of the FILE.CONTROL application. Based on this field
the data is partitioned in the "RO" table.
l If this field is not configured, the date derived by the ARC.GENERIC service is used for partitioning records in the "RO" table.

Changes in T24

SPF
Data Life Cycle Management (DL) is added in the PRODUCTS field of the SPF table.

SPF record

FILE.CONTROL
DL.DATE.FIELD is added in the reserved fields of the FILE.CONTROL.FIELD.DEFINITIONS routine. It specifies the partition date for each file
in the "RO" database. If this field is left blank, the date derived by the ARC.GENERIC service is used.

FILE.CONTROL record

Mechanism

DL Archival

Data Lifecycle Management Setup

This section details the services executed in the archiving process.

Configuring ARCHIVE application


Data tables to be separated need an entry in the ARCHIVE application. Create one ARCHIVE record for each set of files
to be archived.

There are four distinct sections in this record.

The first set of fields includes PURGE.DATE, RETENTION.PERIOD, ARCHIVE.DATA and $ARC.PATHNAME.

1. Use ARCHIVE.DATA to archive or delete the selected records. Choose "Y" to archive and "N/None" just to delete.
2. Specify PURGE.DATE or RETENTION.PERIOD. Record selection for archiving is based on a date, which can either be specified in
PURGE.DATE (must be the first of the month and, for CATEG.ENTRY, must be before the last financial year end) or derived from
RETENTION.PERIOD. Records older than this date are archived (or deleted). The purge date is automatically calculated from the
retention period at runtime. For example, if today's date is 23/05/2012 and a retention period of three months is specified (03M),
three months are counted back from the beginning of the month, so records dated before 01/02/2012 are archived (or deleted).
3. Specify the destination location of the $ARC archive files in $ARC.PATHNAME. If this field is left null, the $ARC files are created
in the archive directory (bnk.arc).
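The purge-date calculation in step 2 can be checked with a short Python sketch (illustrative only; the helper name is an assumption, not a T24 API):

```python
from datetime import date

def purge_date_from_retention(today, months):
    """Derive the purge date from a retention period of `months`
    months (e.g. '03M' -> 3), counted back from the first day of
    the current month."""
    year, month = today.year, today.month - months
    while month < 1:          # roll back across year boundaries
        month += 12
        year -= 1
    return date(year, month, 1)

# The example from the text: 23/05/2012 with retention 03M.
print(purge_date_from_retention(date(2012, 5, 23), 3))  # 2012-02-01
```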
l The second set of fields (ARC.FILENAME to MODULUS) are related multi-value fields, which describe the archive files to be
created.
a. ARC.FILENAME – Indicates the names of all the $ARC files that will be created based on the Type and Modulo
specified. In the absence of a Type and Modulo specification, $ARC files inherit the same type and modulus as the
corresponding LIVE files.
l The third set (COMPANY.RUN.IN to TIME.ENDED) are related multi-value fields, which are auto-populated by the system
after the contracts are archived. They maintain a history.
l The fourth set comprises the fields GENERIC.METHOD, MAIN.FILE, FIELD.TO.CHECK, FILTER.RTN,
RELATED.FILES.RTN and ROUTINE.
a. MAIN.FILE – Accepts the name of the file that has to be archived. Example: FUNDS.TRANSFER$HIS.
b. FIELD.TO.CHECK – Indicates the date field in the contract that should be compared with the purge date for
archiving. If this field is left blank, the standard DATE.TIME field is used for comparison. For example, to archive
the history records of FUNDS.TRANSFER, use the PROCESSING.DATE of the contract.

Note: If MAIN.FILE is multi-valued and populated with two or more applications, the date field
mentioned in FIELD.TO.CHECK applies to the application populated in the first multi-value set.

c. FILTER.RTN – Hook routine to select or ignore a contract for archiving. It is used as an alternative to
FIELD.TO.CHECK.

The parameters of this routine are:

i. FILTER.RTN(ID.CONTRACT, R.CONTRACT, CONTRACTARCHIVE.DATE, SKIP.FLAG, '', '')


ii. ID.CONTRACT (IN Parameter 1) – Record key of the contract
iii. R.CONTRACT (IN Parameter 2) – Entire contract record
iv. CONTRACTARCHIVE.DATE (OUT Parameter 1) – Date against which the purge date set in the ARCHIVE
record is compared.

T24 Archiving and Data Lifecycle Management - User Guide - Release R15.000 - Page 11 of 20
Example: In FUNDS.TRANSFER, you can compare the debit value date and credit value date of the contract and
return a final date as the OUT parameter; this date is then compared with the purge date for archival.

v. SKIP.FLAG (OUT Parameter 2) – Return the value '1' as the OUT parameter to skip the current contract from
archiving. The logic to ignore the contract should be implemented in the filter routine; when '1' is returned,
the current contract is not archived.
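A FILTER.RTN hook can be modelled in Python as follows (the record layout and field names are hypothetical; a real filter routine is a T24 BASIC subroutine with the signature shown above):

```python
def filter_rtn(id_contract, r_contract):
    """Model of a FILTER.RTN hook.

    Returns (contract_archive_date, skip_flag): the date to compare
    against the purge date, and 1 when the contract must be skipped.
    """
    debit = r_contract.get("DEBIT.VALUE.DATE")
    credit = r_contract.get("CREDIT.VALUE.DATE")
    if not debit or not credit:
        return "", 1                 # incomplete record: skip it
    # Return the later of the two value dates; the contract is
    # archived only if this date is older than the purge date.
    return max(debit, credit), 0

rec = {"DEBIT.VALUE.DATE": "20111230", "CREDIT.VALUE.DATE": "20120102"}
print(filter_rtn("FT12003XXXXX", rec))   # ('20120102', 0)
```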
d. RELATED.FILES.RTN – Hook routine that returns, in a dynamic array, the names of related files that have to be
archived along with the main archival record.

The parameters of this routine are:

i. RELATED.FILES.ROUTINE(ID.CONTRACT, R.CONTRACT, RELATED.FILES, '', '')
ii. ID.CONTRACT (IN Parameter 1) - Record key of the main contract that is ready to be archived.
iii. R.CONTRACT (IN Parameter 2) – Entire contract record.
iv. RELATED.FILES (OUT Parameter 1) – Information about the related files to be archived, in the format file name,
file ID and archival flag separated by @VM. If there are multiple related files, each file's information is delimited
by an @FM marker.

Example: When archiving LOANS.AND.DEPOSITS records, their balances file records should also be archived,
so pass the balances file name, its ID and 'Y' as the archival flag.

The two spare parameters are reserved for future expansion.
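The dynamic-array format of the RELATED.FILES OUT parameter can be sketched like this (the file name and record ID values are hypothetical; @VM and @FM are the standard value and field markers):

```python
VM = chr(253)   # @VM value marker
FM = chr(254)   # @FM field marker

def build_related_files(entries):
    """Build the RELATED.FILES OUT parameter: each entry is
    'file name' @VM 'record id' @VM 'archival flag', and entries
    for multiple related files are delimited by @FM."""
    return FM.join(VM.join(e) for e in entries)

# e.g. archiving a main record together with its balances record
out = build_related_files([("AA.ACCOUNT.BALANCES", "AA12001XYZ", "Y")])
print(out.split(FM)[0].split(VM))
```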

e. GENERIC.METHOD
i. Indicate "Y" when the generic archival process must be executed. This allows the archival service (the ARC.GENERIC
service) to take care of all selection and purging of records. For example, FUNDS.TRANSFER, TELLER,
STMT.ENTRY.DETAIL, etc. are archived using the generic archival process based on the inputs provided in
MAIN.FILE, FIELD.TO.CHECK or FILTER.RTN.
ii. Indicate "No/None" when the application-specific archival routine specified in the field ROUTINE must be
invoked.
f. ROUTINE

This field indicates a valid multithreaded routine that is responsible for archiving the set of files specified in the
ARCHIVE record. These are application-specific routines and should not be changed unless a site-specific program
has been written. For example, for FOREX the routine is ARC.FOREX. This routine decides the archival logic and
performs the archiving. Separate ARC.FOREX.LOAD and ARC.FOREX.SELECT routines should be available for
opening and selecting all the files necessary for archiving.

However, it is not necessary to create a separate ARC.FOREX service. When the ROUTINE field is populated, the
ARC.GENERIC service simply invokes the ARC.FOREX.LOAD, ARC.FOREX.SELECT and ARC.FOREX routines
internally for archiving.

g. DL.KEYGEN

Creates the keylist used to move the corresponding $RO files when the DL product is enabled. To enable the DL feature in
a local archival solution, modify the existing local routines defined in the ROUTINE field of the ARCHIVE application.

To insert the DL.KEYGEN API, include the DL product insert I_DL.KEYGEN.COMMON in the local archival
routines. After confirming the availability of the DL feature in T24, call the routine as in the following script:

IF DL.INSTALLED EQ 'YES' THEN    ;* If the DL product is installed in the T24 system
    CALL DL.KEYGEN(FN.MAIN.FILE, FILE.ID, R.ITEM, FN.MAIN.ARCHIVE, CONTRACT.DATE, ARC.ID, PURGE.DATE, RESERVED.1)
END ELSE
    ;* Call F.WRITE/F.DELETE to copy from LIVE/$HIS files to $ARC files and delete from LIVE/$HIS files
END

Following are the parameters of this routine:

1. FN.MAIN.FILE - Source file that is being archived.
2. FILE.ID - Record key of the source file that is ready to be archived.
3. R.ITEM - Entire record of the source file.
4. FN.MAIN.ARCHIVE - Destination file (i.e. $ARC file) to which the record is being archived.
5. CONTRACT.DATE - Manually derived contract date for the record in the source file, if any.
6. ARC.ID - ARCHIVE application ID.
7. PURGE.DATE - Date before which the records are being archived (PURGE.DATE is defined in ARCHIVE
application)
8. RESERVED.1 - Reserved for future use.

Setting up ARC.GENERIC.REQUEST
Create a 'SYSTEM' record in ARC.GENERIC.REQUEST and specify the ARCHIVE ID for archival.

Data Lifecycle Management Processing

This section details the process involved in the separation procedure in T24.

ARC.GENERIC.REQUEST
On verifying the ARC.GENERIC.REQUEST record, the ARC.GENERIC service is started in the background and in turn reads the ARCHIVE record.
Records are selected for archival based on the setup of the generic method or the application-specific ROUTINE.

Note: The user must ensure that TSM is already running.

If the DL product is installed, keylist generation is performed. If it is not installed, processing proceeds directly to the
post-archiving steps.

Keylist generation is done using the existing ARC.GENERIC service, which creates the list of keys to be archived from each application.
The DL.KEYGEN API writes this list of keys to the RO.COPY.KEYLIST file.

DL COPY service

The DL COPY service processes the list of keys from RO.COPY.KEYLIST and copies the records from the live database to the "RO" database.
If a record is moved successfully, its key is written to RO.PURGE.KEYLIST; if the move fails, the key is written to
RO.ERROR.KEYLIST.

To start the DL COPY service, invoke the TSA.SERVICE application and set the SERVICE.CONTROL field to START in the DL.COPY.PROCESS record.

DL.COPY.PROCESS record

DL PURGE service

The DL PURGE service processes the list of keys from the RO.PURGE.KEYLIST file and deletes the records from the "live" database.
If a record is deleted successfully from the live file, its key is removed from RO.PURGE.KEYLIST; if the deletion fails, the key is written
to RO.ERROR.KEYLIST.

To start the DL PURGE service, invoke the TSA.SERVICE application and set the SERVICE.CONTROL field to START in the DL.PURGE.PROCESS
record.

DL.PURGE.PROCESS record
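The copy-then-purge flow described above can be modelled with in-memory dictionaries (an illustrative sketch only; the real services operate on database tables and are controlled through TSA.SERVICE):

```python
def dl_copy(copy_keys, live_db, ro_db):
    """Model of the DL COPY service: copy records named in the copy
    keylist from the live database to the RO database; return the
    purge keylist (moved OK) and the error keylist (move failed)."""
    purge_keys, error_keys = [], []
    for key in copy_keys:
        if key in live_db:
            ro_db[key] = live_db[key]
            purge_keys.append(key)
        else:
            error_keys.append(key)
    return purge_keys, error_keys

def dl_purge(purge_keys, live_db):
    """Model of the DL PURGE service: delete successfully moved
    records from the live database; processed keys are removed from
    the purge keylist, failed keys go to the error list."""
    error_keys = []
    for key in list(purge_keys):
        if key in live_db:
            del live_db[key]
            purge_keys.remove(key)
        else:
            error_keys.append(key)
    return error_keys

live = {"FT001": "<r/>", "FT002": "<r/>"}
ro = {}
purge, errors = dl_copy(["FT001", "FT002", "FT003"], live, ro)
dl_purge(purge, live)
print(sorted(ro), live, errors)
```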

Post Separation Actions

Data Lifecycle Management is implemented in two parts: an initial part, which separates the bulk of the historic non-volatile data from the
live database, and a second part, whereby non-volatile data is regularly moved from the live to the non-volatile database in order to minimise
the size of the live database.

The initial part is performed by the Data Lifecycle Management package whereby the non-volatile database is created offline from a copy of the
production database, thus ensuring minimal impact on production. Only the successfully moved data is then purged from the production sys-
tem.

The initial purge process can consume large amounts of resource on the production system but can be scheduled as and when convenient and
also broken down into smaller jobs, which can be safely executed over a period. Similarly the defragmentation of the tables after the purge can
be organised for minimal impact on production.

The subsequent movement and purge processes can be configured according to the business processes and/or operational requirements,
whereby non-volatile data can be scheduled for movement on a daily, weekly, monthly or yearly basis. Similarly different table sets can be con-
figured accordingly.

Reports

This section details the reports and enquiries available on completion of the archival process.

The archived files can be viewed using the INCLUDE.DL field in the ENQUIRY.SELECTION screen. If "Yes" is selected, the enquiry displays
records from both the original and $RO tables. If "No" is selected, the enquiry displays records from the original table only. If this field is left blank,
the enquiry displays records based on the environment settings.

For example, an enquiry to view the CATEG.ENTRY$RO file can be based on the CATEG.ENTRY enquiry, changing the FILE.NAME to
CATEG.ENTRY$RO.

INCLUDE.DL in Enquiry

Note:

l $HIS$RO/$RO files can also be queried using the T24 Enquiry screen.
l The INCLUDE.DL field is displayed only if the DL product is listed in the APPLICATION field of the COMPANY table.

Appendix

Customisation
By default, an enquiry reads records from both the original and the $RO table.

With the Read Re-direction option, even though records have been moved from the original file to the "$RO" file, the user can still read a record
directly through the original file.

With the Select Re-direction option, the user can run a query against the original file; the driver automatically selects from both the original file
and the "$RO" file, and the merged results are returned to the user.

By default both Read Re-direction and Select Re-direction are enabled.

This functionality is controlled by the variable JEDI_XMLDRIVER_ASSOCIATE_FILE.

l Set "JEDI_XMLDRIVER_ASSOCIATE_FILE=1" to disable both Read Re-direction and Select Re-direction.
l Set "JEDI_XMLDRIVER_ASSOCIATE_FILE=2" to disable Select Re-direction only.
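The effect of this variable on Select Re-direction can be modelled in Python (an illustrative sketch; only the variable name and its 1/2 settings come from the text, the merge logic and table names are assumptions):

```python
import os

def select_records(table, live, ro):
    """Model of Select Re-direction: a SELECT on the original file
    spans the associated "$RO" file unless redirection is disabled
    via JEDI_XMLDRIVER_ASSOCIATE_FILE."""
    mode = os.environ.get("JEDI_XMLDRIVER_ASSOCIATE_FILE", "")
    if mode in ("1", "2"):   # both settings disable Select Re-direction
        return dict(live.get(table, {}))
    merged = dict(ro.get(table + "$RO", {}))
    merged.update(live.get(table, {}))   # live records take precedence
    return merged

live = {"FBNK.CATEG.ENTRY": {"K2": "live record"}}
ro = {"FBNK.CATEG.ENTRY$RO": {"K1": "separated record"}}

os.environ.pop("JEDI_XMLDRIVER_ASSOCIATE_FILE", None)
print(sorted(select_records("FBNK.CATEG.ENTRY", live, ro)))  # ['K1', 'K2']

os.environ["JEDI_XMLDRIVER_ASSOCIATE_FILE"] = "2"
print(sorted(select_records("FBNK.CATEG.ENTRY", live, ro)))  # ['K2']
```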
