Open Text Archive Server
Administration Guide
AR100000-ACN-EN-1
Rev.: 2010-July-28
This documentation has been created for software version 10.0.
It is also valid for subsequent software versions as long as no new document version is shipped with the product or is
published at https://knowledge.opentext.com.
Open Text Corporation
275 Frank Tompa Drive, Waterloo, Ontario, Canada, N2L 0A1
Tel: +1-519-888-7111
Toll Free Canada/USA: 1-800-499-6544 International: +800-4996-5440
Fax: +1-519-888-0677
Email: support@opentext.com
FTP: ftp://ftp.opentext.com
For more information, visit http://www.opentext.com
Introduction 17
i About this document .............................................................................. 17
ii Further information................................................................................. 18
iii Conventions ........................................................................................... 19
Part 1 Overview 23
Part 2 Configuration 45
Audience and knowledge
This document is written for administrators of Archive Server and for the project managers responsible for the introduction of archiving. All readers share an interest in administration tasks and have to ensure the trouble-free operation of Archive Server. These are the issues dealt with in this manual. The following knowledge is required to take full advantage of this document:
• Familiarity with the relevant operating system, Windows or UNIX.
• A general understanding of TCP/IP networks, the HTTP protocol, network and data security, and the database in use (Oracle or MS SQL Server).
• Additional knowledge of NFS file systems is helpful.
Besides this technical background, a general understanding of the following business issues is important:
• the number and type of documents to be electronically archived each day or each month
• how often archived documents will be retrieved
• whether retrieval requests are predictable or random
• for what period of time documents will be frequently accessed
• the length of time for which documents must be archived
• which archived documents are highly sensitive and might have to be updated (personal files, for example).
On the basis of this information, you can decide which scenario to use for archiving and how many logical archives you need to configure. You can also determine the size of disk buffers and caches in order to guarantee fast access to archived data. A rough sizing estimate is sketched below.
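For illustration only, the sizing arithmetic can be sketched as follows. All input figures (documents per day, average size, Write job interval) are assumptions, not product recommendations:

    # Illustrative sizing estimate; all input figures are assumptions.
    docs_per_day = 10_000          # archived documents per day (assumed)
    avg_doc_size_kb = 50           # average document size in KB (assumed)
    days_between_write_jobs = 2    # interval between two Write jobs (assumed)

    daily_volume_mb = docs_per_day * avg_doc_size_kb / 1024
    buffer_size_mb = daily_volume_mb * days_between_write_jobs

    print(f"Daily archiving volume: {daily_volume_mb:.0f} MB")   # 488 MB
    print(f"Minimum disk buffer size: {buffer_size_mb:.0f} MB")  # 977 MB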
ii Further information
This manual
This manual is available in PDF format and can be downloaded from the Open Text Knowledge Center (https://knowledge.opentext.com/knowledge/llisapi.dll/open/12331031). You can print the PDF file if you prefer to read longer texts on paper.
Online help
For all administration clients (Administration Client, Monitor Web Client, Document Pipeline Info and configuration properties), online help files are available. You can open the online help via the Help menu, the Help button, or F1.
Other manuals
In addition to this Administration Guide, use part 7 "Configuration Parameter Reference" in Open Text Archive Server - Administration Online Help (AR-H-ACN) for a reference of all configuration properties.
To learn about Document Pipelines and their usage in document import scenarios,
refer to the guide Open Text Document Pipelines - Overview and Import Interfaces (AR-
CDP).
Open Text Online is a single point of access for the product information provided by
Open Text. Depending on your role, you have access to different levels of
information.
You can access Open Text Online via the Internet at http://online.opentext.com/ or
the support sites at http://support.opentext.com/.
The following information and support sources can be accessed through Open Text
Online:
Knowledge Center
Open Text's corporate extranet and primary site for technical support. It is the
official source for:
• Open Text products and modules.
• Documentation for all Open Text products.
• Open Text Developer Network, OTDN: developer documentation and
programming samples for Open Text products.
The following role-specific information is available:
Partners
• Information on the Open Text Partner Program.
• Programs and support for registered partners.
Business Users
• Tips, help files, and further information from Open Text staff and other users
in one of the Open Text Online communities
Administrators/developers
• Documentation
• Product information
• Discussions
• Product previews
Feedback on documentation
If you have any comments, questions, or suggestions to improve our documentation, contact us by email at documentation@opentext.com.
iii Conventions
Read the following conventions before you use this documentation.
Typography
In general, this documentation uses the following typographical conventions:
New terms
This format is used to introduce new terms, emphasize particular terms,
concepts, long product names, and to refer to other documentation.
User interface
This format is used for elements of the graphical user interface (GUI), such as
buttons, names of icons, menu items, names of dialog boxes, and fields.
Filename
command
sample data
This format is used for file names, paths, URLs, and commands at the command
prompt. It is also used for example data, text to be entered in text boxes, and
other literals.
Note: If a guide provides command prompt examples, these examples may
contain special or hidden characters in the PDF version of the guide (for
technical reasons). To copy commands to your application or command
prompt, use the HTML version of the guide.
Key names
Key names appear in ALL CAPS, for example:
Press CTRL+V.
<Variable name>
The brackets < > are used to denote a variable or placeholder. Enter the correct
value for your situation, for example: Replace <server_name> with the name of
the relevant server, for example serv01.
Hyperlink
Weblink (for example http://www.opentext.com)
These formats are used for hyperlinks. In all document formats, these are active references to other locations in the documentation (hyperlink) and on the Internet (Weblink), providing further information on the same subject or a related subject. Click the link to move to the respective target page. (Note: The hyperlink above points to itself and will therefore produce no result.)
Cross-references
The documentation uses different types of cross-references:
Internal cross-references
Clicking on the colored part of a cross-reference takes you directly to the target
of the reference. This applies to cross-references in the index and in the table of
contents.
External cross-references
External cross-references are references to other manuals. For technical reasons, these external cross-references often do not refer to specific chapters but to an entire manual. If a document is available in HTML format, however, external references can be active links that lead you directly to the corresponding section in the other manual.1
Tip: Tips offer information that makes your work more efficient or shows alternative ways of performing a task.
1 This applies if target and source documents are shipped together, for example, on a product or documentation CD-ROM.
Important
If this important information is ignored, major problems may be
encountered.
Caution
Cautions contain very important information that, if ignored, may cause
irreversible problems. Read this information carefully and follow all
instructions!
Applications
Applications or services deliver documents or content to Archive Server using Archive Services or ArchiveLink. Applications also send retrieval requests to get documents back from the archive server.
Archive Server
Archive Server incorporates the following components for storing, managing, and retrieving documents and data:
• Document Service (DS) handles the storage and retrieval of documents and components.
• Storage Manager (STORM) manages and controls the storage devices.
• Administration Server provides the interface to the Administration Client, which helps the administrator create and maintain the environment of archive servers, including logical archives, storage devices, pools, etc.
Administration Tools
To administer, configure and monitor the components mentioned above, you can
use the following tools:
• Open Text Administration Client is the tool to create logical archives and to
perform most of the administrative work like user management and monitoring.
See also “Important directories on the archive server” on page 27.
• Monitor Web Client is used to monitor information regarding the status of relevant processes, the file system, the size of the database, and available resources.
Various types of storage devices offered by leading storage vendors can be used by Archive Server for long-term archiving. See “Storage Devices” on page 33.
1. Content is requested by a client. For this, the client sends the unique document ID and archive ID to Archive Server.
2. Archive Server checks whether the content consists of multiple components and where the components are stored.
3. If the content is still stored in the buffer or in the cache, it is delivered directly to the client.
4. If the content is already archived on the storage device, Archive Server sends a request to the storage device, gets the content, and forwards it to the application. Content is returned in chunks, so the client does not have to wait until the complete file is read. This is important for large files or if the client only reads parts of a file.
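This retrieval dialog can be pictured with a minimal HTTP sketch in Python. The server name, port, archive ID, and document ID below are placeholders, and the URL syntax is an assumption modeled on an ArchiveLink-style GET request; consult the interface documentation for the authoritative syntax:

    import requests

    # Placeholder server, archive ID, and document ID (assumptions).
    archive_id = "A1"
    doc_id = "0A1B2C3D4E5F6789"
    url = f"http://archive01:8080/archive?get&contRep={archive_id}&docId={doc_id}"

    # Stream the response so the content arrives in chunks, as described
    # above, instead of waiting for the complete file to be read.
    with requests.get(url, stream=True) as resp:
        resp.raise_for_status()
        with open("document.bin", "wb") as out:
            for chunk in resp.iter_content(chunk_size=64 * 1024):
                out.write(chunk)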
• Buffer(s) and disk volumes to store incoming content temporarily, see also “Disk buffers” on page 33.
• Storage devices and storage volumes for long-term archiving of content, see also “Installing and configuring storage devices” on page 58.
• Cache to accelerate content retrieval. Only necessary if slow storage devices are
used, see also “Caches” on page 37.
• Retention period for content, see also “Retention” on page 70.
• Compression and encryption settings, see also “Data compression” on page 68
and “Encrypted document storage” on page 111.
• Security settings and certificates, see also “Configuring the archive security
settings” on page 81.
• A cache server if used, see also “Configuring Archive Cache Server” on page 195.
See also:
• “Configuring Buffers” on page 49
• “Configuring disk volumes” on page 47
See also:
• “Installing and configuring storage devices” on page 58
• “Pools and pool types” on page 35
• “Creating and modifying pools” on page 86
ISO images
• Very small files
• Same document type
• Same lifecycle
• Bulk deletion at the end of the lifecycle
See also:
• “Installing and configuring storage devices” on page 58
• “Pools and pool types” on page 35
• “Creating and modifying pools” on page 86
See also:
• “Creating and modifying pools” on page 86
• “Installing and configuring storage devices” on page 58
2.4.5 Caches
Caches are used to speed up read access to documents. Archive Server can use several caches: the disk buffer, the local cache volumes, and a cache server. The local cache resides on the archive server and can be configured. The local cache is recommended to accelerate retrieval, especially with optical storage devices. A cache server is intended to reduce and speed up data transfer in a WAN. It is installed on its own host in a separate subnet.
See also:
• “Configuring Caches” on page 55
• “Configuring disk volumes” on page 47
• “Configuring Archive Cache Server” on page 195
2.5 Jobs
Jobs are recurrent tasks that are started automatically according to a time schedule or when certain conditions are met. This allows, for example, temporarily stored content to be transferred automatically from the disk buffer to the storage device. See also “Configuring jobs and checking job protocol” on page 97.
3.2.1 Infrastructure
Within this object, you configure the infrastructure objects required for use with logical archives.
Buffers
Documents are collected in disk buffers before they are finally written to the
storage medium. To create disk buffers, see “Configuring Buffers” on page 49.
To get more information about buffer types, see “Disk buffers” on page 33.
Caches
Caches are used to accelerate the read access to documents. To create caches, see
“Configuring Caches” on page 55.
Devices
Storage devices are used for long-term archiving. To configure storage devices, see “Installing and configuring storage devices” on page 58.
Disk Volumes
Disk volumes are used for buffers and pools. To configure disk volumes, see
“Configuring disk volumes” on page 47.
3.2.2 Archives
Within this object, you create logical archives and pools, you can define replicated
archives for remote standby scenarios and you can see external archives of known
servers.
Original Archives
Logical archives of the selected server. To create and modify archives, see
“Configuring archives and pools” on page 67.
Replicated Archives
Shows replicated archives, see “Logical archives” on page 67.
External Archives
Shows external archives of known servers, see “Logical archives” on page 67.
3.2.3 Environment
Within this object, you configure the environment of an archive server. For example, a cache server must first be configured in the environment before it can be assigned to a logical archive.
Cache Servers
Cache servers can be used to accelerate content retrieval in a slow WAN. See “Configuring Archive Cache Server” on page 195.
Known Servers
Known servers are used for replicating archives in remote standby scenarios. See
“Adding and modifying known servers” on page 179.
SAP Servers
The configuration of SAP gateways and systems to connect SAP servers to
Archive Server. See “Connecting to SAP servers” on page 165.
Scan Stations
The configuration of scan stations and archive modes to connect scan stations to
Archive Server. See “Configuring scan stations” on page 171.
3.2.4 System
Within this object, you configure global settings for the archive server. You also find
all jobs and a collection of useful utilities.
Alerts
Displays alerts of the “Admin Client Alert” type. See “Checking alerts” on page 305. To receive alerts in the Administration Client, configure the events and notifications appropriately. See “Monitoring with notifications” on page 297.
Events and Notifications
Events and notifications can be configured to get information on predefined
server events. See “Monitoring with notifications” on page 297.
Jobs
Jobs are recurrent tasks which are automatically started according to a time
schedule or when certain conditions are met, e.g. to write content from the buffer
to the storage platform. A protocol allows the administrator to watch the
successful execution of jobs. See “Configuring jobs and checking job protocol” on
page 97.
Key Store
The key store is used to administer encryption certificates, security keys, and timestamps. See “Configuring a certificate for document encryption” on page 129.
Policies
Policies are a combination of rights which can be assigned to user groups. See
“Checking, creating and modifying policies” on page 158.
Reports
Reports contains the Reports and Scenarios tabs, which display the generated reports and the available scenarios, respectively. See “Generating Scenario Reports” on page 211.
Storage Tiers
Storage tiers designate different types of storage. See “Creating and modifying
storage tiers” on page 93.
Users and Groups
Administration of users and groups. See “Checking, creating and modifying
users” on page 160 and “Checking, creating and modifying user groups” on
page 161.
Utilities
Utilities are tools that are started interactively by the administrator. See “Utilities” on page 255.
3.2.5 Configuration
Within this object, you can set the configuration variables for:
Archive Server
Shows configuration variables related to the archive server. This includes
administration server, database server, document service logging, notification
server, timestamp server.
Monitor Server
Shows configuration variables related to the monitor server. This includes
monitor server and Web Monitor server and logging variables for both monitor
server types.
Document Pipeline
Shows configuration variables related to Document Pipeline.
For a description of how to set, modify, delete and search configuration variables,
see “Setting Configuration Variables” on page 213.
Proceed as follows:
1. Create and configure disk volumes at the operating system level to use them as buffers, caches, or storage devices.
2. Configure the storage device for long-term archiving and set up the connection to the archive server.
3. In the Administration Client:
• Add prepared disk volumes for various uses as buffers or local storage
devices (HDSK).
• Create disk buffers and attach hard disk volumes.
• Create caches and specify volume paths.
• Check whether the storage device is usable.
the ISO image, and in addition, the amount of data that has to be stored in the
buffer between two Write jobs.
• If the volume is used as cache:
If documents are retrieved after archiving, e.g. in Early Archiving scenarios, they
should stay on the hard disk for a while. The cache volume must be large
enough to store documents for the required time. You can configure and
schedule the Purge_Buffer job to copy documents automatically to the cache
(see “Configuring Caches” on page 55).
• If the volume is used as storage device:
Hard disk volumes can be used for NAS (Network Attached Storage) systems
and as local storage device (HDSK pool). Using HDSK pools is only
recommended for test purposes. Ensure that the volume is large enough to store
your test documents.
The IXOS spawner service must be able to access the path. You might have
to run the service under a dedicated user to achieve this. If you use a drive
letter you will have to make sure that the drive is mapped at boot time
before the IXOS spawner service is started and will not disconnect after
being idle for a while. For the latter reason it is recommended to use UNC
paths and not mapped network drives with drive letters.
Click Browse to open the directory browser. Select the designated directory
and click OK to confirm.
If you enter the directory path manually, ensure that a backslash is inserted
in front of the directory name if you are using volume letters (e.g. e:\vol2).
Volume class
Select the storage medium or storage system to ensure correct handling of
documents and their retention.
Hard Disk
Hard disk volume that provides WORM functionality or that can be used
as disk buffer. Documents are written from the buffer to the volume
without additional attributes. Use this volume class for buffers.
Hard Disk based read-only system
Local hard disk volume read-only, documents are written from the buffer
to the volume and the read-only attribute is set.
Network Appliance Filer with Snaplock
Documents are written from the buffer to the corresponding storage
system with NetApp-specific setting of the retention period. This volume
class is usually used as storage device with pools, not as buffer.
SUN Sam FS and StorEdge 5310 NAS
Documents are written from the buffer to the corresponding storage
system with SUN specific setting of the retention period. This volume
class is usually used as storage device with pools, not as buffer.
6. Click Finish.
Create as many hard disk volumes as you need.
Renaming disk volumes
To rename a disk volume, select it in the result pane and click Rename in the action pane.
Note: If you want to rename a disk volume, make sure that an existing
replicated disk volume is also renamed. Then start the Synchronize_Replicates
job on the remote server. This will update the volume names on both servers.
Further steps:
• “Creating and modifying a disk buffer” on page 50
• “Creating and modifying a HDSK (write through) pool” on page 87
• “Creating and modifying pools with a buffer” on page 88
• “Write incremental (IXW) pool settings” on page 91
7. Schedule the Purge_Buffer job. The command and the arguments are entered
automatically and can be modified later. See “Setting the start mode and
scheduling of jobs” on page 102.
Modifying a disk buffer
To modify a disk buffer, select it and click Properties in the action pane. Proceed in the same way as when creating a disk buffer. The name of the disk buffer and the Purge_Buffer job cannot be changed.
Deleting a disk buffer
To delete a disk buffer, select it and click Delete in the action pane. A disk buffer can only be deleted if it is not assigned to a pool.
Proceed as follows:
1. Select Buffers in the Infrastructure object in the console tree.
2. Select the designated disk buffer in the top area of the result pane.
3. Click Attach Volume in the action pane. A window with all available volumes
opens.
4. Select an existing volume. The volume must have been created previously, see
“Creating and modifying disk volumes” on page 48.
5. Click OK to attach the volume.
See also:
• “Creating and modifying disk volumes” on page 48
• “Creating and modifying a disk buffer” on page 50
Proceed as follows:
1. Select Buffers in the Infrastructure object in the console tree.
2. Select the designated disk buffer in the top area of the result pane.
3. Select the volume to be detached in the bottom area of the result pane.
4. Click Detach Volume in the action pane.
5. Confirm with OK to detach the volume.
Proceed as follows:
1. Select Buffers in the Infrastructure object in the console tree.
2. Select the designated disk buffer in the top area of the result pane.
3. Click Edit Purge Job in the action pane.
4. Enter the settings:
Job name
The job name is set during buffer creation and cannot be changed.
Command
The command is set to Purge_Buffer during buffer creation.
Arguments
The argument is set to the buffer's name during buffer creation.
Start mode
Configures whether the job starts at a certain time or after a previous job was
finished. See also “Setting the start mode and scheduling of jobs” on
page 102.
5. Click Next.
6. Enter the settings for the selected start mode.
7. Click Finish.
See also:
• “Creating and modifying jobs” on page 102.
Proceed as follows:
1. Select Buffers in the Infrastructure object in the console tree.
2. Select the Original Disk Buffers tab or the Replicated Disk Buffers tab,
according to the type of buffer you want to check or modify.
3. Select the designated disk buffer in the top area of the result pane.
4. Select the volume you want to check in the bottom area of the result pane.
5. Click Properties in the action pane. A window with volume information opens.
Volume name
The name of the volume
Type
Original or replicated
Capacity (MB)
Maximum capacity of the volume
Free (MB)
Free capacity of the volume
Last Backup or Last Replication
Date when the last backup or the last replication was performed, depending on the type of the volume.
Host
Specifies the host on which the replicated volume resides if the disk buffer is
replicated
6. Modify the volume status if necessary. To do this, select or clear the status. The settings that can be modified depend on the volume type.
Full, Offline
These flags are set by Document Service and cannot be modified.
Write locked
No more data can be copied to the volume. Read access is possible; write
access is protected.
Locked
The volume is locked. Read or write access is not possible.
Modified
Automatically selected if the write component (WC) performs a write access to an HDSK volume. If cleared manually, Modified is selected again with the next write access.
7. Click OK.
Proceed as follows:
1. Select Buffers in the Infrastructure object or select Archives in the console tree.
2. Click Synchronize Servers in the action pane.
3. Click OK to confirm. The synchronization is started.
Proceed as follows:
1. Select Known Servers in the Environment object in the console tree.
2. Select the designated known server in the top area of the result pane.
3. Select the disk buffer you want to replicate in the bottom area of the result pane.
4. Click Replicate in the action pane.
5. Enter a name for the replicated disk buffer, click Next.
Note: If you want to rename a replicated disk volume, you also have to rename the original disk volume to the same new name. Then start the Synchronize_Replicates job on the remote server.
Global cache
If no cache path is configured and assigned to a logical archive, the global cache is used. The global cache is usually created during installation, but no volume is assigned to it. To use the global cache, a volume must be assigned. See “Adding hard disk volumes to caches” on page 57.
Depending on the time when you want to cache documents, select the appropriate configuration setting:
Enable caching for the logical archive
Caching option in the archive configuration, see “Configuring the archive settings” on page 82.
Caching when the document is written
If the Write job is performed, documents are also written to the cache.
Caching when the buffer is purged
Cache documents before purging option in the disk buffer properties. See “Creating and modifying a disk buffer” on page 50.
See also:
• “Adding hard disk volumes to caches” on page 57
• “Creating and deleting caches” on page 56
• “Defining priorities of cache volumes” on page 58
Proceed as follows:
1. Create the volumes for the caches on the operating system level.
2. Start the Administration Client.
3. Select Caches in the Infrastructure object in the console tree.
4. Click New Cache in the action pane.
5. Enter the Cache name and click Next.
6. Enter the Location of the hard disk volume.
7. Click Finish.
Note: If you want to change the priority of assigned hard disk volumes, see
“Defining priorities of cache volumes” on page 58.
Deleting a cache
To delete a cache, select it and click Delete in the action pane. It is not possible to delete a cache that is assigned to a logical archive. The global cache cannot be deleted either.
See also:
• “Adding hard disk volumes to caches” on page 57
• “Defining priorities of cache volumes” on page 58
Caution
Be aware that your cache content becomes invalid if you change the volume priority.
Proceed as follows:
1. Select Caches in the Infrastructure object in the console tree.
2. Select the designated cache in the top area of the result pane. In the bottom area
of the result pane, the assigned hard disk volumes are listed.
3. Click Add Cache Volume in the action pane.
4. Click Browse to open the directory browser. Select the designated Location of
the hard disk volume and click OK to confirm.
5. Click Finish to add the new cache volume.
Note: If you want to change the priority of hard disk volumes, see “Defining
priorities of cache volumes” on page 58.
See also:
• “Configuring Caches” on page 55
• “Defining priorities of cache volumes” on page 58
Proceed as follows:
1. Select Caches in the Infrastructure object in the console tree.
2. Select the designated cache in the top area of the result pane. In the bottom area
of the result pane, the assigned hard disk volumes are listed.
3. Select the hard disk volume you want to delete.
4. Click Delete in the action pane.
5. Click OK to confirm.
Note: If you want to change the priority of hard disk volumes, see “Defining
priorities of cache volumes” on page 58.
See also:
• “Configuring Caches” on page 55
• “Defining priorities of cache volumes” on page 58
Caution
Be aware that your cache content becomes invalid if you change the volume priority.
Proceed as follows:
1. Select Caches in the Infrastructure object in the console tree.
2. Select the designated cache in the top area of the result pane. In the bottom area
of the result pane the assigned hard disk volumes are listed.
3. Click Change Volume Priorities in the action pane. A window to change the
priorities of the volumes opens.
4. Select a volume and click the designated arrow button to increase or decrease
the priority.
5. Click Finish.
The configuration of storage devices depends on the storage system and the storage type. If you are not sure how to install your storage device, contact Open Text Customer Support.
After installation, the storage devices are administered in Devices in the Infrastructure object in the console tree. Two main types of devices are possible:
• Optical storage devices managed by STORM.
• Hard disk based storage devices (GS) connected via API.
Note: NAS and Local hard disk devices are administered in Disk Volumes in
the Infrastructure object in the console tree (see “Configuring disk volumes”
on page 47).
Important
Although you can configure most storage systems for container file storage
as well as for single file storage, the configuration is completely different.
Proceed as follows:
1. Select Devices in the Infrastructure object in the console tree.
2. Select the designated device in the top area of the result pane.
3. Click New Volume in the action pane.
4. Enter settings:
Volume name
Unique name of the volume.
Base directory
The base directory, which was defined on the storage system with system-specific tools during installation.
5. Click Finish to create the new volume.
Attaching devices
Proceed as follows:
1. Select Devices in the Infrastructure object in the console tree.
2. Select the designated device in the top area of the result pane.
3. Click Attach in the action pane.
Detaching devices
Proceed as follows:
1. Select Devices in the Infrastructure object in the console tree.
2. Select the designated device in the top area of the result pane.
3. Click Detach in the action pane.
This device can no longer be accessed and can be turned off. The status is set to
“Detached”.
Proceed as follows:
1. Insert the medium into the jukebox.
2. Select Devices in the Infrastructure object in the console tree.
3. Select the jukebox where you inserted the medium in the top area of the result
pane.
4. Click Insert Volume in the action pane.
The new volume is listed in the bottom area of the result pane.
The status is -blank-.
Proceed as follows:
1. Select Devices in the Infrastructure object in the console tree.
2. Select the jukebox where you inserted the media in the top area of the result
pane.
3. Click Insert Volume Without Import in the action pane.
The new volumes are listed in the bottom area of the result pane.
The status is -notst- (not tested). The media are known to the Storage Manager, but they cannot be used to store data.
4. Click Import Untested Media in the action pane.
5. Click Yes to start the import.
The utility tests and imports all volumes with the status -notst-. A protocol
window shows the progress and the result of the import. After that, the media
that have been successfully imported can be used to store data.
To check the protocol later on, see “Checking utilities protocols” on page 256.
Proceed as follows:
1. Select Devices in the Infrastructure object in the console tree. All available
devices are listed in the top area of the result pane.
2. Select the designated jukebox. The attached volumes are listed in the bottom
area of the result pane.
3. Click Test Slots in the action pane.
4. Enter the numbers of the slots to be tested.
Use the following entry syntax:
7        Specifies slot 7.
3,6,40   Specifies slots 3, 6, and 40.
3-7      Specifies slots 3 to 7 inclusive.
2,20-45  Specifies slot 2 and slots 20 to 45 inclusive.
5. Click OK.
A protocol window shows the progress and the result of the slot test. To check
the protocol later on, see “Checking utilities protocols” on page 256.
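The slot list syntax is simple enough to express precisely. The following Python sketch (illustrative only, not part of the product) expands an entry such as 2,20-45 into the individual slot numbers:

    def parse_slots(spec: str) -> list[int]:
        """Expand a slot specification like '2,20-45' into slot numbers."""
        slots = []
        for part in spec.split(","):
            if "-" in part:
                lo, hi = part.split("-")
                slots.extend(range(int(lo), int(hi) + 1))  # inclusive range
            else:
                slots.append(int(part))
        return slots

    print(parse_slots("3,6,40"))   # [3, 6, 40]
    print(parse_slots("2,20-45"))  # slot 2 and slots 20 to 45 inclusive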
ISO volumes are given a name and assigned to a pool when the volume is written. The original and backup volumes are assigned the same name. Identically named ISO volumes are automatically assigned to the correct pool. In contrast, storage media that are used in IXW pools have to be initialized and assigned to a pool. You can perform the initialization automatically or manually.
Caution
Under Windows, writing signatures to media with the Windows Disk
Manager is not allowed. These signatures make the medium unreadable for
the archive.
Details:
• “Write incremental (IXW) pool settings” on page 91
• “Pools and pool types” on page 35
Proceed as follows:
1. Select Devices in the Infrastructure object in the console tree.
2. Select the jukebox where you inserted the media in the top area of the result
pane.
3. Select a volume with the -blank- status in the bottom area of the result pane.
4. Click Initialize Original in the action pane. The Init Volume window opens.
5. Enter the Volume name.
The maximum length is 32 characters. You can only use letters (no umlauts), digits, and underscores. Give a unique name to every volume in the entire network. This is a necessary precondition for the replication strategy, in which the replicates of archives and volumes must have the same name as the corresponding originals. The following name structure is recommended:
<archive-name>_<pool-name>_<serial-number>_<side>.
Note: WORM or UDO volumes, which are manually initialized, must be added
to the document service before they can be attached to a pool (see “Add
volume to document service” on page 64).
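The naming rule above can be checked programmatically. In the following Python sketch, the helper itself and the zero-padded serial number are assumptions for illustration; the 32-character limit and the character set are as stated above:

    import re

    MAX_LEN = 32
    VALID = re.compile(r"^[A-Za-z0-9_]+$")  # letters (no umlauts), digits, underscores

    def volume_name(archive: str, pool: str, serial: int, side: str) -> str:
        """Build a volume name following the recommended structure."""
        name = f"{archive}_{pool}_{serial:03d}_{side}"
        if len(name) > MAX_LEN or not VALID.match(name):
            raise ValueError(f"invalid volume name: {name!r}")
        return name

    print(volume_name("FI", "ISO", 17, "A"))  # FI_ISO_017_A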
Proceed as follows:
1. Select Devices in the Infrastructure object in the console tree.
2. Select the jukebox where you inserted the media in the top area of the result
pane.
3. Select a volume with the -blank- status in the bottom area of the result pane.
4. Click Initialize Backup in the action pane. The Init Backup Volume window
opens.
5. Select the original volume and click OK to initialize the backup volume.
Proceed as follows:
1. Select Devices in the Infrastructure object in the console tree.
2. Select the jukebox where you inserted the media in the top area of the result
pane.
3. Select a volume that does not have the -blank- status in the bottom area of the
result pane.
4. Click Add Volume to Document Service in the action pane.
1. Select the Configuration object in the console tree and search for the respective variable (see “Searching configuration variables” on page 214).
2. Make sure to create a secure password.
Note: Allowed characters within a password: all printable ASCII characters except “;”, “'” and “"”.
Open the User password of database configuration parameter (internal name: DBPASSWORD) and enter the new password.
3. Click OK.
Result: the password is encrypted automatically.
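The character rule from the note can be expressed as a small check. This Python sketch is illustrative only; treating whitespace as disallowed is an additional assumption of the sketch:

    import string

    # All printable ASCII characters except ';', "'" and '"' are allowed.
    FORBIDDEN = {";", "'", '"'}
    ALLOWED = (set(string.printable) - set(string.whitespace)) - FORBIDDEN

    def is_valid_db_password(password: str) -> bool:
        return bool(password) and all(c in ALLOWED for c in password)

    print(is_valid_db_password("S3cure_pw"))  # True
    print(is_valid_db_password("bad;pw"))     # False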
• External Archives
Logical archives of known servers. These archives are located on known servers
and can be reached for retrieval (see “Adding and modifying known servers” on
page 179).
For each original archive, you give a name and configure a number of settings:
• Encryption, compression, blobs and single instance affect the archiving of a
document.
• Caching and cache servers affect the retrieval of documents (see “Configuring
archive access via a cache server” on page 206).
• Signatures, SSL and restrictions for document deletion define the conditions for
document access.
• Timestamps and certificates for authentication ensure the security of documents.
• Auditing mode, retention and deletion define the end of the document lifecycle.
Some of these settings are pure archive settings. Other settings depend on the
storage method, which is defined in the pool type. The most relevant decision
criterion for their definition is single file archiving or container archiving.
Note on IXW pools
Volumes of IXW pools are regarded as container files. Although the documents
are written as single files to the medium, they cannot be deleted individually,
neither from finalized volumes (which are ISO volumes) nor from non-
finalized volumes using the IXW file system information.
Of course, you can use retention also with container archiving. In this case, consider
the delete behavior that depends on the storage method and media (see “When the
retention period has expired” on page 222).
and the job is finished. The next time the Write job starts, the new data is
compressed and the amount of data is checked again.
HDSK pool
When you create an HDSK pool, the Compress_<Archive name>_<Pool name> job is created automatically for data compression. This job is activated by default.
Important
• Open Text strongly recommends not using single instance in
combination with retention periods for archives containing pools for
single file archiving (FS, VI, HDSK).
• If you want to use SIA together with retention periods, consider
“Retention” on page 70.
For each archiving request, a filter (Filters for Single Instance enabled Archives) is
used to identify components to be decomposed. The filter applies to archives that
are enabled for SIA.
For each retrieval request, a filter (Filters for all Archives) is used to identify
components to be composed. The filter applies to all archives.
Note: If your system is configured for archiving emails, do not modify these
filters.
If your system is not configured for archiving emails, disable the archive filters to improve performance, i.e., both filters must be empty.
Composing emails may use a lot of memory, which has an impact on performance. To avoid this, you may configure:
• Maximum email size in MB to decompose
Maximum size (in MB) an email can have to be decomposed. Emails larger than this value won’t be decomposed.
Default: 200 MB
• Maximum email size held in memory
Maximum size (in bytes) an email can have when composing or decomposing to be held in memory. Emails larger than this value will temporarily be stored in the filesystem.
Default: 10000000 = 10 MB
• Temporary storage for large emails
Temporary storage for large emails when composing or decomposing, i.e. for
emails larger than specified by the Maximum email size held in memory
parameter.
In addition, this directory is always used to temporarily hold a backup of the
email during decomposition.
Default: in the Tomcat installation directory /temp/EA
Note: Make sure the available storage is sufficient.
The configuration parameters can be found in Runtime and Core Services > Configuration > Content Service.
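The interplay of the two size limits can be sketched in Python as follows. The constant names are shorthand, and the thresholds are the defaults quoted above:

    MAX_DECOMPOSE_MB = 200            # Maximum email size in MB to decompose
    MAX_IN_MEMORY_BYTES = 10_000_000  # Maximum email size held in memory (10 MB)

    def handle_email(size_bytes: int) -> str:
        """Decide how an email is processed, per the rules above (sketch)."""
        if size_bytes > MAX_DECOMPOSE_MB * 1024 * 1024:
            return "archive as-is (too large to decompose)"
        if size_bytes > MAX_IN_MEMORY_BYTES:
            return "decompose using temporary storage on the filesystem"
        return "decompose in memory"

    print(handle_email(5_000_000))   # decompose in memory
    print(handle_email(50_000_000))  # decompose using temporary storage on the filesystem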
5.1.3 Retention
Introduction
This part explains the basic retention handling mechanism of Archive Server. It is strongly recommended to read this part if you use retention periods for documents. For administration, see “Configuring the archive retention settings” on page 83.
Retention period
The retention period of a document defines a time frame during which the document cannot be deleted or modified.
The retention period - more precisely the expiration date of the retention period - is
a property of a document and is stored in the database and additionally – if possible
– together with the document on the storage medium. The document gets the
retention period in one of the following ways:
• The client of the leading application sends the retention period explicitly. This
means, the leading application specifies a retention period (and a retention
behavior) during the creation of a document.
• If nothing is specified by the leading application, the document can inherit a
default retention period and a retention behavior on the Archive Server. The
retention behavior is then part of the document, i.e., modifying the archive-
specific retention does not modify the document's retention. The default values
are configured per logical archive within Open Text Administration Client (see
“Configuring the archive retention settings” on page 83).
If both are given, the leading application has priority.
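In terms of dates, retention amounts to simple arithmetic. The following Python sketch is illustrative and not the server implementation; it shows how the expiration date follows from the archiving date and the retention period in days, and how event-based retention behaves like infinite retention until the event is reported:

    from datetime import date, timedelta
    from typing import Optional

    def expiration(archiving_date: date, retention_days: int) -> date:
        """Fixed retention: archiving date plus the retention period in days."""
        return archiving_date + timedelta(days=retention_days)

    def may_delete(today: date, expires: Optional[date]) -> bool:
        # None models event-based retention before the event has been
        # reported: the document is secured with infinite retention.
        return expires is not None and today >= expires

    exp = expiration(date(2010, 7, 28), 365)   # expires 2011-07-28
    print(may_delete(date(2011, 1, 1), exp))   # False: still under retention
    print(may_delete(date(2011, 8, 1), exp))   # True: retention has expired
    print(may_delete(date(2011, 8, 1), None))  # False: event not yet reported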
Compliance
Various regulations require storing documents for a defined retention period. To
facilitate compliance with regulations and meet the demand of companies, Archive
Server can handle retention of documents in cooperation with the leading
application and the storage subsystem. The leading application manages the
retention of documents, and Archive Server executes the requests or passes them to
the storage system.
To meet compliance, the content of documents needs to be physically protected or
protected by a system supporting a WORM capability or by optical media. This
means that it is not sufficient to store the components with a specified retention
period on a simple hard disk.
Archive Server supports two different kinds of compliance regulations:
• Fixed retention
The retention period is known at creation time, and can be propagated to the
storage system. The storage system protects against illegal deletion: neither an
application nor Archive Server are able to delete the object on the storage system
before the retention period has expired.
• Variable retention (as of version 9.7.x):
The retention period is unknown at creation time, or can change during the
document life cycle. In this case, retention periods have to be handled by the
leading application only (i.e., the leading application sets retention to
READ_ONLY), and cannot be passed to Archive Server (i.e., no retention is set at
the archive).
Terms used
The terms storage system or storage platform are used for any long-term
storage device supported by Archive Server, such as optical media, Content-
Addressed Storage (CAS), Network-Attached Storage (NAS), Hierarchical Storage
Management Systems (HSM) and others. The term delete refers to the logical
deletion of a component and the term purge is used to describe the cleanup of
content on the storage system.
See also:
• “Configuring the archive retention settings” on page 83
• “When the retention period has expired” on page 222
Retention types
Different retention types can be applied during the creation of a document by the leading application or by inheritance of default values on the archive server (see “Configuring the archive retention settings” on page 83).
Retention behavior
The following table lists settings and their impact on the retention behavior (see “Configuring the archive retention settings” on page 83):
Deferred archiving
Deferred archiving prevents the archive server from writing the content from the disk buffer to the storage system until another call removes the deferred flag from the document. This can be useful in combination with EVENT retention, if the retention cannot be set during the creation of the document.
Destroy
Destroy activates overwriting the document several times before purging. Destroy is not available for all storage systems.
For the concrete retention support of the storage system, refer to the storage release
notes.
Purge process
A document or component can be deleted after the retention of the document has expired or if no retention has been applied.
The leading application can delete a single component or delete the document. Deleting a document implies that all components are deleted and then the document itself. Due to the nature of storage, deletion cannot be handled within a transaction.
Purging content
In single file archiving scenarios, the content on the storage system is purged during the delete command. Content on ISO images or optical WORMs cannot be purged, and an additional job is necessary to purge the content as soon as all content of the partition is deleted from Archive Server.
The purging capabilities depend on the storage system and pool type. The following table lists the purge behavior depending on the pool type.
Note: If the document’s retention date has changed on the original server due
to a migrate call, the new values are only held by Archive Server and not
written to the ATTRIB.ATR file. The ATTRIB.ATR file will only be updated, if
the document is updated, e.g. a component is added on the original server or if
the document is copied to a different volume.
As soon as the updated ATTRIB.ATR has been replicated to the Remote Standby
Server, the new retention value will be known on the Remote Standby Server.
Default values
The following table lists default values of pool-independent archive settings.

Setting          Default value  Single file archiving      Container archiving
                                (HDSK, Single file (FS),   (ISO, IXW)
                                Single file (VI))
Blobs            Off            Off                        On (possible)
Single instance  Off            Off                        On (possible)
Retention        Off            On (possible)              Off (recommended)
Proceed as follows:
1. Select Original Archives in the Archives object in the console tree.
2. Click New Archive in the action pane. The window to create a new logical
archive opens.
3. Enter archive name and description.
Archive name
Unique name of the new logical archive. Consider the “Naming rule for
archive components” on page 67.
In the case of SAP applications, the archive name consists of two
alphanumeric characters (only uppercase letters and digits).
Description
Brief, self-explanatory description of the new archive.
4. Click Next and read the information carefully.
5. Click Finish to create the new archive.
Note: After creating the logical archive, default configuration values are provided for all settings. If you want to change these settings, open the Properties window and modify the settings on the respective tab.
General information
The description of the new archive can be viewed and modified (open Properties in the action pane and select the General tab).
Proceed as follows:
1. Select the logical archive in the Original Archives object of the console tree.
2. Click Properties in the action pane. The property window of the archive opens.
3. Select the Security tab. Check the settings and modify it, if needed.
Authentication (secKey) required to
Set the archive-specific access permissions:
• Read documents
• Update documents
• Create documents
• Delete documents
Each permission marked for the current archive has to be checked when
verifying the signed URL. With their first request, clients evaluate the access
permissions required for the current archive and preserve this information.
With the next request, the signed URL contains the access permissions
required, if these are not in conflict with other access permission settings
(e.g. set per document).
The settings determine the access rights to documents in the selected archive
which were archived without a document protection level, or if document
protection is ignored. The document protection level is defined by the
leading application and archived with the document. It defines for which
operations on the document a valid secKey is required.
See also: “Activating secKey usage for a logical archive” on page 109
Select the operations that you want to protect. Only users with a valid
secKey can perform the selected operations. If an operation is not selected,
everybody can perform it.
SSL
Specifies whether SSL is used in the selected archive for authorized,
encrypted HTTP communication between the Imaging Clients, archive
servers, cache servers and Open Text Document Pipelines.
• Use: SSL must be used.
• Don't use: SSL is not used.
• May use: The use of SSL for the archive is allowed. The behavior
depends on the clients' configuration parameter HTTP UseSSL (see also
the Open Text Imaging Viewers and DesktopLink - Configuration Guide (CL-
CGD) manual).
Open Text Imaging Java Viewer does not support SSL.
Document deletion
Here you decide whether deletion requests from the leading application are
performed for documents in the selected archive, and what information is
given. You can also prohibit deletion of documents for all archives of the
archive server. This central setting has priority over the archive setting.
See also: “Setting the operation mode of Archive Server” on page 336.
Deletion is allowed
Documents are deleted on request if no maintenance mode is set and the retention period has expired.
Deletion causes error
Documents are not deleted on request, even if the retention period has expired. A message informs the administrator about deletion requests.
4. Click OK to resume.
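The access logic described for the secKey settings can be restated as a small predicate. This Python sketch illustrates the rule only; it is not the actual secKey verification, which works with signed URLs:

    # Operations for which this archive requires a valid secKey (assumed set;
    # "read" is deliberately left unprotected in this example).
    PROTECTED_OPS = {"update", "create", "delete"}

    def is_allowed(operation: str, has_valid_seckey: bool) -> bool:
        """If an operation is not selected for protection, everybody may
        perform it; otherwise a valid secKey is required."""
        return operation not in PROTECTED_OPS or has_valid_seckey

    print(is_allowed("read", False))    # True:  read is unprotected here
    print(is_allowed("delete", False))  # False: requires a valid secKey
    print(is_allowed("delete", True))   # True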
Proceed as follows:
1. Select the logical archive in the Original Archives object of the console tree.
2. Click Properties in the action pane. The property window of the archive opens.
3. Select the Settings tab. Check the settings and modify it, if needed.
Compression
Activates data compression for the selected archive.
See also: “Data compression” on page 68
Encryption
Activates data encryption to prevent unauthorized persons from accessing archived documents.
See also: “Encrypted document storage” on page 111.
Blobs
Activates the processing of blobs (binary large objects).
Very small documents are gathered in a meta document (the blob) in the disk
buffer and are written to the storage medium together. The method
improves performance. If a document is stored in a blob, it can be destroyed
only when all documents of this blob are deleted. Thus, blobs are not
supported in single file storage scenarios and should not be used together
with retention periods.
Single instance
Enables single instance archiving.
See also: “Single instance” on page 69.
Deferred archiving
Select this option if the documents should remain in the disk buffer until the leading application allows Archive Server to store them on final storage media.
Example: The document arrives in the disk buffer without a retention period
and the leading application will provide the retention period shortly after.
The document must not be written to the storage media before it gets the
retention period. To ensure this processing, enable the Event based
retention option in the Edit Retention dialog box, see “Configuring the
archive retention settings” on page 83.
Cache enabled
Activates the caching of documents to the DS cache at read access.
Cache
Pull-down menu to select the cache path. Before you can assign a cache path, you must create it (see “Creating and deleting caches” on page 56 and “Configuring Caches” on page 55).
Auditing
Auditing enabled – If auditing is enabled, all document-related actions are
audited (see “Configuring auditing” on page 319).
4. Click OK to resume.
If not otherwise prohibited, Archive Server accepts and executes deletion requests from the leading application.
Proceed as follows:
1. Select the logical archive in the Original Archives object of the console tree.
2. Click Properties in the action pane. The property window of the archive opens.
3. Select the Retention tab. Check the settings and modify them, if needed.
No retention
Use this option if the leading application does not support retention, or if
retention is not relevant for documents in the selected archive. Documents
can be deleted at any time if no other settings prevent it.
No retention – read only
Like No retention, but documents cannot be changed.
Retention period of x days
Enter the retention period in days. The retention period of the document is
calculated by adding this number of days to the archiving date of the
document. It is stored with the document.
Event based retention
This method is used if a retention period is required but at the time of
archiving, it is unknown when the retention period will start. The leading
application must send the retention information after the archiving request.
When the retention information arrives, the retention period is calculated by
adding the given period to the event date. Until the document gets the
calculated retention period it is secured with maximum (infinite) retention.
You can use the option in two ways:
Together with the Deferred archiving option
The leading application sends the retention period separately from and
shortly after the archiving request (for example, in Extended ECM for
SAP Solutions). The documents should remain in the disk buffer until
they get their retention period. They are written to final storage media
together with the calculated retention period when the leading
application requests it. To ensure this scenario, enable the Deferred
archiving option in the Settings tab, see “Configuring the archive
settings” on page 82. Regarding storage media and deletion of
documents, the scenario does not differ from that with a given Retention
period of x days.
Without Deferred archiving
The retention period is set a longer time after the archiving request, and
the document should be stored on final storage media during this time.
For example, in Germany, personnel files of employees must be stored
for 5 years after the employee left the company. The files are immediately
archived on storage media, and the retention period is set at the leaving
date. This scenario is only supported for archives with HDSK pool or
Single File (VI) pool (if supported by the storage system). In all other
pools, the documents would be archived with infinite retention, and the
retention period cannot be changed after archiving (only with migration).
For the same reason, do not use blobs in this scenario.
Infinite retention
Documents in the archive can never be deleted. Use this setting for documents that must be stored for a very long time.
Purge
Destroy (unrecoverable) – This additional option is only relevant for archives with hard disk storage. If enabled, the system first overwrites the file content several times and then deletes the file.
4. Click OK to resume.
Important
Documents with an expired retention period are only deleted if:
• document deletion is allowed, see “Configuring the archive security settings” on page 81, and
• no maintenance mode is set, see “Setting the operation mode of Archive Server” on page 336.
See also:
• “Retention” on page 70
• “When the retention period has expired” on page 222
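Conceptually, the Destroy option corresponds to overwriting the file content several times before deleting it. The following Python sketch illustrates the idea only; the actual behavior depends on the storage system:

    import os

    def destroy(path: str, passes: int = 3) -> None:
        """Overwrite a file several times with random data, then delete it."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())  # force the overwrite to disk
        os.remove(path)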
Proceed as follows:
1. Select the logical archive in the Original Archives object of the console tree.
2. Click Properties in the action pane. The property window of the archive opens.
3. Select the Timestamps tab. In the Timestamps area, select one of the following
options:
Old Timestamps
If selected: use old timestamp.
Note: Cannot be used any more. Only visible for compatibility reasons.
No Timestamps
If selected: no use of timestamps, i.e., the archive server generates no
timestamp for the archived documents.
ArchiSig
If selected: enables ArchiSig timestamp usage, i.e., an ArchiSig timestamp is
generated for the archived documents.
For a description of ArchiSig, see section “Timestamp Usage” on page 115.
4. Select the Timestamps tab. In the Verification area, select one of the following
options:
None
If selected: timestamps are not verified. Each requested document is
delivered.
Relaxed
If selected: timestamps are verified. Each requested document is delivered. If
the timestamp cannot be verified, an auditing entry is written (if auditing is
enabled).
Strict
Timestamps are verified. Requested documents are delivered only if the timestamp is verified.
In addition, an auditing entry is written (if auditing is enabled).
Note: Even if no timestamps are used, documents can have timestamps
assigned by clients. If not verified, these documents cannot be
delivered.
5. Click OK to resume.
• Settings of the Write job. The Write job writes the data from the buffer to the
final storage media. For all pool types, except the HDSK pool, a Write job must
be configured.
Note: Consider that the component types of a pool (known as application
types in former archive server versions) are displayed for information, but
cannot be changed (read only).
To determine the pool type that suits the scenario and the storage system in use,
read the Hardware Release Notes (see Open Text Knowledge Center
(https://knowledge.opentext.com/knowledge/llisapi.dll/fetch/2001/744073/3551
166/customview.html?func=ll&objId=3551166)).
For more information on pools and pool types, see “Pools and pool types” on
page 35.
Proceed as follows:
1. Select Original Archives in the Archives object in the console tree.
2. Select the designated archive in the console tree.
3. Click New Pool in the action pane. The window to create a new pool opens.
4. Enter a unique, descriptive Pool name. Consider the naming conventions, see
“Naming rule for archive components” on page 67.
5. Select Write through (HDSK) and click Next.
6. Select a Storage tier (see “Creating and modifying storage tiers” on page 93).
The name of the associated compression job is created automatically.
7. Click Finish to create the pool.
8. Select the pool in the top area of the result pane and click Attach Volume. A
window with all available hard disk volumes opens (see “Creating and
modifying disk volumes” on page 48).
9. Select the designated disk volume and click OK to attach it.
Scheduling the compression job
To schedule the associated compression job, select the pool and click Edit Compress Job in the action pane. Configure the scheduling as described in “Configuring jobs and checking job protocol” on page 97.
Modifying an HDSK pool
To modify pool settings, select the pool and click Properties in the action pane. Only the assignment of the storage tier can be changed.
Proceed as follows:
1. Select Original Archives in the Archives object in the console tree.
2. Select the designated archive in the console tree.
3. Click New Pool in the action pane. The window to create a new pool opens.
4. Enter a unique (per archive), descriptive Pool name. Consider the naming conventions, see “Naming rule for archive components” on page 67.
5. Select the designated pool type and click Next.
6. Enter additional settings according to the pool type:
• “Write at-once pool (ISO) settings” on page 89
• “Write incremental (IXW) pool settings” on page 91
• “Single File (VI, FS) pool settings” on page 92
7. Click Finish to create the pool.
8. Select the pool in the top area of the result pane and click Attach Volume. A
window with all available hard disk volumes opens (see “Creating and
modifying disk volumes” on page 48).
9. Select the designated disk volume and click OK to attach it.
10. Schedule the Write job, see “Configuring jobs and checking job protocol” on
page 97.
Modifying a pool
To modify pool settings, select the pool and click Properties in the action pane.
Depending on the pool type you can modify settings or assign another buffer.
Important
You can assign another buffer to the pool. If you do so, make sure that:
• all data from the old buffer is written to the storage media,
• the backups are completed,
• no new data can be written to the old buffer.
Data that remains in the buffer will be lost after the buffer change.
Storage Selection
Storage tier
Select the designated storage tier (see “Creating and modifying storage tiers” on
page 93).
Buffering
Used disk buffer
Select the designated buffer (see “Configuring Buffers” on page 49).
Writing
Write job
The name of the associated Write job is created automatically. The name can
only be changed during creation, but not modified later. To schedule the Write
job, see “Configuring jobs and checking job protocol” on page 97.
Original jukebox
Select the original jukebox.
Volume Name Pattern
Defines the pattern for creating volume names.
$(PREF)_$(ARCHIVE)_$(POOL)_$(SEQ) is set by default. $(ARCHIVE) is the
placeholder for the archive name, $(POOL) for the pool name and $(SEQ) for an
automatic serial number. The prefix $(PREF) is defined in Configuration, search
for the Volume name prefix variable (internal name: ADMS_PART_PREFIX;
see “Searching configuration variables” on page 214). You can define any
pattern, only the placeholder $(SEQ) is mandatory. You can also insert a fixed
text. The initialization of the medium is started by the Write job.
Click Test Pattern to view the name planned for the next volume based on this
pattern.
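For illustration only, assuming the Volume name prefix variable is set to A1 and
using the hypothetical archive name HR and pool name POOL1, the default pattern
$(PREF)_$(ARCHIVE)_$(POOL)_$(SEQ) would produce volume names such as
A1_HR_POOL1_1, A1_HR_POOL1_2, and so on.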
Allowed media type
Here you specify the permitted media type. ISO pools support:
DVD-R
For the supported DVD-R types, see the Hardware Release Notes (see Open Text
Knowledge Center
(https://knowledge.opentext.com/knowledge/llisapi.dll/fetch/2001/744073/3551166/customview.html?func=ll&objId=3551166)).
WORM
For the supported WORM types, see the Hardware Release Notes (see Open Text
Knowledge Center
(https://knowledge.opentext.com/knowledge/llisapi.dll/fetch/2001/744073/3551166/customview.html?func=ll&objId=3551166)).
HD-WO
HD-WO is the media type supported with many storage systems. An HD-WO
medium combines the characteristics of a hard disk and WORM: fast access to
documents and secure document storage. Also enter the maximum size of an ISO
image in MB, separated by a colon.
For some storage systems, the maximum size is not required; refer to the
documentation of your storage system (see Open Text Knowledge Center
(https://knowledge.opentext.com/knowledge/llisapi.dll/fetch/2001/744073/3551166/customview.html?func=ll&objId=3551166)).
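For illustration only: assuming your storage system requires a maximum ISO
image size of 2048 MB, the entry in this field would read HD-WO:2048 (the value
2048 is a hypothetical example).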
Number of volumes
Number of ISO volumes to be written in the original jukebox. This number
consists of the original and the backup copies in the same jukebox. For virtual
jukeboxes (HD-WO media), the number of volumes must always be 1, as
backups must not be written to the same medium in the same storage system.
Minimum amount of data
Minimum amount of data to be written in MB. At least this amount must have
been accumulated in the disk buffer before any data is written to storage media.
The quantity of data that you select here depends on the media in use. For HD-WO
media type, the value must be less than the maximum size of the ISO image that
you entered in the Allowed media type field.
Backup
Backup enabled
Enable this option if the volumes of a pool are to be backed up locally in a
second jukebox of this archive server. During the backup operation, the
Local_Backup job considers only the pools for which backup has been enabled.
Number of backups
Number of backup media that are written in the backup jukebox. For virtual
jukeboxes (HD-WO media), the number of backups is restricted to 1.
Number of drives
Number of write drives that are available on the backup jukebox. The setting is
only relevant for physical jukeboxes.
See also:
• “Creating and modifying pools with a buffer” on page 88
• “Pools and pool types” on page 35
Storage Selection
Storage tier
Select the designated storage tier (see “Creating and modifying storage tiers” on
page 93).
Buffering
Used disk buffer
Select the designated buffer (see “Configuring Buffers” on page 49).
Initializing
Auto initialization
Select this option if you want to initialize the IXW media in this pool
automatically, see also “Initializing storage volumes” on page 62.
Original jukebox
Select the original jukebox.
Volume Name Pattern
Defines the pattern for creating volume names.
$(PREF)_$(ARCHIVE)_$(POOL)_$(SEQ) is set by default. $(ARCHIVE) is the
placeholder for the archive name, $(POOL) for the pool name and $(SEQ) for an
automatic serial number. The prefix $(PREF) is defined in Configuration, search
for the Volume name prefix variable (internal name: ADMS_PART_PREFIX;
see “Searching configuration variables” on page 214). You can define any
pattern, only the placeholder $(SEQ) is mandatory. You can also insert a fixed
text. The initialization of the medium is started by the Write job.
Click Test Pattern to view the name planned for the next volume based on this
pattern.
Allowed media type
The media type is always WORM, for both WORM and UDO media.
Writing
Write job
The name of the associated Write job is created automatically. The name can
only be changed during creation, but not modified later. To schedule the Write
job, see “Configuring jobs and checking job protocol” on page 97.
Number of drives
Number of write drives that are available on the original jukebox.
Auto finalization
Select this option if you want to finalize the IXW media in this pool
automatically, see also “Finalizing storage volumes” on page 219.
Filling level of volume: ... %
Defines the filling level in percent at which the volume should be finalized. The
Storage Manager automatically calculates and reserves the storage space
required for the ISO file system. The filling level therefore refers to the space
remaining on the volume.
and last write process: ... days
Defines the number of days since the last write access.
Backup
Backup enabled
Enable this option if the volumes of a pool are to be backed up locally in a
second jukebox of this archive server. During the backup operation, the
Local_Backup job considers only the pools for which backup has been enabled.
Backup jukebox
Select the backup jukebox.
Number of backups
Number of backup media that are written in the backup jukebox.
Number of drives
Number of write drives that are available on the backup jukebox. The setting is
only relevant for physical jukeboxes.
See also:
• “Creating and modifying pools with a buffer” on page 88
• “Pools and pool types” on page 35
Storage Selection
Storage tier
Select the designated storage tier (see “Creating and modifying storage tiers” on
page 93).
Buffering
Used disk buffer
Select the designated buffer (see “Configuring Buffers” on page 49).
Writing
Write job
The name of the associated Write job is created automatically. The name can
only be changed during creation, but not modified later. To schedule the Write
job, see “Configuring jobs and checking job protocol” on page 97.
Documents written in parallel
Number of documents that can be written at once.
See also:
• “Creating and modifying pools with a buffer” on page 88
• “Pools and pool types” on page 35
Proceed as follows:
1. Select Original Archives in the Archives object in the console tree.
2. Select the designated archive in the console tree.
3. In the top area of the result pane, select the pool that should become the default
pool.
4. Click Set as Default Pool in the action pane and click OK to confirm.
Proceed as follows:
1. Select Storage Tiers in the System object. The present storage tiers are listed in
the result pane.
2. Click New Storage Tier in the action pane.
3. Enter a name and a short description of the storage tier.
4. Click Finish.
Modifying storage tiers
To modify a storage tier, select it and click Properties in the action pane. Proceed in
the same way as when creating a storage tier.
See also:
• “Creating and modifying pools” on page 86
Important
If you are using Archive Cache Server, note that a re-initialization
in secure environments can only work if the current certificates are available
on the cache server. To avoid problems, the Update documents security
setting must be deselected before certificates are enabled. See step 3.
Proceed as follows:
1. Select the logical archive in the Original Archives or Replicated Archives object
of the console tree.
2. Select the Certificates tab in the result pane.
For scenarios using a cache server, continue with step 3.
Otherwise, continue with step 4.
3. A cache server is assigned to a logical archive.
For each logical archive that is assigned to the relevant certificate, proceed as
follows:
a. Select Original Archives in the Archives object of the console tree.
b. Select the logical archive in the console tree.
c. Click Properties in the action pane and select the Security tab.
d. Temporarily deselect Update documents.
4. Select the respective certificate by its name (in the result pane).
5. Click Enable / Disable in the action pane.
The certificate is enabled or disabled, respectively.
Proceed as follows:
1. Select the logical archive in the Original Archives, Replicated Archives, or
External Archives object of the console tree.
2. Click Change Server Priorities in the action pane.
3. In the Change Server Priorities window, select the server(s) to add from the
Related servers list on the left.
Click the arrow button to move the selected server(s) to the Set priorities list.
Note: You can use up to three servers.
4. Use the arrows on the right to define the order of the servers: Select a server and
click the up or down arrow to move the server up or down in the list, respectively.
If you want to remove a server from the priorities list, select the server to
remove and click the corresponding button.
5. Click Finish.
Command Description
Write_CD Writes data from disk buffer to storage media as ISO images; belongs
to ISO pools.
Write_WORM Writes data incrementally from disk buffer to WORM and UDO;
belongs to IXW pools.
Write_GS Writes single files from disk buffer to a storage system through the
interface of the storage system (vendor interface); belongs to Single
File (VI) pools.
Write_HDSK Writes single files from disk buffer to the file system of an external
storage system; belongs to Single File (FS) pools.
Purge_Buffer Deletes the contents of the disk buffer according to conditions, see
“Configuring Buffers” on page 49.
backup_pool Performs the backup of all volumes of a pool.
Compress_HDSK Compresses the data in an HDSK pool.
Copy_Back Transfers cached documents from the cache server to the archive
server. The Copy_Back job is disabled by default and must be
enabled only for archive servers with “write back” mode enabled. See
“Configuring Archive Cache Server” on page 195. By default,
documents not older than three days are transferred. A message
appears if there are older documents remaining. The default setting
can be modified by changing the job settings.
Add the argument -i <days> to set the interval.
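For example, to transfer documents that are up to seven days old, enter the
argument -i 7.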
The certificate is sent to the archive server with the putCert command or imported
with the Import Certificate for Authentication utility (see “Configuring a certificate
for authentication” on page 126). You can use the certtool utility (command line)
to create a certificate, or to generate a request to get a trusted certificate.
You can find a description of the certtool utility in the Knowledge Center
(https://knowledge.opentext.com/knowledge/llisapi.dll/open/15491558).
Proceed as follows:
1. Create a certificate with the certtool utility (command line). The <key>.pem
file contains the private key and is used to sign the URL. <cert>.pem contains
the public key that the archive server uses to verify the signatures.
2. Store the certificate and the private key on the server of your leading application
(see the corresponding Administration Guide for details).
Example:
• certificate=cert.pem, store in the directory C:\config\seckey\
• private key=key.pem, store in the directory C:\config\seckey\
3. Import the certificate as global (see “Configuring a certificate for authentication”
on page 126).
4. Set the following variables:
Search for these variables in Configuration (see “Searching configuration
variables” on page 214).
• Client certificate file variable (internal name: DSH_CERTIFICATE_FILE)
This variable specifies the absolute path of the certificate file used by
pagelist. It is needed when authentication (SecKey) is required to access the
logical archive, e.g., C:\config\seckey\cert.pem
• Client private key file (internal name: DSH_PRIVATE_KEY_FILE)
This variable specifies the absolute path of the private key file used by
pagelist. It is needed when authentication (SecKey) is required to access the
logical archive, e.g., C:\config\seckey\key.pem
5. Enable the certificate (see “Enabling a certificate” on page 122).
Proceed as follows:
1. Select Jobs in the System object in the console tree.
2. Select the Jobs tab in the top area of the result pane. The jobs are listed.
3. Select the job you want to start or stop.
4. Depending on the actual status of the job, click Start or Stop in the action pane
to change the status of the job.
Proceed as follows:
1. Select Jobs in the System object in the console tree.
2. Select the Jobs tab in the top area of the result pane. The jobs are listed.
3. Select the job you want to enable or disable.
4. Click Enable or Disable in the action pane to change the status of the job.
Proceed as follows:
1. Select Jobs in the System object in the console tree.
2. Select the Jobs tab in the top area of the result pane.
3. Click New Job in the action pane. The wizard to create a new job opens.
4. Enter a name for the new job. Select the command and enter the arguments
depending on the job.
Name
Unique name of the job that describes its function so that you can distinguish
between jobs having the same command. Do not use blanks or special
characters. You cannot modify the name later.
Command
Select the job command to be executed. See also “Important jobs and
commands” on page 97.
Argument
Entries can expand the selected command. The entries in the Arguments
field are limited to 250 characters. See also “Important jobs and commands”
on page 97.
5. Select the start mode of the job and click Next.
6. Depending on the start mode, define the scheduling settings or the previous job.
See also “Setting the start mode and scheduling of jobs” on page 102.
7. Click Finish to complete.
Modifying jobs
To modify a job, select it and click Edit in the action pane. Proceed in the same way
as when creating a job.
• Monitor the job messages and check the time period the jobs take. Adapt the job
scheduling accordingly.
• Only one drive is used for Write jobs on WORM/UDO. Therefore, only one
WORM/UDO medium can be written at a time, which means that only one
logical archive can be served at a time.
• Backup jobs need two drives, one for the original, one for the backup media.
The entries in the job protocol are regularly deleted by the SYS_CLEANUP_PROTOCOL
job that usually runs weekly. You can modify the maximum age and number of
protocol entries in Configuration, search for the Max. number of job protocol
entries variable (internal name: ADMS_PROTOCOL_MAX_SIZE; see “Searching
configuration variables” on page 214).
Proceed as follows:
1. Select Jobs in the System object in the console tree.
2. Select the Jobs tab in the top area of the result pane.
3. Select the job you want to check.
The latest message of the job is listed in the bottom area of the result pane.
Proceed as follows:
1. Select Jobs in the System object in the console tree.
2. Select the Protocol tab in the top area of the result pane. All protocol entries are
listed. Protocol entries with a red icon indicate jobs that terminated with an error.
Green icons identify jobs that have run successfully.
3. Select a protocol entry to see detailed messages in the bottom area of the result
pane.
4. Solve the problem.
5. Restart the job.
6. Check whether the execution was successful.
Proceed as follows:
1. Select Jobs in the System object in the console tree.
2. Select the Protocol tab in the top area of the result pane. All protocol entries are
listed.
3. Click Clear protocol list in the action pane.
All protocol entries are deleted.
7.1 Overview
Introduction Archive Server provides several methods to increase security for data transmission
and data integrity:
• secKeys / signed URLs, for verification of URL requests (see “Authentication
using signed URLs” on page 108).
• Protection of files and documents (see “Encrypted document storage” on
page 111).
• Timestamps to ensure that documents were not modified unnoticed in the
archive (see “Timestamp Usage” on page 115 and “Configuring Open Text
Timestamp Server” on page 133).
These methods make use of:
• Certificates, for authentication, encryption and timestamps (see “Certificates” on
page 121).
• Checksums to recognize and reveal unwanted modifications to the documents
on their way through the archive (see “Activating the usage of checksums” on
page 130).
Configuration and administration
The main GUI elements used for configuration and administration of security
settings include:
• The Archives node: each time a new archive is added or new pools are created,
security settings are to be configured (Security tab of the Properties dialog).
• The Key Store in the System object of the console tree: used for configuration of
certificates and system keys.
Structure of this topic
This topic describes the main tasks for configuration and administration of security
settings. General procedures (e.g. enabling a certificate) are described once and
referred to thereafter.
For each main task, a list of procedures named How to ... tells you what to do.
More information
You can find more information on security topics in the “Security” folder in the
Knowledge Center
(https://knowledge.opentext.com/knowledge/llisapi.dll/open/15491557).
Configuration settings concerning security topics are described in more detail in
part 7 "Configuration Parameter Reference" in Open Text Archive Server -
Administration Online Help (AR-H-ACN), see sections Archive Server and:
• “AS.DS - Document Service (DS)” –> “AS.DS.SECURITY - Security Settings”,
• “AS.RICO - Key Store backup/restore tool (RICO)”,
• “AS.TS - Timestamp Server (TSTP)”.
Activating secKey usage
Select the operations that you want to protect. Only client applications using a valid
secKey can perform the selected operations. If an operation is not selected,
everybody can perform it.
Proceed as follows:
1. Select the logical archive in the Original Archives object of the console tree.
2. Click Properties in the action pane. The property window of the archive opens.
3. Select the Security tab. Check the settings and modify them, if needed.
Authentication (SecKey) Required To
Set the archive-specific access permissions:
• Read documents
• Update documents
• Create documents
• Delete documents
4. Click OK to apply the settings.
1. Create a certificate with the certtool utility (command line), or create the
request and send it to a trust center (see Table 7-1 on page 124 and Table 7-2 on
page 124).
Example of the result: the <key>.pem file contains the private key and is used
to sign the URL. <cert>.pem contains the public key and the certificate that the
archive server uses to verify the signatures.
2. Store the certificate and the private key on the server of your leading application
(see the corresponding Administration Guide for details). Correct the path, if
necessary, and add the file names.
For client programs of Archive Server, store the certificates in the directories
defined in the file <OT config>\Pipeline\config\setup\common.setup:
• The Client Private Key File entry defines the directory for the key.pem file.
• The Client Certificate File entry defines the directory for the cert.pem file.
• The <OT config><AS>\secKey\ directory is entered by default.
By storing the certificates in the file system, they are recognized by Enterprise
Scan and the client programs.
Important
For security reasons, limit the read permission for these directories to
the system user (Windows) or the archive user (UNIX).
3. To provide the certificate to the Archive Server use one of the following options:
• Import the certificate, see “Importing an authentication certificate” on
page 127.
Or:
• Send the certificate with the putCert command (see Table 7-3 on page 125).
Repeat this step, if you want to use the certificate for several archives.
4. Enable the certificate (see “Enabling a certificate” on page 122).
Note: HDSK pools are not released for use in productive archive systems. Use
them only for test purposes.
How to ... set up document encryption:
• “Activating encryption usage for a logical archive” on page 111
• “Creating a system key for document encryption” on page 112
• “Exporting and importing system keys” on page 113
• “Configuring a certificate for document encryption” on page 129
1. Select the logical archive in the Original Archives object of the console tree.
2. Click Properties in the action pane. The property window of the archive opens.
3. Select the Security tab. Activate Encryption (mark the check box).
4. Click OK to apply the settings.
Caution
Be sure to store this key securely, so that you can re-import it if necessary.
If the key gets lost, the documents that were encrypted with it can no
longer be read!
Do not delete any key if you set a newer one as current. It is still used for
decryption.
Handling for replicated archives
The Synchronize_Replicates job updates the system keys and certificates between
archive servers, before it synchronizes the documents. The system keys are
transmitted encrypted.
If you do not want to transmit the system keys through the network, you can also
export them from the original server to an external data medium and re-import
them on the remote standby server (see “Exporting and importing system keys” on
page 113).
E
Exports the contents of the System key node. Use the export in particular to
store the system keys for document encryption.
The user must log on and specify a path for the export files. The option -t NN:MM
splits the contents of the key store into several different files (MM; maximum 8).
At least NN files must be reimported in order to restore the complete key store.
Example:
sunny:~> /usr/ixos-archive/bin/recIO E -t 3:5
IMPORTANT: -----------------------------------------------------
IMPORTANT: recIO (release) 10.0.0.724
IMPORTANT: -----------------------------------------------------
V
Verifies the contents of the System key node against the exported files.
The user must log on and specify the path for the exported data. Then the
exported data is compared with the key store on the archive server.
Example:
sunny:~> /usr/ixos-archive/bin/recIO V
IMPORTANT: -----------------------------------------------------
IMPORTANT: recIO (release) 10.0.0.724
IMPORTANT: -----------------------------------------------------
recIO 10.0.0.724 (C) 2001-2010 Open Text Corporation
This product includes software developed by the OpenSSL Project
for use in the OpenSSL Toolkit (http://www.openssl.org/)
Please authenticate!
User :dsadmin
Password :
Token[1/?] (default = /floppy/key.pem)
File (CR to accept above) : p1.pem
Token[2/3] (default = /floppy/key.pem)
File (CR to accept above) : p2.pem
Token[3/3] (default = /floppy/key.pem)
File (CR to accept above) : p3.pem
key 1 : 1EE312C064A27F73 : OK
key 2 : BEEB5213EF5FFABF : OK
key 3 : 10C8D409E585E43B : OK
D
Displays the information on the exported files. The information is shown in a
table.
Example:
sunny:~> /usr/ixos-archive/bin/recIO D
IMPORTANT: -----------------------------------------------------
IMPORTANT: recIO (release) 10.0.0.724
IMPORTANT: -----------------------------------------------------
recIO 10.0.0.724 (C) 2001-2010 Open Text Corporation
This product includes software developed by the OpenSSL Project
for use in the OpenSSL Toolkit (http://www.openssl.org/)
Token[1/?] (default = /floppy/key.pem)
File (CR to accept above) : p1.pem
Token[2/3] (default = /floppy/key.pem)
File (CR to accept above) : p2.pem
Token[3/3] (default = /floppy/key.pem)
File (CR to accept above) : p3.pem
I
Imports the saved contents of the System key node.
The user must log on and specify the path for the exported data. The data in the
System key node is restored, encrypted with the archive server's public key and
sent to the administration server. The results are displayed. Keys already
contained in the archive server's store are not overwritten.
Example:
sunny:~> /usr/ixos-archive/bin/recIO I
IMPORTANT: -----------------------------------------------------
IMPORTANT: recIO (release) 10.0.0.724
IMPORTANT: -----------------------------------------------------
recIO 10.0.0.724 (C) 2001-2010 Open Text Corporation
This product includes software developed by the OpenSSL Project
for use in the OpenSSL Toolkit (http://www.openssl.org/)
Please authenticate!
User :dsadmin
Password :
Token[1/?] (default = /floppy/key.pem)
File (CR to accept above) : p1.pem
Token[2/3] (default = /floppy/key.pem)
File (CR to accept above) : p2.pem
Token[3/3] (default = /floppy/key.pem)
File (CR to accept above) : p3.pem
ID:BEEB5213EF5FFABF created:2000/11/08 09:26:36 origin:emma
Key already exists
ID:276CBED602BDFC25 created:2010/01/18 12:09:32 origin:arthomasa
Key successfully imported
of documents. Thus, it is available only for archives that used timestamps in former
archive server versions. You can migrate these timestamps to ArchiSig timestamps,
see “Migrating existing document timestamps” on page 120.
ArchiSig timestamps
With ArchiSig timestamps, the timestamps are not added per document, but for
containers of hash trees calculated from the documents:
A job builds the hash tree that consists of hash values of as many documents as
configured, and adds one single timestamp. Thus, you can collect, for example, all
documents of a day in one hash tree. Only one timestamp per hash tree is required.
The verification process needs only the document and the hash chain leading from
the document to the timestamp, but not the whole hash tree.
Configuration
You can set up signing documents with timestamps and the verification of
timestamps including the response behavior for each archive (see “Configuring the
archive settings” on page 82). Consider the recommendations given above.
If you use both methods in parallel, the document timestamp secures the document
until the hash tree is built and signed. As this time period is short, a document
timestamp is sufficient for these documents, while the hash tree, in general, gets a
timestamp created with a certificate of an accredited provider. This trusted
certificate is used for verification.
ArchiSig timestamps have better performance and can be renewed easily. Open
Text recommends using this method. You can also migrate document timestamps to
ArchiSig timestamps, see “Migrating existing document timestamps” on page 120.
Important
Once you have decided to use ArchiSig timestamps, you cannot go back to
document timestamps.
Timestamps and hash trees may become invalid or unsafe. To prevent this, they can
be renewed, see “Renewing timestamps of hash trees” on page 119 and “Renewing
hash trees” on page 119.
Remote Standby
In a Remote Standby environment, the Synchronize_Replicates job replicates the
timestamp certificates. Only enabled certificates are copied. The certificate on the
Remote Server is automatically enabled after synchronization.
How to ... set up timestamp verification:
• “Basic settings” on page 117.
• “Activating and configuring timestamp usage” on page 85.
• “Creating a hash tree” on page 119
• “Configuring a certificate for timestamp verification” on page 130
• Optional: “Basic settings” on page 117
• timeproof TSS80
• AuthentiDate
• Quovadis
• Open Text Timestamp Server, topic of this chapter
1. In the Archives object of the console tree, create a new archive (e.g., with the
name ATS) and a pool to define where the hash trees are stored.
2. In Jobs in the System object of the console tree, create jobs to build the hash
trees. You need one job for each archive that uses timestamps.
See also: “Configuring jobs and checking job protocol” on page 97.
Command
hashtree
Arguments
Archive name
Scheduling
If you use ArchiSig timestamps, schedule a nightly job. If the hash trees are
written to a storage system, make sure that the job is finished before the
Write job starts.
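For illustration only: for a hypothetical archive named HR, you could create a job
named HashTree_HR with the command hashtree and the argument HR, scheduled
to run nightly.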
3. In the resulting list, find the distinguished subject name(s) of your timestamp
service (subject of the service's certificate).
4. In a command line, enter:
dsHashTree -a <ArchiveName> -s <DistinguishedNameOfOldCertificate>
The process finds all timestamps that were created with the certificate indicated in
the command. It calculates hash values for the timestamps and builds new hash
trees. Each hash tree is signed with a new timestamp.
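For illustration only, with a hypothetical archive named HR and a hypothetical
certificate subject, the call could look as follows:
dsHashTree -a HR -s "CN=Timestamp Service,O=Example Trust Center"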
Important
You can migrate document timestamps only once! Never disable ArchiSig
timestamps after starting migration.
Proceed as follows:
1. Configure as described in “Basic settings” on page 117.
2. In a command line, call the timestamp migration tool for each pool to be
migrated:
dsReSign -p <pool name>
3. Call the hash tree creation tool for each archive with migrated timestamps:
dsHashTree <archive name>
The tools calculate hash values from the existing timestamps, build hash trees and
get a timestamp for each tree.
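For illustration only, assuming an archive named HR with a pool named
HR_POOL1 (both hypothetical names), the migration would consist of the
following calls:
dsReSign -p HR_POOL1
dsHashTree HR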
7.5 Certificates
Certificates
A certificate is an electronic document that uses a digital signature to bind
together a public key with information on the client issuing this public key
(information such as the name of a person or an organization, their address, and so
forth). The certificate can be used to verify that a public key belongs to an
individual, e.g., an archive uses this information to verify requests based on signed
URLs from various clients.
Certificate use cases
Archive Server uses certificates for various use cases:
Proceed as follows:
1. Select Key Store in the System object of the console tree.
2. Select the Certificates object and select the appropriate <certificate> tab in the
result pane.
All certificates of the selected certificate type are listed.
3. Select the respective tab and the designated certificate and click View
Certificate in the action pane.
4. Check the general information and the certification path.
General
This tab provides detailed information to identify the certificate
unambiguously: the certificate's issuer, the duration of validity, and the
fingerprint.
Certification Path
Here you can follow the certificate's path from the root to the current
certificate. A certificate can be created from another certificate. The path
shows the complete derivation chain. You can also view the parent certificate
information from here.
Proceed as follows:
1. Select Key Store in the System object of the console tree.
2. Select the Certificates object and select the appropriate <certificate> tab in the
result pane.
All certificates of the selected certificate type are listed.
3. Select the respective certificate by its name and click Enable in the action
pane.
Proceed as follows:
1. Select Key Store in the System object of the console tree.
2. Select the Certificates object and select the appropriate <certificate> tab in the
result pane.
All certificates of the selected certificate type are listed.
3. Select the respective tab and the designated certificate and click Delete
Certificate in the action pane.
4. Confirm the upcoming message with OK.
Command: generate certificate
The following table describes the command to be used to create self-signed
certificates.
Command: request certificate
The following table describes the command to be used to request a certificate from a
trust center.
Send your <requestOutFile> to a trust center. The trust center will return a
certificate including the public key. The certificate from the trust center must be in
PEM format.
Command: send certificate (putCert)
The following table describes the command to be used to send a certificate to
Archive Server. After using the Refresh action (System –> Key Store –>
Certificates), the certificates sent using putCert are displayed at Archive Server.
Note: putCert cannot be used with SSL. To transfer the certificate to the
server, switch the SSL settings for the logical archive to May use or Don’t use.
Alternatively, if provided, you can also use dsh to send the certificate to Archive
Server. Proceed as follows:
1. Open a command line, enter the following command and press ENTER:
C:\>dsh -h <host>
<host> is the name of your archive server.
The following prompt is displayed: command: _
2. Enter the following command and press ENTER:
setAuthId -I <myserver>
<myserver> is the name of your leading application server.
3. Enter the following command and press ENTER:
putCert -a <archive> -f <file>
For the <archive> variable, enter the logical archive on the archive server for
which the certificate is relevant. Replace the <file> variable with the name of the
certificate, i.e. cert.pem.
If you need the certificate for several archives, call the command again for each
archive.
4. Quit the program with exit.
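For illustration only, a complete dsh session could look as follows, assuming the
hypothetical host names archsrv01 (archive server) and appsrv01 (leading
application server) and the logical archive A1:
C:\>dsh -h archsrv01
command: setAuthId -I appsrv01
command: putCert -a A1 -f cert.pem
command: exit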
Proceed as follows:
1. Select the Certificates node of the Key Store in the System object of the console
tree (that is, in the console tree select System –> Key Store –> Certificates).
2. Click Import Authentication Certificate ... in the action pane.
The Import Authentication Certificate window opens.
3. In the Certificate Import area, enter a new ID or select an existing ID if you
want to replace an existing certificate.
4. Click Browse to open the file browser for the archive server file system and
select the designated Certificate. Click OK to continue.
5. In the Certificate Assignment area, choose:
• Global, if you want to assign the certificate to all archives
• Assign to archive, if you want to assign the certificate to a dedicated archive.
In the selection list select the dedicated archive.
6. Click OK to start the import.
A protocol window shows the progress and the result of the import. To check
the protocol later on, see “Checking utilities protocols” on page 256.
Important
Any change to the settings affects all archives that use this certificate!
1. Select the Certificates entry of the Key Store node in the System object of the
console tree.
2. Select the Encryption Certificates tab in the result pane. All available certificates
are listed.
3. Click Set Encryption Certificates in the action pane.
4. Enter the path and the complete file name of the certificate or click Browse to
open the file browser. Select the designated Certificate and click OK to confirm.
5. Click OK to set the certificate.
6. Check the protocol to verify that the certificate was imported successfully, see
“Checking utilities protocols” on page 256.
Proceed as follows:
1. Select the Certificates entry of the Key Store node in the System object of the
console tree.
2. Click Import Timestamp Certificate in the action pane.
3. Enter a new ID or select an existing ID if you want to replace an existing
certificate.
4. Click Browse to open the file browser and select the designated Certificate.
Click OK to continue.
5. Click OK to start the import.
A protocol window shows the progress and the result of the import. To check
the protocol later on, see “Checking utilities protocols” on page 256.
Enterprise Scan generates checksums for all scanned documents and passes them on
to Document Service. Document Service verifies the checksums and reports errors
(see “Monitoring with notifications” on page 297). On the way from Document
Service to STORM, the documents are provided with checksums as well, in order to
recognize errors when writing to the media.
The leading application, or some client, can also send a timestamp (including
checksum) instead of the document checksum (see “Timestamp Usage” on page 115).
Verification can check timestamps as well as checksums.
The certificates for those timestamps must be known to the archive server and
enabled, before the timestamp checksums can be verified (see “Importing a
certificate for timestamp verification” on page 130).
Procedure: Activating the usage of checksums:
• You can retrieve the Subject from the certificate and use it as application ID
(name of the application); see the procedure below.
Procedure: Retrieving the application name from a certificate:
4. In the result pane, from the Certificates tab, select the imported certificate.
5. In the action pane, click View Certificate.
6. From the Subject entry, note or copy the value after CN=
Use this value as the application ID when creating the application type (<server
name> > Enterprise Library Services > Applications).
• Private key – If your Timestamp Server runs on a machine different from the
one where you run Timestamp Control Client, you must copy the file
containing the private key to a directory on the machine where Timestamp
Server runs. This is typically the <OT config><AS>/timestamp/ directory. Then
you can configure Timestamp Server to use the signature key from that file in the
configuration as described in “Configuration for Autostart” on page 149.
• Timestamp certificates – After the installation of Archive Server, Timestamp
Server is ready to use with default signature keys and certificates. However, it is
recommended to create your own signature keys and certificates. These
signature keys and certificates have to be provided to Timestamp Server.
• Passphrase (optional) used to protect the private key
Configuration variables
The required settings are administered using configuration variables in
Administration Client. Search for the respective configuration variables in
Configuration (see “Searching configuration variables” on page 214).
Configuration variables:
• Path to the certificate <n>
Location
Supply your location in a suitable format like <city>, <country>. The
minimum length of this string is 3 characters.
Server
This is the hostname of the computer on which Timestamp Server runs.
Port
The one and only communication interface of Timestamp Server is a TCP
port. Timestamp requests sent to this address will be processed if Timestamp
Server is running and configured. Therefore, you must specify the port
number. The default value is 32001; any number between 1 and 32767 might
work unless another process is using that port. Ports up to 1024 can only be
used if Timestamp Server runs with root privileges. When in doubt, contact
your system administrator.
Warning
A notification will be sent a given number of hours before the timeout is
reached. The status of the Timestamp service icon in Monitor Web Client
will change to “warning”. A setting of 0 disables this feature. See also
“Creating and modifying notifications” on page 301.
Time display
The main dialog retrieves the time from Timestamp Server and displays it
permanently. It can show the time as GMT (Greenwich Mean Time), or as a
local time representation, or both formats at the same time.
Signature Key File
For a full configuration, you can leave this entry empty for now. If you want
to do a quick start, select the file <OT config><AS>/timestamp/stampkey.pem.
The passphrase for this key file is ixos.
Change Passphrase
You can change the passphrase, which protects the signature key. If you
change the passphrase, the key file will be re-written.
Note: Any older copy of that file will still be usable with the old
passphrase.
Timeout
Because the internal clock of a computer has limited precision, this setting
provides a possibility to set a timeout period in hours after which
Timestamp Server refuses to timestamp incoming requests. The timeout
counter is reset every time you transmit the signing key as described in
“Starting Timestamp Control Client” on page 135. A timeout setting of 0 will
disable this feature and leave the server running indefinitely.
Administration
If Timestamp Server is installed on a Windows platform, Timestamp Control
Client can be installed on the same machine. Otherwise, it can be installed
on a remote computer to do the administration via remote access.
Configuration requests will only be accepted by Timestamp Server if the
remote host is specified in this line. Multiple hostnames and IP addresses
must be separated by semicolons (;). If no host is supplied, only local
administration is possible.
Allow remote administration from any host
This is not recommended! Selecting this check box causes Timestamp Server
to accept configuration requests from any host. Only use this for debugging
or experimental purposes!
Timestamp Policy
Timestamps in the PKIX format (RFC 3161) contain an object identifier
(OID), which defines a timestamp policy. Leave the default value
(1.3.6.1.5.7.7.2) unless you know exactly what you need.
Notification
Enter the number of days before one of the certificates in use expires. Starting
that day, Timestamp Server sends one notification per day to warn the
administrator about the upcoming certificate expiration.
Passphrase(!)
This entry is needed for auto-initialization. If you enter a passphrase here, it
will be stored in Timestamp Server's configuration in an encrypted format.
At startup time, Timestamp Server can read and decrypt this passphrase and
use it to decode the signature key and initialize itself.
Hash Algorithm
If a certain hash algorithm is specified here, Timestamp Server will use that
algorithm to create the signatures. The default setting is same as in TS
request, which causes Timestamp Server to use the same hash algorithm for
the signature as the one specified in the timestamp request it receives from
Archive Server.
Protocol file location
The path of the protocol file location.
Note: The path for the protocol file must exist; otherwise, no protocol file will
be written. When starting up, Timestamp Server reads the last serial
number issued and continues timestamping with the next serial number.
Proceed as follows:
1. Start Timestamp Control Client and click Certificates.
The Certificates window opens.
2. Click Generate keys. The Generate new key pair window opens.
3. Enter settings:
Passphrase
Enter the passphrase twice. This passphrase will be used to encrypt the key-
pair before storing it in a file.
Caution
The program can decrypt the key-pair only if you supply the
passphrase, so do not forget it. Timestamp Server cannot create
timestamps without it. The usual good advice for password selection
and handling applies: use a difficult password, do not write it down!
Key length
At least 1024 bits are recommended. Longer keys increase security and
validity time of the issued timestamps, but they also increase the time
needed to sign and verify those timestamps.
RSA/DSA
Selects the signature algorithm for which the key will be generated. RSA is
recommended since not all trust centers support DSA.
4. Click Start to generate the key. This may take several minutes depending on the
key length and your machine's computing power. Generating a 2048 bit DSA
key on a P133 can take almost one hour!
After key generation, you will be asked where to store the key. You are basically free
to select the location. Two locations make special sense:
• In the <OT config><AS>/timestamp/ directory. Easy to find but also readable
by an attacker.
• On a memory stick. The memory stick can be removed and stored in a secure
place. However, it is needed every time the key-pair is sent to Timestamp Server,
i.e. every time you start Timestamp Server and every time the timeout expires.
Auto-initialization
If you are using auto-initialization, the key must be stored on the Timestamp Server
machine. For further information, see “Using the auto initialization mode” on
page 134.
Proceed as follows:
1. Start Timestamp Control Client and click Certificates.
3. Enter settings. The fields Country, Organization and Common Name are
mandatory. Common Name should be the fully qualified hostname of
Timestamp Server. Organizational Unit, State / Province, Location and Email
are optional.
4. Click Generate Request to start.
If you have not used your passphrase since you started Timestamp Control
Client, you will be asked for the passphrase now. If you stored the key-pair on a
memory stick, make sure that the memory stick is inserted. The program needs
the private key to sign the certificate request.
5. Enter a filename and save the file. The contents of the file should look
something like this:
-----BEGIN CERTIFICATE REQUEST-----
MIICaDCCAiQCAQEwYzELMAkGA1UEBhMCREUxGTAXBgNVBAoTEElYT1MgU09GVFdB
UkUgQUcxDjAMBgNVBAsTBVRTMDAxMQ8wDQYDVQQHEwZNdW5pY2gxGDAWBgNVBAMT
...
I/ofikRvFV+fnw/kkddqr7VdNMH2oOHlozmgADALBgcqhkjOOAQDBQADMQAwLgIV
AJPkQtYi7uSSA3II6xeG6ucxJNz0AhUAh3acSLKnILYwnqdR7Vz8/R0b53s=
-----END CERTIFICATE REQUEST-----
6. Use the request in the file to apply for a certificate at a trust center in a PEM file
format.
Proceed as follows:
1. Start Timestamp Control Client and click Certificates.
2. Select the old certificates (bottom up) and click Remove Certificate. Click Yes to
confirm.
3. Click Add Certificate. A window to select a certificate in PEM format opens.
4. Add certificates. Start with the self-signed root certificate (either issued by the
trust center for itself or issued by the root authority for itself). The program will
complain if the order is not correct. A dialog displays the properties of each
certificate you are about to install.
5. Verify this information thoroughly, especially the Valid not before and Valid
not after items.
6. Click Yes to confirm that you want to use this certificate. The certificate will be
copied to the application directory.
Note: The program checks the certificate's Valid not before and Valid not
after specifications and rejects it if it is not valid.
Note: If Timestamp Server for some reason does not grant you access for
configuration requests, the server's system time is displayed but the status
values for Signature key, Certificates, Location, and Time only show a
question mark.
If you are performing remote administration (i.e. with Timestamp Control
Client on your local host and Timestamp Server on another computer), make
sure that the correct hostname for the administration host is entered on the
computer that runs Timestamp Server (see “Configuring basic settings” on
page 135).
The debug output should give you a hint why Timestamp Server refuses to
start.
Checking the status via Web browser
The general status of Timestamp Server together with some details about its
configuration can also be retrieved and displayed with a standard Web browser.
Use the following URL:
http://<servername>:<port>
As <servername> use the machine name of Timestamp Server and as <port> use
the configured port. (The default port is 32001.)
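For example, if Timestamp Server runs on a host named tshost1 (hypothetical
name) with the default port, the URL reads:
http://tshost1:32001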
Note: The status can only be retrieved on machines that are configured as
Administration hosts in Timestamp Server setup. If Allow remote
administration from any host is selected, the Web status can be used on any
host, of course.
There is a link to Timestamp Server's logfile. Following this link may take some time
if the logfile is large. Your browser may even hang or crash if the logfile is too large.
This is not a bug in the server software!
Proceed as follows:
1. Start Timestamp Control Client and click Transmit Parameters.
2. Check whether the displayed time is correct. If not, you must cancel this
dialog and adjust the time for Timestamp Server first (see “Checking and
adjusting the time” on page 145).
3. Enter the passphrase and click OK.
Proceed as follows:
1. Start Timestamp Control Client.
2. Click Open Logfile.
Proceed as follows:
1. Make sure that the system time on the server is correct.
2. Start Timestamp Control Client.
Certificates The certificates status reflects whether Timestamp Server has accepted the
certificates and a key-pair that matches the public key in the server's certificate.
After a fresh start of Timestamp Server, no certificates are available and the
certificates status will be not set. After you have transmitted a set of valid certificates
(see “Transmitting configuration parameters” on page 144) along with the signature
key and the location, the status should change to set.
No timestamps must be issued at a time when a certificate required for verification
of that timestamp has expired. Therefore, Timestamp Server checks the validity
dates of its certificates against the system time for every timestamp. It sends a
notification every 24 hours starting a configurable number of days before a
certificate expires.
For detailed information about the Certificates window, see “Configuring
certificates and signature keys” on page 118.
• Second, it is verified that every certificate is currently valid and has not
expired. Otherwise, the message A certificate has expired is displayed.
• Finally all certificates are verified with the issuer's public keys (taken from
the issuer's certificates). If this fails, the error message Verification of
certification path failed is displayed.
5. If you receive errors, check whether the signature keys, the certificates and the
time settings are configured correctly (see “Configuring certificates and
signature keys” on page 118, “Checking and adjusting the time” on page 145).
6. Click Transmit Parameters again and provide your passphrase when asked (see
“Transmitting configuration parameters” on page 144).
If no error occurs and you see the message Certification path verified
successfully, the configuration is correct and can be used to run Timestamp
Server.
8.3.1.3 Quovadis
Introduction
Quovadis offers qualified timestamps over the Internet. This kind of service
provides the highest level of trustworthiness.
ArchiSig timestamps
Configuration recommendation:
Example: tshost1:32001;tshost2:10318
AS.DS.COMPONENT.TIMESTAMPS.TIME_STAMP_MODE
IETF (RFC 3161 without HTTP header). SIGIA4 timestamps are strongly
discouraged!
AS.DS.COMPONENT.TIMESTAMPS.MAX_TSS_CONNECTIONS
Use 2. Open Text Timestamp Server is usually fast enough that higher values
do not increase performance.
Important
See “Password security and settings” below for additional information
on passwords.
To open the change password dialog, select Archive Server in the console and click
Set Password....
Password settings
You can specify a minimum length for passwords, whether a user is locked out after
several unsuccessful logons, and how long the lockout is to be.
Minimum length for passwords
You can define a minimum character length for passwords. If you do not set this
property, the default value is eight.
Proceed as follows:
1. From the <OT config><AS>\setup directory, open the DS.Setup file in a text
editor.
2. Enter the following line (or modify it if present already):
DS_MIN_PASSWD_LEN=<required password length>
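For example, to require passwords of at least ten characters:
DS_MIN_PASSWD_LEN=10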
Lock out after failed logons
You can define that a user is locked out after a specified number of failed attempts
to log on; default is 0 (no lockout).
Note: The dsadmin user will never be locked out.
Proceed as follows:
1. From the <OT config><AS>\setup directory, open the DS.Setup file in a text
editor.
2. Enter the following line (or modify it if present already):
DS_MAX_BAD_PASSWD=<number of failed attempts>
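For example, to lock out a user after three failed attempts:
DS_MAX_BAD_PASSWD=3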
Unlock after failed logons
You can define how long a user is locked out after a failed attempt; default is zero
seconds.
Note: The dsadmin user will never be locked out.
Proceed as follows:
1. From the <OT config><AS>\setup directory, open the DS.Setup file in a text
editor.
2. Enter the following line (or modify it if present already):
DS_BAD_PASSWD_ELAPS=<unlock time in seconds>
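For example, to unlock a user automatically after five minutes (300 seconds):
DS_BAD_PASSWD_ELAPS=300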
9.2 Concept
Modules
To keep administrative effort as low as possible, the rights are combined in policies
and users are combined in user groups. The concept consists of three modules:
User groups
A user group is a set of users who have been granted the same rights. Users are
assigned to a user group as members. Policies are also assigned to a user group.
The rights defined in the policy apply to every member of the user group.
Users
A user is assigned to one or more user groups and is allowed to perform the
functions that are defined in the policies of these groups. It is not possible to
assign individual rights to individual users.
Policies
A policy is a set of rights, i.e. actions that a user with this policy is allowed to
carry out. You can define your own policies in addition to using predefined and
unmodifiable policies.
Standard users
During the installation of Archive Server, some standard users, user groups and
policies are preconfigured:
dsadmin in aradmins group
This is the administrator of the archive system. The group has the “ALL_ADMS”
policy and can perform all administration tasks, view accounting information,
and start/stop the Spawner. After installation, the password is empty, change it
as soon as possible, see “Creating and modifying users” on page 160.
dpuser in dpusers group
This user controls the DocTools of the Document Pipelines. The group has the
“DPinfoDocToolAdministration” policy. The password is set by the “dsadmin”
user, see “Creating and modifying users” on page 160.
dpadmin in dpadmins group
This user controls the DocTools of the Document Pipelines and the documents in
the queues. The group has the “ALL_DPINFO” policy. The password is set by
the “dsadmin” user, see “Creating and modifying users” on page 160.
Proceed as follows:
1. Create and configure the policy, see “Creating and modifying policies” on
page 159.
2. Create the user, see “Checking, creating and modifying users” on page 160.
3. Create and configure the user group and add the users and the policies, see
“Checking, creating and modifying user groups” on page 161.
Group Description
Open Text Administration Client
Summary of rights to control creation, configuration and deletion of logical
archives.
Archive Users
Summary of rights to control creation, configuration and deletion of users and
groups and their associated policies.
Notifications
Summary of rights to control creation, configuration and deletion of
notifications and events.
Policies
Summary of rights to control creation, configuration and deletion of policies.
Important
Rights from the following policy groups should no longer be used. These
rights are still available to ensure compatibility with policies created for former
versions of Archive Server.
• Accounting
• Administration Server
• DPinfo
• Scanning Client
• Spawner
Modifying a policy
To modify a self-defined policy, select the policy in the top area of the result pane
and click Edit Policy in the action pane. Proceed in the same way as when creating a
new policy. The name of the policy cannot be changed.
Deleting a policy
To delete a self-defined policy, select the policy in the top area of the result pane and
click Delete in the action pane. The rights themselves are not lost, only the set of
them that makes up the policy. Pre-defined policies cannot be deleted.
See also:
• “Checking, creating and modifying users” on page 160
• “Checking, creating and modifying user groups” on page 161
• “Concept” on page 157
Proceed as follows:
1. Select Users and Groups in the System object in the console tree.
2. Select the Users tab in the result pane. All available users are listed in the top
area of the result pane.
3. Click New User in the action pane. The window to create a new user opens.
4. Enter the user name and the password.
Username
User name for Archive Server. The name may be a maximum of 14
characters in length. Spaces are not permitted. This name cannot be changed
subsequently.
Password
Password for the specified user.
Note: Characters allowed within a password: all printable ASCII
characters except “;”, “'” and “"”.
Confirm password
Enter exactly the same input as you have already entered under Password.
Global
Select this check box to replicate the user to all known servers.
5. Click Next. A window with available user groups opens.
6. Select the groups the user should be assigned to. Click Finish.
Modifying user settings
To modify a user's settings, select the user and click Properties in the action pane.
Proceed in the same way as when creating a new user. The name of the user cannot
be changed.
Deleting users
To delete a user, select the user and click Delete in the action pane.
See also:
• “Creating and modifying policies” on page 159
• “Checking, creating and modifying user groups” on page 161
• “Concept” on page 157
Name
A name that clearly identifies each user group. The name may be a
maximum of 14 characters in length. Spaces are not permitted.
Global
Select this check box to replicate the users of this group to all known servers.
Implicit
Implicit groups are used for the central administration of clients. If a group is
configured as implicit, all users are automatically members. If users who
have not been explicitly assigned to a user group log on to a client, they are
considered to be members of the implicit group and the client configuration
corresponding to the implicit group is used. If several implicit groups are
defined, the user at the client can select which profile is to be used.
5. Click Finish.
Modifying group settings: To modify the settings of a group, select it and click Properties in the action pane.
Proceed in the same way as when creating a user group.
Deleting a user group: To delete a user group, select it and click Delete in the action pane. Neither users
nor policies are lost, only the assignments are deleted.
See also:
• “Adding users and policies to a user group” on page 162
• “Creating and modifying policies” on page 159
• “Checking, creating and modifying users” on page 160
• “Concept” on page 157
Removing users and policies: To remove a user or a policy, select it in the bottom area and click Remove in the
action pane.
Proceed as follows:
1. Select Users and Groups in the System object of the console tree.
2. Select the Users tab in the top area of the result pane and select the user. Note
the groups listed under Members in the bottom area.
3. Select the Groups tab in the top area of the result pane and select Policies in the
bottom area of the result pane.
4. Select one of the groups you noted and note also the assigned policies listed in
the bottom area.
5. Select Policies in the System object.
6. Select one of the policies you noted. The associated groups of rights and
individual rights appear in the bottom area. Make a note of these.
7. Repeat step 6 for all policies that you noted for the user group.
8. Repeat steps 4 to 7 for the other user groups which the user is a member of.
Proceed as follows:
1. Select SAP Servers in the Environment object in the console tree.
2. Select the SAP System Connections tab in the result pane.
3. Click SAP System Connection in the action pane. A window to configure the
SAP system opens.
4. Enter the settings for the SAP system connection.
Connection name
SAP system connection name with which the administered server
communicates. You cannot modify the name later.
Description
Here you can enter an optional description (restricted to 255 characters).
Server name
Name of the SAP server on which the logical archives are set up in the SAP
system.
Client
Three-digit number of the SAP client in which archiving occurs.
Feedback user
Feedback user in the SAP system. The cfbx process sends a notification
message back to this SAP user after a document has been archived using
asynchronous archiving. A separate feedback user (CPIC type) should be set
up in the SAP system for this purpose.
Password
Password for the SAP R/3 feedback user. This is entered, but not displayed,
when the SAP system is configured. The password for the feedback user
must be identical in the SAP system and in Open Text Administration Client.
Instance number
Two-digit instance number for the SAP system. The value 00 is usually used
here. It is required for the sapdpxx service on the gateway server in order to
determine the number of the TCP/IP port (xx = instance number) being
used.
Codepage
Relevant only for languages which require a 16-bit character set for display
purposes or when different character set standards are employed in different
computer environments. A four-digit number specifies the type of character
set which is used by the RFCs. The default is 1100 for the 8-bit character set.
To determine the codepage of the SAP system, log into the SAPGUI and
select System > Status. If the SAP system uses another codepage, two
conversion files must be generated in SAP transaction sm59, one from the
SAP codepage to 1100 and the other in the opposite direction. Copy these
files to the Archive Server directory <OT config>< AS>/r3config and
declare the codepage number here in Open Text Administration Client. The
cfbx DocTool reads these files.
Language
Language of the SAP system; default is English. If the SAP system is
installed exclusively in another language, enter the SAP language code here.
Test Connection
Click this button to test the connection to the SAP system. A window opens
and shows the test result.
5. Click Finish.
Modifying SAP system connections: To modify a SAP system, select it in the SAP System Connections tab and click
Properties in the action pane. Proceed in the same way as when creating a SAP
system connection.
Deleting SAP system connections: To delete a SAP system, select it in the SAP System Connections tab and click
Delete in the action pane.
Testing a SAP connection: To test a SAP connection, select it in the SAP System Connections tab and click Test
Connection in the action pane. A window opens and shows the test result.
Proceed as follows:
1. Select SAP Servers in the Environment object in the console tree.
2. Select the SAP Gateways tab in the result pane.
3. Click New SAP Gateway in the action pane. A window to configure the SAP
gateway opens.
4. Enter the settings for the SAP gateway.
Subnet address
Specifies the address for the subnet in which an archive server or Enterprise
Scan is located. At least the first part of the address (e.g. NNN.0.0.0 in case of
IPv4) must be specified. A gateway must be established for each subnet.
IPv6
If you use IPv6, do not enclose the IPv6 address with square brackets.
Subnet mask / Length
Specifies the sections of the IP address that are evaluated. You can restrict
the evaluation to individual bits of the subnet address.
IPv4
Enter a subnet mask, for example 255.255.255.0.
IPv6
Enter the address length, i.e. the number of relevant bits, for example 64.
SAP system connection
SAP system connection name of the SAP system for which the gateway is
configured. If this is not specified, then the gateway is used for all SAP
system connections for which no gateway entry has been made. If subnets
overlap, the smaller network takes priority over the larger one. If the
networks are of the same size, the gateway to which a concrete SAP system
is assigned has priority over the default gateway that is valid for all the SAP
system connections.
Gateway address
Name of the server on which the SAP gateway runs. This is usually the SAP
server.
Gateway number
Two-digit instance number for the SAP system. The value 00 is usually used
here. It is required for the sapgwxx service on the gateway server in order to
determine the number of the TCP/IP port (xx = instance number; for example,
instance number = 00, sapgw00, port 3300). See the sketch after this procedure.
5. Click Finish.
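Both the sapdpxx and the sapgwxx service derive their TCP port from the two-digit instance number. The gateway example above gives sapgw00 = port 3300; the dispatcher base port 3200 follows the common SAP 32xx convention and is an assumption here, as the text above only states that xx equals the instance number. A minimal Python sketch:

    def sap_service_ports(instance: str) -> dict:
        # instance: two-digit instance number, for example "00"
        nn = int(instance)
        return {
            "sapdp" + instance: 3200 + nn,  # dispatcher port (assumed 32xx convention)
            "sapgw" + instance: 3300 + nn,  # gateway port, e.g. sapgw00 -> 3300
        }

    # sap_service_ports("00") -> {"sapdp00": 3200, "sapgw00": 3300}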
Modifying SAP gateways: To modify a SAP gateway, select it in the SAP Gateways tab and click Properties in
the action pane. Proceed in the same way as when creating a SAP gateway.
Deleting SAP gateways: To delete a SAP gateway, select it in the SAP Gateways tab and click Delete in the
action pane.
Proceed as follows:
1. Select SAP Servers in the Environment object in the console tree.
2. Select the Archive Assignments tab in the result pane. All archives are listed in
the top area of the result pane.
3. Select the archive to which a SAP system should be assigned. Keep in mind that
a SAP system can be assigned only to original archives.
4. Click New Archive SAP Assignment in the action pane. A window to configure
the SAP archive assignment opens.
5. Enter the settings for SAP archive assignment:
Modifying archive assignments: To modify an archive assignment, select it in the bottom area of the result pane and
click Properties in the action pane. Proceed in the same way as when assigning a
SAP system.
Removing archive assignments: To delete an archive assignment, select it in the bottom area of the result pane and
click Remove Assignment in the action pane.
Extended conditions (continuation of the preceding scenario):
• PS_ENCODING_BASE64_UTF8N = 1
• BIZ_APPLICATION<name>
• User: key = BIZ_DOC_RT_USER, value = <domain>\<name>
• User group: key = BIZ_DOC_RT_GROUP, value = <domain>\<name>
Late indexing to Process Inbox of TCP GUI
Archives the document to the Transactional Content Processing Server and starts a process
with the document in the TCP GUI inbox. Documents are indexed in TCP.
Extended conditions:
• PS_ENCODING_BASE64_UTF8N = 1
• BIZ_REG_INDEXING (leave the value empty)
• BIZ_APPLICATION<name>
• User: key = BIZ_DOC_RT_USER, value = <domain>\<name>
• User group: key = BIZ_DOC_RT_GROUP, value = <domain>\<group>
Late indexing for plug-in event
Archives the document to the Transactional Content Processing Server and calls a plug-in
event in the TCP Application Server. Documents are indexed in TCP.
Scenario: DMS_Indexing; condition: PILE_INDEX; workflow: n/a
Extended conditions:
• BIZ_ENCODING_BASE64_UTF8N
• BIZ_APPLICATION<name>
• BIZ_PLG_EVENT=<plugin>:<event>
Proceed as follows:
1. Select Scan Stations in the Environment object in the console tree.
2. Select the Archive Modes tab in the result pane.
Modifying an archive mode: To modify the settings of an archive mode, select it in the Archive Modes tab in the
result pane and click Properties in the action pane. Proceed in the same way as
when adding an archive mode. Details: “Archive mode settings” on page 174.
Deleting an archive mode: To delete an archive mode, select it in the Archive Modes tab in the result pane.
Click Delete in the action pane. If the archive mode is assigned to a scan host, it
must be removed first, see “Removing assigned archive modes” on page 178.
See also:
• “Archive mode settings” on page 174
• “Scenarios and archive modes” on page 171
• “Adding a new scan host and assigning archive modes” on page 176
General tab
Archive mode name
Name of the archive mode. Do not use spaces. You cannot change the name of
the archive mode after creation.
Scenario
Name of the archiving scenario (also known by the technical name Opcode).
Scenarios apply to leading applications.
Archive name
Name of the logical archive, to which the document is sent.
SAP system connection
SAP system connection name with which the administered server
communicates.
Advanced tab
Workflow
Name of the workflow that will be started in Enterprise Process Services when
the document is archived. For details concerning the creation of workflows, see
the Enterprise Process Services documentation.
Conditions
These archiving conditions are available:
R3EARLY
Early archiving with SAP.
BARCODE
If this option is activated, the document can only be archived if a barcode was
recognized. For Late Archiving, this is mandatory. For Early Archiving, the
behavior depends on your business process:
• If a barcode or index is required on every document, select the Barcode
condition. This makes sure that an index value is present before archiving.
The barcode is transferred to the leading application.
• If no barcode is needed, or it is not present on all documents, do not select
the Barcode condition. In this case, no barcode is transferred to the
leading application.
PILE_INDEX
Sorts the archived documents into piles for indexing according to certain
criteria. For example, the pile can be assigned to a document group, and the
access to a document pile in a leading application like Transactional Content
Processing can be restricted to a certain user group.
INDEXING
Indexing is done manually.
ENDORSER
Special setting for certain scanners. Only documents with a stamp are stored.
Extended Conditions
This table is used to hand over archiving conditions to the COMMANDS file, for
example, to provide the user name so that the information is sent to the correct
task inbox. The extended conditions are key-value pairs (see the sketch below).
Click Add to enter a new condition. To modify an extended condition, select it
and click Edit. Click Remove to delete the selected condition.
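Extended conditions are plain key-value pairs. The exact syntax of the COMMANDS file is not described here, so the following Python sketch only illustrates the key-value idea; the domain and user names are hypothetical, and the keys are taken from the scenario descriptions above:

    # Hypothetical example: route the archived document to a task inbox.
    extended_conditions = {
        "BIZ_DOC_RT_USER": "MYDOMAIN\\jdoe",    # user that receives the task
        "BIZ_DOC_RT_GROUP": "MYDOMAIN\\clerks", # or a user group
    }
    for key, value in extended_conditions.items():
        print(key + "=" + value)  # one pair per line; the real COMMANDS syntax may differ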
See also:
• “Adding and modifying archive modes” on page 173
• “Adding a new scan host and assigning archive modes” on page 176
Proceed as follows:
1. Select Scan Stations in the Environment object in the console tree.
2. Select the Archive Modes tab in the result pane.
3. Select the archive mode to which scan hosts should be assigned.
4. Click Add Scan Host in the action pane. A window with available scan hosts
opens.
5. Select the designated scan hosts and click OK.
See also:
• “Adding and modifying archive modes” on page 173
• “Adding a new scan host and assigning archive modes” on page 176
Deleting an archive mode: To delete an archive mode, select it in the Archive Mode tab in the result pane. Click
Delete in the action pane. If the archive mode is assigned to a scan host, it must be
removed first, see “Adding a new scan host and assigning archive modes” on
page 176.
See also:
• “Adding additional archive modes” on page 177
• “Adding and modifying archive modes” on page 173
• “Archive mode settings” on page 174
Proceed as follows:
1. Select Scan Stations in the Environment object in the console tree.
2. Select the Scan Hosts tab in the result pane.
3. Select the scan host to which archive modes should be assigned.
4. Click Add Archive Mode in the action pane. A window with available archive
modes opens.
5. Select the archive modes and click OK.
See also:
• “Adding and modifying archive modes” on page 173
• “Archive mode settings” on page 174
Proceed as follows:
1. Select Scan Stations in the Environment object in the console tree.
2. Select the Scan Hosts tab in the result pane.
3. Select the scan host for which you want to change the default archive mode.
4. Click Properties in the action pane.
5. Choose the new default archive mode and click OK.
Example:
<host> = host03100
<port> = 8080
<secure port> = 8090
<context> = /archive
http://host03100:8080/archive?...
https://host03100:8090/archive?...
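The example above follows the general URL pattern http://<host>:<port><context>?... and https://<host>:<secure port><context>?.... A minimal sketch of how a client could build these URLs from the configured values:

    def archive_urls(host: str, port: int, secure_port: int, context: str):
        # URL pattern as in the example above; the query part ("?...") is appended by the client.
        return (
            "http://{0}:{1}{2}".format(host, port, context),
            "https://{0}:{1}{2}".format(host, secure_port, context),
        )

    # archive_urls("host03100", 8080, 8090, "/archive")
    # -> ("http://host03100:8080/archive", "https://host03100:8090/archive")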
Modifying known server settings: To modify the settings of a known server, select it in the top area of the result pane
and click Properties in the action pane. Proceed in the same way as when adding a
known server.
Proceed as follows:
1. Select Known Servers in the Environment object in the console tree.
2. Click Synchronize Servers in the action pane.
3. Click OK to confirm. The synchronization is started.
In a remote standby scenario, all new and modified documents are asynchronously
transmitted from the original archive to the replicated archive of a known server.
This is done by the Synchronize_Replicates job on the Remote Standby Server.
The job physically copies the data on the storage media between these two servers.
Therefore, the Remote Standby Server provides more data security than the local
backup of media.
The Remote Standby Server has the following advantages:
• Increased availability of the archive, since the Remote Standby Server is accessed
when the original server is not available.
• Backup media are located at a greater distance from the original archive server,
providing security in case of fire, earthquake, and other catastrophes.
Proceed as follows:
1. Log on to the original archive server.
2. Add the Remote Standby Server as known server (see “Adding known servers”
on page 179). Ensure that Remote server is allowed to replicate from this host
is set.
3. Click OK. The Remote Standby Server is listed in Known Servers in the
Environment object of the console tree.
Important
These volumes have to be named the same way as the original volume. The
replicate volumes need at least the same amount of disk space.
See also:
• “Configuring disk volumes” on page 47
• “Installing and configuring storage devices” on page 58
b. Enter Mount Path and Device Type and click OK. Repeat this for every
missing volume.
ISO volumes
ISO volumes will be replicated by the asynchronously running
Synchronize_Replicates job (see also “ISO volumes” on page 187).
a. Select Replicated Archives in the console tree and select the designated
archive.
b. Select a replicated pool in the console tree and click Properties in the
action pane.
c. Enter settings (see “Write at-once pool (ISO) settings” on page 89) for
Number of Backups to n (n>0, for volumes on HDWO: n=1) and select
the Backup Jukebox.
d. Configure the Synchronize_Replicates job according to your needs
(see “Setting the start mode and scheduling of jobs” on page 102).
IXW volumes
IXW volumes will be replicated by the asynchronously running
Synchronize_Replicates job (see also “IXW volumes” on page 188).
a. Select Replicated Archives in the console tree and select the designated
archive.
b. Select a replicated pool in the console tree and click Properties in the
action pane.
c. Enter settings (see “Write incremental (IXW) pool settings” on page 91)
for Number of Backups to n (n>0) and select the Backup Jukebox.
d. Configure the Synchronize_Replicates job according to your needs
(see “Setting the start mode and scheduling of jobs” on page 102).
4. Schedule the replication job Synchronize_Replicates (see “Setting the start
mode and scheduling of jobs” on page 102).
Note: On the original archive server, the backup jobs can be disabled if no
additional backups should be written.
A message is shown stating that the disk buffer is replicated and that a volume has
to be attached to this disk buffer.
5. Select Buffers in the Infrastructure object in the console tree.
6. Select the Replicated Disk Buffers tab in the result pane. The replicated buffers
are listed in the top area.
7. Select the replicated buffer in the top area. In the bottom area, the assigned
volumes are listed. Volumes which are not configured are labeled with the
missing type.
8. Select the first missing volume and click Attach or Create Missing Volume in
the action pane.
9. Enter Mount Path and click OK. Repeat this for every missing volume.
Proceed as follows:
1. Log on to the Remote Standby Server.
2. Select Replicated Archives in the console tree and select the designated archive.
3. Select a replicated pool in the console tree and click Properties in the action
pane.
4. Enter settings (see “Write at-once pool (ISO) settings” on page 89) for Number
of Backups to n (n>0, for volumes on HDWO: n=1) and select the Backup
Jukebox.
5. Configure the Synchronize_Replicates job according to your needs (see
“Setting the start mode and scheduling of jobs” on page 102).
The Synchronize_Replicates job now backs up the data of the original ISO
pool according to the scheduling.
Note: If problems occur, have a look at the protocol of the
Synchronize_Replicates job (see “Checking the execution of jobs” on
page 103).
Proceed as follows:
1. Log on to the Remote Standby Server.
2. Select Replicated Archives in the console tree and select the designated archive.
3. Select a replicated pool in the console tree and click Properties in the action
pane.
4. Enter settings (see “Write incremental (IXW) pool settings” on page 91) for
Number of Backups to n (n>0) and select the Backup Jukebox.
5. Configure the Synchronize_Replicates job according to your needs (see
“Setting the start mode and scheduling of jobs” on page 102).
According to the scheduling, the Synchronize_Replicates job backs up the
data written to the original medium since the last backup onto a backup medium.
Note: If problems occur, have a look at the protocol of the
Synchronize_Replicates job (see “Checking the execution of jobs” on
page 103).
1. Write lock the original volume to avoid write access, see “To write lock the
original volume:” on page 189.
2. Update the replicated volume, see “To update the replicated volume:” on
page 189.
3. Export and remove the replicated volume, see “To export and remove the
replicated volume:” on page 189.
4. In case of IXW: insert a new volume for replication, see “To export and remove
the replicated volume:” on page 189.
5. Remove the original volume and insert the replicate volume, see “To remove the
defective original volume and insert the replicate volume:” on page 190.
6. Update the new replicated volume, see “To update the new replicated volume:”
on page 191.
Note: For double-sided media, you have to execute the following steps for both
sides!
Important
If this job is executed during office hours, make sure that enough
bandwidth for the replicated data is available between the original server
and the Remote Standby Server.
4. Check whether the job ran successfully (see “Checking the execution of jobs” on
page 103). If it was not possible to back up all data, stop here and contact
Open Text Customer Support.
To remove the defective original volume and insert the replicate volume:
1. Log on to the original archive server.
2. Select the jukebox in Devices in the Infrastructure object in the console tree.
3. Select the defective volume in the bottom area of the result pane and click Eject
Volume in the action pane.
4. Remove the medium from the jukebox and label it as defective.
5. Insert the replicate IXW (ISO) medium and restore it as original:
a. Insert the replicate IXW (ISO) medium in the jukebox of the original archive
server.
b. Select the jukebox in Devices in the Infrastructure object in the console tree
and click Insert Volume in the action pane.
c. Select the medium (status bak) and select Restore in the action pane.
This makes the backup volume available as the original volume.
6. Select the designated archive in the console tree and the designated pool in the
result pane.
7. Select the backup volume in the bottom area of the result pane and select Clear
Backup Status in the action pane.
Important
If this job is executed during office hours, make sure that enough
bandwidth for the replicated data is available between the original server
and the Remote Standby Server.
4. Check whether the job ran successfully (see “Checking the execution of jobs” on
page 103). If it was not possible to back up the data, stop here and contact
Open Text Customer Support.
1. Export and remove the replicated volume, see “To export and remove the
replicated volume:” on page 191.
2. In case of IXW: insert a new volume for replication, see “In case of IXW: insert
and initialize a new volume for replication” on page 192.
3. Update the new replicated volume, see “To update the new replicated volume:”
on page 192.
Note: For double-sided media, you have to execute the following steps for both
sides!
Important
If this job is executed during office hours, make sure that enough
bandwidth for the replicated data is available between the original server
and the Remote Standby Server.
4. Check whether the job ran successfully (see “Checking the execution of jobs” on
page 103). If it was not possible to back up the data, stop here and contact
Open Text Customer Support.
As the diagram indicates, the Administration Server is central to the coordination of
the whole cache scenario. Administration Client is used to configure the settings of
each Archive Cache Server and the associated clients and archives.
Important
To ensure accurate retention handling, the clock of the cache server must be
synchronized with the clock of the archive server.
Restrictions valid for “write back”
MTA documents
MTA documents can be stored, but the single documents in an MTA document
cannot be accessed until they are transferred from the cache server to the related
archive server.
Attribute Search
Attribute Search in print lists is not available until the content is transferred from
a cache server to the related archive server.
VerifySig
The signature verification is processed for write back items, but the signer chain
is not verified (no timestamp certificates are available on the related archive server).
Deletion behavior
To avoid problems with deletion, do not use the following archive settings:
• Original Archive > Properties > Security > Document Deletion > Deletion is
ignored (see also “Configuring the archive security settings” on page 81)
• Archive Server > Modify Operation Mode > Documents cannot be deleted, no
errors are returned (see also “Setting the operation mode of Archive Server” on
page 336)
Retention behavior
As long as write back documents are only stored on the cache server, there is no
protection based on the document retention. After documents have been
transferred to the related archive server, the retention behavior takes effect. If
there is no client retention, the retention setting of the logical archive is used.
Versioning of components
As long as components are only stored on the cache server, there is no version
control. This means that, after a successful modification, the modified component
is available, but the version number is not incremented. A subsequent info call
will still return version “1” of the modified component, until the component has
been transferred to the related archive server.
Transfer and commit
Write back documents are transferred to the related archive server in a two-phase
process:
Application type
The application type is not stored on the cache server and is thus not transferred
to the related archive server. This means that automatic pool separation
depending on the application type does not work.
Maintenance mode
Documents cannot be accessed during maintenance mode.
Disabled archives
Documents cannot be modified if the logical archive is disabled.
Restrictions valid for “write through” and “write back”
Component name mapping
In write back mode, an error occurs if you try to create a component matching
one of these names (see the sketch after these steps):
• <n>.pg
• im
To support all component names, create a new entry in the configuration:
1. Select Runtime and Core Services > Configuration >
Content Service.
2. Click New Property in the action pane.
3. Enter the property name:
contentservice.ILLEGALCOMPONENTNAMES
4. Select Global as Scope and String as Datatype.
5. Click Next.
6. Leave the Property Value field empty and select Requires Restart?
7. Click Next and then Finish to complete.
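The reserved names above can be checked programmatically. A minimal sketch, assuming that <n> stands for a decimal number; the function name is hypothetical:

    import re

    def is_reserved_component_name(name: str) -> bool:
        # Reserved by default in write back mode: "<n>.pg" (n assumed numeric) and "im".
        return name == "im" or re.fullmatch(r"\d+\.pg", name) is not None

    # is_reserved_component_name("1.pg") -> True
    # is_reserved_component_name("report.pdf") -> False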
Timestamp verification
A mandatory signature check before reading can be configured for each archive.
This setting is ignored for cached documents.
Encryption, Compression, Single Instance, Blobs
Content on the cache server is neither encrypted nor compressed, regardless of
the archive setting.
Destroy
Documents are not destroyed on the cache server, regardless of the archive
setting.
Proceed as follows:
1. Select Cache Servers in the Environment object in the console tree.
2. Click New Cache Server in the action pane.
3. Enter the cache server parameters:
Cache server name
Unique name of the cache server. This name is used throughout the
configuration and administration to refer to the cache server.
Description
Brief, self-explanatory description of the cache server.
Host (client)
Physical host name to address the cache server when a client accesses it.
Note: Instead of the host name, you can also use IPv4 addresses.
However, IPv6 addresses are not supported.
'Copy back' job
Displays the associated Copy_Back job. This entry cannot be changed.
Host (archive server)
Physical host name used by the archive server to communicate with a cache
server. This name may be different from the host name relating to client.
Note: Instead of the host name, you can also use IPv4 addresses.
However, IPv6 addresses are not supported.
The <name to use by ACS for itself> name and the Host (archive
server) name must be identical. Otherwise, problems will arise during
the write-back scenario.
Port, Secure port, Context path
Specifies the port, the secure port, and the context path that enable the client
to create URLs for the designated cache server.
Structure of the URLs:
http://<host>:<port><context>?...
https://<host>:<secure port><context>?...
Example:
<host> = csrv03100
<port> = 8080
<secure port> = 8090
<context> = /archive
http://csrv03100:8080/archive?...
https://csrv03100:8090/archive?...
4. Click Finish.
5. Configure the Copy_Back job. See also “Configuring jobs and checking job
protocol” on page 97 and Table 6-3 on page 99.
Note: Be aware that this job is disabled by default. If you intend to
use the "write back" mode, enable this job.
6. Click Finish. The new cache server is added to the Environment.
Next step:
• “Configuring archive access via a cache server” on page 206.
Caution
Do not modify the host name while writing back.
The following step ensures that pending write-back documents are
transferred to the related archive server. If this step fails, the cache server
must not be deleted before the problem is solved.
Proceed as follows:
Select the Copy_Back job which is assigned to the cache server and click
Start in the action pane. The cached documents are transferred to the related
archive server. A window to watch the transfer status opens.
Proceed as follows:
1. Detach the cache server from all logical archives it is attached to. See “Deleting
an assigned cache server” on page 208.
2. Select Jobs in the System object in the console tree.
3. Select the Copy_Back job which is assigned to the cache server and click Start in
the action pane. The cached documents are transferred to the related archive
server. A window to watch the transfer status opens.
Caution
This step ensures that pending write-back documents are transferred to
the related archive server. If this step fails, the cache server must not be
deleted before the problem is solved.
Adding cache volumes: Adding a write back volume or a write through volume works in the same way.
However, only one write back volume can be added, whereas several write through
volumes can be added.
For each new cache volume, two new properties are required:
• the volume size
• the path where the volume is located
Proceed as follows:
1. In Runtime and Core Services > Configuration select the Content Service
object.
2. Volume size: Select the New Property... action.
3. Create the cache volume size property:
For Property Name, enter the volume size name of the new volume. Make sure
that this volume already exists.
For Scope, select Global.
Warning
Danger of loss of data
Make sure not to accidentally remove the write back volume or to change the
path of the write back volume. In case of questions, contact Open Text
Customer Support.
1. In Runtime and Core Services > Configuration select the Content Service
object.
For re-sizing, select one of the following variables:
• ACS size of write back volume in MB
or
• contentservice.SIZE<n>
2. Click Properties in the action pane or double-click the variable name.
The Properties window opens.
3. Modify the Global Value to the appropriate value and confirm with OK.
Activating the modification: Modifications of the volume size and newly added volumes must be activated
before they can be used. For activation, there are the following options:
• Re-start the cache server and check the volume size using the cscommand
command. This utility is provided in the contentservice subdirectory of the
<Web configuration directory> (“filestore”).
The user and password of the respective archive server have to be supplied.
The result is a list of all volumes, split into the data volume and the volume
reserved for internal attributes per volume.
Note: Re-sized volumes may be viewed only after a restart of the server.
• Switch the maintenance mode on and off again.
See “Backup of cache server data” on page 250.
Note: The advantage of switching the maintenance mode on and off is that the
client does not receive errors, because incoming requests, if any, are
redirected.
4. Copy all data from the current database location (see step 2) to the new location
(provided in step 1). The file permissions of the copy must match the original
ones.
5. Configure the Cache server to use the new database location:
In Runtime and Core Services > Configuration select the Content Service
object.
Open the ACS database directory variable and change the value to the new
database directory name.
6. Switch the maintenance mode off.
See “Backup of cache server data” on page 250.
Important
The subnet configuration is evaluated only by “intelligent” clients.
Note: Archive Cache Server keeps track of any relevant changes to the archive
settings and is synchronized automatically.
Proceed as follows:
1. Select Original Archives in the Archives object in the console tree.
2. Select the logical archive to which the cache server should get access.
3. Select the Cache Servers tab in the top area of the result pane and click Assign
Cache Server.
4. Enter settings:
Cache server
The name of the cache server assigned to this archive.
Caching enabled
If caching is enabled, one of the following modes can be set.
Write through
The cache server will operate in “write through” mode for this logical
archive.
Write back
The cache server will operate in “write back” mode for this logical
archive.
Note: If caching is disabled, the cache server does not cache any new
documents for this logical archive. Instead, it acts as a proxy and forwards
all requests to Archive Server. Outstanding write back documents can still
be retrieved.
5. Click Next and enter settings for subnet address and subnet mask/length.
The combination of subnet mask and subnet address specifies a subnet. Clients
residing in this subnet will use the selected cache server. Typically, the cache
server resides in the same subnet. It is possible to add more than one subnet
definition to a cache server, see also “Subnet assignment of a cache server” on
page 205.
Several subnets
If a client belongs to more than one subnet, it will use the cache server that
is assigned to the best matching subnet (see the sketch after this procedure).
Subnet address
Specifies the address for the subnet in which a cache server is located. At
least the first part of the address (e.g. NNN.0.0.0 in case of IPv4) must be
specified. A gateway must be established for each subnet.
IPv6
If you use IPv6, do not enclose the IPv6 address with square brackets.
Subnet mask / Length
Specifies the sections of the IP address that are evaluated. You can restrict
the evaluation to individual bits of the subnet address.
IPv4
Enter a subnet mask, for example 255.255.255.0.
IPv6
Enter the address length, i.e. the number of relevant bits, for example 64.
6. Click Finish to complete.
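The best matching subnet is the most specific one, i.e. the smaller network (longer prefix) takes priority, as described in step 5. A minimal Python sketch of this selection, using the standard ipaddress module:

    import ipaddress

    def best_matching_cache_server(client_ip: str, assignments: dict):
        # assignments: subnet (e.g. "10.1.0.0/16" or an IPv6 prefix) -> cache server name
        ip = ipaddress.ip_address(client_ip)
        matches = [
            (ipaddress.ip_network(subnet), server)
            for subnet, server in assignments.items()
            if ip in ipaddress.ip_network(subnet)
        ]
        # The smaller network (longer prefix) wins over the larger one.
        return max(matches, key=lambda m: m[0].prefixlen)[1] if matches else None

    # best_matching_cache_server("10.1.2.3",
    #     {"10.0.0.0/8": "csrv01", "10.1.0.0/16": "csrv03100"}) -> "csrv03100"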
Modifying cache server settings: To modify the settings of a cache server, select it in the top area of the result pane
and click Properties in the action pane. Proceed in the same way as when
configuring a cache server.
Modifying subnet definitions of a cache server: To modify the subnet definitions of a cache server, select it in the bottom area and
click Properties in the action pane. Proceed in the same way as when adding a
subnet definition.
Proceed as follows:
1. Select Original Archives in the Archives object in the console tree.
2. Select the logical archive which the cache server is assigned to.
3. Select the Cache Servers tab in the top area of the result pane and select the
cache server you want to delete.
4. Click Properties in the action pane.
5. Deselect Caching enabled to stop caching. See also “Configuring archive access via a
cache server” on page 206.
6. Select Jobs in the System object in the console tree.
7. Select the Copy_Back job which is assigned to the cache server you want to
delete and click Start. The cached documents are transferred to the related
archive server. A window to watch the transfer status opens.
8. Select the cache server you want to delete again and click Delete in the action
pane.
9. Click Yes to confirm. The cache server is no longer assigned to the logical
archive.
Note: The property names for Archive Server must be administered in
ascending order.
Generating Reports
Proceed as follows:
1. Select Reports in the System object in the console tree.
2. Select the Scenarios tab in the top area of the result pane.
3. Select the scenario for which you want to generate a report.
Currently only the reportArchive scenario is available.
4. Select the Run Scenario... action.
The resulting report is stored as an HTML file and can be displayed in a standard
browser; see the procedure “Displaying a Report” on page 212.
Information about a report: The following information per report is displayed in the result pane:
Name
Name of the report. The name is predefined; it is derived from the respective scenario
name extended by a serial number.
Date
Date and time when the report was generated. Format: YYYY-MM-DD HH:MM:SS.
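The report name and timestamp follow the rules given above. A minimal Python sketch; the underscore separator between scenario name and serial number is an assumption, only the date format is stated in the text:

    from datetime import datetime

    def report_label(scenario: str, serial: int) -> str:
        # Name = scenario name extended by a serial number (separator assumed).
        return scenario + "_" + str(serial)

    # Timestamp format used for reports: YYYY-MM-DD HH:MM:SS
    generated_at = datetime.now().strftime("%Y-%m-%d %H:%M:%S")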
Deleting reports: To delete a report, select it and click Delete in the action pane. Confirm the
displayed message with OK.
Displaying a Report
Proceed as follows:
1. Select Reports in the System object in the console tree.
2. Select the Reports tab in the top area of the result pane.
3. Select the Refresh action.
4. Select a report in the Reports tab.
5. Select the Open Report... action.
The result HTML file may be displayed using your standard browser.
reportArchive
Generates a report comprising details for all archives (Original Archives, Replicated
Archives and External Archives) currently on the Archive Server. These details include:
• Security
• Settings
• Retention
• Timestamps
• Pools, if defined
Proceed as follows:
1. Select the Configuration object in the console tree.
2. Select one of the entries (Archive Server, Monitor Server or Document
Pipeline) of the Configuration object.
A list of related components is displayed in the result pane.
3. Select a component.
A list of related variables is displayed below the list of components.
4. Select a variable by double-clicking it or by using the Properties action in the
action pane.
The Configuration Variable Properties window opens, displaying two tabs:
• The General tab, displaying the name, the current value, a short description
and information on whether a server restart is required upon modifying this
variable.
• The Advanced tab, displaying the fully qualified internal name of the variable.
5. Select the General tab and modify the current value.
Note: Special handling applies when adding or deleting a list entry.
To add a value to a list:
a) Enter the value into the Variable field.
Resetting to default value: To reset a value to its default value, select it and click Reset to Default in the action
pane. This action is enabled only if the current value differs from the default value.
Confirm the confirmation dialog with OK.
Retrieving unspecified values: In the list of configuration variables, undefined values are marked with *** Value
not defined ***. In the properties window, undefined values are marked with an
icon.
Proceed as follows:
1. Select the Configuration object.
2. Enter the variable name to be searched for in the search field in the result pane
and click the search icon located to the right of the search field.
You may also use the internal name as the search string if you remove the prefix
of the internal variable name; see the sketch below.
Example 1: for the AS.ADMS.ADMS_ALRT_EXPIRE variable, enter
ADMS_ALRT_EXPIRE
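Stripping the prefix from an internal variable name, as in the example above, is a one-liner. A minimal sketch:

    def search_string(internal_name: str) -> str:
        # "AS.ADMS.ADMS_ALRT_EXPIRE" -> "ADMS_ALRT_EXPIRE"
        return internal_name.rsplit(".", 1)[-1]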
Proceed as follows:
1. Select the Configuration object (or one of the objects assigned to it).
2. Click Customize Configuration View... in the action pane.
The Customize Configuration View window opens.
3. Select one of the following options:
Show standard variables (recommended)
Shows the standard variables only.
Show all (including hidden variables)
Shows all variables, including hidden variables.
Write job has finished. It looks for volumes meeting the given conditions and, if
found, finalizes them.
You can enable automatic finalization and set the conditions either when creating
the pool or at a later time.
See also:
• “Manually finalizing IXW volumes” on page 220
Proceed as follows:
1. Select Original Archives in the Archives object in the console tree.
2. Select the original archive with the IXW pool the volume is assigned to.
3. Select the designated IXW pool in the top area and the volume to be finalized in
the bottom area of the result pane.
4. Click Finalize Volume in the action pane.
5. Click OK.
A protocol window shows the progress and the result of the finalization. To
check the protocol later on, see “Checking utilities protocols” on page 256.
To check the volume status, see “Checking the finalization status” on page 221.
See also:
• “Checking utilities protocols” on page 256
• “Checking the finalization status” on page 221
• “Automatic finalization of IXW volumes” on page 219
• “Manually finalizing IXW pools” on page 220
Proceed as follows:
1. Select Original Archives in the Archives object in the console tree.
2. Select the original archive with the IXW pool that should be finalized.
3. Select the designated IXW pool in the top area of the result pane.
See also:
• “Checking utilities protocols” on page 256
• “Checking the finalization status” on page 221
• “Manually finalizing IXW volumes” on page 220
• “Automatic finalization of IXW volumes” on page 219
Proceed as follows:
1. Select Devices in the Infrastructure object in the console tree. All available
devices are listed in the top area of the result pane.
2. Select the designated jukebox device. The attached volumes are listed in the
bottom area of the result pane.
3. Check the entry in the Final State column of the finalized volume(s); it must be
fin. The entry in the File System column of the volume must be ISO.
See also:
• “Setting the finalization status manually” on page 222
• “Manually finalizing IXW volumes” on page 220
• “Automatic finalization of IXW volumes” on page 219
Proceed as follows:
1. Select Devices in the Infrastructure object in the console tree. All available
devices are listed in the top area of the result pane.
2. Select the designated device. The attached volumes are listed in the bottom area
of the result pane.
3. Select the volume to set the finalization status.
4. Click Set Finalization Status in the action pane.
5. Click OK.
The Final state of the volume is set to fin_err.
Note: The failure of the finalization does not affect the security of the data on
the medium!
See also:
• “Checking utilities protocols” on page 256
• “Checking the finalization status” on page 221
• “Manually finalizing IXW volumes” on page 220
• “Automatic finalization of IXW volumes” on page 219
Document deletion: When the leading application sends the delete request for a document, the archive
system works as follows (a pseudocode sketch follows after this description):
Single files (from HDSK, FS, VI pools)
1. Archive Server deletes the index information of the document from the
archive database. The document cannot be retrieved any longer; the
document is logically deleted.[2]
2. Archive Server propagates the delete request to the storage system.
3. The storage system deletes the document physically and the client gets a
success message. Not all storage systems release the free space after deletion
for new documents (see documentation for your storage system). If deletion
is not possible for technical reasons, the information with the storage
location of the document is written into the TO_BE_DELETED.log file. The
administrator can configure a notification.
Note: If the state of an FS volume (NetApp or NASFiler) is set to “write
locked”, components will not be removed from this volume when one
tries to delete them from Document Service. The case will be handled as
if the removal was prevented by the hardware (entry in
TO_BE_DELETED.log, notification, additional delete from archive
database if the request was a docDelete).
Container files (from ISO, IXW pools, blobs)
1. Archive Server deletes the index information of the document from the
archive database. The document cannot be retrieved any longer.
2. The delete request is not propagated to the storage system and the content
remains in the storage. Only logically empty volumes can be removed in a
separate step.
Note on IXW pools
Volumes of IXW pools are regarded as container files. Although the documents
are written as single files to the medium, they cannot be deleted individually,
neither from finalized volumes (which are ISO volumes) nor from non-
finalized volumes using the IXW file system information.
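The single-file deletion sequence described above can be summarized as pseudocode. A minimal sketch; all names (database, storage, the exception type) are hypothetical and only mirror the steps in the text:

    class StorageDeletionError(Exception):
        """Hypothetical: raised when the storage system cannot delete physically."""

    def delete_single_file_document(doc_location: str, database, storage) -> None:
        # 1. Delete the index information from the archive database (logical delete);
        #    the document can no longer be retrieved.
        database.delete_index(doc_location)
        try:
            # 2. Propagate the delete request to the storage system (physical delete).
            storage.delete(doc_location)
        except StorageDeletionError:
            # 3. If physical deletion is not possible for technical reasons, the storage
            #    location is recorded; a notification can be configured.
            with open("TO_BE_DELETED.log", "a") as log:
                log.write(doc_location + "\n")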
Delete empty partitions: If documents with retention periods are stored in container files, the container
volume gets the retention period of the document with the longest retention (see the
sketch after this description). The retention period of the volume is propagated to
the storage subsystem if possible. The volume, and the content of all its documents,
can be deleted only if all documents are deleted from the archive database. The
volume is purged by the Delete_Empty_Volumes job. It checks for logically empty
volumes meeting the conditions defined in Configuration (see “Searching
configuration variables” on page 214):
Delete volumes which have not been modified since days variable
[2] Deletion of components works differently: If the storage system cannot delete a component physically, the component
remains; it is not deleted logically.
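A container volume takes the retention period of the document with the longest retention, as described above. A minimal sketch:

    from datetime import date

    def volume_retention(document_retentions: list) -> date:
        # The container volume gets the retention of the longest-retained document.
        return max(document_retentions)

    # volume_retention([date(2026, 1, 1), date(2031, 6, 30)]) -> date(2031, 6, 30)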
Important
To ensure correct deletion, you must synchronize the clocks of the archive
server and the storage subsystem, including the devices for replication.
Storage mode | Pool type | Delete from archive DB | Delete content physically | Destroy content
Single file storage | HDSK | x | x | x (destroy unrecoverable)
Single file storage | FS and VI | x | x | —
Container file storage | ISO, IXW on optical media | x | Delete volume when the last document is deleted: Delete_Empty_Volumes job | x (destroy media)
Notes:
• Not all storage systems release the space of the deleted volumes (see
documentation for your storage system).
• Blobs are handled like container file archiving.
Proceed as follows:
1. Select Original Archives in the Archives object in the console tree.
2. Click List Empty Volumes in the action pane. A window to start the utility
opens.
3. Enter settings.
Not modified since “xx” days
Number of days since the last modification. The parameter prevents the
volume or image from being deleted too soon after the last document is deleted.
More than “xx” percent full
Only relevant for non-finalized IXW volumes. The parameter ensures that
the volume is filled with data to the given percentage (but logically, it is
empty).
4. Click Run and check the resulting list.
5. To delete volumes, start the Delete_Empty_Volumes job manually.
Before you start the job, check the settings which specify the volumes that
should be deleted; a sketch of the resulting check follows after this procedure.
They are configured in Configuration (see “Searching configuration variables”
on page 214):
Delete volumes which have not been modified since days variable
(internal name: ADMS_DEL_VOL_NOT_MODIFIED_SINCE_DAYS)
Delete volumes which are more than percent full variable
(internal name: ADMS_DEL_VOL_AT_LEAST_FULL)
These settings prevent new, empty volumes from being deleted.
Select Jobs in the System object in the console tree.
6. Select the Delete_Empty_Volumes job and click Start in the action pane.
7. If you work with optical media, proceed as described in step 2 in “Deleting
empty volumes automatically” on page 225.
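The check performed by the Delete_Empty_Volumes job combines the two configured conditions with the requirement that the volume is logically empty. A minimal sketch; the volume attributes are hypothetical:

    def qualifies_for_deletion(volume, not_modified_since_days: int,
                               at_least_percent_full: int) -> bool:
        # Internal names of the two settings:
        # ADMS_DEL_VOL_NOT_MODIFIED_SINCE_DAYS and ADMS_DEL_VOL_AT_LEAST_FULL
        return (
            volume.logically_empty  # all documents deleted from the archive database
            and volume.days_since_last_modification >= not_modified_since_days
            and volume.percent_full >= at_least_percent_full  # non-finalized IXW volumes
        )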
Proceed as follows:
1. Select Jobs in the System object in the console tree.
Schedule and enable the Delete_Empty_Volumes job, see also “Creating and
modifying jobs” on page 102 and “Enabling and disabling jobs” on page 101.
2. If you work with optical media:
a. Select Devices in the Infrastructure object in the console tree. In the Servers
tab, open the Devices directory and check the jukeboxes for volumes with
the name XXXX. These are the deleted volumes.
Important
On double-sided media, check that both volumes are deleted.
b. Select the designated jukebox in the top area of the console tree. Check the
volume list in the bottom area of the result pane for volumes with the name
XXXX.
c. Select the XXXX volume and click Eject Volume in the action pane.
d. Destroy the medium physically.
Important
• Each side of a double-sided optical medium (WORM, UDO or DVD)
constitutes a volume. Export both volumes before you remove the
medium from the jukebox.
• Do not use the Export utility for volumes belonging to archives that are
configured for single instance archiving (SIA). A SIA reference to a
document may be created long after the document itself has been stored;
the reference is stored on a newer medium than the document. SIA
documents can be exported only when all references are outdated but the
Export utility does not analyze references to the documents.
• Volumes containing at least one document with non-expired retention
are not exported.
See also:
• “Utilities” on page 255
• “Checking utilities protocols” on page 256
Proceed as follows:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the Import ISO Volume utility in the result pane and click Run in the
action pane.
3. Enter settings:
Volume name
Name of the volume(s) to be imported.
STORM server
Name of the STORM server by which the imported volume is managed.
Backup
The volume is imported as a backup volume and entered in the list of
volumes as a backup type. Not available for ISO volumes.
Arguments
Additional arguments. Not required for normal import, only for special tasks
like moving documents to another logical archive. Contact Open Text
Customer Support.
4. Click Run.
The import process may take some time. A message box shows the progress of
the import.
5. Select Original Archives in the Archives object in the console tree.
6. Select the designated archive and the pool.
7. Click Attach Volume in the action pane.
8. Select the volume and define the priority.
9. Click Finish to attach the imported volume to the pool.
See also:
• “Utilities” on page 255
• “Checking utilities protocols” on page 256
Proceed as follows:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the Import IXW Or Finalized Volume(s) utility in the result pane and
click Run in the action pane.
3. Enter settings:
Volume name(s)
Name of the volume(s) to be imported.
STORM server
Name of the STORM server by which the imported volume is managed.
Import original volumes
The volumes are imported as original volumes.
Import backup partitions (for use in replicate archives only!)
The volumes are imported as backup volumes and entered in the list of
volumes as backup type.
Set read-only flag after import
The volume is imported as a write-protected volume.
Arguments
Additional Arguments. Not required for normal import, only for special
tasks like moving documents to another logical archive. Contact Open Text
Customer Support.
4. Click Run.
The import process may take some time. A message box shows the progress of
the import.
5. Select Original Archives in the Archives object in the console tree.
6. Select the designated archive and the pool.
7. Click Attach Volume in the action pane.
8. Select the volume and define the priority.
9. Click Finish to attach the imported volume to the pool.
See also:
• “Utilities” on page 255
• “Checking utilities protocols” on page 256
Proceed as follows:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the Import HD Volume utility in the result pane and click Run in the
action pane.
3. Enter settings:
Volume name
Name of the hard disk volume to be imported.
Base directory
Mount path of the volume.
Backup
The volume is imported as a backup volume and entered in the list of
volumes as a backup type.
Read-only
The volume is imported as a write-protected volume.
Arguments
Additional Arguments. Not required for normal import, only for special
tasks like moving documents to another logical archive. Contact Open Text
Customer Support.
4. Click Run.
The import process may take some time. A message box shows the progress of
the import.
5. Select Original Archives in the Archives object in the console tree.
6. Select the designated archive and the FS or HDSK pool.
7. Click Attach Volume in the action pane.
8. Select the volume and define the priority.
9. Click Finish to attach the imported volume to the pool.
See also:
• “Utilities” on page 255
• “Checking utilities protocols” on page 256
Proceed as follows:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the Import GS Volume utility in the result pane and click Run in the
action pane.
3. Enter settings:
Volume name
Name of the hard disk volume to be imported.
Base directory
Mount path of the volume.
Read-only
The volume is imported as a write-protected volume.
Arguments
Additional arguments. Not required for normal import, only for special tasks
like moving documents to another logical archive. Contact Open Text
Customer Support.
4. Click Run.
The import process may take some time. A message box shows the progress of
the import.
5. Select Original Archives in the Archives object in the console tree.
6. Select the designated archive and the VI pool.
7. Click Attach Volume in the action pane.
8. Select the volume and define the priority.
9. Click Finish to attach the imported volume to the VI pool.
See also:
• “Utilities” on page 255
Proceed as follows:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the Check Database Against Volume utility.
3. Click Run in the action pane.
4. Type the volume name and specify how inconsistencies are to be handled.
Volume
Name of the volume that is to be checked.
copy document/component from other partition
The utility attempts to find the missing component on another volume. If the
component is found, it is copied to the checked volume. If not, the
component entry is deleted from the database, i.e. the component is
exported.
export component
The database entry for the missing component on the checked volume is
deleted.
Repair, if needed
Check this box if you really want to repair the inconsistencies.
If the option is deactivated, the test is performed and the result is displayed.
Nothing is copied and no changes are made to the database.
Important
Use this repair option only if you are sure that you do not need the
missing documents any longer! You may lose references to
document components that are still stored somewhere in the archive.
If in doubt, contact Open Text Customer Support.
5. Click Run.
A protocol window shows the progress and the result of the check.
See also:
• “Utilities” on page 255
• “Checking utilities protocols” on page 256
Proceed as follows:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the Check Volume Against Database utility.
3. Click Run in the action pane.
4. Type the volume name and specify how documents missing in the database are
to be handled.
Volume
Name of the volume that is to be checked.
Import documents if they are not in the database
Missing document or component entries are imported into the database.
5. Click Run.
A protocol window shows the progress and the result of the check.
See also:
• “Utilities” on page 255
• “Checking utilities protocols” on page 256
Proceed as follows:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the Check Document utility.
3. Click Run in the action pane.
4. Enter the document ID, the type and select whether the document should be
repaired.
DocID
Type the document ID according to the Type setting.
You can determine the string form of the document ID by searching for the
document in the application (e.g. on document type and object type) and
displaying the document information in Windows Viewer or in Java Viewer.
Type
Select the type of document ID. The ID can be entered in numerical (Number)
or string (String) form.
Repair document, if needed
Check this box if you want to repair defective documents. The utility
attempts to copy the document from another volume. If this option is
deactivated, the utility simply performs the test and displays the result.
Important
Use this repair option only if you are sure that you do not need the
missing documents any longer! You may lose references to
document components that are still stored somewhere in the archive.
If in doubt, contact Open Text Customer Support.
5. Click Run.
A protocol window shows the progress and the result of the check.
See also:
• “Utilities” on page 255
• “Checking utilities protocols” on page 256
Proceed as follows:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the Count Documents/Components utility.
3. Click Run in the action pane.
4. Enter the name of the volume.
5. Click Run.
A protocol window shows the progress and the result of the counting.
See also:
• “Utilities” on page 255
• “Checking utilities protocols” on page 256
Run this check if you suspect any problem with a storage medium. The medium must be online and is only tested; no repair option is available.
Proceed as follows:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the Check Volume utility.
3. Click Run in the action pane.
4. Enter the name of the volume.
5. Click Run.
A protocol window shows the progress and the result of the check.
See also:
• “Utilities” on page 255
• “Checking utilities protocols” on page 256
Proceed as follows:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the Compare Backup WORMs utility.
3. Click Run in the action pane.
4. Enter the Backup volume to be compared. You can specify multiple volumes
separated by spaces. You can also use the * character as a wildcard.
5. Click Run.
A protocol window shows the progress and the result of the comparison.
See also:
• “Utilities” on page 255
• “Checking utilities protocols” on page 256
Number of Partitions
1
Number of Backups
1
Backup Jukebox
Must be different from Original Jukebox
Backup
On for Local_Backup job
Proceed as follows:
1. Select Devices in the Infrastructure object in the console tree.
2. Select the ISO jukebox in the top area of the result pane.
3. Check whether new ISO media have been added to the list in the bottom area of
the result pane. You can click the column title Name to sort by names. The ISO
volumes in each pool are numbered sequentially.
4. Select the new ISO volume and click Eject Volume in the action pane.
5. Label the ISO medium.
Do not use solvent-based pens or stickers. Never use a ballpoint pen or any
other sharp object to label your discs. The safest area for a label is within the
center stacking ring. If you use adhesive labels, make sure that they are attached
accurately and smoothly.
6. Remove and label all the new ISO media in this way.
7. Re-insert one of each set of identically named ISO media. To do this, select the
ISO jukebox in the top area of the result pane and click Insert Volume in the
action pane.
8. Remove all defective ISO media with the name --bad--. Label these as
defective. They must not be re-used.
9. Store the backup ISO media in a safe place.
Note: Perform these tasks also for the jukeboxes of the remote standby server.
Proceed as follows:
1. Select Devices in the Infrastructure object in the console tree.
2. Select the jukebox from which you want to remove a volume in the top area of
the result pane.
3. Select the volume in the bottom area of the result pane and click Eject Volume
in the action pane.
4. Remove the backup volume in the same way.
• The original and backup optical media must possess identical capacities and
sector sizes.
• Regarding optical media, backup media must have the same name as the original. Make sure that the identification of backups is clear on volume labels.
Important
You can also use a Remote Standby Server for backing up data. For details
refer to “Configuring remote standby scenarios” on page 183.
Notes:
• The Local_Backup job considers all pools for which the Backup option is
set. The backup_pool job considers only the pool for which it is created.
You can schedule additional backups of a pool by configuring both jobs, or
configure the pool backup separately.
• If problems occur, have a look in the protocol of the relevant job (see
“Checking the execution of jobs” on page 103).
Proceed as follows:
1. Select Devices in the Infrastructure object in the console tree.
2. Select the jukebox where the damaged volume is located in the top area of the
result pane.
3. Select the damaged volume in the bottom area of the result pane and click Eject
Volume in the action pane.
4. Insert the backup copy in the jukebox and click Insert Volume in the action
pane. It is now used as the original ISO volume without any further
configuration.
5. Select Original Archives in the Archives object in the console tree.
6. Select the original archive in which the volume is used.
7. Select the pool in the top area and the volume in the bottom area of the result
pane.
Automatic backup
Normally the backup of IXW volumes is done asynchronously by the Local_Backup
job.
Proceed as follows:
1. Select Original Archives in the Archives object in the console tree.
2. Select the designated archive in the console tree.
3. Select the designated pool in the top area of the result pane and click Properties
(see “Write incremental (IXW) pool settings” on page 91).
4. Check the Backup option.
5. Set the value for Number of Backups to n>0 and select the required Backup
Jukebox.
Semi-automatic backup
With this method, you initialize the original and backup volumes manually in the corresponding jukebox devices. The backup volume must have the same name as the original one. To initialize the volume, proceed as described in “Manual initialization of original volumes” on page 63. The configuration procedure is the same as for automatic backup, except for steps 5 and 6: there is no Auto Initialization, no Number of Backups, and no Backup Jukebox selection. The backup job finds the backup volumes by their names.
A protocol window shows the progress and the result of the backup. To check
the protocol later on, see “Checking utilities protocols” on page 256.
The volume list now contains a volume of the backup type and the same name
as the original volume.
12. Check the columns Unsaved (MB) and Last Backup/Replication:
The Unsaved (MB) column should now be blank, indicating that there is no
more data on the original volume that has not been backed up. The Last
Backup/Replication column shows the date and time of the last backup. The
Host column indicates the server where the backup resides.
13. For double-sided media, back up the second side of the medium in the same way.
Proceed as follows:
1. Select Devices in the Infrastructure object in the console tree.
2. Select the jukebox where the damaged volume is located in the top area of the
result pane.
3. Select the damaged volume in the bottom area of the result pane and click Eject
Volume in the action pane. Label it clearly as defective.
4. Select the backup volume of the damaged volume in the bottom area of the result pane.
5. Click Restore Volume in the action pane. This makes the backup volume
available as original. If a volume has already been written to the second side of
the defective IXW medium, restore it in exactly the same way.
6. Create a new backup volume (see “Manual backup of one volume” on
page 244).
Note: If an IXW backup volume is damaged, remove the medium with Eject
and create a new backup volume (see “Manual backup of one volume” on
page 244).
There are several parts that have to be protected against data loss:
Volumes
All hard disk volumes that may hold the only instance of a document must be protected against data loss by RAID. The volumes that have to be protected are listed in the Installation overview chapter of the installation guides for Archive Server.
Open Text Document Pipelines
The Open Text Document Pipeline on the Enterprise Scan has to be protected
against data loss.
Database
The database with the configuration for logical archives, pools, jobs and relations
to other archive servers and leading applications has to be protected against data
loss. The process depends on the type of database you are using (see “Backup of
the database” on page 248).
Optical media
Optical storage media have to be protected against data loss. The process differs
if you use ISO or IXW media (see “Backup and recovery of optical media” on
page 240).
Storage Manager configuration
The IXW file system information and the configuration of the Storage Manager
must be saved, see “Backup and restoring of the Storage Manager configuration”
on page 250.
Data in storage systems
Data that is archived on storage systems like HSM, NAS, or CAS also needs a backup, either by means of the storage system or with archive server tools, see “Backup for storage systems” on page 237.
Cache Server
If “write back” mode is enabled, the cache server locally stores newly created
documents without saving them immediately to the destination. It is
recommended to perform regular backups of the cache server data, see “Backup and recovery of a cache server” on page 250.
Important
If you have installed Enterprise Process Services and/or Transactional
Content Processing, database backups are required for all databases of the
system: Archive Server, the Context Server and the User Management
Server. Note that the storage media contain no data of the Context Server database or the User Management Server database; that is, you cannot restore these databases by importing from media. The database backup procedures are very similar.
Important
During the configuration phase of installation, you can either select default
values for the database configuration or configure all relevant values. To
make sure that this user guide remains easy to follow, the default values are
used below. If you configured the database with non-default values, replace
these defaults with your values.
Caution
If “write back” mode is enabled, the cache server locally stores newly created documents without saving them immediately to the destination. This means that “highly critical” data is held on the local disk of the cache server. For security reasons, Open Text strongly recommends storing this data on a RAID system. To enable regular backups of the cache server data, include the relevant directories in your backup.
With the cache server installation comes a small utility (cscommand), which allows you to activate or deactivate maintenance mode. The commands to activate and deactivate maintenance mode can be called from any script or batch file. Usually the commands are added to the script that controls your backup. You can find cscommand in the contentservice subdirectory of the <Web configuration directory> (“filestore”).
Proceed as follows:
1. Run Copy_Back jobs (recommended).
2. Activate maintenance mode. Use:
cscommand -c setOffline -u <username> -p <password>
3. Start your backup. Be sure that all relevant directories are included.
4. Deactivate maintenance mode. Use:
cscommand -c setOnline -u <username> -p <password>
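A minimal sketch of such a backup script, assuming hypothetical credentials and a site-specific backup command (both are placeholders, not part of the product):

# Put the cache server into maintenance mode before the backup
cscommand -c setOffline -u admin -p secret
# Site-specific backup of all relevant cache server directories
<your backup command>
# Return the cache server to normal operation
cscommand -c setOnline -u admin -p secret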
Directories to be backed up
Note: The directories used by Archive Cache Server are configured during the
installation.
Cache volumes
One or more cache volumes to be used for write through caching. Not highly critical, but useful for reducing the time to rebuild cached data.
Write back volume
One single cache volume to be used for write back caching. This volume contains the following subdirectories:
dat
Components are stored here.
idx
Per document, additional information is stored here, which contains all necessary information to reconstruct the data in case of a crash.
log
Special protocol files (one per day) are stored here, containing relevant information on when a document is transferred to and committed by the Document Service.
Path to store database files
The absolute path to the volume where the cache server stores its metadata for the cached documents. Necessary for recovery.
Proceed as follows:
1. Activate maintenance mode. Use
cscommand -c setOffline -u <username> -p <password>
2. If the write back volume is still available, rename the root directory of the write
back volume (see step 5, <location of write back data>).
3. Copy your backup of the data to the correct location to replace the corrupt one.
If you have also a partial loss of data volumes, copy the lost data from your
backup to the correct location.
4. Run the consistency check. Use
cscommand -c checkVolume -u <username> -p <password>
Important
Each successfully recovered document is listed on the command line and removed from <location of write back data>. This means that the recovery operation can only be performed once.
6. If you do not get any error messages, the renamed directory (<location of
write back data>) can be deleted. Any data left in this subtree is no longer
needed for operation.
Important
If you get error messages, do not delete any data. If you cannot fix the
problem, contact Open Text Customer Support.
The following utilities are available; each entry links to the section describing it:
Check Database Against Volume: “Checking database against volume” on page 232
Check Document: “Checking a document” on page 234
Check Volume: “Checking a volume” on page 235
Check Volume Against Database: “Checking volume against database” on page 233
Compare Backup WORMs: “Comparing backup and original IXW volume” on page 236
Count Documents/Components: “Counting documents and components in a volume” on page 235
Export Volume(s): “Exporting volumes” on page 226
Import GS Volume: “Importing GS volumes for Single File (VI) pool” on page 231
Import HD Volume: “Importing hard disk volumes” on page 230
Import ISO Volume(s): “Importing ISO volumes” on page 228
Import IXW Or Finalized Volume(s): “Importing finalized and non-finalized IXW volumes” on page 229
View Installed Archive Server Patches: “Viewing installed archive server patches” on page 330
VolMig Cancel Migration Job: “Canceling a migration job” on page 286
VolMig Continue Migration Job: “Continuing a migration job” on page 285
VolMig Fast Migration Of ISO Volume: “Creating a local fast migration job for ISO volumes” on page 276
VolMig Fast Migration Of remote ISO Volume: “Creating a remote fast migration job for ISO volumes” on page 277
VolMig Migrate Components On Volume: “Creating a local migration job” on page 271
VolMig Migrate Remote Volumes: “Creating a remote migration job” on page 274
VolMig Pause Migration Job: “Pausing a migration job” on page 285
VolMig Renew Migration Job: “Renewing a migration job” on page 286
VolMig Status: “Monitoring the migration progress” on page 281
Proceed as follows:
1. Select Utilities in the System object in the console tree.
2. Select the Protocol tab in the top area of the result pane.
3. Select the protocol you want to check.
The messages created during the execution of the utility are listed in the bottom
area of the result pane.
To clear protocols
Proceed as follows:
1. Select Utilities in the System object in the console tree.
2. Select the Protocol tab in the top area of the result pane.
3. Click Clear Protocol in the action pane.
All protocol entries are deleted.
To reread scripts
Utilities and jobs are read by Archive Server during the startup of the server. If utilities or jobs are added or modified, they can be reread. This avoids a restart of Archive Server.
Proceed as follows:
1. Select Utilities in the System object in the console tree.
2. Select the Protocol tab in the top area of the result pane.
• Compression, encryption
Compression and/or encryption of documents before they are written to new
media.
• Retention
Setting of a retention period for documents during the migration process.
• Automatic Verification
Verification of all migrated documents. A verification strategy can be defined for each volume, specifying the verification procedure. Timestamps or different checksums can be selected, as well as a binary comparison.
21.2 Restrictions
The following restrictions are valid for the volume migration features:
• Remote single-file
Remote migration is only possible for volumes that are handled by STORM and
that can be mounted via NFS. Single-File volumes like HSM or HD volumes
cannot be migrated from a remote archive server.
• DBMS provider
Remote migration is only possible if the remote archive server uses the same
DBMS provider as the local archive server. For a cross-provider migration setup,
contact Open Text Services.
• Fast migration of ISO images
It is not possible to filter components. Everything is copied regardless of whether it is very new, very old, or has been logically deleted. No changes are possible on the documents, i.e. documents cannot be compressed, decompressed or encrypted. Also, retention periods cannot be applied. This holds for local and remote Fast Migrations.
Caution
Consider that replication and backup settings are not transferred to the
target archive during migration. Therefore, the configuration for backup and
replicated archives must be performed for the migrated archive again. See
“Configuring remote standby scenarios” on page 183 and “Creating and
modifying pools” on page 86.
Preconditions
• The hostname of the “old” server is supposed to be oldarchive. The volumes to
be migrated are located on oldarchive. The volumes of the oldarchive are
listed in Devices in the Infrastructure object of the console tree. This server is
also called “remote server”.
• The hostname of the new archive server (destination of migration) is supposed to be newarchive. The target devices for remote migration are located on newarchive. This server is also called “local server”.
Proceed as follows:
1. Normally, newarchive cannot access the volumes of oldarchive. Thus, you
have to make sure that the local server (newarchive) is configured in the
STORM's hosts list on the remote server (oldarchive). This will allow access to
newarchive.
Modify the configuration file: <OT config><AS>/storm/server.cfg
Add newarchive to the hosts { } section.
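After the change, the section might look like the following sketch (the exact notation of entries in STORM's server.cfg is an assumption here; existing entries remain unchanged):

hosts {
    newarchive
}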
2. Restart the jbd on oldarchive after you have made the changes:
> spawncmd stop jbd
> spawncmd start jbd
3. For Oracle only: On the local server, extend the $TNS_ADMIN/tnsnames.ora file
to contain a section for the remote computer.
4. The actual read access to the media is done via NFSSERVERs. To add access to oldarchive media, set the respective variable: in Configuration, search for the NFS Server n variable (internal name: NFSSERVERN; see “Searching configuration variables” on page 214; on the local server newarchive). Add an entry for each NFSSERVER on the remote computer (at least for those that you intend to read from). This will create access to the media on oldarchive.
6. For the newarchive, select Configuration > Archive Server in the Runtime and
Core Services object in the console tree.
7. Search for the variable in Configuration (see “Searching configuration
variables” on page 214). Add the List of mappings from remote NFSSERVER
The entry local is fixed syntax; it is not the name of the local server!
Proceed as follows:
1. For Oracle only: On the local server, extend $TNS_ADMIN/tnsnames.ora to
contain a section for the remote computer.
2. On the remote server (old archive), modify the DS configuration (<OT config><AS>/DS.Setup).
If the version is older than 9.7.0, you have to change the registry entry on
Windows: HKEY_LOCAL_MACHINE\SOFTWARE\IXOS\IXOS_ARCHIVE\DS.
Add the variable
BACKUPSERVER1 = BKCD,<newarchive>,0
<newarchive> is the hostname of the target archive server. Do not use blanks and
do not type the angle brackets in the value!
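For example, with a hypothetical target host named newarchive, the entry reads:

BACKUPSERVER1 = BKCD,newarchive,0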
3. Disable backup for the original pool to prevent the server from creating additional (unwanted) backups in the original pool.
4. Restart the Backup Server:
> spawncmd restart bksrvr
Character Description
* Wildcard: 0 to n arbitrary characters.
For example, vol5* matches all volumes whose name begins with vol5, e.g. vol5a, vol5c78, vol52e4r.
Target archive
Enter the target archive name.
Target pool
Enter the target pool name.
Migrate only components that were archived: On date or after
You can restrict the migration operation to components that were archived after
or on a given date. Specify the date here. The specified day is included.
Migrate only components that were archived: Before date
You can restrict the migration operation to components that were archived
before a given date. Specify the date here. The specified day is excluded.
Set retention in days
Enter the retention period in days. With this entry, you can change the retention
period that was set during archiving. The new retention period is added to the
archiving date of the document. The following settings are possible:
• >0 (days)
• 0 (none)
• -1 (infinite)
• -6 (archive default)
• -8 (keep old value)
• -9 (event)
Note: The retention date of migrated documents can only be kept or extended.
The following table provides allowed settings:
Verification mode
Select the verification mode that should be applied for volume migration. The
following settings are possible:
• None
• Timestamp
• Checksum
• Binary Compare
• Timestamp or Checksum
• Timestamp or Binary Compare
• Checksum or Binary Compare
• Timestamp or Checksum or Binary Compare
Notes:
• Many documents (including all BLOB documents) do not have a checksum or a timestamp. When migrating a volume that contains such documents or BLOBs, it is strongly recommended to select a mode that provides “binary compare” as a last alternative.
• If a migration job cannot be finished because the source volume contains
documents that cannot be verified using the specified verification methods,
it is possible to change the verification mode. See “Modifying attributes of a
migration job” on page 290 (-v parameter).
Additional arguments
-e
Export source volumes after successful migration.
-k
Keep exported volume (export only the document entries, allow dsPurgeVol
to destroy this medium).
-i
Migrate only latest version, ignore older versions.
-A <archive>
Migrate components only from a certain archive.
Character Description
[] Specifies a set of volume names:
• “[ ]” can be used only once
• “,” can be used to separate numbers
• “-” can be used to specify a range
e.g. [001,005-099]
Verification mode
Select the verification mode that should be applied for volume migration. The
following settings are possible:
• None
• Timestamp
• Checksum
• Binary Compare
• Timestamp or Checksum
• Timestamp or Binary Compare
• Checksum or Binary Compare
• Timestamp or Checksum or Binary Compare
Notes:
• Many documents (including all BLOB documents) do not have a checksum or a timestamp. When migrating a volume that contains such documents or BLOBs, it is strongly recommended to select a mode that provides “binary compare” as a last alternative.
• If a migration job cannot be finished because the source volume contains
documents that cannot be verified using the specified verification methods,
it is possible to change the verification mode. See “Modifying attributes of a
migration job” on page 290 (-v parameter).
Additional arguments
-i
Migrates only latest version, ignores older versions.
-A <archive>
Migrates components only from a certain archive.
4. Enter appropriate settings in all fields (see “Settings for remote fast migration” on page 278). Click Run.
Verification mode
Select the verification mode which should be applied for volume migration. The
following settings are possible:
• None
• Timestamp
• Checksum
• Binary Compare
• Timestamp or Checksum
• Timestamp or Binary Compare
• Checksum or Binary Compare
• Timestamp or Checksum or Binary Compare
Notes:
• Many documents (including all BLOB documents) do not have a checksum or a timestamp. When migrating a volume that contains such documents or BLOBs, it is strongly recommended to select a mode that provides “binary compare” as a last alternative.
• If a migration job cannot be finished because the source volume contains
documents that cannot be verified using the specified verification methods,
it is possible to change the verification mode. See “Modifying attributes of a
migration job” on page 290 (-v parameter).
Additional arguments
-d (dumb mode)
Imports document/component entries into the local DB by dsTools instead of reading directly from the remote DB. The dumb mode disables automatic verification. Archive and retention settings cannot be changed.
-A <archive>
Migrates components only from a certain archive. Does not work with dumb mode (-d).
Proceed as follows:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Determine the ID of the migration job you want to pause via the VolMig Status
utility, see “Monitoring the migration progress” on page 281.
3. Select the VolMig Pause Migration Job utility.
4. Click Run in the action pane.
5. Enter the ID of the migration job that you want to pause in the Migration Job
ID(s) field.
6. Click Run.
The migration job is set to the Paus status.
Proceed as follows:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Determine the ID of the migration job you want to continue via the VolMig
Status utility, see “Monitoring the migration progress” on page 281.
3. Select the VolMig Continue Migration Job utility.
Proceed as follows:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Determine the ID of the migration job you want to cancel via the VolMig Status
utility. See “Monitoring the migration progress” on page 281.
3. Select the VolMig Cancel Migration job utility.
4. Click Run in the action pane.
5. Enter the ID of the migration job that you want to cancel in the Migration Job
ID(s) field.
6. Click Run.
A protocol window shows the progress and the result. The migration job is set
to the Canc status. All copy jobs for this migration job are deleted.
Proceed as follows:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Determine the ID of the migration job you want to renew via the VolMig Status
utility. See “Monitoring the migration progress” on page 281.
3. Select the VolMig Renew Migration job utility.
Proceed as follows:
1. Open a command shell.
2. Enter > vmclient <command> <attribute> [<attribute>...]
Proceed as follows:
1. Open a command shell.
2. Enter > vmclient -h to get help.
jobID
The ID of the migration job to be deleted.
jobID
The ID of the migration job to be finished.
jobID
The ID of the migration job to be modified.
attribute
The attributes which can be modified.
Note: Attributes with one hyphen (-) will be added/updated.
Attributes with two hyphens (--) will be removed.
-e (export)
Export source volumes after successful migration.
-k (keep)
Do not set the exported flag for the volume (so dsPurgeVol can destroy it).
-i (ignore old versions)
Migrate only the latest version of each component, ignore older versions.
-r <value> (retention)
Set a new value for the retention of the migrated documents.
Not supported in Fast Migration scenarios.
-v <value> (verification level)
Define how components should be verified by VolMig.
become the new default pool. To have the documents that are archived during the
migration written into the target pool rather than the source pool, you can use this
command to update the Write jobs.
> vmclient updateDsJob <old poolname> <new poolname> -d|-v
old poolname
Is constructed by concatenating the source archive name, an underscore
character and the source pool name, e.g. H4_worm.
new poolname
Is constructed by concatenating the target archive name, an underscore character
and the target pool name, e.g. H4_iso.
-d
Update pools in ds_job only.
-v
Update pools in both, ds_job and vmig_jobs.
Note: This works only for local migration scenarios. Write jobs in a remote
migration environment remain on the remote server and cannot be moved to
the local machine.
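For example, using the pool names from above to update the pool in both ds_job and vmig_jobs:

> vmclient updateDsJob H4_worm H4_iso -v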
jobID
The ID of the migration job which components should be listed.
max results
How many components should be listed at most.
archive
The archive name.
pool 1
Name of the first pool.
pool 2
Name of the second pool.
archive
The archive name.
pool
The pool name.
sequence number
New number of the sequence.
sequence letter
New letter (for ISO pools only).
volume name
Name of the primary volume.
output file
File to write the output to instead of stdout.
Proceed as follows:
1. Select Events and Notifications in the System object in the console tree.
2. Select the Event Filters tab. All available event filters are listed in the top area of
the result pane.
3. Click New Event Filter in the action pane. The window to create a new event
filter opens.
4. Enter the conditions for the new event filter. See “Conditions for event filters”
on page 298.
5. Click Finish.
Modifying event filters
To modify an event filter, select it in the top area of the result pane and click Properties in the action pane. Proceed in the same way as when creating a new event filter. The name of the event filter cannot be changed.
Deleting event filters
To delete an event filter, select it in the top area of the result pane and click Delete in the action pane.
See also:
• “Conditions for event filters” on page 298
• “Available event filters” on page 300
• “Creating and modifying notifications” on page 301
• “Checking alerts” on page 305
See also:
• “Creating and modifying event filters” on page 297
• “Available event filters” on page 300
• “Creating and modifying notifications” on page 301
See also:
• “Conditions for event filters” on page 298
• “Creating and modifying notifications” on page 301
• “Checking alerts” on page 305
Proceed as follows:
1. Select Events and Notifications in the System object in the console tree.
2. Select the Notifications tab. All available notifications are listed in the top area
of the result pane.
3. Click New Notification in the action pane. The wizard to create a new
notification opens.
4. Enter the name and the type of the notification and click Next. Enter the
additional settings for the new notification event. See “Notification settings” on
page 302.
5. Click OK. The new notification is created.
6. Select the new notification in the top area of the result pane.
7. Click Add Event Filter in the action pane. A window with available event filters
opens.
8. Select the event filters which should be assigned to the notification and click
OK.
• Select the new notification in the top area of the result pane and click Test in the
action pane.
• Click the Test button in the notification window while creating or modifying notifications.
Modifying notification settings
To modify the notification settings, select the notification in the top area of the result pane and click Edit in the action pane. Proceed in the same way as when creating a new notification. The name of the notification cannot be changed.
Deleting notifications
To delete a notification, select the notification in the top area of the result pane and click Delete in the action pane.
Adding event filters
To add event filters, select the notification in the top area of the result pane. Click Add Event Filter in the action pane. Proceed in the same way as when creating a new notification.
Removing an event filter
To remove an event filter, select it in the bottom area of the result pane and click Remove in the action pane. The notification events are not lost; only the assignment is deleted.
See also:
• “Notification settings” on page 302
• “Using variables in notifications” on page 304
• “Checking alerts” on page 305
Name
The name should be unique and meaningful.
Notification Type
Select the type of notification and enter the specific settings. The following
notification types and settings are possible:
Alert
Alerts are notifications, which can be checked by using Administration
Client. They are displayed in Alerts in the System object in the console tree
(see “Checking alerts” on page 305).
Mail Message
E-mails can be sent to respond immediately to an event, or during standby time. If you want to send the notification via SMS, consider that the length of the SMS text (including Subject and Additional text) is limited by most providers. Enter the following additional settings:
• Sender address: E-mail address of the sender. It appears in the from field
in the inbox of the recipient. The entry is mandatory.
• Mail host: Name of the target mail server. The mail server is connected
via SMTP. The entry is mandatory.
• Recipient address: E-mail address of the recipient. If you want to specify
more than one recipient, separate them by a semicolon. The entry is
mandatory.
• Subject: Subject of the mail; $-variables can be used (see “Using variables in notifications” on page 304). If not specified, the subject is $SEVERITY message from $HOSTNAME/$USERNAME($TIME).
SNMP Trap
Provides an interface to an external monitoring system that supports the
SNMP protocol. Enter the information on the target system.
Active Period
Weekdays and time of the day at which the notification is to be sent.
Text
Free text field with the maximum length of 255 characters. $-variables can be
used (see “Using variables in notifications” on page 304).
See also:
• “Creating and modifying notifications” on page 301
• “Using variables in notifications” on page 304
• “Checking alerts” on page 305
See also:
• “Notification settings” on page 302
• “Checking alerts” on page 305
Proceed as follows:
1. Select Alerts in the System object in the console tree. All notifications of the alert
type are listed in the top area of the result pane.
2. Select the alert to be checked in the top area of the result pane. Alert details are
displayed in the bottom area of the result pane. The yellow icon of the alert
entry turns to grey if read.
Marking messages as read
To mark all messages as read, click Mark All as Read in the action pane. The yellow icons of the alert entries turn to grey.
Button bar
The button bar contains buttons to configure Monitor Web Client. All these
settings apply only to the current browser session. If you want to reuse your
settings, pass them as parameters when you start the program (see “Customizing
Monitor Web Client” on page 311).
Left column: Monitored servers
Here you find a list of the monitored Archive Servers. Click a name. The current
status of this Archive Server is displayed in the other two columns. If you click
the name again, the status is checked at Monitor Server and the display in
Monitor Web Client is updated if needed.
Otherwise, the status of the components is updated after the specified refresh
interval (see “Setting the refresh interval” on page 310). If it is not possible to establish a connection to a Web server, a corresponding icon is displayed in front of the server name.
Tip: If you want to compare the status of different servers, open Monitor
Web Client for each of them and use the task bar to switch between the
different instances.
Middle column: Components
In a hierarchical structure, you see the groups of components that run on the
interrogated host. Below each component group, you see the associated
components. Click a component to display its current status in the right column.
Click the icon to display the status of the component group on the right. For
information on the components and the possible messages, refer to “Component
status display” on page 312.
The icon in front of the component group name represents a summary of the
individual statuses of the components in the group. If you move the mouse
pointer to an icon in front of a component, abbreviated status information is
displayed in a tool tip even if the detailed information is not displayed in the
third column. In this way, you can compare the statuses of two components.
Right column: Detailed information and status
This column contains detailed status information on the selected components or
component groups. If the right column is too narrow to display the information,
move the mouse pointer to the icon to display the status information in a tooltip.
Status line
Provides information on the status of the initiated processes.
Status icons
The icons identify the system status at a glance. To configure the icons, see “Configuring the icon type” on page 311. The possible statuses are:
• Available without restriction.
• Warning, storage space problems are imminent. You can continue working for
the present but the problem must be resolved soon.
• Error, component not available.
Note: To refresh the display of the host status manually, click the name of the host in the left column. In the Internet Explorer, you can also refresh the display with F5 or CTRL+R.
3. Click OK. The selected Archive Server is entered in the list of hosts.
To remove a host
Proceed as follows:
1. In the Monitor Web Client window, click Remove Hosts.
2. Select one or more Archive Servers that you no longer want to monitor.
3. Click OK. The selected Archive Server is removed from the host list.
If you do not pass any parameters with the URL, Monitor Web Client starts with the
default settings: LEDs, refresh interval 120 seconds and no additional hosts.
30.2.1 DP Space
Monitors the storage space for the Document Pipelines that are used for the
temporary storage of documents during the archiving process. A special directory
on the hard disk is reserved for the Document Pipelines. You can determine its
location in Configuration in Administration Client (see “Searching configuration
variables” on page 214).
During archiving, the documents are temporarily copied to this directory and are
then deleted once they have been successfully saved. The directory must be large
enough to accommodate the largest documents, e.g. print lists generated by SAP.
The status can be Ok, Warning, or Error.
In Details you can see the free storage space in MB, the total storage space in MB
and the proportion of free storage space in percent. The values refer to the hard disk
volume in which the DPDIR directory was installed. A warning or error message is
issued if insufficient free storage space is available. Possible causes are:
Error during the processing of documents in the Document Pipeline
Normally, the documents are processed rapidly and deleted immediately. If
problems occur, the documents may remain in the pipeline and storage space
may become scarce. Check the status of the DocTools (DP Tools group in the
Monitor) and the status of the Document Pipelines in Document Pipeline Info.
Document is larger than the available storage space
If no separate volume is reserved for the Document Pipeline, the storage space
may be occupied by other data and processes. In this case, the volume should be
cleaned up to create space for the pipeline. To avoid this problem, reconfigure
the Document Pipeline and locate it in a separate volume. The volume must be
larger than the largest document that is to be archived.
jbd
Displays the status of the Storage Manager. The status is Active, if the server is
running. A status of either Can't call server, Can't connect to server or Not
active indicates that the server is either not reachable or not running. Check the
jbd.log log file for errors. If necessary, solve the problem and start the Storage
Manager again.
inodes
Displays how full the inode files are. Either the status OK or Error is displayed. In Details, you can see the filling level in percent as well as the number of configured and used inodes. If an error is displayed, the storage space for the file system information must be increased.
<jukebox_name>
Provides an overview of the volumes for each attached jukebox. The possible status specifications are Ok, Warning or Error. Warning means that there are no writable volumes or no empty slots in the jukebox. Error is displayed if at least one corrupt medium is found in a jukebox (display -bad- in Devices in Open Text Administration Client).
The following information is displayed in Details:
30.2.4 DS Pools
The Monitor checks the free storage space which is available to the pools (and
therefore the logical archives). The pools and buffers are listed. The availability of
the components depends on two factors. Volumes must be assigned and there must
be sufficient free storage space in the individual volumes.
• The Ok status specifies that volumes are present and sufficient storage space is
available.
• The Error status together with the No volumes present message means that a
volume (WORM or hard disk) needs to be assigned to this buffer or pool.
• The Error status with the No writable partitions message refers to WORM
volumes and means that the available volumes are full or write-protected.
Initialize and assign a new volume and/or remove the write-protection.
• The Full status refers to disk buffers or hard disk pools and means that there is
no free storage space on the volume. In the case of a hard disk pool, create a new
volume and assign it to this pool.
In the case of a disk buffer, check whether the Purge_Buffer job has been
processed successfully and whether the parameters for this job are set correctly.
The status is Ok, Warning or Error. In Details, you can see the free storage space in MB, the total storage space in MB and the proportion of free storage space in percent. The values refer to the hard disk volume in which the log directory was installed.
A warning or error message is issued if insufficient free storage space is available.
Delete all log files that are no longer needed. To avoid problems, delete log files
regularly.
DP Tools
The Monitor checks the availability of the DocTools. The status is Registered if the
DocTool has been started. Various messages may appear under Details for the
status:
Lazy
The DocTool is unoccupied. There are no documents available for processing.
Active
The DocTool is processing documents.
Disabled
The DocTool has been locked. To check this status, start Document Pipeline Info.
Here, all the queues that are associated with a locked DocTool are identified by
the locked symbol. In general, a DocTool is only locked if an error has occurred.
Once the problem has been analyzed and eliminated, restart the DocTool in
Document Pipeline Info.
Not registered
The DocTool has not been started.
DP Queues
Monitors all queues of the Document Pipelines and specifies the number of
documents in each queue. Precisely one DocTool is assigned to each queue. One
DocTool may be assigned to multiple queues. You can find the same queues in
Document Pipeline Info but with different names.
Usually, the documents are processed very quickly by the associated DocTool and
the queues are empty. The Empty status is specified. If there are documents in the
queue, the status is set to Not empty. In Details, you find the number of documents
in the queue. To analyze this situation, check the availability of the DocTool under
DP Tools and use the functions provided in Document Pipeline Info.
DP Error Queues
Monitors the error queues and specifies the number of documents in each queue.
There is an error queue for each ordinary queue. Documents in error queues cannot
be processed because of an error. The processing DocTool is specified for each
queue. You can find the corresponding queues in Document Pipeline Info but with
different names.
The error queues are usually Empty. If a DocTool cannot process a document, the
document is moved to the error queue. The status is set to Not empty. In Details,
you can see the number of unprocessed documents. If the same error occurs for all
the documents in this pipeline, then all the documents are gathered in the error
queue. The documents cannot be processed until the error has been eliminated and
the documents have been transferred for processing again with Restart in Document
Pipeline Info.
...doctods
One or more documents cannot be archived.
• In the DocService component group, check the wc component. If Error is
displayed, Archive Server is not available and must be restarted.
• Check the DS Pools component group. If Warning or Error is displayed for
the logical archive in which the document is to be archived or for the
corresponding disk buffer, there is no storage space available for archiving.
Please note the comments on DS Pools above.
...wfcfbc and ...notify
These DocTools are used to subdivide collective documents into single
documents. It is unusual for errors to occur here.
...cfbx
The response cannot be sent to the SAP system.
• The connection to the SAP system is not established. Check the cfbx.log log file for information on the possible error causes.
• The configuration parameters for setting up the connection are incorrect.
Check the configuration of the SAP system and the archive in the Servers tab
in Open Text Administration Client.
...docrm
The temporary data in the pipeline is not deleted following the correct execution of all the preceding DocTools. Start Document Pipeline Info and remove the documents in the corresponding error queue. You require special access rights to do this.
31.1 Auditing
The auditing feature of Archive Server traces events of two aspects:
• It records the document lifecycle, or history of a document: when the document was created, modified, migrated, deleted, etc. These are the events of the Document Service.
• It records administrative jobs performed with Administration Client.
Important
Administrative changes are only recorded if they are done with
Administration Client. To get complete audit trails, make sure that other
configuration ways cannot be used, for example, editing configuration files
directly. At the very least, such changes must be logged by other means.
The auditing data is collected in separate database tables and can be extracted from
there with the exportAudit command to files, which can be evaluated in different
ways.
exportAudit [-s date] [-e date] [-A|-S] [-a] [-x] [-o ext] [-h] [-c sepchar]
With the following optional options, you can adapt the output to your needs.
Option Description
-a Only relevant for document lifecycle information (-S is set). Extracts data about all document-related jobs in the given timeframe. The generated file name reflects this option with the ALL indicator: STR-<begin date>-<end date>-ALL.<ext>.
-x Deletes data from the database after successful extraction. This option is not supported if -a is set, so only information on deleted documents can be removed from the database after extraction.
-o ext Defines the file format. For example, with -o csv you get a .csv file for evaluation in Excel, independently of the extracted data.
-h Adds a header line with column descriptions to the output file.
-c sepchar Defines the separator character directly (e.g. -c , ) or as ASCII number in 0x<val> syntax (e.g. -c 0x7c). The default separator is the semicolon. Consider changing the separator if it does not fit your Excel settings.
Event Description
EVENT_CREATE_DOC Document created
EVENT_CREATE_COMP Document component created on volid1
EVENT_UPDATE_ATTR Attributes updated
EVENT_TIMESTAMPED Document timestamped on volid1 (dsSign, dsHashTree)
EVENT_TIMESTAMP_VERIFIED Timestamp verified on volid1
EVENT_TIMESTAMP_VERIF_FAILED Timestamp verification failed on volid1
EVENT_COMP_MOVED Document component moved from HDSK volid1 to volid2 (dsCD etc. with -d)
EVENT_COMP_COPIED Document component copied from volid1 to volid2 (dsCD etc. without -d)
EVENT_COMP_PURGED Document component purged from HDSK volid1 (dsHdskRm)
EVENT_COMP_DELETED Component deleted from volid1
EVENT_COMP_DELETE_FAILED Component deletion from volid1 failed
EVENT_COMP_DESTROYED Component destroyed from volid1
EVENT_DOC_DELETED Document deleted
EVENT_DOC_MIGRATED Document migrated
EVENT_DOC_SET_EVENT setDocFlag with retention called
EVENT_DOC_SECURITY Security error when attempting to read doc
The result of an extraction of document-related audit information in Excel may look like the example shown in the graphic.
The options -S -o csv -a -h were set, which results in a filename like this:
STR-2005_07_04_12_00_00-2005_07_19_08_00_00-ALL.csv
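The corresponding call might have looked like the following (the input format of the -s and -e dates is an assumption here, mirroring the timestamps in the generated file name):

exportAudit -S -o csv -a -h -s 2005_07_04_12_00_00 -e 2005_07_19_08_00_00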
31.2 Accounting
Archive Server allows you to collect accounting data for further analysis and billing.
Proceed as follows:
1. Enable the Accounting option and configure accounting in Configuration, see
“Settings for accounting” on page 322.
The Document Service writes the accounting information into accounting files.
2. Evaluate the accounting data, see “Evaluating accounting data” on page 323.
3. Schedule the Organize_Accounting_Data job to remove the old accounting
data (see “Setting the start mode and scheduling of jobs” on page 102).
If you archive the old accounting data, you can also access the archived files. The
Organize_Accounting_Data job writes the DocIDs of the archived accounting files
into the ACC_STORE.CNT file which is located in the accounting directory (defined in
Path to accounting data files).
To restore archived accounting files, you can use the command
dsAccTool -r -f <target directory>
The tool saves the files in the <target directory> where you can use them as usual.
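For example, to restore the archived accounting files to a hypothetical directory:

dsAccTool -r -f /tmp/accounting_restore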
Proceed as follows:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the View Installed Archive Server Patches utility.
3. Click Run in the action pane.
4. In the field View patches for packages enter the package whose patches you
want to list. Leave the field empty to view all packages.
5. Click Run to start the utility.
See also:
• “Utilities” on page 255
• “Checking utilities protocols” on page 256
c. In the result pane, right-click Directory where ISO trees are built (internal
name: CDDIR), select Properties and set the Global Value to the correct
absolute path of the CDDIR directory.
Click OK.
d. Analogously, right-click Directory where ISO images are built (internal
name: CDIMG), select Properties and set the Global Value to the correct
absolute path of the CDIMG directory.
Click OK.
3. Restart the Archive Spawner processes (for details, see “Starting and stopping of
Archive Server” on page 333).
Important
Stop the Spawner before you delete the log files!
On client workstations, other log files are used. For more information, refer to the
Imaging documentation.
UNIX
$ORACLE_HOME/network/log/listener.log (log file)
$ORACLE_HOME/network/trace (trace file)
$ORACLE_HOME/rdbms/log/*.trc/* (trace files)
Starting
Windows Services
To start Archive Server using the Windows Services, proceed as follows:
Command line
To start Archive Server from the command line, enter the following commands in this order:
net start OracleServiceECR (Oracle database) or net start mssqlserver (MS
SQL database)
net start Oracle<ORA_HOME>TNSListener (Oracle database)
net start spawner (archive components)
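For an Oracle-based installation, the start sequence typed at the command prompt is, for example:

net start OracleServiceECR
net start Oracle<ORA_HOME>TNSListener
net start spawner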
Stopping
Windows Services
To stop Archive Server components using the Windows Services, proceed as follows:
1. On the desktop, right-click the My Computer icon and select Manage.
The Computer Management window now opens.
2. Open the Services and Applications directory and click Services.
3. Right-click the following entries in the given order and select Stop:
• Archive Spawner (archive components)
• Oracle<ORA_HOME>TNSListener (Oracle database)
• OracleServiceECR (Oracle database) or MSSQLSERVER (MS SQL data-
base)
Command line
To stop Archive Server components from the command line, enter the following commands in this order:
net stop spawner (archive components)
net stop Oracle<ORA_HOME>TNSListener (Oracle database)
net stop OracleServiceECR (Oracle database) or net stop mssqlserver (MS SQL
database)
Starting
Use the commands listed below to restart Archive Server after the archive system
has been stopped without shutting down the hardware.
Proceed as follows:
1. Log on as root.
2. Start the archive system including the corresponding database instance with:
HP-UX: /sbin/rc3.d/S910spawner start
Stopping
Enter the commands below to terminate Archive Server manually.
Proceed as follows:
1. Log on as root.
2. Terminate the archive system and the database instance with:
HP-UX: /sbin/rc3.d/S910spawner stop
2. Check the status of the process with spawncmd status (see “Analyzing
processes with spawncmd” on page 337).
3. Enter the command:
spawncmd {start|stop} <process>
Description of parameters:
{start|stop}
To start or stop the specified process.
<process>
The process you want to start or stop. The name appears in the first column of
the output generated by spawncmd status.
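For example, to check the process list and then restart the Storage Manager process jbd shown earlier:

> spawncmd status
> spawncmd stop jbd
> spawncmd start jbd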
Important
You cannot simply restart a process if it was stopped, regardless of the
reason. This is especially true for Document Service, since its processes must
be started in a defined sequence. If a Document Service process was
stopped, it is best to stop all the processes and then restart them in the
defined sequence. Inconsistencies may also occur when you start and stop
the monitor program or the Document Pipelines this way.
Proceed as follows:
1. Open the Archive Server object in the console tree.
2. Click Modify Operation Mode in the action pane.
Select the operation mode.
No maintenance mode
No restrictions to access the server.
Documents cannot be deleted, errors are returned
Deletion is prohibited for all archives, no matter what is defined for the
archive access. Errors are returned and a message informs about deletion
requests.
Use full maintenance mode
Clients cannot access Archive Server, and thus can neither display nor archive documents. Only administration and access via the Administration Client is possible.
3. Click OK.
• reread
• start <service>
• status
• stop <service>
• startall
• stopall
You can execute the commands startall, stopall, exit and status in the Open
Text Administration Client, with the corresponding commands in the File >
Spawner menu.
Process status To check the status of the processes, do one of the following:
• In the Open Text Administration Client, on the File menu, select Spawner >
Status.
• Enter spawncmd status in the command line.
A brief description of some processes is listed here:
1. Check in the Monitor Web Client in which component of Archive Server the problem has occurred.
2. Locate the corresponding log file in Explorer. The protocol is written
chronologically and the last messages are at the end of the file.
Note: The system might write several log files for a single component, or
several components are affected by a problem. To make sure you have the
most recent log files, sort them by the date.
Log file analysis
When analyzing log files, consider the following:
• The message class - that is the error type - is shown at the beginning of a log
entry.
• The latest messages are at the end of the file.
Note: In jbd.log, old messages are overwritten if the file size limit is
reached. In this case, check the date and time to find the latest messages.
• Messages with identical time label normally belong to the same incident.
• The final error message denotes which action has failed. The messages before
often show the reason of the failure.
• A system component may fail due to a previous failure of another component.
Check all log files that have been changed at the same or similar time. The time
labels of the messages help you to track the causal relationship.
Important
Higher log levels can generate a large amount of data and even can slow
down the archive system. Reset the log levels to the default values as soon as
you have solved the problem. Delete the log files only after you have
stopped the spawner.
Time setting
In addition to the log levels, you can define the time label in the log file for each component. Normally, the time is given in hours:minutes:seconds. If you select Log using relative time, the time elapsed between one log entry and the next is given in milliseconds instead of the date, in addition to the normal time label. This is used for debugging and fine tuning.
Annotation
The set of all graphical additions assigned to individual pages of an archived
document (e.g. coloured marking). These annotations can be removed again.
They simulate hand-written comments on paper documents. There are two
groups of annotations: simple annotations (lines, arrows, highlighting etc.) and
OLE annotations (documents or parts of documents which can be copied from
other applications via the clipboard).
See also: Notes.
Archive ID
Unique name of the logical archive.
Archive mode
Specifies the different scenarios for the scan client (such as late archiving with
barcode, preindexing).
ArchiveLink
The interface between SAP system and the archive system.
Buffer
Also known as “disk buffer”. It is an area on hard disk where archived
documents are temporarily stored until they are written to the final storage media.
Burn buffer
A special burn buffer is required for ISO pools in addition to a disk buffer. The
burn buffer is required to physically write an ISO image. When the specified
amount of data has accumulated in the disk buffer, the data is prepared and
transferred to the burn buffer in the special format of an ISO image. From the burn buffer, the image is transferred to the storage medium in a single, continuous, uninterruptible process referred to as “burning” an ISO image. The burn buffer is transparent for the administration.
Cache
Memory area which buffers frequently accessed documents.
The archive server stores frequently accessed documents in a hard disk volume
called the Document Service cache. The client stores frequently accessed
documents in the local cache on the hard disk of the client.
Cache Server
Separate machine on which documents are stored temporarily, thus reducing the network traffic in a WAN.
Device
Short term for storage device in the archive server environment. A device is a
physical unit that contains at least storage media, but can also contain additional
software and/or hardware to manage the storage media. Devices are:
• local hard disks
• jukeboxes for optical media
• virtual jukeboxes for storage systems
• storage systems as a whole
Digital Signature
Digital signature means an electronic signature based upon cryptographic
methods of originator authentication, computed by using a set of rules and a set
of parameters such that the identity of the signer and the integrity of the data can
be verified. (21 CFR Part 11)
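As a conceptual illustration only (generic cryptography, not the
implementation used by Archive Server), the following Python sketch uses
the third-party cryptography package to sign a document's checksum and
verify both the signer's identity and the data integrity:

    # Conceptual sketch of a digital signature: sign the document with
    # a private key, verify with the public key. Generic crypto code,
    # not Archive Server's implementation.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    document = b"archived document content"

    private_key = rsa.generate_private_key(public_exponent=65537,
                                           key_size=2048)
    signature = private_key.sign(document,
                                 padding.PKCS1v15(),
                                 hashes.SHA256())

    # Verification proves signer identity and data integrity; it raises
    # InvalidSignature if the document was altered after signing.
    private_key.public_key().verify(signature, document,
                                    padding.PKCS1v15(),
                                    hashes.SHA256())
    print("signature valid")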
Disk buffer
See: Buffer
DocID
See: Document ID (DocID)
DocTools
Programs that perform single, discrete actions on the documents within an
Open Text Document Pipeline.
Document ID (DocID)
Unique string assigned to each document with which the archive system can
identify it and trace its location.
DP
See: Document Pipeline (DP)
DPDIR
The directory in which the documents currently being processed by a
document pipeline are stored.
DS
See: Document Service (DS)
Hot Standby
High-availability archive server setup, comprising two identical archive servers
that are tightly connected to each other and hold the same data. Whenever the
first server fails, the second one takes over immediately, thus enabling (nearly)
uninterrupted archive system operation.
ISO image
An ISO image is a container file that holds documents and their file system
structure according to ISO 9660. It is written in a single pass and fills one
volume.
Job
A job is an administrative task that you schedule in the Open Text
Administration Client to run automatically at regular intervals. It has a unique
name and a start command, which is executed together with any arguments
required by the command.
Known server
A known server is an archive server whose archives and disk buffers are known
to another archive server. Making servers known to each other provides access
to all documents archived on all known servers. Read-write access is provided
to other known servers; read-only access is provided to replicated archives.
When a request is made to view a document that is archived on another server
and that server is known, the queried archive server can display the requested
document.
Log file
Files generated by the different components of Archive Server to report on their
operations, providing diagnostic information.
Log level
Adjustable diagnostic level of detail on which the log files are generated.
Logical archive
Logical area on the archive server in which documents are stored. The archive
server may contain many logical archives. Each logical archive may be
configured to represent a different archiving strategy appropriate to the types of
documents archived exclusively there. An archive can consist of one or more
pools. Each pool is assigned its own exclusive set of volumes which make up the
actual storage capacity of that archive.
Media
Short term for “long-term storage media” in the archive server environment. A
medium is a physical object: optical storage media (CD, DVD, WORM, UDO),
hard disks, and hard disk storage systems with or without a WORM feature.
Optical storage media are single-sided or double-sided. Each side of an optical
medium contains a volume.
MONS
See: Monitor Server (MONS)
Notes
The list of all notes (textual additions) assigned to a document. An individual
item in this list is designated a “note”. A note is a text that is stored together
with the document. This text has the same function as a note clipped to a paper
document.
Pool
A pool is a logical unit, a set of volumes of the same type that are written in the
same way, using the same storage concept. Pools are assigned to logical archives.
RC
See: Read Component (RC)
Remote Standby
Archive server setup scenario including two (or more) associated archive
servers. Archived data is replicated periodically from one server to the others in
order to increase security against data loss. Moreover, the network load caused
by document display actions can be reduced, since replicated data can be
accessed directly on the replication server.
Replication
Refers to the duplication of an archive or buffer residing on an original server
onto a remote standby server. Replication is enabled when you add a known
server to the connected server and indicate that replication is allowed. This
means that the known server is permitted to pull data from the original server
for the purpose of replication.
Scan station
Workstation for high volume scanning on which the Enterprise Scan client is
installed and to which a scanner is connected. Incoming documents are scanned
here and then transferred to Archive Server.
Slot
In physical jukeboxes with optical media, a slot is a socket inside the jukebox
where the media are located. In virtual jukeboxes of storage systems, a slot is
virtually assigned to a volume.
Spawner
Service program which starts and terminates the processes of the archive system.
Storage Manager
Component that controls jukeboxes and manages storage subsystems.
Timestamp Server
A timestamp server signs documents by adding the time and signing the
cryptographic checksum of the document. To ensure the evidential value of
documents, use an external timestamp server such as Timeproof or
AuthentiDate. Open Text Timestamp Server is a software product that
generates timestamps.
Volume
• A volume is a memory area of a storage medium that contains documents.
Depending on the device type, a device can contain many volumes (e.g. real
and virtual jukeboxes) or is treated as one volume (e.g. storage systems
without virtual jukeboxes). Volumes are logically assigned to pools.
• Volume is a technical collective term with different meanings in STORM and
Document Service (DS). A DS volume is a virtual container of volumes with
identical documents (after the complete backup is written). A STORM
volume is a virtual container of all identical copies of a volume. For ISO
volumes, there is no difference between DS and STORM volumes. For
WORM (IXW) volumes, STORM differentiates between original and backup
as different volumes, while DS considers original and backup together as
one volume.
WC
See: Write Component (WC)
Windows Viewer
Component for displaying documents, occasional scanning with TWAIN
scanners, and archiving. The Windows Viewer can attach annotations and
notes to documents.
WORM
WORM means Write Once Read Multiple. An optical WORM disk has two
volumes. A WORM disk supports incremental writing. On storage systems, a
WORM flag is set to prevent changes in documents. UDO media are handled like
optical WORMs.
Write job
Scheduled administrative task which regularly writes the documents stored in a
disk buffer to appropriate storage media.