IBM Tivoli Storage Manager for AIX
Administrator's Guide
Version 5.2
GC32-0768-01
Note!
Before using this information and the product it supports, be sure to read the general information under Appendix C,
Notices, on page 663.
| Changes since the March 2002 edition are marked with a vertical bar ( | ) in the left margin. Ensure that you are
| using the correct edition for the level of the product.
Order publications through your sales representative or the branch office serving your locality.
Your feedback is important in helping to provide the most accurate and high-quality information. If you have
comments about this book or any other Tivoli Storage Manager documentation, please see Contacting Customer
Support on page xv.
Copyright International Business Machines Corporation 1993, 2003. All rights reserved.
US Government Users Restricted Rights: Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Preface . . . . . . . . . . . . . . xiii
Device Class
Library, Drive, and Device Class
Storage Pool and Storage Pool Volume
Data Movers
Path
Server
IBM Tivoli Storage Manager Volumes
The Volume Inventory for an Automated Library
Planning for Server Storage
Selecting a Device Configuration
Devices on a Local Area Network
Devices on a Storage Area Network
LAN-Free Data Movement
Network-Attached Storage
How IBM Tivoli Storage Manager Mounts and Dismounts Removable Media
How IBM Tivoli Storage Manager Uses and Reuses Removable Media
Configuring Devices
Mapping Devices to Device Classes
Mapping Storage Pools to Device Classes and Devices
Modifying Schedules
Deleting Schedules
Displaying Information about Schedules
Managing Node Associations with Schedules
Adding New Nodes to Existing Schedules
Moving Nodes from One Schedule to Another
Displaying Nodes Associated with Schedules
Removing Nodes from Schedules
Managing Event Records
Displaying Information about Scheduled Events
Managing Event Records in the Server Database
Managing the Throughput of Scheduled Operations
Modifying the Default Scheduling Mode
Specifying the Schedule Period for Incremental Backup Operations
Balancing the Scheduled Workload for the Server
Controlling How Often Client Nodes Contact the Server
Specifying One-Time Actions for Client Nodes
Determining How Long the One-Time Schedule Remains Active
Glossary . . . . . . . . . . . . . 667
Index . . . . . . . . . . . . . . . 677
Preface
The following table lists Tivoli Storage Manager storage agent publications.

Publication Title                                                        Order Number
IBM Tivoli Storage Manager for AIX Storage Agent User's Guide            GC32-0771
IBM Tivoli Storage Manager for HP-UX Storage Agent User's Guide          GC32-0727
IBM Tivoli Storage Manager for Linux Storage Agent User's Guide          GC23-4693
IBM Tivoli Storage Manager for Sun Solaris Storage Agent User's Guide    GC32-0781
IBM Tivoli Storage Manager for Windows Storage Agent User's Guide        GC32-0785
Publication Title                                                        Order Number
IBM Tivoli Storage Manager for Space Management for UNIX: User's Guide   GC32-0794
Publication Title                                                        Order Number
IBM Tivoli Storage Manager for Application Servers: Data Protection for WebSphere Application Server Installation and User's Guide    SC32-9075
IBM Tivoli Storage Manager for Databases: Data Protection for Microsoft SQL Server Installation and User's Guide    SC32-9059
IBM Tivoli Storage Manager for Databases: Data Protection for Oracle for UNIX Installation and User's Guide    SC32-9064
IBM Tivoli Storage Manager for Databases: Data Protection for Oracle for Windows Installation and User's Guide    SC32-9065
IBM Tivoli Storage Manager for Databases: Data Protection for Informix Installation and User's Guide    SH26-4095
IBM Tivoli Storage Manager for Enterprise Resource Planning: Data Protection for R/3 Installation and User's Guide for DB2 UDB    SC33-6341
IBM Tivoli Storage Manager for Enterprise Resource Planning: Data Protection for R/3 Installation and User's Guide for Oracle    SC33-6340
IBM Tivoli Storage Manager for Hardware: Data Protection for EMC Symmetrix for R/3 Installation and User's Guide    SC33-6386
Publication Title                                                        Order Number
IBM Tivoli Storage Manager for Hardware: Data Protection for Enterprise Storage Server Databases (DB2 UDB) Installation and User's Guide    SC32-9060
IBM Tivoli Storage Manager for Hardware: Data Protection for Enterprise Storage Server Databases (Oracle) Installation and User's Guide    SC32-9061
IBM Tivoli Storage Manager for Hardware: Data Protection for IBM ESS for R/3 Installation and User's Guide for DB2 UDB    SC33-8204
IBM Tivoli Storage Manager for Hardware: Data Protection for IBM ESS for R/3 Installation and User's Guide for Oracle    SC33-8205
IBM Tivoli Storage Manager for Mail: Data Protection for Lotus Domino for UNIX and OS/400 Installation and User's Guide    SC32-9056
IBM Tivoli Storage Manager for Mail: Data Protection for Lotus Domino for Windows Installation and User's Guide    SC32-9057
IBM Tivoli Storage Manager for Mail: Data Protection for Lotus Domino, S/390 Edition Licensed Program Specifications    GC26-7305
IBM Tivoli Storage Manager for Mail: Data Protection for Microsoft Exchange Server Installation and User's Guide    SC32-9058
For support for this or any Tivoli product, you can contact IBM Customer Support
in one of the following ways:
Reporting a Problem
Please have the following information ready when you report a problem:
v The Tivoli Storage Manager server version, release, modification, and service
level number. You can get this information by entering the QUERY STATUS
command at the Tivoli Storage Manager command line.
v The Tivoli Storage Manager client version, release, modification, and service
level number. You can get this information by entering dsmc at the command
line.
v The communication protocol (for example, TCP/IP), version, and release number
you are using.
v The activity you were doing when the problem occurred, listing the steps you
followed before the problem occurred.
v The exact text of any error messages.
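For example, you might gather the server and client levels as follows. This is only a sketch: the administrator ID and password are placeholders, and your command-line options may differ.

```
dsmadmc -id=admin -password=secret "query status"
dsmc
```

The QUERY STATUS output includes the server version, release, modification, and service level; the banner displayed when you start dsmc reports the client level.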
Translations
Selected IBM Tivoli Storage Manager publications have been translated into
languages other than American English. Contact your sales representative for more
information about the translated publications and whether these translations are
available in your country.
The following changes have been made to the product for this edition:
NDMP Operations
IBM 3494 Library Support
NDMP support is now provided for the IBM 3494 Tape Library Dataserver library type.
See Recovering from Device Changes on the SAN on page 109 for more
information.
See Chapter 11, Managing Client Nodes, on page 261 for more
information.
Tape Autolabeling
Tivoli Storage Manager now provides the option to have tape volumes
automatically labeled by the server. This option is available for SCSI library
types. The server will label both blank and incorrectly labeled tapes when
they are initially mounted. This eliminates the need to pre-label a set of
tapes.
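For a SCSI library, autolabeling can be enabled when the library is defined. The following is a hedged sketch; the library name is illustrative, and you should confirm the parameter syntax in the Administrator's Reference for your level of the product:

```
define library autolib libtype=scsi autolabel=yes
```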
Application client
Application clients allow users to perform online backups of data for
applications such as database programs. After the application program
initiates a backup or restore, the application client acts as the interface to
Tivoli Storage Manager. The Tivoli Storage Manager server then applies its
storage management functions to the data. The application client can
perform its functions while application users are working, with minimal
disruption.
The following products provide application clients for use with the Tivoli Storage Manager server:
v Tivoli Storage Manager for Application Servers
v Tivoli Storage Manager for Databases
v Tivoli Storage Manager for Enterprise Resource Planning
v Tivoli Storage Manager for Mail
Also available is Tivoli Storage Manager for Hardware, which works with
the backup-archive client and the API to help eliminate backup-related
performance effects.
The storage agent is available for use with backup-archive clients and
application clients on a number of operating systems. The Tivoli Storage
Manager for Storage Area Networks product includes the storage agent.
For information about supported operating systems for clients, see the IBM Tivoli Storage Manager Web site at www.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html.
Client programs such as the backup-archive client and the HSM client (space
manager) are installed on systems that are connected through a LAN and are
registered as client nodes. From these client nodes, users can back up, archive, or
migrate files to the server.
The following sections present key concepts and information about IBM Tivoli
Storage Manager. The sections describe how Tivoli Storage Manager manages client
files based on information provided in administrator-defined policies, and manages
devices and media based on information provided in administrator-defined Tivoli
Storage Manager storage objects.
The final section gives an overview of tasks for the administrator of the server,
including options for configuring the server and how to maintain the server.
Concepts:
How IBM Tivoli Storage Manager Stores Client Data
How the Server Manages Storage on page 15
Configuring and Maintaining the Server on page 17
Figure 1. How IBM Tivoli Storage Manager Controls Backup, Archive, and Migration
Processes
initially stored. For example, you may want to set up server storage so that
Tivoli Storage Manager migrates files from a disk storage pool to tape volumes
in a tape storage pool.
Files remain in server storage until they expire and expiration processing occurs, or
until they are deleted from server storage. A file expires because of criteria that are
set in policy. For example, the criteria include the number of versions allowed for a
file and the number of days that have elapsed since a file was deleted from the
client's file system.
For information on assigning storage destinations in copy groups and management
classes, and on binding management classes to client files, see Chapter 12,
Implementing Policies for Client Data, on page 297.
For information on managing the database, see Chapter 18, Managing the
Database and Recovery Log, on page 419.
For information about storage pools and storage pool volumes, see Chapter 9,
Managing Storage Pools and Volumes, on page 179.
Table 2. Examples of Meeting Your Goals with IBM Tivoli Storage Manager (continued)
For this goal...: Create an archive for a backup-archive client, from data that is already stored for backup.
Do this...: Use the backup-archive client to perform incremental backups, and then generate a backup set by using the Tivoli Storage Manager server. This is also called instant archive.
Schedule the backups of client data to help enforce the data management policy
that you establish. If you schedule the backups, rather than rely on the clients to
perform the backups, the policy that you establish is followed more consistently.
See Chapter 14, Scheduling Operations for Client Nodes, on page 359.
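As a sketch, a nightly incremental backup schedule might be defined and then associated with client nodes as follows. The policy domain, schedule, and node names are examples only, not values from this book:

```
define schedule standard daily_incr action=incremental starttime=21:00 duration=2 durunits=hours period=1 perunits=days
define association standard daily_incr node1,node2
```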
The standard backup method that Tivoli Storage Manager uses is called progressive
incremental backup. It is a unique and efficient method for backup. See Progressive
Incremental Backup Compared with Other Backup Types on page 14.
Table 3 on page 11 summarizes the client operations that are available. In all cases,
the server tracks the location of the backup data in its database. Policy that you set
determines how the backup data is managed.
Table 3 lists the following types of client operation, and for each describes the operation, its usage, and the restore options: progressive incremental backup, adaptive subfile backup, selective backup, journal-based backup, image backup, image backup with differential backups, backup using hardware snapshot capabilities, group backup, archive, and instant archive. For the backup operations, see Incremental Backup on page 312 and the Backup-Archive Clients Installation and User's Guide. For archive and instant archive, see Archive on page 315 and the Backup-Archive Clients Installation and User's Guide.
LAN-free data movement allows storage agents that are installed on client nodes
to move data without sending the data over the LAN to the server. See LAN-Free
Data Movement on page 42.
For network-attached storage, use NDMP operations to avoid data movement over
the LAN. See NDMP Backup Operations on page 45.
Tivoli Storage Manager represents physical storage devices and media with the
following administrator-defined objects:
Library
A library is one or more drives (and possibly robotic devices) with similar
media mounting requirements.
Drive
Each drive represents a drive mechanism in a tape or optical device.
Data mover
A data mover represents a device that accepts requests from Tivoli Storage
Manager to transfer data on behalf of the server. Data movers transfer data
between storage devices.
Path
A path represents how a source accesses a destination. For example, the source
can be a server, and the destination can be a tape drive. A path defines the
one-to-one relationship between a source and a destination. Data may flow
from the source to the destination, and back.
Device class
Each device is associated with a device class that specifies the device type and
how the device manages its media.
Storage pools and volumes
A storage pool is a named collection of volumes that have the same media
type. A storage pool is associated with a device class. For example, an LTO tape
storage pool contains only LTO tape volumes. A storage pool volume is
associated with a specific storage pool.
For details about device concepts, see Chapter 2, Introducing Storage Devices, on
page 31.
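These objects are typically defined in sequence, from the library down to the storage pool. The following is a hedged sketch for a SCSI tape library; the object names, device special files, and parameter values are examples only and will differ in your environment:

```
define library ltolib libtype=scsi
define path server1 ltolib srctype=server desttype=library device=/dev/lb0
define drive ltolib drive01
define path server1 drive01 srctype=server desttype=drive library=ltolib device=/dev/mt0
define devclass ltoclass devtype=lto library=ltolib
define stgpool ltopool ltoclass maxscratch=100
```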
You control the frequency of the expiration process by using a server option, or
you can start the expiration processing by command or scheduled command.
See Running Expiration Processing to Delete Expired Files on page 330.
v Using product features that allow the server to provide services to clients while
minimizing traffic on the communications network:
LAN-free data movement
Data movement using NDMP to protect data on network-attached storage
(NAS) file servers
v Using the Tivoli Storage Manager product to help you to manage the drives and
media, or using an external media manager to do the management outside of
the Tivoli Storage Manager product.
For an introduction to key storage concepts, see Chapter 2, Introducing Storage
Devices, on page 31.
You manage storage volumes by defining, updating, and deleting volumes, and by
monitoring the use of server storage. You can also move files within and across
storage pools to optimize the use of server storage.
For more information about storage pools and volumes and taking advantage of
storage pool features, see Chapter 9, Managing Storage Pools and Volumes, on
page 179.
For more information, see Chapter 10, Adding Client Nodes, on page 251 and
Chapter 11, Managing Client Nodes, on page 261.
Other important tasks include the following:
Controlling client options from the server
Client options on client systems allow users to customize backup, archive, and
space management operations, as well as schedules for these operations. On
most client systems, the options are in a file called dsm.opt. In some cases, you
may need or want to provide the clients with options to use. To help users get
started, or to control what users back up, you can define sets of client options
for clients to use. Client option sets are defined in the server database and are
used by the clients that you designate.
Among the options that can be in a client option set are the include and
exclude options. These options control which files are considered for the client
operations.
For more information, see Chapter 11, Managing Client Nodes, on page 261.
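A client option set is created on the server and then assigned to a node. The following sketch uses illustrative names and an example exclude pattern; they are not values from this book:

```
define cloptset standard_opts description="Common include-exclude options"
define clientopt standard_opts inclexcl "exclude /tmp/*"
update node node1 cloptset=standard_opts
```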
Allowing subfile backups
For mobile and remote users, you want to minimize the data sent over the
network, as well as the time that they are connected to the network. You can
set the server to allow a client node to back up changed portions of files that
have been previously backed up, rather than entire files. The portion of the file
that is backed up is called a subfile.
For more information, see Chapter 13, Managing Data for Client Nodes, on
page 343.
Creating backup sets for client nodes
You can perform an instant archive for a client by creating a backup set. A
backup set copies a client node's active, backed-up files from server storage
onto sequential media. If the sequential media can be read by a device available
to the client system, you can restore the backup set directly to the client system
without using the network. The server tracks backup sets that you create and
retains the backup sets for the time you specify.
For more information, see Chapter 13, Managing Data for Client Nodes, on
page 343.
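A backup set is generated on the server from a node's active files. As a sketch (the node name, backup set name, device class, and retention value are examples only):

```
generate backupset node1 monthly_set devclass=ltoclass retention=365
```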
Managing Security
Tivoli Storage Manager includes security features for user registration and
passwords. Also included are features that can help ensure security when clients
connect to the server across a firewall.
Registration for clients can be closed or open. With closed registration, a user with
administrator authority must register all clients. With open registration, clients can
register themselves at first contact with the server. See Registering Nodes with the
Server on page 252.
You can ensure that only authorized administrators and client nodes are
communicating with the server by requiring the use of passwords. You can also set
the following requirements for passwords:
v Number of characters in a password.
v Expiration time.
v A limit on the number of consecutive, invalid password attempts. When the
client exceeds the limit, Tivoli Storage Manager locks the client node from access
to the server.
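These password requirements are set with server commands. The values below are illustrative only:

```
set minpwlength 8
set passexp 90
set invalidpwlimit 3
```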
For better security when clients connect across a firewall, you can control whether
clients can initiate contact with the server for scheduled operations. See Quick Start
for details.
For additional ways to manage security, see Managing IBM Tivoli Storage
Manager Security on page 288.
v Determine how long Tivoli Storage Manager retains information about schedule
results (event records) in the database.
v Balance the workload on the server so that all scheduled operations complete.
For more information about these tasks, see Chapter 15, Managing Schedules for
Client Nodes, on page 367.
When you have a network of Tivoli Storage Manager servers, you can simplify
configuration and management of the servers by using enterprise administration
functions. You can do the following:
v Designate one server as a configuration manager that distributes configuration
information such as policy to other servers. See Setting Up an Enterprise
Configuration on page 479.
v Route commands to multiple servers while logged on to one server. See
Routing Commands on page 501.
v Log events such as error messages to one server. This allows you to monitor
many servers and clients from a single server. See Enterprise Event Logging:
Logging Events to Another Server on page 461.
v Store data for one Tivoli Storage Manager server in the storage of another Tivoli
Storage Manager server. The storage is called server-to-server virtual volumes.
See Using Virtual Volumes to Store Data on Another Server on page 505 for
details.
v Share an automated library among Tivoli Storage Manager servers. See Devices
on a Storage Area Network on page 41.
v Store a recovery plan file for one server on another server, when using disaster
recovery manager. You can also back up the server database and storage pools to
another server. See Chapter 23, Using Disaster Recovery Manager, on page 589
for details.
v You can export part or all of a server's data to sequential media, such as tape or
a file on hard disk. You can then take the media to another server and import
the data to that server.
v You can export part or all of a server's data and import the data directly to
another server, if server-to-server communications are set up.
For more information about moving data between servers, see Chapter 21,
Exporting and Importing Data, on page 513.
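With server-to-server communication defined, you can route a command by prefixing it with the names of the target servers. The server names here are examples only:

```
server1,server2: query status
```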
For information about protecting the server with these measures, see Chapter 22,
Protecting and Recovering Your Server, on page 541.
In addition to taking these actions, you can prepare a disaster recovery plan to
guide you through the recovery process by using the disaster recovery manager,
which is available with Tivoli Storage Manager Extended Edition. The disaster
recovery manager (DRM) assists you in the automatic preparation of a disaster
recovery plan. You can use the disaster recovery plan as a guide for disaster
recovery as well as for audit purposes to certify the recoverability of the Tivoli
Storage Manager server.
The disaster recovery methods of DRM are based on taking the following
measures:
v Sending server backup volumes offsite or to another Tivoli Storage Manager
server
v Creating the disaster recovery plan file for the Tivoli Storage Manager server
v Storing client machine information
v Defining and tracking client recovery media
For more information about protecting your server and for details about recovering
from a disaster, see Chapter 22, Protecting and Recovering Your Server, on
page 541.
How IBM Tivoli Storage Manager Mounts and Dismounts Removable Media on page 46
How IBM Tivoli Storage Manager Uses and Reuses Removable Media on page 47
Configuring Devices on page 50
Tasks: Plan, configure, and manage an environment for NDMP operations
Chapter: Chapter 6, Using NDMP for Operations with NAS File Servers, on page 111

Tasks: Perform routine operations such as labeling volumes, checking volumes into automated libraries, and maintaining storage volumes and devices.
For a summary, see Table 5 on page 50. For details about specific devices that are supported, visit the IBM Tivoli Storage Manager Web site at this address: www.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html.
v Library
v Drive
v Device class
v Storage pool
v Storage pool volume
v Data mover
v Path
v Server
The following sections describe these objects.
Libraries
A physical library is a collection of one or more drives that share similar media
mounting requirements. That is, the drive may be mounted by an operator or by
an automated mounting mechanism. A library object definition specifies the library
type (for example, SCSI or 349X) and other characteristics associated with the
library type (for example, the category numbers used by an IBM 3494 library for
private and scratch volumes).
Tivoli Storage Manager supports a variety of library types described in the
following sections.
Shared Libraries
Shared libraries are logical libraries that are represented physically by SCSI or
349X libraries. The physical SCSI or 349X library is controlled by the Tivoli Storage
Manager server configured as a library manager. Tivoli Storage Manager servers
using the SHARED library type are library clients to the library manager server.
Shared libraries reference a library manager.
ACSLS Libraries
The ACSLS software selects the appropriate drive for media access operations. You
do not define the drives, check in media, or label the volumes in an external
library.
Manual Libraries
In a manual library, an operator mounts the volumes. You cannot combine drives
of different types or formats, such as Digital Linear Tape (DLT) and 8mm, in a
single manual library. A separate manual library would have to be created for each
device type.
When the server determines that a volume must be mounted on a drive in a
manual library, the server issues mount request messages that prompt an operator
to mount the volume. The server sends these messages to the server console and to
administrative clients that were started by using the special mount mode or console
mode parameter.
For help on configuring a manual library, see Chapter 5, Configuring Storage
Devices, on page 69. For information on how to monitor mount messages for a
manual library, see Mount Operations for Manual Libraries on page 150.
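A manual library definition, and an administrative client started in mount mode so that it receives the mount request messages, might look like the following sketch (the library name is illustrative):

```
define library manual8mm libtype=manual
dsmadmc -mountmode
```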
SCSI Libraries
A SCSI library is controlled through a SCSI interface, attached either directly to the
server's host via SCSI cabling or by a storage area network. A robot or other
mechanism automatically handles volume mounts and dismounts. The drives in a
SCSI library may be of different types. A SCSI library may contain drives of mixed
technologies, for example LTO Ultrium and DLT drives.
349X Libraries
A 349X library is a collection of drives in an IBM 3494. Volume mounts and
demounts are handled automatically by the library. A 349X library has one or more
library management control points (LMCP) that the server uses to mount and
dismount volumes in a drive. Each LMCP provides an independent interface to the
robot mechanism in the library.
The drives in a 3494 library can be all of the same type (IBM 3490 or 3590) or a
mix of both types. For help on configuring a 349X library, see Chapter 5,
Configuring Storage Devices, on page 69.
External Libraries
An external library is a collection of drives managed by an external media
management system that is not part of Tivoli Storage Manager. The server provides
an interface that allows external media management systems to operate with the
server. The external media management system performs the following functions:
v Volume mounts (specific and scratch)
v Volume dismounts
v Freeing of library volumes (return to scratch)
The external media manager selects the appropriate drive for media access
operations. You do not define the drives, check in media, or label the volumes in
an external library.
An external library allows flexibility in grouping drives into libraries and storage
pools. The library may have one drive, a collection of drives, or even a part of an
automated library.
An ACSLS or LibraryStation controlled StorageTek library used in conjunction with
an external library manager (ELM), like Gresham's EDT-DistribuTAPE, is a type of
external library.
For a definition of the interface that Tivoli Storage Manager provides to the
external media management system, see Appendix A, External Media
Management Interface Description, on page 643.
Drives
Each drive mechanism within a device that uses removable media is represented
by a drive object. For devices with multiple drives, including automated libraries,
each drive is separately defined and must be associated with a library. Drive
definitions can include such information as the element address (for drives in SCSI
libraries), how often the drive is cleaned (for tape drives), and whether or not the
drive is online.
Tivoli Storage Manager drives include tape and optical drives that can stand alone
or that can be part of an automated library. Supported removable media drives
also include removable file devices such as re-writable CDs.
Device Class
Each device defined to Tivoli Storage Manager is associated with one device class.
That device class specifies a device type and media management information, such
as recording format, estimated capacity, and labeling prefixes. A device class for a
tape or optical drive must also specify a library.
A device type identifies a device as a member of a group of devices that share
similar media characteristics. For example, the 8MM device type applies to 8mm
tape drives. Device types include a variety of removable media types and also
FILE and SERVER.
Disk Devices
Magnetic disk devices are the only random access devices. All disk devices share
the same device type and predefined device class: DISK.
Removable Media
Tivoli Storage Manager provides a set of specified removable media device types,
such as 8MM for 8mm tape devices, or REMOVABLEFILE for Jaz or Zip drives.
The GENERICTAPE device type is provided to support certain devices that do not
use the Tivoli Storage Manager device driver. See Chapter 8, Defining Device
Classes, on page 163 and Administrator's Reference for more information about
supported removable media device types.
Figure 2. Removable Media Devices Are Represented by a Library, Drive, and Device Class
v For more information about the drive object, see Defining Drives on page 107
and Managing Drives on page 154.
v For more information about the library object, see Defining Libraries on
page 106 and Managing Libraries on page 152.
v For more information about the device class object, see Chapter 8, Defining
Device Classes, on page 163.
For DISK device classes, you must define volumes. For other device classes, such
as tape and FILE, you can allow the server to dynamically acquire scratch volumes
and define those volumes as needed. For details, see Preparing Volumes for
Random Access Storage Pools on page 190 and Preparing Volumes for Sequential
Access Storage Pools on page 190.
One or more device classes are associated with one library, which can contain
multiple drives. When you define a storage pool, you associate the pool with a
device class. Volumes are associated with pools. Figure 4 shows these relationships.
Figure 4. Relationships among volumes, storage pools, device classes, and a library
with its drives. Each volume belongs to a storage pool, each storage pool is
associated with a device class, and the device classes are associated with the
library.
For more information about the storage pool and volume objects, see Chapter 9,
Managing Storage Pools and Volumes, on page 179.
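Continuing the sketch, the following hypothetical commands tie these objects together by associating a storage pool with a device class and defining a volume to the pool (all names are assumptions):

```
define stgpool tapepool 8mm_class maxscratch=20
define volume tapepool dsm001
```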
Data Movers
Data movers are devices that accept requests from Tivoli Storage Manager to
transfer data on behalf of the server. Data movers transfer data:
v Between storage devices
v Without using significant Tivoli Storage Manager server or client resources
v Without using significant network resources
For NDMP operations, data movers are NAS file servers. The definition for a NAS
data mover contains the network address, authorization, and data formats required
for NDMP operations. A data mover enables communication and ensures authority
for NDMP operations between the Tivoli Storage Manager server and the NAS file
server.
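A NAS data mover definition might look like the following sketch; the node name, addresses, credentials, and data format are placeholders:

```
define datamover nas1 type=nas hladdress=netapp1 lladdress=10000
   userid=root password=admin dataformat=netappdump
```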
Path
Paths allow access to drives and libraries. A path definition specifies a source and
a destination. The source accesses the destination, but data can flow in either
direction between the source and destination. Here are a few examples of paths:
v Between a server and a drive or a library.
v Between a storage agent and a drive.
v Between a data mover and a drive, a disk, or a library.
For more information about the path object, see Defining Paths on page 108 and
Managing Paths on page 159.
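For instance, a path from a data mover to a drive might be defined as in this sketch (the names and the device value are illustrative):

```
define path nas1 nasdrive1 srctype=datamover desttype=drive
   library=naslib device=rst0l
```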
Server
You need to define a server object for the following purposes:
v To use a library that is on a SAN and that is managed by another Tivoli Storage
Manager server. You must define that server and then specify it as the library
manager when you define the library. For more information, see Setting up the
Library Client Servers on page 79.
v To use LAN-free data movement. You define the storage agent as a server. For
more information, see IBM Tivoli Storage Manager Storage Agent User's Guide.
v To store client data in the storage of another Tivoli Storage Manager server. For
more information, see Using Virtual Volumes to Store Data on Another Server
on page 505.
Among other characteristics, you must specify the server TCP/IP address.
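A minimal sketch of a server definition for server-to-server communication, with a placeholder password and addresses:

```
define server headqtrs serverpassword=secret hladdress=9.115.4.15 lladdress=1500
```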
Any storage pools associated with the same automated library can dynamically
acquire volumes from the library's pool of scratch volumes. You do not need to
allocate volumes to the different storage pools. Even if only one storage pool is
associated with a library, you do not need to explicitly define all the volumes for
the storage pool. Volumes are automatically added to and deleted from the storage
pool by the server.
Note: A disadvantage of using scratch volumes is that volume usage information,
which you can use to determine when the media has reached its end of life,
is deleted when the private volume is returned to the scratch volume pool.
1. Determine which drives and libraries are supported by the server. For more
information on device support, see Devices Supported by Tivoli Storage
Manager on page 59.
2. Determine which storage devices may be selected for use by the server. For
example, determine how many tape drives you have that you will allow the
server to use. For more information on selecting a device configuration, see
Selecting a Device Configuration on page 40.
The servers can share devices in libraries that are attached through a SAN. If
the devices are not on a SAN, the server expects to have exclusive use of the
drives defined to it. If another application (including another Tivoli Storage
Manager server) tries to use a drive while the server to which the drive is
defined is running, some server functions may fail. See
www.ibm.com/software/sysmgmt/products/
support/IBMTivoliStorageManager.html for more information about specific
drives and libraries.
3. Determine the device driver that supports the devices. For more information
on device driver support, see Installing and Configuring Device Drivers on
page 61.
4. Determine how to attach the devices to the server. For more information on
attaching devices, see Attaching an Automated Library Device on page 60.
5. Determine whether to back up client data directly to tape or to a storage
hierarchy.
6. Determine which client data is backed up to which device, if you have
multiple device types.
7. Determine the device type and device class for each of the available devices.
Group together similar devices and identify their device classes. For example,
create separate categories for 4mm and 8mm devices.
Note: For sequential access devices, you can categorize the type of removable
media based on their capacity. For example, standard length cartridge
tapes and longer length cartridge tapes require different device classes.
8. Determine how the mounting of volumes is accomplished for the devices:
v Devices that require operators to load volumes must be part of a defined
MANUAL library.
v Devices that are automatically loaded must be part of a defined SCSI or
349X library. Each automated library device is a separate library.
v Devices that are controlled by StorageTek Automated Cartridge System
Library Software (ACSLS) must be part of a defined ACSLS library.
v Devices that are managed by an external media management system must
be part of a defined EXTERNAL library.
9. If you are considering storing data for one Tivoli Storage Manager server
using the storage of another Tivoli Storage Manager server, consider network
bandwidth and network traffic. If your network resources constrain your
environment, you may have problems using the SERVER device type
efficiently.
Also consider the storage resources available on the target server. Ensure that
the target server has enough storage space and drives to handle the load from
the source server.
10. Determine the storage pools to set up, based on the devices you have and on
user requirements. Gather users' requirements for data availability. Determine
which data needs quick access and which does not.
11. Be prepared to label removable media. You may want to create a new labeling
convention for media so that you can distinguish them from media used for
other purposes.
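As one possible approach for a library with a barcode reader, volumes can be labeled and checked in as scratch in a single operation (the library name is an assumption):

```
label libvolume autolib search=yes labelsource=barcode checkin=scratch
```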
Figure 5. Library Sharing in a Storage Area Network (SAN) Configuration. The servers
communicate over the LAN. The library manager controls the library over the SAN. The
library client stores data to the library devices over the SAN.
When Tivoli Storage Manager servers share a library, one server, the library
manager, controls device operations. These operations include mount, dismount,
volume ownership, and library inventory. Other Tivoli Storage Manager servers,
library clients, use server-to-server communications to contact the library manager
and request device service. Data moves over the SAN between each server and the
storage device.
Tivoli Storage Manager servers use the following features when sharing an
automated library:
Partitioning of the Volume Inventory
The inventory of media volumes in the shared library is partitioned among
servers. Either one server owns a particular volume, or the volume is in
the global scratch pool. No server owns the scratch pool at any given time.
Serialized Drive Access
Only one server accesses each tape drive at a time. Drive access is
serialized and controlled so that servers do not dismount other servers
volumes or write to drives where other servers mount their volumes.
Serialized Mount Access
The library autochanger performs a single mount or dismount operation at
a time. A single server (library manager) performs all mount operations to
provide this serialization.
Figure 6. LAN-Free Data Movement. Client and server communicate over the LAN. The
server controls the device on the SAN. Client data moves over the SAN to the device.
LAN-free data movement requires the installation of a storage agent on the client
machine. The server maintains the database and recovery log, and acts as the
library manager to control device operations. The storage agent on the client
handles the data transfer to the device on the SAN. This implementation frees up
bandwidth on the LAN that would otherwise be used for client data movement.
The following outlines a typical backup scenario for a client that uses LAN-free
data movement:
1. The client begins a backup operation. The client and the server exchange policy
information over the LAN to determine the destination of the backed up data.
For a client using LAN-free data movement, the destination is a storage pool
that uses a device on the SAN.
2. Because the destination is on the SAN, the client contacts the storage agent,
which will handle the data transfer. The storage agent sends a request for a
volume mount to the server.
3. The server contacts the storage device and, in the case of a tape library, mounts
the appropriate media.
4. The server notifies the client of the location of the mounted media.
5. The client, through the storage agent, writes the backup data directly to the
device over the SAN.
6. The storage agent sends file attribute information to the server, and the server
stores the information in its database.
Chapter 2. Introducing Storage Devices
If a failure occurs on the SAN path, failover occurs. The client uses its LAN
connection to the Tivoli Storage Manager server and moves the client data over the
LAN.
Note: See the IBM Tivoli Storage Manager home page at
www.ibm.com/software/sysmgmt/products/
support/IBMTivoliStorageManager.html for the latest information on clients
that support the feature.
Network-Attached Storage
Network-attached storage (NAS) file servers are dedicated storage machines whose
operating systems are optimized for file-serving functions. NAS file servers
typically do not run third-party software. Instead, they interact with programs like
Tivoli Storage Manager through industry-standard network protocols, such as
NDMP. Tivoli Storage Manager uses the NDMP protocol to communicate with and
direct backup and restore operations for NAS file servers.
Using NDMP, Tivoli Storage Manager can back up and restore images of complete
file systems. NDMP allows the Tivoli Storage Manager server to control the backup
of a NAS file server. The file server transfers the backup data to a drive in a
SCSI-attached tape library. The NAS file server can be distant from the Tivoli
Storage Manager server.
Tivoli Storage Manager tracks file system image backups on tape, and has the
capability to perform NDMP file-level restores. For more information regarding
NDMP file-level restores, see NDMP File-Level Restore on page 45.
Figure 7. The Tivoli Storage Manager server, the NAS file server, and the tape
library (legend: SCSI or Fibre Channel connection; TCP/IP connection; data flow).
When Tivoli Storage Manager uses NDMP to protect NAS file servers, the Tivoli
Storage Manager server controls operations while the NAS file server transfers the
data. To use a backup-archive client to back up a NAS file server, mount the NAS
file server file system on the client machine (with either an NFS mount or a CIFS
map) and back up as usual. The following table compares the two methods:
Table 4. Comparing NDMP Operations and Tivoli Storage Manager Backup-Archive Client
Operations
gathering file-level information. You will not be able to list the files on the client.
You will be able to restore them if you already know what they are. This is the
default setting.
If you choose to enable the file-level restore option, the Tivoli Storage Manager
server constructs a table of contents (TOC) of file-level information for a single
backup image produced by NDMP operations. The TOC is stored in the Tivoli
Storage Manager server's storage, and the server can retrieve it later so that
the information can be queried by the client or server.
The TOC is created for a backup image produced by NDMP operations when you
back up using one of the following:
v BACKUP NAS client command, with include.fs.nas specified in the client options
file or specified in the client options set
v BACKUP NODE server command
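For example, a TOC might be requested on the BACKUP NODE server command as in this sketch (the node and file system names are illustrative):

```
backup node nas1 /vol/vol1 toc=yes
```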
If a defined volume in the storage pool can be used, the server selects
that volume.
4. The server dismounts the volume when it has finished accessing the volume
and the mount retention period has elapsed.
v For a manual library, the server ejects the volume from the drive so that an
operator can place it in its storage location.
v For an automated library, the server directs the library to move the volume
from the drive back to its original storage slot in the library.
How IBM Tivoli Storage Manager Uses and Reuses Removable Media
Tivoli Storage Manager allows you to control how removable media are used and
reused. After Tivoli Storage Manager selects an available medium, that medium is
used and eventually reclaimed according to its associated policy.
Tivoli Storage Manager manages the data on the media, but you manage the media
itself, or you can use a removable media manager. Regardless of the method used,
managing media involves creating a policy to expire data after a certain period of
time or under certain conditions, moving valid data onto new media, and reusing
the empty media.
In addition to information about storage pool volumes, the volume history
contains information about tapes used for database backups and exports (for
disaster recovery purposes). The process for reusing these tapes is slightly different
from the process for reusing tapes containing client data backups.
Figure 8 on page 48 shows a typical life cycle for removable media. The numbers
(for example, (1)) refer to callouts in the figure.
1. You label (1) and check in (2) the media. Checking media into a manual
library simply means storing them (for example, on shelves). Checking media
into an automated library involves adding them to the library volume
inventory.
See Labeling Removable Media Volumes on page 134.
2. If you plan to define volumes to a storage pool associated with a device, you
should check in the volume with its status specified as private. Use of scratch
volumes is more convenient in most cases.
3. A client sends data to the server for backup, archive, or space management.
The server stores the client data on the volume. Which volume the server
selects (3) depends on:
v The policy domain to which the client is assigned.
v The management class for the data (either the default management class for
the policy set, or the class specified by the client in the client's
include/exclude list or file).
v The storage pool specified as the destination in either the management class
(for space-managed data) or copy group (for backup or archive data). The
storage pool is associated with a device class, which determines which
device and which type of media is used.
v Whether the maximum number of scratch volumes that a server can request
from the storage pool has been reached when the scratch volumes are
selected.
5. You determine when the media has reached its end of life.
For volumes that you defined (private volumes), check the statistics on the
volumes by querying the database. The statistics include the number of write
passes on a volume (compare with the number of write passes recommended
by the manufacturer) and the number of errors on the volume.
You must move any valid data off a volume that has reached end of life. Then,
if the volume is in an automated library, check out the volume from the library.
If the volume is not a scratch volume, delete the volume from the database.
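The steps above might look like the following sketch for a volume named DSM0012 in an automated library named AUTOLIB (both names are assumptions):

```
query volume dsm0012 format=detailed
move data dsm0012
checkout libvolume autolib dsm0012 remove=yes
delete volume dsm0012
```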
Configuring Devices
Before the Tivoli Storage Manager server can use a device, the device must be
configured to the operating system as well as to the server. Table 5 summarizes the
definitions that are required for different device types.
Table 5. Required Definitions for Storage Devices

Device            Device Types                  Library  Drive  Path  Device Class
Magnetic disk     DISK                          --       --     --    Yes
                  FILE                          --       --     --    Yes
Tape              3570, 3590, 4MM, 8MM,         Yes      Yes    Yes   Yes
                  CARTRIDGE, DLT, DTF,
                  ECARTRIDGE, GENERICTAPE,
                  LTO, NAS, QIC, VOLSAFE
Optical           OPTICAL, WORM, WORM12,        Yes      Yes    Yes   Yes
                  WORM14
Removable media   REMOVABLEFILE                 Yes      Yes    Yes   Yes
(file system)
Virtual volumes   SERVER                        --       --     --    Yes

The CARTRIDGE device type is for IBM 3480, 3490, and 3490E tape drives. The
ECARTRIDGE device type is for StorageTek's cartridge tape drives such as the
SD-3, 9480, 9890, and 9940 drives.
Device Class  Description
DISK          (predefined device class for random access disk volumes)
8MM_CLASS     Storage volumes that are 8mm tapes, used with the drives in
              the automated library
DLT_CLASS     Storage volumes that are DLT tapes, used on the DLT drive
You must define any device classes that you need for your removable media
devices such as tape drives. See Chapter 8, Defining Device Classes, on page 163
for information on defining device classes to support your physical storage
environment.
Storage Pool  Device Class  Library                        Drives            Volume Type
BACKUPPOOL    DISK          --                             --                Storage volumes on the
                                                                             internal disk drive
BACKTAPE1     8MM_CLASS     AUTO_8MM (Exabyte EXB-210)     DRIVE01, DRIVE02  8mm tapes
BACKTAPE2     DLT_CLASS     MANUAL_LIB (manually mounted)  DRIVE03           DLT tapes
Note: Tivoli Storage Manager has default disk storage pools named BACKUPPOOL, ARCHIVEPOOL,
and SPACEMGPOOL. For more information, see Configuring Random Access Volumes on Disk
Devices on page 54.
In this chapter, most examples illustrate how to perform tasks by using a Tivoli
Storage Manager command-line interface. For information about the commands,
see Administrator's Reference, or issue the HELP command from the command line
of a Tivoli Storage Manager administrative client.
Tivoli Storage Manager tasks can also be performed from the administrative Web
interface. For more information about using the administrative interface, see Quick
Start.
Note: Some of the tasks described in this chapter require an understanding of
Tivoli Storage Manager storage objects. For an introduction to these storage
objects, see IBM Tivoli Storage Manager Storage Objects on page 32.
If you do not specify a full path name, the command uses the current path. See
Defining Storage Pool Volumes on page 191 for details.
This one-step process replaces the former two-step process of first formatting a
volume (using DSMFMT) and then defining the volume. If you choose to use
the two-step process, the DSMFMT utility is available from the operating
system command line. See Administrator's Reference for details.
Another option for preparing a volume is to create a raw logical volume by
using SMIT.
3. Do one of the following:
v Specify the new storage pool as the destination for client files that are backed
up, archived, or migrated, by modifying existing policy or creating new
policy. See Chapter 12, Implementing Policies for Client Data, on page 297
for details.
v Place the new storage pool in the storage pool migration hierarchy by
updating an already defined storage pool. See Example: Updating Storage
Pools on page 186.
This command defines device class FILECLASS with a device type of FILE.
See Defining and Updating FILE Device Classes on page 170.
To store database backups or exports on FILE volumes, this step is all you need
to do to prepare the volumes. For more information, see Defining Device
Classes for Backups on page 554 and Planning for Sequential Media Used to
Export Data on page 521.
2. Define a storage pool that is associated with the new FILE device class.
For example, enter the following command on the command line of an
administrative client:
define stgpool engback2 fileclass maxscratch=100 mountlimit=2
This command defines storage pool ENGBACK2 with device class FILECLASS.
See Defining or Updating Primary Storage Pools on page 182 for details.
To allow Tivoli Storage Manager to use scratch volumes for this storage pool,
specify a value greater than zero for the maximum number of scratch volumes
when you define the storage pool. If you set MAXSCRATCH=0 to disallow
scratch volumes, you must define each volume to be used in this storage pool.
See Preparing Volumes for Sequential Access Storage Pools on page 190 for
details.
3. Do one of the following:
v Specify the new storage pool as the destination for client files that are backed
up, archived, or migrated, by modifying existing policy or creating new
policy. See Chapter 12, Implementing Policies for Client Data, on page 297
for details.
v Place the new storage pool in the storage pool migration hierarchy by
updating an already defined storage pool. See Example: Updating Storage
Pools on page 186.
If Tivoli Storage Manager encounters a problem with a disk volume, the server
automatically varies the volume offline.
You can make the disk volume available to the server again by varying the volume
online. For example, to make the disk volume named /storage/pool001 available to
the server, enter:
vary online /storage/pool001
Using Cache
When you define a storage pool that uses disk random access volumes, you can
choose to enable or disable cache. When you use cache, a copy of the file remains
on disk storage even after the file has been migrated to the next pool in the storage
hierarchy (for example, to tape). The file remains in cache until the space it
occupies is needed to store new files.
Using cache can improve how fast a frequently accessed file is retrieved. Faster
retrieval can be important for clients storing space-managed files. If the file needs
to be accessed, the copy in cache can be used rather than the copy on tape.
However, using cache can degrade the performance of client backup operations
and increase the space needed for the database. For more information, see Using
Cache on Disk Storage Pools on page 207.
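As a sketch, cache can be enabled on an existing random access storage pool; the pool name here is one of the default pools mentioned elsewhere in this book:

```
update stgpool backuppool cache=yes
```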
DELETE VOLHISTORY command. For information about the volume history and
volume history files, see Saving the Volume History File on page 557.
Note: If your server is licensed for the disaster recovery manager (DRM) function,
the volume information is automatically deleted during MOVE DRMEDIA
command processing. For additional information about DRM, see
Chapter 23, Using Disaster Recovery Manager, on page 589.
Note: Each device that is connected in a chain to a single SCSI bus must be set
to a unique SCSI ID. If each device does not have a unique SCSI ID,
serious system problems can arise.
4. Follow the manufacturers instructions to attach the device to your server
system hardware.
Attention:
a. Power off your system before attaching a device to prevent damage to the
hardware.
b. Attach a terminator to the last device in the chain of devices connected on
one SCSI adapter card.
5. Install the appropriate device drivers. See Installing and Configuring Device
Drivers on page 61.
6. Find the device worksheet that applies to your device. See
www.ibm.com/software/sysmgmt/products/
support/IBMTivoliStorageManager.html.
7. Record the pre-determined name of your device on the device worksheet. The
device name for a tape drive is a special file name.
See Determining Device Special File Names on page 62 for details.
Note: The information you record on the worksheets can help you when you need
to perform operations such as adding volumes. Keep the worksheets for future
reference.
Drives use the following device drivers:

Device Type                              Device Driver
4MM, 8MM, DLT, DTF, QIC, ECARTRIDGE      Tivoli Storage Manager device driver
OPTICAL, WORM                            Tivoli Storage Manager device driver
3570, 3590, CARTRIDGE, LTO               IBM device driver (Atape)

Libraries use the following device drivers:

Library Type   Device Driver
SCSI           Tivoli Storage Manager device driver
349X           atldd (IBM library device driver)
Device Special File Name   Logical File Name
/dev/mtx                   mtx
/dev/lbx                   lbx
/dev/ropx                  opx
/dev/rmtx                  rmtx
/dev/rmtx.smc              rmtx
/dev/smcx                  smcx
/dev/lmcpx                 lmcpx
/dev/cdx                   cdx

In these names, x is a number assigned by the system. The devices appear in
system messages as, for example, rmtx Available or smcx Available.
The special file name for the media changer device is /dev/smcx.
For example, if the message is rmt0 Available, enter /dev/rmt0 in the Device Name
field for the drive. Enter /dev/smc0 in the Device Name field on the worksheet
for the library's robotics. Always use the /dev/ prefix with the name provided by
the system.
Note: For multidrive devices (for example, IBM 3570 Model B12 or B22, or IBM
3575), you need only one smcx worksheet entry. Although you will receive
a /dev/smcx Available message for each rmt device in the library, you need
only one smc entry for the library on the worksheet.
Remove a Device
This option removes a single FC SAN-attached Tivoli Storage Manager
device whose state is DEFINED in the ODM database. If necessary,
rediscover the device by selecting the Discover Devices Supported by
Tivoli Storage Manager option after removal of a defined device.
To configure an FC SAN-attached device:
1. Run the SMIT program.
2. Select Devices.
3. Select Tivoli Storage Manager Devices.
4. Select Fibre Channel SAN Attached devices.
5. Select Discover Devices Supported by TSM. The discovery process can take
some time.
6. Go back to the Fibre Channel menu, and select List Attributes of a Discovered
Device.
7. Note the 3-character device identifier, which you use when defining a path to
the device to Tivoli Storage Manager. For example, if a tape drive has the
identifier mt2, specify /dev/mt2 as the device name.
In this chapter, most examples illustrate how to perform tasks by using a Tivoli
Storage Manager command-line interface. For information about the commands,
see Administrator's Reference, or issue the HELP command from the command line
of a Tivoli Storage Manager administrative client.
Tivoli Storage Manager tasks can also be performed from the administrative Web
interface. For more information about using the administrative interface, see Quick
Start.
Note: Some of the tasks described in this chapter require an understanding of
Tivoli Storage Manager storage objects. For an introduction to these storage
objects, see IBM Tivoli Storage Manager Storage Objects on page 32.
Tivoli Storage Manager supports mixing different device types within a single
automated library, as long as the library itself can distinguish among the different
media for the different device types. Libraries with this capability are those models
supplied from the manufacturer already containing mixed drives, or capable of
supporting the addition of mixed drives. Check with the manufacturer, and also
check the Tivoli Storage Manager Web site for specific libraries that have been
tested on Tivoli Storage Manager with mixed device types. For example, you can
have Quantum SuperDLT drives, LTO Ultrium drives, and StorageTek 9940 drives
in a single library defined to the Tivoli Storage Manager server. For examples of
how to set this up, see Configuration with Multiple Drive Device Types on
page 74 and Configuration with Multiple Drive Device Types on page 84.
While the Tivoli Storage Manager server now allows mixed device types in a
library, the mixing of different generations of the same type of drive is still not
supported. A new generation of media can be read and written to by new drives,
but cannot be read by older drives. The previous generation of media usually can
be read, but cannot be written to, by the new drives that support the new
generation of media.
Mixing generations of the same type of drive and media technology is generally
not supported in a Tivoli Storage Manager library. All media must be readable, if
not writable, by all such drives in a single library. If the new drive technology
cannot write to media formatted by older generation drives, the older media must
be marked read-only to avoid problems for server operations.
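A hedged example of marking such an older-generation volume read-only (the volume name is illustrative):

```
update volume dsm0099 access=readonly
```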
Some examples of combinations that the Tivoli Storage Manager server does not
support in a single library are:
An exception is that the server does support the mixing of LTO Ultrium Generation
1 drives and media with LTO Ultrium Generation 2 drives and media. The server
supports this mix because the LTO Ultrium Generation 2 drives can read and write
to Generation 1 media.
The following server options affect device operations; see the Administrator's
Reference for a description of each option:
3494SHARED
ACSACCESSID
ACSLOCKDRIVE
ACSQUICKINIT
ACSTIMEOUTX
ASSISTVCRRECOVERY
DRIVEACQUIRERETRY
ENABLE3590LIBRARY
NOPREEMPT
RESOURCETIMEOUT
SEARCHMPQUEUE
v In the first configuration, both drives in the SCSI library are the same device
type. You define one device class.
v In the second configuration, the drives are different device types. You define a
device class for each drive device type.
Drives with different device types are supported in a single library if you define
a device class for each type of drive. If you configure the library this way, you
must include the specific recording format for each drive's device type by using
the FORMAT parameter with a value other than DRIVE.
In this example, the SCSI library contains two DLT tape drives.
1. Define a SCSI library named AUTODLTLIB. The library type is SCSI because the
library is a SCSI-controlled automated library. Enter the following command:
define library autodltlib libtype=scsi
The DEVICE parameter specifies the device driver's name for the library, which
is the special file name.
See Defining Libraries on page 106 and SCSI Libraries on page 33. For more
information about paths, see Defining Paths on page 108.
3. Define the drives in the library. Both drives belong to the AUTODLTLIB library.
This example uses the default for the drive's element address, which is to have
the server obtain the element number from the drive itself at the time that the
path is defined.
The element address is a number that indicates the physical location of a drive
within an automated library. The server needs the element address to connect
the physical location of the drive to the drive's SCSI address. You can have the
server obtain the element number from the drive itself at the time that the path
is defined, or you can specify the element number when you define the drive.
Depending on the capabilities of the library, the server may not be able to
automatically detect the element address. In this case you must supply the
element address when you define the drive. If you need the element numbers,
check the device worksheet filled out in step 7 on page 60. Element numbers for
many libraries are available at www.ibm.com/software/sysmgmt/products/
support/IBMTivoliStorageManager.html.
See Defining Drives on page 107. For more information about paths, see
Defining Paths on page 108.
4. Define a path from the server to each drive:
define path server1 drive01 srctype=server desttype=drive
library=autodltlib device=/dev/mt4
define path server1 drive02 srctype=server desttype=drive
library=autodltlib device=/dev/mt5
The DEVICE parameter specifies the device driver's name for the drive, which
is the device special file name. For more about device special file names, see
Determining Device Special File Names on page 62.
If you did not include the element address when you defined the drive, the
server now queries the library to obtain the element address for the drive.
For more information about paths, see Defining Paths on page 108.
5. Classify drives according to type by defining Tivoli Storage Manager device
classes. Use FORMAT=DRIVE as the recording format only if all the drives
associated with the device class are identical. For example, to classify two
drives in the AUTODLTLIB library, use the following command to define a
device class named AUTODLT_CLASS:
define devclass autodlt_class library=autodltlib devtype=dlt format=drive
Key choices:
a. Scratch volumes are empty volumes that are labeled and available for use.
If you allow scratch volumes for the storage pool by specifying a value for
the maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. Collocation is turned off by default. Collocation is a process by which the
server attempts to keep all files belonging to a client node or client file
space on a minimal number of volumes. Once clients begin storing data in a
storage pool with collocation off, you cannot easily change the data in the
storage pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see Keeping a Client's Files Together:
Collocation on page 208 and How Collocation Affects Reclamation on
page 220.
For more information, see Defining or Updating Primary Storage Pools on
page 182.
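The scratch-volume behavior described in key choice (a) can be sketched as follows. This is an illustrative Python model only, not Tivoli Storage Manager code; the function and parameter names (`pick_volume`, `max_scratch`) are invented for the example.

```python
def pick_volume(defined_volumes, scratch_volumes, scratch_in_use, max_scratch):
    """Illustrative model of volume selection for a storage pool.

    Prefer an explicitly defined volume; otherwise take a labeled
    scratch volume from the library, but only while the pool's
    MAXSCRATCH limit has not been reached.
    """
    if defined_volumes:
        return defined_volumes[0]
    if scratch_volumes and scratch_in_use < max_scratch:
        return scratch_volumes[0]
    return None  # operator must define or check in more volumes

# A pool with no predefined volumes draws from scratch until MAXSCRATCH:
print(pick_volume([], ["SCR001", "SCR002"], scratch_in_use=0, max_scratch=20))
```

The point of the model is the fallback order: defined volumes are always used first, and scratch volumes require no operator action until the MAXSCRATCH limit is reached.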
In this example, the library is a StorageTek L40 library that contains one DLT drive
and one LTO Ultrium drive.
1. Define a SCSI library named MIXEDLIB. The library type is SCSI because the
library is a SCSI-controlled automated library. Enter the following command:
define library mixedlib libtype=scsi
2. Define a path from the server to the library. The DEVICE parameter specifies
the device driver's name for the library, which is the special file name.
See Defining Libraries on page 106 and SCSI Libraries on page 33. For more
information about paths, see Defining Paths on page 108.
3. Define the drives in the library:
This example uses the default for the drive's element address, which is to have
the server obtain the element number from the drive itself at the time that the
path is defined.
The element address is a number that indicates the physical location of a drive
within an automated library. The server needs the element address to connect
the physical location of the drive to the drive's SCSI address. You can have the
server obtain the element number from the drive itself at the time that the path
is defined, or you can specify the element number when you define the drive.
Depending on the capabilities of the library, the server may not be able to
automatically detect the element address. In this case you must supply the
element address when you define the drive. If you need the element numbers,
check the device worksheet filled out in step 7 on page 60. Element numbers for
many libraries are available at www.ibm.com/software/sysmgmt/products/
support/IBMTivoliStorageManager.html.
See Defining Drives on page 107. For more information about paths, see
Defining Paths on page 108.
4. Define a path from the server to each drive. The DEVICE parameter specifies
the device driver's name for the drive, which is the device special file name.
For more about device special file names, see Determining Device Special File
Names on page 62.
If you did not include the element address when you defined the drive, the
server now queries the library to obtain the element address for the drive.
For more information about paths, see Defining Paths on page 108.
5. Classify the drives according to type by defining Tivoli Storage Manager device
classes, which specify the recording formats of the drives.
Note: Do not use the DRIVE format, which is the default. Because the drives
are different types, Tivoli Storage Manager uses the format specification
to select a drive. The results of using the DRIVE format in a mixed
media library are unpredictable.
define devclass dlt_class library=mixedlib devtype=dlt format=dlt40
define devclass lto_class library=mixedlib devtype=lto format=ultriumc
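The note above can be made concrete with a small sketch. This is an illustrative Python model, not Tivoli Storage Manager code; the drive names and the `eligible_drives` function are invented for the example, and the recording-format values mirror the two device classes defined above.

```python
def eligible_drives(drives, devclass_format):
    """Illustrative model of drive selection by recording format.

    drives maps a drive name to the set of recording formats that
    drive supports. A specific format (for example 'DLT40') matches
    only the drives that support it; the generic 'DRIVE' format
    matches every drive, which is why it is unpredictable in a
    library that mixes device types.
    """
    if devclass_format == "DRIVE":
        return sorted(drives)           # any drive could be picked
    return sorted(name for name, fmts in drives.items()
                  if devclass_format in fmts)

drives = {"DLT_DRIVE": {"DLT40"}, "LTO_DRIVE": {"ULTRIUMC"}}
print(eligible_drives(drives, "DLT40"))    # only the DLT drive qualifies
print(eligible_drives(drives, "DRIVE"))    # either drive -- unpredictable
```

With a specific format, the server can only pick a compatible drive; with DRIVE, either drive qualifies, and a volume could be mounted in a drive that cannot read or write it.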
6. Create a storage pool for each of the device classes you just defined.
Key choices:
a. Scratch volumes are empty volumes that are labeled and available for use.
If you allow scratch volumes for the storage pool by specifying a value for
the maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. Collocation is turned off by default. Collocation is a process by which the
server attempts to keep all files belonging to a client node or client file
space on a minimal number of volumes. Once clients begin storing data in a
storage pool with collocation off, you cannot easily change the data in the
storage pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see Keeping a Client's Files Together:
Collocation on page 208 and How Collocation Affects Reclamation on
page 220.
Each volume used by a server for any purpose must have a unique name. This
requirement applies to all volumes, whether the volumes are used for storage
pools, or used for operations such as database backup or export. The requirement
also applies to volumes that reside in different libraries but that are used by the
same server.
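The uniqueness requirement above can be checked mechanically. The following is an illustrative Python sketch, not part of Tivoli Storage Manager; the function name `duplicate_volume_names` and the sample inventory are invented for the example.

```python
from collections import Counter

def duplicate_volume_names(volumes_by_library):
    """Find volume names used more than once across all libraries.

    volumes_by_library maps a library name to the list of volume
    names it holds. Because one server requires every volume name
    to be unique -- even across different libraries and different
    uses such as database backup or export -- any name that appears
    twice is an error.
    """
    counts = Counter(name
                     for vols in volumes_by_library.values()
                     for name in vols)
    return sorted(name for name, n in counts.items() if n > 1)

inventory = {"AUTODLTLIB": ["DLT001", "DLT002"],
             "MIXEDLIB":   ["DLT002", "LTO001"]}
print(duplicate_volume_names(inventory))   # ['DLT002'] -- must be relabeled
```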
The procedures for volume check-in and labeling are the same whether the library
contains drives of a single device type, or drives of multiple device types.
To check in and label volumes, do the following:
1. Check in the library inventory. The following shows two examples. In both
cases, the server uses the name on the barcode label as the volume name.
v Check in volumes that are already labeled:
checkin libvolume autodltlib search=yes status=scratch checklabel=barcode
v If you use scratch volumes, ensure that enough are available. As volumes are
used, you may also need to increase the number of scratch volumes allowed
in the storage pool that you defined for this library.
v If you want to use private volumes in addition to or instead of scratch
volumes in the library, define volumes to the storage pool you defined. The
volumes you define must have been already labeled and checked in. See
Defining Storage Pool Volumes on page 191.
Using a SAN with Tivoli Storage Manager allows the following functions:
v Multiple Tivoli Storage Manager servers share storage devices.
v Tivoli Storage Manager client systems directly access storage devices, both tape
libraries and disk storage, that are defined to a Tivoli Storage Manager server
(LAN-free data movement). Storage agents installed and configured on the client
systems perform the data movement. See Configuring IBM Tivoli Storage
Manager for LAN-free Data Movement on page 104.
The following tasks are required for Tivoli Storage Manager servers to share library
devices on a SAN:
1. Set up server-to-server communications.
Before Tivoli Storage Manager servers can share a storage device on a SAN, you
must set up server communications. This requires configuring each server as you
would for Enterprise Administration, which means you define the servers to each
other using the cross-define function. See Setting Up Communications Among
Servers on page 472 for details. For a discussion about the interaction between
library clients and the library manager in processing Tivoli Storage Manager
operations, see Performing Operations with Shared Libraries on page 148.
Note: You can configure a SCSI library so that it contains all drives of the same
device type or so that it contains drives of different device types. You can
modify the procedure described for configuring a library for use by one
server (Configuration with Multiple Drive Device Types on page 74) and
use it for configuring a shared library.
Setting up the Library Manager Server
You must set up the server that is the library manager before you set up servers
that are the library clients.
Use the following procedure as an example of how to set up a server as a library
manager. The server is named ASTRO.
1. Define a shared SCSI library named SANGROUP:
define library sangroup libtype=scsi shared=yes
This example uses the default for the library's serial number, which is to have
the server obtain the serial number from the library itself at the time that the
path is defined. Depending on the capabilities of the library, the server may not
be able to automatically detect the serial number. In this case, the server will
not record a serial number for the device, and will not be able to confirm the
identity of the device when you define the path or when the server uses the
device.
2. Define a path from the server to the library.
If you did not include the serial number when you defined the library, the
server now queries the library to obtain this information. If you did include the
serial number when you defined the library, the server verifies what you
defined and issues a message if there is a mismatch.
For more information about paths, see Defining Paths on page 108.
3. Define the drives in the library:
define drive sangroup drivea
define drive sangroup driveb
This example uses the default for the drive's serial number, which is to have
the server obtain the serial number from the drive itself at the time that the
path is defined. Depending on the capabilities of the drive, the server may not
be able to automatically detect the serial number. In this case, the server will
not record a serial number for the device, and will not be able to confirm the
identity of the device when you define the path or when the server uses the
device.
This example also uses the default for the drive's element address, which is to
have the server obtain the element number from the drive itself at the time that
the path is defined.
The element address is a number that indicates the physical location of a drive
within an automated library. The server needs the element address to connect
the physical location of the drive to the drive's SCSI address. You can have the
server obtain the element number from the drive itself at the time that the path
is defined, or you can specify the element number when you define the drive.
Depending on the capabilities of the library, the server may not be able to
automatically detect the element address. In this case you must supply the
element address when you define the drive. If you need the element numbers,
check the device worksheet filled out in step 7 on page 60. Element numbers for
many libraries are available at www.ibm.com/software/sysmgmt/products/
support/IBMTivoliStorageManager.html.
4. Define a path from the server to each drive:
define path astro drivea srctype=server desttype=drive
   library=sangroup device=/dev/rmt4
define path astro driveb srctype=server desttype=drive
   library=sangroup device=/dev/rmt5
If you did not include the serial number or element address when you defined
the drive, the server now queries the drive or the library to obtain this
information.
For more information about paths, see Defining Paths on page 108.
5. Define all the device classes that are associated with the shared library.
define devclass tape library=sangroup devtype=3570
6. Check in the library inventory. The following shows two examples. In both
cases, the server uses the name on the barcode label as the volume name.
v Check in volumes that are already labeled:
checkin libvolume sangroup search=yes status=scratch checklabel=barcode
7. Set up any required storage pools for the shared library with a maximum of 50
scratch volumes.
define stgpool backtape tape maxscratch=50
Setting up the Library Client Servers
Use the following procedure as an example of how to set up a Tivoli Storage
Manager server named JUDY as a library client.
2. Define the shared library named SANGROUP, and identify the library manager
server's name as the primary library manager. Ensure that the library name is
the same as the library name on the library manager:
define library sangroup libtype=shared primarylibmanager=astro
Chapter 5. Configuring Storage Devices
3. Perform this step from the library manager. Define a path from the library client
server to each drive. The device name should reflect the way the library client
system sees the device:
define path judy drivea srctype=server desttype=drive
library=sangroup device=/dev/rmt6
define path judy driveb srctype=server desttype=drive
library=sangroup device=/dev/rmt7
For more information about paths, see Defining Paths on page 108.
4. Return to the library client for the remaining steps. Define all the device classes
that are associated with the shared library.
Set the parameters for the device class the same on the library client as on the
library manager. Making the device class names the same on both servers is
also a good practice, but is not required.
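The parameter-matching rule above can be sketched as a simple comparison. This is an illustrative Python model, not Tivoli Storage Manager code; the function name `devclass_mismatches` and the sample parameter values (including the MOUNTRETENTION figures) are invented for the example.

```python
def devclass_mismatches(manager_params, client_params):
    """Compare a device class as defined on the library manager and
    on a library client.

    The parameter values must be the same on both servers, even
    though the device class names need not match. Returns the
    parameters whose values differ (or exist on only one side),
    which should be corrected on the library client.
    """
    keys = set(manager_params) | set(client_params)
    return {k: (manager_params.get(k), client_params.get(k))
            for k in keys
            if manager_params.get(k) != client_params.get(k)}

manager = {"devtype": "3570", "library": "SANGROUP", "mountretention": 60}
client  = {"devtype": "3570", "library": "SANGROUP", "mountretention": 5}
print(devclass_mismatches(manager, client))  # only mountretention differs
```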
5. Define the storage pool, BACKTAPE, that will use the shared library.
define stgpool backtape tape maxscratch=50
See also Categories in an IBM 3494 Library and Enabling Support for IBM 3590
Drives in Existing 3494 Libraries on page 81.
Categories in an IBM 3494 Library
The library manager built into the IBM 3494 library tracks the category number of
each volume in the library. A single category number identifies all volumes used
for the same purpose or application. These category numbers are useful when
multiple systems share the resources of a single library.
Attention: If other systems or other Tivoli Storage Manager servers connect to the
same 3494 library, each must use a unique set of category numbers. Otherwise, two
or more systems may try to use the same volume, and cause a corruption or loss
of data.
Typically, a software application that uses a 3494 library uses volumes in one or
more categories that are reserved for that application. To avoid loss of data, each
application sharing the library must have unique categories. When you define a
3494 library to the server, you can use the PRIVATECATEGORY and
SCRATCHCATEGORY parameters to specify the category numbers for private and
scratch Tivoli Storage Manager volumes in that library. See IBM Tivoli Storage
Manager Volumes on page 38 for more information on private and scratch
volumes.
When a volume is first inserted into the library, either manually or automatically at
the convenience I/O station, the volume is assigned to the insert category
(X'FF00'). A software application such as Tivoli Storage Manager can contact the
library manager to change a volume's category number. For Tivoli Storage
Manager, you use the CHECKIN LIBVOLUME command (see Checking New
Volumes into a Library on page 137).
The number of categories that the server requires depends on whether you have
enabled support for 3590 drives. If support is not enabled for 3590 drives, the
server reserves two category numbers in each 3494 library that it accesses: one for
private volumes and one for scratch volumes. If you enable 3590 support, the
server reserves three categories in the 3494 library: private, scratch for 3490 drives,
and scratch for 3590 drives.
For example, suppose you want the server to use the following categories for a
new library named MY3494, which has 3590 support enabled. You define the
library as follows:
define library my3494 libtype=349x privatecategory=400 scratchcategory=401
For this example, the server then uses the following categories in the new MY3494
library:
v 400 (X'190') Private volumes (for both 3490 and 3590 drives)
v 401 (X'191') Scratch volumes for 3490 drives
v 402 (X'192') Scratch volumes for 3590 drives
To avoid overlapping categories, do not specify a number for the private category
that is equal to the scratch category plus 1.
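The category arithmetic above can be sketched as follows. This is an illustrative Python model, not Tivoli Storage Manager code; the function name `reserved_categories` is invented, and the numbers come from the MY3494 example above.

```python
def reserved_categories(private, scratch, enable_3590):
    """Illustrative model of 3494 category reservation.

    The server reserves a private and a scratch category; with 3590
    support enabled it also reserves scratch + 1 for 3590 scratch
    volumes, which is why PRIVATECATEGORY must never equal
    SCRATCHCATEGORY + 1.
    """
    cats = {"private": private, "scratch": scratch}
    if enable_3590:
        cats["scratch_3590"] = scratch + 1
    overlap = enable_3590 and private == scratch + 1
    return cats, overlap

# The MY3494 example: private 400 (X'190'), scratch 401 (X'191')
cats, overlap = reserved_categories(400, 401, enable_3590=True)
print(cats)      # scratch_3590 becomes 402 (X'192')
print(overlap)   # False -- no category collision
```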
Attention: The default values for the categories may be acceptable in most cases.
However, if you connect other systems or Tivoli Storage Manager servers to a
single 3494 library, ensure that each uses unique category numbers. Otherwise, two
or more systems may try to use the same volume, and cause a corruption or loss
of data.
Also, if you share a 3494 library with other Tivoli Storage Manager servers or other
applications or systems, be careful when enabling 3590 support to prevent loss of
data. See Enabling Support for IBM 3590 Drives in Existing 3494 Libraries. For a
discussion regarding the interaction between library clients and the library
manager in processing Tivoli Storage Manager operations, see Performing
Operations with Shared Libraries on page 148.
Enabling Support for IBM 3590 Drives in Existing 3494 Libraries
If you are
currently sharing a 3494 library with other Tivoli Storage Manager servers or other
applications or systems and you enable support for 3590 drives, you need to be
careful. The server automatically creates a third category for 3590 scratch volumes
by adding one to the existing scratch category for any 3494 libraries defined to
Tivoli Storage Manager.
To prevent loss of data, do one of the following before enabling 3590 support:
v Update other applications and systems to ensure that there is no conflicting use
of category numbers.
v Delete the existing library definition and then define it again using a new set of
category numbers that do not conflict with categories used by other systems or
applications using the library. Do the following:
1. Use an external utility (such as mtlib) to reset all of the Tivoli Storage
Manager volumes to the insert category.
2. Delete the 3494 library definition.
3. Define the 3494 library again, using new category numbers.
Check in the Tivoli Storage Manager volumes that you put in the insert
category in step 1.
4. Specify the volume type. Users with both 3490 and 3590 drives must specify
DEVTYPE=3590 for a 3590 volume type.
After taking steps to prevent data loss, enable 3590 support by adding the
following line to the server options file (dsmserv.opt):
ENABLE3590LIBRARY YES
If you
have two device types (such as 3590E and 3590H), you define two libraries.
Then you define drives and device classes for each library. In each device class
definition, you can use the FORMAT parameter with a value of DRIVE, if you
choose.
1. Define a 3494 library named 3494LIB:
define library 3494lib libtype=349x
2. Define a path from the server to the library. The DEVICE parameter specifies
the device special file for the LMCP.
See Defining Libraries on page 106 and SCSI Libraries on page 33. For more
information about paths, see Defining Paths on page 108.
3. Define the drives in the library:
define drive 3494lib drive01
define drive 3494lib drive02
4. Define a path from the server to each drive. The DEVICE parameter gives
the device special file name for the drive. For more about device names, see
Determining Device Special File Names on page 62. For more information
about paths, see Defining Paths on page 108.
5. Classify drives according to type by defining Tivoli Storage Manager device
classes. For example, for the two 3590 drives in the 3494LIB library, use the
following command to define a device class named 3494_CLASS:
define devclass 3494_class library=3494lib devtype=3590 format=drive
7. Define a storage pool named 3494_POOL associated with the device class
named 3494_CLASS.
define stgpool 3494_pool 3494_class maxscratch=20
Key choices:
a. Scratch volumes are empty volumes that are labeled and available for use.
If you allow scratch volumes for the storage pool by specifying a value for
the maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. Collocation is turned off by default. Collocation is a process by which the
server attempts to keep all files belonging to a client node or client file
space on a minimal number of volumes. Once clients begin storing data in a
storage pool with collocation off, you cannot easily change the data in the
storage pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see Keeping a Client's Files Together:
Collocation on page 208 and How Collocation Affects Reclamation on
page 220.
For more information, see Defining or Updating Primary Storage Pools on
page 182.
Configuration with Multiple Drive Device Types
In this example, the 3494 library contains two IBM 3590E tape drives and two IBM
3590H tape drives.
1. Define two libraries, one for each type of drive. For example, to define
3590ELIB and 3590HLIB enter the following commands:
define library 3590elib libtype=349x scratchcategory=301 privatecategory=300
define library 3590hlib libtype=349x scratchcategory=401 privatecategory=400
Note: Specify scratch and private categories explicitly. If you accept the
category defaults for both library definitions, different types of media
will be assigned to the same categories.
2. Define a path from the server to each library.
The DEVICE parameter specifies the device special file for the LMCP.
For more information about paths, see Defining Paths on page 108.
3. Define the drives, ensuring that they are associated with the appropriate
libraries.
v Define the 3590E drives to 3590ELIB.
define drive 3590elib 3590e_drive1
define drive 3590elib 3590e_drive2
Note: Tivoli Storage Manager does not prevent you from associating a drive
with the wrong library.
4. Define a path from the server to each drive. The DEVICE parameter gives
the device special file name for the drive. For more about device names, see
Determining Device Special File Names on page 62. For more information
about paths, see Defining Paths on page 108.
5. Classify the drives according to type by defining Tivoli Storage Manager device
classes, which specify the recording formats of the drives. Because there are
separate libraries, you can enter a specific recording format, for example 3590H,
or you can enter DRIVE.
define devclass 3590e_class library=3590elib devtype=3590 format=3590e
define devclass 3590h_class library=3590hlib devtype=3590 format=3590h
7. Create the storage pools to use the devices in the device classes you just
defined. For example, define a storage pool named 3590EPOOL associated with
the device class 3590E_CLASS, and 3590HPOOL associated with the device
class 3590H_CLASS:
define stgpool 3590epool 3590e_class maxscratch=20
define stgpool 3590hpool 3590h_class maxscratch=20
Key choices:
a. Scratch volumes are labeled, empty volumes that are available for use. If
you allow scratch volumes for the storage pool by specifying a value for the
maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. Collocation is turned off by default. Collocation is a process by which the
server attempts to keep all files belonging to a client node or client file
space on a minimal number of volumes. Once clients begin storing data in a
storage pool with collocation off, you cannot easily change the data in the
storage pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see Keeping a Client's Files Together:
Collocation on page 208 and How Collocation Affects Reclamation on
page 220.
Each volume used by a server for any purpose must have a unique name. This
requirement applies to all volumes, whether the volumes are used for storage
pools, or used for operations such as database backup or export. The requirement
also applies to volumes that reside in different libraries.
The procedures for volume check-in and labeling are the same whether the library
contains drives of a single device type, or drives of multiple device types.
Attention: If your library has drives of multiple device types, you defined two
libraries to the Tivoli Storage Manager server in the procedure in Configuration
with Multiple Drive Device Types on page 84. The two Tivoli Storage Manager
libraries represent the one physical library. The check-in process finds all available
volumes that are not already checked in. You must check in media separately to
each defined library. Ensure that you check in volumes to the correct Tivoli Storage
Manager library.
Do the following:
1. Check in the library inventory. The following shows two examples.
v Check in volumes that are already labeled:
checkin libvolume 3494lib search=yes status=scratch checklabel=no
v Have clients back up data directly to tape. For details, see Configuring Policy
for Direct-to-Tape Backups on page 332.
v Have clients back up data to disk. The data is later migrated to tape. For details,
see Overview: The Storage Pool Hierarchy on page 194.
You must first set up the device on the server system. This involves the following
tasks:
1. Set up the 3494 Library Manager Control Point (LMCP). This procedure is
described in IBM TotalStorage Tape Device Drivers Installation and User's Guide.
2. Physically attach the devices to the SAN or to the server hardware.
3. On each server system that will access the library and drives, install and
configure the appropriate device drivers for the devices.
4. Determine the device names that are needed to define the devices to Tivoli
Storage Manager.
For details, see Attaching an Automated Library Device on page 60 and
Installing and Configuring Device Drivers on page 61.
Note: You can configure a 3494 library so that it contains drives all of the same
device type, or so that it contains drives of multiple device types. The
procedure is similar to the one described for a LAN (Configuration with
Multiple Drive Device Types on page 84).
1. Define a 3494 library named 3494SAN:
define library 3494san libtype=349x shared=yes
2. Define a path from the server to the library. The DEVICE parameter specifies
the device special file for the LMCP.
For more information about paths, see Defining Paths on page 108.
3. Define the drives in the library:
define drive 3494san drivea
define drive 3494san driveb
4. Define a path from the server to each drive. For more information about
paths, see Defining Paths on page 108.
5. Define all the device classes that are associated with the shared library.
define devclass tape library=3494san devtype=3590
6. Check in the library inventory. The following shows two examples. In both
cases, the server uses the name on the barcode label as the volume name.
To check in volumes that are already labeled, use the following command:
checkin libvolume 3494san search=yes status=scratch checklabel=no
7. Set up any required storage pools for the shared library with a maximum of 50
scratch volumes.
define stgpool 3494_sanpool tape maxscratch=50
2. Define a shared library named 3494SAN, and identify the library manager:
define library 3494san libtype=shared primarylibmanager=manager
Note: Ensure that the library name agrees with the library name on the library
manager.
3. Perform this step from the library manager. Define a path from the library client
server to each drive. The device name should reflect the way the library client
system sees the device.
For more information about paths, see Defining Paths on page 108.
4. Return to the library client for the remaining steps. Define all the device classes
that are associated with the shared library.
Set the parameters for the device class the same on the library client as on the
library manager. Making the device class names the same on both servers is
also a good practice, but is not required.
5. Define the storage pool, BACKTAPE, that will use the shared library.
define stgpool backtape 3494_class maxscratch=50
If you have been sharing an IBM 3494 library among Tivoli Storage Manager
servers by using the 3494SHARED option in the dsmserv.opt file, you can migrate
to sharing the library by using a library manager and library clients. To help
ensure a smoother migration and to ensure that all tape volumes that are being
used by the servers get associated with the correct servers, perform the following
migration procedure.
1. Do the following on each server that is sharing the 3494 library:
a. Update the storage pools using the UPDATE STGPOOL command. Set the
value for the HIGHMIG and LOWMIG parameters to 100%.
b. Stop the server by issuing the HALT command.
c. Edit the dsmserv.opt file and make the following changes:
1) Comment out the 3494SHARED YES option line
2) Activate the DISABLESCHEDS YES option line if it is not active
3) Activate the EXPINTERVAL X option line if it is not active and change
its value to 0, as follows:
EXPINTERVAL 0
2. Set up the library manager on the Tivoli Storage Manager server of your choice
(see Setting up Server Communications on page 77 and Setting up the
Library Manager Server on page 78).
3. Do the following on the remaining Tivoli Storage Manager servers (the library
clients):
a. Save the volume history file.
b. Check out all the volumes in the library inventory. Use the CHECKOUT
LIBVOLUME command with REMOVE=NO.
c. Follow the library client setup procedure (Setting up the Library Client
Servers on page 88).
4. Do the following on the library manager server:
a. Check in each library client's volumes. Use the CHECKIN LIBVOLUME
command with the following parameter settings:
v STATUS=PRIVATE
v OWNER=<library client name>
Note: You can use the saved volume history files from the library clients
as a guide.
b. Check in any remaining volumes as scratch volumes. Use the CHECKIN
LIBVOLUME command with STATUS=SCRATCH.
5. Halt all the servers.
6. Edit the dsmserv.opt file and comment out the following lines in the file:
DISABLESCHEDS YES
EXPINTERVAL 0
Tivoli Storage Manager uses the capability of the 3494 library manager, which
allows you to partition a library between multiple Tivoli Storage Manager servers.
Library partitioning differs from library sharing on a SAN in that with
partitioning, there are no Tivoli Storage Manager library managers or library
clients.
When you partition a library on a LAN, each server has its own access to the same
library. For each server, you define a library with tape volume categories unique to
that server. Each drive that resides in the library is defined to only one server. Each
server can then access only those drives it has been assigned. As a result, library
partitioning does not allow dynamic sharing of drives or tape volumes because
they are pre-assigned to different servers using different names and category
codes.
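The pre-assignment rule above can be validated mechanically. The following is an illustrative Python sketch, not Tivoli Storage Manager code; the function name `partition_conflicts` and the sample drive assignments are invented for the example.

```python
def partition_conflicts(drive_assignments):
    """Check a 3494 partitioning scheme.

    drive_assignments maps each server name to the drives assigned
    to it. Because partitioning gives each drive to exactly one
    server, any drive claimed by two servers is a configuration
    error. Returns (drive, first_owner, second_claimant) tuples.
    """
    owners = {}
    conflicts = []
    for server, drives in sorted(drive_assignments.items()):
        for drive in drives:
            if drive in owners:
                conflicts.append((drive, owners[drive], server))
            else:
                owners[drive] = server
    return conflicts

assignments = {"ASTRO": ["DRIVE1", "DRIVE2"], "JUDY": ["DRIVE3", "DRIVE4"]}
print(partition_conflicts(assignments))   # [] -- a valid partition
```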
In the following example, an IBM 3494 library containing four drives is attached to
a Tivoli Storage Manager server named ASTRO and to another Tivoli Storage
Manager server named JUDY.
Note: Tivoli Storage Manager can also share the drives in a 3494 library with other
servers by enabling the 3494SHARED server option. When this option is
enabled, you can define all of the drives in a 3494 library to multiple
servers, if there are SCSI connections from all drives to the systems on
which the servers are running. This type of configuration is not
recommended, however, because when this type of sharing takes place there
is a risk of contention between servers for drive usage, and operations can
fail.
You must first set up the 3494 library on the server system. This involves the
following tasks:
1. Set up the 3494 Library Manager Control Point (LMCP). This procedure is
described in IBM TotalStorage Tape Device Drivers Installation and User's Guide.
2. Physically attach the devices to the server hardware.
3. On each server system that will access the library and drives, install and
configure the appropriate device drivers for the devices.
4. Determine the device names that are needed to define the devices to Tivoli
Storage Manager.
For details, see Attaching an Automated Library Device on page 60 and
Installing and Configuring Device Drivers on page 61.
1. Define a 3494 library named 3494LIB, specifying private and scratch category
numbers that are unique to the server ASTRO.
2. Define a path from the server to the library. The DEVICE parameter specifies
the device special file for the LMCP.
See Defining Libraries on page 106 and SCSI Libraries on page 33. For more
information about paths, see Defining Paths on page 108.
3. Define the drives that are partitioned to server ASTRO:
define drive 3494lib drive1
define drive 3494lib drive2
4. Define the path from the server, ASTRO, to each of the drives:
define path astro drive1 srctype=server desttype=drive library=3494lib
device=/dev/rmt0
define path astro drive2 srctype=server desttype=drive library=3494lib
device=/dev/rmt1
The DEVICE parameter gives the device special file name for the drive. For
more about device names, see Determining Device Special File Names on
page 62. For more information about paths, see Defining Paths on page 108.
5. Classify drives according to type by defining Tivoli Storage Manager device
classes. For example, to classify the two drives in the 3494LIB library, use the
following command to define a device class named 3494_CLASS:
define devclass 3494_class library=3494lib devtype=3590 format=drive
If the library contains drives of different device types, use
specific recording formats when defining the device classes. See Configuration
with Multiple Drive Device Types on page 84.
6. Verify your definitions by issuing the following commands:
query library
query drive
query path
query devclass
Key choices:
a. Scratch volumes are empty volumes that are labeled and available for use.
If you allow scratch volumes for the storage pool by specifying a value for
the maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. Collocation is turned off by default. Collocation is a process by which the
server attempts to keep all files belonging to a client node or client file
space on a minimal number of volumes. Once clients begin storing data in a
storage pool with collocation off, you cannot easily change the data in the
storage pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see Keeping a Client's Files Together:
Collocation on page 208 and How Collocation Affects Reclamation on
page 220.
For more information, see Defining or Updating Primary Storage Pools on
page 182.
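The storage pool definition is where these choices are made. For example, a primary storage pool that uses the 3494_CLASS device class and allows up to 20 scratch volumes might be defined as follows (the pool name and scratch limit are illustrative, not from this procedure):
define stgpool 3494_pool 3494_class maxscratch=20
Specifying MAXSCRATCH=0 instead would require you to define every volume explicitly.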
The DEVICE parameter specifies the device special file for the LMCP.
See Defining Libraries on page 106 and SCSI Libraries on page 33. For more
information about paths, see Defining Paths on page 108.
3. Define the drives that are partitioned to server JUDY:
define drive 3494lib drive3
define drive 3494lib drive4
4. Define the path from the server, JUDY, to each of the drives:
define path judy drive3 srctype=server desttype=drive library=3494lib
device=/dev/rmt2
define path judy drive4 srctype=server desttype=drive library=3494lib
device=/dev/rmt3
The DEVICE parameter gives the device special file name for the drive. For
more about device names, see Determining Device Special File Names on
page 62. For more information about paths, see Defining Paths on page 108.
5. Classify drives according to type by defining Tivoli Storage Manager device
classes. For example, to classify the two drives in the 3494LIB library, use the
following command to define a device class named 3494_CLASS:
define devclass 3494_class library=3494lib devtype=3590 format=drive
6. Verify your definitions by issuing the following commands:
query library
query drive
query path
query devclass
Key choices:
a. Scratch volumes are empty volumes that are labeled and available for use.
If you allow scratch volumes for the storage pool by specifying a value for
the maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. Collocation is turned off by default. Collocation is a process by which the
server attempts to keep all files belonging to a client node or client file
space on a minimal number of volumes. Once clients begin storing data in a
storage pool with collocation off, you cannot easily change the data in the
storage pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see Keeping a Client's Files Together:
Collocation on page 208 and How Collocation Affects Reclamation on
page 220.
For more information, see Defining or Updating Primary Storage Pools on
page 182.
1. Define an ACSLS library named ACSLIB:
define library acslib libtype=acsls acsid=1
The ACSID parameter specifies the number that the Automatic Cartridge System
System Administrator (ACSSA) assigned to the library. Issue QUERY ACS to
your ACSLS system to determine the number for your library ID.
2. Define the drives in the library:
define drive acslib drive01 acsdrvid=1,2,3,4
define drive acslib drive02 acsdrvid=1,2,3,5
The ACSDRVID parameter specifies the ID of the drive that is being accessed.
The drive ID is a set of numbers that indicate the physical location of a drive
within an ACSLS library. This drive ID must be specified as a, l, p, d, where a is
the ACSID, l is the LSM (library storage module), p is the panel number, and d
is the drive ID. The server needs the drive ID to connect the physical location
of the drive to the drive's SCSI address. See the StorageTek documentation for
details.
The DEVICE parameter gives the device special file name for the drive. For
more about device names, see Determining Device Special File Names on
page 62. For more information about paths, see Defining Paths on page 108.
4. Classify drives according to type by defining Tivoli Storage Manager device
classes. For example, to classify the two drives in the ACSLIB library, use the
following command to define a device class named ACS_CLASS:
define devclass acs_class library=acslib devtype=ecartridge format=drive
5. Verify your definitions by issuing the following commands:
query library
query drive
query path
query devclass
Key choices:
a. Scratch volumes are labeled, empty volumes that are available for use. If
you allow scratch volumes for the storage pool by specifying a value for the
maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. Collocation is turned off by default. Collocation is a process by which the
server attempts to keep all files belonging to a client node or client file
space on a minimal number of volumes. Once clients begin storing data in a
storage pool with collocation off, you cannot easily change the data in the
storage pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see Keeping a Client's Files Together:
Collocation on page 208 and How Collocation Affects Reclamation on
page 220.
For more information, see Defining or Updating Primary Storage Pools on
page 182.
Chapter 5. Configuring Storage Devices
The following example shows how to set up an ACSLS library with a mix of two
9840 drives and two 9940 drives.
1. Define two ACSLS libraries that use the same ACSID. For example, to define
9840LIB and 9940LIB, enter the following commands:
define library 9840lib libtype=acsls acsid=1
define library 9940lib libtype=acsls acsid=1
The ACSID parameter specifies the number that the Automatic Cartridge System
System Administrator (ACSSA) assigned to the libraries. Issue QUERY ACS to
your ACSLS system to determine the number for your library ID.
2. Define the drives, ensuring that they are associated with the appropriate
libraries.
Note: Tivoli Storage Manager does not prevent you from associating a drive
with the wrong library.
v Define the 9840 drives to 9840LIB.
The ACSDRVID parameter specifies the ID of the drive that is being accessed.
The drive ID is a set of numbers that indicate the physical location of a drive
within an ACSLS library. This drive ID must be specified as a, l, p, d, where a is
the ACSID, l is the LSM (library storage module), p is the panel number, and d
is the drive ID. The server needs the drive ID to connect the physical location
of the drive to the drive's SCSI address. See the StorageTek documentation for
details.
3. Define a path from the server to each drive. Ensure that you specify the correct
library.
v For the 9840 drives:
The DEVICE parameter gives the device special file name for the drive. For
more about device names, see Determining Device Special File Names on
page 62. For more information about paths, see Defining Paths on page 108.
4. Classify the drives according to type by defining Tivoli Storage Manager device
classes, which specify the recording formats of the drives. Because there are
separate libraries, you can enter a specific recording format, for example 9840,
or you can enter DRIVE. For example, to classify the drives in the two libraries,
use the following commands to define one device class for each type of drive:
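The device class definitions themselves are not shown in this excerpt; assuming both drive families use the ECARTRIDGE device type (as in the earlier ACSLS example), they might look like this (class names are illustrative):
define devclass 9840_class library=9840lib devtype=ecartridge format=9840
define devclass 9940_class library=9940lib devtype=ecartridge format=9940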
5. Verify your definitions by issuing the following commands:
query library
query drive
query path
query devclass
Key choices:
a. Scratch volumes are labeled, empty volumes that are available for use. If
you allow scratch volumes for the storage pool by specifying a value for the
maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. Collocation is turned off by default. Collocation is a process by which the
server attempts to keep all files belonging to a client node or client file
space on a minimal number of volumes. Once clients begin storing data in a
storage pool with collocation off, you cannot easily change the data in the
storage pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see Keeping a Client's Files Together:
Collocation on page 208 and How Collocation Affects Reclamation on
page 220.
Each volume used by a server for any purpose must have a unique name. This
requirement applies to all volumes, whether the volumes are used for storage
pools, or used for operations such as database backup or export. The requirement
also applies to volumes that reside in different libraries.
Attention: If your library has drives of multiple device types, you defined two
libraries to the Tivoli Storage Manager server in the procedure in Configuration
with Multiple Drive Device Types on page 96. The two Tivoli Storage Manager
libraries represent the one physical library. The check-in process finds all available
volumes that are not already checked in. You must check in media separately to
each defined library. Ensure that you check in volumes to the correct Tivoli Storage
Manager library.
1. Check in the library inventory. The following shows examples for libraries with
a single drive device type and with multiple drive device types.
v Check in volumes that are already labeled:
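For example, to have the server search a library named 3494LIB for labeled volumes and check them in as scratch volumes, a command along these lines could be used (the library name and parameter values are illustrative, not from the original examples):
checkin libvolume 3494lib search=yes status=scratch checklabel=yes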
Support for removable file devices allows portability of media between UNIX
systems. It also allows this media to be used to transfer data between systems that
support the media. Removable file support allows the server to read data from a
FILE device class that is copied to removable file media through third-party
software. The media is then usable as input media on a target Tivoli Storage
Manager server that uses the REMOVABLEFILE device class for input.
Note: Software for writing CDs may not work consistently across platforms.
Use a MAXCAPACITY value that is less than one CD's usable space to allow for a
one-to-one match between files from the FILE device class and copies that are on
CD. Use the DEFINE DEVCLASS or UPDATE DEVCLASS commands to set the
MAXCAPACITY parameter of the FILE device class to a value less than 650MB.
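For example, assuming the FILE device class is named FILE and writes to /home/user1 (names chosen to be consistent with the export example that follows, though the original definition is not shown here):
define devclass file devtype=file maxcapacity=640m directory=/home/user1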
2. Export the node. This command results in a file name /home/user1/CDR03 that
contains the export data for node USER1.
export node user1 filedata=all devclass=file vol=cdr03
You can use software for writing CDs to create a CD with volume label CDR03
that contains a single file that is also named CDR03.
Server B
1. Follow the manufacturer's instructions to attach the device to your server.
2. Issue this command on your system to mount the CD.
mount -r -v cdrfs /dev/cd0 /cdrom
-r
Specifies that the CD is mounted read-only
-v cdrfs
Specifies that the media has a CD file system
/dev/cd0
Specifies the physical description of the first CD on the system
/cdrom
Specifies the mount point of the first CD drive
Note: CD drives lock while the file system is mounted. This prevents use of
the eject button on the drive.
3. Ensure that the media is labeled. The software that you use for making a CD
also labels the CD. Before you define the drive, you must put formatted,
labeled media in the drive. For label requirements, see Labeling Requirements
for Optical and Other Removable File Devices on page 100. When you define
the drive, the server verifies that a valid file system is present.
4. Define a manual library named CDROM:
define library cdrom libtype=manual
5. Define the drive in the library:
define drive cdrom cddrive
6. Define a path from the server to the drive at mount point /cdrom:
define path serverb cddrive srctype=server desttype=drive
library=cdrom device=/cdrom
For more information about paths, see Defining Paths on page 108.
7. Define a device class with a device type of REMOVABLEFILE. The device type
must be REMOVABLEFILE.
define devclass cdrom devtype=removablefile library=cdrom
8. Issue the following Tivoli Storage Manager command to import the node data
on the CD volume CDR03.
import node user1 filedata=all devclass=cdrom vol=cdr03
Note: You do not define the drives to the server in an externally managed
library.
3. Define a path from the server to the library:
define path server1 mediamgr srctype=server desttype=library
externalmanager=/usr/sbin/mediamanager
Key choices:
a. Scratch volumes are labeled, empty volumes that are available for use. If
you allow scratch volumes for the storage pool by specifying a value for the
maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. Collocation is turned off by default. Collocation is a process by which the
server attempts to keep all files belonging to a client node or client file
space on a minimal number of volumes. Once clients begin storing data in a
storage pool with collocation off, you cannot easily change the data in the
storage pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see Keeping a Client's Files Together:
Collocation on page 208 and How Collocation Affects Reclamation on
page 220.
For more about device names, see Determining Device Special File Names on
page 62.
For more information about paths, see Defining Paths on page 108.
4. Classify the drives according to type by defining a device class named
TAPEDLT_CLASS. Use FORMAT=DRIVE as the recording format only if all the
drives associated with the device class are identical.
define devclass tapedlt_class library=manualdlt devtype=dlt format=drive
A closer look: When you associate more than one drive to a single device class
through a manual library, ensure that the recording formats and
media types of the devices are compatible. If you have a 4mm
tape drive and a DLT tape drive, you must define separate
manual libraries and device classes for each drive.
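When separate libraries are required, the pattern is simply two library and device class pairs. A sketch (the 4mm names are illustrative; MANUALDLT and the DLT class match the names used elsewhere in this section):
define library manual4mm libtype=manual
define devclass 4mm_class library=manual4mm devtype=4mm format=drive
define library manualdlt libtype=manual
define devclass tapedlt_class library=manualdlt devtype=dlt format=drive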
See Defining and Updating Tape Device Classes on page 165.
5. Verify your definitions by issuing the following commands:
query library
query drive
query path
query devclass
Key choices:
a. Scratch volumes are empty volumes that are labeled and available for use.
If you allow scratch volumes for the storage pool by specifying a value for
the maximum number of scratch volumes, the server can use any scratch
volumes available without further action on your part. If you do not allow
scratch volumes (MAXSCRATCH=0), you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. Collocation is turned off by default. Collocation is a process by which the
server attempts to keep all files belonging to a client node or client file
space on a minimal number of volumes. Once clients begin storing data in a
storage pool with collocation off, you cannot easily change the data in the
storage pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see Keeping a Client's Files Together:
Collocation on page 208 and How Collocation Affects Reclamation on
page 220.
See Defining or Updating Primary Storage Pools on page 182.
Label Volumes
Use the following procedure to ensure that volumes are available to the server.
Keep enough labeled volumes on hand so that you do not run out during an
operation such as client backup. Label and set aside extra scratch volumes for any
potential recovery operations you might have later.
Each volume used by a server for any purpose must have a unique name. This
requirement applies to all volumes, whether the volumes are used for storage
pools, or used for operations such as database backup or export. The requirement
also applies to volumes that reside in different libraries.
Do the following:
1. Label volumes that do not already have a standard label. For example, enter
the following command to use one of the drives to label a volume with the ID
of vol001:
label libvolume manualdlt vol001
6. Configure Tivoli Storage Manager policy for LAN-free data movement for the
client.
For more information on configuring Tivoli Storage Manager for LAN-free data
movement, see the IBM Tivoli Storage Manager Storage Agent User's Guide.
To help you tune the use of your LAN and SAN resources, you can control the
path that data transfers take for clients with the capability of LAN-free data
movement. For each client you can select whether data read and write operations
use:
v The LAN path only
v The LAN-free path only
v Either path
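The choice is made per node with the DATAREADPATH and DATAWRITEPATH parameters of the REGISTER NODE and UPDATE NODE commands. For example, to force LAN-free writes while allowing either path for reads for a node named MERCURY (an illustrative node name):
update node mercury datawritepath=lanfree datareadpath=any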
1. Define the tape library and its associated paths.
Note: Libraries with multiple drive device types can be used with NDMP
operations.
2. Define a device class for NDMP operations.
3. Define the storage pool for backups performed by using NDMP operations.
4. Optional: Select or define a storage pool for storing tables of contents for the
backups.
5. Configure Tivoli Storage Manager policy for NDMP operations.
6. Register the NAS nodes with the server.
7. Define a data mover for the NAS file server.
8. Define the drives and their associated paths.
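As a sketch of steps 6 and 7, registering a NAS node and defining its data mover might look like the following (the node name, password, addresses, and credentials are all illustrative assumptions, and the values depend on your NAS device):
register node nasnode naspass type=nas
define datamover nasnode type=nas hladdress=netapp1.example.com lladdress=10000 userid=root password=naspass dataformat=netappdump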
Defining Libraries
Before you can use a drive, you must first define the library to which the drive
belongs. This is true for both manually mounted drives and drives in automated
libraries. For example, you have several stand-alone tape drives. You can define a
library named MANUALMOUNT for these drives by using the following
command:
define library manualmount libtype=manual
For all libraries other than manual libraries, you define the library and then define
a path from the server to the library. For example, if you have an IBM 3583 device,
you can define a library named ROBOTMOUNT using the following command:
define library robotmount libtype=scsi
Next, you use the DEFINE PATH command. In the path, you must specify the
DEVICE parameter. The DEVICE parameter is required and specifies the device
special file by which the library's robotic mechanism is known.
define path server1 robotmount srctype=server desttype=library
device=/dev/lb0
For more information about paths, see Defining Paths on page 108.
If you have an IBM 3494 Tape Library Dataserver, you can define a library named
AUTOMOUNT using the following command:
define library automount libtype=349x
Next, assuming that you have defined one LMCP whose device name is
/dev/lmcp0, you define a path for the library:
define path server1 automount srctype=server desttype=library
device=/dev/lmcp0
For a library type of SCSI on a SAN, the server can track the library's serial
number. With the serial number, the server can confirm the identity of the device
when you define the path or when the server uses the device.
If you choose, you can specify the serial number when you define the library to
the server. For convenience, the default is to allow the server to obtain the serial
number from the library itself at the time that the path is defined.
If you specify the serial number, the server confirms that the serial number is
correct when you define the path to the library. When you define the path, you can
set AUTODETECT=YES to allow the server to correct the serial number if the
number that it detects does not match what you entered when you defined the
library.
Depending on the capabilities of the library, the server may not be able to
automatically detect the serial number. Not all devices are able to return a serial
number when asked for it by an application such as the server. In this case, the
server will not record a serial number for the device, and will not be able to
confirm the identity of the device when you define the path or when the server
uses the device. See Recovering from Device Changes on the SAN on page 109.
Defining Drives
To inform the server about a drive that can be used to access storage volumes,
issue the DEFINE DRIVE command, followed by the DEFINE PATH command. For
more information about paths, see Defining Paths on page 108. When issuing the
DEFINE DRIVE command, you must provide some or all of the following
information:
Library name
The name of the library in which the drive resides.
Drive name
The name assigned to the drive.
Serial number
The serial number of the drive. The serial number parameter applies only
to drives in SCSI libraries. With the serial number, the server can confirm
the identity of the device when you define the path or when the server
uses the device.
You can specify the serial number if you choose. The default is to allow the
server to obtain the serial number from the drive itself at the time that the
path is defined. If you specify the serial number, the server confirms that
the serial number is correct when you define the path to the drive. When
you define the path, you can set AUTODETECT=YES to allow the server to
correct the serial number if the number that it detects does not match what
you entered when you defined the drive.
Depending on the capabilities of the drive, the server may not be able to
automatically detect the serial number. In this case, the server will not
record a serial number for the device, and will not be able to confirm the
identity of the device when you define the path or when the server uses
the device. See Recovering from Device Changes on the SAN on
page 109.
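For example (the library, drive, serial, and device names are assumptions for illustration), you can supply the serial number when defining the drive and let the server verify it when the path is defined:
define drive autolib drive26 serial=1234567890
define path server1 drive26 srctype=server desttype=drive library=autolib device=/dev/mt5 autodetect=yes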
Element address
The element address of the drive. The ELEMENT parameter applies only
to drives in SCSI libraries. The element address is a number that indicates
the physical location of a drive within an automated library. The server
needs the element address to connect the physical location of the drive to
the drives SCSI address. You can allow the server to obtain the element
number from the drive itself at the time that the path is defined, or you
can specify the element number when you define the drive.
Depending on the capabilities of the library, the server may not be able to
automatically detect the element address. In this case you must supply the
element address when you define the drive, if the library has more than
one drive. If you need the element numbers, check the device worksheet
filled out in step 7 on page 60. Element numbers for many libraries are
available at www.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html.
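For example, to supply the element address explicitly when defining a drive in a SCSI library (the library name, drive name, and element number are illustrative):
define drive autolib drive26 element=82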
For example, to define a drive that belongs to the manual library named MANLIB,
enter this command:
define drive manlib tapedrv3
Next, you define the path from the server to the drive, using the device name used
to access the drive:
define path server1 tapedrv3 srctype=server desttype=drive library=manlib
device=/dev/mt3
Defining Paths
Before a device can be used, a path must be defined between the device and the
server or the device and the data mover responsible for outboard data movement.
The DEFINE PATH command must be used to define the following path relationships:
v Between a server and a drive or a library.
v Between a storage agent and a drive.
v Between a data mover and a drive or a library.
When issuing the DEFINE PATH command, you must provide some or all of the
following information:
Source name
The name of the server, storage agent, or data mover that is the source for
the path.
Destination name
The assigned name of the device that is the destination for the path.
Source type
The type of source for the path. (A storage agent is considered a type of
server for this purpose.)
Destination type
The type of device that is the destination for the path.
Library name
The name of the library that a drive is defined to if the drive is the
destination of the path.
Device
The special file name of the device. This parameter is used when defining
a path between a server, a storage agent, or a NAS data mover and a
library or drive.
If you have a drive, DRIVE01, that resides in library AUTODLTLIB, and has a
device name of /dev/mt4, define it to server ASTRO1 by doing the following:
define path astro1 drive01 srctype=server desttype=drive library=autodltlib
device=/dev/mt4
Changes in device locations on the SAN can cause definitions of paths to drives
and libraries in Tivoli Storage Manager to require updating. The server assists you
in recovering from changes to devices on the SAN by using serial numbers to
confirm the identity of devices it contacts.
When you define a device (drive or library) you have the option of specifying the
serial number for that device. If you do not specify the serial number when you
define the device, the server obtains the serial number when you define the path
for the device. In either case, the server then has the serial number in its database.
From then on, the server uses the serial number to confirm the identity of a device
for operations.
When the server uses drives and libraries on a SAN, the server attempts to verify
that the device it is using is the correct device. The server contacts the device by
using the device name in the path that you defined for it. The server then requests
the serial number from the device, and compares that serial number with the serial
number stored in the server database for that device. If the serial numbers do not
match, the server issues a message about the mismatch. The server does not use
the device.
You can monitor the activity log for messages if you want to know when device
changes on the SAN have affected Tivoli Storage Manager. The following are the
number ranges for messages related to serial numbers:
v ANR8952 through ANR8958
v ANR8961 through ANR8967
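For example, to check today's activity log for one of these messages (message number 8952 chosen for illustration):
query actlog begindate=today msgno=8952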
Restriction: Some devices do not have the capability of reporting their serial
numbers to applications such as the Tivoli Storage Manager server. If
the server cannot obtain the serial number from a device, it cannot
assist you with changes to that device's location on the SAN.
In this chapter, most examples illustrate how to perform tasks by using a Tivoli
Storage Manager command-line interface. For information about the commands,
see the Administrator's Reference, or issue the HELP command from the command line
of a Tivoli Storage Manager administrative client.
Tivoli Storage Manager tasks can also be performed from the administrative Web
interface. For more information about using the administrative interface, see Quick
Start.
Requirements
You must meet the following requirements when using NDMP for operations with
NAS file servers:
Tape Libraries
The Tivoli Storage Manager server supports three types of libraries for
operations using NDMP. The libraries supported are SCSI, ACSLS, and
349X.
v SCSI library
A SCSI library that is supported by the Tivoli Storage Manager device
driver. Visit www.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html.
This type of library can be
attached directly either to the Tivoli Storage Manager server or to the
NAS file server. When the library is attached directly to the Tivoli
Storage Manager server, the Tivoli Storage Manager server controls the
library operations by passing the SCSI commands directly to the library.
When the library is attached directly to the NAS file server, the Tivoli
Storage Manager server controls the library by passing SCSI commands
to the library through the NAS file server.
v ACSLS library
Note: The Tivoli Storage Manager server does not include External
Library support for the ACSLS library when the library is used
for NDMP operations.
v 349X library
A 349X library can only be directly connected to the Tivoli Storage
Manager server. The Tivoli Storage Manager server controls the library
by passing the library request through TCP/IP to the library manager.
Library Sharing: The Tivoli Storage Manager server that performs NDMP
operations can be a library manager for either a SCSI or
349X library, but cannot be a library client. If the Tivoli
Storage Manager server that performs NDMP operations
is a library manager, that server must control the library
directly and not by passing commands through the NAS
file server.
Tape Drives
One or more tape drives in the tape library. The NAS file server must be
able to access the drives. The drives must be supported for tape backup
operations by the NAS file server and its operating system. Visit
www.netapp.com/products/filer/index.html or
www.emc.com/products/networking/celerra.jsp for details.
Drive Sharing: The tape drives can be shared by the Tivoli Storage
Manager server and one or more NAS file servers. Also,
when a SCSI or a 349X library is connected to the Tivoli
Storage Manager server and not to the NAS file server, the
drives can be shared:
v By one or more NAS file servers and one or more Tivoli
Storage Manager library clients.
Verify the compatibility of specific combinations of a NAS file server, tape devices,
and SAN-attached devices with the hardware manufacturers.
Client Interfaces:
v Backup-archive client command-line interface
v Web client
Server Interfaces:
v Server console
v Command line on the administrative client
The Tivoli Storage Manager Web client interface, available with the backup-archive
client, displays the file systems of the NAS file server in a graphical view. The
client function is not required, but you can use the client interfaces for NDMP
operations. The client function is recommended for file-level restore operations. See
Planning for File-Level Restore on page 120 for more information about file-level
restore.
Tivoli Storage Manager prompts you for an administrator ID and password when
you perform NDMP functions using either of the client interfaces. See
Backup-Archive Clients Installation and User's Guide for more information about
installing and activating client interfaces.
During backup operations that use NDMP, the NAS file server controls the format
of the data written to the tape library. The NDMP format is not the same as the
data format used for traditional Tivoli Storage Manager backups. When you define
a NAS file server as a data mover and define a storage pool for NDMP operations,
you specify the data format. For example, you would specify NETAPPDUMP if the
NAS file server is a Network Appliance device. You would specify
CELERRADUMP if the NAS file server is an EMC Celerra device.
Additional data formats will be added as Tivoli Storage Manager adds support for
NAS file servers from other vendors.
Many of the configuration choices you have for libraries and drives are determined
by the hardware features of your libraries. You can set up NDMP operations with
any supported library and drives. However, the more features your library has, the
more flexibility you can exercise in your implementation.
If you are using a SCSI tape library, one of the first steps in planning for NDMP
operations is to determine where to attach it. You must determine whether to
attach the library robotics to the Tivoli Storage Manager server or to the NAS file
server. Regardless of where you connect library robotics, tape drives must always
be connected to the NAS file server for NDMP operations.
Distance and your available hardware connections are factors to consider for SCSI
libraries. If the library does not have separate ports for robotics control and drive
access, the library must be attached to the NAS file server because the NAS file
server must have access to the drives. If your SCSI library has separate ports for
robotics control and drive access, you can choose to attach the library robotics to
either the Tivoli Storage Manager server or the NAS file server. If the NAS file
server is at a different location from the Tivoli Storage Manager server, the distance
may mean that you must attach the library to the NAS file server.
Whether you are using a SCSI, ACSLS, or 349X library, you have the option of
dedicating the library to NDMP operations, or of using the library for NDMP
operations as well as most traditional Tivoli Storage Manager operations.
The four configurations compare as follows:

Configuration 1 (SCSI library connected to the Tivoli Storage Manager server)
   Distance between Tivoli Storage Manager server and library: Limited by SCSI or Fibre Channel connection
   Library sharing: Supported
   Drive sharing between Tivoli Storage Manager and NAS file server: Supported
   Drive sharing between NAS file servers: Supported
   Drive sharing between storage agent and NAS file server: Supported

Configuration 2 (SCSI library connected to the NAS file server)
   Distance between Tivoli Storage Manager server and library: No limitation
   Library sharing: Not supported
   Drive sharing between Tivoli Storage Manager and NAS file server: Supported
   Drive sharing between NAS file servers: Supported
   Drive sharing between storage agent and NAS file server: Not supported

Configuration 3 (349X library)
   Distance between Tivoli Storage Manager server and library: May be limited by 349X connection
   Library sharing: Supported
   Drive sharing between Tivoli Storage Manager and NAS file server: Supported
   Drive sharing between NAS file servers: Supported
   Drive sharing between storage agent and NAS file server: Supported

Configuration 4 (ACSLS library)
   Distance between Tivoli Storage Manager server and library: May be limited by ACSLS connection
   Library sharing: Not supported
   Drive sharing between Tivoli Storage Manager and NAS file server: Supported
   Drive sharing between NAS file servers: Supported
   Drive sharing between storage agent and NAS file server: Not supported
In this configuration, the Tivoli Storage Manager server controls the SCSI library
through a direct, physical connection to the library robotics control port. For
NDMP operations, the drives in the library are connected directly to the NAS file
server, and a path must be defined from the NAS data mover to each of the drives
to be used. The NAS file server transfers data to the tape drive at the request of
the Tivoli Storage Manager server. To also use the drives for Tivoli Storage
Manager operations, connect the Tivoli Storage Manager server to the tape drives
and define paths from the Tivoli Storage Manager server to the tape drives. This
configuration also supports a Tivoli Storage Manager storage agent having access
to the drives for its LAN-free operations, and the Tivoli Storage Manager server
can be a library manager.
In this configuration, the tape library must have separate ports for robotics control
and for drive access. In addition, the library must be within Fibre-Channel range
or SCSI bus range of both the Tivoli Storage Manager server and the NAS file
server.
Figure 10. Configuration 1: SCSI Library Connected to Tivoli Storage Manager Server
The Tivoli Storage Manager server controls library robotics by sending library
commands across the network to the NAS file server. The NAS file server passes
the commands to the tape library. Any responses generated by the library are sent
to the NAS file server, and passed back across the network to the Tivoli Storage
Manager server. This configuration supports a physically distant Tivoli Storage
Manager server and NAS file server. For example, the Tivoli Storage Manager
server could be in one city, while the NAS file server and tape library are in
another city.
In this configuration, the library robotics and the drives must be physically
connected directly to the NAS file server, and paths must be defined from the NAS
data mover to the library and drives. No physical connection is required between
the Tivoli Storage Manager server and the SCSI library.
Figure 11. Configuration 2: SCSI Library Connected to the NAS File Server
In order to perform NAS backup or restore operations, the NAS file server must be
able to access one or more tape drives in the 349X library. Any tape drives used for
NAS operations must be physically connected to the NAS file server, and paths
need to be defined from the NAS data mover to the drives. The NAS file server
transfers data to the tape drive at the request of the Tivoli Storage Manager server.
Follow the manufacturer's instructions to attach the device to the server system.
This configuration supports a physically distant Tivoli Storage Manager server and
NAS file server. For example, the Tivoli Storage Manager server could be in one
city, while the NAS file server and tape library are in another city.
For this configuration, you connect the tape library to the system as for traditional
operations. See Chapter 4, Attaching Devices to the Server System, on page 59 for
more information. In this configuration, the 349X tape library is controlled by the
Tivoli Storage Manager server. The Tivoli Storage Manager server controls the
library by passing the request to the 349X library manager through TCP/IP.
Figure 12. Configuration 3: 349X Library Connected to the Tivoli Storage Manager Server
In order to perform NAS backup or restore operations, the NAS file server must be
able to access one or more tape drives in the ACSLS library. Any tape drives used
for NAS operations must be physically connected to the NAS file server, and paths
need to be defined from the NAS data mover to the drives. The NAS file server
transfers data to the tape drive at the request of the Tivoli Storage Manager server.
Follow the manufacturer's instructions to attach the device to the server system.
This configuration supports a physically distant Tivoli Storage Manager server and
NAS file server. For example, the Tivoli Storage Manager server could be in one
city, while the NAS file server and tape library are in another city.
To also use the drives for Tivoli Storage Manager operations, connect the Tivoli
Storage Manager server to the tape drives and define paths from the Tivoli Storage
Manager server to the tape drives.
For this configuration, you connect the tape library to the system as for traditional
Tivoli Storage Manager operations. See Chapter 4, Attaching Devices to the Server
System, on page 59 for more information. The ACSLS tape library is controlled by
the Tivoli Storage Manager server. The Tivoli Storage Manager server controls the
library by passing the request to the ACSLS library server through TCP/IP.
Figure 13. Configuration 4: ACSLS Library Connected to the Tivoli Storage Manager Server
(Figure: A tape library with three drives, numbered 1 through 3, accessed by the Tivoli Storage Manager server and two NAS file servers.)
To create the configuration shown in the figure, you would do the following:
1. Define all three drives to Tivoli Storage Manager.
2. Define paths from the Tivoli Storage Manager server to drives 2 and 3. Because
drive 1 is not accessed by the server, no path is defined.
3. Define each NAS file server as a separate data mover.
See Step 6. Defining Tape Drives and Paths for NDMP Operations on page 127
for more information.
When you do a backup via NDMP, you can specify that the Tivoli Storage
Manager server collect and store file-level information in a table of contents (TOC).
If you specify this option at the time of backup, you can later display the table of
contents of the backup image. Through the backup-archive Web client, you can
select individual files or directories to restore directly from the backup images
generated.
You also have the option to do a backup via NDMP without collecting file-level
restore information. See Managing Table of Contents on page 131 for more
information.
To allow creation of a table of contents for a backup via NDMP, you must define
the TOCDESTINATION attribute in the backup copy group for the management
class to which this backup image is bound. You cannot specify a copy storage pool
as the destination. The storage pool you specify for the TOC destination must have
a data format of either NATIVE or NONBLOCK, so it cannot be the tape storage
pool used for the backup image.
If you choose to collect file-level information, specify the TOC parameter in the
BACKUP NODE server command. Or, if you initiate your backup using the client,
you can specify the TOC option in the client options file, client option set, or client
command line. See Administrator's Reference for more information about the
BACKUP NODE command. You can specify NO, PREFERRED, or YES. When you
specify PREFERRED or YES, the Tivoli Storage Manager server stores file
information for a single NDMP-controlled backup in a table of contents (TOC). The
table of contents is placed into a storage pool. After that, the Tivoli Storage
Manager server can access the table of contents so that file and directory
information can be queried by the server or client. Use of the TOC parameter
allows a table of contents to be generated for some images and not others, without
requiring different management classes for the images.
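For example, to request a backup of the file system /vol/vol1 on the NAS node NASNODE1 and collect a table of contents, enter a command of this form:
backup node nasnode1 /vol/vol1 toc=yes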
To avoid mount delays and ensure sufficient space, use random access storage
pools (DISK device class) as the destination for the table of contents. For sequential
access storage pools, no labeling or other preparation of volumes is necessary if
scratch volumes are allowed.
If your level of Data ONTAP is earlier than 6.4.1, you must have one of the
following two configurations in order to collect and restore file-level information.
Results with configurations other than these two are unpredictable. The Tivoli
Storage Manager server will print a warning message (ANR4946W) during backup
operations. The message indicates that the character encoding of NDMP file history
messages is unknown, and UTF-8 will be assumed in order to build a table of
contents. It is safe to ignore this message only for the following two configurations.
v Your data has directory and file names that contain only English (7-bit ASCII)
characters.
v Your data has directory and file names that contain non-English characters and
the volume language is set to the UTF-8 version of the proper locale (for
example, de.UTF-8 for German).
If your level of Data ONTAP is 6.4.1 or later, you must have one of the following
three configurations in order to collect and restore file-level information. Results
with configurations other than these three are unpredictable.
v Your data has directory and file names that contain only English (7-bit ASCII)
characters and the volume language is either not set or is set to one of these:
C (POSIX)
en
en_US
en.UTF-8
en_US.UTF-8
All systems that create or access data on a particular NAS file server volume must
do so in a manner compatible with the volume language setting. You should install
Data ONTAP 6.4.1 or later, if it is available, on your Network Appliance NAS file
server in order to garner full support of international characters in the names of
files and directories.
Chapter 6. Using NDMP for Operations with NAS File Servers
v Your data has directory and file names that contain non-English characters, and
the volume language is set to the proper locale (for example, de.UTF-8 or de for
German).
Note: Using the UTF-8 version of the volume language setting is more efficient
in terms of Tivoli Storage Manager server processing and table of contents
storage space.
v You only use CIFS to create and access your data.
a. Attach the SCSI library to the NAS file server or to the Tivoli Storage
Manager server, or attach the ACSLS library or 349X library to the Tivoli
Storage Manager server.
b. Define the library with a library type of SCSI, ACSLS, or 349X.
c. Define a device class for the tape drives.
d. Define a storage pool for NAS backup media.
e. Define a storage pool for storing a table of contents. This step is optional.
2. Configure Tivoli Storage Manager policy for managing NAS image backups.
See Step 2. Configuring Tivoli Storage Manager Policy for NDMP Operations
on page 124.
3. Register a NAS file server node with the Tivoli Storage Manager server. See
Step 3. Registering NAS Nodes with the Tivoli Storage Manager Server on
page 125.
4. Define a data mover for the NAS file server. See Step 4. Defining a Data
Mover for the NAS File Server on page 125.
5. Define a path from either the Tivoli Storage Manager server or the NAS file
server to the library. See Step 5. Defining a Path to a Library on page 126.
6. Define the tape drives to Tivoli Storage Manager, and define the paths to those
drives from the NAS file server and optionally from the Tivoli Storage Manager
server. See Step 6. Defining Tape Drives and Paths for NDMP Operations on
page 127.
7. Check tapes into the library and label them. See Step 7. Labeling Tapes and
Checking Tapes into the Library on page 128.
8. Set up scheduled backups for NAS file servers. This step is optional. See Step
8. Scheduling NDMP Operations on page 128.
Determine whether to attach the library robotics control to the Tivoli Storage
Manager server or to the NAS file server. See Planning for Tape Libraries and
Drives used in NDMP Operations on page 114.
Connect the SCSI tape library robotics to the Tivoli Storage Manager server
or to the NAS file server. See the manufacturer's documentation for
instructions.
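SCSI Library
define library naslib libtype=scsi
(The library name NASLIB in this example is illustrative.)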
ACSLS Library
define library acslib libtype=acsls acsid=1
349X Library
define library tsmlib libtype=349x
The storage pools you define for storage of file system images produced during
backups using NDMP are different from storage pools used for conventional Tivoli
Storage Manager media. They are defined with different data formats. Tivoli
Storage Manager operations use storage pools defined with a NATIVE or
NONBLOCK data format. NDMP operations require storage pools with a data
format that matches the NAS file server and the backup method to be used. For
example, to define a storage pool named NASPOOL for a Network Appliance file
server, enter the following command:
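define stgpool naspool nasclass maxscratch=50 dataformat=netappdump
(The device class name NASCLASS and the MAXSCRATCH value in this example are illustrative.)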
To define a storage pool named CELERRAPOOL for an EMC Celerra file server,
enter the following command:
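define stgpool celerrapool celerraclass maxscratch=50 dataformat=celerradump
(The device class name CELERRACLASS and the MAXSCRATCH value in this example are illustrative.)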
Attention: Ensure that you do not accidentally use storage pools that have been
defined for NDMP operations in traditional Tivoli Storage Manager operations. Be
especially careful when assigning the storage pool name as the value for the
DESTINATION parameter of the DEFINE COPYGROUP command. Unless the
destination is a storage pool with the appropriate data format, the backup will fail.
For example, to define a storage pool named TOCPOOL for a DISK device class,
enter the following command:
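define stgpool tocpool disk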
Then, you must define volumes for the storage pool. See Configuring Random
Access Volumes on Disk Devices on page 54 for more information.
This step is optional. If you plan to create a table of contents, you should also
define a disk storage pool in which to store the table of contents. You must set up
policy so that the Tivoli Storage Manager server stores the table of contents in a
different storage pool from the one where the backup image is stored. The table of
contents is treated like any other object in that storage pool.
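1. Define a policy domain. For example, to define a policy domain named
NASDOMAIN (the name used in the steps that follow), enter the following
command:
define domain nasdomain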
2. Create a policy set in that domain. For example, to define a policy set named
STANDARD in the policy domain named NASDOMAIN, enter the following
command:
define policyset nasdomain standard
3. Define a management class, and then assign the management class as the
default for the policy set. For example, to define a management class named
MC1 in the STANDARD policy set, and assign it as the default, enter the
following commands:
define mgmtclass nasdomain standard mc1
assign defmgmtclass nasdomain standard mc1
4. Define a backup copy group in the default management class. The destination
must be the storage pool you created for backup images produced by NDMP
operations. In addition, you can specify the number of backup versions to
retain. For example, to define a backup copy group for the MC1 management
class where up to four versions of each file system are retained in the storage
pool named NASPOOL, enter the following command:
define copygroup nasdomain standard mc1 destination=naspool verexists=4
The policy is ready to be used. Nodes are associated with Tivoli Storage
Manager policy when they are registered. For more information, see Step 3.
Registering NAS Nodes with the Tivoli Storage Manager Server.
Register the NAS file server as a Tivoli Storage Manager node, specifying
TYPE=NAS. This node name is used to track the image backups for the NAS file
server. For example, to register a NAS file server as a node named NASNODE1,
with a password of NASPWD1, in a policy domain named NASDOMAIN, enter
the following command:
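register node nasnode1 naspwd1 domain=nasdomain type=nas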
If you are using a client option set, specify the option set when you register the
node.
You can verify that this node is registered by issuing the following command. You
must specify TYPE=NAS so that only NAS nodes are displayed:
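query node type=nas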
The data mover name must match the node name that you specified when you
registered the NAS node to the Tivoli Storage Manager server. For example, to
define a data mover for a NAS node named NASNODE1, enter the following
command:
define datamover nasnode1 type=nas hladdress=netapp2 lladdress=10000 userid=root
password=admin dataformat=netappdump
In this command:
v The high-level address is an IP address for the NAS file server, either a
numerical address or a host name.
v The low-level address is the IP port for Network Data Management Protocol
(NDMP) sessions with the NAS file server. The default is port number 10000.
v The user ID is the ID defined to the NAS file server that authorizes an NDMP
session with the NAS file server (for this example, the user ID is the
administrative ID for the Network Appliance file server).
v The password parameter is a valid password for authentication to an NDMP
session with the NAS file server.
v The data format is NETAPPDUMP. This is the data format that the Network
Appliance file server uses for tape backup. This data format must match the
data format of the target storage pool.
The value of the DEVICE parameter is the special file name for the tape
library as it is known to the NAS file server. See Obtaining Special File
Names for Path Definitions.
Define a path to the 349X library from the Tivoli Storage Manager server.
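For example (the device special file name for the Library Manager Control Point is illustrative):
define path server1 tsmlib srctype=server desttype=library device=/dev/lmcp0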
You can obtain the device special file names needed for path definitions by
querying the NAS file server. For information about how to obtain names for
devices that are connected to a NAS file server, consult the product information for
the file server.
For example, for a Network Appliance file server, connect to the file server using
telnet and issue the SYSCONFIG command. To display the device names for tape
libraries, use this command:
sysconfig -m
To display the device names for tape drives, use this command:
sysconfig -t
For the Celerra file server, connect to the Celerra control workstation using telnet.
To see the devices attached to a particular data mover, use the server_devconfig
command on the control station:
server_devconfig server_# -p -s -n
The server_# is the data mover on which the command should be run.
Note: When you define SCSI drives to the Tivoli Storage Manager server, the
ELEMENT parameter must contain a number if the library has more
than one drive. If the drive is shared between the NAS file server and
the Tivoli Storage Manager server, the element address is automatically
detected. If the library is connected to a NAS file server only, there is no
automatic detection of the element address and you must supply it.
Element numbers are available from device manufacturers. Element
numbers for tape drives are also available in the device support
information available on the Tivoli Web site at
www.ibm.com/software/sysmgmt/products/
support/IBMTivoliStorageManager.html.
2. Define a path for the drive:
v For example, if the drive is to be used only for NDMP operations, issue the
following command:
define path nasnode1 nasdrive1 srctype=datamover desttype=drive
library=naslib device=rst0l
Note: For a drive connected only to the NAS file server, do not specify
ASNEEDED for the CLEANFREQUENCY parameter of the DEFINE
DRIVE command.
v For example, if a drive is to be used for both Tivoli Storage Manager and
NDMP operations, enter the following commands:
define path server1 nasdrive1 srctype=server desttype=drive
library=naslib device=/dev/rmt0
The schedule is active, and is set to run at 8:00 p.m. every day. See Chapter 17,
Automating Server Operations, on page 401 for more information.
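For example, to back up the file system /vol/vol1 on a NAS file server named NAS1 from a Windows backup-archive client command line:
dsmc backup nas -nasnodename=nas1 {/vol/vol1}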
For more information on the command, see Tivoli Storage Manager for Windows
Backup-Archive Clients Installation and User's Guide or Tivoli Storage Manager for
UNIX Backup-Archive Clients Installation and User's Guide.
Note: Whenever you use the client interface, you are asked to authenticate
yourself as a Tivoli Storage Manager administrator before the operation can
begin. The administrator ID must have at least client owner authority for the
NAS node.
You can perform the same backup operation with a server interface. For example,
from the administrative command-line client, back up the file system named
/vol/vol1 on a NAS file server named NAS1, by entering the following command:
backup node nas1 /vol/vol1
You can restore the image using either interface. Backups are identical whether
they are backed up using a client interface or a server interface. For example,
suppose you want to restore the image backed up in the previous examples. For
this example the file system named /vol/vol1 is being restored to /vol/vol2.
Restore the file system with the following command, issued from a Windows
backup-archive client interface:
dsmc restore nas -nasnodename=nas1 {/vol/vol1} {/vol/vol2}
You can choose to restore the file system, using a server interface. For example, to
restore the file system name /vol/vol1 to file system /vol/vol2, for a NAS file
server named NAS1, enter the following command:
restore node nas1 /vol/vol1 /vol/vol2
When you restore individual files and directories, you have the choice of using one
of two interfaces to initiate the restore: the backup-archive Web client or the server
interface.
v Device classes
v Storage pools
v Table of contents
Then you might change the domain of the node with the following command:
update node nasnode1 domain=nasdomain
4. Define the data mover using the new node name. In this example, you must
define a new data mover named NAS1 with the same parameters used to
define NASNODE1. See Step 4. Defining a Data Mover for the NAS File
Server on page 125.
Attention: When defining a new data mover for a node that you have
renamed, ensure that the data mover name matches the new node name and
that the new data mover parameters are duplicates of the original data mover
parameters. Any mismatch between a node name and a data mover name or
between new data mover parameters and original data mover parameters can
prevent you from establishing a session with the NAS file server.
5. For SCSI or 349X libraries, define a path between the NAS data mover and a
library only if the tape library is physically connected directly to the NAS file
server. See Step 5. Defining a Path to a Library on page 126.
6. Define paths between the NAS data mover and any drives used for NDMP
operations. See Step 6. Defining Tape Drives and Paths for NDMP Operations
on page 127.
Then issue the following command to make the data mover offline:
update datamover nasnode1 online=no
To delete the data mover, you must first delete any path definitions in which the
data mover has been used as the source. Then issue the following command to
delete the data mover:
delete datamover nasnode1
Attention: If the data mover has a path to the library, and you delete the data
mover or make the data mover offline, you disable access to the library.
The following DEFINE STGPOOL and UPDATE STGPOOL parameters are ignored
because storage pool hierarchies, reclamation, and migration are not supported for
these storage pools:
MAXSIZE
NEXTSTGPOOL
LOWMIG
HIGHMIG
MIGDELAY
MIGCONTINUE
RECLAIMSTGPOOL
OVFLOLOCATION
Attention: Ensure that you do not accidentally use storage pools that have been
defined for NDMP operations in traditional Tivoli Storage Manager operations. Be
especially careful when assigning the storage pool name as the value for the
DESTINATION parameter of the DEFINE COPYGROUP command. Unless the
destination is a storage pool with the appropriate data format, the backup will fail.
At installation, the retention time is set to 120 minutes. Use the QUERY STATUS
command to see the table of contents retention time.
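You can change the retention time with the SET TOCLOADRETENTION command. For example, to set the retention time to 120 minutes:
set tocloadretention 120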
Use the QUERY NASBACKUP command to display information about the file
system image objects that have been backed up for a specific NAS node and file
space. By issuing the command, you can see a display of all backup images
generated by NDMP and whether each image has a corresponding table of
contents.
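For example, to display the backup images for the file system /vol/vol1 on the node NAS1:
query nasbackup nas1 /vol/vol1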
Note: The Tivoli Storage Manager server may store a full backup in excess of the
number of versions you specified, if that full backup has dependent
differential backups. QUERY NASBACKUP will not display the extra
versions.
Use the QUERY TOC command to display files and directories in a backup image
generated by NDMP. By issuing the QUERY TOC server command, you can
display all directories and files within a single specified TOC. The specified TOC
will be accessed in a storage pool each time the QUERY TOC command is issued
because this command does not load TOC information into the Tivoli Storage
Manager database. Then, use the RESTORE NODE command with the FILELIST
parameter to restore individual files.
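For example, to display the contents of an image of /vol/vol1 on the node NAS1 and then restore two of its files to /vol/vol2 (the file names are illustrative):
query toc nas1 /vol/vol1
restore node nas1 /vol/vol1 /vol/vol2 filelist=/vol/vol1/dir1/file1,/vol/vol1/dir2/file2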
In this chapter, most examples illustrate how to perform tasks by using a Tivoli
Storage Manager command-line interface. For information about the commands,
see Administrator's Reference, or issue the HELP command from the command line
of a Tivoli Storage Manager administrative client.
Tivoli Storage Manager tasks can also be performed from the administrative Web
interface. For more information about using the administrative interface, see Quick
Start.
Tip: When you use the LABEL LIBVOLUME command with drives in an
automated library, you can label and check in the volumes with one
command.
3. If the storage pool cannot contain scratch volumes (MAXSCRATCH=0), identify
the volume to Tivoli Storage Manager by name so that it can be accessed later.
For details, see Defining Storage Pool Volumes on page 191.
If the storage pool can contain scratch volumes (MAXSCRATCH is set to a
non-zero value), skip this step.
By overwriting a volume label, you destroy all of the data that resides on the
volume. Use caution when overwriting volume labels to avoid destroying
important data.
When you use the LABEL LIBVOLUME command, you can identify the volumes
to be labeled in one of the following ways:
v Explicitly name one volume.
v Enter a range of volumes by using the VOLRANGE parameter.
v Use the VOLLIST parameter to specify a file that contains a list of volume
names or to explicitly name one or more volumes.
For automated libraries, you are prompted to insert the volume in the entry/exit
slot of the library. If no I/O convenience station is available, insert the volume in
an empty slot. For manual libraries, you are prompted to load the volume directly
into a drive.
Note: The LABEL LIBVOLUME command selects the next free drive. If you have
more than one free drive, this may not be /dev/mt5.
If the server is not available, use the following command:
> dsmlabel -drive=/dev/mt5
The DSMLABEL utility, which is an offline utility for labeling sequential access
volumes for Tivoli Storage Manager, must read the dsmserv.opt file to pick up the
language option. Therefore, you must issue the DSMLABEL command from the
/usr/tivoli/tsm/server/bin/ directory, or you must set the DSMSERV_DIR and
DSMSERV_CONFIG environment variables.
Searching the Library: The LABEL LIBVOLUME command searches all of the
storage slots in the library for volumes and tries to label each one that it finds. You
choose this mode when you specify the SEARCH=YES parameter. After a volume
is labeled, the volume is returned to its original location in the library. Specify
SEARCH=BULK if you want the server to search the library's entry/exit ports for
usable volumes to be labeled.
When you specify LABELSOURCE=PROMPT, the volume is moved from its
location in the library or in the entry/exit ports to the drive. The server prompts
you to issue the REPLY command containing the label string, and that label is
written to the tape.
If the library has a bar-code reader, the LABEL LIBVOLUME command can use the
reader to obtain volume names, instead of prompting you for volume names. Use
the SEARCH=YES and LABELSOURCE=BARCODE parameters. If you specify the
LABELSOURCE=BARCODE parameter, the volume bar code is read, and the tape
is moved from its location in the library or in the entry/exit ports to a drive where
the bar-code label is written. After the tape is labeled, it is moved back to its
location in the library, to the entry/exit ports, or to a storage slot if the CHECKIN
option is specified.
Suppose that you want to label all volumes in a SCSI library. Enter the following
command:
label libvolume tsmlibname search=yes labelsource=barcode
Note: If the volumes to be labeled are 3590 media and there are both 3490 and
3590 drives in the library, you must add DEVTYPE=3590.
If the server is not available, use the following command:
> dsmlabel -drive=/dev/rmt1 -drive=/dev/rmt2 -library=/dev/lmcp0
You can also use the DSMLABEL utility to format and label 3.5-inch and 5.25-inch
optical disks. Use the -format parameter when starting the DSMLABEL utility.
The DSMLABEL utility, which is an offline utility for labeling sequential access
volumes for Tivoli Storage Manager, must read the dsmserv.opt file to pick up the
language option. Therefore, you must issue the DSMLABEL command from the
/usr/tivoli/tsm/server/bin/ directory, or you must set the DSMSERV_DIR and
DSMSERV_CONFIG environment variables.
> dsmlabel -drive=/dev/rop1,117 -library=/dev/lb0 -search -format
To inform the server that a new volume is available in an automated library, check
in the volume with the CHECKIN LIBVOLUME command or LABEL LIBVOLUME
command with the CHECKIN option specified. When a volume is checked in, the
server adds the volume to its library volume inventory. You can use the LABEL
LIBVOLUME command to check in and label volumes in one operation.
Notes:
1. Do not mix volumes with bar-code labels and volumes without bar-code labels
in a library device because bar-code scanning can take a long time for
unlabeled volumes.
2. You must use the CHECKLABEL=YES (not NO or BARCODE) option on the
CHECKIN LIBVOLUME command when checking VolSafe volumes into a
library. This is true for both ACSLS and SCSI libraries.
When you check in a volume, you must supply the name of the library and the
status of the volume (private or scratch).
To check in one or just a few volumes, you can specify the name of the volume
with the command, and issue the command for each volume. See Checking
Volumes into a SCSI Library One at a Time on page 138.
To check in a larger number of volumes, you can use the search capability of the
CHECKIN command (see Checking in Volumes in Library Slots on page 139) or
you can use the VOLRANGE parameter of the CHECKIN command.
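For example, to check in a single volume by name, or to check in all labeled
volumes that a search of the library finds, you might issue commands of the
following form (the library name AUTOLIB and volume name VOL001 are examples;
substitute your own names):
checkin libvolume autolib vol001 status=scratch
checkin libvolume autolib search=yes checklabel=barcode status=scratch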
When using the CHECKIN LIBVOLUME command, be prepared to supply some
or all of the following information:
Library name
Specifies the name of the library where the storage volume is to be located.
Volume name
Specifies the volume name of the storage volume being checked in.
Status Specifies the status that is assigned to the storage volume being checked in.
If you check in a volume that has already been defined in a storage pool or
in the volume history file, you must specify a volume status of private
(STATUS=PRIVATE). This status ensures that the volume is not overwritten
when a scratch mount is requested. The server does not check in a volume
with scratch status when that volume already belongs to a storage pool or
is a database, export, or dump volume.
Check label
Specifies whether Tivoli Storage Manager should read sequential media
labels of volumes during CHECKIN command processing, or use a
bar-code reader. See Checking Media Labels on page 140.
For optical volumes being checked in to an automated library, you must
specify CHECKLABEL=YES. Tivoli Storage Manager must read the label to
determine the type of volume: rewritable (OPTICAL device type) or
write-once read-many (WORM or WORM12 device type).
Swap
Specifies whether Tivoli Storage Manager initiates a swap operation when an
empty library slot is not available during check-in.
Mount wait
Specifies the maximum length of time, in minutes, to wait for a storage
volume to be mounted.
Search
Specifies whether Tivoli Storage Manager searches the library for volumes
that have not been checked in. See Checking Volumes into a SCSI Library
One at a Time, Checking in Volumes in Library Slots on page 139,
and Checking in Volumes in Library Entry/Exit Ports on page 139.
Device type
This parameter applies only to 349X libraries containing 3590 devices. It
allows you to specify the device type for the volume being checked in.
If the library has an entry/exit port, you are prompted to insert a cartridge into
the entry/exit port. If the library does not have an entry/exit port, you are
prompted to insert a cartridge into one of the slots in the library. Element
addresses identify these slots. For example, Tivoli Storage Manager finds that the
first empty slot is at element address 5. The message is:
ANR8306I 001: Insert 8MM volume VOL001 R/W in slot with element
address 5 of library TAPELIB within 60 minutes; issue 'REPLY' along
with the request ID when ready.
Check the worksheet for the device if you do not know the location of element
address 5 in the library. See www.ibm.com/software/sysmgmt/products/
support/IBMTivoliStorageManager.html to find the worksheet. When you have
inserted the volume as requested, respond to the message from a Tivoli Storage
Manager administrative client. Use the request number (the number at the
beginning of the mount request):
reply 1
If the volume has already been inserted, the server finds and processes it. If not,
you can insert the volume into the I/O station during the processing of the
command.
Tivoli Storage Manager selects the volume to eject by checking first for any
available scratch volume, then for the least frequently mounted volume.
The actual labeling of a VolSafe volume, a type of write-once read-many (WORM)
media, is performed as it is for normal volumes. However, VolSafe volumes
have special considerations. To ensure that you receive the full benefit of using
these volumes, take the following considerations into account before checking
them into a library:
v All drives in a library that contain VolSafe volumes must be VolSafe enabled.
Library changers cannot distinguish WORM media from standard read/write (RW)
media. A volume must be loaded into a drive to determine which type of media
is being used. This media type checking is performed only in a SCSI or
ACSLS library. However, WORM and RW media can be mixed in a library if all
of the drives are VolSafe enabled.
v External and manual libraries must segregate their media into separate
logical libraries. Loading the correct media is left to the operator and the
library manager software.
v VolSafe media requires the special device type of VOLSAFE, which requires that
storage pools be segregated by WORM or RW media.
v StorageTek WORM tapes allow the header to be overwritten only once.
Therefore, issue the LABEL LIBVOLUME command only once for each volume. To
guard against overwriting the label, use the OVERWRITE=NO option on the
CHECKIN LIBVOLUME and LABEL LIBVOLUME commands.
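For example, a VolSafe volume might be labeled and checked in with a single
command of the following form (the library name VOLSAFELIB and volume name
VSF001 are examples):
label libvolume volsafelib vsf001 overwrite=no checkin=scratch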
Note: Each volume used by a server for any purpose must have a unique name.
This requirement applies to all volumes, whether the volumes are used for
storage pools, or used for operations such as database backup or export. The
requirement also applies to volumes that reside in different libraries but that
are used by the same server.
Note: If your server uses the disaster recovery manager function, the volume
information is automatically deleted during MOVE DRMEDIA command
processing. For additional information about DRM, see Chapter 23, Using
Disaster Recovery Manager, on page 589.
For example, if a client starts to back up data and does not have sufficient volumes
in the library, Tivoli Storage Manager cancels the backup transaction. The WORM
volumes to which Tivoli Storage Manager had already written for the canceled
backup are wasted because the volumes cannot be reused. Suppose that you have
WORM platters that hold 2.6GB each. A client starts to back up a 12GB file. If
Tivoli Storage Manager cannot acquire a fifth scratch volume after filling four
volumes, Tivoli Storage Manager cancels the backup operation. The four volumes
that Tivoli Storage Manager already filled cannot be reused.
To minimize cancellation of transactions, do the following:
v Ensure that you have enough volumes available in the library to handle
expected client operations such as backup.
– Verify that you set the maximum number of scratch volumes for the storage
pool that is associated with the library to a high enough number.
– Check enough scratch or private volumes into the library to handle the
expected load.
v If your clients tend to store files of smaller sizes, controlling the transaction size
can affect how WORM platters are used. Smaller transactions waste less space if
a transaction such as a backup must be canceled. The TXNGROUPMAX server
option and the TXNBYTELIMIT client option control transaction size. See How
the Server Groups Files before Storing on page 196 for information.
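For example, to reduce transaction size, you might set values like the following
(the values are illustrative, not recommendations). In the server options file
(dsmserv.opt):
txngroupmax 32
In the client options file:
txnbytelimit 2048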
v All of the volumes in the library are full, and you want to remove some that are
not likely to be accessed to make room for new volumes that can be used to
store more data.
To remove a volume from an automated library, use the CHECKOUT LIBVOLUME
command. By default, the server mounts the volume being checked out and
verifies the internal label. When the label is verified, the server removes the
volume from the library volume inventory, and then moves it to the entry/exit
port or convenience I/O station of the library. If the library does not have an
entry/exit port, Tivoli Storage Manager requests that the mount operator remove
the volume from a slot within the library.
For SCSI libraries with multiple entry/exit ports, use the REMOVE=BULK
parameter of the CHECKOUT LIBVOLUME command to eject the volume to the
next available entry/exit port.
If you check out a volume that is defined in a storage pool, the server may attempt
to access it later to read or write data. If this happens, the server requests that the
volume be checked in.
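For example, to check out a volume named VOL010 from a library named AUTOLIB
and eject it to the next available entry/exit port (the names are examples):
checkout libvolume autolib vol010 remove=bulk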
2. When the library becomes full, move the full volumes out of the library and to
the overflow location that you defined for the storage pool. For example, to
move all full volumes in the specified storage pool out of the library, enter this
command:
move media * stgpool=archivepool
All full volumes are checked out of the library. Tivoli Storage Manager records
the location of the volumes as Room2948. You can use the DAYS parameter to
specify the number of days that must elapse before a volume is eligible for
processing by the MOVE MEDIA command.
3. Check in new scratch volumes, if needed.
4. Reuse the empty scratch storage volumes in the overflow location. For example,
enter this command:
Use the DAYS parameter to specify the number of days that must elapse before the
volumes are eligible for processing by the QUERY MEDIA command.
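For example, a command of the following form writes a CHECKIN command for each
empty volume in the overflow location to a file (the storage pool, library, and
file names are examples):
query media * stgpool=archivepool wherestatus=empty wherestate=mountablenotinlib
cmd="checkin libvol autolib &vol status=scratch"
cmdfilename=/tsm/move/media/checkin.vols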
The file that contains the generated commands can be run using the Tivoli Storage
Manager MACRO command. For this example, the file may look like this:
checkin libvol autolib TAPE13 status=private
checkin libvol autolib TAPE19 status=private
You can audit an automated library to ensure that the library volume inventory is
consistent with the volumes that physically reside in the library. You may want to
do this if the library volume inventory is disturbed due to manual movement of
volumes in the library or database problems. Use the AUDIT LIBRARY command
to restore the inventory to a consistent state. Missing volumes are deleted, and the
locations of the moved volumes are updated. However, new volumes are not
added during an audit.
Unless your SCSI library has a bar-code reader, the server mounts each volume
during the audit to verify the internal labels on volumes. For 349X libraries, the
server uses the information from the Library Manager.
Issue the AUDIT LIBRARY command only when there are no volumes mounted in
the library drives. If any volumes are mounted but in the IDLE state, you can issue
the DISMOUNT VOLUME command to dismount them.
If a SCSI library has a bar-code reader, you can save time by using the bar-code
reader to verify the identity of volumes. If a volume has a bar-code label, the
server uses the characters on the label as the name for the volume. The volume is
not mounted to verify that the bar-code name matches the internal volume name.
If a volume has no bar-code label, the server mounts the volume and attempts to
read the recorded label. For example, to audit the TAPELIB library using its
bar-code reader, issue the following command:
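audit library tapelib checklabel=barcode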
Chapter 7. Managing Removable Media Operations
Table 13. How SAN-enabled Servers Process Tivoli Storage Manager Operations
Operation                     Library Manager            Library Client
(Command)
Check in library volumes      Not applicable.            When a check-in operation
                                                         must be performed because
                                                         of a client restore, a
                                                         request is sent to the
                                                         library manager server.
Move media and move DRM       Only valid for volumes     Not applicable.
media (MOVE MEDIA,            used by the library
MOVE DRMEDIA)                 manager server.
Dismount a volume
(DISMOUNT VOLUME)
Query a volume
(QUERY VOLUME)
You can get information about pending operator requests either by using the
QUERY REQUEST command or by checking the mount message queue on an
administrative client started in mount mode.
When you issue the QUERY REQUEST command, Tivoli Storage Manager displays
requested actions and the amount of time remaining before the requests time out.
For example, you enter the command as follows:
query request
When the server requires that an explicit reply be provided when a mount request
is completed, you can reply with the REPLY command. The first parameter for this
command is the request identification number that tells the server which of the
pending operator requests has been completed. This 3-digit number is always
displayed as part of the request message. It can also be obtained by issuing a
QUERY REQUEST command. If the request requires the operator to provide a
device to be used for the mount, the second parameter for this command is a
device name.
For example, enter the following command to respond to request 001 for tape
drive TAPE01:
reply 1
If a mount request for a manual library cannot be satisfied, you can issue the
CANCEL REQUEST command. This command forces the server to cancel the
request and causes the operation that needed the requested volume to fail.
The CANCEL REQUEST command must include the request identification number.
This number is included in the request message. You can also obtain it by issuing a
QUERY REQUEST command, as described in Requesting Information about
Pending Operator Requests on page 150.
You can specify the PERMANENT parameter if you want to mark the requested
volume as UNAVAILABLE. This process is useful if, for example, the volume has
been moved to a remote site or is otherwise inaccessible. By specifying
PERMANENT, you ensure that the server does not try to mount the requested
volume again.
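For example, to cancel request 001 and mark the requested volume as unavailable
(the request number is an example):
cancel request 1 permanent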
For most of the requests associated with automated (SCSI) libraries, an operator
must perform a hardware or system action to cancel the requested mount. For such
requests, the CANCEL REQUEST command is not accepted by the server.
For a report of all volumes currently mounted for use by the server, you can issue
the QUERY MOUNT command. The report shows which volumes are mounted,
which drives have accessed them, and if the volumes are currently being used.
After a volume becomes idle, the server keeps it mounted for a time specified by
the mount retention parameter for the device class. Use of mount retention can
reduce the access time if volumes are repeatedly used.
An administrator can explicitly request to dismount an idle volume by issuing the
DISMOUNT VOLUME command. This command causes the server to dismount
the named volume from the drive in which it is currently mounted.
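For example, to dismount an idle volume named VOL005 (the volume name is an
example):
dismount volume vol005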
For information about setting mount retention times, see Mount Retention Period
on page 166.
Managing Libraries
You can query, update, and delete libraries.
You can request information about one or more libraries by using the QUERY
LIBRARY command. You can request either a standard or a detailed report. For
example, to display information about all libraries, issue the following command:
query library
Library    Private     Scratch     External
Type       Category    Category    Manager
-------    --------    --------    --------
MANUAL
SCSI
349X       300         301
Updating Libraries
You can update an existing library by issuing the UPDATE LIBRARY command. To
update the device names of a library, issue the UPDATE PATH command.
Note: You cannot update a MANUAL library.
Automated Libraries
If your system or device is reconfigured, and the device name changes, you may
need to update the device name. The examples below show how you can use the
UPDATE LIBRARY and UPDATE PATH commands for the following library types:
v SCSI
v 349X
v ACSLS
v External
Examples:
v SCSI Library
Update the path from SERVER1 to a SCSI library named SCSILIB:
update path server1 scsilib srctype=server desttype=library device=/dev/lb1
Update the definition of a SCSI library named SCSILIB defined to a library client
so that a new library manager is specified:
update library scsilib primarylibmanager=server2
v 349X Library
Update the path from SERVER1 to an IBM 3494 library named 3494LIB with
new device names.
update path server1 3494lib srctype=server desttype=library
device=/dev/lmcp1,/dev/lmcp2,/dev/lmcp3
Update the definition of an IBM 3494 library named 3494LIB defined to a library
client so that a new library manager is specified:
update library 3494lib primarylibmanager=server2
v ACSLS Library
Update an ACSLS library named ACSLSLIB with a new ID number.
update library acslslib acsid=1
v External Library
Update an external library named EXTLIB with a new media manager path
name.
update path server1 extlib srctype=server desttype=library
externalmanager=/v/server/mediamanager.exe
Deleting Libraries
Before you delete a library with the DELETE LIBRARY command, you must delete
all of the drives that have been defined as part of the library and delete the path to
the library. See Deleting Drives on page 159.
For example, suppose that you want to delete a library named 8MMLIB1. After
deleting all of the drives defined as part of this library and the path to the library,
issue the following command to delete the library itself:
delete library 8mmlib1
Managing Drives
You can query, update, clean, and delete drives.
You can request information about drives by using the QUERY DRIVE command.
This command accepts wildcard characters for both a library name and a drive
name. See Administrators Reference for information about this command and the
use of wildcard characters.
For example, to query all drives associated with your server, enter the following
command:
query drive
Device     On
Type       Line
------     ------
8MM        Yes
8MM        Yes
Updating Drives
You can change the following attributes of a drive by issuing the UPDATE DRIVE
command:
v The element address, if the drive resides in a SCSI library
v The ID of a drive in an ACSLS library
v The cleaning frequency
v Whether the drive is online or offline
For example, to change the element address of a drive named DRIVE3 to 119, issue
the following command:
update drive auto drive3 element=119
If you are reconfiguring your system, you can change the device name of a drive
by issuing the UPDATE PATH command. For example, to change the device name
of a drive named DRIVE3, issue the following command:
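update path server1 drive3 srctype=server desttype=drive library=auto
device=/dev/rmt4
The source server name SERVER1, library name AUTO, and device name /dev/rmt4
shown here are examples; substitute the names in your configuration.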
Note: You cannot change the element number or the device name if a drive is in
use. See Taking Drives Offline. If a drive has a volume mounted, but the
volume is idle, it can be explicitly dismounted. See Dismounting an Idle
Volume on page 152.
Cleaning Drives
The server can control cleaning tape drives in SCSI libraries and offers partial
support for cleaning tape drives in manual libraries. For automated library devices,
you can automate cleaning by specifying the frequency of cleaning operations and
checking a cleaner cartridge into the library's volume inventory. Tivoli Storage
Manager mounts the cleaner cartridge as specified. For manual library devices,
Tivoli Storage Manager issues a mount request for the cleaner cartridge.
Some devices require a small amount of idle time between mount requests to start
drive cleaning. However, Tivoli Storage Manager tries to minimize the idle time for
a drive. The result may be to prevent the library drive cleaning from functioning
effectively. If this happens, try using Tivoli Storage Manager to control drive
cleaning. Set the frequency to match the cleaning recommendations from the
manufacturer.
If you have Tivoli Storage Manager control drive cleaning, disable the library drive
cleaning function to prevent problems. If the library drive cleaning function is
enabled, some devices automatically move any cleaner cartridge found in the
library to slots in the library that are dedicated for cleaner cartridges. An
application does not know that these dedicated slots exist. You will not be able to
check a cleaner cartridge into the Tivoli Storage Manager library inventory until
you disable the library drive cleaning function.
For example, to have DRIVE1 cleaned after 100GB is processed on the drive,
issue the following command:
update drive autolib1 drive1 cleanfrequency=100
After the cleaner cartridge is checked in, the server will mount the cleaner
cartridge in a drive when the drive needs cleaning. The server will use that
cleaner cartridge for the number of cleanings specified. See Checking In
Cleaner Cartridges on page 157 and Operations with Cleaner Cartridges in a
Library on page 157 for more information.
For details on the commands, see Administrators Reference.
The server then requests that the cartridge be placed in the entry/exit port, or
into a specific slot.
v Check in using search, but limit the search by using the VOLRANGE or
VOLLIST parameter:
checkin libvolume autolib1 status=cleaner cleanings=10 search=yes
checklabel=barcode vollist=cleanv
The process scans the library by using the bar-code reader, looking for the
CLEANV volume.
Manual Drive Cleaning in an Automated Library: If your library has limited
capacity and you do not want to use a slot in your library for a cleaner cartridge,
you can still make use of the server's drive cleaning function. Set the cleaning
frequency for the drives in the library. When a drive needs cleaning based on the
frequency setting, the server issues message ANR8914I. For example:
ANR8914I Drive DRIVE1 in library AUTOLIB1 needs to be cleaned.
You can use that message as a cue to manually insert a cleaner cartridge into the
drive. However, the server cannot track whether the drive has been cleaned.
Operations with Cleaner Cartridges in a Library: When a drive needs to be
cleaned, the server runs the cleaning operation after dismounting a data volume if
a cleaner cartridge is checked in to the library. If the cleaning operation fails or is
cancelled, or if no cleaner cartridge is available, then the indication that the drive
needs cleaning is lost. Monitor cleaning messages for these problems to ensure that
drives are cleaned as needed. If necessary, use the CLEAN DRIVE command to
have the server try the cleaning again, or manually load a cleaner cartridge into
the drive.
The server uses a cleaner cartridge for the number of cleanings that you specify
when you check in the cleaner cartridge. If you check in more than one cleaner
cartridge, the server uses one of them for its designated number of cleanings. Then
the server begins to use the next cleaner cartridge.
Visually verify that cleaner cartridges are in the correct storage slots before issuing
any of the following commands:
v AUDIT LIBRARY
v CHECKIN LIBVOLUME with SEARCH specified
v LABEL LIBVOLUME with SEARCH specified
To find the correct slot for a cleaner cartridge, use the QUERY LIBVOLUME
command.
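For example, to list the volumes that are checked into a library named AUTOLIB1
(the library name is an example):
query libvolume autolib1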
Monitor the activity log or the server console for these messages and load a cleaner
cartridge into the drive as needed. The server cannot track whether the drive has
been cleaned.
administrator should issue the command when sufficient time permits. See
Auditing a Library's Volume Inventory on page 147.
Deleting Drives
You can delete a drive by issuing the DELETE DRIVE command.
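For example, to delete a drive named DRIVE3 from a library named AUTOLIB1 (the
names are examples), first delete the path to the drive and then delete the drive:
delete path server1 drive3 srctype=server desttype=drive library=autolib1
delete drive autolib1 drive3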
Managing Paths
You can query, update, and delete paths.
Updating Paths
You can update an existing path by issuing the UPDATE PATH command. The
examples below show how you can use the UPDATE PATH commands for the
following path types:
v Library Paths
Update the path to change the device name for a SCSI library named SCSILIB:
update path server1 scsilib srctype=server desttype=library device=/dev/lb1
v Drive Paths
Update the path to change the device name for a drive named NASDRV1:
update path nas1 nasdrv1 srctype=datamover desttype=drive
library=naslib device=/dev/mt1
Deleting Paths
A path cannot be deleted if the destination is currently in use. Before you can
delete a path to a device, you must delete the device.
For example, to delete a path from the NAS data mover NAS1 to the library
NASLIB, issue the following command:
delete path nas1 naslib srctype=datamover desttype=library
Attention: If you delete the path to a device or make the path offline, you disable
access to that device.
Tape alert messages are generated by tape and library devices to report hardware
errors. A log page is created and can be retrieved at any given time or at a specific
time such as when a drive is dismounted. These messages help to determine
problems that are not related to the IBM Tivoli Storage Manager server.
Tape alert messages are turned off by default. You can set tape alert messages to
ON or OFF by using the SET TAPEALERTMSG command. You can query tape
alert messages by using the QUERY TAPEALERTMSG command.
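For example, to turn on tape alert messages and verify the setting:
set tapealertmsg on
query tapealertmsg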
In this chapter, most examples illustrate how to perform tasks by using a Tivoli
Storage Manager command-line interface. For information about the commands,
see Administrators Reference, or issue the HELP command from the command line
of a Tivoli Storage Manager administrative client.
Tivoli Storage Manager tasks can also be performed from the administrative Web
interface. For more information about using the administrative interface, see Quick
Start.
For sequential access storage, Tivoli Storage Manager supports the following
device types:
Device Type      Media Type                 Device Examples
3570
3590
4MM              4mm cartridges             IBM 7206-005
8MM              8mm cartridges
CARTRIDGE        Tape cartridges
DLT
DTF
ECARTRIDGE       Tape cartridges
FILE             Server
GENERICTAPE      Tape cartridges
LTO
NAS              Unknown
OPTICAL          5.25-inch rewritable
                 optical cartridges
QIC              Quarter-inch tape          IBM 7207
                 cartridges
REMOVABLEFILE
SERVER
VOLSAFE          Write-once read-many
                 (WORM) tape cartridges
WORM             5.25-inch write-once
                 read-many (WORM)
                 optical cartridges
WORM12           12-inch write-once
                 read-many optical
                 cartridges
WORM14           14-inch write-once
                 read-many optical
                 cartridges
For all device types other than FILE or SERVER, you must define libraries and
drives to Tivoli Storage Manager before you define the device classes.
You can define multiple device classes for each device type. For example, you may
need to specify different attributes for different storage pools that use the same
type of tape drive. Variations may be required that are not specific to the device,
but rather to how you want to use the device (for example, mount retention or
mount limit).
Tivoli Storage Manager now allows SCSI libraries to include tape drives of more
than one device type. When you define the device class in this environment, you
must declare a value for the FORMAT parameter. See Configuration with Multiple
Drive Device Types on page 74 and Mixing Device Types in Libraries on page 70
for additional information.
If you include the DEVCONFIG option in the dsmserv.opt file, the files that you
specify with that option are automatically updated whenever a device class,
library, or drive is defined, updated, or deleted.
The following sections explain the device classes for each supported device type.
Mount Limit
The MOUNTLIMIT parameter specifies the maximum number of volumes that can
be simultaneously mounted for a device class. You can limit the number of drives
that the device class has access to at one time with the MOUNTLIMIT parameter.
The default mount limit value is DRIVES. The DRIVES parameter indicates that
every time a mount point is allocated, the number of drives online and defined to
the library is used to calculate the true mount limit value. The maximum value for
this parameter is 256 and the minimum value is 0. A zero value prevents new
transactions from gaining access to the storage pool.
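For example, to define a device class that can use at most two drives at one time
(the device class and library names are examples):
define devclass 8mmclass devtype=8mm library=autolib mountlimit=2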
When selecting a mount limit for a device class, be sure to consider the following
questions:
v How many storage devices are connected to your system?
Do not specify a mount limit value that is greater than the number of associated
available drives in your installation. If the server tries to mount as many
volumes as specified by the mount limit and no drives are available for the
required volume, an error occurs and client sessions may be terminated. (This
does not apply when the DRIVES parameter is specified.)
v Are you using the simultaneous write function to primary and copy storage
pools?
Specify a mount limit value that provides a sufficient number of mount points to
support a simultaneous write to the primary storage pool and all associated
copy storage pools.
v Are you associating multiple device classes with a single library?
A device class associated with a library can use any drive in the library that is
compatible with the device class device type. Because you can associate more
than one device class with a library, a single drive in the library can be used by
more than one device class. However, Tivoli Storage Manager does not manage
how a drive is shared among multiple device classes.
When you associate multiple device classes of the same device type with a
library, add up the mount limits for all these device classes. Ensure that this sum
is no greater than the number of compatible drives.
v How many Tivoli Storage Manager processes do you want to run at the same
time, using devices in this device class?
Tivoli Storage Manager automatically cancels some processes to run other,
higher priority processes. If the server is using all available drives in a device
class to complete higher priority processes, lower priority processes must wait
until a drive becomes available. For example, Tivoli Storage Manager cancels the
process for a client backing up directly to tape if the drive being used is needed
for a server migration or tape reclamation process. Tivoli Storage Manager
cancels a tape reclamation process if the drive being used is needed for a client
restore operation. For additional information, see Preemption of Client or
Server Operations on page 396.
If processes are often canceled by other processes, consider whether you can
make more drives available for Tivoli Storage Manager use. Otherwise, review
your scheduling of operations to reduce the contention for drives.
This consideration also applies to the primary and copy storage pool
simultaneous write function. You must have enough drives available to allow for
a successful simultaneous write.
Note: If the library associated with this device class is EXTERNAL type, it is
recommended that you explicitly specify the mount limit instead of using
MOUNTLIMIT=DRIVES.
Recording Format
The FORMAT parameter specifies the recording format used by Tivoli Storage
Manager when writing data to removable media. See the Administrators Reference
for information about the recording formats for each device type.
Specify the FORMAT=DRIVE parameter only if all drives associated with that
device class are identical. If some drives associated with the device class support a
higher density format than others and you specify FORMAT=DRIVE, mount errors
can occur. For example, suppose a device class uses two incompatible devices such
as an IBM 7208-2 and an IBM 7208-12. The server might select the high-density
recording format of 8500 for each of two new volumes. Later, if the two volumes
are to be mounted concurrently, one fails because only one of the drives is capable
of the high-density recording format.
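For example, to force a single recording format for such a device class instead of
using FORMAT=DRIVE (the device class name and format are examples):
update devclass 8mmclass format=8500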
|
|
|
|
Tivoli Storage Manager now supports drives with different tape technologies in a
single SCSI library. You must specify a value for the FORMAT parameter in this
configuration. See Configuration with Multiple Drive Device Types on page 74
for an example.
The recording format that Tivoli Storage Manager uses for a given volume is
selected when the first piece of data is written to the volume. Updating the
FORMAT parameter does not affect media that already contain data until those
media are rewritten from the beginning. This process may happen after a volume
is reclaimed or deleted, or after all of the data on the volume expires.
Estimated Capacity
The ESTCAPACITY parameter specifies the estimated capacity for volumes
assigned to this device class. Tivoli Storage Manager estimates the capacity of the
volumes in a storage pool based on the parameters assigned to the device class
associated with the storage pool. For tape device classes, the default values
selected by the server depend on the recording format used to write data to the
volume. You can either accept the default for a given device type or specify a
value.
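For example, to specify an estimated capacity of 9GB for volumes in a device class
(the device class name and value are examples):
update devclass 8mmclass estcapacity=9g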
Chapter 8. Defining Device Classes
Library
Before the server can mount a volume, it must know which drives can be used to
satisfy the mount request. You provide this information by specifying a library
when you define the device class. The library must contain drives that can be
used to mount the volume.
Only one library can be associated with a given device class. However, multiple
device classes can reference the same library. Unless you are using the DRIVES
value for MOUNTLIMIT, you must ensure that the numeric value of the mount
limits of all device classes do not exceed the number of drives defined in the
referenced library.
There is no default value for this parameter. It is required, and so must be
specified when the device class is defined.
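For example, the following defines a device class that uses the drives in a
previously defined library (the names are illustrative):
   define devclass dltclass devtype=dlt library=dltlib mountlimit=2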
The device types for optical devices are OPTICAL (rewritable optical media),
WORM (write-once, read-many optical media), WORM12 (12-inch WORM media),
and WORM14 (14-inch WORM media).
Other parameters specify how to manage data storage operations involving the
new device class:
Mount Limit
See Mount Limit on page 165.
Mount Wait Period
See Mount Wait Period on page 166.
Mount Retention
See Mount Retention Period on page 166.
Recording Format
See Recording Format on page 167.
Estimated Capacity
See Estimated Capacity on page 167.
Library
See Library on page 168.
You can update the device class information by issuing the UPDATE DEVCLASS
command.
Removable file devices include devices such as Iomega Zip drives, Iomega Jaz
drives, SyQuest drives, and CD-ROM drives. Volumes in this device class are
sequential access volumes. Define a device class for these devices by issuing the
DEFINE DEVCLASS command with the DEVTYPE=REMOVABLEFILE parameter.
To access volumes that belong to this device class, the server requests that the
removable media be mounted in drives. The server then opens a file on the media
and reads or writes the file data. The server does not write directly to CD-ROM
media. Removable media is treated as single-sided media. Two-sided media are
treated as two separate volumes.
The server recognizes that the media can be removed and that additional media
can be inserted. This is subject to limits set with the MOUNTLIMIT parameter for
the device class and the MAXSCRATCH parameter for the storage pool.
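For example, the following defines a manual library and a removable file device
class that uses it (the names are illustrative):
   define library rmlib libtype=manual
   define devclass rmclass devtype=removablefile library=rmlib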
When using CD-ROM media for the REMOVABLEFILE device type, the library
type must be specified as MANUAL. Access this media through a mount point, for
example, /dev/cdx, where x is a number assigned by your operating system.
Use the device manufacturer's utilities to format (if necessary) and label the
media. The following restrictions apply:
v The label on the media must be no more than 11 characters.
v The volume label and the file name on the media must be the same.
See Configuring Removable File Devices on page 98 for more information.
Other parameters specify how to manage storage operations involving this device
class:
Mount Wait
See Mount Wait Period on page 166.
Mount Retention
See Mount Retention Period on page 166.
Library
See Library on page 168.
Maximum Capacity
You can specify a maximum capacity value that restricts the size of
volumes (that is, files) associated with a REMOVABLEFILE device class.
Use the MAXCAPACITY parameter with the DEFINE DEVCLASS
command.
Because the server opens only one file per physical removable medium,
specify a value such that the one file makes full use of your media
capacity. When the server detects that a volume has reached a size equal to
the maximum capacity, it treats the volume as full and stores any new data
on a different volume.
The default MAXCAPACITY value for a REMOVABLEFILE device class is
the remaining space in the file system where the removable media volume
is added to Tivoli Storage Manager.
The MAXCAPACITY parameter must be set to a value less than the capacity
of the media. For CD-ROM media, the maximum capacity cannot exceed
650 MB.
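For example, to set the maximum capacity for a device class used with CD-ROM
media (the device class name is illustrative):
   update devclass rmclass maxcapacity=650M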
Two-Sided
Two-sided media is treated as two individual volumes in this device
class. Define each side of double-sided media as a separate volume.
You can update the device class information by issuing the UPDATE DEVCLASS
command.
Note: Do not use raw partitions with a device class type of FILE.
When you define or update the FILE device class, you can specify the parameters
described in the following sections.
Mount Limit
The mount limit value for FILE device classes is used to restrict the number of
mount points (volumes or files) that can be concurrently opened for access by
server storage and retrieval operations. Any attempt to access more volumes than
the mount limit allows causes the requester to wait. The default value is 1.
The maximum value for this parameter is 256.
Note: The MOUNTLIMIT=DRIVES parameter is not valid for the FILE device
class.
When selecting a mount limit for this device class, consider how many Tivoli
Storage Manager processes you want to run at the same time.
Tivoli Storage Manager automatically cancels some processes to run other, higher
priority processes. If the server is using all available mount points in a device class
to complete higher priority processes, lower priority processes must wait until a
mount point becomes available. For example, Tivoli Storage Manager cancels the
process for a client backup if the mount point being used is needed for a server
migration or reclamation process. Tivoli Storage Manager cancels a reclamation
process if the mount point being used is needed for a client restore operation. For
additional information, see Preemption of Client or Server Operations on
page 396.
If processes are often cancelled by other processes, consider whether you can make
more mount points available for Tivoli Storage Manager use. Otherwise, review
your scheduling of operations to reduce the contention for resources.
Directory
You can specify the directory location of the files used in the FILE device class. The
default is the current working directory of the server at the time the command is
issued, unless the DSMSERV_DIR environment variable is set. For more
information on setting the environment variable, refer to Quick Start.
The directory name identifies the location where the server places the files that
represent storage volumes for this device class. While processing the command, the
server expands the specified directory name into its fully qualified form, starting
from the root directory.
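For example, the following defines a FILE device class that places volumes in a
dedicated directory and allows four concurrent mount points (the names and
values are illustrative):
   define devclass fileclass devtype=file directory=/tsm/filevols
   mountlimit=4 maxcapacity=2G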
Later, if the server needs to allocate a scratch volume, it creates a new file in this
directory. The file name extension that the server assigns to a scratch volume
depends on the type of data that is stored.
Server Name
The Tivoli Storage Manager server on which you define a SERVER device class is
called a source server. The source server uses the SERVER device class to store
data on another Tivoli Storage Manager server, called a target server.
When defining a SERVER device class, specify the name of the target server. The
target server must already be defined by using the DEFINE SERVER command.
See Using Virtual Volumes to Store Data on Another Server on page 505 for more
information.
Mount Limit
Use the mount limit value for SERVER device classes to restrict the number of
simultaneous sessions between the source server and the target server. Any
attempt to access more sessions than the mount limit allows causes the requester
to wait. The default mount limit value is 1. The maximum value for this
parameter is 256.
Note: The MOUNTLIMIT=DRIVES parameter is not valid for the SERVER device
class.
When selecting a mount limit, consider your network load balancing and how
many Tivoli Storage Manager processes you want to run at the same time.
Tivoli Storage Manager automatically cancels some processes to run other, higher
priority processes. If the server is using all available sessions in a device class to
complete higher priority processes, lower priority processes must wait until a
session becomes available. For example, Tivoli Storage Manager cancels the
process for a client backup if the session being used is needed for a server
migration or reclamation process.
Mount Retention
You can specify the amount of time, in minutes, to retain an idle sequential access
volume before dismounting it. The default value is 60. The maximum value you
can specify for this parameter is 9999. A value of 1 to 5 minutes is recommended.
This parameter can improve response time for sequential access media mounts by
leaving previously mounted volumes online.
Prefix
You can specify a prefix that the source server will use as the beginning portion of
the high-level archive file name on the target server.
Retry Period
You can specify a retry period for communications with the target server. When
there is a communications failure, this period determines the amount of time
during which the source server continues to attempt to connect to the target server.
Retry Interval
You can specify how often the source server tries to connect to the target server
when there is a communications failure. During the retry period, the source server
tries to connect again as often as indicated by the retry interval.
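For example, after the target server has been defined with the DEFINE SERVER
command, the following defines a SERVER device class that uses it (the names
and values are illustrative):
   define devclass srvclass devtype=server servername=tgtsrv
   mountlimit=2 mountretention=5 retryperiod=60 retryinterval=30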
To use StorageTek VolSafe brand media and drives, you must define a device class
by issuing the DEFINE DEVCLASS command with a DEVTYPE=VOLSAFE
parameter. This technology uses media that cannot be overwritten. Because of this,
do not use this media for short-term backups of client files, the server database, or
export tapes.
You can use this device class with EXTERNAL, SCSI, and ACSLS libraries. All
drives in a library must be enabled for VolSafe use.
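For example, the following defines a VolSafe device class for a SCSI library whose
drives are all VolSafe-enabled (the names are illustrative; specify an explicit
FORMAT value if the library contains drives with different tape technologies):
   define devclass volsafeclass devtype=volsafe library=stklib format=drive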
Other parameters specify how to manage data storage operations involving this
device class:
Mount Limit
See Mount Limit on page 165.
Mount Retention
See Mount Retention Period on page 166.
Recording Format
See Recording Format on page 167.
Estimated Capacity
See Estimated Capacity on page 167.
Library
If any drives in a library are VolSafe-enabled, all drives in the library must
be VolSafe-enabled. Consult your hardware documentation to enable
VolSafe function on the StorageTek 9840 drives. Attempting to write to
VolSafe media without a VolSafe-enabled drive results in errors. The media
needs to be loaded into a drive during the check-in process to determine
whether it is WORM or read-write.
Only one library can be associated with a given device class. However,
multiple device classes can reference the same library. Unless you are using
the DRIVES value for MOUNTLIMIT, you must ensure that the numeric
value of the mount limits of all device classes do not exceed the number of
drives defined in the referenced library.
This parameter is required and must be specified when the device class is
defined.
You can update the device class information by issuing the UPDATE DEVCLASS
command.
Any administrator can display information about device classes by issuing the
QUERY DEVCLASS command. The following shows example output in the
standard format:

Device     Device      Storage  Device   Format    Est/Max    Mount
Class      Access      Pool     Type               Capacity   Limit
Name       Strategy    Count                       (MB)
---------  ----------  -------  -------  --------  ---------  ------
DISK       Random         9
TAPE8MM    Sequential     1     8MM      8200      5,000.0       2
FILE       Sequential     1     FILE     DRIVE                   1
GEN1       Sequential     2     LTO      ULTRIUM              DRIVES

To display complete information about a device class, use the FORMAT=DETAILED
parameter. For example, QUERY DEVCLASS GEN1 FORMAT=DETAILED displays
output similar to the following:

            Device Class Name: GEN1
       Device Access Strategy: Sequential
           Storage Pool Count: 2
                  Device Type: LTO
                       Format: ULTRIUM
                  Mount Limit: DRIVES
             Mount Wait (min): 60
        Mount Retention (min): 60
                 Label Prefix: ADSM
                      Library: GEN2LIB
Last Update by (administrator): ADMIN
        Last Update Date/Time: 01/23/03 12:25:31
You can delete a device class with the DELETE DEVCLASS command when:
v No storage pools are assigned to the device class. For information on deleting
storage pools, see Deleting a Storage Pool on page 246.
v The device class is not being used by an export or import process.
Note: You cannot delete the DISK device class from the server.
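For example, after verifying that no storage pools reference it and that no export
or import process is using it, you could remove an unused device class (the name
is illustrative):
   delete devclass dltclass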
If you specify an estimated capacity that exceeds the actual capacity of the volume
in the device class, Tivoli Storage Manager updates the estimated capacity of the
volume when the volume becomes full. When Tivoli Storage Manager reaches the
end of the volume, it updates the capacity for the amount that is written to the
volume.
You can either accept the default estimated capacity for a given device class, or
explicitly specify an estimated capacity. An accurate estimated capacity value is not
required, but is useful. Tivoli Storage Manager uses the estimated capacity of
volumes to determine the estimated capacity of a storage pool, and the estimated
percent utilized. You may want to change the estimated capacity if:
v The default estimated capacity is inaccurate because data compression is being
performed by the drives.
v You have volumes of nonstandard size.
The advantages and disadvantages of each type of compression are:
v Tivoli Storage Manager client compression. Advantage: reduced load on the
  network. Disadvantages: higher CPU usage by the client, and longer elapsed
  time for client operations such as backup.
v Drive compression. Advantage: compression is performed by the tape
  hardware, with no added load on the client or the network. Disadvantage: the
  space savings depend on the data; files that the client has already compressed
  usually cannot be compressed much further by the drive.
Either type of compression can affect tape drive performance, because compression
affects data rate. When the rate of data going to a tape drive is slower than the
drive can write, the drive starts and stops while data is written, which results in
relatively poor performance. When the rate of data is fast enough, the tape drive
can reach streaming mode, which results in better performance. If tape drive
performance is more important than the space savings that compression can
provide, you may want to perform timed test backups using different approaches
to determine what is best for your system.
Drive compression is specified with the FORMAT parameter for the drives device
class, and the hardware device must be able to support the compression format.
For information about how to set up compression on the client, see Node
Compression Considerations on page 253 and Registering Nodes with the
Server on page 252.
How Tivoli Storage Manager views the capacity of a volume depends on whether
files are compressed by the client or by the storage device. It may wrongly appear
that you are not getting the full use of the capacity of your tapes, for the
following reasons:
v A tape device manufacturer often reports the capacity of a tape based on an
assumption of compression by the device. If a client compresses a file before it is
sent, the device may not be able to compress it any further before storing it.
v Tivoli Storage Manager records the size of a file as it goes to a storage pool. If
the client compresses the file, Tivoli Storage Manager records this smaller size in
the database. If the drive compresses the file, Tivoli Storage Manager is not
aware of this compression.
Figure 17 on page 178 compares what Tivoli Storage Manager sees as the amount of
data stored on tape when compression is done by the device and by the client. For
this example, the tape has a physical capacity of 1.2 GB. However, the
manufacturer reports the capacity of the tape as 2.4 GB by assuming the device
compresses the data by a factor of two.
Suppose a client backs up a 2.4 GB file:
v When the client does not compress the file, the server records the file size as 2.4
GB, the file is compressed by the drive to 1.2 GB, and the file fills up one tape.
v When the client compresses the file, the server records the file size as 1.2 GB, the
file cannot be compressed any further by the drive, and the file still fills one
tape.
In both cases, Tivoli Storage Manager considers the volume to be full. However,
Tivoli Storage Manager considers the capacity of the volume in the two cases to be
different: 2.4 GB when the drive compresses the file, and 1.2 GB when the client
compresses the file. Use the QUERY VOLUME command to see the capacity of
volumes from Tivoli Storage Manager's viewpoint. See Monitoring the Use of
Storage Pool Volumes on page 225.
Figure 17. Comparing Compression at the Client and Compression at the Device
Tasks:
Defining or Updating Primary Storage Pools on page 182
Task Tips for Storage Pools on page 186
Preparing Volumes for Random Access Storage Pools on page 190
Preparing Volumes for Sequential Access Storage Pools on page 190
Defining Storage Pool Volumes on page 191
Updating Storage Pool Volumes on page 192
Setting Up a Storage Pool Hierarchy on page 195
Monitoring Storage Pools and Volumes on page 223
Monitoring the Use of Storage Pool Volumes on page 225
Moving Files from One Volume to Another Volume on page 237
Tasks:
Moving Data for a Client Node on page 241
Renaming a Storage Pool on page 244
Defining a Copy Storage Pool on page 244
Deleting a Storage Pool on page 246
Deleting Storage Pool Volumes on page 247
In this chapter, most examples illustrate how to perform tasks by using a Tivoli
Storage Manager command-line interface. For information about the commands,
see Administrator's Reference, or issue the HELP command from the command line
of a Tivoli Storage Manager administrative client.
Tivoli Storage Manager tasks can also be performed from the administrative Web
interface. For more information about using the administrative interface, see Quick
Start.
space-managed files. Clients are likely to require fast access to their space-managed
files. Therefore, you may want to have those files stored in a separate storage pool
that uses your fastest disk storage.
Figure: An example of server storage. The server's primary storage pools (disk
storage pools for backup, archive, and HSM data, and a tape storage pool below
them) are backed up to copy storage pools, which can be moved to offsite storage.
The server also maintains its database and recovery log.
The section also provides examples of defining and updating storage pools.
To define storage pools, an administrator must have system privilege.
When you define a primary storage pool, be prepared to provide some or all of the
information that is shown in Table 14. Most of the information is optional. Some
information applies only to random access storage pools or only to sequential
access storage pools. Required parameters are marked.
Table 14. Information for Defining a Storage Pool

Device class (Required)
   Type of storage pool: random, sequential
   The name of the device class assigned for the storage pool.

Pool type (Required)
   Type of storage pool: random, sequential
   The type of storage pool that you are defining: primary or copy.

Maximum number of scratch volumes
   Type of storage pool: sequential
   When you specify a value greater than zero, the server dynamically
   acquires scratch volumes when needed, up to this maximum number.
   For automated libraries, set this value equal to the physical capacity of the
   library. See Maintaining a Supply of Scratch Volumes in an Automated
   Library on page 148.

Access mode
   Type of storage pool: random, sequential
   Defines access to volumes in the storage pool for user operations (such as
   backup and restore) and system operations (such as reclamation and
   server migration). Possible values are:
   Read/Write
      User and system operations can read from or write to the
      volumes.
   Read-Only
      User operations can read from the volumes, but not write. Server
      processes can move files within the volumes in the storage pool.
      However, no new writes are permitted to volumes in the storage
      pool from volumes outside the storage pool.
   Unavailable
      User operations cannot get access to volumes in the storage pool.
      No new writes are permitted to volumes in the storage pool from
      other volumes outside the storage pool. However, system
      processes (like reclamation) are permitted to move files within the
      volumes in the storage pool.

Maximum file size
   Type of storage pool: random, sequential
   To exclude large files from a storage pool, set a maximum file size. The
   maximum file size applies to the size of a physical file (a single client file
   or an aggregate of client files).
   Do not set a maximum file size for the last storage pool in the hierarchy
   unless you want to exclude very large files from being stored in server
   storage.
Cyclic Redundancy Check (CRC)
   Type of storage pool: random, sequential
   Specifies whether the server uses CRC to validate storage pool data during
   audit volume processing. For additional information see Data Validation
   During Audit Volume Processing on page 574.

Next storage pool
   Type of storage pool: random, sequential
   Specifies the name of the next storage pool in the storage pool hierarchy,
   where files can be migrated or stored. See Overview: The Storage Pool
   Hierarchy on page 194. When defining copy storage pools to primary
   pools that have defined next pools, the copy pool list for each primary
   pool should be the same. Defining different copy pool lists can cause
   resources to be freed when failing over to the next pool. If the resources
   are freed, it can delay the completion of client operations.

Migration thresholds
   Type of storage pool: random, sequential
   Specifies the high and low utilization thresholds that control when the
   server migrates files from this storage pool to the next storage pool.

Migration processes
   Type of storage pool: random
   Specifies the number of processes that are used for migrating files from
   this storage pool. See Migration for Disk Storage Pools on page 200.

Migration delay
   Type of storage pool: random, sequential
   Specifies the minimum number of days that a file must remain in the
   storage pool before it is eligible for migration.

Continue migration process
   Type of storage pool: random, sequential
   Specifies whether the server continues migration when files do not satisfy
   the migration delay.

Cache
   Type of storage pool: random
   Specifies whether the server keeps a cached copy of a file on disk after
   migrating the file to the next storage pool.

Collocation
   Type of storage pool: sequential
   With collocation enabled, the server attempts to keep all files belonging to
   a client node or a client file space on a minimal number of sequential
   access storage volumes. See Keeping a Client's Files Together:
   Collocation on page 208.

Reclamation threshold
   Type of storage pool: sequential
   Specifies when to reclaim volumes in the storage pool, based on the
   percentage of reclaimable space on a volume.

Reclamation storage pool
   Type of storage pool: sequential
   Specifies the name of the storage pool to be used for storing data from
   volumes being reclaimed in this storage pool. Use for storage pools whose
   device class only has one drive or mount point. See Reclaiming Volumes
   in a Storage Pool with One Drive on page 217.

Reuse delay
   Type of storage pool: sequential
   Specifies the number of days that must elapse after all of the files have
   been deleted from a volume, before the volume can be rewritten or
   returned to the scratch pool. See Delaying Reuse of Volumes for Recovery
   Purposes on page 553.
Overflow location
   Type of storage pool: sequential
   Specifies the name of a location where volumes are stored when they are
   ejected from an automated library by the MOVE MEDIA command. Use
   for a storage pool that is associated with an automated library or an
   external library. See Managing a Full Library on page 146.

Data format
   Type of storage pool: sequential
   The format in which data will be stored. NATIVE is the default data
   format. NETAPPDUMP and NONBLOCK are examples of other data
   formats.

Copy storage pools
   Type of storage pool: sequential
   Specifies the names of copy storage pools where the server simultaneously
   writes data when a client backup, archive, or migration operation stores
   data to the primary storage pool. The server writes the data
   simultaneously to all listed copy storage pools. This option is restricted to
   primary storage pools using NATIVE or NONBLOCK data format. See the
   Copy Continue entry and Simultaneous Write to a Primary Storage Pool
   and Copy Storage Pools on page 187 for related information.
   Notes:
   1. The COPYSTGPOOLS parameter is not intended to replace the
      BACKUP STGPOOL command. If you use the copy storage pools
      function, ensure that the copy of the storage pool is complete by using
      the BACKUP STGPOOL command.
   2. When defining copy storage pools to primary pools that have defined
      next pools, the copy storage pool list for each primary storage pool
      should be the same. Defining different copy storage pool lists can
      cause resources to be freed when failing over to the next pool. If the
      resources are freed, it can delay the completion of client operations.

Copy continue
   Type of storage pool: sequential
   Specifies how the server should react to a copy storage pool write failure
   for any of the copy storage pools listed in the COPYSTGPOOLS
   parameter. With a value of YES, during a write failure, the server will
   exclude the failing copy storage pool from any further writes while that
   specific client session is active. With a value of NO, during a write failure,
   the server will fail the entire transaction including the write to the primary
   storage pool.
v No limit is set for the maximum file size, because this is the last storage pool
  in the hierarchy.
v To group files from the same client on a small number of volumes, use
  collocation at the client node level.
v Use scratch volumes for this pool, with a maximum number of 100 volumes.
v The access mode is the default, read/write.
v Use the default for reclamation: reclaim a partially full volume (to allow tape
  reuse) when 60% of the volume's space can be reclaimed.
You can define the storage pools in a storage pool hierarchy from the top down or
from the bottom up. Defining the hierarchy from the bottom up requires fewer
steps. To define the hierarchy from the bottom up, perform the following steps:
1. Define the storage pool named BACKTAPE with the following command:
define stgpool backtape tape
description='tape storage pool for engineering backups'
maxsize=nolimit collocate=yes maxscratch=100
2. Define the storage pool named ENGBACK1 with the following command:
define stgpool engback1 disk
description='disk storage pool for engineering backups'
maxsize=5m nextstgpool=backtape highmig=85 lowmig=40
Restrictions:
1. You cannot establish a chain of storage pools that leads to an endless loop. For
example, you cannot define StorageB as the next storage pool for StorageA, and
then define StorageA as the next storage pool for StorageB.
2. The storage pool hierarchy includes only primary storage pools, not copy
storage pools.
3. If a storage pool uses the data format NETAPPDUMP or CELERRADUMP, the
server will not perform storage pool backup, migration, reclamation, MOVE
DATA, and AUDIT VOLUME on that storage pool. For more information on
these data formats, see Chapter 6, Using NDMP for Operations with NAS File
Servers, on page 111.
Note:
v You cannot use this command to change the data format for a storage
  pool.
v For storage pools that have the NETAPPDUMP or the CELERRADUMP
  data format, you can modify only the following parameters:
  DESCRIPTION, ACCESS, COLLOCATE, MAXSCRATCH, REUSEDELAY.
To have clients back up directly to the tape storage pool, change the policy that
the clients use so that the backup copy group points to the tape storage pool as
the destination. See Changing Policy on page 300.
Use of the simultaneous write function is not intended to replace regular backups
of storage pools. If you use the function to simultaneously write to copy storage
pools, ensure that the copy of each primary storage pool is complete by regularly
issuing the BACKUP STGPOOL command.
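For example, the following backs up a primary storage pool to a copy storage
pool (the pool names are illustrative):
   backup stgpool tapepool copypool maxprocess=2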
Restrictions:
1. This option is restricted to primary storage pools that use NATIVE or
NONBLOCK data format.
2. A storage agent ignores the list of copy storage pools. Simultaneous write to
copy storage pools does not occur when the operation is using LAN-free data
movement.
Note: For primary storage pools that are part of a storage hierarchy (next storage
pools are defined), make the copy pool list for each primary storage pool the
same. Defining different lists of copy storage pools can cause resources to be
freed when the server uses the next storage pool. If the resources are freed,
it can delay the completion of client operations.
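For example, the following enables the simultaneous write function for a primary
storage pool (the pool names are illustrative):
   update stgpool tapepool copystgpools=copypool copycontinue=yes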
Each volume defined in a sequential access storage pool must be of the same type
as the device type of the associated device class. See Table 16 for the type of
volumes associated with each device type.
For preparing sequential access volumes, see Preparing Volumes for Sequential
Access Storage Pools on page 190.
Table 16. Volume Types
v 3570 (label required)
v 3590 (label required)
v 4MM (label required)
v 8MM (label required)
v CARTRIDGE (label required)
v DLT (label required)
v DTF (label required)
v ECARTRIDGE: a cartridge tape that is used by a tape drive such as the
  StorageTek SD-3 or 9490 tape drive (label required)
v FILE (no label required)
v GENERICTAPE: a tape that is compatible with the drives that are defined to
  the device class (label required)
v LTO (label required)
v NAS: a tape that is used for backups via NDMP by a network-attached
  storage (NAS) file server (label required)
v OPTICAL (label required)
v QIC (label required)
v REMOVABLEFILE: a file on a removable medium; if the medium has two
  sides, each side is a separate volume (label required)
v SERVER: one or more objects that are archived in the server storage of
  another server (no label required)
v VOLSAFE: a StorageTek cartridge tape that is for write-once use on tape
  drives that are enabled for the VolSafe function (no label required)
v WORM (label required)
v WORM12 (label required)
v WORM14 (label required)
The server tracks whether a volume being used was originally a scratch volume.
Scratch volumes that the server acquired for a primary storage pool are deleted
from the server database when they become empty. The volumes are then available
for reuse by the server or other applications. For scratch volumes that were
acquired in a FILE device class, the space that the volumes occupied is freed by the
server and returned to the file system.
Scratch volumes in a copy storage pool are handled in the same way as scratch
volumes in a primary storage pool, except for volumes with the access value of
offsite. If an offsite volume becomes empty, the server does not immediately return
the volume to the scratch pool. The delay prevents the empty volumes from being
deleted from the database, making it easier to determine which volumes should be
returned to the onsite location. The administrator can query the server for empty
offsite copy storage pool volumes and return them to the onsite location. The
volume is returned to the scratch pool only when the access value is changed to
READWRITE, READONLY, or UNAVAILABLE.
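For example, the following finds the empty offsite volumes in a copy storage pool
and then returns one of them to the scratch pool by changing its access mode (the
volume and pool names are illustrative):
   query volume stgpool=copypool access=offsite status=empty
   update volume vol027 access=readwrite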
Prepare a volume for use in a random access storage pool by defining the volume.
For example, suppose you want to define a 21MB volume for the BACKUPPOOL
storage pool. You want the volume to be located in the path /usr/lpp/adsmserv/bin
and named stgvol.001. Enter the following command:
define volume backuppool /usr/lpp/adsmserv/bin/stgvol.001 formatsize=21
If you do not specify a full path name for the volume name, the command uses the
current path.
Notes:
1. Define storage pool volumes on disk drives that reside on the Tivoli Storage
Manager server machine, not on remotely mounted file systems.
Network-attached drives can compromise the integrity of the data that you are
writing.
2. This one-step process replaces the former two-step process of first formatting a
volume (using DSMFMT) and then defining the volume. If you choose to use
the two-step process, the DSMFMT utility is available from the operating
system command line. See Administrator's Reference for details.
Another option for preparing a volume is to create a raw logical volume by using
SMIT.
For sequential access storage pools with other than a FILE or SERVER device type,
you must prepare volumes for use. When the server accesses a sequential access
volume, it checks the volume name in the header to ensure that the correct volume
is being accessed. To prepare a volume:
1. Label the volume. Table 16 on page 189 shows the types of volumes that require
labels. You must label those types of volumes before the server can use them.
See Labeling Removable Media Volumes on page 134.
Tip: When you use the LABEL LIBVOLUME command with drives in an
automated library, you can label and check in the volumes with one
command.
2. For storage pools in automated libraries, use the CHECKIN LIBVOLUME
command to check the volume into the library. See Checking New Volumes
into a Library on page 137.
3. If you have not allowed scratch volumes in the storage pool, you must identify
the volume, by name, to the server. For details, see Defining Storage Pool
Volumes.
If you allowed scratch volumes in the storage pool by specifying a value
greater than zero for the MAXSCRATCH parameter, you can let the server use
scratch volumes, identify volumes by name, or do both. See Using Scratch
Volumes for information about scratch volumes.
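For example, with drives in an automated library, the following labels the
volumes found in the library and checks them in as scratch volumes in one step
(the library name is illustrative):
   label libvolume autolib search=yes labelsource=barcode checkin=scratch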
When you define a storage pool volume, you inform the server that the volume is
available for storing backup, archive, or space-managed data.
For a sequential access storage pool, the server can use dynamically acquired
scratch volumes, volumes that you define, or a combination.
To define a volume named VOL1 in the ENGBACK3 tape storage pool, enter:
define volume engback3 vol1
Each volume used by a server for any purpose must have a unique name. This
requirement applies to all volumes, whether the volumes are used for storage
pools, or used for operations such as database backup or export. The requirement
also applies to volumes that reside in different libraries but that are used by the
same server.
Before the server can use a scratch volume with a device type other than FILE or
SERVER, the volume must have a standard label. See Preparing Volumes for
Sequential Access Storage Pools on page 190.
To update volumes, an administrator must have system or operator privilege.
You can update the attributes of a storage pool volume assigned to a primary or
copy storage pool. Update a volume to:
v Reset any error state for a volume, by updating the volume to an access mode of
read/write.
v Change the access mode of a volume, for example if a tape cartridge is moved
offsite (offsite access mode) or damaged (destroyed access mode). See Access
Modes for Storage Pool Volumes on page 193.
v Change the location for a volume in a sequential access storage pool.
For example, if you accidentally damage a volume, you can change the access
mode to unavailable so that the server does not try to write data to or read
data from the volume. If the volume name is VOL1, enter the following
command:
update volume vol1 access=unavailable
When using the UPDATE VOLUME command, be prepared to supply some or all
of the information shown in Table 17.
Table 17. Information for Updating a Storage Pool Volume

Volume name (Required)
Specifies the name of the storage pool volume to be updated. You can
specify a group of volumes to update by using wildcard characters in
the volume name. You can also specify a group of volumes by
specifying the storage pool, device class, current access mode, or status
of the volumes you want to update. See the parameters that follow.

New access mode
Specifies the new access mode for the volume (how users and server
processes such as migration can access files in the storage pool volume).
See Access Modes for Storage Pool Volumes on page 193 for
descriptions of access modes.

A random access volume must be varied offline before you can change
its access mode to unavailable or destroyed. To vary a volume offline, use
the VARY command. See Varying Disk Volumes Online or Offline on
page 55.

If a scratch volume that is empty and has an access mode of offsite is
updated so that the access mode is read/write, read-only, or unavailable,
the volume is deleted from the database.
Location
Specifies the new location for the volume. Applies to volumes in
sequential access storage pools.

Storage pool
Restricts the update to volumes in the specified storage pool.

Device class
Restricts the update to volumes in the specified device class.

Current access mode
Restricts the update to volumes that currently have the specified access
mode.

Status
Restricts the update to volumes with the specified status (online, offline,
empty, pending, filling, or full).

Preview
Specifies whether to preview the update operation without actually
updating any volumes.
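The filtering parameters can be combined on a single UPDATE VOLUME command.
For example, to preview an update that would make all full, read-only
volumes in a hypothetical pool named TAPEPOOL writable again (the pool name
is illustrative):

update volume * access=readwrite wherestgpool=tapepool whereaccess=readonly wherestatus=full preview=yes

Because PREVIEW=YES is specified, the server reports which volumes would be
updated without changing them.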
You must vary a random access volume offline before you can change its
access mode to destroyed. To vary a volume offline, use the VARY
command. See Varying Disk Volumes Online or Offline on page 55. Once
you update a random access storage pool volume to destroyed, you cannot
vary the volume online without first changing the access mode.
If you update a sequential access storage pool volume to destroyed, the
server does not attempt to mount the volume.
If a volume contains no files and the UPDATE VOLUME command is used
to change the access mode to destroyed, the volume is deleted from the
database.
Offsite
Specifies that a copy storage pool volume is at an offsite location and
therefore cannot be mounted. Use this mode to help you track volumes
that are offsite. The server treats offsite volumes differently, as follows:
v Mount requests are not generated for offsite volumes
v Data can be reclaimed or moved from offsite volumes by retrieving files
from other storage pools
v Empty, offsite scratch volumes are not deleted from the copy storage
pool
You can only update volumes in a copy storage pool to offsite access
mode. Volumes that have the device type of SERVER (volumes that are
actually archived objects stored on another Tivoli Storage Manager server)
cannot have an access mode of offsite.
2. Define the storage pool named ENGBACK1 with the following command:
Chapter 9. Managing Storage Pools and Volumes
2. Specify that BACKTAPE is the next storage pool defined in the storage
hierarchy for ENGBACK1. To update ENGBACK1, enter:
update stgpool engback1 nextstgpool=backtape
When a user backs up or archives files from a client node, the server may group
multiple client files into an aggregate of files. The size of the aggregate depends on
the sizes of the client files being stored, and the number of bytes and files allowed
for a single transaction. Two options affect the number of files and bytes allowed
for a single transaction. TXNGROUPMAX, located in the server options file, affects
the number of files allowed. TXNBYTELIMIT, located in the client options file,
affects the number of bytes allowed in the aggregate.
v The TXNGROUPMAX option in the server options file indicates the maximum
number of logical files (client files) that a client may send to the server in a
single transaction. The server might create multiple aggregates for a single
transaction, depending on how large the transaction is.
It is possible to affect the performance of client backup, archive, restore, and
retrieve operations by using a larger value for this option. When transferring
multiple small files, increasing the TXNGROUPMAX option can improve
throughput for operations to tape.
You can override the value of the TXNGROUPMAX server option for individual
client nodes by using the TXNGROUPMAX parameter in the REGISTER NODE
and UPDATE NODE commands.
v The TXNBYTELIMIT option in the client options file indicates the total number
of bytes that the client can send to the server in a single transaction.
When a Tivoli Storage Manager for Space Management client (HSM client)
migrates files to the server, the files are not grouped into an aggregate.
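A sketch of how these options might be set follows; the node name and all
values are illustrative, not recommendations. In the server options file:

txngroupmax 256

To override the server option for a single node:

update node engnode txngroupmax=512

In the client options file (the limit is specified in kilobytes):

txnbytelimit 25600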
Figure 19 (not reproduced) shows a storage hierarchy in which DISKPOOL
(read/write access, maximum file size 3MB) uses TAPEPOOL (read/write access)
as its next storage pool.
TAPEPOOL
The next storage pool in the hierarchy. It contains tape volumes accessed
by high-performance tape drives.
Assume that a user wants to archive a 5MB file named FileX. FileX is bound to
a management class that contains an archive copy group whose storage
destination is DISKPOOL (see Figure 19 on page 197).
When the user archives the file, the server determines where to store the file based
on the following process:
1. The server selects DISKPOOL because it is the storage destination specified in
the archive copy group.
2. Because the access mode for DISKPOOL is read/write, the server checks the
maximum file size allowed in the storage pool.
The maximum file size applies to the physical file being stored, which may be a
single client file or an aggregate. The maximum file size allowed in DISKPOOL
is 3MB. FileX is a 5MB file and therefore cannot be stored in DISKPOOL.
3. The server searches for the next storage pool in the storage hierarchy.
If the DISKPOOL storage pool has no maximum file size specified, the server
checks for enough space in the pool to store the physical file. If there is not
enough space for the physical file, the server uses the next storage pool in the
storage hierarchy to store the file.
4. The server checks the access mode of TAPEPOOL, which is the next storage
pool in the storage hierarchy. The access mode for TAPEPOOL is read/write.
5. The server then checks the maximum file size allowed in the TAPEPOOL
storage pool. Because TAPEPOOL is the last storage pool in the storage
hierarchy, no maximum file size is specified. Therefore, if there is available
space in TAPEPOOL, FileX can be stored in it.
1. The server checks for the client node that has backed up or migrated the largest
single file space or has archived files that occupy the most space.
2. For all files from every file space belonging to the client node that was
identified, the server examines the number of days since the files were
stored in the storage pool and the number of days since the files were last
retrieved. The server compares the lesser of these two values to the
migration delay that is set for the storage pool, and migrates any files for
which that value is greater than the migration delay.
3. After the server migrates the files for the first client node to the next storage
pool, the server checks the low migration threshold for the storage pool. If the
amount of space that is used in the storage pool is now below the low
migration threshold, migration ends. If not, the server chooses another client
node by using the same criteria as described above, and the migration process
continues.
The server may not be able to reach the low migration threshold for the pool by
migrating only files that have been stored longer than the migration delay period.
When this happens, the server checks the storage pool characteristic that
determines whether migration should stop even if the pool is still above the low
migration threshold. See Keeping Files in a Storage Pool on page 204 for more
information.
If multiple migration processes are running (controlled by the MIGPROCESS
parameter of the DEFINE STGPOOL command), the server may choose the files
from more than one node for migration at the same time.
For example, Table 18 displays information that is contained in the database that is
used by the server to determine which files to migrate. This example assumes that
the storage pool contains no space-managed files. This example also assumes that
the migration delay period for the storage pool is set to zero, meaning any files can
be migrated regardless of time stored in the pool or the last time of access.
Table 18. Database Information on Files Stored in DISKPOOL

Client Node   Backed-Up File Spaces    Archived Files
              and Sizes                (All Client File Spaces)
-----------   ---------------------    ------------------------
TOMC          TOMC/C       200MB       55MB
              TOMC/D       100MB
CAROL         CAROL         50MB        5MB
PEASE         PEASE/home   150MB       40MB
              PEASE/temp   175MB
Figure 20 (not reproduced) shows DISKPOOL before, during, and after
migration, with a high migration threshold of 80%, a low migration threshold
of 20%, and TAPEPOOL as the next storage pool.
Figure 20 shows what happens when the high migration threshold defined for the
disk storage pool DISKPOOL is exceeded. When the amount of migratable data in
DISKPOOL reaches 80%, the server performs the following tasks:
1. Determines that the TOMC/C file space is taking up the most space in the
DISKPOOL storage pool, more than any other single backed-up or
space-managed file space and more than any client node's archived files.
2. Locates all data belonging to node TOMC stored in DISKPOOL. In this
example, node TOMC has backed up or archived files from file spaces
TOMC/C and TOMC/D stored in the DISKPOOL storage pool.
3. Migrates all data from TOMC/C and TOMC/D to the next available storage
pool. In this example, the data is migrated to the tape storage pool,
TAPEPOOL.
The server migrates all of the data from both file spaces belonging to node
TOMC, even if the occupancy of the storage pool drops below the low
migration threshold before the second file space has been migrated.
If the cache option is enabled, files that are migrated remain on disk storage
(that is, the files are cached) until space is needed for new files. For more
information about using cache, see Using Cache on Disk Storage Pools on
page 207.
4. After all files that belong to TOMC are migrated to the next storage pool, the
server checks the low migration threshold. If the low migration threshold has
not been reached, then the server again determines which client node has
backed up or migrated the largest single file space or has archived files that
occupy the most space. The server begins migrating files belonging to that
node.
In this example, the server migrates all files that belong to the client node
named PEASE to the TAPEPOOL storage pool.
5. After all the files that belong to PEASE are migrated to the next storage pool,
the server checks the low migration threshold again. If the low migration
threshold has been reached or passed, then migration ends.
If you do not use cache, you may want to keep the low threshold at a higher
number so that more data stays on the disk.
v How frequently you want migration to occur, based on the availability of
sequential access storage devices and mount operators. The larger the low
threshold, the shorter time that a migration process runs (because there is less
data to migrate). But if the pool refills quickly, then migration occurs more
frequently. The smaller the low threshold, the longer time that a migration
process runs, but the process runs less frequently.
You may need to balance the costs of larger disk storage pools with the costs of
running migration (drives, tapes, and either operators or automated libraries).
v Whether you are using collocation on the next storage pool. When you use
collocation, the server attempts to store data for different clients or client file
spaces on separate tapes, even for clients with small amounts of data. You may
want to set the low threshold to keep more data on disk, to avoid having many
tapes used by clients with only small amounts of data.
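After weighing these factors, you set the thresholds with the UPDATE STGPOOL
command. For example, for a hypothetical disk pool (the values are
illustrative, not recommendations):

update stgpool diskpool highmig=80 lowmig=20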
If you allow more than one migration process for the storage pool and allow the
server to move files that do not satisfy the migration delay time
(MIGCONTINUE=YES), some files that do not satisfy the migration delay time
may be migrated unnecessarily. As one process migrates files that satisfy the
migration delay time, a second process could begin migrating files that do not
satisfy the migration delay time to meet the low migration threshold. The first
process that is still migrating files that satisfy the migration delay time might have,
by itself, caused the storage pool to meet the low migration threshold.
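For example, to require that files remain in a hypothetical disk pool for at
least 30 days before becoming eligible for migration, and to stop migration
rather than move files that do not satisfy the delay (values are
illustrative):

update stgpool diskpool migdelay=30 migcontinue=no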
v You can migrate data from a sequential access storage pool only to
another sequential access storage pool. You cannot migrate data from a
sequential access storage pool to a disk storage pool. If you need to move
data from a sequential access storage pool to a disk storage pool, use the
MOVE DATA command. See Moving Files from One Volume to Another
Volume on page 237.
v Storage pools using the NETAPPDUMP or the CELERRADUMP data
format are unable to use migration.
v The access mode for the next storage pool in the storage hierarchy is set to
read/write.
For information about setting an access mode for sequential access storage pools,
see Defining or Updating Primary Storage Pools on page 182.
v Collocation is set the same in both storage pools. For example, if collocation is
set to yes in the first storage pool, then collocation should be set to yes in the
next storage pool.
When you enable collocation for a storage pool, the server attempts to keep all
files belonging to a client node or a client file space on a minimal number of
volumes. For information about collocation for sequential access storage pools,
see Keeping a Client's Files Together: Collocation on page 208.
v You have sufficient staff available to handle any necessary media mount and
dismount operations. More mount operations occur because the server attempts
to reclaim space from sequential access storage pool volumes before it migrates
files to the next storage pool.
If you want to limit migration from a sequential access storage pool to another
storage pool, set the high-migration threshold to a high percentage, such as 95%.
For information about setting a reclamation threshold for tape storage pools, see
Reclaiming Space in Sequential Access Storage Pools on page 213.
The advantage of using cache for a disk storage pool is that cache can improve
how quickly the server retrieves some files. When you use cache, a copy of the file
remains on disk storage after the server migrates the primary file to another
storage pool. You may want to consider using a disk storage pool with cache
enabled for storing space-managed files that are frequently accessed by clients.
However, using cache has some important disadvantages:
v Using cache can increase the time required for client backup operations to
complete. Performance is affected because, as part of the backup operation, the
server must erase cached files to make room for storing new files. The effect can
be severe when the server is storing a very large file and must erase cached files.
For the best performance for client backup operations to disk storage pools, do
not use cache.
v Using cache can require more space for the server database. When you use
cache, more database space is needed because the server has to keep track of
both the cached copy of the file and the new copy in the next storage pool.
If you leave cache disabled, you may want to consider higher migration thresholds
for the disk storage pool. A higher migration threshold keeps files on disk longer
because migration occurs less frequently.
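For example, to disable cache on a hypothetical disk storage pool (the pool
name is illustrative):

update stgpool diskpool cache=no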
With collocation enabled, fewer media mounts may be needed when users
restore, retrieve, or recall a large number of files from the storage pool.
Collocation thus improves access time for these operations.
Figure 21 shows an example of collocation by client node with three clients, each
having a separate volume containing that client's data.
When collocation is disabled, the server attempts to use all available space on each
volume before selecting a new volume. While this process provides better
utilization of individual volumes, user files can become scattered across many
volumes. Figure 22 shows an example of collocation disabled, with three clients
sharing space on a volume.
With collocation disabled, when users restore, retrieve, or recall a large number of
files, media mount operators may be required to mount more volumes. The system
default is to not use collocation.
The following sections give more detail on collocation:
The Effects of Collocation on Operations
How the Server Selects Volumes with Collocation Enabled on page 210
How the Server Selects Volumes with Collocation Disabled on page 211
Turning Collocation On or Off on page 212
Collocation on Copy Storage Pools on page 212
Tip: If you use collocation, but want to reduce the number of media mounts and
use space on sequential volumes more efficiently, you can do the following:
v Define a storage pool hierarchy and policy to require that backed-up,
archived, or space-managed files are stored initially in disk storage pools.
When files are migrated from a disk storage pool, the server attempts to
migrate all files belonging to the client node that is using the most disk
space in the storage pool. This process works well with the collocation
option because the server tries to place all of the files from a given client on
the same sequential access storage volume.
v Use scratch volumes for sequential access storage pools to allow the server
to select new volumes for collocation.
4. A volume with the most available free space among volumes that already
contain data
When collocation at the file space level is enabled for a storage pool
(COLLOCATION=FILESPACE) and a client node backs up, archives, or migrates
files to the storage pool, the server attempts to select a volume using the following
selection order:
1. A volume that already contains files from the same file space of that client node
2. An empty predefined volume
3. An empty scratch volume
4. A volume containing data from the same client node
5. A volume with the most available free space among volumes that already
contain data
When the server needs to continue to store data on a second volume, it uses the
following selection order to acquire additional space:
1. An empty predefined volume
2. An empty scratch volume
3. A volume with the most available free space among volumes that already
contain data
4. Any available volume in the storage pool
Through this selection process, the server attempts to provide the best use of
individual volumes while minimizing the mixing of files from different clients or
file spaces on volumes. For example, Figure 23 shows that volume selection is
horizontal, where all available volumes are used before all available space on each
volume is used. A, B, C, and D represent files from four different client nodes.
Figure 23. Using All Available Sequential Access Storage Volumes with Collocation Enabled
When the server needs to continue to store data on a second volume, it attempts to
select an empty volume. If none exists, the server attempts to select any remaining
available volume in the storage pool.
Figure 24 shows that volume utilization is vertical when collocation is disabled. In
this example, fewer volumes are used because the server attempts to use all
available space by mixing client files on individual volumes. A, B, C, and D
represent files from four different client nodes.
Figure 24. Using All Available Space on Sequential Volumes with Collocation Disabled
For example, if collocation is off for a storage pool and you turn it on,
client files stored in the pool from then on are collocated. Files that were
previously stored in the pool are not moved to collocate them. As volumes are
reclaimed, however, the data in the pool tends to become more collocated. You
can also use the MOVE DATA or MOVE NODEDATA commands to move data to new
volumes to increase collocation. However, this increases processing time and
volume mount activity.
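For example, to turn on collocation by file space for a hypothetical pool
named TAPEPOOL:

update stgpool tapepool collocation=filespace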
Note: A mount wait can occur or increase when collocation by file space is
enabled and a node has a volume containing multiple file spaces. If a
volume is eligible to receive data, Tivoli Storage Manager will wait for that
volume.
Primary and copy storage pools perform different recovery roles. Normally you
use primary storage pools to recover data to clients directly. You use copy storage
pools to recover data to the primary storage pools. In a disaster where both clients
and the server are lost, the copy storage pool volumes will probably be used
directly to recover clients. The types of recovery scenarios that concern you the
most will help you to determine whether to use collocation on your copy storage
pools.
You may also want to consider that collocation on copy storage pools will result in
more partially filled volumes and potentially unnecessary offsite reclamation
activity. Collocation typically results in a partially filled sequential volume for each
activity. Collocation typically results in a partially filled sequential volume for each
client or client file space. This may be acceptable for primary storage pools because
these partially filled volumes remain available and can be filled during the next
migration process. However, for copy storage pools this may be unacceptable
because the storage pool backups are usually made to be taken offsite immediately.
If you use collocation for copy storage pools, you will have to decide between:
v Taking more partially filled volumes offsite, thereby increasing the reclamation
activity when the reclamation threshold is lowered or reached.
or
v Leaving these partially filled volumes onsite until they fill and risk not having
an offsite copy of the data on these volumes.
With collocation disabled for a copy storage pool, typically there will be only a few
partially filled volumes after storage pool backups to the copy storage pool are
complete.
Consider carefully before using collocation for copy storage pools. Even if you use
collocation for your primary storage pools, you may want to disable collocation for
copy storage pools. Collocation on copy storage pools may be desirable when you
have few clients, but each of them has large amounts of incremental backup data
each day.
See Keeping a Client's Files Together: Collocation on page 208 for more
information about collocation.
The server reclaims the space in storage pools based on a reclamation threshold that
you can set for each sequential access storage pool. When the percentage of space
that can be reclaimed on a volume rises above the reclamation threshold, the
server reclaims the volume. See the following sections:
How IBM Tivoli Storage Manager Reclamation Works on page 213
Choosing a Reclamation Threshold on page 216
The server checks whether reclamation is needed at least once per hour and begins space
reclamation for eligible volumes. You can set a reclamation threshold for each
sequential access storage pool when you define or update the pool.
During space reclamation, the server copies files that remain on eligible volumes to
other volumes. For example, Figure 25 on page 215 shows that the server
consolidates the files from tapes 1, 2, and 3 on tape 4. During reclamation, the
server copies the files to volumes in the same storage pool unless you have
specified a reclamation storage pool. Use a reclamation storage pool to allow
automatic reclamation for a storage pool with only one drive.
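For example, to set a reclamation threshold of 60% and designate a
reclamation storage pool for a library with a single drive (the pool names
are illustrative):

update stgpool tapepool1 reclaim=60 reclaimstgpool=reclaimpool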
Note: To prevent contention for the same tapes, the server does not allow a
reclamation process to start if a DELETE FILESPACE process is active. The
server checks every hour for whether the DELETE FILESPACE process has
completed so that the reclamation process can start. After the DELETE
FILESPACE process has completed, reclamation begins within one hour.
The server also reclaims space within an aggregate. An aggregate is a physical file
that contains multiple logical files that are backed up or archived from a client in a
single transaction. Space within the aggregate becomes reclaimable space as logical
files in the aggregate expire or are deleted by the client. The server removes
unused space from expired or deleted logical files as the server copies the
aggregate to another volume during reclamation processing. However, reclamation
does not aggregate files that were originally stored in non-aggregated form.
Reclamation also does not combine aggregates to make new aggregates. You can
also reclaim space in an aggregate by issuing the MOVE DATA command. See
Reclaiming Space in Aggregates During Data Movement on page 240 for details.
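For example, to move the data from a volume and remove unused space from
aggregates as the data is moved (the volume name is hypothetical):

move data vol1 reconstruct=yes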
Figure 25. Tape Reclamation
After the server moves all readable files to other volumes, one of the following
occurs for the reclaimed volume:
v If you have explicitly defined the volume to the storage pool, the volume
becomes available for reuse by that storage pool
v If the server acquired the volume as a scratch volume, the server deletes the
volume from the Tivoli Storage Manager database
Volumes that have a device type of SERVER are reclaimed in the same way as
other sequential access volumes. However, because the volumes are actually data
stored in the storage of another Tivoli Storage Manager server, the reclamation
process can consume network resources. See Reclamation of Volumes with the
Device Type of SERVER on page 218 for details of how the server reclaims these
types of volumes.
Volumes in a copy storage pool are reclaimed in the same manner as a primary
storage pool except for the following:
v Offsite volumes are handled differently.
v The server copies active files from the candidate volume only to other volumes
in the same storage pool.
See Reclamation for Copy Storage Pools on page 218 for details.
Finally, update the reclamation storage pool so that data migrates back to the tape
storage pool:
update stgpool reclaimpool nextstgpool=tapepool1
In this way, the server copies valid files on offsite volumes without having to mount these
volumes. For more information, see Reclamation of Offsite Volumes.
Reclamation of copy storage pool volumes should be done periodically to allow
reuse of partially filled volumes that are offsite. Reclamation can be done
automatically by setting the reclamation threshold for the copy storage pool to less
than 100%. However, you need to consider controlling when reclamation occurs
because of how offsite volumes are treated. For more information, see Controlling
When Reclamation Occurs for Offsite Volumes.
Virtual Volumes: Virtual volumes (volumes that are stored on another Tivoli
Storage Manager server through the use of a device type of
SERVER) cannot be set to the offsite access mode.
One way to resolve this situation is to keep partially filled volumes onsite until
they fill up. However, this would mean a small amount of your data would be
without an offsite copy for another day.
If you send copy storage pool volumes offsite, it is recommended you control copy
storage pool reclamation by using the default value of 100. This turns reclamation
off for the copy storage pool. You can start reclamation processing at desired times
by changing the reclamation threshold for the storage pool. To monitor offsite
volume utilization and help you decide what reclamation threshold to use, enter
the following command:
query volume * access=offsite format=detailed
Depending on your data expiration patterns, you may not need to do reclamation
of offsite volumes each day. You may choose to perform offsite reclamation on a
less frequent basis. For example, suppose you ship copy storage pool volumes to
and from your offsite storage location once a week. You can run reclamation for
the copy storage pool weekly, so that as offsite volumes become empty they are
sent back for reuse.
When you do perform reclamation for offsite volumes, the following sequence is
recommended:
1. Back up your primary storage pools to copy storage pools.
2. Turn on reclamation for copy storage pools by lowering the reclamation
threshold below 100%.
3. When reclamation processing completes, turn off reclamation for copy storage
pools by raising the reclamation threshold to 100%.
4. Mark any newly created copy storage pool volumes as offsite and then move
them to the offsite location.
This sequence ensures that the files on the new copy storage pool volumes are sent
offsite, and are not inadvertently kept onsite because of reclamation.
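The sequence might be scripted as an administrative macro, sketched below.
The pool names are illustrative, and 60% is only a sample threshold:

/* 1. Back up the primary storage pool to the copy storage pool */
backup stgpool backuppool copypool
/* 2. Turn on reclamation for the copy storage pool */
update stgpool copypool reclaim=60
/* 3. After reclamation completes, turn reclamation off again */
update stgpool copypool reclaim=100
/* 4. Mark the newly created copy storage pool volumes as offsite */
update volume * access=offsite wherestgpool=copypool wherestatus=full whereaccess=readwrite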
Using Storage on Another Server for Copy Storage Pools: Another resolution to
this problem of partially filled volumes is to use storage on another Tivoli Storage
Manager server (device type of SERVER) for storage pool backups. If the other
server is at a different site, the copy storage pool volumes are already offsite, with
no moving of physical volumes between the sites. See Using Virtual Volumes to
Store Data on Another Server on page 505 for more information.
v Mount and dismount multiple volumes to allow the server to select the most
appropriate volume on which to move data for each client node or client file
space. The server tries to select a volume in the following order:
1. A volume that already contains files belonging to the client file space or
client node
2. An empty volume
3. The volume with the most available space
4. Any available volume
If collocation is disabled and reclamation occurs, the server tries to move usable
data to new volumes by using the following volume selection criteria, in the order
shown:
1. The volume that contains the most data
2. Any partially full volume
3. An empty predefined volume
4. An empty scratch volume
See also Reclamation of Tape Volumes with High Capacity on page 217.
For storage pools for space-managed files, provide enough disk space to
support the daily space-management load from HSM clients, without causing
migration from the disk storage pool to occur.
v Decide what percentage of this data you want to keep on disk storage space.
Establish migration thresholds to have the server automatically migrate the
remainder of the data to less expensive storage media in sequential access
storage pools.
See Choosing Appropriate Migration Threshold Values on page 203 for
recommendations on setting migration thresholds.
You can estimate the disk space needed for a backup storage pool with the
following formula:

Backup space = WkstSize * Utilization * VersionExpansion * NumWkst

where:
Backup Space
The total amount of storage pool disk space needed.
WkstSize
The average data storage capacity of a workstation. For example, if the
typical workstation at your installation has a 4GB hard drive, then the
average workstation storage capacity is 4GB.
Utilization
An estimate of the fraction of each workstation disk space used, in the
range 0 to 1. For example, if you expect that disks on workstations are 75%
full, then use 0.75.
VersionExpansion
An expansion factor (greater than 1) that takes into account the additional
backup versions, as defined in the copy group. A rough estimate allows 5%
additional files for each backup copy. For example, for a version limit of 2,
use 1.05, and for a version limit of 3, use 1.10.
NumWkst
The estimated total number of workstations that the server supports.
If clients use compression, the amount of space required may be less than the
amount calculated, depending on whether the data is compressible.
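The formula can be illustrated with a worked example; all figures are
illustrative. For 100 workstations with 4GB disks that are 75% full and a
version limit of 2:

Backup space = 4GB x 0.75 x 1.05 x 100 = 315GB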
Because additional storage space can be added at any time, you can start with a
modest amount of storage space and increase the space by adding storage volumes
to the archive storage pool, as required.
Figure 26 on page 224 shows a standard report with all storage pools defined to the
system. To monitor the use of storage pool space, review the Estimated Capacity and
Pct Util columns.
Storage      Device      Estimated   Pct    Pct  High  Low  Next
Pool Name    Class Name  Capacity    Util   Migr  Mig  Mig  Storage
                         (MB)                     Pct  Pct  Pool
-----------  ----------  ---------  -----  ----- ----  ---  -----------
ARCHIVEPOOL  DISK              0.0    0.0    0.0   90   70
BACKTAPE     TAPE            180.0   85.0  100.0   90   70
BACKUPPOOL   DISK             80.0   51.6   51.6   50   30  BACKTAPE
COPYPOOL     TAPE            300.0   42.0
ENGBACK1     DISK              0.0    0.0    0.0   85   40  BACKTAPE

Figure 26. Information about Storage Pools
Estimated Capacity
Specifies the space available in the storage pool in megabytes.
For a disk storage pool, this value reflects the total amount of available
space in the storage pool, including any volumes that are varied offline.
For a sequential access storage pool, this value is an estimate of the total
amount of available space on all volumes in the storage pool. The total
includes volumes with any access mode (read-write, unavailable, read-only,
offsite, or destroyed). The total includes scratch volumes that the storage
pool can acquire only when the storage pool is using at least one scratch
volume for data.
Volumes in a sequential access storage pool, unlike those in a disk storage
pool, do not contain a precisely known amount of space. Data is written to
a volume as necessary until the end of the volume is reached. For this
reason, the estimated capacity is truly an estimate of the amount of
available space in a sequential access storage pool.
Pct Util
Specifies, as a percentage, the space used in each storage pool.
For disk storage pools, this value reflects the total number of disk blocks
currently allocated by Tivoli Storage Manager. Space is allocated for
backed-up, archived, or space-managed files that are eligible for server
migration, cached files that are copies of server-migrated files, and files
that reside on any volumes that are varied offline.
Note: The value for Pct Util can be higher than the value for Pct Migr if
you query for storage pool information while a client transaction
(such as a backup) is in progress. The value for Pct Util is
determined by the amount of space actually allocated (while the
transaction is in progress). The value for Pct Migr represents only
the space occupied by committed files. At the end of the transaction,
Pct Util and Pct Migr become synchronized.
For sequential access storage pools, this value is the percentage of the total
bytes of storage available that are currently being used to store active data
(data that is not expired). Because the server can only estimate the
available capacity of a sequential access storage pool, this percentage also
reflects an estimate of the actual utilization of the storage pool.
The estimated capacity for the tape storage pool named BACKTAPE is 180MB,
which is the total estimated space available on all tape volumes in the storage
pool. This report shows that 85% of the estimated space is currently being used to
store workstation files.
Note: This report also shows that volumes have not yet been defined to the
ARCHIVEPOOL and ENGBACK1 storage pools, because the storage pools
show an estimated capacity of 0.0MB.
Any administrator can query the server for information about storage pool volumes:
v General information about a volume, such as the following:
Current access mode and status of the volume
Amount of available space on the volume
Location
v Contents of a storage pool volume (user files on the volume)
v The volumes that are used by a client node
Figure 27 shows an example of the output of this standard query. The example
illustrates that data is being stored on the 8mm tape volume named WREN01, as
well as on several other volumes in various storage pools.
Volume Name       Storage     Device      Estimated   Pct  Volume
                  Pool Name   Class Name   Capacity  Util  Status
                                               (MB)
----------------  ----------  ----------  ---------  ----  -------
/dev/raixvol1     AIXPOOL1    DISK            240.0  26.3  On-Line
/dev/raixvol2     AIXPOOL2    DISK            240.0  36.9  On-Line
/dev/rdosvol1     DOSPOOL1    DISK            240.0  72.2  On-Line
/dev/rdosvol2     DOSPOOL2    DISK            240.0  74.1  On-Line
/dev/ros2vol1     OS2POOL1    DISK            240.0  55.7  On-Line
/dev/ros2vol2     OS2POOL2    DISK            240.0  51.0  On-Line
WREN00            TAPEPOOL    TAPE8MM       2,472.0   0.0  Filling
WREN01            TAPEPOOL    TAPE8MM       2,472.0   2.2  Filling
To query the server for a detailed report on volume WREN01 in the storage pool
named TAPEPOOL, enter:
query volume wren01 format=detailed
Figure 28 on page 226 shows the output of this detailed query. Table 20 on page 226
gives some suggestions on how you can use the information.
                   Volume Name: WREN01
             Storage Pool Name: TAPEPOOL
             Device Class Name: TAPE8MM
       Estimated Capacity (MB): 2,472.0
                      Pct Util: 26.3
                 Volume Status: Filling
                        Access: Read/Write
        Pct. Reclaimable Space: 5.3
               Scratch Volume?: No
               In Error State?: No
      Number of Writable Sides: 1
       Number of Times Mounted: 4
             Write Pass Number: 2
     Approx. Date Last Written: 09/04/2002 11:33:26
        Approx. Date Last Read: 09/03/2002 16:42:55
           Date Became Pending:
        Number of Write Errors: 0
         Number of Read Errors: 0
               Volume Location:
Last Update by (administrator): TANAGER
         Last Update Date/Time: 09/04/2002 11:33:26
Volume Status
Check the Volume Status to see if a disk volume has been varied offline, or if a sequential access volume is currently being filled with data.
Access
Check the Access to determine whether files can be read from or written to this volume.
Estimated Capacity and Pct Util
The Estimated Capacity is determined by the device class associated with the storage pool to which this volume belongs. Based on the estimated capacity, the system tracks the percentage of space occupied by client files (Pct Util). In this example, 26.3% of the estimated capacity is currently in use.
Scratch Volume?, Write Pass Number, Number of Times Mounted, Approx. Date Last Written, and Approx. Date Last Read
The server maintains usage statistics on volumes that are defined to storage
pools. Statistics on a volume explicitly defined by an administrator remain for as
long as the volume is defined to the storage pool. The server continues to
maintain the statistics on defined volumes even as the volume is reclaimed and
reused. However, the server deletes the statistics on the usage of a scratch
volume when the volume returns to scratch status (after reclamation or after all
files are deleted from the volume).
In this example, WREN01 is a volume defined to the server by an administrator,
not a scratch volume (Scratch Volume? is No).
The Write Pass Number indicates the number of times the volume has been
written to, starting from the beginning of the volume. A value of one indicates
that a volume is being used for the first time. In this example, WREN01 has a
write pass number of two, which indicates space on this volume may have been
reclaimed or deleted once before. Compare this value to the specifications
provided with the media that you are using. The manufacturer may recommend
a maximum number of write passes for some types of tape media. You may need
to retire your tape volumes after reaching the maximum passes to better ensure
the integrity of your data. To retire a volume, move the data off the volume by
using the MOVE DATA command. See Moving Files from One Volume to
Another Volume on page 237.
Use the Number of Times Mounted, the Approx. Date Last Written, and the Approx.
Date Last Read to help you estimate the life of the volume. For example, if more
than six months have passed since the last time this volume has been written to
or read from, audit the volume to ensure that files can still be accessed. See
Auditing a Storage Pool Volume on page 572 for information about auditing a
volume.
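As a sketch of how these statistics could drive a retirement or audit decision (a hypothetical helper; the six-month audit window comes from the example above, and the maximum write-pass count is whatever your media manufacturer specifies):

```python
from datetime import date, timedelta

def volume_action(write_pass, max_write_passes, last_written, last_read, today):
    """Suggest an action for a tape volume based on its usage statistics.

    Retire the volume once the manufacturer's maximum write passes is
    reached; audit it if it has not been read or written in roughly
    six months; otherwise leave it in service.
    """
    if write_pass >= max_write_passes:
        return "retire"   # move data off with MOVE DATA, then stop using it
    last_used = max(last_written, last_read)
    if today - last_used > timedelta(days=182):
        return "audit"    # verify that files can still be accessed
    return "ok"

# WREN01 from the example: write pass 2, recently read and written.
print(volume_action(2, 100, date(2002, 9, 4), date(2002, 9, 3),
                    date(2002, 9, 10)))  # prints "ok"
```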
The value in the Number of Times Mounted field is a count of the number of times that the server has opened the volume for use. This is not always the same as the number of times that the volume has been physically mounted in a drive: after a volume is physically mounted, the server can open the same volume multiple times for different operations, for example for different client backup sessions.
Location
Use the Location field to determine the location of a volume in a sequential access storage pool. When you define or update a sequential access volume, you can give location information for the volume. The detailed query displays this location name. The location information can be useful to help you track volumes, for example, offsite volumes in copy storage pools.
Date Became Pending
Use the Date Became Pending field to determine if a volume in a sequential access storage pool is waiting for the reuse delay period to expire. A sequential access volume is placed in the pending state after the last file is deleted or moved from the volume. All the files that the pending volume had contained were expired or deleted, or were moved from the volume. Volumes remain in the pending state for as long as specified with the REUSEDELAY parameter for the storage pool to which the volume belongs.
Whether or not a volume is full, at times the Pct Util (percent of the volume utilized) plus the Pct Reclaimable Space (percent of the volume that can be reclaimed) may add up to more than 100 percent. This can happen when a volume contains aggregates that have empty space because of files in the aggregates that have expired or been deleted. The Pct Util field shows all space occupied by both non-aggregated files and aggregates, including empty space within aggregates. The Pct Reclaimable Space field includes any space that is reclaimable on the volume, also including empty space within aggregates. Because both fields include the empty space within aggregates, these values may add up to more than 100 percent. For more information about aggregates, see How the Server Groups Files before Storing on page 196 and Requesting Information on the Use of Storage Space on page 234.
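A small numeric sketch (the volume figures are illustrative) shows how both percentages can count the same empty aggregate space and so add up to more than 100 percent:

```python
def pct_util_and_reclaimable(occupied, empty_in_aggregates, capacity):
    """Compute Pct Util and Pct Reclaimable Space for a volume.

    occupied            -- space holding files and aggregates, including
                           empty space inside aggregates
    empty_in_aggregates -- expired/deleted space still inside aggregates
    capacity            -- total volume capacity

    Pct Util counts all occupied space; Pct Reclaimable counts the free
    space on the volume plus the empty space inside aggregates, so the
    empty aggregate space is counted by both.
    """
    pct_util = 100.0 * occupied / capacity
    pct_reclaimable = 100.0 * ((capacity - occupied) + empty_in_aggregates) / capacity
    return pct_util, pct_reclaimable

util, reclaim = pct_util_and_reclaimable(occupied=90, empty_in_aggregates=30,
                                         capacity=100)
print(util + reclaim)  # 130.0 -- the 30 empty units are counted twice
```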
Figure 29 displays a standard report which shows the first seven files from file
space /usr on TOMC stored in WREN01.
The report lists logical files on the volume. If a file on the volume is an aggregate
of logical files (backed-up or archived client files), all logical files that are part of
the aggregate are included in the report. An aggregate can be stored on more than
one volume, and therefore not all of the logical files in the report may actually be
stored on the volume being queried.
Viewing a Detailed Report on the Contents of a Volume: To display detailed
information about the files stored on volume VOL1, enter:
query content vol1 format=detailed
Figure 30 on page 230 displays a detailed report that shows the files stored on
VOL1. The report lists logical files and shows whether each file is part of an
aggregate. If a logical file is stored as part of an aggregate, the information in the
Segment Number, Stored Size, and Cached Copy? fields apply to the aggregate,
not to the individual logical file.
If a logical file is part of an aggregate, the Aggregated? field shows the sequence
number of the logical file within the aggregate. For example, the Aggregated? field
contains the value 2/4 for the file AB0CTGLO.IDE, meaning that this file is the
second of four files in the aggregate. All logical files that are part of an aggregate
are included in the report. An aggregate can be stored on more than one volume,
and therefore not all of the logical files in the report may actually be stored on the
volume being queried.
For disk volumes, the Cached Copy? field identifies whether the file is a cached
copy of a file that has been migrated to the next storage pool in the hierarchy.
             Node Name: DWE
                  Type: Bkup
        Filespace Name: OS2
Client's Name for File: \ README
           Aggregated?: No
           Stored Size: 27,089
        Segment Number: 1/1
          Cached Copy?: No

             Node Name: DWE
                  Type: Bkup
        Filespace Name: DRIVE_L_K:
Client's Name for File: \COMMON\DSMCOMMN\ AB0CTCOM.ENT
           Aggregated?: 1/4
           Stored Size: 202,927
        Segment Number: 1/1
          Cached Copy?: No

             Node Name: DWE
                  Type: Bkup
        Filespace Name: DRIVE_L_K:
Client's Name for File: \COMMON\DSMCOMMN\ AB0CTGLO.IDE
           Aggregated?: 2/4
           Stored Size: 202,927
        Segment Number: 1/1
          Cached Copy?: No

             Node Name: DWE
                  Type: Bkup
        Filespace Name: DRIVE_L_K:
Client's Name for File: \COMMON\DSMCOMMN\ AB0CTTRD.IDE
           Aggregated?: 3/4
           Stored Size: 202,927
        Segment Number: 1/1
          Cached Copy?: No

             Node Name: DWE
                  Type: Bkup
        Filespace Name: DRIVE_L_K:
Client's Name for File: \COMMON\DSMCOMMN\ AB0CTSYM.ENT
           Aggregated?: 4/4
           Stored Size: 202,927
        Segment Number: 1/1
          Cached Copy?: No
For more information about using the SELECT command, see Administrators
Reference.
See Figure 31 on page 232 for an example of the results of this command.
If caching is on for a disk storage pool and files are migrated, the Pct Util value
does not change because the cached files still occupy space in the disk storage
pool. However, the Pct Migr value decreases because the space occupied by cached
files is no longer migratable.
Storage      Device      Estimated   Pct    Pct  High  Low  Next
Pool Name    Class Name   Capacity  Util   Migr  Mig   Mig  Storage
                              (MB)              Pct   Pct   Pool
-----------  ----------  ---------  -----  ----- ----  ---  -----------
BACKTAPE     TAPE            180.0   95.2  100.0   90   70
BACKUPPOOL   DISK             80.0   51.6   28.8   50   30  BACKTAPE
You can query the server to monitor the migration process by entering:
query process
To determine whether cache is being used on disk storage and to monitor how
much space is being used by cached copies, query the server for a detailed storage
pool report. For example, to request a detailed report for BACKUPPOOL, enter:
query stgpool backuppool format=detailed
               Storage Pool Name: BACKUPPOOL
               Storage Pool Type: PRIMARY
               Device Class Name: DISK
         Estimated Capacity (MB): 80.0
                        Pct Util: 42.0
                        Pct Migr: 29.6
                     Pct Logical: 82.1
                    High Mig Pct: 50
                     Low Mig Pct: 30
                 Migration Delay: 0
              Migration Continue: Yes
             Migration Processes: 1
               Next Storage Pool: BACKTAPE
          Maximum Size Threshold: No Limit
                          Access: Read/Write
           Cache Migrated Files?: Yes
   Delay Period for Volume Reuse: 0 Day(s)
          Migration in Progress?: Yes
            Amount Migrated (MB): 0.10
Elapsed Migration Time (seconds): 5
  Last Update by (administrator): SERVER_CONSOLE
           Last Update Date/Time: 09/04/2002 16:47:49
        Storage Pool Data Format: Native
                        CRC Data: No
When Cache Migrated Files? is set to Yes, the value for Pct Util should not change
because of migration, because cached copies of files migrated to the next storage
pool remain in disk storage.
This example shows that utilization remains at 42%, even after files have been
migrated to the BACKTAPE storage pool, and the current amount of data eligible
for migration is 29.6%.
When Cache Migrated Files? is set to No, the value for Pct Util more closely
matches the value for Pct Migr because cached copies are not retained in disk
storage.
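The effect of caching on these two values can be sketched as follows (a hypothetical helper; the 42.0 and 29.6 figures mirror the example above, and the sketch assumes all remaining data is eligible for migration):

```python
def after_migration(pct_util, pct_migrated, cache_migrated_files):
    """Return (Pct Util, Pct Migr) after migrating pct_migrated
    percentage points of data to the next storage pool.

    With caching on, migrated files remain on disk as cached copies,
    so Pct Util is unchanged while Pct Migr drops. With caching off,
    the migrated files are removed, so both values drop together.
    """
    pct_migr = round(pct_util - pct_migrated, 1)
    if cache_migrated_files:
        return pct_util, pct_migr   # cached copies still occupy space
    return pct_migr, pct_migr

print(after_migration(42.0, 12.4, cache_migrated_files=True))   # (42.0, 29.6)
print(after_migration(42.0, 12.4, cache_migrated_files=False))  # (29.6, 29.6)
```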
Any administrator can request information about server storage occupancy. Use the
QUERY OCCUPANCY command for reports with information broken out by node
or file space. Use this report to determine the amount of space used by:
v Client node and file space
v Storage pool or device class
v Type of data (backup, archive, or space-managed)
Each report gives two measures of the space in use by a storage pool:
v Logical space occupied
The amount of space used for logical files. A logical file is a client file. A logical
file is stored either as a single physical file, or in an aggregate with other logical
files.
v Physical space occupied
The amount of space used for physical files. A physical file is either a single
logical file, or an aggregate composed of logical files.
An aggregate may contain empty space that had been used by logical files that
are now expired or deleted. Therefore, the amount of space used by physical
files is equal to or greater than the space used by logical files. The difference
gives you a measure of how much unused space any aggregates may have. The
unused space can be reclaimed in sequential storage pools.
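As an illustration, the relationship between the two measures can be computed as follows (a hypothetical helper; the values mirror the MIKE example in Figure 35):

```python
def occupancy_measures(logical_mb, empty_aggregate_mb):
    """Compute physical vs. logical space occupied for a storage pool.

    Physical space is the logical (client file) space plus any empty
    space left inside aggregates by expired or deleted files, so it is
    always >= logical space; the difference is the unused space that
    can be reclaimed in sequential storage pools.
    """
    physical_mb = round(logical_mb + empty_aggregate_mb, 2)
    unused_mb = round(physical_mb - logical_mb, 2)
    return physical_mb, logical_mb, unused_mb

physical, logical, unused = occupancy_measures(logical_mb=3.01,
                                               empty_aggregate_mb=0.51)
print(physical, unused)  # 3.52 0.51
```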
You can also use this report to evaluate the average size of workstation files stored
in server storage.
Remember that file space names are case-sensitive and must be entered exactly as
they are known to the server. Use the QUERY FILESPACE command to determine
the correct capitalization. For more information, see Managing File Spaces on
page 269.
Figure 35 on page 236 shows the results of the query. The report shows the number
of files backed up, archived, or migrated from the /home file space belonging to
MIKE. The report also shows how much space is occupied in each storage pool.
If you back up the ENGBACK1 storage pool to a copy storage pool, the copy storage pool would also be listed in the report. To determine how many of the client node's files in the primary storage pool have been backed up to a copy storage pool, compare the number of files in each pool type for the client node.
Node Name        Type  Filespace  Storage    Number of  Physical  Logical
                       Name       Pool Name      Files  Space     Space
                                                        Occupied  Occupied
                                                        (MB)      (MB)
---------------  ----  ---------  ---------  ---------  --------  --------
MIKE             Bkup  /home      ENGBACK1         513      3.52      3.01
Figure 36 displays a report on the occupancy of tape storage pools assigned to the
TAPECLASS device class.
Node Name        Type  Filespace    Storage    Number of  Physical  Logical
                       Name         Pool Name      Files  Space     Space
                                                          Occupied  Occupied
                                                          (MB)      (MB)
---------------  ----  -----------  ---------  ---------  --------  --------
CAROL            Arch  OS2C         ARCHTAPE           5       .92       .89
CAROL            Bkup  OS2C         BACKTAPE          21      1.02      1.02
PEASE            Arch  /home/peas-  ARCHTAPE         492     18.40     18.40
                       e/dir
PEASE            Bkup  /home/peas-  BACKTAPE          33      7.60      7.38
                       e/dir
PEASE            Bkup  /home/peas-  BACKTAPE           2       .80       .80
                       e/dir1
TOMC             Arch  /home/tomc   ARCHTAPE         573     20.85     19.27
                       /driver5
TOMC             Bkup  /home        BACKTAPE          13      2.02      1.88
Note: For archived data, you may see (archive) in the Filespace Name column
instead of a file space name. This means that the data was archived before
collocation by file space was supported by the server.
Figure 37 displays a report on the amount of server storage used for backed-up
files.
Node Name        Type  Physical  Logical
                       Space     Space
                       Occupied  Occupied
                       (MB)      (MB)
---------------  ----  --------  --------
CAROL            Bkup     23.52     23.52
CAROL            Bkup     20.85     20.85
PEASE            Bkup     12.90      9.01
PEASE            Bkup     13.68      6.18
TOMC             Bkup     21.27     21.27
Moving data from a volume requires restricted storage privilege.
v Most volumes in copy storage pools may be set to an access mode of offsite,
making them ineligible to be mounted. During processing of the MOVE DATA
command, valid files on offsite volumes are copied from the original files in the
primary storage pools. In this way, valid files on offsite volumes are copied
without having to mount these volumes. These new copies of the files are
written to another volume in the copy storage pool.
v With the MOVE DATA command, you can move data from any primary storage
pool volume to any primary storage pool. However, you can move data from a
copy storage pool volume only to another volume within the same copy storage
pool.
When you move files from a volume marked as offsite, the server does the
following:
1. Determines which files are still active on the volume from which you are
moving data
2. Obtains these files from a primary storage pool or from another copy storage
pool
3. Copies the files to one or more volumes in the destination copy storage pool
When you move data from a volume, the server starts a background process
and sends informational messages, such as:
ANR1140I Move Data process started for volume /dev/vol3
(process ID 32).
Figure 38 shows an example of the report that you receive about the data
movement process.
Process  Process Description   Status
 Number
-------  --------------------  -----------------------------------------------
     32  Move Data             Volume /dev/vol3, (storage pool BACKUPPOOL),
                               Target Pool STGTMP1, Moved Files: 49, Moved
                               Bytes: 9,121,792, Unreadable Files: 0,
                               Unreadable Bytes: 0. Current File (bytes):
                               3,522,560
                               Current output volume: VOL1.
Near the beginning of the move process, querying the volume from which data is
being moved gives the following results:
Volume Name      Storage     Device      Estimated   Pct  Volume
                 Pool Name   Class Name   Capacity  Util  Status
                                              (MB)
---------------  ----------  ----------  ---------  ----  -------
/dev/vol3        BACKUPPOOL  DISK             15.0  59.9  On-Line
Querying the volume to which data is being moved (VOL1, according to the
process query output) gives the following results:
Volume Name      Storage     Device      Estimated   Pct  Volume
                 Pool Name   Class Name   Capacity  Util  Status
                                              (MB)
---------------  ----------  ----------  ---------  ----  -------
VOL1             STGTMP1     8500DEV       4,944.0   0.3  Filling
At the end of the move process, querying the volume from which data was moved
gives the following results:
Volume Name      Storage     Device      Estimated   Pct  Volume
                 Pool Name   Class Name   Capacity  Util  Status
                                              (MB)
---------------  ----------  ----------  ---------  ----  -------
/dev/vol3        BACKUPPOOL  DISK             15.0   0.0  On-Line
Moving Data for All File Spaces for One or More Nodes
Moving data for all file spaces on one or more nodes is useful in a number of situations.
Another example is to move data for a single node named MARKETING from all primary sequential-access storage pools to a random-access storage pool named DISKPOOL. To obtain a list of storage pools that contain data for node MARKETING, issue either:
query occupancy marketing
or
select * from occupancy where node_name='MARKETING'
For this example, the resulting storage pool names all begin with the characters FALLPLAN. To move the data, repeat the following command for every instance of FALLPLAN. The following example displays the command for FALLPLAN3:
move nodedata marketing fromstgpool=fallplan3
tostgpool=diskpool
A final example shows moving both non-Unicode and Unicode file spaces for a node. For node NOAH, move the non-Unicode file space \\servtuc\d$ and the Unicode file space \\tsmserv1\e$, which has a file space ID of 2, from sequential access storage pool TAPEPOOL to random access storage pool DISKPOOL:
move nodedata noah fromstgpool=tapepool tostgpool=diskpool
filespace=\\servtuc\d$ fsid=2
Figure 39 shows an example of the report that you receive about the data
movement process.
Device class
Specifies the name of the device class assigned for the storage pool. This
is a required parameter.
Pool type
Specifies that you want to define a copy storage pool. This is a required
parameter. Updating a storage pool cannot change whether the pool is a
primary or copy storage pool.
Access mode
Defines access to volumes in the storage pool for user operations (such
as backup and restore) and system operations (such as reclamation).
Possible values are:
Read/Write
User and system operations can read from or write to the
volumes.
Read-Only
User operations can read from the volumes, but not write.
However, system processes can move files within the volumes
in the storage pool.
Unavailable
Specifies that users cannot access files stored on volumes in the
copy storage pool. Files can be moved within the volumes of
the copy storage pool, but no new writes are permitted to the
volumes in the storage pool from volumes outside the storage
pool.
Maximum number of scratch volumes
When you specify a value greater than zero, the server dynamically acquires scratch volumes when needed, up to this maximum number. This is a required parameter. For automated libraries, set this value equal to the physical capacity of the library. See Maintaining a Supply of Scratch Volumes in an Automated Library on page 148.
Collocation
Reclamation threshold
Reuse delay period
Specifies the number of days that must elapse after all of the files have been deleted from a volume before the volume can be rewritten or returned to the scratch pool. See Delaying Reuse of Reclaimed Volumes on page 220.
To store data in the new storage pool, you must back up the primary storage pools
(BACKUPPOOL, ARCHIVEPOOL, and SPACEMGPOOL) to the
DISASTER-RECOVERY pool. See Backing Up Storage Pools on page 549.
A primary storage pool contains client files (backup versions, archived files, and space-managed files), while a copy storage pool contains copies of files that are stored in primary storage pools. Both types of storage pool support collocation and reclamation, and file deletion in both is handled by the system.
Ensure that you have saved any readable data that you want to preserve by
issuing the MOVE DATA command. Moving all of the data that you want to
preserve may require you to issue the MOVE DATA command several times.
Before you begin deleting all volumes that belong to the storage pool, change
the access mode of the storage pool to unavailable so that no files can be written
to or read from volumes in the storage pool.
See Deleting a Storage Pool Volume with Data on page 248 for information
about deleting volumes.
v The storage pool is not identified as the next storage pool within the storage
hierarchy
To determine whether this storage pool is referenced as the next storage pool
within the storage hierarchy, query for storage pool information as described in
Monitoring Space Available in a Storage Pool on page 223.
Update any storage pool definitions to remove this storage pool from the storage
hierarchy by performing one of the following:
Naming another storage pool as the next storage pool in the storage hierarchy
Entering the value for the NEXTSTGPOOL parameter as "" (double quotes) to
remove this storage pool from the storage hierarchy definition
See Defining or Updating Primary Storage Pools on page 182 for information
about defining and updating storage pools.
v The storage pool to be deleted is not specified as the destination for any copy
group in any management class within the active policy set of any domain. Also,
a storage pool to be deleted cannot be the destination for space-managed files
(specified in any management class within the active policy set of any domain).
If this pool is a destination and the pool is deleted, operations fail because there
is no storage space to store the data.
Deleting storage pool volumes requires restricted storage privilege.
After you respond yes, the server generates a background process to delete the
volume.
The command may be run in the foreground on an administrative client by issuing
the command with the WAIT=YES parameter.
The server generates a background process and deletes data in a series of batch
database transactions. After all files have been deleted from the volume, the server
deletes the volume from the storage pool. If the volume deletion process is
canceled or if a system failure occurs, the volume might still contain data. Reissue
the DELETE VOLUME command and explicitly request the server to discard the
remaining files on the volume.
To delete a volume but not the files it contains, move the files to another volume.
See Moving Files from One Volume to Another Volume on page 237 for
information about moving data from one volume to another volume.
Residual data: Even after you move data, residual data may remain on the
volume because of I/O errors or because of files that were
previously marked as damaged. (Tivoli Storage Manager does not
move files that are marked as damaged.) To delete any volume that
contains residual data that cannot be moved, you must explicitly
specify that files should be discarded from the volume.
Each node must be registered with the server and requires an option file with a
pointer to the server.
For details on many of the topics in this chapter, refer to Backup-Archive Clients
Installation and Users Guide. Administrators can perform the following activities
when managing nodes:
Tasks:
Installing Client Node Software on page 252
Accepting Default Closed Registration or Enabling Open Registration on page 252
Registering Nodes with the Server on page 252
Connecting Nodes with the Server on page 255
Concepts:
Overview of Clients and Servers as Nodes
Comparing Network-Attached Nodes to Local Nodes on page 257
In this chapter, most examples illustrate how to perform tasks by using a Tivoli Storage Manager command-line interface. For information about the commands, see Administrators Reference, or issue the HELP command from the command line of a Tivoli Storage Manager administrative client.
Tivoli Storage Manager tasks can also be performed from the administrative Web
interface. For more information about using the administrative interface, see Quick
Start.
Closed Registration
To add a node with closed registration, an administrator uses the REGISTER
NODE command to register the node and specify the initial password. The
administrator can also specify the following optional parameters:
v Contact information.
v The name of the policy domain to which the node is assigned.
v Whether the node compresses its files before sending them to the server for
backup and archive.
v Whether the node can delete backups and archives from server storage.
v The name of a client option set to be used by the node.
v Whether to force a node to change or reset the password.
v The type of node being registered.
v The URL address used to administer the client node.
v The maximum number of mount points the node can use.
v Whether the client node keeps a mount point for an entire session.
Open Registration
To add a node with open registration, the server prompts the user for a node
name, password, and contact information the first time the user attempts to
connect to the server. With open registration, the server automatically assigns the
node to the STANDARD policy domain. The server by default allows users to
delete archive copies, but not backups stored in server storage.
You can enable open registration by entering the following command from an
administrative client command line:
set registration open
For examples and a list of open registration defaults, refer to the Administrators
Reference.
To change the defaults for a registered node, use the UPDATE NODE command.
The client node MIKE is registered with the password pass2eng. When the client
node MIKE performs a scheduling operation, the schedule log entries are kept for
5 days.
You must use this same node name when you later define the corresponding data
mover name. For more information, see Chapter 6, Using NDMP for Operations
with NAS File Servers, on page 111.
The client options file dsm.opt is located in the client, application client, or host server directory. If the file does not exist, copy the dsm.smp file. Users and administrators can edit the client options file to specify the server to contact, the communication method, and backup, archive, space management, and scheduling options.
Figure 42 on page 258 shows a Tivoli Storage Manager network environment that consists of a backup-archive client and an administrative client on the same computer as the server. However, network-attached client nodes can also connect to the server.
Each client requires a client options file. A user can edit the client options file at
the client node. The options file contains a default set of processing options that
identify the server, communication method, backup and archive options, space
management options, and scheduling options.
Administrators can manage client nodes and control their access to the server. See
the following sections for more information:
Tasks:
Managing Nodes on page 262
Managing Client Access Authority Levels on page 267
Managing File Spaces on page 269
Managing Client Option Files on page 280
Managing IBM Tivoli Storage Manager Sessions on page 283
Managing IBM Tivoli Storage Manager Security on page 288
Concepts:
Client Nodes and File Spaces on page 270
In this chapter, most examples illustrate how to perform tasks by using a Tivoli Storage Manager command-line interface. For information about the commands, see Administrators Reference, or issue the HELP command from the command line of a Tivoli Storage Manager administrative client.
Tivoli Storage Manager tasks can also be performed from the administrative Web
interface. For more information about using the administrative interface, see Quick
Start.
Managing Nodes
From the perspective of the server, each client and application client is a node
requiring IBM Tivoli Storage Manager services. For information, see Client Nodes
and File Spaces on page 270. Client nodes can be local or remote to the server. For
information, see Comparing Network-Attached Nodes to Local Nodes on
page 257.
Administrators can perform the following activities when managing client nodes. These tasks generally require system privilege.
In most cases, the IBM Tivoli Storage Manager server and clients can work across a
firewall. Since every firewall is different, the firewall administrator may need to
consult the instructions for the firewall software or hardware in use.
IBM Tivoli Storage Manager has two methods for enabling communication
between the client and the server across a firewall: client-initiated communication
and server-initiated communication. To allow either client-initiated or
server-initiated communication across a firewall, client options must be set to
match the server parameters on the REGISTER NODE or UPDATE NODE
commands. Enabling server-initiated communication overrides client-initiated
communication, including any client address information that the server may have
previously gathered in server-prompted sessions.
Client-initiated Sessions
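As a hedged sketch of client-initiated configuration across a firewall, the client
options file identifies the server address and a TCP port that the firewall permits.
The server name below is hypothetical; 1500 is the default server TCP port.

commmethod        tcpip
tcpserveraddress  tsmserver.example.com
tcpport           1500

The firewall administrator must then allow inbound traffic to the specified
TCPPORT on the server.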
Server-initiated Sessions
To limit the start of scheduled backup-archive client sessions to the IBM Tivoli
Storage Manager server, you must specify this on the server, and also synchronize
the information in the client option file. In either the REGISTER NODE or
UPDATE NODE command, select the SERVERONLY option of the
SESSIONINITIATION parameter. Provide the HLADDRESS and LLADDRESS
client node addresses.
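For example, a registration command of the following form restricts session
initiation to the server. The node name and password are illustrative; the
addresses match the sample output shown later in this chapter.

register node fran secretpw sessioninitiation=serveronly hladdress=9.11.521.125 lladdress=1501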
The HLADDRESS parameter specifies the IP address of the client node and is used
whenever the server contacts the client. The LLADDRESS parameter specifies the
low-level address (the port number) of the client node. The client node listens for
sessions from the server on the LLADDRESS port number.
Server setting               Where specified on the server     Corresponding client value
--------------------------   -------------------------------   ----------------------------
SESSIONINITIATION=           REGISTER or UPDATE                SESSIONINITIATION=
SERVERONLY                   NODE command                      SERVERONLY option
HLADDRESS                    REGISTER or UPDATE                TCP/IP address
                             NODE command
LLADDRESS                    REGISTER or UPDATE                TCPCLIENTPORT option
                             NODE command
ENGNODE retains the contact information and access to backup and archive data
that belonged to CAROLH. All files backed up or archived by CAROLH now
belong to ENGNODE.
Note: Before you can delete a NAS node, you must first delete any file spaces,
then delete any defined paths for the data mover with the DELETE PATH
command. Delete the corresponding data mover with the DELETE
DATAMOVER command. Then you can issue the REMOVE NODE
command to delete the NAS node.
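The deletion sequence in the note can be sketched as follows, assuming a NAS
node and data mover named NASNODE1, a library named NASLIB, and a drive
named DRIVE1 (all names here are hypothetical):

delete filespace nasnode1 * type=nas
delete path nasnode1 drive1 srctype=datamover desttype=drive library=naslib
delete datamover nasnode1
remove node nasnode1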
The output from that command may be similar to the following:
Node Name      Platform     Policy Domain   Days Since   Days Since   Locked?
                            Name            Last         Password
                                            Access       Set
---------      --------     -------------   ----------   ----------   -------
JOE            WinNT        STANDARD              6            6      No
ENGNODE        AIX          ENGPOLDOM            <1            1      No
HTANG          Mac          STANDARD              4           11      No
MAB            AIX          ENGPOLDOM            <1            1      No
PEASE          Linux86      STANDARD              3           12      No
SSTEINER       SUN          ENGPOLDOM            <1            1      No
               SOLARIS
                        Node Name: JOE
                         Platform: WinNT
                  Client OS Level: 5.00
                   Client Version: Version 5, Release 1, Level 5.0
               Policy Domain Name: STANDARD
            Last Access Date/Time: 05/19/2002 18:55:46
           Days Since Last Access: 6
           Password Set Date/Time: 05/19/2002 18:26:43
          Days Since Password Set: 6
            Invalid Sign-on Count: 0
                          Locked?: No
                          Contact:
                      Compression: Client's Choice
          Archive Delete Allowed?: Yes
           Backup Delete Allowed?: No
           Registration Date/Time: 03/19/2002 18:26:43
        Registering Administrator: SERVER_CONSOLE
   Last Communication Method Used: Tcp/Ip
      Bytes Received Last Session: 108,731
          Bytes Sent Last Session: 698
   Duration of Last Session (sec): 0.00
      Pct. Idle Wait Last Session: 0.00
     Pct. Comm. Wait Last Session: 0.00
     Pct. Media Wait Last Session: 0.00
                        Optionset:
                              URL: http://client.host.name:1581
                        Node Type: Client
       Password Expiration Period: 60
                Keep Mount Point?: No
     Maximum Mount Points Allowed: 1
            Auto Filespace Rename: No
                Validate Protocol: No
                      TCP/IP Name: JOE
                   TCP/IP Address: 9.11.153.39
               Globally Unique ID: 11.9c.54.e0.8a.b5.11.d6.b3.c3.00.06.29.45.c1.5b
            Transaction Group Max: 0
                  Data Write Path: ANY
                   Data Read Path: ANY
               Session Initiation: ClientOrServer
                       HL Address: 9.11.521.125
                       LL Address: 1501
Enterprise logon enables a user with the proper administrative user ID and
password to access a Web backup-archive client from a Web browser. The Web
backup-archive client can be used by the client node or a user ID with the proper
authority to perform backup, archive, restore, and retrieve operations on any
machine that is running the Web backup-archive client.
You can establish access to a Web backup-archive client for help desk personnel
who do not have system or policy privileges by granting those users client access
authority to the nodes they need to manage. Help desk personnel can then
perform activities on behalf of the client node, such as backup and restore
operations.
A native backup-archive client can log on to IBM Tivoli Storage Manager using
its node name and password, or an administrative user ID and password. The
administrative user ID password is managed independently from the password
that is generated with the passwordaccess generate client option. The client must
have the passwordaccess generate option specified in its client options file to
enable use of the Web backup-archive client.
To use the Web backup-archive client from your web browser, you specify the URL
and port number of the IBM Tivoli Storage Manager backup-archive client machine
running the Web client. The browser you use to connect to a Web backup-archive
client must be Microsoft Internet Explorer 5.0 or Netscape 4.7 or later. The
browser must have the Java Runtime Environment (JRE) 1.3.1, which includes the
Java Plug-in software. The JRE is available at http://java.sun.com/getjava.
During node registration, you have the option of granting client owner or client
access authority to an existing administrative user ID. You can also prevent the
server from creating an administrative user ID at registration. If an administrative
user ID already exists with the same name as the node being registered, the server
registers the node but does not automatically create an administrative user ID. This
process also applies if your site uses open registration.
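For example, the following registration would create the node but no matching
administrative user ID (the node name and password are illustrative):

register node labclient pass1 userid=none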
For more information about installing and configuring the Web backup-archive
client, refer to the Backup-Archive Clients Installation and User's Guide.
The administrator can change the password of any client node for which they
have authority.
This is the default authority level for the client at registration. An
administrator with system or policy privileges to a client's domain has
client owner authority by default.
Client access
You can only access the client through the Web backup-archive client.
You can restore data only to the original client.
A user ID with client access authority cannot access the client from another
machine using the NODENAME parameter.
This privilege class is useful for help desk personnel, so they can
assist users in backing up or restoring data without having system or
policy privileges. Client data can be restored only to the original
client. A user ID with client access privilege cannot directly access the
client's data from a native backup-archive client.
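The command behind the following example could take this form, granting the
administrator FRED client access authority to the LABCLIENT node (a sketch
based on the surrounding text):

grant authority fred class=node authority=access node=labclient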
The administrator FRED can now access the LABCLIENT client, and perform
backup and restore. The administrator can only restore data to the LABCLIENT
node.
To grant client owner authority to ADMIN1 for the STUDENT1 node, issue:
grant authority admin1 class=node authority=owner node=student1
The user ID ADMIN1 can now perform backup and restore operations for the
STUDENT1 client node. The user ID ADMIN1 can also restore files from the
STUDENT1 client node to a different client node.
This command results in the NEWCLIENT node being registered with a password
of pass2new, and also grants HELPADMIN client owner authority. This command
would not create an administrator ID. The HELPADMIN client user ID is now able
to access the NEWCLIENT node from a remote location.
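A registration consistent with this description can be sketched as follows,
naming an existing administrative user ID on the USERID parameter:

register node newclient pass2new userid=helpadmin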
The help desk person, using HELP1 user ID, has a Web browser with Java
Runtime Environment (JRE) 1.3.1.
1. Register an administrative user ID of HELP1.
register admin help1 05x23 contact="M. Smith, Help Desk x0001"
2. Grant the HELP1 administrative user ID client access authority to all clients in
the FINANCE domain. With client access authority, HELP1 can perform backup
and restore operations for clients in the FINANCE domain. Client nodes in the
FINANCE domain are Dave, Sara, and Joe.
grant authority help1 class=node authority=access domains=finance
3. The help desk person, HELP1, opens the Web browser and specifies the URL
and port number for client machine Sara:
http://sara.machine.name:1581
A Java applet is started, and the client hub window is displayed in the main
window of the Web browser. When HELP1 accesses the backup function from
the client hub, the IBM Tivoli Storage Manager login screen is displayed in a
separate Java applet window. HELP1 authenticates with the administrative user
ID and password. HELP1 can perform a backup for Sara.
For information about which functions are not supported on the Web
backup-archive client, refer to the Backup-Archive Clients Installation and User's
Guide.
operation even when the file spaces contain directory names or files in multiple
languages, or when the client uses a different code page than the server.
New clients storing data on the server for the first time require no special set-up. If
the client has the latest IBM Tivoli Storage Manager client software installed, the
server automatically stores Unicode-enabled file spaces for that client.
However, if you have clients that already have data stored on the server and the
clients install the Unicode-enabled IBM Tivoli Storage Manager client software, you
need to plan for the migration to Unicode-enabled file spaces. To allow clients with
existing data to begin to store data in Unicode-enabled file spaces, IBM Tivoli
Storage Manager provides a function for automatic renaming of existing file
spaces. The file data itself is not affected; only the file space name is changed.
Once the existing file space is renamed, the operation creates a new file space that
is Unicode-enabled. The creation of the new Unicode-enabled file space for clients
can greatly increase the amount of space required for storage pools and the
amount of space required for the server database. It can also increase the amount
of time required for a client to run a full incremental backup, because the first
incremental backup after the creation of the Unicode-enabled file space is a full
backup.
When clients with existing file spaces migrate to Unicode-enabled file spaces, you
need to ensure that sufficient storage space for the server database and storage
pools is available. You also need to allow for potentially longer backup windows
for the complete backups.
Note: Once the server is at the latest level of software that includes support for
Unicode-enabled file spaces, you can only go back to a previous level of the
server by restoring an earlier version of IBM Tivoli Storage Manager and the
database.
converted to the server's code page. When IBM Tivoli Storage Manager cannot
convert the code page, the client may receive one or more of the following
messages when using the command line: ANS1228E, ANS4042E, and ANS1803E.
Clients that are using the GUI may see a Path not found message. If you have
clients that are experiencing such backup failures, you need to migrate the file
spaces for these clients to ensure that these systems are completely protected with
backups. If you have a large number of clients, set the priority for migrating the
clients based on how critical each client's data is to your business. See Migrating
Clients to Unicode-Enabled File Spaces.
Any new file spaces that are backed up from client systems with the
Unicode-enabled IBM Tivoli Storage Manager client are automatically stored as
Unicode-enabled file spaces in server storage.
Objects backed up or archived with a Unicode-enabled IBM Tivoli Storage
Manager client in any supported language environment can be restored or
retrieved with a Unicode-enabled client in the same or any other supported
language environment. This means, for example, that files backed up by a Japanese
Unicode-enabled client can be restored by a German Unicode-enabled client.
Note: Objects backed up or archived by a Unicode-enabled IBM Tivoli Storage
Manager client cannot be restored or retrieved by a client that is not
Unicode-enabled.
IBM Tivoli Storage Manager does not automatically rename client file spaces
when the client system upgrades to the Unicode-enabled IBM Tivoli Storage
Manager client. This setting can help an administrator control how many clients'
file spaces can be renamed at one time. The administrator can determine how
many Unicode-enabled clients exist by using the QUERY NODE
FORMAT=DETAILED command. The output displays the client level. A
Unicode-enabled client is on a Windows NT, Windows 2000, Windows XP,
Windows Server 2003, Macintosh OS 9, or Macintosh OS X system at IBM
Tivoli Storage Manager Version 4.2.0 or higher.
v Automatically rename existing file spaces, forcing the creation of
Unicode-enabled file spaces in place of the renamed file spaces
(AUTOFSRENAME=YES).
IBM Tivoli Storage Manager automatically renames client file spaces in server
storage when the client upgrades to the Unicode-enabled client and runs one of
the following operations: archive, selective backup, full incremental backup, or
partial incremental backup. IBM Tivoli Storage Manager automatically renames
the file spaces that are specified in the current operation and creates new,
Unicode-enabled file spaces where files and directories are stored to complete
the operation. Other file spaces that are not specified in the current operation are
not affected by the rename. This means a client can have mixed file spaces. See
The Rules for Automatically Renaming File Spaces on page 274 for how the
new name is constructed.
Attention: If you force the file space renaming for all clients at the same time,
client operations can contend for network and storage resources, and storage
pools can run out of storage space.
v Allow clients to choose whether to rename file spaces, in effect choosing
whether new Unicode-enabled file spaces are created
(AUTOFSRENAME=CLIENT).
If you use this value for a client node, the client can set its AUTOFSRENAME
option in its options file. The client option determines whether file spaces are
renamed (YES or NO), or whether the user is prompted for renaming at the time
of an IBM Tivoli Storage Manager operation (PROMPT).
The default value for the client option is PROMPT. When the option is set for
prompting, the client is presented with a choice about renaming file spaces.
Chapter 11. Managing Client Nodes

When a client that has existing file spaces on server storage upgrades to the
Unicode-enabled client, and the client runs an IBM Tivoli Storage Manager
operation with the server, the user is asked to choose whether to rename the file
spaces that are involved in the current operation.
The client is prompted only once about renaming a particular file space.
If the client does not choose to rename the file space, the administrator can later
rename the file space so that a new Unicode-enabled file space is created the
next time the client processes an archive, selective backup, full incremental
backup, or partial incremental backup.
Attention: There is no prompt for operations that run with the client scheduler.
If the client is running the scheduler and the client AUTOFSRENAME option is
set to PROMPT, there is no prompt and the file space is not renamed. This
allows a client session to run unattended. The prompt appears during the next
interactive session on the client.
The following table summarizes what occurs with different parameter and option
settings.
Table 24. Effects of AUTOFSRENAME Settings

Parameter on the server   Option on the client   Result for file spaces
(for each client)
-----------------------   --------------------   --------------------------
Yes                       Yes, No, or Prompt     Renamed
No                        Yes, No, or Prompt     Not renamed
Client                    Yes                    Renamed
Client                    No                     Not renamed
Client                    Prompt                 Depends on the response
                                                 from the user (yes or no)
The Rules for Automatically Renaming File Spaces: With its automatic renaming
function, IBM Tivoli Storage Manager renames a file space by adding the suffix
_OLD. For example:
Original file space           Renamed file space
\\maria\c$                    \\maria\c$_OLD
If the new name would conflict with the name of another file space, a number is
added to the suffix. For example:
Original file space           Renamed file space
\\maria\c$                    \\maria\c$_OLD2
If the new name for the file space exceeds the limit of 64 characters, the file space
name is truncated on the right before the suffix _OLD is added.
Planning for Unicode Versions of Existing Client File Spaces: You need to
consider the following factors in your planning:
v After clients with existing file spaces start to create Unicode-enabled file spaces,
they will still need to have access to the renamed file spaces that are not
Unicode-enabled for some period of time.
v Your storage pool and database space requirements can double if you allow all
clients to create Unicode-enabled file spaces in addition to their existing file
spaces that are not Unicode-enabled.
v Because the initial backups after migration are complete backups, migration can
also greatly increase the time required to finish backup operations.
To minimize problems, you need to plan the storage of Unicode-enabled file spaces
for clients that already have existing file spaces in server storage.
1. Determine which clients need to migrate.
Clients that have had problems with backing up files because their file spaces
contain names of directories or files that cannot be converted to the server's
code page should have the highest priority. Balance that with clients that are
most critical to your operations. If you have a large number of clients that need
to become Unicode-enabled, you can control the migration of the clients.
Change the rename option for a few clients at a time to keep control of storage
space usage and processing time. Also consider staging migration for clients
that have a large amount of data backed up.
2. Allow for increased backup time and network resource usage when the
Unicode-enabled file spaces are first created in server storage.
Based on the number of clients and the amount of data those clients have,
consider whether you need to stage the migration. Staging the migration means
setting the AUTOFSRENAME parameter to YES or CLIENT for only a small
number of clients every day.
Note: If you set the AUTOFSRENAME parameter to CLIENT, be sure to have
the clients (that run the client scheduler) set their option to
AUTOFSRENAME YES. This ensures the file spaces are renamed.
3. Check the current storage usage for the clients that need to become
Unicode-enabled.
You can use the QUERY OCCUPANCY command to display information on
how much space each client is currently using. Initially, clients will need only
the amount of space used by active files. Therefore, you need to estimate how
much of the current space is used by copies (different versions of the same file).
Migration will result in a complete backup at the next incremental backup, so
clients will need space for that backup, plus for any other extra versions that
they will keep. Therefore, the amount of storage required also depends on
policy (see the next step). Your IBM Tivoli Storage Manager policy specifies
how files are backed up, archived, migrated from client node storage, and
managed in server storage.
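The occupancy check in this step can be sketched as follows (the node name is
illustrative):

query occupancy sue

The output lists, for each file space and storage pool, the number of files and the
amount of space that the node occupies.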
4. Understand how your IBM Tivoli Storage Manager policies affect the storage
that will be needed.
If your policies expire files based only on the number of versions (Versions
Data Exists), storage space required for each client will eventually double, until
you delete the old file spaces.
If your policies expire files based only on age (Retain Extra Versions), storage
space required for each client will increase initially, but will not double.
If your policies use both the number of versions and their age, each client will
need less than double their current usage.
5. Estimate the effect on the database size.
The database size depends on the number of files in server storage, as well as
the number of versions of those files. As Unicode-enabled file spaces are
backed up, the original file spaces that were renamed remain. Therefore, the
server requires additional space in the database to store information about the
increased number of file spaces and files.
See Estimating and Monitoring Database and Recovery Log Space
Requirements on page 424.
6. Arrange for the additional storage pool space, including space in copy storage
pools, based on your estimates from step 3 and step 4 on page 275.
7. Check the server database space that is available and compare with your
estimate from step 5 on page 275.
8. Ensure that you have a full database backup before you proceed with migration
of Unicode-enabled file spaces. See Backing Up the Database on page 553.
9. Consider how you will manage the renamed file spaces as they age. The
administrator can delete them, or the clients can be allowed to delete their own
file spaces.
How Clients are Affected by the Migration to Unicode: The server manages a
Unicode-enabled client and its file spaces as follows:
v When a client upgrades to a Unicode-enabled client and logs in to the server, the
server identifies the client as Unicode-enabled.
Note: That same client (same node name) cannot log in to the server with a
previous version of IBM Tivoli Storage Manager or a client that is not
Unicode-enabled.
v The original file space that was renamed (_OLD) remains with both its active
and inactive file versions that the client can restore if needed. The original file
space will no longer be updated. The server will not mark existing active files
inactive when the same files are backed up in the corresponding
Unicode-enabled file space.
Note: Before the Unicode-enabled client is installed, the client can back up files
in a code page other than the current locale, but cannot restore those files.
After the Unicode-enabled client is installed, if the same client continues
to use file spaces that are not Unicode-enabled, the client skips files that
are not in the same code page as the current locale during a backup.
Because the files are skipped, they appear to have been deleted from the
client. Active versions of the files in server storage are made inactive on
the server. When a client in this situation is updated to a Unicode-enabled
client, you should migrate the file spaces for that client to
Unicode-enabled file spaces.
v The server does not allow a Unicode-enabled file space to be sent to a client that
is not Unicode-enabled during a restore or retrieve process.
v Clients should be aware that they will not see all their data on the
Unicode-enabled file space until a full incremental backup has been processed.
When a client performs a selective backup of a file or directory and the original
file space is renamed, the new Unicode-enabled file space will contain only the
file or directory specified for that backup operation. All other directories and
files are backed up on the next full incremental backup.
If a client needs to restore a file before the next full incremental backup, the client
can perform a restore from the renamed file space instead of the new
Unicode-enabled file space. For example:
1. Sue had been backing up her file space, \\sue-node\d$.
2. Sue upgrades the IBM Tivoli Storage Manager client on her system to the
Unicode-enabled IBM Tivoli Storage Manager client.
3. Sue performs a selective backup of the file HILITE.TXT.
4. The automatic file space renaming function is in effect and IBM Tivoli
Storage Manager renames \\sue-node\d$ to \\sue-node\d$_OLD. IBM Tivoli
Storage Manager then creates a new Unicode-enabled file space on the server
with the name \\sue-node\d$. This new Unicode-enabled file space contains
only the HILITE.TXT file.
5. All other directories and files in Sue's file system will be backed up on the
next full incremental backup. If Sue needs to restore a file before the next full
incremental backup, she can restore the file from the \\sue-node\d$_OLD
file space.
Refer to the Backup-Archive Clients Installation and User's Guide for more
information.
Example of a Migration Process: This section gives one possible sequence for
migrating clients. Assumptions for this scenario are:
v The IBM Tivoli Storage Manager server database has been backed up.
v The latest server software has been installed. This installation has also
performed an upgrade to the server database.
v Clients have installed the latest software.
v A few clients are file servers. Most clients are workstations used by individuals.
v Clients generally run scheduled incremental backups every night.
The following is a possible migration process:
1. Have all clients install the Unicode-enabled IBM Tivoli Storage Manager client
software.
2. Migrate the file servers first. For clients that are file servers, update the
AUTOFSRENAME parameter to enable automatic renaming for the file spaces.
For example, if the client node names for all file servers begin with FILE, enter
the following command:
update node file* autofsrename=yes
This forces the file spaces to be renamed at the time of the next backup or
archive operation on the file servers. If the file servers are large, consider
changing the renaming parameter for one file server each day.
3. Allow backup and archive schedules to run as usual. Monitor the results.
a. Check for the renamed file spaces for the file server clients. Renamed file
spaces have the suffix _OLD or _OLDn, where n is a number. (See The Rules
for Automatically Renaming File Spaces on page 274.)
b. Check the capacity of the storage pools. Add tape or disk volumes to
storage pools as needed.
c. Check database usage statistics to ensure you have enough space.
4. Migrate the workstation clients. For example, migrate all clients with names
that start with the letter a.
update node a* autofsrename=yes
5. Allow backup and archive schedules to run as usual that night. Monitor the
results.
6. After sufficient time passes, consider deleting the old, renamed file spaces. See
Managing the Renamed File Spaces on page 278.
Managing the Renamed File Spaces: The file spaces that were automatically
renamed (_OLD) to allow the creation of Unicode-enabled file spaces continue to
exist on the server. Users can still access the file versions in these file spaces.
Because a renamed file space is not backed up again with its new name, the files
that are active (the most recent backup version) in the renamed file space remain
active and never expire. The inactive files in the file space expire according to the
policy settings for how long versions are retained. To determine how long the files
are retained, check the values for the parameters, Retain Extra Versions and Retain
Only Versions, in the backup copy group of the management class to which the
files are bound.
When users no longer need their old, renamed file spaces, you can delete them.
If possible, wait for the longest retention time for the only version (Retain Only
Version) that any management class allows. If your system has storage
constraints, you may need to delete these file spaces sooner.
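For example, an old, renamed file space could be deleted with a command of this
form (the node and file space names are illustrative, following the earlier Sue
example):

delete filespace sue-node \\sue-node\d$_OLD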
Node Name   Filespace   FSID   Platform   Filespace   Is          Capacity   Pct
            Name                          Type        Filespace      (MB)    Util
                                                      Unicode?
---------   ---------   ----   --------   ---------   ---------  ---------   ----
SUE                                       NTFS        Yes          2,502.3   75.2
SUE                                       NTFS        Yes          6,173.4   59.6
JOE                                       NTFS        No          12,299.7   31.7
v Identify file spaces defined to each client node, so that you can delete each file
space from the server before removing the client node from the server
v Identify file spaces that are Unicode-enabled and identify their file space ID
(FSID)
v Monitor the space used on workstations' disks
v Monitor whether backups are completing successfully for the file space
v Determine the date and time of the last backup
You display file space information by identifying the client node name and file
space name.
Note: File space names are case-sensitive and must be entered exactly as known to
the server.
For example, to view information about file spaces defined for client node JOE,
enter:
query filespace joe *
When you display file space information in detailed format, the Filespace Name
field may display file space names as .... This indicates to the administrator that
a file space does exist but could not be converted to the server's code page.
Conversion can fail if the string includes characters that are not available in the
server code page, or if the server has a problem accessing system conversion
routines.
File space names and file names that can be in a different code page or locale than
the server do not display correctly in the administrator's Web interface or the
administrative command-line interface. The data itself is backed up and can be
restored properly, but the file space name or file name may display with a
combination of invalid characters or blank spaces. Refer to the Administrator's
Reference for details.
For example, client node PEASE no longer needs archived files in file space
/home/pease/dir2, but the user does not have the authority to delete those files.
You can delete them by entering:
delete filespace pease /home/pease/dir2 type=archive
The authority to delete backed-up or archived files from server storage is set
when a client node is registered. See Accepting Default Closed Registration or
Enabling Open Registration on page 252 for information on allowing users to
delete files in storage pools.
v You want to remove a client node from the server.
You must delete a user's files from storage pools before you can remove a client
node. For example, to delete all file spaces belonging to client node DEBBYG,
enter:
delete filespace debbyg * type=any
After you delete all of a client node's file spaces, you can delete the node with
the REMOVE NODE command. See Deleting Client Nodes on page 264 for
more details.
v You want to delete a specific users files.
For client nodes that support multiple users, such as UNIX, a file owner name is
associated with each file on the server. The owner name is the user ID of the
operating system, such as the UNIX user ID. When you delete a file space
belonging to a specific owner, only files that have the specified owner name in
the file space are deleted.
When a node has more than one file space and you issue a DELETE FILESPACE
command for only one file space, a QUERY FILESPACE command for the node
during the delete process shows no file spaces. When the delete process ends, you
can view the remaining file spaces with the QUERY FILESPACE command.
client can use these defined options during a backup, archive, restore, or retrieve
process. See the Backup-Archive Clients Installation and User's Guide for detailed
information about individual client options.
To create a client option set and have the clients use the option set, do the
following:
1. Create the client option set with the DEFINE CLOPTSET command.
2. Add client options to the option set with the DEFINE CLIENTOPT command.
3. Specify which clients should use the option set with the REGISTER NODE or
UPDATE NODE command.
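The three steps can be sketched as follows (the option set name and the option
added to it are illustrative):

define cloptset engbackup description="Option set for engineering"
define clientopt engbackup compression yes
update node mike cloptset=engbackup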
For a list of valid client options you can specify, refer to the Administrator's Reference.
The server automatically assigns sequence numbers to the specified options, or you
can choose to specify the sequence number for order of processing. This is helpful
if you have defined more than one of the same option as in the following example.
define clientopt engbackup inclexcl "include d:\admin"
define clientopt engbackup inclexcl "include d:\payroll"
The options are processed starting with the highest sequence number.
Any include-exclude statements in the server client option set have priority over
the include-exclude statements in the local client options file. The server
include-exclude statements are always enforced; they are placed at the bottom of
the include-exclude list and evaluated before the client include-exclude statements. If
the server option set has several include-exclude statements, the statements are
processed starting with the highest sequence number. The client can use the
QUERY INCLEXCL command to view the include-exclude statements in the order
they are processed. QUERY INCLEXCL also displays the source of each
include-exclude statement. For more information on the processing of the
include-exclude statements see The Include-Exclude List on page 308 and also
the Backup-Archive Clients Installation and User's Guide.
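For example, from the backup-archive client command line (dsmc), a user can display the include-exclude statements in their processing order by entering:
dsmc query inclexcl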
The FORCE parameter allows an administrator to specify whether a client node
can override an option value. This parameter has no effect on additive options
such as INCLEXCL and DOMAIN. The default value is NO. If FORCE=YES, the
client cannot override the value. The following example shows how you can
prevent a client from using subfile backup:
define clientopt engbackup subfilebackup no force=yes
The client node MIKE is registered with the password pass2eng. When the client
node MIKE performs a scheduling operation, his schedule log entries are kept for 5
days.
Tivoli Storage Manager can hold a client restore session in DSMC loop mode until
one of these conditions is met:
v The device class MOUNTRETENTION limit is satisfied.
v The client IDLETIMEOUT period is satisfied.
v The loop session ends.
Administrators can perform the following activities when managing IBM Tivoli
Storage Manager sessions:
Task                                          Required Privilege Class
Display information about client sessions     Any administrator
Cancel a client session                       System or operator
Disable or enable client sessions             System or operator
Figure 43. Information about client sessions (sample QUERY SESSION report). The
report columns are Comm. Method, Sess State, Wait Time, Bytes Sent, Bytes Recvd,
Sess Type, and Client Name. In the example, JOEUSER has a Tcp/Ip Node session
in the IdleW state (wait time 36 S, 592 bytes sent, 186 bytes received), STATION1
has a Tcp/Ip Node session in the RecvW state (730 bytes sent, 638 bytes received),
and ADMIN has an HTTP Admin session in the Run state.
You can examine the session state and wait time to determine how long (in
seconds, minutes, or hours) a session has been in its current state.
Run
The server is performing or waiting to perform a task requested by the
client.
End
The session is ending, and server resources for the session are being
released.
RecvW
Waiting to receive an expected message from the client while a database
transaction is in progress. A session in this state is subject to the
COMMTIMEOUT limit.
SendW
Waiting for acknowledgment that the client has received a message sent by
the server.
MediaW
Waiting for removable media to become available.
Aggregation can cause multiple media waits within a transaction and is
indicated by one client message. For more information, see Reclaiming
Space in Sequential Access Storage Pools on page 213.
Note: If QUERY SESSION FORMAT=DETAILED is specified, the Media
Access Status field displays the type of media wait state.
IdleW Waiting for communication from the client, and a database transaction is
NOT in progress. A session in this state is subject to the IDLETIMEOUT
limit as specified in the server options file.
If a client does not initiate communication within the specified time limit
set by the IDLETIMEOUT option in the server options file, then the server
cancels the client session.
For example, if the IDLETIMEOUT option is set to 30 minutes, and a user
does not initiate any operations within those 30 minutes, then the server
cancels the client session. The client session is automatically reconnected to
the server when it starts to send data again.
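The idle timeout is specified in the server options file (dsmserv.opt). For example, to allow client sessions to remain idle for 30 minutes before the server cancels them, the options file would contain:
idletimeout 30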
If the session is in the Run state when it is canceled, the cancel process does not
take place until the session enters the SendW, RecvW, or IdleW state. For details,
see Server Session States on page 284.
If the session you cancel is currently waiting for a media mount, the mount request
is automatically canceled. If a volume associated with the client session is currently
being mounted by an automated library, the cancel may not take effect until the
mount is complete.
For example, to cancel a session for client MARIE:
1. Query client sessions to determine the session number, as shown in Figure 43
on page 283. The example report displays MARIE's session number 6.
2. Cancel node MARIE's session by entering:
cancel session 6
been active for more minutes than specified and the data transfer rate is
less than the amount specified in the THROUGHPUTDATATHRESHOLD
server option.
Refer to the Administrator's Reference for more information.
You can prevent clients from establishing sessions with the server by using the
DISABLE SESSIONS command. This command does not cancel sessions currently
in progress or system processes such as migration and reclamation. For example, to
disable client node access to the server, enter:
disable sessions
You continue to access the server and current client activities complete unless a
user logs off or an administrator cancels a client session. After the client sessions
have been disabled, you can enable client sessions and resume normal operations
by entering:
enable sessions
You can issue the QUERY STATUS command to determine if the server is enabled
or disabled.
See also Locking and Unlocking Client Nodes on page 264.
eligible for backup. For example, if you are restoring all files in directory A, you
can still back up files in directory B from the same file space.
The RESTOREINTERVAL server option allows administrators to specify how long
client restartable restore sessions are saved in the server database. Consider
scheduled backup operations when setting this option. For more information, refer
to the RESTOREINTERVAL server option in the Administrator's Reference.
Administrators can perform the following activities when managing client
restartable restore sessions:
Task                                                     Required Privilege Class
Display information about restartable restore sessions   Any administrator
Cancel restartable restore sessions                      System or operator
Interrupt restartable restore sessions                   System or operator
When an administrator accesses the administrative Web interface, only the tasks
that correspond to the administrator's privilege class are displayed.
The administrative privilege classes are System; Policy and Storage, each of which
can be restricted or unrestricted; Node; Operator; and Analyst.
Table 25 summarizes the privilege classes, and gives examples of how to set
privilege classes. For more information, see Managing Levels of Administrative
Authority on page 293.
Table 25. Authority and Privilege Classes

Privilege Class        Capabilities / Example GRANT AUTHORITY Command
System                 System-wide responsibilities; manage the enterprise;
                       manage IBM Tivoli Storage Manager security.
Unrestricted Policy    Example: grant authority smith classes=policy
Restricted Policy      Example: grant authority jones domains=engpoldom
Unrestricted Storage   Example: grant authority coyote classes=storage
Restricted Storage     Example: grant authority holland stgpools=tape*
Operator               Control the immediate operation of the server and the
                       availability of storage media.
Prevent clients from initiating sessions within a firewall: see Server-initiated
Sessions on page 262.
Note: For information on connecting with IBM Tivoli Storage Manager across a
firewall, refer to the Quick Start.
Task                           Required Privilege Class
Registering an administrator   System
Removing administrators        System
Registering Administrators
An administrator registers other administrators with the REGISTER ADMIN
command.
To register an administrator with a user ID of DAVEHIL, the password birds, and a
password expiration period of 120 days, enter the REGISTER ADMIN command:
register admin davehil birds passexp=120 contact='backup team'
Renaming an Administrator
You can rename an administrator ID when an employee wants to be identified by
a new ID, or you want to assign an existing administrator ID to another person.
You cannot rename an administrator ID to one that already exists on the system.
For example, if administrator HOLLAND leaves your organization, you can assign
administrative privilege classes to another user by completing the following steps:
1. Assign HOLLAND's user ID to WAYNESMITH by issuing the RENAME
ADMIN command:
rename admin holland waynesmith
Chapter 11. Managing Client Nodes
Removing Administrators
You can remove administrators from the server so that they no longer have access
to administrator functions. For example, to remove registered administrator ID
SMITH, enter:
remove admin smith
Notes:
1. You cannot remove the last system administrator from the system.
2. You cannot remove the administrator SERVER_CONSOLE. See The Server
Console on page 288 for more information.
When she returns, any system administrator can unlock her administrator ID by
entering:
unlock admin marysmith
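In this scenario, the administrator ID would previously have been locked with the LOCK ADMIN command, for example:
lock admin marysmith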
As an additional example, assume that three tape storage pools exist: TAPEPOOL1,
TAPEPOOL2, and TAPEPOOL3. To grant restricted storage privilege for these
storage pools to administrator HOLLAND, you can enter the following command:
grant authority holland stgpools=tape*
HOLLAND is restricted to managing storage pools with names that begin with
TAPE and that existed when the authority was granted. HOLLAND is not
authorized to manage any storage pools that are defined after the authority has
been granted.
You can specify a value from 0 to 9999 minutes. A value of 0 means that there is
no timeout period for the administrative Web interface. To help ensure the security
of an unattended browser, it is recommended that you set the timeout value higher
than zero.
Once you have explicitly set a password expiration for a node or administrator, it
is not modified if you later set a password expiration for all users. You can use the
RESET PASSEXP command to reset the password expiration period to the common
expiration period. Use the QUERY STATUS command to display the common
password expiration period, which at installation is set to 90 days.
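For example, to set the common password expiration period to 90 days and then confirm the setting, you can enter:
set passexp 90
query status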
Tasks:
Getting Users Started on page 300
Changing Policy on page 300
Creating Your Own Policies on page 316
Defining and Updating a Policy Domain on page 318
Defining and Updating a Policy Set on page 319
Defining and Updating a Management Class on page 320
Defining and Updating a Backup Copy Group on page 321
Defining and Updating an Archive Copy Group on page 327
Assigning a Default Management Class on page 328
Validating and Activating a Policy Set on page 329
Assigning Client Nodes to a Policy Domain on page 330
Running Expiration Processing to Delete Expired Files on page 330
Policy for IBM Tivoli Storage Manager Servers as Clients on page 336
In this chapter, most examples illustrate how to perform tasks by using a Tivoli
Storage Manager command-line interface. For information about the commands,
see the Administrator's Reference, or issue the HELP command from the command
line of a Tivoli Storage Manager administrative client.
Tivoli Storage Manager tasks can also be performed from the administrative Web
interface. For more information about using the administrative interface, see Quick
Start.
v The most recent backup version is retained for as long as the original file is on
the client file system. All other versions are retained for up to 30 days after they
become inactive.
v One backup version of a file that has been deleted from the client's system is
retained in server storage for 60 days.
v An archive copy is kept for up to 365 days.
See The Standard Policy for more details about the standard policy.
The server manages files based on whether the files are active or inactive. The
most current backup or archived copy of a file is the active version. All other
versions are called inactive versions. An active version of a file becomes inactive
when:
v A new backup is made
v A user deletes that file on the client node and then runs an incremental backup
Policy determines how many inactive versions of files the server keeps, and for
how long. When files exceed the criteria, the files expire. Expiration processing can
then remove the files from the server database. See File Expiration and Expiration
Processing on page 301 and Running Expiration Processing to Delete Expired
Files on page 330 for details.
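Expiration processing can also be started manually with the EXPIRE INVENTORY command. As a sketch, the following runs expiration and limits the process to two hours (the DURATION parameter is specified in minutes):
expire inventory duration=120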
Backup Policies
Files are backed up to the default disk storage pool, BACKUPPOOL. The retention
described above is controlled by the backup copy group parameters VEREXISTS,
RETEXTRA (30 days), and RETONLY (60 days).
General
The default management class is STANDARD.
Changing Policy
Some types of clients and situations require policy changes. For example, if you
need to direct client data to storage pools different from the default storage pools,
you need to change policy. Other situations may also require policy changes. See
Configuring Policy for Specific Cases on page 331 for details.
To change policy that you have established in a policy domain, you must replace
the ACTIVE policy set. You replace the ACTIVE policy set by activating another
policy set. Do the following:
1. Create or modify a policy set so that it contains the policy that you want to
implement.
v Create a new policy set either by defining a new policy set or by copying a
policy set.
v Modify an existing policy set (it cannot be the ACTIVE policy set).
Note: You cannot directly modify the ACTIVE policy set. If you want to make
a small change to the ACTIVE policy set, copy the policy to modify it
and follow the steps here.
2. Make any changes that you need to make to the management classes, backup
copy groups, and archive copy groups in the new policy set. For details, see
Defining and Updating a Management Class on page 320, Defining and
Updating a Backup Copy Group on page 321, and Defining and Updating an
Archive Copy Group on page 327.
3. Validate the policy set. See Validating a Policy Set on page 329 for details.
4. Activate the policy set. The contents of your new policy set become the
ACTIVE policy set. See Activating a Policy Set on page 330 for details.
Backup
To guard against the loss of information, the backup-archive client can copy files,
subdirectories, and directories to media controlled by the server. Backups can be
controlled by administrator-defined policies and schedules, or users can request
backups of their own data. The backup-archive client provides two types of
backup:
Incremental backup
The backup of files according to policy defined in the backup copy group
of the management class for the files. An incremental backup typically
backs up all files that are new or that have changed since the last
incremental backup.
Selective backup
Backs up only files that the user specifies. The files must also meet some of
the policy requirements defined in the backup copy group.
See the Backup-Archive Clients Installation and User's Guide for details on
backup-archive clients that can also back up logical volumes. The logical volume
must meet some of the policy requirements that are defined in the backup copy
group. See Policy for Logical Volume Backups on page 333.
Restore
When a user restores a backup version of a file, the server sends a copy of the file
to the client node. The backup version remains in server storage. Restoring a
logical volume backup works the same way.
If more than one backup version exists, a user can restore the active backup
version or any inactive backup versions.
If policy is properly set up, a user can restore backed-up files to a specific time. See
Setting Policy to Enable Point-in-Time Restore for Clients on page 337 for details
on the requirements.
Migration
When a file is migrated to the server, it is replaced on the client node with a small
stub file of the same name as the original file. The stub file contains data needed to
locate the migrated file on server storage.
Tivoli Storage Manager for Space Management provides selective and automatic
migration. Selective migration lets users migrate files by name. The two types of
automatic migration are:
Threshold
If space usage exceeds a high threshold set at the client node, migration
begins and continues until usage drops to the low threshold also set at the
client node.
Demand
If an out-of-space condition occurs for a client node, migration begins and
continues until usage drops to the low threshold.
To prepare for efficient automatic migration, Tivoli Storage Manager for Space
Management copies a percentage of user files from the client node to the IBM
Tivoli Storage Manager server. The premigration process occurs whenever Tivoli
Storage Manager for Space Management completes an automatic migration. The
next time free space is needed at the client node, the files that have been
premigrated to the server can quickly be changed to stub files on the client. The
default premigration percentage is the difference between the high and low
thresholds.
Files are selected for automatic migration and premigration based on the number
of days since the file was last accessed and also on other factors set at the client
node.
Recall
Tivoli Storage Manager for Space Management provides selective and transparent
recall. Selective recall lets users recall files by name. Transparent recall occurs
automatically when a user accesses a migrated file.
Reconciliation
Migration and premigration can create inconsistencies between stub files on the
client node and space-managed files in server storage. For example, if a user
deletes a migrated file from the client node, the copy remains at the server. At
regular intervals set at the client node, IBM Tivoli Storage Manager compares client
node and server storage and reconciles the two by deleting from the server any
outdated files or files that do not exist at the client node.
Chapter 12. Implementing Policies for Client Data
copy group, or no copy groups. Users can bind (that is, associate) their files
to a management class through the include-exclude list.
See More on Management Classes on page 307 for details.
Policy set
Specifies the management classes that are available to groups of users.
Policy sets contain one or more management classes. You must identify one
management class as the default management class. Only one policy set, the
ACTIVE policy set, controls policy operations.
Policy domain
Lets an administrator group client nodes by the policies that govern their
files and by the administrators who manage their policies. A policy domain
contains one or more policy sets, but only one policy set (named ACTIVE)
can be active at a time. The server uses only the ACTIVE policy set to
manage files for client nodes assigned to a policy domain.
You can use policy domains to:
v Group client nodes with similar file management requirements
v Provide different default policies for different groups of clients
v Direct files from different groups of clients to different storage
hierarchies based on need (different file destinations with different
storage characteristics)
v Restrict the number of management classes to which clients have access
Figure 47. How Clients, Server Storage, and Policy Work Together
1. When clients are registered, they are associated with a policy domain.
Within the policy domain are the policy set, management class, and copy
groups.
2, 3. When a client backs up, archives, or migrates a file, it is bound to a
management class. A management class and the backup and archive copy
groups within it specify where files are stored and how they are managed
when they are backed up, archived, or migrated from the client.
4, 5. Storage pools are the destinations for backed-up, archived, or
space-managed files.
Files that are initially stored on disk storage pools can migrate to tape or
optical disk storage pools if the pools are set up in a storage hierarchy.
If a user does not create an include-exclude list, the following default conditions
apply:
v All files belonging to the user are eligible for backup and archive services.
v The default management class governs backup, archive, and space-management
policies.
Figure 48 shows an example of an include-exclude list. The statements in this
example list do the following:
v Excludes certain files or directories from backup, archive, and client migration
operations
Line 1 in Figure 48 means that the SSTEINER node ID excludes all core files
from being eligible for backup and client migration.
v Includes some previously excluded files
Line 2 in Figure 48 means that the files in the /home/ssteiner directory are
excluded. The include statement that follows on line 3, however, means that the
/home/ssteiner/options.scr file is eligible for backup and client migration.
v Binds a file to a specific management class
Line 4 in Figure 48 means that all files and subdirectories belonging to the
/home/ssteiner/driver5 directory are managed by the policy defined in the
MCENGBK2 management class.
exclude /.../core
exclude /home/ssteiner/*
include /home/ssteiner/options.scr
include /home/ssteiner/driver5/.../* mcengbk2

Figure 48. Example of an Include-Exclude List
IBM Tivoli Storage Manager processes the include-exclude list from the bottom up,
and stops when it finds an include or exclude statement that matches the file it is
processing. Therefore, the order in which the include and exclude options are listed
affects which files are included and excluded. For example, suppose you switch the
order of two lines in the example, as follows:
include /home/ssteiner/options.scr
exclude /home/ssteiner/*
The exclude statement comes last, and excludes all files in the /home/ssteiner
directory. When IBM Tivoli Storage Manager is processing the include-exclude list
for the options.scr file, it finds the exclude statement first. This time, the
options.scr file is excluded.
Some options are evaluated after the more basic include and exclude options. For
example, options that exclude or include files for compression are evaluated after
the program determines which files are eligible for the process being run.
You can create include-exclude lists as part of client options sets that you define
for clients. For information on defining client option sets and assigning a client
option set to a client, see Creating Client Option Sets on the Server on page 280.
For detailed information on the include and exclude options, see the user's guide
for the appropriate client.
How IBM Tivoli Storage Manager Selects Files for Policy Operations
This section describes how IBM Tivoli Storage Manager selects files for the
following operations:
v Full and partial incremental backups
v Selective backup
v Logical volume backup
v Archive
v Automatic migration from an HSM client (Tivoli Storage Manager for Space
Management)
Incremental Backup
Backup-archive clients can choose to back up their files using full or partial
incremental backup. A full incremental backup ensures that clients' backed-up files
are always managed according to policies. Clients should use full incremental
backup whenever possible.
If the amount of time for backup is limited, clients may sometimes need to use
partial incremental backup. A partial incremental backup should complete more
quickly and require less memory. When a client uses partial incremental backup,
only files that have changed since the last incremental backup are backed up.
Attributes in the management class that would cause a file to be backed up when
doing a full incremental backup are ignored. For example, unchanged files are not
backed up even when they are assigned to a management class that specifies
absolute mode and the minimum days between backups (frequency) has passed.
The server also does less processing for a partial incremental backup. For example,
the server does not expire files or rebind management classes to files during a
partial incremental backup.
If clients must use partial incremental backups, they should periodically perform
full incremental backups to ensure that complete backups are done and backup
files are stored according to policies. For example, clients can do partial
incremental backups every night during the week, and a full incremental backup
on the weekend.
Performing full incremental backups is important if clients want the ability to
restore files to a specific time. Only a full incremental backup can detect whether
files have been deleted since the last backup. If full incremental backup is not done
often enough, clients who restore to a specific time may find that many files that
had actually been deleted from the workstation get restored. As a result, a client's
file system may run out of space during a restore process. See Setting Policy to
Enable Point-in-Time Restore for Clients on page 337 for more information.
Selective Backup
When a user requests a selective backup, IBM Tivoli Storage Manager performs the
following steps to determine eligibility:
1. Checks the file against any include or exclude statements contained in the user
include-exclude list:
v Files that are not excluded are eligible for backup. If a management class is
specified with the INCLUDE option, IBM Tivoli Storage Manager uses that
management class.
v If no include-exclude list exists, the files selected are eligible for backup, and
IBM Tivoli Storage Manager uses the default management class.
2. Checks the management class of each included file:
v If the management class contains a backup copy group and the serialization
requirement is met, the file is backed up. Serialization specifies how files are
handled if they are modified while being backed up and what happens if
modification occurs.
v If the management class does not contain a backup copy group, the file is
not eligible for backup.
An important characteristic of selective backup is that a file is backed up without
regard for whether the file has changed. This result may not always be what you
want. For example, suppose a management class specifies to keep three backup
versions of a file. If the client uses incremental backup, the file is backed up only
when it changes, and the three versions in storage will be at different levels. If the
client uses selective backup, the file is backed up regardless of whether it has
changed. If the client uses selective backup on the file three times without
changing the file, the three versions of the file in server storage are identical.
The earlier, different versions are lost.
v If the management class contains a backup copy group and the logical
volume meets the serialization requirement, the logical volume is backed up.
Serialization specifies how logical volumes are handled if they are modified
while being backed up and what happens if modification occurs.
v If the management class does not contain a backup copy group, the logical
volume is not eligible for backup.
Archive
When a user requests the archiving of a file or a group of files, IBM Tivoli Storage
Manager performs the following steps to determine eligibility:
1. Checks the files against the user's include-exclude list to see if any
management classes are specified:
v IBM Tivoli Storage Manager uses the default management class for files that
are not bound to a management class.
v If no include-exclude list exists, IBM Tivoli Storage Manager uses the default
management class unless the user specifies another management class. See
the user's guide for the appropriate client for details.
2. Checks the management class for each file to be archived.
v If the management class contains an archive copy group and the serialization
requirement is met, the file is archived. Serialization specifies how files are
handled if they are modified while being archived and what happens if
modification occurs.
v If the management class does not contain an archive copy group, the file is
not archived.
Note: If you need to frequently create archives for the same data, consider using
instant archive (backup sets) instead. Frequent archive operations can create
a large amount of metadata in the server database resulting in increased
database growth and decreased performance for server operations such as
expiration. Frequently, you can achieve the same objectives with incremental
backup or backup sets. Although the archive function is a powerful way to
store inactive data with fixed retention, it should not be used on a frequent
and large scale basis as the primary backup method. For details on how to
generate backup sets see Creating and Using Client Backup Sets on
page 344.
The file is larger than the stub file that would replace it (plus one byte) or the
file system block size, whichever is larger.
When you copy a management class, you get a new management class in the
same policy set and a copy of each copy group in the management class.
The sections that follow describe the tasks involved in creating new policies for
your installation. Do the tasks in the following order:
Tasks:
Defining and Updating a Policy Domain
Defining and Updating a Policy Set on page 319
Defining and Updating a Management Class on page 320
Defining and Updating a Backup Copy Group on page 321
Defining and Updating an Archive Copy Group on page 327
Assigning a Default Management Class on page 328
Activating a Policy Set on page 330
Running Expiration Processing to Delete Expired Files on page 330.
starts from the day of the backup. For example, if the backup retention
grace period for the STANDARD policy domain is used and set to 30 days,
backup versions using the grace period expire in 30 days from the day of
the backup.
Backup versions of the file continue to be managed by the grace period
unless one of the following occurs:
v The client binds the file to a management class containing a backup
copy group and then backs up the file
v A backup copy group is added to the file's management class
v A backup copy group is added to the default management class
Archive Retention Grace Period
Specifies the number of days to retain an archive copy when the
management class for the file no longer contains an archive copy group
and the default management class does not contain an archive copy group.
The retention grace period protects archive copies from being immediately
expired.
The archive copy of the file managed by the grace period is retained in
server storage for the number of days specified by the archive retention
grace period. This period starts from the day on which the file is first
archived. For example, if the archive retention grace period for the policy
domain STANDARD is used, an archive copy expires 365 days from the
day the file is first archived.
The archive copy of the file continues to be managed by the grace period
unless an archive copy group is added to the file's management class or to
the default management class.
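The grace periods are attributes of the policy domain. As a sketch, assuming the hypothetical domain ENGPOLDOM, the following defines the domain with a 30-day backup retention grace period and a 365-day archive retention grace period:
define domain engpoldom backretention=30 archretention=365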
The policies in the new policy set do not take effect unless you make the new set
the ACTIVE policy set. See Activating a Policy Set on page 330.
Note: When you copy an existing policy set, you also copy any associated
management classes and copy groups.
2. Update the description of the policy set named TEST:
update policyset engpoldom test
description='Policy set for testing'
v File permissions
Absolute
A file is considered for full incremental backup regardless of
whether it has changed since the last backup.
The server considers both parameters to determine how frequently files can be
backed up. For example, if frequency is 3 and mode is Modified, a file or directory
is backed up only if it has been changed and if three days have passed since the
last backup. If frequency is 3 and mode is Absolute, a file or directory is backed up
after three days have passed whether or not the file has changed.
Use the Modified mode when you want to ensure that the server retains multiple,
different backup versions. If you set the mode to Absolute, users may find that they
have three identical backup versions, rather than three different backup versions.
Absolute mode can be useful for forcing a full backup. It can also be useful for
ensuring that extended attribute files are backed up, because Tivoli Storage
Manager does not detect changes if the size of the extended attribute file remains
the same.
When you set the mode to Absolute, set the frequency to 0 if you want to ensure
that a file is backed up each time full incremental backups are scheduled for or
initiated by a client.
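For example, the following hedged command defines a backup copy group (in the hypothetical domain ENGPOLDOM and policy set TEST, using the default copy group name STANDARD and the default disk storage pool) that forces a backup of each file at every full incremental backup:
define copygroup engpoldom test standard type=backup destination=backuppool mode=absolute frequency=0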
versions expire when the number of days that they have been inactive exceeds the
value specified for retaining extra versions, even when the number of versions is
not exceeded.
Note: A base file is not eligible for expiration until all its dependent subfiles have
been expired. For details, see Enabling Clients to Use Subfile Backup on
page 350.
For example, see Table 28 and Figure 50. A client node has backed up the file
REPORT.TXT four times in one month, from March 23 to April 23. The settings in
the backup copy group of the management class to which REPORT.TXT is bound
determine how the server treats these backup versions. Table 29 on page 325 shows
some examples of how different copy group settings would affect the versions. The
examples show the effects as of April 24 (one day after the file was last backed
up).
Table 28. Status of REPORT.TXT as of April 24

Version      Date Created   Date Became Inactive
----------   ------------   --------------------
Active       April 23       (not applicable)
Inactive 1   April 13       April 23
Inactive 2   March 31       April 13
Inactive 3   March 23       March 31
Table 29. Effects of Backup Copy Group Policy on Backup Versions for REPORT.TXT as of April 24 (one day after the file was last backed up)

Versions Data Exists: 3 versions
Versions Data Deleted: 2 versions
Retain Extra Versions: 60 days
Retain Only Version: 180 days
Results: Versions Data Exists and Retain Extra Versions control the
expiration of the versions. The version created on March 23 is retained
until the client node backs up the file again (creating a fourth inactive
version), or until that version has been inactive for 60 days.
If the user deletes the REPORT.TXT file from the client node, the server
notes the deletion at the next full incremental backup of the client node.
From that point, the Versions Data Deleted and Retain Only Version
parameters also have an effect. All versions are now inactive. Two of the
four versions expire immediately (the March 23 and March 31 versions
expire). The April 13 version expires when it has been inactive for 60 days
(on June 23). The server keeps the last remaining inactive version, the April
23 version, for 180 days after it becomes inactive.

Versions Data Exists: NOLIMIT
Versions Data Deleted: 2 versions
Retain Extra Versions: 60 days
Retain Only Version: 180 days
Results: Retain Extra Versions controls the expiration of the versions while
the file exists on the client node; the server does not expire inactive
versions based on the maximum number of backup copies.
If the user deletes the REPORT.TXT file from the client node, the server
notes the deletion at the next full incremental backup of the client node.
From that point, the Versions Data Deleted and Retain Only Version
parameters also have an effect. All versions are now inactive. Two of the
four versions expire immediately because only two versions are allowed.
The remaining extra version expires when it has been inactive for 60 days.
The server keeps the last remaining inactive version for 180 days after it
becomes inactive.

Versions Data Exists: NOLIMIT
Versions Data Deleted: NOLIMIT
Retain Extra Versions: 60 days
Retain Only Version: 180 days
Results: Retain Extra Versions controls expiration of the versions. The
server does not expire inactive versions based on the maximum number of
backup copies. The inactive versions (other than the last remaining version)
are expired when they have been inactive for 60 days.
If the user deletes the REPORT.TXT file from the client node, the server
notes the deletion at the next full incremental backup of the client node.
From that point, the Retain Only Version parameter also has an effect. All
versions are now inactive. Three of the four versions expire after each of
them has been inactive for 60 days. The server keeps the last remaining
inactive version, the April 23 version, for 180 days after it becomes
inactive.

Versions Data Exists: 3 versions
Versions Data Deleted: 2 versions
Retain Extra Versions: NOLIMIT
Retain Only Version: NOLIMIT
Results: Versions Data Exists controls the expiration of the versions until a
user deletes the file from the client node. The server does not expire
inactive versions based on age.
If the user deletes the REPORT.TXT file from the client node, the server
notes the deletion at the next full incremental backup of the client node.
From that point, the Versions Data Deleted parameter controls expiration.
All versions are now inactive. Two of the four versions expire immediately
(the March 23 and March 31 versions expire) because only two versions are
allowed. The server keeps the two remaining inactive versions indefinitely.
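As an illustration, the settings in the first row of Table 29 could be defined with a single command such as the following; the TEST policy set and the BACKUPPOOL destination are assumptions for this sketch:
define copygroup engpoldom test standard type=backup destination=backuppool verexists=3 verdeleted=2 retextra=60 retonly=180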
See Administrator's Reference for details about the parameters. The following list
gives some tips on using the NOLIMIT value:
Versions Data Exists
Setting the value to NOLIMIT may require increased storage, but that
value may be needed for some situations. For example, to enable client
nodes to restore files to a specific point in time, set the value for Versions
Data Exists to NOLIMIT. Setting the value this high ensures that the server
retains versions according to the Retain Extra Versions parameter for the
Chapter 12. Implementing Policies for Client Data
Shared Static
Specifies that if the file is modified during an archive process, the
server does not archive it. However, IBM Tivoli Storage Manager
retries the archive process as many times as specified by the
CHANGINGRETRIES option in the client options file.
Dynamic
Specifies that a file is archived on the first attempt, even if the file
is being modified during the archive process.
Shared Dynamic
Specifies that if the file is modified during the archive attempt, the
server archives it on its last try even if the file is being modified.
IBM Tivoli Storage Manager retries the archive process as many
times as specified by the CHANGINGRETRIES option in the client
options file.
For most files, set serialization to either static or shared static to prevent
the server from archiving a file while it is being modified.
However, you may want to define a copy group with a serialization of
shared dynamic or dynamic for files where log records are continuously
added, such as an error log. If you only have copy groups that use static or
shared static, these files may never be archived because they are constantly
in use. With shared dynamic or dynamic, the log files are archived.
However, the archive copy may contain a truncated message.
Attention: If a file is archived while it is in use (shared dynamic or
dynamic serialization), the copy may not contain all the changes and may
not be usable.
Note: When certain users or processes open files, they deny read access to
the files for any other user or process. When this happens, even
with serialization set to dynamic or shared dynamic, the server does
not back up the file.
How long to retain an archived copy
Specifies the number of days to retain an archived copy in storage. When
the time elapses, the archived copy expires and the server deletes the file
the next time expiration processing runs.
When a user archives directories, the server uses the default management
class unless the user specifies otherwise. If the default management class
does not have an archive copy group, the server binds the directory to the
management class that currently has the shortest retention time for archive.
When you change the retention time for an archive copy group, you may
also be changing the retention time for any directories that were archived
using that copy group.
The user can change the archive characteristics by using Archive Options
in the interface or by using the ARCHMC option on the command.
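For example, a management class for continuously updated logs might be given an archive copy group that uses shared dynamic serialization; the LOGMC name, ARCHIVEPOOL destination, and one-year retention here are assumptions for this sketch:
define mgmtclass engpoldom test logmc
define copygroup engpoldom test logmc type=archive destination=archivepool serialization=shrdynamic retver=365
After the policy set is activated, a user could bind archived logs to this class from the client command line with the ARCHMC option, for example: dsmc archive "/var/log/*" -archmc=logmc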
The STANDARD management class was copied from the STANDARD policy set to
the TEST policy set (see Example: Defining a Policy Set on page 320). Before the
new default management class takes effect, you must activate the policy set.
Condition: The current ACTIVE policy set contains copy groups that are not
defined in the policy set being validated.
Reason for warning: When users perform a backup and the backup copy group no
longer exists in the management class to which a file is bound, backup
versions are managed by the default management class. If the default
management class does not contain a backup copy group, backup versions are
managed by the backup retention grace period, and the workstation file is not
backed up. See Defining and Updating a Policy Domain on page 318.

Condition: A management class specifies that a backup version must exist
before a file can be migrated from a client node, but the management class
does not contain a backup copy group.
Reason for warning: The contradictions within the management classes can
cause problems for HSM users.
To create a new client node, NEWUSER, and assign it to the ENGPOLDOM policy
domain, enter the following command:
register node newuser newuser domain=engpoldom
Expiration processing then deletes expired files from the database. You can
schedule this command by using the DEFINE SCHEDULE command. If you
schedule the EXPIRE INVENTORY command, set the expiration interval to 0 (zero)
in the server options so that the server does not run expiration processing when
you start the server.
You can control how long the expiration process runs by using the DURATION
parameter with the EXPIRE INVENTORY command.
When expiration processing runs, the server normally sends detailed messages
about policy changes made since the last time expiration processing ran. You can
reduce those messages by using the EXPQUIET server option, or by using the
QUIET=YES parameter with the EXPIRE INVENTORY command. When you use
the quiet option or parameter, the server issues messages about policy changes
during expiration processing only when files are deleted, and either the default
management class or retention grace period for the domain has been used to
expire the files.
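For example, expiration processing could be scheduled to run nightly for at most two hours, with reduced messages; the schedule name and start time here are arbitrary:
define schedule expiration type=administrative cmd="expire inventory duration=120 quiet=yes" active=yes starttime=23:00 period=1 perunits=days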
This command creates the DIR2TAPE policy domain, which contains a default
policy set, management class, and backup and archive copy groups, each named
STANDARD.
2. Update the backup or archive copy group in the DIR2TAPE policy domain to
specify the destination to be a tape storage pool. For example, to use a tape
storage pool named TAPEPOOL for backup, enter the following command:
update copygroup dir2tape standard standard destination=tapepool
To use a tape storage pool named TAPEPOOL for archive, enter the following
command:
update copygroup dir2tape standard standard type=archive
destination=tapepool
3. Activate the STANDARD policy set in the DIR2TAPE policy domain so that the
updated copy groups take effect:
activate policyset dir2tape standard
4. Assign client nodes to the DIR2TAPE policy domain. For example, to assign a
client node named TAPEUSER1 to the DIR2TAPE policy domain, enter the
following command:
update node tapeuser1 domain=dir2tape
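To verify the new destinations, you can query the copy groups in the domain, for example:
query copygroup dir2tape standard
query copygroup dir2tape standard type=archive
The output should show TAPEPOOL as the destination for both the backup and archive copy groups.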
Some of the application clients include a time stamp in each database backup.
Because the default policy for the server keeps one backup version of each unique
file, database backups managed by default policy are never deleted because each
backup is uniquely named with its time stamp. To ensure that the server deletes
backups as required, configure policy as recommended in the user's guide for the
application client.
For logical volume backups, the server ignores the frequency attribute in the
backup copy group.
Table 30. Example of Backup Policy for Files and Logical Volumes

Parameter (backup copy       Backup policy   Backup policy for
group in the management      for files       logical volumes
class)
--------------------------   -------------   -----------------
Versions Data Exists         NOLIMIT         3 versions
Versions Data Deleted        NOLIMIT
Retain Extra Versions        60 days         60 days
Retain Only Version          120 days        120 days
You can register a NAS file server as a node, using NDMP operations. Under the
direction of the Tivoli Storage Manager server, the NAS file server performs
backup and restore of file system images to a tape library. The Tivoli Storage
Manager server initiates the backup, allocates a drive, and selects and mounts the
media. The NAS file server then transfers the data to tape.
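For example, once such a node is registered and paths are defined, an image backup of one of its file systems might be started from the server with a command like the following; the node and file system names are hypothetical:
backup node nasnode1 /vol/vol1 mode=full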
Because the NAS file server performs the backup, the data is stored in its own
format. For Network Appliance file servers, the data is stored in the
NETAPPDUMP data format. For EMC file servers, the data is stored in the
CELERRADUMP data format. To manage NAS file server image backups, copy
groups for NAS nodes must point to a storage pool that has a data format of either
NETAPPDUMP or CELERRADUMP.
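For example, a storage pool for Network Appliance image backups might be defined as follows, assuming a device class named NASCLASS that points to the library used for NDMP operations:
define stgpool naspool nasclass dataformat=netappdump
Copy groups for the NAS nodes would then specify NASPOOL as the destination.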
The following backup copy group attributes are ignored for NAS images:
v Frequency
v Mode
v Retain Only Versions
v Serialization
v Versions Data Deleted
To set up the required policy for NAS nodes, you can define a new, separate policy
domain. See Chapter 6, Using NDMP for Operations with NAS File Servers, on
page 111 for details.
Backups for NAS nodes can be initiated from the server, or from a client that has
at least client owner authority over the NAS node. For client-initiated backups, you
can use client option sets that contain include and exclude statements to bind NAS
file system images to a specific management class. The valid options that can be
used for a NAS node are: include.fs.nas, exclude.fs.nas, and domain.nas. For details
on the options see the Backup-Archive Clients Installation and User's Guide for your
particular client platform. For more information about client option sets see
Creating Client Option Sets on the Server on page 280.
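For example, a client option set that binds a NAS file system to a management class might be built and assigned as follows; the option set, file server, file system, and class names here are hypothetical:
define cloptset nasopts description="Include-exclude options for NAS nodes"
define clientopt nasopts inclexcl "include.fs.nas netapp1/vol/vol1 nasmc"
update node nasnode1 cloptset=nasopts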
When the Tivoli Storage Manager server creates a table of contents (TOC), you can
view a collection of individual files and directories backed up via NDMP and
select which to restore. To establish where to send data and store the table of
contents, policy should be set so that:
v Image backup data is sent to a storage pool with either NETAPPDUMP or
CELERRADUMP format.
v Table of contents data is sent to a storage pool with a NATIVE or
NONBLOCK data format.
For LAN-free data movement, you can set up a SAN configuration in which a
client directly accesses a storage device to read or write data. LAN-free data
movement requires setup on the server and on the client, and the installation of a
storage agent on the client machine. The storage agent transfers data between the
client and the storage device. See IBM Tivoli Storage Manager Storage Agent User's
Guide for details. See the Web site for details on clients that support the feature:
www.ibm.com/software/sysmgmt/products/
support/IBMTivoliStorageManager.html.
One task in configuring your systems to use this feature is to set up policy for the
clients. Copy groups for these clients must point to the storage pool that is
associated with the SAN devices. (Configuring IBM Tivoli Storage Manager for
LAN-free Data Movement on page 104 describes how to configure the devices and
define the storage pool.) If you have defined a path from the client to a drive on
the SAN, the client can then use the SAN to send data directly to the device for
backup, archive, restore, and retrieve operations.
To set up the required policy, either define a new, separate policy domain, or
define a new management class in an existing policy domain:
v Define a New Policy Domain on page 335
v Define a New Management Class in an Existing Policy Domain on page 336
2. Create a policy set in that domain. For example, to define the policy set that is
named BASE in the SANCLIENTS policy domain, enter the following
command:
define policyset sanclients base
3. Create the default management class for the policy set. First define the
management class, then assign the management class as the default for the
policy set.
For example, to define the management class that is named SANCLIENTMC,
enter the following command:
define mgmtclass sanclients base sanclientmc
4. Define the backup copy group in the default management class, as follows:
v Specify the DESTINATION, the name of the storage pool that is associated
with the SAN devices on the server.
The storage pool must already be set up. The storage pool must point to a
device class that is associated with the library for the SAN devices. See
Configuring IBM Tivoli Storage Manager for LAN-free Data Movement on
page 104 for details.
v Accept the default settings for all remaining parameters.
For example, to define the backup copy group for the SANCLIENTMC
management class, enter the following command:
define copygroup sanclients base sanclientmc standard destination=sanpool
5. Define the archive copy group in the default management class, as follows:
v Specify the DESTINATION, the name of the storage pool that is associated
with the SAN devices on the server.
The storage pool must already be set up. The storage pool must point to a
device class that is associated with the library for the SAN devices. See
Configuring IBM Tivoli Storage Manager for LAN-free Data Movement on
page 104 for details.
v Accept the default settings for all remaining parameters.
For example, to define the archive copy group for the SANCLIENTMC
management class, enter the following command:
define copygroup sanclients base sanclientmc standard type=archive
destination=sanpool
6. Activate the policy set so that the new policy takes effect:
activate policyset sanclients base
7. Register or update the application clients to associate them with the new policy
domain.
For example, to update the node SANCLIENT1, enter the following command:
update node sanclient1 domain=sanclients
For example, suppose sanclientmc is the name of the management class that you
defined for clients that are using devices on a SAN. You want the client to be able
to use the SAN for backing up any file on the c drive. Put the following line at the
end of the client's include-exclude list:
include c:\...\* sanclientmc
For details on the include-exclude list, see the Backup-Archive Clients Installation
and User's Guide.
If you choose not to define a separate policy domain with the appropriate
management class as the default, you must define a new management class within
an existing policy domain and activate the policy set. Because the new
management class is not the default for the policy domain, you must add an
include statement to each client options file to bind objects to that management
class.
where the target server stores data for the source server. Other policy
specifications, such as how long to retain the data, do not apply to data stored for
a source server. See Using Virtual Volumes to Store Data on Another Server on
page 505 for more information.
The distributed policy becomes managed objects (policy domain, policy sets,
management classes, and so on) defined in the database of each managed server.
To use the managed policy, you must activate a policy set on each managed server.
If storage pools specified as destinations in the policy do not exist on the managed
server, you receive messages pointing out the problem when you activate the
policy set. You can create new storage pools to match the names in the policy set,
or you can rename existing storage pools.
On the managed server you also must associate client nodes with the managed
policy domain and associate client nodes with schedules.
See Setting Up an Enterprise Configuration on page 479 for details.
Querying Policy
You can request information about the contents of policy objects. You might want
to do this before creating new objects or when helping users to choose policies that
fit their needs.
You can specify the output of a query in either standard or detailed format. The
examples in this section are in standard format.
On a managed server, you can see whether the definitions are managed objects.
Request the detailed format in the query and check the contents of the Last update
by (administrator) field. For managed objects, this field contains the string
$$CONFIG_MANAGER$$.
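For example, to check the STANDARD management class in the ACTIVE policy set of ENGPOLDOM, you could enter:
query mgmtclass engpoldom active standard format=detailed
On a managed server, the Last update by (administrator) field in the output contains $$CONFIG_MANAGER$$.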
The following shows the output from a QUERY COPYGROUP command for the
ENGPOLDOM domain. It shows that the ACTIVE policy set contains two backup copy
groups that belong to the MCENG and STANDARD management classes.
Policy      Policy      Mgmt        Copy        Versions   Versions   Retain     Retain
Domain      Set Name    Class       Group       Data       Data       Extra      Only
Name                    Name        Name        Exists     Deleted    Versions   Version
---------   ---------   ---------   ---------   --------   --------   --------   -------
ENGPOLDOM   ACTIVE      MCENG       STANDARD           5          4         90       600
ENGPOLDOM   ACTIVE      STANDARD    STANDARD           2          1         30        60
ENGPOLDOM   STANDARD    MCENG       STANDARD           5          4         90       600
ENGPOLDOM   STANDARD    STANDARD    STANDARD           2          1         30        60
ENGPOLDOM   TEST        STANDARD    STANDARD           2          1         30        60
Policy      Mgmt        Copy        Retain
Set Name    Class       Group       Version
            Name        Name
---------   ---------   ---------   -------
ACTIVE      MCENG       STANDARD        730
ACTIVE      STANDARD    STANDARD        365
STANDARD    MCENG       STANDARD        730
STANDARD    STANDARD    STANDARD        365
TEST        STANDARD    STANDARD        365
The following figure is the output from a QUERY MGMTCLASS command. It shows
that the ACTIVE policy set contains the MCENG and STANDARD management classes.
Policy      Policy      Mgmt        Default     Description
Domain      Set Name    Class       Mgmt
Name                    Name        Class ?
---------   ---------   ---------   ---------   ------------------------
ENGPOLDOM   ACTIVE      MCENG       No          Engineering Management
                                                Class with Backup and
                                                Archive Copy Groups
ENGPOLDOM   ACTIVE      STANDARD    Yes         Installed default
                                                management class
ENGPOLDOM   STANDARD    MCENG       No          Engineering Management
                                                Class with Backup and
                                                Archive Copy Groups
ENGPOLDOM   STANDARD    STANDARD    Yes         Installed default
                                                management class
ENGPOLDOM   TEST        STANDARD    Yes         Installed default
                                                management class
The following figure is the output from a QUERY POLICYSET command. It shows an
ACTIVE policy set and two inactive policy sets, STANDARD and TEST.
Policy      Policy      Default     Description
Domain      Set Name    Mgmt
Name                    Class
                        Name
---------   ---------   ---------   -----------
ENGPOLDOM   ACTIVE      STANDARD
ENGPOLDOM   STANDARD    STANDARD
ENGPOLDOM   TEST        STANDARD
The following figure is the output from a QUERY DOMAIN command. It shows that
both the ENGPOLDOM and STANDARD policy domains have client nodes assigned to
them.
Policy       Activated   Activated     Number of    Description
Domain       Policy      Default       Registered
Name         Set         Mgmt          Nodes
                         Class
----------   ---------   -----------   ----------   -----------
APPCLIENTS   BASE        APPCLIENTMC            1
ENGPOLDOM    STANDARD    STANDARD              21
STANDARD     STANDARD    STANDARD              18
Deleting Policy
When you delete a policy object, you also delete any objects belonging to it. For
example, when you delete a management class, you also delete the copy groups in
it.
You cannot delete the ACTIVE policy set or objects that are part of that policy set.
You can delete the policy objects named STANDARD that come with the server.
However, all STANDARD policy objects are restored whenever you reinstall the
server. If you reinstall the server after you delete the STANDARD policy objects,
the server issues messages during processing of a subsequent DSMSERV AUDITDB
command. The messages may include the following statement: An instance count
does not agree with actual data. The DSMSERV AUDITDB command corrects this
problem by restoring the STANDARD policy objects. If necessary, you can later
delete the restored STANDARD policy objects.
When you delete a management class from a policy set, the server deletes the
management class and all copy groups that belong to the management class in the
specified policy domain.
When you delete a policy set, the server deletes all management classes and copy
groups that belong to the policy set within the specified policy domain.
The ACTIVE policy set in a policy domain cannot be deleted. You can replace the
contents of the ACTIVE policy set by activating a different policy set. Otherwise,
the only way to remove the ACTIVE policy set is to delete the policy domain that
contains the policy set.
2. If client nodes are assigned to the policy domain, remove them in one of the
following ways:
v Assign each client node to a new policy domain. For example, enter the
following commands:
update node htang domain=engpoldom
update node tomc domain=engpoldom
update node pease domain=engpoldom
If the ACTIVE policy set in ENGPOLDOM does not have the same
management class names as in the ACTIVE policy set of the STANDARD
policy domain, then backup versions of files may be bound to a different
When you delete a policy domain, the server deletes the policy domain and all
policy sets (including the ACTIVE policy set), management classes, and copy
groups that belong to the policy domain.
See Choosing Where to Enable Data Validation on page 575 to help you
determine where to enable data validation.
Suppose that the network is later shown to be stable, and no data corruption has
been identified when user ED has processed backups. You can then disable data
validation to minimize the performance impact of validating all of ED's data
during a client session. For example:
update node ed validateprotocol=no
You can generate backup sets on the server for client nodes. The client node for
which a backup set is generated must be registered to the server. An incremental
backup must be completed for a client node before the server can generate a
backup set for the client node.
The GENERATE BACKUPSET command runs as a background process on the
server. If you cancel the background process created by this command, the media
may not contain a complete backup set.
See the following sections:
v Choosing Media for Generating the Backup Set
v Selecting a Name for the Backup Set on page 346
v Setting a Retention Period for the Backup Set on page 346
v Example: Generating a Client Backup Set on page 346
You can use specific volumes for the backup set. If there is not enough space to
store the backup set on the volumes, the server uses scratch volumes to store the
remainder of the backup set.
2. Define a device class whose device type is REMOVABLEFILE. Name the device
class BACKSET:
define devclass backset devtype=removablefile library=manuallib
3. Define a drive to associate with the library. Name the drive CDDRIVE and the
device /cdrom:
define drive manuallib cddrive device=/cdrom
4. Define a device class whose device type is FILE. Name the device class FILES:
define devclass files devtype=file maxcapacity=640M dir=/backupset
5. Generate the backup set to the FILE device class for client node JOHNSON.
Name the backup set PROJECT and retain it for 90 days.
generate backupset johnson project devclass=file scratch=yes
retention=90
6. Use your own software for writing CD-ROMs. For this example, the CD-ROM
volume names are VOL1, VOL2, and VOL3. These names were put on the
CD-ROM as they were created.
For an example of using the backup set on the CD-ROM, see Moving Backup
Sets to Other Servers on page 347.
For more information about restoring backup sets, see the Backup-Archive Clients
Installation and User's Guide for your particular operating system.
You can define (move) a backup set generated on one server to another Tivoli
Storage Manager server. Any client backup set that you generate on one server can
be defined to another server as long as the servers share a common device type.
The level of the server defining the backup set must be equal to or greater than the
level of the server that generated the backup set.
If you have multiple servers connecting to different clients, the DEFINE
BACKUPSET command makes it possible to take a previously generated backup set
and make it available to other servers. This gives users the flexibility to
restore their data from a server other than the one on which the backup set was
created.
Using the example described in Example: Generating a Client Backup Set on
page 346, you can make the backup set that was copied to the CD-ROM available
to another server by entering:
define backupset johnson project devclass=cdrom volumes=vol1,vol2,vol3
description="backup set copied to a CD-ROM"
Node Name: JANE
Backup Set Name: MYBACKUPSET.3099
Date/Time: 09/04/2002 16:17:47
Retention Period: 60
Device Class Name: DCFILE
Description:
Date/Time: 09/04/2002 07:34:06 PM
Volume Type: BACKUPSET
Backup Series:
Backup Operation:
Volume Seq: 1
Device Class: FILE
Volume Name: 01334846.ost
Volume Location:
Command: gen backupset client57 testbs /home dev=file scratch=yes
  ret=2 desc="Client57 backupset"

Date/Time: 09/04/2002 07:34:06 PM
Volume Type: BACKUPSET
Backup Series:
Backup Operation:
Volume Seq: 2
Device Class: FILE
Volume Name: 01334849.ost
Volume Location:
Command:

Date/Time: 09/04/2002 07:34:06 PM
Volume Type: BACKUPSET
Backup Series:
Backup Operation:
Volume Seq: 3
Device Class: FILE
Volume Name: 01334850.ost
Volume Location:
Command:
Node Name                  Filespace
                           Name
------------------------   ----------
JANE                       /srvr
JANE                       /srvr
JANE                       /srvr
JANE                       /srvr
...
How File Space and File Names May be Displayed: File space names and file
names that are in a different code page or locale than the server might not
display correctly in the administrator's Web interface or the administrative
command-line interface. The data itself is backed up and can be restored
properly, but the file space or file name may display with a combination of
invalid characters or blank spaces.
Chapter 13. Managing Data for Client Nodes
If the file space name is Unicode enabled, the name is converted to the server's
code page for display. The results of the conversion for characters not supported
by the current code page depend on the operating system. For names that Tivoli
Storage Manager is able to partially convert, you may see question marks (??),
blanks, unprintable characters, or "...". These characters indicate to the
administrator that files do exist. If the conversion is not successful, the name is
displayed as "...". Conversion can fail if the string includes characters that are not
available in the server code page, or if the server has a problem accessing system
conversion routines.
To delete all backup sets belonging to client node JANE, created before 11:59 p.m.
on March 18, 1999, enter:
delete backupset jane * enddate=03/18/1999 endtime=23:59
The following table describes how Tivoli Storage Manager handles backups of file
CUST.TXT.
Version   Day of subsequent backup
-------   ------------------------
One       Monday
Two       Tuesday
Three     Wednesday
Restoring Subfiles
When a client issues a request to restore subfiles, Tivoli Storage Manager restores
subfiles along with the corresponding base file back to the client. This process is
transparent to the client. That is, the client does not have to determine whether all
subfiles and corresponding base file were restored during the restore operation.
You can define (move) a backup set that contains subfiles to an earlier version of a
server that is not enabled for subfile backup. That server can restore the backup set
containing the subfiles to a client not able to restore subfiles. However, this process
is not recommended as it could result in a data integrity problem.
The progressive incremental backup that is the Tivoli Storage Manager standard
results in operations that are optimized for the restore of individual files or small
numbers of files. Progressive incremental backup minimizes tape usage, reduces
network traffic during backup operations, and eliminates the storage and tracking
of multiple copies of the same data. Progressive incremental backup may reduce
the impact to client applications during backup. For a level of performance that is
balanced across both backup and restore operations, the best method is usually
using progressive incremental backup with collocation set on in the storage pool.
For more background on restore operations for clients, see Concepts for Client
Restore Operations on page 355.
Environment Considerations
Consider using disk to store data that requires quick restoration. For data that is
less critical, store the data to disk, then allow or force the data to migrate to tape
later.
One of the most important variables in restore performance is the layout of the
data that is to be restored across single or multiple tape volumes. Major causes of
performance problems are excessive tape mounts and the need to skip over
expired or inactive data on a tape. After a long series of incremental backups,
perhaps over years, the active data for a single file space can be spread across
many tape volumes. A single tape volume can have active data mixed with
inactive and expired data. See the following sections, which discuss ways to
control the placement of data, such as:
v Use collocation in storage pools.
v Limit the number of inactive versions of data through policy.
v Use the MOVE DATA or MOVE NODEDATA commands.
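For example, to consolidate one node's data onto fewer volumes and remove unused space in its aggregates, a command like the following might be used; the node and storage pool names are assumptions:
move nodedata jane fromstgpool=tapepool reconstruct=yes
Specify the TOSTGPOOL parameter to stage the data to a different pool, such as disk, ahead of an anticipated restore.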
Using a file system image backup optimizes restore operations when an entire file
system needs to be restored, such as in disaster recovery or recovery from a
hardware failure. Restoring from an image backup minimizes concurrent mounts
of tapes and positioning within a tape during the restore operation. Consider the
following as aids to file system restore operations:
v Perform image backups frequently. More frequent image backups give better
point-in-time granularity, but will cost in terms of tape usage, disruption to the
client system during backup, and greater network bandwidth needed.
A guideline is to perform an image backup when more than 20% of the data in
the file system has changed.
v Combine image backups with progressive incremental backups for the file
system. This allows for full restore to an arbitrary point-in-time.
v To minimize disruption to the client during backup, use either hardware-based
or software-based snapshot techniques for the file system.
The capability for image backup is not available for all clients at this time. If image
backup is not available for the client and full file system restore is a priority,
consider using selective backup to force a full file system backup at regular
intervals.
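If you take this approach, a periodic full backup of a file system can be forced from the client with the selective backup command, for example:
dsmc selective "/home/*" -subdir=yes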
v Use collocation by node or by file space for primary sequential pools that clients
back up to. For large file spaces for which restore performance is critical,
consider creating mount points on the client system. This would allow
collocation of data below the file space level.
See Keeping a Client's Files Together: Collocation on page 208 for more
information about collocation.
v Use the MOVE NODEDATA command to consolidate critical data in tape
storage pools, even in storage pools that have collocation set on. It may be
important to consolidate data for certain nodes, file spaces, and data types more
often than for others. If you do not use collocation or are limited by tape
quantity, you may want to do this more often. The rate of data turnover is also
something to consider.
Use the RECONSTRUCT parameter on the command to remove unused space in
file aggregates when the aggregates are moved.
Use the command for staging data to disk when the lead time for a restore
request allows it.
The effectiveness of the command in optimizing for restore might be reduced if
a large number of versions are kept.
v Create backup sets that can be taken to the client system and used to restore
from directly. This is effective if there is sufficient lead time prior to the restore,
and can save network bandwidth.
Creation of backup sets can also be done periodically when resources are
available, for example on weekends.
v Use progressive incremental backups, but periodically force a full backup.
Some users have found it effective to define multiple Tivoli Storage Manager
client nodes on a system. One client node performs the incremental backups and
uses policies which retain multiple versions. Another client node performs either
full backups or incremental backups with collocation, but uses policies that
retain a single version. One node can be used for restoring older versions of
individual files, and the other client node can be used for restoring a complete
file system or directory tree to the latest version.
v Create multiple storage pool hierarchies for clients with different priorities. For
the most critical data, use of only disk storage might be the best choice. Using
different storage hierarchies also allows you to set collocation differently in the
different hierarchies.
v Minimize the number of versions you keep. This reduces the amount of time
spent positioning a tape during a restore operation. An alternative would be to
perform full backups.
v Consider storage media characteristics, for example, the type of tape drive you
use. Use full file system backups if the tape drives you use are relatively slow at
positioning operations.
v Doing more frequent full backups leads to faster restores for databases. For some
database products, you can use multiple sessions to restore, restore just the
database, or restore just the logs for the database. Optimal techniques for specific
Tivoli Storage Manager application clients are documented in IBM Tivoli Storage
Manager Publications on page xiii.
If you need the ability to restore files to a point in time, consider setting policy to
keep a large number of versions (for example, by setting VEREXISTS=NOLIMIT).
If you cannot afford the resource costs of keeping the large numbers of file
versions and need the ability to restore to a point in time, consider using backup
sets, exporting the client data, or using archive. Using backup sets, exporting client
data, or archiving files gives you the capability to restore to the point in time when
the backup set was generated, the export was performed, or the archive was
created. Keep in mind that when you need to restore the data, your selection is
limited to the time at which you created the backup set, export, or archive.
Note: If you use the archive function, you should create an archive monthly or
yearly. Archive should not be used as a primary backup method because
frequent archives with large amounts of data can affect server performance.
The client uses two different methods for restore operations: standard restore (also
called classic restore), and no-query restore.

Standard restore requires more interaction between the client and the server, and
multiple processes cannot be used for the restore. The no-query restore requires
less interaction between the client and the server, and the client can use multiple
sessions for the restore process. The no-query restore process is useful when
restoring large file systems on a client with limited memory because it avoids some
processing that can affect the performance of other client applications. The
no-query restore operation, however, can take much longer to complete than the
standard restore operation. For example, in cases where only specific directories or
files within a directory are to be restored, it may be faster to use a classic restore.

The method is called no-query restore because the client sends a single restore
request to the server instead of querying the server for each object to be restored.
The server returns the files and directories to the client without further action by
the client. The client accepts the data coming from the server and restores it to the
destination named on the restore command.

The no-query restore process is used by the client only when the restore request
meets both of the following criteria:
v You enter the restore command with a source file specification that has an
unrestricted wildcard. An example of a source file specification with an
unrestricted wildcard is:
/home/mydocs/2002/*
An example of a source file specification with a restricted wildcard is:
/home/mydocs/2002/sales.*

To force the use of classic restore, use ?* in the source file specification rather than
*. For example:
/home/mydocs/2002/?*
For more information about restore processes, see Backup-Archive Clients Installation
and User's Guide.
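The distinction between an unrestricted and a restricted wildcard can be illustrated with a small sketch. This is only a model of the rule as stated above, not the client's actual parsing logic:

```python
def has_unrestricted_wildcard(file_spec: str) -> bool:
    """True when the final path component is exactly '*',
    e.g. /home/mydocs/2002/* but not /home/mydocs/2002/sales.*
    or /home/mydocs/2002/?* (both of which restrict the match)."""
    last = file_spec.rstrip("/").split("/")[-1]
    return last == "*"
```

By this rule, only a bare `*` in the last position qualifies the request for no-query restore; `sales.*` and `?*` both force the classic restore path.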
You must issue multiple commands when you are restoring more than one file
space. For example, when you are restoring a c: drive and a d: drive on a
Windows system you must issue multiple commands.

Consider using multiple commands when you are restoring a single, large file
space, and all of the following conditions are true:
v The data was backed up to a storage pool that had collocation set to
FILESPACE. Files will be on multiple volumes, and the volumes can be mounted
by multiple processes.
v The files are approximately evenly distributed across the different top-level
directories in the file space.
v The number of top-level directories in the file space is not large.
v You can issue commands for the different top-level directories, and the
commands do not overlap (so that the same file is not restored multiple times by
different commands).

Issue multiple commands either by issuing the commands one after another in a
single session or window, or by issuing commands at the same time from different
command windows.

When you enter multiple commands to restore files from a single file space, you
must specify a unique part of the file space in each restore command. Be sure that
you do not use any overlapping file specifications in the commands. To display a
list of the directories in a file space, use the query backup command on the client.
For more information, see Backup-Archive Clients Installation and User's Guide.

Another method which can aid in both the backup and restore of client nodes with
critical data is to manage the backup process through multiple commands instead
of multiple sessions. For example, when using multi-session backup, multiple
backup sessions may be contending for the same underlying hard disk, thus
causing delays. An alternative is to manage this process externally by starting
multiple dsmc commands. Each command backs up a predetermined number of
file systems. Using this method in conjunction with collocation at the file space
level can improve backup throughput and allow for parallel restore processes
across the same hard drives.
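When scripting several restore commands, a coarse prefix check can catch overlapping file specifications before any command runs. This sketch assumes simple directory-style specifications and is illustrative only:

```python
def specs_overlap(spec_a: str, spec_b: str) -> bool:
    """Treat each spec as a directory prefix (trailing '*' ignored);
    two specs overlap when one prefix contains the other.
    Deliberately coarse: /fs/dir1 would also flag /fs/dir10."""
    a = spec_a.rstrip("*")
    b = spec_b.rstrip("*")
    return a.startswith(b) or b.startswith(a)
```

For example, `/fs/dir1/*` and `/fs/*` overlap (the second contains the first), while `/fs/dir1/*` and `/fs/dir2/*` can safely be restored by separate commands.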
Set the maximum number of mount points that the client is allowed to use to a
value greater than one (MAXNUMMP > 1 on the REGISTER NODE or UPDATE
NODE command).
Set the client option for resource utilization to one greater than the number of
desired sessions (use the number of drives that you want that single client to use).
See Controlling Resource Utilization by a Client. The client option can be
included in a client option set.
Issue the restore command so that it results in a no-query restore process. See No
Query Restore Processes on page 355 for details.
At the client, the option for resource utilization also has an effect on how many
drives (how many sessions) the client can use. The client option, resource
utilization, can be included in a client option set. If the number specified in the
MAXNUMMP parameter is too low and there are not enough mount points for
each of the sessions, it may not be possible to achieve the benefits of the multiple
sessions specified in the resource utilization client option.
v For backup operations, you might want to prevent multiple sessions if the client
is backing up directly to tape, so that data is not spread among multiple
volumes. Multiple sessions can be prevented at the client by using a value of 2
for the resource utilization option on the client.
v For restore operations, set the resource utilization option to one greater than the
number of desired sessions. Use the number of drives that you want that single
client to use.
Remember that you might need to change the settings for the MAXNUMMP
parameter on the server and the resource utilization option on the client before
running a restore process to get optimal restore performance.
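The interplay between the server-side MAXNUMMP parameter and the client-side resource utilization option can be summarized as a pre-restore sanity check. The sketch below only mirrors the rules of thumb stated above; it is not product code:

```python
def multi_session_restore_ok(desired_sessions: int,
                             maxnummp: int,
                             resourceutilization: int) -> bool:
    """Rules of thumb from the text:
    - the resource utilization option should be one greater
      than the number of desired sessions
    - MAXNUMMP must allow a mount point for each session."""
    return (resourceutilization >= desired_sessions + 1
            and maxnummp >= desired_sessions)
```

For example, a restore intended to use three tape drives would need MAXNUMMP of at least 3 on the server and a resource utilization value of at least 4 on the client.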
To use multiple sessions, data for the client must be on multiple, sequential access
volumes, or a combination of sequential access volumes and disk. The data for a
client usually becomes spread out over some number of volumes over time. This
occurs deliberately when collocation is not used for the storage pool where the
client data is stored.
Because how data is arranged on tapes can affect restore performance, the
administrator can take actions in managing server storage to help optimize restore
operations.
v Ensure that tape reclamation and expiration are run regularly so that the tape
drive will not have as much expired data to skip over during restore. See
Reclaiming Space in Sequential Access Storage Pools on page 213 and File
Expiration and Expiration Processing on page 301.
v Reduce the number of file versions that are retained so that the tape drive will
not have to skip over as much inactive data during restore. See How Many
Backup Versions to Retain and For How Long on page 323.
In this chapter, most examples illustrate how to perform tasks by using a Tivoli
Storage Manager command-line interface. For information about the commands,
see Administrator's Reference, or issue the HELP command from the command line
of a Tivoli Storage Manager administrative client.
Tivoli Storage Manager tasks can also be performed from the administrative Web
interface. For more information about using the administrative interface, see Quick
Start.
v The client options file (dsm.opt) must contain the network address of the server
that the client will contact for services. See Connecting Nodes with the Server
on page 255 for more information.
v The scheduler must be started on the client machine. Refer to Backup-Archive
Clients Installation and User's Guide for details.
v The priority for the operation is 5; this is the default. If schedules conflict, the
schedule with the highest priority (lowest number) runs first.
v The schedule window begins at 9:00 p.m., and the schedule itself has 2 hours to
start.
v The start window is scheduled every day; this is the default.
v The schedule never expires; this is the default.
To change the defaults, see the DEFINE SCHEDULE command in the
Administrators Reference.
Client nodes process operations according to the schedules associated with the
nodes. To associate client nodes with a schedule, use the DEFINE ASSOCIATION
command. A client node can be associated with more than one schedule. However,
a node must be assigned to the policy domain to which a schedule belongs.
After a client schedule is defined, you can associate client nodes with it by
identifying the following information:
v Policy domain to which the schedule belongs
v List of client nodes to associate with the schedule
To associate the ENGNODE client node with the WEEKLY_BACKUP schedule,
both of which belong to the ENGPOLDOM policy domain, enter:
define association engpoldom weekly_backup engnode
The client and the Tivoli Storage Manager server can be set up to allow all sessions
to be initiated by the server. See Server-initiated Sessions on page 262 for
instructions.
Note: Tivoli Storage Manager does not recognize changes that you made to the
client options file while the scheduler is running. For Tivoli Storage Manager
to use the new values immediately, you must stop the scheduler and restart
it.
Chapter 14. Scheduling Operations for Client Nodes
You can display information about schedules and whether the schedules ran
successfully.
Start Date/Time      Duration Period Day
-------------------- -------- ------ ---
09/04/2002 12:45:14  2 H      2 Mo   Sat
09/04/2002 12:46:21  4 H      1 W    Sat
For example, you can issue the following command to find out which events
were missed (did not start) in the ENGPOLDOM policy domain for the
WEEKLY_BACKUP schedule in the previous week:
query event engpoldom weekly_backup begindate=-7 begintime=now
enddate=today endtime=now exceptionsonly=yes
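In the command above, BEGINDATE=-7 with BEGINTIME=NOW anchors the start of the reporting window exactly seven days before the current time, and ENDDATE=TODAY ENDTIME=NOW closes it at the present moment. The equivalent date arithmetic, as a sketch:

```python
from datetime import datetime, timedelta

def event_window(now: datetime, days_back: int = 7):
    """Window equivalent to BEGINDATE=-7 BEGINTIME=NOW
    ENDDATE=TODAY ENDTIME=NOW on the QUERY EVENT command."""
    begin = now - timedelta(days=days_back)
    return begin, now
```

With EXCEPTIONSONLY=YES, only events in that window that were missed or failed are reported.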
For more information about managing event records, see Managing Event
Records on page 370.
v Did the operation or commands that were run as a result of the schedule
complete successfully?
To determine the success of the commands issued as the result of a successful
schedule, you can:
Check the client's schedule log.
The schedule log is a file that contains information such as the statistics about
the backed-up objects, the name of the server backing up the objects, and the
time and date of the next scheduled operation. By default, Tivoli Storage
Manager stores the schedule log as a file called dsmsched.log and places the
file in the directory where the Tivoli Storage Manager backup-archive client is
installed. Refer to Backup-Archive Clients Installation and User's Guide for more
information.
Check the server's activity log.
Search or query the activity log for related messages. For example, search for
messages that mention the client node name, within the time period that the
schedule ran. For example:
query actlog begindate=02/23/2001 enddate=02/26/2001 originator=client
nodename=hermione
Issue the QUERY EVENT command with FORMAT=DETAILED, and view the
Result field of the output screen. For example:
Associate the client with the schedule and ensure that the scheduler is started on
the client or application client. The schedule runs the file called c:\incr.cmd once
a day between 6:00 p.m. and 6:05 p.m., every day of the week.
If the server uses password authentication, clients must use passwords. Passwords
are then also required for the server to process scheduled operations for client
nodes. If the password expires and is not updated, scheduled operations fail. You
can prevent failed operations by allowing Tivoli Storage Manager to generate a
new password when the current password expires. If you set the
PASSWORDACCESS option to GENERATE in the Tivoli Storage Manager client
options file, Tivoli Storage Manager can generate a new password when the
current password expires.
For more information about installing, configuring, and starting the Tivoli Storage
Manager scheduler, refer to Backup-Archive Clients Installation and User's Guide.
The following table compares the scheduling environment across operating systems
and components:

Component Type          Operating System
----------------------  -----------------------------------------
Backup-Archive Client   UNIX platforms other than Windows
                        (AIX, HP-UX, Linux, and Sun Solaris)
Backup-Archive Client   Windows NT, 2000, XP, and Server 2003
Refer to the Backup-Archive Clients Installation and User's Guide for details about
automatically starting the scheduler and running the scheduler in the background.
Display schedule information:
query schedule engpoldom
In this chapter, most examples illustrate how to perform tasks by using a Tivoli
Storage Manager command-line interface. For information about the commands,
see Administrator's Reference, or issue the HELP command from the command line
of a Tivoli Storage Manager administrative client.
Tivoli Storage Manager tasks can also be performed from the administrative Web
interface. For more information about using the administrative interface, see Quick
Start.
Modifying Schedules
You can modify existing schedules by using the UPDATE SCHEDULE command.
For example, to modify the ENGWEEKLY client schedule in the ENGPOLDOM
policy domain, enter:
update schedule engpoldom engweekly period=5 perunits=days
Deleting Schedules
When you delete a schedule, Tivoli Storage Manager deletes all client node
associations for that schedule. See Associating Client Nodes with Schedules on
page 361 for more information.
To delete the schedule WINTER in the ENGPOLDOM policy domain, enter:
delete schedule engpoldom winter
Rather than delete a schedule, you may want to remove all nodes from the
schedule and save the schedule for future use. For information, see Removing
Nodes from Schedules on page 370.
Start Date/Time      Duration Period Day
-------------------- -------- ------ ---
09/04/2002 12:45:14  2 H      2 Mo   Sat
09/04/2002 12:46:21  4 H      1 W    Sat
Instead of deleting a schedule, you may want to delete all associations to it and
save the schedule for possible reuse in the future.
Scheduled Start      Actual Start         Schedule Name  Node Name  Status
-------------------- -------------------- -------------  ---------  -------
09/04/2002 06:40:00  09/04/2002 07:38:09  WEEKLY_BACKUP  GOODELL    Started
09/16/2002 06:40:00                       WEEKLY_BACKUP  GOODELL    Future
Figure 53 shows an example of the results of this query. To find out why a
schedule was missed or failed, you may need to check the schedule log on the
client node itself. For example, a schedule can be missed because the scheduler
was not started on the client node.
Scheduled Start      Actual Start         Schedule Name  Node Name  Status
-------------------- -------------------- -------------  ---------  -------
09/04/2002 20:30:00                       DAILY_BACKUP   ANDREA     Missed
09/04/2002 20:30:00                       DAILY_BACKUP   EMILY      Missed
You can specify how long event records stay in the database before the server
automatically deletes them by using the SET EVENTRETENTION command. You
can also manually delete event records from the database, if database space is
required.
By default, clients contact the server. To limit the start of scheduled backup-archive
client sessions to the server only, change the SESSIONINITIATION parameter to
SERVERONLY either on the REGISTER NODE command or on the UPDATE
NODE command, and specify the high-level address and low-level address
options. These options must match what the client is using, otherwise the server
will not know how to contact the client. By doing so, you specify that the server
will not accept client requests for sessions.
All sessions must be started by server-prompted scheduling on the port that was
defined for the client with the REGISTER NODE or the UPDATE NODE
commands. If you select the CLIENTORSERVER option, the client might start
sessions with the server by communicating on the TCP/IP port that was defined
with a server option. Server-prompted scheduling also can be used to prompt the
client to connect to the server.
Administrators can perform the following activities to manage the throughput of
scheduled operations.
scheduling operations. With client-polling mode, client nodes poll the server for
the next scheduled event. With server-prompted mode, the server contacts the
nodes at the scheduled start time.
By default, the server permits both scheduling modes. The default (ANY) allows
nodes to specify either scheduling mode in their client options files. You can
modify this scheduling mode.
If you modify the default server setting to permit only one scheduling mode, all
client nodes must specify the same scheduling mode in their client options file.
Clients that do not have a matching scheduling mode will not process the
scheduled operations. The default mode for client nodes is client-polling.
The scheduler must be started on the client node's machine before a schedule can
run in either scheduling mode.
For more information about modes, see Overview of Scheduling Modes.
Ensure that client nodes specify the same mode in their client options files.
Server-Prompted Scheduling Mode: To have the server prompt clients for
scheduled operations, enter:
set schedmodes prompted
Ensure that client nodes specify the same mode in their client options files.
Any Scheduling Mode: To return to the default scheduling mode so that the
server supports both client-polling and server-prompted scheduling modes, enter:
set schedmodes any
schedule, you specify the length of time between processing of the schedule.
Consider how these interact to ensure that the clients get the backup coverage that
you intend.
See Defining and Updating a Backup Copy Group on page 321.
It is possible, especially after a client node or the server has been restarted, that a
client node may not poll the server until after the beginning of the startup window
in which the next scheduled event is to start. In this case, the starting time is
randomized over the specified percentage of the remaining duration of the startup
window.
Consider the following situation:
v The schedule start time is 8:00 a.m. and its duration is 1 hour. Therefore the
startup window for the event is from 8:00 to 9:00 a.m.
v Ten client nodes are associated with the schedule.
v Randomization is set to 50%.
v Nine client nodes poll the server before 8:00 a.m.
v One client node does not poll the server until 8:30 a.m.
The result is that the nine client nodes that polled the server before the beginning of
the startup window are assigned randomly selected starting times between 8:00
and 8:30. The client node that polled at 8:30 receives a randomly selected starting
time that is between 8:30 and 8:45.
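The randomization arithmetic in this example can be sketched as follows. The server assigns each node a start time within the first part of whatever portion of the startup window remains when the node polls. This is a simplified model of the behavior described above, not server source code:

```python
from datetime import datetime, timedelta

def start_time_range(window_start: datetime,
                     duration: timedelta,
                     randomization_pct: int,
                     poll_time: datetime):
    """Return the (earliest, latest) start times the server can
    assign to a polling node, per the description above."""
    window_end = window_start + duration
    # Nodes that poll before the window opens are treated as
    # polling at the window start.
    effective = max(poll_time, window_start)
    remaining = window_end - effective
    spread = remaining * (randomization_pct / 100)
    return effective, effective + spread
```

With an 8:00 a.m. start, a one-hour window, and 50% randomization, a node polling at 7:00 gets a start time between 8:00 and 8:30, while a node polling at 8:30 gets one between 8:30 and 8:45, matching the example above.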
This setting has no effect on clients that use the server-prompted scheduling mode.
The clients also have a QUERYSCHEDPERIOD option that can be set on each
client. The server value overrides the client value once the client successfully
contacts the server.
Maximum command retries can also be set on each client with a client option,
MAXCMDRETRIES. The server value overrides the client value once the client
successfully contacts the server.
You can use this setting in conjunction with the SET MAXCMDRETRIES command
(number of command retry attempts) to control when a client node contacts the
server to process a failed command. See Setting the Number of Command Retry
Attempts on page 377.
The retry period can also be set on each client with a client option, RETRYPERIOD.
The server value overrides the client value once the client successfully contacts the
server.
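QUERYSCHEDPERIOD, MAXCMDRETRIES, and RETRYPERIOD all follow the same precedence rule: the client's own option applies only until the client successfully contacts the server, after which a server-side setting, if one exists, wins. As a sketch of that rule (illustrative only):

```python
def effective_setting(server_value, client_value, contacted_server: bool):
    """The server value overrides the client value once the client
    has successfully contacted the server and the server has a
    value set; otherwise the client's own option applies."""
    if contacted_server and server_value is not None:
        return server_value
    return client_value
```

So a client option of 4 hours gives way to a server setting of 12 hours after the first successful contact.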
Tivoli Storage Manager defines a schedule and associates client node HERMIONE
with the schedule. The server assigns the schedule priority 1, sets the period units
(PERUNITS) to ONETIME, and determines the number of days to keep the
schedule active based on the value set with SET CLIENTACTDURATION
command.
For a list of valid actions, see the DEFINE CLIENTACTION command in
Administrators Reference. You can optionally include the OPTIONS and OBJECTS
parameters.
If the duration of client actions is set to zero, the server sets the DURUNITS
parameter (duration units) as indefinite for schedules defined with DEFINE
CLIENTACTION command. The indefinite setting for DURUNITS means that the
schedules are not deleted from the database.
In this chapter, most examples illustrate how to perform tasks by using a Tivoli
Storage Manager command-line interface. For information about the commands,
see Administrator's Reference, or issue the HELP command from the command line
of a Tivoli Storage Manager administrative client.
Tivoli Storage Manager tasks can also be performed from the administrative Web
interface. For more information about using the administrative interface, see Quick
Start.
Task                 Required Privilege Class
Register licenses    System
Audit licenses       Any administrator
For current information about supported clients and devices, visit the IBM Tivoli
Storage Manager address at www.ibm.com/software/sysmgmt/products/
support/IBMTivoliStorageManager.html.
The base IBM Tivoli Storage Manager feature includes the following support:
v An unlimited number of administrative clients.
v Enterprise Administration, which includes: command routing, enterprise
configuration, and enterprise logging (server-to-server).
Description
domino.lic
Each managed system that uses IBM Tivoli Storage Manager for
Mail
Also required: Managed System for LAN license if you use a
communication protocol other than shared memory.
drm.lic
emcsymm.lic
Each managed system that uses IBM Tivoli Storage Manager for
Hardware (EMC Symmetrix)
Also required: Managed System for LAN license if you use a
communication protocol other than shared memory.
emcsymr3.lic
Each managed system that uses IBM Tivoli Storage Manager for
Hardware (EMC Symmetrix R/3)
Also required: Managed System for LAN license if you use a
communication protocol other than shared memory.
ess.lic
Each managed system that uses IBM Tivoli Storage Manager for
Hardware (ESS)
Also required: Managed System for LAN license if you use a
communication protocol other than shared memory.
essr3.lic
Each managed system that uses IBM Tivoli Storage Manager for
Hardware (ESS R/3)
Also required: Managed System for LAN license if you use a
communication protocol other than shared memory.
informix.lic
Each managed system that uses IBM Tivoli Storage Manager for
Databases (Informix)
Also required: Managed System for LAN license if you use a
communication protocol other than shared memory.
library.lic
libshare.lic
lnotes.lic
Each managed system that uses IBM Tivoli Storage Manager for
Lotus Notes
Also required: Managed System for LAN license if you use a
communication protocol other than shared memory.
msexch.lic
Each managed system that uses IBM Tivoli Storage Manager for
Mail (MS Exchange)
Also required: Managed System for LAN license if you use a
communication protocol other than shared memory.
mgsyslan.lic
mgsyssan.lic
mssql.lic
Each managed system that uses IBM Tivoli Storage Manager for
Databases (MS SQL Server)
Also required: Managed System for LAN license if you use a
communication protocol other than shared memory.
ndmp.lic
oracle.lic
Each managed system that uses IBM Tivoli Storage Manager for
Databases (Oracle)
Also required: Managed System for LAN license if you use a
communication protocol other than shared memory.
r3.lic
Each managed system that uses IBM Tivoli Storage Manager for
Enterprise Resource Planning
Also required: Managed System for LAN license if you use a
communication protocol other than shared memory.
spacemgr.lic
Each managed system that uses Tivoli Storage Manager for Space
Management
Also required: Managed System for LAN license if you are using a
communication protocol other than shared memory. Only one
Managed System for LAN license is required if an HSM client and
backup-archive client are on the same system with the same node
ID.
was.lic
Each managed system that uses IBM Tivoli Storage Manager for
Application Servers (WebSphere)
Also required: Managed System for LAN license if you use a
communication protocol other than shared memory.
To register a license, you must issue the REGISTER LICENSE command and
specify the license file associated with the license. For example, to use the disaster
recovery manager and two Tivoli Storage Manager for Space Management (HSM)
clients, issue the following commands:
register license file=drm.lic
register license file=spacemgr.lic number=2
register license file=mgsyslan.lic number=2
To register 20 managed systems that move data over a local area network, issue
the following command:
register license file=mgsyslan.lic number=20
To register 10 IBM Tivoli Storage Manager for Lotus Notes clients that move data
over a LAN, using the TCP/IP communication protocol, issue the following
commands:
register license file=lnotes.lic number=10
register license file=mgsyslan.lic number=10
With the exception of ndmp.lic, drm.lic, and libshare.lic, you can specify any
number of license files to register. Always specify the total number of licenses you
want registered. The REGISTER LICENSE command updates the nodelock file
based on the total number of licenses you want registered. If you enter number=0
for a particular license, the license is unregistered. If you have twenty licenses and
require ten additional licenses, you must register thirty.
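Because REGISTER LICENSE rewrites the nodelock file with the total you specify rather than adding to it, the number to register is always the current count plus the additional licenses needed. A sketch of that bookkeeping:

```python
def number_to_register(currently_registered: int, additional: int) -> int:
    """REGISTER LICENSE expects the new total, not a delta.
    Specifying number=0 unregisters the license entirely."""
    return currently_registered + additional
```

So with twenty licenses registered and ten more required, the command must specify number=30.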
For example, to register IBM Tivoli Storage Manager for 30 managed systems that
move data over a local area network, issue the following commands:
register license file=mgsyslan.lic number=30
You can also register a license by specifying the product password that is included
in the license certificate file. For example:
register license 5s3qydpnwx7njdxnafksqas4
v The Nodelock file is destroyed or corrupted. IBM Tivoli Storage Manager stores
license information in the Nodelock file, which is located in the directory from
which the server is started.
Monitoring Licenses
When license terms change (for example, a new license is specified for the server),
the server conducts an audit to determine if the current server configuration
conforms to the license terms. The server also periodically audits compliance with
license terms. The results of an audit are used to check and enforce license terms.
If 30 days have elapsed since the previous license audit, the administrator cannot
cancel the audit.
If an IBM Tivoli Storage Manager system exceeds the terms of its license
agreement, one of the following occurs:
v The server issues a warning message indicating that it is not in compliance with
the licensing terms.
v If you are running in Try Buy mode, operations fail because the server is not
licensed for specific features.
You must contact your IBM Tivoli Storage Manager account representative or
authorized reseller to modify your agreement.
An administrator can monitor license compliance by:
Auditing licenses
Use the AUDIT LICENSES command to compare the current configuration
with the current licenses.
Note: During a license audit, the server calculates, by node, the amount of
backup, archive, and space management storage in use. This
calculation can take a great deal of CPU time and can stall other
server activity. Use the AUDITSTORAGE server option to specify
that storage is not to be calculated as part of a license audit.
Displaying license information
Use the QUERY LICENSE command to display details of your current
licenses and determine licensing compliance.
Scheduling automatic license audits
Use the SET LICENSEAUDITPERIOD command to specify the number of
days between automatic audits.
v The server uses the volumes specified in the dsmserv.dsk file for the database
and recovery log to record activity. It also identifies storage pool volumes to be
used.
v The server starts an IBM Tivoli Storage Manager server console session that is
used to operate and administer the server until administrative clients are
registered to the server.
To start the server, complete the following steps:
1. Change to the /usr/tivoli/tsm/server/bin directory from an AIX session.
Enter:
cd /usr/tivoli/tsm/server/bin
Note: If the server does not start, set the ulimit parameter to unlimited. For
example,
ulimit -d unlimited
When the server is started, IBM Tivoli Storage Manager displays the following
information:
v Product licensing and copyright information
v Processing information about the server options file
v Communication protocol information
v Database and recovery log information
v Storage pool volume information
v Server generation date
v Progress messages and any errors encountered during server initialization
If IBM Tivoli Storage Manager detects an invalid system date and time, the server
is disabled, and expiration, migration, reclamation, and volume history deletion
operations are not allowed. An error message (ANR0110E) is displayed. You may
either change the system date if it is in error, or issue the ACCEPT DATE
command to force the server to accept the current system date as valid. After the
system date is resolved, you must issue the ENABLE SESSIONS command to
re-enable the server for client sessions.
The date and time check occurs when the server is started and once each hour
thereafter. An invalid date is one that is:
v Earlier than the server installation date and time
v More than one hour earlier than the last time the date was checked
v More than 30 days later than the last time the date was checked
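The three rules above can be sketched as a small shell check. This is an illustration only, not TSM code; the timestamps are hypothetical epoch-second values supplied by the caller.

```shell
# Sketch of the invalid-date rules (not TSM code).
# Arguments: install time, last-check time, current time (epoch seconds).
is_invalid_date() {
  install=$1; last_check=$2; now=$3
  # Rule 1: earlier than the server installation date and time
  if [ "$now" -lt "$install" ]; then echo invalid; return; fi
  # Rule 2: more than one hour earlier than the last check
  if [ "$now" -lt $((last_check - 3600)) ]; then echo invalid; return; fi
  # Rule 3: more than 30 days later than the last check
  if [ "$now" -gt $((last_check + 30*24*3600)) ]; then echo invalid; return; fi
  echo valid
}
```

A date passing all three rules is accepted; otherwise the server disables itself as described above.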
3. The administrative client can access the IBM Tivoli Storage Manager
server.
If you do not follow these steps, you cannot control the server. When this occurs,
you can only stop the server by canceling the process, using the process number
displayed at startup. You may not be able to take down the server cleanly without
this process number.
To start the server running in the background, enter the following:
nohup dsmserv quiet &
You can check your directory for the output created in the nohup.out file to
determine if the server has started. This file can grow considerably over time.
The following steps automatically start the server with console logging when
the system is rebooted:
3. Edit ADSMSTART. Do NOT change the first line in the file. Specify the user log
files to capture messages on a daily basis. For example:
dsmulog /u/admin/log1 /u/admin/log2 /u/admin/log3
v If you restart the server by running the ADSMSTART script, the server runs
in the foreground and all console output is sent to the specified user logs.
v If you restart the server by issuing nohup adsmstart &, the server runs in the
background and all console output is sent to the specified user logs. You
must then use an administrative client session to halt the server.
In the above example, if you invoke the utility on Friday, on Friday the server
messages are captured to log1, on Saturday the messages are captured to log2, and
on Sunday the messages are captured to log3. On Monday the messages are
captured to log1 again, and the previous Friday's messages are overwritten.
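The daily cycling described above can be sketched as follows. This is an illustration of the rotation behavior, not the dsmulog source; the log names are the ones from the example.

```shell
# Sketch (not dsmulog itself): with N user logs, messages go to the next
# log each day and wrap around, overwriting the oldest log.
log_for_day() {
  nlogs=$1; day=$2   # day 0 = the day the utility was invoked
  echo "log$(( (day % nlogs) + 1 ))"
}
```

With three logs, day 0 (Friday in the example) maps to log1, day 1 to log2, day 2 to log3, and day 3 (Monday) wraps back to log1.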
The following example shows how to invoke the dsmulog utility to rotate through
the user logs based on size limit:
dsmulog /u/admin/log1 /u/admin/log2 /u/admin/log3 size=500
When the server is started, the utility captures the server messages to log1 until it
reaches a file size of 500 kilobytes and then changes to log2.
Tip: If the IBM Tivoli Storage Manager server goes down unexpectedly, copy the
current user logs to other file names before you restart the server. This will prevent
the dsmulog utility from overwriting the current logs. You can then view the user
logs to try to determine why the server became unavailable.
To log console messages during the current session, do the following:
1. If the server is running, halt the server.
2. Issue the dsmserv command as specified in the ADSMSTART shell script. For
example:
/usr/tivoli/tsm/server/bin/dsmserv 2>&1 | dsmulog /u/admin/log1 /u/admin/log2
To stop console logging and have the server automatically start after a system
reboot, complete the following steps:
1. If the server is running, halt the server.
2. Change to the server bin directory:
cd /usr/tivoli/tsm/server/bin
-o filename
Specifies an explicit options file name when running more than one server.
You can also define an environment variable to point to the server options file. For
example, to define the DSMSERV_CONFIG environment variable to point to the
server options file, enter:
export DSMSERV_CONFIG=/usr/tivoli/tsm/server/bin/filename.opt
where filename is the name you assigned to your server options file (dsmserv.opt).
Notes:
1. The -o parameter of the DSMSERV command can also be used to specify an
options file name.
2. Use the set environment command if your shell is in the csh family:
> setenv DSMSERV_DIR /usr/tivoli/tsm/server/bin
3. If you want to save this environment, save these entries in the .kshrc or the
.cshrc file of your $HOME directory.
4. The dsmserv.dsk is always read from the directory in which the server is
started.
The following procedure shows how to set up an additional IBM Tivoli Storage
Manager server:
1. Determine the directory where you want the server files created, for example,
/usr/tivoli/tsm/myserver, and make that directory:
> mkdir /usr/tivoli/tsm/myserver
Note: Ensure that the communication parameters are unique among all other
IBM Tivoli Storage Manager servers. The communication protocols are:
v TCPPORT for TCP/IP
v HTTPPORT for HTTP Access in the Web Administrative Client
Browser
For example, if your first server is using the default TCPPORT of 1500,
ensure that the new server is using a TCPPORT other than 1500 by
providing a real value in the server options file.
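One way to confirm that no two options files share a TCPPORT is a small scan, sketched below. This is illustrative only; the file arguments would be each server's dsmserv.opt.

```shell
# Sketch: report whether any TCPPORT value repeats across the given
# server options files (each file is one server's dsmserv.opt).
check_unique_ports() {
  awk 'toupper($1) == "TCPPORT" { print $2 }' "$@" |
    sort | uniq -d | grep -q . && echo duplicate || echo unique
}
```

Run it against every options file on the system before starting a new server instance.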
3. Set your path on the server console or from an aixterm session. Define your
environment variables, for example:
export DSMSERV_DIR=/usr/tivoli/tsm/server/bin
5. Create the database and recovery log in the desired directory for the new
server. In this example, -m indicates megabytes, db indicates the database, and
log indicates the recovery log. Refer to Administrators Reference for more
information on these commands.
Note: You need additional license authorizations to run additional servers. You
can use the REGISTER LICENSE command to register these licenses. See
Registering Licensed Features on page 384 for more information.
v The restore operation must be done by an IBM Tivoli Storage Manager server at a
code level that is the same as or later than that on the machine that was backed
up.
v Only manual and SCSI library types are supported for the restore operation.
In the following example, the IBM Tivoli Storage Manager server on machine A is
moved to machine B.
On machine A:
1. Migrate all disk storage pool data to sequential media. See Migration for Disk
Storage Pools on page 200 for details.
2. Perform a full database backup to sequential media.
backup db devclass=8mm type=full
3. Copy the volume history file and the device configuration file.
On machine B:
4. Install IBM Tivoli Storage Manager.
5. Copy the volume history file and device configuration file to the new server.
6. Restore the database:
dsmserv restore db devclass=8mm volumenames=vol001,vol002,vol003
Note: To prevent contention for the same tapes, the server does not allow a
reclamation process to start if a DELETE FILESPACE process is active. The
server checks every hour for whether the DELETE FILESPACE process has
completed so that the reclamation process can start. After the DELETE
FILESPACE process has completed, reclamation begins within one hour.
The server assigns each background process an ID number and displays the
process ID when the operation starts. This process ID number is used for tracking
purposes. For example, if you issue an EXPORT NODE command, the server
displays a message similar to the following:
EXPORT NODE started as Process 10
Some of these processes can also be run in the foreground by using the WAIT=YES
parameter when you issue the command from an administrative client. See
Administrators Reference for details.
The following figure shows a server background process report after a DELETE
FILESPACE command was issued. The report displays a process ID number, a
description, and a completion status for each background process.
You can issue the QUERY PROCESS command to find the process number. See
Requesting Information about Server Processes on page 395 for details.
If the process you want to cancel is currently waiting for a tape volume to be
mounted (for example, a process initiated by EXPORT, IMPORT, or MOVE DATA
commands), the mount request is automatically canceled. If a volume associated
with the process is currently being mounted by an automated library, the cancel
may not take effect until the mount is complete.
The following operations can be preempted and are listed in order of priority. The
server selects the lowest priority operation to preempt, for example reclamation.
1. Move data
2. Migration from disk to sequential media
3. Backup, archive, or HSM migration
4. Migration from sequential media to sequential media
5. Reclamation
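The selection rule — the server preempts the running operation with the lowest priority — can be sketched in shell. This is illustrative only; the priority:name argument format is invented for this example, with priorities numbered as in the list above (1 = move data, 5 = reclamation).

```shell
# Sketch: given running operations as "priority:name" pairs, pick the
# one to preempt -- the lowest priority, i.e. the largest number.
pick_preempt() {
  printf '%s\n' "$@" | sort -t: -k1,1nr | head -1 | cut -d: -f2
}
```

For example, with migration (2), backup (3), and reclamation (5) all running, reclamation is the operation selected for preemption.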
You can disable preemption by specifying NOPREEMPT in the server options file.
When this option is specified, the BACKUP DB command is the only operation
that can preempt other operations.
The following operations cannot preempt other operations nor can they be
preempted:
v Audit Volume
v Restore from a copy storage pool
v Prepare a recovery plan
v Store data using a remote data mover
The following operations can be preempted, and are listed in order of priority. The
server preempts the lowest priority operation, for example reclamation.
1. Move data
2. Migration from disk to sequential media
3. Backup, archive, or HSM migration
4. Migration from sequential media
5. Reclamation
You can disable preemption by specifying NOPREEMPT in the server options file.
When this option is specified, no operation can preempt another operation for
access to a volume.
At installation, the server name is set to SERVER1. After installation, you can use
the SET SERVERNAME command to change the server name. You can use the
QUERY STATUS command to see the name of the server.
To specify the server name as WELLS_DESIGN_DEPT., for example, enter the
following:
set servername wells_design_dept.
You must set unique names on servers that communicate with each other. See
Setting Up Communications Among Servers on page 472 for details.
Attention: Changing any values on a source server that is used for virtual
volume operations can impact the ability of the source server to access and
manage the data it has stored on the corresponding target server. On a network
where clients connect to multiple servers, it is recommended that all of the servers
have unique names. Changing the server name after Windows clients are
connected forces the clients to re-enter the passwords.
Changing the server name using the SET SERVERNAME command may have
additional implications varying by platform. Some examples to be aware of are:
v Passwords may be invalidated
v Device information may be affected
v Registry information on Windows platforms may change
You can add or update server options by editing the dsmserv.opt file or by using
the SETOPT command. For information about editing the server options file, refer to
Administrators Reference.
In this chapter, most examples illustrate how to perform tasks by using a Tivoli
Storage Manager command-line interface. For information about the commands,
see Administrators Reference, or issue the HELP command from the command line
of a Tivoli Storage Manager administrative client.
Tivoli Storage Manager tasks can also be performed from the administrative Web
interface. For more information about using the administrative interface, see Quick
Start.
Notes:
1. Scheduled administrative command output is directed to the activity log. This
output cannot be redirected. For information about the length of time activity
log information is retained in the database, see Using the IBM Tivoli Storage
Manager Activity Log on page 449.
2. You cannot schedule MACRO or QUERY ACTLOG commands.
* Schedule Name        Start Date/Time       Duration  Period  Day
- -------------------  -------------------   --------  ------  ---
  BACKUP_ARCHIVEPOOL   09/04/2002 14:08:11   1 H       1 D     Any
Note: The asterisk (*) in the first column specifies whether the corresponding
schedule has expired. If there is an asterisk in this column, the schedule has
expired.
You can check when the schedule is projected to run and whether it ran
successfully by using the QUERY EVENT command. For information about
querying events, see Querying Events on page 405.
Tailoring Schedules
To control more precisely when and how your schedules run, specify values for
schedule parameters instead of accepting the defaults when you define or update
schedules.
Schedule name
All schedules must have a unique name, which can be up to 30 characters.
Initial start date, time, and day
You can specify a past date, the current date, or a future date for the initial
start date for a schedule with the STARTDATE parameter.
You can specify a start time, such as 6 p.m. with the STARTTIME
parameter.
You can also specify the day of the week on which the startup window
begins with the DAYOFWEEK parameter. If the start date and start time
fall on a day that does not correspond to your value for the day of the
week, the start date and time are shifted forward in 24-hour increments
until the day of the week is satisfied.
If you select a value for the day of the week other than ANY, schedules
may not process when you expect. This depends on the values for PERIOD
and PERUNITS. Use the QUERY EVENT command to project when
schedules will process to ensure that you achieve the desired result.
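The 24-hour shifting described above amounts to simple day-of-week arithmetic, sketched below. This is an illustration, not server code; numbering the days 0 through 6 with 0 as Sunday is an assumption made for this sketch.

```shell
# Sketch: how many 24-hour increments the start date shifts forward
# so that it lands on the requested day of the week (0-6, 0 = Sunday).
shift_to_dayofweek() {
  start_dow=$1; wanted_dow=$2
  echo $(( (wanted_dow - start_dow + 7) % 7 ))
}
```

A Friday (5) start date with DAYOFWEEK requesting Monday (1) shifts forward three days; a start date already on the requested day does not shift.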
Duration of a startup window
You can specify the duration of a startup window, such as 12 hours, with
the DURATION and DURUNITS parameters. The server must start the
scheduled service within the specified duration, but does not necessarily
complete it within that period of time. If the schedule needs to be retried
for any reason, the retry attempt must begin before the startup window
elapses or the operation does not restart.
If the schedule does not start during the startup window, the server
records this as a missed event in the database. You can get an exception
report from the server to identify schedules that did not run. For more
information, see Querying Events on page 405.
How often to run the scheduled service
You can set the schedule frequency based on a period of hours, days,
weeks, months, or years with the PERIOD and PERUNITS parameters. To
have weekly backups, for example, set the period to one week with
PERIOD=1 and PERUNITS=WEEKS.
Chapter 17. Automating Server Operations
Expiration date
You can specify an expiration date for a schedule with the EXPIRATION
parameter if the services it initiates are required for only a specific period
of time. If you set an expiration date, the schedule is not used after that
date, but it still exists. You must delete the schedule to remove it from the
database.
Priority
You can assign a priority to schedules with the PRIORITY parameter. For
example, if you define two schedules and they have the same startup
window or windows overlap, the server runs the schedule with the highest
priority first. A schedule with a priority of 1 is started before a schedule
with a priority of 3.
If two schedules try to use the same resources, the schedule that first
initiated the process will be the one to continue processing. The second
schedule will start but will not successfully complete. Be sure to check the
activity log for details.
Administrative schedule name
If you are defining or updating an administrative command schedule, you
must specify the schedule name.
Type of schedule
If you are updating an administrative command schedule, you must
specify TYPE=ADMINISTRATIVE on the UPDATE command. If you are
defining a new administrative command schedule, this parameter is
assumed if the CMD parameter is specified.
Command
When you define an administrative command schedule, you must specify
the complete command that is processed with the schedule with the CMD
parameter. These commands are used to tune server operations or to start
functions that require significant server or system resources. The functions
include:
v Migration
v Reclamation
v Export and import
v Database backup
Whether or not the schedule is active
Administrative command schedules can be active or inactive when they
are defined or updated. Active schedules are processed when the specified
command window occurs. Inactive schedules are not processed until they
are made active by an UPDATE SCHEDULE command with the ACTIVE
parameter set to YES.
For example, enter:
define schedule backup_archivepool type=administrative
cmd='backup stgpool archivepool recoverypool' active=yes starttime=20:00 period=2
This command specifies that, starting today, the ARCHIVEPOOL primary storage
pool is to be backed up to the RECOVERYPOOL copy storage pool every two days
at 8 p.m.
Copying Schedules
You can create a new schedule by copying an existing administrative schedule.
When you copy a schedule, Tivoli Storage Manager copies the following
information:
v A description of the schedule
v All parameter values from the original schedule
You can then update the new schedule to meet your needs.
To copy the BACKUP_ARCHIVEPOOL administrative schedule and name the new
schedule BCKSCHED, enter:
copy schedule backup_archivepool bcksched type=administrative
Deleting Schedules
To delete the administrative schedule ENGBKUP, enter:
delete schedule engbkup type=administrative
Querying Events
To help manage schedules for administrative commands, you can request
information about scheduled and completed events. You can request general or
exception reporting queries.
v To get information about past and projected scheduled processes, use a general
query. If the time range you specify includes the future, the query output shows
which events should occur in the future based on current schedules.
v To get information about scheduled processes that did not complete successfully,
use exception reporting.
To minimize the processing time when querying events, minimize the time range.
To query an event for an administrative command schedule, you must specify the
TYPE=ADMINISTRATIVE parameter. Figure 54 shows an example of the results of
the following command:
query event * type=administrative
Scheduled Start       Actual Start          Schedule Name        Status
-------------------   -------------------   ------------------   ---------
09/04/2002 14:08:11   09/04/2002 14:08:14   BACKUP_ARCHIVEPOOL   Completed
Event records are automatically removed from the database after both of the
following conditions are met:
v The specified retention period has passed
v The startup window for the event has elapsed
The administrator can run the script by issuing the RUN command from the
administrative Web interface, or scheduling the script for processing using the
administrative command scheduler on the server. If one of the specified commands
in the script does not process successfully, the remaining commands are not
processed.
Tivoli Storage Manager scripts can include the following:
v Command parameter substitution.
v SQL SELECT statements that you specify when the script is processed.
v Conditional logic flow statements. These logic flow statements include:
The IF clause; this clause determines how processing should proceed based
on the current return code value.
The EXIT statement; this statement ends script processing.
The GOTO and LABEL statement; this statement directs logic flow to
continue processing with the line that starts with the label specified.
Comment lines.
You can define a server script line by line, create a file that contains the command
lines, or copy an existing script.
The following examples use commands to define and update scripts. However, you
can easily define and update scripts using the administrative Web interface where
you can also use local workstation cut and paste functions.
Note: The administrative Web interface only supports ASCII characters for input.
If you need to enter characters that are not ASCII, do not use the
administrative Web interface. Issue the DEFINE SCRIPT and UPDATE
SCRIPT commands from the server console.
You can define a script with the DEFINE SCRIPT command. You can initially
define the first line of the script with this command. For example:
define script qaixc "select node_name from nodes where platform=aix"
desc='Display AIX clients'
This example defines the script as QAIXC. When you run the script, all AIX clients
are displayed.
To define additional lines, use the UPDATE SCRIPT command. For example, to
add a QUERY SESSION command, enter:
update script qaixc "query session *"
You can specify a WAIT parameter with the DEFINE CLIENTACTION command.
This allows the client action to complete before processing the next step in a
command script or macro. Refer to Administrators Reference for information.
You can use the ISSUE MESSAGE command to determine where a problem is
within a command in a script. Refer to Administrators Reference for information on
how to use the ISSUE MESSAGE command.
The script is defined as ADMIN1, and the contents of the script have been read in
from the file BKUP12.MAC.
Note: The file must reside on the server and be readable by the server.
When you run the script you must specify two values, one for $1 and one for $2.
For example:
run sqlsample node_name aix
The command that is processed when the SQLSAMPLE script is run is:
select node_name from nodes where platform=aix
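The $1/$2 substitution behaves much like shell positional parameters, which a small analog can show. Here run_sqlsample is an invented stand-in for the stored script, not a TSM command.

```shell
# Sketch: the server substitutes the RUN command's values for $1 and $2
# in the stored script line, just as a shell function substitutes its
# positional parameters. run_sqlsample stands in for the SQLSAMPLE script.
run_sqlsample() {
  echo "select $1 from nodes where platform=$2"
}
```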
The following script example backs up the BACKUPPOOL storage pool if a return
code with a severity of warning is encountered:
/* Backup storage pools if clients are not accessing the server */
select * from sessions
/* There are no sessions if rc_notfound is received */
if(warning) backup stg backuppool copypool
Specifying the EXIT Statement: The EXIT statement ends script processing. The
following example uses the IF clause together with RC_OK to determine if clients
are accessing the server. If a RC_OK return code is received, this indicates that
client sessions are accessing the server. The script proceeds with the exit statement,
and the backup does not start.
/* Back up storage pools if clients are not accessing the server */
select * from sessions
/* There are sessions if rc_ok is received */
if(rc_ok) exit
backup stg backuppool copypool
Updating a Script
You can update a script to change an existing command line or to add a new
command line to a script.
To change an existing command line, specify the LINE= parameter.
To append a command line to an existing script issue the UPDATE SCRIPT
command without the LINE= parameter. The appended command line is assigned
a line number of five greater than the last command line number in the command
line sequence. For example, if your script ends with line 010, the appended
command line is assigned a line number of 015.
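The numbering rule for appended lines can be sketched as follows (illustrative only, not server code):

```shell
# Sketch: an appended command line gets a number five greater than the
# last line of the script, printed in the three-digit form shown below.
next_line_number() {
  last=$1
  printf '%03d\n' $((last + 5))
}
```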
Appending a New Command: The following is an example of the QSTATUS
script. The script has lines 001, 005, and 010 as follows:
001 /* This is the QSTATUS script */
005 QUERY STATUS
010 QUERY PROCESS
To append the QUERY SESSION command at the end of the script, issue the
following:
update script qstatus "query session"
The QUERY SESSION command is assigned a command line number of 015 and
the updated script is as follows:
001 /* This is the QSTATUS script */
005 QUERY STATUS
010 QUERY PROCESS
015 QUERY SESSION
Adding a New Command and Line Number: To add the SET REGISTRATION
OPEN command as the new line 007 in the QSTATUS script, issue the following:
update script qstatus "set registration open" line=7
001 /* This is the QSTATUS script */
005 QUERY STATUS
007 SET REGISTRATION OPEN
010 QUERY PROCESS
015 QUERY SESSION
The QUERY1 command script now contains the same command lines as the
QSTATUS command script.
When you query your scripts, you can choose the format of the output:
Standard
Detailed
Lines
        Displays the name of the script, the line numbers of the commands,
        comment lines, and the commands.
Raw

A query produces output such as:
Description
------------------------------------------------------
Display columns for a specified SQL table
Sample SQL Query
You can then edit the newscript.script with an editor that is available to you on
your system. To create a new script using the edited output from your query, issue:
define script srtnew file=newscript.script
Enter:
run qaixc node_name aix
Using Macros
Tivoli Storage Manager supports macros on the administrative client. A macro is a
file that contains one or more administrative client commands. You can only run a
macro from the administrative client in batch or interactive modes. Macros are
stored as a file on the administrative client. Macros are not distributed across
servers and cannot be scheduled on the server.
Macros can include the following:
v Administrative commands
For more information on
a Macro.
v Comments
For more information on
page 414.
v Continuation characters
For more information on
Characters on page 414.
v Variables
For more information on
Macro on page 415.
The name for a macro must follow the naming conventions of the administrative
client running on your operating system. For more information about file naming
conventions, refer to the Administrators Reference.
In macros that contain several commands, use the COMMIT and ROLLBACK
commands to control command processing within the macro. For more information
about using these commands, see Controlling Command Processing in a Macro
on page 416.
You can include the MACRO command within a macro file to invoke other macros
up to ten levels deep. A macro invoked from the Tivoli Storage Manager
administrative client command prompt is called a high-level macro. Any macros
invoked from within the high-level macro are called nested macros.
This example uses continuation characters in the macro file. For more information
on continuation characters, see Using Continuation Characters on page 414.
After you create a macro file, you can update the information that it contains and
use it again. You can also copy the macro file, make changes to the copy, and then
run the copy.
Comments cannot be nested and cannot span lines. Every line of a comment must
contain the comment delimiters.
Tivoli Storage Manager concatenates the two strings with no intervening blanks.
You must use only this method to continue a quoted string of values across more
than one line.
register node %1 %2 -        /* userid password     */
   contact=%3 -              /* name, phone number  */
   domain=%4                 /* policy domain       */
Then, when you run the macro, you enter the values you want to pass to the
server to process the command.
For example, to register the node named DAVID with a password of DAVIDPW,
with his name and phone number included as contact information, and assign him
to the DOMAIN1 policy domain, enter:
macro auth.mac david davidpw "david pease, x1234" domain1
If your system uses the percent sign as a wildcard character, the administrative
client interprets a pattern-matching expression in a macro where the percent sign is
immediately followed by a numeric digit as a substitution variable.
You cannot enclose a substitution variable in quotation marks. However, a value
you supply as a substitution for the variable can be a quoted string.
Running a Macro
Use the MACRO command when you want to run a macro. You can enter the
MACRO command in batch or interactive mode.
If the macro does not contain substitution variables (such as the REG.MAC macro
described in the Writing Commands in a Macro on page 413), run the macro by
entering the MACRO command with the name of the macro file. For example:
macro reg.mac
If you enter fewer values than there are substitution variables in the macro, the
administrative client replaces the remaining variables with null strings.
If you want to omit one or more values between values, enter a null string ("") for
each omitted value. For example, if you omit the contact information in the
previous example, you must enter:
macro auth.mac pease mypasswd "" domain1
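The placeholder behavior of the null string mirrors shell positional parameters, sketched below. Here show_args is an invented function for illustration; it is not part of the MACRO facility.

```shell
# Sketch: a "" argument holds the place of an omitted value, so later
# arguments keep their positions -- the same behavior the MACRO command
# gives substitution variables.
show_args() {
  echo "user=$1 pw=$2 contact=$3 domain=$4"
}
```

Calling it with "" for the third value leaves the contact field empty while domain1 still lands in the fourth position.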
v Start the administrative client session using the ITEMCOMMIT option. This
causes each command within a macro to be committed before the next command
is processed.
Tasks:
Estimating and Monitoring Database and Recovery Log Space Requirements on page 424
Increasing the Size of the Database or Recovery Log on page 427
Decreasing the Size of the Database or Recovery Log on page 431
Optimizing Database and Recovery Log Performance on page 433
Note: Mirroring of the database and recovery log is described in the chapter on
data protection. See Mirroring the Database and Recovery Log on
page 546.
In this chapter, most examples illustrate how to perform tasks by using a Tivoli
Storage Manager command-line interface. For information about the commands,
see Administrators Reference, or issue the HELP command from the command line
of a Tivoli Storage Manager administrative client.
Tivoli Storage Manager tasks can also be performed from the administrative Web
interface. For more information about using the administrative interface, see Quick
Start.
A transaction is the unit of work exchanged between the client and server. The
client program can transfer more than one file or directory between the client and
server before it commits the data to server storage. Therefore, a transaction can
contain more than one file or directory. This is called a transaction group. Tivoli
Storage Manager provides a TXNGROUPMAX server option that allows you to
specify the number of files or directories contained within a transaction group.
The following are examples of how the TXNGROUPMAX option can affect
performance throughput for operations to tape and the recovery log. The
maximum number of concurrent client/server sessions is defined by the
MAXSESSIONS server option.
v The TXNGROUPMAX option is set to 20. The MAXSESSIONS option is set to 5.
Five concurrent sessions are processing, and each file in the transaction requires
10 logged database operations. This would be a concurrent load of:
20*10*5=1,000
This represents 1,000 log records in the recovery log. Each time a transaction
ends (commits), the server can free 200 of those log records. Over time and as
transactions end, the recovery log can release the space used by the oldest
transactions. These transactions complete and the log progresses forward.
v The TXNGROUPMAX option is set to 2,000. The MAXSESSIONS option is set to
5. Five concurrent sessions are processing, and each file in the transaction
requires 10 logged database operations. This would be a concurrent load of:
2,000*10*5=100,000
This represents 100,000 log records in the recovery log. Each time a transaction
ends (commits), the server can free 20,000 of those log records. Over time and as
transactions end, the recovery log can release the space used by the oldest
transactions. These transactions complete and the log progresses forward.
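The arithmetic in both examples follows one formula, sketched here (illustrative only):

```shell
# Sketch: concurrent recovery-log load =
#   TXNGROUPMAX * logged database operations per file * concurrent sessions
log_load() {
  echo $(( $1 * $2 * $3 ))
}
```

With TXNGROUPMAX=20 the load is 1,000 log records; with TXNGROUPMAX=2,000 it grows to 100,000.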
TXNGROUPMAX Setting    Log Records
-------------------    -----------
TXNGROUPMAX=20         100,000
TXNGROUPMAX=2,000      10,000,000
There are several related server options that can be used to tune server
performance and reduce the risk of running out of recovery log space:
TXNGROUPMAX setting. The drives need to handle higher transfer rates in order
to handle the increased load on the recovery log and database.
The TXNGROUPMAX option can be set as a global server option value, or it can
be set individually for a node. Refer to the REGISTER NODE command and the
server options in the Administrators Reference. We recommend that you specify a
conservative TXNGROUPMAX value (between 4 and 512). Select higher values for
individual nodes that will benefit from the increased transaction size.
|
|
|
To manage the database and recovery log effectively, you must understand the
following concepts:
v Available space
v Assigned capacity
v Utilization
Available Space
Not all of the space that is allocated for the volumes of the database or of the
recovery log can be used for database and recovery log information. The server
subtracts 1MB from each physical volume for overhead. The remaining space is
divided into 4MB partitions. For example, you allocate four 25MB volumes for the
database. For the four volumes, 4MB are needed for overhead, leaving 96MB of
available space, as shown in Figure 56:
Figure 56. Available space for the database

Volume   Allocated Space on     Available Space for
         Physical Volumes       the Database
------   -------------------    -------------------
VOL1     25 MB                  24 MB
VOL2     25 MB                  24 MB
VOL3     25 MB                  24 MB
VOL4     25 MB                  24 MB
Totals   100 MB                 96 MB
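The overhead rule described above (1MB subtracted per physical volume, with the remainder divided into 4MB partitions) can be modeled with a short Python sketch. This is an illustration of the calculation only, not product code:

```python
def available_space(volume_mbs):
    """Usable database or recovery log space from raw volume sizes in MB.

    Tivoli Storage Manager reserves 1MB per physical volume for overhead
    and divides the remainder into 4MB partitions; space that does not
    fill a whole partition is not usable."""
    total = 0
    for mb in volume_mbs:
        usable = mb - 1              # 1MB overhead per volume
        total += (usable // 4) * 4   # whole 4MB partitions only
    return total

print(available_space([25, 25, 25, 25]))  # 96, as in Figure 56
print(available_space([101]))             # 100, as for a 101MB volume
```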
Assigned Capacity
Assigned capacity is the available space that can be used for database or recovery
log information. During installation, the assigned capacities of the database and
recovery log match the available space. If you add volumes after installation, you
increase your available space. However, to increase the assigned capacity, you must
also extend the database or recovery log. See Step 2: Extending the Capacity of
the Database or Recovery Log on page 430 for details.
Utilization
Utilization is the percent of the assigned capacity in use at a specific time.
Maximum percent utilized is the highest utilization since the statistics were reset. For
example, an installation performs most backups after midnight. Figure 57 shows
that utilization statistics for the recovery log were reset at 9 p.m. the previous
evening and that the maximum utilization occurred at 12 a.m.
Figure 57. Utilization statistics for the recovery log

Time                                        % Util   Max % Util
9:00 p.m. (utilization statistics reset)    50.0     50.0
12:00 a.m.                                  80.0     80.0
12:58 a.m. (current time)                   60.0     80.0
Unless many objects are deleted, the database maximum percent utilized is usually
close to the utilization percentage.
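The relationship between current utilization and maximum percent utilized can be mimicked with a small Python sketch that replays the statistics in Figure 57 (illustrative only; not part of the product):

```python
class UtilizationStats:
    """Track current and maximum percent utilization since the last reset,
    as reported by QUERY DB and QUERY LOG (illustrative sketch only)."""
    def __init__(self):
        self.max_util = 0.0

    def reset(self, pct_util):
        """Reset statistics; the maximum starts at the current utilization."""
        self.max_util = pct_util

    def record(self, pct_util):
        """Record a new sample; return (current, maximum since reset)."""
        self.max_util = max(self.max_util, pct_util)
        return pct_util, self.max_util

stats = UtilizationStats()
stats.reset(50.0)           # 9:00 p.m.: statistics reset
stats.record(80.0)          # 12:00 a.m.: peak utilization
print(stats.record(60.0))   # 12:58 a.m. -> (60.0, 80.0)
```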
512 bytes of USER area in a raw volume. This is not a problem for database and
recovery log volumes, but Tivoli Storage Manager control information is also
written in this area. If AIX overwrites this control information when raw
volumes are mirrored, Tivoli Storage Manager may not be able to vary the
volume online.
Note: Tivoli Storage Manager mirroring only supports database and recovery
log volumes. Disk storage pool volumes are not supported by Tivoli
Storage Manager mirroring.
The use of JFS files for database, recovery log, and storage pool volumes requires
slightly more CPU than is required for raw volumes. However, JFS read-ahead
caching improves performance.
Archived files
Up to 100,000 files might be archived copies of client files.
Space-managed files
Up to 200,000 files migrated from client workstations might be in
server storage.
Note: File aggregation does not affect space-managed files.
At 600 bytes per file, the space required for these files is:
(1,500,000 + 100,000 + 200,000) x 600 = 1.0GB
If the average file size is about 10KB, about 100,000 files are in
cache at any one time.
100,000 files x 200 bytes = 19MB
Therefore, cached files and copy storage pool files require about 0.4GB of
database space.
Overhead
About 1.4GB is required for file versions and cached and copy storage pool
files. Up to 50% additional space (or 0.7GB) should be allowed for
overhead.
The database should then be approximately 2.1GB.
If you cannot estimate the numbers of files, you can roughly estimate the database
size as from 1% to 5% of the required server storage space. For example, if you
need 100GB of server storage, your database should be between 1GB and 5GB. See
Estimating Space Needs for Storage Pools on page 221 for details.
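The rough sizing rule in the preceding paragraph can be expressed directly (an illustration of the 1% to 5% rule only, not product code):

```python
def rough_db_size_range_gb(server_storage_gb):
    """Rough database size estimate: 1% to 5% of required server storage."""
    return server_storage_gb * 0.01, server_storage_gb * 0.05

low, high = rough_db_size_range_gb(100)
print(low, high)   # about 1.0 and 5.0 GB for 100GB of server storage
```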
During SQL queries of the server, intermediate results are stored in temporary
tables that require space in the free portion of the database. Therefore, the use of
SQL queries requires additional database space. The more complicated the queries,
the greater the space required.
The size of the recovery log depends on the number of concurrent client sessions
and the number of background processes executing on the server. The maximum
number of concurrent client sessions is set in the server options.
Attention: Be aware that the results are estimates. The actual size of the database
may differ from the estimate because of factors such as the number of directories
and the length of the path and file names. You should periodically monitor your
database and recovery log and adjust their sizes as necessary.
Begin with at least a 12MB recovery log. If you use the database backup and
recovery functions in roll-forward mode, you should begin with at least 25MB. See
Database and Recovery Log Protection on page 544 and Estimating the Size of
the Recovery Log on page 554 for more information.
If the SELFTUNEBUFPOOLSIZE server option is in effect, the buffer pool cache hit
ratio statistics are reset at the start of expiration. After expiration, the buffer pool
size is increased if the cache hit ratio is less than 98%. The increase in the buffer
pool size is in small increments and may change after each expiration. The change
in the buffer pool size is not reflected in the server options file. You can check the
current size at any time using the QUERY STATUS command. Use the SETOPT
BUFPOOLSIZE command to change the buffer pool size.
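The self-tuning decision described above can be sketched as follows. The 98% threshold comes from the text; the 256KB increment is an assumption made for illustration, not the product's actual step size:

```python
def self_tune_bufpool(bufpool_kb, cache_hit_pct, increment_kb=256):
    """After expiration, grow the buffer pool by a small increment while
    the cache hit ratio is below 98% (SELFTUNEBUFPOOLSIZE behavior).
    The increment size here is an illustrative assumption."""
    if cache_hit_pct < 98.0:
        return bufpool_kb + increment_kb
    return bufpool_kb

print(self_tune_bufpool(2048, 95.0))  # 2304: pool grows
print(self_tune_bufpool(2048, 99.0))  # 2048: pool unchanged
```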
To display information about the database or recovery log, issue the QUERY DB or
QUERY LOG command. For example:
query db
To display information about the database or recovery log volumes, issue the
QUERY DBVOLUME or the QUERY LOGVOLUME command. For example:
Volume Name (Copy 1): /home/bill/dsmserv/build/db1
Copy Status: Syncd
Volume Name (Copy 2):
Copy Status: Undefined
Volume Name (Copy 3):
Copy Status: Undefined
Available Space (MB): 12
Allocated Space (MB): 12
Free Space (MB): 0
Note: Tivoli Storage Manager displays output from this command from the lowest
to the highest number. If a volume is deleted, Tivoli Storage Manager reuses
that volume number the next time that a volume is defined. A query can
then display volumes that are not in numerical sequence. You can reset the
order of your database or recovery log volumes by specifying the desired
order with the DSMSERV LOADFORMAT command.
See the indicated sections for details about the following entries:
v Available space, Available Space on page 422
v Assigned capacity, Assigned Capacity on page 423
v Utilization and maximum utilization, Utilization on page 423
If utilization is high, you may want to add space (see Increasing the Size of the
Database or Recovery Log on page 427). If utilization is low, you may want to
delete space (see Decreasing the Size of the Database or Recovery Log on
page 431).
Note: You can also use a DEFINE SPACETRIGGER command to automatically
check whether the database or recovery log exceeds a utilization percentage
that you specify. See Automating the Increase of the Database or Recovery
Log on page 427 for details.
Note: There is one time when the database or recovery log might exceed the
maximum size specified: If the database or recovery log is less than the
maximum size when expansion begins, it continues to the full expansion
value. However, no further expansion will occur unless the space trigger is
updated.
To have any new volumes created in the /usr/tivoli/tsm/server/bin/ directory,
issue the following commands:
define spacetrigger db fullpct=85 spaceexpansion=25
expansionprefix=/usr/tivoli/tsm/server/bin/ maximumsize=200000
define spacetrigger log fullpct=75 spaceexpansion=30
expansionprefix=/usr/tivoli/tsm/server/bin/ maximumsize=50000
The server then monitors the database or recovery log and, if the utilization level
is reached, does the following:
v Displays a message (ANR4413I or ANR4414I) that states the amount of space
required to meet the utilization parameter specified in the command.
v Allocates space for the new volume.
v Defines the new volume.
v Extends the database or recovery log.
v If a volume is mirrored and there is enough disk space, the preceding steps are
also performed for the mirrored copies.
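A space trigger's decision can be sketched as follows. This is a simplified illustration of the FULLPCT, SPACEEXPANSION, and MAXIMUMSIZE parameters; it ignores the recovery log's hard limit and the timing caveats in the notes that follow:

```python
def plan_expansion_mb(current_mb, util_pct, fullpct, expansion_pct, maximum_mb):
    """Return the number of MB to add, or 0 if no expansion is triggered.

    The trigger fires when utilization reaches FULLPCT; expansion adds
    SPACEEXPANSION percent of the current size; no automatic expansion
    starts once the current size exceeds MAXIMUMSIZE."""
    if util_pct < fullpct:
        return 0                          # trigger threshold not reached
    if current_mb > maximum_mb:
        return 0                          # past the size threshold
    return current_mb * expansion_pct // 100

print(plan_expansion_mb(1000, 90, 85, 25, 200000))    # 250
print(plan_expansion_mb(1000, 50, 85, 25, 200000))    # 0
print(plan_expansion_mb(300000, 90, 85, 25, 200000))  # 0
```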
Chapter 18. Managing the Database and Recovery Log
427
Notes:
1. The maximum size of the recovery log is 13GB. The server will not
automatically extend the recovery log beyond 12GB.
2. An automatic expansion may exceed the specified database or recovery log
maximum size but not the 13GB recovery log limit. However, after the
maximum has been reached, no further automatic expansions will occur.
3. A space trigger percentage may be exceeded between the monitoring of the
database or recovery log and the time that a new volume is brought online.
4. If the server creates a database or recovery log volume and the attempt to add
it to the server fails, the volume is not deleted. After the problem is corrected,
you can define it with the DEFINE DBVOLUME or DEFINE LOGVOLUME
command.
5. Automatic expansion will not occur during a database backup.
6. The database and recovery log utilization percentage may exceed the space
trigger value. The server checks utilization after a database or recovery log
commit.
Also, deleting database volumes and reducing the database does not activate
the trigger. Therefore, the utilization percentage can exceed the set value before
new volumes are online.
7. The database and the recovery log size may exceed the specified
MAXIMUMSIZE value. This value is a threshold for expansion. Tivoli Storage
Manager checks the size and allows expansion if the database or the recovery
log is less than the maximum size. Tivoli Storage Manager will not
automatically expand the database or the recovery log if either is greater than
the maximum size. However, Tivoli Storage Manager only checks the size that
results after expansion to ensure that the maximum recovery log size is not
exceeded.
The available space of the database increases to 196MB, but the assigned capacity
remains at 96MB. For Tivoli Storage Manager to use the space, you must extend
the capacity (see Step 2: Extending the Capacity of the Database or Recovery Log
on page 430). To verify the change, query the database or recovery log. For
example, to query the database, enter:
query db
The value in the Maximum Extension field should equal the available space of the
new volume. In this example, a 101MB volume was allocated. This report shows
that the available space has increased by 100MB; the assigned capacity is
unchanged at 96MB; and the maximum extension is 100MB. Figure 58 illustrates
these changes.
Figure 58. Adding volume VOL5 to the database

Volume   Allocated Space on   Available Space for   Assigned
         Physical Volumes     the Database          Capacity
------   ------------------   -------------------   --------
VOL1     25 MB                24 MB
VOL2     25 MB                24 MB
VOL3     25 MB                24 MB
VOL4     25 MB                24 MB
VOL5     101 MB               100 MB
Totals   201 MB               196 MB                96 MB
You can also query the database and recovery log volumes to display information
about the physical volumes that make up the database and recovery log.
Notes:
1. The maximum size of the recovery log is 13GB, and the maximum size of the
database is 530GB. If you allocate a volume that would cause the recovery log
or database to exceed these limits, the subsequent DEFINE DBVOLUME or
DEFINE LOGVOLUME command for the volume will fail.
2. For performance reasons, define more than one volume for the database and
recovery log, and put these volumes on separate disks. This allows
simultaneous access to different parts of the database or recovery log.
3. To use disk space efficiently, allocate a few large disk volumes rather than
many small disk volumes. In this way, you avoid losing space to overhead
processing.
If you already have a number of small volumes and want to consolidate the
space into one large volume, see Decreasing the Size of the Database or
Recovery Log on page 431.
4. To protect database and recovery log volumes from media failure, use
mirroring. See Mirroring the Database and Recovery Log on page 546 for
details.
Using the DSMFMT Command to Format Volumes: You can still use the
DSMFMT utility to allocate a database or recovery log volume. You would then
issue the DEFINE DBVOLUME or DEFINE LOGVOLUME command without the
FORMATSIZE parameter, and extend the database or recovery log (see Step 2:
Extending the Capacity of the Database or Recovery Log).
To allocate an additional 101MB to the database as volume VOL5, enter:
> dsmfmt -db vol5 101
After the database has been extended, the available space and assigned capacity
are both equal to 196MB, as shown in Figure 59.
Figure 59. The database after extension

Volume   Allocated Space on   Available Space for   Assigned
         Physical Volumes     the Database          Capacity
------   ------------------   -------------------   --------
VOL1     25 MB                24 MB
VOL2     25 MB                24 MB
VOL3     25 MB                24 MB
VOL4     25 MB                24 MB
VOL5     101 MB               100 MB
Totals   201 MB               196 MB                196 MB
You can query the database or recovery log (QUERY DB and QUERY LOG
commands) to verify their assigned capacities. The server would display a report,
like this:
Available Assigned   Maximum   Maximum    Page     Total      Used  %Util  Max.
  Space   Capacity  Extension Reduction   Size     Pages     Pages        %Util
   (MB)     (MB)       (MB)      (MB)   (bytes)
--------- -------- --------- --------- ------- --------- --------- ----- -----
    196      196         0       192    4,096    50,176       111   0.2   0.2
Volume Name (Copy 1): VOL1
Copy Status: Syncd
Volume Name (Copy 2):
Copy Status: Undefined
Volume Name (Copy 3):
Copy Status: Undefined
Available Space (MB): 24
Allocated Space (MB): 24
Free Space (MB): 0
In this example, VOL1, VOL2, VOL3, and VOL4 each have 24MB of available
space, and VOL5 has 100MB. To determine if there is enough unused space to
delete one or more volumes, enter:
query db
The Maximum Reduction field shows the assigned capacity not in use. In this
example, you could reduce the database by up to 176MB. This is enough space to
allow the deletion of VOL1, VOL2, VOL3, and VOL4.
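The check performed here reduces to a one-line rule: volumes can be deleted only if the space they hold fits within the Maximum Reduction value. The following sketch is illustrative only:

```python
def deletable(volume_avail_mbs, maximum_reduction_mb):
    """True if the given volumes' available space fits within the unused
    assigned capacity (the Maximum Reduction field of QUERY DB)."""
    return sum(volume_avail_mbs) <= maximum_reduction_mb

print(deletable([24, 24, 24, 24], 176))       # True: 96MB fits in 176MB
print(deletable([24, 24, 24, 24, 100], 176))  # False: 196MB does not
```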
If there is not enough space on the remaining volumes, allocate more space and
define an additional volume. See Increasing the Size of the Database or Recovery
Log on page 427. Continue with Step 2: Reducing the Capacity of the Database
or Recovery Log.
Reducing capacity is run as a background process and can take a long time. Issue a
QUERY PROCESS command to check on the status of the process.
After reducing the database by 96MB, the assigned capacity is 100MB, and the
maximum extension is 96MB, as shown in the following example:
Available Assigned   Maximum   Maximum    Page     Total      Used  %Util  Max.
  Space   Capacity  Extension Reduction   Size     Pages     Pages        %Util
   (MB)     (MB)       (MB)      (MB)   (bytes)
--------- -------- --------- --------- ------- --------- --------- ----- -----
    196      100        96        92    4,096    24,576        86   0.3   0.3
delete dbvolume vol1
delete dbvolume vol2
delete dbvolume vol3
delete dbvolume vol4
The server moves data from the volumes being deleted to available space on other
volumes, as shown in Figure 60 on page 433.
Figure 60. Deleting database volumes: before deletion, during deletion (data
moves from VOL1 through VOL4 to VOL5), and after deletion (only VOL5 remains)
After the data has been moved, these volumes are deleted from the server.
Available Space (MB): 196
Assigned Capacity (MB): 196
Maximum Extension (MB): 0
Maximum Reduction (MB): 176
Page Size (bytes): 4,096
Total Usable Pages: 50,176
Used Pages: 4,755
Pct Util: 9.5
Max. Pct Util: 9.5
Physical Volumes: 5
Buffer Pool Pages: 128
Total Buffer Requests: 1,193,212
Cache Hit Pct.: 99.73
Cache Wait Pct.: 0.00
Use the following fields to evaluate your current use of the database buffer pool:
Buffer Pool Pages
The number of pages in the database buffer pool. This value is determined
by the server option for the size of the database buffer pool. At installation,
the database buffer pool is set to 512KB, which equals 128 database pages.
Total Buffer Requests
The number of requests for database pages since the server was last started
or the buffer pool was last reset. If you regularly reset the buffer pool, you
can see trends over time.
Cache Hit Pct
The percentage of requests for cached database pages in the database
buffer pool that were not read from disk. A high value indicates that the
size of your database buffer pool is adequate. If the value falls below 98%,
consider increasing the size of the database buffer pool. For larger
installations, performance could improve significantly if your cache hit
percentage is greater than 99%.
Cache Wait Pct
The percentage of requests for database pages that had to wait for a buffer
to become available in the database buffer pool. When this value is greater
than 0, increase the size of the database buffer pool.
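The Cache Hit Pct statistic is simply the share of page requests that did not require a disk read, as this sketch shows (illustrative only, not product code):

```python
def cache_hit_pct(buffer_requests, disk_reads):
    """Percentage of database page requests satisfied from the buffer pool
    without reading the page from disk."""
    if buffer_requests == 0:
        return 0.0
    return 100.0 * (buffer_requests - disk_reads) / buffer_requests

print(cache_hit_pct(1000, 20))  # 98.0
```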
Available Space (MB): 12
Assigned Capacity (MB): 12
Maximum Extension (MB): 0
Maximum Reduction (MB): 8
Page Size (bytes): 4,096
Total Usable Pages: 3,072
Used Pages: 227
Pct Util: 7.4
Max. Pct Util: 69.6
Physical Volumes: 1
Log Pool Pages: 32
Log Pool Pct. Util: 6.25
Log Pool Pct. Wait: 0.00
Use the following fields to evaluate the log buffer pool size:
Log Pool Pages
The number of pages in the recovery log buffer pool. This value is set by
the server option for the size of the recovery log buffer pool. At
installation, the default setting is 128KB, which equals 32 recovery log
pages.
Log Pool Pct. Util
The percentage of pages used to write changes to the recovery log after a
transaction is committed. A value below 10% means that the recovery log
buffer pool size is adequate. If the percentage increases, consider increasing
the recovery log buffer pool size.
Log Pool Pct. Wait
The percentage of requests for pages that are not available because all
pages are waiting to write to the recovery log.
If this value is greater than 0, increase the recovery log buffer pool size.
The procedure includes unloading the database, formatting database volumes and
recovery log volumes to prepare for loading, and then loading the database. The
operations read device information from the device configuration file, not from the
server's database.
You can use a device class of FILE for the DSMSERV UNLOADDB and DSMSERV
LOADDB operations. If you use any other type of device class for the operations,
you must use a drive that is assigned to a manual library (library type of
MANUAL). If the drive that you want to use is not assigned to a manual library,
you must edit the device configuration file to temporarily change the definition so
that it appears to be in a manual library.
Note: If you have specified more than one file with the DEVCONFIG option,
remove all but one file name for this process. After you complete this
procedure, you can add the other file names back to the option.
The device configuration file includes a copy of the device class, library, and
drive definitions for the server. The utility commands in this procedure need
these definitions.
See the Administrator's Reference for details on the DEVCONFIG option.
3. Ensure that the device configuration file contains the required definitions for
the device that you want to use for the operations.
v To use hard disk, you must use a device class of type FILE for the
operations, and the device class definition must exist in the device
configuration file.
v To use other device types, check the definition of the library and the drive
in the device configuration file. The library and the drive must be defined
as a manual library and drive. If not, make a copy of your device
configuration file and store the original file in a safe place. Edit the copy of
the file to temporarily have the library and the drive be treated by the
server as a manual library and drive. Follow these guidelines:
For the library definition, change the library type to MANUAL and
remove any parameters not allowed for a MANUAL type of library. For
example, you have the library defined in your device configuration file
like this:
define library 8mmlib libtype=scsi shared=yes
Change the definition so that the server treats the library as manual:
define library 8mmlib libtype=manual
For the drive definition, remove any parameters that do not apply to a
drive in a manual library. For example, you have the drive defined like
this:
define drive 8mmlib drive01 element=82
Change the definition to:
define drive 8mmlib drive01
v If the server is running, you can use the following steps to estimate the
number of tapes required:
a. Request information about the database by using the following
command:
query db
b. Using the output of the QUERY DB command, multiply the Used Pages
by the Page Size to determine space occupied by the database.
c. Use the result to estimate the number of tapes of a specific device class
that you will need to unload the database. The space required will likely
be less than your estimate.
5. Halt the server if it is still running.
6. With the server not running, issue the DSMSERV UNLOADDB utility to
unload the database to tape. For example, issue this command:
dsmserv unloaddb devclass=tapeclass scratch=yes
7. Format the volumes by using the DSMSERV LOADFORMAT utility. For example:
dsmserv loadformat 2 logvol1 logvol2 1 dbvol1
This command prepares two recovery log volumes (logvol1 and logvol2), and
one database volume (dbvol1).
8. Reload the database using the volumes that contain the data from the unload
operation. For example:
dsmserv loaddb devclass=tapeclass volumenames=db001,db002,db003
In this chapter, most examples illustrate how to perform tasks by using a Tivoli
Storage Manager command-line interface. For information about the commands,
see the Administrator's Reference, or issue the HELP command from the command line
of a Tivoli Storage Manager administrative client.
Tivoli Storage Manager tasks can also be performed from the administrative Web
interface. For more information about using the administrative interface, see Quick
Start.
example, all registered client nodes. Detailed format displays the default and
specific definition parameters. Use the detailed format when you want to see all
the information about a limited number of objects.
Here is an example of the standard output for the QUERY NODE command:
Node Name    Platform   Policy      Days Since   Days Since   Locked?
                        Domain      Last         Password
                        Name        Access       Set
----------   --------   ---------   ----------   ----------   -------
CLIENT1      AIX        STANDARD        6            6         No
GEORGE       Linux86    STANDARD        1            1         No
JANET        HPUX       STANDARD        1            1         No
JOE2         Mac        STANDARD       <1           <1         No
TOMC         WinNT      STANDARD        1            1         No
Here is an example of the detailed output for the QUERY NODE command:
Node Name: JOE
Platform: WinNT
Client OS Level: 5.00
Client Version: Version 5, Release 1, Level 5.0
Policy Domain Name: STANDARD
Last Access Date/Time: 05/19/2002 18:55:46
Days Since Last Access: 6
Password Set Date/Time: 05/19/2002 18:26:43
Days Since Password Set: 6
Invalid Sign-on Count: 0
Locked?: No
Contact:
Compression: Client's Choice
Archive Delete Allowed?: Yes
Backup Delete Allowed?: No
Registration Date/Time: 03/19/2002 18:26:43
Registering Administrator: SERVER_CONSOLE
Last Communication Method Used: Tcp/Ip
Bytes Received Last Session: 108,731
Bytes Sent Last Session: 698
Duration of Last Session (sec): 0.00
Pct. Idle Wait Last Session: 0.00
Pct. Comm. Wait Last Session: 0.00
Pct. Media Wait Last Session: 0.00
Optionset:
URL: http://client.host.name:1581
Node Type: Client
Password Expiration Period: 60
Keep Mount Point?: No
Maximum Mount Points Allowed: 1
Auto Filespace Rename: No
Validate Protocol: No
TCP/IP Name: JOE
TCP/IP Address: 9.11.153.39
Globally Unique ID: 11.9c.54.e0.8a.b5.11.d6.b3.c3.00.06.29.45.c1.5b
Transaction Group Max: 0
Session Initiation: ClientOrServer
HLADDRESS:
LLADDRESS:
Comm.    Sess    Wait    Bytes   Bytes   Sess    Platform   Client Name
Method   State   Time    Sent    Recvd   Type
------   -----   -----   -----   -----   -----   --------   -----------
Tcp/Ip   IdleW   9 S     7.8 K   706     Admin   WinNT      TOMC
Tcp/Ip   IdleW   0 S     1.2 K   222     Admin   AIX        GUEST
Tcp/Ip   Run     0 S     117     130     Admin   Mac2       MARIE
Check the wait time and session state. The wait time determines the length of time
(seconds, minutes, hours) the server has been in the current state. The session state
can be one of the following:
Start
       Connecting with a client session.
Run    Executing a client request.
End    Ending a client session.
RecvW
       Waiting to receive an expected message from the client while a
       database transaction is in progress. A session in this state is
       subject to the COMMTIMEOUT limit.
SendW
       Waiting for acknowledgment that the client has received a message
       sent by the server.
MediaW
       Waiting for removable media to become available.
IdleW  Waiting for communication from the client, and a database transaction is
       not in progress. A session in this state is subject to the IDLETIMEOUT
       limit.
|
|
|
|
|
For example, Tivoli Storage Manager cancels the client session if the
IDLETIMEOUT option is set to 30 minutes, and a user does not initiate
any operations within those 30 minutes. The client session is automatically
reconnected to the server when it starts to send data again.
Most commands run in the foreground, but others generate background processes.
In some cases, you can specify that a process run in the foreground. Tivoli Storage
Manager issues messages that provide information about the start and end of
processes. In addition, you can request information about active background
processes. If you know the process ID number, you can use the number to limit the
search. However, if you do not know the process ID, you can display information
about all background processes by entering:
query process
Figure 62 on page 442 shows a server background process report after a DELETE
FILESPACE command was issued. The report displays a process ID number, a
description, and a completion status for each background process.
query status
Use the QUERY OPTION command to display information about one or more
server options.
You can issue the QUERY OPTION command with no operands to display general
information about all defined server options. For example:
query option
You also can issue the QUERY OPTION command with a specific option name or
pattern-matching expression to display information on one or more server options.
You can set options by editing the server options file. See the Administrator's
Reference for more information.
The QUERY SYSTEM command lets you combine multiple queries of your Tivoli
Storage Manager system into a single command. This command can be used to
collect statistics and to provide information for problem analysis by IBM service.
When you issue the QUERY SYSTEM command, the server issues the following
queries:
QUERY ASSOCIATION
       Displays all client nodes that are associated with one or more client
       schedules
QUERY COPYGROUP
       Displays all backup and archive copy groups (standard format)
QUERY DB
       Displays information about the database (detailed format)
QUERY DBVOLUME
       Displays information about all database volumes (detailed format)
QUERY DEVCLASS
       Displays all device classes (detailed format)
QUERY DOMAIN
       Displays all policy domains (standard format)
QUERY LOG
       Displays information about the recovery log (detailed format)
QUERY LOGVOLUME
       Displays information about all recovery log volumes (detailed format)
QUERY MGMTCLASS
       Displays all management classes (standard format)
QUERY OPTION
       Displays all server options
QUERY PROCESS
       Displays information about all active background processes
QUERY SCHEDULE
       Displays client schedules (standard format)
QUERY SESSION
       Displays information about all administrative and client node sessions in
       standard format
QUERY STATUS
       Displays general server parameters, such as those defined by SET
       commands
QUERY STGPOOL
       Displays information about all storage pools (detailed format)
QUERY VOLUME
       Displays information about all storage pool volumes (standard format)
SELECT
       Displays the results of two SQL queries:
       select platform_name,count(*) from nodes group by platform_name
       select stgpool_name,devclass_name,count(*) from volumes
       group by stgpool_name,devclass_name
The second command displays the name and associated device class of all
storage pools having one or more volumes assigned to them.
You can use a standard SQL SELECT statement to get information from the
database. The SELECT command is a subset of the SQL92 and SQL93 standards.
IBM Tivoli Storage Manager also provides an open database connectivity (ODBC)
driver. The driver allows you to use a relational database product such as Lotus
Approach to query the database and display the results.
IBM Tivoli Storage Manager provides an ODBC driver for Windows. The driver
supports the ODBC Version 2.5 application programming interface (API). Because
Tivoli Storage Manager supports only the SQL SELECT statement (query), the
driver does not conform to any ODBC API or SQL grammar conformance level.
After you install this driver, you can use a spreadsheet or database application that
complies with ODBC to access the database for information.
The ODBC driver set-up is included in the client installation package. The client
installation program can install the ODBC driver and set the corresponding
registry values for the driver and data sources. For more information on setting up
the ODBC driver, see Backup-Archive Clients Installation and User's Guide.
To open the database through an ODBC application, you must log on to the server
(the defined data source). Use the name and password of a registered
administrator. After you log on to the server, you can perform query functions
provided by the ODBC application to access database information.
You can issue the SELECT command from the command line of an administrative
client. You cannot issue this command from the server console.
The SELECT command supports a subset of the syntax of the SELECT statement as
documented in the SQL92 and SQL93 standards. For complete information about
how to use the SELECT statement, refer to these standards or to other publications
about SQL.
Issuing the SELECT command to the server can use a significant amount of server
resources to run the query. Complicated queries or queries that run for a long time
can interfere with normal server operations. If your query requires excessive server
resources to generate the results, you will receive a message asking you to confirm
that you wish to continue.
Note: To allow any use of the SELECT command, the database must have at least
4MB of free space. For complex queries that require significant processing,
additional free space is required in the database. See Problems with
Exhausting Temporary Table Storage on page 446 for details.
SYSCAT.TABLES
Contains information about all tables that can be queried with the SELECT
command.
SYSCAT.COLUMNS
Describes the columns in each table.
SYSCAT.ENUMTYPES
Defines the valid values for each enumerated type and the order of the
values for each type.
You can issue the SELECT command to query these tables to determine the
location of the information that you want. For example, to get a list of all tables
available for querying in the database, enter the following command:
select * from syscat.tables
TABSCHEMA: ADSM
TABNAME: ACTLOG
CREATE_TIME:
COLCOUNT: 11
INDEX_COLCOUNT: 1
UNIQUE_INDEX: FALSE
REMARKS: Server activity log

TABSCHEMA: ADSM
TABNAME: ADMINS
CREATE_TIME:
COLCOUNT: 17
INDEX_COLCOUNT: 1
UNIQUE_INDEX: TRUE
REMARKS: Server administrators

TABSCHEMA: ADSM
TABNAME: ADMIN_SCHEDULES
CREATE_TIME:
COLCOUNT: 15
INDEX_COLCOUNT: 1
UNIQUE_INDEX: TRUE
REMARKS: Administrative command schedules

TABSCHEMA: ADSM
TABNAME: ARCHIVES
CREATE_TIME:
COLCOUNT: 10
INDEX_COLCOUNT: 5
UNIQUE_INDEX: FALSE
REMARKS: Client archive files
Examples
The SELECT command lets you customize a wide variety of queries. This section
shows two examples. For many more examples of the command, see the
Administrator's Reference.
Example 1: Find the number of nodes by type of operating system by issuing the
following command:
select platform_name, count(*) as "Number of Nodes" from nodes
group by platform_name
PLATFORM_NAME     Number of Nodes
-------------     ---------------
OS/2                           45
AIX                            90
Windows                        35
Example 2: For all active client sessions, determine how long they have been
connected and their effective throughput in bytes per second:
select session_id as "Session", client_name as "Client", state as "State",
current_timestamp-start_time as "Elapsed Time",
(cast(bytes_sent as decimal(18,0)) /
cast((current_timestamp-start_time)seconds as decimal(18,0)))
as "Bytes sent/second",
(cast(bytes_received as decimal(18,0)) /
cast((current_timestamp-start_time)seconds as decimal(18,0)))
as "Bytes received/second"
from sessions
Session: 24
Client: ALBERT
State: Run
Elapsed Time: 0 01:14:05.000000
Bytes sent/second: 564321.9302768451
Bytes received/second: 0.0026748857944

Session: 26
Client: MILTON
State: Run
Elapsed Time: 0 00:06:13.000000
Bytes sent/second: 1638.5284210992221
Bytes received/second: 675821.6888561849
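The throughput columns above are plain bytes-per-elapsed-second divisions, which the following sketch reproduces (illustrative only, not part of the product):

```python
from datetime import timedelta

def session_throughput(bytes_moved, elapsed):
    """Bytes per second over a session's elapsed time, as computed by the
    SELECT statement's cast(...)/cast(...) expressions."""
    return bytes_moved / elapsed.total_seconds()

# An elapsed time of 0 01:14:05 is 4,445 seconds.
print(timedelta(hours=1, minutes=14, seconds=5).total_seconds())  # 4445.0
print(session_throughput(1000, timedelta(seconds=10)))            # 100.0
```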
Check the value in the Maximum Reduction field. If this field shows a value of at
least 4MB, you can perform SELECT queries.
If the Maximum Reduction value is below 4MB, you will not be able to perform
SELECT queries. The database is either full or fragmented.
v If the database is full, increase the size of the database. See Increasing the Size
of the Database or Recovery Log on page 427 for details.
446
v If the database is fragmented, either add a volume or unload and load your
database. See Reorganizing the Database on page 435 for details.
Note: Complex SELECT queries (for example, those including the ORDER BY
clause, the GROUP BY clause, or the DISTINCT operator) may require more
than 4MB of temporary table storage space.
A Tivoli Storage Manager script is one or more commands that are stored as an
object in the database. You can run a script from an administrative client, the
administrative Web interface, or the server console. You can also include it in an
administrative command schedule to run automatically. See IBM Tivoli Storage
Manager Server Scripts on page 406 for details. You can define a script that
contains one or more SELECT commands. Tivoli Storage Manager is shipped with
a file that contains a number of sample scripts. The file, scripts.smp, is in the server
directory. To create and store the scripts as objects in your server's database, issue
the DSMSERV RUNFILE command during installation:
dsmserv runfile scripts.smp
You can also run the file as a macro from an administrative command line client:
The sample scripts file contains Tivoli Storage Manager commands. These
commands first delete any scripts with the same names as those to be defined,
then define the scripts. The majority of the samples create SELECT commands, but
others do such things as define and extend database volumes and back up storage
pools. You can also copy and change the sample scripts file to create your own
scripts.
macro scripts.smp
For example, the sample file defines the script DEF_DB_EXTEND as follows:
/* -----------------------------------------*/
/* Script Name: DEF_DB_EXTEND               */
/* Description: Define a database volume,   */
/*              and extend the database     */
/* Parameter 1: db volume name              */
/* Parameter 2: extension megabytes         */
/* Example: run def_db_extend VOLNAME 12    */
/* -----------------------------------------*/
def dbv $1
if (rc_ok) extend db $2
if (warning, error) q db f=d
447
IBM Tivoli Storage Manager provides commands to control the format of results of
SELECT commands. You can control:
v How SQL data types such as VARCHAR are displayed, in wide or narrow
format (SET SQLDISPLAYMODE)
v The format of date and time values in the results (SET SQLDATETIMEFORMAT)
v Whether SQL arithmetic results are truncated or rounded (SET
SQLMATHMODE)
Note: Using the SET commands to change these settings keeps the settings in
effect only for the current administrative client session. You can query these
settings by using the QUERY SQLSESSION command.
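As an illustration of the difference that truncation versus rounding makes (the behavior SET SQLMATHMODE controls), here is a Python analogy only; it is not how the server implements the setting:

```python
import math

def truncate(value):
    """Drop the fractional part, as a TRUNCATE math mode would: 3.9 -> 3."""
    return math.trunc(value)

def round_half_up(value):
    """Round to the nearest integer, halves away from zero: 3.5 -> 4."""
    return math.floor(value + 0.5) if value >= 0 else math.ceil(value - 0.5)
```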
You can query the SQL activity summary table to view statistics about each client
session and server process. For a listing of the column names and their
descriptions from the activity summary table, enter the following command:
select colname, remarks from columns where tabname='SUMMARY'
For example, the summary table might contain the following record for a backup
operation:
START_TIME: 2002-07-22 19:32:00.000000
END_TIME: 2002-07-22 19:32:56.000000
ACTIVITY: BACKUP
NUMBER: 43
ENTITY: DWE
COMMMETH: Named Pi
ADDRESS:
EXAMINED: 7
AFFECTED: 7
FAILED: 0
BYTES: 2882311
IDLE: 51
MEDIAW: 0
PROCESSES: 1
SUCCESSFUL: YES
v To display all events starting at or after 00:00 a.m. on September 24, 2002 until
the present time, enter:
select * from summary where start_time>='2002-09-24 00:00'
You can determine how long to keep information in the summary table. For
example, to keep the information for 5 days, enter the following command:
set summaryretention 5
448
You can redirect the output of SELECT commands to a file in the same way as you
would redirect the output of any command. When redirecting this output for use
in another program (for example, a spreadsheet or database program), write the
output in a format easily processed by the program to be used.
Two standard formats for tabular data files are comma-separated values (CSV) and
tab-separated values (TSV). Most modern applications that can import tabular data
can read one or both of these formats.
The use of command output redirection and one of the delimited output format
options lets you create queries whose output can be further processed in other
applications. For example, based on the output of a SELECT command, a
spreadsheet program could produce graphs of average file sizes and file counts
summarized by type of client platform.
For details about redirecting command output, see the Administrator's Reference.
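For example, if the SELECT output were redirected in comma-separated form, a downstream program could total the node counts. A minimal Python sketch, with invented file contents and column names:

```python
import csv
import io

# Hypothetical comma-separated SELECT output (header row plus one row per platform)
data = io.StringIO("PLATFORM_NAME,Number of Nodes\nOS/2,45\nAIX,90\nWindows,35\n")

rows = list(csv.DictReader(data))
total_nodes = sum(int(row["Number of Nodes"]) for row in rows)
```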
Task                                            Required Privilege Class
Request information from the activity log       Any administrator
Set the activity log retention period           System
The activity log contains all messages normally sent to the server console during
server operation. The only exceptions are responses to commands entered at the
console, such as responses to QUERY commands.
Any error messages sent to the server console are also stored in the activity log.
Use the following sections to adjust the size of the activity log, set an activity log
retention period, and request information about the activity log.
449
You can request information stored in the activity log. To minimize processing time
when querying the activity log, you can:
v Specify a time period in which messages have been generated. The default for
the QUERY ACTLOG command shows all activities that have occurred in the
previous hour.
v Specify the message number of a specific message or set of messages.
v Specify whether the originator is the server or client. If it is the client, you can
specify the node, owner, schedule, domain, or session number. If you are doing
client event logging to the activity log and are only interested in server events,
then specifying the server as the originator will greatly reduce the size of the
results.
For example, to review messages generated on May 30 between 8 a.m. and 5 p.m.,
enter:
query actlog begindate=05/30 begintime=08:00 enddate=05/30 endtime=17:00
To request information about messages related to the expiration of files from the
server storage inventory, enter:
query actlog search=expiration
You can also request information only about messages logged by one or all clients.
For example, to search the activity log for messages from the client for node JEE:
query actlog originator=client node=jee
Use the SET ACTLOGRETENTION command to specify how long activity log
information is kept in the database. The server automatically deletes messages
from the activity log after the retention period specified with the SET
ACTLOGRETENTION command has elapsed. At installation, the activity log
retention period is set to one day. To change the retention period to 10 days, for
example, enter:
set actlogretention 10
Because the activity log is stored in the database, the size of the activity log should
be factored into the amount of space allocated for the database. Allow at least 1MB
of additional space for the activity log.
The size of your activity log depends on how many messages are generated by
daily processing operations and how long you want to retain those messages in the
activity log. When retention time is increased, the amount of accumulated data also
increases, requiring additional database storage.
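A rough sizing sketch, assuming a hypothetical average message size and daily message volume (neither figure comes from the product documentation):

```python
def activity_log_megabytes(messages_per_day, avg_bytes_per_message, retention_days):
    """Approximate database space consumed by retained activity log records."""
    return messages_per_day * avg_bytes_per_message * retention_days / (1024 * 1024)

# e.g. 5000 messages a day at roughly 200 bytes each, kept for 10 days
estimate = activity_log_megabytes(5000, 200, 10)  # a little under 10MB
```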
450
When there is not enough space in the database or recovery log for activity log
records, the server stops recording and sends messages to the server console. If
you increase the size of the database or recovery log, the server starts activity log
recording again.
If you do not have enough space in the database for the activity log, you can do
one of the following:
v Allocate more space to the database
v Reduce the length of time that messages are kept in the activity log
For information about increasing the size of the database or recovery log, see
Increasing the Size of the Database or Recovery Log on page 427.
The server and client messages provide a record of Tivoli Storage Manager activity
that you can use to monitor the server. You can log server messages and most
client messages as events to one or more repositories called receivers. You can log
the events to any combination of the following receivers:
v The server console and activity log
v A file exit
v A user exit
v The Tivoli Enterprise Console
v A simple network management protocol (SNMP) manager
v An event server receiver (for enterprise event logging)
In addition, you can filter the types of events to be enabled for logging. For
example, you might enable only severe messages to the event server receiver and
one or more specific messages, by number, to another receiver. Figure 63 on
page 452 shows a possible configuration in which both server and client messages
are filtered by the event rules and logged to a set of specified receivers.
451
[Figure 63: Server messages and client messages at the Tivoli Storage Manager
server pass through event rules, which filter the events to the configured
receivers: the activity log, the server console, a file, a user exit, the Tivoli
Event Console, and the event server.]
Task                                            Required Privilege Class
Enable or disable events                        System
Begin or end event logging                      System
To enable or disable events, issue the ENABLE EVENTS and DISABLE EVENTS
commands. For example,
v To enable event logging to a user exit for all error and severe server messages,
enter:
enable events userexit error,severe
v To enable event logging to a user exit for severe client messages for all client
nodes, enter:
enable events userexit severe nodename=*
v To disable event logging to a user exit for error server messages, enter:
disable events userexit error
If you specify a receiver that is not supported on any platform, or if you specify an
invalid event or name, Tivoli Storage Manager issues an error message. However,
any valid receivers, events, or names that you specified are still enabled. Certain
events, such as messages that are issued during server start-up and shutdown,
automatically go to the console. They do not go to other receivers, even if they are
enabled.
452
Note: Server messages in the SEVERE category and message ANR9999 can provide
valuable diagnostic information if there is a serious problem. For this reason,
you should not disable these messages. Use the SET CONTEXTMESSAGING
ON command to get additional information that could help determine the
cause of ANR9999D messages. The IBM Tivoli Storage Manager polls the
server components for information that includes process name, thread name,
session ID, transaction data, locks that are held, and database tables that are
in use.
At server start-up, event logging begins automatically to the server console and
activity log, and for any receivers that are started based on entries in the server
options file. See the appropriate receiver sections for details. To begin logging
events to receivers for which event logging is not started automatically, issue the
BEGIN EVENTLOGGING command. You can also use this command after you
have disabled event logging to one or more receivers. To end event logging for an
active receiver, issue the END EVENTLOGGING command.
For example,
begin eventlogging tivoli
A receiver for which event logging has begun is an active receiver. To begin and end
logging for one or more receivers, issue the BEGIN EVENTLOGGING and END
EVENTLOGGING commands.
Logging events to the server console and activity log begins automatically at server
startup. To enable all error and severe client events to the console and activity log,
issue the following command:
enable events console,actlog error,severe nodename=*
Note: Enabling client events to the activity log will increase the database
utilization. You can set a retention period for the log records by using the
SET ACTLOGRETENTION command (see Setting the Activity Log
Retention Period on page 450). At server installation, this value is set to one
day. If you increase the retention period, utilization is further increased. For
more information about the activity log, see Using the IBM Tivoli Storage
Manager Activity Log on page 449.
You can disable server and client events to the server console and client events to
the activity log. However, you cannot disable server events to the activity log.
Also, certain messages, such as those issued during server startup and shutdown
and responses to administrative commands, will still be displayed at the console
even if disabled.
453
Note: Both types of event receivers must be specified in the server options file
(dsmserv.opt).
Both file and user exits receive event data in the same data block structure. Setting
up logging for these receivers is also similar:
1. Add an option for the exit to the server options file:
v For a file exit: Add either the FILEEXIT option (for a binary file exit) or the
FILETEXTEXIT option (for a text file exit). With the option, specify:
– Whether event logging to the file exit receiver begins automatically at
server startup. The parameters are YES and NO. If you do not specify
YES, you must begin event logging manually by issuing the BEGIN
EVENTLOGGING command.
– The file where each logged event is to be stored.
– How files will be stored if the file being stored already exists. REPLACE
will overwrite the existing file, APPEND will append data to the existing
file, and PRESERVE will not overwrite the existing file.
v For a user exit: Add the USEREXIT option. With the option, specify whether
event logging to the user exit receiver begins automatically at server startup,
and the name of the user-exit module. For example:
userexit no fevent.exit
2. Enable events for the receiver. You must specify the name of the user exit in the
USEREXIT server option and the name of the file in the FILEEXIT server
option. Here are two examples:
enable events file error
enable events userexit error,severe
You can also enable events to one or more client nodes or servers by specifying
the NODENAME or SERVERNAME parameter. See Enabling and Disabling
Events on page 452 for more information.
454
3. If you did not specify YES in the server option, begin event logging. For
example, to begin event logging for a user-defined exit, enter:
begin eventlogging userexit
See Beginning and Ending Event Logging on page 453 for more information.
Tivoli Storage Manager includes the Tivoli receiver, a Tivoli Enterprise Console
adapter for sending events to the Tivoli Enterprise Console. You can specify the
events to be logged based on their source. The valid event names are:
Event Name                  Source
TSM_SERVER_EVENT            Tivoli Storage Manager server
TSM_CLIENT_EVENT            Tivoli Storage Manager clients
TSM_APPL_EVENT              Tivoli Storage Manager application program
                            interface (API)
TSM_TDP_DOMINO_EVENT        Data Protection for Lotus Domino
TSM_TDP_EXCHANGE_EVENT      Data Protection for Microsoft Exchange Server
TSM_TDP_INFORMIX_EVENT      Data Protection for Informix
TSM_TDP_ORACLE_EVENT        Data Protection for Oracle
TSM_TDP_SQL_EVENT           Data Protection for Microsoft SQL Server
The application client must have enhanced Tivoli Enterprise Console support
enabled in order to route the events to the Tivoli Enterprise Console. Because of
the number of messages, you should not enable all messages from a node to be
logged to the Tivoli Enterprise Console.
To set up Tivoli as a receiver for event logging:
1. Define the Tivoli Storage Manager event classes to the Tivoli Enterprise Console
with the ibmtsm.baroc file, which is distributed with the server.
Before the events are displayed on a Tivoli Enterprise Console, you must
import ibmtsm.baroc into an existing rule base or create a new rule base and
activate it. To do this:
v From the Tivoli desktop, click on the Rule Base icon to display the pop-up
menu.
v Select Import, then specify the location of the ibmtsm.baroc file.
v To create a new rule base:
455
a. Click on the Event Server icon from the Tivoli desktop. The Event Server
Rules Bases window will open.
b. Select Rule Base from the Create menu.
c. Optionally, copy the contents of an existing rule base into the new rule base
by selecting the Copy pop-up menu from the rule base to be copied.
d. Click on the RuleBase icon to display the pop-up menu.
e. Select Import and specify the location of the ibmtsm.baroc file.
f. Select the Compile pop-up menu.
g. Select the Load pop-up menu, and select "Load, but activate only when
server restarts" from the resulting dialog.
h. Shut down the event server and restart it.
2. To define an event source and an event group:
a. From the Tivoli desktop, select Source from the EventServer pop-up menu.
Define a new source whose name is Tivoli Storage Manager from the
resulting dialog.
b. From the Tivoli desktop, select Event Groups from the EventServer pop-up
menu. From the resulting dialog, define a new event group for Tivoli
Storage Manager and a filter that includes event classes
IBMTSMSERVER_EVENT and IBMTSMCLIENT_EVENT.
c. Select the Assign Event Group pop-up menu item from the Event Console
icon and assign the new event group to the event console.
d. Double-click on the Event Console icon to start the configured event
console.
3. Enable events for logging to the Tivoli receiver. See Enabling and Disabling
Events on page 452 for more information.
4. In the server options file (dsmserv.opt), specify the location of the host on which
the Tivoli server is running. For example, to specify a Tivoli server at the IP
address 9.114.22.345:1555, enter the following:
techost 9.114.22.345
tecport 1555
5. Begin event logging for the Tivoli receiver. You do this in one of two ways:
v To begin event logging automatically at server start up, specify the following
server option:
tecbegineventlogging yes
Or
v Enter the following command:
begin eventlogging tivoli
See Beginning and Ending Event Logging on page 453 for more
information.
You can use the simple network management protocol (SNMP) together with event
logging to do the following:
v Set up an SNMP heartbeat monitor to regularly check that the Tivoli Storage
Manager server is running.
v Send traps to an SNMP manager, such as NetView or Tivoli Enterprise Console.
v Run Tivoli Storage Manager scripts and retrieve output and return codes. See
IBM Tivoli Storage Manager Server Scripts on page 406 for details.
456
The management information base (MIB), which is shipped with Tivoli Storage
Manager, defines the variables that will run server scripts and return the server
scripts' results. You must register SNMPADMIN, the administrative client under
which the server runs these scripts. Although a password is not required for the
subagent to communicate with the server and run scripts, a password should be
defined for SNMPADMIN to prevent access to the server from unauthorized users.
An SNMP password (community name) is required, however, to access the SNMP
agent, which forwards the request to the subagent.
Note: Because the SNMP environment has weak security, you should consider not
granting SNMPADMIN any administrative authority. This restricts
SNMPADMIN to issuing only Tivoli Storage Manager queries.
SNMP SET requests are accepted for the name and input variables associated with
the script names stored in the MIB by the SNMP subagent. This allows a script to
be processed by running a GET request for the ibmAdsmM1ReturnValue and
ibmAdsmM2ReturnValue variables. A GETNEXT request will not cause the script to
run. Instead, the results of the previous script processed will be retrieved. When an
entire table row is retrieved, the GETNEXT request is used. When an individual
variable is retrieved, the GET request is used.
457
The statements grant read-write authority to the MIB for the local node through
the loopback mechanism (127.0.0.1), and to nodes with the three 9.115.xx.xx
addresses. On AIX, Tivoli Storage Manager installation automatically updates
the/etc/mib.defs file with the names of the Tivoli Storage Manager MIB variables.
The smux statement allows the dpid2 daemon to communicate with snmpd.
Here is an example of this command used to set and retrieve MIB variables:
snmpinfo -v -ms -c public -h tpcnov73 ibmAdsmServerScript1.1=QuerySessions
This command issues the set operation (-ms), passing in community name public,
sending the command to host tpcnov73, and setting variable
ibmAdsmServerScript1 to the value QuerySessions. QuerySessions is the name of
a server script that has been defined on a server that will register with the Tivoli
Storage Manager subagent. In this case, the .1 suffix in ibmAdsmServerScript1.1
refers to the first server that registers with the subagent. The following commands
set the parameters for use with this script:
snmpinfo -v -ms -c public -h tpcnov73 ibmAdsmM1Parm1.1=xyz
snmpinfo -v -ms -c public -h tpcnov73 ibmAdsmM1Parm2.1=uvw
snmpinfo -v -ms -c public -h tpcnov73 ibmAdsmM1Parm3.1=xxx
You can set zero to three parameters. Only the script name is needed. To make the
QuerySessions script run, retrieve the ibmAdsmM1ReturnValue variable (in this case,
ibmAdsmM1ReturnValue.1). For example:
snmpinfo -v -mg -c public -h tpcnov73 ibmAdsmM1ReturnValue.1
The results of the command are returned as a single string with embedded carriage
return/newline characters.
Note: Not all MIB browsers properly handle embedded carriage return/newline
characters.
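If you capture the raw returned string yourself, splitting it on the embedded line endings recovers the individual lines. A sketch in Python, with invented sample output:

```python
def split_script_output(raw):
    """Split an SNMP string value on embedded carriage return/newline characters."""
    # splitlines() handles \r\n, \n, and \r uniformly
    return raw.splitlines()

# Hypothetical multi-line script result returned as one string
lines = split_script_output("Sess Number\r\n-----------\r\n     24\r\n     26")
```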
In this case, ibmAdsmM1ReturnCode.1 will contain the return code associated with
the running of the script. If ibmAdsmM2ReturnValue is retrieved, the results of
running the script named in ibmAdsmServerScript2 are returned as a single numeric
return code. Notice the -mg instead of -ms to signify the GET operation in the
command to retrieve ibmAdsmM1ReturnValue.1. If the entire row is retrieved, the
command is not run. Instead, the results from the last time the script was run are
retrieved. This would be the case if the following command were issued:
snmpinfo -v -md -c public -h tpcnov73 ibmAdsm
The SNMP manager system can reside on the same system as the Tivoli Storage
Manager server, but typically would be on another system connected through
SNMP. The SNMP management tool can be any application, such as NetView or
458
Tivoli Enterprise Console, which can manage information through SNMP MIB
monitoring and traps. The Tivoli Storage Manager server system runs the processes
needed to send Tivoli Storage Manager event information to an SNMP
management system. The processes are:
v SNMP agent (snmpd)
v Tivoli Storage Manager SNMP subagent (dsmsnmp)
v Tivoli Storage Manager server (dsmserv)
[Figure: A Tivoli Storage Manager SNMP implementation. An SNMP manager on
Windows communicates through the SNMP protocol with an SNMP agent on AIX.
The agent communicates through the SNMP DPI with Tivoli Storage Manager
SNMP subagents and servers on AIX, Linux, Solaris, and HP-UX.]
Figure 65 on page 460 shows how the communication for SNMP works in a Tivoli
Storage Manager system:
v The SNMP manager and agent communicate with each other through the SNMP
protocol. The SNMP manager passes all requests for variables to the agent.
v The agent then passes the request to the subagent and sends the answer back to
the manager. The agent responds to the manager's requests and informs the
manager about events by sending traps.
v The agent communicates with both the manager and subagent. It sends queries
to the subagent and receives traps that inform the SNMP manager about events
taking place on the application monitored through the subagent. The SNMP
agent and subagent communicate through the Distributed Protocol Interface
(DPI). Communication takes place over a stream connection, which typically is a
TCP connection but could be another stream-connected transport mechanism.
v The subagent answers MIB queries of the agent and informs the agent about
events by sending traps. The subagent can also create and delete objects or
subtrees in the agent's MIB. This allows the subagent to define to the agent all
the information needed to monitor the managed application.
459
Notes:
1. You can start dsmsnmp and the server in any order. However, starting dsmsnmp
first is more efficient in that it avoids retries.
2. The MIB file name is adsmserv.mib. The file is located in the directory in
which the server is installed.
[Figure 65: Manager-agent-subagent communication. On Windows, the SNMP agent
(snmpd) communicates with the SNMP manager through the SNMP protocol, and
with the Tivoli Storage Manager subagent (dsmsnmp) and the Tivoli Storage
Manager server through the SNMP DPI. The server reads its settings from
dsmserv.opt.]
460
1. Modify the server options file to specify the SNMP communication method
and subagent. The SNMP communication method must be used with a
DPI-capable SNMP agent, such as the SystemView agent. For details about
server options, see the server options section in the Administrator's Reference.
For example:
commmethod            snmp
snmpsubagent          hostname jimbo communityname public timeout 600
snmpheartbeatinterval 5
snmpmessagecategory   severity
2. Install, configure, and start the SNMP agent as described in the documentation
for that agent. The SNMP agent must support the DPI Version 2.0 standard.
For example, the AIX SNMP agent is configured by customizing the file
/etc/snmpd.conf. A default configuration might look like this:
logging         file=/var/snmp/snmpd.log enabled
logging         size=0 level=0
community       public
community       private 127.0.0.1 255.255.255.255 readWrite
community       system  127.0.0.1 255.255.255.255 readWrite
view            1.17.2 system enterprises view
trap            public <snmp_manager_ip_adr> 1.2.3 fe
snmpd           maxpacket=16000 smuxtimeout=60
smux            1.3.6.1.4.1.2.3.1.2.2.1.1.2 public 1.17.2
Note: The trap statement in /etc/snmpd.conf also defines the system to which the
AIX SNMP agent forwards the traps that it receives.
Before starting the agent, ensure that the DPI agent has been started and not
the default SNMP agent that ships with the operating system or with TCP/IP.
3. Start the Tivoli Storage Manager SNMP subagent by running the dsmsnmp
executable.
4. Start the Tivoli Storage Manager server to begin communication through the
configured TCP/IP port with the subagent.
5. Begin event logging for the SNMP receiver, and enable events to be reported to
SNMP. For example, issue the following commands:
begin eventlogging snmp
enable event snmp all
6. Define the Tivoli Storage Manager SNMP MIB values for the SNMP manager to
help format and display the Tivoli Storage Manager SNMP MIB variables and
messages. The adsmserv.mib file ships with the Tivoli Storage Manager server
and must be loaded by the SNMP manager. This file is in the installation
directory of the server. For example, when you run NetView for Windows as an
SNMP manager, the adsmserv.mib file is copied to the \netview_path\SNMP_MIB
directory and then loaded through the following command:
[C:\] loadmib -load adsmserv.mib
461
The figure on page 462 shows the relationship of a sending Tivoli Storage Manager
server and a Tivoli Storage Manager event server.
[Figure: At the sending Tivoli Storage Manager server, client and server messages
are filtered by event rules and sent as EVENTS to the event server. At the event
server, its own event rules route the events to receivers such as a file, a user
exit, and the Tivoli Event Console.]
The following scenario is a simple example of how enterprise event logging can
work.
The administrator at each sending server does the following:
1. Defines the server that will be the event server. For details about
communication set up, see Setting Up Communications for Enterprise
Configuration and Enterprise Event Logging on page 472.
define server server_b password=cholla hladdress=9.115.3.45 lladdress=1505
2. Identifies the defined server as the event server:
define eventserver server_b
3. Enables the logging of severe, error, and warning server messages from the
sending server and severe and error messages from all clients to the event
server receiver by issuing the following commands:
enable events eventserver severe,error,warning
enable events eventserver severe,error nodename=*
The administrator at the event server then enables the events from each sending
server by issuing the ENABLE EVENTS command. For example, for SERVER_A
the administrator would enter:
enable events file severe,error servername=server_a
Note: By default, logging of events from another server is enabled to the event
server activity log. However, unlike events originating from a local
server, events originating from another server can be disabled for the
activity log at an event server.
462
One or more servers can send events to an event server. An administrator at the
event server enables the logging of specific events from specific servers. In the
previous example, SERVER_A routes severe, error, and warning messages to
SERVER_B. SERVER_B, however, logs only the severe and error messages. If a
third server sends events to SERVER_B, logging is enabled only if an ENABLE
EVENTS command includes the third server. Furthermore, SERVER_B determines
the receiver to which the events are logged.
Attention: It is important that you do not set up server-to-server event logging in
a loop. In such a situation, an event would continue logging indefinitely, tying up
network and memory resources. Tivoli Storage Manager will detect such a
situation and issue a message. Here are a few configurations to avoid:
v SERVER_A logs to SERVER_B, and SERVER_B logs to SERVER_A.
v SERVER_A logs to SERVER_B; SERVER_B logs to SERVER_C; SERVER_C logs to
SERVER_A.
The QUERY ENABLED command displays a list of the events that are enabled or
disabled for a specific receiver for the server or for a client node. For example:
query enabled userexit nodename=hstanford
The output would specify the number of enabled events and the message names of
disabled events:
998 events are enabled for node HSTANFORD for the USEREXIT receiver.
The following events are DISABLED for the node HSTANFORD for the USEREXIT
receiver:
ANE4000, ANE49999
Beginning with IBM Tivoli Storage Manager Version 5.2, Tivoli Decision Support
for Storage Management Analysis will no longer be shipped. If you are already
using Tivoli Decision Support for Storage Management Analysis, you may continue
to use it with this version of Tivoli Storage Manager.
To use Tivoli Decision Support for Storage Management Analysis on your Tivoli
Storage Manager server or servers, you must first enable event logging of client
events to the activity log. See Logging Events to the IBM Tivoli Storage Manager
Server Console and Activity Log on page 453 for details.
463
You can schedule the Decision Support Loader (DSL) to run automatically using
the Tivoli Storage Manager Scheduler. Before defining a schedule, ensure that the
backup-archive client is installed on a dedicated Windows workstation where the
DSL is installed.
Notes:
1) The installation directory path for the DSL is:
"c:\program files\tivoli\tsm\decision\tsmdsl.exe"
b. Start the scheduler for the client. Leave the scheduler running until
scheduled rollups are no longer needed. To start the scheduler, you can
open a command prompt window and navigate to where the
backup-archive client is installed and enter:
dsmc schedule
Note: If the DSL is not processed according to the schedule you have defined,
check the directory path where the DSL is installed.
Task                                            Required Privilege Class
Set the accounting records to on or off         System
Tivoli Storage Manager accounting records show the server resources that are used
during a session. This information lets you track resources that are used by a client
node session. At installation, accounting defaults to OFF. You can set accounting to
ON by entering:
set accounting on
When accounting is on, the server creates a session resource usage accounting
record whenever a client node session ends.
Accounting records are stored in the dsmaccnt.log file. The
DSMSERV_ACCOUNTING_DIR environment variable specifies the directory
464
where the accounting file is opened. If this variable is not set when the server is
started, the dsmaccnt.log file is placed in the current directory when the server
starts. For example, to set the environment variable to place the accounting records
in the /home/engineering directory, enter this command:
export DSMSERV_ACCOUNTING_DIR=/home/engineering
The accounting file contains text records that can be viewed directly or can be read
into a spreadsheet program. The file remains opened while the server is running
and accounting is set to ON. The file continues to grow until you delete it or prune
old records from it. To close the file for pruning, either temporarily set accounting
off or stop the server.
There are 31 fields, which are delimited by commas (,). Each record ends with a
new-line character. Each record contains the following information:
Field  Contents
1      Product version
2      Product sublevel
3      Product name, ADSM,
4      Date of accounting (mm/dd/yyyy)
5      Time of accounting (hh:mm:ss)
6      Node name of Tivoli Storage Manager client
7      Client owner name (UNIX)
8      Client Platform
9      Authentication method used
10     Communication method used for the session
11     Normal server termination indicator (Normal=X'01', Abnormal=X'00')
12     Number of archive store transactions requested during the session
13     Amount of archived files, in kilobytes, sent by the client to the server
14     Number of archive retrieve transactions requested during the session
15     Amount of space, in kilobytes, retrieved by archived objects
16     Number of backup store transactions requested during the session
17     Amount of backup files, in kilobytes, sent by the client to the server
18     Number of backup retrieve transactions requested during the session
19     Amount of space, in kilobytes, retrieved by backed up objects
20     Amount of data, in kilobytes, communicated between the client node and
       the server during the session
21     Duration of the session, in seconds
22     Amount of idle wait time during the session, in seconds
23     Amount of communications wait time during the session, in seconds
24     Amount of media wait time during the session, in seconds
25     Client session type. A value of 1 or 4 indicates a general client session.
       A value of 5 indicates a client session that is running a schedule.
26     Number of space-managed store transactions requested during the session
27     Amount of space-managed data, in kilobytes, sent by the client to the server
28     Number of space-managed retrieve transactions requested during the session
29     Amount of space, in kilobytes, retrieved by space-managed objects
30     Product release
31     Product level
465
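Because each record is a single comma-delimited line, the log is easy to post-process with standard tools. As a sketch (the log name and location are examples), the following sums the backup kilobytes sent (field 17) for each client node (field 6):

```shell
# Total backup kilobytes sent to the server, summed per client node,
# from the comma-delimited accounting log.
awk -F',' '{ kb[$6] += $17 }
           END { for (n in kb) printf "%s %d KB\n", n, kb[n] }' dsmaccnt.log
```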
2. Verify that database and recovery log volumes are online and synchronized.
query dbvolume
query logvolume
3. Check the status of disk volumes. If any are offline, check for hardware
problems.
query volume devclass=disk
5. Check the access state of the tape volumes. For example, a volume that is not
in the read-write state may indicate a problem. You may need to move data
and check the volumes out of the library.
query volume
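The QUERY VOLUME command accepts an ACCESS parameter, so you can list only the volumes whose access state needs attention and then act on them. A sketch (VOL001 and 8MMLIB are example names):

```
query volume access=readonly,unavailable
move data vol001
checkout libvolume 8mmlib vol001
```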
In this chapter, most examples illustrate how to perform tasks by using a Tivoli
Storage Manager command-line interface. For information about the commands,
see Administrators Reference, or issue the HELP command from the command line
of a Tivoli Storage Manager administrative client.
Tivoli Storage Manager tasks can also be performed from the administrative Web
interface. For more information about using the administrative interface, see Quick
Start.
Central Monitoring
Tivoli Storage Manager provides you with several ways to centrally monitor the
activities of a server network:
v Enterprise event logging, in which events are sent from one or more servers
to be logged at an event server. See Enterprise Event Logging: Logging Events
to Another Server on page 461 for a description of the function and Setting Up
Communications for Enterprise Configuration and Enterprise Event Logging on
page 472 for communications setup.
Chapter 20. Working with a Network of Servers
Example Scenarios
The functions for managing multiple servers can be applied in many ways. Here
are just two scenarios to give you some ideas about how you can put the functions
to work for you:
v Setting up and managing Tivoli Storage Manager servers primarily from one
location. For example, an administrator at one location controls and monitors
servers at several locations.
v Setting up a group of Tivoli Storage Manager servers from one location, and
then managing the servers from any of the servers. For example, several
administrators are responsible for maintaining a group of servers. One
administrator defines the configuration information on one server for
distributing to servers in the network. Administrators on the individual servers
in the network manage and monitor the servers.
For a pair of servers to communicate with each other, each server must be defined
to the other. For example, if a configuration manager manages three managed
servers, there are three server pairs. You can issue separate definitions from each
server in each pair, or you can cross define a pair in a single operation. Cross
definition can be useful in large or complex networks. The following scenarios and
accompanying figures illustrate the two methods.
Using separate definitions: Follow this sequence:
1. On MUNICH: Specify the server name and password of MUNICH.
On STRASBOURG: Specify the server name and password of STRASBOURG.
On HEADQUARTERS: Specify the server name and password of
HEADQUARTERS.
2. On HEADQUARTERS: Define MUNICH (whose password is BERYL and
whose address is 9.115.2.223:1919) and STRASBOURG (whose password is
FLUORITE and whose address is 9.115.2.178:1715).
On MUNICH and STRASBOURG: Define HEADQUARTERS (whose
password is AMETHYST and whose address is 9.115.4.177:1823).
Figure 70 shows the servers and the commands issued on each:
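The sequence above corresponds to commands such as the following, using the names, passwords, and addresses from the scenario (in the DEFINE SERVER command, HLADDRESS carries the TCP/IP address and LLADDRESS the port):

```
On HEADQUARTERS:
  set servername headquarters
  set serverpassword amethyst
  define server munich serverpassword=beryl hladdress=9.115.2.223 lladdress=1919
  define server strasbourg serverpassword=fluorite hladdress=9.115.2.178 lladdress=1715

On MUNICH (STRASBOURG is similar):
  set servername munich
  set serverpassword beryl
  define server headquarters serverpassword=amethyst hladdress=9.115.4.177 lladdress=1823
```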
Communication Security
Security for this communication configuration is enforced through the exchange of
passwords (which are encrypted) and, in the case of enterprise configuration only,
verification keys. Communication among servers, which is through TCP/IP,
requires that the servers verify server passwords (and verification keys). For
example, assume that HEADQUARTERS begins a session with MUNICH:
1. HEADQUARTERS, the source server, identifies itself by sending its name to
MUNICH.
2. The two servers exchange verification keys (enterprise configuration only).
3. HEADQUARTERS sends its password to MUNICH, which verifies it against
the password stored in its database.
Note: If your server network is using enterprise configuration, you can automate
the preceding operations. You can distribute the administrator and server
lists to MUNICH and STRASBOURG. In addition, all server definitions and
server groups are distributed by default to a managed server when it first
subscribes to any profile on a configuration manager. Therefore, it receives
all the server definitions that exist on the configuration manager, thus
enabling command routing among the servers.
Note: If your server network is using enterprise configuration, you can automate
the preceding operations. You can distribute the administrator lists and
server lists to MUNICH and STRASBOURG. In addition, all server
definitions and server groups are distributed by default to a managed server
when it first subscribes to any profile on a configuration manager. Therefore,
it receives all the server definitions that exist on the configuration manager,
thus enabling command routing among the servers.
Figure 73 shows the servers and the commands issued on each:
If you update the password but not the node name, the node name defaults
to the server name specified by the SET SERVERNAME command.
v For enterprise configuration and enterprise event logging: If you update the
server password, it must match the password specified by the SET
SERVERPASSWORD command at the target server.
v For enterprise configuration: When a server is first defined at a managed server,
that definition cannot be replaced by a server definition from a configuration
manager. This prevents the definition at the managed server from being
inadvertently replaced. Such a replacement could disrupt functions that require
communication among servers, for example command routing or virtual
volumes.
To allow replacement, update the definition at the managed server by issuing
the UPDATE SERVER command with the ALLOWREPLACE=YES parameter.
When a configuration manager distributes a server definition, the definition
always includes the ALLOWREPLACE=YES parameter.
You can delete a server definition by issuing the DELETE SERVER command. For
example, to delete the server named NEWYORK, enter the following:
delete server newyork
The deleted server is also deleted from any server groups of which it is a member.
See Setting Up Server Groups on page 503 for information about server groups.
You cannot delete a server if either of the following conditions is true:
v The server is defined as an event server.
You must first issue the DELETE EVENTSERVER command.
v The server is a target server for virtual volumes.
A target server is named in a DEFINE DEVCLASS (DEVTYPE=SERVER)
command. You must first change the server name in the device class or delete
the device class.
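To locate such a device class and point it at a different server before the deletion, a sketch (REMOTECLASS and OTHERSERVER are hypothetical names):

```
query devclass format=detailed
update devclass remoteclass servername=otherserver
```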
v Server definitions
v Server groups
For details on the attributes that are distributed with these objects, see Associating
Configuration Information with a Profile on page 484.
Enterprise Configuration Scenario gives you an overview of the steps to take for
one possible implementation of enterprise configuration. Sections that follow give
more details on each step.
1. Decide whether to use the existing Tivoli Storage Manager server in the
headquarters office as the configuration manager or to install a new Tivoli
Storage Manager server on a system.
2. Set up the communications among the servers. See Setting Up
Communications Among Servers on page 472 for details.
3. Identify the server as a configuration manager.
Use the following command:
set configmanager on
Example 3: You want clients to back up data to the default disk storage pool,
BACKUPPOOL, on each server. But you want clients to archive data directly to
the tape library attached to each server. You can do the following:
v In the policy domain that you will point to in the profile, update the archive
copy group so that TAPEPOOL is the name of the destination storage pool.
v On each server that is to be a managed server, ensure that you have a tape
storage pool named TAPEPOOL.
Note: You must set up the storage pool itself (and associated device class) on
each managed server, either locally or by using command routing. If a
managed server already has a storage pool associated with the
automated tape library, you can rename the pool to TAPEPOOL.
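The two bullets above translate to commands like the following (ENGPOLDOM, the STANDARD policy set and management class names, and 8MMPOOL are example names; copy group changes take effect when the policy set is activated):

```
update copygroup engpoldom standard standard type=archive destination=tapepool
activate policyset engpoldom standard
rename stgpool 8mmpool tapepool
```

The copy group update is made on the configuration manager; the RENAME STGPOOL command would be issued on each managed server that needs it.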
Example 4: You want to ensure that client data is consistently backed up and
managed on all servers. You want all clients to be able to store three backup
versions of their files. You can do the following:
v Verify or define client schedules in the policy domain so that clients are
backed up on a consistent schedule.
v In the policy domain that you will point to in the profile, update the backup
copy group so that three versions of backups are allowed.
v Define client option sets so that basic settings are consistent for clients as
they are added.
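The backup copy group change above can be sketched as follows (ENGPOLDOM and the STANDARD policy set and management class names are examples):

```
update copygroup engpoldom standard standard type=backup verexists=3
activate policyset engpoldom standard
```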
5. Define one or more profiles.
For example, you can define one profile named ALLOFFICES that points to all
the configuration information (policy domain, administrators, scripts, and so
on). You can also define profiles for each type of information, so that you have
one profile that points to policy domains, and another profile that points to
administrators, for example.
For details, see Creating and Changing Configuration Profiles on page 484.
See Getting Information about Profiles on page 491. Look for definitions of
objects on the managed server that have the same name as those defined on the
configuration manager. With some exceptions, these objects will be overwritten
when the managed server first subscribes to the profile on the configuration
manager. See Associating Configuration Information with a Profile on
page 484 for details on the exceptions.
If the managed server is a new server and you have not defined anything, the
only objects you will find are the defaults (for example, the STANDARD policy
domain).
2. Subscribe to one or more profiles.
A managed server can only subscribe to profiles on one configuration manager.
See Subscribing to a Profile on page 493.
If you receive error messages during the configuration refresh, such as a local
object that could not be replaced, resolve the conflict and refresh the
configuration again. You can either wait for the automatic refresh period to be
reached, or initiate a refresh by issuing the SET CONFIGREFRESH command,
setting or resetting the interval.
3. If the profile included policy domain information, activate a policy set in the
policy domain, add or move clients to the domain, and associate any required
schedules with the clients.
You may receive warning messages about storage pools that do not exist, but
that are needed for the active policy set. Define any storage pools needed by
the active policy set, or rename existing storage pools. See Defining or
Updating Primary Storage Pools on page 182 or Renaming a Storage Pool on
page 244.
4. If the profile included administrative schedules, make the schedules active.
Administrative schedules are not active when they are distributed by a
configuration manager. The schedules do not run on the managed server until
you make them active on the managed server. See Tailoring Schedules on
page 403.
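For example, to activate a distributed administrative schedule on the managed server (DBBACKUP is a hypothetical schedule name):

```
update schedule dbbackup type=administrative active=yes
```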
5. Set how often the managed server contacts the configuration manager to
update the configuration information associated with the profiles.
The initial setting for refreshing the configuration information is 60 minutes.
See Refreshing Configuration Information on page 497.
Required privilege class: System
To set up one Tivoli Storage Manager server as the source for configuration
information for other servers, you identify the server as a configuration manager.
A configuration manager can be an existing Tivoli Storage Manager server that
already provides services to clients, or can be a server dedicated to just providing
configuration information to other Tivoli Storage Manager servers.
Enter the following command:
set configmanager on
definitions of servers and server groups that exist on the configuration manager.
You can change or delete the profile named DEFAULT_PROFILE.
When a managed server first subscribes to a profile on a configuration manager,
the configuration manager automatically also subscribes the managed server to the
profile named DEFAULT_PROFILE, if it exists. The information distributed via this
profile gets refreshed in the same way as other profiles. This helps ensure that all
servers have a consistent set of server and server group definitions for all servers
in the network.
If you do not change the DEFAULT_PROFILE, whenever a managed server
subscribed to the DEFAULT_PROFILE profile refreshes configuration information,
the managed server receives definitions for all servers and server groups that exist
on the configuration manager at the time of the refresh. As servers and server
groups are added, deleted, or changed on the configuration manager, the changed
definitions are distributed to subscribing managed servers.
Task: Define profiles
Required privilege class: System
When you define the profile, you select the name and can include a description.
For example, to define a profile named ALLOFFICES, enter the following
command:
define profile alloffices
description='Configuration to be used by all offices'
Required privilege class: System
After you define a profile, you associate the configuration information that you
want to distribute via that profile. You can associate the following configuration
information with a profile:
v Server groups. See Configuration Information for Servers and Server Groups
on page 486 for tips.
v Administrative command schedules. See Configuration Information for
Administrative Command Schedules on page 487 for tips.
v Tivoli Storage Manager server scripts. See IBM Tivoli Storage Manager Server
Scripts on page 406 for tips.
v Client option sets. See Managing Client Option Sets on page 282 for tips.
Before you can associate specific configuration information with a profile, the
definitions must exist on the configuration manager. For example, to associate a
policy domain named ENGDOMAIN with a profile, you must have already
defined the ENGDOMAIN policy domain on the configuration manager.
Suppose you want the ALLOFFICES profile to distribute policy information from
the STANDARD and ENGDOMAIN policy domains on the configuration manager.
Enter the following command:
define profassociation alloffices domains=standard,engdomain
You can make the association more dynamic by specifying the special character, *
(asterisk), by itself. When you specify the *, you can associate all existing objects
with a profile without specifically naming them. If you later add more objects of
the same type, the new objects are automatically distributed via the profile. For
example, suppose that you want the ADMINISTRATORS profile to distribute all
administrators registered to the configuration manager. Enter the following
commands on the configuration manager:
define profile administrators
description='Profile to distribute administrator IDs'
define profassociation administrators admins=*
The administrator with the name SERVER_CONSOLE is never distributed from the
configuration manager to a managed server.
For administrator definitions that have node authority, the configuration manager
only distributes information such as password and contact information. Node
authority for the managed administrator can be controlled on the managed server
using the GRANT AUTHORITY and REVOKE AUTHORITY commands specifying
the CLASS=NODE parameter.
Policy domains can refer to storage pool names in the management classes,
backup copy groups, and archive copy groups. As you set up the configuration
information, consider whether managed servers already have or can set up or
rename storage pools with these names.
A subscribing managed server may already have a policy domain with the same
name as the domain associated with the profile. The configuration refresh
overwrites the domain defined on the managed server unless client nodes are
already assigned to the domain. Once the domain becomes a managed object on
the managed server, you can associate clients with the managed domain. Future
configuration refreshes can then update the managed domain.
If nodes are assigned to a domain with the same name as a domain being
distributed, the domain is not replaced. This safeguard prevents inadvertent
replacement of policy that could lead to loss of data. To replace an existing policy
domain with a managed domain of the same name, you can do the following steps
on the managed server:
1. Copy the domain.
2. Move all clients assigned to the original domain to the copied domain.
3. Trigger a configuration refresh.
4. Activate the appropriate policy set in the new, managed policy domain.
5. Move all clients back to the original domain, which is now managed.
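The steps above can be sketched with commands like these, issued on the managed server (STANDARD is the domain being replaced, NODE1 stands for each affected client node, and the ACTIVATE POLICYSET arguments assume a policy set named STANDARD):

```
copy domain standard standard_copy
update node node1 domain=standard_copy
set configrefresh 60
activate policyset standard standard
update node node1 domain=standard
```

Issuing SET CONFIGREFRESH with any value greater than 0 triggers an immediate refresh request to the configuration manager.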
groups in any other profile. Any servers and server groups that you define later
are associated automatically with the default profile and the configuration manager
distributes the definitions at the next refresh.
For a server definition, the following attributes are distributed:
v Communication method
v TCP/IP address (high-level address)
v Port number (low-level address)
v Server password
v Server URL
v The description
When server definitions are distributed, the attribute for allowing replacement is
always set to YES. You can set other attributes, such as the server's node name, on
the managed server by updating the server definition.
A managed server may already have a server defined with the same name as a
server associated with the profile. The configuration refresh does not overwrite the
local definition unless the managed server allows replacement of that definition.
On a managed server, you allow a server definition to be replaced by updating the
local definition. For example:
update server santiago allowreplace=yes
A configuration refresh does not replace or remove any local schedules that are
active on a managed server. However, a refresh can update an active schedule that
is already managed by a configuration manager.
Changing a Profile

Task: Update profiles
Required privilege class: System
You can change a profile and its associated configuration information. For example,
if you want to add a policy domain named FILESERVERS to objects already
associated with the ALLOFFICES profile, enter the following command:
define profassociation alloffices domains=fileservers
You can also delete associated configuration information, which results in removal
of configuration from the managed server. Use the DELETE PROFASSOCIATION
command. See Removing Configuration Information from Managed Servers on
page 489 for details.
On a configuration manager, you cannot directly change the names of
administrators, scripts, and server groups associated with a profile. To change the
name of an administrator, script, or server group associated with a profile, delete
the object, define it again with a new name, and associate it with the profile
again. During the next configuration refresh, each managed server makes the
corresponding changes in its database.
You can change the description of the profile. Enter the following command:
update profile alloffices
description='Configuration for all offices with file servers'
Required privilege class: System
For example, to lock the ALLOFFICES profile for two hours (120 minutes), enter
the following command:
lock profile alloffices 120
You can let the lock expire after two hours, or unlock the profile with the following
command:
unlock profile alloffices
Required privilege class: System
From the configuration manager, to notify all servers that are subscribers to the
ALLOFFICES profile, enter the following command:
notify subscribers profile=alloffices
The managed servers then refresh their configuration information, even if the time
period for refreshing the configuration has not passed.
Required privilege class: System
4. You may want to notify any managed server that subscribes to the profile so
that servers refresh their configuration information:
notify subscribers profile=administrators
When you delete the association of an object with a profile, the configuration
manager no longer distributes that object via the profile. Any managed server
subscribing to the profile deletes the object from its database when it next contacts
the configuration manager.
Deleting Profiles

Task: Delete profiles
Required privilege class: System
You can delete a profile from a configuration manager. Before deleting a profile,
you should ensure that no managed server still has a subscription to the profile. If
the profile still has some subscribers, you should first delete the subscriptions on
each managed server. When you delete subscriptions, consider whether you want
the managed objects to be deleted on the managed server at the same time. For
example, to delete the subscription to profile ALLOFFICES from managed server
SANTIAGO without deleting the managed objects, log on to the SANTIAGO
server and enter the following command:
delete subscription alloffices
See Deleting Subscriptions on page 496 for more details about deleting
subscriptions on a managed server.
Note: You can use command routing to issue the DELETE SUBSCRIPTION
command for all managed servers.
If you try to delete a profile that still has subscriptions, the command fails unless
you force the operation:
delete profile alloffices force=yes
If you do force the operation, managed servers that still subscribe to the deleted
profile will later contact the configuration manager to try to get updates to the
deleted profile. The managed servers will continue to do this until their
subscriptions to the profile are deleted. A message will be issued on the managed
server alerting the administrator of this condition.
Required privilege class: Any administrator
You can get information about configuration profiles defined on any configuration
manager, as long as that server is defined to the server with which you are
working. For example, from a configuration manager, you can display information
about profiles defined on that server or on another configuration manager. From a
managed server, you can display information about any profiles on the
configuration manager to which the server subscribes. You can also get profile
information from any other configuration manager defined to the managed server,
even though the managed server does not subscribe to any of the profiles.
For example, to get information about all profiles on the HEADQUARTERS
configuration manager when logged on to another server, enter the following
command:
query profile server=headquarters
Profile name       Locked?
---------------    -------
ADMINISTRATORS     No
DEFAULT_PROFILE    No
ENGINEERING        No
MARKETING          No
You may need to get detailed information about profiles and the objects associated
with them, especially before subscribing to a profile. You can get the names of the
objects associated with a profile by entering the following command:
query profile server=headquarters format=detailed
Configuration manager:            HEADQUARTERS
Profile name:                     ADMINISTRATORS
Locked?:                          No
Description:
Server administrators:            ADMIN1 ADMIN2 ADMIN3 ADMIN4 ** all objects **
Policy domains:
Administrative command schedules:
Server Command Scripts:
Client Option Sets:
Servers:
Server Groups:

Configuration manager:            HEADQUARTERS
Profile name:                     DEFAULT_PROFILE
Locked?:                          No
Description:
Server administrators:
Policy domains:
Administrative command schedules:
Server Command Scripts:
Client Option Sets:
Servers:                          ** all objects **
Server Groups:                    ** all objects **

Configuration manager:            HEADQUARTERS
Profile name:                     ENGINEERING
Locked?:                          No
Description:
Server administrators:
Policy domains:                   ENGDOMAIN
Administrative command schedules: QUERYALL
Server Command Scripts:
Client Option Sets:               DESIGNER PROGRAMMER
Servers:
Server Groups:

Configuration manager:            HEADQUARTERS
Profile name:                     MARKETING
Locked?:                          Yes
Description:
Server administrators:
Policy domains:                   MARKETDOM
Administrative command schedules: QUERYALL
Server Command Scripts:
Client Option Sets:               BASIC
Servers:
Server Groups:
If the server from which you issue the query is already a managed server
(subscribed to one or more profiles on the configuration manager being queried),
by default the query returns profile information as it is known to the managed
server. Therefore the information is accurate as of the last configuration refresh
done by the managed server. You may want to ensure that you see the latest
version of profiles as they currently exist on the configuration manager. Enter the
following command:
query profile uselocal=no format=detailed
To get more than the names of the objects associated with a profile, you can do one
of the following:
v If command routing is set up between servers, you can route query commands
from the server to the configuration manager. For example, to get details on the
ENGDOMAIN policy domain on the HEADQUARTERS server, enter this
command:
headquarters: query domain engdomain format=detailed
You can also route commands from the configuration manager to another server
to get details about definitions that already exist.
v If command routing is not set up, log on to the configuration manager and enter
the query commands to get the information you need.
Subscribing to a Profile
Required privilege class: System
Before a managed server subscribes to a profile, be aware that if you have defined
any object with the same name and type as an object associated with the profile
that you are subscribing to, those objects will be overwritten. You can check for
such occurrences by querying the profile before subscribing to it.
When a managed server first subscribes to a profile on a configuration manager, it
also automatically subscribes to DEFAULT_PROFILE, if a profile with this name is
defined on the configuration manager. Unless DEFAULT_PROFILE is modified on
the configuration manager, it contains all the server definitions and server groups
defined on the configuration manager. In this way, all the servers in your network
receive a consistent set of server and server group definitions.
Note: Although a managed server can subscribe to more than one profile on a
configuration manager, it cannot subscribe to profiles on more than one
configuration manager at a time.
A Subscription Scenario
This section describes a typical scenario in which a server subscribes to a profile on
a configuration manager, HEADQUARTERS. In this scenario an administrator for
the HEADQUARTERS server has defined three profiles, ADMINISTRATORS,
ENGINEERING, and MARKETING, each with its own set of associations. In
addition, DEFAULT_PROFILE was automatically defined and contains only the
server and server group definitions defined on the HEADQUARTERS server. An
administrator for HEADQUARTERS has given you the names of the profiles that
you should be using. To subscribe to the ADMINISTRATORS and ENGINEERING
profiles and keep them current, perform the following steps:
1. Display the names of the objects in the profiles on HEADQUARTERS.
You might want to perform this step to see if the object names on the profiles
are used on your server for any objects of the same type. Issue this command:
query profile * server=headquarters format=detailed
You might want to get detailed information on some of the objects by issuing
specific query commands on either your server or the configuration manager.
Note: If any object name matches and you subscribe to a profile containing an
object with the matching name, the object on your server will be
replaced, with the following exceptions:
v A policy domain is not replaced if the domain has client nodes
assigned to it.
v An administrator with system authority is not replaced by an
administrator with a lower authority level if the replacement would
leave the server without a system administrator.
v The definition of a server is not replaced unless the server definition
on the managed server allows replacement.
v A server with the same name as a server group is not replaced.
v A locally defined, active administrative schedule is not replaced.
2. Subscribe to the ADMINISTRATORS and ENGINEERING profiles.
After the initial subscription, you do not have to specify the server name on the
DEFINE SUBSCRIPTION commands. If at least one profile subscription already
exists, any additional subscriptions are automatically directed to the same
configuration manager. Issue these commands:
define subscription administrators server=headquarters
define subscription engineering
The object definitions in these profiles are now stored on your database. In
addition to ADMINISTRATORS and ENGINEERING, the server is also
subscribed by default to DEFAULT_PROFILE. This means that all the server
and server group definitions on HEADQUARTERS are now also stored in your
database.
3. Set the time interval for obtaining refreshed configuration information from the
configuration manager.
If you do not perform this step, your server checks for updates to the profiles
at startup and every 60 minutes after that. Set up your server to check
HEADQUARTERS for updates once a day (every 1440 minutes). If there is an
update, HEADQUARTERS sends it to the managed server automatically when
the server checks for updates.
set configrefresh 1440
Note: You can initiate a configuration refresh from a managed server at any time.
To initiate a refresh, simply reissue the SET CONFIGREFRESH with any
value greater than 0. The simplest approach is to use the current setting:
set configrefresh 1440
Querying Subscriptions

Required privilege class: Any administrator
From time to time, you may want to see what profiles a server is subscribed to.
You may also want to see the last time that the configuration associated with that
profile was successfully refreshed on your server. The QUERY SUBSCRIPTION
command gives you this information. You can name a specific profile or use a
wildcard character to display all or a subset of profiles to which the server is
subscribed. For example, the following command displays ADMINISTRATORS and
any other profiles that begin with the string ADMIN:
query subscription admin*
Profile name       Last update date/time
---------------    --------------------
ADMINISTRATORS     06/04/2002 17:51:49
ADMINS_1           06/04/2002 17:51:49
ADMINS_2           06/04/2002 17:51:49
To see what objects the ADMINISTRATORS profile contains, use the following
command:
query profile administrators uselocal=no format=detailed
Configuration manager: HEADQUARTERS
Profile name: ADMINISTRATORS
Locked?: No
Associated administrators: ADMIN1 ADMIN2 ADMIN3 ADMIN4
** all objects **

Policy Domain Name: ENGDOMAIN
Number of Registered Nodes: 0
Description: Policy for design and software engineers
Backup Retention (Grace Period): 30
Archive Retention (Grace Period): 365
Last Update by (administrator): $$CONFIG_MANAGER$$
Last Update Date/Time: 06/04/2002 17:51:49
Managing profile: ENGINEERING
The field Managing profile shows the profile to which the managed server
subscribes to get the definition of this object.
Deleting Subscriptions
Task: Delete subscriptions
Required Privilege Class: System
If you decide that a server no longer needs to subscribe to a profile, you can delete
the subscription. When you delete a subscription to a profile, you can choose to
discard the objects that came with the profile or keep them in your database. For
example, to request that your subscription to PROFILEC be deleted and to keep
the objects that came with that profile, issue the following command:
delete subscription profilec discardobjects=no
After the subscription is deleted on the managed server, the managed server issues
a configuration refresh request to inform the configuration manager that the
subscription is deleted. The configuration manager updates its database with the
new information.
When you choose to delete objects when deleting the subscription, the server may
not be able to delete some objects. For example, the server cannot delete a
managed policy domain if the domain still has client nodes registered to it. The
server skips objects it cannot delete, but does not delete the subscription itself. If
you take no action after an unsuccessful subscription deletion, at the next
configuration refresh the configuration manager will again send all the objects
associated with the subscription. To successfully delete the subscription, do one of
the following:
v Fix the reason that the objects were skipped. For example, reassign clients in the
managed policy domain to another policy domain. After handling the skipped
objects, delete the subscription again.
v Delete the subscription again, except this time do not discard the managed
objects. The server can then successfully delete the subscription. However, the
objects that were created because of the subscription remain.
By issuing this command with a value greater than zero, you cause the managed
server to immediately start the refresh process.
At the configuration manager, you can cause managed servers to refresh their
configuration information by notifying the servers. For example, to notify
subscribers to all profiles, enter the following command:
notify subscribers profile=*
The managed servers then start to refresh configuration information to which they
are subscribed through profiles.
A managed server automatically refreshes configuration information when it is
restarted.
To return objects to local control when working on a managed server, you can
delete the subscription to one or more profiles. When you delete a subscription,
you can choose whether to delete the objects associated with the profile. To return
objects to local control, you do not delete the objects. For example, use the
following command on a managed server:
delete subscription engineering discardobjects=no
An administrator no longer has to remember multiple user IDs and passwords for
servers and clients, other than the initial user ID and password. The administrator
enters the initial user ID and password from the sign-on screen displayed on the
administrator's Web browser. A single set of logon credentials is then used to
verify an administrator's identity across servers and clients in a Web browser
environment. Encrypted credentials ensure password security.
Authentication time-out processing requires an administrator to re-authenticate
after a specific amount of time has passed. You can set the amount of time by
using the SET WEBAUTHTIMEOUT command. The time-out protects against
unauthorized users indefinitely accessing an unattended Web browser that has
credentials stored in a Web browser cache. A pop-up is displayed on the browser
that requires an administrator's ID and password to proceed.
The following can use enterprise logon:
v An administrator who uses a Web browser to connect to a Tivoli Storage
Manager server
v An administrator or a help-desk person who uses a Web browser to connect to a
remote client with the Web backup-archive client
v An end user of Tivoli Storage Manager who uses the Web backup-archive client
to connect to their own remote client
A client can optionally disable enterprise logon.
Routing Commands
If you have set up your servers as described in Setting Up Communications for
Command Routing on page 475, you can route Tivoli Storage Manager
administrative commands to one or more servers. Command routing enables an
administrator to send commands for processing to one or more servers at the same
time. The output is collected and displayed at the server that issued the routed
commands. A system administrator can configure and monitor many different
servers from a central server by using command routing.
You can route commands to one server, multiple servers, servers defined to a
named group (see Setting Up Server Groups on page 503), or a combination of
these servers. A routed command cannot be further routed to other servers; only
one level of routing is allowed.
Each server that you identify as the target of a routed command must first be
defined with the DEFINE SERVER command. If a server has not been defined, that
server is skipped and the command routing proceeds to the next server in the
route list.
Tivoli Storage Manager does not run a routed command on the server from which
you issue the command unless you also specify that server. To be able to specify
the server on a routed command, you must define the server just as you did any
other server.
Commands cannot be routed from the SERVER_CONSOLE ID.
Routed commands run independently on each server to which you send them. The
success or failure of the command on one server does not affect the outcome on
any of the other servers to which the command was sent.
The colon after the server name indicates the end of the routing information. This
is also called the server prefix. Another way to indicate the server routing
information is to use parentheses around the server name, as follows:
(admin1) query stgpool
Note: When writing scripts, you must use the parentheses for server routing
information.
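For example, a server script that routes a command might be defined as follows (the script name QSTG is illustrative):
define script qstg "(admin1) query stgpool"
Because the script stores the command as text, the parenthesized form ensures that the routing information is parsed correctly when the script runs.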
To route a command to more than one server, separate the server names with a
comma. For example, to route a QUERY OCCUPANCY command to three servers
named ADMIN1, GEO2, and TRADE5 enter:
admin1,geo2,trade5: query occupancy
or
(admin1,geo2,trade5) query occupancy
To route a QUERY STGPOOL command to the server group WEST_COMPLEX, enter:
west_complex: query stgpool
or
(west_complex) query stgpool
The QUERY STGPOOL command is sent for processing to servers BLD12 and
BLD13, which are members of group WEST_COMPLEX.
To route a QUERY STGPOOL command to two server groups WEST_COMPLEX
and NORTH_COMPLEX, enter:
west_complex,north_complex: query stgpool
or
(west_complex,north_complex) query stgpool
The QUERY STGPOOL command is sent for processing to servers BLD12 and
BLD13, which are members of group WEST_COMPLEX, and to servers NE12 and
NW13, which are members of group NORTH_COMPLEX.
Routing Commands to Single Servers and Server Groups: You can route
commands to multiple single servers and to server groups at the same time. For
example, to route the QUERY DB command to servers HQSRV, REGSRV, and
groups WEST_COMPLEX and NORTH_COMPLEX, enter:
hqsrv,regsrv,west_complex,north_complex: query db
or
(hqsrv,regsrv,west_complex,north_complex) query db
You can define groups of servers to which you can then route commands. The
commands are routed to all servers in the group. To route commands to a server
group you must do the following:
1. Define the server with the DEFINE SERVER command if it is not already
defined (see Setting Up Communications for Command Routing on
page 475).
2. Define a new server group with the DEFINE SERVERGROUP command. Server
group names must be unique because both groups and server names are
allowed for the routing information.
3. Define servers as members of a server group with the DEFINE GRPMEMBER
command.
The following example shows how to create a server group named
WEST_COMPLEX, and define servers BLD12 and BLD13 as members of the
WEST_COMPLEX group:
define servergroup west_complex
define grpmember west_complex bld12,bld13
This command creates the new group. If the new group already exists, the
command fails.
Renaming a Server Group: To rename an existing server group
NORTH_COMPLEX to NORTH, enter:
rename servergroup north_complex north
This command removes all members from the server group. The server definition
for each group member is not affected. If the deleted server group is a member of
other server groups, the deleted group is removed from the other groups.
Deleting a Group Member from a Group: To delete group member BLD12 from
the NEWWEST server group, enter:
delete grpmember newwest bld12
When you delete a server, the deleted server is removed from any server groups of
which it was a member.
The PING SERVER command uses the user ID and password of the administrative
ID that issued the command. If the administrator is not defined on the server
being pinged, the ping fails even if the server may be running.
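For example, to verify the connection to a server named HQSRV (an illustrative name), you could enter:
ping server hqsrv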
v Data that is backed up, archived, or space managed from client nodes
v Client data migrated from storage pools on the source server
v Any data that can be moved by EXPORT and IMPORT commands
v DRM plan files
The source server is a client of the target server, and the data for the source server
is managed only by the source server. In other words, the source server controls
the expiration and deletion of the files that comprise the virtual volumes on the
target server. The use of virtual volumes is not supported when the source server
and the target server reside on the same Tivoli Storage Manager server.
At the target server, the virtual volumes from the source server are seen as archive
data. The source server is registered as a client node (of TYPE=SERVER) at the
target server and is assigned to a policy domain. The archive copy group of the
default management class of that domain specifies the storage pool for the data
from the source server.
Note: If the default management class does not include an archive copy group,
data cannot be stored on the target server.
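As a sketch of the target-server setup described above (the node name, password, and domain name are illustrative), the source server could be registered at the target server with:
register node sourcesrv secretpw type=server domain=vaultdomain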
Using virtual volumes can benefit you in the following ways:
v The source server can use the target server as an electronic vault for rapid
recovery from a disaster.
v Smaller Tivoli Storage Manager source servers can use the storage pools and
tape devices of larger Tivoli Storage Manager servers.
v For incremental database backups, it can reduce wasted space on volumes and
underutilization of high-end tape drives.
Be aware of the following when you use virtual volumes:
v If you use virtual volumes for database backups, you might have the following
situation: SERVER_A backs up its database to SERVER_B, and SERVER_B backs
up its database to SERVER_A. If this is the only way databases are backed up, if
both servers are at the same location, and if a disaster strikes that location, you
may have no backups with which to restore your databases.
v Moving large amounts of data between the servers may slow down your
communications significantly, depending on the network bandwidth and
availability.
v You can specify in the device class definition (DEVTYPE=SERVER) how often
and for how long a time the source server will try to contact the target server.
Keep in mind that frequent attempts to contact the target server over an
extended period can affect your communications.
v Under certain circumstances, inconsistencies may arise among virtual volume
definitions on the source server and the archive files on the target server. You
can use the RECONCILE VOLUMES command to reconcile these inconsistencies
(see Reconciling Virtual Volumes and Archive Files on page 510 for details).
v If you want to enable data validation between a source and target server, enable
the settings using both the DEFINE SERVER and REGISTER NODE commands.
For more information, see Validating a Node's Data on page 343 and the
Administrator's Reference.
v Storage space limitations on the target server affect the amount of data that you
can store on that server.
v To minimize mount wait times, the total mount limit for all server definitions
that specify the target server should not exceed the mount total limit at the
target server. For example, a source server has two device classes, each
specifying a mount limit of 2. A target server has only two tape drives. In this
case, the source server's mount requests could exceed the number of tape drives
on the target server.
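The contact settings and mount limit mentioned above are set on the SERVER device class at the source server. A sketch with illustrative names and values (see the Administrator's Reference for the exact parameters):
define devclass targetclass devtype=server servername=targetsrv mountlimit=2 retryperiod=60 retryinterval=30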
Note: When you issue a DEFINE SERVER command, the source server sends a
verification code to the target server. When the source server begins a
session with the target server, it also sends the verification code. If the code
matches what was previously stored on the target, the session is opened in
read/write mode. If the verification code is lost at the source server (for
example, after a database restore), the code can be reset by issuing an
UPDATE SERVER command with the FORCESYNC=YES parameter.
After issuing these commands, ensure that you assign the source server to
the new policy domain (UPDATE NODE) and activate the policy. See
Changing Policy on page 300 for details.
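For example, to reset a lost verification code for an illustrative target server definition named TARGETSRV, enter:
update server targetsrv forcesync=yes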
v The last volume of the database backup series has exceeded the expiration value
specified with the SET DRMDBBACKUPEXPIREDAYS command
See Moving Backup Volumes Onsite on page 605 for more information.
You can also do an automatic database backup to a target server. For example, if
you have issued the following command, a database backup occurs automatically
when more than 60 percent of recovery log space is used:
define dbbackuptrigger devclass=targetclass logfullpct=60
Tivoli Storage Manager server. You must specify a device class with a device type
specified as SERVER. For example, to copy server information directly to a target
server, issue the following command:
export server devclass=targetclass
Import Server Information from a Target Server: If data has been exported from
a source server to a target server, you can import that data from the target server
to a third server. The server that will import the data uses the node ID and
password of the source server to open a session with the target server. That session
is in read-only mode because the third server does not have the proper verification
code.
For example, to import server information from a target server, issue the following
command:
import server devclass=targetclass
The reconciliation action is determined by the FIX parameter, as shown in Table 36.

Table 36. FIX Parameter Reconciliation

FIX=   At the Source Server    At the Target Server   Action
NO     Volumes exist           No files exist         Report error
NO     Volumes do not exist                           Report error
YES    Volumes exist           No files exist         Report error. For storage pool volumes:
                                                      mark volumes as unavailable
YES    Volumes do not exist                           Report error
Tivoli Storage Manager provides an export and import facility that allows you to
copy all or part of a server (export) so that data can be transferred to another
server (import). Two methods are available to perform the export and import
operation:
v Export directly to another server on the network. This results in an immediate
import process without the need for compatible sequential device types between
the two servers.
v Export to sequential media. Later, you can use the media to import the
information to another server that has a compatible device type.
Task: Export and import operations
Required Privilege Class: System
Task: Display information about export and import operations
Required Privilege Class: Any administrator
This chapter takes you through the export and import tasks. See the following
sections:
Concepts:
Exporting and Importing Data Using Sequential Media Volumes on page 520
In this chapter, most examples illustrate how to perform tasks by using a Tivoli
Storage Manager command-line interface. For information about the commands,
see the Administrator's Reference, or issue the HELP command from the command
line of a Tivoli Storage Manager administrative client.
Tivoli Storage Manager tasks can also be performed from the administrative Web
interface. For more information about using the administrative interface, see Quick
Start.
v File data from server storage, which includes file space definitions and
authorization rules. You can request that file data be exported in any of the
following groupings of files:
Active and inactive versions of backed up files, archive copies of files, and
space-managed files
Active versions of backed up files, archive copies of files, and space-managed
files
Active and inactive versions of backed up files
Active versions of backed up files
Archive copies of files
Space-managed files
Exporting Restrictions
Restrictions for exporting data are as follows:
v You can export information from an earlier version of Tivoli Storage Manager to
a later one, but not from a later version to an earlier.
v Data exported from a server at version 4.1.2 or later with Unicode support
cannot be imported to a server at an earlier version.
v You cannot export nodes of type NAS. Export processing will exclude these
nodes.
You can export all server control information or a subset of server control
information by specifying one or more of the following export commands:
v EXPORT ADMIN
v EXPORT NODE
v EXPORT POLICY
v EXPORT SERVER
When you export data to a target server, you must specify the server name that
will receive the data as an import operation.
The following sections describe options to consider before you export data for
immediate import to another server.
Choosing to merge file spaces allows you to restart a cancelled import operation
because files that were previously imported can be skipped in the subsequent
import operation. This option is available when you issue an EXPORT SERVER or
EXPORT NODE command.
You can merge imported client backup, archive, and space-managed files into
existing file spaces, and automatically skip duplicate files that may exist in the
target file space on the server. Optionally, you can have new file spaces created. If
you do not want to merge file spaces, see Understanding How Duplicate File
Spaces Are Handled on page 531.
When you merge file spaces, the server performs versioning of the imported
objects based on the policy bound to the files. An import operation may leave the
target file space with more versions than policy permits. Files are versioned to
maintain the policy intent for the files, especially when incremental export (using
the FROMDATE and FROMTIME parameters) is used to maintain duplicate client
file copies on two or more servers.
The following describes how the server merges imported files, based on the type of
object, when you specify MERGEFILESPACES=YES.
Archive Objects
If an archive object for the imported node has the same TCP/IP address,
TCP/IP port, name, insert date, and description as an archive object that
already exists on the target server, the imported object is skipped.
Otherwise, the archive object is imported.
Backup Objects
If a backup object on the target server for the imported node has the same
TCP/IP address, TCP/IP port, insert date, and description as the imported
backup object, the imported object is skipped. When backup objects are
merged into existing file spaces, versioning will be done according to
policy just as it occurs when backup objects are sent from the client during
a backup operation. Setting their insert dates to zero (0) will mark
excessive file versions for expiration.
v If the imported backup object has a later (more recent) insert date than
an active version of an object on the target server with the same node,
file space, TCP/IP address, and TCP/IP port, then the imported backup
object becomes the new active copy, and the active copy on the target
server is made inactive. Tivoli Storage Manager expires this inactive
version based on the number of versions that are allowed in policy.
v If the imported backup object has an earlier (less recent) insert date than
an active copy of an object on the target server with the same node, file
space, TCP/IP address, and TCP/IP port, then the imported backup object
is inserted as an inactive version.
v If there are no active versions on the target server of an object with the
same node, file space, TCP/IP address, and TCP/IP port as the imported
object, then:
An imported active object with a later insert date than the most recent
inactive copy will become the active version of the file.
An imported active object with an earlier insert date than the most
recent inactive copy will be imported as an inactive version of the file.
v Any imported inactive objects will be imported as other inactive
versions of the object.
The number of objects imported and skipped is displayed with the final statistics
for the import operation. See Querying the Activity Log for Export or Import
Information on page 536 for more information.
Incremental Export
The system administrator can limit the file data exported to objects that were
stored on the server on or after the date and time specified. For Tivoli Storage
Manager servers at version 5.1 or higher, you can use the FROMDATE and
FROMTIME parameters to export data based on the date and time the file was
originally stored in the server. The FROMDATE and FROMTIME parameters only
apply to client user file data; these parameters have no effect on other exported
information such as policy. If clients continue to back up to the originating server
while their data is being moved to a new server, you can move the backup data
that was stored on the originating server after the export operation was initiated.
This option is available when you issue an EXPORT SERVER or EXPORT NODE
command.
Replace Definitions
You can specify whether definitions (not file data) are replaced on the target server.
If duplicate definitions exist on the target server, they can be replaced with the
imported definitions. Alternatively, you can have the server skip duplicate
definitions. For more information, see Determining Whether to Replace Existing
Definitions on page 527. This option is available when you issue any of the
EXPORT commands.
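Combining these options, an incremental export that also replaces duplicate definitions might look like the following sketch (the node and server names and the date are illustrative, and the REPLACEDEFS parameter spelling is per the Administrator's Reference):
export node node1 filedata=all toserver=serverb fromdate=10/25/2002 replacedefs=yes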
Before you export data to another server on the network, do the following:
v Install Tivoli Storage Manager on the target server. This includes defining disk
space for the database and recovery log, and defining initial server storage. For
more information, refer to Quick Start.
v Consider setting up enterprise configuration for the target server so you can
distribute consistent backup and archive policies to the target server. For details,
see Chapter 20, Working with a Network of IBM Tivoli Storage Manager
Servers, on page 467.
v Use the DEFINE SERVER command to define the name of the target server or
the originating server. For more information, see Setting Up Communications
Among Servers on page 472.
v Ensure that the administrator who issues the export command is defined with
the same administrator name and password on the target server, and has system
authority on the target server.
To determine how much space is required to export all server data, enter:
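A command of the following form would produce such a preview (a sketch; FILEDATA=ALL with PREVIEWIMPORT=YES reports the space required without moving any data):
export server filedata=all previewimport=yes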
After you issue this command, the server starts a background process and issues a
message similar to the following:
When you export data to another server, you can use the PREVIEWIMPORT
option to determine how much data will be transferred without actually moving
any data. When PREVIEWIMPORT=NO, the export operation is performed, and
the data is immediately imported to the target server. This option is available when
you issue any EXPORT command.
You can view the preview results on the server console or by querying the activity
log.
You can direct import messages to an output file to capture any error messages
that are detected during the import process. Do this by starting an administrative
client session in console mode before you invoke the import command.
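For example, an administrative client session started in console mode with its output directed to a file might look like this sketch (the file name is illustrative):
dsmadmc -consolemode -outfile=import_messages.log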
You can specify a list of administrator names, or you can export all administrator
names.
The following example exports all the administrator definitions to the target server
defined as OTHERSERVER. It allows you to preview the export without actually
exporting the data for immediate import.
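The command described would take a form like the following sketch:
export admin toserver=otherserver previewimport=yes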
You can preview the result on the server console or by querying the activity log.
When you issue the EXPORT NODE command, the server exports client node
definitions. Each client node definition includes:
v User ID, password, and contact information
v Name of the policy domain to which the client is assigned
v File compression status
v Whether the user has the authority to delete backed up or archived files from
server storage
v Whether the client node ID is locked from server access
You can also specify whether to export file data. File data includes file space
definitions and authorization rules. You can request that file data be exported in
any of the following groupings of files:
v Active and inactive versions of backed up files, archive copies of files, and
space-managed files
v Active versions of backed up files, archive copies of files, and space-managed
files
v Active and inactive versions of backed up files
v Active versions of backed up files
v Archive copies of files
v Space-managed files
For example, to export client node information and all client files for NODE1
directly to SERVERB, issue the following command:
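The command would take a form like the following sketch:
export node node1 filedata=all toserver=serverb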
Note: When you specify a list of node names or node patterns, the server will not
report the node names or patterns that do not match any entries in the
database. Check the summary statistics in the activity log to verify that the
server exported all intended nodes.
When you issue the EXPORT POLICY command, the server exports the following
information belonging to each specified policy domain:
For example, to export policy information directly to SERVERB, issue the following
command:
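The command would take a form like the following sketch:
export policy toserver=serverb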
When you issue the EXPORT SERVER command, the server exports all server
control information. You can also export file data information with the EXPORT
SERVER command.
For example, you want to export server data to another server on the network and
have the file spaces merged with any existing file spaces on the target server. You
want to replace definitions on the target server, and you want the data that is
exported to begin with any data inserted in the originating server beginning
10/25/2002. To issue this command, enter:
export server toserver=serv23 fromdate=10/25/2002 filedata=all
mergefilespaces=yes dates=relative
v Use the EXPORT or IMPORT command with the PREVIEW parameter to verify
what data will be moved
v Prepare sequential media for exporting and importing data
After you issue this command, the server starts a background process and issues a
message similar to the following:
EXPORT SERVER started as Process 4
You can view the preview results on the server console or by querying the activity
log.
You can request information about the background process, as described in
Requesting Information about an Export or Import Process on page 535. If
necessary, you can cancel an export or import process, as described in Canceling
Server Processes on page 396.
2. You can export data to a storage pool on another server by specifying a device
class whose device type is SERVER. For details, see Using Virtual Volumes to
Store Data on Another Server on page 505.
Estimating the Number of Removable Media Volumes to Label: To estimate the
number of tapes or optical disks needed to store export data, divide the number of
bytes to be moved by the estimated capacity of a volume.
For example, cartridge system tape volumes used with 3490 tape devices have an
estimated capacity of 360MB. If the preview shows that you need to transfer
720MB of data, label at least two tape volumes before you export the data.
Using Scratch Media: The server allows you to use scratch media to ensure that
you have sufficient space to store all export data. If you use scratch media, record
the label names and the order in which they were mounted. Or, use the
USEDVOLUMELIST parameter on the export command to create a file containing
the list of volumes used.
Labeling Removable Media Volumes: During an import process, you must
specify the order in which volumes will be mounted. This order must match the
order in which tapes or optical disks were mounted during the export process. To
ensure that tapes or optical disks are mounted in the correct order, label tapes or
optical disks with information that identifies the order in which they are mounted
during the import process. For example, label tapes as DSM001, DSM002, DSM003,
and so on.
When you export data, record the date and time for each labeled volume. Store
this information in a safe location, because you will need the information when
you import the data. Or, if you used the USEDVOLUMELIST parameter on the
export command, save the resulting file. This file can be used with the
VOLUMENAMES parameter of the IMPORT command.
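As a sketch of this flow (the device class, node name, and file path are illustrative), an export that records its volumes and an import that reads the list might look like:
export node node1 filedata=all devclass=tapeclass usedvolumelist=/tmp/export.vols
import node node1 filedata=all devclass=tapeclass volumenames=file:/tmp/export.vols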
Exporting Tasks
You can export all server control information or a subset of server control
information by specifying one or more of the following export commands:
v EXPORT ADMIN
v EXPORT NODE
v EXPORT POLICY
v EXPORT SERVER
When you export data, you must specify the device class to which export data will
be written. You must also list the volumes in the order in which they are to be
mounted when the data is imported. See Labeling Removable Media Volumes
for information on labeling tape volumes.
You can specify the USEDVOLUMELIST parameter to indicate the name of a file
where a list of volumes used in a successful export operation will be stored. If the
specified file is created without errors, it can be used as input to the IMPORT
command on the VOLUMENAMES=FILE:filename parameter. This file will contain
comment lines with the date and time the export was done, and the command
issued to create the export.
Note: An export operation will not overwrite an existing file. If you perform an
export operation and then try the same operation again with the same
volume name, the file is skipped, and a scratch file is allocated. To use the
same volume name, delete the volume entry from the volume history file.
You can specify a list of administrator names, or you can export all administrator
names.
In the following example, definitions for the DAVEHIL and PENNER administrator
IDs will be exported to the DSM001 tape volume, which the TAPECLASS device
class supports. Do not allow any scratch media to be used during this export
process. To issue this command, enter:
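The command described would take a form like the following sketch:
export admin davehil,penner devclass=tapeclass volumenames=dsm001 scratch=no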
When you issue the EXPORT ADMIN command, the server exports administrator
definitions. Each administrator definition includes:
v Administrator name, password, and contact information
v Any administrative privilege classes the administrator has been granted
v Whether the administrator ID is locked from server access
When you issue the EXPORT NODE command, the server exports client node
definitions. Each client node definition includes:
v User ID, password, and contact information
v Name of the policy domain to which the client is assigned
v File compression status
v Whether the user has the authority to delete backed up or archived files from
server storage
v Whether the client node ID is locked from server access
You can also specify whether to export file data. File data includes file space
definitions and authorization rules. You can request that file data be exported in
any of the following groupings of files:
v Active and inactive versions of backed up files, archive copies of files, and
space-managed files
v Active versions of backed up files, archive copies of files, and space-managed
files
v Active and inactive versions of backed up files
v Active versions of backed up files
v Archive copies of files
v Space-managed files
When client file data is exported, the server copies files to export volumes in the
order of their physical location in server storage. This process minimizes the
number of mounts that are required during the export process.
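The mount-minimizing order described above can be pictured as a simple sort by physical location. The following is an illustrative sketch only; the volume and offset fields are hypothetical stand-ins for "physical location in server storage", not actual server structures:

```python
# Sketch: exporting files in physical-location order so that each tape
# volume is mounted only once during the export process.
def export_order(files):
    """Sort files by volume, then by position on that volume."""
    return sorted(files, key=lambda f: (f["volume"], f["offset"]))

catalog = [
    {"name": "fileB", "volume": "TAPE02", "offset": 10},
    {"name": "fileA", "volume": "TAPE01", "offset": 500},
    {"name": "fileC", "volume": "TAPE01", "offset": 20},
]

ordered = export_order(catalog)
# TAPE01 is fully read before TAPE02 is mounted: one mount per volume.
mounts = []
for f in ordered:
    if not mounts or mounts[-1] != f["volume"]:
        mounts.append(f["volume"])
print(mounts)  # ['TAPE01', 'TAPE02']
```

Reading files in client-name order instead would bounce between volumes and force repeated mounts, which is why the server orders by location.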
If you do not specify that you want to export file data, then the server only exports
client node definitions.
v Export any active backup versions of files belonging to these client nodes
v Export this information to scratch volumes in the TAPECLASS device class
When you issue the EXPORT POLICY command, the server exports the following
information belonging to each specified policy domain:
v Policy domain definitions
v Policy set definitions, including the active policy set
v Management class definitions, including the default management class
v Backup copy group and archive copy group definitions
v Schedule definitions
For example, suppose that you want to export policy and scheduling definitions
from the policy domain named ENGPOLDOM. You want to use tape volumes
DSM001 and DSM002, which belong to the TAPECLASS device class, but allow the
server to use scratch tape volumes if necessary. To issue this command, enter:
export policy engpoldom devclass=tapeclass volumenames=dsm001,dsm002
For example, you want to export server data to four defined tape cartridges, which
the TAPECLASS device class supports. You want the server to use scratch volumes
if the four volumes are not enough, and so you use the default of SCRATCH=YES.
To issue this command, enter:
export server devclass=tapeclass filedata=all
volumenames=dsm001,dsm002,dsm003,dsm004
During the export process, the server exports definition information before it
exports file data information. This ensures that definition information is stored on
the first tape volumes. This process allows you to mount a minimum number of
tapes during the import process, if your goal is to copy only control information to
the target server.
When you issue the EXPORT SERVER command, the server exports all server
control information. You can also export file data information with the EXPORT
SERVER command.
You can import the exported data by using one of the following commands:
v IMPORT ADMIN
v IMPORT NODE
v IMPORT POLICY
v IMPORT SERVER
The following sections describe options to consider before you import data from
sequential media.
Merge File Spaces: You can merge imported client backup, archive, and
space-managed files into existing file spaces, and automatically skip duplicate files
that may exist in the target file space on the server. Optionally, you can have new
file spaces created. If you do not want to merge file spaces, see Understanding
How Duplicate File Spaces Are Handled on page 531.
Choosing to merge file spaces allows you to restart a cancelled import operation
since files that were previously imported can be skipped in the subsequent import
operation.
When you merge file spaces, the server performs versioning of the imported
objects based on the policy bound to the files. An import operation may leave the
target file space with more versions than policy permits. Files are versioned to
maintain the policy intent for the files, especially when incremental export (using
the FROMDATE and FROMTIME parameters) is used to maintain duplicate client
file copies on two or more servers.
The following describes how the server merges imported files, based on the type of
object, when you specify MERGEFILESPACES=YES.
Archive Objects
If an archive object that has the same TCP/IP address, TCP/IP port, insert
date, and description already exists for the imported node on the target
server, the imported object is skipped. Otherwise, the archive object is
imported.
Backup Objects
If a backup object on the target server has the same TCP/IP address,
TCP/IP port, insert date, and description as the imported backup object,
the imported object is skipped. When backup objects are merged into
existing file spaces, versioning is done according to policy, just as it
occurs when backup objects are sent from the client during a backup
operation. Excess file versions are marked for expiration by setting their
insert dates to zero (0).
v If the imported backup object has a later (more recent) insert date than
an active version of an object on the target server with the same node,
file space, TCP/IP address, and TCP/IP port, then the imported backup
object becomes the new active copy. The active copy on the target server
is made inactive. Tivoli Storage Manager expires this inactive version
based on the number of versions that are allowed in policy.
v If the imported backup object has an earlier (less recent) insert date than
an active copy of an object on the target server with the same node, file
space, TCP/IP address, and TCP/IP port, then the imported backup
object is inserted as an inactive version.
v If there is no active version of an object on the target server with the
same node, file space, TCP/IP address, and TCP/IP port as the imported
object, then:
An imported active object with a later insert date than the most recent
inactive copy becomes the active version of the file.
An imported active object with an earlier insert date than the most
recent inactive copy is imported as an inactive version of the file.
v Any imported inactive objects will be imported as other inactive
versions of the object.
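The comparison rules above can be sketched as a simplified decision model. This is illustrative only; the attribute names and returned strings are assumptions for the sketch, not the server's internal logic:

```python
def merge_backup_object(imported, target_versions):
    """Decide how an imported backup object merges into existing versions.

    target_versions: list of dicts with 'insert_date' and 'active' flags for
    objects on the target server with the same node, file space, TCP/IP
    address, and TCP/IP port (simplified model of the documented rules).
    """
    active = [v for v in target_versions if v["active"]]
    if active:
        # Compare against the existing active copy on the target server.
        if imported["insert_date"] > active[0]["insert_date"]:
            return "imported becomes active; existing active made inactive"
        return "imported inserted as inactive version"
    # No active version exists on the target server.
    inactive = [v for v in target_versions if not v["active"]]
    latest = max(v["insert_date"] for v in inactive) if inactive else None
    if imported["active"] and (latest is None or imported["insert_date"] > latest):
        return "imported becomes the active version"
    return "imported inserted as inactive version"

# A newer imported active object displaces the current active copy:
print(merge_backup_object({"insert_date": 200, "active": True},
                          [{"insert_date": 100, "active": True}]))
```

After the merge, normal policy-based expiration trims any excess versions, as described above.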
The number of objects imported and skipped is displayed with the final statistics
for the import operation. See Querying the Activity Log for Export or Import
Information on page 536 for more information.
Figure 78 on page 528 shows an example of the messages sent to the server console
and the activity log.
Figure 78. Sample Report Created by Issuing Preview for an Import Server Command
Use the value reported for the total number of bytes copied to estimate storage
pool space needed to store imported file data.
For example, Figure 78 shows that 8 856 358 bytes of data will be imported. Ensure
that you have at least 8 856 358 bytes of available space in the backup storage
pools defined to the server. You can use the QUERY STGPOOL and QUERY
VOLUME commands to determine how much space is available in the server
storage hierarchy.
In addition, the preview report shows that 0 archive files and 462 backup files will
be imported. Because backup data is being imported, ensure that you have
sufficient space in the backup storage pools used to store this backup data. See
Step 3: Tailoring Server Storage Definitions on the Target Server on page 530 for
information on identifying storage pools on the target server.
For information on specifying the PREVIEW parameter, see Using Preview before
Exporting or Importing Data on page 521. For information on reviewing the
results of a preview operation, see Monitoring Export and Import Processes on
page 534.
v The current ACTIVE policy set contains management class names that are not
defined in the policy set to be activated.
v The current ACTIVE policy set contains copy group names that are not defined
in the policy set to be activated.
After each $$ACTIVE$$ policy set has been activated, the server deletes that
$$ACTIVE$$ policy set from the target server. To view information about active
policy on the target server, you can use the following commands:
v QUERY COPYGROUP
v QUERY DOMAIN
v QUERY MGMTCLASS
v QUERY POLICYSET
Results from issuing the QUERY DOMAIN command show the activated policy set
as $$ACTIVE$$. The $$ACTIVE$$ name shows you that the policy set which is
currently activated for this domain is the policy set that was active at the time the
export was performed.
Directing Import Messages to an Output File: The information generated by the
validation process can help you define a storage hierarchy that supports the
storage destinations currently defined in the import data.
You can direct import messages to an output file to capture any error messages
that are detected during the import process. Do this by starting an administrative
client session in console mode before you invoke the import command.
For example, to direct messages to an output file named IMPSERV.OUT, enter:
> dsmadmc -consolemode -outfile=impserv.out
Importing Server Control Information: Now you are ready to import the server
control information. Based on the information generated during the preview
operation, you know that all definition information has been stored on the first
tape volume named DSM001. Specify that this tape volume can be read by a
device belonging to the TAPECLASS device class.
From an administrative client session or from the server console, enter:
import server filedata=none devclass=tapeclass volumenames=dsm001
v Query management class and copy group definitions to compare the storage
destinations specified with the names of existing storage pools on the target
server.
To request detailed reports for all management classes, backup copy groups,
and archive copy groups in the ACTIVE policy set, enter these commands:
query mgmtclass * active * format=detailed
query copygroup * active * standard type=backup format=detailed
query copygroup * active * standard type=archive format=detailed
2. If storage destinations for management classes and copy groups in the ACTIVE
policy set refer to storage pools that are not defined, do one of the following:
v Define storage pools that match the storage destination names for the
management classes and copy groups, as described in Defining or Updating
Primary Storage Pools on page 182.
v Change the storage destinations for the management classes and copy
groups. Do the following:
a. Copy the ACTIVE policy set to another policy set
b. Modify the storage destinations of management classes and copy groups
in that policy set, as required
c. Activate the new policy set
For information on copying policy sets, see Defining and Updating a Policy
Set on page 319.
Depending on the amount of client file data that you expect to import, you may
want to examine the storage hierarchy to ensure that sufficient storage space is
available. Storage pools specified as storage destinations by management classes
and copy groups may fill up with data. For example, you may need to define
additional storage pools to which data can migrate from the initial storage
destinations.
If the server encounters duplicate file space names when it imports file data
information, it creates a new file space name for the imported definition by
replacing the final character or characters with a number. A message showing the
old and new file space names is written to the server console and to the activity
log.
For example, if the C_DRIVE and D_DRIVE file space names reside on the target
server for node FRED and on the tape volume for FRED, then the server imports
the C_DRIVE file space as C_DRIV1 file space and the D_DRIVE file space as
D_DRIV1 file space, both assigned to node FRED.
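The renaming scheme can be sketched as follows. This is an illustrative model of the documented behavior (replace the final character or characters with a number until the name is unique), not the server's actual implementation:

```python
def new_filespace_name(name, existing):
    """Generate a unique file space name for a duplicate, by replacing the
    final character(s) of the name with a number (C_DRIVE -> C_DRIV1)."""
    n = 1
    while True:
        suffix = str(n)
        candidate = name[: len(name) - len(suffix)] + suffix
        if candidate not in existing:
            return candidate
        n += 1

print(new_filespace_name("C_DRIVE", {"C_DRIVE"}))             # C_DRIV1
print(new_filespace_name("D_DRIVE", {"D_DRIVE", "D_DRIV1"}))  # D_DRIV2
```

Note that the new name keeps the original length, so the numeric suffix consumes as many trailing characters as it needs.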
Deciding Whether to Use a Relative Date When Importing File Data: When you
import file data, you can keep the original creation date for backup versions and
archive copies, or you can specify that the server use an adjusted date.
Because tape volumes containing exported data might not be used for some time,
the original dates defined for backup versions and archive copies may be old
enough that files are expired immediately when the data is imported to the target
server.
To prevent backup versions and archive copies from being expired immediately,
specify DATES=RELATIVE on the IMPORT NODE or IMPORT SERVER commands
to adjust for the elapsed time since the files were exported to tape.
For example, assume that data exported to tape includes an archive copy archived
five days prior to the export operation. If the tape volume resides on the shelf for
six months before the data is imported to the target server, the server resets the
archival date to five days prior to the import operation.
If you want to keep the original dates set for backup versions and archive copies,
use DATES=ABSOLUTE, which is the default. If you use the absolute value, any
files whose retention period has passed will be expired shortly after they are
imported to the target server.
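The date adjustment in the example above works out as follows. This is a simplified sketch of the documented behavior, not the server's implementation:

```python
from datetime import date, timedelta

def adjust_date(original, export_day, import_day, mode="RELATIVE"):
    """DATES=RELATIVE shifts stored dates by the time elapsed between the
    export and the import; DATES=ABSOLUTE (the default) keeps them as-is."""
    if mode == "ABSOLUTE":
        return original
    return original + (import_day - export_day)

# An archive copy made five days before the export; the tape then sits on
# the shelf for about six months before the import.
exported = date(2003, 1, 10)
archived = exported - timedelta(days=5)           # 2003-01-05
imported = date(2003, 7, 10)
print(adjust_date(archived, exported, imported))  # 2003-07-05
```

The adjusted archival date is again five days before the import, so the copy is not expired immediately on the target server.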
Issuing an Import Server or Import Node Command: You can import file data by
issuing either the IMPORT SERVER or the IMPORT NODE command. When you
issue either of these commands, you can specify which type of files should be
imported for all client nodes specified and found on the export tapes. You can
specify any of the following values to import file data:
All
Specifies that all active and inactive versions of backed up files, archive
copies of files, and space-managed files for specified client nodes are
imported to the target server
None
Specifies that no files are imported to the target server; only client node
definitions are imported
Archive
Specifies that only archive copies of files are imported to the target server
Backup
Specifies that only backup copies of files, whether active or inactive, are
imported to the target server
Backupactive
Specifies that only active versions of backed up files are imported to the
target server
Allactive
Specifies that only active versions of backed up files, archive copies of files,
and space-managed files are imported to the target server
Spacemanaged
Specifies that only files that have been migrated from a user's local file
system (space-managed files) are imported
For example, suppose you want to import all backup versions of files, archive
copies of files, and space-managed files to the target server. You do not want to
replace any existing server control information during this import operation.
Specify the four tape volumes that were identified during the preview operation.
These tape volumes can be read by any device in the TAPECLASS device class. To
issue this command, enter:
import server filedata=all replacedefs=no
devclass=tapeclass volumenames=dsm001,dsm002,dsm003,dsm004
You can limit the import to nodes that were assigned to specific policy domains on
the source server. For example, suppose you exported from the source server the
data for all nodes in all domains. To import to the target server the data only for
nodes that were in the ENGDOM on the source server, enter this command:
import node filedata=all domains=engdom devclass=tapeclass
volumenames=dsm001,dsm002,dsm003,dsm004
If the ENGDOM policy domain exists on the target server, the imported nodes are
assigned to that domain. If ENGDOM does not exist on the target server, the
imported nodes are assigned to the STANDARD policy domain.
If you do not specify a domain on the IMPORT NODE command, the imported
node is assigned to the STANDARD policy domain.
The following table shows, for each export command, the import commands that
can be used to import the exported data:

Export command   Import commands that can be used
EXPORT SERVER    IMPORT SERVER, IMPORT ADMIN, IMPORT NODE, IMPORT POLICY
EXPORT NODE      IMPORT NODE, IMPORT SERVER
EXPORT ADMIN     IMPORT ADMIN, IMPORT SERVER
EXPORT POLICY    IMPORT POLICY, IMPORT SERVER
v The process first builds a list of what is to be exported. The process can
therefore be running for some time before any data is transferred.
v Watch for mount messages, because the server might request mounts of volumes
that are not in the library. Check-in of volumes may be required.
While the system is running in console mode, you cannot enter any administrative
commands from the client session. You can, however, start another administrative
client session for entering commands (for example, QUERY PROCESS) if you are
using a multitasking workstation, such as AIX.
If you want the server to write all terminal output to a file, specify the OUTFILE
option with a destination. For example, to write output to the SAVE.OUT file,
enter:
> dsmadmc -consolemode -outfile=save.out
For information about using the CONSOLE mode option and ending an
administrative session in console mode, see the Administrator's Reference.
To minimize processing time when querying the activity log for export or import
information, restrict the search by specifying EXPORT or IMPORT in the SEARCH
parameter of the QUERY ACTLOG command.
For example, to determine how much data will be moved after issuing the preview
version of the EXPORT SERVER command, query the activity log by entering:
query actlog search=export
You can do all the EXPORT and IMPORT operations described in the previous
sequential media sections to virtual volumes. For more information, see Exporting
and Importing Data Using Sequential Media Volumes on page 520.
Data stored as virtual volumes appears to be sequential storage pool volumes on
the source server, but is actually stored as archive files on another server. Those
archive files can be in random or sequential access storage pools. The EXPORT and
IMPORT commands are identical to those previously shown, except that the device
class specified in the commands must have a device type of SERVER. For details
about how to configure your server to export to or import from virtual volumes,
see Using Virtual Volumes to Store Data on Another Server on page 505.
In this chapter, most examples illustrate how to perform tasks by using a Tivoli
Storage Manager command-line interface. For information about the commands,
see the Administrator's Reference, or issue the HELP command from the command
line of a Tivoli Storage Manager administrative client.
Tivoli Storage Manager tasks can also be performed from the administrative Web
interface. For more information about using the administrative interface, see Quick
Start.
Levels of Protection
For the best protection of your data, you should use all of the following:
v Backups of your storage pools
v Mirrored copies of your database and recovery log, with the recovery log mode
set to roll-forward
v Full and incremental backups of your database
As an adjunct to full and incremental database backups, you can also use snapshot
database backups.
Attention: ADSM Version 1 provided database salvage commands in case of a
catastrophic error. Although these commands are still available, you should use the
current database backup and recovery functions for the best server protection. Do
not use the database salvage commands without help from an IBM service
representative.
storage pool. You can use this command to recreate files for one or more
volumes that have been lost or damaged. See Restoring Storage Pool
Volumes on page 570 for details.
Tivoli Storage Manager uses database information to determine which files should
be restored for a volume or storage pool. As a result, restore processing does not
require that the original volumes be accessed. For example, if a primary storage
pool volume is damaged, you can use the RESTORE VOLUME command to
recreate files that were stored on that volume, even if the volume itself is not
readable. However, if you delete the damaged files (DISCARDDATA=YES on the
DELETE VOLUME command), the server removes from the database the references
to the files on the primary storage pool volume and to the copies of the files on
copy storage pool volumes, and you will not be able to restore those files.
Restore processing copies files from a copy storage pool onto new primary storage
pool volumes. The server then deletes database references to files on the original
primary storage pool volumes. A primary storage pool volume will become empty
if all files that were stored on that volume are restored to other volumes. In this
case, the server automatically deletes the empty volume from the database.
Mirroring
You can prevent the loss of the database or recovery log due to a hardware failure
on a single drive, by mirroring drives. Mirroring simultaneously writes the same
data to multiple disks. However, mirroring does not protect against a disaster or a
hardware failure that affects multiple drives or causes the loss of the entire system.
While Tivoli Storage Manager is running, you can dynamically start or stop
mirroring and change the capacity of the database.
Mirroring provides the following benefits:
v Protection against database and recovery log media failures
v Uninterrupted operations if a database or recovery log volume fails
v Avoidance of costly database recoveries
However, there are also costs:
v Mirroring doubles the required DASD for those volumes that are mirrored
v Mirroring results in decreased performance
Quality of Protection
Normal mode: Recover to a point-in-time of the latest full or incremental
backup only.
Roll-forward mode: Recover to a point-in-time of the latest full or
incremental backup or, with an intact recovery log, to the most current
state.
Storage Requirements
Roll-forward mode requires more recovery log space than normal mode,
because log records are kept until they are no longer needed for recovery.
The following table compares four typical data recovery configurations, two for
roll-forward mode and two for normal mode. In all four cases, the storage pools
and the database are backed up. The benefits and costs are:
Mirroring
Whether the database and recovery log are mirrored. Mirroring costs
additional disk space.
Coverage
How completely you can recover your data. Roll-forward recovery cannot
be done if the recovery log is not intact. However, roll-forward mode does
support point-in-time recovery.
Speed to Recover
How quickly data can be recovered.
Mode          Mirroring          Quality of Protection   Speed to Recover
Roll-forward  Database and log   Greatest                Fastest
Roll-forward  Recovery log only  Medium                  Moderate
Normal        Database and log   Medium                  Moderate
Normal        None               Least                   Slowest
(Figure: mirrored database volumes placed on separate physical disks: Physical
Disk 1, Physical Disk 2, and Physical Disk 3.)
Mirrored volumes must have at least the same capacity as the original volumes.
Use the DSMFMT command to format the space. For example, to format VOLA, a
25MB database volume, enter:
./dsmfmt -m -db vola 25
Then define the group of mirrored volumes. For example, you might enter the
following commands:
define dbcopy vol1 vola
define dbcopy vol2 volb
define dbcopy vol3 volc
define dbcopy vol4 vold
define dbcopy vol5 vole
After a volume copy is defined, the volume copy is synchronized with the original
volume. This process can range from minutes to hours, depending on the size of
the volumes and performance of your system. After synchronization is complete,
the volume copies are mirror images of each other.
Four server options let you specify the level of protection, recoverability, and
performance for mirrored volumes:
v MIRRORREAD specifies how mirrored volumes are accessed when the server
reads the recovery log or a database page during normal processing. You may
specify MIRRORREAD LOG for reading recovery log pages, or MIRRORREAD
DB for reading database pages. MIRRORREAD LOG (or DB) NORMAL specifies
that only one mirrored volume is read to obtain the desired page.
MIRRORREAD LOG (or DB) VERIFY specifies that all mirrored volumes for a
page be read, compared, and re-synchronized if necessary. MIRRORREAD LOG
(or DB) VERIFY can decrease server performance as each mirrored volume for
the page is accessed on every read.
v MIRRORWRITE specifies how mirrored volumes are written to. You can specify
MIRRORWRITE LOG or MIRRORWRITE DB, and set write operations for the
recovery log or the database to SEQUENTIAL or PARALLEL:
A PARALLEL specification offers better performance but at the potential cost
of recoverability. Pages are written to all copies at about the same time. If a
system outage results in a partial page write and the outage affects both
mirrored copies, then both copies could be corrupted.
A SEQUENTIAL specification offers improved recoverability but at the cost of
performance. Pages are written to one copy at a time. If a system outage
results in a partial page write, only one copy is affected. However, because a
successful I/O must be completed after the write to the first copy but before
the write to the second copy, performance can be affected.
The following table shows how the MIRRORWRITE DB and DBPAGESHADOW
settings determine whether database page shadowing is in effect:

MIRRORWRITE DB Value   DBPAGESHADOW Value   Page Shadowing
PARALLEL               YES                  Yes
SEQUENTIAL             YES                  No
SEQUENTIAL             NO                   No
You can request information about mirrored database or recovery log volumes by
using the QUERY DBVOLUME and QUERY LOGVOLUME commands. For
example:
query dbvolume
Volume Name    Copy    Volume Name    Copy    Volume Name    Copy
               Status  (Copy 2)       Status  (Copy 3)       Status
-------------  ------  -------------  ------  -------------  ------
VOL1           Syncd   VOLA           Syncd                  Undef-
VOL2           Syncd   VOLB           Syncd                  ined
VOL3           Syncd   VOLC           Syncd
VOL4           Syncd   VOLD           Syncd
VOL5           Syncd   VOLE           Syncd
v Each pair of vertical columns displays an image of the database or recovery log.
For example, VOLA, VOLB, VOLC, VOLD, and VOLE (Copy 2) represent one
image of the database.
v Each horizontal row displays a group of mirrored volumes. For example, VOL1
and VOLA represent the two volume copies.
You can back up primary storage pools to copy storage pools to improve data
availability. When you back up a primary storage pool, you create, in a copy
storage pool, backup copies of the client files that are stored in the primary
storage pool. By using copy storage pools, you maintain multiple copies of files
and reduce the potential for data loss due to media failure. If the primary file is
not available or becomes corrupted, the server accesses and uses the duplicate file
from a copy storage pool.
(Figure: server storage. HSM, backup, and archive data enters the disk storage
pools and the tape storage pool, which are the primary storage pools; the primary
storage pools are backed up to copy storage pools kept in offsite storage.)
Primary storage pools should be backed up each day to the same copy storage
pool. Backing up to the same copy storage pool ensures that files do not need to be
recopied if they have migrated to the next pool.
The only files backed up to the DISASTER-RECOVERY pool are files for which a
copy does not already exist in the copy storage pool.
For the best protection, primary storage pools should be backed up regularly,
preferably each day. You can define schedules to begin backups of files in the
primary storage pools. For example, to back up the BACKUPPOOL,
ARCHIVEPOOL, and TAPEPOOL storage pools every night, schedule the
following commands:
backup stgpool backuppool copypool
backup stgpool archivepool copypool
backup stgpool tapepool copypool
See Chapter 17, Automating Server Operations, on page 401 for information
about scheduling commands.
If you schedule storage pool backups and migrations and have enough disk
storage, you can copy most files from the disk storage pool before they are
migrated to tape and thus avoid unnecessary tape mounts. Here is the sequence:
1. Clients back up or archive data to disk
Notes:
a. Because scratch volumes are allowed in this copy storage pool, you do not
need to define volumes for the pool.
b. All storage volumes in COPYPOOL are located onsite.
2. Perform the initial backup of the primary storage pools by issuing the
following commands:
backup stgpool diskpool copypool maxprocess=2
backup stgpool tapepool copypool maxprocess=2
You can set up a primary storage pool so that when a client backs up, archives, or
migrates a file, the file is written to the primary storage pool and is simultaneously
stored into each copy storage pool specified for the primary storage pool.
Use of the simultaneous write function is not intended to replace regular backups
of storage pools. If you use the function to simultaneously write to copy storage
pools, ensure that the copy of each primary storage pool is complete by regularly
issuing the BACKUP STGPOOL command. See Simultaneous Write to a Primary
Storage Pool and Copy Storage Pools on page 187.
Occasionally, when Tivoli Storage Manager restores data, there is some duplication
of restored files. This can occur if primary volumes are not available, and Tivoli
Storage Manager does not have a complete copy storage pool from which to
perform the restore. Then, Tivoli Storage Manager must use volumes from multiple
copy storage pools to restore the data. This process can result in duplicate data
being restored. To prevent this duplication, keep one complete set of copy storage
pools available to the server, or ensure that only one copy storage pool has an
access of read/write during the restore operation.
The primary storage pool Main contains volumes Main1, Main2, and Main3.
v Main1 contains files File11, File12, File13
v Main2 contains files File14, File15, File16
v Main3 contains files File17, File18, File19
The copy storage pool DuplicateA contains volumes DupA1, DupA2, and DupA3.
v DupA1 contains copies of File11, File12
v DupA2 contains copies of File13, File14
v DupA3 contains copies of File15, File16, File17, File18 (File19 is missing because
BACKUP STGPOOL was run on the primary pool before the primary pool
contained File 19.)
The copy storage pool DuplicateB contains volumes DupB1 and DupB2.
v DupB1 contains copies of File11, File12, File13, File14
v DupB2 contains copies of File15, File16, File17, File18, File19
If you have not designated copy storage pool DuplicateB as the only copy storage
pool to have read/write access for the restore operation, then Tivoli Storage
Manager can choose the copy storage pool DuplicateA, and use volumes DupA1,
DupA2, and DupA3. Because copy storage pool DuplicateA does not include file
File19, Tivoli Storage Manager would then use volume DupB2 from the copy
storage pool DuplicateB. The program does not track the restoration of individual
files, so File15, File16, File17, and File18 will be restored a second time, and
duplicate copies will be generated when volume DupB2 is processed.
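The scenario can be traced with a small sketch. The DuplicateA contents come from the example above; the DuplicateB volume contents are inferred from the outcome the text describes (DupB2 supplies File19 and re-restores File15 through File18), so treat them as assumptions:

```python
# Sketch: restoring File11-File19 when DuplicateA is chosen first and the
# missing File19 must then come from DuplicateB.
dup_a = {"DupA1": ["File11", "File12"],
         "DupA2": ["File13", "File14"],
         "DupA3": ["File15", "File16", "File17", "File18"]}
dup_b = {"DupB1": ["File11", "File12", "File13", "File14"],
         "DupB2": ["File15", "File16", "File17", "File18", "File19"]}

needed = [f"File{n}" for n in range(11, 20)]
restored = []
for vol, files in dup_a.items():               # DuplicateA is used first
    restored += [f for f in files if f in needed]
missing = [f for f in needed if f not in restored]

for vol, files in dup_b.items():
    if any(f in missing for f in files):       # the whole volume is processed
        restored += files

duplicates = sorted({f for f in restored if restored.count(f) > 1})
print(duplicates)  # ['File15', 'File16', 'File17', 'File18']
```

Restricting the restore to a single complete copy storage pool (read/write access on DuplicateB only) avoids the duplicated work.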
Note: The log mode is not in roll-forward mode until you perform the first full
database backup after entering this command.
To set the log mode back to normal, enter:
set logmode normal
To determine the size that the recovery log should be in roll-forward mode, you
must know how much recovery log space is used between database backups. For
example, if you perform daily incremental backups, check your daily usage over a
period of time. You can use the following procedure to make your estimate:
1. Set the log mode to normal. In this way you are less likely to exceed your log
space if your initial setting is too low for roll-forward mode.
2. After a scheduled database backup, reset the statistic on the amount of
recovery log space used since the last reset by using the following command:
reset logconsumption
3. Just before the next scheduled database backup, display the current recovery
log statistics by using the following command:
query log format=detailed
Record the cumulative consumption value, which shows the space, in megabytes,
used since the statistic was last reset.
4. Repeat steps 2 and 3 for at least one week.
5. Increase the highest cumulative consumption value by 30 percent. Set your
recovery log size to this increased value to account for periods of unusually
high activity.
For example, over a period of a week the highest cumulative consumption
value was 500MB. If you set your recovery log to 650MB, you should have
enough space between daily backups.
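The sizing arithmetic in the procedure above can be sketched as follows (the weekly readings are illustrative values peaking at the 500MB from the example):

```python
def recovery_log_size(cumulative_consumption_mb, headroom=0.30):
    """Size the recovery log at the highest observed cumulative consumption
    plus 30 percent headroom, per the sizing procedure above."""
    return round(max(cumulative_consumption_mb) * (1 + headroom))

# A week of cumulative-consumption readings between daily database backups:
week = [310, 420, 500, 450, 380, 475, 440]
print(recovery_log_size(week))  # 650
```

The 30 percent headroom is what turns the 500MB peak into the 650MB recovery log size used in the example.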
For information on how to adjust the recovery log size, see Increasing the Size of
the Database or Recovery Log on page 427 or Decreasing the Size of the
Database or Recovery Log on page 431.
Note: If the recovery log runs out of space, you may not be able to start the server
for normal operation. You can create an additional recovery log volume if
needed to start the server and perform a database backup. For example, to
create a 5MB volume A00, issue the following command:
> dsmserv extend log a00 5mb
In roll-forward mode, you can set a database backup to occur automatically when
the recovery log utilization reaches a defined percentage. Once the backup is
complete, the server automatically deletes any unnecessary recovery log records,
thus reducing the recovery log utilization. You can also control automatic database
backups by how much log space will be freed and how much time has elapsed
since the last backup. You might want automatically triggered database backups
even if you have already scheduled them. However, while the automatically
triggered database backups are occurring, the recovery log could grow faster
than expected, or the server may not be able to significantly reduce the
recovery log utilization as a result of the backup. Try to coordinate the
recovery log size with your scheduled database backups. A database backup has a
higher priority than most operations, and a backup based on a trigger could
occur during high server activity and affect your other operations. Adjust the
recovery log size and the database backup trigger parameters to avoid
triggering backups at unscheduled times.
By setting a database backup trigger you can reduce the likelihood that the
recovery log will run out of space before the next backup.
If the log mode is changed from normal to roll-forward, the next database backup
must be a full backup. If a database backup trigger is defined when you set the log
mode to roll-forward, the full backup is done automatically. The server does not
start saving log records for roll-forward recovery until this full backup completes
successfully.
You can determine the size of your recovery log by completing the steps in
Estimating the Size of the Recovery Log on page 554. Define your database
backup trigger according to the size of your recovery log. For example, assume
that your recovery log size is 650MB and that recovery log utilization is
usually less than 500MB between database backups. You want to trigger a backup
only in unusual circumstances. Therefore, set the trigger to at least 75
percent (approximately 500MB). To set the trigger to 75 percent and run 20
incremental backups for every full backup, enter:
define dbbackuptrigger devclass=tapeclass logfullpct=75 numincremental=20
To display the trigger settings, enter:
query dbbackuptrigger format=detailed
The output is similar to the following:
              Full Device Class Name: TAPECLASS
       Incremental Device Class Name: TAPECLASS
                 Log Full Percentage: 75
          Incrementals Between Fulls: 20
   Minimum Backup Interval (Minutes): 120
        Minimum Log Percentage Freed: 10
      Last Update by (administrator): SERVER_CONSOLE
               Last Update Date/Time: 02/27/2002 12:57:52
This information shows that the database will be backed up if the log is at least 75
percent full and either the last database backup was at least two hours (120
minutes) ago, or there is at least 10 percent of the log that will be freed after the
backup completes. If automatic backups are occurring too often, you could increase
the log full percentage, the minimum interval, or the minimum amount of log
space to be freed. You could increase the log full percentage to 80, increase the
minimum interval to eight hours (480 minutes), and increase the minimum amount
of log space to be freed to 30 percent by entering:
update dbbackuptrigger logfullpct=80 mininterval=480 minlogfreepct=30
If the database backup trigger automatically runs backups more often than you
want and the setting is high (for example, 90 percent or higher), you should
probably increase the recovery log size. If you no longer want to use the database
backup trigger, enter:
delete dbbackuptrigger
After you delete the database backup trigger, the server no longer runs automatic
database backups.
Note: If you delete the trigger and stay in roll-forward mode, transactions fail
when the log fills. Therefore, you should change the log mode to normal.
Remember, however, that normal mode does not let you perform
roll-forward recovery. Increase the recovery log size if you want roll-forward
recovery.
Note: You can recover the database without a volume history file. However,
because you must examine every volume that may contain database backup
information, this is a time-consuming and error-prone task.
The VOLUMEHISTORY server option lets you specify backup volume history files.
Then, whenever the server updates volume information in the database, it also
updates the same information in the backup files.
You can also back up the volume history information at any time, by entering:
backup volhistory
If you do not specify file names, the server backs up the volume history
information to all files specified with the VOLUMEHISTORY server option.
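You can also back up the information to one or more explicitly named files by
using the FILENAMES parameter; for example (the path is illustrative):
backup volhistory filenames=/u/recovery/volhist.out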
To ensure that updates are complete before the server is halted, we recommend
that you:
v Not halt the server for a few minutes after issuing the BACKUP VOLHISTORY
command.
v Specify multiple VOLUMEHISTORY options in the server options file.
v Examine the volume history file to see if the file is updated.
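For example, the server options file might contain multiple VOLUMEHISTORY
entries (the paths are illustrative):
volumehistory /u/server/volhist1.out
volumehistory /u/recovery/volhist2.out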
Notes:
1. Existing volume history files are not automatically updated with the DELETE
VOLHISTORY command.
2. Do not delete sequential volume history information until you no longer need
that information. For example, do not delete dump volume information or
storage volume reuse information, unless you have backed up or dumped the
database at a later time than that specified for the delete operation.
3. Do not delete the volume history information for database dump, database
backup, or export volumes that reside in automated libraries, unless you want
to return the volumes to scratch status. When the DELETE VOLHISTORY
command removes volume information for such volumes, they automatically
return to scratch status. The volumes are then available for reuse by the server
and the information stored on them may be overwritten.
DRM expires database backup series and deletes the volume history entries.
DRM saves a copy of the device configuration file in its disaster recovery
plan file.
The DEVCONFIG server option lets you specify backup device configuration files
(for details, see the Administrator's Reference). After the server is restarted, whenever
the server updates device configuration information in the database, it also updates
the same information in the backup files.
During a database restore operation, the server tries to open the first device
configuration file in the order in which the files occur in the server options. If it
cannot read that file, it searches for the next usable device configuration file. If
none can be found, you must recreate the file. See Recreating a Device
Configuration File on page 561 for details. After the database has been restored,
you may have to update the device configuration.
You can also back up the device configuration information at any time, by
entering:
backup devconfig
If you do not specify file names, the device configuration file is backed up to all
files specified with the DEVCONFIG server option.
To ensure that updates are complete before the server is halted, we recommend
that you:
v Not halt the server for a few minutes after issuing the BACKUP DEVCONFIG
command.
v Specify multiple DEVCONFIG options in the server options file.
v Examine the device configuration file to see if the file is updated.
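For example, the server options file might contain multiple DEVCONFIG entries
(the paths are illustrative):
devconfig /u/server/devcnfg1.out
devconfig /u/recovery/devcnfg2.out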
If you lose the device configuration file and need it to restore the database, you
must recreate it manually. See Recreating a Device Configuration File on
page 561 for details.
If you are using automated tape libraries, volume location information is also
saved in the device configuration file. The file is updated whenever CHECKIN
LIBVOLUME, CHECKOUT LIBVOLUME, and AUDIT LIBRARY commands are
issued, and the information is saved as comments (/* ...... */). This information is
used during restore or load operations to locate a volume in an automated library.
If you must recreate the device configuration file, you will be unable to recreate the
volume location information. Therefore, you must define your library as a manual
library and manually mount the volumes during server processing. If an
automated tape library is used at the recovery site, volume location information in
comments (/*...*/) in the device configuration file must be modified. First,
manually place the physical database backup volumes in the automated library
and note the element numbers where you place them. Then manually edit the
device configuration file to identify the locations of the database backup volumes
so that the server can find them to restore the database.
For virtual volumes, the device configuration file stores the password (in encrypted
form) for connecting to the remote server. If you regressed the server to an earlier
point-in-time, this password may not match what the remote server expects. In this
case, manually set the password in the device configuration file. Then ensure that
the password on the remote server matches the password in the device
configuration file.
Note: Set the password in clear text. After the server is operational again, you can
issue a BACKUP DEVCONFIG command to store the password in
encrypted form.
When you recreate a device configuration file, include the following commands,
as needed:
DEFINE DEVCLASS
DEFINE LIBRARY
DEFINE DRIVE
DEFINE SERVER
DEFINE PATH
You must provide those definitions needed to mount the volumes read by the
command that you issued. If you are restoring or loading from a FILE device
class, you will need only the DEFINE DEVCLASS command.
v For virtual volumes, the device configuration file stores the password (in
encrypted form) for connecting to the remote server. If you regressed the server
to an earlier point-in-time, this password may not match what the remote server
expects. In this case, manually set the password in the device configuration file.
Then ensure that the password on the remote server matches the password in
the device configuration file.
v You can use command defaults.
v The file can include blank lines.
v A single line can be up to 240 characters.
v The file can include continuation characters and comments as described in the
Administrator's Reference.
The following shows an example of a device configuration file:
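A minimal sketch of such a file, assuming a single manually operated 8mm tape
drive (the device class, library, drive, server, and device names are
illustrative):
/* Device Configuration */
define devclass tapeclass devtype=8mm library=manlib
define library manlib libtype=manual
define drive manlib drive01
define path server1 drive01 srctype=server desttype=drive library=manlib device=/dev/mt0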
In this example, the backup data is written to scratch volumes. You can also
specify volumes by name. After a full backup, you can perform incremental
backups, which copy only the changes to the database since the previous backup.
To do an incremental backup of the database to the TAPECLASS device class,
enter:
backup db type=incremental devclass=tapeclass
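The first backup in the series must be a full backup; for example:
backup db type=full devclass=tapeclass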
Database and recovery log setup (the output from detailed queries of your
database and recovery log volumes)
Note: To perform a DSMSERV RESTORE DB operation, the database backup
volumes must be in a library of library type MANUAL or SCSI.
DRM can query the server and generate a current, detailed disaster
recovery plan for your installation.
Attention: Do not start the server until after you restore the database (the next
step). Starting the server before the restore would destroy any existing volume
history files.
4. Issue the DSMSERV RESTORE DB utility. For example, to restore the database
to a backup series that was created on April 19, 1999, enter:
dsmserv restore db todate=04/19/1999
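If you also need to limit the restore to a time of day on that date, the
TOTIME parameter can be added; for example (the time is illustrative):
dsmserv restore db todate=04/19/1999 totime=13:30:00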
Note: If the volume history file is not available, you must mount tape
volumes in the correct order or specify their order on the DSMSERV
RESTORE DB utility.
b. Using the device configuration file, requests a mount of the first volume,
which should contain the beginning of the full backup.
c. Restores the backup data from the first volume.
d. Continues to request mounts and to restore data from the backup volumes
that contain the full backup and any incremental backups that occurred on
or before the date specified.
From the old volume history information (generated by the QUERY
VOLHISTORY command) you need a list of all the volumes that were reused
(STGREUSE), added (STGNEW), and deleted (STGDELETE) since the original
backup. Use this list to perform the rest of this procedure.
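To generate this list, you can query the volume history by type; for example:
query volhistory type=stgreuse
query volhistory type=stgnew
query volhistory type=stgdelete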
Notes:
1. Some files may be lost if they were moved since the backup (due to migration,
reclamation, or move data requests) and the space occupied by those files has
been reused. You can minimize this loss by using the REUSEDELAY parameter
when defining or updating sequential access storage pools. This parameter
delays volumes from being returned to scratch or being reused. See Delaying
Reuse of Volumes for Recovery Purposes on page 553 for more information on
the REUSEDELAY parameter.
2. By backing up your storage pool and your database, you reduce the risk of
losing data. To further minimize loss of data, you can:
v Mark the backup volumes in the copy storage pool as OFFSITE and move
them to an offsite location.
In this way the backup volumes are preserved and are not reused or
mounted until they are brought onsite. Ensure that you mark the volumes as
OFFSITE before you back up the database.
To avoid having to mark volumes as offsite or physically move volumes:
– Specify a device class of DEVTYPE=SERVER in your database backup.
– Back up a primary storage pool to a copy storage pool associated with a
device class of DEVTYPE=SERVER.
v Back up the database immediately after you back up the storage pools.
v Turn off migration and reclamation while you back up the database.
v Do not perform any MOVE DATA operations while you back up the
database.
v Use the REUSEDELAY parameter's interval to prevent your copy storage
pool volumes from being reused or deleted before they might be needed.
3. If your old volume history file shows that any of the copy storage pool
volumes needed to restore your storage pools have been reused (STGREUSE) or
deleted (STGDELETE), you may not be able to restore all your files. You can
avoid this problem by including the REUSEDELAY parameter when you define
your copy storage pools.
4. After a restore, the volume inventories for Tivoli Storage Manager and for your
tape management system may be inconsistent. For example, after a database
backup, a new volume is added to Tivoli Storage Manager. The tape
management system inventory records the volume as belonging to Tivoli
Storage Manager. If the database is restored from the backup, Tivoli Storage
Manager has no record of the added volume, but the tape management system
does. You must synchronize these inventories. Similarly, the volume inventories
for Tivoli Storage Manager and for any automated libraries may also be
inconsistent. If they are, issue the AUDIT LIBRARY command to synchronize
these inventories.
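The REUSEDELAY parameter mentioned in the notes above is set on the storage
pool definition; for example, to delay the reuse of volumes in a sequential
storage pool named TAPEPOOL by seven days, you might enter:
update stgpool tapepool reusedelay=7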
For example, the most recent backup series consists of three operations:
0   A full backup
1   An incremental backup
2   A second, most recent incremental backup
Attention: If the original database or recovery log volumes are available, you
issue only the DSMSERV RESTORE DB utility. However, if those volumes have
been lost, you must first issue the DSMSERV FORMAT command to initialize the
database and recovery log, then issue the DSMSERV RESTORE DB utility.
1:30 p.m.
Client A's files on Volume 1 (disk) are migrated to tape (Volume 2).
3:00 p.m.
Client B backs up its data to Volume 1.
The server places Client B's files in the location that contained Client A's
files prior to the migration.
3:30 p.m.
The server goes down.
3:40 p.m.
The system administrator reloads the noon version of the database by
using the DSMSERV RESTORE DB utility.
4:40 p.m.
Volume 1 is audited. The following then occurs:
1. The server compares the information on Volume 1 with the restored
database (which matches the database at noon).
2. The audit does not find Client A's files on Volume 1 where the reloaded
database indicates they should be. Therefore, the server deletes these
Client A file references.
3. The database has no record that Client A's files are on Volume 2, and
the files are, in effect, lost.
4. The database has no record that Client B's files are on Volume 1, and
the files are, in effect, lost.
If roll-forward recovery had been used, the database would have been rolled
forward to 3:30 p.m., when the server went down. In this case, neither Client A's
files nor Client B's files would have been lost. If a point-in-time restore of the
database had been performed and the storage pool had been backed up, Client A's
files would not have been deleted by the volume audit. Those files could have
been restored with a RESTORE VOLUME or RESTORE STGPOOL command.
Client B's files would still have been lost, however.
You can restore a database to its most current state only by using full and
incremental backups with roll-forward mode enabled. Snapshot database backups
are complete copies of the database at a point in time.
To restore the database to its most current state, enter:
dsmserv restore db
Attention: If the original database or recovery log volumes are available, you
issue only the DSMSERV RESTORE DB utility. However, if those volumes have
been lost, you must first issue the DSMSERV FORMAT utility to initialize the
database and recovery log, then issue the DSMSERV RESTORE DB utility.
Note: Roll-forward recovery does not apply if all recovery log volumes are lost.
However, with the server running in roll-forward mode, you can still
perform point-in-time recovery in such a case.
The RESTORE STGPOOL command restores specified primary storage pools that
have files with the following problems:
v The primary copy of the file has been identified as having read errors during a
previous operation. Files with read errors are marked as damaged.
v The primary copy of the file resides on a volume that has an access mode of
DESTROYED. For information about how a volume's access mode changes to
DESTROYED, see How Restore Processing Works on page 542.
When you restore a storage pool, be prepared to provide the following
information:
Primary storage pool
Specifies the name of the primary storage pool that is being restored.
Copy storage pool
Specifies the name of the copy storage pool from which the files are to be
restored. This information is optional. If you do not specify a copy storage
pool, the server restores the files from any copy storage pool where it can
find them.
New storage pool
Specifies the name of the new primary storage pool to which to restore the
files. This information is optional. If you do not specify a new storage pool,
the server restores the files to the original primary storage pool.
Maximum number of processes
Specifies the maximum number of parallel processes that are used for
restoring files.
Preview
Specifies whether you want to preview the restore operation without
actually restoring data.
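For example, to preview a restore of the primary storage pool TAPEPOOL from
the copy storage pool COPYPOOL with two parallel processes (a sketch; the pool
names are those used later in this chapter), you might enter:
restore stgpool tapepool copystgpool=copypool maxprocess=2 preview=yes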
See Correcting Damaged Files on page 580 and Backup and Recovery
Scenarios on page 581 for examples of using the RESTORE STGPOOL command.
Use the RESTORE VOLUME command to restore all files that are stored in the
same primary storage pool and that were previously backed up to copy storage
pools. When you use the RESTORE VOLUME command, be prepared to supply
some or all of the following information:
Volume name
Specifies the name of the volume in the primary storage pool for which to
restore files.
Tip: To restore more than one volume in the same primary storage pool,
issue this command once and specify a list of volumes to be restored.
When you specify more than one volume, Tivoli Storage Manager
attempts to minimize volume mounts for the copy storage pool.
Copy storage pool name
Specifies the name of the copy storage pool from which the files are to be
restored. This information is optional. If you do not specify a particular
copy storage pool, the files are restored from any copy storage pool where
the server can find them.
New storage pool
Specifies the name of the new primary storage pool to which to restore the
files. This information is optional. If you do not specify a new storage pool,
the files are restored to the original primary storage pool.
Maximum number of processes
Specifies the maximum number of parallel processes that are used for
restoring files.
Preview
Specifies whether you want to preview the restore operation without
actually restoring data.
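For example, to restore the files from volume DSM087 by using the copy storage
pool COPYPOOL and two parallel processes (a sketch; the names are those used
elsewhere in this chapter), you might enter:
restore volume dsm087 copystgpool=copypool maxprocess=2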
See Recovering a Lost or Damaged Storage Pool Volume on page 585 for an
example of using the RESTORE VOLUME command.
If the background process is canceled, some files may already have been restored
prior to the cancellation. To display information about background processes,
use the QUERY PROCESS command.
The RESTORE VOLUME command may be run in the foreground on an
administrative client by issuing the command with the WAIT=YES parameter.
The server database contains information about files on storage pool volumes. If
there are inconsistencies between the information in the database about files and
the files actually stored in a storage pool volume, users may be unable to access
their files.
To ensure that all files are accessible on volumes in a storage pool, audit any
volumes that you suspect may have problems by using the AUDIT VOLUME
command. You can audit multiple volumes by using a time-range criterion, or you
can audit all volumes in a storage pool.
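For example, you can audit a single volume, all volumes in a storage pool, or
volumes written to within a date range (the names and dates are illustrative):
audit volume /dev/vol1 fix=no
audit volume stgpool=stpool3 fix=yes
audit volume fromdate=03/20/2002 todate=03/22/2002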
You should audit a volume when the following conditions are true:
v The location of file copies (whether a copy of the file exists in a copy storage
pool)
See Volumes in a Primary Storage Pool on page 573 and Volumes in a Copy
Storage Pool on page 574 for details on how the server handles inconsistencies
detected during an audit volume process. Check the activity log for details about
the audit operation.
The server removes the CRC values before it returns the data to the client node.
Figure 84. Data Transfer Eligible for Data Validation. (The figure shows the
numbered data-transfer paths 1 through 5 among the Tivoli Storage Manager
client, the storage agent, the Tivoli Storage Manager server, and the storage
pool.)
Table 39 provides information that relates to Figure 84. This information explains
the type of data being transferred and the appropriate command to issue.
Table 39. Setting Data Validation

Numbers in  Where to Set         Type of Data  Command           Command Parameter
Figure 84   Data Validation      Transferred
1           Node definition      See Note      See Note
2           Node definition      See Note      See Note
3           Server definition    Metadata      DEFINE SERVER,    VALIDATEPROTOCOL=ALL
            (storage agent                     UPDATE SERVER
            only)
4           Storage pool         File Data     DEFINE STGPOOL,   CRCDATA=YES
            definition issued                  UPDATE STGPOOL
            on the Tivoli
            Storage Manager
            server
5           Storage pool         File Data     DEFINE STGPOOL,   CRCDATA=YES
            definition issued                  UPDATE STGPOOL
            on the Tivoli
            Storage Manager
            server
Note: The storage agent reads the VALIDATEPROTOCOL setting for the client
from the Tivoli Storage Manager server.
Figure 85 is similar to the previous figure; however, note that the top section
encompassing 1, 2, and 3 is shaded. All three of these data validations are
related to the VALIDATEPROTOCOL parameter. What is significant about this
validation is that it is active only during the client session. After validation, the
client and server discard the CRC values generated in the current session. This is
in contrast to storage pool validation (4 and 5), which is always active as long
as the storage pool CRCDATA setting is equal to YES.
The validation of data transfer between the storage pool and the storage agent 4
is managed by the storage pool CRCDATA setting defined by the Tivoli Storage
Manager server. Even though the flow of data is between the storage agent and the
storage pool, data validation is determined by the storage pool definition.
Therefore, if you always want your storage pool data validated, set your primary
storage pool CRCDATA setting to YES.
Figure 85. Protocol Data Validation versus Storage Pool Data Validation
If the network is unstable, you may decide to enable data validation only for
nodes. Tivoli Storage Manager generates a cyclic redundancy check when the data
is sent over the network to the server. Certain nodes may have more critical data
than others and may require the assurance of data validation. When you identify
the nodes that require data validation, you can choose to have only the user's
data validated or all the data validated. Tivoli Storage Manager validates both
the file data and the file metadata when you choose to validate all data. See
Validating a Node's Data During a Client Session on page 344.
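For example, to validate only file data (not metadata) for a node named NODEA
(the node name is illustrative), you might enter:
update node nodea validateprotocol=dataonly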
When you enable data validation for a server-to-server exchange or between a
storage agent and server, the server must validate all data. You can enable data
validation by using the DEFINE SERVER or UPDATE SERVER command. For a
server-to-server exchange, see Using Virtual Volumes to Store Data on Another
Server on page 505 for more information. For data that is exchanged between a
storage agent and the server, refer to the IBM Tivoli Storage Manager Storage Agent
User's Guide for the storage agent's operating system.
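For example, to require protocol validation of all data exchanged with a server
definition named SERVERA (the server name is illustrative), you might enter:
update server servera validateprotocol=all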
If the network is fairly stable but your site is using new hardware devices, you
may decide to enable data validation only for storage pools. When the server
sends data to the storage pool, it generates a cyclic redundancy check (CRC)
value and stores that value with the data. The server validates the CRC value
when it audits the volume. Later, you may decide that data validation for
storage pools is no longer required after the devices prove to be stable. Refer to
Auditing a Storage Pool Volume on page 572 for more information on data
validation for storage pools.
Performance Considerations
Consider the impact on performance when you decide whether data validation is
necessary for storage pools. Data validation impacts performance because the
server requires additional CPU overhead to calculate and compare CRC values.
This method of validation is independent of validating data during a client session
with the server. When you choose to validate storage pool data, there is no
performance impact on the client.
If you enable CRC for storage pools on devices that later prove to be stable, you
can increase performance by updating the storage pool definition to disable data
validation.
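For example, to disable data validation for a storage pool named TAPEPOOL, you
might enter:
update stgpool tapepool crcdata=no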
The audit volume process is run in the background and the server returns the
following message:
ANR2313I Audit Volume NOFIX process started for volume /dev/vol1
(process id 4).
The following figure displays an example of the audit volume process report.
To display the results of a volume audit after it has completed, you can issue the
QUERY ACTLOG command.
If you request that the server audit volume VOL3, the server first accesses volume
VOL2, because File D begins at VOL2. When volume VOL2 is accessed, the server
only audits File D. It does not audit the other files on this volume.
Because File D spans multiple volumes, the server accesses volumes VOL2, VOL3,
VOL4, and VOL5 to ensure that there are no inconsistencies between the database
and the storage pool volumes.
For volumes that require manual mount and dismount operations, the audit
process can require significant manual intervention.
The server audits all volumes that were written to starting at 12:00:01 a.m. on
March 20 and ending at 11:59:59 p.m. on March 22, 2002.
When you use the parameters FROMDATE, TODATE, or both, the server limits the
audit to only the sequential media volumes that meet the date criteria and
automatically includes all online disk volumes. When you include the STGPOOL
parameter, you limit the audit to the volumes, including any disk volumes, in
the specified storage pool.
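For example, an administrative schedule can run the audit every two days at
9:00 p.m. (the schedule name is illustrative):
define schedule auditstg type=administrative cmd="audit volume stgpool=stpool3" active=yes starttime=21:00 period=2 perunits=days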
This command will audit all volumes in the STPOOL3 storage pool every two
days. The audit operation will begin at 9:00 p.m.
If a client tries to access a file stored in TAPEPOOL and a read error occurs, the file
in TAPEPOOL is automatically marked as damaged. Future accesses to the file
automatically use the copy in COPYPOOL as long as the copy in TAPEPOOL is
marked as damaged.
To restore any damaged files in TAPEPOOL, you can define a schedule that issues
the following command periodically:
restore stgpool tapepool
You can check for and replace any files that develop data-integrity problems in
TAPEPOOL or in COPYPOOL. For example, every three months, query the
volumes in TAPEPOOL and COPYPOOL by entering the following commands:
query volume stgpool=tapepool
query volume stgpool=copypool
Then issue the following command for each volume in TAPEPOOL and
COPYPOOL:
audit volume <volname> fix=yes
If a read error occurs on a file in TAPEPOOL, that file is marked damaged and an
error message is produced. If a read error occurs on a file in COPYPOOL, that
file is deleted and a message is produced.
Restore damaged primary files by entering:
restore stgpool tapepool
point in time and that database references to files in the storage pool are
valid. For example, to retain database backups for seven days, the administrator
sets REUSEDELAY to 7.
v Back up its storage pools every night.
v Perform a full backup of the database once a week and incremental backups on
the other days.
v Ship the database and copy storage pool volumes to an offsite location every
day.
To protect client data, the administrator does the following:
1. Creates a copy storage pool named DISASTER-RECOVERY. Only scratch tapes
are used, and the maximum number of scratch volumes is set to 100. The copy
storage pool is defined by entering:
define stgpool disaster-recovery tapeclass pooltype=copy
maxscratch=100
4. Does the following operations nightly after the scheduled operations have
completed:
a. Backs up the volume history and device configuration files. If they have
changed, back up the server options files and the database and recovery log
setup information.
b. Moves the volumes marked offsite, the database backup volumes, volume
history files, device configuration files, server options files and the database
and recovery log setup information to the offsite location.
c. Identifies offsite volumes that should be returned onsite by using the
QUERY VOLUME command:
query volume stgpool=disaster-recovery access=offsite status=empty
Do the following:
1. Install Tivoli Storage Manager on the replacement processor with the same
server options and the same size database and recovery log as on the
destroyed system. For example, to initialize the database and recovery log,
enter:
dsmserv format 1 log1 1 dbvol1
2. Move the latest backup and all of the DISASTER-RECOVERY volumes onsite
from the offsite location.
Note: Do not change the access mode of these volumes until after you have
completed step 7.
3. If a current, undamaged volume history file exists, save it.
4. Restore the volume history and device configuration files, the server options,
and the database and recovery log setup. For example, the recovery site might
require different device class, library, and drive definitions. For more
information, see Updating the Device Configuration File on page 561.
5. Restore the database from the latest backup level by issuing the DSMSERV
RESTORE DB utility (see Recovering Your Server Using Database and
Storage Pool Backups on page 563).
6. Change the access mode of all the existing primary storage pool volumes in
the damaged storage pools to DESTROYED by entering:
update volume * access=destroyed wherestgpool=backuppool
update volume * access=destroyed wherestgpool=archivepool
update volume * access=destroyed wherestgpool=spacemgpool
update volume * access=destroyed wherestgpool=tapepool
volumes from the database by using the DELETE VOLUME command with
the DISCARDDATA option. Any files backed up to these volumes cannot be
restored.
8. Change the access mode of the remaining volumes in the
DISASTER-RECOVERY pool to READWRITE by entering:
update volume * access=readwrite wherestgpool=disaster-recovery
Note: At this point, clients can access files. If a client tries to access a file that
was stored on a destroyed volume, the retrieval request goes to the
copy storage pool. In this way, clients can access their files without
waiting for the primary storage pool to be restored. When you update
volumes brought from offsite to change their access, you greatly speed
recovery time.
9. Define new volumes in the primary storage pool so the files on the damaged
volumes can be restored to the new volumes. The new volumes also let clients
back up, archive, or migrate files to the server. You do not need to perform
this step if you use only scratch volumes in the storage pool.
10. Restore files in the primary storage pool from the copies located in the
DISASTER-RECOVERY pool by entering:
restore stgpool backuppool maxprocess=2
restore stgpool archivepool maxprocess=2
restore stgpool spacemgpool maxprocess=2
restore stgpool tapepool maxprocess=2
3. Delete the volumes from the library clients that do not own the volumes.
4. Resume transactions by enabling all schedules, migration, and reclamation on
the library client and library manager servers.
Point-in-Time Restore of a Library Client Server: A point-in-time restore of a
library client server could cause volumes to be removed from the volume
inventory of a library client server and later overwritten. If a library client server
acquired scratch volumes after the point-in-time to which the server is restored,
these volumes would be set to private in the volume inventories of the library
client and library manager servers. After the restore, the volume inventory of the
library client server can be regressed to a point-in-time before the volumes were
acquired, thus removing them from the inventory. These volumes would still exist
in the volume inventory of the library manager server as private volumes owned
by the client.
The restored volume inventory of the library client server and the volume
inventory of the library manager server would be inconsistent. The volume
inventory of the library client server must be synchronized with the volume
inventory of the library manager server in order to return those volumes to scratch
and enable them to be overwritten. To synchronize the inventories, do the
following:
1. Audit the library on the library client server to synchronize the volume
inventories of the library client and library manager servers.
2. To resolve any remaining volume ownership concerns, refer to the volume
history and issue the UPDATE VOLUME command as needed.
This command produces a list of offsite volumes that contain the backed up
copies of the files that were on tape volume DSM087.
2. Set the access mode of the copy volumes identified as UNAVAILABLE to
prevent reclamation.
Note: This precaution prevents the movement of files stored on these volumes
until volume DSM087 is restored.
3. Bring the identified volumes to the onsite location and set their access mode to
READONLY to prevent accidental writes. If these offsite volumes are being
used in an automated library, the volumes must be checked into the library
when they are brought back onsite.
4. Restore the destroyed files by entering:
restore volume dsm087
This command sets the access mode of DSM087 to DESTROYED and attempts
to restore all the files that were stored on volume DSM087. The files are not
Chapter 22. Protecting and Recovering Your Server
If the original database or recovery log volumes were lost, issue the
DSMSERV FORMAT utility to initialize the database and recovery log.
5. Issue the DSMSERV RESTORE DB utility.
6. Start the library manager.
7. Issue an AUDIT LIBRARY command from each library client for each shared
library.
8. Create a list from the old volume history information (generated by the
QUERY VOLHISTORY command) that shows all of the volumes that were
reused (STGREUSE), added (STGNEW), and deleted (STGDELETE) since the
original backup. Use this list to perform the rest of this procedure.
9. Audit all disk volumes, all reused volumes, and any deleted volumes located
by the AUDIT VOLUME command using the FIX=YES parameter.
10. Issue the RESTORE STGPOOL command to restore those files detected as
damaged by the audit. Include the FIX=YES parameter on the AUDIT
VOLUME command to delete database entries for files not found in the copy
storage pool.
11. Mark as destroyed any volumes that cannot be located, and recover those
volumes from copy storage pool backups. If no backups are available, delete
the volumes from the database by using the DELETE VOLUME command
with the DISCARDDATA=YES parameter.
12. Redefine any storage pool volumes that were added since the database
backup.
9. Mark as destroyed any volumes that cannot be located, and recover those
volumes from copy storage pool backups. If no backups are available, delete
the volumes from the database by using the DELETE VOLUME command
with the DISCARDDATA=YES parameter.
10. Issue the AUDIT LIBRARY command for all shared libraries on this library
client.
11. Redefine any storage pool volumes that were added since the database
backup.
In this chapter, most examples illustrate how to perform tasks by using a Tivoli
Storage Manager command-line interface. For information about the commands,
see the Administrator's Reference, or issue the HELP command from the command line
of a Tivoli Storage Manager administrative client.
Tivoli Storage Manager tasks can also be performed from the administrative Web
interface. For more information about using the administrative interface, see Quick
Start.
Note: The IBM Tivoli Storage Manager default installation directories changed
from earlier versions. If you created a recovery plan file with ADSM Version
3 Release 1, some names in that file may no longer be valid. After installing
Tivoli Storage Manager, immediately back up your storage pools and
database and create a new recovery plan file.
You can use a recovery plan file and database backup that were created on
an ADSM Version 3 Release 1 server to restore a Tivoli Storage Manager
server. After the restore is complete, start the server with the following
command:
dsmserv upgradedb
When the recovery plan file is generated, you can limit processing to specified pools.
The default at installation: All copy storage pools.
To change the default: SET DRMCOPYSTGPOOL
For example, to specify that only the copy storage pools named COPY1 and COPY2
are to be processed, enter:
set drmcopystgpool copy1,copy2
Notes:
1. To remove any specified copy storage pool names, and thus select all copy
storage pools, specify a null string ("") in SET DRMCOPYSTGPOOL.
2. If you specify both primary and copy storage pools, the specified copy storage
pools should be those used to back up the specified primary storage pools.
To override the default: Specify copy storage pool names in the PREPARE command.
To restore a primary storage pool volume, mark the original volume destroyed and
create a replacement volume having a unique name. You can specify a character to be
appended to the name of the original volume in order to create a name for the
replacement volume. This character can help you find the replacement volume names
in the disaster recovery plan.
The default identifier at installation: @
To change the default: SET DRMPLANVPOSTFIX
For example, to use the character r, enter:
set drmplanvpostfix r
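The naming rule itself is simple enough to sketch. The following Python fragment is illustrative only (the helper name is invented for this example; PREPARE applies the postfix internally when it writes the plan file):

```python
# Illustrative sketch only: PREPARE derives replacement volume names
# internally. The helper function below is hypothetical.

def replacement_volume_name(original, postfix="@"):
    """Append the DRMPLANVPOSTFIX character to the original volume name."""
    return original + postfix

# With SET DRMPLANVPOSTFIX r, a destroyed volume DSM087 would get a
# replacement volume named DSM087r in the disaster recovery plan.
print(replacement_volume_name("DSM087", "r"))
```

Because the postfix character is constant, searching the plan file for it quickly locates every replacement volume name.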
Table 40. Defaults for the Disaster Recovery Plan File (continued)
Recovery instructions prefix
You can specify a prefix for the names of the recovery instructions source files in the
recovery plan file.
The default at installation: For a description of how DRM determines the default
prefix, see the INSTRPREFIX parameter of the PREPARE command section in the
the Administrator's Reference, or enter HELP PREPARE from the administrative client
command line.
To set a default: SET DRMINSTRPREFIX
For example, to specify the prefix as /u/recovery/plans/rpp, enter:
set drminstrprefix /u/recovery/plans/rpp
The disaster recovery plan file will include, for example, the following file:
/u/recovery/plans/rpp.RECOVERY.INSTRUCTIONS.GENERAL
To override the default: The INSTRPREFIX parameter with the PREPARE command
Prefix for the recovery plan file
You can specify a prefix to the path name of the recovery plan file. DRM uses this
prefix to identify the location of the recovery plan file and to generate the macro and
script file names included in the RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE
and RECOVERY.SCRIPT.NORMAL.MODE stanzas.
The default at installation: For a description of how DRM determines the default
prefix, see the PLANPREFIX parameter of the PREPARE command section in the
the Administrator's Reference, or enter HELP PREPARE from the administrative client
command line.
To change the default: SET DRMPLANPREFIX
For example, to specify the prefix as /u/server/recoveryplans/, enter:
set drmplanprefix /u/server/recoveryplans/
The disaster recovery plan file name created by PREPARE processing will be in the
following format:
/u/server/recoveryplans/20000603.013030
To override the default: The PLANPREFIX parameter with the PREPARE command
The disaster recovery plan
expiration period
You can set the numbers of days after creation that a disaster recovery plan file stored
on a target server expires. After the number of days has elapsed, all recovery plan
files that meet both of the following conditions are eligible for expiration:
v The last recovery plan associated with the database series is older than the set
number of days.
v The recovery plan file is not associated with the most recent backup series.
The default at installation: 60 days
To change the default: SET DRMRPFEXPIREDAYS
For example, to change the time to 90 days, enter:
set drmrpfexpiredays 90
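The two eligibility conditions can be expressed as a short check. This is a hedged model for illustration (the function and its arguments are invented, not server code):

```python
from datetime import date, timedelta

# Sketch of the two expiration conditions for a recovery plan file
# stored on a target server (hypothetical helper, not server code).

def rpf_eligible_for_expiration(last_plan_created, today, expire_days,
                                is_most_recent_backup_series):
    """True only if the last plan of the series is older than the
    expiration period AND the plan is not associated with the most
    recent backup series."""
    too_old = (today - last_plan_created) > timedelta(days=expire_days)
    return too_old and not is_most_recent_backup_series

# A 100-day-old plan from a superseded series expires at the 90-day setting:
print(rpf_eligible_for_expiration(date(2000, 6, 3), date(2000, 9, 11),
                                  90, False))
```

Note that both conditions must hold: even a very old plan file is kept if it belongs to the most recent backup series.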
MOVE DRMEDIA and QUERY DRMEDIA can process copy storage pool volumes in
the MOUNTABLE state. You can limit processing to specified copy storage pools.
The default at installation: All copy storage pool volumes in the MOUNTABLE state
To change the default: SET DRMCOPYSTGPOOL
To override the default: COPYSTGPOOL parameter on MOVE DRMEDIA or QUERY
DRMEDIA
MOVE DRMEDIA generates a location name for volumes that move to the
NOTMOUNTABLE state.
The default at installation: NOTMOUNTABLE
To change the default: SET DRMNOTMOUNTABLENAME
For example, to specify a location named LOCAL, enter:
set drmnotmountablename local
Location name for volumes that move to the COURIER or COURIERRETRIEVE state
MOVE DRMEDIA generates a location name for volumes that are changing from
NOTMOUNTABLE to COURIER or from VAULTRETRIEVE to COURIERRETRIEVE.
The default at installation: COURIER
To change the default: SET DRMCOURIERNAME
For example, to specify a courier named Joe's Courier Service, enter:
set drmcouriername "Joe's Courier Service"
Reading labels of checked-out volumes
Determines whether DRM reads the sequential media labels of volumes that are
checked out with MOVE DRMEDIA.
Note: This command does not apply to 349X library types.
The default at installation: DRM reads the volume labels.
To change the default: SET DRMCHECKLABEL
For example, to specify that DRM should not read the volume labels, enter:
set drmchecklabel no
A database backup series (full plus incremental and snapshot) is eligible for
expiration if all of these conditions are true:
v The volume state is VAULT or the volume is associated with a device type of
SERVER (for virtual volumes).
v It is not the most recent database backup series.
v The last volume of the series exceeds the expiration value (the number of days
since the last backup in the series).
The default at installation: 60 days
To change the default: SET DRMDBBACKUPEXPIREDAYS
For example, to set the expiration value to 30 days, enter:
set drmdbbackupexpiredays 30
Whether to process backup volumes of the FILE device type
At installation, MOVE DRMEDIA and QUERY DRMEDIA will not process backup
volumes that are associated with a device type of FILE.
The default at installation: Backup volumes of the FILE device type are not
processed
To change the default: SET DRMFILEPROCESS
To allow processing, enter:
set drmfileprocess yes
Vault name
MOVE DRMEDIA uses the vault name to set the location of volumes that are moving
from the COURIER state to the VAULT state.
The default at installation: The vault name is set to VAULT.
To change the default: SET DRMVAULTNAME
For example, to specify the vault name as IRONVAULT, the contact name as J.
SMITH, and the telephone number as 1-555-000-0000, enter:
set drmvaultname "Ironvault, J. Smith, 1-555-000-0000"
Recovery Instructions for Tivoli Storage Manager Server ACMESRV on system ZEUS
Joe Smith (wk 002-000-1111 hm 002-003-0000): primary system programmer
Sally Doe (wk 002-000-1112 hm 002-005-0000): primary recovery administrator
Jane Smith (wk 002-000-1113 hm 002-004-0000): responsible manager
Security Considerations:
Joe Smith has the password for the Admin ID ACMEADM. If Joe is unavailable,
you need to either issue SET AUTHENTICATION OFF or define a new
administrative user ID at the replacement Tivoli Storage Manager server console.
RECOVERY.INSTRUCTIONS.OFFSITE
Include information such as the offsite vault location, courier's name, and
telephone numbers. For example:
Our offsite vault location is Ironvault, Safetown, Az.
The phone number is 1-800-000-0008. You need to contact them directly
to authorize release of the tapes to the courier.
Our courier's name is Fred Harvey. You can contact him at 1-800-444-0000.
Since our vault is so far away, be sure to give the courier a list
of both the database backup and copy storage pool volumes required. Fred
is committed to returning these volumes to us in less than 12 hours.
RECOVERY.INSTRUCTIONS.INSTALL
Include information about restoring the base server system from boot
media or, if boot media is unavailable, about server installation and the
location of installation volumes. For example:
Most likely you will not need to reinstall the Tivoli Storage Manager server and
administrative clients because we use
mksysb to back up the rootvg volume group, and the Tivoli Storage Manager server
code and configuration files exist in this group.
However, if you cannot do a mksysb restore of the base server system,
and instead have to start with a fresh AIX build, you may need
to add Tivoli Storage Manager server code to that AIX system.
The install volume for the Tivoli Storage Manager server is INS001. If that is
lost, you will need to contact Copy4You Software, at 1-800-000-0000, and
obtain a new copy. Another possibility is the local IBM Branch office
at 555-7777.
RECOVERY.INSTRUCTIONS.DATABASE
Include information about how to recover the database and about the
hardware and disk space requirements. For example:
You will need to find replacement disk space for the server database. We
have an agreement with Joe Replace that in the event of a disaster, he
will provide us with disk space.
RECOVERY.INSTRUCTIONS.STGPOOL
Include information on primary storage pool recovery instructions. For
example:
Do not worry about the archive storage pools during this disaster recovery.
Focus on migration and backup storage pools.
The most important storage pool is XYZZZZ.
Server Machine
1. Specify server machine information:
Issue the DEFINE MACHINE command with ADSMSERVER=YES. For example,
to define the server machine TSM1 with a priority of 1, enter:
define machine tsm1 adsmserver=yes priority=1
Client Machines
2. Specify the client node location and business priority:
Issue the DEFINE MACHINE command. For example, to define machine
MACH22 in building 021, 2nd floor, in room 2929, with a priority of 1, enter:
define machine mach22 building=021 floor=2 room=2929 priority=1
To query machine definitions, issue the QUERY MACHINE command. See the
example in Client Recovery Scenario on page 612.
4. To add machine characteristics and recovery instructions to the database, issue
the INSERT MACHINE command. You must first query the operating system
to identify the characteristics for your client machine. You can add the
information manually or use an awk script. A sample program is shipped with
DRM.
v Add information manually:
The following partial output is from a query on an AIX client machine.
--1 Host Name: mach22 with 256 MB Memory Card
--256 MB Memory Card
----4 Operating System: AIX Version 4 Release 3
----- Hardware Address: 10:00:5x:a8:6a:46
You should define the recovery media after a client machine configuration
changes. For example, after you have installed a new level of AIX on a client
machine and created a bootable image using mksysb, issue the DEFINE
RECOVERYMEDIA command to define the new mksysb volumes.
To query your recovery media definitions, issue the QUERY
RECOVERYMEDIA command with the FORMAT=DETAILED parameter.
2. Associate one or more machines with recovery media. Use the association
information to identify the boot media to use in the replacement machines. For
example, to associate machine MACH255 with recovery media
TELLERWRKSTNIMAGE, issue the following command:
define recmedmachassociation tellerwrkstnimage mach255
3. When the boot media is moved offsite, update its location. For example, to
update the location of boot media TELLERWRKSTNIMAGE to the offsite
location IRONVAULT, issue the following command:
update recoverymedia tellerwrkstnimage location=ironvault
You can define media that contain softcopy manuals that you would need during
recovery. For example, to define a CD-ROM containing the AIX 4.3 manuals that
are on volume CD0001, enter:
define recoverymedia aix43manuals type=other volumes=cd0001
description="AIX 4.3 Bookshelf"
Commands for performing database recovery and primary storage pool recovery
Commands for registering licenses
Instructions that you define
Machine and recovery media information that you define
For details about the recovery plan file, see The Disaster Recovery Plan File on
page 619.
DRM creates one copy of the disaster recovery plan file each time you issue the
PREPARE command. You should create multiple copies of the plan for safekeeping.
For example, keep copies in print, on diskettes, on NFS-mounted disk space that is
located offsite, or on a remote server.
Before creating a disaster recovery plan, back up your storage pools and then back
up the database. See Backing Up Storage Pools on page 549 and Backing Up the
Database on page 553 for details about these procedures.
If you manually send backup media offsite, see Moving Backup Volumes Offsite
on page 604. If you use virtual volumes, see Using Virtual Volumes to Store Data
on Another Server on page 505.
When your backups are both offsite and marked offsite, you can create a disaster
recovery plan.
You can use the Tivoli Storage Manager scheduler to periodically run the
PREPARE command (see Chapter 17, Automating Server Operations, on
page 401).
Note: DRM creates a plan that assumes that the latest database full plus
incremental series would be used to restore the database. However, you may
want to use DBSNAPSHOT backups for disaster recovery and retain your
full plus incremental backup series on site to recover from possible
availability problems. In this case, you must specify the use of
DBSNAPSHOT backups in the PREPARE command. For example:
prepare source=dbsnapshot
Recovery plan files that are stored locally are not automatically expired. You
should periodically delete down-level recovery plan files manually.
DRM appends to the file name the date and time (yyyymmdd.hhmmss). For
example:
/u/server/recoveryplans/20000925.120532
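The file name is simply the plan prefix with that timestamp appended. A brief sketch (the helper name is invented for illustration):

```python
from datetime import datetime

# Sketch: how a plan file name is composed from the DRMPLANPREFIX
# value and the creation timestamp (yyyymmdd.hhmmss).

def plan_file_name(plan_prefix, created):
    """Append the creation timestamp to the configured plan prefix."""
    return plan_prefix + created.strftime("%Y%m%d.%H%M%S")

print(plan_file_name("/u/server/recoveryplans/",
                     datetime(2000, 9, 25, 12, 5, 32)))
# prints /u/server/recoveryplans/20000925.120532
```

Because the timestamp sorts lexically, listing the prefix directory shows plan files in creation order, which makes manual cleanup of down-level plans straightforward.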
The recovery plan file is written as an object on the target server, and a volume
history record is created on the source server. For more about recovery plan files
that are stored on target servers, see Displaying Information about Recovery Plan
Files.
You can also issue the QUERY VOLHISTORY command to display a list of
recovery plan files for the source server. Specify recovery plan files that were
created assuming either full plus incremental database backups (TYPE=RPFILE)
or database snapshot backups (TYPE=RPFSNAPSHOT). For example:
query volhistory type=rpfile
v From the target server: Issue a QUERY RPFILE command that specifies the node
name associated with the server or servers that prepared the plan. For example,
to display a list of all recovery plan files that have been saved in the target
server, enter:
query rpfile nodename=*
v From the target server: Issue the following command for a recovery plan file
created on August 31, 2000 at 4:50 a.m. on a source server named MARKETING
whose node name is BRANCH8:
query rpfcontent marketing.20000831.045000 nodename=branch8
Notes:
1. You cannot issue these commands from a server console.
2. An output delay can occur when the plan file is located on tape.
See The Disaster Recovery Plan File on page 619 for an example of the contents
of a recovery plan file.
To display a list of recovery plan files, use the QUERY RPFILE command. See
Displaying Information about Recovery Plan Files on page 600 for more
information.
All recovery plan files that meet the criteria are eligible for expiration if both of the
following conditions exist:
v The last recovery plan file of the series is over 90 days old.
v The recovery plan file is not associated with the most recent backup series. A
backup series consists of a full database backup and all incremental backups that
apply to that full backup. Another series begins with the next full backup of the
database.
Expiration applies to plan files based on both full plus incremental and snapshot
database backups.
To limit the operation to recovery plan files that were created assuming database
snapshot backups, specify TYPE=RPFSNAPSHOT.
Task: Send backup volumes offsite and back onsite (required privilege class:
unrestricted storage or operator)
Offsite recovery media management does not process virtual volumes. To display
all virtual copy storage pool and database backup volumes that have their backup
objects on the remote target server, issue the following command:
query drmedia * wherestate=remote
The disaster recovery plan includes backup volume location information and can
provide a list of offsite volumes required to restore a server.
The following diagram shows the typical life cycle of the recovery media:
Figure 88. Recovery Media Life Cycle (volumes move offsite through the
NOTMOUNTABLE, COURIER, and VAULT states, and return onsite through the
VAULTRETRIEVE, COURIERRETRIEVE, and ONSITERETRIEVE states)
DRM assigns the following states to volumes. The location of a volume is known
at each state.
MOUNTABLE
The volume contains valid data, and Tivoli Storage Manager can access it.
NOTMOUNTABLE
The volume contains valid data and is onsite, but Tivoli Storage Manager
cannot access it.
COURIER
The volume contains valid data and is in transit to the vault.
VAULT
The volume contains valid data and is at the vault.
VAULTRETRIEVE
The volume, which is located at the offsite vault, no longer contains valid
data and is to be returned to the site. For more information on reclamation
of offsite copy storage pool volumes, see Reclamation of Offsite Volumes
on page 219. For information on expiration of database backup volumes,
see step 1 on page 605.
COURIERRETRIEVE
The volume no longer contains valid data and is in the process of being
returned by the courier.
ONSITERETRIEVE
The volume no longer contains valid data and has been moved back to the
onsite location. The volume records of database backup and scratch copy
storage pool volumes are deleted from the database. For private copy
storage pool volumes, the access mode is updated to READWRITE.
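The states above form a forward-only cycle. As a rough model only (not server code; MOVE DRMEDIA drives most transitions, while the change from VAULT to VAULTRETRIEVE actually happens when the data on a volume expires or is reclaimed):

```python
# Simplified, forward-only model of the DRM media states.
# The helper below is hypothetical and for illustration only.
ORDER = ["MOUNTABLE", "NOTMOUNTABLE", "COURIER", "VAULT",
         "VAULTRETRIEVE", "COURIERRETRIEVE", "ONSITERETRIEVE"]

def next_state(current, tostate=None):
    """Return the state a volume would move to: the next state in the
    cycle, or an explicit forward TOSTATE target (skipping states)."""
    i = ORDER.index(current)
    if tostate is None:
        return ORDER[i + 1]          # step to the adjacent state
    if ORDER.index(tostate) <= i:    # volumes never move backward
        raise ValueError("volumes only move forward in the cycle")
    return tostate
```

This mirrors the behavior described later for the TOSTATE parameter: you can jump, for example, from NOTMOUNTABLE directly to VAULT, but never in the reverse direction.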
Volume Name     State        Last Update Date/Time   Automated LibName
-----------     ---------    ---------------------   -----------------
TPBK05          Mountable    01/01/2000 12:00:31     LIBRARY
TPBK99          Mountable    01/01/2000 12:00:32     LIBRARY
TPBK06          Mountable    01/01/2000 12:01:03     LIBRARY
For all volumes in the MOUNTABLE state, DRM does the following:
v Updates the volume state to NOTMOUNTABLE and the volume location
according to the SET DRMNOTMOUNTABLENAME command. If this command
has not been issued, the default location is NOTMOUNTABLE.
v For a copy storage pool volume, updates the access mode to unavailable.
v For a volume in an automated library, checks the volume out of the library.
Notes:
a. During checkout processing, SCSI libraries request operator intervention. To
bypass these requests and eject the cartridges from the library, first issue the
following command:
move drmedia * wherestate=mountable remove=no
From this list identify and remove the cartridges (volumes) from the library.
b. For the 349X library type, if the number of cartridges to be checked out of
the library is greater than the number of slots in the I/O station, you can
define a high capacity area in your library. Then use the following
command to eject the cartridges to the high capacity area, rather than to the
I/O station:
move drmedia * wherestate=mountable remove=bulk
3. Send the volumes to the offsite vault. Issue the following command to have
DRM select volumes in the NOTMOUNTABLE state:
move drmedia * wherestate=notmountable
For all volumes in the NOTMOUNTABLE state, DRM updates the volume state
to COURIER and the volume location according to the SET
DRMCOURIERNAME command. If the SET command has not yet been issued,
the default location is COURIER. For more information, see Specifying Defaults
for Offsite Recovery Media Management on page 592.
4. When the vault location confirms receipt of the volumes, issue the MOVE
DRMEDIA command in the COURIER state. For example:
move drmedia * wherestate=courier
For all volumes in the COURIER state, DRM updates the volume state to
VAULT and the volume location according to the SET DRMVAULTNAME
command. If the SET command has not yet been issued, the default location is
VAULT. For more information, see Specifying Defaults for Offsite Recovery
Media Management on page 592.
5. To display a list of volumes that contain valid data at the vault, issue the
following command:
query drmedia wherestate=vault
Volume Name     State     Last Update Date/Time   Automated LibName
-----------     -----     ---------------------   -----------------
TAPE0P          Vault     01/05/2000 10:53:20
TAPE1P          Vault     01/05/2000 10:53:20
DBT02           Vault     01/05/2000 10:53:20
TAPE3S          Vault     01/05/2000 10:53:20
6. If you do not want to step through all the states, you can use the TOSTATE
parameter on the MOVE DRMEDIA command to specify the destination state.
For example, to transition the volumes from NOTMOUNTABLE state to
VAULT state, issue the following command:
move drmedia * wherestate=notmountable tostate=vault
For all volumes in the NOTMOUNTABLE state, DRM updates the volume state
to VAULT and the volume location according to the SET DRMVAULTNAME
command. If the SET command has not yet been issued, the default location is
VAULT.
See Staying Prepared for a Disaster on page 608 for an example that
demonstrates sending server backup volumes offsite using MOVE DRMEDIA and
QUERY DRMEDIA commands.
3. After the vault location acknowledges that the volumes have been given to the
courier, issue the following command:
move drmedia * wherestate=vaultretrieve
The server does the following for all volumes in the VAULTRETRIEVE state:
v Change the volume state to COURIERRETRIEVE.
v Update the location of the volume according to what is specified in the SET
DRMCOURIERNAME command. For more information, see Specifying
Defaults for Offsite Recovery Media Management on page 592.
4. When the courier delivers the volumes, acknowledge that the courier has
returned the volumes onsite, by issuing:
move drmedia * wherestate=courierretrieve
The server does the following for all volumes in the COURIERRETRIEVE state:
v The volumes are now onsite and can be reused or disposed of.
v The database backup volumes are deleted from the volume history table.
v For scratch copy storage pool volumes, the record in the database is deleted.
For private copy storage pool volumes, the access is updated to read/write.
5. If you do not want to step through all the states, you can use the TOSTATE
parameter on the MOVE DRMEDIA command to specify the destination state.
For example, to transition the volumes from VAULTRETRIEVE state to
ONSITERETRIEVE state, issue the following command:
move drmedia * wherestate=vaultretrieve tostate=onsiteretrieve
The server does the following for all volumes in the VAULTRETRIEVE
state:
v The volumes are now onsite and can be reused or disposed of.
v The database backup volumes are deleted from the volume history table.
v For scratch copy storage pool volumes, the record in the database is deleted.
For private copy storage pool volumes, the access is updated to read/write.
Daily Operations
Day 5
1. Back up client files.
2. Back up the primary storage pools.
3. Back up the database (for example, a database snapshot backup).
4. Send the backup volumes and a list of expired volumes to be reclaimed
to the vault.
5. Generate the disaster recovery plan.
b. Send the volumes offsite and record that the volumes were given to the
courier:
move drmedia * wherestate=notmountable
8. Give the courier the database and storage pool backup tapes, the recovery
plan file diskette, and the list of volumes to be returned from the vault.
9. The courier gives you any tapes that were on the previous day's list of
volumes to be returned from the vault.
Update the state of these tapes and check them into the library:
move drmedia * wherestate=courierretrieve cmdf=/drm/checkin.libvol
cmd="checkin libvol libauto &vol status=scratch"
The volume records for the tapes that were in the COURIERRETRIEVE state
are deleted from the database. The MOVE DRMEDIA command also generates
the CHECKIN LIBVOL command for each tape processed in the file
/drm/checkin.libvol. For example:
checkin libvol libauto tape01 status=scratch
checkin libvol libauto tape02 status=scratch
...
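The expansion of the CMD template can be sketched as a simple substitution (illustrative only; MOVE DRMEDIA performs this itself when it writes the file named by CMDFILE, and the function name here is invented):

```python
# Sketch: the CMD template's &vol substitution variable expands into
# one command per processed tape in the CMDFILE. Illustrative only.

def expand_checkin_commands(template, volumes):
    """Replace &vol with each processed volume name, yielding one
    command line per tape for the command file."""
    return [template.replace("&vol", vol) for vol in volumes]

for line in expand_checkin_commands(
        "checkin libvol libauto &vol status=scratch",
        ["tape01", "tape02"]):
    print(line)
```

Each processed volume therefore produces its own CHECKIN LIBVOL command, which can then be run as a macro to check the returned tapes back into the library as scratch volumes.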
10. The courier takes the database and storage pool backup tapes, the recovery
plan diskette, and the list of volumes to return from the vault.
11. Call the vault and verify that the backup tapes arrived and are secure, and
that the tapes to be returned to the site have been given to the courier.
12. Set the location of the volumes sent to the vault:
move drmedia * wherestate=courier
13. Set the location of the volumes given to the courier by the vault:
move drmedia * wherestate=vaultretrieve
Restoration from the mksysb tapes includes re-creating the root volume group
and the file systems where the database, recovery log, and storage pool disk
volumes are located.
7. Review the Tivoli Storage Manager macros contained in the recovery plan.
If, at the time of the disaster, the courier had not picked up the previous
night's database and storage pool incremental backup volumes but they were
not destroyed, remove the entry for the storage pool backup volumes from the
COPYSTGPOOL.VOLUMES.DESTROYED file.
8. If some required storage pool backup volumes could not be retrieved from the
vault, remove the volume entries from the
COPYSTGPOOL.VOLUMES.AVAILABLE file.
9. If all primary volumes were destroyed, no changes are required to the
PRIMARY.VOLUMES script and Tivoli Storage Manager macro files.
10. Review the device configuration file to ensure that the hardware configuration
at the recovery site is the same as the original site. Any differences must be
updated in the device configuration file. Examples of configuration changes
that require updates to the configuration information are:
v Different device names
v Use of a manual library instead of an automated library
v For automated libraries, the requirement of manually placing the database
backup volumes in the automated library and updating the configuration
information to identify the element within the library. This allows the server
to locate the required database backup volumes.
For information about updating the device configuration file, see Updating
the Device Configuration File on page 561.
11. To restore the database to a point where clients can be recovered, invoke the
RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE script file. Enter the script
file name at the command prompt. As an alternative, you can use the recovery
script as a guide and manually issue the steps.
The following are some sample steps from a recovery script:
a. Copy the Tivoli Storage Manager server options file from the
DSMSERV.OPT file to its original location.
b. Copy the volume history file required by database restore processing from
the VOLUME.HISTORY.FILE file to its original location.
Note: Use this copy of the volume history file unless you have a more
recent copy (after the disaster occurred).
c. Copy the device configuration file required by database restore processing
from the DEVICE.CONFIGURATION.FILE file to its original location.
d. Create the Tivoli Storage Manager server recovery log and database
volumes using DSMFMT.
e. Issue the DSMSERV FORMAT command to format the recovery log and
database files.
f. Issue the DSMSERV RESTORE DB command.
g. Start the server.
Notes:
a. Due to changes in hardware configuration during recovery, you might
have to update the device configuration file located in the restored Tivoli
Storage Manager database (see Updating the Device Configuration File
on page 561).
b. You can mount copy storage pool volumes upon request, check in the
volumes in advance, or manually place the volumes in the library and
ensure consistency by issuing the AUDIT LIBRARY command.
c. Use the AUDIT LIBRARY command to ensure that the restored Tivoli
Storage Manager database is consistent with the automated library
volumes.
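The library audit in note c can be issued as a single administrative command. A sketch, assuming the library is named RLLIB as in this chapter's examples (CHECKLABEL=BARCODE is optional and applies only to libraries with a barcode reader):

```
audit library rllib checklabel=barcode
```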
12. If client machines are not damaged, invoke the
RECOVERY.SCRIPT.NORMAL.MODE script file to restore the server primary
storage pools. If client machines are damaged, you may want to delay this
action until after all clients are recovered.
Note: This action is optional because Tivoli Storage Manager can access the
copy storage pool volumes directly to restore client data. Using this
feature, you can minimize client recovery time because server primary
storage pools do not have to be restored first. However, in this scenario,
the client machines were not damaged, so the focus of the
administrator is to restore full Tivoli Storage Manager server operation.
As an alternative, you can use the recovery script as a guide and manually
run each step. The steps run in this script are:
a. Create replacement primary volumes.
b. Define the replacement primary volumes to Tivoli Storage Manager.
c. Restore the primary storage pools.
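Under stated assumptions, the three steps might look like the following sketch for a DISK-class storage pool named BACKUPPOOL; the volume name, path, and 100 MB size are illustrative, not taken from the plan file:

```
dsmfmt -m -data /usr/tivoli/tsm/server/bin/bkvol01.dsm 100
define volume backuppool /usr/tivoli/tsm/server/bin/bkvol01.dsm
restore stgpool backuppool
```

The actual replacement volume names come from the PRIMARY.VOLUMES.REPLACEMENT stanzas of the plan file.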
13. Collect the database backup and copy storage pool volumes used in the
recovery for return to the vault. For these backup volumes to be returned to
the vault using the routine MOVE DRMEDIA process, issue the following
commands:
update volhist TPBK50 devcl=lib8mm ormstate=mountable
update volhist TPBK51 devcl=lib8mm ormstate=mountable
The copy storage pool volumes used in the recovery already have the correct
ORMSTATE.
Chapter 23. Using Disaster Recovery Manager
14. Issue the BACKUP DB command to back up the newly restored database.
15. Issue the following command to check the volumes out of the library:
move drmedia * wherestate=mountable
17. Give the volumes to the courier and issue the following command:
move drmedia * wherestate=notmountable
(Sample QUERY MACHINE output for machine POLARIS, with recovery media MKSYSB1)

Location        Machine Name
----------      ------------
IRONVAULT       POLARIS
devices
aio0        Defined                Asynchronous I/O
bus0        Available 00-00        Microchannel Bus
fd0         Available 00-00-0D-00  Diskette Drive
fda0        Available 00-00-0D     Standard I/O Diskette Adapter
fpa0        Available 00-00        Floating Point Processor
gda0        Available 00-04        Color Graphics Display Adapter
hd1         Defined                Logical volume
hd2         Defined                Logical volume
hd3         Defined                Logical volume
hdisk0      Available 00-01-00-00  400 MB SCSI Disk Drive
hdisk1      Available 00-01-00-40  Other SCSI Disk Drive
hft0        Available              High Function Terminal Subsystem
inet0       Available              Internet Network Extension
ioplanar0   Available 00-00        I/O Planar
kbd0        Defined 00-00-0K-00    United States keyboard
lb0         Available 00-02-00-20  TIVSM Library
lo0         Available              Loopback Network Interface
loglv00     Defined                Logical volume
lp0         Available 00-00-0P-00  IBM 4201 Model 3 Proprinter III
lv03        Defined                Logical volume
lv04        Defined                Logical volume
lvdd        Available              N/A
mem0        Available 00-0B        8 MB Memory Card
mem1        Available 00-0C        16 MB Memory Card
mous0       Defined 00-00-0M-00    3 button mouse
mt0         Available 00-02-00-40  TIVSM Tape Drive
ppa0        Available 00-00-0P     Standard I/O Parallel Port Adapter
pty0        Available              Asynchronous Pseudo-Terminal
rootvg      Defined                Volume group
sa0         Available 00-00-S1     Standard I/O Serial Port 1
sa1         Available 00-00-S2     Standard I/O Serial Port 2
scsi0       Available 00-01        SCSI I/O Controller
scsi1       Available 00-02        SCSI I/O Controller
sio0        Available 00-00        Standard I/O Planar
siokb0      Available 00-00-0K     Keyboard Adapter
sioms0      Available 00-00-0M     Mouse Adapter
siotb0      Available 00-00-0T     Tablet Adapter
sys0        Available 00-00        System Object
sysplanar0  Available 00-00        CPU Planar
sysunit0    Available 00-00        System Unit
tok0        Available 00-03        Token-Ring High-Performance Adapter
tr0         Available              Token Ring Network Interface
tty0        Available 00-00-S1-00  Asynchronous Terminal
tty1        Available 00-00-S2-00  Asynchronous Terminal
usrvice     Defined                Logical volume
veggie2     Defined                Volume group
logical volumes by volume group
veggie2:
LV NAME   TYPE   LPs  PPs  PVs  LV STATE     MOUNT POINT
hd2       jfs    103  103  1    open/syncd   /usr
hd1       jfs    1    1    1    open/syncd   /home
hd3       jfs    3    3    1    open/syncd   /tmp
hd9var    jfs    1    1    1    open/syncd   /var
file systems
Filesystem     Total KB   free   %used  iused  %iused  Mounted on
/dev/hd4           8192    420     94%    909     44%  /
/dev/hd9var        4096   2972     27%     87      8%  /var
/dev/hd2         421888  10964     97%  17435     16%  /usr
/dev/hd3          12288  11588      5%     49      1%  /tmp
/dev/hd1           4096   3896      4%     26      2%  /home
Here is an example of the updated device configuration file when a manual library
is used at the recovery site:
/* Device Configuration */
define devclass auto8mm_class devtype=8mm format=drive
In this example, database backup volume DBBK01 was placed in element 1 of the
automated library. Then a comment is added to the device configuration file to
identify the location of the volume. Tivoli Storage Manager needs this information
to restore the database. Comments that no longer apply at the recovery site
are removed.
Note: If you are using an automated library, you may also need to audit the
library after the database is restored in order to update the Tivoli Storage
Manager inventory of the volumes in the library.
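As an illustrative sketch (the device class name, library name, and comment wording are assumptions; your actual file contains your own DEFINE statements), the updated file might include:

```
/* Device Configuration */
define devclass auto8mm_class devtype=8mm format=drive library=auto8mmlib
/* Database backup volume DBBK01 is in element 1 of library auto8mmlib */
```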
Table 43. Administrative Tasks Associated with the Disaster Recovery Plan File
Stanza Name                                   Tasks
PLANFILE.DESCRIPTION                          None
PLANFILE.TABLE.OF.CONTENTS                    None
SERVER.REQUIREMENTS                           None
RECOVERY.INSTRUCTIONS.GENERAL
RECOVERY.INSTRUCTIONS.OFFSITE
RECOVERY.INSTRUCTIONS.INSTALL
RECOVERY.INSTRUCTIONS.DATABASE
RECOVERY.INSTRUCTIONS.STGPOOL
RECOVERY.VOLUMES.REQUIRED
RECOVERY.DEVICES.REQUIRED                     None
LOGANDDB.VOLUMES.CREATE script
LOG.VOLUMES
DB.VOLUMES
LOGANDDB.VOLUMES.INSTALL script
LICENSE.REGISTRATION macro
COPYSTGPOOL.VOLUMES.AVAILABLE macro
COPYSTGPOOL.VOLUMES.DESTROYED macro
PRIMARY.VOLUMES.DESTROYED macro
PRIMARY.VOLUMES.REPLACEMENT.CREATE script
PRIMARY.VOLUMES.REPLACEMENT macro
STGPOOLS.RESTORE macro
LICENSE.INFORMATION                           None
MACHINE.GENERAL.INFORMATION
MACHINE.RECOVERY.INSTRUCTIONS
MACHINE.RECOVERY.CHARACTERISTICS
MACHINE.RECOVERY.MEDIA
PLANFILE.TABLE.OF.CONTENTS
Lists the stanzas documented in this plan.
begin PLANFILE.TABLE.OF.CONTENTS
PLANFILE.DESCRIPTION
PLANFILE.TABLE.OF.CONTENTS
Server Recovery Stanzas:
SERVER.REQUIREMENTS
RECOVERY.INSTRUCTIONS.GENERAL
RECOVERY.INSTRUCTIONS.OFFSITE
RECOVERY.INSTRUCTIONS.INSTALL
RECOVERY.VOLUMES.REQUIRED
RECOVERY.DEVICES.REQUIRED
RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE script
RECOVERY.SCRIPT.NORMAL.MODE script
LOGANDDB.VOLUMES.CREATE script
LOG.VOLUMES
DB.VOLUMES
LOGANDDB.VOLUMES.INSTALL script
LICENSE.REGISTRATION macro
COPYSTGPOOL.VOLUMES.AVAILABLE macro
COPYSTGPOOL.VOLUMES.DESTROYED macro
PRIMARY.VOLUMES.DESTROYED macro
PRIMARY.VOLUMES.REPLACEMENT.CREATE script
PRIMARY.VOLUMES.REPLACEMENT macro
STGPOOLS.RESTORE macro
VOLUME.HISTORY.FILE
DEVICE.CONFIGURATION.FILE
DSMSERV.OPT.FILE
Machine Description Stanzas:
MACHINE.GENERAL.INFORMATION
MACHINE.RECOVERY.INSTRUCTIONS
MACHINE.CHARACTERISTICS
MACHINE.RECOVERY.MEDIA.REQUIRED
end PLANFILE.TABLE.OF.CONTENTS
begin SERVER.REQUIREMENTS
Database Requirements Summary:
   Available Space (MB):       20
   Assigned Capacity (MB):     20
   Pct. Utilization:           2.2
   Maximum Pct. Utilization:   2.2
   Physical Volumes:           2
RECOVERY.INSTRUCTIONS.OFFSITE
Contains instructions that the administrator has entered in the file identified by
prefix RECOVERY.INSTRUCTIONS.OFFSITE. The instructions should include the
name and location of the offsite vault, and how to contact the vault (for example, a
name and phone number).
begin RECOVERY.INSTRUCTIONS.OFFSITE
Our offsite vaulting vendor is OffsiteVault Inc.
Their telephone number is 514-555-2341. Our account rep is Joe Smith.
Our account number is 1239992. Their address is ...
Here is a map to their warehouse ...
Our courier is ...
end RECOVERY.INSTRUCTIONS.OFFSITE
RECOVERY.INSTRUCTIONS.INSTALL
Contains instructions that the administrator has entered in the file identified by
prefix RECOVERY.INSTRUCTIONS.INSTALL. The instructions should include how
to rebuild the base server machine and the location of the system image backup
copies.
begin RECOVERY.INSTRUCTIONS.INSTALL
The base server system is AIX 4.3 running on an RS6K model 320.
Use mksysb volume serial number svrbas to restore this system image.
A copy of this mksysb tape is stored at the vault. There is also a copy
in bldg 24 room 4 cabinet a. The image includes the server code.
The system programmer responsible for this image is Fred Myers.
Following are the instructions to do a mksysb based OS install:
end RECOVERY.INSTRUCTIONS.INSTALL
RECOVERY.INSTRUCTIONS.DATABASE
Contains instructions that the administrator has entered in the file identified by
prefix RECOVERY.INSTRUCTIONS.DATABASE. The instructions should include
how to prepare for the database recovery. For example, you may enter instructions
on how to initialize or load the backup volumes for an automated library. No
sample of this stanza is provided.
RECOVERY.INSTRUCTIONS.STGPOOL
Contains instructions that the administrator has entered in the file identified by
prefix RECOVERY.INSTRUCTIONS.STGPOOL. The instructions should include the
names of your software applications and the copy storage pool names containing
the backup of these applications. No sample of this stanza is provided.
RECOVERY.VOLUMES.REQUIRED
If you are using a nonvirtual volume environment and issuing the MOVE
DRMEDIA command, a blank location field means that the volumes are onsite and
available to the server. This volume list can be used in periodic audits of the
volume inventory of the courier and vault. You can use the list to collect the
required volumes before recovering the server.
For virtual volumes, the location field contains the target server name.
begin RECOVERY.VOLUMES.REQUIRED
Volumes required for data base restore
Location = OffsiteVault Inc.
Device Class = LIB8MM
Volume Name =
TPBK08
Location = OffsiteVault Inc.
Device Class = LIB8MM
Volume Name =
TPBK06
Volumes required for storage pool restore
Location = OffsiteVault Inc.
Copy Storage Pool = CSTORAGEPF
Device Class = LIB8MM
Volume Name =
TPBK05
TPBK07
end RECOVERY.VOLUMES.REQUIRED
RECOVERY.DEVICES.REQUIRED
Provides details about the devices needed to read the backup volumes.
begin RECOVERY.DEVICES.REQUIRED
Purpose: Description of the devices required to read the
volumes listed in the recovery volumes required stanza.
Device Class Name:               LIB8MM
Device Access Strategy:          Sequential
Storage Pool Count:              2
Device Type:                     8MM
Format:                          DRIVE
Est/Max Capacity (MB):           4.0
Mount Limit:                     2
Mount Wait (min):                60
Mount Retention (min):           10
Label Prefix:                    TIVSM
Library:                         RLLIB
Directory:
Last Update by (administrator):  Bill
Last Update Date/Time:           12/11/2000 10:18:34
end RECOVERY.DEVICES.REQUIRED
modify the script because of differences between the original and the replacement
systems. At the completion of these steps, client requests for file restores are
satisfied directly from copy storage pool volumes.
The disaster recovery plan file issues commands using the administrative client.
Ensure that the path to the administrative client is established before running
the script.
For example, set the shell variable PATH or update the scripts with the path
specification for the administrative client.
The commands in the script do the following:
v Restore the server options, volume history, and device configuration information
files.
v Invoke the scripts contained in the LOGANDDB.VOLUMES.CREATE and
LOGANDDB.VOLUMES.INSTALL stanzas.
Attention: When this script runs, any log volumes or database volumes with
the same names as those named in the plan are removed (see
LOGANDDB.VOLUMES.CREATE under Create and Install Database and
Recovery Log Volumes Stanzas on page 631). In most disaster recoveries, the
Tivoli Storage Manager server is installed on a new machine. When this script is
run, it is assumed that there is no Tivoli Storage Manager data in the log or
database volumes. Tivoli Storage Manager installation includes the creation of
database and recovery log volumes. If you have created a log volume or a
database volume (for example, for testing), and you want to preserve the
contents, you must take some action such as renaming the volume or copying
the contents before executing this script.
v Invoke the macros contained in the following stanzas:
LICENSE.REGISTRATION
COPYSTGPOOL.VOLUMES.AVAILABLE
COPYSTGPOOL.VOLUMES.DESTROYED
PRIMARY.VOLUMES.DESTROYED.
To help understand the operations being performed in this script, see Backup and
Recovery Scenarios on page 581.
To invoke this script, specify the following positional parameters:
v $1 (the administrator ID)
v $2 (the administrator password)
v $3 (the server ID as specified in the dsm.sys file)
Note: The default location for dsm.sys is /usr/tivoli/tsm/client/admin/bin.
For example, to invoke this script using an administrator ID of don, password of
mox, server name of prodtsm, enter the following command:
planprefix/RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE don mox prodtsm
For more information, see the entry for the recovery plan prefix in Table 40 on
page 591.
For more information, see the entry for the recovery plan prefix in Table 40 on
page 591.
LOG.VOLUMES
Contains the names of the log volumes to be initialized. The contents of this stanza
must be placed into a separate file to be used by the
LOGANDDB.VOLUMES.INSTALL script.
begin LOG.VOLUMES
/usr/tivoli/tsm/server/bin/lg01x
/usr/tivoli/tsm/server/bin/lg02x
end LOG.VOLUMES
DB.VOLUMES
Contains the names of the database volumes to be initialized. The contents of this
stanza must be placed into a separate file to be used by the
LOGANDDB.VOLUMES.INSTALL script.
begin DB.VOLUMES
/usr/tivoli/tsm/server/bin/db01x
/usr/tivoli/tsm/server/bin/db02x
end DB.VOLUMES
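Breaking a stanza out of the plan file can be scripted. A minimal sketch, assuming the plan file name and the /prepare output directory (both are assumptions):

```shell
# Write the body of a named stanza (for example, LOG.VOLUMES) to stdout.
# $1 = recovery plan file, $2 = stanza name
extract_stanza() {
    awk -v s="$2" '$0 == "begin " s { f = 1; next }
                   $0 == "end " s   { f = 0 }
                   f' "$1"
}
# Example: extract_stanza recovery.plan LOG.VOLUMES > /prepare/LOG.VOLUMES
```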
LOGANDDB.VOLUMES.INSTALL
Contains a script with the commands required to initialize the database and log
volumes. This script is invoked by the
RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE script.
begin LOGANDDB.VOLUMES.INSTALL script
#!/bin/ksh
set -x
# Purpose: Initialize the log and database volumes.
# Recovery Administrator: Run this to initialize a server.
/usr/tivoli/tsm/server/bin/dsmserv install \
2 FILE:/prepare/LOG.VOLUMES \
2 FILE:/prepare/DB.VOLUMES
end LOGANDDB.VOLUMES.INSTALL script
COPYSTGPOOL.VOLUMES.DESTROYED
Contains a macro to mark copy storage pool volumes as unavailable if the volumes
were onsite at the time of the disaster. This stanza does not include copy storage
pool virtual volumes. These volumes are considered offsite and have not been
destroyed in a disaster. You can use the information as a guide and issue the
administrative commands from a command line, or you can copy it to a file,
modify it, and run it. This macro is invoked by the
RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE script.
After a disaster, compare the copy storage pool volumes listed in this stanza with
the volumes that were left onsite. If you have any of the volumes and they are
usable, you should remove their entries from this stanza.
begin COPYSTGPOOL.VOLUMES.DESTROYED macro
/* Purpose: Mark destroyed copy storage pool volumes as unavailable.   */
/* Volumes in this macro were not marked as offsite at the time the    */
/* PREPARE ran. They were likely destroyed in the disaster.            */
/* Recovery Administrator: Remove any volumes that were not destroyed. */
PRIMARY.VOLUMES.REPLACEMENT.CREATE
Contains a script with the commands needed to recreate the primary disk storage
pool volumes. You can use the script as a guide and run the commands from a
command line, or you can copy the script to a file, modify it, and run it. This
script is invoked by the RECOVERY.SCRIPT.NORMAL.MODE script.
The plan file assumes that the volume formatting program (DSMFMT) resides in
the same directory as the server executable indicated in the stanza
SERVER.REQUIREMENTS.
The SET DRMPLANVPOSTFIX command adds a character to the end of the names
of the original volumes listed in this stanza. This character does the following:
v Improves retrievability of volume names that require renaming in the stanzas.
Before using the volume names, change these names to new names that are
valid for the device class and valid on the replacement system.
v Generates a new name that can be used by the replacement server. Your naming
convention must take into account the appended character.
Notes:
1. Replacement primary volume names must be different from any other
original volume name or replacement name.
2. The RESTORE STGPOOL command restores storage pools on a logical basis.
There is no one-to-one relationship between an original volume and its
replacement.
3. There will be entries for the same volumes in
PRIMARY.VOLUMES.REPLACEMENT.
This stanza does not include primary storage pool virtual volumes, because these
volumes are considered offsite and have not been destroyed in a disaster.
PRIMARY.VOLUMES.REPLACEMENT
Contains a macro to define primary storage pool volumes to the server. You can
use the macro as a guide and run the administrative commands from a command
line, or you can copy it to a file, modify it, and execute it. This macro is invoked
by the RECOVERY.SCRIPT.NORMAL.MODE script.
Primary storage pool volumes with entries in this stanza have at least one of the
following three characteristics:
1. Original volume in a storage pool whose device class was DISK.
2. Original volume in a storage pool with MAXSCRATCH=0.
3. Original volume in a storage pool and volume scratch attribute=no.
The SET DRMPLANVPOSTFIX command adds a character to the end of the names
of the original volumes listed in this stanza. This character does the following:
v Improves the retrievability of volume names that must be renamed in the
stanzas. Before using the volume names, change these names to new names that
are valid for the device class on the replacement system.
v Generates a new name that can be used by the replacement server. Your naming
convention must take into account the appended character.
Notes:
1. Replacement primary volume names must be different from any other
original volume name or replacement name.
2. The RESTORE STGPOOL command restores storage pools on a logical basis.
There is no one-to-one relationship between an original volume and its
replacement.
3. There could be entries for the same volume in
PRIMARY.VOLUMES.REPLACEMENT.CREATE and
PRIMARY.VOLUMES.REPLACEMENT if the volume has a device class of
DISK.
This stanza does not include primary storage pool virtual volumes. These volumes
are considered offsite and have not been destroyed in a disaster.
restore stgp ARCHIVEPOOL
restore stgp BACKUPPOOL
restore stgp BACKUPPOOLF
restore stgp BACKUPPOOLT
restore stgp SPACEMGPOOL
Configuration Stanzas
VOLUME.HISTORY.FILE
Contains a copy of the volume history information when the recovery plan was
created. The DSMSERV RESTORE DB command uses the volume history file to
determine what volumes are needed to restore the database. It is used by the
RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE script.
The following rules determine where to place the volume history file at restore
time:
v If the server option file contains VOLUMEHISTORY options, the server uses the
fully qualified file name associated with the first entry. If the file name does not
begin with a directory specification (for example, . or /), the server uses the
prefix volhprefix.
v If the server option file does not contain VOLUMEHISTORY options, the server
uses the default name volhprefix followed by drmvolh.txt. For example, if
volhprefix is /usr/tivoli/tsm/server/bin, the file name is
/usr/tivoli/tsm/server/bin/drmvolh.txt.
Note: The volhprefix is set based on the following:
v If the environmental variable DSMSERV_DIR has been defined, it is used
as the volhprefix.
v If the environmental variable DSMSERV_DIR has not been defined, the
directory where the server is started from is used as the volhprefix.
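The prefix rules above can be sketched in shell; the function name is an assumption, and the logic mirrors the note:

```shell
# Derive the prefix used for the default volume history file name:
# DSMSERV_DIR wins; otherwise the directory the server was started from.
resolve_volhprefix() {
    if [ -n "$DSMSERV_DIR" ]; then
        echo "$DSMSERV_DIR"
    else
        pwd
    fi
}
# The default file name is then the prefix followed by drmvolh.txt.
```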
If a fully qualified file name was not specified in the server options file for the
VOLUMEHISTORY option, the server adds it to the DSMSERV.OPT.FILE stanza.
begin VOLUME.HISTORY.FILE
*************************************************************************
*
*       Tivoli Storage Manager Sequential Volume Usage History
*                    Updated 02/11/2000 10:20:34
*
* Operation            Volume      Backup  Backup  Volume  Device      Volume
* Date/Time            Type        Series  Oper.   Seq     Class Name  Name
*************************************************************************
2000/08/11 10:18:43    STGNEW      0       0       0       COOL8MM     BACK4X
2000/08/11 10:18:43    STGNEW      0       0       0       FILES       BK03
2000/08/11 10:18:46    STGNEW      0       0       0       LIB8MM      TPBK05
* Location for volume TPBK06 is: Ironvault Inc.
2000/08/11 10:19:23    BACKUPFULL  1       0       1       LIB8MM      TPBK06
2000/08/11 10:20:03    STGNEW      0       0       0       LIB8MM      TPBK07
2000/08/11 10:20:22    BACKUPINCR  1       1       1       LIB8MM      TPBK08
end VOLUME.HISTORY.FILE
DEVICE.CONFIGURATION.FILE
Contains a copy of the server device configuration information when the recovery
plan was created. The DSMSERV RESTORE DB command uses the device
configuration file to read the database backup volumes. It is used by the
RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE script.
At recovery time, you may need to modify this stanza. You must update the device
configuration information if the hardware configuration at the recovery site has
changed. Examples of changes requiring updates to the configuration information
are:
v Different device names
v Use of a manual library instead of an automated library
v For automated libraries, the requirement to manually place the database backup
volumes in the automated library and update the configuration information to
identify the element within the library. This allows the server to locate the
required database backup volumes.
For details, see Updating the Device Configuration File on page 561.
The following rules determine where the device configuration file is placed at
restore time:
v If the server options file contains DEVCONFIG entries, the server uses the fully
qualified file name associated with the first entry. If the specified file name does
not begin with a directory specification (for example, . or /), the server adds
the prefix devcprefix.
v If the server options file does not contain DEVCONFIG entries, the server uses
the default name devcprefix followed by drmdevc.txt. For example, if devcprefix is
/usr/tivoli/tsm/server/bin, the file name used by PREPARE is
/usr/tivoli/tsm/server/bin/drmdevc.txt.
Note: The devcprefix is set based on the following:
v If the environmental variable DSMSERV_DIR has been defined, it is used
as the devcprefix.
v If the environmental variable DSMSERV_DIR has not been defined, the
directory where the server is started from is used as the devcprefix.
If a fully qualified file name was not specified for the DEVCONFIG option in the
server options file, the server adds it to the stanza DSMSERV.OPT.FILE.
begin DEVICE.CONFIGURATION.FILE
/* Tivoli Storage Manager Device Configuration */
DEFINE DEVCLASS COOL8MM DEVTYPE=8MM FORMAT=DRIVE MOUNTLIMIT=1 MOUNTWAIT=60 +
MOUNTRETENTION=60 PREFIX=TIVSM LIBRARY=ITSML
DEFINE DEVCLASS FILES DEVTYPE=FILE MAXCAPACITY=4096K MOUNTLIMIT=2 +
DIRECTORY=/usr/tivoli/tsm/server/bin
DEFINE DEVCLASS FILESSM DEVTYPE=FILE MAXCAPACITY=100K MOUNTLIMIT=2 +
DIRECTORY=/usr/tivoli/tsm/server/bin
DEFINE DEVCLASS LIB8MM DEVTYPE=8MM FORMAT=DRIVE MOUNTLIMIT=1 MOUNTWAIT=60+
MOUNTRETENTION=60 PREFIX=TIVSM LIBRARY=RLLIB
end DEVICE.CONFIGURATION.FILE
DSMSERV.OPT.FILE
Contains a copy of the server options file. This stanza is used by the
RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE script.
Note: The following figure contains text strings that are too long to display in
hardcopy or softcopy publications. The long text strings have a plus symbol
(+) at the end of the string to indicate that they continue on the next line.
The disaster recovery plan file adds the DISABLESCHEDS option to the server
options file and sets it to YES. This option disables administrative and client
schedules while the server is being recovered. After the server is recovered, you
can enable scheduling by deleting the option or setting it to NO and then
restarting the server.
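Re-enabling scheduling can be scripted with a simple edit; a sketch, assuming the server options file path (adjust to your installation):

```shell
# Change DISABLESCHEDS from YES to NO in the given server options file.
enable_schedules() {
    sed 's/^DISABLESCHEDS *YES/DISABLESCHEDS NO/' "$1" > "$1.new" && mv "$1.new" "$1"
}
# Example: enable_schedules /usr/tivoli/tsm/server/bin/dsmserv.opt
# Restart the server afterward for the change to take effect.
```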
begin DSMSERV.OPT.FILE
* Server options file located in /usr/tivoli/tsm/server/bin/dsmserv.optx
TCPPort 1509
VOLUMEHISTORY /usr/tivoli/tsm/server/bin/volhistory.txtx
DEVCONFIG /usr/tivoli/tsm/server/bin/devconfig.txtx
* The following option was added by PREPARE.
DISABLESCHEDS YES
end DSMSERV.OPT.FILE
begin LICENSE.INFORMATION
Last License Audit:                                  12/30/2000 10:25:34
Registered Client Nodes:                             1
Licensed Client Nodes:                               51
Are network connections in use ?:                    Yes
Are network connections licensed ?:                  Yes
Are Open Systems Environment clients registered ?:   No
Are Open Systems Environment clients licensed ?:     No
Is space management in use ?:                        No
Is space management licensed ?:                      No
Is disaster recovery manager in use ?:               Yes
Is disaster recovery manager licensed ?:             Yes
Are Server-to-Server Virtual Volumes in use ?:       No
Are Server-to-Server Virtual Volumes licensed ?:     Yes
Is Advanced Device Support required ?:               No
Is Advanced Device Support licensed ?:               No
Server License Compliance:                           Valid
end LICENSE.INFORMATION
MACHINE.RECOVERY.INSTRUCTIONS
Provides the recovery instructions for the server machine. This stanza is included
in the plan file if the machine recovery instructions are saved in the database.
begin MACHINE.RECOVERY.INSTRUCTIONS
Purpose: Recovery instructions for machine DSMSRV1.
Primary Contact:
Jane Smith (wk 520-000-0000 hm 520-001-0001)
Secondary Contact:
John Adams (wk 520-000-0001 hm 520-002-0002)
end MACHINE.RECOVERY.INSTRUCTIONS
MACHINE.RECOVERY.CHARACTERISTICS
Provides the hardware and software characteristics for the server machine. This
stanza is included in the plan file if the machine characteristics are saved in the
database.
begin MACHINE.CHARACTERISTICS
Purpose: Hardware and software characteristics of machine DSMSRV1.
devices
aio0          Defined            Asynchronous I/O
bbl0          Available 00-0J    GXT150 Graphics Adapter
bus0          Available 00-00    Microchannel Bus
DSM1509bk02   Available          N/A
DSM1509db01x  Available          N/A
DSM1509lg01x  Available          N/A
en0           Defined            Standard Ethernet Network Interface
end MACHINE.CHARACTERISTICS
MACHINE.RECOVERY.MEDIA
Provides information about the media (for example, boot media) needed for
rebuilding the machine that contains the server. This stanza is included in the plan
file if recovery media information is saved in the database and it has been
associated with the machine that contains the server.
begin MACHINE.RECOVERY.MEDIA.REQUIRED
Purpose: Recovery media for machine DSMSRV1.
Recovery Media Name: DSMSRVIMAGE
Type: Boot
Volume Names: mkssy1
Location: IRONMNT
Description: mksysb image of server machine base OS
Product: mksysb
Product Information: this mksysb was generated by AIX 4.3
end MACHINE.RECOVERY.MEDIA.REQUIRED
CreateProcess Call
The server creates two anonymous uni-directional pipes and maps them to stdin
and stdout during the CreateProcess call. When a standard handle is redirected to
refer to a file or a pipe, the handle can only be used by the ReadFile and WriteFile
functions. This precludes normal C functions such as gets or printf. Since the
server will never terminate the external program process, it is imperative that the
external program recognize a read or write failure on the pipes and exit the
process. In addition, the external program should exit the process if it reads an
unrecognized command.
The external program may obtain values for the read and write handles using the
following calls:
readPipe=GetStdHandle(STD_INPUT_HANDLE) and writePipe=GetStdHandle(STD_OUTPUT_HANDLE)
Error Handling
If the server encounters an error during processing, it will close the stdin and
stdout streams to the agent exit. The agent will detect this when it tries to read
from stdin or write to stdout. If this occurs, the agent performs any necessary
cleanup and calls the stdlib exit routine.
If the code for any response (except for EJECT and QUERY) is not equal to
SUCCESS, Tivoli Storage Manager does not proceed with the subsequent steps.
After the agent sends a non-SUCCESS return code for any response, the agent will
perform any necessary cleanup and call the stdlib exit routine.
However, even if the code for EJECT or QUERY requests is not equal to SUCCESS,
the agent will continue to send these requests.
If the server gets an error while trying to write to the agent, it will close the pipes,
perform any necessary cleanup, and terminate the current request.
where:
resultCode
One of the following:
v SUCCESS
v INTERNAL_ERROR
where:
resultCode
One of the following:
v SUCCESS
v INTERNAL_ERROR
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volume
Specifies the volume name to be queried.
Format of the external program response:
QUERY libraryname volume COMPLETE, STATUS=statusValue, RESULT=resultCode
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volume
Specifies the volume name queried.
resultCode
One of the following:
v SUCCESS
v LIBRARY_ERROR
v VOLUME_UNKNOWN
v VOLUME_UNAVAILABLE
v CANCELLED
v TIMED_OUT
v INTERNAL_ERROR
If resultCode is not SUCCESS, the exit must return statusValue set to UNDEFINED.
If resultCode is SUCCESS, STATUS must be one of the following values:
v IN_LIBRARY
v NOT_IN_LIBRARY
IN_LIBRARY means that the volume is currently in the library and available to be
mounted.
NOT_IN_LIBRARY means that the volume is not currently in the library.
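The QUERY exchange above can be illustrated with a stub agent; this is a sketch, not a complete media manager — it always reports IN_LIBRARY, and the function name is an assumption:

```shell
# Read one request from stdin and answer on stdout, as the external
# media management interface requires.
handle_request() {
    read request libname volume || exit 0   # end-of-file on stdin: exit
    case "$request" in
    QUERY)
        # A real agent would consult its library inventory here.
        echo "QUERY $libname $volume COMPLETE, STATUS=IN_LIBRARY, RESULT=SUCCESS"
        ;;
    *)
        exit 1   # unrecognized command: clean up and exit the process
        ;;
    esac
}
```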
Initialization Requests
When the server is started, the server sends an initialization request to the external
media management program for each EXTERNAL library. The external program
must process this request to ensure that the external program is present, functional,
and ready to process requests. If the initialization request is successful, Tivoli
Storage Manager informs its operators that the external program reported its
readiness for operations. Otherwise, Tivoli Storage Manager reports a failure to its
operators.
Tivoli Storage Manager does not attempt any other type of operation with that
library until an initialization request has succeeded. The server sends an
initialization request first. If the initialization is successful, the request is sent. If the
initialization is not successful, the request fails. The external media management
program can detect whether the initialization request is being sent by itself or with
another request by detecting end-of-file on the stdin stream. When end-of-file is
detected, the external program must end by using the stdlib exit routine (not the
return call).
When a valid response is sent by the external program, the external program must
end by using the exit routine.
Format of the request:
INITIALIZE libraryname
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
resultcode
One of the following:
v SUCCESS
v NOT_READY
v INTERNAL_ERROR
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volume
Specifies the volume to be ejected.
location info
Specifies the location information associated with the volume from the Tivoli
Storage Manager inventory. It is delimited with single quotation marks. This
information is passed without any modification from the Tivoli Storage
Manager inventory. The customer is responsible for setting its contents with
the appropriate UPDATE MEDIA or UPDATE VOLUME command before the
move command is invoked. Set this field to some target location value that will
assist in placing the volume after it is ejected from the library. It is suggested
that the external agent post the value of this field to the operator.
Format of the external program response:
EJECT libraryname volume COMPLETE, RESULT=resultCode
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volume
Specifies the ejected volume.
resultCode
One of the following:
v SUCCESS
v LIBRARY_ERROR
v VOLUME_UNKNOWN
v VOLUME_UNAVAILABLE
v CANCELLED
v TIMED_OUT
v INTERNAL_ERROR
Volume Release Requests
Format of the request:
RELEASE libraryname volname
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volname
Specifies the name of the volume to be returned to scratch (released).
Format of the external program response:
RELEASE libraryname volname COMPLETE, RESULT=resultcode
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volname
Specifies the name of the volume returned to scratch (released).
resultcode
One of the following:
v SUCCESS
v VOLUME_UNKNOWN
v VOLUME_UNAVAILABLE
v INTERNAL_ERROR
Volume Mount Requests
Format of the request:
MOUNT libraryname volname accessmode devicetypes timelimit userid volumenumber 'location'
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volname
Specifies the actual volume name if the request is for an existing volume. If a
scratch mount is requested, the volname is set to SCRTCH.
accessmode
Specifies the access mode required for the volume. Possible values are
READONLY and READWRITE.
devicetypes
Specifies a list of device types that can be used to satisfy the request for the
volume and the FORMAT specified in the device class. The most preferred
device type is first in the list. Items are separated by commas, with no
intervening spaces. Possible values are:
v 3480
v 3480XF
v 3490E
v 3570
v 3590
v 3590E
v 3590H
v 4MM_DDS1
v 4MM_DDS1C
v 4MM_DDS2
v 4MM_DDS2C
v 4MM_DDS3
v 4MM_DDS3C
v 4MM_DDS4
v 4MM_DDS4C
v 4MM_HP_DDS4
v 4MM_HP_DDS4C
v 8MM_8200
v 8MM_8205
v 8MM_8500
v 8MM_8500C
v 8MM_8900
v 8MM_AIT
v 8MM_AITC
v 8MM_ELIANT
v 8MM_M2
v DLT1
v DLT_2000
v DLT_4000
v DLT_7000
v DLT_8000
v SDLT
v SDLT320
v DTF2
v DTF
v GENERICTAPE
v IBM_QIC4GBC
v LTO_ULTRIUM
v LTO_ULTRIUM2
v M8100
v OPT_RW_650MB
v OPT_RW_1300MB
v OPT_RW_2600MB
v OPT_RW_5200MB
v OPT_RW_9100MB
v OPT_WORM_650MB
v OPT_WORM_1300MB
v OPT_WORM_2600MB
v OPT_WORM_5200MB
v OPT_WORM_9100MB
v OPT_WORM12_5600MB
v OPT_WORM12_12000MB
v OPT_WORM14_14800MB
v QIC_12GBC
v QIC_20GBC
v QIC_25GBC
v QIC_30GBC
v QIC_50GBC
v QIC_IBM1000
v QIC_525
v QIC_5010C
v REMOVABLEFILE
v STK_9490
v STK_9840
v STK_9940
v STK_9940B
v STK_9840_VOLSAFE
v STK_9940_VOLSAFE
v STK_9940B_VOLSAFE
v STK_SD3
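Because the devicetypes list is comma-separated with no intervening spaces and ordered most-preferred first, an external agent typically scans it in order and takes the first type it can satisfy. A minimal sketch under those rules (the function name and the supported-type array are illustrative, not part of the documented interface):

```c
#include <string.h>

/* Return the first entry in the comma-separated devicetypes list
 * that also appears in the agent's supported-type table, or NULL
 * if the agent has no usable drive.  strtok modifies its input,
 * so the caller must pass a writable copy of the request field. */
static const char *pick_devicetype(char *devicetypes,
                                   const char *const supported[], int nsup)
{
    char *type = strtok(devicetypes, ",");

    while (type != NULL) {
        for (int i = 0; i < nsup; i++)
            if (strcmp(type, supported[i]) == 0)
                return type;        /* most preferred usable type */
        type = strtok(NULL, ",");
    }
    return NULL;
}
```

An exact strcmp match is used deliberately: substring matching would confuse types such as 3590 and 3590E.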
timelimit
Specifies the maximum number of minutes that the server waits for the
volume to be mounted. If the mount request is not completed within this time,
the external manager responds with the result code TIMED_OUT.
userid
Specifies the user ID of the process that needs access to the drive.
volumenumber
For non-optical media, the volumenumber is 1. For optical media, the
volumenumber is 1 for side A, 2 for side B.
location
Specifies the value of the location field from the Tivoli Storage Manager
inventory (for example, Room 617 Floor 2). One blank character is inserted
between the volume number and the left single quotation mark in the location
information. If no location information is associated with a volume, nothing is
passed to the exit, not even the single quotation marks. If location information
is passed, the volume has probably been ejected from the library and must be
returned to the library before the mount operation can proceed. The location
information should be posted by the agent so that the operator can obtain the
volume and return it to the library.
Format of the external program response:
MOUNT libraryname volname COMPLETE ON specialfile, RESULT=resultcode
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volname
Specifies the name of the volume mounted for the request.
specialfile
The fully qualified path name of the device special file for the drive in which
the volume was mounted. If the mount request fails, the value should be set to
/dev/null.
The external program must ensure that the special file is closed before the
response is returned to the server.
resultcode
One of the following:
v SUCCESS
v DRIVE_ERROR
v LIBRARY_ERROR
v VOLUME_UNKNOWN
v VOLUME_UNAVAILABLE
v CANCELLED
v TIMED_OUT
v INTERNAL_ERROR
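The MOUNT response rules above (report the device special file on success, /dev/null on failure) can be captured in one formatting helper. This is a sketch with an assumed function name, not the documented interface itself:

```c
#include <stdio.h>
#include <string.h>

/* Build a MOUNT response line.  Per the interface description, a
 * mount that did not succeed must report /dev/null as the device
 * special file regardless of which drive was attempted. */
static void format_mount_response(const char *libraryname,
                                  const char *volname,
                                  const char *specialfile,
                                  const char *resultcode,
                                  char *out, size_t size)
{
    if (strcmp(resultcode, "SUCCESS") != 0)
        specialfile = "/dev/null";      /* failed mount: no usable drive */
    snprintf(out, size, "MOUNT %s %s COMPLETE ON %s, RESULT=%s",
             libraryname, volname, specialfile, resultcode);
}
```

Before writing such a response, the external program must also ensure the special file is closed, as noted in the specialfile description above.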
Volume Dismount Requests
Format of the request:
DISMOUNT libraryname volname
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volname
Specifies the name of the volume to be dismounted.
Format of the external program response:
DISMOUNT libraryname volname COMPLETE, RESULT=resultcode
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volname
Specifies the name of the volume dismounted.
resultcode
One of the following:
v SUCCESS
v DRIVE_ERROR
v LIBRARY_ERROR
v INTERNAL_ERROR
#define BASE_YEAR             1900
#define MAX_SERVERNAME_LENGTH 64
#define MAX_NODE_LENGTH       64
#define MAX_COMMNAME_LENGTH   16
#define MAX_OWNER_LENGTH      64
#define MAX_HL_ADDRESS        64
#define MAX_LL_ADDRESS        32
#define MAX_SCHED_LENGTH      30
#define MAX_DOMAIN_LENGTH     30
#define MAX_MSGTEXT_LENGTH    1600
/**********************************************
* Event Types (in elEventRecvData.eventType) *
**********************************************/
#define TSM_SERVER_EVENT 0x03   /* Server Events */
#define TSM_CLIENT_EVENT 0x05   /* Client Events */
/***************************************************
* Application Types (in elEventRecvData.applType) *
***************************************************/
#define TSM_APPL_BACKARCH 1   /* Backup or Archive client       */
#define TSM_APPL_HSM      2   /* Space management (HSM) client  */
#define TSM_APPL_API      3   /* API client                     */
#define TSM_APPL_SERVER   4   /* Server (server-to-server)      */
/*****************************************************
* Event Severity Codes (in elEventRecvData.sevCode) *
*****************************************************/
#define TSM_SEV_INFO       0x02   /* Informational message. */
#define TSM_SEV_WARNING    0x03   /* Warning message.       */
#define TSM_SEV_ERROR      0x04   /* Error message.         */
#define TSM_SEV_SEVERE     0x05   /* Severe error message.  */
#define TSM_SEV_DIAGNOSTIC 0x06   /* Diagnostic message.    */
#define TSM_SEV_TEXT       0x07   /* Text message.          */
/************************************************************
 * Data Structure of Event that is passed to the User-Exit. *
 * This data structure is the same for a file generated via *
 * the FILEEXIT option on the server.                       *
 ************************************************************/
typedef struct evRdata
{
  int32 eventNum;                            /* the event number        */
  int16 sevCode;                             /* event severity          */
  int16 applType;                            /* application type        */
  int32 sessId;                              /* session number          */
  int32 version;                             /* structure version       */
  int32 eventType;                           /* event type              */
  DateTime timeStamp;                        /* timestamp for event data*/
  uchar serverName[MAX_SERVERNAME_LENGTH+1]; /* server name             */
  uchar nodeName[MAX_NODE_LENGTH+1];         /* Node name for session   */
  uchar commMethod[MAX_COMMNAME_LENGTH+1];   /* communication method    */
  uchar ownerName[MAX_OWNER_LENGTH+1];       /* owner                   */
  uchar hlAddress[MAX_HL_ADDRESS+1];         /* high-level address      */
  uchar llAddress[MAX_LL_ADDRESS+1];         /* low-level address       */
  uchar schedName[MAX_SCHED_LENGTH+1];       /* schedule name if applicable*/
  uchar domainName[MAX_DOMAIN_LENGTH+1];     /* domain name for node    */
  uchar event[MAX_MSGTEXT_LENGTH];           /* event text              */
} elEventRecvData;
/************************************
* Size of the Event data structure *
************************************/
#define ELEVENTRECVDATA_SIZE sizeof(elEventRecvData)
/*************************************
 * User Exit EventNumber for Exiting *
 *************************************/
#define USEREXIT_END_EVENTNUM     1822 /* Only user-exit receiver to exit */
#define END_ALL_RECEIVER_EVENTNUM 1823 /* All receivers told to exit      */
/**************************************
*** Do not modify above this line. ***
**************************************/
/********************** Additional Declarations **************************/
#endif
Figure 118. Sample User Exit Declarations (Part 3 of 3)
} /* End of main() */
/******************************************************************
* Procedure: adsmV3UserExit
* If the user-exit is specified on the server, a valid and
* appropriate event causes an elEventRecvData structure (see
* userExitSample.h) to be passed to adsmV3UserExit that returns a void.
* INPUT :  A (void *) to the elEventRecvData structure
* RETURNS: Nothing
******************************************************************/
void adsmV3UserExit( void *anEvent )
{
/* Typecast the event data passed */
elEventRecvData *eventData = (elEventRecvData *)anEvent;
Figure 119. Sample User Exit Program (Part 1 of 2)
/**************************************
*** Do not modify above this line. ***
**************************************/
if( ( eventData->eventNum == USEREXIT_END_EVENTNUM     ) ||
    ( eventData->eventNum == END_ALL_RECEIVER_EVENTNUM ) )
{
/* Server says to end this user-exit. Perform any cleanup, *
 * but do NOT exit() !!!                                   */
return;
}
/* Field Access: eventData->.... */
/* Your code here ... */
/*
 * Be aware that certain function calls are process-wide and can cause
 * synchronization of all threads running under the TSM Server process!
 * Among these is the system() function call.  Use of this call can
 * cause the server process to hang and otherwise affect performance.
 * Also avoid any functions that are not thread-safe.  Consult your
 * systems programming reference material for more information.
 */
Column     Description
0001-0006  Event number (with leading zeros)
0008-0010  Severity code number
0012-0013  Application type number
0015-0023  Session ID number
0025-0027  Event structure version number
0029-0031  Event type number
0033-0046  Date/Time (YYYYMMDDHHmmSS)
0048-0111  Server name (padded with spaces)
0113-0176  Node name
0178-0193  Communications method name
0195-0258  Owner name
0260-0323  High-level internet address
0325-0356  Low-level address (port number)
0358-0387  Schedule name
0389-0418  Domain name
0420-2019  Event text
2020-2499  Unused spaces
2500       New line character
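Because the record uses fixed, 1-based, inclusive column positions, a program can slice any field directly by offset. A sketch of such a helper, assuming the column layout above (the function name get_field is illustrative):

```c
#include <string.h>

/* Copy columns first..last (1-based, inclusive) of a fixed-format
 * record into out, NUL-terminated and truncated to fit the buffer. */
static void get_field(const char *record, int first, int last,
                      char *out, size_t size)
{
    size_t len = (size_t)(last - first + 1);

    if (len >= size)
        len = size - 1;                 /* leave room for the NUL */
    memcpy(out, record + first - 1, len);
    out[len] = '\0';
}
```

For example, extracting columns 0001-0006 of a record yields the event number field, and columns 0008-0010 the severity code.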
Appendix C. Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
IBM World Trade Asia Corporation
Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106, Japan
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION AS IS WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply
to you.
This information could include technical inaccuracies or typographical errors.
Changes are periodically made to the information herein; these changes will be
incorporated in new editions of the publication. IBM may make improvements
and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:
IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758 U.S.A.
Such information may be available, subject to appropriate terms and conditions,
including in some cases, payment of a fee.
The licensed program described in this information and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.
Information concerning non-IBM products was obtained from the suppliers of
those products, their published announcements or other publicly available sources.
IBM has not tested those products and cannot confirm the accuracy of
performance, compatibility or any other claims related to non-IBM products.
Questions on the capabilities of non-IBM products should be addressed to the
suppliers of those products.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
Programming Interface
This publication is intended to help the customer plan for and manage the IBM
Tivoli Storage Manager server.
This publication also documents intended Programming Interfaces that allow the
customer to write programs to obtain the services of IBM Tivoli Storage Manager.
This information is identified where it occurs, either by an introductory statement
to a chapter or section or by the following marking:
Programming interface information
End of Programming interface information
Trademarks
The following terms are trademarks of the International Business Machines
Corporation in the United States, other countries, or both:
Advanced Peer-to-Peer Networking
AIX
Application System/400
APPN
DB2
DFDSM
DFS
DFSMS/MVS
DFSMShsm
DFSMSrmm
DPI
Enterprise Storage Server
ESCON
Extended Services
FlashCopy
IBM
IBMLink
iSeries
Magstar
MVS
MVS/ESA
MVS/SP
NetView
OpenEdition
Operating System/2
Operating System/400
OS/2
OS/390
OS/400
POWERparallel
PowerPC
pSeries
RACF
Redbooks
RISC System/6000
RS/6000
SAA
SANergy
SP
System/370
System/390
SystemView
Tivoli
Tivoli Enterprise Console
Tivoli Management Environment
TotalStorage
TME
VTAM
WebSphere
xSeries
z/OS
zSeries
Lotus, Lotus 1-2-3, Lotus Approach, Lotus Domino, and Lotus Notes are
trademarks of Lotus Development Corporation in the United States, other
countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are registered
trademarks of Microsoft Corporation in the United States, other countries, or both.
UNIX is a registered trademark of the Open Group in the United States, other
countries, or both.
Java and all Java-based trademarks and logos are trademarks of Sun Microsystems,
Inc. in the United States, other countries, or both.
Intel is a registered trademark of the Intel Corporation in the United States, other
countries, or both.
Other company, product, or service names may be trademarks or service marks of
others.
Glossary
The terms in this glossary are defined as they
pertain to the IBM Tivoli Storage Manager library.
If you do not find the term you need, refer to the
IBM Software Glossary on the Web at this
address: www.ibm.com/ibm/terminology/. You
can also refer to IBM Dictionary of Computing,
New York: McGraw-Hill, 1994.
This glossary may include terms and definitions
from:
v The American National Standard Dictionary for
Information Systems, ANSI X3.172-1990,
copyright (ANSI). Copies may be purchased
from the American National Standards
Institute, 11 West 42nd Street, New York 10036.
v The Information Technology Vocabulary,
developed by Subcommittee 1, Joint Technical
Committee 1, of the International Organization
for Standardization and the International
Electrotechnical Commission (ISO/IEC
JTC1/SC1).
A
absolute mode. A backup copy group mode that
specifies that a file is considered for incremental
backup even if the file has not changed since the last
backup. See also mode. Contrast with modified mode.
access mode. An attribute of a storage pool or a
storage volume that specifies whether the server can
write to or read from the storage pool or storage
volume. The access mode can be read/write, read-only,
or unavailable. Volumes in primary storage pools can
also have an access mode of destroyed. Volumes in
copy storage pools can also have an access mode of
offsite.
activate. To validate the contents of a policy set and
make it the active policy set.
active policy set. The activated policy set that contains
the policy rules currently in use by all client nodes
assigned to the policy domain. See also policy domain
and policy set.
active version. The most recent backup copy of a file
stored by IBM Tivoli Storage Manager. The active
version of a file cannot be deleted until a backup
process detects that the user has either replaced the file
with a newer version or has deleted the file from the
workstation. Contrast with inactive version.
B
back up. To copy information to another location to
ensure against loss of data. In IBM Tivoli Storage
Manager, you can back up user files, the IBM Tivoli
Storage Manager database, and storage pools. Contrast
with restore. See also database backup series and
incremental backup.
C
cache. The process of leaving a duplicate copy on
random access media when the server migrates a file to
another storage pool in the hierarchy.
central scheduler. A function that allows an
administrator to schedule client operations and
administrative commands. The operations can be
scheduled to occur periodically or on a specific date.
See client schedule and administrative command schedule.
client. A program running on a PC, workstation, file
server, LAN server, or mainframe that requests services
of another program, called the server. The following
types of clients can obtain services from an IBM Tivoli
Storage Manager server: administrative client,
application client, API client, backup-archive client, and
HSM client (also known as Tivoli Storage Manager for
Space Management).
client domain. The set of drives, file systems, or
volumes that the user selects to back up or archive
using the backup-archive client.
client migration. The process of copying a file from a
client node to server storage and replacing the file with
a stub file on the client node. The space management
attributes in the management class control this
migration. See also space management.
D
damaged file. A physical file for which IBM Tivoli
Storage Manager has detected read errors.
database. A collection of information about all objects
managed by the server, including policy management
objects, users and administrators, and client nodes.
database backup series. One full backup of the
database, plus up to 32 incremental backups made
since that full backup. Each full backup that is run
starts a new database backup series. A backup series is
identified with a number.
database backup trigger. A set of criteria that defines
when and how database backups are run automatically.
The criteria determine how often the backup is run,
whether the backup is a full or incremental backup,
and where the backup is stored.
database buffer pool. Storage that is used as a cache
to allow database pages to remain in memory for long
periods of time, so that the server can make continuous
updates to pages without requiring input or output
(I/O) operations from external storage.
database snapshot. A complete backup of the entire
IBM Tivoli Storage Manager database to media that can
be taken off-site. When a database snapshot is created,
the current database backup series is not interrupted. A
database snapshot cannot have incremental database
backups associated with it. See also database backup
series. Contrast with full backup.
data mover. A device, defined to IBM Tivoli Storage
Manager, that moves data on behalf of the server. A
NAS file server can be a data mover.
F
file space. A logical space in IBM Tivoli Storage
Manager server storage that contains a group of files.
For clients on Windows systems, a file space is a logical
partition that is identified by a volume label. For clients
on UNIX systems, a file space is a logical space that
contains a group of files backed up or archived from
the same file system, or part of a file system that stems
from a virtual mount point. Clients can restore, retrieve,
or delete their file spaces from IBM Tivoli Storage
Manager server storage. IBM Tivoli Storage Manager
does not necessarily store all the files from a single file
space together, but can identify all the files in server
storage that came from a single file space.
file space ID (FSID). A unique numeric identifier that
the server assigns to a file space when it is stored in
server storage.
frequency. A copy group attribute that specifies the
minimum interval, in days, between incremental
backups.
FSID. See file space ID.
full backup. The process of backing up the entire
server database. A full backup begins a new database
backup series. See also database backup series and
incremental backup. Contrast with database snapshot.
fuzzy copy. A backup version or archive copy of a file
that might not accurately reflect the original content of
the file because IBM Tivoli Storage Manager backed up
or archived the file while the file was being modified.
H
hierarchical storage management (HSM) client. The
Tivoli Storage Manager for Space Management program
that runs on workstations to allow users to maintain
free space on their workstations by migrating and
recalling files to and from IBM Tivoli Storage Manager
storage. Synonymous with space manager client.
high migration threshold. A percentage of the storage
pool capacity that identifies when the server can start
migrating files to the next available storage pool in the
hierarchy. Contrast with low migration threshold. See
server migration.
HSM client. Hierarchical storage management client.
Also known as the space manager client.
I
IBM Tivoli Storage Manager command script. A
sequence of IBM Tivoli Storage Manager administrative
commands that are stored in the database of the IBM
Tivoli Storage Manager server. You can run the script
L
LAN-free data transfer. The movement of client data
directly between a client and a storage device over a
SAN, rather than over the LAN.
library. (1) A repository for demountable recorded
media, such as magnetic tapes. (2) For IBM Tivoli
Storage Manager, a collection of one or more drives,
and possibly robotic devices (depending on the library
type), which can be used to access storage volumes. (3)
In the AS/400 system, a system object that serves as a
M
macro file. A file that contains one or more IBM Tivoli
Storage Manager administrative commands, which can
be run only from an administrative client by using the
MACRO command. Contrast with IBM Tivoli Storage
Manager command script.
managed object. A definition in the database of a
managed server that was distributed to the managed
server by a configuration manager. When a managed
server subscribes to a profile, all objects associated with
that profile become managed objects in the database of
the managed server. In general, a managed object
cannot be modified locally on the managed server.
Objects can include policy, schedules, client options
sets, server scripts, administrator registrations, and
server and server group definitions.
managed server. An IBM Tivoli Storage Manager
server that receives configuration information from a
configuration manager via subscription to one or more
profiles. Configuration information can include
N
NAS. Network-attached storage.
NAS node. An IBM Tivoli Storage Manager node that
is a NAS file server. Data for the NAS node is
transferred by the NAS file server itself at the direction
of an IBM Tivoli Storage Manager server that uses
NDMP. The data is not transferred by the IBM Tivoli
Storage Manager client. Also called NAS file server
node.
native format. A format of data that is written to a
storage pool directly by the IBM Tivoli Storage
Manager server. Contrast with non-native data format.
NDMP. Network Data Management Protocol.
network-attached storage (NAS) file server. A
dedicated storage device with an operating system that
is optimized for file-serving functions. In IBM Tivoli
Storage Manager, a NAS file server can have the
characteristics of both a node and a data mover. See
also data mover and NAS node.
Network Data Management Protocol (NDMP). An
industry-standard protocol that allows a network
storage-management application (such as IBM Tivoli
Storage Manager) to control the backup and recovery of
an NDMP-compliant file server, without installing
third-party software on that file server.
node. (1) A workstation or file server that is registered
with an IBM Tivoli Storage Manager server to receive
its services. See also client node and NAS node. (2) In a
Microsoft cluster configuration, one of the computer
systems that make up the cluster.
node privilege class. A privilege class that allows an
administrator to remotely access backup-archive clients
for a specific client node or for all clients in a policy
domain. See also privilege class.
O
open registration. A registration process in which any
users can register their own workstations as client
nodes with the server. Contrast with closed registration.
operator privilege class. A privilege class that allows
an administrator to issue commands that disable or halt
the server, enable the server, cancel server processes,
and manage removable media. See also privilege class.
P
page. A unit of space allocation within IBM Tivoli
Storage Manager database volumes.
path. An IBM Tivoli Storage Manager object that
defines a one-to-one relationship between a source and
a destination. Using the path, the source accesses the
destination. Data may flow from the source to the
destination, and back. An example of a source is a data
mover (such as a NAS file server), and an example of a
destination is a tape drive.
physical file. A file stored in one or more storage
pools, consisting of either a single logical file, or a
group of logical files packaged together (an aggregate).
See also aggregate and logical file.
physical occupancy. The amount of space used by
physical files in a storage pool. This space includes the
unused space created when logical files are deleted
from aggregates. See also physical file, logical file, and
logical occupancy.
policy domain. A policy object that contains policy
sets, management classes, and copy groups that are
used by a group of client nodes. See policy set,
management class, and copy group.
policy privilege class. A privilege class that allows an
administrator to manage policy objects, register client
nodes, and schedule client operations for client nodes.
Authority can be restricted to certain policy domains.
See also privilege class.
policy set. A policy object that contains a group of
management classes that exist for a policy domain.
Several policy sets can exist within a policy domain but
only one policy set is active at one time. See
management class and active policy set.
R
randomization. The process of distributing schedule
start times for different clients within a specified
percentage of the schedule's startup window.
rebinding. The process of associating a backed-up file
with a new management class name. For example,
rebinding occurs when the management class
associated with a file is deleted. See binding.
recall. To access files that have been migrated from
workstations to server storage by using the space
manager client. Contrast with migrate.
receiver. A server repository that contains a log of
server messages and client messages as events. For
example, a receiver can be a file exit, a user exit, or the
IBM Tivoli Storage Manager server console and activity
log. See also event.
reclamation. A process of consolidating the remaining
data from many sequential access volumes onto fewer
new sequential access volumes.
reclamation threshold. The percentage of reclaimable
space that a sequential access media volume must have
before the server can reclaim the volume. Space
becomes reclaimable when files are expired or are
deleted. The percentage is set for a storage pool.
S
schedule. A database record that describes scheduled
client operations or administrative commands. See
administrative command schedule and client schedule.
scheduling mode. The method of interaction between
a server and a client for running scheduled operations
on the client. IBM Tivoli Storage Manager supports two
scheduling modes for client operations: client-polling
and server-prompted.
T
tape library. A term used to refer to a collection of
drives and tape cartridges. The tape library may be an
automated device that performs tape cartridge mounts
and demounts without operator intervention.
tape volume prefix. A device class attribute that is the
high-level-qualifier of the file name or the data set
name in the standard tape label.
target server. A server that can receive data sent from
another server. Contrast with source server. See also
virtual volumes.
U
UCS-2. An ISO/IEC 10646 encoding form, Universal
Character Set coded in 2 octets. The IBM Tivoli Storage
Manager client on Windows NT and Windows 2000
uses the UCS-2 code page when the client is enabled
for Unicode.
Unicode Standard. A universal character encoding
standard that supports the interchange, processing, and
display of text that is written in any of the languages of
the modern world. It can also support many classical
and historical texts and is continually being expanded.
The Unicode Standard is compatible with ISO/IEC
10646. For more information, see www.unicode.org.
UTF-8. Unicode transformation format - 8. A
byte-oriented encoding form specified by the Unicode
Standard.
V
validate. To check a policy set for conditions that can
cause problems if that policy set becomes the active
policy set. For example, the validation process checks
whether the policy set contains a default management
class.
Index
Special characters
$$CONFIG_MANAGER$$ 496
Numerics
3480 tape drive
cleaner cartridge 157
device support 59
device type 164
3490 tape drive
cleaner cartridge 157
device support 59
device type 164
3494 automated library device 33, 80
3494SHARED server option 71
3570 tape drive
ASSISTVCRRECOVERY server option 71
defining device class 50, 163
device support 59
3590 tape drive
ASSISTVCRRECOVERY server option 71
defining device class 50, 163, 165
device support 81
A
absolute mode, description of 323
ACCEPT DATE command 388
access authority, client 267
access mode, volume
changing 192
description 193
determining for storage pool 183, 244
access, managing 288, 290
accounting record
description of 464
monitoring 464
accounting variable 464
ACSLS (Automated Cartridge System Library Software)
StorageTek library
configuring 94
description 34
Tivoli Storage Manager server options for 71
ACTIVATE POLICYSET command 329
ACTIVE policy set
creating 319, 329
replacing 300
activity log
adjusting the size 450
description of 449
monitoring 449
querying 450
setting the retention period 450
administrative client
description of 3
viewing information after IMPORT or EXPORT 536
administrative commands
ACCEPT DATE 394
ASSIGN DEFMGMTCLASS 300, 328
AUDIT LIBVOLUME 147
Copyright IBM Corp. 1993, 2003
aggregate file
controlling the size of 196
definition 196
reclaiming space in 213, 214
unused space in 235
viewing information about 229
AIXASYNCIO 399
AIXDIRECTIO 399
analyst privilege class
revoking 294
ANR8914I message 158
ANR9999D message 453
application client
adding node for 252
description 4
policy for 332
application program interface (API)
client, registering 255
compression option 255
deletion option 255
description of 3
registering to server 255
archive
amount of space used 236
backup set 9, 13
defining criteria 318
description of 8, 13
file while changing 327
instant 8, 13
policy 23
processing 315
archive copy group
defining 327, 328
deleting 340
description of 304
archive file management 302
archiving a file 302, 315
ASCII restriction for browser script definition 407
ASSIGN DEFMGMTCLASS command 300, 328
assigned capacity 423, 430
association, client with schedule
defining 361
deleting 370
association, file with management class 310, 311
association, object with profile
administrative command schedule 487
administrator 485, 498
client option set 485
deleting 489
policy domain 486
script 485
Atape device driver 62
atldd device driver 62
AUDIT LIBVOLUME command 147
AUDIT LICENSE command 387
AUDIT VOLUME command 572, 578
auditing
library's volume inventory 147
license, automatic by server 387
multiple volumes in sequential access storage pool 578
single volume in sequential access storage pool 579
volume in disk storage pool 578
volume, reasons for 572
volumes by date 579
volumes by storage pool 579
authentication, client/server 294
authority
client access 267
granting to administrators 288
privilege classes 288
server options 288
AUTOFSRENAME parameter 273
Automated Cartridge System Library Software (ACSLS)
StorageTek library
configuring 94
description 34
Tivoli Storage Manager server options for 71
automated library device
auditing 147
changing volume status 145
checking in volumes 137
defining 33
informing server of new volumes 137
labeling volumes 135
overflow location 185
removing volumes 145
returning volumes 146
scratch and private volumes 38
updating 153
volume inventory 39
automatically renaming file spaces 273
automating
client operations 360
server operations 401
awk script 596, 619
B
background processes 394
backup
amount of space used by client 236
comparison of types 11, 14
database 556, 562
default policy 299
defining criteria for client files 318
differential, for NAS node 9, 45
file 302, 312, 314
file management 302
file while open 321
frequency for file 322
full, for NAS node 45
group 12
incremental 302, 312
logical volume 314
NAS file server 44
policy 23
selective 302, 314
snapshot, using hardware 9, 12
storage pool 549
subfiles, server set-up 22, 350
types available 10, 14
when to perform for database 555
backup copy group
defining 321, 326
deleting 340
description of 304
frequency 312
mode 312
serialization 312
BACKUP DB command 562
BACKUP DEVCONFIG command 560
backup period, specifying for incremental 374
backup set
adding subfiles to 352
deleting 350
description of 345
displaying contents of 349
example of generating 346
generating 345
how the server manages and tracks 347
media, selecting 345
moving to other servers 347
OST extension on 345
selecting a name for 346
selecting retention period for 347
suggested usage 9, 13, 15, 22
updating 347
use as archive 9, 13, 15, 22
viewing information about 348
BACKUP STGPOOL command 549
BACKUP VOLHISTORY command 557
backup-archive client
description of 3
operations summary 10
performing operations for 343, 367, 372
policy for 307
registering node 252
scheduling operations for 360
using to back up NAS file server 113, 128
bar-code reader
auditing volumes in a library 147
checking in volumes for a library 140
labeling volumes in a library 136
base file 350
batch file, scheduling on client 363
binding a file to a management class 310
browser, limited to ASCII entry for script definition 407
buffer pool 433
BUFPOOLSIZE option 399, 434
C
cache
deleting files from 208, 238
description of 20
disabling for disk storage pools 207
effect on performance 208
effect on statistics 208
enabling for disk storage pools 184, 207
monitoring utilization on disk 233
CANCEL PROCESS command 232, 396
CANCEL RESTORE command 287
CANCEL SESSION command 284
capacity, assigned 423, 430
capacity, tape 175
capturing server messages to a user log 389
cartridge
cleaner cartridge 157
device support 59
device type 164
category, 349X library 80
Celerra, EMC file server
storage pool for backup 186
central scheduling
client operations 343, 359, 367, 372
controlling the workload 375
coordinating 372
description of 23, 25, 359
server operations 402
client
access user ID 267
administrative 3
API (application program interface) 4, 255
application client 4, 332
backup-archive 3
how to protect 8
operations summary 10
options file 255
restore without primary volumes available 552
Tivoli Storage Manager for Space Management (HSM
client) 4, 307
using to back up NAS file server 44, 128
client file
allowing archive while changing 300
allowing backup while changing 299, 321
associating with management class 310, 311
damaged 552
delaying migration of 204
deleting 247
deleting from a storage pool 246
deleting from cache 208
deleting when deleting a volume 247
duplication when restoring 552
eligible for archive 300, 312
eligible for backup 299, 312
eligible for expiration 301
eligible for space management 315
how IBM Tivoli Storage Manager stores 196
on a volume, querying 228
server migration of 199
client migration 315, 316
client node
adding 251
amount of space used 235
creating backup sets for 345
file spaces, QUERY OCCUPANCY command 235
finding tapes used by 230
immediate processing 378
importing 533
locking 264
managing registration 252, 262, 383
options file 256
performing operations for 343, 367, 372
privilege class for scheduling operations for 360
querying 264
registering 255
removing 264
renaming 263
scheduling operations for 360
setting password authentication 296
setting scheduling mode 374
setting up subfile backups 351
unlocking 264
updating 263
viewing information about 264
client option
TXNBYTELIMIT 196
VIRTUALMOUNTPOINT 269
client option set
adding client options to 281
assigning clients to 282
copying 282
creating 281
deleting 283
deleting an option from 282
for NAS node 125
configuring (continued)
shared library 77, 87
console mode 536
contents of a volume 228
context messaging for ANR9999D 453
continuation characters, using 408
COPY CLOPTSET command 282
COPY DOMAIN command 319
copy group
archive, description of 304
backup, description of 304
defining archive 327
defining backup 321
deleting 340
COPY MGMTCLASS command 321
COPY POLICYSET command 319
COPY SCHEDULE command 368, 405
COPY SCRIPT command 411
COPY SERVERGROUP command 504
copy storage pool
compared with primary 245
defining a 244
restore from multiple 552
role in storage pool migration 207
simultaneous write 187
storage hierarchy, effect on 198
creating backup sets
benefits of 345
example for 346
creating server scripts 407
cross definition 473, 474, 477
cyclic redundancy check
during a client session 343
for storage pool volumes 574
for virtual volumes 506
performance considerations for nodes 344
performance considerations for storage pools 577
D
damaged files 542, 580
data
considering user needs for recovering 51
exporting 513
importing 513
data compression 253
data format for storage pool 113, 124, 131
definition 185
operation restrictions 186
data movement, querying 240
data mover
defining 108, 125
description 38
managing 130
NAS file server 38
data storage
client files, process for storing 5
concepts overview 15
considering user needs for recovering 51
deleting files from 247
evaluating 39
example 181
managing 18
monitoring 572
planning 39
protection, methods 542
server options affecting 71
E
ECARTRIDGE device class 163
element address 107
EMC Celerra file server
storage pool for backup 186
ENABLE EVENTS command 452
ENABLE SESSIONS command 286
ENABLE3590LIBRARY parameter 72, 80
END EVENTLOGGING command 453
Enterprise Administration
description 467
enterprise configuration
communication setup 472
description 468, 479
procedure for setup 480
profile for 482
scenario 470, 480
subscription to 483
enterprise event logging 461, 472
enterprise logon 267, 469, 500
environment variable, accounting 464
environment variables 391
error analysis 443
error checking for drive cleaning 158
error reporting for ANR9999D messages 453
error reports for volumes 226
establishing server-to-server communications
enterprise configuration 472
enterprise event logging 472
virtual volumes 478
estimated capacity for storage pools 224
estimated capacity for tape volumes 226
event logging 451
event record (for a schedule)
deleting 372, 406
description of 362, 370
managing 405
querying 405
removing from the database 372, 406
setting retention period 372, 406
event server 461
EXPINTERVAL option 330
expiration date, setting 404
expiration processing
description 330, 553
files eligible 301, 330
of subfiles 301, 324, 330, 352
starting 330
using disaster recovery manager 331
EXPIRE INVENTORY command
deleting expired files 330
duration of process 331
export
labeling tapes 517
monitoring 534
planning for sequential media 521
PREVIEW parameter 521
querying about a process 535
querying the activity log 536
using scratch media 522
viewing information about a process 535
EXPORT ADMIN command 523
EXPORT commands 535, 536
F
file data, importing 513
FILE device type
defining device class 163
deleting scratch volumes 558
setting up storage pool 54
file exit 451
file name for a device 62, 126
file retrieval date 208
file server, network-attached storage (NAS)
backup methods 44
registering a NAS node for 125
using NDMP operations 44, 111
file size, determining maximum for storage pool
file space
deleting, effect on reclamation 214
deleting, overview 279
description of 269
merging on import 516, 525
names that do not display correctly 279
QUERY OCCUPANCY command 235
querying 269
renaming 534
Unicode enabled 278
viewing information about 269
file space identifier (FSID) 278
file-level restore 120
managing 129
planning 120
file, client
allowing archive while changing 300
allowing backup while changing 299, 321
associating with management class 310, 311
damaged 552
delaying migration of 204
deleting 247
deleting from a storage pool 246
deleting from cache 208
deleting when deleting a volume 247
duplication when restoring 552
H
HALT command 392
halting the server 392
held volume in a client session 283
HELP command 399
hierarchical storage management
See Tivoli Storage Manager for Space Management
hierarchy, storage
defining in reverse order 186, 195
establishing 194
example 181
how the server stores files in 196
next storage pool
definition 195
deleting 247
migration to 199, 231
staging data on disk for tape storage 199
HL ADDRESS 262
home pages xv
how backup sets are managed 347
how to cause the server to accept date and time 394
HSM
See Tivoli Storage Manager for Space Management
I
IBM error analysis 443
IBM service xv
IBM Tivoli Storage Manager (Tivoli Storage Manager)
introduction 3
server network 26, 467
IBM Tivoli Storage Manager device driver 62
IBMtape device driver 62
IDLETIMEOUT server option 284, 285
image backup
policy for 333, 334
suggested use 8, 12
import
monitoring 534
PREVIEW parameter 521, 527
querying about a process 535
querying the activity log 536
recovering from an error 534
viewing information about a process 535
IMPORT ADMIN command 525
IMPORT commands 535, 536
IMPORT NODE command 525, 532
IMPORT POLICY command 525
IMPORT SERVER command 525, 532
importing
data 525
data storage definitions 529, 530
date of creation 527, 532
description of 513
directing messages to an output file 518, 530
duplicate file spaces 531
file data 531
policy definitions 529
server control data 530
subfiles 351
subsets of information 533
include-exclude file
description of 23, 308
for policy environment 304, 308
incomplete copy storage pool, using to restore 552
incremental backup, client
file eligibility for 312
frequency, specifying 374
full 312
partial 313
progressive 14
initial start date for schedule 403
initial start time for schedule 403
installing IBM Tivoli Storage Manager xiii, 252
instant archive
creating on the server 344
description of 9, 13
interface, application program
client, registering 255
compression option 255
deletion option 255
description of 3
registering to server 255
interfaces to IBM Tivoli Storage Manager 17
Internet xv
introduction to IBM Tivoli Storage Manager 3
J
Journal File System 188, 423
L
label
checking media 140
overwriting existing labels 134, 135
sequential storage pools 133, 191
volume examples 135
volumes using a library device 135
LABEL LIBVOLUME command
identifying drives 134
insert category 136
labeling sequential storage pool volumes 134
overwriting existing volume labels 134
restrictions for VolSafe-enabled drives 173
using a library device 135
using a manual library 104
using an automated library 76, 86, 97
volume labeling examples 135
LAN-free data movement
configuration 104
description 15, 42
suggested usage 9
library
ACSLS (Automated Cartridge System Library
Software) 34, 94
adding volumes 137
attaching for NAS file server backup 114, 123
auditing volume inventory 147
automated 145
categories for IBM 3494 80
configuration example 72, 82, 102
configure for more than one device type 70, 165
defining 106, 154
defining path for 108, 126
deleting 153
detecting changes to, on a SAN 106, 109
external 34
full 146
IBM 3494 33, 80
managing 152
manual 33, 102, 150
mixing device types 70
mode, random or sequential 61
overflow location 185
querying 152
SCSI 33
serial number 106
sharing among servers 77, 87
type 41
updating 152
volume inventory 39
library client, shared library 42, 79, 88
library manager, shared library 42, 78, 88
license
compliance 387
features 383
monitoring 387
registering 384
using 383
limitation for script definition on administrative Web
interface 407
LL ADDRESS 262
location, volume
changing 192
overflow for storage pool 185
querying volume 227
LOCK ADMIN command 292
LOCK NODE command 264
LOCK PROFILE command 488, 489
log mode
normal 554, 556
roll-forward 554, 556
setting 554
logical devices 54
logical volume on client
backup 302
management class for 310
policy for 312, 333
process for backup 314
restore 302
logical volume, raw 54, 188, 190, 423
LOGPOOLSIZE option 434
loop session, DSMC 283
LTO Ultrium device type 163
LUN
using in paths 108
M
machine characteristics 596
machine recovery information 596
macro
commit individual commands 416
continuation characters 414
controlling command processing 416
running 415
scheduling on client 363
substitution variables 415
testing 416
using 413
writing commands 413
writing comments 414
MACRO administrative command, using 259
magnetic disk devices 35, 53
managed server
changing the configuration manager 494, 499
communication setup 472
deleting a subscription 496
description 468
managed objects 468, 493
refreshing configuration information 497
renaming 500
returning managed objects to local control 498
setting up 482
subscribing to a profile 483, 493, 494
management class
assigning a default 328
associating a file with 310
binding a file to 310
configuration 307
controlling user access 307
copying 316, 320
default 308
defining 320
deleting 341
description of 304, 307
querying 339
rebinding a file 311
updating 311, 316, 321
mount (continued)
library 168
limit 165
mode 149
operations 149
query 151
retention period 166
wait period 166
mount point
preemption 396
queue, server option 72
relationship to mount limit in a device class 165, 171
settings for a client session 253
MOVE BATCHSIZE option 399
MOVE DATA command 238
MOVE DRMEDIA command 606
MOVE NODEDATA 241
MOVE SIZETHRESH option 399
moving a backup set
benefits of 347
to another server 347
moving data
from offsite volume in a copy storage pool 238
monitoring the movement of 241
procedure 239
requesting processing information 240
to another storage pool 238
to other volumes in same storage pool 238
multiple
copy storage pools, restoring from 552
device types in a single library 165
managing IBM Tivoli Storage Manager servers 26, 467
servers, running 391
N
name of device 62
NAS file server, NDMP operations
backing up a NAS file server 128
configuration checklist 122
data format 113
data mover, description 38, 108
defining a data mover 108, 125
defining a device class 123
defining a path for data mover and a library 126
defining a storage pool 124
defining a tape drive 127
differential image backup, description 45
full image backup, description 45
interfaces used with 113
managing NAS nodes 129
path, description 38, 108
planning 114
policy configuration 124, 334
registering a NAS node 125, 254
requirements for set up 111
restoring a NAS file server 128
scheduling a backup 128
storage pools for NDMP operations 124
NAS node
defining 125
deleting 130
registering 125
renaming 130
NATIVE data format 113
NDMP operations for NAS file servers
backing up a NAS file server 128
O
occupancy, querying 234
ODBC driver 444
offsite recovery media (for DRM)
volumes
moving back onsite 605
sending offsite 604
states 603
offsite volume access mode 194
offsite volumes, moving data in a copy storage pool 238
one-drive library, volume reclamation 184, 217
open registration
description 252
process 253
setting 252
operations available to client 10
operator privilege class
revoking 294
optical device
defining device class 163, 169
device driver 61
OPTICAL device type 169
reclamation for media 217
option set, client
adding client options to 281
assigning clients to 282
copying 282
creating 281
deleting 283
deleting an option from 282
for NAS node 125
requesting information about 282
updating description for 283
option, server
3494SHARED 71
ACSLS options 71
ASSISTVCRRECOVERY 71
AUDITSTORAGE 387, 398
BUFPOOLSIZE 434
changing with SETOPT command 398
COMMTIMEOUT 284, 285
DEVCONFIG 560
DRIVEACQUIRERETRY 72
ENABLE3590LIBRARY 72, 80
EXPINTERVAL 330
EXPQUIET 331
IDLETIMEOUT 284, 285, 441
LOGPOOLSIZE 434
MOVEBATCHSIZE 399
MOVESIZETHRESH 399
NOPREEMPT 72, 396
NORETRIEVEDATE 208
overview 18
QUERYAUTH 288
REQSYSAUTHOUTFILE 288
RESOURCETIMEOUT 72
RESTOREINTERVAL 287, 301, 330
SEARCHMPQUEUE 72
SELFTUNEBUFPOOLSIZE 399
SELFTUNETXNSIZE 399
THROUGHPUTDATATHRESHOLD 285
THROUGHPUTTIMETHRESHOLD 285
P
Q
query
authority 288
database volumes 426
for general information 225
policy objects 338
recovery log volumes 426
storage volumes 225
QUERY ACTLOG command 450, 536
QUERY ADMIN command 292
QUERY BACKUPSET command 348
QUERY BACKUPSETCONTENTS command 349
QUERY CONTENT command 228
QUERY COPYGROUP command 338, 531
QUERY DB command 431, 434
QUERY DBBACKUPTRIGGER command 557
QUERY DBVOLUME command 431, 548
QUERY DEVCLASS command 521
QUERY DOMAIN command 340
QUERY DRIVE command 154
QUERY DRMSTATUS command 590
QUERY ENABLED command 463
QUERY EVENT command 370, 405
QUERY FILESPACE command 269
QUERY LIBRARY command 152
QUERY LICENSE command 387
QUERY LOG command 435
QUERY LOGVOLUME command 431, 548
QUERY MGMTCLASS command 339
QUERY MOUNT command 151
QUERY NODE command 264
QUERY OCCUPANCY command 234, 235, 236
QUERY OPTION command 442
QUERY POLICYSET command 339
QUERY PROCESS command 232, 240, 395, 441, 535
QUERY REQUEST command 150
QUERY RESTORE command 287
QUERY RPFCONTENT command 600
QUERY RPFILE command 600
QUERY SCHEDULE command 362
QUERY SCRIPT command 411
QUERY SERVERGROUP command 504
QUERY SESSION command 283, 440
QUERY STATUS command 442
QUERY STGPOOL command 223, 231, 233
QUERY SUBSCRIPTION command 495
QUERY SYSTEM command 443
QUERY VOLHISTORY command 558
QUERY VOLUME command 225, 241
QUERYAUTH server option 288
R
random mode for libraries 61
randomize, description of 376
raw logical volume 54, 188, 190, 423
read-only access mode 193
read/write access mode 193
rebinding
description of 311
file to a management class 311
recalling a file
selective 303
transparent 303
receiver 451
reclamation
delayed start of process 214
delaying reuse of volumes 220, 553
description of 20
effects of collocation 220
effects of DELETE FILESPACE 214
offsite volume 219
setting a threshold for sequential storage pool 184, 213,
245
storage pool for 184
virtual volumes 218
with single drive 217
RECONCILE VOLUMES command 510
recovering storage pools 549
recovering the database 563
recovery instructions file 624
recovery log
adding space to 428, 429
automating increase of 427
available space 422, 425
buffer pool 435
consistent database image 419
defining a volume 429
defining mirrored volumes 547
deleting a volume 432
deleting space 431
description of 26, 419
determining how much space is allocated 422, 425
estimating the amount of space needed 424
logical volume 422, 425
managing 419
mirroring 544, 546
monitoring space 422, 425
monitoring the buffer pool 435
optimizing performance 433
querying the buffer pool 435
querying volumes 426
reducing capacity 432
size of 554
space trigger 426, 427
storage pool size effect 419
viewing information about 435
when to backup 542, 546, 555
recovery log mode
normal 554, 556
roll-forward 554, 556
setting 554
recovery plan file
break out stanzas 619
creating 598
example 622
prefix 592
stanzas 619
recovery, disaster
auditing storage pool volumes 580
example recovery procedures 581
general strategy 505, 541, 542
media 598
methods 505, 541
providing 505, 541
when to backup 542, 555
Redbooks xv
REDUCE DB command 432
REDUCE LOG command 432
REGISTER ADMIN command 291
REGISTER LICENSE command 386
REGISTER NODE command 268
S
SAN (storage area network)
client access to devices 42
device changes, detecting 109
LAN-free data movement 42
NDMP operations 44, 111
policy for clients using LAN-free data movement 335
sharing a library among servers 41, 77, 87
storage agent role 42
schedule
administrative command 401
associating client node with 361
checking the log file 368
coordinating 372
copying 368, 405
day of the week 403
defining 360, 403
deleting 368, 405
description of 359
expiration date 404
failed, querying 362, 371
for NAS file server backup 128
frequency of service 403
initial start date 403
initial time 403
mode, setting 373
priority 404
querying 362
results of 370, 405
server administrative command 401
startup window 375, 403
type of action 404
uncertain status 371, 406
updating 403
viewing information about 362
schedule event
managing 370, 405
querying 370, 405
viewing information about 370, 405
scheduled operations, setting the maximum 375
scheduler workload, controlling 375
scheduling mode
client-polling 373
overview of 373
selecting 373
server-prompted 373
setting on a client node 374
setting on the server 373
scheduling, central
client operations 343, 359, 367, 372
server (continued)
disabling access 286
disaster recovery 28
enabling access 286
halting 392
importing subfiles from 351
maintaining, overview 17
managing multiple 26
managing operations 383
managing processes 394
messages 452
multiple instances 391
network of IBM Tivoli Storage Manager 26, 467
options, adding or updating 398
prefix 502
protecting 27
querying about processes 395, 441
querying options 442
querying status 442
restarting 393
running multiple servers 391
setting the server name 397
starting 387, 388
stopping 392
viewing information about 442
viewing information about processes 395, 441
server console, description of 288
SERVER device type 505
server group
copying 504
defining 503
deleting 505
member, deleting 505
moving a member 505
querying 504
renaming 504
updating description 504
server option
3494SHARED 71
ACSLS options 71
ASSISTVCRRECOVERY 71
AUDITSTORAGE 387, 398
BUFPOOLSIZE 434
changing with SETOPT command 398
COMMTIMEOUT 284, 285
DEVCONFIG 560
DRIVEACQUIRERETRY 72
ENABLE3590LIBRARY 72, 80
EXPINTERVAL 330
EXPQUIET 331
IDLETIMEOUT 284, 285, 441
LOGPOOLSIZE 434
MOVEBATCHSIZE 399
MOVESIZETHRESH 399
NOPREEMPT 72, 396
NORETRIEVEDATE 208
overview 18
QUERYAUTH 288
REQSYSAUTHOUTFILE 288
RESOURCETIMEOUT 72
RESTOREINTERVAL 287, 301, 330
SEARCHMPQUEUE 72
SELFTUNEBUFPOOLSIZE 399
SELFTUNETXNSIZE 399
THROUGHPUTDATATHRESHOLD 285
THROUGHPUTTIMETHRESHOLD 285
TXNGROUPMAX 196
T
table of contents 120
managing 129
planning 120
tape
capacity 175
exporting data 522
finding for client node 230
label prefix 167
monitoring life 227
number of times mounted 227
tape (continued)
planning for exporting data 521
recording format 167
reuse in storage pools 141
rotation 47, 142
scratch, determining use 183, 191, 245
setting mount retention period 166
target server 507
technical publications, Redbooks xv
threshold
migration, for storage pool 200, 205
reclamation 184, 213, 245
THROUGHPUTDATATHRESHOLD server option 285
THROUGHPUTTIMETHRESHOLD server option 285
time interval, setting for checking in volumes 166
timeout
administrative Web interface session 294
client session 285
Tivoli Data Protection for NDMP
See NDMP operations for NAS file servers
Tivoli event console 451, 455
Tivoli Storage Manager for Space Management
archive policy, relationship to 316
backup policy, relationship to 316
description 303
files, destination for 320
migration of client files
description 303
eligibility 315
policy for, setting 315, 320
premigration 303
recall of migrated files 303
reconciliation between client and server 303
selective migration 303
setting policy for 316, 320
space-managed file, definition 303
stub file 303
transactions, database 419, 420
transparent recall 303
trigger
database space 427
recovery log space 427
troubleshooting
errors in database with external media manager 102
tuning, server automatically 399
TXNBYTELIMIT client option 196
TXNGROUPMAX server option 196
type, device
3570 165
3590 165
4MM 165
8MM 165
CARTRIDGE 165
DISK 163
DLT 165
DTF 165
ECARTRIDGE 163
GENERICTAPE 165
LTO 164
multiple in a single library 70, 165
NAS 123
OPTICAL 169
QIC 165
SERVER 165, 506, 507
VOLSAFE 173
WORM 165
WORM12 165
U
Ultrium, LTO device type 163
unavailable access mode
description 193
marked by server 151
uncertain, schedule status 371, 406
Unicode
automatically renaming file space 273
client platforms supported 270
deciding which clients need enabled file spaces 271
description of 270
displaying Unicode-enabled file spaces 278
example of migration process 277
file space identifier (FSID) 278, 279
how clients are affected by migration 276
how file spaces are automatically renamed 274
migrating client file spaces 272
options for automatically renaming file spaces 273
unloading the database 435
UNLOCK ADMIN command 292
UNLOCK NODE command 264
UNLOCK PROFILE command 488, 489
unplanned shutdown 392
unreadable files 542, 580
unusable space for database and recovery log 422
UPDATE ADMIN command 291
UPDATE BACKUPSET command 347
UPDATE CLIENTOPT command 282
UPDATE CLOPTSET command 283
UPDATE COPYGROUP command 321, 327
UPDATE DBBACKUPTRIGGER command 557
UPDATE DEVCLASS command 168
UPDATE DOMAIN command 319
UPDATE DRIVE command 154
UPDATE LIBRARY command 152
UPDATE LIBVOLUME command 38, 145
UPDATE MGMTCLASS command 321
UPDATE NODE command 263, 277, 280
UPDATE POLICYSET command 319
UPDATE RECOVERYMEDIA command 598
UPDATE SCHEDULE command 403
UPDATE SCRIPT command 410
UPDATE SERVER command 478, 479
UPDATE VOLUME command 191
URL for client node 252
usable space 422
user exit 451
user ID, administrative
creating automatically 268
description of 252
preventing automatic creation of 268
using server performance options 399
AIXASYNCIO 399
AIXDIRECTIO 399
utilization, database and recovery log
description of 423
monitoring 423, 425
V
VALIDATE POLICYSET command 329
validating data
during a client session 343
for storage pool volumes 574
for virtual volumes 506
performance considerations for nodes 344
performance considerations for storage pools 577
variable, accounting log 464
VARY command 55
varying volumes on or off line 55
VERDELETED parameter 299, 323
VEREXISTS parameter 299, 323
versions data deleted, description of 299, 323, 326
versions data exists, description of 299, 323, 325
virtual volumes, server-to-server
reclaiming 218
using to store data 505
VIRTUALMOUNTPOINT client option 270
Vital Cartridge Records (VCR), corrupted condition 71
VOLSAFE device class 173
volume capacity 167
volume copy
allocating to separate disks 547
description of 547
volume history
deleting information from 558
file, establishing 437, 557
using backup to restore database 557, 564
VOLUMEHISTORY option 557
volumes
access preemption 397
access, controlling 141
adding to automated libraries 81
allocating space for disk 54, 190
assigning to storage pool 190
auditing 147, 572
auditing considerations 572
automated library inventory 39
capacity, compression effect 176
checking in new volumes to library 137
checking out 145
contents, querying 228
defining for database 429
defining for recovery log 429
defining to storage pools 191
delaying reuse 220, 553
deleting 248, 558
detailed report 229
determining which are mounted 151
disk storage 191
disk storage pool, auditing 578
dismounting 152
errors, read and write 226
estimated capacity 226
finding for client node 230
help in dsmc loop session 283
inventory maintenance 141
location 227
managing 145
monitoring life 227
monitoring movement of data 241
monitoring use 225
mount retention time 166
moving files between 237
number of times mounted 227
overview 38
pending status 227
private 38
volumes (continued)
querying contents 228
querying db and log volumes 426
querying for general information 225
random access storage pools 180, 191
reclamation 217
recovery using mirroring 570
removing from a library 145
returning to a library 146
reuse delay 220, 553
scratch category 38
scratch, using 191
sequential 191
sequential storage pools 133, 191
setting access mode 193
standard report 228
status, in automated library 38
status, information on 226
swapping 140
updating 145, 191
using private 38
varying on and off 55
W
Web administrative interface
description 17
limitation of browser for script definitions 407
setting authentication time-out value 294
Web backup-archive client
granting authority to 267
remote access overview 265
URL 252, 266
Web sites xv
wizard
client configuration 257
setup 257
workstation, registering 255
WORM device class
defining 169
maintaining volumes in a library 144
reclamation of media 217
WORM tape, StorageTek VolSafe 173
www xv
Printed in U.S.A.
GC32-0768-01