IBM Tivoli Monitoring: Installation and Setup Guide
GC32-9407-03
Note
Before using this information and the product it supports, read the information in Appendix L, Notices, on page 607.
This edition applies to version 6.2.2 of IBM Tivoli Monitoring (product number 5724-C04) and to all subsequent
releases and modifications until otherwise indicated in new editions.
Copyright IBM Corporation 2005, 2010.
US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Part 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Chapter 1. Overview of IBM Tivoli Monitoring . . . . . . . . . . . . . . . . . . . . . 3
Components of the monitoring architecture . . . . . . . . . . . . . . . . . . . . . . . 3
Tivoli Enterprise Monitoring Server . . . . . . . . . . . . . . . . . . . . . . . . . 5
Tivoli Enterprise Portal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Tivoli Enterprise Monitoring Agents . . . . . . . . . . . . . . . . . . . . . . . . . 6
Tivoli Data Warehouse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Event synchronization component . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Tivoli Enterprise Portal Server extended services . . . . . . . . . . . . . . . . . . . . 9
New in release 6.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Changes to installation media . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Changes to runtime prerequisites and platform support. . . . . . . . . . . . . . . . . . 11
Changes to the Tivoli Data Warehouse . . . . . . . . . . . . . . . . . . . . . . . 11
Authentication using LDAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Help for the Tivoli Enterprise Portal presented by the Eclipse Help Server. . . . . . . . . . . 12
Event forwarding and synchronization for Tivoli Enterprise Console and Netcool/OMNIbus. . . . . 12
Support for /3GB boot option on 32-bit Windows . . . . . . . . . . . . . . . . . . . . 12
Common Event Console view for events from multiple event servers . . . . . . . . . . . . 12
Remote installation of application support files using Manage Tivoli Enterprise Monitoring Services on Linux . . . 13
Flexible scheduling of Summarization and Pruning agent . . . . . . . . . . . . . . . . . 13
Validation of monitoring server protocols and standby configuration . . . . . . . . . . . . . 13
Support for License Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Support for UNIX agents in Solaris local zones . . . . . . . . . . . . . . . . . . . . 13
New managed system groups offer advanced features over managed system lists . . . . . . . 13
New in release 6.2 fix pack 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Base DVD split . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Support for Sun Java Runtime Environment . . . . . . . . . . . . . . . . . . . . . 14
Support for the browser client on Linux . . . . . . . . . . . . . . . . . . . . . . 14
Support for single sign-on for launch to and from other Tivoli applications . . . . . . . . . . 14
New in release 6.2.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Reconfigured product media . . . . . . . . . . . . . . . . . . . . . . . . . . 15
New IBM Tivoli Monitoring High-Availability Guide provides resiliency information and instructions 15
IPv6 communications protocol now fully supported . . . . . . . . . . . . . . . . . . 15
RedHat Enterprise Linux 2.1 no longer supported on Intel platforms . . . . . . . . . . . . 16
SELinux now supported when executing IBM Tivoli Monitoring . . . . . . . . . . . . . 16
Asynchronous remote agent deployment and group deployment now supported . . . . . . . 16
Linux/UNIX users: 64-bit Tivoli Enterprise Portal Server now supported. . . . . . . . . . . 16
Support for 64-bit DB2 for the workstation . . . . . . . . . . . . . . . . . . . . . 16
Separate DB2 for the workstation licensing no longer required . . . . . . . . . . . . . 16
Tivoli Data Warehouse now supports DB2 on z/OS . . . . . . . . . . . . . . . . . . 16
New schema publication tool simplifies generation of SQL statements needed to create the Tivoli Data Warehouse . . . 16
Tivoli Data Warehouse support for Solaris environments . . . . . . . . . . . . . . . . 16
Agentless monitoring of distributed operating systems now supported . . . . . . . . . . . 17
The tacmd createNode command need no longer be executed on the monitoring server node . . . 17
Support for multiple remote Tivoli Enterprise Monitoring Servers on one Linux or UNIX computer 17
New in release 6.2.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Netcool/OMNIbus integration . . . 36
Firewall gateway . . . 37
IBM Tivoli Universal Agent . . . 37
IBM Tivoli Agent Builder . . . 37
Additional ports used in the Tivoli Monitoring environment . . . 38
Understanding COUNT and SKIP options . . . 38
Configuring your firewalls . . . 39
Sizing your Tivoli Monitoring hardware . . . 39
Locating and sizing the hub Tivoli Enterprise Monitoring Server . . . 40
Locating and sizing the remote Tivoli Enterprise Monitoring Server . . . 41
Locating and sizing the remote deployment depot . . . 41
Locating and sizing the Tivoli Enterprise Portal Server . . . 42
Locating and sizing the Warehouse Proxy agent . . . 42
Locating and sizing the Summarization and Pruning agent . . . 44
Locating and sizing the portal client . . . 45
Platform support matrix for Tivoli Monitoring . . . 46
Configuring for high availability and disaster recovery . . . 47
Configuring the hub monitoring server for high availability and disaster recovery . . . 48
Configuring for portal server high availability and disaster recovery . . . 48
Configuring for agent and remote monitoring server high availability and disaster recovery . . . 49
Configuring for warehouse high availability and disaster recovery . . . 50
Configuring for Warehouse Proxy agent high availability and disaster recovery . . . 50
Configuring for Summarization and Pruning agent high availability and disaster recovery . . . 51
Agent deployments . . . 51
Background information about agent autonomy . . . 52
Event forwarding from autonomous agents . . . 53
Agentless monitoring versus monitoring agents . . . 53
Deployment options for agentless monitors . . . 58
Documentation resources for agentless monitoring . . . 59
Problem-diagnosis tools available for agentless monitoring . . . 59
Tivoli Universal Agent deployments . . . 59
Tivoli Universal Agent versioning considerations . . . 59
Tivoli Universal Agent firewall considerations . . . 60
Large-scale deployment strategies . . . 60
Using Universal Agents with remote monitoring servers . . . 60
Mainframe users . . . 61
Multi-hub environments . . . 61
Accelerating your custom monitoring . . . 62
Planning and project management . . . 63
Estimating deployment tasks . . . 63
Install server components on Windows and UNIX . . . 64
Install server components on z/OS . . . 64
Install data warehousing components . . . 64
Install and configure event integration components . . . 64
Install and configure monitoring agents . . . 65
Setting up situation-based monitoring . . . 66
Creating policies and workflows . . . 66
Creating workspaces . . . 66
Creating and deploying Tivoli Universal Agent applications . . . 66
Transferring skills . . . 67
Scheduling the initial deployment . . . 67
Scheduling for fix packs . . . 67
Staffing . . . 67
Additional requirements . . . 111
Required hardware for System z . . . 111
Required software . . . 111
Required software for event integration with Netcool/OMNIbus . . . 113
Table spaces . . . 314
Buffer pools . . . 315
Logging . . . 315
Database maintenance . . . 316
Application design details . . . 318
Hardware design and operating system usage . . . 318
Memory . . . 319
CPU . . . 319
I/O . . . 319
Network . . . 320
Tuning . . . 320
Database manager configuration tuning . . . 320
Database configuration tuning . . . 321
Buffer pools . . . 323
Registry variables . . . 323
Monitoring tools . . . 323
SNAPSHOT and EVENT monitors . . . 323
DB2BATCH . . . 324
Optimizing queries . . . 325
Processing queries . . . 325
Defining custom queries . . . 326
Optimizing situations . . . 327
Planning for platform-specific scenarios . . . 328
Disabling TCP-delayed acknowledgements on AIX systems . . . 328
Chapter 16. Tivoli Data Warehouse solution using DB2 for the workstation . . . . . . . . . 349
Supported components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Prerequisite installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
Implementing a Tivoli Data Warehouse solution using DB2 for the workstation . . . 352
Assumptions . . . 352
Solution steps . . . 352
Step 1: Create the Tivoli Data Warehouse database . . . 353
Creating the warehouse database on DB2 for the workstation . . . 353
Creating a warehouse user on Windows . . . 354
Creating a warehouse user on Linux or UNIX . . . 354
Limiting the authority of the warehouse user . . . 355
Setting database and instance configuration values . . . 356
Activating the DB2 listeners on a UNIX DB2 server . . . 357
Step 2: Install and configure communications for the Warehouse Proxy agent . . . 358
Cataloging a remote data warehouse . . . 359
Configuring an ODBC data source for a DB2 data warehouse . . . 360
Before you begin . . . 360
Procedure . . . 361
Configuring a Warehouse Proxy agent on Windows (ODBC connection) . . . 361
Configuring a Warehouse Proxy agent on Linux or UNIX (JDBC connection) . . . 364
Starting the Warehouse Proxy . . . 366
Step 3: Configure communications between the Tivoli Enterprise Portal Server and the data warehouse . . . 367
Configuring a Windows portal server (ODBC connection) . . . 367
Before you begin . . . 367
Procedure . . . 367
Configuring a Linux or AIX portal server (DB2 for the workstation CLI connection) . . . 369
Starting the portal server . . . 370
Step 4: Install and configure communications for the Summarization and Pruning agent . . . 371
Chapter 18. Tivoli Data Warehouse solution using Microsoft SQL Server . . . 397
Supported components . . . 398
Prerequisite installation . . . 399
Implementing a Tivoli Data Warehouse solution using Microsoft SQL Server . . . 400
Assumptions . . . 400
Solution steps . . . 400
Step 1: Create the Tivoli Data Warehouse database . . . 402
Step 2: Install and configure communications for the Warehouse Proxy agent . . . 403
Configuring an ODBC data source for a Microsoft SQL data warehouse . . . 404
Configuring a Warehouse Proxy agent on Windows (ODBC connection) . . . 405
Configuring a Warehouse Proxy agent on Linux or UNIX (JDBC connection) . . . 407
Starting the Warehouse Proxy agent . . . 409
Step 3: Configure communications between the Tivoli Enterprise Portal Server and the data warehouse . . . 410
Configuring the portal server (ODBC connection) . . . 410
Step 4: Install and configure communications for the Summarization and Pruning agent . . . 412
Chapter 23. Monitoring your operating system via a System Monitor Agent . . . 517
Installing the System Monitor Agent on Windows systems . . . 517
Configuring the System Monitor Agents on Windows . . . 520
Uninstalling the Windows System Monitor Agent . . . 521
Installing the System Monitor Agent on Linux or UNIX systems . . . 522
Appendix C. Firewalls . . . 551
Determining which option to use . . . 551
Flow of connection establishment . . . 551
Permission at the firewall . . . 551
Server address continuity . . . 552
Number of internet zones . . . 552
Basic (automatic) implementation . . . 552
Implementation with ephemeral pipe . . . 552
Implementation with partition files . . . 554
Sample scenarios . . . 554
Scenario 1: Hub monitoring server INSIDE and monitoring agents OUTSIDE . . . 554
Scenario 2: Hub and remote monitoring servers INSIDE and monitoring agents OUTSIDE . . . 554
Scenario 3: Hub monitoring server INSIDE, remote monitoring server and agents OUTSIDE . . . 555
Creating or modifying the partition file in Manage Tivoli Enterprise Monitoring Services . . . 555
Windows: Editing the partition file . . . 555
UNIX and Linux: Editing the partition file . . . 556
Creating the partition file manually . . . 556
Sample partition file . . . 557
Implementation with firewall gateway . . . 557
Configuration . . . 558
Activation . . . 558
IPv4 Address Data . . . 559
IPv6 Address Data . . . 559
XML Document Structure . . . 559
Warehouse Proxy Configuration . . . 562
Example gateway configuration scenario . . . 562
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
Figures
1. IBM Tivoli Monitoring environment . . . . . . . . . . . . . . . . . . . . . . . . . 4
2. Event synchronization overview . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3. Help button on the IBM Tivoli Monitoring installer's Select Features panel . . . . . . . . . . 18
4. Help button on the installer's Hub TEMS Configuration panel . . . . . . . . . . . . . . 19
5. Status bar for the installer's software-installation phase . . . . . . . . . . . . . . . . . 19
6. Tivoli Monitoring V6.2.2 communications model . . . . . . . . . . . . . . . . . . . 29
7. Tivoli Monitoring component architecture including firewall gateway . . . . . . . . . . . . 30
8. Multiple data center environment . . . . . . . . . . . . . . . . . . . . . . . . . 33
9. Warehouse load projection spreadsheet . . . . . . . . . . . . . . . . . . . . . . 43
10. Tivoli supported platforms screen shot . . . . . . . . . . . . . . . . . . . . . . . 47
11. Architecture of agentless monitoring . . . . . . . . . . . . . . . . . . . . . . . . 55
12. Adding agentless monitors to the deployment depot . . . . . . . . . . . . . . . . . . 58
13. Configuration window for the portal server database using DB2 for the workstation . . . . . . 141
14. Configuration window for the Tivoli Data Warehouse database using DB2 for the workstation . . . 143
15. Configuration window for the Tivoli Data Warehouse database using Microsoft SQL Server . . . 143
16. Manage Tivoli Enterprise Monitoring Services window . . . . . . . . . . . . . . . . . 144
17. Progress bar for application seeding . . . . . . . . . . . . . . . . . . . . . . . 152
18. The Select Database for Tivoli Enterprise Portal window . . . . . . . . . . . . . . . . 163
19. Configuration window for the portal server database using DB2 for the workstation . . . . . . 166
20. Common Event Console Configuration window . . . . . . . . . . . . . . . . . . . 176
21. Registering the portal server with the Tivoli Enterprise Monitoring Server . . . . . . . . . . 177
22. Configuring database connections for the portal server . . . . . . . . . . . . . . . . 179
23. Configuration information for the Tivoli Data Warehouse using an Oracle database . . . . . . 181
24. Installing the Agent Compatibility Package (component code AC) . . . . . . . . . . . . 188
25. Java Runtime Environment Not Detected error . . . . . . . . . . . . . . . . . . . 189
26. IBM Tivoli Monitoring for Databases: application support packages . . . . . . . . . . . . 202
27. The Select the Application Support to Add to the TEMS window . . . . . . . . . . . . . 203
28. Application Support Addition Complete window . . . . . . . . . . . . . . . . . . . 204
29. Refresh Configuration menu option. . . . . . . . . . . . . . . . . . . . . . . . 206
30. Manage Tivoli Enterprise Monitoring Services Install Product Support window . . . . . . . . 216
31. Firefox Security Warning . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
32. Java memory error message . . . . . . . . . . . . . . . . . . . . . . . . . . 226
33. Java Control Panel window . . . . . . . . . . . . . . . . . . . . . . . . . . 227
34. Server connection error, Tivoli Enterprise Portal browser client . . . . . . . . . . . . . 228
35. Deployment Status Summary workspace showing the status of SSM deployments . . . . . . 248
36. Bulk deployment processing model. . . . . . . . . . . . . . . . . . . . . . . . 249
37. Restart Component window: Tivoli Enterprise Monitoring Server . . . . . . . . . . . . . 263
38. Restart of Monitoring Agent window . . . . . . . . . . . . . . . . . . . . . . . 265
39. Hierarchy for the heartbeat interval. . . . . . . . . . . . . . . . . . . . . . . . 270
40. Manage Tivoli Enterprise Monitoring Services Advanced Utilities window . . . . . . . . . . 272
41. The Manage Tivoli Enterprise Monitoring Services select the new portal server database window 273
42. The Manage Tivoli Enterprise Monitoring Services select the new portal server database window 273
43. Tivoli Enterprise Portal Server snapshot request screen . . . . . . . . . . . . . . . . 279
44. Tivoli Enterprise Portal Server snapshot verification screen . . . . . . . . . . . . . . . 280
45. Intranet with integral Web server . . . . . . . . . . . . . . . . . . . . . . . . 288
46. Intranet with external Web server . . . . . . . . . . . . . . . . . . . . . . . . 289
47. Intranet with integral Web server; Internet with external Web server. . . . . . . . . . . . 290
48. Intranet and Internet with integral and external Web servers . . . . . . . . . . . . . . 291
49. Two host addresses, intranet and Internet, with integral and external Web servers . . . . . . 292
50. Summary of support for the Tivoli Data Warehouse . . . . . . . . . . . . . . . . . . 345
51. Tivoli Data Warehouse solution using DB2 for the workstation . . . . . . . . . . . . . . 350
52. Warehouse Proxy Database Selection screen . . . . . . . . . . . . . . . . . . . . 362
53. Configure DB2 Data Source for Warehouse Proxy window . . . . . . . . . . . . . . . 363
Tables
1. IBM Tivoli Monitoring base monitoring agents . . . . . . . . . . . . . . . . . . . . . 3
2. Planning checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3. Warehouse database server considerations . . . . . . . . . . . . . . . . . . . . . 44
4. Portal client deployment advantages . . . . . . . . . . . . . . . . . . . . . . . 45
5. Data collectors usable with the various agentless monitors and releases supported . . . . . . 56
6. User's guides for the agentless monitors . . . . . . . . . . . . . . . . . . . . . . 59
7. Update history for the baroc files for IBM Tivoli Monitoring agents and components . . . . . . 65
8. Staffing estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
9. Commands for determining your system's kernel version . . . . . . . . . . . . . . . . 69
10. Installation and configuration steps . . . . . . . . . . . . . . . . . . . . . . . . 83
11. Supported IBM Tivoli Monitoring configurations using the IPv6 communications protocol . . . . 85
12. Supported Windows operating systems . . . . . . . . . . . . . . . . . . . . . . 96
13. Supported UNIX, i5/OS, and z/OS operating systems . . . . . . . . . . . . . . . . . 99
14. Supported Linux operating systems . . . . . . . . . . . . . . . . . . . . . . . 102
15. Operating system requirements for IBM GSKit . . . . . . . . . . . . . . . . . . . 106
16. Supported databases for the portal server . . . . . . . . . . . . . . . . . . . . . 107
17. Supported databases for the Tivoli Data Warehouse . . . . . . . . . . . . . . . . . 108
18. Estimated memory and disk storage for IBM Tivoli Monitoring components on distributed systems 110
19. Required software for IBM Tivoli Monitoring. . . . . . . . . . . . . . . . . . . . . 111
20. Agents requiring warehouse database migration . . . . . . . . . . . . . . . . . . . 119
21. Upgrading from IBM Tivoli Monitoring V6.1 or V6.2 to IBM Tivoli Monitoring V6.2.2 . . . . . . 127
22. Upgrading from OMEGAMON Platform 350 or 360 . . . . . . . . . . . . . . . . . . 132
23. OMEGAMON to IBM Tivoli Monitoring terminology . . . . . . . . . . . . . . . . . . 133
24. Unsupported OMEGAMON functions . . . . . . . . . . . . . . . . . . . . . . . 134
25. Configuration information for the portal server database . . . . . . . . . . . . . . . . 141
26. Configuration information for the Tivoli Data Warehouse database . . . . . . . . . . . . 143
27. IBM Tivoli Monitoring high-level installation steps . . . . . . . . . . . . . . . . . . 147
28. Communications protocol settings for the hub monitoring server . . . . . . . . . . . . . 150
29. Steps for installing a hub monitoring server on a Linux or UNIX computer . . . . . . . . . 153
30. UNIX monitoring server protocols and values . . . . . . . . . . . . . . . . . . . . 154
31. Parameters for the itmcmd manage command . . . . . . . . . . . . . . . . . . . 156
32. Remote monitoring server communications protocol settings . . . . . . . . . . . . . . 159
33. Steps for installing a remote monitoring server on a Linux or UNIX computer . . . . . . . . 160
34. UNIX monitoring server protocols and values . . . . . . . . . . . . . . . . . . . . 161
35. Configuration information for the portal server database . . . . . . . . . . . . . . . . 166
36. Steps for installing a portal server on a Linux or AIX computer . . . . . . . . . . . . . 170
37. Hub monitoring server protocols and values . . . . . . . . . . . . . . . . . . . . 173
38. Parameters for the itmcmd manage command . . . . . . . . . . . . . . . . . . . 176
39. Configuration information for the Tivoli Enterprise Portal Server database . . . . . . . . . 180
40. Communications protocol settings . . . . . . . . . . . . . . . . . . . . . . . . 185
41. Steps for installing a monitoring agent on Linux or UNIX . . . . . . . . . . . . . . . . 189
42. UNIX monitoring server protocols and values . . . . . . . . . . . . . . . . . . . . 191
43. Procedures for installing and enabling application support . . . . . . . . . . . . . . . 197
44. Product support on the Infrastructure and Agent DVDs . . . . . . . . . . . . . . . . 198
45. Installation media and instructions for installing application support for nonbase monitoring agents . . . 200
46. Locations of CAT and ATR files for the monitoring server . . . . . . . . . . . . . . . 213
47. Locations of application support files on a Linux or UNIX monitoring server . . . . . . . . . 214
48. Locations of CAT and ATR files for the monitoring server . . . . . . . . . . . . . . . 215
49. Language support included on IBM Tivoli Monitoring V6.2.2 Language Support DVDs . . . . . 218
50. File locations for changing application properties for UNIX and Linux . . . . . . . . . . . 230
51. Remote agent deployment tasks. . . . . . . . . . . . . . . . . . . . . . . . . 237
52. Agent depot management commands. . . . . . . . . . . . . . . . . . . . . . . 240
Part 1. Introduction
The single chapter in this section, Chapter 1, Overview of IBM Tivoli Monitoring, on page 3, describes the
architecture of the IBM Tivoli Monitoring products and provides information to help you plan your
deployment and prepare to install, upgrade, or configure the product's base components.
Tivoli Monitoring products use a set of service components (known collectively as Tivoli Management
Services) that are shared by a number of other product suites, including IBM Tivoli OMEGAMON XE
monitoring products, IBM Tivoli Composite Application Manager products, System Automation for z/OS,
Web Access for Information Management, and others. The information in this section is also relevant to
these products.
IBM Tivoli Monitoring V6.2 products (including V6.2.2) are enabled for use with IBM Tivoli License
Manager. Tivoli Management Services components and Tivoli Monitoring agents provide inventory
signature files and usage definitions that allow License Manager to report installed products and product
usage by computer. License Manager support is an optional capability that requires License Manager
version 2.2 or later.
Tivoli Monitoring products, and other products that share Tivoli Management Services, participate in a
server-client-agent architecture. Monitoring agents for various operating systems, subsystems, databases,
and applications (known collectively as Tivoli Enterprise Monitoring Agents) collect data and send it to a
Tivoli Enterprise Monitoring Server. Data is accessed from the monitoring server by Tivoli Enterprise Portal
clients. A Tivoli Enterprise Portal Server provides presentation and communication services for these
clients. Several optional components such as an historical data warehouse extend the functionality of the
framework. Figure 1 shows the configuration of an IBM Tivoli Monitoring environment.
Before deciding where to deploy the components of the Tivoli Monitoring product in your environment, you
should understand the components of the product, the roles that they play, and what affects the load on
these components.
• Tivoli Enterprise Monitoring Agents, installed on the systems or subsystems you want to monitor. These agents collect data from monitored, or managed, systems and distribute this information either to a monitoring server or to an SNMP Event Collector such as IBM Tivoli Netcool/OMNIbus.
• z/OS only: Tivoli Management Services:Engine (TMS:Engine) provides common functions, such as communications, multithreaded runtime services, diagnosis (dumps), and logging (RKLVLOG), for the Tivoli Enterprise Monitoring Server, monitoring agents, and OMEGAMON components of OMEGAMON XE products running on z/OS.
• An Eclipse Help Server for presenting help for the portal and all monitoring agents for which support has been installed.
An installation optionally includes the following components:
• Tivoli Data Warehouse for storing historical data collected from agents in your environment. The data warehouse is located on an IBM DB2 for the workstation, DB2 on z/OS, Oracle, or Microsoft SQL database. To store data in this database, you must install the Warehouse Proxy agent. To perform aggregation and pruning functions on the data, you must also install the Summarization and Pruning agent.
• Event synchronization component that sends updates to situation events that are forwarded to a Tivoli Enterprise Console event server or a Netcool/OMNIbus Object Server back to the monitoring server.
The following sections describe each of these components in more detail.
The portal server uses a DB2 for the workstation or Microsoft SQL database to store various artifacts
related to presentation at the portal client.
The portal client provides access to the Tivoli Enterprise Portal. There are two kinds of portal client:
• Browser client interface (automatically installed with Tivoli Enterprise Portal Server): The browser client can be run using Microsoft Internet Explorer or Mozilla Firefox; it connects to a Web server running in the Tivoli Enterprise Portal Server. Running the browser client is supported only on Windows, Linux, and AIX computers.
• Desktop client interface: A Java-based graphical user interface on a Windows or Linux workstation. After the desktop client is installed and configured, you can use it to start Tivoli Enterprise Portal in desktop mode. You can also download and run the desktop client using Java Web Start, as discussed in Java Web Start clients on page 229.
See Configuring clients, browsers, and JREs on page 220 for a discussion of the comparative
advantages of each type of portal client.
See Part 5, Setting up data warehousing, on page 329 for information about setting up data
warehousing.
For information about the various configurations of monitoring servers and event servers that you can have
in your environment, see Part 6, Integrating event management systems, on page 461.
• Simplified operating system selection for Linux and UNIX systems on page 22
• New installation option allows you to retain your customized seeding files on page 22
• Automatic installation of application support for Linux/UNIX monitoring servers on page 23
• New silent-response files simplify agent installations and updates on page 23
Help for the Tivoli Enterprise Portal presented by the Eclipse Help Server
The online Help for the Tivoli Enterprise Portal is now presented using the Eclipse Help Server. The Help
Server is installed with the Tivoli Enterprise Portal Server and can be started and stopped only by starting
and stopping the portal server.
Common Event Console view for events from multiple event servers
The Common Event Console enables you to view and manage events from the Tivoli Enterprise
Monitoring Server in the same way as the situation event console. In addition, however, it incorporates
events from the Tivoli Enterprise Console Event Server and the Tivoli Netcool/OMNIbus Object Server if
your managed environment is configured for those servers.
Event data is retrieved from an event system by a common event connector, which also sends
user-initiated actions to be run in that event system. To have the events from a specific event system
displayed in the Common Event Console, you must configure a connector for that event system. During
configuration of the Tivoli Enterprise Portal Server you can edit the default connector for the Tivoli
Enterprise Monitoring Server and you can configure additional connectors for other event systems.
• IBM Tivoli Monitoring V6.2 FP1 UNIX/Linux Base DVD. This DVD contains the IBM Tivoli Monitoring base product components, the operating system agents for UNIX and Linux, as well as application support for the operating system agents and a selected set of IBM Tivoli Monitoring 6.x based distributed monitoring agents.
Support for single sign-on for launch to and from other Tivoli applications
Fix Pack 1 introduces support for a single sign-on capability between Tivoli applications. You can now
launch from the Tivoli Enterprise Portal to other Tivoli web-based and web-enabled applications, and from
those applications into the portal without re-entering your log-on credentials. This single sign-on solution
uses a central LDAP-based user registry to authenticate sign-on credentials. For more information, see
Security options on page 93 and the IBM Tivoli Monitoring: Administrator's Guide.
Note: Upgrading your Tivoli Enterprise Monitoring Server to IPv6 overwrites any customizations you may
have made and written to the monitoring server's ms.ini file.
Support for multiple remote Tivoli Enterprise Monitoring Servers on one Linux or
UNIX computer
You can now install and run multiple remote monitoring servers on a single Linux and UNIX node or LPAR;
see Linux or UNIX: Installing a remote monitoring server on page 160. Note, however, that you are still
restricted to a single hub monitoring server per computer or LPAR.
This enhancement does not apply to Windows nodes.
Figure 3. Help button on the IBM Tivoli Monitoring installer's Select Features panel
In addition, progress bars have been added to many long-running processes (such as software installation
and the installation of language support) so users can see visually how these processes are progressing;
for an example, see Figure 5.
Enterprise Portal clients. Derby has been tested with up to 20 portal clients, and it does require higher
CPU and memory usage than DB2 for the workstation.
The Tivoli Enterprise Portal Server uses the implementation of Derby that comes with eWAS; hence it is
supported on all platforms that support eWAS.
Note: If you transition from one supported database system to another for the Tivoli Enterprise Portal
Server, your existing portal server data is not copied from the first system to the new one.
More memory required to install and run the portal server: As a result of support for both an
embedded database and the eWAS server, the Tivoli Enterprise Portal Server's memory requirements
have increased substantially. The portal server now requires 650 MB if your site does not use the
embedded database, 1 GB if it does.
Note: When you install a multi-instance Agent Builder agent, you don't need to run a command to
configure it. You just need to provide the necessary .cfg file (if needed) and run the start
command: %CANDLE_HOME%\InstallITM\itmcmd.cmd agent start -o instance product_code
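For instance, using the syntax shown above with a hypothetical Agent Builder agent whose product code is k99 and whose instance is named payroll01 (both names are placeholders, not a shipped agent), you would place the instance's .cfg file where your agent expects it and then run:

%CANDLE_HOME%\InstallITM\itmcmd.cmd agent start -o payroll01 k99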
This new screen makes it easier and less error-prone to install the appropriate components for the
operating system your current node is running.
To retrieve a list of other operating systems and releases, type 6.
New installation option allows you to retain your customized seeding files
As part of the conversion of the operating system monitoring agents to dynamically assigned affinities, you
now have the option, when adding their application support to the Tivoli Enterprise Monitoring Server, of
either using the supplied files or retaining and using the ones you have modified.
Also, remote seeding is now supported.
Event integration of IBM Tivoli Monitoring with both IBM Tivoli Business Service
Manager and Netcool/OMNIbus now supported
For users of both Tivoli Business Service Manager and Netcool/OMNIbus, integration of Tivoli Monitoring
events is now supported for both products.
Upgrade procedure provided to Tivoli Event Synchronization V2.2.0.0: For users of Tivoli Event
Synchronization version 2.0.0.0, an upgrade procedure has been added that converts your existing
environment to V2.2.0.0.
enhanced to enable them to directly forward events to either the Tivoli Enterprise Console or
Netcool/OMNIbus. This ensures events trapped by these monitoring agents do not get lost when you elect
to run them autonomously.
Support for DB2 Database for Linux, UNIX, and Windows version 9.7: Both the Tivoli Data
Warehouse and the Tivoli Enterprise Portal Server now support V9.7 of DB2 for the workstation.
Note: IBM Tivoli Monitoring V6.2.x still includes DB2 Workstation Server Edition V9.5 for use with the
portal server and the data warehouse.
Planning checklist
Use the Planning checklist in Table 2 to ensure that all important planning tasks are accomplished.
Perform all of these tasks before starting your Tivoli Monitoring installation. The following sections of this
guide provide the necessary information to complete this checklist.
Table 2. Planning checklist (columns: Planning activity, Comments, Status)
Tivoli Monitoring V6.2.2 supports integration with LDAP user registries for authentication. See Security
options on page 93 for more information.
LDAP SSL requires some actions by an LDAP administrator that are not covered by the Tivoli Monitoring
V6.2.2 documentation. Here are some LDAP SSL Web pages for working with LDAP servers:
• Configuring Microsoft Active Directory for SSL access
• Configuring Sun Java System Directory Server for SSL access
• Configuring the Tivoli Directory Server client for SSL access
• LDAP SSL will also require creating a GSKit keystore; see the IBM Global Security Kit Secure Sockets Layer and iKeyman User's Guide (a minimal keystore-creation sketch follows this list)
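As an illustration only, and assuming the GSKit command-line tool shipped at your site (the tool name and options vary by GSKit level: gsk7cmd, gsk7capicmd, or the iKeyman GUI), a CMS keystore for LDAP SSL might be created with a command of this form, where the file name and password are placeholders:

gsk7cmd -keydb -create -db ldapkeys.kdb -pw <password> -type cms -stash

The LDAP server's signer certificate would then be added to this keystore before the monitoring server or portal server is configured for LDAP SSL.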
The default port for SSL communication is 3660/TCP. (For non-SSL communication the default port is
1918/TCP.)
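For example, if a Tivoli Enterprise Monitoring Server sits behind a packet-filtering firewall and components outside must connect to it, the firewall must permit inbound TCP connections to these ports on the server. A minimal sketch using Linux iptables follows; the server address 192.0.2.10 is a placeholder, and you should adapt the rules to your own firewall product and security policy:

iptables -A FORWARD -p tcp -d 192.0.2.10 --dport 1918 -j ACCEPT   # IP.PIPE (non-SSL)
iptables -A FORWARD -p tcp -d 192.0.2.10 --dport 3660 -j ACCEPT   # IP.SPIPE (SSL)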
A Tivoli Monitoring installation comprises the following mandatory and optional components:
• Tivoli Enterprise Monitoring Server on page 31
• Tivoli Enterprise Portal Server on page 32
• Tivoli Enterprise Portal client on page 33
• Warehouse Proxy agent on page 35
The specific hardware and software prerequisites for each of these components are listed in Hardware
and software requirements on page 96.
Placement of the remote monitoring server depends on a few factors. Plan your firewalls early to ensure
that communications can be established between the Tivoli Monitoring components with only a modest
number of holes in the firewall.
By locating the Warehouse Proxy agent on the same computer as the remote monitoring server, you can
negotiate NAT environments with their historical data collection. For remote locations connected over a
slow network, place the remote monitoring server at the remote location if there are significant numbers of
computers. For remote locations with just a few computers, it doesnt make sense to place a remote
monitoring server at the remote location.
Using browser mode or Web Start clients allows you to perform maintenance updates in a single location. If
the desktop client is installed from installation media, maintenance must be installed on each computer. If
Web Start for Java is used to download and run the desktop client, you gain the performance advantage
of the desktop client along with the convenience of centralized administration from the server. Additional
performance gains can be made by modifying the Java heap settings. (For more details on the heap
settings see Locating and sizing the portal client on page 45.) Unless you want a very secure
environment where there are no downloads, use IBM Web Start for Java for obtaining the desktop client.
Note: To use Java Web Start to download the desktop client from the Tivoli Enterprise Portal Server, IBM
Runtime Environment for Java 2, version 5.0 (also referred to as IBM JRE version 1.5) must be
installed on the system to which you are downloading the client.
Many customers install the desktop client on Citrix for the following reasons. Citrix allows for better GUI
performance for users at remote locations. In addition, some users do not allow the Tivoli Enterprise Portal
client's prerequisite Java version to be installed on desktop computers. By using Citrix, users do not have to install Java on their desktop systems.
for the desired agent. For environments with network events that cause agents to fail over to a secondary
monitoring server, this option may be a poor choice. When an agent fails over, it loses access to the
short-term historical data because the data is located on the primary remote monitoring server and the
agent is connected to the backup remote monitoring server.
By installing the operating system agent on the system first, you can deploy the remaining agents
remotely using the Add agent capabilities.
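For example, assuming you have access to the tacmd CLI on the hub and that the host names and credentials below are placeholders (see the IBM Tivoli Monitoring: Command Reference for the exact options supported by your version), the OS agent can be pushed out first and then used as the base for later remote deployments:
# Log in to the hub monitoring server
tacmd login -s hub.example.com -u sysadmin
# Push the OS agent to the target system; this creates the managed node
# that later "Add agent" (tacmd addSystem) deployments rely on
tacmd createNode -h server1.example.com -u root -w password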
publication tool on page 340. In a new Tivoli Monitoring environment, the warehouse database schema
will be created using the new schema. If the Tivoli Monitoring V6.2.2 environment was updated, the
database will continue to use the old schema and will not benefit from the improvements. If you want to
take advantage of the performance improvements in an upgraded environment, you will need to create a
new warehouse database with a new schema. If desired, you can migrate the data from your old
warehouse database into your new warehouse database.
Netcool/OMNIbus integration
If you are already using Netcool/OMNIbus to monitor events from other sources in your enterprise, you
can also view and manage situation events from a Tivoli Enterprise Monitoring Server in the
Netcool/OMNIbus console. Event integration requires Netcool/OMNIbus V7.x and Netcool/OMNIbus V7.x
Probe for Tivoli EIF.
Situation events are sent to the Netcool/OMNIbus Probe for Tivoli EIF using the Tivoli Event Integration Facility
(EIF) interface. The EIF probe receives the events, maps them to the Netcool/OMNIbus
event format, and then inserts them into the Netcool/OMNIbus ObjectServer. When a Netcool/OMNIbus user
acknowledges, closes, or reopens a situation event, Netcool/OMNIbus sends those changes to the
originating monitoring server through the event synchronization component.
For information on forwarding events to Netcool/OMNIbus and installing the event synchronization
component, see Chapter 22, Setting up event forwarding to Netcool/OMNIbus, on page 491.
By default, the EIF Probe listens on the default port (5529).
Firewall gateway
Using the firewall gateway, you can traverse even the most complex firewalls. With the IP.PIPE or IP.SPIPE
protocols, the Tivoli Monitoring software can traverse most firewall configurations, so in most cases
the firewall gateway is not necessary.
For detailed information on installing and configuring the firewall gateway, go to Appendix C, Firewalls, on
page 551.
Agent Builder agent on such nodes, you may receive the error shown in
Figure 25 on page 189. If this happens, complete the procedure outlined in
Installing the Embedded Java Runtime and the User Interface Extensions on
page 189, and retry the agent installation.
If you have KDC_FAMILIES=IP.PIPE SKIP:2, then the first port to try is (well-known port + 4096 * 2), and
successive ports are tried until a free one is found or there are no more ports to try.
As an example, assume that you have the following scenario:
v A Windows system inside a firewall.
v You expect to run a monitoring server, monitoring agent, and a Warehouse Proxy agent on this system.
v The monitoring server and the Warehouse Proxy agent must be accessible from outside the firewall.
Accessibility from outside the firewall means that you require IP.PIPE ports that must be permitted at the
firewall, thus, these ports must be predictable.
Given this scenario, and to allow for the accessibility, one IP.PIPE port requires reservation and firewall
permission in addition to the monitoring server's well-known port. The monitoring server always receives the
well-known port by default. The Warehouse Proxy agent requires KDC_FAMILIES=IP.PIPE COUNT:1, and the monitoring
agent requires KDC_FAMILIES=IP.PIPE SKIP:2, to make the necessary reservations for the Warehouse
Proxy agent. If the monitoring server's well-known port is 1918, the Warehouse Proxy agent is
assigned port 1918 + (4096 * 1) = 6014, and the monitoring agent attempts to listen on port
1918 + (4096 * 2) = 10110 (see the sketch following the list below).
The Warehouse Proxy agent port is reserved only on 6014 if keyword KDC_FAMILIES is used in
conjunction with the following COUNT option:
v COUNT specifies the number of offsets that the agent can use to retrieve a reserved port number.
v COUNT:N means that all offset ports calculated by the following formula starting from 1 up to N are
allowed and the first free one is used:
Well-known port 1918 + (X * 4096) for X in {1,..,N}
v COUNT:1 means that only 1918+(1*4096)=6014 is used, thus the proxy port is fixed. For example:
KDC_FAMILIES=IP.PIPE COUNT:1 PORT:1918 IP use:n SNA use:n IP.SPIPE use:n
If other Tivoli Monitoring components, such as the Tivoli Universal Agent, portal server, and other
components, are running on the same system, the SKIP option must be used to instruct these components
to skip some reserved ports:
v SKIP:N+1 means that only offset ports starting from multiplier N+1 are allowed:
Well-known port 1918 + (X * 4096) for X in {N+1,N+2,..,max_port_high}
v SKIP:2 allocates 1918+(2*4096)=10110 as the first port and there is no proxy port conflict. For example:
KDC_FAMILIES=IP.PIPE SKIP:2 PORT:1918 IP use:n SNA use:n IP.SPIPE use:n
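To double-check the port arithmetic for your own well-known port and offsets, a small sketch such as the following can be used; 1918 and the offsets shown are simply the values from the examples above:
# Candidate listening ports: well-known port + (offset * 4096)
BASE=1918
for X in 1 2 3
do
   echo "offset $X: port `expr $BASE + $X \* 4096`"
done
# offset 1: port 6014   (COUNT:1 - Warehouse Proxy agent)
# offset 2: port 10110  (SKIP:2  - first port the monitoring agent tries)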
processor, memory and disk requirements. The guidelines below are based on typical deployments, and
supplement the hardware requirements described in Hardware and software requirements on page 96.
One of the most important aspects of using remote deployment is ensuring that the files that are
distributed are at the correct version. In large environments, there might be several remote monitoring
servers that require maintenance to ensure that all of the depots are updated. To avoid the necessity of
installing updates and maintenance on multiple computers, consider creating one depot and making it
available using NFS or Windows shares.
If you decide to use a shared depot, make sure that the remote monitoring server is upgraded before you
upgrade the agents that are connected to that remote monitoring server. When you deploy monitoring
agents, make sure that you configure them to connect to the desired remote monitoring server.
Notes:
1. In addition to remotely deploying monitoring agents, as of V6.2.2, IBM Tivoli Monitoring lets you
remotely deploy non-agent bundles (with which your site can employ Tivoli Monitoring components that
need not connect to a Tivoli Enterprise Monitoring Server).
2. Remote deployment is not available for z/OS-based OMEGAMON agents, nor is it supported on nodes
running the Tivoli Enterprise Monitoring Server, Tivoli Enterprise Portal Server, or the Tivoli Enterprise
Portal desktop or browser client.
Figure 9 shows a sample spreadsheet summary created using the tool downloaded from the OPAL site.
The amount of network traffic closely matches the amount of data collected in the Warehouse. The key
planning numbers, highlighted in red, are based on sample data. Each Tivoli Monitoring environment is
different, so fill out the spreadsheet based on your warehousing needs.
For some high-volume metrics, consider collecting only short-term historical data. For example, if you want
to collect process data, there is one row of data per monitored process for every collection interval, which
generates a significant amount of data. By retaining only 24 hours of short-term historical data, you do not
overload your Warehouse server or network, but you can still perform trending analysis.
If historical data collection is started but a warehousing interval is not set, care must be taken to ensure
that the local historical files do not grow indefinitely. This is only for distributed systems. For z/OS, Tivoli
Management Services provides automatic maintenance for the data sets in the persistent data store (the
dataset in which the short-term historical data is stored).
The key information in the spreadsheet includes the following data:
v Total Tivoli Data Warehouse Inserts per hour
In this example, there are 185,600 inserts per hour. Most reasonable database servers can handle this
rate without any problem.
v Total megabytes of new data inserted into Tivoli Data Warehouse per hour
In the example, 137 MB of data per hour are inserted. This amount roughly translates to 137 MB of
network traffic per hour coming into the Warehouse Proxy agent from the connected agents, and
roughly 137 MB per hour of network traffic from the Warehouse Proxy agent to the warehouse database
server.
v Total gigabytes of data in Tivoli Data Warehouse (based on retention settings)
In this example, you expect the Tivoli Data Warehouse database to grow to 143 GB of data. Additional
space must be allocated for the database logs. The database does not grow beyond 143 GB because
you are summarizing the raw (detailed) metrics and pruning both the detailed and summarized data.
The planning spreadsheet helps you determine your warehouse size based on your retention needs.
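To see how the example figures above fit together, a back-of-the-envelope calculation such as the following can be used. The average row size is an assumed value for illustration only; use the figures from your own spreadsheet:
# Rough hourly warehouse volume check (all values are examples)
INSERTS_PER_HOUR=185600
AVG_BYTES_PER_ROW=775
MB_PER_HOUR=`expr $INSERTS_PER_HOUR \* $AVG_BYTES_PER_ROW / 1048576`
echo "Approximately $MB_PER_HOUR MB per hour into the Warehouse Proxy agent"
echo "Approximately `expr $MB_PER_HOUR \* 24` MB of new data per day"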
This warehouse database must be carefully planned and tuned. Where possible separate the largest
tables into separate table spaces or data files. By filling in the Warehouse load projection spreadsheet,
most users will find that 3 to 5 tables make up the majority of their warehouse data. These tables should
be isolated from each other as much as possible so that they have optimal performance.
Multiple Warehouse Proxy agents are required when the number of agents collecting historical data
increases above approximately 1500 (to ensure that limits for the number of concurrent connections are
not reached). If you use multiple Warehouse Proxy agents, consider running them on the same servers
running the remote monitoring servers, and configuring them to support agents connected to this
monitoring server. This approach consolidates the Tivoli Monitoring infrastructure components, and limits
the number of agents that connect to each Warehouse Proxy agent.
For server sizing, see Locating and sizing the Summarization and Pruning agent.
Number of CPUs   Memory
1 CPU            2 GB
2 CPUs           2 GB
2 CPUs           4 GB
4 CPUs           4 GB
4 CPUs           8 GB
4 CPUs           8 GB
If your Summarization and Pruning agent is installed on the warehouse database server, configure the
agent so that it leaves some processing power for others to perform Warehouse queries and for the
Warehouse Proxy agent to insert data. Set the number of worker threads to 2 to 4 times the number of
CPUs on the system where the Summarization and Pruning agent is running. Start with 2 times the
number of CPUs and then increase the number to see if it continues to improve your performance.
To set the number of worker threads, edit the Summarization and Pruning agent configuration file (KSYENV
on Windows, sy.ini on UNIX or Linux) and set the KSY_MAX_WORKER_THREADS to the number of
desired threads. This parameter can also be set using the configuration dialog panels.
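For example, on a 4-processor system you might start with 8 worker threads (2 times the number of CPUs). The value below is illustrative; add or update the line in KSYENV (Windows) or sy.ini (UNIX and Linux), or set it through the configuration dialog:
KSY_MAX_WORKER_THREADS=8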
To assess the performance and throughput of the Summarization and Pruning agent for your environment,
you can use the following approach:
1. Start by enabling historical data collection for a small set of attribute groups which do not generate a
large number of rows per data collection interval
2. Examine the Summarization and Pruning agent Java log to see how many detailed records were read
and pruned in a processing cycle, and the elapsed time for the processing cycle. Dividing the number
of records read and pruned by the elapsed time will give you a rough measurement of the
Summarization and Pruning agent throughput (records read and pruned per second).
3. As long as the elapsed time for the Summarization and Pruning agent processing cycle is acceptable,
consider enabling historical data collection for additional attribute groups and repeat step 2.
The processing throughput as determined in step 2 is a rough and easily calculated measurement of the
Summarization and Pruning agent performance. Database tuning or additional worker threads can improve
the throughput. See the Tivoli Data Warehouse on page 307 for more tuning information.
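As a sketch of the throughput measurement in step 2, suppose the Summarization and Pruning agent Java log showed 1,200,000 detailed records read and pruned in a 30-minute processing cycle; both numbers are invented for illustration:
# Records read and pruned per second for one processing cycle
RECORDS=1200000
ELAPSED_SECONDS=1800
echo "Throughput: `expr $RECORDS / $ELAPSED_SECONDS` records per second"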
Advantages
Disadvantages
Browser client
v Requires maintenance to be
installed individually on each client
machine.
v Mismatch of versions between
portal server and the client is not
permitted.
v Try using the browser client first to see if it meets your response time needs. If you are using the
browser client, it is imperative that you increase the Java heap size parameters for the Java Plug-in.
v Using the desktop client with Java Web Start may reduce response time for new workspace requests by
about one second. If this additional response time reduction is important to you, then consider using
Java Web Start with the desktop client. Refer to Using Web Start to download and run the desktop
client on page 233 for instructions on how to set this up.
The memory required by the portal client depends on the size of the monitoring environment and on the
Java heap size parameters. The default maximum Java heap size is 256 MB for the desktop client. Using
this default heap size and a medium to large monitoring environment, the portal client can be expected to
use approximately 350 MB of memory.
For the browser client, it is imperative that the default Java plug-in parameters be increased. The preferred
starting values are -Xms128m -Xmx256m for small to medium environments and -Xms256m -Xmx512m
for larger environments. When modifying the maximum Java heap size, make sure there is adequate
physical memory for the entire Java heap to be contained in memory. For more information see Tuning
the portal client JVM on page 304.
You can also use the portal desktop client and leverage Java Web Start to minimize your maintenance
costs. The portal desktop client requires less memory and offers better performance characteristics.
Select the product and product components that you are interested in, and then click Display Selection.
You see a listing of the operating systems and database versions, and the product that you selected.
Information displayed in the cells might look like C(32,64T) or S(32,64T). The C and S refer to
client or server support, respectively. The 32 and 64T mean that 32-bit kernel and 64-bit kernel systems
were tested in toleration mode. For 64-bit toleration, the application is a 32-bit application, but can also run
on a 64-bit system.
If you do not allow macros to run within Excel, you can still use the Support Matrix spreadsheet, but you
might not see the menus as shown above. You can select either the OS Support or the Database
Support tab at the bottom of the window, and then search through the spreadsheet for the Tivoli
Monitoring products and components.
The platform support matrix refers to the most recently supported versions of the product. In most cases,
you need to apply the latest fix pack to your Tivoli Monitoring environment to obtain support for some of
the newer operating system and database versions. If you have any questions, the Readme file for each
fix pack clearly documents the supported platforms.
availability and disaster recovery. Ensuring high availability involves achieving redundancy for every
Monitoring component. Disaster recovery means being able to recover from a major outage such as a data
center going offline or losing its WAN link.
OS Cluster:
Many users set up an OS Cluster for their portal server. Depending on the clustering software used, the
cluster can be set up across a WAN to achieve disaster recovery. For detailed information on setting up
the monitoring server and portal server in an OS Cluster, see the IBM Tivoli Monitoring: High-Availability
Guide for Distributed Systems.
Cold backup:
Some smaller users do not want to dedicate CPU cycles and memory to a live backup portal server. If that
is the case in your environment, install a second portal server on another computer that serves as a
production server. The backup portal server is typically shut down so that it does not use any CPU or
memory. If the primary portal server goes down, the cold backup can be brought online. The key for a cold
backup portal server is to periodically export the portal server database content and import it into the cold
backup. In addition, ensure that the cold backup portal server is patched with the same software levels as
the primary portal server.
Consider using a tool like the Tivoli System Automation for Multiplatforms to automate the process of
backing up the resources.
As discussed previously, some users choose to implement a master read-write portal server and one or
more read-only portal servers. When you implement multiple read-only portal servers, you can place a
load balancer or edge server in front of the portal server and have users connect to the edge server. By
doing this, you minimize the complexity and maximize the availability for the end user.
The strategy for backup and restore is to have one master Tivoli Enterprise Portal Server database where
all customization is done. Then, periodically export the content from the master portal server and import
the content into any other portal server. The import replaces the Tivoli Monitoring content in the portal
server database, so be aware that any customization made in the secondary portal server environments
will be overwritten during the import. The export and import of the portal server database can be done in
two ways:
v Using RDBMS backup utilities such as DB2's db2 backup and db2 restore commands
v Using the migrate-export and migrate-import commands provided by the Tivoli Monitoring product
If the various portal server databases are not running on the same OS version, then the RDBMS backup
and restore utilities will probably not work. In those cases, use the Tivoli Monitoring migrate-export and
migrate-import commands as described in the product documentation.
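If the portal server databases run on the same operating system and DB2 level, the RDBMS route might look like the following sketch. TEPS is the default portal server database name, and the backup directory is a placeholder; adjust both for your environment:
# On the master portal server (offline backup; stop the portal server first)
db2 backup db TEPS to /backups/teps
# On the secondary portal server, after copying the backup image across
db2 restore db TEPS from /backups/teps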
The first and preferred strategy involves having a spare remote monitoring server. By default, the
spare remote monitoring server has no agents connected. When the monitoring agents that report to
the primary monitoring server are configured, they are configured to use the spare remote
monitoring server for their secondary monitoring server. Over time, network and server anomalies
cause the agents to migrate.
To manage this environment, write a situation to monitor how many agents are connected to the spare
remote monitoring server. You can then use the situation to trigger a Take Action command that
forces the agents back to their primary remote monitoring server by restarting them. Restarting the
agents causes them to reconnect to their primary monitoring server. Ideally, migrate the agents back to
their primary remote monitoring server when the number of agents connected to the spare monitoring
server is greater than 20.
The disadvantage to using a spare remote monitoring server is that you must dedicate a spare
server to be the spare remote monitoring server. Some users choose to co-locate this server with
the Warehouse Proxy agent or run in a virtualized environment to minimize the extra hardware
required.
The second strategy is to evenly distribute the agents so that they failover to different remote
monitoring servers to ensure that no remote monitoring server becomes overloaded. In the example
below, there are four remote monitoring servers. In this example, configure one-third of the agents
on each remote monitoring server to failover to a different remote monitoring server. Review the
following scenario:
RTEMS_1 has 1125 agents, RTEMS_2 has 1125 agents, RTEMS_3 and RTEMS_4 have 1125
agents.
A third of RTEMS_1's agents fail over to RTEMS_2, a third fail over to RTEMS_3, and a third fail over
to RTEMS_4.
This strategy ensures that none of the remote monitoring servers become overloaded. The problem
with this strategy is that it requires a lot of planning and tracking to ensure that all of the remote
monitoring servers are well-balanced.
v If you want your agent to failover to a remote monitoring server in another data center, ensure that you
have good network throughput and low latency between the data centers.
Note: Connect a very small number of agents to the hub monitoring server. Typically, only the Warehouse
Proxy agent, Summarization and Pruning agent, and any OS agents that are monitoring the
monitoring server are connected to the hub monitoring server.
Use the Tivoli Monitoring V6.2.2 heartbeat capabilities to ensure that agents are running and accessible.
The default heartbeat interval is 10 minutes. If an agent does not contact the monitoring server, a status of
MS_Offline is seen at the monitoring server. An event can be generated when an agent goes offline. An
administrator can evaluate whether the agent is having problems or whether there is another root cause.
In addition, there is a solution posted on the OPAL Web site that leverages the MS_Offline status and
attempts to ping the server to determine if the server is down or whether the agent is offline. You can find
more information by searching for "Perl Ping Monitoring Solution" or navigation code "1TW10TM0F" in the
Tivoli Open Process Automation Library (OPAL).
agents are configured to receive historical data from the same agent. To avoid problems, ensure that only
one Warehouse Proxy agent is responsible for collecting the historical data from a remote monitoring
server.
To ensure that your Warehouse server performs optimally, ensure that the WAREHOUSELOG and
WAREHOUSEAGGREGLOG tables are pruned on a regular basis. Pruning for these tables can be
configured by specifying retention intervals in the configuration dialog for the Summarization and Pruning
agent or in the configuration file (KSYENV on Windows, sy.ini on UNIX or Linux). Refer to Historical data
collection on page 307 for more details.
Agent deployments
When planning your installation, you need to determine how you want to deploy your agents. For very
small environments, some users manually install agents on each server. For larger environments,
automation tools must be used to deploy the agents and agent patches. A key decision point is to
determine which deployment software to use. The Tivoli Monitoring V6.2.2 product has a remote
deployment capability that allows you to initially deploy your operating system agents remotely, and then
remotely add agents to your systems.
Each product and fix pack includes a remote deployment bundle that can be placed in the remote
deployment depot for future agent distributions and patching. However, the remote deployment capability is
not as efficient at distributing software as some purchasable distribution products. If you already have an
enterprise-class software distribution product like Tivoli Configuration Manager or Tivoli Provisioning
Manager, you might find it more efficient to distribute the agents and patches. Tivoli Monitoring V6.2.2
agents provide software package blocks that can be used by Tivoli Configuration Manager and Tivoli
Provisioning Manager to distribute the agents.
The main advantages of using products such as Tivoli Configuration Manager and Tivoli Provisioning
Manager are:
v Faster distribution times to speed large-scale deployments.
v Tivoli Configuration Manager and Tivoli Provisioning Manager can be tuned to utilize only a portion of
the network bandwidth.
v Tivoli Configuration Manager and Tivoli Provisioning Manager can easily be configured for retries and
tracking of success and failure.
Here is the location of the Software Packages for the IBM Tivoli Monitoring V6.1 monitoring agents:
ftp://www.redbooks.ibm.com/redbooks/SG247143/.
The advantage of using remote deployment is that no additional work is required to create and deploy agent
patches.
If you have an enterprise-class software distribution product, use it for the initial software distribution and
for the deployment of larger fix packs. For interim fixes and small fix packs, the remote deployment
capability might require less time to configure and utilize.
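When you do use the built-in remote deployment capability, the depot is populated with the tacmd addBundles command. The image path below is a placeholder for the directory on the installation media or fix pack image that contains the deployment bundles; see the IBM Tivoli Monitoring: Command Reference for the full syntax:
# Add agent bundles from an installation image to the local deployment depot
tacmd addBundles -i /mnt/itm_agents_media/unix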
One of the most important aspects of agent deployment is the agent prerequisite preparation. Ensure that
the servers are prepared with the appropriate filesystems including adequate space. In addition to disk
space, you must determine the user account to be used for the agent installation. By using an
administrative account (administrator or root), you ease your agent deployment tasks.
If administrator or root access is not allowed, then using sudo on a UNIX system is the next best choice.
Without administrative authority, the installation becomes a multi-step process where the systems
administrators need to be brought in to run commands such as setperm to set up the permissions. For
planning purposes, allow roughly 500 MB of disk space for the agent and its historical logs.
Notes:
1. If you respond NO to the question "Will this agent connect to a TEMS?" when running the UNIX installer,
these parameters are set correctly for you.
2. In all cases, including fully autonomous mode, at least one active protocol must be defined by using
the KDC_FAMILIES environment variable. If no protocols are defined, the agent will not start.
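For example, a minimal setting that leaves IP.PIPE as the only active protocol, using the same syntax as the earlier examples and the default well-known port, might look like this:
KDC_FAMILIES=IP.PIPE PORT:1918 IP use:n SNA use:n IP.SPIPE use:n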
As of version 6.2.1, IBM Tivoli Monitoring also provides agentless monitors. An agentless monitor is a
standard Tivoli Monitoring agent that can monitor the operating system running on multiple remote nodes
that do not have the full-function OS agents running on them. An agentless monitor obtains data from
nodes it is monitoring via a remote application programming interface, or APIin this case, SNMP, CIM, or
WMIrunning on the node being monitored. Since these interfaces provide information about either
operating system functions or base application functions, no IBM Tivoli Monitoring component need be
installed or deployed on the monitored node.
API
Function
SNMP The Simple Network Management Protocol is a TCP/IP transport protocol for exchanging network
management data and controlling the monitoring and operation of network nodes in a TCP/IP
environment.
CIM
The Common Information Model is an XML-based standard for defining device and application
characteristics so system administrators and management programs can monitor and control them
using the same set of tools, regardless of their differing architectures. CIM provides a more
comprehensive toolkit for such management functions than the Simple Network Management
Protocol.
WMI
Microsoft's Windows Management Instrumentation API provides a toolkit for managing devices and
applications in a network of Windows-based computers. WMI provides data about the status of
local or remote computer systems as well as the tools for controlling them. WMI is included with
the Windows XP and Windows Server 2003 and 2008 operating systems.
These APIs are supported by the Agent Builder, which enables you to build custom agentless monitoring
solutions that are separate from the agentless monitors available on the Tivoli Monitoring installation media
and that provide additional function.
Since an agentless monitor is a standard Tivoli Monitoring agent, it collects data and distributes it to a
Tivoli Enterprise Monitoring Server and then on to a Tivoli Enterprise Portal Server. It also takes advantage
of the various features of the IBM Tivoli Monitoring product, such as Tivoli Enterprise Portal workspace
views, situations, remote deployment of the agentless monitors, policies, and so on. Detailed information
can be found in the user's guide for each agentless monitor; see Table 6 on page 59.
Agentless monitoring does not provide the kind of deep-dive information your site may need for its core
business servers; however, it does allow a small set of centralized servers to supervise the health of the
operating nodes in your environment. There are five types of agentless monitors, one each for the Windows, AIX, Linux,
HP-UX, and Solaris environments.
The agentless monitors are multi-instance agents. After installing or deploying an agentless monitor on a
machine, additional instances can be created via configuration. Each instance can communicate with up to
100 remote nodes.
Each type of agentless monitor can run on additional platforms beyond the type of platform it monitors. For
example, the agentless monitor for Windows (which monitors only Windows operating systems) can run on
any of the supported platforms: Windows, AIX, Solaris, HP-UX, Linux.
Specific operating system releases that a particular agentless monitor can monitor are detailed in Table 5
on page 56. Check the user's guide for each agentless monitor regarding platform-specific requirements
for the operating systems that agentless monitors can run with.
A computer that has one or more agentless monitors running on it is referred to as an agentless
monitoring server. Each server node can support up to 10 active agentless monitor instances, in any
combination of agentless monitor types; for example, 2 AIX, 2 HP-UX, 2 Linux, 2 Solaris, 2 Windows; or 4
Windows, 3 AIX, 3 Linux; or 5 Windows, 5 Solaris; or 10 HP-UX. Each instance can communicate with up
to 100 remote nodes, which means a single agentless monitoring server can support as many as 1000
monitored systems (10 instances * 100 remote nodes per instance). By adding more server nodes, the
number of monitored nodes increases into the thousands.
Figure 11 illustrates the architecture of an IBM Tivoli Monitoring environment that employs agentless
monitoring.
Agentless technology provides lightweight OS monitoring that targets key metrics along with basic
situations meant to satisfy simple monitoring needs. Agentless monitoring provides speedy implementation
and minimum agent deployment, including the deployment of updates; however, the need to poll the
monitored node to retrieve its monitoring data increases network traffic, and real-time data availability is
impacted both by the network delay and the reliance on polling. In addition, the implementation of Take
Action commands for command and control is more powerful with the full-function agents than for
agentless technology.
Key operating system metrics returned:
v Logical and physical disk utilization
v Network utilization
v Virtual and physical memory
v System-level information
v Aggregate processor utilization
v Process availability
Default situations are provided for:
v Disk utilization
v Memory utilization
v CPU utilization
v Network utilization
You can use these situations as is or as models for custom situations that meet your site's specific needs.
The agentless monitors monitor the distributed operating systems listed in Table 5 on page 56. You can
configure different data collectors for these environments, as shown.
Table 5. Data collectors usable with the various agentless monitors and releases supported
Agentless monitor (product code): data collectors supported
v Agentless Monitoring for Windows OS (R2): WMI (1), Performance Monitor (PerfMon) (1), Windows event log (1), SNMP V1, V2c, V3
v Agentless Monitoring for AIX OS (R3)
v Agentless Monitoring for Linux OS (R4)
v Agentless Monitoring for HP-UX OS (R5)
v Agentless Monitoring for Solaris OS (R6): CIM-XML; SNMP V1, V2c, V3
Notes:
1. To use one of the native Windows data collectors (WMI, PerfMon, the event log), the agentless monitoring server
must run under Windows.
IBM recommends that you deploy a full-feature operating system agent to each agentless monitoring
server to watch the CPU, memory, and network consumption of the agentless monitors themselves.
You also have the full range of remote-deployment options, as explained in Chapter 3, Deployment
phase, on page 69, at your disposal when planning how best to deploy agentless monitors across your
environment. These include:
v The Tivoli Enterprise Portal's deployment features.
v The tacmd CLI commands.
As required for the deployment of any IBM Tivoli Monitoring agent, remote deployment of an agentless
monitor to an agentless monitoring server requires that an OS agent be running on that machine. For
example, if the agentless monitor runs on an AIX operating system, the IBM Tivoli Monitoring AIX agent
must first be running on it to remotely deploy that agentless monitor. In addition, the OS agent is required
to configure a server's agentless monitors via the Tivoli Enterprise Portal.
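For example, after logging in to the hub with tacmd login, an agentless monitor instance might be deployed to a Windows node that is already running the OS agent with a command like the one below. The managed system name is a placeholder, and R2 is the product code for the agentless monitor for Windows shown in Table 5; check the command reference and the agentless monitor user's guide for the configuration properties your instance requires:
# Deploy an agentless monitor for Windows to the node running the Windows OS agent
# (add -p options for the instance and connection properties your environment needs)
tacmd addSystem -t R2 -n Primary:AGTLESS01:NT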
The agentless monitors are included on the same Agents DVD as the traditional OS agents and the
Universal Agent.
Agentless monitor user's guide                                                             Document number
IBM Tivoli Monitoring: Agentless Monitoring for Windows Operating Systems User's Guide    SC23-9765
IBM Tivoli Monitoring: Agentless Monitoring for AIX Operating Systems User's Guide        SC23-9761
IBM Tivoli Monitoring: Agentless Monitoring for Linux Operating Systems User's Guide      SC23-9762
IBM Tivoli Monitoring: Agentless Monitoring for HP-UX Operating Systems User's Guide      SC23-9763
IBM Tivoli Monitoring: Agentless Monitoring for Solaris Operating Systems User's Guide    SC23-9764
A solution used by many users to handle the complexities of versioning is to develop your agent under a
different agent name during the development phase. After the agent is working and optimized, rename it in
your test environment before promoting it to production.
To avoid issues with Tivoli Universal Agent versioning entirely, consider developing your custom monitoring
solution with the Agent Builder.
v The preferred approach is to connect your Tivoli Universal Agent to the hub prior to performing the
kumpcon import and um_console import command. This approach causes the required files to be
created on the hub monitoring servers. For monitoring to work as desired, move the Tivoli Universal
Agent to the remote monitoring server.
v You can also leave the Universal Agent connected to a remote monitoring server and copy the required
files from the monitoring server to the hub monitoring server. Because this approach requires a recycle
of the hub monitoring server, it is the less desirable approach.
Mainframe users
Mainframe environments have some unique considerations. There are features that are available only
when running a z/OS hub monitoring server and features that are available only when running a distributed
hub monitoring server. This section outlines those considerations so that you can make the best choice for
your environment.
Unique z/OS hub monitoring server features
The z/OS hub monitoring server allows you to take advantage of RACF authentication. However,
the OMEGAMON for MQ Configuration product has some specific integration with RACF that
requires a z/OS hub in order to take advantage of RACF authentication within the OMEGAMON
for MQ Configuration product.
The z/OS hub does not provide the Hot Standby feature. High availability is achieved using a
movable hub solution as described in the IBM Tivoli Management Services on z/OS: Configuring
the Tivoli Enterprise Monitoring Server on z/OS.
Note: The z/OS environment does not support the Tivoli Universal Agent.
Unique distributed hub monitoring server features
Two features that are provided on a distributed hub monitoring server environment are not
available on z/OS hub monitoring server environments.
v Remote deployment
v Hot Standby
The distributed hub monitoring server has a feature called Hot Standby to assist with high
availability and disaster recovery scenarios. Many users choose not to use the Hot Standby
feature and instead deploy OS Clusters for high availability and disaster recovery.
Linux on z/VM systems
Many mainframe users run Linux on their z/VM systems. Many different Tivoli Monitoring
components can be installed in Linux on z/VM environments, including the monitoring server,
portal server, monitoring agent and warehouse-related components. Each Linux environment can
be configured with monitoring server or portal server software, or both.
Multi-hub environments
Large users who go beyond the limits of a single hub monitoring server environment must consider
additional factors. Tivoli Monitoring V6.2.2 has been tested with environments with as many as 10000
agents. Some users need multiple hub monitoring servers to handle the tens of thousands of agents in
their environment. Following are some considerations to take into account when deploying multiple hub
monitoring servers.
Sharing a warehouse database:
You can share a single warehouse database with multiple hub monitoring servers, but there are additional
considerations when choosing this deployment option. First, you must take into account scalability of the
Warehouse Proxy and Summarization and Pruning agents. Use the Warehouse load projection
spreadsheet, which can be found in the Tivoli Open Process Automation Library (OPAL) by searching for
"warehouse load projections" or the navigation code "1TW10TM1Y."
With multiple hubs and more than 10000 agents, you increase the likelihood of exceeding the capacity of
the Warehouse. Be aware of how much data you are collecting to ensure that you do not exceed the
capacity of the warehouse database. To get an idea of the capacity of the Summarization Pruning agent
with your warehouse database, consider using the measurement approach discussed in Locating and
sizing the Summarization and Pruning agent on page 44.
In addition to scalability, there are specific deployment requirements when a warehouse database is
shared between hub monitoring servers. First, you can run only one Summarization and Pruning agent in
only one of the two monitoring server environments. The single Summarization and Pruning agent is
responsible for summarizing and pruning the data for all of the data in the Warehouse. The summarization
and pruning configuration settings are maintained by the portal server that is specified in the
Summarization and Pruning agent configuration dialog.
Due to the complexity and potential scalability issues of sharing a warehouse database across multiple
hub monitoring servers, you might want to maintain multiple warehouse databases. To build reports across
the databases, use Federation capabilities or create a data mart that merges that content from multiple
warehouse databases.
You cannot set up different summarization and pruning schedules for each of the hub monitoring server
environments. In addition, you must also ensure that the hub with the Summarization and Pruning agent is
patched and maintained so that it is a superset of the two monitoring servers. If you install the database
agents in one hub, then you must install the application support for the database agents on the hub
monitoring server and portal server in the hub environment with the Summarization and Pruning agent. If
you install a fix pack on one hub, then you must ensure that it is also installed on the hub with the
Summarization and Pruning agent, which ensures that the Summarization and Pruning agent is aware of
all attribute groups and attributes that can be collected.
Sharing customization:
When using multiple hubs, most customization can be shared between the two hub environments.
Customization includes situations, policies, workspaces, managed systems lists, and Tivoli Universal Agent
solutions. In the Tivoli Monitoring V6.2.2 release a number of CLIs were added to the product to do bulk
imports and exports of situations, policies, and workspaces. For details on the new CLIs, see the IBM
Tivoli Monitoring: Command Reference. Most of the customization can be cleanly exported from one
monitoring server environment to another monitoring server environment using tools that are identified in
Maintaining an efficient monitoring environment on page 77.
When using the Tivoli Enterprise Console, you need a baroc file for each type of agent. For the packaged
agents, a baroc file is automatically installed on your monitoring server. The baroc files are placed in the
CANDLE_HOME\cms\TECLIB directory. For Tivoli Universal Agent solutions, it is necessary to create a baroc
file for each Tivoli Universal Agent solution. A tool available on OPAL generates the baroc file for the Tivoli
Universal Agent solutions. To find this tool search for "BAROC file generator" or navigation code
"1TW10TM43" at the Tivoli Open Process Automation Library (OPAL).
Table 7. Update history for the baroc files for IBM Tivoli Monitoring agents and components (for each IBM Tivoli
Monitoring agent or component, the version in which its baroc file was last updated)
Note: File omegamon.baroc contains the base event class definitions for all Tivoli Monitoring events; it is
automatically installed on Tivoli Enterprise Console when event synchronization is installed.
For more information search for "SMP/E installation on z/OS" or navigation code "1TW10TM3M" at the
Tivoli Open Process Automation Library (OPAL).
Creating workspaces
Again, this is difficult to estimate. Typically, you do not know exactly what you want to monitor. Be careful not
to create something that will be hard to maintain in the future. Workspaces vary in complexity a great deal,
so a graphic view with icons does not take long, but a workspace with links using variables takes quite a
while to set up and test. On balance, aim for ten workspaces per day along with associated custom
navigator items.
as scan for string within a string to catch the data. When the metafiles are imported, make sure the
Tivoli Universal Agent is connected to the hub monitoring server first; later it can be moved to any
desired remote monitoring server.
Transferring skills
Much of the skills transfer can occur during the installation and configuration process if you are working
closely with key staff. You can still factor in an additional three days to cover the same subjects as in the
IBM Tivoli Monitoring Administration course. Also add one to two days for the Tivoli Universal Agent
depending on the complexity of the monitoring requirements. Other skills transfer can be estimated at one
day per agent type. For some of the z/OS-based agents this can take a bit longer because of the two
different interfaces, the 3270 mainframe interface and the portal client interface, so allow two days for the CICS and DB2 agents.
These time estimates correspond roughly to the number of days it takes to deliver formal training.
Staffing
The table below lists the staffing and time estimates for various Tivoli Monitoring tasks.
Table 8. Staffing estimates (for each Tivoli Monitoring task, the hours required, the number of people required,
and the skill level required)
Use the following Project Plan as a template to ensure that all tasks are planned during your installation.
This Project Plan is located on the SAPM Technical Exchange wiki at http://www.ibm.com/developerworks/
wikis/pages/viewpageattachments.action?pageId=9595.
Pre-installation checklist
Use the following checklist for your pre-installation:
v Verify that your hardware meets the requirements for Tivoli Monitoring V6.2.2 core components.
v Verify that the correct media is downloaded (not the upgrade install media).
v Verify that you have the correct media for your hardware architecture. Some operating systems support
both 32-bit and 64-bit kernels. To check which kernel version your system is running, use the following
commands:
Table 9. Commands for determining your system's kernel version
System     Command
AIX
HP         getconf KERNEL_BITS
Linux
Solaris    isainfo -b
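If you script this check across many servers, the commands from Table 9 can be wrapped in a small dispatch on the operating system name. Only the HP-UX and Solaris commands come from the table; uname -m is shown as a generic fallback and is an assumption, not a documented equivalent:
# Report kernel bitness (or machine type as a fallback) before choosing install media
case `uname -s` in
   HP-UX) getconf KERNEL_BITS ;;
   SunOS) isainfo -b ;;
   *)     uname -m ;;
esac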
successfully monitor up to 1500 agents, so using two remote monitoring servers during your initial
deployment is not a problem. Firewall considerations might necessitate additional remote monitoring
servers.
If you plan to use clustering to achieve a high availability configuration, configure the cluster before
connecting any agents. Otherwise, it is necessary to reconfigure the agents to connect to the cluster rather
than connecting to one of the nodes in the cluster. For information on setting up the hub monitoring server
and portal server in an OS cluster, see the IBM Tivoli Monitoring: High-Availability Guide for Distributed Systems.
Because the installation of application agent support on the monitoring server and portal server requires
them to be recycled, install the application support on the monitoring server, portal server, and portal client
for all agents that you expect to deploy in the next six to nine months. This likely includes the IBM Tivoli
Monitoring for Databases, IBM Tivoli Monitoring for Messaging and Collaboration, ITCAM for SOA and any
other products you plan to implement.
During the initial installation phase, install the Warehouse Proxy agent and Summarization and Pruning
agent, but do not begin using them until after you have successfully deployed your first 50 agents.
Installing in this way gives you an opportunity to assess the health of your environment before adding to
the complexity of your environment.
Configuration checklist
Use the following checklist for your configuration:
v Install all Tivoli Monitoring V6.2.2 core components (portal server, monitoring server, Warehouse Proxy
agent, and Summarization and Pruning agent) so there can be a section for each component.
v Verify the correct protocols are selected. If SPIPE is chosen, make sure the encryption key string used
is the same across the Tivoli Monitoring V6.2.2 enterprise environment.
v Verify the correct configurations are performed regarding data warehousing.
After installing the hub and remote monitoring server, ensure that you do not attempt to start a second
kdsmain instance, which can corrupt your environment. Modify the monitoring server startup CandleServer
script so that it looks as follows:
#
# Local change to check for another running kdsmain
#
if [ "$action" = "start" ]
then
   if ps -ef | grep -v grep | grep kdsmain
   then
      echo "There is a KDSMAIN running already"
      exit
   fi
fi
Some users run multiple hub monitoring servers on a single computer for their development and test
environments, using different ports. In that case, the previous script does not work because kdsmain is
already running for the other hub; use extreme care to ensure that you do not accidentally start a second
kdsmain for a given hub.
Do not enable warehousing at this time. Wait until all the agents for phase one have been installed and
started, and the situations have been distributed.
Disable all the default situations by unassigning the managed system group and any agents present in the
Assigned check box.
Create the managed system groups before creating the situations.
Distribute all newly created situations with naming conventions to the customized managed system group
and not to *NT_SYSTEM, *ALL_UNIX. You must customize your situation thresholds before forwarding your
events to Tivoli Enterprise Console or OMNIbus, which ensures that you do not cause any event storms.
You can enable Tivoli Enterprise Console forwarding at this time. Install your first 50 agents.
Using the installation method of your choice, install several OS monitoring agents. These can be installed
locally on the server or through the remote deployment mechanism. Detailed steps on remote deployment
are included in the following sections.
Note: Because the remote deployment of application agents depends on having an OS agent running on
the server, always deploy the OS agents first.
v OS type
v Agent name or type, or both
v Business unit
v Physical location
v Severity
When choosing a name, keep in mind that the situations are sorted alphabetically in the Situation Editor. A
typical situation might look like:
v East_UNIX_High_CPU_Crit
For more information on disabling the default situations and performing bulk work on situations see the
IBM Tivoli Monitoring: Command Reference.
Choose similar naming conventions for any custom queries, managed system groups, and workspaces.
Choose the same criteria such as physical location, business unit, agent type, and so on that you used for
situations.
Another important consideration when creating situations is the Display Item, which by default, is not
enabled. If you want to generate a unique Tivoli Enterprise Console Event for each item that triggered a
situation or want to run a Take Action command against each item that triggered the situation, then you
want to select the Display Item and choose the appropriate attribute.
v DEPOTHOME
The location of the depot. The default location is
Windows: %CANDLE_HOME%\CMS\Depot
Linux and UNIX: $CANDLEHOME/tables/hub_tems_name/depot
Relocating the depot directory enables you to back up your Tivoli Monitoring environment without
having to back up the very large depot directory. In addition, relocating the depot directory ensures
that the depot will not fill up the filesystem where Tivoli Monitoring is running.
The target directories listed below are examples:
If CANDLE_HOME on Windows is located at C:\IBM\ITM, then relocate the depot to D:\ITM\depot
If CANDLE_HOME on Linux and UNIX is located at /opt/IBM/ITM, then relocate the depot to
/data/ITM/depot
If you set the variable in the env.config file, it remains unchanged when maintenance is applied to
your monitoring server. If you put it into your KBBENV file, it is overwritten when maintenance is
applied.
v Bind a specific IP address
KDEB_INTERFACELIST=192.100.100.100
Note: Use this option only if the monitoring server and portal server are on separate servers. If they are
on the same computer, there are going to be problems due to the multiplexed port 1920: The
tacmd command will not be able to find the monitoring server, and portal server clients will not be
able to find the portal server.
v Bind a specific host name
KDEB_INTERFACELIST=caps001
v Bind the first IPV4 address associated with the current host name to be the default interface.
KDEB_INTERFACELIST=!*
v Use an IPV4 symbolic name
KDEB_INTERFACELIST=!en0
There are many other optional parameters documented in the IBM Tivoli Monitoring: Administrator's Guide.
Review those parameters and determine whether any are required for your environment. See Tivoli
Enterprise Monitoring Server on page 299 for information on monitoring server parameters for
performance tuning.
you make sure the level of code (bundles and packages) in those installation depots is
consistent. You can also use a shared depot. If a shared directory is accessible to all monitoring
servers, it can be mounted across the monitoring server environment. This reduces the
maintenance workload, since you have only one directory to maintain in a shared location, rather
than maintaining depot directories on each monitoring server.
Post-installation checklist
Use this post-installation checklist to ensure the following items have been completed:
v Monitoring server (situations created)
v Portal server (check all aspects of portal server functionality such as workspaces)
v Perform a complete backup of all Tivoli Monitoring components
Applying maintenance
This section outlines the planning and implementation steps necessary to install maintenance in your Tivoli
Monitoring V6.2.2 environment. Routine maintenance is outlined in the following sections:
v Planning an upgrade
v Upgrade steps
v Post-upgrade health check on page 76
Planning an upgrade
Use the following checklist to plan your upgrade:
v Check that the plan is in place for upgrading the environment (upgrade the environment incrementally).
Follow a formal change management plan for Tivoli Monitoring upgrades and include, at minimum, both
a deployment and tested backout plan.
v Download the correct upgrade media, not the fresh installation media.
v Back up all Tivoli Monitoring core components such as monitoring server and portal server.
v Carefully review the appropriate Fix Pack Readme and Documentation Addendum for any prerequisites.
Attention: Before upgrading your infrastructure components and beginning the upgrade process, perform
a cold backup of your hub monitoring server, portal server, portal client, Warehouse Proxy
agents, Summarization and Pruning agents, and remote monitoring server. Back up the
following key components:
v Portal server database
v Warehouse database
v Full system backups and file system backups for installed Tivoli Monitoring components
Upgrade steps
When performing upgrades, read this Installation and Setup Guide or the supplied fix pack readme
carefully. Perform your installation in the following order:
Note: This order might vary depending on the content of the release and the fix pack.
1. Event Synchronization
2. Warehouse, including the Warehouse Proxy agent and Summarization and Pruning agent
3. Hub Tivoli Enterprise Monitoring Server
4. Remote Tivoli Enterprise Monitoring Server
5. Tivoli Enterprise Portal Server
6. Run any scripts necessary to update the Warehouse schema
7. Tivoli Enterprise Portal desktop client
For more information about installing fix packs, see Installing product maintenance on page 236.
Run the following SQL against each monitoring server in a portal server view:
"SELECT APPL_NAME, TIMESTAMP FROM SYSTEM.SYSAPPLS AT (REMOTE_TEMS) ORDER BY APPL_NAME"
v Check if the depots populated on each monitoring server (hub and remote) are the same.
Run these commands from the hub monitoring server.
tacmd viewdepot
tacmd viewdepot -j remote_tems
v Check if the warehouse data is visible through the workspace views, meaning the portal server still has
the correct connection to the warehouse database.
Select the attribute group for which history collection is enabled by checking that view and making sure
the data can be pulled for more than 24 hours.
v Check if the agents are online and connected to the expected remote monitoring server.
Run the tacmd listSystems command.
v Check if the situations are firing and events are being forwarded to Tivoli Enterprise Console or
OMNIbus.
Run the command on the Tivoli Enterprise Console server using wtdumprl or drag the Tivoli Enterprise
Console icon to any view to view the events.
v Check if the historical configuration is active.
Log in to the portal server using the portal browser or desktop client and click History Configuration.
Browse through the desired attribute groups to see if they are still active.
Or you can run this query: "SELECT NODEL, OBJNAME, LSTDATE FROM O4SRV.TOBJACCL WHERE OBJNAME
LIKE 'UADVISOR*'"
v Check if the Warehouse Proxy agent and Summarization and Pruning agents started correctly, meaning the
agents made successful connections to the warehouse database.
You can examine the WAREHOUSELOG table to see the last updates by each attribute group. See
sample below:
SELECT ORIGINNODE AS "Agent Hostname", OBJECT AS "Attribute Group",
EXPORTTIME AS "Export Time", ROWSRECEIVED AS "Received Rows",
ROWSINSERTED AS "Inserted Rows", ROWSSKIPPED AS "Skipped Rows",
ERRORMSG AS "Error Message" FROM WAREHOUSELOG
Note to Windows users: If you attempt to run a tacmd CLI command and either the Embedded Java
Runtime or the User Interface Extensions are not available on the node where
you invoke the command, you will receive the error shown in Figure 25 on page
189.
v Check the core components' process memory and CPU usage, and check that you have situations created to
monitor them.
v Check the list of managed systems deployed in your Tivoli Monitoring environment. Take note of their
maintenance levels. Check with IBM Software Support or your IBM account representative to see if new
fix packs and interim fixes are available. If so, determine what has been fixed so you can decide if you
want to deploy the patches to your environment or just wait until the next major fix pack.
v Once again, check your situation thresholds to make sure you don't have false positive events. In large
user environments there are many factors that can have an effect on how a system performs. A
change in performance on any system can change the way Tivoli Monitoring V6.2.2 reports status for
that system. Make sure the events active in Tivoli Monitoring are real.
v Take inventory of the systems being managed by Tivoli Monitoring. There might be a need to deploy
additional agents on new systems or systems where new applications have been added.
v Assess the capacity of the infrastructure systems for CPU, memory, and disk utilization to
continually plan for overall workload balancing. As new versions of applications are introduced into the
environment, their effect on resources typically changes. This ongoing effort helps ensure the correct
hardware is in place. Confirm the number of agents connected to each remote monitoring server to
ensure that you have not exceeded the recommended limit of 1500 agents.
Gather any information required for successful installation (such as DB2 user information and security
specifications). Specific information to have ready is listed in Appendix A, Installation worksheets, on page 529.
Install the Tivoli Enterprise Monitoring Server.
Start the portal client to verify that you can view the monitoring data.
If you are upgrading from IBM Tivoli Monitoring V6.1 or OMEGAMON Platform 350 or 360 and CandleNet
Portal 195, see Chapter 6, Upgrading from a previous installation, on page 115 before installing any IBM
Tivoli Monitoring components.
If you are upgrading from Tivoli Distributed Monitoring to IBM Tivoli Monitoring, see the IBM Tivoli
Monitoring: Upgrading from Tivoli Distributed Monitoring guide.
If you are upgrading from IBM Tivoli Monitoring V5.x to V6.2, see IBM Tivoli Monitoring: Upgrading from
V5.1.2.
If you plan to use firewalls in your environment, see Appendix C, Firewalls, on page 551 for an overview
of the IBM Tivoli Monitoring implementation of firewalls.
If the returned path includes something like hostname:\Rulebase_directory, with no drive letter (such
as C:\), copy the ESync2200Win32.exe file from the \TEC subdirectory of the IBM Tivoli Monitoring
installation image to the drive where the rule base exists and run the installation from that file.
2. If you are using a Windows event server, if you have any rule base with an associated path that
does not contain a relative drive letter and that has the Sentry2_0_Base class imported, copy the
ESync2200Win32.exe file from the \TEC subdirectory of the IBM Tivoli Monitoring installation image to
the drive where the rule base exists and run the installation from that file.
To verify if you have any rule bases that have an associated path containing no relative drive letter,
run the wrb -lsrb -path command as described in the previous note.
To determine if your rule bases have the Sentry2_0_Base class imported, run the following
command against all of your rule bases:
wrb -lsrbclass rule_base
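For example, for a rule base named itm_rb (the name is illustrative; substitute each of your rule base names), you might run:
wrb -lsrb -path
wrb -lsrbclass itm_rb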
[Figure: IPv4 and IPv6 connectivity between the portal server, hub monitoring server, remote monitoring servers, and agents. The diagram indicates which links use IPv4, IPv6, or either protocol; see the notes that follow.]
Notes:
1. All agents running on a computer must be configured to use the same protocol, either IPv4 or IPv6.
2. In scenarios where some agents are on IPv4-only computers or the network between the agents and
the monitoring servers they report to is IPv4 only, these agents need to communicate with the
monitoring servers over IPv4. The monitoring servers therefore may communicate with some agents
over IPv4 and with others over IPv6.
3. The portal server does not support IPv6 on the Windows platform. If the portal server is on Windows,
the browser and desktop clients need to communicate with it using IPv4.
4. Components do not operate in dual-stack mode on the Solaris platform. Components can be
configured to communicate using either IPv4 or IPv6. Thus, if a hub server on a Solaris host is
configured to use IPv6, the portal server, all remote servers, and all agents connecting to the hub must
be configured to use IPv6 for communicating with the hub.
5. On HP-UX, patch PHNE_29445 is required for IPv6 support.
6. Components do not operate in dual-stack mode on the HP-UX HP9000 platform (although dual-stack
mode is supported on the HP-UX Integrity platform). Components can be configured to communicate
using either IPv4 or IPv6. Thus, if a hub server on an HP9000 host is configured to use IPv6, the
portal server, all remote servers, and all agents connecting to the hub must be configured to use IPv6
for communicating with the hub.
7. On Linux computers, a minimum kernel level of 2.6 is required for IPv6 support.
Monitoring components, when installed and configured using the appropriate platform-specific configuration
tools, are initially configured only for IPv4 communication on all platforms except z/OS (where your ICAT
settings govern the protocol used). On all other platforms, you must perform supplemental configuration
steps to reconfigure Tivoli Monitoring components to communicate using IPv6.
User authority
To install IBM Tivoli Monitoring on a Windows computer, you must have Administrator privileges on that
computer. You must also run the IBM Tivoli Monitoring components as a user with Administrator privileges.
Or:
su - USER -c "ITM_Install_Home/bin/itmcmd agent -o instance start product_code"
Where:
USER
Is the ID that the application will be started as. By default, USER is the owner of the bin directory for
the application. For the UNIX Log Alert agent, USER is the owner of the ITM_Install_Home/PLAT/ul/bin
directory.
ITM_Install_Home
Is the full path to the IBM Tivoli Monitoring version 6.x installation directory.
product_code
Is the two-character code for this application. Refer to Appendix D, IBM Tivoli product, platform, and
component codes, on page 567 for a list of the codes for the common components and the base
agents. See the product documentation for other product codes.
instance
Is the instance name required to start this application.
PLAT
Is the platform directory where the application is installed.
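For example, to start the DB2 monitoring agent (product code ud) instance db2inst1 as the db2inst1 user, assuming the default installation directory /opt/IBM/ITM (the directory, instance, and user names are illustrative):
su - db2inst1 -c "/opt/IBM/ITM/bin/itmcmd agent -o db2inst1 start ud"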
Components are started in the order listed in the autostart script. This order is based on the dependencies
between components, rather than any logical sequence.
The kcirunas.cfg file was added to allow overrides to the default processing. The kcirunas.cfg file is
delivered in the root directory of the installation media, in the same location as install.sh. During
installation, this file is copied to the ITM_Install_Home/config directory (but is not overwritten if this file
already exists). This file is provided as a sample file with each section commented out. You do not have to
modify this file if you want the autostart script to be generated with the default processing.
For local installation usage, you may modify the kcirunas.cfg file in the root directory of the media if you
want to use the same set of values for multiple installations on similar systems from this image. You may
also modify the kcirunas.cfg file in the ITM_Install_Home/config directory if you want to use a specific
set of values for each individual installation from this image.
For remote deployment usage, you can modify the kcirunas.cfg file in the root directory of the media. You
can also modify the kcirunas.cfg file in the Tivoli Enterprise Monitoring Server depot after populating the
depot from this image. If you have set the DEPOTHOME variable in the tables/TEMS_NAME/KBBENV file, you
must use that value as the base when searching for the depot location. To determine if you have set
DEPOTHOME, run the following commands:
cd ITM_Install_Home
DEPOTHOME=`find tables -name KBBENV -exec grep DEPOTHOME {} \; 2> /dev/null | cut -d= -f2`
echo $DEPOTHOME
If DEPOTHOME is not empty, run the following commands to locate kcirunas.cfg in the monitoring server
depot:
cd ITM_Install_Home
DEPOTHOME=`find tables -name KBBENV -exec grep DEPOTHOME {} \; 2> /dev/null | cut -d= -f2`
find $DEPOTHOME -name kcirunas.cfg -print
The file kcirunas.cfg contains a superset of the XML syntax and structure in the ITM_Install_Home/
config/HOST_kdyrunas.cfg file (where HOST is the short hostname for this system) produced by remote
configurations, such as remote deployment or Tivoli Enterprise Portal-based agent configuration.
The entries in kcirunas.cfg do not affect the actions performed for remote deployment, remote
configuration, remote starting or stopping, or any Tivoli Enterprise Portal-initiated agent action. The entries
in HOST_kdyrunas.cfg affect the generation of the reboot script. The entries in kcirunas.cfg also affect the
generation of the reboot script, and they override any entries for the same component in
HOST_kdyrunas.cfg.
The following is the default kcirunas.cfg file with all <productCode> entries commented:
<agent>
<!productCode>ux</productCode>
<instance>
<user>itmuser</user>
</instance>
<!productCode>ul</productCode>
<instance>
<user>root</user>
</instance>
<!productCode>lz</productCode>
<instance>
<user>itmuser</user>
</instance>
<!productCode>ud</productCode>
<instance>
<name>db2inst1</name>
<user>db2inst1</user>
</instance>
<instance>
<name>db2inst2</name>
<user>root</user>
</instance>
<!productCode>ms</productCode>
<instance>
<name>HUB17</name>
<user>itmuser</user>
</instance>
<!productCode>cq</productCode>
<instance>
<user>itmuser</user>
</instance>
<!productCode>cj</productCode>
<instance>
<user>itmuser</user>
</instance>
</agent>
By default, each <productCode> section in the kcirunas.cfg file is disabled by making the product code
a comment, such as <!productCode>. To activate a section, do the following:
1. Remove the comment indicator (the exclamation point, !) so that the <!productCode> item looks like
<productCode>.
2. If you need additional sections, copy an existing <productCode> section rather than creating new
sections from scratch.
3. Customize each activated <productCode> section.
Commented, or deactivated, sections are ignored. Uncommented, or activated, sections for applications
that are not installed are ignored. For agents that do not require an instance value, specify only:
<productCode>product_code</productCode>
<instance>
<user>USER</user>
</instance>
For agents that do require an instance value, like the DB2 monitoring agent (product code ud), specify the
product_code, instance, user, and name:
<productCode>ud</productCode>
<instance>
<name>db2inst1</name>
<user>db2inst1</user>
</instance>
<instance>
<name>db2inst2</name>
<user>root</user>
</instance>
Two items that are supported in the kcirunas.cfg file that are not supported in the HOST_kdyrunas.cfg file
are the <default> section and the <autoStart> flag. The <autoStart> flag can be used in the <default>
section and in the <instance> section. The <default> section is specified as follows:
<productCode>product_code</productCode>
<default>
<user>db2inst1</user>
</default>
<productCode>product_code</productCode>
<default>
<autoStart>no</autoStart>
</default>
<productCode>product_code</productCode>
<default>
<user>db2inst1</user>
<autoStart>no</autoStart>
</default>
A section similar to the following can be used to not automatically start the default MQ Monitoring instance
and to automatically start all other instances as the mqm user:
<productCode>mq</productCode>
<default>
<user>mqm</user>
</default>
<instance>
<name>None</name>
<autoStart>no</autoStart>
</instance>
A set of sections similar to the following can be used to avoid automatically starting the set of installed
agents and servers. You need one section for each agent or server component installed:
<productCode>product_code</productCode>
<default>
<autoStart>no</autoStart>
</default>
Where product_code is the two-character product code for the individual agent or server component (refer
to Appendix D, IBM Tivoli product, platform, and component codes, on page 567).
Notes:
1. Any changes made directly to the autostart script (ITMAgentsN or rc.itmN, depending on the platform)
will not be preserved and will be overwritten the next time you install, configure, or upgrade an
application.
2. Any changes made to the AutoRun.sh script will not be preserved and will be overwritten the next time
you apply higher maintenance.
Note: This option does not apply to configuring the portal server on Linux systems where you may use a
nonroot IBM Tivoli Monitoring user ID to install the portal server. If you do, you must then use the
root user ID to configure the portal server because the DB2 installation or administrator ID may lack
the necessary privileges.
If, however, you use either the root user ID, the DB2 installation ID, or the DB2 administrator ID to
install the portal server, you must use that same user ID to configure it. You can then use your
nonroot Tivoli Monitoring user ID to run the portal server.
v You can use any valid name.
v You can install the IBM Tivoli Monitoring software as the root user, but you do not have to. If you do not
install as a root user, you must follow the steps outlined in Changing the file permissions for agents on
page 191 after you install any monitoring agents.
v Use the same user to install all components.
v If you are using NFS or a local file system, establish your installation directory according to the
guidelines used in your environment.
v Only the Korn shell is supported for the execution of the installation and runtime scripts. Consider using
the Korn shell as the default environment for your IBM Tivoli login account.
NFS also has some trade-offs in how you manage your environment. While you can keep your entire IBM
Tivoli Monitoring installation in one place, additional configuration might be required to define the use of specific
products or processes in your installation directory. Will every product on every host system execute using
the same configuration, or will you tailor the configuration to the particular environment?
Note: If installing from images on an NFS mount, the NFS mount needs world execute permissions to be
accessible by the installation processes.
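For example, if the installation images are mounted at /mnt/itm_images (the path is illustrative), one way to grant world read and execute permissions on the mounted files and directories is:
chmod -R a+rx /mnt/itm_images
Depending on how the file system is exported, the same change might need to be made on the NFS server rather than on the client mount.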
Local and remote installation is not possible in this configuration. Tivoli Monitoring installation always requires read-write
access to the /opt directory. This is not only a GSKit issue. Even if CANDLEHOME is specified as the
nondefault directory, read-write access to /opt/IBM/ITM/tmaitm6/links is still needed.
Note: In all supported Small Local Zone configurations, the IBM Tivoli Monitoring interactive command-line
installation prompts you for the parent directory where the links will be created. For example, if you
enter /tmp for the directory, the links will be created in the /tmp/usr/lib and /tmp/usr/bin directories.
The default directory for this prompt is $CANDLEHOME/gsklinks. During remote installation, the
default directory is always used.
It is very difficult to predict all the possible shared-resource policies for small local zones and the possible
side effects. It is the responsibility of the system administrator to create these policies without causing
unintentional side effects between different zones and software installed.
The "nofiles" parameter is the number of file descriptors available to a process. For the monitoring server
process (kdsmain), the "nofiles" parameter should be set larger than the maximum number of agents that
will be connecting to the monitoring server. If the monitoring server is unable to get file descriptors when
needed, unexpected behavior can occur, including program failures. Consider increasing the value to 8000
file descriptors or more.
There are other user limit parameters that control how much data, stack and memory are available to a
process. For large environments, consider increasing these memory-related user limit parameters for the
monitoring server (kdsmain) process. Configuring the user limit parameters usually requires root access,
and involves changing system startup files which are operating system specific. Consult the operating
system manuals for information on how to configure the user limit parameters.
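For example, to check and temporarily raise the file descriptor limit in the shell that starts the monitoring server (a minimal sketch for a Korn or Bash shell; making the change permanent is operating-system specific and raising the hard limit may require root authority):
ulimit -n          # display the current "nofiles" limit
ulimit -n 8000     # raise the limit for the current session before starting kdsmain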
Security options
User IDs and passwords sent between Tivoli Management Services components are encrypted by default.
Other communication between components can be secured by configuring the components to use secure
protocols. See Communication between components on page 94.
Access to the Tivoli Enterprise Portal (authorization) is controlled by user accounts (IDs) defined to the
Tivoli Enterprise Portal Server. The hub Tivoli Enterprise Monitoring Server can be configured to validate,
or authenticate, user IDs through either the local system registry or an external LDAP-enabled registry.
Alternatively, authentication by an external LDAP registry can be configured through the Tivoli Enterprise
Portal Server. If authentication is not configured through either the monitoring server or the portal server,
no password is required to log on to the Tivoli Enterprise Portal. See Authorization and authentication on
page 94.
User IDs that require access to the SOAP Server, including user IDs that issue commands that invoke
SOAP methods, must be authenticated through the hub monitoring server. If user authentication is not
enabled on the hub monitoring server, anyone can make requests to the SOAP Server. If user
authentication is enabled on the hub, the SOAP Server honors only requests from user IDs and passwords
authenticated by the local or external registry. If a type of access is specified for specific users, only
requests from those users for which access is specified are honored. See SOAP server security on page
95.
User IDs that require the ability to share credentials with other Web-enabled Tivoli applications must be
authenticated through the Tivoli Enterprise Portal Server, which must be configured for single sign-on. See
Single sign-on capability on page 95. If you have previously enabled authentication through the hub
monitoring server and want to change to the portal server, see the IBM Tivoli Monitoring: Administrator's
Guide.
User passwords are limited to 16 characters or fewer.
Notes:
1. The Tivoli Directory Server (TDS) LDAP client used by the Tivoli Enterprise Monitoring Server does not
support LDAP referrals, such as those supported by Microsoft Active Directory.
2. The IBM Tivoli Monitoring Service Console enables you to read logs and turn on traces for remote
product diagnostics and configuration. The Service Console performs user authentication using the
native operating system security facility. This means that if you use the Service Console on z/OS, your
user ID and password are checked by the z/OS security facility (such as RACF/SAF). If you use the
Service Console on Windows, you must pass the Windows workstation user ID and password prompt.
A password is always required to access the Service Console. Even if a user ID is allowed to log into
the operating system without a password, access to the Service Console will be denied. If necessary,
you must create a password for the user ID that is being used to log in to the Service Console. For
more information about the Service Console, see the IBM Tivoli Monitoring: Troubleshooting Guide.
User authentication may be enabled through either the hub Tivoli Enterprise Monitoring Server, or the
Tivoli Enterprise Portal Server.
If authentication is enabled through the hub monitoring server, user IDs can be authenticated either by the
local system registry or by an external LDAP-enabled central registry. User IDs that need to make SOAP
Server requests (including user IDs that issue CLI commands that invoke SOAP server methods) can be
authenticated only through the hub monitoring server.
If authentication is enabled through the Tivoli Enterprise Portal, user IDs are authenticated against an
external LDAP-enabled registry. User IDs that require single sign-on (SSO) capability must be
authenticated through the portal server and mapped to unique user identifiers in an LDAP registry shared
by all SSO-eligible Tivoli applications.
User authentication should not be enabled until at least a basic installation of Tivoli Management Services
components and IBM Tivoli Monitoring base agents has been completed and tested. For instructions on
enabling authentication, see the IBM Tivoli Monitoring: Administrator's Guide.
On z/OS, GSKit is known as the Integrated Cryptographic Service Facility, or ICSF. If ICSF is not installed
on the z/OS system, the monitoring server uses an alternative, less secure encryption scheme. Since both
components must be using the same scheme, if the hub system does not use ICSF, you must configure
the Tivoli Enterprise Portal to use the less secure scheme (EGG1) as well. For more information, see IBM
Tivoli Management Services on z/OS: Configuring the Tivoli Enterprise Monitoring Server on z/OS.
A default certificate and key are provided with GSKit at installation. A stash file provides the database
password for unattended operation. You can also use the key management facilities in GSKit to generate
your own certificates. For more information regarding GSKit and iKeyMan, including information about
creating and using security certificates, see the GSKit product documentation located at
http://www-128.ibm.com/developerworks/java/jdk/security/50/.
Notes:
1. The IBM Tivoli Monitoring installer no longer modifies the system GSKit. If necessary, it installs a local
copy of GSKit that is private to Tivoli Monitoring.
2. In 64-bit environments, both the 64-bit and the 32-bit GSKit are installed, to support both 64-bit and
32-bit Tivoli Monitoring components.
[Table: Supported Windows operating systems, showing which of the monitoring server, portal server, portal client1, OS monitoring agent2, Warehouse Proxy agent, and Summarization and Pruning agent are supported (mostly 32 bit) on each Windows version, including Windows Server 2003 Standard Edition R2 on Intel x86-32 (32 bit) and on x86-64 (64 bit). The individual table cells are not recoverable here; see the notes that follow.]
Notes:
1. The Tivoli Enterprise Portal desktop client is supported on marked platforms. However, the browser client can be
accessed only from Windows computers running Internet Explorer 6 or 7 or either Firefox 2.x or 3.0.x (but not
3.5.x). Note that the Firefox browser is not supported on Windows Server 2008.
2. The OS monitoring agent column indicates the platforms on which an operating system monitoring agent is
supported. This column does not indicate that any agent runs on any operating system. For example, to monitor
a Linux computer, you must use a Linux monitoring agent, not a Windows monitoring agent.
For information about the operating systems supported for non-OS agents, see the documentation for the specific
agents you are using in your environment.
3. For the Windows XP and Windows Vista operating systems, the Microsoft End User License Agreement (EULA)
does not license these operating systems to function as a server. Tivoli products that function as a server on
these operating systems are supported for demonstration purposes only.
4. Windows Server 2008 (32-bit) is supported for these Tivoli Management Services components:
v The monitoring server
v The portal server
v The agent infrastructure and the OS agent
v The portal desktop client and the browser client but only under Internet Explorer (Mozilla Firefox is not
supported)
v The Warehouse Proxy agent
5. The Summarization and Pruning agent is supported on Windows 2008, but only with the necessary workaround.
Table 13 shows the support for monitoring components on UNIX (non-Linux), i5/OS, and z/OS computers.
Table 13. Supported UNIX, i5/OS, and z/OS operating systems
[Table cells not recoverable here. The table shows, for each UNIX, i5/OS, and z/OS operating system (including HP-UX 11i v3 on Integrity (IA64)12,14), which of the monitoring server, portal server, portal client, OS monitoring agent1,2, Warehouse Proxy agent3, and Summarization and Pruning agent are supported. Many platforms support the portal client as the browser client only, and most supported components are 32 bit (31 bit for the rows flagged with notes 10 and 11).]
Notes:
1. The OS monitoring agent column indicates the platforms on which an operating system monitoring agent is
supported. This column does not indicate that any agent runs on any operating system. For example, to monitor
a Linux computer, you must use a Linux monitoring agent, not a Windows monitoring agent.
For information about the operating systems supported for non-OS agents, see the documentation for the specific
agents you are using in your environment.
2. If you are installing the OMEGAMON XE for Messaging agent on a 64-bit operating system, you must install the
32-bit version of the agent framework.
3. Configuration of the Warehouse Proxy agent requires an X Window System (also known as the X11 GUI) on the
computer where you are configuring it. Alternatively, you can run the following command to use an X terminal
emulation program (such as Cygwin) that is running on another computer:
export DISPLAY=my_windows_pc_IP_addr:0.0
where my_windows_pc_IP_addr is the IP address of a computer that is running an X terminal emulation program.
4. Supported AIX systems must be at the required maintenance level for IBM Java 1.5. See the following Web site
for the Java 5 AIX maintenance level matrix: http://www-128.ibm.com/developerworks/java/jdk/aix/service.html
Component xlC.aix50.rte must be at level 8.0.0.4. See the following Web site for installation instructions:
http://www-1.ibm.com/support/docview.wss?uid=swg1IY84212
The Tivoli Enterprise Monitoring Server and Tivoli Enterprise Portal Server require AIX 5.3 TL5 SP3 or newer. The
other components need AIX 5.3 TL3 or newer, but if they are at AIX 5.3 TL5, they too require SP3.
Version 8 of the AIX XL C/C++ runtime must be installed. To determine the current level, run the following AIX
command:
lslpp -l | grep -i xlc
5. Component xlC.aix61.rte must be at level 10.1.0.2 or higher. See the following Web site for installation
instructions: http://www-1.ibm.com/support/docview.wss?uid=swg1IY84212
Version 10 of the AIX XL C/C++ runtime must be installed. Additionally on AIX 6.1, the xlC supplemental runtime
for aix50 (xlC.sup.aix50.rte at level 9.0.0.1 or higher) must be installed. To determine the current level, run the
following AIX command:
lslpp -l | grep -i xlc
The output should be something similar to the following:
xlC.aix61.rte        10.1.0.2
xlC.cpp              9.0.0.0
xlC.msg.en_US.cpp    9.0.0.0
xlC.msg.en_US.rte    10.1.0.2
xlC.rte              10.1.0.2
xlC.sup.aix50.rte    9.0.0.1
6. Solaris V8 32 bit requires patches 108434-17 and 109147-07. Solaris V8 64 bit requires 108435-17 and
108434-17. Both 32-bit and 64-bit versions require 111721-04.
7. Solaris V9 32 bit requires patch 111711-11. Solaris V9 64 bit requires 111712-11 and 111711-11. Both 32-bit and
64-bit versions require 111722-04.
8. There are some limitations on installing into Solaris 10 when zones are configured. See Installing into Solaris
zones on page 92.
9. SNMP version 3 is not supported on i5/OS.
10. For information about installing the monitoring server on z/OS, refer to the program directory that comes with
that product.
11. The OS monitoring agent for z/OS computers is part of the IBM Tivoli OMEGAMON for z/OS product.
12. You cannot upgrade either the OS or Log Alert agents that you currently have running on an HP-UX 11i v2
(B.11.23) on an Integrity (IA64) computer in PA-RISC mode prior to fix pack 4. Fix packs prior to fix pack 4 did
not run in native 64-bit mode by default. You must first uninstall the agent if the version is previous to the fix
pack 4 version.
13. The 32-bit kernel still requires a 64-bit processor. Ensure that any HP-UX managed system is based on
PA-RISC2 architecture. From the native kernel mode (for example, 64 bit if the system is 64-bit based), run the
following command:
file /stand/vmunix
This returns the native architecture type. For example:
/stand/vmunix:
[Table: Supported Linux operating systems, showing which of the monitoring server, portal server, portal client1, OS monitoring agent2,5, Warehouse Proxy agent3, and Summarization and Pruning agent are supported on each Linux platform. Many platforms support the portal client as the browser client only, and several rows call for the native Linux OS agent. The individual table cells are not recoverable here; see the notes that follow.]
Notes:
1. Both the Tivoli Enterprise Portal desktop and browser clients are supported on the marked platforms. The
browser client may work with Firefox 2.x on other Linux platforms that have not yet been certified by IBM.
2. The OS monitoring agent column indicates the platforms on which an agent is supported. This column does not
indicate that any agent runs on any operating system. For example, to monitor a Linux computer, you must use a
Linux monitoring agent, not a Windows monitoring agent. An "X" symbol in the column indicates that an operating
system agent is available for the specific operating system that is named in the row where the "X" is located.
3. Configuration of the Warehouse Proxy agent requires an X Window System (also known as the X11 GUI) on the
computer where you are configuring it. Alternatively, you can run the following command to utilize an X terminal
emulation program (such as Cygwin) that is running on another computer:
export DISPLAY=my_windows_pc_IP_addr:0.0
where my_windows_pc_IP_addr is the IP address of a computer that is running an X terminal emulation program.
4. SuSE Linux Enterprise Server 9 must be at SP3 or higher. SuSE 10 must be at pdksh-5.2.14 or higher.
5. The Linux OS Monitoring Agent requires the installation of the latest versions of the following libraries:
libstdc++
libgcc
compat-libstdc++
libXp
These libraries are available on the Linux operating system installation media and Service Packs. Each library
can have multiple packages, and each should be installed.
6. The following rpm files are prerequisites when running IBM Tivoli Monitoring V6.2.1/6.2.2 under Linux on Itanium:
ia32el-1.1-20.ia64.rpm
glibc-2.3.4-2.36.i686.rpm
glibc-common-2.3.4-2.36.i386.rpm
libgcc-3.4.6-8.i386.rpm
7. RedHat Enterprise Linux V5 now enables SELinux (security-enhanced Linux) by default, which interferes with the
installation, configuration, and operation of IBM Tivoli Monitoring. To ensure the proper operation of the V6.1.x
and V6.2.x releases, the SELinux setting must be changed from enforcing mode to either permissive or
disabled mode. When the permissive mode is chosen, the system log will contain entries regarding which Tivoli
Monitoring binaries have triggered the SELinux security condition. However, under the permissive mode, these
entries are for auditing purposes only, and Tivoli Monitoring operates normally.
As of V6.2.1 and V6.2.2, SELinux can be enabled again after installation and configuration; however, it must still
be set to either permissive or disabled mode when installing and configuring IBM Tivoli Monitoring. The
appropriate command is either:
setenforce Permissive
or:
setenforce Enforcing
After switching the SELinux mode, issue the prelink -a command. If you skip this command, the IBM Tivoli
Monitoring installer may fail with error message KCI1235E terminating ... problem with starting Java Virtual
Machine.
8. On zSeries systems running a 32-bit DB2 V9 instance under Linux, you must install a 32-bit Tivoli Enterprise
Portal Server. On zSeries systems running a 64-bit DB2 V9 instance under Linux, you can install either:
v A 64-bit Tivoli Enterprise Portal Server.
v A 32-bit Tivoli Enterprise Portal Server if the 32-bit DB2 V9 client libraries are also installed.
The following table lists the operating system patches required for the IBM Global Security Toolkit (GSKit),
which is used to provide security between monitoring components. GSKit is installed automatically when
you install Tivoli Management Services components.
Table 15. Operating system requirements for IBM GSKit
Operating system: patch required
Solaris V10: none
AIX V5.x: xlC.aix50.rte.6.0.0.3 or later
AIX V6.x: none
[The remaining rows, including Solaris V8, Solaris V9, and HP-UX V11i, list the following patches and packages, whose row assignments are not recoverable here: 111711-08; pdksh-5.2.14-13.i386.rpm; compat-gcc-32-c++-3.2.3-46.1.i386.rpm; compat-gcc-32-3.2.3-46.1.i386.rpm; compat-libstdc++-33-3.2.3-46.1.i386.rpm.]
Supported databases for Tivoli Enterprise Portal Server and Tivoli Data
Warehouse
The following tables show the supported databases for the portal server and the Tivoli Data Warehouse.
Table 16 shows the supported databases for the portal server. Note that the database and the portal
server must be installed on the same computer.
Note: IBM Tivoli Monitoring V6.2.x includes DB2 Workgroup Server Edition V9.5 for use with the portal
server and the Tivoli Data Warehouse. (Version 6.2 and its fix packs provided a restricted-use
version of DB2 Database for Linux, UNIX, and Windows V9.1.)
Table 16. Supported databases for the portal server
Portal server operating system: AIX. Supported IBM DB2 for the workstation versions: V8.1, V8.2, V9.1, V9.5, V9.7.
Portal server operating system: Linux4. Supported IBM DB2 for the workstation versions: V8.1, V8.2, V9.1, V9.5, V9.7.
Portal server operating system: Windows. Supported IBM DB2 for the workstation versions: V8.1, V8.2, V9.1, V9.5, V9.7. MS SQL Server is also supported on Windows (see the notes that follow).
Notes:
1. "TEPS" is the default database name for the database used by the portal server.
2. Your portal server database must be located on the computer where the portal server is installed.
3. If, in your environment, you are using products whose licenses require you to collect software use information
and report it to IBM using IBM Tivoli License Manager, you must ensure that use of this instance of IBM DB2 for
the workstation is not included in the report. To do this, create a Tivoli License Manager license, select a license
type that does not involve reporting to IBM, and associate this instance of the product with it.
4. On Linux, the portal server database must be installed with the operating system language set to UTF-8.
5. IBM Tivoli Monitoring supports Microsoft SQL Server 2000 only if the data is limited to codepoints inside the
Basic Multilingual Plane (range U+0000 to U+FFFF). This restriction does not apply to IBM DB2 for the
workstation.
6. This assumes the portal server runs on Windows. See Technote 1240452 for further information on support for
this application.
7. The base installation CD includes, among its Tivoli Enterprise Portal Server files, a version of the embedded
Derby database that you can use instead of either DB2 for the workstation or SQL Server (but note the limitations
listed in Derby now supported as a portal server database on page 20). This version of Derby is the one
supported by and included with the version of eWAS required for the portal server.
8. If you transition from one supported database system to another for the Tivoli Enterprise Portal Server, your
existing portal server data is not copied from the first system to the new one.
Table 17 shows the supported databases for the Tivoli Data Warehouse. Note that if you run the database
for the Tivoli Enterprise Portal Server and the database for the warehouse in the same instance of IBM
DB2 for the workstation, you must follow the support requirements in Table 16 on page 107.
Table 17. Supported databases for the Tivoli Data Warehouse
Tivoli Data Warehouse database ("WAREHOUS")1:
v IBM DB2 for the workstation. Supported versions: V8.1 with fix pack 10 or higher; V8.2 with fix pack 3 or higher; V9.1 and fix packs; V9.5 and fix packs5; V9.7 and fix packs.
v DB2 on z/OS. Supported versions: version 9.1 or subsequent. Support applies to any Windows, Linux, or UNIX platform that can run the Warehouse Proxy agent. DB2 Connect Server Edition is also required on the workstation.
v MS SQL Server. Supported versions: MS SQL Server 2000 Enterprise Edition4; MS SQL Server 2005 Enterprise Edition; MS SQL Server 2008 Enterprise Edition. Support applies to the Windows operating environment only.
v Oracle. Supported versions: 9.2; 10g Release 1; 10g Release 2; 11g Release 1; 11g Release 2. Support applies to the following operating systems: AIX V5.3; HP-UX 11iv3; Solaris 102; RedHat Enterprise Linux 4 for Intel; RedHat Enterprise Linux 5 on Intel and zSeries; SuSE Linux Enterprise Server 9 for Intel; SuSE Linux Enterprise Server 10 for Intel; Windows 2003 Server.
Notes:
1. "WAREHOUS" is the default database name for the database used by Tivoli Data Warehouse. Support is for
32-bit or 64-bit databases. Your Tivoli Data Warehouse database can be located on the same computer as your
monitoring server or on a remote computer.
2. See the Oracle company support Web site (www.oracle.com) for information about installing and configuring
Oracle on Solaris V10.
3. Do not use DB2 for the workstation V9.1 fix pack 2 for the Tivoli Data Warehouse. Use of DB2 for the workstation
V9.1 FP2 can cause the Warehouse Proxy agent and the Summarization and Pruning agent not to function
properly. Use an earlier version, such as DB2 for the workstation V9.1 fix pack 1, or upgrade to a level that
contains the fix for APAR JR26744, such as DB2 for the workstation V9.1 fix pack 3.
4. IBM Tivoli Monitoring supports Microsoft SQL Server 2000 only if the data is limited to code points inside the
Basic Multilingual Plane (range U+0000 to U+FFFF).
5. Users of DB2 Database for Linux, UNIX, and Windows V9.5 fix pack 1 must update their JDBC driver to version
3.57.82. You can download this updated driver here:
http://www-01.ibm.com/support/docview.wss?rs=4020&uid=swg21385217
6. DB2 on z/OS is not supported for the Tivoli Enterprise Portal Server.
Processor requirements
For best performance, processor speeds should be at least 1.5 GHz for RISC architectures and 3 GHz for
Intel architectures. Choosing faster processors should result in improved response time, greater
throughput, and lower CPU utilization.
Except for the Tivoli Data Warehouse, single-processor systems are suitable when an IBM Tivoli
Monitoring infrastructure component is installed on a separate computer from the other components. The
infrastructure components (monitoring server, portal server, portal client) run as multithreaded processes
and are able to run threads concurrently across multiple processors if they are available. CPU utilization
for most components is bursty, and steady-state CPU utilization is expected to be low in most cases. For
components supporting large environments, using multiprocessor systems can improve throughput.
You should also consider using multiprocessor systems in the following scenarios:
v You want to run the Tivoli Enterprise Portal client on a computer that is also running one of the server
components.
v You have a monitoring environment of 1000 or more monitored agents, and you want to install multiple
server components on the same computer. For example:
Portal server and hub monitoring server
Monitoring server (hub or remote) and Warehouse Proxy agent
Warehouse Proxy and Summarization and Pruning agents
v You have a small environment and you want to include all of the server components (monitoring server,
portal server, Warehouse Proxy agent, and Summarization and Pruning agent) on a single computer.
v Except in very small environments, use a multiprocessor system for the Tivoli Data Warehouse
database server. You can run the Warehouse Proxy agent and the Summarization and Pruning agent on
the Warehouse database server to eliminate network transfer from the database processing path:
If you install the Warehouse Proxy agent on the Warehouse database server, consider using a
two-way or four-way processor.
If you install the Summarization and Pruning agent on the Warehouse database server (with or
without the Warehouse Proxy agent), consider using a four-way processor. For large environments
where more CPU resources might be needed, you can run the Summarization and Pruning agent on
a computer separate from the Warehouse database server. In this case, ensure that a high-speed
network connection exists (100 Mbps or faster) between the Summarization and Pruning agent and
the database server.
[Table: Memory and disk requirements by component. Columns: Component; Small environment2; Large environment3; Disk storage requirements4. Recoverable values: the hub monitoring server requires 70 MB of memory in a small environment and 400 MB in a large environment, with disk storage of 1.1 GB on Windows, or 1.3 GB on Linux and UNIX plus 300 MB free in the /tmp directory5; the portal server requires 1.2 GB plus an additional 1.2 GB in your computer's temporary directory to install the eWAS server and the Eclipse Help Server. The remaining cells list values of 100 MB, 150 MB, 200 MB, 300 MB, 400 MB, 900 MB5, and 2 - 4 GB to 4 - 8 GB depending on database configuration parameters; their row assignments are not recoverable here. See the notes that follow.]
Notes:
1. The memory and disk sizings shown in this table are the amounts required for the individual component beyond
the needs of the operating system and any concurrently running applications. For the total system memory
required by small, medium-size, and large environments, see Sizing your Tivoli Monitoring hardware on page
39.
2. A small environment is considered to be a monitoring environment with 500 to 1000 agents, with 100 to 200
monitored agents per remote monitoring server and 20 clients or fewer per portal server.
3. A large environment is considered to be a monitoring environment with 10,000 agents or more monitored agents,
with 500 to 1500 monitored agents per remote monitoring server, and with 50 clients or more per portal server.
4. The disk storage estimates apply to any size monitoring environment and are considered high estimates. The
size of log files affects the amount of storage required.
5. The storage requirements for the hub and remote monitoring servers do not include storage for the agent depot,
which can require an additional 1 GB or more.
6. The memory requirement for the portal server does not include database processes for the portal server
database, which require up to 400 MB of additional memory, depending on configuration settings.
Add the sizings for individual components to calculate a total for more than one component installed on
the same computer.
For example, if the hub monitoring server and portal server are installed on the same computer in a small
monitoring environment, the initial requirement is 170 MB of memory and 900 MB of disk space beyond
the needs of the operating system and other applications. If you add 400 MB of memory for the portal
server database and 1 GB of storage for the agent depot, the total requirement for IBM Tivoli Monitoring
components comes to 570 MB of memory and 1.9 GB of storage.
Additional requirements
v The best network connection possible is needed between the hub monitoring server and portal server
and also between the Tivoli Data Warehouse, Warehouse Proxy agent, and Summarization and Pruning
agent.
v A video card supporting 64,000 colors and 1024 x 768 resolution is required for the portal client.
Required software
The following table lists the software required for IBM Tivoli Monitoring.
Table 19. Required software for IBM Tivoli Monitoring
For each product, the original table indicates the supported version and the components where the software is required (monitoring server, portal server, portal desktop client, portal browser client, and monitoring agent); the per-component indicators are not recoverable here. The products listed are:
v IBM Runtime Environment for Java: JRE V1.5
v Sun Java SE Runtime Environment
v For Linux computers: a Korn shell interpreter, pdksh-5.2.14
v For RedHat Enterprise Linux 4.0 computers: libXp.so.6 (available in xorg-x11-deprecated-libs)
v Mozilla Firefox
v Database: A supported RDBMS is required for the Tivoli Enterprise Portal Server and the Tivoli Data Warehouse. Supported database platforms for the portal server are listed in Table 16 on page 107. Supported database platforms for the Tivoli Data Warehouse are listed in Table 17 on page 108. Each database requires a driver. For detailed information, see Chapter 15, Tivoli Data Warehouse solutions, on page 331 and subsequent chapters about the Tivoli Data Warehouse.
v IBM Tivoli Enterprise Console
v For TCP/IP communication on Windows: Windows 2000 Professional or Server with Service Pack 3 or above, Microsoft Winsock v1.1 or later, and the Microsoft TCP/IP protocol stack
v For SNA communication on Windows: Windows 2000 Professional or Server with Service Pack 3 or above, and either Microsoft SNA Server V3.0 or later or IBM Communications Server V5.0 or 5.2 (IBM Communications Server V5.0 requires fixes JR10466 and JR10368)
Notes:
1. If the JRE is not installed on the computer on which the browser is launched, you are prompted to download and
install it. The Windows user account must have local administrator authority to download and install the JRE.
2. The Korn shell (any version) is also required when installing the monitoring agent on AIX, HP-UX, or Solaris
systems.
Upgrade scenarios
Use one of the following scenarios as a guide when you upgrade from a previous installation:
Upgrading from Tivoli Distributed Monitoring: The new IBM Tivoli Monitoring is not dependent on the
Tivoli Management Framework. An upgrade toolkit is provided to facilitate your move from a Tivoli
Distributed Monitoring environment to the new IBM Tivoli Monitoring. For information, see IBM Tivoli
Monitoring: Upgrading from Tivoli Distributed Monitoring, document number GC32-9462.
Upgrading from IBM Tivoli Monitoring V5.1.2: An upgrade toolkit is provided to facilitate your move from
IBM Tivoli Monitoring V5.1.2 to IBM Tivoli Monitoring V6.2.
The IBM Tivoli Monitoring version 5 to version 6 Migration Toolkit provides a starting point for migrating
your IBM Tivoli Monitoring version 5 environment to IBM Tivoli Monitoring version 6. This tool assesses a
Tivoli Monitoring V5 environment and produces an inventory of your site's current monitoring architecture
and its monitors. In addition, a draft Tivoli Monitoring V6 configuration is produced for your review and
modification. While the tool migrates what it can from IBM Tivoli Monitoring V5 to IBM Tivoli Monitoring V6,
optimizations and improvements might be available that would greatly improve your new Tivoli Monitoring
V6 performance and scale, but you must perform these manually.
For information about using the toolkit and some best-practices guidelines, see IBM Tivoli Monitoring:
Upgrading from V5.1.2.
IBM Tivoli Monitoring V5.x interoperability: Using the IBM Tivoli Monitoring 5.x Endpoint agent, you can
view data from IBM Tivoli Monitoring 5.x resource models in the Tivoli Enterprise Portal and warehouse
granular data in the Tivoli Data Warehouse. You can use this visualization to replace the Web Health
Console used in IBM Tivoli Monitoring V5.1. For information about using this visualization, see the
Monitoring Agent for IBM Tivoli Monitoring 5.x Endpoint User's Guide (document number SC32-9490).
Upgrading from IBM Tivoli Monitoring V6.1: To upgrade from a previous installation of IBM Tivoli
Monitoring V6.1, use the planning, upgrading, and post-installation configuration instructions provided in
this chapter (see Planning your upgrade on page 116 and Upgrading from IBM Tivoli Monitoring V6.1 or
V6.2 on page 127).
Upgrading from OMEGAMON Platform V350 and V360: Migration tools are provided to facilitate the
upgrade process for your site's custom situations, policies, and queries to the formats used by IBM Tivoli
Monitoring V6.1 and V6.2.
Many of the existing OMEGAMON Platform V350 and V360 agents have equivalent IBM Tivoli Monitoring
agents. For any that do not yet have an IBM Tivoli Monitoring counterpart, you can continue to monitor
those agents in your new IBM Tivoli Monitoring environment. Use the planning and upgrade instructions in
this chapter (see Planning your upgrade and Upgrading from OMEGAMON Platform V350 and V360 on
page 131).
v z/OS V1.4
v z/OS V1.5
v RedHat Enterprise Linux V2.1 x86 Summarization and Pruning agent
v RedHat Enterprise Linux V3 x86 Warehouse Proxy agent and Summarization and Pruning agent
v RedHat Enterprise Linux V3 zSeries Warehouse Proxy agent and Summarization and Pruning agent
v SuSE Linux Enterprise V8 x86 Tivoli Enterprise Monitoring Server, Warehouse Proxy agent and
Summarization and Pruning agent
v SuSE Linux Enterprise V8 zSeries Warehouse Proxy agent and Summarization and Pruning agent
For a complete description of the supported platforms for IBM Tivoli Monitoring V6.2, see Hardware and
software requirements on page 96.
Upgrading and Migrating DB2 Database for Linux, UNIX, and Windows
IBM Tivoli Monitoring V6.2.x includes a limited-use version of IBM DB2 Workgroup Server Edition V9.5 for
use with the Tivoli Enterprise Portal Server and the Tivoli Data Warehouse. IBM Tivoli Monitoring V6.2 and
its fix packs included a limited-use version of IBM DB2 Enterprise Server Edition V9.1.
As long as you are using a supported version of DB2 Database for Linux, UNIX, and Windows, upgrading
and migrating to IBM DB2 Workgroup Server Edition V9.5 is not required. See Supported databases for
Tivoli Enterprise Portal Server and Tivoli Data Warehouse on page 107 for supported database versions.
If you elect to upgrade and migrate from an earlier version of DB2 for the workstation, you may receive a
warning during instance migration that the new edition of DB2 is different from the edition prior to upgrade
(Workgroup Edition versus Enterprise Server Edition). As a result, certain DB2 GUI utilities such as db2cc
may fail to start after migration, possibly due to the DB2 jdk_path parameter being restored to its default
value during the migration. To resolve this, jdk_path may need to be reset: as the db2inst1 user, run the
following DB2 command:
db2 update dbm cfg using jdk_path /home/db2inst1/sqllib/java/jdk32
See the DB2 Version 9.5 for Linux, UNIX, and Windows Migration Guide for more information on the
jdk_path parameter.
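To verify the setting afterward on a Linux or UNIX system, you can display the database manager configuration as the instance owner (a minimal check; the grep filter is illustrative):
db2 get dbm cfg | grep -i jdk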
Components to upgrade
You can determine what components to upgrade by using either of two methods. Use the Enterprise level
of the Managed Systems Status list to display the version of any connected agent, as well as its status
(online/offline). Identifying an agent's status is especially important as you plan remote deployments. An
alternative method, available at the Enterprise level of the Navigator, is to use the Tivoli Management
Services Infrastructure view. This topology view visually expresses the relationships and linking of
monitoring agents and other components to the hub monitoring server. Use hover-help (help flyovers) in
the Tivoli Enterprise Portal topology view to determine the current version of your installed monitoring
agents and other components.
1. Access the Tivoli Enterprise Portal through the desktop client or the browser client:
v Desktop client:
a. In the Windows Start menu, select Programs > IBM Tivoli Monitoring > Tivoli Enterprise
Portal. The login message is displayed.
b. Type the name (sysadmin) and password of the account that you created during installation.
c. Click OK.
v Browser client:
a. Start the browser.
b. Type the following URL for the Tivoli Enterprise Portal into the Address field of the browser:
http://systemname:1920///cnp/client
where the systemname is the host name of the computer where the Tivoli Enterprise Portal Server
is installed, and 1920 is the port number for the browser client. 1920 is the default port number
for the browser client. Your portal server might have a different port number assigned.
c. Click Yes in response to any security prompts.
d. In the login prompt, type the user name and password of the sysadmin account that you
created during installation.
e. Click OK.
f. Click Yes to accept Java Security certificates for this browser session.
2. Click the Navigator, the view that is located by default in the upper-left corner of the portal. The first
display that you see in the Navigator view is the Enterprise Status workspace.
3. Select the Enterprise node at the top of the navigation tree, if it is not already selected.
4. Right-click the node and select Workspace > Self-Monitoring Topology to display the default
topology view.
5. Click on the TMS Infrastructure - Base view. This view contains the topology display.
6. Click on the Maximize icon in the upper-right corner of the portal. The topology view occupies the
entire display area of the workspace.
Note: Agents created with the Agent Builder require that the entire path from the agent, through the
remote monitoring server it connects to, to the hub monitoring server, to the portal server, and the
data warehouse be upgraded to V6.2. Agents released after IBM Tivoli Monitoring V6.2 may also
require a complete V6.2 path. Check your agent documentation.
4. Use the tar command to compress the contents of ITM_Install_dir (the directory where IBM Tivoli
Monitoring is installed), using a command that is similar to the following:
tar -cvf /tmp/ITM_Install_dir.backup.tar ITM_Install_dir
5. Add the following files to the tar file created in step 4 above:
v On AIX:
/etc/rc.itm*
tar -uvf /tmp/ITM_Install_dir.backup.tar /etc/rc.itm*
v On HP-UX:
/sbin/init.d/ITMAgents*
tar -uvf /tmp/ITM_Install_dir.backup.tar /sbin/init.d/ITMAgents*
6. Use the appropriate database commands to back up the Tivoli Data Warehouse databases.
For more information, see Backing up your portal server and Tivoli Data Warehouse databases on
page 122.
You are now ready to proceed with the upgrade.
Always contact IBM Software Support before attempting to use the files generated in this procedure to
restore the IBM Tivoli Monitoring environment. The support staff can help ensure success of the
restoration. Otherwise, errors made during the modification of the Windows registry could lead to a
corrupted operating system.
db2 connect to yourtepsdatabase
db2 quiesce database immediate force connections
db2 connect reset
db2 backup database yourtepsdatabase to /yourbackuplocation
db2 connect to yourtepsdatabase
db2 unquiesce database
If an existing connection prevents you from backing up the database, use the following commands.
1. db2 connect to yourwarehousedatabase
2. db2 quiesce database immediate force connections
3. db2 connect reset
4. db2 backup database yourwarehousedatabase to /yourbackuplocation
5. db2 connect to yourwarehousedatabase
6. db2 unquiesce database
7. db2 connect reset
Sites using the Summarization and Pruning agent with DB2 for the workstation
Your migrated Tivoli Data Warehouse database requires subsequent checking to ensure it is set up
so the Summarization and Pruning agent can perform multiple system batching. Complete these
steps:
1. Back up the database, if necessary.
2. Edit the KSY_DB2_WAREHOUSEMARKER.sql script. If using archive logging, modify it as
suggested in the script to avoid the need for extra backup at the end.
3. Execute the script:
db2 -tvf KSY_DB2_WAREHOUSEMARKER.sql
The script detects whether the table is correctly migrated. If it is not, single managed system batching is
enforced, regardless of the setting, to prevent database deadlocks.
This procedure affects only the WAREHOUSEMARKER table. If you do not complete it, the
Summarization and Pruning agent will do single system batching only.
Some of the monitoring agents have made changes to the warehouse tables. There are three types of
changes; two of them require you to perform upgrade procedures before running the 6.2
Warehouse Proxy and Summarization and Pruning agents. The procedure for the other change can be
performed before or after you upgrade IBM Tivoli Monitoring. These procedures are accomplished using
product-provided scripts.
Case 1 changes affect the table structure, and Case 3 changes affect the table location. Both Case 1 and
Case 3 changes must be completed before running the 6.2 Warehouse Proxy and Summarization and
Pruning agents. These changes and procedures are documented comprehensively in the following user's
guide appendix for each monitoring agent that is affected: "Upgrade: upgrading for warehouse
summarization."
v Case 1 changes add a new column to a raw table, and assign a default value to the column. If the 6.2
Warehouse Proxy agent is started before these scripts are run, a NULL default value is assigned, and
that default value cannot be changed. The following monitoring agents have Case 1 changes:
Monitoring for Databases: DB2 for the workstation Agent
Monitoring for Applications: mySAP Agent
Monitoring: UNIX OS Agent
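The following is purely an illustrative sketch of what a Case 1 change amounts to in DB2; the table and column names are hypothetical, and the actual changes are made only by the product-provided scripts:
db2 "ALTER TABLE ITMUSER.Example_Raw_Table ADD COLUMN New_Attribute INTEGER NOT NULL WITH DEFAULT 0"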
Also, agents built with the V6.2.1.x Agent Builder (or previous) cannot be installed into an
environment where a System Monitor Agent is also running.
v If your site uses IP.SPIPE for Tivoli Enterprise Monitoring Server communication:
A V6.2.2 remote monitoring server not running in FIPS mode can connect to a V6.1 hub.
Connectivity will default to the Triple DES standard. To enable AES encryption on the hub, customers
may set parameter GSK_V3_CIPHER_SPECS="352F0A" on the V6.1 monitoring system.
A V6.2 or V6.2.1 remote monitoring server can connect to a V6.2.2 hub not running in FIPS mode.
Connectivity will default to the Triple DES standard. To enable AES encryption, customers may set
parameter GSK_V3_CIPHER_SPECS="352F0A" on the remote monitoring systems.
A pre-V6.2.2 agent can select IP.SPIPE to connect to a V6.2.2 monitoring server defined with
FIPS=N.
A pre-V6.2.2 agent cannot select IP.SPIPE to connect to a V6.2.2 monitoring server defined with
FIPS=Y.
A V6.2.2 agent defined with FIPS=N can select IP.SPIPE to connect to a V6.2.1 monitoring server.
A V6.2.2 agent defined with FIPS=Y cannot select IP.SPIPE to connect to a V6.2.1 monitoring
server.
A V6.2.2 agent defined with either FIPS=N or FIPS=Y can select IP.SPIPE to connect to a V6.2.2
monitoring server defined with either FIPS=N or FIPS=Y.
v A V6.2, V6.2.1, or V6.2.2 remote monitoring server can connect to a V6.2, V6.2.1, or V6.2.2 hub
monitoring server.
v If you are running a hub monitoring server at release 6.2 or earlier, the CLI commands for situation
groups, situation long names, and adaptive monitoring cannot be used.
v Agents that use dynamic affinity cannot connect or fail over to a pre-V6.2.1 remote monitoring server.
v V6.2.2 interoperability with OMEGAMON Platform V350/360 remains the same as for IBM Tivoli
Monitoring V6.1; see Upgrading from OMEGAMON Platform V350 and V360 on page 131.
Agents
IBM Tivoli Monitoring V6.2.2 agents require a V6.2.2 hub Tivoli Enterprise Monitoring Server and Tivoli
Enterprise Portal Server. In other words, the V6.2.1 (and earlier) monitoring server and portal server do not
support V6.2.2 agents.
Back-level agents can connect through up-level monitoring servers. In particular, V6.2.1 agents will work
with both the V6.2.1 and V6.2.2 Tivoli Enterprise Monitoring Server and Tivoli Enterprise Portal Server.
Agents that use dynamic affinity cannot connect or fail over to a pre-V6.2.1 remote monitoring server.
An agent upgrade elevates the user credentials of agents that were set at deployment time to run with
reduced (nonroot) credentials. Use one of the following methods to stop the agent and restart it with the
appropriate user credentials (a short example follows this list):
v After stopping the agent, log in (or invoke su) as the desired user, and then run the itmcmd command to
start the agent in that user's context.
v Edit the startup file, and add the following line to change the default user context for starting agents:
/usr/bin/su - dbinstancename -c "itmhome/bin/itmcmd agent -h itmhome -o dbinstancename start product_code"
where:
product_code
is the two-character product code for the agent (for example, ud for DB2 for the workstation).
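For example, to stop the DB2 agent and restart it under the instance owner's user context, the sequence looks similar to the following; the installation directory /opt/IBM/ITM and the instance name db2inst1 are examples only:
cd /opt/IBM/ITM/bin
./itmcmd agent -o db2inst1 stop ud
su - db2inst1 -c "/opt/IBM/ITM/bin/itmcmd agent -o db2inst1 start ud"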
Special interoperability note for IBM Tivoli Monitoring V6.2.1 and V6.2.2:
Java Runtime Environment changes were introduced in the V6.2.1 agents but only to update the service
level. No impact is expected.
Table 21. Upgrading from IBM Tivoli Monitoring V6.1 or V6.2 to IBM Tivoli Monitoring V6.2.2 (continued): tasks to perform after upgrading your monitoring environment
Notes:
1. The installation instructions sometimes reference the default path for the installation of IBM Tivoli Monitoring. If
you did not install the product in the default path, you must specify the correct path whenever the upgrade
procedures require a path specification.
2. After you upgrade a UNIX monitoring server, you must run the itmcmd config -S command to configure the
upgraded monitoring server. During this upgrade you are asked if you want to update application support files as
well.
If a silent upgrade is performed, all installed support files are updated by default.
3. You can use the tacmd updateAgent command to install an agent update on a specified managed system. For
reference information about this command and related commands, see the IBM Tivoli Monitoring: Command
Reference.
4. You must recycle (stop and restart) the portal server to use upgraded workspaces for your monitoring agents.
Step 1: Verify the DB2 Database for Linux, UNIX, and Windows authorizations
In the following example the portal server database is called TEPS, the DB2 for the workstation userid is
itmuser, and the DB2 administrative userid is db2inst1. Both Linux and AIX sites must complete this step.
1. Log in as the DB2 administrator:
su - db2inst1
2. Connect to the portal server database using the portal server's DB2 userid:
db2 connect to TEPS user itmuser using TEPSpw
3. Check the authorizations granted to this userid:
db2 get authorizations
Example output:
Administrative Authorizations for Current User

Direct SYSADM authority                     = NO
Direct SYSCTRL authority                    = NO
Direct SYSMAINT authority                   = NO
Direct DBADM authority                      = NO
Direct CREATETAB authority                  = NO
Direct BINDADD authority                    = NO
Direct CONNECT authority                    = NO
Direct CREATE_NOT_FENC authority            = NO
Direct IMPLICIT_SCHEMA authority            = NO
Direct LOAD authority                       = NO
Direct QUIESCE_CONNECT authority            = NO
Direct CREATE_EXTERNAL_ROUTINE authority    = NO
Direct SYSMON authority                     = NO
Indirect SYSADM authority                   = NO
Indirect SYSCTRL authority                  = NO
Indirect SYSMAINT authority                 = NO
Indirect DBADM authority                    = NO
Indirect CREATETAB authority                = YES
Indirect BINDADD authority                  = YES
Indirect CONNECT authority                  = YES
Indirect CREATE_NOT_FENC authority          = NO
Indirect IMPLICIT_SCHEMA authority          = YES
Indirect LOAD authority                     = NO
Indirect QUIESCE_CONNECT authority          = NO
Indirect CREATE_EXTERNAL_ROUTINE authority  = NO
Indirect SYSMON authority                   = NO
4. Check whether the following authorizations are set to YES:
Direct DBADM authority                      = YES
Direct CREATETAB authority                  = YES
Direct BINDADD authority                    = YES
Direct CONNECT authority                    = YES
Direct CREATE_NOT_FENC authority            = YES
Direct IMPLICIT_SCHEMA authority            = YES
Direct LOAD authority                       = YES
Direct QUIESCE_CONNECT authority            = YES
Direct CREATE_EXTERNAL_ROUTINE authority    = YES
If they are all set to YES, you are finished. Otherwise continue with steps 5 through 7 below.
5. Connect to the Tivoli Enterprise Portal Server database as the DB2 administrator:
db2 connect to TEPS user db2inst1 using db2pw
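The grant itself (step 6 in this procedure's numbering) is not shown above; a minimal sketch, using the example userid itmuser, is:
db2 grant dbadm on database to user itmuser
In DB2 for the workstation, granting DBADM on the database also confers the other database authorities checked in step 7, which is why they appear as Direct = YES after the grant.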
7. Reconnect to the portal server database using the portal server's DB2 userid, and recheck its
authorizations:
db2 connect to TEPS user itmuser using TEPSpw

   Database Connection Information

   Database server        = DB2/LINUX 8.2.8
   SQL authorization ID   = ITMUSER
   Local database alias   = TEPS

db2 get authorizations
Administrative Authorizations for Current User

Direct SYSADM authority                     = NO
Direct SYSCTRL authority                    = NO
Direct SYSMAINT authority                   = NO
Direct DBADM authority                      = YES
Direct CREATETAB authority                  = YES
Direct BINDADD authority                    = YES
Direct CONNECT authority                    = YES
Direct CREATE_NOT_FENC authority            = YES
Direct IMPLICIT_SCHEMA authority            = YES
Direct LOAD authority                       = YES
Direct QUIESCE_CONNECT authority            = YES
Direct CREATE_EXTERNAL_ROUTINE authority    = YES
Direct SYSMON authority                     = NO
Indirect SYSADM authority                   = NO
Indirect SYSCTRL authority                  = NO
Indirect SYSMAINT authority                 = NO
Indirect DBADM authority                    = NO
Indirect CREATETAB authority                = YES
Indirect BINDADD authority                  = YES
Indirect CONNECT authority                  = YES
Indirect CREATE_NOT_FENC authority          = NO
Indirect IMPLICIT_SCHEMA authority          = YES
Indirect LOAD authority                     = NO
Indirect QUIESCE_CONNECT authority          = NO
Indirect CREATE_EXTERNAL_ROUTINE authority  = NO
Indirect SYSMON authority                   = NO
You should now see the authorizations for the following items set to YES:
Direct DBADM authority                      = YES
Direct CREATETAB authority                  = YES
Direct BINDADD authority                    = YES
Direct CONNECT authority                    = YES
Direct CREATE_NOT_FENC authority            = YES
Direct IMPLICIT_SCHEMA authority            = YES
Direct LOAD authority                       = YES
Direct QUIESCE_CONNECT authority            = YES
Direct CREATE_EXTERNAL_ROUTINE authority    = YES
v Run the installation program for IBM Tivoli Monitoring V6.1 (fix pack 5) on all components that you
want to upgrade, as described in Chapter 8, Installing IBM Tivoli Monitoring, on page 147. Use your
existing installation directory as your IBM Tivoli Monitoring directory.
v Run the installation program for IBM Tivoli Monitoring V6.2.1 (or subsequent) on all components that
you want to upgrade, as described in Chapter 8, Installing IBM Tivoli Monitoring, on page 147. Use your
existing installation directory as your IBM Tivoli Monitoring directory.
Notes:
1. After you upgrade a UNIX monitoring server, you must run the itmcmd config -S command to configure the
upgraded monitoring server and the itmcmd support command to update application support files.
2. You must recycle (stop and restart) the portal server to use upgraded workspaces for your monitoring agents.
3. You cannot use the agent deployment function in IBM Tivoli Monitoring to upgrade OMEGAMON agents.
Considerations
Consider the following issues when upgrading from OMEGAMON Platform 350 or 360 and CandleNet
Portal 195 to IBM Tivoli Monitoring 6.2.x.
Terminology changes
The following terms have changed with the move from Candle OMEGAMON to IBM Tivoli Monitoring:
Table 23. OMEGAMON to IBM Tivoli Monitoring terminology
The table maps OMEGAMON terms to their IBM Tivoli Monitoring equivalents. The OMEGAMON terms
include OMEGAMON Platform, Event (now called a situation event), and Seeding (now called adding
application support).
If you use the OMEGAMON XE for CICS 3.1.0 product, you must continue to use Candle Management
Workstation to configure workloads for Service Level Analysis. After you have configured the workloads,
you can use the Tivoli Enterprise Portal for all other tasks. If you do not currently have Candle
Management Workstation (for example, if you are installing OMEGAMON XE for CICS 3.1.0 into an IBM
Tivoli Monitoring environment), you must install the Candle Management Workstation that is included with
the OMEGAMON XE for CICS 3.1.0 product. Install Candle Management Workstation on a different
computer than the Tivoli Enterprise Portal; otherwise the Candle Management Workstation installer
attempts to uninstall the Tivoli Enterprise Portal.
Note: As of OMEGAMON XE for CICS 4.1.0, the Candle Management Workstation is no longer required.
A new view, CICS SLA, is provided with Tivoli Enterprise Portal V6.2 (and subsequent) that
replaces the Service Level Analysis feature of the old Candle Management Workstation.
Additional OMEGAMON terms listed in the terminology table include CandleClone, CandleRemote, event
emitters, and event adapters.
v OMEGAMON XE agents can communicate over SNA; the IP.SPIPE protocol is not supported for
OMEGAMON XE agents.
v You cannot install an OMEGAMON agent on the same computer as a Tivoli Enterprise Portal Server. If
you want to monitor something on the same computer as the portal server, install an IBM Tivoli
Monitoring agent, if one is available.
v IBM Tivoli OMEGAMON XE for Messaging V6.0 requires that the hub Tivoli Enterprise Monitoring
Server, the Tivoli Enterprise Portal Server, and the Tivoli Enterprise Portal client be at the IBM Tivoli
Monitoring V6.2 level.
After you upgrade the infrastructure components to IBM Tivoli Monitoring V6.2, install application
support files for this monitoring agent on the monitoring server, the portal server, and the portal desktop
client. To install and enable application support, follow the instructions in the IBM Tivoli OMEGAMON
XE for Messaging Version 6.0 Installation Guide.
v To enable the Tivoli Data Warehouse to collect data from OMEGAMON Platform 350 and 360 agents
(through the Warehouse Proxy), copy the product attribute file to the itm_installdir/TMAITM6/ATTRLIB
directory on the Warehouse Proxy agent (where itm_installdir is the IBM Tivoli Monitoring installation
directory).
For full interoperability between OMEGAMON Platform 350 and 360 and AF/Operator agents and IBM
Tivoli Monitoring, you need to install the application support files for these agents on the monitoring server,
portal server, and portal desktop client. See the "Agent interoperability" document in the IBM Tivoli
Monitoring information center for information about downloading and installing these support files.
Configuration
Your existing Tivoli Monitoring environment includes these five distributed servers:
v The primary (acting) hub Tivoli Enterprise Monitoring Server runs on server #1, an AIX 5.3, 64-bit node.
v The secondary (backup) hub monitoring server runs on server #2, a Solaris 10, 32-bit node.
v A remote Tivoli Enterprise Monitoring Server runs on server #3, an RHEL Linux 2.6, 32-bit node.
v A second remote monitoring server runs on server #4, a Windows 2000 node.
v The Tivoli Enterprise Portal Server running on server #5, another RHEL Linux 2.6, 32-bit node,
communicates with the acting (primary) hub monitoring server running on server #1. Server #5 also runs
the Warehouse Proxy agent and the Summarization and Pruning agent.
Each system runs an OS agent and the Tivoli Universal Agent. All 14 nodes in this example configuration
are connected to one remote monitoring server as their primary and to the other as their secondary, to
accommodate agent switching when upgrading the remote Tivoli Enterprise Monitoring Servers. Half of
the agents point to one remote monitoring server as their primary monitoring server, and the other half
point to the other.
1. Begin historical collection of agent attributes. Assume the historical collection is being done at the
agents and not at the Tivoli Enterprise Monitoring Server; otherwise data collection may be
interrupted when agents switch from the acting monitoring server to the backup.
2. Cause situations to fire, for example, by varying thresholds. For nodes running the Tivoli Universal
Agent, include always-true situations.
3. Ensure the acting (primary) and backup (secondary) monitoring servers are connected by examining
their log/trace records and process state for these messages:
v KQM0001 "FPO started at ..."
v KQM0003 "FTO connected to ..."
v KQM0009 "FTO promoted primary as the acting HUB"
v KQM0009 "FTO promoted SITMON*secondary(Mirror) as the acting HUB"
4. If you have integrated your IBM Tivoli Monitoring events with either Tivoli Enterprise Console or
Netcool/OMNIbus (see page 461):
a. First upgrade your event synchronization.
b. Then restart Tivoli Enterprise Console or OMNIbus, as appropriate.
c. Finally, restart the monitoring server to which Tivoli Enterprise Console or OMNIbus is connected.
This must be the acting monitoring server.
5. Confirm that both the acting and backup hub monitoring servers are operational, so that during this
upgrade process, when the acting hub is shut down, the backup will be able to assume the role of
acting Tivoli Enterprise Monitoring Server.
6. Upgrade the primary hub Tivoli Enterprise Monitoring Server and all other Tivoli Management
Services infrastructure components in that $ITM_HOME installation (in other words, select ALL when
asked what to upgrade).
v If working on a Windows platform, apply support to upgraded agents as part of this upgrade step.
v If working on a UNIX or Linux platform, apply support for the upgraded infrastructure agents on the
primary hub monitoring server:
./itmcmd support -t primary_TEMS_name lz nt ul um ux hd sy
Note: All Tivoli Management Services processes running within the same IBM Tivoli Monitoring
environment are temporarily shut down when upgrading a monitoring server.
7. Upgrade the secondary hub Tivoli Enterprise Monitoring Server and all other Tivoli Management
Services infrastructure components in that $ITM_HOME installation (in other words, select ALL when
asked what to upgrade).
v If working on a Windows platform, apply support to upgraded agents as part of this upgrade step.
v If working on a UNIX or Linux platform, apply support for the upgraded infrastructure agents on the
secondary hub monitoring server:
./itmcmd support -t secondary_TEMS_name lz nt ul um ux hd sy
8. Restart both the primary (backup) and secondary (acting) Tivoli Enterprise Monitoring Servers.
9. Upgrade the Tivoli Enterprise Portal Server.
10. Confirm that all remote Tivoli Enterprise Monitoring Servers are operational, so that during this
upgrade process, when the acting hub monitoring server is shut down, the backup will be able to
assume the role of acting Tivoli Enterprise Monitoring Server.
11. Upgrade each remote monitoring server, one at a time.
v If working on a Windows platform, apply support to upgraded agents when upgrading the
monitoring server.
v If working on a UNIX or Linux platform, immediately after a remote monitoring server is upgraded,
apply support for the upgraded infrastructure agents on the same node. Then restart the remote
monitoring server to make the upgraded support effective.
12. Upgrade the remaining infrastructure agents.
13. At a time when no history records will be uploaded to it, upgrade the Warehouse Proxy agent.
Expected results
1. When the primary hub monitoring server is down while completing step 6 on page 136:
a. Configure the portal server to point to the newly acting hub monitoring server, and verify that
expected agents and situations show up on the Tivoli Enterprise Portal client.
b. Verify that the secondary monitoring server has taken over as acting hub, in other words, that
these messages appear in its log:
v KQM0004 "FTO detected lost parent connection"
v KQM0009 "FTO promoted secondary as the acting HUB"
c. Verify that the event handler your site is using (either Tivoli Enterprise Console or
Netcool/OMNIbus) is still being updated, now by the secondary hub, by causing an event state to
change and validating the state of all events on the remote console.
d. Verify that attribute collection was uninterrupted by examining the history data, by varying attribute
values at the agent, and by observing the change in the portal client.
e. Ensure that the primary hub Tivoli Enterprise Monitoring Server is not restarted by the upgrade
process until all remote monitoring servers and directly connected agents have successfully failed
over to the secondary hub.
2. After completing step 6 on page 136, verify that the primary hub Tivoli Enterprise Monitoring Server
has taken on the role of standby monitoring server: ensure its log contains these messages:
v KQM0001 "FPO started at ..."
v KQM0003 "FTO connected to ..."
v KQM0009 "FTO promoted secondary as the acting HUB"
v KQM0009 "FTO promoted primary(Mirror) as the acting HUB"
The log for the secondary hub monitoring server should contain these messages:
v KQM0005 "FTO has recovered parent connection ..."
v KQM0009 "FTO promoted secondary as the acting HUB"
Use the sitpad and taudit tools for assistance. These tools can be downloaded from the IBM Tivoli
Open Process Automation Library (OPAL) Web site at http://www.ibm.com/software/tivoli/opal.
3. When the secondary hub Tivoli Enterprise Monitoring Server is stopped while completing step 7 on
page 136:
a. Configure the Tivoli Enterprise Portal Server to point to the acting hub, the primary.
b. Verify that the primary monitoring server has taken over as the acting hub; its log should contain
these messages:
v KQM0004 "FTO detected lost parent connection"
v KQM0009 "FTO promoted primary as the acting HUB"
c. Verify that either Tivoli Enterprise Console or Netcool/OMNIbus is still being updated with event
information.
d. Verify that attribute history is still being collected as in substep 1d.
e. Ensure that the secondary hub Tivoli Enterprise Monitoring Server is not restarted by the upgrade
process until all remote monitoring servers and directly connected agents have successfully failed
over to the primary hub.
4. After completing step 7 on page 136, verify that the secondary hub Tivoli Enterprise Monitoring Server
is functional by examining the log/trace records of both hub monitoring servers for these messages.
In the primary hub's log:
v KQM0005 "FTO has recovered parent connection ..."
v KQM0009 "FTO promoted primary as the acting HUB"
In the secondary hub's log, verify that the corresponding FTO startup and promotion messages
(KQM0001, KQM0003, and KQM0009) appear.
database is created on the same relational database management system (RDBMS) used for the portal
server database. In larger environments, it is best to create the warehouse database on a different
computer from the portal server.
Installation procedure
Complete the following steps to install IBM Tivoli Monitoring on one Windows computer:
1. Launch the installation wizard by double-clicking the setup.exe file in the WINDOWS subdirectory of
the installation media.
2. Click Next on the Welcome window.
3. Click Accept to accept the license agreement.
4. Choose the directory where you want to install the product. The default directory is C:\IBM\ITM. Click
Next.
Note: If you specify an incorrect directory name, you will receive the following error:
The IBM Tivoli Monitoring installation directory cannot exceed 80 characters
or contain non-ASCII, special or double-byte characters.
The directory name can contain only these characters:
"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ _\:0123456789()~-./".
5. Click Next to accept the default encryption key and then click OK on the popup window to confirm
the encryption key.
6. On the Select Features window, select the check boxes for the components you want to install. Select
all components for a complete installation on one computer.
For additional information about these components, press the Help button.
Note: The agent support comprises additional support files for the Tivoli Enterprise Monitoring
Server, Tivoli Enterprise Portal Server, and the Tivoli Enterprise Portal desktop client; these are
automatically installed whenever you install the agent support. Also, the Eclipse Help Server
selection is no longer provided; the Eclipse Help Server is automatically installed whenever
you install a Tivoli Enterprise Portal Server.
7. Click Next.
The Agent Deployment window is displayed. This window lists monitoring agents that you can deploy
to remote computers. For this installation, do not select any agents for remote deployment.
8. Click Next.
9. If the TEPS Desktop and Browser Signon ID and Password window is displayed, enter and confirm
the password to be used for logging on to the Tivoli Enterprise Portal. The default logon user ID,
sysadmin, cannot be changed on this window.
This password is required only when Security: Validate User has been enabled on the hub monitoring
server. This window is not displayed if the sysadmin user ID has already been defined in the
operating system.
10. Review the installation summary details. The summary identifies the components you are installing.
Click Next to begin the installation. The status bar indicates the progress of your installation.
After the components are installed, a configuration window is displayed.
11. Click Next to start configuring all selected components. (Leave all check boxes selected for this
installation.)
12. Configure communications for the Tivoli Enterprise Portal Server:
a. Click Next to confirm that you are installing the portal server on this computer. (The host name of
this computer is displayed by default.)
b. If no relational database manager can be found on this computer, the embedded Derby RDBMS is
used by default. However, if at least one RDBMS product is installed on this computer, a window
is displayed for you to choose the RDBMS product you want to use. Choose either Embedded
TEPS database (that is, Derby), IBM DB2 Universal Database Server, or Microsoft SQL
Server, and click Next.
c. A window is displayed for you to configure the connection between the portal server and the portal
server database (TEPS database).
The installation program uses the information on this window to automatically perform the
following tasks:
v Create the portal server database.
v Create a database user for the portal server to use to access the database.
v Configure the ODBC connection between the portal server and the database.
Figure 13. Configuration window for the portal server database using DB2 for the workstation
Figure 13 shows the configuration window for a portal server database using DB2 for the
workstation. The configuration window for a Derby or Microsoft SQL Server database is similar.
The fields on the configuration window are described in the following table:
Table 25. Configuration information for the portal server database
Field               DB2 default      MS SQL default
Admin User ID       db2admin         sa
Admin Password      (no default)     (no default)
Database User ID    TEPS             TEPS
Database Password   (no default)     (no default)
Reenter password    (no default)     (no default)
d. Optionally change the default values for the administrator ID and database user ID. Enter
passwords for the administrator and database user. Click OK.
e. Click OK on the message that tells you that the portal server configuration was successful.
f. Click Next to accept the default Tivoli Data Warehouse user ID and password. (You can change
these values in a later step.)
g. In the TEP Server Configuration window, click OK to accept the default communications protocol,
IP.PIPE. IP.PIPE is the protocol that the portal server will use to communicate with the Tivoli
Enterprise Monitoring Server.
A second TEP Server Configuration window is displayed. The IP.PIPE area of this window
displays the host name of this computer and the default port number, 1918.
h. Click OK to accept the default host name and port number.
i. Click Yes on the message asking if you want to reconfigure the warehouse connection information
for the portal server.
j. Select DB2 or SQL Server from the list of RDBMS platforms and click OK.
A window is displayed for you to configure the connection between the portal server and the Tivoli
Data Warehouse database. (The Tivoli Data Warehouse database is referred to in the title of this
window as the data source for the Warehouse Proxy. The Warehouse Proxy agent sends
information collected from monitoring agents to the Tivoli Data Warehouse.)
The installation program uses the information on this window to automatically perform the following
tasks:
v Create the Tivoli Data Warehouse database.
v Create a database user (called the warehouse user) for the portal server, Warehouse Proxy
agent, and Summarization and Pruning agent to use to access the warehouse database.
v Configure the ODBC connection between the portal server and the warehouse database.
Field               DB2 default      MS SQL default
Data Source Name    ITM Warehouse    ITM Warehouse
Database Name       WAREHOUS         WAREHOUS
Admin User ID       db2admin         sa
Admin Password      (no default)     (no default)
Database User ID    ITMUser          ITMUser
Database Password   itmpswd1         itmpswd1
Reenter password    itmpswd1         itmpswd1
k. Optionally change the default values on this window. Enter the password of the database
administrator. Click OK.
l. Click OK on the message that tells you that the portal server configuration was successful.
13. If your monitoring server is not local, you must configure communications for the hub Tivoli Enterprise
Monitoring Server:
a. Remove the check mark from Security: Validate User; then click OK to accept the other defaults
on the Tivoli Enterprise Monitoring Server Configuration window:
A red exclamation mark indicates that the component needs to be configured before it can be
started.
Post-installation procedures
Complete the following tasks after you finish the installation procedure:
v Start the Eclipse Help Server and the Tivoli Enterprise Portal Server, if not started. Right-click the
component in the Manage Tivoli Enterprise Monitoring Services window and select Start.
To log on to the Tivoli Enterprise Portal using the desktop client, double-click the Tivoli Enterprise Portal
icon on your Windows desktop. Use the sysadmin password that you specified in Step 9 on page 140.
(For more information about logging on to the Tivoli Enterprise Portal, see Starting the Tivoli Enterprise
Portal client on page 232.)
v Configure the Summarization and Pruning agent.
The installation procedure automatically starts and configures all the monitoring agents that you installed
except for the Summarization and Pruning agent. The Summarization and Pruning agent is not
configured and started during installation to give you an opportunity to configure history collection in
advance for all installed monitoring agents, a task that must be performed prior to starting the
Summarization and Pruning agent for the first time.
To configure and start the Summarization and Pruning agent, refer to the following procedures:
Configuring the Summarization and Pruning agent (JDBC connection) on page 435
Starting the Summarization and Pruning agent on page 444
v If you are using Microsoft SQL Server 2005 for the Tivoli Data Warehouse database, perform the
following additional steps:
Create a schema with the same name (and owner) as the database user ID for accessing the Tivoli
Data Warehouse. (The default user ID is ITMUser.) Change the default schema from dbo to this
database user ID; an example appears after this list.
You specified the database user ID on the Configure SQL Data Source for Warehouse Proxy window
during installation. See Step 12j of the installation procedure.
Ensure that the database is set up to support inbound network TCP/IP connections.
v Enable authentication of user credentials.
Enable user authentication through either the portal server (for LDAP authentication and single sign-on
capability) or the monitoring server (for LDAP or local registry authentication and for SOAP Server
commands). See the IBM Tivoli Monitoring: Administrator's Guide.
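For the Microsoft SQL Server 2005 schema change mentioned above, a minimal sketch using the sqlcmd utility follows; the server name, the database name WAREHOUS, and the user ID ITMUser are the defaults from this chapter and might differ at your site:
sqlcmd -S localhost -E -d WAREHOUS -Q "CREATE SCHEMA ITMUser AUTHORIZATION ITMUser"
sqlcmd -S localhost -E -d WAREHOUS -Q "ALTER USER ITMUser WITH DEFAULT_SCHEMA = ITMUser"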
v Install the language packs for all languages other than English. See Installing language packs on page 218.
v Configure the clients, browser, and Java runtime environment.
v Specify what browser to use to display the online help. See Specifying the browser used for online help
on page 229.
v Start the Tivoli Enterprise Portal to verify your installation. See Using Web Start to download and run
the desktop client on page 233.
Note: If you are running Windows 2003 or Windows XP and have security set to check the software
publisher of applications, you might receive an error stating that the setup.exe file is from an
unknown publisher. Click Run to disregard this error message.
2. Click Next on the Welcome window.
Note: If you have another IBM Tivoli Monitoring component already installed on this computer, select
Modify on the Welcome window to indicate that you are updating an existing installation. Click
OK on the message telling you about preselected items. Then skip to Step 6.
3. Click Accept to accept the license agreement.
4. Choose the directory where you want to install the product. The default directory is C:\IBM\ITM. Click
Next.
5. Type a 32-character encryption key. You can use the default key.
Notes:
a. Do not use any of the following characters in the encryption key: & | ' = $
Note: By default, the agent depot is located in the itm_installdir/CMS/depot directory on Windows.
If you want to use a different directory, create the directory (if it does not exist) and specify the
directory using the DEPOTHOME key in the KBBENV file.
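For example, to point the depot at a different directory, the KBBENV entry would look similar to the following (the path shown is only an example):
DEPOTHOME=C:\IBM\ITM\agentdepot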
Select the agents, if any, that you want to add to the agent depot. (You can add agents to the agent
depot at a later time by updating your installation.) Click Next.
8. If no IBM Tivoli Monitoring component has been previously installed on this computer, a window is
displayed for you to select a program folder for the Windows Start menu. Select a program folder,
and click Next. The default program folder name is IBM Tivoli Monitoring.
9. If the TEPS Desktop and Browser Signon ID and Password window is displayed, enter and confirm
the password to be used for logging on to the Tivoli Enterprise Portal. The default logon user ID,
sysadmin, cannot be changed on this window.
This password is required only when Security: Validate Users is enabled on the hub monitoring
server. This window is not displayed if the sysadmin user ID has already been defined in the
operating system.
10. Review the installation summary details. The summary identifies the components you are installing.
Click Next to begin the installation.
After the components are installed, a configuration window (called the Setup Type window) is
displayed.
11. Clear the check boxes for any components that have already been installed and configured (at the
current release level) on this computer, unless you want to modify the configuration. Click Next to
start configuring all selected components.
12. Configure the Tivoli Enterprise Monitoring Server:
a. Select the type of monitoring server you are configuring: Hub or Remote. For this procedure,
select Hub.
b. Verify that the name of this monitoring server is correct in the TEMS Name field. If it is not,
change it.
The default name is HUB_host_name, for example, HUB_itmserv16. This name must be unique in
the enterprise.
c. Identify the communications protocol for the monitoring server. You have four choices: IP.UDP,
IP.PIPE, IP.SPIPE, or SNA. You can specify up to three methods for communication. If the method
you identify as Protocol 1 fails, Protocol 2 is used as a backup. If Protocol 2 fails, Protocol 3 is
used as a backup.
Do not select any of the other options on this window (for example Address Translation, Tivoli
Event Integration Facility or the option to configure Hot Standby). You can configure these
options after installation is complete.
Important: Remove the check mark from the box for Security: Validate Users. Use the
procedures in the IBM Tivoli Monitoring: Administrator's Guide to configure security
after installation is complete. If you decide to leave security enabled, and you want to
use LDAP to authenticate users instead of the hub security system, use the
information in the IBM Tivoli Monitoring: Administrator's Guide to complete the
configuration.
d. Click OK.
e. Complete the following fields for the communications protocol for the monitoring server.
Table 28. Communications protocol settings for the hub monitoring server
IP.UDP settings: Hostname or IP Address
IP.PIPE settings: Hostname or IP Address, Port Number
IP.SPIPE settings: Hostname or IP Address, Port Number
SNA settings: Network Name, LU Name, LU 6.2 LOGMODE, TP Name
f. If you are certain that you have typed the values for all of these fields with exactly the correct
casing (upper and lower cases), you can select Use case as typed. However, because IBM Tivoli
Monitoring is case-sensitive, consider selecting Convert to upper case to reduce the chance of
user error.
g. Click OK to continue.
For additional information about the monitoring server's configuration parameters, press the Help
button.
13. If you selected Tivoli Event Integration Facility, provide the host name and port number for the
TEC event server or Netcool/OMNIbus EIF probe to which you want to forward events and click OK.
14. Enable application support on the monitoring server.
In Step 6 on page 149, you selected the base monitoring agents for which you wanted to install
application support files on the monitoring server. In this step, you activate the application support
through a process known as seeding the monitoring server.
Note: If you are running in a Hot Standby environment, shut down your Hot Standby (that is, mirror)
monitoring server before completing this procedure. You may restart the Hot Standby
monitoring server only after you have seeded the hub server.
a. Specify the location of the monitoring server to which to add application support. You have two
choices:
v On this computer
v On a different computer
Click OK.
For additional information about these parameters, press the Help button.
b. Click OK on the Select the application support to add to the TEMS window.
This window lists the monitoring agents that you selected in Step 6 on page 149. Click OK to
begin seeding the monitoring server (using the SQL files listed on this window).
This process can take up to 20 minutes. As the seeding process completes, a progress bar is
displayed, showing the progress of seeding, in turn, the application support for the agents you
selected.
Once seeding completes, if support could not be added, a window is displayed showing all
seeding results.
c. Click Next on the message that provides results for the process of adding application support
(see Figure 28 on page 204). A return code of 0 (rc: 0) indicates that the process succeeded.
Note: If the Application Support Addition Complete window is not displayed after 20 minutes, look
in the IBM\ITM\CNPS\Logs\seedkpc.log files (where pc is the two-character product code
for each monitoring agent) for diagnostic messages that help you determine the cause of
the problem. For a list of product codes, see Appendix D, IBM Tivoli product, platform, and
component codes, on page 567.
15. Configure the communication between any IBM Tivoli Monitoring component and the hub monitoring
server:
a. Specify the default values for IBM Tivoli Monitoring components to use when they communicate
with the monitoring server.
1) If agents must cross a firewall to access the monitoring server, select Connection must pass
through firewall.
2) Identify the type of protocol that the agents use to communicate with the hub monitoring
server. You have four choices: IP.UDP, IP.PIPE, IP.SPIPE, or SNA. You can specify up to three
methods for communication. If the method you identify as Protocol 1 fails, Protocol 2 is used
as a backup. If Protocol 2 fails, Protocol 3 is used as a backup.
Click OK.
b. Complete the communication protocol fields for the monitoring server. See Table 28 on page 150
for definitions of these fields. Click OK.
For additional information about these parameters, press the Help button.
16. Click Finish to complete the installation.
17. Click Finish on the Maintenance Complete window if you are updating an existing installation.
The Manage Tivoli Enterprise Monitoring Services utility is opened. (This might take a few minutes.) You
can start, stop, and configure IBM Tivoli Monitoring components with this utility.
2. When prompted for the IBM Tivoli Monitoring home directory, press Enter to accept the default
(/opt/IBM/ITM). If you want to use a different installation directory, type the full path to that directory
and press Enter.
3. If the directory you specified does not exist, you are asked whether to create it. Type 1 to create this
directory.
4. The following prompt is displayed:
Select one of the following:
1) Install products to the local host.
2) Install products to depot for remote deployment (requires TEMS).
3) Install TEMS support for remote seeding
4) Exit install.
Please enter a valid number:
where tems_name is the name of your monitoring server (for example, HUB_itmdev17).
3. Press Enter to indicate that this is a hub monitoring server (indicated by the *LOCAL default).
4. Press Enter to accept the default host name for the monitoring server. This should be the host name
for your computer. If it is not, type the correct host name and then press Enter.
5. Enter the type of protocol to use for communication with the monitoring server. You have four choices:
ip.udp, ip.pipe, ip.spipe, or sna. Press Enter to use the default communications protocol (IP.PIPE).
6. If you want to set up a backup protocol, enter that protocol and press Enter. If you do not want to use
backup protocol, press Enter without specifying a protocol.
7. Depending on the type of protocol you specified, provide the following information when prompted:
Table 30. UNIX monitoring server protocols and values
Protocol   Value                   Definition
IP.UDP     IP Port Number          The port number for the monitoring server. The default is 1918.
SNA        Net Name, LU Name,
           Log Mode
IP.PIPE    IP.PIPE Port Number     The port number for the monitoring server. The default is 1918.
IP.SPIPE   IP.SPIPE Port Number    The port number for the monitoring server. The default is 3660.
8. Press Enter to not specify the name of the KDC_PARTITION. You can configure the partition file at a
later time, as described in Appendix C, Firewalls, on page 551.
9. Press Enter when prompted for the path and name of the KDC_PARTITION.
10. If you want to use Configuration Auditing, press Enter. If you do not want to use this feature, type 2
and press Enter.
11. Press Enter to accept the default setting for the Hot Standby feature (none).
For best results, wait until after you have fully deployed your environment to configure the Hot
Standby feature for your monitoring server. See the IBM Tivoli Monitoring: High-Availability Guide for
Distributed Systems for information about configuring Hot Standby.
12. Press Enter to accept the default value for the Optional Primary Network Name (none).
13. Press Enter for the default Security: Validate User setting (NO).
Important: Do not change this to set Security: Validate User. You can configure security after
configuring the monitoring server, as described in the IBM Tivoli Monitoring:
Administrator's Guide.
14. If you want to forward situation events to either IBM Tivoli Enterprise Console (TEC) or the IBM Tivoli
Netcool/OMNIbus console, type 1 and press Enter to enable the Tivoli Event Integration Facility.
Complete the following additional steps:
a. For EIF Server, type the hostname of the TEC event server or the hostname of the
Netcool/OMNIbus EIF probe and press Enter.
b. For EIF Port, type the EIF reception port number for the TEC event server or the
Netcool/OMNIbus EIF probe and press Enter.
15. To disable Workflow Policy Activity event forwarding, type 1 and press Enter. Otherwise, press Enter
to accept the default value (2=NO). See the note in Event integration with Tivoli Enterprise Console
on page 463 for more information.
16. Type 6 to accept the default SOAP configuration and exit the configuration.
Note: You can configure any SOAP information at a later time. See Chapter 13, Configuring IBM
Tivoli Monitoring Web Services (the SOAP Server), on page 293 for information.
The monitoring server is now configured.
A configuration file is generated in the install_dir/config directory with the format
host_name_ms_tems_name.config (for example, itmdev17_ms_HUBitmdev17.config).
where tems_name is the name of the monitoring server (for example, HUB_itmserv16) and pc is the
product code for each agent for which you want to enable application support.
To view the product codes for the applications installed on this computer, run the following command:
./cinfo
Type 1 when prompted to display the product codes for the components installed on this computer.
See your product documentation for the product code for other agents.
Activate support only for the monitoring agents for which you installed support. For example, if you
installed support for the DB2 agent, you would enter the following command:
./itmcmd support -t tems_name ud
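After adding support, the monitoring server is typically stopped and restarted so that the new support takes effect; for example, using the same tems_name:
./itmcmd server stop tems_name
./itmcmd server start tems_name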
GUI procedure: This section describes how to use the Manage Tivoli Enterprise Monitoring Services
window on a Linux Intel or UNIX computer to enable application support on a monitoring server that is
located on the local computer. You can use this procedure as an alternative to the itmcmd support
command. (This command applies only to monitoring servers that are installed on the local computer. To
enable application support on a monitoring server that is located on a remote computer, see Configuring
application support on a nonlocal monitoring server from a Linux or UNIX system on page 214.)
This procedure assumes that you have installed the support files on this computer, and that X Windows is
enabled on this computer.
Complete the following steps to enable application support from the Manage Tivoli Enterprise Monitoring
Services window on the local Linux or UNIX monitoring server:
1. Log on to the computer where the Tivoli Enterprise Monitoring Server is installed.
2. Start the Manage Tivoli Enterprise Monitoring Services utility:
a. Change to the bin directory:
cd install_dir/bin
b. Run the following command, using the parameters described in Table 31:
./itmcmd manage [-h ITMinstall_dir]
Table 31. Parameters for the itmcmd manage command
-h               (optional) Flag that introduces the installation directory parameter.
ITMinstall_dir   The IBM Tivoli Monitoring installation directory.
5. If you selected the Advanced option, the Install Product Support window is displayed. Select the
application support packages you want to install and click Install.
6. Stop and restart the monitoring server:
a. Right-click Tivoli Enterprise Monitoring Server and click Stop.
b. Right-click Tivoli Enterprise Monitoring Server and click Start.
When you select the Tivoli Enterprise Monitoring Server check box, all of the check boxes in the
attached subtree are automatically selected. These check boxes are for installing application support
files for base monitoring agents and other agents to the monitoring server. (The base monitoring agents are
included with the base IBM Tivoli Monitoring installation package.) If you leave all of the application
support check boxes selected, you do not need to reconfigure application support as new agent types
are added to your environment. However, installing support for many agents at a time can increase
the installation time and you may still have to add support for an agent later if it has been updated.
For detailed information about application support, see Installing and enabling application support on
page 196.
Notes:
a. If you have purchased monitoring agents that run on z/OS, but have not purchased IBM Tivoli
Monitoring as a separate product, expand the Tivoli Enterprise Monitoring Server node. Clear
all check boxes in the subtree except the check boxes labeled Tivoli Enterprise Monitoring
Server and Summarization and Pruning agent.
b. If you are updating an existing installation (you selected Modify on the Welcome window), all
check boxes on the Select Features window reflect your choices during the initial installation.
Clearing a check box has the effect of uninstalling the component. Clear a check box only if you
want to remove a component.
7. If you want to install any agents on this remote monitoring server, expand Tivoli Enterprise
Monitoring Agents and select the agents you want to install.
8. Click Next to display the Agent Deployment window.
The Agent Deployment window lists monitoring agents on this installation image that you can add to
the agent depot. The agent depot contains agents that you can deploy to remote computers. For
information about how to deploy agents in the agent depot to remote computers, see Chapter 9,
Deploying monitoring agents across your environment, on page 237.
Note: By default, the agent depot is located in the itm_installdir/CMS/depot directory on Windows.
If you want to use a different directory, create the directory (if it does not exist) and specify the
directory using the DEPOTHOME key in the KBBENV file.
Select the agents, if any, that you want to add to the agent depot. (You can add agents to the agent
depot at a later time by updating your installation.) Click Next.
9. If no IBM Tivoli Monitoring component has been previously installed on this computer, a window is
displayed for you to select a program folder for the Windows Start menu. Select a program folder and
click Next. The default program folder name is IBM Tivoli Monitoring.
10. If the TEPS Desktop and Browser Signon ID and Password window is displayed, enter and confirm
the password to be used for logging on to the Tivoli Enterprise Portal. The default logon user ID,
sysadmin, cannot be changed on this window. The logon password must match the password that you
specified for sysadmin when you configured the hub monitoring server.
This window is not displayed if the sysadmin user ID has already been defined in the operating
system.
11. Review the installation summary details. The summary identifies the components you are installing.
Click Next to begin the installation.
After the components are installed, a configuration window (called the Setup Type window) is
displayed.
12. Clear the check boxes for any components that have already been installed and configured (at the
current release level) on this computer, unless you want to modify the configuration. Click Next to
start configuring all selected components.
13. Configure the Tivoli Enterprise Monitoring Server:
a. Select the type of monitoring server you are configuring: Hub or Remote. For this procedure,
select Remote.
b. Verify that the name of this monitoring server is correct in the TEMS Name field. If it is not,
change it.
The default name is REMOTE_host_name, for example, REMOTE_itmserv16. This name must be
unique in the enterprise.
c. Identify the communications protocol for the monitoring server. You have four choices: IP.UDP,
IP.PIPE, IP.SPIPE, or SNA. You can specify up to three methods for communication. If the method
you identify as Protocol 1 fails, Protocol 2 is used as a backup. If Protocol 2 fails, Protocol 3 is
used as a backup.
d. Click OK.
e. Complete the following fields for the communications protocol for the monitoring server.
Table 32. Remote monitoring server communications protocol settings
Field: Port Number (IP.PIPE), Port Number (IP.SPIPE), LU Name (the LU alias), LU 6.2 LOGMODE, TP Name
f. If you are certain that you have typed the values for all of these fields with exactly the correct
casing (upper and lower cases), you can select Use case as typed. However, because IBM Tivoli
Monitoring is case-sensitive, consider selecting Convert to upper case to reduce the chance of
user error.
g. Click OK to continue.
14. Enable application support on the monitoring server.
In Step 6 on page 157, you selected the base monitoring agents for which you wanted to install
application support files on the monitoring server. In this step, you activate the application support
through a process known as seeding the monitoring server.
a. Specify the location of the monitoring server to which to add application support. You have two
choices:
v On this computer
v On a different computer
Click OK.
For additional information about these parameters, press the Help button.
b. Click OK on the Select the application support to add to the TEMS window.
This window lists the monitoring agents that you selected in Step 6 on page 157. Click OK to
begin seeding the monitoring server (using the SQL files listed on this window).
This process can take up to 20 minutes. As the seeding process completes, a progress bar is
displayed, showing the progress of seeding, in turn, the application support for the agents you
selected. Once seeding completes, if support could not be added, a window is displayed showing
all seeding results.
c. Click Next on the message that provides results for the process of adding application support
(see Figure 28 on page 204). A return code of 0 (rc: 0) indicates that the process succeeded.
Note: If the Application Support Addition Complete window is not displayed after 20 minutes, look
in the IBM\ITM\CNPS\Logs\seedkpc.log files (where pc is the two-character product code
for each monitoring agent) for diagnostic messages that help you determine the cause of
the problem. For a list of product codes, see Appendix D, IBM Tivoli product, platform, and
component codes, on page 567.
15. Configure the communication between any IBM Tivoli Monitoring component and the monitoring
server:
a. Specify the default values for IBM Tivoli Monitoring components to use when they communicate
with the monitoring server.
1) If agents must cross a firewall to access the monitoring server, select Connection must pass
through firewall.
2) Identify the type of protocol that the agents use to communicate with the hub monitoring
server. You have four choices: IP.UDP, IP.PIPE, IP.SPIPE, or SNA. You can specify up to three
methods for communication. If the method you identify as Protocol 1 fails, Protocol 2 is used
as a backup. If Protocol 2 fails, Protocol 3 is used as a backup.
Click OK.
b. Complete the communication protocol fields for the monitoring server. See Table 32 on page 159
for definitions of these fields. Click OK.
For additional information about these parameters, press the Help button.
16. Click Finish to complete the installation.
17. Click Finish on the Maintenance Complete window if you are updating an existing installation.
Note: IBM Tivoli Monitoring does not support multiple remote monitoring servers on the same Windows
computer.
Note: Under both Linux and UNIX, IBM Tivoli Monitoring (version 6.1 fix pack 6 and subsequent) supports
multiple remote monitoring servers on the same LPAR or computer (however, it does not support
multiple hub monitoring servers on the same computer). Note that each instance of a remote
monitoring server must have its own network interface card and its own unique IP address; in
addition, each monitoring server must be installed on its own disk. These limitations isolate each
remote monitoring server instance so you can service each one independently: you can upgrade a
remote Tivoli Enterprise Monitoring Server without affecting another server's code base and shared
libraries.
where tems_name is the name of your monitoring server (for example, remote_itmdev17).
3. Type remote to indicate that this is a remote monitoring server.
4. Press Enter to accept the default host name for the hub monitoring server. This should be the host
name for hub computer. If it is not, type the correct host name and then press Enter.
5. Enter the type of protocol to use for communication with the monitoring server. You have four choices:
ip.udp, ip.pipe, ip.spipe, or sna. Press Enter to use the default communications protocol (IP.PIPE).
6. If you want to set up a backup protocol, enter that protocol and press Enter. If you do not want to use
backup protocol, press Enter without specifying a protocol.
7. Depending on the type of protocol you specified, provide the following information when prompted:
Table 34. UNIX monitoring server protocols and values
Protocol   Value                   Definition
IP.UDP     IP Port Number          The port number for the monitoring server. The default is 1918.
SNA        Net Name, LU Name,
           Log Mode
IP.PIPE    IP.PIPE Port Number     The port number for the monitoring server. The default is 1918.
IP.SPIPE   IP.SPIPE Port Number    The port number for the monitoring server. The default is 3660.
11. Press Enter to accept the default setting for the Hot Standby feature (none).
For best results, wait until after you have fully deployed your environment to configure the Hot
Standby feature for your monitoring server. See the IBM Tivoli Monitoring: High-Availability Guide for
Distributed Systems for information about configuring Hot Standby.
12. Press Enter to accept the default value for the Optional Primary Network Name (none).
13. Press Enter for the default Security: Validate User setting (NO).
Note: This option is valid only for a hub monitoring server.
14. For Tivoli Event Integration, type 2 and press Enter.
Note: This option is valid only for a hub monitoring server.
15. At the prompt asking if you want to disable Workflow Policy/Tivoli Emitter Agent event forwarding,
press Enter to accept the default (2=NO).
Note: This option is valid only for a hub monitoring server.
16. At the prompt asking if you want to configure the SOAP hubs, press Enter to save the default settings
and exit the installer.
Note: This option is valid only for a hub monitoring server.
The monitoring server is now configured.
A configuration file is generated in the install_dir/config directory with the format
host_name_ms_tems_name.config (for example, itmdev17_ms_HUBitmdev17.config).
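You can then start the newly configured monitoring server from the bin directory; a brief example using the server name from this procedure:
./itmcmd server start remote_itmdev17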
Complete the following steps to install the Tivoli Enterprise Portal Server and portal client on a Windows
computer:
1. Launch the installation wizard by double-clicking the setup.exe file in the Infrastructure DVD or DVD
image.
2. Click Next on the Welcome window.
Note: If you have another IBM Tivoli Monitoring component already installed on this computer, select
Modify on the Welcome window to indicate that you are updating an existing installation. Click
OK on the message telling you about preselected items. Then skip to Step 7 on page 164.
3. Read and accept the software license agreement by clicking Accept.
4. You are asked to choose the database management system you want to use for your portal server
database: Derby; DB2 Database for Linux, UNIX, and Windows; or Microsoft SQL Server, as shown in
Figure 18. Note that if a particular RDBMS is not installed on this computer, or if it is installed but not
at the necessary release level, its radio button is grayed out.
Figure 18. The Select Database for Tivoli Enterprise Portal window
5. Specify the directory where you want to install the portal server software and accompanying files. The
default location is C:\IBM\ITM. Click Next.
Note: If you specify an incorrect directory name, you will receive the following error:
The IBM Tivoli Monitoring installation directory cannot exceed 80 characters
or contain non-ASCII, special or double-byte characters.
The directory name can contain only these characters:
"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ _\:0123456789()~-./".
6. Type an encryption key to use. This key should be the same as what was used during the installation
of the hub monitoring server to which this portal server will connect. Click Next and then OK to
confirm the encryption key.
7. On the Select Features window, select Tivoli Enterprise Portal Server from the list of components
to install.
When you select the Tivoli Enterprise Portal Server check box, all of the check boxes in the
attached subtree are automatically selected. The support check boxes in the subtree are for installing
application support files for base monitoring agents to the monitoring server. (The base monitoring
agents are included with the base IBM Tivoli Monitoring installation package.) It is best to leave all of
the support check boxes selected so you do not need to reconfigure application support as new agent
types are added to your environment. For detailed information about application support, see
Installing and enabling application support on page 196.
Notes:
a. If you have purchased monitoring agents that run on z/OS, but have not purchased IBM Tivoli
Monitoring as a separate product, expand the Tivoli Enterprise Portal Server node. Clear all
check boxes in the subtree except the check boxes labeled Tivoli Enterprise Portal Server and,
optionally, TEC GUI Integration. (See Step 8a.)
b. If you are updating an existing installation (you selected Modify on the Welcome window), all
check boxes on the Select Features window reflect your choices during the initial installation.
Clearing a check box has the effect of uninstalling the component. Clear a check box only if you
want to remove a component.
c. The Eclipse Help Server is automatically selected when you select the Tivoli Enterprise Portal
Server.
8. Optionally select the following additional components to install:
a. If you want to view events on the IBM Tivoli Enterprise Console event server through the Tivoli
Enterprise Portal, expand Tivoli Enterprise Portal Server and ensure that TEC GUI Integration
is selected.
b. If you want to install a portal desktop client on this computer, select Tivoli Enterprise Portal
Desktop Client.
When you select the Tivoli Enterprise Portal Desktop Client check box, all of the check boxes
in the attached subtree are automatically selected. These check boxes are for installing
application support files for base monitoring agents to the portal desktop client. Leave these
check boxes selected as you did for the portal server in Step 7.
Note: If you have purchased monitoring agents that run on z/OS, but have not purchased IBM
Tivoli Monitoring as a separate product, expand the Tivoli Enterprise Portal Desktop
Client node. Clear all check boxes in the subtree except the check boxes labeled Tivoli
Enterprise Portal Desktop Client and, optionally, TEC GUI Integration.
9. Click Next.
10. If a monitoring server is not installed on this computer, go to Step 11 on page 165.
If you are installing the portal server on a computer that already has a monitoring server installed, the
Agent Deployment window is displayed.
The Agent Deployment window lists monitoring agents on this installation image that you can add to
the agent depot. The agent depot contains agents that you can deploy to remote computers. For
information about how to deploy agents in the agent depot to remote computers, see Chapter 9,
Deploying monitoring agents across your environment, on page 237.
Note: By default, the agent depot is located in the itm_installdir/CMS/depot directory on Windows.
If you want to use a different directory, create the directory (if it does not exist) and specify the
directory using the DEPOTHOME key in the KBBENV file. (A sample DEPOTHOME setting is shown
after this step.)
Select the agents, if any, that you want to add to the agent depot. (You can add agents to the agent
depot at a later time by updating your installation.) Click Next.
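The following line is a minimal sketch of the DEPOTHOME setting mentioned in the note above. The directory path is only an example; the KBBENV file is in the monitoring server's configuration directory (by default itm_installdir\CMS on Windows), and the monitoring server normally must be restarted before a changed depot location takes effect:

DEPOTHOME=C:\IBM\ITM\agentdepot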
11. If no IBM Tivoli Monitoring component has been previously installed on this computer, a window is
displayed for you to select a program folder for the Windows Start menu. Select a program folder and
click Next. The default program folder name is IBM Tivoli Monitoring.
12. Review the installation summary details. The summary identifies what you are installing and where
you chose to install it. Click Next to start the installation.
After installation is complete, a configuration window (called the Setup Type window) is displayed.
13. Clear the check boxes for any components that have already been installed and configured (at the
current release level) on this computer, unless you want to modify the configuration. (For example,
clear the check box for the Tivoli Enterprise Monitoring Server if it has already been installed and
configured on this computer.) Click Next to start configuring all selected components.
14. Configure communications for the Tivoli Enterprise Portal Server:
a. Type the host name of the computer where you are installing the portal server. (The host name of
this computer is displayed by default.) Click Next.
b. If more than one RDBMS product is installed on this computer, a window is displayed for you to
choose the RDBMS product you want to use. Choose IBM DB2 or SQL Server and click Next.
c. A window is displayed for you to configure the connection between the portal server and the portal
server database (TEPS database).
The installation program uses the information on this window to automatically perform the
following tasks:
v Create the portal server database.
v Create a database user for the portal server to use to access the database.
v Configure the ODBC connection between the portal server and the database.
Figure 19. Configuration window for the portal server database using DB2 for the workstation
Figure 19 shows the configuration window for a portal server database using DB2 for the
workstation. The configuration window for a Microsoft SQL Server database is similar. The fields
on the configuration window are described in the following table:
Table 35. Configuration information for the portal server database

Field               DB2 default    MS SQL default
Admin User ID       db2admin       sa
Admin Password      (no default)   (no default)
Database User ID    TEPS           TEPS
Database Password   (no default)   (no default)
Reenter password    (no default)   (no default)
d. Optionally change the default values for the administrator ID and database user ID. Enter
passwords for the administrator and database user. Click OK.
For additional information about these parameters, press the Help button.
e. Click OK on the message that tells you that the portal server configuration was successful.
f. Click Next to accept the default Tivoli Data Warehouse user ID and password. (You can change
these values in a later step.)
This is the warehouse user ID. The same ID and password must be used by all components
connecting to the Tivoli Data Warehouse, including the Tivoli Enterprise Portal Server, Warehouse
Proxy Agents and the Summarization and Pruning agent.
g. Configure the connection between the portal server and the hub Tivoli Enterprise Monitoring
Server:
1) Select the communications protocol that the monitoring server uses from the Protocol
drop-down list. You can also specify whether the connection between the portal server and the
hub monitoring server passes through a firewall. Click OK.
2) Enter the host name or IP address and the port number for the hub monitoring server. Click
OK.
For additional information about these parameters, press the Help button.
15. A message is displayed asking if you want to reconfigure the warehouse connection for the portal
server. Do one of the following:
v Click No if you have not set up a Tivoli Data Warehouse.
Follow the instructions later in this book for implementing a Tivoli Data Warehouse solution,
beginning with Chapter 15, Tivoli Data Warehouse solutions, on page 331. These instructions will
direct you to reconfigure the connection between the portal server and the warehouse database
after you have completed all preliminary setup tasks.
If you select No, go to Step 20 on page 169.
v Click Yes if you have completed the tasks for setting up a Tivoli Data Warehouse and want to
configure the connection between the portal server and the Tivoli Data Warehouse database at this
time. (You can choose to configure the connection later.) The prerequisite tasks and the information
you need to configure the connection for each database type are described in the following steps.
For additional information about warehouse configuration, press the Help button.
16. To configure a connection between the portal server and a DB2 for the workstation data warehouse,
complete the following steps:
a. Verify that you have completed the following tasks:
v Created a warehouse database using DB2 for the workstation
v Created a warehouse user on the computer where you created the warehouse database.
The warehouse user is the user account (user ID and password) used by the portal server and
other warehousing components to access the warehouse database.
v Activated the UNIX listeners on the computer where the warehouse database is located if the
warehouse database is installed on UNIX systems.
v Installed a DB2 for the workstation client on the portal server if the data warehouse is remote
and the portal server database does not use DB2 for the workstation.
v Cataloged the warehouse database on the computer where you are installing the portal server
if the warehouse database is remote from the portal server.
These tasks are described in Chapter 16, Tivoli Data Warehouse solution using DB2 for the
workstation, on page 349.
b. Gather the following information: data source name, database name, database administrator ID
and password, warehouse user ID and password. The warehouse user ID is the one declared in
the configuration panel of the Warehouse Proxy Agent. This user ID serves as the first part of the
name of all the tables created in the Warehouse database. If you do not declare the same user ID
when configuring the Tivoli Enterprise Portal server you will not be able to see the Warehouse
tables with the portal client even if they exist in the database.
c. Complete the Configuring a Windows portal server (ODBC connection) procedure, starting from
Step 4 on page 368.
For additional information about these parameters, press the Help button.
17. To configure a connection between the portal server and a Microsoft SQL Server data warehouse,
complete the following steps:
a. Verify that you have completed the following tasks:
v Created a warehouse database using Microsoft SQL Server.
v Created a warehouse user on the computer where you created the warehouse database.
The warehouse user is the user account (user ID and password) used by the portal server and
other warehousing components to access the warehouse database.
v Installed a Microsoft SQL Server client on the portal server if the data warehouse is remote and
the portal server database does not use Microsoft SQL Server.
v Configured a remote client connection on the computer where you are installing the portal
server if the warehouse database is remote from the portal server.
These tasks are described in Chapter 18, Tivoli Data Warehouse solution using Microsoft SQL
Server, on page 397.
b. Gather the following information: data source name, database name, database administrator ID
and password, warehouse user ID and password. The warehouse user ID is the one declared in
the configuration panel of the Warehouse Proxy Agent. This user ID serves as the first part of the
name of all the tables created in the Warehouse database. If you do not declare the same user ID
when configuring the Tivoli Enterprise Portal server you will not be able to see the Warehouse
tables with the portal client even if they exist in the database.
c. Complete the procedure Configuring the portal server (ODBC connection), starting from Step 4 on
page 410.
For additional information about these parameters, press the Help button.
18. To configure a connection between the portal server and an Oracle data warehouse, complete the
following steps:
a. Verify that you have completed the following tasks:
v Created a warehouse database using Oracle
v Created a warehouse user on the computer where you created the warehouse database.
The warehouse user is the user account (user ID and password) used by the portal server and
other warehousing components to access the warehouse database.
v Activated the Oracle listener on the computer where the warehouse database is located
v Installed an Oracle client on the portal server
v Created a TNS Service Name on the computer where you are installing the portal server if the
warehouse database is remote from the portal server.
These tasks are described in Chapter 19, Tivoli Data Warehouse solution using Oracle, on page
415.
b. Gather the following information: the data source name, database name, database administrator
ID and password, and the warehouse user ID and password. The warehouse user ID is the one
declared in the configuration panel of the Warehouse Proxy Agent. This user ID serves as the first
part of the name of all the tables created in the Warehouse database. If you do not declare the
same user ID when configuring the Tivoli Enterprise Portal server you will not be able to see the
Warehouse tables with the portal client even if they exist in the database.
c. Complete the procedure Configuring a Windows portal server (ODBC connection), starting from
Step 4 on page 430.
For additional information about these parameters, press the Help button.
19. Configure the default connector for the Common Event Console.
The default connector retrieves situation events reported to Tivoli Enterprise Monitoring Servers for
display in the Common Event Console. You can configure connectors for other event management
systems after you have completed the product installation. For configuration instructions, see the IBM
Tivoli Monitoring: Administrator's Guide.
Click OK to accept the default values or specify values for the following fields, then click OK:
Enable this connector
Select Yes to enable the connector to collect and display situation events in the Common Event
Console, or No to configure the connector without enabling it. The connector is enabled by
default.
Connector name
The name that is to be displayed in the Common Event Console for this connector. The default
name is ITM1.
Maximum number of events for this connector
The maximum number of events that are to be available in the Common Event Console for this
connector. The default value is 100 events.
View closed events
Select No to display only active events in the Common Event Console for this connector. Select
Yes to view both active and closed events. By default, only active events are displayed.
20. Configure the default communication between any monitoring agents installed on this computer and
the hub Tivoli Enterprise Monitoring Server:
a. Click OK to accept the default communications protocol.
b. Ensure that the host name and port number of the Tivoli Enterprise Monitoring Server are correct.
Click OK.
For additional information about these parameters, press the Help button.
21. Click Finish to complete the installation.
22. Click Finish on the Maintenance Complete window if you are updating an existing installation.
Table 36. Steps for installing a portal server on a Linux or AIX computer
Steps
2. When prompted for the IBM Tivoli Monitoring home directory, press Enter to accept the default
directory (/opt/IBM/ITM) or type the full path to a different directory.
Note: If you specify an incorrect directory name, you will receive the following error:
The IBM Tivoli Monitoring installation directory cannot exceed 80 characters
or contain non-ASCII, special or double-byte characters.
The directory name can contain only these characters:
"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ _\:0123456789()~-./".
3. If the installation directory does not already exist, you are asked if you want to create it. Type y to
create this directory and press Enter.
4. The following prompt is displayed:
Select one of the following:
1) Install products to the local host.
2) Install products to depot for remote deployment (requires TEMS).
3) Install TEMS support for remote seeding
4) Exit install.
Please enter a valid number:
Type 1 to start the installation and display the software license agreement.
5. Press Enter to read through the agreement.
6. Type 1 to accept the agreement and press Enter.
7. Enter a 32-character encryption key or press Enter to accept the default key. This key should be the
one used during the installation of the monitoring server to which this portal server will connect.
A numbered list of available operating systems is displayed.
8. Type 4 to install the portal server for your current operating system. Press Enter.
A message is displayed indicating that the Tivoli Enterprise Portal Server is about to be installed.
Note: The Eclipse Help Server is automatically installed when you install the Tivoli Enterprise Portal
Server.
9. Type 1 to confirm the installation.
The installation begins.
10. After the Tivoli Enterprise Portal Server is installed, you are asked whether you want to install
additional products or product support packages. Type 1 and press Enter.
The installer presents a numbered list of products and application support packages.
11. Install the required application support packages.
All monitoring agents require that application support files be installed on the monitoring servers (hub
and remote), portal server, and portal desktop clients in your environment. Application support files
contain the information required for agent-specific workspaces, helps, predefined situations, and other
data.
This step installs the application support files for base monitoring agents. The base monitoring agents
are included with the base IBM Tivoli Monitoring installation package. For detailed information about
application support, see Installing and enabling application support on page 196.
When you entered 1 in the preceding step, the installer presents a numbered list of items, including
the following application support packages:
Tivoli Enterprise Portal Browser Client support
Tivoli Enterprise Portal Server support
Note: The Tivoli Enterprise Portal Browser Client support package is portal server code that supports
the browser clients. You must install the browser client support package on the computer
where you install the portal server if you want to connect to it using a browser client.
Complete the following steps to install the portal server and browser client support packages for the
base monitoring agents:
a. Type the number that corresponds to Tivoli Enterprise Portal Browser Client support and
press Enter.
A numbered list of base monitoring agents is displayed.
b. Type the numbers that correspond to the base monitoring agents for which you want to install the
application support package, or type the number that corresponds to All of the above. Type the
numbers on the same line separated by spaces or commas (,). Press Enter.
It is best to select all of the base monitoring agents (All of the above) so you do not need to
reconfigure application support as new agent types are added to your environment.
c. Type 1 to confirm the installation and press Enter.
The installation begins.
d. After the support package is installed, you are asked whether you want to install additional
products or product support packages. Enter 1 and repeat the preceding steps for the Tivoli
Enterprise Portal Server support package.
Note: This step installs the application support files. However, you must enable the application
support by configuring the portal server. The next two sections show you how to configure the
portal server.
12. After you are finished installing the portal server and browser client packages, you are asked whether
you want to install additional products or product support packages. Type 2 and press Enter.
v Includes steps for configuring the connection between the portal server and the following components:
The hub monitoring server
The portal server database
The Tivoli Data Warehouse database
Note: If you have not set up the Tivoli Data Warehouse, complete this procedure but accept the default
values at the prompts for configuring the connection to the data warehouse. You can reconfigure
the connection after you set up the warehouse. See Step 9 on page 173 for more information.
Complete the following steps to configure the Tivoli Enterprise Portal Server from the command line on
Linux or AIX:
1. Log on to the computer where the Tivoli Enterprise Portal Server is installed.
2. At the command line change to the ITMinstall_dir/bin directory, where ITMinstall_dir is the directory
where you installed the product.
3. Run the following command to start configuring the Tivoli Enterprise Portal Server:
./itmcmd config -A cq
Table 37. Protocols and values

Protocol    Value
SNA         Net Name, LU Name, Log Mode
IP.PIPE     Port Number
IP.SPIPE    Port Number
e. Press Enter when you are asked if you want to configure the connection to a secondary
monitoring server. The default value is none.
f. Press Enter to accept the default value for the Optional Primary Network Name (none).
g. Press Enter to accept the default setting for SSL between the portal server and clients (N). By
default, SSL is disabled. To enable SSL, type 1 and press Enter.
7. Configure the connection between the portal server and the portal server database:
a. Type 1 if your site is using the embedded Derby database, 2 if you're using DB2 for the
workstation.
b. Type the DB2 for the workstation instance name. The default value is db2inst1. Press Enter.
c. Type the DB2 for the workstation administrator ID. The default value is db2inst1. Press Enter.
Note: The DB2 for the workstation administrator account was created during DB2 for the
workstation installation.
d. Type the password for the DB2 for the workstation administrator ID, and press Enter.
e. Confirm the password for the DB2 for the workstation administrator ID by typing it again. Press
Enter.
8. If you are configuring DB2 for the workstation for the portal server (instead of the embedded Derby
database), complete the following parameters as well:
a. Type the name of the portal server database. The default value is TEPS. Press Enter.
b. Type the login name of the database user that the portal server will use to access the database.
The default value is itmuser. Press Enter.
c. Type the password for the database user and press Enter.
d. Confirm the password for the database user by typing it again. Press Enter.
e. You are asked if you want to create the DB2 for the workstation login user if it does not exist.
Type 1 and press Enter.
9. You are asked for the database parameters for either DB2 for the workstation or Oracle for the Tivoli
Data Warehouse. Enter D for DB2 for the workstation, J for Oracle (JDBC). DB2 for the workstation is
the default.
Note: This prompt and all remaining prompts ask for information to configure the connection between
the portal server and the Tivoli Data Warehouse database. If you have not set up a Tivoli Data
Warehouse, accept the default values at these prompts. Follow the instructions later in this
book for implementing a Tivoli Data Warehouse solution, beginning with Chapter 15, Tivoli
Data Warehouse solutions, on page 331. These instructions will direct you to reconfigure the
connection between the portal server and the warehouse database after you have completed
all preliminary setup tasks.
where oracleinstalldir is the directory location of the JDBC driver JAR file on this computer.
g. Enter the following JDBC driver name:
oracle.jdbc.driver.OracleDriver
h. Enter the JDBC driver URL. This is the Oracle-defined URL that identifies the locally or
remotely installed Oracle instance used for the Tivoli Data Warehouse. The following entry is an
example:
jdbc:oracle:thin:@localhost:1521:WAREHOUS
If the warehouse database is located on a remote computer, replace localhost with the host
name of the remote computer.
Change the default port number (1521) and Tivoli Data Warehouse name (WAREHOUS) if they
are different.
i. Enter any user-defined attributes that are used to customize the behavior of the driver
connection. Use semi-colons (;) to delimit the attributes. Press Enter to finish the configuration.
A message is displayed telling you that InstallPresentation is running, and then a message telling you
that the installation has completed.
b. Run the following command using the parameters described in Table 38:
./itmcmd manage [-h ITMinstall_dir]
where:
Table 38. Parameters for the itmcmd manage command

-h               An option used to specify the IBM Tivoli Monitoring installation directory.
ITMinstall_dir   The directory where IBM Tivoli Monitoring is installed.
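For example, assuming the default installation directory used elsewhere in this chapter, the command is:

./itmcmd manage -h /opt/IBM/ITM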
4. Click OK to accept the default values for the ITM Connector, or specify your preferred values and then
click OK.
The ITM Connector retrieves situation events reported to Tivoli Enterprise Monitoring Servers for
display in the Common Event Console. You can configure connectors for other event management
systems after you have completed the product installation. For configuration instructions, see the IBM
Tivoli Monitoring: Administrator's Guide.
The Configure Tivoli Enterprise Portal Server window is displayed.
Figure 21. Registering the portal server with the Tivoli Enterprise Monitoring Server
5. On the TEMS Connection page, enter information about the Tivoli Enterprise Monitoring Server to
which the Tivoli Enterprise Portal Server connects:
v Enter the host name of the monitoring server in the TEMS Hostname field. (If the field is not active,
clear the No TEMS check box.)
v Select the communications protocol that the monitoring server uses from the Protocol drop-down
list.
If you select SNA, enter information in the Net Name, LU Name, and LOG Mode fields.
If you select IP.PIPE or IP.SPIPE, enter the port number of the monitoring server in the Port
Number field.
For information about these fields, refer to Table 37 on page 173.
6. Click the Agent Parameters tab.
7. Configure the connection between the portal server and the portal server database by entering
information in the fields described in the following table:
Table 39. Configuration information for the Tivoli Enterprise Portal Server database
v DB2 instance name (default: db2inst1)
v DB2 admin ID (default: db2inst1). The DB2 for the workstation administrator ID. The DB2 for the
workstation administrator account was created during DB2 for the workstation installation. Not required
if the embedded Derby database is used for the portal server and Oracle is selected for the warehouse
database.
v DB2 admin password (no default)
v Reenter password (no default)
v Portal server database name (default: TEPS)
v TEPS DB user ID (default: itmuser)
v Database user password (no default)
v Reenter password (no default)
v Create the database user if it does not exist (default: yes)
8. Optionally configure the connection between the portal server and the Tivoli Data Warehouse
database.
Note: If you have not set up a Tivoli Data Warehouse, accept the default values for these fields.
Follow the instructions later in this book for implementing a Tivoli Data Warehouse solution,
beginning with Chapter 15, Tivoli Data Warehouse solutions, on page 331. These instructions
will direct you to reconfigure the connection between the portal server and the warehouse
database after you have completed all preliminary setup tasks.
The bottom section of the Agent Parameters tab contains fields for configuring the connection
between the portal server and a Tivoli Data Warehouse database using DB2 for the workstation
or Oracle. (See Figure 23.)
Figure 23. Configuration information for the Tivoli Data Warehouse using an Oracle database
1. Using the product media for the current release, install and configure the 32-bit Tivoli Enterprise Portal
Server for IBM Tivoli Monitoring V6.2.x.
Before proceeding to step 2, ensure that the portal server is either completely initialized or completely
stopped.
2. Invoke tepsBackup from the /ITM_installdir/bin directory:
cd /ITM_installdir/bin
./tepsBackup
where:
ITM_installdir
is the root location of your IBM Tivoli Monitoring V6.2.x environment.
This process creates a compressed tar file with default and customized application data in the
/ITM_installdir/tmp directory.
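To confirm that the backup archive was created before you continue, you can list that directory; this check is only illustrative, and the archive name on your system will differ:

ls -l /ITM_installdir/tmp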
3. Uninstall the 32-bit portal server. If asked to completely remove the Tivoli Monitoring installation
directories, answer no; otherwise the backup tar file will be lost.
4. Again using the product media for the current release, install and configure the 64-bit Tivoli Enterprise
Portal Server.
Before proceeding to step 5, ensure that the portal server is either completely initialized or completely
stopped.
5. Invoke tepsRestore from the /ITM_installdir/bin directory:
cd /ITM_installdir/bin
./tepsRestore
All monitoring agents require that agent-specific application support be installed on the monitoring servers,
portal server, and portal desktop clients. See Installing and enabling application support on page 196 for
information.
The following sections provide instructions for installing a monitoring agent:
v Windows: Installing a monitoring agent
v Linux or UNIX: Installing a monitoring agent on page 189
The settings you provide depend on the communications protocol:

IP.UDP Settings: Hostname or IP Address
IP.PIPE Settings: Hostname or IP Address, Port Number
IP.SPIPE Settings: Hostname or IP Address, Port number
SNA Settings: Network Name, LU Name, LU 6.2 LOGMODE, TP Name, Local LU Alias (the LU alias)
v 32-bit and 64-bit versions of the IBM Global Security Toolkit (GSKit), component KGS
v The Tivoli Monitoring configuration utilities, component KGL
Note: The AC component is a Windows component only. There is no equivalent for Linux or UNIX
systems.
When to install the AC component: You need to install the AC component in these situations:
v When adding a native 64-bit agent into preexisting IBM Tivoli Monitoring installations comprising only
32-bit components.
v When adding a 32-bit agent into preexisting Tivoli Monitoring installations comprising only 64-bit
components.
v When the Tivoli Monitoring installer detects a situation where proceeding without the AC component
would result in incompatibility either between the 32-bit and the 64-bit agent frameworks or between the
32-bit and the 64-bit GSKit libraries.
The IBM Tivoli Monitoring installer will automatically detect any of the above situations and then stop the
installation with this error:
The version ver_num or newer of the Agent Compatibility Package must be installed in order to ensure
the correct cooperation between 32bit and 64bit components. Exiting installation.
When not to install the AC component: The AC component can be installed with any 32-bit or 64-bit
agent where a mixed 32/64 bit environment is anticipated. There is no need to install the AC component if
only 32-bit or only 64-bit IBM Tivoli Monitoring agents are planned for this machine.
Installing the AC component using the Windows GUI: The AC component can be found on the
Agents DVD. IBM recommends you install the AC component at the same version as the Windows OS
agent (component code NT). Since the AC is a standard Tivoli Monitoring component, it can be installed
using the standard interactive IBM Tivoli Monitoring installer by selecting the 32/64 Bit Agent
Compatibility Package feature for installation, as shown in Figure 24 on page 188.
Figure 24. Installing the Agent Compatibility Package (component code AC)
Remotely deploying the AC components: There are special considerations for deploying native 64-bit
agents as well as the AC component. When deploying an agent to a Windows machine running an x86-64
CPU, checks are automatically made to verify whether the AC component is required. If the checks report
the AC component is needed and it is available in the depot, it is sent automatically with no action required
on your part. However, if the checks report the AC component is needed but the component is not
available in the depot, an error is reported, and the deployment request fails. Therefore, it is highly
recommended that you populate the deployment depot with the AC component.
Sample tacmd addBundles command to add the AC component to the deploy depot:
tacmd addBundles -i C:\ITM_6.2.2FP1_Agents_Image\WINDOWS\Deploy -t ac
For more details on managing the agent depot, refer to Chapter 9, Deploying monitoring agents across
your environment, on page 237.
Note: Once you have added the AC bundle to the remote-deployment depot, it is listed among the
available packages in the Tivoli Enterprise Portal. Thus your interactive users can select it when
invoking the tacmd AddSystem command for a particular 64-bit node. However, if they do this,
your users will receive this error:
KFWITM291E an Agent Configuration schema not found
When you need to remotely deploy the Agent Compatibility Package, use the CLI instead.
Installing the Embedded Java Runtime and the User Interface Extensions
On nodes where only the Windows OS agent has been installed, either locally or remotely, the Embedded
Java Runtime and the Tivoli Enterprise Services User Interface Extensions (the KUE component) are not
installed by default. If you later decide to install an Agent Builder agent on this node, you may not be
able to reconfigure this factory agent, and you may receive the error shown in Figure 25.
This error may also occur when you attempt to run a tacmd CLI command on nodes where either the
Embedded Java Runtime or the User Interface Extensions are unavailable.
If this happens, you must install the Embedded Java Runtime and the KUE component. Complete one of
the following procedures, depending on the installation method you're following:
local GUI installation
Select Tivoli Enterprise Services User Interface Extensions from the list of features to install.
local silent installation
Uncomment this line in the silent response file:
;KUEWICMA=Tivoli Enterprise Services User Interface Extensions
remote installation
First ensure that component UE has been added to the server depot. Once the UE component is
available, from the command line, perform a tacmd addsystem command, specifying -t UE as the
component to install.
Once you have completed one of the above procedures, the Embedded Java Runtime and the Tivoli
Enterprise Services User Interface Extensions are installed and can be accessed on this node.
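The following commands sketch the remote procedure described above. They assume the UE bundle is available on the same agent image used earlier for the AC component; the hub host name, user ID, image path, and managed system name (Primary:MYHOST:NT) are placeholders that you must replace with your own values:

# Log in to the hub first if you have not already done so
tacmd login -s hub_hostname -u sysadmin
# Add the UE bundle to the deployment depot
tacmd addBundles -i C:\ITM_6.2.2FP1_Agents_Image\WINDOWS\Deploy -t ue
# Deploy the UE component to the target node
tacmd addSystem -t UE -n Primary:MYHOST:NT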
2. When prompted for the IBM Tivoli Monitoring home directory, press Enter to accept the default
directory (/opt/IBM/ITM) or type the full path to a different directory.
3. If the installation directory does not already exist, you are asked if you want to create it. Type y to
create this directory and press Enter.
4. The following prompt is displayed:
Select one of the following:
1) Install products to the local host.
2) Install products to depot for remote deployment (requires TEMS).
3) Install TEMS support for remote seeding
4) Exit install.
Please enter a valid number:
Note: This prompt might vary depending on the installation image from which you are installing.
Type 1 to start the installation and display the software license agreement.
5. Press Enter to read through the agreement.
6. Type 1 to accept the agreement and press Enter.
7. Type a 32-character encryption key and press Enter. This key should be the same as the key that
was used during the installation of the monitoring server to which this monitoring agent connects.
Note: This step applies only to those agents that you install from the IBM Tivoli Monitoring
installation image. Agents installed from the agent installation image do not need to provide the
encryption key.
A numbered list of available operating systems is displayed.
8. Type 1 to install the IBM Tivoli Monitoring support for your current operating system. Press Enter.
A numbered list of available components is displayed.
9. Type the number that corresponds to the monitoring agent or agents that you want to install. If you
want to install more than one agent, use a comma (,) or a space to separate the numbers for each
agent. Press Enter.
Note: Before you install the Warehouse Proxy agent or Summarization and Pruning agent, follow the
instructions later in this book for setting up a Tivoli Data Warehouse solution, beginning with
Chapter 15, Tivoli Data Warehouse solutions, on page 331.
A list of the components to be installed is displayed.
10. Type 1 to confirm the installation.
The installation begins.
11. After all of the components are installed, you are asked whether you want to install additional
products or product support packages. Type 2 and press Enter. Continue with Configuring the
monitoring agent.
1. Run the following command to configure the monitoring agent:
./itmcmd config -A pc
where pc is the product code for your agent. For the UNIX agent, use the product code "ux"; for Linux,
use "lz". See Appendix D, IBM Tivoli product, platform, and component codes, on page 567 for a list
of agent product codes.
2. Press Enter when you are asked if the agent connects to a monitoring server.
3. Type the host name for the monitoring server.
4. Type the protocol that you want to use to communicate with the monitoring server. You have four
choices: ip.udp, sna, ip.spipe, or ip.pipe. Press Enter to accept the default protocol (IP.PIPE).
5. If you want to set up a backup protocol, enter that protocol and press Enter. If you do not want to use
backup protocol, press Enter without specifying a protocol.
6. Depending on the type of protocol you specified, provide the following information when prompted:
Table 42. UNIX monitoring server protocols and values

Protocol    Value
IP.UDP      IP Port Number
SNA         Net Name, LU Name, Log Mode
IP.PIPE     Port Number
IP.SPIPE    Port Number
2. Run the following command to create the itmusers group:
mkgroup itmusers
3. Run the following command to ensure that the CANDLEHOME environment variable correctly
identifies the IBM Tivoli Monitoring installation directory:
echo $CANDLEHOME
If you run the following steps in the wrong directory, you might inadvertently change the permissions
on every file in every file system on the computer.
4. Change to the directory returned by the previous step:
cd $CANDLEHOME
5. Run the following command to ensure that you are in the correct directory:
pwd
7. Run the following command to change the ownership of additional agent files:
bin/SetPerm
8. If you want to run the agent as a particular user, add the user to the itmusers group. To do this, edit
the /etc/group file and ensure that the user is in the list of users for the itmusers group.
For example, if you want to run the agent as user test1, ensure that the following line is in the
/etc/group file:
itmusers:x:504:test1
9. Run the su command to switch to the user that you want to run the agent as or log in as that user.
10. Start the agent as described in Starting the monitoring agents.
To start most monitoring agents, run the following command from the install_dir/bin directory:
./itmcmd agent start pc
where pc is the product code for the agent that you want to start. See Appendix D, IBM Tivoli product,
platform, and component codes, on page 567 for a list of agent product codes.
To start multi-instance agents (agents that may run more than one instance on a computer, like Siebel or
Domino agents), run the following command:
./itmcmd agent -o instance_name start pc
where pc is the product code for the agent that you want to start and instance_name is the name that
uniquely identifies the instance you want to start. See Appendix D, IBM Tivoli product, platform, and
component codes, on page 567 for a list of agent product codes.
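For example, to start the Linux OS monitoring agent (product code lz), and one instance of a multi-instance agent, where inst1 and pc are placeholders for your instance name and product code:

./itmcmd agent start lz
./itmcmd agent -o inst1 start pc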
v If your site uses Microsoft SQL Server to manage its Tivoli Data Warehouse, run this script at the MS
SQL command line:
sqlcmd -i populate_agents.sql [-U my_username -P my_password] [-H my_host]
v If your site uses Oracle to manage its Tivoli Data Warehouse, start this procedure:
POPULATE_OSAGENTS('ITMUSER');
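One way to invoke this procedure, sketched here as an assumption rather than the only supported method, is from SQL*Plus while connected to the warehouse database as the warehouse user; the password and connection identifier are placeholders:

sqlplus ITMUSER/password@WAREHOUS
SQL> EXEC POPULATE_OSAGENTS('ITMUSER');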
OMEGAMON XE monitoring product and you did not separately purchase IBM Tivoli Monitoring, you
should not install support for the distributed operating system monitoring agents nor the Tivoli
Universal Agent. If you are installing from the Base DVD, you may see application support for
components that are not supported on the operating system. For detailed information about
application support, see Installing and enabling application support on page 196.
Note: If you are updating an existing installation (you selected Modify on the Welcome window), all
check boxes on the Select Features window reflect your choices during the initial installation.
Clearing a check box has the effect of uninstalling the component. Clear a check box only if
you want to remove a component.
7. If you want to view IBM Tivoli Enterprise Console events through the Tivoli Enterprise Portal, expand
Tivoli Enterprise Portal Desktop Client and ensure that TEC GUI Integration is selected.
8. Click Next.
9. If a monitoring server is not installed on this computer, go to Step 10.
If you are installing the desktop client on a computer that already has a monitoring server installed,
the Agent Deployment window is displayed.
The Agent Deployment window lists monitoring agents on this installation image that you can add to
the agent depot. The agent depot contains agents that you can deploy to remote computers. For
information about how to deploy agents in the agent depot to remote computers, see Chapter 9,
Deploying monitoring agents across your environment, on page 237.
Note: By default, the agent depot is located in the itm_installdir\CMS\depot directory on Windows. If
you want to use a different directory, create the directory (if it does not exist) and specify the
directory using the DEPOTHOME key in the KBBENV file.
Select the agents, if any, that you want to add to the agent depot. (You can add agents to the agent
depot at a later time by updating your installation.) Click Next.
10. If no IBM Tivoli Monitoring component has been previously installed on this computer, a window is
displayed for you to select a program folder for the Windows Start menu. Select a program folder and
click Next. The default program folder name is IBM Tivoli Monitoring.
11. Review the installation summary details. The summary identifies what you are installing and where
you chose to install it. Click Next to start the installation.
After installation is complete, a configuration window (called the Setup Type window) is displayed.
12. Clear the check boxes for any components that have already been installed and configured (at the
current release level) on this computer, unless you want to modify the configuration. (For example,
clear the check box for the Tivoli Enterprise Monitoring Server if it has already been installed and
configured on this computer.) If the desktop client is being installed on the same host as the portal
server but as part of a separate installation process, you must reconfigure the portal server.
Click Next to start configuring all selected components.
13. Type the host name of the portal server and click OK.
14. Click Finish to complete the installation.
2. When prompted for the IBM Tivoli Monitoring home directory, press Enter to accept the default
directory (/opt/IBM/ITM) or type the full path to a different directory.
3. If the installation directory does not already exist, you are asked if you want to create it. Type 1 to
create this directory and press Enter.
4. The following prompt is displayed:
Type 1 to start the installation and display the software license agreement.
5. Press Enter to read through the agreement.
6. Type 1 to accept the agreement and press Enter.
7. Type a 32-character encryption key to use and press Enter. This key should be the same key as
that used during the installation of the portal server to which the client will connect.
A numbered list of available operating systems is displayed.
8. Type 3 to install the desktop client support for your current operating system. Press Enter.
A message is displayed indicating that the Tivoli Enterprise Portal Desktop Client is about to be
installed.
9. Type 1 to confirm the installation.
The installation begins.
10. After the portal desktop client is installed, you are asked whether you want to install additional
products or product support packages. Type 1 and press Enter.
A numbered list is displayed, including the following application support package:
Tivoli Enterprise Portal Desktop Client support
11. Install the application support package for the portal desktop client.
All monitoring agents require that application support files be installed on the monitoring servers (hub
and remote), portal server, and portal desktop clients in your environment. Application support files
contain the information required for agent-specific workspaces, helps, predefined situations, and other
data.
This step installs the application support files for base monitoring agents. The base monitoring agents
are included with the base IBM Tivoli Monitoring installation package. For detailed information about
application support, see Installing and enabling application support on page 196.
a. Type the number that corresponds to Tivoli Enterprise Portal Desktop Client support and
press Enter.
A numbered list of base monitoring agents is displayed.
b. Type the numbers of the base monitoring agents for which you want to install application support,
or type the number that corresponds to All of the above. Type the numbers on the same line
separated by spaces or commas. Press Enter.
It is best to select all of the base monitoring agents (All of the above) so you do not need to
reconfigure application support as new agent types are added to your environment.
c. Type 1 to confirm the installation and press Enter.
The installation begins.
Note: This step installs the application support files. However, you must enable the application
support by configuring the portal desktop client. The next section shows you how to
configure the portal desktop client.
12. After application support for the monitoring agents is installed, you are asked whether you want to
install additional products or product support packages. Type 2 and press Enter.
The next step is to configure the desktop client. Use the instructions in Linux: Configuring the desktop
client on page 196.
Note: Do not select for installation the File Transfer Enablement component. If you do, the File Transfer
Enablement component shipped with the V6.2.2 fix pack 2 (or subsequent) monitoring server will be
replaced, and the tacmd getfile, tacmd putfile, and tacmd executecommand commands will become
inoperative.
Because application support for some IBM Tivoli Monitoring 6.x-based distributed agents is included on the
Base Infrastructure DVD, you may see application support files that do not apply to all systems.
Configuring application support is a two-step process:
1. Installing the application support files (from installation media).
2. Enabling the application support (sometimes referred to as adding or activating the application
support).
On the portal server and portal desktop clients, application support is enabled when the component is
configured. On monitoring servers, application support is enabled by seeding the database with
agent-specific information.
The procedures for configuring application support differ by operating system, as summarized in Table 43.
On Windows, both installation and enablement of application support are accomplished during the
installation of the monitoring servers, portal server, and desktop clients. On Linux or UNIX, this two-step
process is more visible, with the enablement step done separately from the installation.
Table 43. Procedures for installing and enabling application support

Windows: for the monitoring servers, portal server, and desktop clients (see note 1), application support
is installed and enabled by the installation program.
Linux or UNIX:
v Monitoring servers: the itmcmd support command (see note 2), OR Manage Tivoli Enterprise
Monitoring Services
v Portal server: the itmcmd config command, OR Manage Tivoli Enterprise Monitoring Services
v Desktop clients (see note 1): the itmcmd config command, OR Manage Tivoli Enterprise Monitoring
Services
z/OS: see the notes that follow.
This book does not describe configuring application support for a monitoring server installed on
a z/OS system. See Configuring the Tivoli Enterprise Monitoring Server on z/OS for information
and instructions.
The portal server and desktop client are not supported on z/OS. If you have a monitoring server
on z/OS, use the procedures in this book to configure application support on the portal server
and desktop clients.
1. You need to configure application support for desktop clients that are installed from the installation media. You do
not need to configure application support for desktop clients that are installed by using IBM Java Web Start to
download the client from the Tivoli Enterprise Portal Server.
2. You can seed a nonlocal monitoring server, even if one is not installed on the local computer, by installing the
support using option 3, Install TEMS support for remote seeding, then using Manage Tivoli Enterprise
Monitoring Services to seed the nonlocal monitoring server. You cannot use itmcmd support to seed a nonlocal
monitoring server.
3. There is no way to uninstall the application support files laid down without uninstalling the Tivoli Enterprise
Monitoring Server.
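As a sketch of the Linux or UNIX seeding step mentioned above, and assuming the default installation directory, the following commands add application support for the Windows OS, UNIX OS, and Linux OS agents to a hub monitoring server; HUB_hostname is a placeholder for your monitoring server name, and the monitoring server is normally running while it is seeded:

cd /opt/IBM/ITM/bin
./itmcmd support -t HUB_hostname nt ux lz
# Recycle the monitoring server so the added support takes effect
./itmcmd server stop HUB_hostname
./itmcmd server start HUB_hostname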
Component codes: R3, R5, R4, R6, R2, LZ, UL, UX, NT, A4, SY, HD, UM
The Agent DVD delivered with IBM Tivoli Monitoring V6.2.2 must be used instead of the agent installation
CDs if you are installing support for the agents listed in the following tables on the component specified.
The installer on the agent installation CDs does not support these components on these platforms. To
install application support for these agents on distributed components, see Configuring application support
for nonbase monitoring agents on page 199. To install application support for the monitoring agents on a
z/OS monitoring server, see IBM Tivoli Management Services on z/OS: Configuring the Tivoli Enterprise
Monitoring Server on z/OS.
For all other distributed agents, install application support from the product installation CDs. See
Configuring application support for nonbase monitoring agents on page 199 for instructions.
The following table shows you which installation media to use and where to find instructions for installing
application support, according to the type of agent (distributed or z/OS) and whether the agent reports to a
distributed or z/OS monitoring server.
Table 45. Installation media and instructions for installing application support for nonbase monitoring agents

Agent         Monitoring server   Installation media
Distributed   Distributed         Agent product CD
z/OS          Distributed         Data Files CD
z/OS          z/OS                Data Files CD
Use the instructions in the following sections to install application support for nonbase distributed or z/OS
monitoring agents on the distributed monitoring servers, portal server, and portal desktop clients in your
environment:
v Installing application support on monitoring servers
v Installing application support on the Tivoli Enterprise Portal Server on page 206
v Installing application support on the Tivoli Enterprise Portal desktop client on page 209
Each of these sections provides information for installing application support files to a single component,
such as the monitoring server. If you have multiple components on the same computer (such as the
monitoring server and the portal server), combine steps from the individual sections to install application
support to all components.
1. In the \WINDOWS subdirectory on the agent product CD (for distributed products) or data files CD
(for z/OS products), double-click the setup.exe file to launch the installation.
2. Click Next on the Welcome window.
Note: If a monitoring agent is already installed on this computer, select Modify on the Welcome
window to indicate that you are updating an existing installation. Click OK on the message
telling you about preselected items. Then skip to Step 6.
3. On the Install Prerequisites window, read the information about the required levels of IBM Global
Security Toolkit (GSKit) and IBM Java.
The check box for each prerequisite is cleared if the correct level of the software is already installed.
Otherwise, the check box is selected to indicate that the software is to be installed. If you are
installing support from the data files CD for z/OS agent products, you might be prompted to install
Sun Java Runtime Environment (JRE) 1.4.2, even if you have already installed IBM JRE 1.5 with the
distributed components of Tivoli Management Services. The two versions can coexist, and installation
of application support for some monitoring agents requires Sun Java 1.4.2. You might also see a
message indicating that you can decline the installation of JRE 1.4.2 and that accepting installation of
JRE 1.4.2 results in uninstallation of other Java versions. Ignore this message, because you cannot
proceed without accepting the installation of Sun Java 1.4.2, and accepting the installation does not
uninstall IBM Java 1.5.
4. Click Next. The prerequisite software is installed if necessary.
If the installation program installs IBM GSKit or IBM JRE, you might be prompted to restart the
computer when the installation is complete. If so, you receive an abort message with a Severe error
heading. This is normal and does not indicate a problem.
If you are prompted to reboot, do the following:
a. Click OK on the window prompting you to reboot.
b. Click No on the window asking whether you want to view the abort log.
c. Restart the computer.
d. Restart the installation program.
5. Click Accept to accept the license agreement.
6. If you see a message regarding installed versions being newer than the agent installation, click OK to
ignore this message.
7. Select the application support packages that you want to install:
a. On the Select Features window, select Tivoli Enterprise Monitoring Server.
b. Expand the Tivoli Enterprise Monitoring Server node to display a list of application support
packages that you can install on the monitoring server.
The following example shows the application support packages available with the IBM Tivoli
Monitoring for Databases product:
Figure 26. IBM Tivoli Monitoring for Databases: application support packages
In Step 7 on page 201, you selected the application support packages that you wanted to install on
the monitoring server. In this step, you activate the application support through a process known as
seeding the monitoring server.
a. Specify the location of the monitoring server to which to add application support. You have two
choices:
v On this computer
v On a different computer
Click OK.
For additional information about these parameters, press the Help button.
b. If you are updating a hub Tivoli Enterprise Monitoring Server, you are asked to choose whether
you want to add the default managed system groups when you process the application-support
files:
All
Add the default managed system groups to all applicable situations from the product support
packages being seeded, including previously upgraded product support packages.
New
Add the default managed system groups to all applicable situations from the product
support packages being seeded for the first time. Modifications are not made to managed
system groups in previously upgraded product support packages.
None
Do not add the default managed system groups.
Note: Not all situations support the default managed group setting. For some, you might need to
manually define the distribution using the Tivoli Enterprise Portal due to the specific content
of the agent support package.
Figure 27. The Select the Application Support to Add to the TEMS window
c. Click OK on the Select the Application Support to Add to the TEMS window.
This window lists the application support packages that you selected in Step 7 on page 201. Click
OK to begin seeding the monitoring server (using the SQL files listed on this window). This
process can take up to 20 minutes.
d. Click Next on the message that provides results for the process of adding application support
(see Figure 28).
Note: If you are running in a Hot Standby environment, shut down your Hot Standby (that is, mirror)
monitoring server before completing this procedure. You may restart the Hot Standby
monitoring server only after you have seeded the hub server.
2. Run the following command from the installation media (the agent product CD for distributed agent
products or the data files CD for z/OS agent products):
./install.sh
3. When prompted for the IBM Tivoli Monitoring home directory, press Enter to accept the default
directory (/opt/IBM/ITM) or type the full path to the installation directory you used.
4. The following prompt is displayed:
Note: The prompt (The following products will be installed) seems to indicate that the
installer is about to install the listed monitoring agents, which is true for distributed agents.
However, for z/OS agents, only metaprobes for the monitoring agents are installed. You
cannot install z/OS agents on a monitoring server on Linux or UNIX (and you cannot install
monitoring agents from a data files CD).
d. Enter 1 to confirm the installation.
e. After the metaprobes are installed, the installer asks if you want to install additional products or
support packages. Enter y.
8. Install the application support package for the Tivoli Enterprise Monitoring Server:
a. Enter the number for Tivoli Enterprise Monitoring Server support.
A list of the monitoring agents for which you can install application support is displayed.
b. Enter the numbers of the monitoring agents for which you want to install application support, or
enter the number that corresponds to All of the above. Enter the numbers on the same line
separated by spaces or commas (,).
c. Enter 1 to confirm the installation.
The installation begins.
9. You are asked if you want to install application support on the Tivoli Enterprise Monitoring Server. If you reply yes, application support is automatically added.
If you reply no, you can manually add application support later; see Installing and enabling application support on page 196.
10. If you are updating a hub Tivoli Enterprise Monitoring Server, you are asked to choose whether you
want to add the default managed system groups when you process the application-support files, as
shown in Figure 27 on page 203:
All
New
    Add the default managed system groups to all applicable situations from the product support packages being seeded for the first time. Modifications are not made to managed system groups in previously upgraded product support packages.
None
Note: Not all situations support the default managed group setting. For some, you might need to
manually define the distribution using the Tivoli Enterprise Portal due to the specific content of
the agent support package.
11. After you have added application support for one or more agents, you must refresh the monitoring
server configuration:
a. Start Manage Tivoli Enterprise Monitoring Services.
b. Pull down the Actions menu, and select the Refresh Configuration option (see Figure 29).
9. On the Start Copying Files window, read the list of actions to be performed and click Next to start the
installation.
The application support packages that you selected are installed.
10. On the Setup Type window, clear any components that you have already installed and configured on
this computer. Click Next.
11. Type the host name for the portal server and click Next.
12. Click Finish to complete the installation wizard.
13. Restart the portal server.
Linux or AIX: Installing application support on a portal server: Complete the following steps to install
application support for monitoring agents on a Linux or AIX portal server:
1. Stop the portal server by running the following command:
./itmcmd agent stop cq
2. Run the following command from the installation media (the agent product CD for distributed agent
products or the data files CD for z/OS agent products):
./install.sh
3. When prompted for the IBM Tivoli Monitoring home directory, press Enter to accept the default
directory (/opt/IBM/ITM) or enter the full path to the installation directory you used.
4. The following prompt is displayed:
Select one of the following:
1) Install products to the local host.
2) Exit install.
Please enter a valid number:
Note: The Tivoli Enterprise Portal Browser Client support package is portal server code that supports
the browser clients. You must install the browser client support package on the computer
where you install the portal server.
Repeat the following steps for each support package:
a. Enter the number that corresponds to the support package (for example, 28).
A numbered list of monitoring agents is displayed.
b. Enter the numbers that correspond to the monitoring agents for which you want to install the
application support package, or enter the number that corresponds to All of the above. Type the
numbers on the same line separated by spaces or commas (,).
c. Enter 1 to confirm the installation.
The installation begins.
d. After the support package is installed, you are asked whether you want to install additional
products or product support packages. Enter 1 to install an additional package and repeat the
preceding steps. Enter 2 if you are finished installing support packages.
8. Stop the portal server by running the following command:
./itmcmd agent stop cq
9. Run the following command to configure the portal server with the new agent information:
./itmcmd config -A cq
Complete the configuration as prompted. For information about configuring the portal server, see
Configuring the portal server on Linux or AIX: command-line procedure on page 171.
10. Restart the portal server by running the following command:
./itmcmd agent start cq
Tip
If you are installing support from the data files CD for z/OS agent products, you might be
prompted to install Java Runtime Environment (JRE) 1.4.2, even if you have already installed
IBM JRE 1.5 with the distributed components of Tivoli Management Services. You might also
see a message indicating that you can decline the installation of JRE 1.4.2 and that accepting
installation of JRE 1.4.2 results in uninstallation of other Java versions. Ignore this message,
because you cannot proceed without accepting the installation of Java 1.4.2, and accepting the
installation of Java 1.4.2 does not uninstall IBM Java 1.5. The two versions can coexist.
However, the most recently installed version of Java becomes the active version, and the
distributed components of Tivoli Management Services V6.2.0 require that JRE 1.5 be the active
version.
To change the active version back to JRE 1.5 after you complete installation of application
support, follow these steps:
a. Open the Windows Control Panel by selecting Start > Settings > Control Panel.
b. From the Windows Control Panel, select IBM Control Panel for Java.
c. On the Java tab of the Java Control Panel, click the View button in the Java Application
Runtime Settings section.
d. On the JNLP Runtime Settings window, select version 1.5, and make sure the Enabled
checkbox is selected.
e. Click OK twice to save your settings and exit the Java Control Panel.
f. From the Windows Control Panel, select Java Plug-in.
g. On the Advanced tab of the Java Plug-in Control Panel, make sure that JRE 1.5.0 is
selected. If you change the setting, click Apply.
h. Close the Java Plug-in Control Panel window and the Windows Control Panel.
5. Click Next to continue. The prerequisite software is installed if necessary.
If the installation program installs IBM GSKit or IBM JRE, you might be prompted to restart the
computer when the installation is complete. If so, you receive an abort message with a Severe error
heading. This is normal and does not indicate a problem.
If you are prompted to reboot, do the following:
a. Click OK on the window prompting you to reboot.
b. Click No on the window asking whether you want to view the abort log.
c. Restart the computer.
d. Restart the installation program.
6. Read the software license agreement and click Accept.
7. If you see a message regarding installed versions being newer than the agent installation, click OK to
ignore this message.
8. Select the application support packages that you want to install:
a. On the Select Features window, select Tivoli Enterprise Portal Desktop Client.
b. Expand the Tivoli Enterprise Portal Desktop Client node to display a list of application support
packages that you can install on the portal server.
Initially, all application support packages are selected.
c. Clear the check boxes for application support packages that you do not want to install.
Note: If you are updating an existing installation (you selected Modify on the Welcome window), all check boxes on the Select Features window reflect your choices during the initial installation. Clearing a check box has the effect of uninstalling the component or product package. Clear a check box for an application support package only if you want to remove the application support.
d. Click Next.
9. On the Start Copying Files window, read the list of actions to be performed and click Next to start the installation.
The application support packages that you selected are installed.
10. On the Setup Type window, clear any components that you have already installed and configured on this computer. Click Next.
11. Type the host name for the portal server and click Next.
12. Click Finish to complete the installation wizard.
Linux: Installing application support on a desktop client: Complete the following steps to install
application support for monitoring agents on a Linux desktop client:
Note: Stop the desktop client before performing this procedure.
1. Stop the desktop client by running the following command:
./itmcmd agent stop cj
2. Run the following command from the installation media (the agent product CD for distributed agent
products or the data files CD for z/OS agent products):
./install.sh
3. When prompted for the IBM Tivoli Monitoring home directory, press Enter to accept the default
directory (/opt/IBM/ITM) or enter the full path to the installation directory you used.
4. The following prompt is displayed:
Select one of the following:
1) Install products to the local host.
2) Exit install.
Please enter a valid number:
e. After the monitoring agents are installed, the installer asks if you want to install additional
products or support packages. Enter 1.
7. Install the application support package for the portal desktop client:
a. Enter the number that corresponds to Tivoli Enterprise Portal Desktop Client support.
A numbered list of monitoring agents is displayed.
b. Enter the numbers that correspond to the monitoring agents for which you want to install the
application support package, or enter the number that corresponds to All of the above. Type the
numbers on the same line separated by spaces or commas (,).
c. Enter 1 to confirm the installation.
The installation begins.
8. After application support for all monitoring agents is installed, you are asked whether you want to
install additional products or product support packages. Enter 2.
9. Run the following command to configure the desktop client with the new agent information:
./itmcmd config -A cj
Complete the configuration as prompted. For information about configuring the desktop client, see
Linux: Configuring the desktop client on page 196.
10. Restart the desktop client by running the following command:
./itmcmd agent start cj
File type    Directory
Windows:
  .cat       install_dir\cms\RKDSCATL
  .atr       install_dir\cms\ATTRIB
Linux or UNIX:
  .cat       install_dir/tables/cicatrsq/RKDSCATL
  .atr       install_dir/tables/cicatrsq/ATTRIB
where install_dir specifies the IBM Tivoli Monitoring installation directory. The IBM Tivoli Monitoring
installation directory is represented by the %CANDLE_HOME% (Windows) or $CANDLEHOME (Linux and
UNIX) environment variable. The default installation directory on Windows is \IBM\ITM. The default
installation directory on Linux and UNIX is /opt/IBM/ITM.
Notes:
1. If you export the CANDLEHOME environment variable to your current session, many of the installation and configuration commands do not require that CANDLEHOME be passed to them (usually via the -h CLI parameter); see the example following these notes.
2. If you are adding support to a monitoring server on z/OS, you can use the FTP utility provided with
Manage Tivoli Enterprise Monitoring Services. See IBM Tivoli Management Services on z/OS:
Configuring the Tivoli Enterprise Monitoring Server on z/OS for instructions.
3. If you specify an incorrect directory name, you will receive the following error:
The IBM Tivoli Monitoring installation directory cannot exceed 80 characters
or contain non-ASCII, special or double-byte characters.
The directory name can contain only these characters:
"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ _\:0123456789()~-./".
Adding application support (SQL files) to a nonlocal hub: If you are adding application support to a
nonlocal hub monitoring server, you must add the SQL files used by the EIB, in addition to the catalog and
attribute files. Use the following procedure to seed the hub with SQL files using the Manage Tivoli
Enterprise Monitoring Services utility on your local Windows system:
1. Ensure that the hub monitoring server is started.
Note: If you are running in a Hot Standby environment, shut down your Hot Standby (that is, mirror)
monitoring server before completing this procedure. You may restart the Hot Standby
monitoring server only after you have seeded the hub server.
2. In the Manage Tivoli Enterprise Monitoring Services window, select Actions > Advanced > Add
TEMS application support.
3. On the Add Application Support to the TEMS window, select On a different computer and click OK.
4. When you are prompted to ensure that the hub monitoring server is configured and running, click OK.
5. On the Non-Resident TEMS Connection window, provide the hub monitoring server TEMS name
(node ID) and select the communication protocol to use in sending the application support SQL files
to the hub monitoring server.
6. On the next window, provide any values required by the communication protocol.
For example, if the protocol is IP.PIPE, you are prompted for the fully qualified TCP/IP host name and
port number of the z/OS system where the monitoring server is installed. Refer to the values you recorded during installation planning.
7. On the Select the Application Support to Add to the TEMS window, select the products for which you
want to add application support or click Select All to choose all available products. Click OK.
The SQL application support files are added to the hub monitoring server. This might take several
minutes.
8. The Application Support Addition Complete window shown in Figure 28 on page 204 gives you
information about the status and location of the application support SQL files. Click Save As if you
want to save the information in a text file. Click Close to close the window.
If the Application Support Addition Complete window is not displayed after 20 minutes, look in the
IBM\ITM\CNPS\Logs\seedkpp.log files (where pp is the two-character code for each monitoring agent)
for diagnostic messages that help you determine the cause of the problem.
9. If the monitoring server is not already stopped, stop it.
10. Restart the monitoring server.
File type    Directory
.cat         install_dir/tables/cicatrsq/RKDSCATL
.atr         install_dir/tables/cicatrsq/ATTRIB
.sql         install_dir/tables/cicatrsq/SQLLIB
The file names of the application support files have the following format:
kpc.ext
where pc is the product code for the agent and ext is the file extension.
For example, kud.sql is the SQL support file for the DB2 for the workstation monitoring agent. See
Appendix D, IBM Tivoli product, platform, and component codes, on page 567 for a list of product
codes.
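For example, to check whether the support files for the DB2 agent (product code ud) are present on a Linux or UNIX monitoring server, you might list them as follows (a sketch that assumes the default installation directory):
ls /opt/IBM/ITM/tables/cicatrsq/RKDSCATL/kud.cat
ls /opt/IBM/ITM/tables/cicatrsq/ATTRIB/kud.atr
ls /opt/IBM/ITM/tables/cicatrsq/SQLLIB/kud.sql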
If you cannot find application support files for some agents for which you want to install application
support, install the missing files on this computer.
- To install missing support files for base monitoring agents, follow the installation steps described in
Installing the monitoring server on page 153.
- To install missing files for nonbase monitoring agents, follow the installation steps described in
Linux or UNIX: Installing application support on a monitoring server on page 204.
- If no monitoring server is installed on this computer, use the procedure in Installing application
support files on a computer with no monitoring server on page 217.
v Gather the following information about the monitoring server on the remote computer:
  - The host name or IP address
  - The protocol and port number that was specified when the monitoring server was configured
v The monitoring server on the remote computer must be configured to use the IP.UDP, IP.PIPE, or IP.SPIPE communications protocol. This procedure does not support a monitoring server that was configured to use SNA.
v Verify that the monitoring server on the remote computer is running.
v Verify that the hub monitoring server to which this remote server sends its data is running.
In these instructions, install_dir specifies the IBM Tivoli Monitoring installation directory. You can enter
either $CANDLEHOME or the name of the directory. The default installation directory on Linux and
UNIX is /opt/IBM/ITM.
If you are running in a Hot Standby environment, shut down your Hot Standby (that is, mirror)
monitoring server before completing this procedure. You may restart the Hot Standby monitoring server
only after you have seeded the hub server.
Copying the CAT and ATR files to the nonlocal monitoring server: Copy the .cat and .atr files for the agents you want to support from the local Linux or UNIX monitoring server to the nonlocal monitoring server. If you use FTP, copy the files in ASCII format (an example FTP session follows the notes below). The .cat and .atr files are located in the following directories on the local monitoring server:
v CAT files are located in install_dir/tables/cicatrsq/RKDSCATL
v ATR files are located in install_dir/tables/cicatrsq/ATTRIB
Copy the files to the directory shown in Table 48 on the remote computer:
Table 48. Locations of CAT and ATR files for the monitoring server
Remote computer on:    File type    Directory
Windows                .cat         install_dir\cms\RKDSCATL
                       .atr         install_dir\cms\ATTRIB
Linux or UNIX          .cat         install_dir/tables/cicatrsq/RKDSCATL
                       .atr         install_dir/tables/cicatrsq/ATTRIB
where install_dir specifies the IBM Tivoli Monitoring installation directory. The IBM Tivoli Monitoring
installation directory is represented by the %CANDLE_HOME% (Windows) or $CANDLEHOME (Linux and
UNIX) environment variable. The default installation directory on Windows is \IBM\ITM. The default
installation directory on Linux and UNIX is /opt/IBM/ITM.
Notes:
1. If you are installing support on a z/OS monitoring server, you can use the FTP utility provided by
Manage Tivoli Enterprise Monitoring Services to copy the .cat and .atr files. See IBM Tivoli
Management Services on z/OS: Configuring the Tivoli Enterprise Monitoring Server on z/OS for
instructions.
2. If you specify an incorrect directory name, you will receive the following error:
The IBM Tivoli Monitoring installation directory cannot exceed 80 characters
or contain non-ASCII, special or double-byte characters.
The directory name can contain only these characters:
"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ _\:0123456789()~-./".
If the nonlocal monitoring server to which you are adding application support is a hub, you must also
install the SQL files used by the EIB. See Adding application support (SQL files) to a nonlocal hub.
Adding application support (SQL files) to a nonlocal hub: To add application support SQL files to a
hub monitoring server on a nonlocal system, complete this procedure.
Figure 30. Manage Tivoli Enterprise Monitoring Services Install Product Support window
5. On the Add Application Support to the TEMS window, select On a different computer and click OK.
6. When you are prompted to ensure that the hub monitoring server is configured and running, click OK.
7. On the Non-Resident TEMS Connection window, provide the hub monitoring server name (node ID)
and select the communication protocol to use in sending the application support SQL files to the hub
monitoring server.
8. Select the appropriate communications protocol and click OK.
9. On the next window, supply any values required by the selected communication protocol and click
OK.
10. On the Install Product Support window, select the monitoring agents for which you want to add
application support to the hub monitoring server, and click Install.
11. In Manage Tivoli Enterprise Monitoring Services, look for this message:
Remote seeding complete!
2. When prompted for the IBM Tivoli Monitoring home directory, press Enter to accept the default
directory (/opt/IBM/ITM) or enter the full path to the installation directory you used.
The following prompt is displayed:
Select one of the following:
1) Install products to the local host.
2) Install products to depot for remote deployment (requires TEMS).
3) Install TEMS support for remote seeding
4) Exit install.
IBM Tivoli Monitoring V6.2.2 Language Support 3 of 3 DVD: Czech, Hungarian, Polish, Russian
IBM Tivoli Monitoring V6.2.2 DM Upgrade Toolkit Language Support CD
IBM Tivoli Monitoring V6.2.2 ITM 5.1.2 Migration Toolkit Language Support CD
IBM Tivoli Monitoring V6.2.2 Agent Builder Toolkit Language Support CD
IBM Tivoli Monitoring Agents for System P Language Support CD
Agent product installation CDs
The IBM Tivoli Monitoring V6.2.2 Language Support DVDs contain the national language versions of the
help and presentation files for the components and agents listed in Table 49.
Table 49. Language support included on IBM Tivoli Monitoring V6.2.2 Language Support DVDs
Product codes: HD, SY, NT, LZ, UX, UL, UM, TM, P5, PX, PH, PK, PV, VA
Language support for the products found on the IBM Tivoli Monitoring V6.2.2 Tools DVD is available on
the following CDs:
v IBM Tivoli Monitoring V6.2.2: Language Support for Distributed Monitoring Toolkit CD
v IBM Tivoli Monitoring V6.2.2: Language Support for Migration Toolkit CD
v IBM Tivoli Monitoring V6.2.2: Language Support for Agent Builder CD
Language support for all other distributed agents, including those agents for which application support is
included on the Base DVD, is included with the installation media for the individual agent. For the
OMEGAMON XE monitoring agents on z/OS, the installation media for the agents are mainframe tapes,
which do not include the language packs; as with the distributed agents, language support is provided on a
separate CD or DVD.
Install the language packs on any system where you have installed the Tivoli Enterprise Portal or where
you have installed a desktop client. (If you download and run a desktop client using Web Start, you do not
need to install the language packs on the local system. They are downloaded from the portal server.)
Before installing a language pack, first install the component in English. Also ensure that the Java Runtime Environment, version 1.5 or above, is installed and set in the system path. Then follow these steps to install a language pack on any system where you have installed either the Tivoli Enterprise Portal Server or the Tivoli Enterprise Portal desktop client:
1. In the directory where you extracted the language pack installation image, launch the installation
program as follows:
v On Windows, double-click the lpinstaller.bat file.
v On Linux and UNIX, run the following command:
./lpinstaller.sh -c install_dir
where:
install_dir
is the directory where you installed IBM Tivoli Monitoring (usually /opt/IBM/ITM).
To perform a console installation on Linux or UNIX (instead of a GUI installation), add the -i
console parameter to the above command.
2. Select the language you want installed, and click OK.
3. On the Introduction panel, click Next.
4. On the Select Action Set panel, click Add/Update, and click Next.
5. Select the folder in which the Language Support package files (win*.jar and unix*.jar) are located,
and click Next. The default folder is the directory where the installer is launched.
6. Select the languages that you want to install, and click Next.
For multiple selections, hold down the Ctrl key.
7. Review the installation summary, and, if correct, click Next.
The installation's progress is displayed.
8. On the Post Install Message panel, click Next.
9. Click Done once the installation is complete.
10. Reconfigure and restart the Tivoli Enterprise Portal Server and the Eclipse Help Server. See below.
After installing a Tivoli Monitoring V6.2.2 Language Pack, reconfigure the portal server and the desktop
client using either the Manage Tivoli Enterprise Monitoring Services utility or the itmcmd config command.
Use one of the following methods to reconfigure the affected components:
v Launch Manage Tivoli Enterprise Monitoring Services, right-click the affected component, and select
Reconfigure. (See Starting Manage Tivoli Enterprise Monitoring Services on page 261.)
v Change to the install_dir/bin directory, and enter the following commands:
./itmcmd config -A cq
./itmcmd config -A cj
Accept the default values, which reflect the decisions made when the component was installed or last
configured. For instructions on specifying your users' language environment, see the IBM Tivoli Monitoring:
Administrator's Guide.
After you have reconfigured these components, you need to stop and restart these components:
v Tivoli Enterprise Portal Server
v Tivoli Enterprise Portal desktop or browser client
For SuSE Linux Enterprise Server (SLES) 10 computers only: On the SLES 10 platform, the Tivoli
Enterprise Portal displays corrupted text resources in the Japanese locale. Download the Kochi fonts
contained in the kochi-substitute-20030809.tar package from the following Web site: http://sourceforge.jp/
projects/efont/files/.
The downloaded tar file includes the truetype fonts (ttf files), which need to be installed on your system.
Complete the following steps to install the files (example commands follow the list):
1. Extract the tar file.
2. Copy the font files (ttf) to the X11 font path (for example, /usr/X11R6/lib/X11/fonts/truetype).
3. Run SuSEconfig -module fonts.
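A minimal sketch of these steps, run as root on SLES 10 (the directory name inside the tar file is an assumption; adjust the paths for your system):
tar -xvf kochi-substitute-20030809.tar
# Copy the extracted truetype fonts to the X11 font path and refresh the font configuration.
cp kochi-substitute/*.ttf /usr/X11R6/lib/X11/fonts/truetype/
SuSEconfig -module fonts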
Refer to the following Web site for detailed instructions for installing the additional fonts to SuSE Linux:
http://www.suse.de/~mfabian/suse-cjk/installing-fonts.html .
where:
install_dir
is the directory where you installed IBM Tivoli Monitoring (usually /opt/IBM/ITM).
To perform a console installation on Linux or UNIX (instead of a GUI installation), add the -i
console parameter to the above command.
2. Select the language to be uninstalled, and click OK.
3. On the Introduction panel, click Next.
4. On the Select Action Set panel, click Remove, and click Next.
5. Select the languages that you want to uninstall, and click Next.
For multiple selections, hold down the Ctrl key.
6. Review the installation summary, and, if correct, click Next.
7. On the Post Install Message panel, click Next.
8. Click Done once the uninstallation is complete.
9. Reconfigure and restart the Tivoli Enterprise Portal Server and the Eclipse Help Server, as described
above.
The version of IBM JRE required by Tivoli Enterprise Portal clients is installed with Tivoli Management
Services components. If you want to run a client on a machine where no other components are installed,
you can download the IBM JRE installer from the Tivoli Enterprise Portal Server (see Installing the IBM
JRE on page 233). The IBM JRE must be installed as the system JVM.
The packaging, installation, and servicing of the Sun JRE is not provided by IBM. The Sun JRE at version
1.5.0_xx or 1.6.0_xx must already be installed on the machines where the Tivoli Enterprise Portal client will
run. The Sun JRE can be downloaded from the following Web site: http://www.java.com/getjava. For
help installing the Sun JRE and enabling and configuring its Java plug-in on Linux, visit the following Web
site: http://www.java.com/en/download/help/5000010500.xml.
Support for the Sun JRE is a feature of the Tivoli Enterprise Portal client only; installation and use of the
IBM JRE is still required for the Tivoli Enterprise Portal Server and some other Tivoli Management
Services components.
As of IBM Tivoli Monitoring 6.2.2, the installer no longer modifies the system JRE. Instead it installs a local
copy of Java, one that is private to Tivoli Monitoring. This applies to all Java-dependent Tivoli Monitoring
components, including those at a pre-6.2.2 level.
This embedded JRE is installed in the %CANDLE_HOME%\java directory. The system Java is untouched by the
IBM Tivoli Monitoring installer; you can remove it yourself if you desire.
The exceptions are the Tivoli Enterprise Portal Server and the eWAS server, which use the embedded
JVM delivered as part of the eWAS component. Also, the browser client and Java Web Start still use the
system JRE. For your convenience, the system JRE installation image is still distributed as part of the
browser client package.
The desktop client uses the new, embedded JVM.
The IBM Tivoli Monitoring installer does not install an embedded JVM if none of the components selected
for installation has a Java dependency.
Desktop clients
You do not need to configure the desktop client if:
v You want the client to use the IBM JRE, or
v The SUN JRE is the only JRE installed on the computer
You must configure the client start-up script with the location of the JRE if:
v Both IBM and Sun JREs are installed on the computer and you want to use the Sun JRE, or
v You have multiple versions of the Sun JRE installed and you want to specify a particular version
On Windows computers, add a user-level environment variable named TEP_JAVA_HOME whose value is
the fully qualified directory location of the Sun JRE you want to use. For example, TEP_JAVA_HOME=C:\Program Files\Java\jre1.5.0_15\bin\. On UNIX and Linux computers, define the same environment
variable with the fully qualified location of the Sun JRE you want to use for the Tivoli Enterprise Portal
desktop client.
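For example, on Linux you might set the variable in the shell from which the desktop client is launched (a sketch; the JRE path shown is an assumption for illustration):
export TEP_JAVA_HOME=/usr/java/jre1.5.0_15
# Launch the desktop client from the same shell so that it inherits the setting.
/opt/IBM/ITM/bin/itmcmd agent start cj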
Browser clients
Configuration for browser clients involves the following steps:
v Registering the Java plug-in on page 224
When the browser client connects to the Tivoli Enterprise Portal Server, it downloads a Java applet.
Java applets that run in a browser environment use Java plug-in technology. The applet plug-in must be
registered with the browser to establish and initialize an appropriate Java runtime environment for the
applet to execute in.
On Linux and UNIX, the plug-in must be manually registered with the browsers; on Windows, the Java
installer automatically registers the associated Java plug-in with both Internet Explorer and Firefox. After
installation, a plug-in can be de-registered from a given browser selectively using the Java control panel
(see Removing the Java plug-in on Windows on page 226).
v Specifying runtime parameters for the plug-in on page 225
Several runtime parameters should be specified before the browser client is started to allocate enough
memory for the Tivoli Enterprise Portal applet and decrease the startup time.
If you are using Internet Explorer under Windows, you must complete an additional step to use the Sun
JRE:
v Identifying the version of the Sun JRE the client should use on page 226
Conflicts in plug-in registration can sometimes occur if both IBM and Sun JREs are installed on Windows.
You can resolve such problems by de-registering the plug-in in the browsers:
v Removing the Java plug-in on Windows on page 226
Firefox tips
v Unlike Internet Explorer, Firefox does not automatically load the current URL into a new window.
If you want to invoke multiple copies of the Tivoli Enterprise Portal browser client, copy the full
URL from the first window to the second window. Do not load the initial http://
hostname:1920///cnp/kdh/lib/cnp.html URL, because that will not work.
v To use the browser's tab support when opening workspaces in the browser client, you may
need to configure the browser to force links that otherwise open in new windows to open
instead in new tabs. Refer to your browser's instructions for customizing tab preferences. This is
true for Internet Explorer version 7 (and subsequent) as well as Mozilla Firefox.
Reuse of existing tabs is supported only with the Firefox browser. This means if you have a
workspace open in an unfocused tab and from your currently focused tab you select the same
workspace to open in a new tab, the unfocused tab is refocused, and the workspace is reloaded
there. To allow this support:
1. Enter about:config in the address bar, and press Enter.
2. Find the preference signed.applets.codebase_principal_support, and change the value to
true. (Or double-click the preference name to toggle its value.)
3. The first time you attempt to reuse an existing tab, the menu shown in Figure 31 on page
224 might display. If so, add a checkmark to the box Remember this decision, and press
Allow.
If you do not allow this tab reuse, a new tab is opened each time you open a new workspace.
Tip: To open a workspace in a new tab using either Firefox or Internet Explorer, press
Shift+Ctrl while selecting the workspace. This always opens a new tab whether or not your
browser is set to reuse existing tabs.
v If the Firefox process dies unexpectedly on Linux, you may need to also kill the Java process
that is spawned when the Java plug-in is started. If you do not kill the Java process,
subsequent startups of the Tivoli Enterprise Portal client in a new Firefox browser may be very
slow.
If you are running the browser client with the IBM JRE, issue the following command to find the
errant Java process:
ps aef | grep plugin
If running the browser client with the Sun JRE, the errant Java process is called java_vm (see the example below for ending the process).
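Having located the errant process, you might end it like this (a sketch; 12345 stands for the process ID reported by ps):
kill 12345
# If the process does not exit, force it:
kill -9 12345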
Note: For the Sun JRE, you may find two plugin files. For example:
./plugin/i386/ns7/libjavaplugin_oji.so
./plugin/i386/ns7-gcc29/libjavaplugin_oji.so
5. If you make any changes, press Apply and exit the panel.
2. If you have multiple Java versions, verify that you have the correct control panel open by confirming
the Java Runtime and that the JRE is in the correct path (for example, c:\program
files\IBM\Java50\jre\bin for IBM Java on Windows). To verify, click on the Java(TM) tab and check
the Location column for the JRE.
3. Set the Java Runtime Parameters:
a. Click the Java tab.
b. Click the Java Applet Runtime Settings View button.
c. Click in the Java Runtime Parameters field and set the following parameters:
v IBM JRE: -Xms128m -Xmx256m -Xverify:none
-Djava.protocol.handler.pkgs=sun.plugin.net.protocol
v Sun JRE: -Xms128m -Xmx256m -Xverify:none
The -Xms128m specifies the starting size of the Java heap (128 MB) and -Xmx256m specifies the
maximum size. The -Xverify:none parameter disables Java class verification, which can
increase startup time.
The -Djava.protocol.handler.pkgs option is required only for the IBM JRE on Linux because of a problem with the plug-in not caching jar files. If the parameter is left off, the Tivoli Enterprise Portal applet jar files will not be cached, making subsequent startups of the applet slow.
d. Click OK.
4. Confirm that the Temporary Files settings are set to Unlimited:
a. Click the General tab.
b. Click Settings.
c. Select Unlimited for the Amount of disk space to use.
d. Click OK.
5. Clear the browser cache:
a. In the General tab, click Delete Files.
b. In the window that opens, select Downloaded Applets and click OK.
The Sun JRE does not always support the same maximum heap values as the IBM JRE. The true
maximum is calculated based on the resources available on the particular machine being configured, and
the algorithm that is involved is different between the two JREs. The symptom of a memory problem is
that the applet fails to load in the browser, and you receive a message similar to the following:
To resolve this problem, reduce the value of the maximum heap setting in the Java Control panel in 32m
or 64m increments until the error goes away. For example, if you start with the recommended value of
-Xmx256m try reducing it to -Xmx224m or -Xmx192m. Eventually you will reach a value that is appropriate for
the particular machine involved.
Identifying the version of the Sun JRE the client should use
If you are using Internet Explorer for the Tivoli Enterprise Portal browser client, and you want to use the
Sun JRE, you must identify to the Tivoli Enterprise Portal Server the version of the JRE you want the client
to use.
To identify the version, update the file jrelevel.js found in the \itm_install_dir\CNB directory (Windows)
or itm_install_dir/arch/cw (Linux) on the computer where the portal server is installed. Assign a valid
value to the following declared variable:
var jreLevel = "1.5.0"
where the valid values are:
1.5.0
    The browser client will use the IBM 1.5 JRE (the default as shipped).
5.0
    The browser client will use the latest Sun 1.5.0_xx JRE installed.
6.0
    The browser client will use the latest Sun 1.6.0_xx JRE installed.
If the variable is not set to one of these values, the browser client will use the default JRE for the computer.
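For example, to have the browser client pick up the latest installed Sun 1.6.0_xx JRE, the edited declaration would read as follows (a sketch of the changed line only):
var jreLevel = "6.0"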
or
Applet(s) in this HTML page requires a version of java different from the one
the browser is currently using. In order to run the applet(s) in this HTML page,
a new browser session is required.
Press 'Yes' to start a new browser session.
If you encounter problems loading Tivoli Enterprise Portal using Firefox, this is one of the first panels
you should look at to ensure that the Java plug-in has indeed been registered. Often, a quick way to
resolve any registration problems is to simply remove the check mark (if already on), press Apply,
then check the box again, and again press Apply. This action will switch the option off and on so that
the plug-in registration is reset and the correct Windows registry entries get re-established.
4. Remove the check mark from the box for the browser or browsers you want to unregister.
Support for Sun Java 1.6.0_10 or higher with browser clients on Windows
With Sun Java versions 1.6.0_10 and higher, a new plug-in architecture was introduced and established as
the default plug-in. This support enables an applet-caching feature called legacy lifecycle that, coupled
with use of the Sun 1.6 JRE, significantly increases performance of the Tivoli Enterprise Portal browser
client after it has been downloaded. Performance measurements using this configuration show that, with
legacy lifecycle, performance of the browser client is virtually identical to that of the desktop and Java Web
Start deployment modes.
IBM Tivoli Monitoring browser clients do not automatically run with this new plug-in architecture. To use the
Sun 1.6.0_10 (or higher) JRE, before launching the browser you must enable the legacy_lifecycle
parameter in the applet.html file on the computer where the Tivoli Enterprise Portal Server is installed.
v On Windows, launch Manage Tivoli Enterprise Monitoring Services.
1. Right-click Browser Task/SubSystem, and select Reconfigure.
2. Locate the legacy_lifecycle parameter, and double-click the line to bring up the edit dialog.
3. Change the value to true, and check the In Use box.
4. Click OK.
5. Click OK again to save your changes.
v On Linux and AIX, edit the applet.html file in the itm_home/platform/cw branch, where platform is a
string that represents your current operating system.
1. Locate this line:
document.writeln( '<PARAM NAME= "legacy_lifecycle" VALUE="false">' );
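After the change described in the Windows procedure above (setting the value to true), the line would read as follows (a sketch of the edited line):
document.writeln( '<PARAM NAME= "legacy_lifecycle" VALUE="true">' );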
Figure 34. Server connection error, Tivoli Enterprise Portal browser client
Required maintenance: To use Sun Java 1.6.0_10 or higher with any IBM Tivoli Monitoring V6.2, V6.2.1, or V6.2.2 portal client (browser, Java Web Start, or desktop), APAR IZ41252 is required. This APAR is included in the following maintenance deliveries:
v ITM 6.2 fix pack 3 and subsequent
v ITM 6.2.1 interim fix 3 and subsequent
v ITM 6.2.2 (GA release) and subsequent maintenance
Note: At this time Mozilla Firefox is not officially supported for the portal client running in browser (applet)
mode if Sun Java 1.6.0_10 or higher is used. Mozilla Firefox is supported with both Sun Java
1.6.0_07 or lower and IBM Java 1.5.0, which is currently shipped with IBM Tivoli Monitoring
versions 6.2, 6.2.1, and 6.2.2.
Under Windows, the last JRE installed on the machine usually controls which Java Web Start loader is
associated with the Java Web Start deployment files, which have the extension .jnlp. If the Sun JRE was
the last JRE installed, the Sun Java Web Start loader and associated JRE will be used for the Tivoli
Enterprise Portal client. If both IBM and Sun Java are installed on the same machine and the IBM JRE
was installed last, it may be necessary to manually re-associate the .jnlp extension with the Sun JRE.
To verify that the loader is associated with the correct JRE:
1. Launch Folder Options from Windows Control Panel folder.
2. Select the File Types tab.
3. Find and select the JNLP file type in the list of registered file types.
4. Click the Advanced button, select the Launch action, and click the Edit... button.
5. If the javaws.exe (the Java Web Start loader) is not associated with the correct JRE installation path,
use the Browse button to locate and select the correct path.
Note to users of the single sign-on feature: If you have enabled single sign-on and your users start the
portal's browser client via Java Web Start, a special URL is
required:
http://tep_host:15200/LICServletWeb/LICServlet
Note that the servlet that supports single sign-on for the
enterprise portal's Java Web Start client requires that port
15200 be open on the portal server.
For further information about single sign-on, see either
Support for single sign-on for launch to and from other
Tivoli applications on page 14 or the IBM Tivoli Monitoring:
Administrator's Guide.
For more information on using and configuring Java Web Start client and setting up its environment, see
Using Web Start to download and run the desktop client on page 233.
2. In the Manage Tivoli Enterprise Monitoring Services window, right-click the browser or desktop client
and select Reconfigure.
The Configure the Tivoli Enterprise Portal Browser window is displayed. (If you are configuring the
desktop client, the Configure Application Instance window is displayed.)
3. Scroll down in the list of variables until you see the kjr.browser.default variable.
4. Double-click kjr.browser.default.
The Edit Tivoli Enterprise Portal Browser Parm window is displayed.
5. In the Value field, type the path and the application name of the alternative browser application. For
example:
C:\Program Files\Mozilla Firefox\firefox.exe
v bin/cnp.sh
v bin/cnp_instance.sh
v platform/cj/original/cnp.sh_template
To change the location of the Web browser you must change the above file or files to include a new
property:
1. Go to the install_dir/bin directory and edit the cnp.sh shell script.
2. Add your Web browser location to the last line of the file. In the example below, the Web browser location is /opt/foo/bin/launcher:
-Dkjr.browser.default=/opt/foo/bin/launcher
Important: The line is very long and has various options on it, including several other D options to
define other properties. It is very important to add the option in the correct place.
If the last line of your bin/cnp.sh originally looked like the following:
To set the browser location to /opt/foo/bin/launcher, change the line to look like the following:
${JAVA_HOME}/bin/java -showversion -noverify -classpath ${CLASSPATH}
-Dkjr.browser.default=/opt/foo/bin/launcher
-Dkjr.trace.mode=LOCAL -Dkjr.trace.file=/opt/IBM/ITM/logs/kcjras1.log
-Dkjr.trace.params=ERROR -DORBtcpNoDelay=true -Dcnp.http.url.host=
-Dvbroker.agent.enableLocator=false
-Dhttp.proxyHost=
-Dhttp.proxyPort= candle.fw.pres.CMWApplet 2>&1 >> ${LOGFILENAME}.log
Windows example:
<resources os="Windows">
<jar href="classes/browser-winnt.jar"/>
<jar href="classes/browser-core-winnt.jar"/>
<property name="kjr.browser.default" value="C:\Program Files\Internet Explorer\iexplore.exe"/>
</resources>
Linux example:
<resources os="Linux">
<jar href="classes/browser-li.jar"/>
<jar href="classes/browser-core-li.jar"/>
<property name="kjr.browser.default" value="/usr/bin/firefox"/>
</resources>
Note: kjr.browser.default is not the only property you can specify using the <property> keyword.
You can include any client parameters that are specific to your operating system.
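For instance, a trace parameter that appears in the desktop client start-up script shown earlier could be supplied the same way (a sketch; whether you need it depends on your environment):
<property name="kjr.trace.params" value="ERROR"/>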
where systemname is the host name of the computer where the Tivoli Enterprise Portal Server is installed, and 1920 is the default port number for the browser client. Your portal server might have a different port number assigned.
3. Click Yes on the Warning - Security window.
4. Type your user ID and password in the logon window. The default user ID is sysadmin.
5. Click OK.
where TEPS_host_name is the fully qualified host name of the computer where the portal server is
installed (for example, myteps.itmlab.company.com).
3. When prompted, save the java/ibm-java2.exe file to a directory on your hard drive.
4. Change to the directory where you saved the java/ibm-java2.exe file and double-click the file to launch the JRE installer.
5. On the pop-up window, select the language from the drop-down list and click OK.
6. Click Next on the Welcome page.
7. Click Yes to accept the license agreement.
8. Accept the default location for installing the JRE or browse to a different directory. Click Next.
9. Click NO on the message asking if you want to install this JRE as the system JVM.
Make Java 1.5 the system JVM only if there are no other JREs installed on the computer.
10. If another JRE is currently installed as the system JVM and you are prompted to overwrite the current
system JVM, click NO.
Overwriting the current system JVM may cause applications depending on the current JVM to fail.
11. Click Next on the Start Copying Files window to start installing the JRE.
12. On the Browser Registration window, select the browsers that you want the IBM JRE to be
associated with. These would normally be the browsers that you want to use with the browser client.
13. Click Next.
14. Click Finish to complete the installation.
where teps_hostname is the fully qualified host name of the computer where the portal server is
installed (for example, myteps.itmlab.company.com).
3. When prompted, save the installer to disk.
4. Change to the directory where you saved the ibm-java2-i386-jre-5.0-7.0.i386.rpm file and launch the
installer to start the installation program using the following command:
rpm -ivh ibm-java2-i386-jre-5.0-7.0.i386.rpm
You can also install the JRE without downloading the installer by supplying the URL to the rpm in the
command:
rpm -ivh http://teps_hostname:1920///cnp/kdh/lib/java
/ibm-java2-i386-jre-5.0-7.0.i386.rpm
where TEPS_host_name is the fully qualified host name of the computer where the Tivoli Enterprise
Portal Server is installed (for example, myteps.itmlab.company.com).
Web Start downloads and launches the desktop client.
3. Click Next and type a name for the shortcut in the Select a Title for the Program window. For example:
ITM Web Start client
4. Click Finish.
The shortcut appears on your desktop.
v Use one agent depot for all the monitoring servers in your monitoring environment: see Sharing an agent depot across your environment on page 241.
v Deploy an OS agent.
You can also use the remote agent deployment function to configure deployed agents and install
maintenance on your agents. For information, see the IBM Tivoli Monitoring: Administrator's Guide. See
the IBM Tivoli Monitoring: Command Reference for commands that you can use to perform these tasks.
Important: Run the tacmd login command before executing commands from the tacmd library. This requirement does not apply to the addBundles command. Run the tacmd logoff command after you finish using the tacmd command library.
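A typical session might therefore be bracketed as follows (a sketch; the host name, user ID, and password are placeholders):
tacmd login -s hub_tems_host -u sysadmin -p mypassword
# ... run the tacmd commands you need, for example:
tacmd listbundles
tacmd logoff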
with DB2-related agent bundles. If you deploy an agent from a remote monitoring server, you must have an agent bundle in the depot available to the monitoring server.
Note: Agent depots cannot be located on a z/OS monitoring server.
There are two methods to populate the agent depot:
v Populating the agent depot from the installation image
v Populating the agent depot with the tacmd addBundles command on page 240
Application agent installation image: Use the following steps to populate the agent depot from an
application agent installation image:
1. Launch the installation wizard by double-clicking the setup.exe file in the \Windows subdirectory of the
installation image.
2. Select Next on the Welcome window.
3. Click Next on the Select Features window without making any changes.
4. On the Agent Deployment window, select the agents that you want to add to the depot and click Next.
5. Review the installation summary and click Next to begin the installation.
After the agents are added to the agent depot, a configuration window (called the Setup Type window)
is displayed.
6. Clear all selected components. You have already configured all components on this computer and do
not need to reconfigure any now. Click Next.
7. Click Finish to complete the installation.
2. When prompted for the IBM Tivoli Monitoring home directory, press Enter to accept the default
directory (/opt/IBM/ITM). If you want to use a different installation directory, type the full path to that
directory and press Enter.
3. If the directory you specified does not exist, you are asked whether to create it. Type y to create this
directory.
4. The following prompt is displayed:
Select one of the following:
1) Install products to the local host.
2) Install products to depot for remote deployment (requires TEMS).
3) Install TEMS support for remote seeding
4) Exit install.
Please enter a valid number:
B
Moves back one page in the list.
7. When you have specified all the agents that you want to add to the agent depot, type E and press
Enter to exit.
For the full syntax, including parameter descriptions, see the IBM Tivoli Monitoring: Command Reference.
Examples:
v The following example copies every agent bundle, including its prerequisites, into the agent depot on a
UNIX computer from the installation media (CD image) located at /mnt/cdrom/:
tacmd addbundles -i /mnt/cdrom/unix
v The following example copies all agent bundles for the Oracle agent into the agent depot on a UNIX
computer from the installation media (CD image) located at /mnt/cdrom/:
tacmd addbundles -i /mnt/cdrom/unix -t or
v The following example copies all agent bundles for the Oracle agent into the agent depot on a Windows
computer from the installation media (CD image) located at D:\WINDOWS\Deploy:
tacmd addbundles -i D:\WINDOWS\Deploy -t or
v The following example copies the agent bundle for the Oracle agent that runs on the AIX version 5.1.3
operating system into the agent depot on a UNIX computer from the installation media (CD image)
located at /mnt/cdrom/:
tacmd addbundles -i /mnt/cdrom/unix -t or -p aix513
By default, the agent depot is located in the itm_installdir/CMS/depot directory on Windows and
itm_installdir/tables/tems_name/depot directory on UNIX. The tacmd addBundles command puts the
agent bundle in that location unless another location is defined in the monitoring server configuration file
for DEPOTHOME.
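If DEPOTHOME is set, the entry in the monitoring server configuration file might look like the following line (a sketch; the path shown is an assumption):
DEPOTHOME=/opt/IBM/ITM/agentdepot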
If you want to change this location, do the following before you run the tacmd addBundles command:
1. Open the KBBENV monitoring server configuration file located in the itm_installdir\CMS directory on
Windows and the itm_installdir/tables/tems_name directory on Linux and UNIX.
2. Locate the DEPOTHOME variable. If it does not exist, add it to the file.
3. Type the path to the directory that you want to use for the agent depot.
4. Save and close the file.
5. On UNIX or Linux only, add the same variable and location to the kbbenv.ini file located in
itm_installdir/config/kbbenv.ini.
If you do not add the variable to the kbbenv.ini file, it will be deleted from the KBBENV file the next
time the monitoring server is reconfigured.
The following commands are also available for working with agent bundles in the depot:
v tacmd listbundles
v tacmd removebundles
v tacmd viewdepot
See the IBM Tivoli Monitoring: Command Reference for the full syntax of these commands.
Note: Only Tivoli-provided product agent bundles should be loaded into the IBM Tivoli Monitoring
deployment depot. User-provided or customized bundles are not supported. Use only Tivoli
provided tacmd commands to process bundles and to execute agent deployments. Manual
manipulation of the depot directory structure or the bundles and files within it is not supported and
may void your warranty.
Deploying OS agents
Before you can deploy any non-OS agent, you must first install an OS agent on the computer where you
want the non-OS agent to be deployed. In addition to monitoring base OS performance, the OS agent also
installs the required infrastructure for remote deployment and maintenance.
Note: Ensure that you have populated your agent depot, as described in Populating your agent depot on
page 237, before attempting to deploy any agents.
You can install the OS agent locally, as described in Installing monitoring agents on page 183 or remotely
using the tacmd createNode command.
The tacmd createNode command creates a directory on the target computer called the node. The OS
and non-OS agents are deployed in this directory. Agent application support is also added at this time (see
Installing and enabling application support on page 196).
The tacmd createNode command uses one of the following protocols to connect to the computers on
which you want to install the OS agent:
v Server Message Block (SMB), used primarily for Windows servers
v Secure Shell (SSH), used primarily by UNIX servers, but also available on Windows
Note: Only SSH version 2 is supported.
v Remote Execution (REXEC), used primarily by UNIX servers, but not very secure
v Remote Shell (RSH), used primarily by UNIX servers, but not very secure
You can specify a protocol to use; if you do not, the tacmd createNode command selects the appropriate
protocol dynamically.
Important: Unless you specifically indicate otherwise, the agent that you deploy using this command
assumes that the monitoring server to which it connects is the monitoring server from which
you run the command. The agent also uses the default settings for the communications
protocol (IP.PIPE for protocol type and 1918 for the port). To change these default values
(especially if you are not using the IP.PIPE protocol), use the following property (specified with
the -p parameter) when running the command: SERVER=[PROTOCOL://][HOST|IP][:PORT].
For example, SERVER=IP.PIPE://server1.ibm.com:1918.
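Putting this together, a deployment that overrides the default connection settings might look like the following command (a sketch; the target host, credentials, and monitoring server address are placeholders):
tacmd createnode -h server2.ibm.com -u root -w mypassword -p SERVER=IP.PIPE://tems1.ibm.com:1918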
Each agent bundle has its own unique configuration parameters that you may need to specify using this
command. If you have installed the agent bundle that you want to deploy to the deployment depot, you
can view the configuration parameters by running the following command from the monitoring server
where that agent bundle is installed:
tacmd describeSystemType -t pc -p platform
An agent of the same type and platform must be deployed into the depot available to the monitoring server
from which the command is run. You can also get more information about agent-specific parameters in the
agent user's guide for the agent that you want to deploy.
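For example, using the Oracle agent product code (or) and the AIX 5.1.3 platform code (aix513) shown earlier in this chapter, the command might look like this (a sketch):
tacmd describesystemtype -t or -p aix513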
Note: The tacmd command has been updated to support two new features, the asynchronous remote
deployment (which allows you to request the deployment of another remote agent, even if the
previously deployed agent has not initialized fully) and the grouping of agents to be remotely
deployed. The deployment commands have been modified to accept two new grouping parameters:
the -g parameter lets you specify a deployment group and the -b parameter lets you specify a bundle group. For detailed information, including usage examples, see the IBM Tivoli Monitoring:
Command Reference.
With asynchronous remote deployment, CLI requests are queued, and the tacmd command returns
control immediately to the process that invoked them along with a transaction ID that can be used
to monitor the status of the request via either the tacmd getDeployStatus command or the
Deployment Status Summary By Transaction workspace in the Tivoli Enterprise Portal.
Asynchronous remote agent deployment applies both to agents started via the tacmd command and
to those started using the portal client. Workspace reports regarding asynchronous agent
deployment are also available; for information, see the IBM Tivoli Monitoring online help.
where metafile is the name of the .mdl file that you want the Tivoli Universal Agent to use.
To deploy the Tivoli Universal Agent and its referenced script, add the following parameter to the
addSystem command:
-p UA.CONFIG=metafile.mdl UA.SCRIPT=script-filename
where metafile is the name of the .mdl file and script-filename is the name of the script that the Tivoli
Universal Agent uses.
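Put together, a deployment of the Tivoli Universal Agent with a script metafile might resemble the following command (a sketch; the managed system name, metafile name, and script file name are assumptions):
tacmd addsystem -t um -n Primary:myhost:NT -p UA.CONFIG=metafile.mdl UA.SCRIPT=myscript.sh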
If a metafile belonging to a nondefault Data Provider type is remotely deployed (the four default
Data Providers are API, Socket, File, and Script), there is no automatic mechanism to activate the
appropriate Data Provider. For example, if you deploy an ODBC metafile to a remote Tivoli Universal
Agent and that agent has not already been configured to start the ODBC DP, the Data Provider
configuration will not happen automatically as a result of the deployment. You must manually configure the
ODBC Data Provider on that Tivoli Universal Agent for the metafile to be usable. For instructions on
configuring the Data Provider, see the IBM Tivoli Universal Agent User's Guide.
Note to Linux/UNIX users: The instance name is converted to uppercase when the agent registers with
the Tivoli Enterprise Monitoring Server. If the name you specify contains
lowercase letters, this will cause problems when stopping or starting the
agent remotely, as the instance name you specified will not match the actual
instance name. Thus, the universal agent, when used on Linux or UNIX, must
be named with all uppercase letters for the Tivoli Enterprise Portal to start or stop it remotely.
tacmd addBundles -t ssm -p li6213 -i c:\SSM40_Bundle
This example adds the SSM bundle for the Linux platform to the depot.
Once you have populated the depot with SSM bundles, you can use the tacmd viewDepot command to
see the bundles. Each SSM bundle consists of the installer files and the corresponding descriptor file.
The tacmd commands for installing, patching, starting, stopping, and configuring the Tivoli Monitoring
agents have been expanded to support operations with Netcool SSM agents as well.
This example deploys an SSM agent to the Windows machine achan1 using the Server Message Block (SMB) protocol.
The installation directory for the agent is specified by the -d option, and the SNMP port to be used by the
SSM agent is specified using the -p SNMPPORT property.
The above command uninstalls the SSM agent on Windows machine achan1.
This installs SSM 4.0 agent fixpack 1 to the achan1 agent machine. The agent must be up and running; if
not, the request fails.
To start the agent, use the tacmd startAgent command, as explained in Starting an SSM agent on page
247.
test2.exe
Notes:
1. The files specified in the configfile (-c) option or the filelist (-l) option must be stored in the ssmconfig
subdirectory of the depot directory.
v On Windows, if your depot directory is c:\IBM\ITM\CMS\depot, you must put the files specified in the
configfile option or the filelist option into the c:\IBM\ITM\CMS\depot\ssmconfig directory.
v On Linux or UNIX, if your depot directory is /opt/IBM/ITM/tables/TEMS_NAME/depot, put the files into
the /opt/IBM/ITM/tables/TEMS_NAME/depot/ssmconfig directory.
2. The files are pulled from the agent through the Tivoli Enterprise Monitoring Server's HTTP server. Here
is an example of an HTTP request:
http://tems_host_name:1920//ms/kdh/lib/depot/ssmconfig/test1.cfg
Figure 35. Deployment Status Summary workspace showing the status of SSM deployments
Instead of the deployment command returning after some period of time with a message saying that the
deployment completed with a success or a failure, the CLI returns immediately with a transaction identifier
(ID). This frees your user environment (be it the Tivoli Enterprise Portal, the command line, or an
automation script) to continue normal processing:
C:\>tacmd createnode -g Targets -b Agents
KUICCN022I: Request has been successfully queued to the deploy controller.
The transaction ID is 1224009912765000000000041, use the getDeployStatus CLI to view the status.
You can then monitor the status of your deployment requests using the command interface or Tivoli
Enterprise Portal workspaces or situations to check for failures.
Once a set of deployment transactions has been submitted into the IBM Tivoli Monitoring infrastructure,
they are routed to the appropriate remote Tivoli Enterprise Monitoring Server for processing. Each remote
monitoring server can process deployment transactions independently of every other monitoring server,
whether remote or hub. This allows for a large degree of parallelism, which increases with each remote
monitoring server added to your environment. Additionally, each remote Tivoli Enterprise Monitoring Server
can process multiple deployment transactions concurrently, further increasing the level of parallelism
applied to bulk deployments.
The default number of concurrent deployments per monitoring server is 10, but this is configurable using
the DEPLOYTHREADPOOLSIZE setting in the Tivoli Enterprise Monitoring Server configuration file
(KBBENV).
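For example, to allow up to 20 concurrent deployments on a monitoring server (the value 20 is purely illustrative), you might add the following line to KBBENV and restart the monitoring server:
DEPLOYTHREADPOOLSIZE=20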
Figure 36 illustrates the processing model for bulk deployment.
Deploy status
Because deployment transactions are asynchronous (that is, they run in the background), monitoring the
status of deployments is an important activity to ensure they complete successfully. There are three
mechanisms for monitoring deployment status, as follows.
v The tacmd getDeployStatus CLI command.
For more information about this command, refer to the IBM Tivoli Monitoring: Command Reference.
v The Tivoli Enterprise Portal's Deployment Status workspaces.
v Deployment attribute groups, and situations created using the Tivoli Enterprise Portal that are based on
these groups.
Using these features, you can display and monitor the progress of deployments running in your
environment. You can track the status of your deployments either by individual transaction or by group
transaction. A group transaction represents a deployment using groups of agents deployed to groups of
targets; this is discussed in the succeeding sections. Each transaction possesses a status that indicates
the progress the transaction has made in the deployment process. The valid status values for a
deployment transaction are listed below.
Status Description
Pending
The transaction is waiting in the queue to begin processing.
In-Progress
The transaction has begun the deployment process but has not yet finished.
Success
The transaction has completed successfully.
Failed Retrying
The transaction experienced a recoverable error during the deployment process and will be
restarted periodically at the point of failure for a predefined number of iterations. The number of
retry iterations is defined by configurable property ERRORRETRYLIMIT. The time between each
successive attempt is defined by the RETRYTIMEINTERVAL property. Both of these properties
can be set in the monitoring server's configuration file, KBBENV, and take effect when the
monitoring server is restarted. The default value for ERRORRETRYLIMIT is 3 retries, and the
default value for RETRYTIMEINTERVAL is 7 minutes. These default values are based on a typical
bulk deployment scenario using the createNode command. A smaller RETRYTIMEINTERVAL
value (for example, 20 seconds) might be better suited if you perform more bulk upgrade
scenarios than deployments.
If an error persists after all retry iterations are exhausted, the transaction moves to Failed status.
Failed The transaction has completed with an unrecoverable failure, or the number of retry attempts has
been exhausted. Consult the status message included with the transaction status for more
information about the failure.
Situations based on the deployment status attribute groups can alert you to an exceptional circumstance related to a deployment. For more information about these deployment status
attribute groups, refer to the IBM Tivoli Monitoring: Tivoli Enterprise Portal User's Guide or the Tivoli
Enterprise Portal online help.
IBM Tivoli Monitoring provides two deployment situations that monitor the Deploy Status attribute group,
which you can use directly or as samples for creating your own situations. These situations are as follows.
Situation
Description
Deploy_Failed
Triggers a critical event when a deployment transaction fails.
Deploy_Retrying
Triggers a minor event when a deployment transaction enters the failed_retry state.
Function
Deploy groups
Organize deployment targets, which are the systems or agents to which a deployment is targeted.
Bundle groups
Organize the deployable components that will be used during deployment operations.
In addition to the obvious role these groups play in organizing the participants of deployment, the grouping
facilities perform another very important role: Groups and group members can also hold and organize
properties that the deployment process uses; this is discussed in the following sections.
Deploy and Bundle groups are created using tacmd commands. For detailed information about creating
such groups, see the IBM Tivoli Monitoring: Command Reference.
Deploy groups
Deploy groups organize the targets of deployment operations. Deploy targets are specified as members
within the deploy group. These deploy targets can represent unmanaged systems, which could be targets
of a createNode deployment operation. Deploy targets can also represent managed systems where a
Tivoli Monitoring or Netcool System Service Monitor (SSM) agent is already installed. Both of these types
of deploy targets can be added as members to deploy groups. This dual representation of targets within
deploy groups allows a group to specify:
v Targets of an initial OS agent or SSM deployment using the tacmd createNode command.
v Targets of an application agent deployment using the tacmd addSystem command.
v Targets of an agent or SSM upgrade using the tacmd updateAgent command.
v Targets of an agent or SSM configuration deployment using the tacmd configureSystem command.
Once you create the necessary deploy groups to represent logical groupings of systems within your
enterprise, these groups can be used repeatedly during the lifecycle of your monitoring infrastructure to
deploy, update, start, stop, configure, and eventually remove agents.
Best practices: There are a number of best practices associated with deploy groups. The following are
suggested.
v Segment your systems by geographic location. In this case, it is likely that you would want all agents
deployed to a deploy group to connect with the same primary Tivoli Enterprise Monitoring Server.
v Segment your systems according to their role within the enterprise. You might create a group of
database servers, a group of application servers, a group of Web servers. This allows you to deploy
and administer agents to the systems within a group the same way but differently from how you might
deploy and administer agents in a different group.
v Segment your systems according to the monitoring infrastructure being used. You might create a set of
groups for monitoring using Tivoli Monitoring agents and another group or set of groups for monitoring
using SSM agents.
v Segment your systems into smaller groups representing a percentage of total systems. This allows you
to perform deployments into more easily consumable sets of activities.
Bundle groups
Bundle groups organize and represent the components that you deploy. These include IBM Tivoli
Monitoring agents and Netcool SSM agents. Deployable bundles are added as members to bundle groups
using the tacmd command by specifying the agent's product code and optionally the platform and version
for the bundle.
To enable bulk deployment of more than one agent at a time, you add multiple members to the bundle
group representing multiple agents to deploy. For example, you might create a bundle group to deploy a
Windows OS agent and a DB2 application agent by adding a member with product code NT and a
member with product code UD. Since one of these members is an OS agent, you use the tacmd
createNode command to deploy this bundle group.
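A sketch of creating such a bundle group follows; the BUNDLE group type keyword, the -m flag, and the group name are assumptions, so check the IBM Tivoli Monitoring: Command Reference for the exact syntax:
tacmd creategroup -t BUNDLE -g OSandDB2Bundles
tacmd addgroupmember -t BUNDLE -g OSandDB2Bundles -m NT
tacmd addgroupmember -t BUNDLE -g OSandDB2Bundles -m UD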
If you had created a bundle group containing only application agents, such as DB2, Domino, or SAP
agents, you deploy them using the tacmd addSystem command. Each bundle group member represents
a deployment bundle that must be added to the appropriate agent depot using the tacmd addBundles
command before a bulk deployment command is executed involving the bundle group. To avoid the
complexity of managing multiple depots, consider using a shared agent depot; see Sharing an agent
depot across your environment on page 241.
Best practices: There are a number of best practices associated with bundle groups. The following are
suggested.
Grouping
Definition
Grouped by platform type
A group used to specify one or more agent bundles for a specific platform type.
Grouped by system role
A group used to specify a standard set of agents being deployed for a specific machine type or
role within your enterprise. For example, a group deployed to database servers could contain an
OS agent and a database agent.
Grouped for latest update
A group containing various agents where the version specification for agent member bundles is
omitted. This causes initial deployment and upgrades to always deploy the most current available
version in the depot. By managing the version of agents in the depot, this type of group always
deploys the latest versions to target systems.
Group properties
When using the tacmd command for deployment (createNode, addSystem, etc.), you can use the -p
option to specify properties that modify parameters of the deployment or to specify configuration
parameters to the agent installation process. When using the command interface with deploy and bundle
groups, properties previously specified on the deploy command line can be attached to the groups
themselves or to group members. Attaching properties to groups and group members means that they are
available for reuse each time the group is referenced in a deployment command without your needing to
specify properties again on the command line. This creates a very powerful mechanism for defining
behaviors around your deployments and reusing them each time you specify a group as part of a
deployment.
When creating your deploy and bundle groups for bulk deployment, you can assign properties to these
groups. Properties at the group level apply to all members of the group. Additionally, properties can be
assigned to individual group members. Any property assigned to a member applies to that member only
and overrides a similar property specified at the group level. This allows you to build a hierarchy of
properties that is consolidated during deployments to create a complete deployment request. Properties
specified on the groups and group members can include any combination of OS agent, application agent,
and deployment properties. Refer to the IBM Tivoli Monitoring: Command Reference for more information
about specific properties.
Consider the following example where you want to deploy the UNIX OS agent to five AIX systems with the
following hostnames.
aix1.mycompany.com
aix2.mycompany.com
aix3.mycompany.com
aix4.mycompany.com
aix5.mycompany.com
To do this, you must provide login credentials. All five systems share the same root userid and password.
When creating the group (called AIXServers in this example), you can assign the userid and password to the group so it applies to all
members. This way, you need specify the login credentials only once.
# tacmd creategroup -t DEPLOY -g AIXServers -p KDYRXA.RXAUSERNAME=root KDYRXA.RXAPASSWORD=mypass
# tacmd addgroupmember -t DEPLOY -g AIXServers -m aix1.mycompany.com
# tacmd addgroupmember -t DEPLOY -g AIXServers -m aix2.mycompany.com
...
By attaching the userid and password to the deploy group at the group level, the properties apply to all
members without having to repeat them for each member.
Suppose that one of the systems was on the far side of a slow link and required more time to complete a
deployment. By adding a property to that member, you can change the behavior associated with that
member without affecting the behavior of the others.
# tacmd addgroupmember -t DEPLOY -g AIXServers -m aix3.mycompany.com -p KDYRXA.TIMEOUT=3600
But suppose that one of the members had a different root password. By attaching this password to the
member for that system, you override the value at the group level and replace it with the member-specific
value.
# tacmd addgroupmember -t DEPLOY -g AIXServers -m aix5.mycompany.com -p KDYRXA.RXAPASSWORD=yourpass
Table 53 illustrates how the properties from the above example combine to build a useful set of
consolidated properties.
Table 53. Interaction between member properties and group properties

aix1
   Group properties:        KDYRXA.RXAUSERNAME=root, KDYRXA.RXAPASSWORD=mypass
   Member properties:       (none)
   Consolidated properties: KDYRXA.RXAUSERNAME=root, KDYRXA.RXAPASSWORD=mypass
aix2
   Group properties:        KDYRXA.RXAUSERNAME=root, KDYRXA.RXAPASSWORD=mypass
   Member properties:       (none)
   Consolidated properties: KDYRXA.RXAUSERNAME=root, KDYRXA.RXAPASSWORD=mypass
aix3
   Group properties:        KDYRXA.RXAUSERNAME=root, KDYRXA.RXAPASSWORD=mypass
   Member properties:       KDYRXA.TIMEOUT=3600
   Consolidated properties: KDYRXA.RXAUSERNAME=root, KDYRXA.RXAPASSWORD=mypass, KDYRXA.TIMEOUT=3600
aix4
   Group properties:        KDYRXA.RXAUSERNAME=root, KDYRXA.RXAPASSWORD=mypass
   Member properties:       (none)
   Consolidated properties: KDYRXA.RXAUSERNAME=root, KDYRXA.RXAPASSWORD=mypass
aix5
   Group properties:        KDYRXA.RXAUSERNAME=root, KDYRXA.RXAPASSWORD=mypass
   Member properties:       KDYRXA.RXAPASSWORD=yourpass
   Consolidated properties: KDYRXA.RXAUSERNAME=root, KDYRXA.RXAPASSWORD=yourpass
As you can see in the above example, group and member properties are joined during deployment to
create a consolidated set of properties that is used for the deployment. Notice, however, that the deploy
group member properties take precedence over and can actually override those at the deploy group level.
This is visible with aix5.mycompany.com where the member value for KDYRXA.RXAPASSWORD overrode
the group value.
This same property mechanism applies to bundle groups as well. There might be a property that you want
to apply to an agent bundle in a group regardless of where they are deployed. In this case you would
attach the property to the bundle group. Likewise, you might want to specify a property that applies to a
single agent type within a bundle group. In this case you attach the property to the bundle member.
During a deployment operation where you deploy a bundle group to a deploy group, property consolidation
occurs across all properties at the group and member level for both groups. Table 54 lays out the
precedence of group/ member properties.
Table 54. Property precedence between deploy groups and bundle groups

Precedence    Property location
highest       Deploy group
lowest        Bundle group
Create a deploy group for any collection of deployment targets. Group properties can be applied to the
group during creation or afterward. Use any of the best practices described above in Deploy groups
on page 251.
# tacmd creategroup -t DEPLOY -g DBServers ...
...
Deployment
The following best practices are employed to initiate and perform actual deployments. These best
practices apply regardless of whether the deployment is an initial agent installation, an agent upgrade, or
an agent configuration update. In fact, it is important to recognize that the groups created for deployment
are applicable to all agent maintenance operations throughout the agent lifecycle.
1. Submit deployment.
Submitting the deployment differs slightly depending on the type of deployment activity you desire, but
they are all very similar.
v createNode
Use the tacmd createNode command when performing initial deployments of OS and SSM agents.
This includes deploying bundle groups that contain OS agents grouped together with application
agents.
# tacmd createnode -g DBServers -b OS_DBAgentBundles
v addSystem
Use the tacmd addSystem command when initially deploying application agents.
# tacmd addsystem -g AppServers -b AppAgentBundles
v updateAgent
Use the tacmd updateAgent command when performing agent or SSM upgrades.
# tacmd updateagent -g DBServers -b LatestAgentFixpacks
# tacmd updateagent -g AppServers -b LatestAppAgentFixpacks
Note: The KUIWINNT.dsc file on Windows systems and the uiplatform.dsc files (where platform is
the platform name) on Linux and UNIX systems have been added or updated so that the KUI
package (the tacmd commands) can be remotely deployed. Use the following command:
tacmd updateAgent -t ui -n node
where:
node
is the IBM Tivoli Monitoring node on which the KUI package is to be remotely deployed.
v configureSystem
Use the tacmd configureSystem command when performing configuration updates to agents or
SSMs.
# tacmd configuresystem -g DBServers -b DBServerAgentConfig
v removeSystem
Use the tacmd removeSystem command when removing application agents or SSM patches from
a system.
# tacmd removesystem -g AppServers -b AppAgentBundles
2. Monitor deployment.
There are many mechanisms that you can use for monitoring deployment status. These include:
v Using the CLI.
The tacmd getDeployStatus command returns the complete status of requested deployment
transactions.
# tacmd getdeploystatus -g group_transactionID
-h hostname
Then issue the operation directly by using the managed system name parameter instead of the deploy
group:
# tacmd updateAgent -t product_code -n managed_OS
4. Purge status.
When all deployment transactions have completed or the deployment status is no longer required,
clear the transaction's deployment status from the systems.
# tacmd clearDeployStatus -g group_transactionID
While the steps above describe the complete process of deployment, it is common to perform deployments
in small sets of consumable transactions. That is, perform an initial deployment on a smaller group of
systems to ensure that there are no issues that would inhibit the successful completion of deployment.
Once that deployment is complete, continue with one or more larger deployments using target deploy
groups with more members. Work through each deploy group representing some portion of your enterprise
until you have completed all deployments for your enterprise.
2. Run the following command using the parameters described in Table 56:
./itmcmd manage [-h install_dir] [-s]
Table 56. Parameters for the itmcmd manage command
-h install_dir
IP.UDP Settings
   Hostname or IP Address
IP.PIPE Settings
   Hostname or IP Address
   Port Number
IP.SPIPE Settings
   Hostname or IP Address
   Port Number
SNA Settings
   Network Name
   LU Name
   LU 6.2 LOGMODE
   TP Name
6. If the monitoring server was running when you began the configuration process, after the
reconfiguration is complete, you are asked if you want it restarted.
Reply 1 or 2, as appropriate.
If you choose the restart, the monitoring server is stopped and then started again. These actions are
necessary to force the server to read your changed configuration (which is always read at server startup).
On UNIX platforms, the component should be restarted under the same user ID that it previously ran under. If the
monitoring server was not running when reconfigured, no action is performed, and the server remains
stopped.
Notes:
1. Use caution when starting, stopping, or restarting the Tivoli Enterprise Monitoring Server, as it is a key
component.
2. You cannot configure a hub monitoring server and a remote monitoring server on the same system
because you cannot have two processes listen to the same IP port on the same system.
v You can configure any number of remote monitoring servers on the same system as long as each
reports to a different hub and uses a different port number.
v You can configure any number of hub monitoring servers on the same system as long as each uses
a different port number.
v If a hub monitoring server and a remote monitoring server are configured on the same system, the
remote monitoring server must report to a hub on another system using a port other than the one
used by the hub running on the same system.
v You cannot have two monitoring servers talking to each other over IP from the same system unless
one of them is a high-availability hub monitoring server, because a high-availability hub is isolated to
a private IP address. See the IBM Tivoli Monitoring: High-Availability Guide for Distributed Systems.
IP.UDP Settings
   Hostname or IP Address
IP.PIPE Settings
   Hostname or IP Address
   Port Number
IP.SPIPE Settings
   Hostname or IP Address
   Port Number
SNA Settings
   Network Name
   LU Name
   LU 6.2 LOGMODE
   TP Name
6. If the agent was running when you began the configuration process, after the reconfiguration is
complete, you are asked if you want the agent restarted.
Reply 1 or 2, as appropriate.
If you choose the restart, the agent is stopped and then started again. These actions are necessary to
force the agent to read your changed configuration (which is always read at agent startup). On UNIX
platforms, the component should be restarted under the same user ID that it previously ran under. If the agent was
not running when reconfigured, no action is performed, and the agent remains stopped.
tacmd stopAgent
Stops Windows, Linux, and UNIX monitoring agents.
See the IBM Tivoli Monitoring: Command Reference for the syntax of these commands.
Changing the port number for browser client connections to the portal server
A portal server on Windows, Linux, or UNIX uses port 1920 for HTTP connections and 3661 for HTTPS
connections from portal browser clients.
Do not change the default port settings, especially on multifunction UNIX and Linux systems, since many
components might be located on the same system and some of these components might depend on the
default values being used for HTTP and HTTPS ports.
If you need to change the default settings, you can change them by using the KDE_TRANSPORT
environment variable:
On Windows:
1. In the Manage Tivoli Enterprise Monitoring Services window, right-click Tivoli Enterprise Portal
Server, point to Advanced, and select Edit ENV File to open the KFWENV file.
2. Add the following line to the file:
KDE_TRANSPORT=HTTP:1920 HTTPS:3661
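For example, to move the browser-client connections to ports 8080 and 8443 (values chosen only for illustration), the entry might instead read:
KDE_TRANSPORT=HTTP:8080 HTTPS:8443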
Changing the port number for desktop client connections to the portal server
Use the following procedure for each desktop client instance you want to change:
1. In the Manage Tivoli Enterprise Monitoring Services window, right-click the Tivoli Enterprise Portal
Desktop instance that you want to change, and select Reconfigure.
The Configure Application Instance window opens.
2. In the Parms list, scroll down to cnp.http.url.port and double-click.
3. On the Edit window, perform the following steps:
a. Change the port number value to the port number you want.
b. Select the In Use check box.
Note: If you fail to select this check box, the port number value that you entered will revert to the
original value.
c. Click OK.
4. On the Configure Application Instance window, verify that the port number value (in the Value column)
for cnp.http.url.port has changed.
5. Click OK.
See also the following information about using the IP.PIPE and IP.SPIPE protocols and parameters:
v The PORT parameter specifies the well-known port for the monitoring server.
v The COUNT:N parameter is the mechanism for reserving IP.PIPE ports for components that connect to
the monitoring server. N is the number of IP.PIPE ports to reserve on a host in addition to the
well-known port for the monitoring server. Use the COUNT parameter to reserve ports for components
that must be accessible from outside a firewall. Accessibility from outside the firewall requires IP.PIPE
ports and because these ports must be permitted at the firewall, the ports must be predictable.
For example, if the well-known port is 1918, COUNT:3 starts the search at port 6014 (1918 + 1*4096). If
the agent process cannot bind to port 6014, the algorithm tries port 10110 (1918 + 2*4096). If port
10110 is not available, the search goes to port 14206 (1918 + 3*4096).
The agent is assigned to the first available port encountered in the search. The process fails to start if
the search reaches the highest port without a successful binding (port 14206 in this example).
v The SKIP:N parameter specifies the number of ports to skip when starting the search for an available
port using the port assignment algorithm. Use the SKIP parameter for components that do not need
access across a firewall.
For example, if the well-known port is 1918, SKIP:2 specifies to start the search at port 10110 (1918 +
2*4096), skipping ports 1918 and 6014 (1918 + 1*4096). The algorithm continues searching until it finds
an available port (sample COUNT and SKIP settings follow this list).
v The USE parameter enables or disables a protocol. To disable a protocol, specify use:n. To enable a
protocol, specify use:y. This parameter has no default.
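As a sketch of how these parameters are coded (assuming the well-known port 1918 used in the examples above), a firewall-facing component and a component without firewall access might use entries such as the following:
KDE_TRANSPORT=IP.PIPE COUNT:3
KDE_TRANSPORT=IP.PIPE SKIP:2
With COUNT:3, the reserved candidate ports are 6014, 10110, and 14206; with SKIP:2, the search for an available port starts at 10110.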
Note: Tivoli Monitoring agents allocate ports 1920 and 3661 as HTTP and HTTPS listener ports.
Example
The example in Table 59 shows the coding to use on a system that contains the components shown:
Table 59. Using COUNT and SKIP variables to assign port numbers
Component
Coding
KDE_TRANSPORT=IP.PIPE COUNT:1
Windows OS agent
Does not require firewall access
where N is the number of ports to reserve (COUNT:N) or the number of ports to skip (SKIP:N). This
example uses the IP.PIPE protocol. It also applies to IP.SPIPE.
3. Search the file for a KDC_FAMILIES environment variable. If a KDC_FAMILIES variable exists in the
file, merge its settings with the new KDE_TRANSPORT variable. The KDE_TRANSPORT variable
supersedes and overrides the KDC_FAMILIES variable.
4. Save the file.
5. Recycle the component. (Right-click the component and select Recycle.)
At the highest level, the hub monitoring server receives heartbeat requests from remote monitoring servers
and from any monitoring agents that are configured to access the hub monitoring server directly (rather
than through a remote monitoring server). The default heartbeat interval used by remote monitoring
servers to communicate their status to the hub monitoring server is 3 minutes. The default heartbeat
interval of 3 minutes for monitoring servers is suitable for most environments, and should not need to be
changed. If you decide to modify this value, carefully monitor the system behavior before and after making
the change.
At the next level, remote monitoring servers receive heartbeat requests from monitoring agents that are
configured to access them. The default heartbeat interval used by monitoring agents to communicate their
status to the monitoring server is 10 minutes.
You can specify the heartbeat interval for a node (either a remote monitoring server or a remote
monitoring agent) by setting the CTIRA_HEARTBEAT environment variable. For example, specifying
CTIRA_HEARTBEAT=5 sets the heartbeat interval to 5 minutes. The minimum heartbeat interval that can
be configured is 1 minute.
v For monitoring servers on Windows computers, you can set this variable by adding the entry to the
KBBENV file. You can access this file from the Manage Tivoli Enterprise Monitoring Services utility by
right-clicking Tivoli Enterprise Monitoring Server and clicking Advanced -> Edit ENV File. Note that you
must stop and restart the monitoring server for the changes to the KBBENV file to take effect.
v For monitoring servers on Linux and UNIX computers, you can set the CTIRA_HEARTBEAT variable by
adding the entry to the monitoring server configuration file. The name of the monitoring server
configuration file is of the form hostname_ms_temsname.config. For example, a remote monitoring
server named REMOTE_PPERF06 running on host pperf06 has a configuration filename of
pperf06_ms_REMOTE_PPERF06.config. Note that you must stop and restart the monitoring server for the
configuration changes to take effect.
v For remote monitoring servers, you can set this variable by adding an entry to the KBBENV file. You
can access this file from Manage Tivoli Enterprise Monitoring Services by right-clicking Tivoli
Enterprise Monitoring Server and clicking Advanced -> Edit ENV File. You must stop and restart the
monitoring server for changes to the KBBENV file to take effect.
v For Windows OS agents, you can set this variable by adding the entry to the KNTENV file. You can
access this file from Manage Tivoli Enterprise Monitoring Services by right-clicking Windows OS
Monitoring Agent and clicking Advanced -> Edit ENV File. You must stop and restart the monitoring
agent for the changes to the KNTENV file to take effect.
v For agents on Linux and UNIX computers, you can set the CTIRA_HEARTBEAT variable by adding an
entry to the agent .ini file (for example, lz.ini, ux.ini, ua.ini). When the agent is stopped and restarted,
the agent configuration file is recreated using settings in the .ini file, as shown in the example that follows.
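For example, to set a 15-minute heartbeat interval for a Linux OS agent (the value 15 is illustrative), you might add the following line to lz.ini and then restart the agent:
CTIRA_HEARTBEAT=15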
When a monitoring agent becomes active and sends an initial heartbeat request to the monitoring server, it
communicates the desired heartbeat interval for the agent in the request. The monitoring server stores the
time the heartbeat request was received and sets the expected time for the next heartbeat request based
on the agent heartbeat interval. If no heartbeat interval was set at the agent, the default value is used.
Changes to offline status typically require two missed heartbeat requests for the status to change. Offline
status is indicated by the node being disabled in the portal client's Navigator View. If the heartbeat interval
is set to 10 minutes, an offline status change would be expected to take between 10 and 20 minutes
before it is reflected on the portal client's Navigator View.
Attention: Lower heartbeat intervals increase CPU utilization on the monitoring servers processing the
heartbeat requests. CPU utilization is also affected by the number of agents being monitored. A low
heartbeat interval and a high number of monitored agents could cause the CPU utilization on the
monitoring server to increase to the point that performance related problems occur. If you reduce the
heartbeat interval, you must monitor the resource usage on your servers. A heartbeat interval lower than 3
minutes is not supported.
Figure 40. Manage Tivoli Enterprise Monitoring Services Advanced Utilities window
2. From the pop-up menu, select Advanced Utilities -> Build TEPS Database.
v If the only RDBMSes installed on this computer are DB2 Database for Linux, UNIX, and Windows
and the portal server's embedded Derby database, select the appropriate database manager, as
shown in Figure 41 on page 273.
Figure 41. The Manage Tivoli Enterprise Monitoring Services window for selecting the new portal server database.
The only available databases are DB2 for the workstation and Derby.
v If this computer is running DB2 Database for Linux, UNIX, and Windows; Microsoft SQL
Server; and the embedded Derby database, select the appropriate database manager, as shown in Figure 42.
Figure 42. The Manage Tivoli Enterprise Monitoring Services window for selecting the new portal server database.
The available databases are DB2 for the workstation, SQL Server, and Derby.
Note: Changing to a different database does not migrate the portal server data stored in it.
3. Continue with the database configuration, as explained in number 12 on page 140 under Prerequisites
for the single-computer installation on page 139.
3. Return to your regular ID after you have moved the user ID to root.
may occur when large queries are being handled. If the topas output shows that KfwServices is nearing
these values, the memory model should be changed in smaller environments as well as large ones.
Note: This section applies only to a 32-bit portal server running on AIX. When installed for the first time
on an AIX system with a 64-bit kernel, a 64-bit portal server is installed, and this procedure is not
needed. When upgrading from previous versions, the 32-bit portal server will be rebuilt, so you
must re-enable the large-memory model for the KfwServices process.
Changing the memory model requires that the KfwServices load header be modified. Also, changes must
be made to your DB2 for the workstation configuration. Complete the following steps to make these
changes. These steps use the default directory. Use the directory locations appropriate to your system.
Important: You must run the ldedit command below each time new Tivoli Enterprise Portal Server
maintenance has been applied.
To make the required memory model and DB2 for the workstation configuration changes:
1. Stop the Tivoli Enterprise Portal Server:
cd /opt/IBM/ITM/bin
./itmcmd agent stop cq
2.
3. To verify that the maxdata value has been set issue the following command:
dump -ov KfwServices
This command displays the maxdata value in KfwServices. Maxdata should show as:
maxSTACK     maxDATA      SNbss    magic    modtype
0x00000000   0x80000000   0x0003   0x010b   1L
5. Using your preferred editor, add the following line to the cq.ini file:
EXTSHM=ON
6. Using the DB2 for the workstation installation user ID (the default value is db2inst1), make the DB2 for
the workstation configuration changes as follows:
a. Stop the DB2 for the workstation server if not already stopped:
cd /db2inst1/sqllib/adm
db2stop
For more information about using the large and very large address-space models to accommodate
programs requiring data areas that are larger than those provided by the default address-space model,
see http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.genprogc/doc/
genprogc/lrg_prg_support.htm. For more information about using the EXTSHM environment variable to
increase the number of shared memory segments to which a single process can be attached, see
http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic.
Is the number of database connections between Warehouse Proxy Agent and the database
server. (The default number of database connections is 10; you can change this number.)
10
Is the number of log and configuration files used by the Warehouse Proxy Agent.
The value used for the file descriptor ulimit should be high enough so that the limit is not met. A simpler
formula is X + 1000, where X is the number of agents warehousing.
After you determine the value for the file descriptor ulimit, modify the ulimit as appropriate for the
operating system. Refer to the system documentation for the command (usually ulimit) and procedures to
make this change permanently across system restarts, or contact your UNIX or Linux System
Administrator.
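As a sketch, on most UNIX and Linux shells you can display and raise the limit for the current session as follows; the value 5000 is illustrative, and the procedure for making the change permanent varies by operating system:
ulimit -n
ulimit -n 5000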
If no existing snapshot can be found for the monitoring server that is being switched to, a new set of portal
server data is created, which means that your existing customizations will not be included. To restore these for
use with the new monitoring server (if needed), invoke the migrate-import process using as input the
saveexport.sql file created during the previous snapshot request.
When reconfiguring the Tivoli Enterprise Portal Server to switch back to the previous Tivoli Enterprise
Monitoring Server, answering Yes causes the previous snapshot to be automatically loaded, thus restoring
your prior customizations.
If you intend to use SSL for communication between the Tivoli Enterprise Portal Server and its clients, use
the GSKit provided with IBM Tivoli Monitoring to manage certificates and keys. See the IBM Tivoli
Monitoring: Administrator's Guide for instructions for setting up this encryption.
For additional information about using public/private key pairs, see the iKeyman documentation available
at http://www-128.ibm.com/developerworks/java/jdk/security/50/.
Enabling and disabling SSL for the Tivoli Enterprise Portal Server
IBM Tivoli Monitoring is shipped with SSL disabled as the default.
If you want to use Secure Sockets Layer communication between the portal server and the portal client,
use the following steps to enable it:
Note: This procedure applies only to the primary interface. For instructions on additional interfaces,
see Chapter 12, Additional Tivoli Enterprise Portal configurations, on page 279.
On a Windows system:
1. In the Manage Tivoli Enterprise Monitoring Services window, right-click Tivoli Enterprise Portal
Server.
2. Click Advanced -> Configure TEPS Interfaces.
3. In the TEPS Interface Definitions window, highlight cnps and click Edit.
4. Select Enable SSL for TEP Clients.
5. Click OK to save your changes and close the window.
6. Recycle the service by stopping and starting it.
On a Linux system:
1. Change to the install_dir/bin directory
2. Run the following command:
./itmcmd manage
3. In the Manage Tivoli Enterprise Monitoring Services window, right-click Tivoli Enterprise Portal
Server.
4. Click Configure
5. In the first tab, select Enable SSL for TEP Clients to enable SSL in the Tivoli Enterprise Portal Server
window.
6. Click OK to save your changes and close the window.
7. Recycle the service by stopping and starting it.
Disabling SSL
If you do not want to use Secure Sockets Layer communication between IBM Tivoli Monitoring
components and the Tivoli Enterprise Portal Server, use the following steps to disable it:
Note: Each interface independently enables or disables SSL. If you are using multiple interfaces, you
must disable all of them.
1. In Manage Tivoli Enterprise Monitoring Services, right-click Tivoli Enterprise Portal Server.
2. Click Advanced -> Edit ENV file.
3. Find the following line:
kfw_interface_cnps_ssl=Y
4. Change the Y to N.
5. Save the file and exit.
6. Click Yes when you are asked if you want to recycle the service.
5. Type the IP address for the Tivoli Enterprise Portal Server computer (this should be the same
computer where IIS 6.0 is running) and click Next.
6. Type the path to the IBM Tivoli Monitoring home directory that is the root of the Web Content
subdirectories. The default path is C:\IBM\ITM\CNB. Click Next.
7. Select Read, Run scripts, and Execute. Click Next.
8. Click Finish.
9. Right-click the new Web site and click Properties.
10. Click the Documents tab.
11. In the Add Content Page field, type index.html. This is the main page for the Tivoli Enterprise
Portal.
12. Click the Move Up button to move index.html to the top of the list.
13. Click the HTTP Headers tab.
14. Click MIME Types.
15. Click New next to MIME Types.
16.
17.
18.
19.
Extension   MIME Type
.class      application/java-class
.ior        application/octet-stream
.jar        application/java-archive
.jks        application/octet-stream
.jnlp       application/x-java-jnlp-file
.js         application/x-javascript
.llser      application/octet-stream
.pl         application/x-perl
.ser        application/java-serialized-object
.txt        text/plain
.zip        application/zip
20. Click OK.
21. Click Apply.
v On Linux and UNIX computers, change the value between the double quotation marks ("") to
itm_installdir/os_dir/cw, where itm_installdir is the directory where IBM Tivoli Monitoring is
installed and os_dir is the operating system type (li6263 for SLES9 on Intel systems, li3263 for SLES9
on zSeries systems, or aix533 for AIX systems). For example:
/opt/IBM/ITM/ls3263/cw
v On Windows computers, change the value between the double quotation marks ("") to
itm_installdir/CNB where itm_installdir is the directory where IBM Tivoli Monitoring is installed.
Use forward slashes for the path. For example:
DocumentRoot "C:/IBM/ITM/CNB"
4. Find the line that begins with <Directory docRoot>. Change the path to the value used for
DocumentRoot. (From our previous examples, this would be itm_installdir/ls3263/cw on Linux and
UNIX and itm_installdir/CNB on Windows.)
5. Save and close the file.
6. Open the mime.types file in a text editor and make the following changes. For the IBM HTTP Server,
this file is located in the conf directory of the server installation directory. For an Apache server, this file
is typically located in /etc/mime.types, but the location may differ by platform. You may need to search
for the file.
a. If the following lines are not in the file, add them:
application/java-archive jar
image/icon ico
b. If you will be using Java Web Start and the following lines are not in the file, add them:
application/x-java-jnlp-file jnlp
image/x-icon ico
c. Modify the line that begins with application/octet-stream to include ior ser at the end. For
example:
application/octet-stream bin dms lha lzh exe class so dll cab ior ser
Browser client
During installation, the IBM Tivoli integral Web server is installed as a component of the Tivoli Enterprise
Portal Server. You can also use an external Web server on your Tivoli Enterprise Portal Server computer,
as shown in Firewall scenarios for Tivoli Enterprise Portal on page 287.
Currently, IBM supports an external Web server for browser client access only when the external server is
installed on the same computer as the Tivoli Enterprise Portal Server.
Desktop client
Although the desktop client does not need a Web server to start Tivoli Enterprise Portal, it does use it for
common files stored on the Tivoli Enterprise Portal Server, such as the graphic view icons and style
sheets. If your Tivoli Enterprise Portal Server setup disables the integral Web server and uses only an
external Web server, you need to specify the Interoperable Object Reference (IOR) for every desktop
client.
Note: The last line of cnp.sh is very long and has various options on it, including several other -D
options to define other properties. It is very important to add the option in the correct place.
If the last line of your bin/cnp.sh originally looked like this:
${JAVA_HOME}/bin/java -showversion -noverify -classpath ${CLASSPATH}
-Dkjr.trace.mode=LOCAL -Dkjr.trace.file=/opt/IBM/ITM/logs/kcjras1.log
-Dkjr.trace.params=ERROR -DORBtcpNoDelay=true -Dcnp.http.url.host=
-Dvbroker.agent.enableLocator=false
-Dhttp.proxyHost=
-Dhttp.proxyPort= candle.fw.pres.CMWApplet 2>&1 >> ${LOGFILENAME}.log
To specify the IOR, change the line to look like the following:
${JAVA_HOME}/bin/java -showversion -noverify -classpath ${CLASSPATH}
-Dcnp.http.url.DataBus=http://xyz.myserver.com/cnps.ior
-Dkjr.trace.mode=LOCAL -Dkjr.trace.file=/opt/IBM/ITM/logs/kcjras1.log
-Dkjr.trace.params=ERROR -DORBtcpNoDelay=true -Dcnp.http.url.host=
-Dvbroker.agent.enableLocator=false
-Dhttp.proxyHost=
-Dhttp.proxyPort= candle.fw.pres.CMWApplet 2>&1 >> ${LOGFILENAME}.log
codebase=http://$HOST$:$PORT$///cnp/kdh/lib
with:
codebase=http://$HOST$/
v Add the following statement to the set of <property> elements underneath the <resources>
section:
<property name="cnp.http.url.DataBus" value="http://$HOST$/cnps.ior"/>
4. For these changes to take effect, reconfigure the Tivoli Enterprise Portal Server.
Use the following URL to launch the Java Web Start client:
http://teps_hostname/tep.jnlp
Interface Name
Type a one-word title for the interface.
Host
If you are defining an interface for a specific NIC or different IP address on this computer, type
the TCP/IP host address. Otherwise, leave this field blank.
Proxy Host
If you are using address translation (NAT), type the TCP/IP address used outside the firewall.
This is the NATed address.
Port
Type a new port number for the Tivoli Enterprise Portal Server. The default 15001 is for the
server host address, so a second host IP address or a NATed address requires a different port
number.
Proxy Port
If the port outside the firewall will be translated to something different than what is specified for
Port, set that value here.
Enable SSL for TEP clients
Enable secure communications between clients and the portal server.
6. Click OK to add the new Tivoli Enterprise Portal Server interface definition to the list.
2. Following the entries for the default cnps interface, add the following variables as needed, specifying
the appropriate values:
KFW_INTERFACE_interface_name_HOST=
If you are defining an interface for a specific NIC or different IP address on this computer,
specify the TCP/IP host address.
KFW_INTERFACE_interface_name_PORT=
Type a port number for the Tivoli Enterprise Portal Server. The default 15001 is for the server
host address, so a second host IP address or a NATed address requires a different port
number.
KFW_INTERFACE_interface_name_PROXY_HOST=
If you are using address translation (NAT), type the TCP/IP address used outside the firewall.
This is the NATed address.
KFW_INTERFACE_interface_name_PROXY_PORT=
If the port outside the firewall will be translated to something different than what is specified for
Port, set that value here.
KFW_INTERFACE_interface_name_SSL=Y
If you want clients to use Secure Sockets Layer (SSL) to communicate with the Tivoli
Enterprise Portal Server, add this variable.
Note: You can download the IBM HTTP Server for free at http://www-306.ibm.com/software/webservers/
httpservers/.
Figure 45 shows a scenario with the following configuration:
v Has an intranet firewall
v Has no NAT
v Uses the integral Web server
The default Tivoli Enterprise Portal Server interface "cnps" is used. No additional interface definitions are
needed. Browser mode users, whether going through the firewall or not, start Tivoli Enterprise Portal at:
http://10.10.10.10:1920///cnp/client
v Has no NAT
v Uses an external Web server (such as IBM HTTP Server, Apache or IIS)
Browser mode users, whether going through the firewall or not, start Tivoli Enterprise Portal with
http://10.10.10.10 or http://10.10.10.10/mydirectory
(where mydirectory is the alias), or substitute the host name for the IP address.
For intranet configurations using an external Web server, with no NAT, you do not need to add a new
interface definition. Web server port 80 is used automatically when none is specified in the URL.
In this scenario, the monitoring server and agents can be installed on the Tivoli Enterprise Portal Server
computer.
Figure 47 on page 290 shows the following two-part configuration:
v Intranet firewall without NAT and using the integral Web server
v Internet firewall with NAT and using an external Web server
Figure 47. Intranet with integral Web server; Internet with external Web server
Intranet users can enter the URL for either the integral Web server or the external Web server:
http://10.10.10.10:1920///cnp/client or http://10.10.10.10
v Internet firewall with NAT through the firewall to the external Web server
http://198.210.32.34/?ior=internet.ior
Figure 48. Intranet and Internet with integral and external Web servers
The intranet firewall configuration requires a new Tivoli Enterprise Portal Server interface named "intranet",
which uses proxy host 192.168.1.100 and port 15003.
The Internet DMZ configuration requires a new Tivoli Enterprise Portal Server interface definition.
The Internet configuration uses the same Tivoli Enterprise Portal Server "internet" interface definition as
the previous scenario: proxy host 198.210.32.34 and port 15002.
In this scenario, the monitoring server and agents cannot be installed on the Tivoli Enterprise Portal Server
computer.
Figure 49 on page 292 shows the following two-part configuration:
v Intranet firewall with NAT through the firewall to the external Web server using http://192.168.1.100, and
without NAT inside the DMZ to the integral Web server using http://10.10.10.10:1920///cnp/client
v Internet firewall with NAT through the firewall to the external Web server using http://198.210.32.34.
Figure 49. Two host addresses, intranet and Internet, with integral and external Web servers
The intranet firewall configuration uses the same Tivoli Enterprise Portal Server interface definition (named
"intranet") as in the scenario shown in Figure 48 on page 291: http://10.10.10.10; proxy host
192.168.1.100; and port 15003.
The Internet DMZ configuration uses the default Tivoli Enterprise Portal Server interface definition: host
192.168.33.33; proxy host 198.210.32.34; port 15002; and proxy port 444.
In this scenario, the monitoring server and agents cannot be installed on the Tivoli Enterprise Portal Server
computer.
Defining hubs
In this step you activate a SOAP server and define hubs with which it communicates. You can use
Manage Tivoli Enterprise Monitoring Services to configure the SOAP server. On Linux and UNIX you can
also use the itmcmd config command. After you configure the SOAP Server, you must restart the Tivoli
Enterprise Monitoring Server.
v Windows: Defining hubs
v UNIX and Linux: Defining hubs (GUI procedure) on page 294
v UNIX and Linux: Defining hubs (CLI procedure) on page 295
5. Select the communications protocol to be used with the hub from the Protocol menu.
6. Specify an alias name in the Alias field.
The alias for the local hub monitoring server must always be "SOAP". For hubs with which the local
SOAP Server communicates, you may choose any alias (for example, HUB2). Alias names can be a
minimum of 3 characters and a maximum of 8 characters.
7. Do one of the following:
v If you are using TCP/IP or TCP/IP Pipe communications, complete the fields in Table 61:
Table 61. TCP/IP Fields in Hub Specification dialog
Field
Description
Hostname or IP Address
Port
v If you are using SNA communications, complete the fields in Table 62:
Table 62. SNA Fields in Hub Specification dialog
Field
Description
Network Name
LU Name
LU6.2 LOGMODE
TP Name
8. Click OK. The server tree is displayed, with the newly defined hub.
You can define any hubs with which the local SOAP Server will communicate by repeating steps 4 - 7, or
you can specify which user IDs can access the SOAP Server you just defined and what access privileges
they have. See Adding users on page 296.
2. Accept the default values, which should reflect the choices made during the last configuration, until you
see the following prompt:
*************************
Editor for SOAP hubs list
*************************
Hubs
## CMS_Name
1 ip.pipe:TEMS_NAME[port_#]
1)Add, 2)Remove ##, 3)Modify Hub ##, 4)UserAccess ##, 5)Cancel, 6)Save/exit:
CMS Name
The host name of the hub monitoring server. The host name must be fully qualified.
Port Number
Alias
An alias for the hub. Alias names can be a minimum of 3 characters and a maximum of 8
characters. The alias for the local hub must always be SOAP, but you must create a new alias
for additional hubs (for example, HUB2).
After you enter the alias, the list of hubs is displayed with the new hub added. For example,
Hubs
##   CMS_Name
1    ip.pipe:chihuahua[1918]
2    ip:maple[1918]
1)Add, 2)Remove ##, 3)Modify Hub ##, 4)UserAccess ##, 5)Cancel, 6)Save/exit:
You can continue to add hubs, or you can proceed to define user access for the hubs you have
already defined.
Adding users
Access to SOAP server can be secured in one of two ways: by enabling Security: Validate User and
creating user accounts on the host of the hub monitoring server, or by adding specific users to the server
definition.
If Security: Validate User is not enabled and no users are added to the server definition, the SOAP
server honors all requests regardless of the sender. If Security: Validate User is enabled on the hub
monitoring server, the SOAP server honors requests only from users defined to the operating system or
security authorization facility of the monitoring server host.
However, if any users are added to the SOAP server definition, only those users who have also been
defined to the operating system or the security authorization facility of the monitoring server host have
access to the server, regardless of whether or not Security: Validate User is enabled.
In this step you define users to a SOAP Server and specify the access privileges for each user: Query or
Update. The access privileges control what methods the user is allowed to use. Update access includes
Query access. Table 64 lists the methods associated with each access. For information on using these
methods, see the IBM Tivoli Monitoring: Administrator's Guide.
Table 64. SOAP methods associated with access privileges
Method
Update
Query
CT_Acknowledge
CT_Activate
CT_Alert
CT_Deactivate
CT_Email
CT_Execute
CT_Export
CT_Get
CT_Redirect
CT_Reset
CT_Resurface
CT_Threshold
WTO
After you configure the SOAP Server, you must restart the Tivoli Enterprise Monitoring Server.
2. To define users and assign access privileges, enter 4 followed by a space, and then the number (in the
list) of the hub you want to configure. For example:
1)Add, 2)Remove ##, 3)Modify Hub ##, 4)UserAccess ##, 5)Cancel, 6)Save/exit:4 1
Any users already defined are listed under the corresponding access.
3. To define a user with Query access, enter 1 followed by a space and the user ID. To define a user with
Update access, enter 2 followed by a space and the user ID. See Table 64 on page 296 for a list of the
methods associated with each type of access.
Note: If the Security: Validate User option is enabled, the user ID must be a valid user on the hub
system.
The prompt appears again with the new user added to the appropriate access list. You can continue to
add users by repeating step 3 or complete the configuration by typing 4 and pressing Enter.
5. Create another ASCII text file named URLS.txt that contains URLs that will receive the SOAP request.
For example: http://hostname:1920///cms/soap
6. Run the following command:
kshsoap SOAPREQ.txt URLS.txt
SOAPREQ.txt is the name of the file that contains the SOAP request and URLS.txt is the name of the
file that contains the URLs.
The kshsoap utility processes the SOAPREQ.txt file and displays the output of the SOAP request in the
command window. The SOAP request is sent to each URL listed in URLS.txt, and the SOAP response
from each URL displays in the DOS window.
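As an illustration only, a SOAPREQ.txt file for a simple query might contain a CT_Get request such as the following; the exact request syntax for each method is documented in the IBM Tivoli Monitoring: Administrator's Guide, so treat this as a sketch:
<CT_Get><object>ManagedSystem</object></CT_Get>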
&shilev.&rtename.RKANPARU(KDSENV)
For example: ITM.SYP1.RKANPARU(KDSENV)
Note: The &shilev and &rtename are variables that correspond to high level qualifiers of the
RKANPARU(KDSENV) partitioned dataset. These variables can take 1 to 8 characters.
Be aware that each time maintenance or reconfiguration takes place in your environment, these
files might be recreated; any changes are then lost and must be reapplied.
The following lists the settings that may affect the monitoring server performance:
KDCFP_RXLIMIT
This parameter establishes the maximum number of 1 KB packets that can be transmitted to
this endpoint in a single RPC request or response. The default value is 4096 KB (4 MB); the
minimum is 1024 KB (1 MB); there is no maximum. If the remote endpoint (session partner)
exceeds this limit (that is, sends more), the RPC request fails with status
KDE1_STC_RXLIMITEXCEEDED. The intent of RXLIMIT is to prevent memory overruns by placing
an upper limit on a single RPC request or response. If sufficient capacity exists in a large-scale
deployment, consider setting this value to 8192.
To increase the buffer size to 8 MB, include the following environment setting:
KDCFP_RXLIMIT=8192
CMS_DUPER
This parameter enables or disables situation synchronization of common filter objects actually
monitored by agents or endpoints. Enabling this setting in monitoring server environments with
predominantly z/OS address space applications (for example, OMEGAMON XE for CICS or
Sysplex) improves performance and response time by limiting data collection samplings on behalf
of running situations. Enable it by setting the value to YES; disable it by setting the value to NO.
This parameter is enabled by default.
EVENT_FLUSH_TIMER
This parameter sets an interval, in minutes, at which pending I/Os are committed to
situation status history by a background writer and garbage collection task. This feature improves
the throughput of incoming events into the hub monitoring server as situation statuses arrive.
EIB_FLUSH_TIMER
This parameter specifies, in minutes, how long the monitoring server waits before
resetting distribution and database event requests to their initial state, thereby freeing the resources
held by a request if no event information has been processed in the specified time. The
default setting is 2 minutes. If event requests are not responding within 2 minutes, it might be
desirable to use a higher setting to allow requests more time to process, particularly
in larger, more complex environments.
DELAY_STOP
This parameter specifies, in seconds, how long to delay shutdown of UNIX and Linux monitoring
servers, as invoked by the itmcmd server stop tems_name
command. The default value is 60 seconds. The delay allows network connections to
close before an immediate restart of the monitoring server with the itmcmd server start
tems_name command. If you do not immediately restart the monitoring server after shutting it
down, you can set this parameter to a lower value so that the monitoring server shutdown
completes more quickly.
KGLCB_FSYNC_ENABLED
This parameter is not available on Windows systems, and is not available on IBM Tivoli Monitoring
V6.1 systems.
For Linux and UNIX platforms, this variable specifies whether the fsync() system call
is invoked after writes to the file system. This configuration variable can be set in the
standard configuration file for the monitoring server. By default, for maximum reliability, fsync() is
called. If and only if the following line is added to the monitoring server configuration file,
fsync() calls are omitted:
KGLCB_FSYNC_ENABLED='0'
The default behavior is to call fsync() after writes, which is equivalent to the setting:
KGLCB_FSYNC_ENABLED='1'
The fsync() system call flushes the filesystem's dirty pages to disk and protects against loss of
data in the event of an operating system crash, hardware crash or power failure. However, it can
have a significant negative effect on performance because in many cases it defeats the caching
mechanisms of the platform file system. On many UNIX platforms, the operating system itself
syncs the entire filesystem on a regular basis. For example, by default the syncd daemon that
runs on AIX syncs the filesystem every 60 seconds which limits the benefit of fsync() calls by
application programs to protecting against database corruption in the most recent 60 second
window.
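For illustration only, several of these settings might appear together in the monitoring server environment
file, for example the RKANPARU(KDSENV) member described earlier on z/OS, or the monitoring server
environment file on distributed systems (the exact file name depends on your platform and release). The
values shown here are examples, not recommendations:
KDCFP_RXLIMIT=8192
CMS_DUPER=YES
EVENT_FLUSH_TIMER=5
EIB_FLUSH_TIMER=5
DELAY_STOP=15
KGLCB_FSYNC_ENABLED='1'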
Because the KFWTSIT table is the most active table, use the runstats and reorg facilities on a regular
(for example, daily) basis.
The following commands illustrate how to run the RUNSTATS facility on the KFWTSIT table (from a
DB2 command prompt):
CONNECT TO TEPS;
RUNSTATS ON TABLE TEPS.KFWTSIT AND INDEXES ALL ALLOW WRITE ACCESS ;
CONNECT RESET;
The following commands illustrate how to run the REORG facility on the KFWTSIT table (from a DB2
command prompt):
CONNECT TO TEPS;
REORG TABLE TEPS.KFWTSIT INPLACE ALLOW WRITE ACCESS START ;
CONNECT RESET;
Note: These tuning changes can also be made using the DB2 Control Center.
Table 65. Default Java heap size parameters by portal client type
Browser
v Where specified: Java Runtime Parameters in the Java Control Panel
v Default initial heap size: 4 MB (increase to 128 MB)
v Default maximum heap size: 64 MB (increase to 256 MB)
Java Web Start desktop
v Default initial heap size: 128 MB
v Default maximum heap size: 256 MB
Desktop
v Where specified: cnp.bat (Windows) or cnp.sh (Linux), in install_dir/CNP
v Default initial heap size: 128 MB
v Default maximum heap size: 256 MB
For the desktop client and Java Web Start desktop client, the default maximum Java heap size is 256 MB,
which is a good setting for most environments.
For the browser client, the Java Plug-in has a default maximum Java heap size of 64 MB on Windows,
which is too low. The Tivoli Enterprise Portal browser client uses the IBM Java Plug-in, which is
automatically installed on your computer with the Tivoli Enterprise Portal. If the Java heap size parameters
are not set properly, browser client performance will be slow, and your workstation might produce
HEAPDUMP and JAVACORE files, indicating an out-of-memory condition, while you are logged on.
To receive good performance from the browser client, you must increase the Java heap size parameters
from their default values. Before you start the browser client, take the following steps:
1. In the Windows Control Panel, open the Java Control Panel.
2. If you have multiple Java versions, verify that you have the correct control panel open by confirming
the Java Runtime is Version 1.5 and that the JRE is in the IBM path (such as c:\program
files\IBM\Java50\jre\bin). To verify, click on the Java(TM) tab and check the Location column for the
JRE.
3. Set the Java Runtime Parameters:
a. Click the Java tab.
b. Click the Java Applet Runtime Settings View button.
c. Double-click the Java Runtime Parameters field and enter -Xms128m -Xmx256m -Xverify:none.
d. Click OK.
The -Xms128m specifies the starting size of the Java heap (128 MB) and -Xmx256m specifies the
maximum size.
4. Confirm that the Temporary Files settings are set to Unlimited:
a. Click the General tab.
b. Click Settings.
c. Select Unlimited for the Amount of disk space to use.
d. Click OK.
5. Clear the browser cache:
a. In the General tab, click Delete Files.
b. In the window that opens, select Downloaded Applets and click OK.
With either the desktop or browser client, if you observe symptoms of heap memory exhaustion using a
maximum Java heap size setting of 256 MB, increase the maximum setting in increments of 64 MB until
the symptoms disappear.
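For the desktop client, the heap sizes are Java options in cnp.bat (Windows) or cnp.sh (Linux). As a
sketch only, assuming your copy of the script passes its heap options on the line that launches the client,
one 64 MB increment above the defaults in Table 65 would be:
-Xms128m -Xmx320m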
Make sure the client workstation has enough memory to handle the maximum heap size. To determine if
the client workstation has sufficient memory, observe the available physical memory (as shown on the
Windows Task Manager Performance tab) when the workstation is not running the Tivoli Enterprise
Portal client, but is running any other applications that need to run concurrently with the portal client. Verify
that the client workstation has enough available physical memory to hold the entire maximum Java heap
size for the Tivoli Enterprise Portal plus another 150 MB. The additional 150 MB provides an allowance for
non-Java heap storage for the Tivoli Enterprise Portal and extra available memory for use by the operating
system.
For more information on Java memory management and Java heap tuning parameters, please refer to the
IBM Developer Kit and Runtime Environment, Java 2 Technology Edition, Version 5.0 Diagnostics Guide
(SC34-6650).
cnp.siteventcon.initial_batchsize
Maximum number of events that the situation event console will process in the first event batch
cycle. The default value is 100.
cnp.siteventcon.initial_dispatchrate
Number of milliseconds that will elapse after the first event batch cycle is processed by the
situation event console. The default value is 5000 milliseconds (5 seconds).
cnp.siteventcon.batchsize
Maximum number of events that the situation event console will process in all subsequent event
batch cycles. The default value is 100.
cnp.siteventcon.dispatchrate
Number of milliseconds that will elapse after each subsequent event batch cycle is processed by
the situation event console. The default value is 1000 milliseconds (1 second).
Note: Settings made here at the client level take precedence over those at the monitoring environment
level. If you have specified a setting at the client level, then any event variables defined within the
portal server configuration are ignored. If you have not specified a setting at the client level, the
portal server variables are used. If you have not specified any settings, the default values are used.
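As a sketch, assuming these keys are supplied to the desktop client as Java system properties on the
launch command in cnp.bat or cnp.sh (the values here are illustrative only):
-Dcnp.siteventcon.batchsize=200 -Dcnp.siteventcon.dispatchrate=2000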
collection interval The data collection interval, in minutes. This value can be 1, 5, 15, 30, 60, or
1440 (1 day).
24 Represents 24 hours in one day.
# instances at each interval The number of instances recorded at each interval.
The two variables in this formula are the collection interval and the number of instances recorded at
each interval.
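For example, assuming the formula multiplies the samples per hour (60 divided by the collection interval,
in minutes) by 24 hours and by the number of instances recorded at each interval, an attribute group
collected every 15 minutes that returns 100 instances per interval produces (60 / 15) x 24 x 100 = 9600
rows per day.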
The collection interval is a configuration parameter specified in the Historical Data Collection
configuration dialog.
The number of instances recorded at each interval depends on the nature of the attribute group and
the managed system being monitored. Some attribute groups, such as NT_Memory, generate a
single row of data per collection interval. Most attribute groups, however, generate multiple rows of
data, one row for each monitoring instance (for example, one per CPU, one per disk, and so on).
Certain attribute groups can be instance-intense, generating dozens or hundreds of rows of data per
collection interval. Examples of instance-intense attribute groups would be those reporting
information at the process level, thread level, disk level (for file servers), or network connection level.
For attribute groups that return multiple rows, the number of instances recorded at each interval is
configuration dependent, and can be different from one monitoring environment to another. The
Warehouse load projections spreadsheet requires the user to specify the number of instances recorded
at each interval. There are several approaches that you can use to come up with this number.
v Using the portal client, build a table view for the monitoring agent and define a query to obtain data
from the desired attribute group. The number of rows shown in the table view is the number of
instances that would be recorded at each interval. For details on how to define table views in the
portal client, refer to the IBM Tivoli Monitoring: Tivoli Enterprise Portal User's Guide.
v Issue a SOAP call to collect data for this attribute group from the monitoring agent. The number of
data rows returned by the SOAP call is the number of instances that would be recorded at each
interval. For details on how to issue SOAP calls, refer to Appendix A Tivoli Enterprise Monitoring
Web services in the IBM Tivoli Monitoring: Administrator's Guide.
v If you have a test environment, you can create a remote monitoring server to use in historical data collection
testing. Enable historical data collection for the desired attribute group under this remote monitoring
server, and configure a representative agent to connect to this monitoring server. When the agent
uploads data to the Warehouse Proxy agent, you can query the WAREHOUSELOG table to see how
many rows were written by the agent for the attribute group.
To minimize the data volume (rows per day) generated by a monitoring agent for an attribute group,
consider the following two recommendations:
v Use the longest data collection interval (1, 5, 15, 30, 60, or 1440 minutes) that will provide the
desired level of information.
v Avoid or minimize data collection for instance-intense attribute groups, which can generate a large
number of rows per data collection interval.
v When configuring historical data collection for an attribute group, you specify which monitoring servers
collect the data. Because historical data collection is configured at the monitoring server level, restrict the
number of agents collecting data for an attribute group, if possible, by enabling historical data collection
on a subset of the monitoring servers.
v To minimize the performance impact on the monitoring server, configure historical data to keep
short-term history files at the agent, if possible, rather than at the monitoring server.
v Enable warehouse collection only for attribute groups where there is a planned use for the information.
For attribute groups with historical data collection enabled but not configured for warehouse collection,
you need to schedule regular tasks to prune the short-term history files (the supplied programs are
described in the IBM Tivoli Monitoring: Administrator's Guide).
v To spread the warehouse collection load across the day, configure warehouse collection to occur hourly
rather than daily.
Linux Half the real storage with a minimum of 16 MB and a maximum of 512 MB -1.
Note: The above values are from the IBM Developer Kit and Runtime Environment, Java 2 Technology
Edition, Version 5.0 Diagnostics Guide.
To set the size of maximum Java heap size for the Warehouse Proxy agent, edit the hd.ini configuration
file and modify the KHD_JAVA_ARGS variable as shown below:
KHD_JAVA_ARGS=-Xmx256m
A maximum Java heap size of 256 megabytes is more than adequate for most environments.
Setting KHD_JNI_VERBOSE=Y in the configuration file enables logging of the garbage collector's
actions. If the Java log contains an excessive number of garbage collection entries during a single
warehousing interval, consider increasing the size of the Java heap.
v A maximum Java heap size of 256 megabytes (shown in the example above) is adequate for most
environments.
v In addition to the -Xmx Java parameter, you can also specify the -verbose:gc Java run-time parameter,
which causes diagnostic messages related to garbage collection to be written to the log. If there is an
excessive number of garbage collection entries, consider increasing the size of the Java heap (see the
combined example after this list).
v For more information on Java heap tuning parameters, refer to the IBM Developer Kit and Runtime
Environment, Java 2 Technology Edition, Version 5.0 Diagnostics Guide, which is available from
http://www.ibm.com/developerworks/java/jdk/diagnosis.
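As an illustration only, if the agent's Java options are supplied through a variable such as
KHD_JAVA_ARGS (as in the Warehouse Proxy agent example earlier), the heap size and the garbage
collection trace can be combined on one line:
KHD_JAVA_ARGS=-Xmx256m -verbose:gc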
where:
A
Specifies the maximum number of Java-based internal trace files that can exist at any one time for a
single launch.
Specifies the maximum number of lines per Java-based internal trace file.
Consider increasing these log parameters so that you have a few days worth of data in the logs for
diagnostic purposes.
Database tuning
Database tuning is a complex task, and for important databases, it requires the skills of a database
administrator. For the Tivoli Data Warehouse, a database located on a single disk using out of the box
database parameters is suitable only for small test environments. For all other environments, careful
planning, monitoring and tuning are required to achieve satisfactory performance.
There are a number of sources of database configuration and tuning information that should be helpful in
the planning, monitoring, and tuning of the Tivoli Data Warehouse:
1. Understanding the disk requirements for your database on page 337 describes factors to consider in
planning the disk subsystem to support the Tivoli Data Warehouse.
2. The paper Relational Database Design and Performance Tuning for DB2 Database Servers is
available from the Tivoli Open Process Automation Library (OPAL) by searching for "database tuning"
or navigation code "1TW10EC02" at OPAL. This paper is a concise reference describing some of the
major factors that affect DB2 performance. This document is a good starting point to use before
referencing more detailed information in manuals and Redbooks devoted to DB2 performance. While
DB2-specific, many of the concepts apply to relational databases in general, such as Oracle
and Microsoft SQL Server.
3. The Tivoli Data Warehouse tuning chapter of the Redbook Tivoli Management Services Warehouse
and Reporting (SG24-7290) builds upon the previously referenced OPAL paper, supplementing it with
information about Oracle and Microsoft SQL Server. This chapter is almost 100 pages in length.
4. The Optimizing the performance chapter of the Redbook IBM Tivoli Monitoring Implementation and
Performance Optimization for Large Scale Environments (SG24-7443) contains a section on database
tuning considerations for the Tivoli Data Warehouse. This section makes suggestions about specific
tuning parameters for DB2, Oracle and MS SQL. At approximately 12 pages, this section is much
shorter than the tuning chapter in the previously referenced Redbook (item number 3).
5. The IBM Tivoli Monitoring: Troubleshooting Guide contains an item Summarization and Pruning Agent
in large environment which lists specific tuning changes that were made to a Tivoli Monitoring V6.2.2
Tivoli Data Warehouse supporting a large number of agents.
The remainder of this section is a summarized version of material from the OPAL paper in number 2
above, which has been supplemented by identifying specific parameters relevant to Tivoli Data Warehouse
and providing some suggested ranges of values.
Terminology
The following terms are useful for understanding performance issues:
Throughput
The amount of data transferred from one place to another or processed in a specified amount of
time. Data transfer rates for disk drives and networks are measured in terms of throughput.
Typically, throughput is measured in kilobytes per second (KBps), megabits per second (Mbps), or gigabits per second (Gbps).
Optimizer
When an SQL statement must be executed, the SQL compiler must determine the access plan to
the database tables. The optimizer creates this access plan, using information about the
distribution of data in specific columns of tables and indexes if these columns are used to select
rows or join tables. The optimizer uses this information to estimate the costs of alternative access
plans for each query. Statistical information about the size of the database tables and available
indexes heavily influences the optimizer estimates.
Clustered Index
An index whose sequence of key values closely corresponds to the sequence of rows that are
stored in a table. Statistics that the optimizer uses measure the degree of this correspondence.
Cardinality
The number of rows in a table or, for indexed columns, the number of distinct values of that
column in the table.
Prefetch
An operation in which data is read before its use when its use is anticipated. DB2 supports the
following mechanisms:
Sequential prefetch
A mechanism that reads consecutive pages into the buffer pool before the application
requires the pages.
List prefetch or list sequential prefetch
Prefetches a set of non-consecutive data pages efficiently.
Performance factors
The following performance factors, which are thoroughly detailed in subsequent sections, affect overall
performance of any application:
v Database Design
v Application Design
v Hardware Design and Operating System Usage
This section identifies areas where you can influence performance of the Tivoli Data Warehouse database.
Performance and table space types: DMS table spaces usually perform better than SMS table spaces
because they are pre-allocated and do not use up time extending files when new rows are added. DMS
table spaces can be either raw devices or file system files. DMS table spaces in raw device containers
provide the best performance because double buffering does not occur. Double buffering, which occurs
when data is buffered first at the database manager level and subsequently at the file system level, might
be an additional cost for file containers or SMS table spaces.
If you use SMS table spaces, consider using the db2empfa command on your database. The db2empfa
command, which runs the Enable Multipage File Allocation tool, runs multipage file allocation for a
database. With multipage file allocation enabled for SMS table spaces, disk space is allocated one extent
rather than one page at a time, improving INSERT throughput. In version 8 of DB2, this parameter is
turned on by default.
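For example, to enable multipage file allocation for the warehouse database (shown here with the
commonly used default name WAREHOUS; substitute your own database name), run the tool from a DB2
command prompt:
db2empfa WAREHOUS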
If you are using a RAID device, create the table space with a single container for the entire array. When a
database is created in DB2, a default table space called USERSPACE1 is created, and by default, Tivoli
Monitoring uses this table space when creating its tables in the DB2 database. You can create a new
default table space in DB2 by creating a table space with the name IBMDEFAULTGROUP. If a table space
with that name, and a sufficient page size, exists when a table is created without the IN tablespace clause,
it is used. You can create the table space in a different location, for example on a RAID array, and you
can create it as a DMS table space if you prefer. The following example creates the default table space as
an SMS table space (MANAGED BY SYSTEM):
CREATE REGULAR TABLESPACE IBMDEFAULTGROUP
   IN DATABASE PARTITION GROUP IBMDEFAULTGROUP
   PAGESIZE 4096
   MANAGED BY SYSTEM
   USING ('E:\DB2\NODE0000\SQL00001\IBMDEFAULTGROUP')
   EXTENTSIZE 32
   PREFETCHSIZE AUTOMATIC
   BUFFERPOOL IBMDEFAULTBP
   OVERHEAD 12.670000
   TRANSFERRATE 0.180000
   DROPPED TABLE RECOVERY ON;
File system caching on a Windows system: On Windows systems, the operating system might
cache pages in the file system cache for DMS file containers and all SMS containers. For DMS device
container table spaces, the operating system does not cache pages in the file system cache. On Windows,
the DB2 registry variable DB2NTNOCACHE specifies whether or not DB2 will open database files with the
NOCACHE option. If DB2NTNOCACHE is set to ON, file system caching is eliminated. If DB2NTNOCACHE
is set to OFF, the operating system caches DB2 files. This standard applies to all data except for files that
contain LONG FIELDS or LOBs. Eliminating system caching permits more available memory for the
database, and the buffer pool or SORTHEAP can be increased.
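For example, to eliminate file system caching, set the registry variable with the db2set command and then
stop and restart the instance (db2stop and db2start) for the change to take effect:
db2set DB2NTNOCACHE=ON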
Buffer pools: A buffer pool is an area of memory into which database pages are read, modified, and
held during processing. On any system, accessing memory is faster than disk I/O. DB2 uses database
buffer pools to attempt to minimize disk I/O. Although the amount of memory to dedicate to the buffer pool
varies, in general more memory is preferable. Start with between 50% and 75% of your system's
main memory devoted to buffer pools if the machine is a dedicated database server. Because it is a
memory resource, buffer pool usage must be considered along with all other applications and processes
that are running on a server. If your table spaces have multiple page sizes, create only one buffer pool for
each page size.
Logging: Maintaining the integrity of your data is extremely important. All databases maintain log files
that record database changes. DB2 logging involves a set of primary and secondary log files that contain
log records that show all changes to a database. The database log is used to roll back changes for units
of work that are not committed and to recover a database to a consistent state.
DB2 provides the following strategies for logging:
v Circular logging
v Log retention logging
Circular logging: Circular logging is the default log mode in which log records fill the log files and
subsequently overwrite the initial log records in the initial log file. The overwritten log records are not
recoverable. This type of logging is not suited for a production application.
Log retain logging: With log retain logging, each log file is archived when it fills with log records. New log
files are made available for log records. Retaining log files enables roll-forward recovery, which reapplies
changes to the database based on completed units of work (transactions) that are recorded in the log. You
can specify that roll-forward recovery is done to the end of the logs, or to a specific point in time before
the end of the logs. Because DB2 never directly deletes archived log files, the application is responsible
for maintaining them (for example, archiving, purging, and so on).
Log performance: Ignoring the logging performance of your database can be a costly mistake, especially
in terms of time. Optimize the placement of the log files for write and read performance, because the
database manager must read the log files during database recovery. To improve logging performance, use
the following suggestions:
v Use the fastest disks available for your log files. Use a separate array or channel if possible.
v Use Log Retain logging.
v Mirror your log files.
v Increase the size of the database configuration Log Buffer parameter (LOGBUFSZ). This parameter
specifies the amount of the database heap to use as a buffer for log records before writing these
records to disk. The log records are written to disk when one of the following events occurs:
– A transaction commits, or a group of transactions commit, according to the definition in the
MINCOMMIT configuration parameter.
– The log buffer is full.
– Another internal database manager event occurs.
v Buffering the log records supports more efficient logging file I/O because the log records are written to
disk less frequently and more log records are written each time.
v Tune the Log File Size (LOGFILSIZ) database configuration parameter so that you are not creating
excessive log files.
Database maintenance: Regular maintenance, which involves running the REORG, RUNSTATS, and
REBIND facilities in that order on the database tables, is a critical factor in the performance of a database
environment. A regularly scheduled maintenance plan is essential for maintaining peak performance of
your system. Implement at least a minimum weekly maintenance schedule.
REORG: After many INSERT, DELETE, and UPDATE changes to table data, often involving variable
length columns activity, the logically sequential data might be on non-sequential physical data pages. The
database manager must perform additional read operations to access data. You can use the REORG
command to reorganize DB2 tables, eliminating fragmentation and reclaiming space. Regularly scheduled
REORGs improve I/O and significantly reduce elapsed time. Implement a regularly scheduled maintenance
plan.
DB2 provides the following types of REORG operation, classic REORG and In-Place REORG. If you have
an established database maintenance window, use the classic REORG. If you operate a 24 by 7
operation, use the In-Place REORG.
v Classic REORG
The fastest method of REORG
Indexes rebuilt during the reorganization
Ensures perfectly ordered data
Access limited to read-only during the UNLOAD phase, with no access during other phases
Not re-startable
v In-Place REORG
Slower than the Classic REORG and takes more time to complete
Does not ensure perfectly ordered data or indexes
Requires more log space
Can be paused and re-started
Can allow applications to access the database during reorganization
RUNSTATS: The DB2 optimizer uses information and statistics in the DB2 catalog to determine optimal
access to the database based on the provided query. Statistical information is collected for specific tables
and indexes in the local database when you execute the RUNSTATS utility. When significant numbers of
table rows are added or removed, or if data in columns for which you collect statistics is updated, execute
RUNSTATS again to update the statistics.
Use the RUNSTATS utility to collect statistics in the following situations:
v When data was loaded into a table and the appropriate indexes were created
v When you create a new index on a table. Execute RUNSTATS for the new index only if the table was
not modified since you last ran RUNSTATS on it.
v When a table has been reorganized with the REORG utility
v When the table and its indexes have been extensively updated by data modifications, deletions, and
insertions. Extensive in this case might mean that 10 to 20 percent of the table and index data was
affected.
v Before binding or rebinding, application programs whose performance is critical
v When you want to compare current and previous statistics. If you update statistics at regular intervals,
you can discover performance problems early.
v When the prefetch quantity is changed
v When you have used the REDISTRIBUTE DATABASE PARTITION GROUP utility
The RUNSTATS command has several formats that primarily determine the depth and breadth or statistics
that are collected. If you collect more statistics, the command takes more time to run. The following
options are included:
v Collecting either SAMPLED or DETAILED index statistics
v Collecting statistics on all columns or only columns used in JOIN operations
v Collecting distribution statistics on all, key, or no columns. Distribution statistics are very useful when
you have an uneven distribution of data on key columns.
Take care when running RUNSTATS, because the collected information affects the optimizer's selection
of access paths. Implement RUNSTATS as part of a regularly scheduled maintenance plan if any of the
preceding conditions occur. To ensure that the index statistics are synchronized with the table, execute
RUNSTATS to collect table and index statistics at the same time.
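As a sketch of combining these options (the schema and table name are placeholders; substitute the
warehouse tables you actually collect), a statement that gathers key-column distribution statistics and
sampled detailed index statistics might look like the following:
RUNSTATS ON TABLE ITMUSER."NT_Process" ON KEY COLUMNS WITH DISTRIBUTION
   AND SAMPLED DETAILED INDEXES ALL ALLOW WRITE ACCESS ;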
Consider some of the following factors when deciding what type of statistics to collect:
v Collect statistics only for the columns that join tables or in the WHERE, GROUP BY, and similar clauses
of queries. If these columns are indexed, you can specify the columns with the ONLY ON KEY
COLUMNS clause for the RUNSTATS command.
v Customize the values for num_freqvalues and num_quantiles for specific tables and specific columns in
tables.
v Collect detailed index statistics with the SAMPLE DETAILED clause to reduce the amount of
background calculation performed for detailed index statistics. The SAMPLE DETAILED clause reduces
the time required to collect statistics and produces adequate precision in most cases.
v When you create an index for a populated table, add the COLLECT STATISTICS clause to create
statistics as the index is created.
REBIND: After running RUNSTATS on your database tables, you must rebind your applications to take
advantage of those new statistics. Rebinding ensures that DB2 is using the best access plan for your SQL
statements. Perform a REBIND after running RUNSTATS as part of your normal database maintenance
procedures. The type of SQL that you are running determines how the rebind occurs.
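For example, to rebind all packages in the warehouse database after collecting new statistics (again
assuming the default database name WAREHOUS and an arbitrary log file name of your choosing):
db2rbind WAREHOUS -l rbind.log all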
DB2 provides support for the following types of SQL:
v Dynamic SQL
v Static SQL
Dynamic SQL: Dynamic SQL statements are prepared and executed at run time. In dynamic SQL, the
SQL statement is contained as a character string in a host variable and is not precompiled. Dynamic SQL
statements and packages can be stored in one of the DB2 caches. A rebind occurs under the following
conditions when you are using dynamic SQL:
v If the statement is not in the cache, the SQL optimizer binds the statement and generates a new
access plan.
v If the statement is in the cache, no rebind occurs. To clear the contents of the SQL cache, use the
FLUSH PACKAGE CACHE SQL statement.
Static SQL: Static SQL statements are embedded within a program, and are prepared during the program
preparation process before the program is executed. After preparation, a static SQL statement does not
change, although values of host variables that the statement specifies can change. These static
statements are stored in a DB2 object called a package.
A rebind occurs under the following conditions when you are using static SQL:
v Explicitly, if an explicit REBIND package occurs
v Implicitly, if the package is marked invalid, which can happen if an index that the package was using
was dropped.
Memory: Understanding how DB2 organizes memory helps you tune memory use for good performance.
Many configuration parameters affect memory usage. Some may affect memory on the server, some on
the client, and some on server and client. Furthermore, memory is allocated and de-allocated at different
times and from different areas of the system.
While the database server is running, you can increase or decrease the size of memory areas inside the
database shared memory. Understand how memory is divided among the different heaps before you tune
to balance overall memory usage on the entire system. Refer to the DB2 Administration Guide:
Performance for a detailed explanation of the DB2 memory model and all of the parameters that affect
memory usage.
CPU: CPU utilization should be about 70 to 80% of the total CPU time. Lower utilization means that the
CPU can cope better with peak workloads. Workloads between 85% and 90% result in queuing delays for
CPU resources, which affect response times. CPU utilization above 90% usually causes unacceptable
response times. While running batch jobs, backups, or loading large amounts of data, the CPU might be
driven to high percentages, such as 80 to 100%, to maximize throughput.
DB2 supports the following processor configurations:
Uni-Processor
A single system that contains only one single CPU
SMP (symmetric multiprocessor)
A single system that can contain multiple CPUs. Scalability is limited to the CPU sockets on the
motherboard.
MPP (massively parallel processors)
A system with multiple nodes connected over a high speed link. Each node has its own CPUs.
Adding new nodes achieves scalability.
Notes:
1. Inefficient data access methods cause high CPU utilization and are a major problem for database
systems. Regular database maintenance is an important factor.
2. Paging and swapping requires CPU time. Consider this factor while planning your memory
requirements.
I/O: Improving I/O can include making accurate calculations for total disk space that an application
requires, improving disk efficiency, and providing for parallel I/O operations.
Calculating disk space for an application: Use the following guidelines to calculate total disk space that
an application requires:
v Calculate the raw data size. Add the column lengths of your database tables and multiply by the number
of expected rows.
v After calculating the raw data size, use the following scaling-up ratios to include space for indexing,
working space, and so on (a worked example follows the list):
OLTP ratio: 1:3
DSS ratio: 1:4
Data warehouse ratio: 1:5
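For example, if the raw data size for the warehouse database works out to 100 GB, the 1:5 data
warehouse ratio suggests planning roughly 500 GB of total disk space to cover data, indexes, and working
space.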
Disk efficiency: You can improve disk efficiency with attention to the following concerns:
v Minimize I/O. Access to main memory is much faster than accessing the disk. Provide as much memory
as possible to the database buffer pools and various memory heaps to avoid I/O.
v When I/O is needed, reading simultaneously from several disks is the fastest method. You can use
several smaller disks rather than one big disk, or place the disk drives on separate controllers.
Selecting disk drives: Disks tend to grow larger every year, doubling in capacity roughly every 18 months,
and the cost per GB drops each year. The cost difference between the two smallest available drives
diminishes until the smaller drive is no longer practical. Seek times improve only slightly each year, and
drives shrink in physical size. While disk drives continue to increase in capacity within a smaller physical
size, the speed improvements (seek time and so on) are small in comparison. A database that would have
taken 36 * 1 GB drives a number of years ago can now be placed on one disk.
This growth trend highlights database I/O problems. For example, if each 1 GB disk drive can do 80 I/O
operations a second, the system can process a combined 2880 I/O operations per second (36 multiplied
by 80). But a single 36-GB drive with a seek time of 7 milliseconds can process only 140 I/O operations
per second. Although increasing disk drive capacity is beneficial, fewer disks cannot deliver the same I/O
throughput.
Parallel operations: Provide for parallel I/O operations. Use the smallest disk drives possible to increase
the number of disks for I/O throughput. If you buy larger drives, use only half the space, especially the
middle area, for the database. The other half is useful for backups, archiving data, off hour test databases,
and extra space for accommodating upgrades.
Network: After a system is implemented, monitor the network to ensure that no more than 50% of its
bandwidth is being consumed. The network can influence overall performance of your application,
especially if a delay occurs in the following situations:
v Lengthy time between the point a client machine sends a request to the server and the server receives
this request
v Lengthy time between the point the server machine sends data back to the client machine and the client
machine receives the data
Tuning
This section explains relevant database and database manager configuration parameters and includes
guidelines for setting their values.
Database manager configuration tuning: Each instance of the database manager has a set of
database manager configuration parameters, also called database manager parameters, which affect the
amount of system resources that are allocated to a single instance of the database manager. Some of
these parameters are used for configuring the setup of the database manager and other information that is
not related to performance. You can use either the DB2 Control Center or the following command to
change the parameters:
UPDATE DATABASE MANAGER CONFIG USING keyword
Parameters: The following database manager configuration parameters have a high impact on
performance (an example of setting them follows the parameter descriptions):
Note: Refer to the DB2 Administration Guide: Performance for detailed explanations about all the
database manager configuration parameters.
AGENTPRI
Controls the priority that the operating system scheduler gives to all agents and to other database
manager instance processes and threads. Use the default value unless you run a benchmark to
determine the optimal value.
ASLHEAPSZ
Represents a communication buffer between the local application and its associated agent. The
application support layer heap buffer is allocated as shared memory by each database manager
agent that is started. If the request to the database manager or its associated reply does not fit into
the buffer, it is split into two or more send-and-receive pairs. The size of this buffer should be
set to handle the majority of requests using a single send-and-receive pair.
INTRA_PARALLEL
Specifies whether the database manager can use intra-partition parallelism on an SMP machine.
Multiple processors can be used to scan and sort data for index creation. Usually, this parameter
is set to YES, especially if you are running on a dedicated database server. The default value is NO.
MAX_QUERYDEGREE
Specifies the maximum degree of intra-partition parallelism that is used for any SQL statement that
is executing on this instance of the database manager. An SQL statement does not use more than
this number of parallel operations within a partition when the statement is executed. The
intra_parallel configuration parameter must be set to a value greater than 1 to enable the database
partition to use intra-partition parallelism. Setting the value to ANY enables use of all partitions.
Usually, this parameter should be set to ANY, especially if you are running on a dedicated database
server. The default value is ANY.
SHEAPTHRES
Determines the maximum amount of memory available for all the operations that use the sort
heap, including sorts, hash joins, dynamic bitmaps that are used for index ANDing and Star Joins,
and operations where the table is in memory. Set the sort heap threshold parameter to a
reasonable multiple of the largest SORTHEAP parameter in your database manager instance. This
parameter should be at least twice as large as the largest sort heap defined for any database
within the instance, but you must also consider the number of concurrent sort processes that can
run against your database.
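As an illustration of the command form shown above (the values are starting points only and must be
validated against your own system and workload), intra-partition parallelism and a larger sort heap
threshold could be set as follows from a DB2 command prompt:
UPDATE DATABASE MANAGER CONFIGURATION USING INTRA_PARALLEL YES
UPDATE DATABASE MANAGER CONFIGURATION USING MAX_QUERYDEGREE ANY
UPDATE DATABASE MANAGER CONFIGURATION USING SHEAPTHRES 20000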
Database configuration tuning: Each database has a set of the database configuration parameters,
which are also known as database parameters. These parameters affect the amount of system resources
that are allocated to that database. Furthermore, some database configuration parameters provide
descriptive information only and cannot be changed, and others are flags that indicate the status of the
database. You can use the DB2 Control Center or the UPDATE DATABASE CONFIG FOR dbname
USING keyword command to change those parameters. For more information about the numerous
database configuration parameters, see the DB2 Administration Guide: Performance.
The following database configuration parameters have a high impact on performance (an example of setting them follows the parameter descriptions):
DBHEAP
Contains control block information for tables, indexes, table spaces, and buffer pools, and space
for the log buffer (LOGBUFSZ) and temporary memory that the utilities use. Each database has
only one database heap, and the database manager uses it on behalf of all applications connected
to the database. The size of the heap is dependent on a large number of variables. The control
block information is kept in the heap until all applications disconnect from the database. The DB2
default value is typically too low, particularly for the Tivoli Data Warehouse. Start with a value
between 2000 and 8000.
DFT_DEGREE
Specifies the default value for the CURRENT DEGREE special register and the DEGREE bind
option. The default value is 1, which means no intra-partition parallelism. A value of -1 means that
the optimizer determines the degree of intra-partition parallelism based on the number of
processors and the type of query. The degree of intra-partition parallelism for an SQL statement is
specified at statement compilation time using the CURRENT DEGREE special register or the
DEGREE bind option. The maximum runtime degree of intra-partition parallelism for an active
application is specified using the SET RUNTIME DEGREE command. The Maximum Query
Degree of Parallelism (max_querydegree) configuration parameter specifies the maximum query
degree of intra-partition parallelism for all SQL queries. The actual runtime degree that is used is
the lowest of one of the following:
v The max_querydegree configuration parameter
v Application runtime degree
v SQL statement compilation degree
For a multi-processor machine, set this parameter to -1 (ANY) to allow intra-partition parallelism for this
database.
CHNGPGS_THRESH
Improves overall performance of the database applications. Asynchronous page cleaners write
changed pages from the buffer pool or the buffer pools to disk before a database agent requires
the space in the buffer pool. As a result, database agents should not have to wait for changed
pages to be written out so that they might use the space in the buffer pool. Usually, you can start
out with the default value.
LOCKLIST
Indicates the amount of storage that is allocated to the lock list. This parameter has a high impact
on performance if frequent lock escalations occur. The DB2 default value is typically too low,
particularly for the Tivoli Data Warehouse. Start with a value of between 500 and 800.
MAXLOCKS
Maximum percent of lock list before escalation. Use this parameter with the LOCKLIST parameter
to control lock escalations. Increasing the LOCKLIST parameter augments the number of available
locks.
LOGBUFSZ
Specifies the amount of the database heap (defined by the dbheap parameter) to use as a buffer
for log records before writing these records to disk. Buffering the log records supports more
efficient logging file I/O because the log records are written to disk less frequently, and more log
records are written each time. The DB2 default value is typically too low, particularly for the
Tivoli Data Warehouse. Start with a value of between 256 and 768.
NUM_IOCLEANERS
Specifies the number of asynchronous page cleaners for a database. These page cleaners write
changed pages from the buffer pool to disk before a database agent requires the space in the
buffer pool. As a result, database agents do not wait for changed pages to be written out so that
they can use the space in the buffer pool. This parameter improves overall performance of the
database applications. The DB2 default value is typically too low. Set this parameter equal to the
number of physical disk drive devices that you have. The default value is 1.
NUM_IOSERVERS
Specifies the number of I/O servers for a database. I/O servers perform prefetch I/O and
asynchronous I/O by utilities such as backup and restore on behalf of the database agents.
Specify a value that is one or two more than the number of physical devices on which the
database is located. The DB2 default value is typically too low. Set this parameter equal to the
number of physical disk drive devices that you have, and add two to that number. The default
value is 3.
PCKCACHESZ
The package cache is used for caching sections for static and dynamic SQL statements on a
database. Caching packages and statements eliminates the need to access system catalogs when
reloading a package so that the database manager can reduce its internal overhead. If you are
using dynamic SQL, caching removes the need for compilation.
SORTHEAP
Defines the maximum number of private memory pages to be used for private sorts, or the
maximum number of shared memory pages to be used for shared sorts. Each sort has a separate
sort heap that is allocated as needed by the database manager. This sort heap is the area where
data is sorted. Increase the size of this parameter when frequent large sorts are required. The
DB2 default value may be too low, particularly for the Tivoli Data Warehouse. Start with a value of
between 256 and 1024. When changing this parameter, you might want to change the
SHEAPTHRES database manager parameter too.
LOGFILSIZ
Defines the size of each primary and secondary log file. The size of these log files limits the
number of log records that can be written to them before they become full, which requires a new
log file. The DB2 default value is too low, particularly for the Tivoli Data Warehouse. Start with a
value of between 4096 and 8192.
LOGPRIMARY
Specifies the number of primary log files to be pre-allocated. The primary log files establish a fixed
amount of storage that is allocated to the recovery log files. The DB2 default value may be too
low, particularly for the Tivoli Data Warehouse. Start with a value of between 6 and 10.
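The following sketch applies starting values from the ranges suggested above in a single command,
assuming the warehouse database uses the default name WAREHOUS; the I/O cleaner and I/O server
counts assume four physical disks, so adjust every value to your environment and monitoring results:
UPDATE DATABASE CONFIGURATION FOR WAREHOUS USING DBHEAP 4000 LOCKLIST 600
   LOGBUFSZ 512 SORTHEAP 512 NUM_IOCLEANERS 4 NUM_IOSERVERS 6
   LOGFILSIZ 8192 LOGPRIMARY 8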
Buffer pools: The buffer pool is the area of memory where database pages, table rows or indexes, are
temporarily read and manipulated. All buffer pools are located in global memory, which is available to all
applications using the database. The purpose of the buffer pool is to improve database performance. Data
can be accessed much faster from memory than from disk. Therefore, as the database manager is able to
read from or write to more data (rows and indexes) to memory, database performance improves.
The default buffer pool allocation is usually not sufficient for production applications, and must be
monitored and tuned before placing your application in production. The DB2 default value is typically too
low, particularly for the Tivoli Data Warehouse. Start with a value of between 50 and 75 percent of your
system memory if the database server is dedicated. You can use the SQL statement ALTER
BUFFERPOOL to change this value.
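For example, to enlarge the default buffer pool for a dedicated warehouse database server (the database
name WAREHOUS and the page count are illustrative; 131072 pages of 4 KB is 512 MB):
CONNECT TO WAREHOUS;
ALTER BUFFERPOOL IBMDEFAULTBP IMMEDIATE SIZE 131072;
CONNECT RESET;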
Registry variables: Each instance of the database manager has a set of registry and environment
variables that affect various aspects of DB2 processing. You can change the value of DB2 registry
variables using the DB2SET command. Although numerous other registry and environment variables exist,
the DB2_PARALLEL_IO variable has a high impact on performance.
Note: Refer to the DB2 Administration Guide: Performance for a detailed explanation of all the registry
and environment variables.
While reading or writing data from and to table space containers, DB2 may use parallel I/O for each table
space value that you specify. The prefetch size and extent size for the containers in the table space
determine the degree of parallelism. For example, if the prefetch size is four times the extent size, there
are four extent-sized prefetch requests. The number of containers in the table space does not affect the
number of prefetchers.
To enable parallel I/O for all table spaces, you can specify the asterisk (*) wildcard character. To enable
parallel I/O for a subset of all table spaces, enter the list of table spaces. For several containers,
extent-size pieces of any full prefetch request are broken down into smaller requests that are executed in
parallel based on the number of prefetchers. When this variable is not enabled, the number of containers
in the table space determines the number of prefetcher requests that are created.
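For example, to enable parallel I/O for all table spaces, set the registry variable with the db2set command
and restart the instance for the change to take effect:
db2set DB2_PARALLEL_IO=*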
Monitoring tools
DB2 provides the following tools that can be used for monitoring or analyzing your database:
v Snapshot Monitor, which captures performance information at periodic points of time and is used to
determine the current state of the database
v Event Monitor, which provides a summary of activity at the completion of events such as statement
execution, transaction completion, or when an application disconnects
v Explain Facility, which provides information about how DB2 accesses the data to resolve the SQL
statements
v db2batch tool, which provides performance information as a benchmarking tool
SNAPSHOT and EVENT monitors: As the database manager runs, DB2 maintains data about its
operation, its performance, and the applications that are using it. This data can provide important
performance and troubleshooting information. For example, you can track the following developments:
v The number of applications connected to a database, their status, and which SQL statements each
application is executing, if any
v Information that shows how well the database manager and database are configured, helping you make
tuning decisions
v Information about the time that deadlocks occurred for a specified database, the applications that were
involved, and the locks that were in contention
v The list of locks held by an application or a database. If the application cannot proceed because it is
waiting for a lock, additional information is available about the lock, including which application is holding it.
Collecting performance data introduces some overhead on the operation of the database. DB2 provides
monitor switches to control which information is collected. You can use the following DB2 commands to
turn these switches on:
UPDATE MONITOR SWITCHES USING BUFFERPOOL ON;
UPDATE MONITOR SWITCHES USING LOCK ON;
UPDATE MONITOR SWITCHES USING SORT ON;
UPDATE MONITOR SWITCHES USING STATEMENT ON;
UPDATE MONITOR SWITCHES USING TABLE ON;
UPDATE MONITOR SWITCHES USING UOW ON;
You can access the data that the database manager maintains either by taking a snapshot or by using an
event monitor.
SNAPSHOTs: Use the GET SNAPSHOT command to collect status information and format the output for
your use. The returned information represents a snapshot of the database manager operational status at
the time the command was issued. Various formats of this command obtain different kinds of information,
and the specific syntax can be obtained from the DB2 Command Reference. The following formats are
quite useful:
GET SNAPSHOT FOR DATABASE
Provides general statistics for one or more active databases on the current database partition.
GET SNAPSHOT FOR APPLICATIONS
Provides information about one or more active applications that are connected to a database on
the current database partition.
GET SNAPSHOT FOR DATABASE MANAGER
Provides statistics for the active database manager instance.
GET SNAPSHOT FOR LOCKS
Provides information about every lock held by one or more applications connected to a specified
database.
GET SNAPSHOT FOR BUFFERPOOLS
Provides information about buffer pool activity for the specified database.
GET SNAPSHOT FOR DYNAMIC SQL
Returns a point-in-time picture of the contents of the SQL statement cache for the database.
You can create some simple scripts and schedule them to get periodic snapshots during your test cycles.
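For example, the following commands (again assuming the default warehouse database name
WAREHOUS) capture database and buffer pool snapshots from a DB2 command prompt:
GET SNAPSHOT FOR DATABASE ON WAREHOUS
GET SNAPSHOT FOR BUFFERPOOLS ON WAREHOUS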
DB2BATCH: A benchmark tool called DB2BATCH is provided in the sqllib/bin subdirectory of your DB2
installation. This tool can read SQL statements from either a flat file or standard input, dynamically
describe and prepare the statements, and return an answer set. You can specify the level of performance
information that is supplied, including the elapsed time, CPU and buffer pool usage, locking, and other
statistics collected from the database monitor. If you are timing a set of SQL statements, DB2BATCH also
summarizes the performance results and provides both arithmetic and geometric means. For syntax and
options, type db2batch.
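For example, assuming a file named tdw_queries.sql that contains the SQL statements you want to time
(the file name is a placeholder), a basic invocation against the warehouse database is:
db2batch -d WAREHOUS -f tdw_queries.sql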
Optimizing queries
This section discusses tuning the queries that are processed to display the tables, charts and graphs that
make up workspace views within the Tivoli Enterprise Portal.
Processing queries
The query assigned to a chart or table view requests data from a particular attribute group. It executes
when you open or refresh the workspace. Queries make up the processing load for on-demand data
collection. You can reduce the frequency and amount of data sampling by:
v Customizing the query to filter out unwanted data. This reduces the number of selection criteria (rows)
and attributes (columns) collected.
v Applying the same query to other views in the workspace. This reduces the number of data samples
required: one query uses a single sample for multiple views.
v Disabling automatic refresh of workspace views or adjust the refresh rate to longer intervals. This
causes Tivoli Enterprise Monitoring Agent data to be collected less frequently.
v Consider how you wish to display the data returned by a query. A graphical view workspace may require
significantly less data compared to a table view, because it uses only the truly numeric data and leaves
out the character data.
Do not confuse custom queries with view filters from the Filters tab of the Query Properties editor. View
filters fine-tune the data after it has been retrieved by the query and do not reduce network traffic, data
collection processing, or memory demands.
The following general recommendations and observations might be considered as well:
v Some attributes are more expensive to retrieve than others. One expensive column in a table will make
any workspace view or situation that references that table more expensive. An example of an expensive
attribute is one that must run long storage chains to determine its value, such as using a process table
to look for looping tasks. Where possible, ensure you only retrieve attributes that are required for the
monitoring process.
v Column function (such as MIN, MAX, AVG, and so on) requires post processing of the query results set
after data is returned to Tivoli Enterprise Monitoring Server.
v Use more efficient data manipulating functions, such as substring instead of string scan. If you know the
position of the string to search, do not scan the whole string to check for the value.
Post-filters versus pre-filters
Performance is improved for each view if you pre-filter the required view information and send the portal
client only what is needed in that view. This reduces the amount of data sent through the network and
avoids post-processing at the client. However, there is one exception to the rule.
Think of a workspace that contains many views. Each of those views has a query associated with it, which
will be issued when the workspace is accessed. This might result in many queries that have to be
processed in parallel.
A better way of doing this might be to create one query that returns all data that is required in the
workspace. In this case, the query will only be issued once and the data can then be post-filtered for each
view to only display information as it applies to each view.
One important consideration however is that queries are saved globally and are not user ID dependent.
This means that only administrators will be able to modify queries in most installations. For the end user to
be able to modify filters, the preferred method might therefore be the filters applied in the view properties
Filters tab.
It may be that you want this query to be applied to only certain managed systems and so
distribution to all managed systems is an unnecessary waste of resource. Modify MSLs to reduce
the distribution of the query. Also remove the MSLs for queries that are of no interest to the user.
Even if you are not viewing the results of a query, it can still consume system resources; restricting
the distribution of unneeded queries avoids this.
Optimizing situations
Situations are conditions that you want to monitor on managed systems. Each situation contains a name,
a predicate formula, special attributes for specific situation processing, information on whether the situation
is automatically started or not, and the sampling interval. It can also contain a command to execute when
the situation is true, advice to display in the client when an alert for the situation is surfaced, and so on.
A situation predicate is a formula that uses attributes, functions, and other situations as operands along
with threshold values to describe a filtering criterion. The situation formula is made up of one or more
logical expressions.
Each expression takes the form:
[Attribute name] / [logical operator] / [value]
For example: PROCESS_ID == 0
The situation predicates are similar to a WHERE clause in SQL. In IBM Tivoli Monitoring, predicates are
processed sequentially, and perform better if you put the most restrictive condition as the first predicate.
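For example, a formula that first matches a specific process name (which yields a single row) and only
then tests a resource threshold is evaluated more cheaply than the reverse ordering. The attribute names
in this sketch are illustrative only:
( Process_Name == 'db2sysc' ) AND ( Working_Set_Size > 500000000 )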
You can use the Situation editor to create or edit situations by choosing the attributes, logical operators,
values, sampling interval, and so on. Situations are assigned to run on behalf of managed systems or
lists of managed systems.
When a situation is running on a monitoring agent, the agent collects the current values of the attributes
specified in the formula and tests the values against the threshold. When the condition is met (that is,
the threshold is exceeded or a value is matched), the agent passes the collected data back to the
monitoring server to which it is connected, and an event is generated.
However, some types of situations cannot be tested at the agent level:
v Situations involving group functions, such as MIN, MAX, AVG, SUM, COUNT
v Embedded situations
v Correlated situations
For these types of situations, the agent returns all rows back to the monitoring server to which it is
connected, and the server performs the testing. In large-scale environments, especially if the sampling
interval for the situation is short, evaluating such situations dramatically increases the workload and
memory usage of the monitoring server.
In general, there is limited situation processing at the hub monitoring server if there are no agents directly
connected to the hub. For remote monitoring servers, there is a direct correlation to the number of agents,
situations, and data rows to the amount of memory required. Therefore, the number of situations and size
of the data may be the limiting factor that determines how many agents an individual remote monitoring
server can support.
Since the performance of the monitoring server is greatly affected by the volume and frequency of
situation state changes, do not run a situation and collect data unless you are interested in the results.
The following recommendations provide guidance for how to write more efficient situations and how to
reduce the situation processing requirements:
1. If possible, avoid use of group functions, embedded situations or correlated situations. Processing
demands for such situations are much higher, since all attribute group rows must be sent to the
monitoring server for testing, increasing the processing demand and memory usage on the monitoring
server.
2. Put the most stringent condition at the beginning of the formula because the conditions are evaluated
sequentially, left to right.
Consider a situation whose first condition tests real storage use; the result set may contain multiple
rows. The second condition then tests whether a given process name is among the returned rows. It
would be more efficient to test on the process name first (which yields a single row), and then test the
storage usage on just that row.
Consider the following in creating the condition tests:
a. Numeric attributes are processed more quickly than text attributes.
b. String checking with substring (STR) is more efficient than the string scan (SCAN), especially for
long strings. If you know the exact location of the text or characters to be evaluated, use a
substring.
3. Use longer sampling intervals where possible, especially for situations that are distributed to a large
number of agents.
4. Minimize the number of situations on attribute groups that can return a large number of rows, such as
process or disk attribute groups.
5. Put mission critical systems in a separated managed system group, and distribute the heavy workload
situations to them only if necessary.
6. Consider spreading business-critical situations amongst several different remote Tivoli Enterprise
Monitoring Servers. For example, a database monitoring situation might be high load, but
business-critical. Instead of having all 500 database agents report through the same remote monitoring
server, consider configuring the database agents across five remote monitoring servers, which are
supporting other less-demanding agents.
The -p flag makes the change persistent, so that it will still be in effect at the next boot. This is a dynamic
change that takes effect immediately.
New in V6.2.1
v Starting with IBM Tivoli Monitoring V6.2.1, the Warehouse Proxy agent and the Summarization and
Pruning agent can use a DB2 version 9.1 (or subsequent) environment running on z/OS as the data
repository. Instructions for implementing a Tivoli Data Warehouse using DB2 on z/OS are in Chapter 17,
Tivoli Data Warehouse solution using DB2 on z/OS, on page 373.
v The Tivoli Data Warehouse now supports 64-bit agent data.
v The Tivoli Data Warehouse now supports IBM DB2 Database for Linux, UNIX, and Windows 9.5.
v With the new schema publication tool, you can now generate the SQL statements needed to create the
database objects (data warehouse tables, indexes, functions, views, and ID table inserts) required for
initial setup of the Tivoli Data Warehouse; see Generating SQL statements for the Tivoli Data
Warehouse: the schema publication tool on page 340.
Step 1: Determine the number of detailed records per day for each attribute group
Determine the number of detailed records per day for each attribute group that you want to collect data for.
Use the following equation:
(60 / collection interval) * (24) * (# instances at each interval)
where:
60
Represents the number of minutes in an hour; dividing 60 by the collection interval gives the
number of data samples per hour.
collection interval
The data collection interval, in minutes. This value can be 1, 5, 15, 30, 60, or 1440 (1 day).
24
Represents the number of hours in a day.
# instances at each interval
The number of instances recorded at each interval. See the agent user's guide for this value.
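For example, assuming an illustrative 15-minute collection interval with four instances reported at each
interval, a single agent produces:
(60 / 15) * 24 * 4 = 384 detailed records per day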
Step 2: Determine the hard disk drive footprint for each attribute group
Determine the hard disk drive footprint for each attribute group. The result generated by this formula gives
an estimate of the amount of disk space used for this attribute group for 24 hours worth of data for a
single agent. Use the following equation:
(# detailed records) * (attribute group detailed record size) / 1024
where:
# detailed records
The number of detailed records for the attribute. This is the value you calculated in Step 1:
Determine the number of detailed records per day for each attribute group on page 333.
attribute group detailed record size
The detailed record size for the attribute group. See the agent user's guide for this value.
1024
Represents 1 KB and causes the equation to generate a kilobyte number instead of a byte
number.
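Continuing the illustrative example, if the detailed record size for the attribute group were 500 bytes (a
value you would normally take from the agent user's guide):
384 * 500 / 1024 = 187.5, or roughly 188 KB per agent per day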
Step 3: Determine the amount of detailed data for each attribute group
Determine the amount of detailed data in the warehouse database for each attribute group. Use the
following equation:
(attribute group disk footprint) * (# of agents) * (# days of detailed data) / 1024
where:
attribute group disk footprint
The disk footprint for the attribute group. This is the value you calculated in Step 2: Determine the
hard disk drive footprint for each attribute group on page 333.
# of agents
The number of agents of the same agent type in your environment.
# days of detailed data
The number of days for which you want to keep detailed data in the warehouse database.
1024
Represents the number of kilobytes in a megabyte and causes the equation to generate a megabyte
number instead of a kilobyte number.
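Continuing the same illustrative example, with a footprint of 188 KB, 200 agents of this type, and 30 days
of detailed data kept:
188 * 200 * 30 / 1024 = 1101.6, or roughly 1102 MB (about 1.1 GB) of detailed data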
Step 4: Calculate the amount of aggregate data for each attribute group
Determine the amount of aggregate data in the warehouse for each attribute group.
First, calculate the number of aggregate records per agent. Use the following equation:
(#hourly + #daily + #weekly + #monthly + #quarterly + #yearly) * (# instances at each interval)
where:
#hourly
The number of hourly records for the attribute group. For example, if you have hourly records for
60 days, the number of hourly records is 1440 (60 multiplied by 24 hours per day).
#daily The number of daily records for the attribute group. For example, if you have daily records for 12
months, the number of daily records is 365.
#weekly
The number of weekly records for the attribute group. For example, if you have weekly records for
a 2-year period, the number of weekly records is 104 (2 multiplied by 52 weeks per year).
#monthly
The number of monthly records for the attribute group. For example, if you have monthly records
for a 2-year period, the number of monthly records is 24.
#quarterly
The number of quarterly records for the attribute group. For example, if you have quarterly records
for a 2-year period, the number of quarterly records is 8 (2 years multiplied by 4 quarters in a
year).
#yearly
The number of yearly records for the attribute group. For example, if you have yearly reports for a
10-year period, the number of yearly records is 10.
# instances at each interval
The number of instances recorded at each interval. See the agent user's guide for this value.
Next, use the following equation to calculate the amount of attribute data in the warehouse for the attribute
group:
(# aggregate records per agent) * (attribute group aggregate record size) * (# agents) / 1048576
where:
# aggregate records per agent
The number of aggregate records per agent for the attribute group.
attribute group aggregate record size
The size of the aggregate record for the attribute group. See the agent user's guide for this value.
# agents
The number of agents of the same agent type in your environment.
1048576
Represents 1 MB and causes the equation to generate a megabyte number.
Second, determine the total space required for all attribute groups for the agent. Add the total space for
each attribute group that you want to collect. Use the following equation:
aggGroup1 + aggGroup2 + aggGroup3 ...
Third, determine the total space required for all agents. Add the total space for each agent. Use the
following equation:
agent1 + agent2 + agent3 ...
Finally, to estimate the total disk space requirement for the database, multiply the total amount of data
(detailed + aggregate for all attribute groups) by 1.5 (to increase the number by 50%). Compare this
number to the Database data row in Table 67 on page 337 to determine the number of disks you need for
your database.
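As an illustrative end-to-end figure, if all attribute groups for all agents added up to roughly 1102 MB of
detailed data and 350 MB of aggregate data, the database sizing estimate would be:
(1102 + 350) * 1.5 = 2178 MB, or roughly 2.2 GB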
Use the following worksheet to estimate the size of your data warehouse database.
The worksheet provides one row for each attribute group for which you plan to collect data, with the
following columns:
v Attribute group
v Record size, detailed
v Record size, aggregated
v Number of instances
v Collection interval
v Number of agents
v Detailed records per day*
v Attribute group disk space per agent (KB)*
v Days of detailed data kept
v Warehouse space, detailed (MB)*
v Aggregate records
v Warehouse space, aggregated (MB)*
* Use the equations in Estimating the required size of your database on page 333 to obtain these values.
Table 67 compares four disk configurations for the warehouse database: Absolute minimum disks, Small
RDBMS, Small and safe RDBMS, and Large RDBMS. For each configuration it shows how the operating
system, RDBMS data, RDBMS indexes, RDBMS temporary space, and RDBMS logs are placed on disks
(entries such as 1 + mirror, or Use above where index or temporary space shares a disk), the amount of
database data (12 GB, 24 GB, 48 GB, and 108+ GB), and the number of disks (the larger configurations
use 12 and 24 disks).
The Absolute minimum disks column specifies the minimum number of disks for an RDBMS. In this
column, the index and temporary space are allocated onto one disk. While not an ideal arrangement, this
might work in practice because databases tend to use indexes for transactions, or temporary space for
index creation and for sorting large full-table-scan queries, but not both at the same time. This is not a
recommended minimum disk subsystem for a database, but it has the lowest cost.
The Small RDBMS column represents a minimum disk subsystem, although there might be limits in I/O
rates because of the data being placed on only one disk. Striping the data, indexes, and temporary space
across these three disks might help reduce these I/O rate limits. This disk subsystem arrangement does
not include disk protection for the database or other disks (apart from the mandatory log disk protection for
transaction recovery).
The Small and safe RDBMS column adds full disk protection and can withstand any disk crash with zero
database downtime.
The Large RDBMS column represents a typical size database for a database subsystem. Disk protection
is not included in these sizings but can be added to increase the stability of the database.
Increasing the size of your database (DB2 for the workstation only)
DB2 for the workstation Workgroup Edition has a default table size limit of 64 GB with a page size of 4
KB. To increase the capacity of your DB2 for the workstation database, you can create a new tablespace,
IBMDEFAULTGROUP, and choose a larger page size (up to 32 KB). This increases the capacity of the
database up to 512 GB per table.
The following example creates the IBMDEFAULTGROUP tablespace with a page size of 16 K. This
increases the table size capacity to 256 GB.
CREATE REGULAR TABLESPACE IBMDEFAULTGROUP IN DATABASE PARTITION GROUP IBMCATGROUP
PAGESIZE 16384 MANAGED BY DATABASE
USING (FILE 'E:\DB2\NODE0000\SQL00001\IBMDEFAULTGROUP.001' 1500000)
EXTENTSIZE 32
PREFETCHSIZE AUTOMATIC
BUFFERPOOL IBM16KBP
OVERHEAD 12.670000
TRANSFERRATE 0.180000
DROPPED TABLE RECOVERY ON
If 512 GB of space per table is not enough for your environment, move to DB2 for the workstation
Enterprise Edition and use physical or logical partitioning.
The following steps outline the process for using database partitioning with DB2 for the workstation
Enterprise Edition:
1. Add a new database partition to the DB2 for the workstation instance by running the db2ncrt
command.
2. Use the ALTER TABLE statement to add a partitioning key to the tables that you want to partition. For
example (where partitioning_key_column is the column you choose as the partitioning key):
ALTER TABLE "CANDLE"."NT_System"
ADD PARTITIONING KEY (partitioning_key_column)
USING HASHING
3. Use the ALTER DATABASE PARTITION GROUP statement to assign the new partition to the database
partition group (see the example after this list). You can do this either from the command line or from
the DB2 Control Center.
4. Redistribute the data in the database partition group, using the Redistribute Data Wizard in the DB2
Control Center.
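The redistribution in step 4 can also be done from the DB2 command line. A minimal sketch of steps 3
and 4, assuming the new partition is number 2 and the tables belong to the default partition group
IBMDEFAULTGROUP:
db2 "ALTER DATABASE PARTITION GROUP IBMDEFAULTGROUP ADD DBPARTITIONNUM (2)"
db2 "REDISTRIBUTE DATABASE PARTITION GROUP IBMDEFAULTGROUP UNIFORM"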
For additional information about database partitioning, see the following DB2 for the workstation sources:
v IBM DB2 Universal Database Administration Guide: Planning
v IBM DB2 Universal Database Administration Guide: Implementation
v DB2 for the workstation Information Center at http://publib.boulder.ibm.com/infocenter/db2help/index.jsp.
For additional database performance and tuning information, see the IBM Tivoli Monitoring: Administrator's
Guide.
Planning assumptions
The information in this and the next four chapters is based on the following assumptions:
v You have completed the preliminary planning required to determine the size and topology of your
environment and your warehousing needs. You may have already installed a monitoring server, a portal
server, and some monitoring agents, but you have not yet created the Tivoli Data Warehouse.
v You are not installing all IBM Tivoli Monitoring components on a single computer.
v You will create the Tivoli Data Warehouse database on a different computer from the Tivoli Enterprise
Portal Server.
v If installing the warehouse database on Microsoft SQL Server, you will also install the Tivoli Enterprise
Portal Server on a Windows-based computer. This restriction applies even if the warehouse database
and portal server are installed on separate computers. For example, a portal server on Linux does not
support a warehouse database using Microsoft SQL Server.
v If you are upgrading from IBM Tivoli Monitoring V6.1, you have performed the necessary agent
warehouse database upgrade steps.
The following sections discuss these assumptions in more detail.
Generating SQL statements for the Tivoli Data Warehouse: the schema
publication tool
With the schema publication tool (an SQL editor), you can perform these tasks:
v Generate the SQL statements needed to create the database objects required for initial setup of the
Tivoli Data Warehouse.
v Create the necessary database objects whenever either historical collection is enabled for additional
attribute groups or additional summarizations are enabled. This is referred to as running the schema
publication tool in updated mode.
Procedure
1. Create a new response file by making a copy of the sample response file:
v On Windows systems: itm_install_dir\TMAITM6\tdwschema.rsp
v On Linux and UNIX systems: itm_install_dir/arch/bin/tdwschema.rsp
2. Using an ASCII text editor, edit the response file to indicate the options you want to use. The keywords
in the response file affect which SQL statements are generated, as well as other options:
KSY_PRODUCT_SELECT=category
The category of products for which you want to generate SQL files. The valid values are installed,
configured, and updated.
The response file also lists the summarization intervals that can be selected: Hourly, Daily, Weekly,
Monthly, Quarterly, and Yearly.
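A minimal example of the edited settings, assuming you want SQL generated for all configured products
and written to a hypothetical output directory (KSY_SQL_OUTPUT_FILE_PATH is the output-path keyword
described in the next paragraph):
KSY_PRODUCT_SELECT=configured
KSY_SQL_OUTPUT_FILE_PATH=/tmp/tdw_sql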
The SQL files for the products specified in the response file are generated and written to the directory
indicated by the KSY_SQL_OUTPUT_FILE_PATH keyword (or to the current working directory, if no
output directory is specified).
5. Make any necessary changes to the generated SQL files. For example, you might want to partition
tables or assign tables to table spaces.
Note: Do not change the names of any tables specified in the generated SQL files.
6. Use the appropriate tools to run the SQL queries to create the warehouse tables, indexes, views,
inserts, and functions for your relational database. Execute the scripts in this order:
a. tdw_schema_table.sql
b. tdw_schema_index.sql
c. tdw_schema_view.sql
d. tdw_schema_insert.sql
e. tdw_schema_function.sql
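For a DB2 for the workstation warehouse, a minimal sketch of this step from the DB2 command line,
assuming the default database name WAREHOUS and warehouse user itmuser (adjust the db2 options,
such as the statement termination character, to match the generated scripts):
db2 connect to WAREHOUS user itmuser using itmpswd1
db2 -tvf tdw_schema_table.sql
db2 -tvf tdw_schema_index.sql
db2 -tvf tdw_schema_view.sql
db2 -tvf tdw_schema_insert.sql
db2 -tvf tdw_schema_function.sql
db2 connect reset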
v If using DB2 for the workstation and if the IBM Tivoli Monitoring V6.1 database is not migrated
properly, the schema tool may produce SQL that fails. The tool may generate ALTER TABLE
statements that cause a table row to no longer fit within the table's page size.
Procedure
1. Create a new response file by copying the sample response file:
v On Windows systems, copy itm_install_dir\TMAITM6\tdwschema.rsp.
v On Linux and UNIX systems, copy itm_install_dir/arch/bin/tdwschema.rsp.
2. Using an ASCII text editor, edit the response file as follows: KSY_PRODUCT_SELECT=updated. (See
the description above.)
You may also specify an output path via the KSY_SQL_OUTPUT_FILE_PATH=path parameter, as
explained above.
3. Using either the historical configuration interface within the Tivoli Enterprise Portal or the historical
configuration CLI, make the desired changes to your site's historical configuration. If enabling historical
collection for new attribute groups, configure but do not start historical collection. If collection is started,
the warehouse proxy agent may attempt to create the database objects before you have a chance to
generate, edit, and execute the SQL.
4. Ensure the Tivoli Enterprise Portal Server is started.
5. Run the schema publication tool script:
v On Windows systems:
tdwschema -rspfile response_file
where response_file is the name of the response file you edited in step 1. The SQL files for the
products specified in the response file are generated and written to the directory indicated by the
KSY_SQL_OUTPUT_FILE_PATH keyword (or to the current working directory, if no output directory is
specified).
6. Make any necessary changes to the generated SQL files. For example, you might want to partition
tables or assign tables to table spaces.
Note: Do not change any table names specified in the generated SQL files.
7. Use the appropriate tools to run the SQL queries to create the warehouse tables, indexes, views,
inserts, and functions for your relational database. Execute the scripts in this order:
a. tdw_schema_table.sql
b. tdw_schema_index.sql
c. tdw_schema_view.sql
d. tdw_schema_insert.sql
e. tdw_schema_function.sql
8. Using either the historical configuration interface within the portal or the historical configuration CLI,
start historical collection for the newly configured attribute groups.
Next steps
After you have completed your preliminary planning, you are ready to implement your Tivoli Data
Warehouse solution:
v Review the Summary of supported operating systems section in this chapter to understand your options
for operating system platforms, database platforms, and communications between warehousing
components, all summarized in a single composite graphic. This section also describes the relationships
between warehousing components.
v The next four chapters are organized by database platform. Follow the instructions in the chapter that
pertains to the database platform you are using for the Tivoli Data Warehouse database: IBM DB2 for
the workstation, IBM DB2 on z/OS, Microsoft SQL Server, or Oracle.
In the following discussion, numbered product components correspond to the numbers on the diagram.
Tivoli Data Warehouse database
Data collected by monitoring agents is stored at intervals in binary files. The data in the binary files is
referred to as historical data. The binary files are located either at the agents or at the monitoring server
(hub or remote) to which the agents report. (An administrator can determine where the data is stored. The
monitoring agents and monitoring server are not shown in the diagram.)
The historical data is sent from its temporary storage location (at the monitoring agents or monitoring
server) to the Warehouse Proxy agent at a preset interval (either every hour or every 24 hours). The
Warehouse Proxy agent sends the data it receives to the Tivoli Data Warehouse.
Long-term historical data in binary files (data that is older than 24 hours) is pruned when the monitoring
agent or monitoring server receives an acknowledgment that the data has been successfully inserted into
the Tivoli Data Warehouse (an operation that may take some time). (The pruning of binary files is not the
pruning performed by the Summarization and Pruning agent. The Summarization and Pruning agent
prunes data in the Tivoli Data Warehouse.) The result of these operations is that at any given time the
binary files contain primarily short-term historical data (data less than 24 hours old) and the Tivoli Data
Warehouse contains primarily long-term historical data (data older than 24 hours).
The Tivoli Data Warehouse database can be created using Microsoft SQL Server, IBM DB2 for the
workstation, or Oracle, on the indicated operating system platforms. Note that the warehouse database is
supported on Microsoft SQL Server only if the Tivoli Enterprise Portal Server (TEPS) is installed on
Windows. This condition applies even if the warehouse database and portal server are installed on
separate computers. For example, a portal server on Linux does not support a warehouse database on
Microsoft SQL Server.
Warehouse Proxy agent
A Warehouse Proxy agent on Windows uses an ODBC connection to send the collected data to the
warehouse database. A Warehouse Proxy agent on Linux or AIX uses a JDBC connection.
The Warehouse Proxy agent will do the following as needed:
v Create new tables and indexes.
v Alter existing tables.
For example, add a new column to a table. This can occur if the table was created for an older version
of an agent but data is being exported for the first time from a newer version of the agent that has more
attributes.
Firewall considerations:
v The Warehouse Proxy agent must be able to communicate with the hub monitoring server.
v The Warehouse Proxy agent must be able to communicate with the RDBMS.
v The agents exporting data must be able to communicate with the Warehouse Proxy and the Warehouse
Proxy agent must be able to communicate with the agents (there must be two-way communication
between agents and the Warehouse Proxy agent).
Appropriate ports must be open to support these communication channels (see Appendix C, Firewalls, on
page 551).
Tivoli Enterprise Portal Server
The Tivoli Enterprise Portal Server retrieves historical data for display in historical data views in the Tivoli
Enterprise Portal. It retrieves short-term historical data from the binary files on the monitoring agents or
monitoring server. It retrieves long-term historical data from the Tivoli Data Warehouse.
In Figure 50 on page 345, the Tivoli Enterprise Portal Server is shown with the portal server database
(designated as TEPS database in the diagram). The portal server database stores user data and
information required for graphical presentation on the user interface. Before you install and configure the
portal server, you must install the database platform (RDBMS) to be used for the portal server database
(DB2 for the workstation or Microsoft SQL Server) on the same computer. The portal server database is
created automatically during configuration of the portal server.
Although the portal server database is not considered a warehousing component, it is included in the
diagrams in this and the following chapters because it can affect the installation and configuration tasks
required for the Tivoli Data Warehouse database. For example, the database client already installed for the
portal server database can connect to a remote warehouse database, provided both databases use the
same database platform. There is no need to manually install another client.
If the portal server is installed on Windows, it uses an ODBC connection to request and retrieve historical
data from the warehouse database. If the portal server is installed on Linux or AIX, it communicates with
the warehouse database through a JDBC connection, if the warehouse is installed on Oracle, or through a
proprietary DB2 CLI connection if the warehouse is installed on DB2 for the workstation.
Summarization and Pruning agent
The Summarization and Pruning agent retrieves detailed data from the warehouse database, aggregates
or prunes the data, and returns the processed data to the warehouse. Communication takes place through
a JDBC connection, regardless of the operating system on which the Summarization and Pruning agent is
installed.
The Summarization and Pruning agent will create tables, indexes, and views as needed. This could
happen in either of the following situations:
v Summarization is enabled for the first time for an attribute group.
v Additional summarizations are enabled for an attribute group.
In addition, the Summarization and Pruning agent, like the Warehouse Proxy agent, may alter existing
tables by adding new columns.
Firewall considerations:
v The Summarization and Pruning agent must be able to communicate with the RDBMS.
v The Summarization and Pruning agent must be able to communicate with the Tivoli Enterprise Portal
Server.
v The Summarization and Pruning agent must be able to communicate with the Tivoli Enterprise
Monitoring Server it is configured to use.
Chapter 16. Tivoli Data Warehouse solution using DB2 for the
workstation
Use the information and instructions in this chapter to implement a Tivoli Data Warehouse solution using
DB2 for the workstation for the warehouse database. The following table lists the goals for creating a DB2
for the workstation solution.
Table 68. Goals for creating a Tivoli Data Warehouse solution using DB2 for the workstation
Goal: Review your options, specific to a DB2 for the workstation solution, for operating system platforms
and communications between warehousing components.
Where to find information: Supported components
Goal: Install prerequisite software before implementing your Tivoli Data Warehouse solution.
Where to find information: Prerequisite installation on page 351
Supported components
Figure 51 on page 350 presents the options for a Tivoli Data Warehouse solution using DB2 for the
workstation for the warehouse database. The diagram summarizes the supported operating system
platforms for the various warehousing components, the supported database products, and the connections
between components. For more specific information about supported operating systems and database
products, including product names and versions, see Hardware and software requirements on page 96.
Figure 51. Tivoli Data Warehouse solution using DB2 for the workstation
Note: An asterisk (*) next to a database client indicates that you must manually install the client if it does
not already exist.
In the following discussion, numbered product components correspond to the numbers on the diagram.
Tivoli Data Warehouse on DB2 for the workstation
A Tivoli Data Warehouse database on DB2 for the workstation can be installed on supported Windows,
Linux, or any UNIX platform that is supported by DB2 for the workstation. Ensure that the DB2 TCP/IP
listeners are active in order to accept connections from a DB2 client or JDBC driver.
Prerequisite installation
Before you implement your Tivoli Data Warehouse solution, complete one or more hub installations,
excluding the warehousing components. Include the following components in each hub installation:
v The hub Tivoli Enterprise Monitoring Server
v (Optional) One or more remote monitoring servers
v The Tivoli Enterprise Portal Server, including the prerequisite RDBMS for the portal server database
(DB2 for the workstation or Microsoft SQL Server)
v An IBM DB2 for the workstation server on the computer where you will create the Tivoli Data
Warehouse database. (The Tivoli Data Warehouse database can be shared in a multi-hub installation or
dedicated to a single hub.)
v (Optional) A portal desktop client
v (Optional) Monitoring agents, and the application support for the monitoring agents
Note: The term monitoring agent, as used here, refers to agents that collect data directly from
managed systems, not the Warehouse Proxy agent or Summarization and Pruning agent.
v (Optional) Language packs for all languages other than English
Refer to Table 69 on page 352 for related information:
Table 69. Information topics related to installation of prerequisite software for a Tivoli Data Warehouse solution
Topic
Assumptions
The implementation instructions are based on the following assumptions:
v You will create the Tivoli Data Warehouse database on a different computer from the Tivoli Enterprise
Portal Server.
v You will create a single Tivoli Data Warehouse database, to be used either within a single hub
installation or to be shared in a multi-hub installation. If you have multiple independent hub installations,
repeat the implementation steps for each hub installation. (See Locating and sizing the hub Tivoli
Enterprise Monitoring Server on page 40 for information about hub installations.)
v No assumption is made about where you will install the Warehouse Proxy agent and Summarization
and Pruning agent. Either of these agents may be installed on the same computer as the Tivoli Data
Warehouse or on a different computer.
Solution steps
To implement your Tivoli Data Warehouse solution using DB2 for the workstation, complete the four major
steps described in the remaining sections of this chapter, in the order listed:
1. Create the Tivoli Data Warehouse database.
2. Install and configure communications for the Warehouse Proxy agent.
3. Configure communications between the Tivoli Enterprise Portal Server and the data warehouse.
4. Install and configure communications for the Summarization and Pruning agent.
Each major step consists of a series of installation and configuration tasks, listed and described in a table.
Use the step tables as a road map for implementing your solution. The step tables describe the tasks at a
high level, account for variations among configuration options (such as which operating system is used for
a component), and reference the appropriate sections for detailed implementation procedures. To
implement your solution successfully:
v Perform the tasks in the order listed in the table.
v Do not skip from a table directly to the procedures that follow it.
Be aware that some of the implementation procedures referenced in a table are included in this chapter
and some are documented elsewhere. In some cases, the task is described in the table, without
referencing a separate procedure. Read and follow all instructions in the tables.
Task: Create the Tivoli Data Warehouse database on one of the supported Windows, Linux, or UNIX
operating systems. To comply with the assumptions described in the introduction to this chapter, create
the warehouse database on a different computer from the Tivoli Enterprise Portal Server.
Procedure: For guidance on planning the size and disk requirements for the warehouse database, see
Planning considerations for the Tivoli Data Warehouse on page 333. For information about creating the
warehouse database using DB2, see Creating the warehouse database on DB2 for the workstation.
Task: Create an operating system (OS) user account (user name and password) with administrator
authority on the computer where the warehouse database is located.
v Create a name for the warehouse database, and an operating system (OS) user account (user name
and password) that the warehousing components (portal server, Warehouse Proxy agent, and
Summarization and Pruning agent) can use to access the data warehouse. In these instructions, this
user account is referred to as the warehouse user.
v Consider using the default values shown in Table 71 for the warehouse name and warehouse user. The
default values are used in the configuration procedures for connecting the warehousing components to
the warehouse database. (For example, see the entries in Figure 53 on page 363.)
Table 71. Default values for Tivoli Data Warehouse parameters
Parameter           Default value
Database name       WAREHOUS
User name           itmuser
User password       itmpswd1
v Give the warehouse user administrative authority to the database initially. After that, you can optionally
limit the authority of the warehouse user to just the privileges required for interacting with the data
warehouse. See the following sections for information about creating and limiting the authority of the
warehouse user.
Creating a warehouse user on Windows
Creating a warehouse user on Linux or UNIX
Limiting the authority of the warehouse user on page 355
v For a Tivoli Data Warehouse on Linux or AIX, ensure that the DB2 for the workstation server is
configured for TCP/IP communications. See Activating the DB2 listeners on a UNIX DB2 server on
page 357.
v To give this user administrative authority to the data warehouse, add the user to the DB2 for the
workstation SYSADM group. Run the following command to find the name of the SYSADM group:
db2 get dbm cfg | grep SYSADM
For example:
db2 get dbm cfg | grep SYSADM
SYSADM group name                    (SYSADM_GROUP) = DB2GRP1
In this example, the name of the DB2 for the workstation SYSADM group is DB2GRP1. If you created
an OS user named ITMUSER, add ITMUSER to DB2GRP1.
Procedure
To limit the authority of the warehouse user, complete the following steps:
1. Connect to the data warehouse with db2admin privileges:
db2 connect to warehouse user db2admin using password
where warehouse is the name of the warehouse database, db2admin is the DB2 for the workstation
administrator ID, and password is the password of the db2admin user ID. The user ID must be a DB2
user with SYSADM authority.
2. Change to the directory where the KHD_DB2_crt_BP_TBSP.sql script is located.
3. Run the script to create the required bufferpool and tablespaces:
db2 -stvf KHD_DB2_crt_BP_TBSP.sql
4. Remove administrative privileges from the warehouse user (OS user) that you created when you
created the warehouse database:
v On Windows, remove the warehouse user from the Administrator group.
v On Linux or UNIX, remove the warehouse user from the SYSADM group to which it was assigned
(for example, DB2GRP1). (See Creating a warehouse user on Linux or UNIX on page 354.)
5. Grant these database authorities to the warehouse user:
CONNECT
CREATETAB
USE OF TABLESPACE
CONNECT authority grants the user access to the database. CREATETAB authority allows the user to
create and drop tables, to alter tables, to create and drop indexes for the tables, and to insert, delete,
or update data in the tables. USE OF TABLESPACE grants the user authority to use particular
tablespaces, in this case ITMREG8K.
To grant these authorities, you can use either the DB2 Control Center (Database Authorities window)
or the command line interface. If you use the command line interface, run commands similar to the
following. In this example, the name of the warehouse user is itmuser.
db2 "GRANT CONNECT ON DATABASE TO USER itmuser"
db2 "GRANT CREATETAB ON DATABASE TO USER itmuser"
db2 "GRANT USE OF TABLESPACE ITMREG8K TO itmuser"
If you have upgraded or will be upgrading IBM Tivoli Monitoring to a later release or fix pack, the
ALTER privilege is necessary:
db2 "GRANT ALTER ON DATABASE TO USER itmuser"
LOGFILSIZ
Size of transaction log files. In a production environment, this value should be larger than the default
value.
LOCKTIMEOUT
Defaults to infinite wait, which can cause Warehouse Proxy and Summarization and Pruning
processing to appear to be locked up. This should be set to a reasonable value such as 120 seconds.
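A minimal sketch of adjusting both parameters from the DB2 command line, assuming the default
database name WAREHOUS and illustrative values (4096 4-KB pages per log file and a 120-second lock
timeout):
db2 update db cfg for WAREHOUS using LOGFILSIZ 4096
db2 update db cfg for WAREHOUS using LOCKTIMEOUT 120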
To activate the DB2 TCP/IP listeners, run the following commands on the DB2 server:
db2set -i instance_name DB2COMM=tcpip
db2 update dbm cfg using SVCENAME port_number
db2stop
db2start
where instance_name is the name of the instance in which you created the warehouse database and
port_number is the listening port for the instance. (The port number is specified in the file /etc/services.)
For example:
db2set -i db2inst1 DB2COMM=tcpip
db2 update dbm cfg using SVCENAME 60000
db2stop
db2start
Task: If you are installing more than one Warehouse Proxy agent, each agent must be installed on a
separate computer.
The installation procedure for Windows includes steps for configuring the connection between the agent
and the hub Tivoli Enterprise Monitoring Server. On Linux or AIX, this step is performed in a separate
configuration procedure (Configuring the monitoring agent) and an X11 GUI is required to configure the
agent. Alternatively, you can run the following command to utilize an X terminal emulation program (such
as Cygwin) that is running on another computer:
export DISPLAY=my_windows_pc_IP_addr:0.0
where my_windows_pc_IP_addr is the IP address of a computer that is running an X terminal emulation
program.
Be sure to perform all installation and configuration procedures referenced in the Procedure list.
Note for sites setting up autonomous operation: The installation procedure includes steps for configuring
the connection between the agent and the hub Tivoli Enterprise Monitoring Server. On Windows
operating systems, if you want to run the Warehouse Proxy agent without a connection to the hub,
accept the defaults for the connection information, but specify a nonvalid name for the monitoring server.
On UNIX and Linux operating systems, check No TEMS on the TEMS Connection tab of the
configuration window.
Procedure:
v Installing the monitoring agent
v Configuring the monitoring agent
v Changing the file permissions for agents (if you used a nonroot user to install the Warehouse Proxy)
Do not complete the procedure for starting the agent.
(Warehouse Proxy agent on Windows only)
v Install a DB2 for the workstation client on the computer where the
Warehouse Proxy agent is installed if both of the following statements
are true:
The Warehouse Proxy is installed on Windows, and
The Warehouse Proxy needs to connect to a remote data
warehouse.
v Catalog the remote data warehouse on the Windows computer where
you installed the DB2 for the workstation client. You must perform this
step before configuring an ODBC data source. (See the next row.)
v Set the following system variable on the computer where the
Warehouse Proxy agent is installed. Restart the computer after setting
the variable.
DB2CODEPAGE=1208
Set the environment variable whether the warehouse database is local or remote.
Table 73. Tasks for installing and configuring communications for the Warehouse Proxy agent (continued)
Task: If the data warehouse is located on a remote computer, copy the DB2 for the workstation JDBC
Universal Driver (Type 4 driver) JAR files, included with the DB2 for the workstation product installation,
to the local computer where the Warehouse Proxy agent is installed. You can copy the files to any
directory on the local computer.
Procedure: The driver files are:
db2installdir/java/db2jcc.jar
db2installdir/java/db2jcc_license_cu.jar
where db2installdir is the directory where DB2 for the workstation was installed. The default DB2 for the
workstation Version 9 installation directory is as follows:
v On AIX: /usr/opt/db2_09_01
v On Linux: /opt/IBM/db2/V9.1
Configure the Warehouse Proxy agent to connect to the data warehouse.
Perform this procedure whether or not the Warehouse Proxy agent and
the warehouse database are installed on the same computer.
If you are installing more than one Warehouse Proxy agent within the
same hub monitoring server installation, associate each Warehouse Proxy
agent with a subset of monitoring servers (hub or remote) within the
installation. Each Warehouse Proxy agent receives data from the
monitoring agents that report to the monitoring servers on the list. Use the
environment variable KHD_WAREHOUSE_TEMS_LIST to specify a list of
monitoring servers to associate with a Warehouse Proxy agent.
Complete the following steps on the computer where the DB2 for the workstation client is installed (the
local computer):
1. Catalog the remote TCP/IP node where the warehouse database is installed:
db2 catalog tcpip node node_name remote host_name server port
db2 terminate
where the indicated variables identify the location and port of the remote DB2 for the workstation
server. For host_name, specify the host name or IP address. The default port for a DB2 server is
60000. For example:
db2 catalog tcpip node amsc2424 remote 8.53.36.240 server 60000
db2 terminate
2. Catalog the remote warehouse database:
db2 catalog db db_name as db_alias at node node_name
db2 terminate
where
db_name is the name of the remote warehouse database.
db_alias is the nickname or alias used to identify the remote warehouse database on the local
computer. The local alias for the warehouse database must match the name that you specify in the
configuration procedure for the portal server, Warehouse Proxy agent, or Summarization and
Pruning agent.
node_name is the name of the node where the warehouse database is located.
Example:
db2 catalog db WAREHOUS as WAREHOUS at node amsc2424
db2 terminate
3. Connect to the remote warehouse database to verify the connection:
db2 connect to db_alias user user_name using user_password
where:
db_alias
Is the nickname or alias used to identify the remote warehouse database on the local computer.
user_name
Is the user ID that the local DB2 client uses to access the warehouse database.
user_password
Is the password for that user_name.
These values must match the values that you specify in the configuration procedures for the portal
server, Warehouse Proxy agent, or Summarization and Pruning agent.
Example:
db2 connect to WAREHOUS user itmuser using itmpswd1
v This procedure uses default values for the data source name, warehouse alias, and warehouse user ID.
(Default values are used in configuration procedures for warehousing components.) Substitute different
values if you do not want to use the default values.
Procedure
Complete the following procedure to set up an ODBC connection for a Warehouse Proxy agent on
Windows to a local or remote Tivoli Data Warehouse.
1. On the computer where the Warehouse Proxy agent is installed, open the Control Panel.
2. Click Administrative Tools → Data Sources (ODBC).
3. Click Add in the System DSN tab in the ODBC Data Source Administrator window.
4. Select IBM DB2 ODBC DRIVER from the list.
5. Click Finish.
6. In the ODBC DB2 Driver - Add window, perform the following steps:
a. Enter ITM Warehouse in Data source name.
b. Enter Warehous in Database Alias.
If the Tivoli Data Warehouse is located on a remote computer, ensure that the database alias
matches the alias that you used when cataloging the remote data warehouse. See Cataloging a
remote data warehouse on page 359.
If local, ensure that the database alias matches the name used for the warehouse database.
c. Click OK.
7. Test the ODBC database connection before continuing:
a. In the ODBC Data Source Administrator window, select ITM Warehouse.
b. Click Configure.
c. In the CLI/ODBC Settings - ITM Warehouse window, you see the data source name, ITM
Warehouse.
d. Enter ITMUser for the User ID.
e. Type a password for the user in the Password field. The default password is itmpswd1.
f. Click Connect.
g. A Connection test successful message is displayed.
h. Click OK.
4. Select DB2 from the list of selectable databases (see Figure 52), and click OK.
Figure 53. Configure DB2 Data Source for Warehouse Proxy window
5. Click OK to accept all default information on this window, or change one or more default values and
then click OK. The fields on this window are described in Table 74 on page 364.
Note: The values for the data source name, database name, and database user ID and password
must match the values that you used when configuring an ODBC connection for the Warehouse
Proxy agent. (See Configuring an ODBC data source for a DB2 data warehouse on page
360.)
Table 74. Configuration information for the Tivoli Data Warehouse database on DB2 for the workstation
Field                  Default value
Data source name       ITM Warehouse
Database Name          WAREHOUS
Admin User ID          db2admin
Admin Password         (no default)
Database User ID       ITMUser
Database Password      itmpswd1
Reenter Password       itmpswd1
6. Click OK.
1. Log on to the computer where the Warehouse Proxy agent is installed and begin the configuration:
a. Change to the install_dir/bin directory and run the following command:
./itmcmd manage [-h install_dir]
where install_dir is the installation directory for IBM Tivoli Monitoring. The default installation
directory is /opt/IBM/ITM.
The Manage Tivoli Enterprise Monitoring Services window is displayed.
b. Right-click Warehouse Proxy and click Configure.
The Configure Warehouse Proxy window is displayed.
2. On the TEMS Connection tab, review the settings for the connection between the Warehouse Proxy
agent and the hub monitoring server. Correct the settings if necessary.
The Warehouse Proxy agent must use the same protocols used by the application agents and by the
hub monitoring server. If the proxy agent does not have the same protocol as the hub monitoring server,
it cannot register with the hub. If the proxy does not have the same protocol as the application agents,
the application agents cannot communicate with the proxy when they try to create a route to it.
3. Click the Agent Parameters tab.
a. Use the scroll bar at the bottom of the window to display the Add and Delete buttons, which are
located to the right of the JDBC Drivers list box.
b. Click Add to display the file browser window. Navigate to the location of the driver files on this
computer and select the following driver files:
db2jcc.jar
db2jcc_license_cu.jar
c. Click OK to close the browser window and add the JDBC driver files to the list.
If you need to delete an entry from the list, select the entry and click Delete.
6. Change the default value displayed in the Warehouse URL field if it is not correct. The default Tivoli
Data Warehouse URL for IBM DB2 for the workstation is as follows:
jdbc:db2://localhost:60000/WAREHOUS
v If the Tivoli Data Warehouse is installed on a remote computer, specify the host name of the
remote computer instead of localhost.
v Change the port number if it is different.
v If the name of the Tivoli Data Warehouse database is not WAREHOUS, replace WAREHOUS with
the actual name. (See Creating the warehouse database on DB2 for the workstation on page
353.)
Note: When specifying the DB2 for the workstation database name on Linux and UNIX, case is
ignored. In other words, it makes no difference whether you provide the database name in
lowercase or uppercase.
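For example, if the warehouse database named WAREHOUS were hosted on a hypothetical remote
computer tdwhost.example.com with DB2 listening on port 50000, the URL would be:
jdbc:db2://tdwhost.example.com:50000/WAREHOUS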
7. Verify the JDBC driver name, which is displayed in the Warehouse Driver field. (Note that the
Warehouse Driver field displays the driver name, in contrast to the driver JAR files that are listed in
the JDBC Drivers field.)
The DB2 for the workstation JDBC driver name is as follows:
com.ibm.db2.jcc.DB2Driver
8. If necessary, change the entries in the Warehouse User and Warehouse Password fields to match
the user name and password that were created for the Tivoli Data Warehouse. (See Creating the
warehouse database on DB2 for the workstation on page 353.) The default user name is itmuser
and the default password is itmpswd1.
9. Check the Use Batch check box if you want the Warehouse Proxy agent to submit multiple execute
statements to the Tivoli Data Warehouse database for processing as a batch.
In some situations, such as crossing a network, sending multiple statements as a unit is more
efficient than sending each statement separately. Batch processing is one of the features provided
with the JDBC 2.0 API.
10. Click Test database connection to ensure you can communicate with the Tivoli Data Warehouse
database.
11. Click Save to save your settings and close the window.
Task: Test the connection between the portal server and the Tivoli Data Warehouse by creating a
customized query in the Tivoli Enterprise Portal.
Procedure: Testing the connection between the portal server and the Tivoli Data Warehouse on page 453
Procedure
Complete the following procedure to configure a portal server on Windows to connect to a DB2 for the
workstation data warehouse:
1. Log on to the Windows system where the portal server is installed and begin the configuration:
a. Click Start → Programs → IBM Tivoli Monitoring → Manage Tivoli Monitoring Services.
The Manage Tivoli Enterprise Monitoring Services window is displayed.
5. Click OK to accept all default information on this window, or change one or more default values and
then click OK. The fields on this window are described in the following table.
Table 76. Configuration information for the Tivoli Data Warehouse database on DB2 for the workstation
Field                  Default value
Data source name       ITM Warehouse
Database Name          WAREHOUS
Admin User ID          db2admin
Admin Password         (no default)
Database User ID       ITMUser
Database Password      itmpswd1
Reenter Password       itmpswd1
6. Click OK.
Configuring a Linux or AIX portal server (DB2 for the workstation CLI
connection)
Use this procedure to configure a portal server on Linux or AIX to connect to a DB2 for the workstation
Tivoli Data Warehouse on any operating system.
1. Log on to the computer where the Tivoli Enterprise Portal Server is installed and begin the
configuration.
a. Change to the install_dir/bin directory and run the following command:
./itmcmd manage [-h install_dir]
where install_dir is the installation directory for IBM Tivoli Monitoring. The default installation
directory is /opt/IBM/ITM.
The Manage Tivoli Enterprise Monitoring Services window is displayed.
b. Right-click Tivoli Enterprise Portal Server and click Configure.
The Configure Tivoli Enterprise Portal Server window is displayed.
2. On the TEMS Connection tab, review the settings for the connection between the portal server and
the hub monitoring server. These settings were specified when the portal server was installed.
3. Click the Agent Parameters tab.
4. Select the DB2 radio button.
The fields for configuring the connection to a DB2 for the workstation data warehouse are displayed at
the bottom of the window.
Figure 57. Configuring the connection to a DB2 for the workstation data warehouse
Default value   Description
WAREHOUS        The name of the Tivoli Data Warehouse database.
itmuser         The login name of the database user that the portal server will use to access the Tivoli
                Data Warehouse database.
itmpswd1        The password that the portal server will use to access the Tivoli Data Warehouse
                database.
itmpswd1        Reenter the password.
Procedure
Install the Summarization and Pruning agent if you have not already
installed it. For best performance, install the Summarization and Pruning
agent on the same computer as the data warehouse.
The installation procedure for Windows includes steps for configuring the
connection between the agent and the hub Tivoli Enterprise Monitoring
server. On Linux or AIX, this step is performed in a separate configuration
procedure (Configuring the monitoring agent). See the information at right.
Be sure to perform all referenced installation and configuration
procedures.
Note: The Summarization and Pruning agent is not automatically started
after installation. Do not complete any step or procedure for starting the
agent at this point.
If the data warehouse is located on a remote computer, copy the DB2 for
the workstation JDBC Universal Driver (Type 4 driver) JAR files, included
with the DB2 for the workstation product installation, to the local computer
where the Summarization and Pruning agent is installed. You can copy
the files to any directory that the user that the Summarization and Pruning
agent process runs as has access to.
Table 78. Tasks for installing and configuring communications for the Summarization and Pruning agent (continued)
Task
Procedure
When you configure history collection, you specify settings for how often
to collect, aggregate, and prune data for individual monitoring agents and
attribute groups. Configure history collection from the Tivoli Enterprise
Portal.
Start the Summarization and Pruning agent.
Chapter 17. Tivoli Data Warehouse solution using DB2 on z/OS
Supported components
Figure 58 presents the IBM Tivoli Monitoring environment when implementing a Tivoli Data Warehouse
solution using DB2 on z/OS as the warehouse database. The diagram summarizes the supported
operating system platforms for the various warehousing components, the supported database products,
and the connections between components. For more specific information about supported operating
systems and database products, including product names and versions, see Hardware and software
requirements on page 96.
Note: An asterisk (*) next to a database client indicates that you must manually install the client if it does
not already exist.
In the following discussion, numbered product components correspond to the numbers on the diagram.
Tivoli Data Warehouse on DB2 on z/OS
A Tivoli Data Warehouse repository on DB2 on z/OS can be accessed by any ITM-supported platform that
can run the Warehouse Proxy agent (Windows, Linux, or AIX), as well as the DB2 Connect software.
Warehouse Proxy agent
A Warehouse Proxy agent on Linux or AIX communicates with the warehouse database through a JDBC
connection. Install a Type 4 driver (DB2 on z/OS JDBC Universal Driver) on the computer where the
Warehouse Proxy agent is located.
A Warehouse Proxy agent on Windows communicates with the warehouse database through an ODBC
connection. The ODBC driver is included with the DB2 on z/OS client. You must install a DB2 client on the
Windows computer where the Warehouse Proxy agent is located, and then catalog the remote node and
database on the local computer.
374
Note: If you install a version 9 DB2 client, you also must install DB2 Connect Server Edition to connect to
a DB2 on z/OS data server.
Summarization and Pruning Agent
The Summarization and Pruning agent communicates with the warehouse database through a JDBC
connection from any supported operating system. Install a DB2 on z/OS Type 4 JDBC driver (DB2 on
z/OS JDBC Universal Driver) on the computer where the Summarization and Pruning agent is located.
Prerequisite installation
Before you implement your Tivoli Data Warehouse solution, complete one or more hub installations,
including the warehousing components (see the appropriate chapter within this section for the necessary
installation instructions). Include the following components in each hub installation:
v
v
v
v
Requirements
The Warehouse Proxy agent and the Summarization and Pruning agent both use the implicit table creation
option when connected to a DB2 database on z/OS. Both agents create tables without specifying a table
space or a database in the IN clause of a CREATE statement.
v
DB2 on z/OS version 9 creates an implicit database each time a table is created using a name in the
range DSN00001 to DSN60000. The characteristics of implicitly created databases are shown in
Table 81.
Table 81. Characteristics of implicitly created databases: BUFFERPOOL, INDEXBP, STOGROUP
(SYSDEFLT), ENCODING_SCHEME, SBCS_CCSID, DBCS_CCSID, and MIXED_CCSID.
Notes:
1. DB2 on z/OS chooses a specific buffer pool during creation of the implicit object, depending on the
record size. When the maximum record size reaches approximately 90% of the capacity of the
smaller page size, DB2 chooses the next larger page size, as shown in Table 82.
Table 82. Maximum DB2 on z/OS page sizes
Name      Page size
BP0       4 KB
BP8K0     8 KB
BP16K0    16 KB
BP32K     32 KB
2. To ensure databases can be created implicitly, DB2 on z/OS's CREATE IMPLICIT DATABASES
installation parameter must be set to YES.
v DB2 on z/OS v9 creates implicit table spaces when implicitly creating a table.
v The Summarization and Pruning agent creates functions that Tivoli Enterprise Portal users can take
advantage of when creating custom history queries. To let the agent create those functions, a default
WLM (Workload Manager) environment should be created.
v Before beginning this procedure, gather the following information about your target DB2 on z/OS
database:
Table 83. Required parameters for accessing the DB2 on z/OS database
DB2 on z/OS parameter
Your value
Database name
Port number
DB2 userid
DB2 password
Fully qualified host name
Solution steps
To implement your Tivoli Data Warehouse solution using DB2 on z/OS, complete the major steps
described in the remaining sections of this chapter, in the order listed:
1. Connect the Warehouse Proxy node to your DB2 on z/OS database.
2. Configure the Tivoli Data Warehouse agents.
To implement your solution successfully:
v Perform the tasks in the order listed.
v Do not skip a task and move forward to the procedures that follow it.
Activate the Manually configure a connection to a database radio button; then click Next.
Figure 64. DB2 Add Database Wizard notebook, Data Source tab
Figure 65. DB2 Add Database Wizard notebook, Node Options tab
Figure 66. DB2 Add Database Wizard notebook, System Options tab
Figure 67. DB2 Add Database Wizard notebook, Security Options tab
On the Security Options notebook page, activate the Server authentication (SERVER) radio button, and
click Next.
Figure 68. DB2 Add Database Wizard notebook, DCS Options tab
If this window is not displayed, press the Back button, verify each notebook page, and correct the
information as necessary.
To test the connection to the remote DB2 on z/OS database, press the Test Connection button.
The Connect to DB2 Database screen, shown in Figure 70, opens.
Enter the user ID and password for the remote DB2 on z/OS database, and press the Test Connection
button.
If the database connection can be made, the confirmation screen shown in Figure 71 is displayed.
Select the plus sign (+) to the left of the z/OS system that is running the DB2 on z/OS subsystem that
owns your Tivoli Data Warehouse repository; then expand its list of databases. The list should include the
database you're using.
To again check your database connection, right-click your database name, and select Connect from the
pop-up menu, as shown in Figure 73 on page 393.
Enter the user ID and password for the remote DB2 on z/OS database, and press the OK button.
Figure 75. DB2 Command Line Processor window. Note that the password is hidden in this figure.
where database_name is the name you assigned to the DB2 on z/OS database that contains your data
warehouse, and userid and password are the DB2 user ID and password required to access that
database.
The database information is displayed, as shown in Figure 75.
Review your options, specific to a Microsoft SQL Server solution, for operating system platforms and
communications between warehousing components; see Supported components on page 398.
Complete the prerequisite configuration steps before implementing your Tivoli Data Warehouse solution.
Supported components
Figure 76 presents the options for a Tivoli Data Warehouse solution using Microsoft SQL Server for the
warehouse database. The diagram summarizes the supported operating system platforms for the various
warehousing components, the supported database products, and the connections between components.
For more specific information about supported operating systems and database products, including product
names and versions, see Hardware and software requirements on page 96.
Figure 76. Tivoli Data Warehouse solution using Microsoft SQL Server
Note: An asterisk (*) next to a database client indicates that you must manually install the client if it does
not already exist.
In the following discussion, numbered product components correspond to the numbers on the diagram.
Tivoli Data Warehouse on Microsoft SQL Server
A Tivoli Data Warehouse database on Microsoft SQL Server can be installed on supported Windows
platforms.
Warehouse Proxy Agent
A Warehouse Proxy agent on Linux or AIX communicates with the warehouse database through a JDBC
connection. Install a Microsoft SQL Type 4 driver on the computer where the Warehouse Proxy is located.
Important: Use the 2005 SQL driver even if you are connecting to a warehouse database that was
created in Microsoft SQL Server 2000.
A Warehouse Proxy agent on Windows communicates with the warehouse database through an ODBC
connection. The ODBC driver is included with the Microsoft SQL Server client. If the Tivoli Data
Warehouse is located on a remote computer, install a Microsoft SQL Server client on the local computer
where the Warehouse Proxy agent is located. Also, configure a remote client connection to the Tivoli Data
Warehouse.
Tivoli Enterprise Portal Server
A Tivoli Enterprise Portal Server on Windows can connect to a Microsoft SQL Server data warehouse
through a Microsoft SQL Server client installed on the portal server. If the portal server database
(designated as TEPS database in the diagram) uses Microsoft SQL Server, the client already exists.
Manually install a Microsoft SQL Server client on the portal server only if the portal server database uses
DB2 for the workstation.
The portal server communicates with the warehouse database through an ODBC connection. The ODBC
driver is included with the Microsoft SQL Server client. Configure a remote client connection to the Tivoli
Data Warehouse.
Summarization and Pruning Agent
The Summarization and Pruning agent communicates with the warehouse database through a JDBC
connection from any supported operating system. Install a Microsoft SQL Type 4 JDBC driver on the
computer where the Summarization and Pruning agent is located.
Important: Use the 2005 SQL driver even if you are connecting to a warehouse database that was
created in Microsoft SQL Server 2000.
Prerequisite installation
Before you implement your Tivoli Data Warehouse solution, complete one or more hub installations,
excluding the warehousing components. Include the following components in each hub installation:
v The hub Tivoli Enterprise Monitoring Server
v (Optional) One or more remote monitoring servers
v The Tivoli Enterprise Portal Server, including the prerequisite RDBMS for the portal server database
(DB2 for the workstation or Microsoft SQL Server)
v A Microsoft SQL Server instance on the computer where you will create the Tivoli Data Warehouse
database. (The Tivoli Data Warehouse database can be shared in a multi-hub installation or dedicated
to a single hub.) The SQL Server instance must be patched to the current service pack level.
v (Optional) A portal desktop client
v (Optional) Monitoring agents, and the application support for the monitoring agents
Note: The term monitoring agent, as used here, refers to agents that collect data directly from
managed systems, not the Warehouse Proxy agent or Summarization and Pruning agent.
v (Optional) Language packs for all languages other than English
Refer to the following table for related information:
Table 85. Information topics related to installation of prerequisite software for a Tivoli Data Warehouse solution
Topic
Assumptions
The implementation instructions are based on the following assumptions:
v You will create the Tivoli Data Warehouse database on a different computer from the Tivoli Enterprise
Portal Server.
v You will create a single Tivoli Data Warehouse database, to be used either within a single hub
installation or to be shared in a multi-hub installation. If you have multiple independent hub installations,
repeat the implementation steps for each hub installation. (See Locating and sizing the hub Tivoli
Enterprise Monitoring Server on page 40 for information about hub installations.)
v No assumption is made about where you will install the Warehouse Proxy agent and Summarization
and Pruning agent. Either of these agents may be installed on the same computer as the Tivoli Data
Warehouse or on a different computer.
Solution steps
To implement your Tivoli Data Warehouse solution using Microsoft SQL Server, complete the four major
steps described in the remaining sections of this chapter, in the order listed:
1. Create the Tivoli Data Warehouse database.
2. Install and configure communications for the Warehouse Proxy agent.
3. Configure communications between the Tivoli Enterprise Portal Server and the data warehouse.
4. Install and configure communications for the Summarization and Pruning agent.
Except for Step 1, each major step consists of a series of installation and configuration tasks, listed and
described in a table. Use the step tables as a road map for implementing your solution. The step tables
describe the tasks at a high level, account for variations among configuration options (such as which
operating system is used for a component), and reference the appropriate sections for detailed
implementation procedures. To implement your solution successfully:
v Perform the tasks in the order listed in the table.
Default value
User name
ITMUser
User password
itmpswd1
v If the Warehouse Proxy and Summarization and Pruning agents create database objects at runtime, you
must give the warehouse user public and db_owner privileges to the Tivoli Data Warehouse database.
The warehouse user can have far fewer rights if the schema publication tool is used to create the
database objects. If the schema tool is used, the warehouse user needs only the db_datareader and
db_datawriter roles. If the warehouse user has limited privileges, the schema tool must be used to
create any additional database objects (using the schema tool's updated mode) if the historical
configuration is changed; see Generating SQL statements for the Tivoli Data Warehouse: the schema
publication tool on page 340.
v For Microsoft SQL Server 2005, do the following (see the example after this list):
  - Create a schema with the same name (and owner) as the database user login name (for example,
    ITMUser) and change the default schema for the user from dbo to this login name. (This step is not
    necessary if you are using Microsoft SQL Server 2000.)
  - Make sure the database is set up to support inbound network TCP/IP connections.
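The following SQL statements are a sketch of these settings for Microsoft SQL Server 2005, assuming the
default database name WAREHOUS and warehouse user ITMUser; adjust the names for your site:
USE WAREHOUS
GO
-- Required if the agents create database objects at runtime
EXEC sp_addrolemember 'db_owner', 'ITMUser'
GO
-- Create a schema owned by the warehouse user and make it the default schema
CREATE SCHEMA ITMUser AUTHORIZATION ITMUser
GO
ALTER USER ITMUser WITH DEFAULT_SCHEMA = ITMUser
GO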
Procedure
If you are installing more than one Warehouse Proxy agent, each agent
must be installed on a separate computer.
The installation procedure for Windows includes steps for configuring the
connection between the Warehouse Proxy and the hub Tivoli Enterprise
Monitoring server. On Linux or AIX, this step is performed in a separate
configuration procedure (Configuring the monitoring agent). See the
information at right. Be sure to perform all of the referenced installation
and configuration procedures.
Note for sites setting up autonomous operation: The installation
procedure includes steps for configuring the connection between the
agent and the hub Tivoli Enterprise Monitoring Server. On Windows
operating systems, if you want to run the Warehouse Proxy agent without
a connection to the hub, accept the defaults for the connection
information, but specify a nonvalid name for the monitoring server. On
UNIX and Linux operating systems, check No TEMS on the TEMS
Connection tab of the configuration window.
(Warehouse Proxy agent on Windows only)
v Install a Microsoft SQL Server client on the computer where the
Warehouse Proxy agent is installed if both of the following statements
are true:
- The Warehouse Proxy agent is installed on Windows, and
- The Warehouse Proxy agent needs to connect to a remote data warehouse.
Perform this procedure whether or not the Warehouse Proxy agent and
the warehouse database are installed on the same computer.
Table 87. Tasks for installing and configuring communications for the Warehouse Proxy agent (continued)
Task
Procedure
Install the Microsoft SQL Server 2005 JDBC Driver (Type 4 driver) on the
computer where the Warehouse Proxy agent is installed. Use the 2005
SQL JDBC driver even if you are connecting to a Microsoft SQL Server
2000 data warehouse.
Obtain the 2005 JDBC driver from the following Microsoft Web page:
http://msdn.microsoft.com/data/jdbc/default.aspx
Follow the instructions on the Microsoft download page for installing the
driver. After you install the driver, the JAR file name and location are as
follows:
mssql2005installdir/sqljdbc_1.1/enu/sqljdbc.jar
If you are installing more than one Warehouse Proxy agent within the
same hub monitoring server installation, associate each Warehouse Proxy
agent with a subset of monitoring servers (hub or remote) within the
installation. Each Warehouse Proxy agent receives data from the
monitoring agents that report to the monitoring servers on the list. Use the
environment variable KHD_WAREHOUSE_TEMS_LIST to specify a list of
monitoring servers to associate with a Warehouse Proxy agent.
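For example, the variable in the Warehouse Proxy agent environment file (install_dir\TMAITM6\khdenv on
Windows; install_dir/config/hd.ini on Linux and UNIX) might look like the following, where the monitoring
server names are placeholders for names in your installation:
KHD_WAREHOUSE_TEMS_LIST=REMOTE_TEMS1 REMOTE_TEMS2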
Click Next.
Click Next again.
Click Finish.
Click Test Data Source to test the connection to the database.
Click OK.
Figure 77. Configure SQL Data Source for Warehouse Proxy window
5. Click OK to accept all default information on this window, or change one or more default values and
then click OK. The fields on this window are described in Table 88.
Note: The values for the data source name, database user ID, and password must match the
values that you used when configuring an ODBC connection for the Warehouse Proxy agent.
(See Configuring an ODBC data source for a Microsoft SQL data warehouse on page 404.)
Table 88. Configuration information for the Tivoli Data Warehouse database on Microsoft SQL Server
Field
Default value
Description
Data Source Name
ITM Warehouse
Database Name
WAREHOUS
Table 88. Configuration information for the Tivoli Data Warehouse database on Microsoft SQL Server (continued)
Field
Default value
Description
Admin User ID
sa
Admin Password
(no default)
Database User ID
ITMUser
The login name of the database user that the portal server
will use to access the Tivoli Data Warehouse database.
The user ID must match exactly, including case, what was
created in the SQL Server database if using database
authentication or the ID that was created in the Windows
OS.
Database Password
itmpswd1
Reenter Password
itmpswd1
6. Click OK.
where install_dir is the installation directory for IBM Tivoli Monitoring. The default installation
directory is /opt/IBM/ITM.
The Manage Tivoli Enterprise Monitoring Services window is displayed.
b. Right-click Warehouse Proxy and click Configure.
The Configure Warehouse Proxy window is displayed.
2. On the TEMS Connection tab, review the settings for the connection between the Warehouse Proxy
agent and the hub monitoring server.
The Warehouse Proxy agent must use the same protocols used by the application agents and by the
hub monitoring server. If the proxy agent does not have the same protocol as the hub monitoring server, it
cannot register with the hub. If the proxy does not have the same protocol as the application agents,
the application agents cannot communicate with the proxy when they try to create a route to it.
3. Click the Agent Parameters tab.
Procedure
If the portal server database was created using DB2 for the workstation,
install a Microsoft SQL Server database client on the portal server.
Test the connection between the portal server and the Tivoli Data Warehouse by creating a customized
query in the Tivoli Enterprise Portal. See Testing the connection between the portal server and the Tivoli
Data Warehouse on page 453.
5. Click OK to accept all default information on this window, or change one or more default values and
then click OK. The fields on this window are described in the following table:
Table 90. Configuration information for the Tivoli Data Warehouse database on Microsoft SQL Server
Field
Default value
Description
Data Source Name
ITM Warehouse
Database Name
WAREHOUS
Admin User ID
sa
Admin Password
(no default)
Database User ID
ITMUser
The login name of the database user that the portal server
will use to access the Tivoli Data Warehouse database.
Table 90. Configuration information for the Tivoli Data Warehouse database on Microsoft SQL Server (continued)
Field
Default value
Description
Database Password
(no default)
Reenter Password
(no default)
6. Click OK.
Procedure
Install the Summarization and Pruning agent if you have not already
installed it. For best performance, install the Summarization and Pruning
agent on the same computer as the data warehouse.
The installation procedure for Windows includes steps for configuring the
connection between the agent and the hub Tivoli Enterprise Monitoring
server. On Linux or AIX, this step is performed in a separate configuration
procedure (Configuring the monitoring agent). See the information at right.
Be sure to perform all referenced installation and configuration
procedures.
Note: The Summarization and Pruning agent is not automatically started
after installation. Do not complete any step or procedure for starting the
agent at this point.
Install the Microsoft SQL Server 2005 JDBC Driver (Type 4 driver) on the
computer where the Summarization and Pruning agent is installed. Use
the 2005 SQL JDBC driver even if you are connecting to a Microsoft SQL
Server 2000 data warehouse.
Obtain the 2005 JDBC driver from the following Microsoft Web page:
http://msdn.microsoft.com/data/jdbc/default.aspx
Follow the instructions on the Microsoft download page for installing the
driver. After you install the driver, the JAR file name and location are as
follows:
mssql2005installdir/sqljdbc_1.1/enu/sqljdbc.jar
Configure the Summarization and Pruning agent.
Table 91. Tasks for installing and configuring communications for the Summarization and Pruning agent (continued)
Task
Procedure
When you configure history collection, you specify settings for how often
to collect, aggregate, and prune data for individual monitoring agents and
attribute groups. Configure history collection from the Tivoli Enterprise
Portal.
Start the Summarization and Pruning agent.
Supported components
Figure 81 on page 416 presents the options for a Tivoli Data Warehouse solution using Oracle for the
warehouse database. The diagram summarizes the supported operating system platforms for the various
warehousing components, the supported database products, and the connections between components.
For more specific information about supported operating systems and database products, including product
names and versions, see Hardware and software requirements on page 96.
Note: An asterisk (*) next to a database client indicates that you must manually install the client if it does
not already exist.
In the following discussion, numbered product components correspond to the numbers on the diagram.
Tivoli Data Warehouse on Oracle
A Tivoli Data Warehouse database on Oracle can be installed on any supported Windows, Linux, or UNIX
platform supported by Oracle. Ensure that the Oracle listener is active so that it can accept connections
from an Oracle client or JDBC driver.
Prerequisite installation
Before you implement your Tivoli Data Warehouse solution, complete one or more hub installations,
excluding the warehousing components. Include the following components in each hub installation:
v The hub Tivoli Enterprise Monitoring Server
v (Optional) One or more remote monitoring servers
v The Tivoli Enterprise Portal Server, including the prerequisite RDBMS for the portal server database
(DB2 for the workstation or Microsoft SQL Server)
v An Oracle database server on the computer where you will create the Tivoli Data Warehouse database.
(The Tivoli Data Warehouse database can be shared in a multi-hub installation or dedicated to a single
hub.)
v (Optional) A portal desktop client
v (Optional) Monitoring agents, and the application support for the monitoring agents
Note: The term monitoring agent, as used here, refers to agents that collect data directly from
managed systems, not the Warehouse Proxy agent or Summarization and Pruning agent.
v (Optional) Language packs for all languages other than English
Refer to the following table for related information:
Table 93. Information topics related to installation of prerequisite software for a Tivoli Data Warehouse solution
Topic
Table 93. Information topics related to installation of prerequisite software for a Tivoli Data Warehouse
solution (continued)
Topic
Assumptions
The implementation instructions are based on the following assumptions:
v You will create the Tivoli Data Warehouse database on a different computer from the Tivoli Enterprise
Portal Server.
v You will create a single Tivoli Data Warehouse database, to be used either within a single hub
installation or to be shared in a multi-hub installation. If you have multiple independent hub installations,
repeat the implementation steps for each hub installation. (See Locating and sizing the hub Tivoli
Enterprise Monitoring Server on page 40 for information about hub installations.)
v No assumption is made about where you will install the Warehouse Proxy agent and Summarization
and Pruning agent. Either of these agents may be installed on the same computer as the Tivoli Data
Warehouse or on a different computer.
Solution steps
To implement your Tivoli Data Warehouse solution using Oracle, complete the four major steps described
in the remaining sections of this chapter, in the order listed:
1. Create the Tivoli Data Warehouse database.
2. Install and configure communications for the Warehouse Proxy agent.
3. Configure communications between the Tivoli Enterprise Portal Server and the data warehouse.
4. Install and configure communications for the Summarization and Pruning agent.
Each major step consists of a series of installation and configuration tasks, listed and described in a table.
Use the step tables as a road map for implementing your solution. The step tables describe the tasks at a
high level, account for variations among configuration options (such as which operating system is used for
a component), and reference the appropriate sections for detailed implementation procedures. To
implement your solution successfully:
v Perform the tasks in the order listed in the table.
v Do not skip ahead from a step table to the procedures that follow it.
Be aware that some of the implementation procedures referenced in a table are included in this chapter
and some are documented elsewhere. In some cases, the task is described in the table, without
referencing a separate procedure. Read and follow all instructions in the tables.
Procedure
Create the Tivoli Data Warehouse database on one of the supported Windows, Linux, or UNIX operating
systems. To comply with the assumptions described in the introduction to this chapter, create the database
on a different computer from the Tivoli Enterprise Portal Server.
For guidance on planning the size and disk requirements for the warehouse database, see Planning
considerations for the Tivoli Data Warehouse on page 333. For information about creating the warehouse
database using Oracle, see Creating the warehouse database on Oracle.
Activate the Oracle listener on the Oracle server where the Tivoli Data Warehouse is installed. To activate
the Oracle listener, use the Oracle Listener Service on Windows or the lsnrctl start command on Linux and
UNIX.
Default value
Database name
WAREHOUS
User name
ITMUser
User password
itmpswd1
v Create an ITM_DW role, and give this role the following permissions:
CREATE ROLE role NOT IDENTIFIED;
GRANT CREATE SESSION TO role;
GRANT ALTER SESSION TO role;
GRANT CREATE PROCEDURE TO role;
GRANT CREATE TABLE TO role;
GRANT CREATE VIEW TO role;
After you create the warehouse user ID that will be used by the Warehouse Proxy and the
Summarization and Pruning agents to connect to the Tivoli Data Warehouse database, give this user ID
the role you just created:
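For example, assuming the role name ITM_DW and the default warehouse user ID ITMUser:
GRANT ITM_DW TO ITMUser;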
Because all the Tivoli Data Warehouse tables are created in the user's default tablespace, you must
allocate enough quota on the default tablespace for this user to create all the tables, or you can simplify
this by granting unlimited tablespace to this user.
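For example, either of the following statements accomplishes this; the tablespace name USERS is a
placeholder for the warehouse user's default tablespace:
ALTER USER ITMUser QUOTA UNLIMITED ON USERS;
GRANT UNLIMITED TABLESPACE TO ITMUser;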
Note: The ITM_DW role needs the connect privilege only if the database objects are created using the
schema publication tool. If the historical configuration is changed and the warehouse user has
limited privileges, the schema tool must be used to create any additional database objects (using
the schema tool's updated mode; see Generating SQL statements for the Tivoli Data
Warehouse: the schema publication tool on page 340).
v Activate the Oracle listener using the Oracle Listener Service on Windows or the lsnrctl start command
on Linux and UNIX.
Procedure
If you are installing more than one Warehouse Proxy agent, each agent
must be installed on a separate computer.
The installation procedure for Windows includes steps for configuring the
connection between the agent and the hub Tivoli Enterprise Monitoring
server. On Linux or AIX, this step is performed in a separate configuration
procedure (Configuring the monitoring agent). See the information at right.
Be sure to perform all referenced installation and configuration
procedures.
Note for sites setting up autonomous operation: The installation
procedure includes steps for configuring the connection between the
agent and the hub Tivoli Enterprise Monitoring Server. On Windows
operating systems, if you want to run the Warehouse Proxy agent without
a connection to the hub, accept the defaults for the connection
information, but specify a nonvalid name for the monitoring server. On
UNIX and Linux operating systems, check No TEMS on the TEMS
Connection tab of the configuration window.
(Warehouse Proxy agent on Windows only)
v Install an Oracle client on the computer where the Warehouse Proxy
agent is installed if both of the following statements are true:
- The Warehouse Proxy is installed on Windows, and
- The Warehouse Proxy needs to connect to a remote data warehouse.
v Ensure that the latest Oracle patches are installed.
v Set the following system variable on the computer where the
Warehouse Proxy agent is installed. Restart the computer after setting
the variable. The format of the NLS_LANG environment variable is:
NLS_LANG=language_territory.charset
Set the language and territory to the appropriate values. For the United
States, this is NLS_LANG=AMERICAN_AMERICA.AL32UTF8.
Perform the last two tasks whether the warehouse database is local or
remote.
(Warehouse Proxy agent on Windows only)
Table 96. Tasks for installing and configuring communications for the Warehouse Proxy agent (continued)
Task
Procedure
If you are installing more than one Warehouse Proxy agent within the
same hub monitoring server installation, associate each Warehouse Proxy
agent with a subset of monitoring servers (hub or remote) within the
installation. Each Warehouse Proxy agent receives data from the
monitoring agents that report to the monitoring servers on the list. Use the
environment variable KHD_WAREHOUSE_TEMS_LIST to specify a list of
monitoring servers to associate with a Warehouse Proxy agent.
Do not perform this procedure on the computer where the data warehouse (Oracle server) is installed or
on a computer where there is no Oracle client (for example, on a computer where a Type 4 Oracle JDBC
driver is used to communicate with the remote data warehouse).
Note: This procedure uses the default value for the warehouse name (WAREHOUS). Substitute a different
value if you do not want to use the default name.
Complete the following steps to create the TNS Service Name. Click Next after each step.
1. Enter netca at the Oracle command line to start the Oracle Net Configuration Assistant tool.
2. On the Welcome window, select Local Net Service Name configuration.
3. Select Add.
4. Enter WAREHOUS in the Service Name field. (This is the remote name for the Tivoli Data Warehouse.)
5. Select TCP as the network protocol to communicate with the Tivoli Data Warehouse database.
6. Specify the fully qualified host name and port number of the computer where the warehouse
database is installed.
7. Perform the connection test to verify the connection to the warehouse database.
8. Optionally change the default name in the Net Service Name field.
This is the TNS Service Name. The default name matches the name that you entered in Step 4. You
can change this to a different name. The TNS Service Name can be considered a local alias for the
remote Tivoli Data Warehouse name.
9. When prompted to configure another net service name, click No to return to the Welcome window.
10. Click Finish.
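The resulting entry in the tnsnames.ora file on the Warehouse Proxy computer typically resembles the
following sketch. The host name is a placeholder, and depending on your Oracle configuration the
CONNECT_DATA section might use SID instead of SERVICE_NAME:
WAREHOUS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = warehouse-host.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = WAREHOUS))
  )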
Figure 82. Configure Oracle Data Source for Warehouse Proxy window
5. Click OK to accept all default information on this window, or change one or more default values and
then click OK. The fields on this window are described in Table 97.
Note: The values for the data source name, database name, and database user ID and password
must match the values that you used when configuring an ODBC connection for the Warehouse
Proxy agent. (See Configuring an ODBC data source for an Oracle data warehouse on page
423.)
Table 97. Configuration information for the Tivoli Data Warehouse database on Oracle
Field
Default value
Description
Data Source Name
ITM Warehouse
Database User ID
ITMUser
The login name of the database user that the portal server
will use to access the Tivoli Data Warehouse database.
Database Password
itmpswd1
Table 97. Configuration information for the Tivoli Data Warehouse database on Oracle (continued)
Field
Default value
Description
Reenter Password
itmpswd1
6. Click OK.
where install_dir is the installation directory for IBM Tivoli Monitoring. The default installation
directory is /opt/IBM/ITM.
The Manage Tivoli Enterprise Monitoring Services window is displayed.
b. Right-click Warehouse Proxy and click Configure.
The Configure Warehouse Proxy window is displayed.
3. On the TEMS Connection tab, review the settings for the connection between the Warehouse Proxy
agent and the hub monitoring server.
The Warehouse Proxy agent must use the same protocols used by the application agents and by the
hub monitoring server. If the proxy agent does not have the same protocol as the hub monitoring server, it
cannot register with the hub. If the proxy does not have the same protocol as the application agents,
the application agents cannot communicate with the proxy when they try to create a route to it.
4. Click the Agent Parameters tab.
9. If necessary, change the entries in the Warehouse User and Warehouse Password fields to match
the user name and password that were created for the Tivoli Data Warehouse. (See Creating the
warehouse database on Oracle on page 419.) The default user name is itmuser and the default
password is itmpswd1.
10. Check the Use Batch check box if you want the Warehouse Proxy agent to submit multiple execute
statements to the Tivoli Data Warehouse database for processing as a batch.
In some situations, such as crossing a network, sending multiple statements as a unit is more
efficient than sending each statement separately. Batch processing is one of the features provided
with the JDBC 2.0 API.
11. Click Test database connection to ensure you can communicate with the Tivoli Data Warehouse
database.
12. Click Save to save your settings and close the window.
Procedure
Test the connection between the portal server and the Tivoli Data Warehouse by creating a customized
query in the Tivoli Enterprise Portal. See Testing the connection between the portal server and the Tivoli
Data Warehouse on page 453.
5. Click OK to accept all default information on this window, or change one or more default values and
then click OK. The fields on this window are described in the following table:
Table 99. Configuration information for the Tivoli Data Warehouse database on Oracle
Field
Default value
Description
Data Source Name
ITM Warehouse
Database User ID
ITMUser
The login name of the database user that the portal server
will use to access the Tivoli Data Warehouse database.
Database Password
itmpswd1
Reenter Password
itmpswd1
6. Click OK.
where install_dir is the installation directory for IBM Tivoli Monitoring. The default installation
directory is /opt/IBM/ITM.
The Manage Tivoli Enterprise Monitoring Services window is displayed.
b. Right-click Tivoli Enterprise Portal Server and click Configure.
The Configure Tivoli Enterprise Portal Server window is displayed.
2. On the TEMS Connection tab, review the settings for the connection between the portal server and
the hub monitoring server. These settings were specified when the portal server was installed.
3. Click the Agent Parameters tab.
4. Select the Oracle radio button.
The fields for configuring the connection to an Oracle data warehouse are displayed at the bottom of
the window.
5. Fill in the fields in Figure 86 with the configuration values described in Table 100.
Table 100. Configuration information for a Tivoli Data Warehouse database on Oracle
Field
Default value
Description
The name of the Tivoli Data Warehouse database.
Table 100. Configuration information for a Tivoli Data Warehouse database on Oracle (continued)
Field
Default value
Description
Warehouse DB user
ID
ITMUser
Warehouse user
password
itmpswd1
Re-type Warehouse
user password
itmpswd1
oracleinstalldir/jdbc/lib/
ojdbc14.jar
oracle.jdbc.driver.OracleDriver
jdbc:oracle:thin:@localhost:
1521:WAREHOUS
User-defined
attributes
(no default)
Procedure
Install the Summarization and Pruning agent if you have not already
installed it. For best performance, install the Summarization and Pruning
agent on the same computer as the data warehouse.
The installation procedure for Windows includes steps for configuring the
connection between the agent and the hub Tivoli Enterprise Monitoring
server. On Linux or AIX, this step is performed in a separate configuration
procedure (Configuring the monitoring agent). See the information at right.
Be sure to perform all referenced installation and configuration
procedures.
Note: The Summarization and Pruning agent is not automatically started
after installation. Do not complete any step or procedure for starting the
agent at this point.
Table 101. Tasks for installing and configuring communications for the Summarization and Pruning agent (continued)
Task
Procedure
When you configure history collection, you specify settings for how often
to collect, aggregate, and prune data for individual monitoring agents and
attribute groups. Configure history collection from the Tivoli Enterprise
Portal.
Start the Summarization and Pruning agent.
Use the DB2 for the workstation JDBC Universal Driver (Type 4 driver). The DB2 for the
workstation driver files are located with your Tivoli Data Warehouse server installation. The
Type 4 driver file names and locations are as follows:
db2installdir/java/db2jcc.jar
db2installdir/java/db2jcc_license_cu.jar
where db2installdir is the directory where DB2 for the workstation was installed. The default
DB2 for the workstation Version 9 installation directory is as follows:
v On Windows: C:\Program Files\IBM\SQLLIB
v On AIX: /usr/opt/db2_09_01
v On Linux and Solaris: /opt/IBM/db2/V9.1
Table 102. Where to obtain the JDBC driver files for the Summarization and Pruning agent (continued)
Database platform
Microsoft SQL Server Use the Microsoft SQL Server 2005 Type 4 driver to connect to a Tivoli Data Warehouse on
either SQL Server 2000 or SQL Server 2005. (The SQL Server 2005 JDBC Driver works with
a Tivoli Data Warehouse on SQL Server 2000.) Obtain the 2005 JDBC driver from the
following Microsoft Web page:
http://msdn.microsoft.com/data/jdbc/default.aspx
Download and install the driver to the computer where you installed the Summarization and
Pruning agent. Follow the instructions on the Microsoft download page for installing the driver.
The SQL Server 2005 JAR file name and location after installation is as follows:
mssql2005installdir/sqljdbc_1.1/enu/sqljdbc.jar
Oracle
Obtain the Oracle JDBC Type 4 driver from the following Web site:
http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/index.html
The Oracle JDBC driver JAR file name and location after installation is as follows:
oracleinstalldir/jdbc/lib/ojdbc14.jar
The ojdbc14.jar file supports JRE 1.5 or higher, the required Java Runtime Environment for
IBM Tivoli Monitoring.
Procedure
Complete the following steps to configure the Summarization and Pruning agent.
Note: A Reload button is available on the configuration windows. Click Reload at any time during the
procedure to restore the original settings.
1. Log on to the computer where the Summarization and Pruning agent is installed and begin the
configuration:
a. Open the Manage Tivoli Enterprise Monitoring Services window:
v On Windows, click Start > Programs > IBM Tivoli Monitoring > Manage Tivoli Monitoring
Services.
v On Linux or UNIX, change to the install_dir/bin directory and run the following command:
./itmcmd manage [-h install_dir]
where install_dir is the installation directory for IBM Tivoli Monitoring. The default installation
directory is /opt/IBM/ITM.
b. Right-click Summarization and Pruning Agent.
c. On Windows, click Configure Using Defaults. On Linux or UNIX, click Configure. If you are
reconfiguring, click Reconfigure.
2. Review the settings for the connection between the Summarization and Pruning agent and the hub
Tivoli Enterprise Monitoring server. These settings were specified when the Summarization and
Pruning agent was installed.
v On Windows, perform the following steps:
a. On the Warehouse Summarization and Pruning Agent: Agent Advanced Configuration window,
verify the communications protocol of the hub monitoring server in the Protocol drop down list.
Click OK.
b. On the next window, verify the host name and port number of the hub monitoring server. Click
OK.
For information about the different protocols available to the hub monitoring server on Windows,
and associated default values, see 12e on page 150.
v On Linux or UNIX, verify the following information on the TEMS Connection tab:
The hostname of the hub monitoring server in the TEMS Hostname field. (If the field is not
active, clear the No TEMS check box.)
The communications protocol that the hub monitoring server uses in the Protocol drop down
list.
- If you select IP.UDP, IP.PIPE, or IP.SPIPE, enter the port number of the monitoring server in
the Port Number field.
- If you select SNA, enter information in the Net Name, LU Name, and LOG Mode fields.
For information about the different protocols available to the hub monitoring server on Linux or
UNIX, and associated default values, see Table 30 on page 154.
3. When you are finished verifying or entering information about the hub monitoring server:
v On Windows, click Yes on the message asking if you want to configure the Summarization and
Pruning agent.
v On Linux or UNIX, click the Agent Parameters tab.
A multi-tabbed configuration window is displayed with the Sources tab at the front.
Figure 87 shows the configuration window for a Summarization and Pruning agent on Windows
(values displayed are for a DB2 for the workstation warehouse database). The configuration window
for a Summarization and Pruning agent on Linux or UNIX is similar.
Figure 87. Sources tab of Configure Summarization and Pruning Agent window
4. Add the names and directory locations of the JDBC driver JAR files to the JDBC Drivers list box:
a. On Linux or UNIX, use the scroll bar at the bottom of the window to display the Add and Delete
buttons, which are located to the right of the JDBC Drivers list box.
b. Click Add to display the file browser window. Navigate to the location of the driver files on this
computer and select the Type 4 driver files for your database platform. See Table 102 on page
435 for the names and default locations of the driver files to add.
c. Click OK to close the browser window and add the JDBC driver files to the list.
If you need to delete an entry from the list, select the entry and click Delete.
5. In the Database field, select the database platform you are using for the Tivoli Data Warehouse from
the drop-down list: DB2, SQL Server, or Oracle.
The default values for the database platform you selected are displayed in the other text fields on the
Sources tab.
6. Change the default value displayed in the Warehouse URL field if it is not correct. The following table
lists the default Tivoli Data Warehouse URLs for the different database platforms:
Table 103. Tivoli Data Warehouse URLs
Database platform       Warehouse URL
DB2                     jdbc:db2://localhost:60000/WAREHOUS
Microsoft SQL Server    jdbc:sqlserver://localhost:1433;databaseName=WAREHOUS
Oracle                  jdbc:oracle:thin:@localhost:1521:WAREHOUS
v If the Tivoli Data Warehouse is installed on a remote computer, specify the host name of the
remote computer instead of localhost.
v Change the port number if it is different.
v If the name of the Tivoli Data Warehouse database is not WAREHOUS, replace WAREHOUS with
the actual name.
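For example, for a warehouse named WAREHOUS on a remote DB2 server (the host name shown is a
placeholder), the URL would be:
jdbc:db2://warehouse-host.example.com:60000/WAREHOUS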
7. Verify the JDBC driver name, which is displayed in the Warehouse Driver field. (Note that the
Warehouse Driver field displays the driver name, in contrast to the driver files that are listed in the
JDBC Drivers field.)
The following table lists the JDBC Type 4 driver names for each database platform:
Table 104. JDBC driver names
Database platform       JDBC driver name
DB2                     com.ibm.db2.jcc.DB2Driver
Microsoft SQL Server    com.microsoft.sqlserver.jdbc.SQLServerDriver
Oracle                  oracle.jdbc.driver.OracleDriver
8. If necessary, change the entries in the Warehouse User and Warehouse Password fields to match
the user name and password that were created for the Tivoli Data Warehouse. The default user name
is itmuser and the default password is itmpswd1.
9. In the TEP Server Host and TEP Server Port fields, enter the host name of the computer where the
Tivoli Enterprise Portal Server is installed and the port number that it uses to communicate with the
Summarization and Pruning agent.
Note: The default Tivoli Enterprise Portal Server interface port of 15001 is also used after the
Summarization and Pruning agent's initial connection to the portal server over port 1920. Any
firewalls between the two need to allow communications on either 15001 or whichever port is
defined for any new Tivoli Enterprise Portal Server interface used per the instructions in
Defining a Tivoli Enterprise Portal Server interface on Windows on page 286.
10. Click Test database connection to ensure you can communicate with the Tivoli Data Warehouse
database.
11. Select the Scheduling tab to specify when you want summarization and pruning to take place. You
can schedule it to run on a fixed schedule or on a flexible schedule:
Figure 88. Scheduling tab of Configure Summarization and Pruning Agent window
v Fixed
Schedule the Summarization and Pruning agent to run every x days.
Select the time of the day that you want the agent to run.
Set the time to at least 5 minutes from the current time if you want it to run right away.
v Flexible
Schedule the Summarization and Pruning agent to run every x minutes.
Optionally, specify the times when the agent should not run, using the format HH:MM-HH:MM.
Press Add to add the text. For example, to block the agent from running between 00:00 and
01:59 and between 04:00 and 04:59, type 00:00-01:59, click Add, type 04:00-04:59 and click
Add. Do not use the Add button unless you are adding a blackout period. All values must be
between 00:00 and 23:59 and the end time must be greater than start time.
Note: If you select Fixed, the Summarization and Pruning agent does not immediately perform any
summarization or pruning when it starts. It performs summarization and pruning when it runs. It
runs according to the schedule you specify on the Scheduling tab. If you select Flexible, the
Summarization and Pruning agent runs once immediately after it is started and then at the
interval you specified except during any blackout times.
12. Specify shift and vacation settings in the Work Days tab:
Figure 89. Work Days tab of Summarization and Pruning Agent configuration window
When you enable and configure shifts, IBM Tivoli Monitoring produces three separate summarization
reports:
v Summarization for peak shift hours
v Summarization for off-peak shift hours
v Summarization for all hours (peak and off-peak)
Similarly, when you enable and configure vacations, IBM Tivoli Monitoring produces three separate
summarization reports:
v Summarization for vacation days
v Summarization for nonvacation days
v Summarization for all days (vacation and nonvacation)
Complete the following steps to enable shifts, vacations, or both:
v Select when the beginning of the week starts.
v To configure shifts:
a. Select Specify shifts to enable shifts.
b. Optionally change the default settings for peak and off peak hours by selecting and moving
hours between the Peak Shift Hours box and the Off Peak Shift Hours box using the arrow
buttons.
Note: Changing the shift information after data has been summarized creates an inconsistency
in the data. Data that was previously collected is not summarized again to account for
the new shift values.
v To configure vacation settings:
a. Select Specify vacation days to enable vacation days.
b. Select Yes in the drop down list if you want to specify weekends as vacation days.
c. Select Add to add vacation days.
d. Select the vacation days you want to add from the calendar.
On UNIX or Linux, right-click, instead of left-click, to select the month and year.
The days you select are displayed in the list box.
If you want to delete any days you have previously chosen, select them and click Delete.
Notes:
1) Add vacation days in the future. Adding vacation days in the past creates an inconsistency
in the data. Data that was previously collected is not summarized again to account for
vacation days.
2) Enabling shifts or vacation periods can significantly increase the size of the warehouse
database. It will also negatively affect the performance of the Summarization and Pruning
Agent.
13. Select the Log Parameters tab to set the intervals for log pruning:
Figure 90. Log Parameters tab of Summarization and Pruning Agent configuration window
v Select Keep WAREHOUSEAGGREGLOG data for, select the unit of time (day, month or year), and
the number of units for which data should be kept.
v Select Keep WAREHOUSELOG data for, select the unit of time (day, month or year), and the
number of units for which data should be kept.
14. Specify additional summarization and pruning settings in the Additional Parameters tab:
Figure 91. Additional Parameters tab of Summarization and Pruning Agent configuration window
a. Specify the number of additional threads you want to use for handling summarization and pruning
processing. The number of threads should be 2 * N, where N is the number of processors running
the Summarization and Pruning agent. A higher number of threads can be used, depending on
your database configuration and hardware.
b. Specify the maximum rows that can be deleted in a single pruning transaction. Any positive
integer is valid. The default value is 1000. There is no value that indicates you want all rows
deleted.
If you increase the number of threads, you might consider increasing this value if your transaction
log allows for it. The effective number of rows deleted per transaction is based on this value
divided by the number of worker threads.
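For example, with the default maximum of 1000 rows per transaction and 4 worker threads, each
pruning transaction deletes approximately 250 rows.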
c. Indicate a time zone for historical data from the Use timezone offset from drop down list.
This field indicates which time zone to use when a user specifies a time period in a query for
monitoring data.
v Select Agent to use the time zone (or time zones) where the monitoring agents are located.
v Select Warehouse to use the time zone where the Summarization and Pruning agent is
located. If the Tivoli Data Warehouse and the Summarization and Pruning agent are in different
time zones, the Warehouse choice indicates the time zone of the Summarization and Pruning
agent, not the warehouse.
Skip this field if the Summarization and Pruning agent and the monitoring agents that collect data
are all in the same time zone.
d. Specify the age of the data you want summarized in the Summarize hourly data older than and
Summarize daily data older than fields. The default value is 1 for hourly data and 0 for daily
data.
e. The Maximum number of node errors to display refers to the node error table in the
Summarization and Pruning workspace. It determines the maximum number of rows that the
workspace saves and displays.
f. The Maximum number of summarization and pruning runs to display refers to the
Summarization and Pruning Run table in the Summarization and Pruning workspace. It determines
the maximum number of rows that the workspace saves and displays.
Maximum number of Summarization and Pruning runs to display and Maximum number of node
errors to display together determine the number of rows shown in the Summarization and Pruning
overall run table and Errors table respectively. There is a minimum value of 10 for each. These
equate to keywords KSY_SUMMARIZATION_UNITS and KSY_NODE_ERROR_UNITS in file
KSYENV/sy.ini.
g. The Database Connectivity Cache Time determines how long the result of a positive connectivity
check is cached. Longer times may result in less accurate results in the workspace; however, they
save processing time.
Database Connectivity Cache Time records the number of minutes to cache the database
connectivity for MOSWOS reporting purposes. The minimum value is 5 minutes. This equates to
keyword KSY_CACHE_MINS in file KSYENV/sy.ini.
h. Batch mode determines whether data from different managed systems can be used in the same
database batch; this setting can also improve performance.
Batch mode controls the batching method used by the Summarization and Pruning agent. A value
of Single Managed System (0) means that data should only be batched for the same system,
whereas a value of Multiple Managed System (1) means that data from multiple systems can be
batched together; this can lead to higher performance at potentially bigger transaction sizes. The
default value is Single Managed System (0). This equates to keyword KSY_BATCH_MODE in file
KSYENV/sy.ini.
To change these values, you can either use the Summarization and Pruning configuration window's
Additional parameters tab or update these parameters directly in file KSYENV/sy.ini.
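For reference, these keywords in KSYENV/sy.ini might look like the following sketch; the values shown are
the documented minimums and default described above, not recommendations:
KSY_SUMMARIZATION_UNITS=10
KSY_NODE_ERROR_UNITS=10
KSY_CACHE_MINS=5
KSY_BATCH_MODE=0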
15. Save your settings and close the window:
v On Windows, click Save and then click Close.
v On Linux or UNIX, click Save and then click Cancel.
where sy is the product code for the Summarization and Pruning agent.
KHD_WAREHOUSE_TEMS_LIST=REMOTE_TEMS1 REMOTE_TEMS2
The name of a monitoring server is specified when the server is installed. The default name of a
monitoring server is HUB_host_name (for a hub monitoring server) or REMOTE_host_name (for a
remote monitoring server), where host_name is the short host name.
Each proxy agent will register with an annotation representing one or more monitoring server
names, indicating that it will serve all the agents reporting to those monitoring servers. The default
annotation (or the string *ANY) in the KHD_WAREHOUSE_TEMS_LIST variable indicates that a
proxy may serve as a possible failover agent. Adding the *ANY string in the monitoring server list
for the third proxy agent means that this agent will serve as a failover agent if one or more proxy
agents stop working.
When a lookup to find the address of the proxy agent is done by a monitoring agent (if data is
collected at the monitoring agent) or by the monitoring server (if data is collected at the monitoring
server), the address of the proxy agent registered with an annotation equal to the name of the
monitoring server the agents are connected to is found first, then the proxy agents registered with
a default annotation are found in the order they were registered on the hub monitoring server. If an
historical export has been redirected to a proxy agent, the historical exports will continue to be sent
to that agent as long as the connection is working. To reestablish the original configuration, stop
and restart the fall-back WHP.
The name of a particular monitoring server can be specified in the list for only one proxy agent. Do
not specify the same monitoring server name in more than one list. Do not try to do load balancing
by specifying the same monitoring server name in two different KHD_WAREHOUSE_TEMS_LIST
variables.
3. Optionally modify the interval at which a monitoring server queries the Global Location Broker to find
out which Warehouse Proxy it is associated with:
a. Open the environment file for the monitoring server:
v (Windows) ITMinstall_dir\CMS\KBBENV
v (Linux or UNIX) ITMinstall_dir/config/hostname_ms_TEMSid.config
where ITMinstall_dir is the directory where you installed the product.
b. Change the following entry to specify a different query interval:
KPX_WAREHOUSE_REGCHK=60
Example firewall configuration: the Tivoli Enterprise Portal Server uses IP.PIPE with SKIP:2 and listens on
port 16124; the Warehouse Proxy port 6014 is opened through the firewall for export; and the monitoring
server port 1918 is opened through the firewall.
This setting prints the value of KHD_WAREHOUSE_TEMS_LIST and shows any errors associated
with its components.
v To determine which Warehouse Proxy a particular monitoring server uses for its agents:
1. Open the environment file for the monitoring server:
(Windows) ITMinstall_dir\CMS\KBBENV
(Linux or UNIX) ITMinstall_dir/config/hostname_ms_TEMSid
where ITMinstall_dir is the directory where you installed the product.
2. Add the following entry to the KBB_RAS1 trace setting:
KBB_RAS1=ERROR(UNIT:kpxrwhpx STATE)
This setting prints entries in the RAS log of the monitoring server when a registration change
occurs. The entry specifies the name and address of the new Warehouse Proxy agent that the
monitoring server is using.
1. Configure the Warehouse Proxy agent to run without a connection to the hub monitoring server. On
Windows operating systems, accept the defaults for the connection information, but specify a nonvalid
name for the monitoring server. On UNIX and Linux operating systems, check No TEMS on the
TEMS Connection tab of the configuration window.
2. Add the following variable to the Warehouse Proxy agent environment file (install_dir\TMAITM6\khdenv
on Windows operating systems; install_dir/config/hd.ini on UNIX and Linux operating systems):
KHD_REGWITHGLB=N
3. Configure the Warehouse Proxy agent to use the same IP port number as you chose for the various
autonomous agents that will be sending historical data to it; see the IBM Tivoli Monitoring:
Administrator's Guide for details.
4. Restart the agent.
5. Restart the agent's Tivoli Enterprise Monitoring Server, if necessary.
If the Warehouse Proxy agent you reconfigured to run autonomously has previously connected to
either a hub monitoring server or a remote monitoring server, the agent has already registered with the
monitoring server to which it connected. To clear this registration information now that the agent is
running autonomously, recycle the monitoring server. If the monitoring server is a remote monitoring
server, also recycle the hub monitoring server to which it connects.
To configure a monitoring agent with the location of the Warehouse Proxy agent or agents to which it
should export historical data, complete the following steps:
1. Install the agent following Installing monitoring agents on page 183 or the documentation for the
agent.
2. Open the monitoring agent environment file in a text editor:
v Windows operating systems: install_dir\TMAITM6\kpcenv
v Linux and UNIX operating systems: install_dir/config/pc.ini
v z/OS operating system: &hilev.&rte.RKANPARU(KPCENV)
where pc is the two-character product code for the monitoring agent (see Appendix D, IBM Tivoli
product, platform, and component codes, on page 567).
3. Add the following variable to the file:
KHD_WAREHOUSE_LOCATION=family protocol.#network address[port number]
The value of the variable can be a semicolon-delimited list of network addresses. For example:
KHD_WAREHOUSE_LOCATION=ip.pipe:SYS2-XP[63358];ip:SYS2-XP[63358]
These files are named dockpc, where pc is the two-letter product code for the monitoring agent (see
Appendix D, IBM Tivoli product, platform, and component codes, on page 567). For Universal Agent
attributes, the file is named cccodicc.
On Windows operating systems, the files are located in install_dir\cnps directory; on Linux and
UNIX operating systems, the files are located in the installdir/arch/cq/data directory.
By default, the Summarization and Pruning agent looks for the application support files in the
install_dir\TMAITM6\data directory (on Windows) or the install_dir/config/data directory (on UNIX
and Linux). If you do not create this directory and copy the files to it, you must add the
KSY_AUTONOMOUS_ODI_DIR variable to the Summarization and Pruning agent environment file and
specify the alternative location.
Note: There is no need to copy the dockcj file; it is not used when reconfiguring the Summarization
and Pruning agent. If you do copy this file, the following error will occur and can be ignored.
Validation failed: Column name exceeds 10 characters: ACKNOWLEDGED.
ODI File contents not loaded: /install_dir/dockcj
4. On the machine where the Summarization and Pruning agent is installed, open its environment file in a
text editor:
v Windows operating systems: install_dir\cnps\KSYENV
v Linux and UNIX operating systems: install_dir/config/data/sy.ini
5. Edit the following variables:
v To enable the Summarization and Pruning agent to run without connecting to the Tivoli Enterprise
Portal Server, set KSY_AUTONOMOUS=YES.
v If you did not install the application support files in the default directory (see step 3), set
KSY_AUTONOMOUS_ODI_DIR=alternative location of application support files.
6. Restart the Summarization and Pruning agent. The WAREHOUSESUMPRUNE table is
automatically created when the Summarization and Pruning agent is started.
7. If you are upgrading from a previous version and already have summarization and pruning settings
stored in the Tivoli Enterprise Portal Server database, restart the Tivoli Enterprise Portal Server.
The first time the portal server is started after the WAREHOUSESUMPRUNE table has been created,
any previously existing data collection and summarization and pruning configuration settings are
migrated to the WAREHOUSESUMPRUNE table in the warehouse database. Subsequently, any
settings configured using the portal server are stored directly in the warehouse.
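For reference, a minimal sketch of the environment file entries described in step 5 (KSYENV on Windows, sy.ini
on UNIX and Linux). The directory shown for KSY_AUTONOMOUS_ODI_DIR is a hypothetical example and is needed
only if you did not copy the application support files to the default location:
KSY_AUTONOMOUS=YES
KSY_AUTONOMOUS_ODI_DIR=/opt/IBM/ITM/support/odi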
Configuring summarization and pruning without the Tivoli Enterprise Portal Server
You can configure historical data collection and summarization and pruning using the Tivoli Enterprise
Portal, or you can configure them directly in the warehouse database WAREHOUSESUMPRUNE table
using SQL commands.
Table 105 contains descriptions of the columns in the WAREHOUSESUMPRUNE table. Insert one row for
each attribute group for which you want to collect historical data, along with the values for any
summarization and pruning settings. You do not need to set defaults for unused options; they are built into
the table design. Varchar values must be enclosed in single quotes (' ').
Table 105. Descriptions of the columns in the WAREHOUSESUMPRUNE control settings table
Each entry lists the column name followed by its type and default value.
TABNAME (VARCHAR (40), NOT NULL PRIMARY KEY)
The short table name. In the application support file, this is the value of TABLE. Review the
application support file associated with each agent for the TABLE names.
YEARSUM (VARCHAR (8), DEFAULT -16823)
QUARTSUM (VARCHAR (8), DEFAULT -16823)
MONSUM (VARCHAR (8), DEFAULT -16823)
WEEKSUM (VARCHAR (8), DEFAULT -16823)
DAYSUM (VARCHAR (8), DEFAULT -16823)
HOURSUM (VARCHAR (8), DEFAULT -16823)
PYEAR (VARCHAR (8), DEFAULT -16838)
PYEARINT (SMALLINT, DEFAULT 1)
PYEARUNIT (VARCHAR (8), DEFAULT -16834)
PQUART (VARCHAR (8), DEFAULT -16838)
PMON (VARCHAR (8), DEFAULT -16838)
PMONINT (SMALLINT, DEFAULT 1)
PMONUNIT (VARCHAR (8), DEFAULT -16835)
PWEEK (VARCHAR (8), DEFAULT -16838)
PWEEKINT (SMALLINT, DEFAULT 1)
PWEEKUNIT (VARCHAR (8), DEFAULT -16835)
PDAY (VARCHAR (8), DEFAULT -16838)
PDAYINT (SMALLINT, DEFAULT 1)
PDAYUNIT (VARCHAR (8), DEFAULT -16835)
PHOUR (VARCHAR (8), DEFAULT -16838)
PHOURINT (SMALLINT, DEFAULT 1)
PHOURUNIT (VARCHAR (8), DEFAULT -16836)
PRAW (VARCHAR (8), DEFAULT -16838)
PRAWINT (SMALLINT, DEFAULT 1)
PRAWUNIT (VARCHAR (8), DEFAULT -16836)
Examples: Following are examples of basic collection and summarization and pruning configuration.
Configuration and daily/hourly summarization
Collection is configured, and daily and hourly summarizations are set. No pruning has been
specified. Use the SQL INSERT command.
Required:
v TABNAME= Table Code
v DAYSUM= -16822 (summarize daily)
v HOURSUM=-16822 (summarize hourly)
INSERT INTO WAREHOUSESUMPRUNE (TABNAME,DAYSUM,HOURSUM) VALUES ('WTMEMORY','-16822','-16822');
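After the insert, you can confirm the setting with a simple query against the control table; this sketch reuses
the WTMEMORY attribute group from the example above:
SELECT TABNAME, DAYSUM, HOURSUM FROM WAREHOUSESUMPRUNE WHERE TABNAME = 'WTMEMORY';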
Testing the connection between the portal server and the Tivoli Data
Warehouse
To test the connection between the Tivoli Enterprise Portal Server and the Tivoli Data Warehouse
database, use the Tivoli Enterprise Portal to create a custom SQL query to the warehouse database and
display the results. The procedure described in this section creates an SQL query to the
WAREHOUSELOG table, a status table that is created in the warehouse database when the Warehouse
Proxy agent starts. See WAREHOUSELOG and WAREHOUSEAGGREGLOG tables on page 458 for a
description of this table.
Before you begin, make sure that the following components are installed and started:
v The Tivoli Enterprise Portal Server
v The Warehouse Proxy agent
v The Tivoli Data Warehouse RDBMS server
The test procedure consists of the following steps:
1. Create a custom SQL query to the WAREHOUSELOG table.
2. Create a new workspace to display the results.
3. Assign the query to the workspace.
4. Enter a name and description for the query. For this test, enter WAREHOUSELOG for the query name.
5. From the Category drop-down list, select the folder where you want the WAREHOUSELOG query to
appear in the Query tree.
For example, select a folder name that corresponds to the operating system (Windows OS or UNIX
OS) where the Tivoli Data Warehouse is installed.
6. Select the name of the data source for the Tivoli Data Warehouse in the Data Sources list.
7. Click OK.
The WAREHOUSELOG query appears in the Query tree in the Custom_SQL folder under the category
that you selected. The Specification tab opens with a Custom SQL text box for you to enter an SQL
command.
8. Enter the following SQL statement in the Custom SQL text box to select all columns of the
WAREHOUSELOG table:
select * from WAREHOUSELOG
9. Click OK to save the query and close the Query Editor window.
2. Create a workspace
You now have a duplicate of the Enterprise Status workspace named WAREHOUSELOG that you can
modify.
6. Click OK to select the WAREHOUSELOG query for the table and close the Query editor.
7. Click OK to close the Properties editor.
The columns of the WAREHOUSELOG table are displayed within the table view in the
WAREHOUSELOG workspace. The content of the table is also displayed if the table is not empty.
Database initialization
When the Warehouse Proxy starts, the following tests are done:
v Checks that the Warehouse Proxy can connect to the database.
v If the database is Oracle or DB2 for the workstation, checks that the encoding is set to UTF8.
v If the database is DB2 for the workstation, checks that a bufferpool of page size 8KB is created. If it is
not, one is created, along with three new tablespaces that use the 8KB bufferpool. The bufferpool is
called "ITMBUF8K" and the tablespaces are named "ITMREG8K," "ITMSYS8K," and "ITMBUF8K."
v Creates a database cache that contains a list of all the tables and columns that exist in the database.
If any of these tests fail, a message is written to the log file and messages are displayed in the Event
Viewer.
These tests are repeated every 10 minutes.
You can change this default startup behavior by setting the following environment variables:
KHD_CNX_WAIT_ENABLE
Enables the Warehouse Proxy to wait in between attempts to connect to the database. By default,
this variable is set to Y. If you do not want the Warehouse Proxy to wait, change the variable to N.
However, this can generate a large log file if the connection to the database fails with each
attempt.
KHD_CNX_WAIT
Defines the amount of time, in minutes, that the Warehouse Proxy waits between attempts to
connect to the database. The default value is 10 minutes.
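For example, the following sketch of the KHDENV file (hd.ini on Linux and UNIX) keeps the wait behavior
enabled but shortens the interval between connection attempts to 5 minutes; the 5-minute value is illustrative
only:
KHD_CNX_WAIT_ENABLE=Y
KHD_CNX_WAIT=5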
Work queue
The work queue consists of a single queue instance and a configurable number of worker threads that run
work placed on it. There are two primary configuration parameters that you can set. You can set these
parameters in the KHDENV file on Windows or the hd.ini file on Linux and UNIX before starting the
Warehouse Proxy.
KHD_QUEUE_LENGTH
The length of the KHD work queue. This is an integer that identifies the maximum number of
export work requests that can be placed on the work queue before the work queue begins
rejecting requests. The default value is 1000. Setting this value to 0 means that the work queue
has no limit.
KHD_EXPORT_THREADS
The number of worker threads exporting data to the database. The default value is 10.
Connection pool
The Warehouse Proxy uses several pre-initialized ODBC connections to access the target database. The
use of these ODBC connection objects is synchronized through a single connection pool. The connection
pool is initialized when the Warehouse Proxy starts.
You can configure the number of connections in the pool by defining the following environment variable in
the KHDENV file on Windows or the hd.ini file on Linux and UNIX:
v KHD_CNX_POOL_SIZE: The total number of pre-initialized ODBC connection objects available to the
work queue export threads. The default value is 10.
All export worker threads request connections from the connection pool and must obtain a connection
before the work of exporting data can continue.
You only see the connections established when a request is active. It is important to set the number of
worker threads to be greater than or equal to the number of ODBC connections; that is, set
KHD_EXPORT_THREADS >= KHD_CNX_POOL_SIZE.
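A sketch of the corresponding entries in the KHDENV file (hd.ini on Linux and UNIX); the values shown are the
documented defaults and satisfy the guideline that the number of export threads be at least the pool size:
KHD_QUEUE_LENGTH=1000
KHD_EXPORT_THREADS=10
KHD_CNX_POOL_SIZE=10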
Timeout values
You can set two environment variables to control the timeout value. One variable is set on the application
agent, the other on the Warehouse Proxy.
KHD_STATUSTIMEOUT
The time, in seconds, to wait for a status from the Warehouse Proxy before sending an export
request again. The default value is 900 seconds, or 15 minutes.
KHD_SRV_STATUSTIMEOUT
The timeout value, in seconds, for the work queue to perform work. The default value is 600
seconds, or 10 minutes.
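For example, to extend both timeouts to 20 minutes (1200 seconds, an illustrative value), add the first line to
the monitoring agent's environment file and the second to the Warehouse Proxy agent's environment file:
KHD_STATUSTIMEOUT=1200
KHD_SRV_STATUSTIMEOUT=1200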
Export requests are rejected by the Warehouse Proxy for the following four reasons:
v The time between when an export request is sent to the work queue and when it is extracted from the
queue exceeds the timeout. If you have tracing for the Warehouse Proxy set to ERROR, an error similar
to the following is logged in the Warehouse Proxy log file:
REJECTED: The export for the originnode OriginNodeName, the application
applicationName and the table tableName has been rejected for timeout
reason in stage END_QUEUE.
v The time between when an export request is sent to the work queue and when the work queue starts to
do existence checking in the database exceeds the timeout. If you have tracing for the Warehouse
Proxy set to ERROR, an error similar to the following is logged in the Warehouse Proxy log file and the
WAREHOUSELOG table:
Sample data rejected for timeout reason at stage START EXPORT
v The time between when an export request is sent to the work queue and when the work queue fetches
all the rows in the sample exceeds the timeout. If you have tracing for the Warehouse Proxy set to
ERROR, a message similar to the following is logged in the Warehouse Proxy log file and the
WAREHOUSELOG table:
Sample data rejected for timeout reason at stage START SAMPLE
v The time between when an export request is sent to the work queue and when the work queue commits
the rows in the database exceeds the timeout. If you have tracing for the Warehouse Proxy set to
ERROR, a message similar to the following is logged in the Warehouse Proxy log file and the
WAREHOUSELOG table:
Sample data rejected for timeout reason at stage COMMIT
WPSYSNAME
The name of the Warehouse Proxy Agent that inserted the rows into the database.
The WAREHOUSEAGGREGLOG table logs the progress of the Summarization and Pruning agent as it is
processing data. Each time the Summarization and Pruning agent executes, it adds an entry for each
attribute group (OBJECT column) and origin node (ORIGINNODE column) that was processed. The
WAREHOUSEAGGREGLOG table contains the following columns:
ORIGINNODE
The name of the computer that is being summarized. This name is the node name for the agent. For
example, Primary::box1:NT.
OBJECT
The attribute group that was processed.
LOGTMZDIFF
The time zone difference for the Summarization and Pruning agent.
MINWRITETIME
The minimum WRITETIME value that was read from the sample data for the specified
ORIGINNODE and OBJECT.
MAXWRITETIME
The maximum WRITETIME value that was read from the sample data for the specified
ORIGINNODE and OBJECT.
STARTTIME
The time that the Summarization and Pruning agent processing began for the specified
ORIGINNODE and OBJECT.
ENDTIME
The time that the Summarization and Pruning agent processing ended for the specified
ORIGINNODE and OBJECT.
ROWSREAD
The number of sample data rows read for the specified ORIGINNODE and OBJECT in the time
interval MINWRITETIME and MAXWRITETIME.
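To review recent Summarization and Pruning agent activity, you can query the table directly. The following
sketch selects the columns described above; the ordering clause is illustrative:
SELECT ORIGINNODE, OBJECT, ROWSREAD, STARTTIME, ENDTIME
FROM WAREHOUSEAGGREGLOG
ORDER BY STARTTIME DESC;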
As of IBM Tivoli Monitoring V6.2.2, a new type of Tivoli Management Services agent, the System Monitor
Agent, allows you to send OS monitoring data directly to Netcool/OMNIbus without first passing the data to
a Tivoli Enterprise Monitoring Server. In this way these agents can run in agent-only environments that
lack the standard Tivoli Monitoring servers (the Tivoli Enterprise Monitoring Server and the Tivoli Enterprise
Portal Server). These monitoring agents, which run on Windows and on Linux/UNIX, effectively replace the
OMNIbus System Service Monitors for monitoring of desktop operating systems. Chapter 23, Monitoring
your operating system via a System Monitor Agent, on page 517 provides complete information on
installing, configuring, and uninstalling a System Monitor Agent on either Windows or Linux.
Note: As of fix pack 2 for V6.2.2, for sites running x86_64 CPUs, both 32-bit and 64-bit Windows
environments (Windows 2003, Vista, 2008) are supported for the System Monitor Agents.
An enhancement provided with the first fix pack for version 6.2.2 enables the System Monitor Agents to
send event data directly to OMNIbus, thus making a monitoring server unnecessary even for event
processing.
v EIF events generated by autonomous agents (including the System Monitor Agents) can be sent
directly to either Tivoli Enterprise Console or OMNIbus for private situations only.
v SNMP alerts generated by autonomous agents can be forwarded to any SNMP trap receiver, including
OMNIbus's MTTRAPD probe for both enterprise and private situations.
For more information, see Event forwarding from autonomous agents on page 53.
If you are upgrading from IBM Tivoli Monitoring and Tivoli Event Synchronization version 1.x to version
2.2.0.0, see Upgrading to Tivoli Event Synchronization version 2.2.0.0 on page 485.
manage and maintain availability across your enterprise. Managing situation events with the Tivoli
Enterprise Console product gives you the following advantages:
v Aggregation of event information from a variety of different sources including those from other Tivoli
software applications, Tivoli partner applications, custom applications, network management platforms,
and relational database systems
v Pre-configured rules that automatically provide best-practices event management
v Persistence and processing of a high volume of events in an IT environment by:
Prioritizing events by their level of importance
Filtering redundant or low priority events
Correlating events with other events from different sources
Root cause analysis and resolution
Initiating automatic corrective actions, when appropriate, such as escalation
v Unified system and network management by automatically performing the following event management
tasks:
Correlating the status of a system or application to the status of the network that it uses
Determining if the root cause of a system or application problem is an underlying network failure
Note: If you already have policies that contain emitter activities that send events to the Tivoli Enterprise
Console, turning on Tivoli Event Integration event forwarding will result in duplicate events. You can
deactivate the emitter activities within policies so you do not have to modify all your policies when
you activate Tivoli Event Integration Facility forwarding by using Disable Workflow Policy/Tivoli
Emitter Agent Event Forwarding when you configure the monitoring server.
Using policies gives you more control over which events are sent, and you may not want to lose this
granularity. Moreover, it is likely that the policies invoking the Tivoli Enterprise Console emitter
are doing little else. If you deactivate these activities, there is no point in running the policy. You
may prefer to delete policies that are no longer required, instead of disabling them.
Figure 94. One or more hub monitoring servers connecting to a single event server
Figure 95. Single hub monitoring server and multiple event servers
Figure 96. Multiple hub monitoring servers and multiple event servers in a hub and spoke configuration
Note: This graphic is intended to be an example of one possible scaled configuration for the IBM Tivoli
Monitoring and Tivoli Enterprise Console integration. The procedures in this chapter do not provide
all of the information needed to set up this sort of configuration.
For this configuration, you must install the Tivoli Enterprise Console event synchronization component on
the hub event server. You must also load the omegamon.baroc and Sentry.baroc files on the spoke event
servers, as described in Modifying an existing rule base on page 481. In addition, you must load each
.baroc file for any monitoring agent generating situations that are forwarded to spoke event servers, as
described in Installing monitoring agent .baroc files on the event server on page 481.
v Aggregation of event information from a large number and variety of different sources including those
from other Tivoli software applications, Tivoli partner applications, custom applications, network
management platforms, and relational database systems
v Pre-configured rules that automatically provide best-practices event management and root cause
determination from an end-to-end perspective
v Persistence, processing, and access to a high volume of events in an IT environment
v Unified system and network management by automatically performing the following event management
tasks:
Correlating the status of a system or application to the status of the network that it uses
Determining if the root cause of a system or application problem is an underlying network failure
If you are monitoring fewer than 1000 active events and you want to view only situation events (not the
other types of events that IBM Tivoli Enterprise Console can monitor), you can use the Situation Event
Console in the Tivoli Enterprise Portal. If you are monitoring more than 1000 active events, consider
moving to IBM Tivoli Enterprise Console for your event aggregation, and use the Tivoli Enterprise Console
view within the Tivoli Enterprise Portal to display the event information. The response time for the Tivoli
Enterprise Console view is better than the Situation Event Console view when a large number of events is
displayed.
For additional information about the integration with Tivoli Enterprise Console, see Event synchronization
component on page 8. For additional information about Tivoli Enterprise Console itself, see the Tivoli
Enterprise Console information center. For additional information about using the Situation Event Console
in the Tivoli Enterprise Portal, see the IBM Tivoli Monitoring: Tivoli Enterprise Portal User's Guide.
2. If you have a monitoring server on an operating system like UNIX or Linux, you must configure your
TCP/IP network services in the /etc/hosts file to return the fully qualified host name. See Host name
for TCP/IP network services on page 91 for more information.
3. For a Windows event server, any existing rule base that you use must indicate a relative drive letter
(such as C:\) as part of its associated path. To verify that your existing rule base contains a relative
drive letter, run the following command from a bash environment on your event server:
wrb -lsrb -path
If the returned path includes something like hostname:\rulebase_directory, with no drive letter (such as
C:\), copy the ESync2200Win32.exe file from the \TEC subdirectory of the IBM Tivoli Monitoring
installation image to the drive where the rule base exists and run the installation from that file.
4. If you are using a Windows event server, if you have any rule base with an associated path that does
not contain a relative drive letter and that has the Sentry2_0_Base class imported, copy the
ESync2200Win32.exe file from the \TEC subdirectory of the IBM Tivoli Monitoring installation image to
the drive where the rule base exists and run the installation from that file.
To verify if you have any rule bases that have an associated path containing no relative drive letter, run
the wrb -lsrb -path command as described in the previous note.
To determine if your rule bases have the Sentry2_0_Base class imported, run the following command
against all of your rule bases:
wrb -lsrbclass rule_base
Figure 97. Window shown when no Tivoli Enterprise Console event server is found.
1. On the host of the event server, launch the event synchronization installer from the installation media:
On Windows, double-click the ESync2200Win32.exe file in the \tec subdirectory on the IBM Tivoli
Monitoring V6.2.2 Tools DVD or DVD image.
On Linux or UNIX, change to the \tec subdirectory of the IBM Tivoli Monitoring V6.2.2 Tools DVD
and run the following command:
ESync2200operating_system.bin
where operating_system is the operating system you are installing on (aix, HP11, Linux, linux390,
or Solaris). For example, run the following command on an AIX computer:
ESync2200Aix.bin
5. Complete the following information about the files where events will be written and click Next:
Table 108. IBM Tivoli Enterprise Console event synchronization configuration fields, continued
6. Type the following information for each monitoring server with which you want to synchronize events
and click Add. You must specify information for at least one monitoring server.
Host name
The fully qualified host name for the computer where the monitoring server is running. This
name must match the information that will be in events that are issued from this monitoring
server.
User ID
The user ID to access the computer where the monitoring server is running.
Password
The password to access the computer.
Confirmation
The same password, for confirmation.
You can add information for up to 10 monitoring servers in this wizard. If you want to add additional
monitoring servers, add them after you install them by using the steps provided in Defining additional
monitoring servers to the event server on page 484.
7. When you have provided information about all of the monitoring servers, click Next.
You are presented with the options of having the installer automatically perform rule base
modifications, or manually performing the modifications after installation is complete (see Table 109).
Table 109. Options for rule base modification
Automatic rule base modification
The installation wizard will ask for the rule base into which event synchronization class files and
rule set will be imported, and automatically execute the rule base commands to do this.
Manual rule base modification
The installation wizard will not create or update any rule base with event synchronization files.
Important: You will have to manually create or update the rule base with event synchronization
files after the installation is complete. See Manually importing the event synchronization class files
and rule set on page 479.
If you select the automatic option, continue with step 8. If you select the manual option, skip to step
11.
8. Specify the rule base that you want to use to synchronize events. You have two choices:
v Create a new rulebase
v Use existing rulebase
If you select to use an existing rule base, the event synchronization .baroc class files
(omegamon.baroc and Sentry.baroc [if not present]) and the omegamon.rls rule set file are imported
into your existing rule base. Also, if Sentry.baroc has already been imported into the existing rule
base, the Sentry2_0_Base class is extended to define additional integration attributes for the situation
events from IBM Tivoli Monitoring.
v If you are creating a new rule base, type the name for the rule base you want to create and the
path to where the new rule base will be located. There is no default location; you must specify a
location.
v If you are using an existing rule base, type the name of the rule base.
v If you want to import an existing rule base into a new rule base, type the name of the existing rule
base in the Existing rulebase to import field.
Note: This step is only available if you are creating a new rule base.
9. Click Next.
10. If you indicated in the previous step that the installer uses an existing rule base to import the event
synchronization class files and rule set, a window is displayed that allows you to specify whether you
want the installer to back up the rule base before updating it. If you request a backup, specify both
the backup rule base name and backup rule base path. If you leave these fields blank, no backup is
made. Click Next to proceed to the pre-installation summary panel.
11. Verify the installation location, then click Next.
The installation begins.
12. When the installation and configuration steps are finished, a message telling you to stop and restart
the event server is displayed.
If you chose to have the installer automatically update the rule base, you are offered the option of
having the installer restart the event server for you. Check the box to have the installer restart the
server. If you want to restart the event server yourself, leave the box unchecked.
13. Click OK.
14. Click Finish on the Summary Information window.
Note: If any configuration errors occurred during installation and configuration, you are directed to a
log file that contains additional troubleshooting information.
Perform the following tasks after the installation is finished:
v Stop and restart the event server for the configuration changes to take effect.
v Install the monitoring agent .baroc files on the event server as described in Installing monitoring agent
.baroc files on the event server on page 481.
v Configure the monitoring server to forward events to the event server as described in Configuring your
monitoring server to forward events on page 482.
v If you did not choose to have the rule base updated automatically, update the rule base as described in
Manually importing the event synchronization class files and rule set on page 479.
On UNIX or Linux:
ESync2200operating_system.bin -console
where operating_system is the operating system you are installing on (aix, HP11, Linux, linux390,
or Solaris). For example, run the following command on an AIX computer:
ESync2200Aix.bin -console
7. Press Enter to use the default configuration file, situpdate.conf. If you want to use a different
configuration file, type the name and press Enter.
The following prompt is displayed:
Number of seconds to sleep when no new situation updates [3]
8. Type the number of seconds that you want to use for the polling interval. The default value is 3, while
the minimum value is 1. Press Enter.
The following prompt is displayed:
Number of bytes to use to save last event [50]
9. Type the number of bytes to use to save the last event and press Enter. The default and minimum
value is 50.
The following prompt is displayed:
URL of the CMS SOAP server [cms/soap]
10. Type the URL for the monitoring server SOAP server and press Enter. The default value is cms/soap
(which you can use if you set up your monitoring server using the defaults for SOAP server
configuration).
The following prompt is displayed:
Rate for sending SOAP requests to CMS from TEC via Web Services [10]
11. Supply the maximum number of event updates to send to the monitoring server at one time and press
Enter. The default and minimum value is 10.
The following prompt is displayed:
Level of debug for log
[x] 1 low
[ ] 2 med
[ ] 3 verbose
To select an item enter its number, or enter 0 when you are finished: [0]
12. Type the level of debugging that you want to use and press Enter. The default value is Low, indicated
by an x next to Low.
13. Type 0 when you have finished and press Enter.
The following prompt is displayed:
Press 1 for Next, 2 for Previous, 3 to Cancel, or 4 to Redisplay [1]
15. Type the maximum size, in bytes, for the cache file and press Enter. The default value is 50000. Do
not use commas (,) when specifying this value.
The following prompt is displayed:
Maximum number of cache files [10]
16. Type the maximum number of cache files to have at one time and press Enter. The default value is
10, while the minimum is 2.
On Windows, the following prompt is displayed:
Directory for cache files to reside [C:/tmp/TME/TEC/OM_TEC/persistence]
17. Type the directory for the cache files and press Enter. The default directory on Windows is
C:\tmp\TME\TEC\OM_TEC\persistence; on UNIX, /var/TME/TEC/OM_TEC/persistence.
The following prompt is displayed:
Press 1 for Next, 2 for Previous, 3 to Cancel, or 4 to Redisplay [1]
Type the fully qualified host name for the computer where the monitoring server is running. This name
must match the information that is in events issued by this monitoring server. Press Enter.
The following prompt is displayed:
User ID []
20. Type the user ID to use to access the computer where the monitoring server is running and press
Enter.
The following prompt is displayed:
Password:
21. Type the password to access the computer and press Enter.
The following prompt is displayed:
Confirmation:
23. Repeat steps 19 to 22 for each monitoring server for which you want to receive events on this event
server.
When you have provided information for all the monitoring servers and you specified information for
fewer than 10 monitoring servers, press Enter to move through the remaining fields defining additional
monitoring servers. Do not specify any additional monitoring server information.
The following prompt is displayed:
[X] 1 Automatically install rules and classes (recommended)
[ ] 2 Manually install rules and classes (advanced users)
To select an item enter its number, or 0 when you are finished: [0]
24. If you want to have the installer automatically install the rules and classes, enter 1 and continue with
step 25. If you prefer to manually install the rules and classes, enter 2 and proceed to step 35.
25. When you see the following prompt, type 1 and press Enter to continue:
Press 1 for Next, 2 for Previous, 3 to cancel or 4 to Redisplay [1]
26. Type 1 to create a new rule base or 2 to use an existing rule base. Press Enter.
27. Type 0 when you are finished and press Enter.
28. If you are creating a new rule base, the following prompt is displayed:
Rulebase Name []
Type the name for the rule base and press Enter.
The following prompt is displayed:
Rulebase Path []
29. If you are creating a new rule base, type the path for the new rule base and press Enter.
30. If you are using an existing rule base, the following prompt is displayed:
Rulebase Name []
If you want to import an existing rule base into the new rule base, type the name of the existing rule
base and press Enter.
The following prompt is displayed:
Press 1 for Next, 2 for Previous, 3 to Cancel, or 4 to Redisplay [1]
33. Type the name for the backup rule base and press Enter to continue. If you do not want the installer
to back up the existing rule base, press Enter without providing a backup rule base name.
The following prompt is displayed:
If you have provided a backup rule base name you must provide a backup
rule base path. NOTE: We append the backup rule base name to the backup
rule base path for clarity and easy lookup.
Backup rule base path. []
34. Type the path for the backup rule base and press Enter to continue. If you did not provide a name for
a backup rule base, press Enter without providing a rule base path.
The following prompt is displayed:
Press 1 for Next, 2 for Previous, 3 to Cancel or 4 to Redisplay [1]
The option to automatically restart the Tivoli Enterprise Console is presented only if you chose to
have the installer automatically update the rules and classes.
37. If you want the installer to stop and restart the Tivoli Enterprise Console server, type 1 and press
Enter. If you want to stop and restart yourself, type 0 and press Enter to continue. The following
prompt is displayed:
Press 3 to Finish, or 4 to Redisplay [1]
where filename is the name of the configuration file to create, for example, es_silentinstall.conf.
On UNIX:
ESync2200operating_system.bin -options-template filename
where operating_system is the operating system you are installing on (Aix, HP11, Linux, linux390,
or Solaris). For example, run the following command on an AIX computer:
ESync2200Aix.bin -options-template filename
3. Edit the output file to specify the values shown in Table 110 on page 477.
Notes:
a. Remove the pound signs (###) from the beginning of any value that you want to specify.
b. Do not enclose any values in quotation marks (").
c. You must specify the following values:
v configInfoPanel2.fileLocn
-P installLocation
-W configInfoPanel3.filesize
-W configInfoPanel3.fileNumber
-W configInfoPanel3.fileLocn
e. If you specify values, ensure that the value you specify meets the minimum required values.
Otherwise, the installation stops and an error is written to the log file.
Table 110. IBM Tivoli Enterprise Console event synchronization configuration values
Value
Description
installLocation
configInfoPanel.filename
configInfoPanel.pollingInt
configInfoPanel.crcByteCnt
configInfoPanel.cmsSoapURL
configInfoPanel.bufFlushRate
configInfoPanel.logLevel
configInfoPanel2.filesize
configInfoPanel2.fileNumber
configInfoPanel2.fileLocn
cmsSvrsPnlNotGuiMode.hostname#
The host name of each monitoring server that will send events to the event server. Specify up to 10
monitoring servers.
Note: The pound sign (#) stands for a number between 1 and 10. For example, "hostname1".
cmsSvrsPnlNotGuiMode.userID#
cmsSvrsPnlNotGuiMode.pswd#
cmsSvrsPnlNotGuiMode.retypePswd#
rbInstallTypePanel.rbInstallType
rulebasePanel.chooseNewOrExistingRB
rulebasePanel.rbName
rulebasePanel.rbPath
rulebasePanel.fromRB
bckupERB.backupName
rulebasePanel.backupPath
restartTECQ.restartTEC
where operating_system is the operating system you are installing on (Aix, HP11, Linux, linux390,
Solaris). For example, on AIX, run the following command:
ESync2200Aix.bin -options filename -silent
You must stop and restart the event server for these changes to take effect.
When installation is complete, the results are written to the itm_tec_event_sync_install.log file. On UNIX,
this log file is always created in the /tmp directory. For Windows, this file is created in the directory defined
by the %TEMP% environment variable. To determine where this directory is defined for the current
command line window, run the following command:
echo %TEMP%
If you specified the monitoring servers in the silent installation configuration file, you might consider
deleting that file after installation, for security reasons. The passwords specified in the files are not
encrypted.
If you want to define additional monitoring servers (in addition to the one required monitoring server), run
the sitconfsvruser.sh command as described in Defining additional monitoring servers to the event server
on page 484. Repeat this command for each monitoring server.
If you specified your monitoring servers after the installation, you must stop and restart the Situation
Update Forwarder process manually. See Starting and stopping the Situation Update Forwarder process
on page 484 for information.
Perform the following tasks after the installation is finished:
v Install the monitoring agent .baroc files on the event server as described in Installing monitoring agent
.baroc files on the event server on page 481.
v Configure the monitoring server to forward events to the event server as described in Configuring your
monitoring server to forward events on page 482.
v If you did not choose to have the rule base updated automatically, update the rule base as described in
Manually importing the event synchronization class files and rule set.
Manually importing the event synchronization class files and rule set
If you do not want to permit the installation program to modify your rule base, you can choose the manual
rule base modification option during the installation and then use one of the following methods to manually
modify your rule base:
v Creating a new rule base on page 480
v Creating a new rule base and importing an existing rule base into it on page 480
v Modifying an existing rule base on page 481
Before you can run any of the commands in the following sections, you must source your Tivoli
environment by running the following command:
On Windows, run the following command from a command prompt:
C:\Windows\system32\drivers\etc\Tivoli\setup_env.cmd
See the IBM Tivoli Enterprise Console Command and Task Reference for more information about the wrb,
wstopesvr, and wstartesvr commands.
where newrb_path is the path to where you want to create the new rule base, and newrb_name is the
name for the new rule base.
2. Import the event synchronization class and rule files into the new rule base from the
$BINDIR/TME/TEC/OM_TEC/rules directory created during the installation of the event synchronization
component. Run the following commands:
wrb -imprbclass path_to_Sentry_baroc_file newrb_name
wrb -imprbclass path_to_omegamon_baroc_file newrb_name
wrb -imprbrule path_to_omegamon_rls_file newrb_name
wrb -imptgtrule omegamon EventServer newrb_name
3. Compile and load the new rule base by running the following commands:
wrb -comprules newrb_name
wrb -loadrb newrb_name
4. Stop and restart the event server by running the following commands:
wstopesvr
wstartesvr
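A worked example of the complete sequence, assuming a hypothetical rule base named itmrb created under
/opt/rulebases and class and rule files in the default $BINDIR/TME/TEC/OM_TEC/rules directory:
wrb -crtrb -path /opt/rulebases itmrb
wrb -imprbclass $BINDIR/TME/TEC/OM_TEC/rules/Sentry.baroc itmrb
wrb -imprbclass $BINDIR/TME/TEC/OM_TEC/rules/omegamon.baroc itmrb
wrb -imprbrule $BINDIR/TME/TEC/OM_TEC/rules/omegamon.rls itmrb
wrb -imptgtrule omegamon EventServer itmrb
wrb -comprules itmrb
wrb -loadrb itmrb
wstopesvr
wstartesvr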
Creating a new rule base and importing an existing rule base into it
Use the following steps to create a new rule base and import an existing rule base into it:
1. Create the new rule base by running the following command:
wrb -crtrb -path newrb_path newrb_name
where newrb_path is the path to where you want to create the new rule base and newrb_name is the
name for the new rule base.
2. Import the existing rule base into the new rule base by running the following commands:
wrb -cprb -overwrite existing_rbname newrb_name
where existing_rbname is the name of the existing rule base that you want to import.
3. If the existing rule base is an older rule base, you must upgrade the tec.baroc file to include the
TEC_Generic class. Run the following command:
perl $BINDIR/TME/TEC/OM_TEC/bin/upg_tec_baroc.pl newrb_name
4. If the rule base already contains a Sentry.baroc file, you must upgrade it with the event synchronization
event class attributes. Run the following command:
perl $BINDIR/TME/TEC/OM_TEC/bin/upg_sentry_baroc.pl
5. If the rule base does not contain a Sentry.baroc file, you must import it from the $BINDIR/TME/TEC/
OM_TEC/rules directory created during event synchronization installation. Run the following command:
wrb -imprbclass path_to_Sentry_baroc_file newrb_name
6. Import the omegamon.baroc and rules file into the rule base from the $BINDIR/TME/TEC/OM_TEC/
rules directory created during event synchronization installation. Run the following commands:
wrb -imprbclass path_to_omegamon_baroc_file newrb_name
wrb -imprbrule path_to_omegamon_rls_file newrb_name
wrb -imptgtrule omegamon EventServer newrb_name
7. Compile and load the new rule base by running the following commands:
wrb -comprules newrb_name
wrb -loadrb newrb_name
8. Stop and restart the event server by running the following commands:
wstopesvr
wstartesvr
2. If the rule base already contains a Sentry.baroc file, you must upgrade it with the event synchronization
event class attributes. Run the following command:
perl $BINDIR/TME/TEC/OM_TEC/bin/upg_sentry_baroc.pl
3. If the rule base does not contain a Sentry.baroc file, you must import it from the $BINDIR/TME/TEC/
OM_TEC/rules directory created during event synchronization installation. Run the following command:
wrb -imprbclass path_to_Sentry_baroc_file newrb_name
4. Import the omegamon.baroc and rules file into the rule base from the $BINDIR/TME/TEC/OM_TEC/
rules directory created during event synchronization installation. Run the following commands:
wrb -imprbclass path_to_omegamon_baroc_file newrb_name
wrb -imprbrule path_to_omegamon_rls_file newrb_name
wrb -imptgtrule omegamon EventServer newrb_name
5. Compile and load the new rule base by running the following commands:
wrb -comprules newrb_name
wrb -loadrb newrb_name
6. Stop and restart the event server by running the following commands:
wstopesvr
wstartesvr
1. Copy the monitoring agent .baroc files from the computer where the monitoring server is installed to a
temporary directory on the event server computer (for example, /tmp). The location of the agent .baroc
files is described above. Do not copy the om_tec.baroc file; this file contains classes that are
duplicates of classes in the omegamon.baroc file.
2. Set up the Tivoli Management Framework environment by running the following command:
On Windows, run the following command:
C:\WINDOWS\system32\drivers\etc\Tivoli\setup_env.cmd
On Linux and UNIX, run the following command from a shell environment:
. /etc/Tivoli/setup_env.sh
3. For each monitoring agent .baroc file to load into the rule base, run the following command from the
same command prompt:
wrb -imprbclass /tmp/agent_baroc_file rb_name
where:
/tmp/agent_baroc_file
Specifies the location and name of the monitoring agent .baroc file. The example above uses the
/tmp directory as the location.
rb_name
Is the name of the rule base that you are using for event synchronization.
4. Compile and load the rule base by running the following commands:
wrb -comprules rb_name
wrb -loadrb rb_name
5. Stop and restart the event server by running the following commands:
wstopesvr
wstartesvr
When you have loaded each of the agent .baroc files into the rule base and restarted the event server, the
event server is ready to receive and correctly parse any events it receives from the monitoring server from
one of the installed monitoring agents.
See the IBM Tivoli Enterprise Console Command and Task Reference for more information about the wrb,
wstopesvr, and wstartesvr commands.
Port Number
Type the port number for the event server. If the event server is using port mapping, set this
value to 0. If the event server was configured to use a specific port number, specify that
number.
To determine the port number that the event server is using, search for the
tec_recv_agent_port parameter in the .tec_config file in the $BINDIR/TME/TEC directory on
the event server. If the parameter is commented out with a pound sign (#), the event server is
using port mapping. If it is not, the event server is using the port number specified by this
parameter.
For Linux and UNIX monitoring servers: You configured the TEC Server and TEC Port information for
the Linux/UNIX monitoring server during installation, if you installed the monitoring server using the
configuration instructions in this installation guide. However, if you did not configure this information, see
Configuring the hub monitoring server on page 154 for the procedure.
Note: If you already have policies that contain emitter activities that send events to the Tivoli Enterprise
Console, turning on Tivoli Event Integration event forwarding will result in duplicate events. You can
deactivate the emitter activities within policies so you do not have to modify all your policies when
you activate Tivoli Event Integration Facility forwarding by specifying Disable Workflow
Policy/Tivoli Emitter Agent Event Forwarding when you enable forwarding using the Event
Integration Facility.
Using policies gives you more control over which events are sent and you may not want to lose this
granularity. Moreover, it is likely the policies that are invoking the TEC emitter are doing little else. If
you deactivate these activities, there is no point in running the policy. You may prefer to delete
policies that are no longer required, instead of disabling them. Note that events forwarded via the TEC
event emitter are not eligible for event synchronization (that is, changes to these events on the TEC
side will not be sent back to the monitoring server).
On UNIX:
startSUF.sh
On UNIX:
stopSUF.sh
On Windows, you can also start and stop the Tivoli Situation Update Forwarder service to start or stop the
forwarding of event updates. You can start and stop this service either from the Windows Service Manager
utility or with the following commands:
net start situpdate
net stop situpdate
v Run the sitconfig.sh command directly, specifying only those settings that you want to change. See the
IBM Tivoli Monitoring: Command Reference for the full syntax of this command.
After you change the configuration of the event synchronization, you must manually stop and restart the
Situation Update Forwarder process. See Starting and stopping the Situation Update Forwarder process
for information.
v On operating systems like UNIX and Linux, run the following command from a shell environment:
. /etc/Tivoli/setup_env.sh
3. Change to the $BINDIR/TME/TEC/OM_TEC/bin directory (where $BINDIR is the location of the Tivoli
Management Framework installation) and enter the following command:
sitconfsvruser.sh add serverid=server userid=user password=password
where:
server Is the fully qualified host name of the monitoring server.
user
Is the user ID to access the computer where the monitoring server is running.
password
Is the password to access the computer.
Repeat this command for each monitoring server.
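For example, to add a hypothetical hub monitoring server named hubtems.mycompany.com using the sysadmin user
ID (all values shown are illustrative):
sitconfsvruser.sh add serverid=hubtems.mycompany.com userid=sysadmin password=mypassword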
You can also delete monitoring servers. See the IBM Tivoli Monitoring: Command Reference for the full
syntax of this command.
After you change the configuration of the event synchronization, you must manually stop and restart the
Situation Update Forwarder process. See Starting and stopping the Situation Update Forwarder process
on page 484 for information.
where timeout_value is the length of the timeout period, in half seconds. To configure a timeout of 30
seconds, set the timeout_value value to 60.
On Solaris and HP-UX, run the following command:
ndd -set /dev/tcp tcp_ip_abort_cinterval timeout_value
where timeout_value is the length of the timeout period, in milliseconds. To configure a timeout of 30
seconds, set the timeout_value value to 30000.
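For example, to set the 30-second timeout described above on Solaris or HP-UX:
ndd -set /dev/tcp tcp_ip_abort_cinterval 30000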
where operating_system is the operating system you are installing on (aix, HP11, Linux, linux390,
or Solaris). For example, run the following command on an AIX computer:
ESUpgrade22Aix.bin
Proceed to step 5.
5. Specify the name of the rule base to be upgraded. The rule base must be one that has event
synchronization previously installed.
6. If you want the installer to back up the rule base before it is modified, specify a name and a path for
the backup rule base.
7. Click Next to continue.
A window is displayed that summarizes the information you entered.
8. If the information is correct, click Next to proceed with the installation. If the information is not correct,
click Back and correct the fields as necessary; then click Next and Next again to proceed.
A progress indicator shows the progress of the installation and configuration.
9. When the installation completes successfully, you will see a message that reminds you to restart the
TEC server. If the updated rule base is not the currently loaded rule base, you are reminded to load
the rule base and restart the server. Click OK to dismiss the message.
A window is displayed that reminds you to restart the Tivoli Enterprise Console server.
10. If you want the installer to restart the server for you, check Restart the Tivoli Enterprise Console
server to make changes effective; then click Next. If you do not want the installer to restart the
server, leave the option unchecked and click Next.
11. Click Finish to exit the installer.
Important: If you chose the manual update option, you must copy the files in the $BINDIR/TME/TEC/OM_TEC/
rules directory to the rule base, recompile and reload the rule base, and restart the Tivoli
Enterprise Console. Refer to Manually importing the event synchronization class files and
rule set on page 479 for the commands to use to do this.
where operating_system is the operating system you are installing on (Aix, HP11, Linux, linux390,
or Solaris). For example, run the following command on an AIX computer:
ESUpgrade22Aix.bin -console
6. If you want to have the installer automatically install the rules and classes, enter 1. The following
prompt is displayed:
Rule base Name []
Continue with step 7. If you prefer to manually install the rules and classes, enter 2 and proceed to
step 11.
7. Type 1 and press Enter to continue.
8. Type the name of the rule base to upgrade then press Enter.
The rule base must be one in which event synchronization was previously installed.
The following prompt is displayed:
Backup rule base name. []
9. If you want the installer to back up the rule base before modifying it, type a name for the backup rule
base, then press Enter. If you do not want the installer to create a backup rule base, press Enter
without typing any name. If you provide a name for the backup rule base, you are prompted for the
path for the backup rule base:
If you have provided a backup rule base name you must provide a backup
rule base path. NOTE: We append the backup rule base name to the
backup rule base path for clarity and easy look-up.
Backup rule base path. []
10. Type the path and press Enter. The following prompt is displayed:
Press 1 for Next, 2 for Previous, 3 to Cancel, or 4 to Redisplay []
If you did not choose to have the installer automatically update the rule base, you will not be offered
the option to restart the Tivoli Enterprise Console automatically.
14. If you want the installer to stop and restart the Tivoli Enterprise Console server, type 1 and press
Enter. If you want to stop and restart the console yourself, type 0 and press Enter. The following
prompt is displayed:
where filename is the name of the configuration file to create, for example, es_silentinstall.conf.
On UNIX:
ESUpgrade22operating_system.bin -options-template filename
where operating_system is the operating system you are installing on (Aix, HP11, Linux, linux390,
or Solaris). For example, run the following command on an AIX computer:
ESUpgrade22Aix.bin -options-template filename
d. If you specify values, ensure that the value you specify meets the minimum required values.
Otherwise, the installation stops and an error is written to the log file.
where operating_system is the operating system you are installing on (Aix, HP11, Linux, linux390,
Solaris). For example, on AIX, run the following command:
ESUpgrade22Aix.bin -options filename -silent
The rule base that is updated during silent installation is made current.
Important: If you chose the manual update option, you must copy the files in $BINDIR/TME/TEC/OM_TEC/
rules directory to the rule base, recompile and reload the rule base, and restart the Tivoli
Enterprise Console. Refer to Manually importing the event synchronization class files and rule
set on page 479 for the commands to use to do this.
Figure 99. Event flow and synchronization between Tivoli Monitoring and Netcool/OMNIbus. Events are sent through
the EIF probe to the OMNIbus server
Note: The EIF Probe and the OMNIbus Object Server do not have to be installed on the same machine.
Setting up event integration for Netcool/OMNIbus involves these tasks:
v Installing the event synchronization component on page 494
The event synchronization component enables changes in event status made on a Netcool/OMNIbus
Object Server to be sent back to the Tivoli Enterprise Monitoring Server and reflected on the Tivoli
Enterprise Portal. When you install the event synchronization component for Netcool/OMNIbus, a new
process, Situation Update Forwarder, is installed along with its supporting binary and configuration files.
This process is used to forward updates to the situation events back to the originating monitoring server.
On Windows, a Situation Update Forwarder service is also created.
v Configuring the Netcool/OMNIbus Object Server on page 502
You configure the Netcool/OMNIbus server to receive situation event information and reflect changes to
their status to the Netcool/OMNIbus console. You also configure the Object Server to send changes in
event status back from Netcool/OMNIbus to the monitoring server.
v Configuring the monitoring server to forward events on page 508
You configure the hub monitoring server to enable situation event forwarding and to define the default
event receiver.
v Customizing the OMNIbus configuration on page 509
After you have set up event integration, you can customize:
How events are mapped
How events are synchronized
v Defining additional monitoring servers to the Object Server on page 510
If you have multiple monitoring servers sending events to the same Netcool/OMNIbus Object Server,
you can add additional monitoring servers to the list that receive event status updates from the Object
Server after you have installed the event synchronization component.
The following products must be installed and configured before you install the event synchronization
component and configure event forwarding for Netcool/OMNIbus:
v IBM Tivoli Netcool/OMNIbus V7.x
v IBM Tivoli Netcool/OMNIbus V7.x probe for Tivoli EIF
v IBM Tivoli Monitoring V6.2.2
Figure 100. One or more hub monitoring servers connecting to a single event server
Figure 101. Single hub monitoring server and multiple event servers
Notes:
1. You cannot install event synchronization for Netcool/OMNIbus on the same system as an IBM Tivoli
Enterprise Console event server.
2. If the Object Server is running on Windows 2003 and you are planning to install the event
synchronization remotely (using a program such as Terminal Services to connect to that Windows 2003
computer), you need to run the change user /install command before you run the installation, which
puts the computer into the required "install" mode. After the installation, run the change user /execute
command to return the computer to its previous mode.
3. If you have a monitoring server on an operating system like UNIX or Linux, you must configure your
TCP/IP network services in the /etc/hosts file to return the fully qualified host name. See Host name
for TCP/IP network services on page 91 for more information.
4. Linux or UNIX users can run the EIF installer under either a root or a non-root userid.
where operating_system is the operating system you are installing on (aix, HP11, Linux,
linux390, or Solaris). For example, run the following command on an AIX computer:
ESync2200Aix.bin
If the installer cannot locate OMNIbus in its usual place, the following window is displayed. Click
Next to continue installing the event synchronization:
Figure 102. Installation of IBM Tivoli Monitoring and Tivoli Event Synchronization
4. Click Next to install the synchronization component in the default location, or use the Browse button
to select another location and then click Next to continue.
5. Complete the fields in the installation windows using the configuration values described in Table 111
and click Next.
Table 111. Netcool/OMNIbus event synchronization configuration fields
6. Complete the fields on the installation window about the files where events will be written using the
values described in Table 112 and click Next:
Table 112. Netcool/OMNIbus event synchronization configuration fields, continued
7. Type the following information for each hub monitoring server with which you want to synchronize
events and click Add. You must specify information for at least one hub monitoring server.
Host name
The fully qualified host name for the computer where the monitoring server is running. The
name must match the information that will be in events coming from this monitoring server.
User ID
The user ID to access the computer where the monitoring server is running.
Password
The password to access the computer.
Confirmation
The password, again.
You can add information for up to 10 monitoring servers in this wizard. If you want to add additional
monitoring servers, do so after install by using the steps provided in Defining additional monitoring
servers to the event server on page 484.
8. When you have provided information about all of the monitoring servers, click Next.
A summary window is displayed.
9.
On UNIX or Linux:
ESync2200operating_system.bin -console
where operating_system is the operating system you are installing on (aix, HP11, Linux, linux390,
or Solaris). For example, run the following command on an AIX computer:
ESync2200Aix.bin -console
7. Press Enter to use the default configuration file, situpdate.conf. If you want to use a different
configuration file, type the name and press Enter.
The following prompt is displayed:
Number of seconds to sleep when no new situation updates [3]
8. Type the number of seconds that you want to use for the polling interval. The default value is 3, while
the minimum value is 1. Press Enter.
The following prompt is displayed:
Number of bytes to use to save last event [50]
9. Type the number of bytes to use to save the last event and press Enter. The default and minimum
value is 50.
The following prompt is displayed:
URL of the CMS SOAP server [cms/soap]
10. Type the URL for the monitoring server SOAP Server and press Enter. The default value is cms/soap
(which you can use if you set up your monitoring server using the defaults for SOAP server
configuration).
The following prompt is displayed:
Rate for sending SOAP requests to CMS from TEC via Web Services [10]
11. Type the maximum number of event updates to send to the monitoring server at one time and press
Enter. The default and minimum value is 10.
The following prompt is displayed:
Level of debug for log
[x] 1 low
[ ] 2 med
[ ] 3 verbose
To select an item enter its number, or enter 0 when you are finished: [0]
12. Type the level of debugging that you want to use and press Enter. The default is Low, indicated by an
x next to Low.
13. Type 0 when you have finished and press Enter.
The following prompt is displayed:
Press 1 for Next, 2 for Previous, 3 to Cancel, or 4 to Redisplay [1]
15. Type the maximum size, in bytes, for the cache file and press Enter. The default is 50000. Do not use
commas (,) when specifying this value.
The following prompt is displayed:
Maximum number of cache files [10]
16. Type the maximum number of cache files to have at one time and press Enter. The default is 10,
while the minimum is 2.
On Windows, the following prompt is displayed:
Directory for cache files to reside [C:/Program Files/IBM/SitForwarder/persistence]
17. Type the directory for the cache files and press Enter. The default directory on Windows is
C:\Program Files\IBM\SitForwarder\persistence; on UNIX, /opt/IBM/SitForwarder/persistence.
The following prompt is displayed:
Press 1 for Next, 2 for Previous, 3 to Cancel, or 4 to Redisplay [1]
Type the fully qualified host name for the computer where the monitoring server is running. This
should match the information that will be in events coming from this monitoring server. Press Enter.
The following prompt is displayed:
User ID []
20. Type the user ID to use to access the computer where the monitoring server is running and press
Enter.
The following prompt is displayed:
Password:
21. Type the password to access the computer and press Enter.
The following prompt is displayed:
Confirmation:
23. Repeat steps 19 to 22 for each monitoring server for which you want to receive events on this event
server.
When you have provided information for all the monitoring servers and you specified information for
less than 10 monitoring servers, press Enter to move through the remaining fields defining additional
monitoring servers. Do not specify any additional monitoring server information.
24. When you see the following prompt, type 1 and press Enter to continue:
Press 1 for Next, 2 for Previous, 3 to cancel or 4 to Redisplay [1]
where filename is the name of the configuration file to create, for example, es_silentinstall.conf.
On UNIX:
ESync2200operating_system.bin -options-template filename
where operating_system is the operating system you are installing on (aix, HP11, Linux, linux390,
or Solaris). For example, run the following command on an AIX computer:
ESync2200Aix.bin -options-template filename
3. Edit the output file to specify the values described in Table 113.
Notes:
a. Remove the pound signs (###) from the beginning of any value that you want to specify.
b. You must specify the following values:
### -P installLocation="value"
### -W configInfoPanel3.fileLocn="value"
If you do not specify any of the other values, the default values are used. If you specify values,
ensure that the values you specify meet the minimum required values. Otherwise, the installation
stops and an error is written to the log file.
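For illustration only, an edited silent configuration file that supplies just the two required values might contain lines like the following (the installation and cache-file locations shown are assumed defaults, not required values):
-P installLocation="/opt/IBM/SitForwarder"
-W configInfoPanel3.fileLocn="/opt/IBM/SitForwarder/persistence"
All other values remain commented out with ### so that their defaults are used.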
Table 113. Netcool/OMNIbus event synchronization configuration values
Value
Description
installLocation
configInfoPanel.filename
configInfoPanel.pollingInt
configInfoPanel.crcByteCnt
configInfoPanel.cmsSoapURL
configInfoPanel.bufFlushRate
configInfoPanel.logLevel
configInfoPanel3.filesize
configInfoPanel3.fileNumber
configInfoPanel3.fileLocn
cmsSvrsPnlNotGuiMode.pswd#
cmsSvrsPnlNotGuiMode.retypePswd#
cms_port
ITMPort
situation_fullname
ITMSitFullName
situation_group
ITMSitGroup
appl_label
ITMApplLabel
where operating_system is the operating system you are installing on (aix, HP11, Linux, linux390,
or Solaris). For example, on AIX, run the following command:
ESync2200Aix.bin -options filename -silent
When installation is complete, the results are written to the itm_tec_event_sync_install.log file. On UNIX,
this log file is always created in the /tmp directory. For Windows, this file is created in the directory defined
by the %TEMP% environment variable. To determine where this directory is defined for the current
command line window, run the following command:
echo %TEMP%
If you specified the monitoring servers in the silent installation configuration file, you might consider
deleting that file after installation, for security reasons. The passwords specified in the files are not
encrypted.
If you want to define additional monitoring servers (in addition to the one required monitoring server), run
the sitconfuser.sh command as described in Defining additional monitoring servers to the Object Server
on page 510. Repeat this command for each monitoring server.
If you specified your monitoring servers after the installation, you must stop and restart the Situation
Update Forwarder process manually. See Starting and stopping the Situation Update Forwarder process
for information.
Perform the following tasks after the installation is finished:
v Configure OMNIbus to receive events and update event status as described in Configuring the
Netcool/OMNIbus Object Server.
v Configure the monitoring server to forward events to OMNIbus as described in Configuring the
monitoring server to forward events on page 508.
For Windows: The PA.Username property must be set to a Windows account name, and the
PA.Password property must be set to the password for that account.
Refer to the OMNIbus documentation for more information on configuring the OMNIbus server under process
control and for information on the nco_pa_crypt utility that encrypts the PA.Password property value.
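For example, the relevant entries in the NCOMS.props file might look like the following (the account name and password shown are illustrative; encrypt the password value with nco_pa_crypt where required):
PA.Username: 'netcool'
PA.Password: 'mypassword'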
After you change the PA.Username and PA.Password properties in the OMNIHOME/etc/NCOMS.props file,
perform the procedure below to restart the OMNIbus Object Server:
1. Stop the OMNIbus server:
v On Windows: In the Control Panel, open Administrative Tools, then Services. In the list of services,
double-click OMNIbus server, then click Stop.
v On UNIX: Issue the following command from the command line:
$OMNIHOME/bin/nco_pa_stop -process server_name
where:
$OMNIHOME
Is the system-defined variable defining the installation location of OMNIbus.
server_name
Is the OMNIbus Object Server name defined for process control.
2. Restart the OMNIbus server.
v On Windows: In the list of services, double-click OMNIbus server, then click Start.
v On UNIX: Issue the following command from the command line:
$OMNIHOME/bin/nco_pa_start -process server_name
where:
$OMNIHOME
Is the system-defined variable defining the installation location of OMNIbus
username
Is the OMNIbus Object Server user name
password
Is the OMNIbus Object Server password
server_name
Is the OMNIbus Object Server name defined for process control.
path_to_file
Is the fully qualified path to the specified SQL file
v On UNIX:
$OMNIHOME/bin/nco_sql -user username
-password password
-server server_name
< path_to_file/itm_proc.sql
$OMNIHOME/bin/nco_sql -user username
-password password
-server server_name
< path_to_file/itm_db_update.sql
$OMNIHOME/bin/nco_sql -user username
-password password
-server server_name
< path_to_file/itm_sync.sql
where:
$OMNIHOME
Is the system-defined variable defining the installation location of OMNIbus
username
Is the OMNIbus Object Server user name
password
Is the OMNIbus Object Server password
server_name
Is the OMNIbus Object Server name defined for process control.
path_to_file
Is the fully qualified path to the specified SQL file
Notes:
1. "Object exists" and "Attempt to insert duplicate row" errors will occur if the scripts were previously run.
These errors are harmless.
2. The schema updates in the itm_db_update.sql file add a number of columns to the alerts.status table
for the Object Server. If any Object Server gateways forward from the Object Server, consider adding
these columns to the gateway mapping file. Also, if the IBM Tivoli Monitoring fields will be viewed in
both Object Servers, run the schema updates on the other Object Server as well.
3. If you are integrating IBM Tivoli Monitoring and Tivoli Business Service Manager, add the Tivoli
Business Service Manager schema updates before you add the Tivoli Monitoring schema updates. If
you add the Tivoli Monitoring schema updates before the Tivoli Business Service Manager schema
updates, rerun the procedure above to add the IBM Tivoli Monitoring schema updates again to ensure
the latest definitions are used.
After updating the OMNIbus schema with the Tivoli Monitoring updates, run the Tivoli Business Service
Manager discover schema utility (rad_discover_schema). See the Tivoli Business Service Manager
Information Center for detailed instructions on using this utility: http://publib.boulder.ibm.com/infocenter/
tivihelp/v3r1/index.jsp?topic=/com.ibm.tivoli.itbsm.doc/welcome.htm.
After running the discover schema utility, remember to restart the Tivoli Business Service Manager
Dataserver. Failure to do so can cause connection problems.
4. If the OMNIbus Object Server is running on UNIX as a non-root user and the event synchronization
component is installed and run as either root or another user, verify that the user under which the
Object Server is running has write permission to the event_sync_install_dir\log directory prior to
updating the OMNIbus database schema.
Event slot              OMNIbus Attribute
situation_name          AlertKey
situation_origin        Node
situation_origin        NodeAlias
source                  Agent
default
situation_displayitem   ITMDisplayItem
situation_status        ITMStatus
situation_time          ITMTime
situation_type          ITMSitType
situation_thrunode      ITMThruNode
situation_eventdata     ITMEventData
cms_hostname            ITMHostname
master_reset_flag       ITMResetFlag
integration_type        ITMIntType
event_class             AlertGroup
msg                     Summary
Manager
6601                    Class
severity                Severity
                        FATAL / 60 = Critical
                        CRITICAL / 50 = Critical
                        MINOR / 40 = Minor
                        WARNING / 30 = Warning
                        HARMLESS / 20 = Indeterminate
                        UNKNOWN / 10 = Indeterminate
                        INFORMATIONAL = Indeterminate
getdate                 LastOccurrence/FirstOccurrence
date                    TECDate
repeat_count            TECRepeatCount
fqhostname              TECFQHostname
hostname                TECHostname
cms_port                ITMPort
situation_fullname      ITMSitFullName
situation_group         ITMSitGroup
appl_label              ITMApplLabel
Take the following steps to update the tivoli_eif.rules file if you are not integrating with Tivoli Business
Service Manager:
1. Copy the contents of event_sync_install_dir\omnibus\tivoli_eif.rules (Windows) or
event_sync_install_dir/omnibus/tivoli_eif.rules (UNIX) to the following file on the machine where
the EIF probe is installed:
%OMNIHOME%\probes\os_dir\tivoli_eif.rules (Windows)
or
$OMNIHOME/probes/os_dir/tivoli_eif.rules (UNIX)
where:
OMNIHOME
Is a system-defined variable defining the installation location of OMNIbus.
os_dir
Is the operating system, such as Windows or AIX.
2. Stop the EIF probe.
v On Windows: In the Control Panel, open Administrative Tools, then Services. In the list of
services, double-click the EIF probe; then click Stop.
v On UNIX: Issue the following command from the command line
$OMNIHOME/bin/nco_pa_stop -process probe_name
where:
$OMNIHOME
Is the system-defined variable defining the installation location of OMNIbus.
probe_name
Is the OMNIbus EIF probe name defined for Process Control.
3. Restart the OMNIbus probe.
v On Windows: In the list of services, double-click OMNIbus EIF Probe; then click Start.
v On UNIX: Issue the following command from the command line:
$OMNIHOME/bin/nco_pa_start -process probe_name
where:
eventsync_install
Is the location where the event synchronization program is installed (on Windows the default installation
directory is C:\Program Files\IBM\SitForwarder; on Linux and UNIX operating systems, the default
installation directory is /opt/IBM/SitForwarder).
ServerName
Is the name of the computer where the EIF probe is running.
ServerPort
Is the listening port for EIF probe. The default value is 9999.
For Linux and UNIX monitoring servers: You configured the EIF probe and port information for the
Linux/UNIX monitoring server during installation, if you installed the monitoring server using the
configuration instructions in this installation guide. However, if you did not configure this information, see
Configuring the hub monitoring server on page 154 for the procedure.
If your environment can have a large number of open situation events, you may want to adjust this
parameter to control the rate at which events are sent to the event server. To edit this file and change this
parameter:
v On Windows:
1. Open Manage Tivoli Enterprise Monitoring Services.
2. Right-click Tivoli Enterprise Monitoring Server, and click Advanced > Edit EIF Configuration.
v On Linux or UNIX, edit file install_dir/tables/hostname/TECLIB/om_tec.config, where install_dir is
your Tivoli Monitoring installation directory and hostname is the name of the host running this monitoring
server.
v For UNIX:
where:
OMNIHOME
Is the system-defined variable defining the installation location of OMNIbus
username
Is the OMNIbus Object Server user name
password
Is the OMNIbus Object Server password
server_name
Is the OMNIbus Object Server name defined for process control.
path_to_file
Is the fully qualified path to the specified SQL file
password=password
where:
server
Is the fully qualified host name of the monitoring server.
user
Is the user ID used to access the computer where the monitoring server is running.
password
Is the password used to access the host computer.
path_to_conf_file
Is the directory containing the situser.conf file.
Repeat this command to add short host name information for the same monitoring server by specifying
the short host name value for the serverid parameter.
v On UNIX, change to the /opt/IBM/SitForwarder/bin directory and enter the following command:
sitconfuser.sh add serverid=server userid=user
pathc=path_to_conf_file type=OMNIBUS
password=password
where:
server
Is the fully qualified host name of the monitoring server.
user
Is the user ID used to access the computer where the monitoring server is running.
password
Is the password used to access the host computer.
path_to_conf_file
Is the directory containing the situser.conf file.
Repeat this command to add short host name information for the same monitoring server by specifying
the short host name value for the serverid parameter.
Repeat this step for each monitoring server that you want to add.
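For example, a command for a hypothetical hub monitoring server might look like the following (the host name, user ID, configuration file directory, and password are illustrative):
sitconfuser.sh add serverid=hubtems.example.com userid=sysadmin pathc=/opt/IBM/SitForwarder/etc type=OMNIBUS password=mypassword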
You can also delete monitoring servers. See the IBM Tivoli Monitoring: Command Reference for the full
syntax of this command.
After you change the configuration of the event synchronization, you must manually stop and restart the
Situation Update Forwarder process. See Starting and stopping the Situation Update Forwarder process
on page 484 for information.
where:
$ITM_INSTALL_PATH
Is the system variable defining the installation location of the monitoring server.
2.
tems_server
Is the host name of the computer where the monitoring server is installed (the host where you
are executing this command).
On the OMNIbus server machine, run the following command to start the OMNIbus console:
$OMNIHOME/bin/nco_event
3. Refresh the event view in the OMNIbus console and check for situation events from the monitoring
server.
On UNIX:
startSUF.sh
On UNIX:
stopSUF.sh
On Windows, you can also start and stop the Tivoli Situation Update Forwarder service to start or stop the
forwarding of event updates. You can start and stop this service either from the Windows Administrative
Tools > Services utility or with the following commands:
net start situpdate
net stop situpdate
where operating_system is the operating system you are installing on (aix, HP11, Linux, linux390, or
Solaris).
For example, run the following command on an AIX computer:
ESUpgrade22Aix.bin
%OMNIHOME%\..\bin\redist\isql -U username
-P password
-S server_name
< path_to_file\itm_proc.sql
%OMNIHOME%\..\bin\redist\isql -U username
-P password
-S server_name
< path_to_file\itm_db_update.sql
%OMNIHOME%\..\bin\redist\isql -U username
-P password
-S server_name
< path_to_file\itm_sync.sql
where:
$OMNIHOME
Is the system-defined variable defining the installation location of OMNIbus
username
Is the OMNIbus Object Server user name
password
Is the OMNIbus Object Server password
server_name
Is the OMNIbus Object Server name defined for process control.
path_to_file
Is the fully qualified path to the specified SQL file
v On UNIX:
$OMNIHOME/bin/nco_sql -user username
-password password
-server server_name
< path_to_file/itm_proc.sql
$OMNIHOME/bin/nco_sql -user username
-password password
-server server_name
< path_to_file/itm_db_update.sql
$OMNIHOME/bin/nco_sql -user username
-password password
-server server_name
< path_to_file/itm_sync.sql
where:
$OMNIHOME
Is the system-defined variable defining the installation location of OMNIbus
username
Is the OMNIbus Object Server user name
password
Is the OMNIbus Object Server password
server_name
Is the OMNIbus Object Server name defined for process control.
path_to_file
Is the fully qualified path to the specified SQL file
Notes:
1. "Object exists" and "Attempt to insert duplicate row" errors will occur if the scripts were previously run.
These errors are harmless.
2. The schema updates in itm_db_update.sql add a number of columns to the alerts.status table for the
Object Server. If any Object Server gateways forward from the Object Server, consider adding these
columns to the gateway mapping file. Also, if the IBM Tivoli Monitoring fields will be viewed in both
Object Servers, run the schema updates on the other Object Server as well.
3. If you are integrating IBM Tivoli Monitoring and Tivoli Business Service Manager, add the Tivoli
Business Service Manager schema updates before you add the Tivoli Monitoring schema updates. If
you add the Tivoli Monitoring schema updates before the Tivoli Business Service Manager schema
updates, rerun the procedure above to add the IBM Tivoli Monitoring schema updates again to ensure
the latest definitions are used.
After updating the OMNIbus schema with the Tivoli Monitoring updates, run the Tivoli Business Service
Manager discover schema utility (rad_discover_schema). Refer to the Tivoli Business Service Manager
Information Center for detailed instructions on using this utility: http://publib.boulder.ibm.com/infocenter/
tivihelp/v3r1/index.jsp?topic=/com.ibm.tivoli.itbsm.doc/welcome.htm.
After running the discover schema utility, remember to restart the Tivoli Business Service Manager
Dataserver. Failure to do so can cause connection problems.
4. If the OMNIbus Object Server is running on UNIX as a non-root user and the event synchronization
component is installed and run as either root or another user, verify that the user under which the
Object Server is running has write permission to the event_sync_install_dir\log directory prior to
updating the OMNIbus database schema.
3. Copy this command to a temporary file. For this example, the temporary file is /tmp/dedup.sql.
4. Run the command to replace the deduplication trigger.
v On Windows:
%OMNIHOME%\..\bin\redist\isql -U username -P password -S server_name < C:\tmp\dedup.sql
v On UNIX:
$OMNIHOME/bin/nco_sql -user username -password password -server server_name < /tmp/dedup.sql
where:
OMNIHOME
Is the system-defined variable defining the installation location of OMNIbus
username
Is the OMNIbus Object Server user name
password
Is the OMNIbus Object Server password
server_name
Is the OMNIbus Object Server name defined for process control.
v On UNIX:
$OMNIHOME/probes/os_dir/tivoli_eif.rules
where:
OMNIHOME
Is a system-defined variable defining the installation location of OMNIbus.
os_dir
Is the operating system, such as Windows or AIX.
2. Stop the EIF probe:
v On Windows:
a. In the Control Panel, open Administrative Tools, then Services.
b. In the list of services, double-click the EIF probe; then click Stop.
v On UNIX, issue the following command from the command line:
$OMNIHOME/bin/nco_pa_stop -process probe_name
where:
$OMNIHOME
Is the system-defined variable defining the installation location of OMNIbus.
probe_name
Is the OMNIbus EIF probe name defined for Process Control.
3. Restart the OMNIbus probe:
v On Windows, in the list of services, double-click OMNIbus EIF Probe; then click Start.
v On UNIX, issue the following command from the command line:
$OMNIHOME/bin/nco_pa_start -process probe_name
Integrating IBM Tivoli Monitoring with Tivoli Business Service Manager and
Netcool/OMNIbus
If you are integrating IBM Tivoli Monitoring, Tivoli Business Service Manager, and Netcool/OMNIbus, the
Tivoli Business Service Manager EIF probe installation provides a tivoli_eif.rules file that contains EIF
probe rules for multiple products including IBM Tivoli Monitoring. Tivoli Business Service Manager also
provides .sql files for updating the OMNIbus database schema.
Before configuring the EIF probe, you must update the OMNIbus database schema with the schema
updates provided with Tivoli Business Service Manager and then follow them with the IBM Tivoli
Monitoring database schema updates described in the previous section.
v If you are using Tivoli Business Service Manager Version 4.2 or earlier, refer to the Tivoli Monitoring
support site for a tech note that describes how to configure the OMNIbus Object Server and EIF probe
to use the latest version of the IBM Tivoli Monitoring .sql files and probe rules.
v If you are using Tivoli Business Service Manager Version 4.2.1, the Tivoli Business Service Manager
EIF probe installation creates a tivoli_eif.rules file that includes the itm_event.rules file. To replace
the itm_event.rules file provided by Tivoli Business Service Manager with the itm_event.rules file
included with the IBM Tivoli Monitoring synchronization component while still using the
tbsm_eif_event.rules file:
1. Copy the files as follows:
On Windows, copy files event_sync_install_dir\omnibus\tbsm\itm_event.rules and
tbsm_eif_event.rules to directory %OMNIHOME%\probes\os_dir on the machine where your EIF
probe is installed
On UNIX, copy files event_sync_install_dir/omnibus/tbsm/itm_event.rules and
tbsm_eif_event.rules to directory $OMNIHOME/probes/os_dir on the machine where your EIF
probe is installed
where:
OMNIHOME
Is a system-defined variable defining the installation location of OMNIbus.
os_dir
Is the operating system, such as Windows or AIX.
2. Stop the EIF probe.
On Windows: In the Control Panel, open Administrative Tools, then Services. In the list of
services, double-click the EIF probe; then click Stop.
On UNIX: Issue the following command from the command line
$OMNIHOME/bin/nco_pa_stop -process probe_name
where:
$OMNIHOME
Is the system-defined variable defining the installation location of OMNIbus.
probe_name
Is the OMNIbus EIF probe name defined for Process Control.
3. Restart the OMNIbus probe.
On Windows: In the list of services, double-click OMNIbus EIF Probe; then click Start.
On UNIX: Issue the following command from the command line:
$OMNIHOME/bin/nco_pa_start -process probe_name
provided they run in the same mode as the Windows System Monitor Agent itself: if the Windows
agent runs in 32-bit mode, only 32-bit Agent Builder agents are supported; if the Windows agent
runs in 64-bit mode, only 64-bit Agent Builder agents are supported.
The following 64-bit Windows environments support 64-bit System Monitor Agents:
v Windows Server 2003 Standard Edition R2 on x86-64 CPUs in 64-bit mode
v Windows Server 2003 Enterprise Edition R2 on x86-64 CPUs in 64-bit mode
v Windows Server 2003 Datacenter Edition R2 on Intel x86-64 CPUs in 64-bit mode
v Windows Vista Enterprise Edition on Intel x86-64 CPUs in 64-bit mode
v Windows Server 2008 Standard Edition on Intel x86-64 CPUs in 64-bit mode
v Windows Server 2008 Enterprise Edition on Intel x86-64 CPUs in 64-bit mode
v Windows Server 2008 Datacenter Edition on Intel x86-64 CPUs in 64-bit mode
You must invoke a silent installation procedure to install the System Monitor Agent that monitors the
Windows operating system:
silentInstall.cmd -p response_file
where:
response_file
is the name of the updated version of the nt_silent_install.txt response file you created that
names the components to be installed and where.
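For example, if the edited response file was saved as C:\temp\nt_silent_install.txt (the path shown is illustrative), you would run:
silentInstall.cmd -p C:\temp\nt_silent_install.txt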
On completion of the install script, the OS agent for Windows is installed, configured with a default
configuration, and started.
Note: On Windows, there are new CLI commands to start and stop an agent:
%CANDLE_HOME%\InstallITM\itmcmd.cmd agent start nt
%CANDLE_HOME%\InstallITM\itmcmd.cmd agent stop nt
You are prompted to confirm your request to uninstall the agent. Respond Y to continue the uninstallation.
The -f option forces the uninstallation command to bypass the confirmation prompt.
If the command is invoked while the current working directory is %CANDLE_HOME%\InstallITM, the directory
is not deleted; you must delete it manually.
Notes:
1. Uninstalling the Windows operating system agent also uninstalls all Agent Builder agents installed in
the same environment.
2. Directories are not removed on Microsoft Windows systems if the command that attempts to remove them
is running from the directory being removed or if a file is locked within that directory. Thus, if you run
uninstall.cmd from %CANDLE_HOME%\BIN, the directory is not removed; you must remove it yourself.
Before running uninstall.cmd, it is recommended that you first close all processes (such as command
prompts and text editors) currently accessing subdirectories of %CANDLE_HOME%. Then run the
uninstall.cmd command from outside of %CANDLE_HOME%. Specify a fully qualified path.
If uninstall.cmd cannot remove a subdirectory, it displays the following message:
directory may have to be removed manually after this script completes.
where:
installation_dir
is the directory on the target machine where the System Monitor Agent is to be installed.
response_file
is the name of the response file you created that names the components to be installed and where.
To verify the installation, enter this command:
InstDir/bin/cinfo -i
where InstDir is the directory where you installed the System Monitor Agent. The list of installed agent
components is displayed, as shown in Figure 103 on page 523.
If no agents are listed, check the installation logs for more information.
where InstDir is the directory where you installed the System Monitor Agent.
2. Stop any other agents running from the same InstDir.
3. Issue one of the following commands:
InstDir/bin/uninstall.sh
or:
InstDir/bin/uninstall.sh REMOVE EVERYTHING
Invoking the uninstall.sh script with the REMOVE EVERYTHING parameter removes all agent files and
deletes the installation subdirectory tree.
On UNIX and Linux, you can uninstall multiple, individual agents by listing each one in the uninstall.sh
command.
Note: Uninstalling the UNIX or Linux OS System Monitor Agent also removes common files that are
necessary for Agent Builder agents to function. If you are uninstalling the UNIX or Linux OS system
monitor agent, you must also uninstall any Agent Builder agents in the same environment.
Agents recognize the following keywords and substitute for them runtime values retrieved from the client:
@PRODUCT@
Agent's lowercase, two-letter product code. Example: For a Windows OS agent,
@PRODUCT@_trapcnfg.xml resolves to nt_trapcnfg.xml.
@ITMHOME@
IBM Tivoli Monitoring installation path. Example: If this is a Linux system and the default
installation path is used, @ITMHOME@ resolves to /opt/IBM/ITM/.
@MSN@
Agent's managed system name (not the subnode name). Example: If the agent's managed system
name is primary:icvw3d62:nt, @MSN@ resolves to primary-icvw3d62-nt.
@TASKNAME@
Agent's process name. Examples: klzagent; kntcma.
@VERSION@
Agent's product version string. Example: If the agent's version is Tivoli Monitoring 6.2.2 fixpack 2,
@VERSION@ resolves to 06-22-02.
@HOSTNAME@
Computer host name. Example: myhost.
@IPADDRESS@
Computer network interface IP address. Example: If the agent's IP address is 9.42.38.233,
@IPADDRESS@ resolves to 9-42-38-233.
@OSTYPE@
Operating system type. Examples: linux; win2003.
@OSVERSION@
Operating system version. Examples: Red Hat Enterprise Linux Version 5 (64 bit) resolves to
2-6-18-128-el5; Windows 2003 (32 bit) with Service Pack 2 resolves to 5-2-sp2
@SYSTEMID@
Computer system identifier. Example: System ID icvr4a04.mylab.mycity.ibm.com resolves to
icvr4a04-mylab-mycity-ibm-com.
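As an illustration of how keywords combine, a hypothetical file-name template such as @PRODUCT@_@HOSTNAME@.cfg (an illustrative name, not a predefined file) would resolve, for a Windows OS agent running on host myhost, to nt_myhost.cfg.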
For detailed information on using the centralized configuration facility, see the IBM Tivoli Monitoring:
Administrator's Guide.
Notes:
1. All special characters in the parameter values for all keywords other than @ITMHOME@ are converted
to dashes (-). For example, if the IP address is 9.42.38.233, keyword @IPADDRESS@ resolves to
9-42-38-233.
The value for @ITMHOME@, however, remains unchanged.
2. The value of SETENCR_IRA_CONFIG_SERVER_PASSWORD may be either plain text or encrypted
when saved in the pc_silent_install.txt silent response file. Plain-text values are encrypted when
created in the agent environment file. Encrypted values are created as specified. The itmpwdsnmp utility
is used interactively on another system to encrypt the password string if desired; see the IBM Tivoli
Monitoring: Administrator's Guide.
KDC_PARTITION
NIC interface name ("Optional Primary Network Name")
Agents to install on this computer
Agents for which to add application support data
Program folder
Host name of the computer where you are installing the
portal server
Portal server database administrator ID
Portal server database administrator password
Portal server database user ID (default = TEPS)
Portal server database user password
Warehouse database administrator ID
Warehouse database administrator password
Warehouse database user ID (default = ITMUser)
Warehouse database user password
Warehouse data source name (default = ITM Warehouse)
Warehouse database name
Hub monitoring server host name
Hub monitoring server communications protocol details
Agents to install
Agents to install
KDC_PARTITION
NIC interface name (Optional Primary Network Name)
root user password
User group name
Optional user name
Gather any information required for successful installation (such as DB2 user information and security
specifications). See Specific information to have ready on page 83.
Run the silent installation.
Start the portal client to verify that you can view the monitoring data.
For information on performing a silent uninstallation, see Uninstalling components and agents silently on
page 587.
Complete all of the steps listed in the file. Each line of the file must be either a comment (containing a
semi-colon in column one) or a meaningful statement that starts in column one.
Note: If you want to use the TCP/IP protocol, make sure to specify "IP.UDP." If you specify "TCP/IP,"
IP.PIPE is used by default.
Attention: Do not modify any other files supplied with the installation (for example, the SETUP.ISS
file).
5. Save the file and close the editor.
6. Run the silent installation using one of the following methods:
v Running the silent installation from the command line with parameters on page 544
v Running the silent installation using SMS on page 544
Notes:
1. A silentInstall.cmd script has been added to the Agents DVD. To install this agent you need to run
this script:
silentInstall.cmd
To install the agent in a different directory than the default one (CANDLE_HOME), use the -h option:
silentInstall.cmd -h directory
If this directory name contains spaces, make sure you enclose the name in quotation marks:
silentInstall.cmd -h "directory_with_spaces"
3. If the installation fails for any reason, a log file, "Abort IBM Tivoli Monitoring date time.log," is created
to document the problem. If the installation fails before reading in the installation location, the log file is
written to the Windows boot drive, typically the C:\ drive. If the installation fails after reading the
installation location, the log file is written to an \install subdirectory in the installation directory. For
example, if you use the default installation directory, the log file is written to the C:\ibm\itm\installITM
directory.
Note: This option does not appear if the agent has not yet been configured.
2. A window opens where you can specify the path into which you want the generated response files
written; the default directory is ITM_Install_Home\response.
The newly generated response files are named silent_install_pc.txt and silent_deploy_pc.txt,
where pc is the product code for the agent whose configuration parameters you want saved. See
Appendix D, IBM Tivoli product, platform, and component codes, on page 567 for the list of product
codes. If the directory you specify already contains files with these names, you are prompted to either
replace them or specify new names.
The installation and remote-deployment response files are created. When they are available, you can use
them to either install or remotely deploy identical agents across your IBM Tivoli Monitoring network:
1. Edit the response file, and supply missing security parameters such as passwords and encryption
keys.
Note: For security reasons, the IBM Tivoli Monitoring encryption key entered during installation is not
saved in the generated response file, and password parameters are blanked out. You must
supply these values yourself.
2. Install the agent.
v If you prefer to install it locally on the target computer, perform a silent installation using the
generated response file for installation: silent_install_pc.txt.
v If instead you prefer to remotely deploy the new agent, use the generated response file for remote
deployment: silent_deploy_pc.txt.
When the silent installation completes successfully, the new agent is installed and configured with
identical settings as the first one.
Note: Menu option Generate Response Files creates silent response files for the selected agent; such
files contain all the installation parameters required to install the same agent with identical settings
on other machines. However, the generated response files apply only to the Common Installer.
They do not work with Solution Installer-based agent images.
Running the silent installation from the command line with parameters
Use the following steps to run the installation from the command line:
1. Start a DOS Command Shell.
2. From the shell, change to the directory containing this installation (where setup.exe and setup.ins are
located).
3. Run the setup as follows. You must specify the parameters in the same order as listed.
start /wait setup /z"/sfC:\temp\SILENT_*.TXT" /s /f2"C:\temp\silent_setup.log"
where:
/z"/sf"
Specifies the name of the installation driver you customized for your site. This is a required
parameter. This file must exist.
SILENT_*.TXT
is the name of the input silent_server.txt, silent_agent.txt, or silent_WIA64.txt file, as described
above.
/s Specifies that this is a silent installation. This causes nothing to be displayed during installation.
/f2 Specifies the name of the InstallShield log file. If you do not specify this parameter, the default
action is to create Setup.log in the same location as the setup.iss file. In either case, the Setup
program must be able to create and write to this file.
where:
SILENT_*.TXT
is the name of the input silent_server.txt, silent_agent.txt, or silent_WIA64.txt file, as described
above.
On the product installation media (both base IBM Tivoli Monitoring and agent installation media)
After installation, a sample file is located in the install_dir/samples directory
v Silent configuration files: After you install the product, a configuration file for each component that
requires configuration is located in the install_dir/samples directory. There is also a sample
configuration file that you can use to configure any component.
Before editing any of the response files, note the following syntax rules:
v Comment lines begin with a pound sign (#).
v Blank lines are ignored.
v Parameter lines are PARAMETER=value. Do not use a space before the parameter; you can use a
space before or after an equal sign (=).
v Do not use any of the following characters in any parameter value:
$
dollar sign
=
equal sign
|
pipe
Use the following procedures to perform silent installations:
v Installing components with a response file
v Configuring components with a response file on page 547
where:
install_dir
identifies the installation location for the IBM Tivoli Monitoring component. The default installation
location is /opt/IBM/ITM.
response_file
identifies the response file that you edited to specify installation parameters. Specify the absolute path
to this file.
The parameters that you can configure in the silent_install.txt file vary depending on the component that
you are installing. Each of the files contains comments that explain the options. The following procedure is
an example of installing all available components on one computer. Use this to determine the type of
information you need to gather when you are setting up your own silent installation.
Use the following steps to perform a silent installation on a UNIX computer:
1. Edit the silent_install.txt file.
2. Set the following parameters as appropriate for your environment:
Definition
INSTALL_ENCRYPTION_KEY
INSTALL_FOR_PLATFORM
tps
tpw
tpd
Definition
INSTALL_PRODUCT
MS_CMS_NAME
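As a sketch only, a minimal silent_install.txt for installing a hub monitoring server might set values such as the following (the encryption key shown is the documented default; the platform code and monitoring server name are illustrative placeholders):
INSTALL_ENCRYPTION_KEY=IBMTivoliMonitoringEncryptionKey
INSTALL_FOR_PLATFORM=platform_code
INSTALL_PRODUCT=ms
MS_CMS_NAME=HUB_myhost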
where:
response_file
Is the name of the configuration response file. Specify an absolute path to this file.
ms_name
Is the name of the monitoring server that you want to configure.
pc Is the product code for the component or agent that you want to configure. See Appendix D, IBM
Tivoli product, platform, and component codes, on page 567 for the list of product codes.
where:
directory
is the name of the directory where you want the generated files stored. The default directory is
itm_installdir/response.
pc is the product code for the agent whose configuration parameters you want saved. See Appendix D,
IBM Tivoli product, platform, and component codes, on page 567 for the list of product codes.
Possible errors are that either the directory path or the product code, pc, is invalid. In either case, an error
message is displayed, and the response files are not generated.
When it completes, the itmcmd resp command creates these installation and remote-deployment response
files:
silent_install_pc.txt
silent_deploy_pc.txt
silent_config_pc.txt
When response files are available, you can use them to either install or remotely deploy, and then
configure, identical agents across your IBM Tivoli Monitoring network:
1. Edit the response files, and supply missing security parameters such as passwords and encryption
keys.
Note: For security reasons, the IBM Tivoli Monitoring encryption key entered during installation is not
saved in the generated response file, and password parameters are blanked out. You must
supply these values yourself.
2. Install the agent.
v If you prefer to install it locally on the target computer, perform a silent installation using the
generated response file for installation: silent_install_pc.txt.
v If instead you prefer to remotely deploy the new agent, use the generated response file for remote
deployment: silent_deploy_pc.txt.
3. Using the configuration response file (silent_config_pc.txt), configure the newly installed agent:
Appendix C. Firewalls
There are four options for achieving interoperability between Tivoli Management Services components
across network firewalls:
1. Automatic (do nothing)
2. Configure IP.PIPE with the ephemeral keyword
3. Use a broker partition file
4. Implement a firewall gateway
Use the following topics to select and implement the appropriate option for your environment:
v Determining which option to use
v Basic (automatic) implementation on page 552
v Implementation with ephemeral pipe on page 552
v Implementation with partition files on page 554
v Implementation with firewall gateway on page 557
a Warehouse Proxy server, then two ports must be opened at the firewall, one for the monitoring server
(typically 1918) and one for the Warehouse Proxy (typically 63358) for interoperability in this firewall
environment.
You configure an IP.PIPE or IP.SPIPE connection as an ephemeral pipe by adding the ephemeral keyword
ephemeral:y to the KDE_TRANSPORT environment variable immediately following the protocol keyword
(IP.PIPE or IP.SPIPE) in the associated KPPENV file for that process. You must then restart the process
for the change to be effective.
Some KPPENV files use KDC_FAMILIES instead of KDE_TRANSPORT. The process is exactly the same
for the KDC_FAMILIES environment variable: adding the ephemeral keyword ephemeral:y immediately
following the protocol keyword (IP.PIPE or IP.SPIPE) that is to be designated ephemeral.
For example, to configure the KNTAGENT to make an ephemeral connection to the monitoring server,
change KDE_TRANSPORT (or KDC_FAMILIES) in the file KNTENV from
KDE_TRANSPORT=IP.PIPE PORT:1918 IP SNA
to
KDE_TRANSPORT=IP.PIPE ephemeral:y PORT:1918 IP SNA
or from
KDC_FAMILIES=IP.PIPE PORT:1918 IP SNA
to
KDC_FAMILIES=IP.PIPE ephemeral:y PORT:1918 IP SNA
To configure a remote monitoring server to make an ephemeral connection to the hub, change
KDE_TRANSPORT (or KDC_FAMILIES) in the file KDSENV from
KDE_TRANSPORT=IP.PIPE PORT:1918 IP SNA
to
KDE_TRANSPORT=IP.PIPE ephemeral:y PORT:1918 IP SNA
or from
KDC_FAMILIES=IP.PIPE PORT:1918 IP SNA
to
KDC_FAMILIES=IP.PIPE ephemeral:y PORT:1918 IP SNA
Monitoring agents that configure their connections as ephemeral cannot warehouse data unless
KPX_WAREHOUSE_LOCATION is also configured at the remote monitoring server to which the
monitoring agent reports. The variable KPX_WAREHOUSE_LOCATION is an optional list of fully qualified,
semicolon-delimited network names that must be added to environment file of the monitoring server to
which the agents are connected. This file is located in different places, depending on the platform:
v On Windows systems: install_dir\CMS\KBBENV
v On UNIX systems: install_dir/config/kbbenv.ini
The syntax is:
KPX_WAREHOUSE_LOCATION=family_protocol:#network_address[port_number];
For example:
KPX_WAREHOUSE_LOCATION=ip.pipe:#192.168.0.14[18303];ip:#192.168.0.14[34543];
See also Setting a permanent socket address for a proxy agent on page 447.
Sample scenarios
The following scenarios illustrate how to implement partition files in various configurations. In these
scenarios, your site has one firewall that contains two partitions, which are named OUTSIDE and INSIDE:
v Scenario 1: Hub monitoring server INSIDE and monitoring agents OUTSIDE
v Scenario 2: Hub and remote monitoring servers INSIDE and monitoring agents OUTSIDE
v Scenario 3: Hub monitoring server INSIDE, remote monitoring server and agents OUTSIDE on page
555
OUTSIDE is the partition name outside the firewall and hub's_external_address is the address of the hub
monitoring server that is valid for the agents.
As part of the configuration of each agent, specify the name of the partition that each is located in:
OUTSIDE.
When an agent starts, parthub.txt is searched for an entry that matches the partition name OUTSIDE and
finds the monitoring server address that is valid for the agents (the external address).
Scenario 2: Hub and remote monitoring servers INSIDE and monitoring agents
OUTSIDE
Note: In Scenarios 2 and 3, all agents report to the remote monitoring server.
As part of the configuration of the hub monitoring server, specify the name of the partition that it is located
in: INSIDE. No partition file is needed because the only component that reports to it (the remote monitoring
server) is also inside the firewall.
As part of the configuration of the remote monitoring server, specify the name of the partition that it is
located in: INSIDE. A partition file, partremote.txt, must also be created at the remote monitoring server. It
contains the following entries:
OUTSIDE ip.pipe: remote's_external_address
When configuring the agents (all of which are outside the firewall, reporting to the remote monitoring
server), specify the name of the partition that they are located in: OUTSIDE. When the agents start,
partremote.txt is searched for an entry that matches the partition name OUTSIDE and finds the remote
monitoring server address that is valid for them (the external address).
Scenario 3: Hub monitoring server INSIDE, remote monitoring server and agents
OUTSIDE
As part of the configuration of the hub monitoring server, specify the name of the partition that it is located
in: INSIDE. Create a partition file, parthub.txt, containing the following entry:
OUTSIDE ip.pipe: hub's_external_address
OUTSIDE is the partition name outside the firewall and hub's_external_address is the address of the hub
monitoring server that is valid for the remote monitoring server.
As part of the configuration of both the agents and the remote monitoring server, specify the name of the
partition they are located in: OUTSIDE.
A partition file partremote.txt also must be created at the remote monitoring server. It contains the
following entry:
INSIDE ip.pipe: remote's_internal_address
If the hub monitoring server needs to communicate with the remote monitoring server (for example, to
issue a report request from an agent that is connected to the remote monitoring server), the
partremote.txt file is searched for an entry that matches the partition name INSIDE and finds the remote
monitoring server address that is valid for it (the internal address).
b. For Partition Name, type the name of the partition to which the file applies.
c. Click Edit File.
A message appears saying that the partition file cannot be found and asking you if you want to
create it.
d. Click Yes to create the file.
The file is created and opened in Notepad.
e. Create the entry for the partition.
The format for the entries is PARTITION-ID IP.PIPE:nn.nn.nn.nn IP.PIPE:nn.nn.nn.nn. For
example, to create a monitoring server partition for a typical scenario with a monitoring agent
outside of a NAT firewall connecting to a monitoring server behind a firewall, use the partition ID of
your monitoring agent, two spaces, and then the IP address of the host of the monitoring server.
Add additional IP.PIPE:nn.nn.nn.nn addresses on a single line for multiple network interface cards.
See Sample partition file on page 557 for more information on creating entries in the partition file.
f. Save and close the file.
7. Click OK to save the changes and close the configuration window.
6. Click Create to create the file (if it does not exist) or Modify to edit the file.
7. Enter the partition ID in the first column.
8. Enter the IP address in the second column. If you require a second IP address, enter it in the third
column. (If more than two IP addresses are required for a partition ID, use a text editor to add the
additional addresses. See Sample partition file on page 557.)
9. Click Save to save the file and exit or Cancel to return to the previous screen without modifying the
file.
A partition file is a standard text file defined to the system using the KDC_PARTITIONFILE environment
variable. Within this file, each line describes a partition name with its constituent IP addresses using space
delimited tokens. The format is as follows:
PARTITION-ID IP.PIPE:nn.nn.nn.nn IP.PIPE:nn.nn.nn.nn
The first token on each line is used as a case-insensitive partition ID. The partition ID can be any
alphanumeric string with a maximum length of 32 characters. Subsequent tokens specified are treated as
interface addresses in standard NCS format (address-family:address). For communication across
firewalls, use only IP.PIPE for address-family.
The expected default location of the file is /install_dir/tables/tems_name.
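For example, a parthub.txt file for the hub monitoring server in Scenario 1 might contain a single entry such as the following (the address shown is illustrative):
OUTSIDE IP.PIPE:203.0.113.25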
v All ports used by gateway instances are configurable. Port pooling is available to constrain client proxy
connections to designated port values.
v Multiple failover addresses can be configured for all gateway connections.
NAT alone is not a reason to use the firewall gateway, which is content-neutral and can proxy any TCP
connection. In most cases, NAT processing is handled by the PIPE protocol (IP.PIPE or IP.SPIPE), which
can be used without the firewall gateway. Use the gateway when you have any of the following scenarios:
v A single TCP connection cannot be made to span between IBM Tivoli Monitoring components. An
example would be that there are multiple firewalls between these components and a policy that does
not allow a single connection to traverse multiple firewalls.
v Connection requirements do not allow the IBM Tivoli Monitoring default pattern of connections to the
hub monitoring server. An example here would be agents residing in a less-secure zone connecting to a
monitoring server residing in a more-secure zone. Security policy would only allow a connection to be
established from a more-secure zone to a less-secure zone, but not the other way round.
v You must reduce open firewall ports to a single port or connection. For example, rather than opening
the port for every system being monitored, you would like to consolidate the ports to a single
concentrator.
v You must manage agent failover and monitoring server assignment symbolically at the hub monitoring
server end of the gateway. Because gateway connections are made between matching service names,
an administrator can change the failover and monitoring server assignment of downstream gateway
agents by changing the client proxy bindings next to the hub monitoring server.
In the context of firewalls, the server and client relationship can best be described in terms of upstream
and downstream. Those entities that open a socket to listen for requests are at the upstream or server
end. Those entities connecting to the server are at the downstream or client end. Using one or more relay
configurations, logical connection requests flow from a listening downstream server proxy interface, and
terminate in an outbound connection from an upstream client proxy interface to a listening server.
Intermediate relay configurations consist of an upstream relay interface containing at least one
downstream relay interface.
Configuration
The gateway component is configured through an XML document that specifies a set of zones, each of
which contains at least one upstream interface with one or more imbedded downstream interfaces. The
following sections provide information on the activation, structure, and content of the XML document:
v Activation
v IPv4 Address Data on page 559
v IPv6 Address Data on page 559
v XML Document Structure on page 559
Activation
The gateway feature can be activated within any IBM Tivoli Monitoring process. However, use must be
limited to the host computer operating system agent to prevent potential resource consumption conflicts
with Tivoli Enterprise Monitoring Server and Tivoli Enterprise Portal Server processes.
The configuration variable KDE_GATEWAY is set to the XML configuration file name. A line of the form
KDE_GATEWAY=filename must be added to the following configuration files, depending on your environment:
v On Windows computers, configuration variables for the Windows operating system agent are located in
the ITMinstall_dir\tmaitm6\KNTENV file.
v On UNIX computers, configuration variables for the UNIX operating system agent are located in the
ITMinstall_dire/config/ux.ini and ITMinstall_dir/config/ux.config files. Add the entry to both files
for reliable results.
v On Linux computers, configuration variables for the Linux operating system agent are located in the
ITMinstall_dir/config/lz.ini and ITMinstall_dir/config/lz.config files. Add the entry to both files
for reliable results.
After you make these changes, stop and restart the monitoring agents.
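For example, on a Linux OS agent the added line might look like the following (the file name and location are illustrative):
KDE_GATEWAY=/opt/IBM/ITM/config/gateway.xml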
<gateway>
A <gateway> element in the assigned namespace http://xml.schemas.ibm.com/tivoli/tep/kde/
contains configuration elements described within this document. The gateway XML processor
semantically ignores valid XML until the container is opened, allowing for configuration documents
to be imbedded in other documents. This element cannot contain data.
name
The name attribute is required, cannot contain imbedded delimiters, and must begin with a
nonnumeric. This attribute is used to identify a specific gateway instance. This attribute cannot
be inherited from an outer element.
threads
The threads attribute specifies the number of worker threads in a general purpose thread
pool. The specification must satisfy 1 <= value <= 256, and defaults to 32. Threads in this
pool are shared by all defined zones, and are used only by interface startup logic, and to
recover from outbound buffer exhaustion conditions. The default value is generally more than
adequate.
<zone>
A zone is a container of interfaces sharing communication resources. This element cannot contain
data.
name
The name attribute is required, cannot contain imbedded delimiters, and must begin with a
nonnumeric. This attribute is used to identify a specific zone instance. This attribute cannot be
inherited from an outer element.
maxconn
The maxconn attribute imposes an upper limit on the number of concurrent gateway
connections within the zone. Each proxy physical connection and each logical connection
crossing a relay interface consume this value. The specification must satisfy 8 <= value <=
4096, and defaults to 256.
bufsize
The bufsize attribute sets the data buffer size within the zone. The specification must satisfy
256 <= value <= 16384, and defaults to 2048.
minbufs
The minbufs attribute sets the minimum number of buffers in the zone buffer pool that are
reserved for inbound traffic. The specification must satisfy 4 <= value <= 1024, and defaults to
64.
maxbufs
The maxbufs attribute sets the maximum number of buffers in the zone buffer pool that are
reserved for inbound traffic. The specification must satisfy minbufs <= value <= 2048, and
defaults to 128.
<interface>
An interface describes a set of network bindings that exhibit a fixed behavior according to a
specified role, and based on whether it is defined as upstream, which means that the enclosing
element is <zone>, or downstream, where the enclosing element is <interface>. In all roles, logical
connections arrive through one or more downstream interfaces and are forwarded through the
upstream interface. After a logical connection has been established end to end, data flow is full
duplex. A valid configuration requires an upstream interface to contain at least one downstream
interface. This element cannot contain data.
name
The name attribute is required, cannot contain imbedded delimiters, and must begin with a
nonnumeric. This attribute is used to identify a specific interface instance. This attribute cannot
be inherited from an outer element.
role
The role attribute is required, and describes the behavior of network bindings contained within.
The role attribute must be specified as proxy, listen, or connect. Downstream proxy
interfaces represent local listening endpoints, and function as a server proxy. Upstream proxy
interfaces represent local connecting endpoints, and function as a client proxy. Relay
interfaces are assigned either listen or connect. No configuration restriction is made on the
relay connection role other than that peer relay connections must specify the opposite role. Relay
connections are considered persistent, are initiated at gateway startup, and are automatically
restarted in the event of a network disruption.
<bind>
A <bind> element represents connection resources on one or more local interfaces. When
specified within interfaces that listen (downstream proxy, relay listen), bind elements represent
listening ports on local interfaces. For connect interfaces (upstream proxy, relay connect), they
represent the local binding to be used for the outbound connection. Specific local interface
addresses can be supplied as data; the default interface is any.
localport
The localport attribute is required within listen interfaces, and is optional within connect
interfaces. The value supplied is either a number that satisfies 1 <= value <= 65535 or, for
connect-based roles only, the name of a portpool element defined within the gateway.
ipversion
The ipversion attribute declares the address family to be used for activity within the tag scope.
Valid values are 4 or 6, with a default of 4.
ssl
The ssl attribute controls SSL negotiation for connections within the scope of this binding.
When specified as yes, a successful negotiation is required before a connection is allowed on
the gateway. The default value is no, meaning no SSL negotiation occurs on behalf of the
gateway connection. Note that this does not restrict the conveyance of SSL streams across a
gateway, only whether or not the gateway acts as one end of the SSL negotiation. When this
operand is specified on a relay binding, it can be used to secure relay traffic, and must be
specified on both ends of the relay connection.
service
The service attribute is a character string used to represent a logical connection between
client and server proxy interfaces. Each connection accepted by a server proxy must find an
upstream client proxy connection with a matching service string. No value restrictions are
imposed.
<connection>
The <connection> tag is used to supply remote network interfaces as data. When applied to a
listen mode binding, the connection tag represents the list of remote interface addresses that are
allowed to make a connection, and is optional. This tag is required for connect mode bindings,
and describes the remote end of the connection. Multiple addresses can be supplied for failover
purposes.
remoteport
The remoteport attribute supplies the default port number of remote interfaces described
within this tag. The value supplied must satisfy 1 <= value <= 65535.
<portpool>
The <portpool> tag is used to create a list of local port numbers to be used for outbound
connections. Port numbers are supplied as data, and can be specified discretely or as a range
expression separated by hyphen ("-"). Range expressions are limited to 1024 bytes to prevent
syntax errors from resulting in larger ranges than expected. Multiple specifications of either form
are allowed.
name
The name attribute is required, cannot contain imbedded delimiters, and must begin with a
nonnumeric. This attribute is used to identify a specific portpool instance. This attribute cannot
be inherited from an outer element, and is referenced by a localport attribute on a bind
element.
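The following sketch, assembled from the element descriptions above, illustrates the overall shape of a gateway configuration document. It is only an example: the gateway, zone, interface, and service names, the host name, and the port numbers are placeholders that must be replaced with values appropriate to your environment.
<tep:gateway xmlns:tep="http://xml.schemas.ibm.com/tivoli/tep/kde/" name="sample">
  <zone name="trusted">
    <interface name="client_side" role="proxy">
      <bind service="agent_traffic">
        <connection remoteport="1918">hub.example.com</connection>
      </bind>
      <interface name="server_side" role="proxy">
        <bind localport="1918" service="agent_traffic"/>
      </interface>
    </interface>
  </zone>
</tep:gateway>
In this sketch the upstream interface (enclosed directly by the zone) acts as the client proxy and connects out to hub.example.com, while the imbedded downstream interface acts as the server proxy and listens locally on port 1918; the matching service strings tie the two together.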
The effect of this configuration change is to force the Warehouse Proxy to listen at the Tivoli Enterprise
Monitoring Server well-known port number (default 1918) plus the quantity 4096 multiplied by 15. For
example, if the monitoring server port is defaulted to 1918, this causes the Warehouse Proxy to listen at
port 63358 (1918 + 61440). The following examples assume this recommendation has been implemented.
Product codes
tm, a4, lz, sy, ms, cw, cj, cq, um, ul, ux, hd (Warehouse Proxy), nt, kf, r2, r3, r4, r5, r6, r9
A complete, alphabetical list of product codes for IBM Tivoli Monitoring can be found at this Web site:
http://www-01.ibm.com/support/docview.wss?uid=swg21265222&myns=swgtiv&mynp=OCSSZ8F3&mync=E.
Table 129 lists the platform codes required by the commands related to remote agent deployment.
Table 129. Platform codes required for remote agent deployment
Platform
Code
AIX R4.1
aix4
AIX R4.2
aix42
AIX R4.2.0
aix420
AIX R4.2.1
aix421
AIX R4.3
aix43
AIX R4.3.3
aix433
aix513
aix516
aix523
aix526
aix533
aix536
HP-UX R10.01/10.10
hp10
HP-UX R10.20
hp102
hp11
hp116
hpi113
hpi116
li622
li6223
li624
li6242
li6243
li6245
li6262
li6263
li6265
ls322
ls3223
ls3226
ls324
ls3242
ls3243
ls3245
ls3246
ls3262
ls3263
ls3265
ls3266
lx8243
lx8246
lx8263
lx8266
lia246
lia266
lpp246
lpp263
lpp266
MVS
mvs
Digital UNIX
osf1
OS/2
os2
OS/400
os400
Solaris R2.4
sol24
Solaris R2.5
sol25
Solaris R2.6
sol26
sol273
sol276
sol283
sol286
sol293
sol296
sol503
sol506
sol603
sol606
ta6046
tv6256
Tru64 V5.0
tsf50
tms
tps
tpd
tpw
unix
WIA64
WIX64
winnt
Table 130 lists the various IBM Tivoli Monitoring components and the codes that represent them; these
codes are displayed when you invoke the cinfo -t command.
Table 130. Application support codes
Component
Code
TEMS
TEPS
TEP
Windows
WI
WICMS
WICNS
WIXEB
WIXEW
tms
tps
tpw
tpd
CTIRA_MAX_RECONNECT_TRIES
Obsolete. Number of consecutive unsuccessful attempts the agent makes to connect to a Tivoli
Enterprise Monitoring Server before giving up and exiting. The default value of 0 means that the agent
remains started regardless of its connection status with the monitoring server. (Prior to IBM Tivoli
Monitoring V6.2.2, the default value was 720.)
CTIRA_NCSLISTEN
Number of RPC listen server threads to create for the agent. The default value is 10.
CTIRA_NODETYPE
Supplies the agent node type qualifier (subsystem:hostname:nodetype) of the agent managed system
name (msn). Provide the agent product indicator in this name. This value may also be set by the agent
itself.
CTIRA_OS_INFO
Overrides the default value for agent entries in the Tivoli Enterprise Monitoring Server's
ManagedSystem.Host_Info column. This variable is used to build the Tivoli Enterprise Portal Server's
navigation tree. The value must match an existing entry in the CNPS/osnames file. This variable is not
applicable to subnode type records in the monitoring server's ManagedSystem table.
CTIRA_PRIMARY_FALLBACK_INTERVAL
Forces the agent to switch from the primary Tivoli Enterprise Monitoring Server to one of the defined
secondary monitoring servers, because the primary monitoring server is offline or due to
network-connectivity issues. You want the agent to switch back to the primary monitoring server as
soon as possible when it becomes available. This parameter controls the frequency with which the
agent performs a lookup of the primary monitoring server. If the primary monitoring server is found, the
agent disconnects from the secondary monitoring server and reconnects to the primary monitoring
server. A value of zero disables this feature. The minimum value is 2.5 times the
CTIRA_RECONNECT_WAIT value. The default value is 4500 seconds, or 75 minutes.
CTIRA_PRODUCT_SEP
Supplies an alternate qualifier for the agent's managed system name (msn). The default value is a
colon (:).
CTIRA_RECONNECT_WAIT
Time interval, in seconds, that the agent waits between attempts to register with a Tivoli Enterprise
Monitoring Server. The default value is 600 seconds.
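As a brief illustration, the following hypothetical entries in an OS agent environment file (for example, lz.ini on Linux) simply restate the documented defaults for the reconnect and fallback behavior; note that 4500 is well above the required minimum of 2.5 x 600 = 1500 seconds:
CTIRA_RECONNECT_WAIT=600
CTIRA_PRIMARY_FALLBACK_INTERVAL=4500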
CTIRA_REFLEX_ATOMIC
For subnode targets only. Evaluates situation state by any existing specified display item column name
when deciding which reflex situation automation command the agent should execute. Not applicable to
reflex situation commands executed or evaluated by the Tivoli Enterprise Monitoring Server. Disable by
setting to N. The default value is Y.
CTIRA_REFLEX_TARGET
For subnode targets only. Evaluates situation state by subnode name value in the ORIGINNODE
column when deciding which reflex situation automation command the agent should execute. Not
applicable to reflex situation commands executed or evaluated by the Tivoli Enterprise Monitoring
Server. Disable by setting to N. The default value is Y.
CTIRA_SIT_CLEAN
Number of seconds between garbage cleanup of stale entries in the agent persistent situation file. The
default value is 900 seconds.
CTIRA_SIT_FILE
Specifies an alternate name for the default agent-based persistent situation file. This variable should
be used only in exceptional conditions, because the file name reflects the agent's managed system
name. Unsupported for z/OS-based agents.
CTIRA_SIT_MGR
Specifies whether or not to use the agent's persistent situation file when registering with the Tivoli
Enterprise Monitoring Server. Using this file improves performance of the monitoring server. Set this
variable to N to disable usage. For a z/OS agent, the value must be N, because this feature is not
implemented for a z/OS-based monitoring server. For all other platforms, the default value is Y.
CTIRA_SIT_PATH
Required variable that specifies the directory where the agent-based persistent situation file is stored.
This agent-only file contains a copy of the Tivoli Enterprise Monitoring Server monitoring situations for
the agent's use while registering with the monitoring server. The file is named psit_msn.str, where
msn is the managed system name of the agent process. Unsupported for z/OS-based agents.
CTIRA_SUBSYSTEM_ID
Optional variable that overrides the subsystem ID qualifier (subsystem:hostname:nodetype) of the
agent managed system name (msn). Describes a monitored resource instance to help make this name
unique. Value may also be set by the agent itself.
CTIRA_SYSTEM_NAME
Sets an alternate system name for agent entries in the Tivoli Enterprise Monitoring Server's
ManagedSystem.Host_Address column within the <NM>mysystem</NM> tags. Used to build the
Tivoli Enterprise Portal Server's navigation tree. Not applicable to subnode type records in the
monitoring server's ManagedSystem table.
CTIRA_THRESHOLDS
Specifies the fully qualified name of the XML-based adaptive (dynamic) threshold override file. The
default file is located in $CANDLE_HOME/localconfig/pc/pc_thresholds.xml, where pc is the agent
product code. On z/OS systems, the default file name is pcTHRES.
IRA_ADAPTIVE_THRESHOLD_MODE
Specifies the adaptive (dynamic) threshold operation mode, either CENTRAL or LOCAL. In CENTRAL
mode, threshold overrides are centrally created and distributed to the agent, and the threshold
override XML file should not be modified. This is the recommended mode and the default.
In LOCAL mode, central distribution to the agent is inhibited, and threshold overrides are locally
created and managed. Use LOCAL mode to specify that the agent should ignore enterprise
distribution; its affinity will not be registered, so the Tivoli Enterprise Portal cannot override the agent's
managed system node. This mode should be used cautiously since it causes the Tivoli Enterprise
Monitoring Server's thresholds and the agent's thresholds to be out of sync.
You must create and manually write override definitions in the same file that is created in CENTRAL
mode: managed-system-name_product-code_thresholds.xml. For instance, on Windows, this file is
named Primary_myagent_NT_thresholds.xml; on Linux, myagent_LZ_thresholds.xml; on UNIX,
myagent_UX_thresholds.xml. On Windows, the file is stored in the %CANDLEHOME%\TMAITM6 directory; on
Linux and UNIX, the file is stored in $CANDLEHOME/interp/product-code/bin.
The names of the columns to be used when specifying overrides are taken from the attributes file. The
override name and objectId must be unique in the XML file. Timestamp is not required.
If later you switch back from LOCAL mode to CENTRAL mode, centrally located overrides will again
override the local definitions.
IRA_AUTONOMOUS_LIMIT
Sets the saved event limit for autonomous operation. If the specified value is a number (for example,
500), then it is the maximum number of situation event records to be saved by the agent. If the
specification is in common disk space units such as KB, MB, or GB (for example, 5MB), then it is the
total amount of disk space to be used by the agent for saving situation event data. The default value is
2MB.
IRA_AUTONOMOUS_MODE
Turns on (YES) or off (NO) agent autonomous mode. While in autonomous mode, the agent continues
to run Enterprise situations. The situation event data persists on disk even after agent restart. Upon
reconnection to the Tivoli Enterprise Monitoring Server, the agent uploads saved situation event data
to the monitoring server. The default value is Y.
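For example, the following hypothetical entries in an agent environment file keep autonomous operation enabled and raise the saved event limit from the 2MB default to 5MB of disk space:
IRA_AUTONOMOUS_MODE=YES
IRA_AUTONOMOUS_LIMIT=5MB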
IRA_DEBUG_AUTONOMOUS
Turns on (Y) or off (N) debug trace for agent autonomous operation. The default value is N.
IRA_DEBUG_EVENTEXPORT
Turns on (Y) or off (N) debug trace for agent event export operations, such as SNMP trap emission. The
default value is N.
IRA_DEBUG_PRIVATE_SITUATION
Turns on (Y) or off (N) debug trace when processing an agent's private situations. The default value is
N.
IRA_DEBUG_SERVICEAPI
Turns on (Y) or off (N) debug trace for agent service interface processing. The default value is N.
IRA_DEBUG_TRANSCON
Turns on (Y) or off (N) debug trace for agent transport conduit processing. The default value is N.
IRA_DUMP_DATA
Used by both agents and the Tivoli Enterprise Monitoring Server for debugging. Set to Y to produce a
hexadecimal dump of RPC transaction data contents in the RAS1 log. The default value is
N. This setting can produce voluminous RAS1 message output if enabled.
IRA_EVENT_EXPORT_CHECKUSAGE_INTERVAL
Specifies the frequency with which the agent checks and calculates the autonomous operation saved
event usage limit that is defined by the IRA_AUTONOMOUS_LIMIT parameter. The default value is
90 seconds; the agent enforces a minimum setting of 30 seconds.
IRA_EVENT_EXPORT_LISTSTAT_INTERVAL
Defines the frequency with which the agent outputs collected situation statistics to the debug trace log.
The default value is 900 seconds, or 15 minutes.
IRA_EVENT_EXPORT_LISTSTAT_OUTPUT
Enables (Y) or disables (N) periodic output of situation operation statistics data to the trace log. The
default is N.
IRA_EVENT_EXPORT_SIT_STATS
Enables (Y) or disables (N) basic situation operation statistics data collection. The basic situation data
includes situation first start time, situation first event time, situation last start time, situation last stop
time, situation last true event time, situation last false event time, number of times situation recycles,
number of times situation enters autonomous operation. The default value is Y.
IRA_EVENT_EXPORT_SIT_STATS_DETAIL
Enables (Y) or disables (N) detailed situation operation statistics data collection. The detail data
collected includes an 8-day situation operation profile, such as hourly true event count, hourly false
event count, hourly data row count, hourly true event ratio, and hourly false event ratio. The default
value is N.
IRA_EVENT_EXPORT_SNMP_TRAP
Enables (Y) or disables (N) the SNMP trap emitter capability. When enabled, the SNMP trap
configuration file is required and must exist for the agent to emit SNMP V1, V2, or V3 traps to
configured SNMP trap managers. The default value is Y.
IRA_EVENT_EXPORT_SNMP_TRAP_CONFIG
Specifies the fully qualified SNMP trap configuration file name. The default file is located in
$CANDLE_HOME/localconfig/pc/pc_trapcnfg.xml (member pcTRAPS on z/OS systems), where pc is the
agent product code.
IRA_LOCALCONFIG_DIR
Specifies the local configuration directory path that contains locally customized configuration files such
as threshold overrides, private situations, and the SNMP trap configuration file. The default directory is the
localconfig subdirectory of the directory specified by the CANDLE_HOME environment variable,
which is the RKANDATV DD name on z/OS systems.
IRA_PRIVATE_SITUATION_CONFIG
Specifies the fully qualified autonomous Private Situation configuration file name. The default file on
distributed systems is located in $CANDLE_HOME/localconfig/pc/pc_situations.xml, where pc is the
agent product code. The default file on z/OS systems is the SICNFG member in the RKANDATV data
set.
IRA_SERVICE_INTERFACE_DEFAULT_PAGE
Instructs the agent to open the named product-specific HTML page instead of the installed
navigator.htm page upon logon to the agent service interface. The HTML file must exist in the agent
installation HTML subdirectory (CANDLE_HOME/localconfig/html/) or as specified by the
IRA_SERVICE_INTERFACE_DIR variable.
IRA_SERVICE_INTERFACE_DIR
Defines the path specification of the agent service interface HTML directory. In conjunction with the
IRA_SERVICE_INTERFACE_DEFAULT_PAGE parameter, the agent constructs the file path to a
specific, requested HTTP GET object. The default file path is CANDLE_HOME/localconfig on distributed
systems and the RKANDATV data set on z/OS systems. This parameter is equivalent to the
IRA_HTML_PATH parameter.
Example: If IRA_SERVICE_INTERFACE_DIR="\mypath\private" and you enter
http://localhost:1920///kuxagent/kuxagent/html/myPage.htm in your browser, myPage.htm is retrieved from
\mypath\private\html\ instead of %CANDLE_HOME%\localconfig\html\.
IRA_SERVICE_INTERFACE_NAME
Specifies a unique agent interface name to represent this agent. The default agent service interface
name is pcagent, where pc is the application product code. In the scenario where multiple instances of
the same agent are running in the system, this parameter enables customization of a unique service
interface name to correspond to this agent.
ITM_BINARCH
Set by the installer to supply the platform architecture code. Used by the agents to read the agent
installation version files and retrieve agent version information.
KBB_RAS1
Sets the level of agent tracing:
ERR (UNIT:KRA ST)
View the state of main agent functions such as situation and report processing.
ERR (UNIT:KRA ALL)
View detailed debug messages for agent functions.
ERR (UNIT:KHDX ST)
View the state of the agent's short-term history data uploads to the Tivoli Data Warehouse.
ERR (UNIT:KHD ALL)
View detailed debugging messages about short-term history data uploads to the Tivoli Data
Warehouse.
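For example, to trace both the main agent state and the state of short-term history uploads, you might set the following line in the agent environment file (the trace classes to combine depend on the problem being diagnosed):
KBB_RAS1=ERR (UNIT:KRA ST) (UNIT:KHDX ST)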
*.db Files        *.idx Files
qa1cacts.db       qa1cacts.idx
qa1cckpt.db       qa1cckpt.idx
qa1ccobj.db       qa1ccobj.idx
qa1ccomm.db       qa1ccomm.idx
qa1ceibl.db       qa1ceibl.idx
qa1chost.db       qa1chost.idx
qa1ciobj.db       qa1ciobj.idx
qa1cmcfg.db       qa1cmcfg.idx
qa1cnodl.db       qa1cnodl.idx
qa1cplat.db       qa1cplat.idx
qa1cpset.db       qa1cpset.idx
qa1cruld.db       qa1cruld.idx
qa1csitf.db       qa1csitf.idx
qa1csmni.db       qa1csmni.idx
qa1cstsc.db       qa1cstsc.idx
qa1cstsh.db       qa1cstsh.idx
qa1cthre.db       qa1cthre.idx
qa1dactp.db       qa1dactp.idx
qa1daggr.db       qa1daggr.idx
qa1dcct.db        qa1dcct.idx
qa1dcct2.db       qa1dcct2.idx
qa1dmobj.db       qa1dmobj.idx
qa1dmtmp.db       qa1dmtmp.idx
qa1dobja.db       qa1dobja.idx
qa1dpcyf.db       qa1dpcyf.idx
qa1drnke.db       qa1drnke.idx
qa1drnkg.db       qa1drnkg.idx
qa1dsnos.db       qa1dsnos.idx
qa1dspst.db       qa1dspst.idx
qa1dstms.db       qa1dstms.idx
qa1dstsa.db       qa1dstsa.idx
qa1dstua.db       qa1dstua.idx
qa1dswrs.db       qa1dswrs.idx
qa1dswus.db       qa1dswus.idx
qa1dwgrp.db       qa1dwgrp.idx
qa1dwork.db       qa1dwork.idx
Usage
The secureMain commands use the following syntax:
secureMain [-h install_dir] [-g common_group] [-t type_code [-t type_code]] lock
secureMain [-h install_dir] [-g common_group] unlock
secureMain unlock is used to loosen permissions in an IBM Tivoli Monitoring 6.2 installation. secureMain
unlock is normally not necessary, but can be run if desired. It should be run before installing or
configuring components.
secureMain unlock does not return the installation to the permission state that it was in before running
secureMain lock. It only processes the common directories, like bin, config, registry, and logs, and the
files in them.
Examples
The following example locks the installation using the common group itmgroup:
secureMain -g itmgroup lock
The following example locks the base and mq component directories using the common group itmgroup:
secureMain -g itmgroup -t mq lock
5. Click OK.
The following progress window is displayed.
After Tivoli Enterprise services have stopped, you are asked if you want to remove the Tivoli Enterprise
Portal database.
6. Click Yes.
The following window is displayed, requesting information required to remove the database:
7. Type the password for the database administrator in the Admin Password field and click OK.
The following progress window is displayed.
8. Click Finish.
where install_dir is the path for the home directory for IBM Tivoli Monitoring.
2. Run the following command:
./uninstall.sh
A numbered list of product codes, architecture codes, version and release numbers, and product titles
is displayed for all installed products.
3. Type the number for the installed product that you want to uninstall. Repeat this step for each
additional installed product you want to uninstall.
4. After you have removed all installed components, you are asked if you want to remove the installation
directory. Type y and press Enter.
You can also run the following command to remove all installed components from the command line:
./uninstall.sh REMOVE EVERYTHING
After the command completes, you can manually remove the IBM Tivoli Monitoring installation directory.
Note: If, for any reason, the UNIX uninstallation is not successful, run the following command to remove
all IBM Tivoli Monitoring directories:
rm -r install_dir
This uninstallation program does not delete the database created for Tivoli Enterprise Portal on a Linux
portal server. If you want to delete that database, you must remove it manually. See the documentation for
your database software for information about deleting a database.
a. For an agent, expand Tivoli Enterprise Monitoring Agents and select the agent you want to
uninstall.
b. For a component, select the component (such as Tivoli Enterprise Portal Desktop Client).
c. Click Next.
d. Click Next on the confirmation screen.
e. Depending on the remaining components on your computer, there might be a series of
configuration panels. Click Next on each of these panels.
Note: When you are uninstalling the Tivoli Enterprise Portal Server, the installer gives you the
option to remove the portal server database. If there are other databases created by Tivoli
Monitoring in this or a previous version on the computer, the installer gives you the option to
remove them as well.
9. Click Finish to complete the uninstallation.
where install_dir is the path for the home directory for IBM Tivoli Monitoring.
2. Run the following command:
./uninstall.sh
A numbered list of product codes, architecture codes, version and release numbers, and product titles
is displayed for all installed products.
3. Type the number for the agent or component that you want to uninstall. Repeat this step for each
additional installed product you want to uninstall.
Note: When you are uninstalling the Tivoli Enterprise Portal Server, the installer gives you the option
to remove the portal server database. If there are other databases created by Tivoli Monitoring
in this or a previous version on the computer, the installer gives you the option to remove them as
well.
8. Delete any PC*.EXE or PC*.DLL files for the product. PC is the product's three-character internal
identifier code from the tables.
9. Exit Manage Candle Services or Manage Tivoli Enterprise Monitoring Services, and launch it again.
The agent and all instances should no longer be shown under the Service/Application column.
Note: You can also use this procedure to remove IBM Tivoli Monitoring agents if you use the TMAITM6
directory instead of the CMA directory in step 6 on page 585. All other steps remain the same.
Table 132. Candle OMEGAMON Release 04R1
Internal identifier
Release
Description
K3Z
400
KA2
120
KA4
300
KBL
320
KBR
320
KEZ
251
KIC
100
KIE
100
KMA
201
KMC
360
KMQ
360
KNW
300
KOQ
301
KOR
301
KOY
300
KPT
201
KQI
120
KSA
301
KTX
300
KUD
400
KWE
130
KWL
100
KWN
100
Release
Description
KIC
110
KIE
110
KMC
370
KMQ
370
WebSphere MQ Agent
KQI
130
If you are using a Microsoft SQL database or an Oracle database, use the Windows ODBC Data Source
Administrator utility to remove the ODBC data source.
v To select a component to uninstall, uncomment the following line in the ACTION TYPE section
;UNINSTALLSELECTED=Yes
and uncomment the component or components to be removed in the FEATURE section. For
example:
;*********************************************************************
;
;                 TIVOLI ENTERPRISE MONITORING AGENT
;                 TEMA INSTALLATION SECTION
;
; Any Feature selected that ends in CMA will cause the TEMA Framework
; and specific Agent to be installed.
;
;*********************************************************************
;KGLWICMA=Tivoli Enterprise Monitoring Agent Framework
;KNTWICMA=Monitoring Agent for Windows OS
;KNT64CMA=Monitoring Agent for Windows OS (86-x64 only)
;KR2WICMA=Agentless Monitoring for Windows Operating Systems
;KR3WICMA=Agentless Monitoring for AIX Operating Systems
;KR4WICMA=Agentless Monitoring for Linux Operating Systems
;KR5WICMA=Agentless Monitoring for HP-UX Operating Systems
;KR6WICMA=Agentless Monitoring for Solaris Operating Systems
;KUMWICMA=Universal Agent
;KAC64CMA=32/64 Bit Agent Compatibility Package
;KUEWICMA=Tivoli Enterprise Services User Interface Extensions
where:
/z"/sf
Specifies the fully qualified name of the response file you edited. For example:
/z"/sfC:\temp\myresponse.txt"
/s
Specifies that this is a silent installation. This causes nothing to be displayed during
installation.
/f2
Specifies the name of the InstallShield log file. If you do not specify this parameter, the default
is to create Setup.log in the same location as the setup.iss file. In either case, the Setup
program must be able to create and write to this file.
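Putting these parameters together, a silent installation command might look like the following sketch; the response file and log file paths are illustrative only:
setup /z"/sfC:\temp\myresponse.txt" /s /f2"C:\temp\silent_setup.log"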
where
-f
-i
product
Is the two-letter code for the product to be uninstalled.
platformCode
Is the platform code for the product (such as aix513, sol286, hp11, and so forth: see
Appendix D, IBM Tivoli product, platform, and component codes, on page 567).
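For example, a hypothetical invocation that removes the UNIX OS agent (product code ux) built for AIX R5.3 (platform code aix533) might look like this; substitute your own product and platform codes:
./uninstall.sh -f -i ux aix533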
Repeat the command for each agent or component you want to remove on the target computer.
4. To remove all components and agents enter the following command:
uninstall.sh REMOVE EVERYTHING
When the uninstallation is complete, the uninstall command returns to the command prompt. Some
messages may be written to the screen. There may be additional steps, depending on the component
being uninstalled. For example, if you uninstall the Warehouse Proxy, the warehouse database is not
removed and historical situations on the agent are not stopped (see Uninstalling the Warehouse Proxy
on page 587).
If the uninstallation is unsuccessful, some messages may be written to the screen. See the installation log
in the install_dir/logs/product_nametime_stamp.log directory or the IBM Tivoli Monitoring:
Troubleshooting Guide for more information. If all components have been removed, the log is at the root.
Note: If, for any reason, the UNIX uninstallation is not successful, run the following command to remove
all Tivoli Monitoring directories:
rm -r install_dir
or
C:\winnt\system32\drivers\etc\Tivoli\setup_env.cmd
You can run this uninstallation program in silent mode (by running the program from the command line
with the -silent parameter) or in console mode (by using the -console parameter).
3. Follow the prompts in the uninstallation program.
When the uninstallation is completed, you can tell the installer what rule base should be loaded. If initial
installation created a new rule base, the value shown in Rule base name of rule base to be loaded on
completion of this uninstall will be Default, meaning that the Default rule base will be loaded. If the initial
installation updated an existing rule base, that rule base name is provided as the value for Rule base
name of rule base to be loaded on completion of this uninstall. You can override this value by typing in
the name of the rule base you want to have loaded.
You can also tell the uninstaller to stop and restart the event server.
You can run the silent uninstallation using default processing or create a template to change the default
values. The default processing will load the Default rule base (or the existing rule base that was chosen
during installation) and will not restart the TEC server.
To create and use a template:
1. Create the template:
v On Windows:
%BINDIR%\TME\TEC\OM_TEC\_uninst\uninstaller.exe options-template itmeventsynchU.txt
v On operating systems like UNIX or Linux:
$BINDIR/TME/TEC/OM_TEC/_uninst/uninstaller.bin options-template itmeventsynchU.txt
2. Modify the template as desired:
v To specify which rule base to load, modify the restartTECU.uRBN file.
v To automatically restart the event server, modify the restartTECU.restartTECU file.
3. Set the Tivoli environment.
4. Run the uninstallation program as follows:
v On Windows:
%BINDIR%\TME\TEC\OM_TEC\_uninst\uninstaller.exe options itmeventsynchU.txt silent
v On operating systems like UNIX or Linux:
$BINDIR/TME/TEC/OM_TEC/_uninst/uninstaller.bin options itmeventsynchU.txt silent
If your event server is running on an HP-UX computer, ensure that the _uninst and _jvm directories are
successfully removed by the uninstallation program. If they are not, manually delete these directories.
2. Run the following command to determine whether the operating system still knows that the event
synchronization component is there:
swlist -v TecEvntSyncInstaller
If it is there but all the code has been deleted, or just the uninstaller is deleted, you can try this
command:
swremove TecEvntSyncInstaller
3. If errors are returned saying that TecEvntSyncInstaller cannot be removed due to consistency or
dependency checks, create a file named something like "remove_EvntSync.txt" and add these two
lines:
enforce_dependencies=false
enforce_scripts=false
The -X option tells the swremove command to ignore checks and dependencies and remove the
files regardless.
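A hypothetical invocation that passes this options file to swremove would then be:
swremove -X remove_EvntSync.txt TecEvntSyncInstaller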
4. Remove any event synchronization directories that are left behind.
Remove any directories found in OM_TEC including OM_TEC itself. OM_TEC is found in
$BINDIR/TME/TEC. To use $BINDIR you must run the following command:
. /etc/Tivoli/setup_env.sh
If the installation was for Netcool/OMNIbus, remove files from the location indicated during
installation.
Windows
1. Stop the situation update forwarder long running process if it is still running:
a. In the Control Panel, open Administrative Tools, then Services.
b. Find the Tivoli Situation Update Forwarder service, and right-click it and select Stop.
2. Go to the operating system directory (C:\windows or C:\winnt) and open the vpd.properties file.
3. Remove all lines that have itmTecEvntSyncProduct, EvntSyncForwarder, itmTecEvntSyncLapComp
or EvntSyncForwarderWin in them.
4. Remove any event synchronization directories that are left behind.
Remove any directories found in OM_TEC including OM_TEC itself. OM_TEC is found in
%BINDIR%/TME/TEC. To use %BINDIR% you must run the
C:\windows\system32\drivers\etc\Tivoli\setup_env.cmd command. If the installation was for
Netcool/OMNIbus, remove the files from the location indicated during installation.
AIX
1. Stop the situation update forwarder long running process if it is still running:
a. Find the long running process using:
ps -ef
2. Go to the operating system directory (this is typically /usr/lib/objrepos) and open the vpd.properties file.
3. Remove all lines that have itmTecEvntSyncProduct, EvntSyncForwarder, itmTecEvntSyncLapComp
or EvntSyncForwarderWin in them.
4. Remove any event synchronization directories that remain.
Remove any directories found in OM_TEC including OM_TEC itself. OM_TEC is found in
$BINDIR/TME/TEC. To use $BINDIR you must run the following command:
. /etc/Tivoli/setup_env.sh
If the installation was for Netcool/OMNIbus, remove the files from the location specified during
installation.
Linux
1. Stop the situation update forwarder long running process if it is still running:
a. Find the long running process using:
ps -ef
2. Go to the operating system directory (this is typically / or /root) and open the vpd.properties file.
3. Remove all lines that have itmTecEvntSyncProduct, EvntSyncForwarder, itmTecEvntSyncLapComp
or EvntSyncForwarderWin in them.
4. Remove any event synchronization directories that are left behind.
Remove any directories found in OM_TEC including OM_TEC itself. OM_TEC is found in
$BINDIR/TME/TEC. To use $BINDIR you must run the following command:
. /etc/Tivoli/setup_env.sh
If the installation was for OMNIbus, remove the files from the location specified during installation.
Solaris
1. Stop the situation update forwarder long running process if it is still running:
a. Find the long running process using:
ps -ef
If the installation was for OMNIbus, remove the files from the location specified during installation.
v System P agents:
AIX Premium Agent User's Guide, SA23-2237
CEC Base Agent User's Guide, SC23-5239
HMC Base Agent User's Guide, SA23-2239
VIOS Premium Agent User's Guide, SA23-2238
v Other base agents:
Monitoring Agent for IBM Tivoli Monitoring 5.x Endpoint User's Guide, SC32-9490
Related publications
You can find useful information about related products in the IBM Tivoli Monitoring and OMEGAMON XE
Information Center at http://publib.boulder.ibm.com/infocenter/tivihelp/v15r1/.
Education offerings
A listing of all the current Tivoli Monitoring training can be found at
http://www.ibm.com/software/tivoli/education/edu_prd.html#M.
The training road maps for Tivoli Monitoring can be found at
http://www.ibm.com/software/tivoli/education/eduroad_prod.html#2.
Support Technical Exchange (STE) Seminars
Expand your technical understanding of your current Tivoli products, in a convenient format hosted by IBM
Tivoli Worldwide Support and Services. These live seminars are support-oriented discussions of product
information, deployment and troubleshooting tips, common issues, problem-solving resources, and other
support and service recommendations. Tivoli engineers and consultants who are subject matter experts for
the products discussed lead each STE. Each STE is recorded, and playback is available at any time. To
attend a live STE or review a previously recorded STE, go to
http://www.ibm.com/software/sysmgmt/products/support/supp_tech_exch.html.
Service offerings
There are several Services offerings for the Tivoli Monitoring product. Access the Services offerings and
additional details on some of the offerings at the following link:
http://www.ibm.com/software/tivoli/services/consulting/offers-availability.html#monitoring
IBM QuickStart Services for Tivoli Monitoring
This offering is designed to facilitate ease of deployment and rapid time to value for Tivoli
Monitoring, allowing you to begin monitoring and reporting on your essential system resources. It
provides an architecture and design recommendation for production, an implementation plan,
hands-on training, and a working test lab prototype using up to six standard resource models.
IBM Migration Assistance Services for Tivoli Monitoring
This new packaged service offering is tailored to help you obtain a clear and applicable
understanding of how to migrate an existing Tivoli-based monitoring environment to the new Tivoli
Monitoring technology.
Other resources
AA&BSM Enablement Best Practices website
http://www.ibm.com/software/tivoli/features/monitoring-best-practices/index.html
Tivoli AA&BSM Technical Exchange Wiki
http://www.ibm.com/developerworks/wikis/display/aabsmenbl/Home
IBM Tivoli Monitoring 6 Forum
http://www.ibm.com/developerworks/forums/dw_forum.jsp?forum=796&cat=15
6. The final dialog box asks whether you want to upload the collection file to IBM Support or
another FTP location. If you only want to view the collected files on your computer, choose Do Not
FTP the Logs.
7. The collection has finished. You can view the collected files by clicking the compressed file in the
Collector Status dialog box.
Obtaining fixes
A product fix might be available to resolve your problem. To determine which fixes are available for your
Tivoli software product, follow these steps:
1. Go to the IBM Software Support Web site at http://www.ibm.com/software/support.
2. Under Select a brand and/or product, select Tivoli.
3. Click the right arrow to view the Tivoli support page.
4. Use the Select a category field to select the product.
5. Select your product and click the right arrow that shows the Go hover text.
6. Under Download, click the name of a fix to read its description and, optionally, to download it.
If there is no Download heading for your product, supply a search term, error code, or APAR number
in the field provided under Search Support (this product), and click the right arrow that shows the
Go hover text.
For more information about the types of fixes that are available, see the IBM Software Support Handbook
at http://techsupport.services.ibm.com/guides/handbook.html.
By phone
Call 1-800-IBM-4You (1-800-426-4968).
Severity 3
The problem has some business impact. The program is usable, but less significant features (not
critical to operations) are unavailable.
Severity 4
The problem has minimal business impact. The problem causes little impact on operations, or a
reasonable circumvention to the problem was implemented.
Submitting problems
You can submit your problem to IBM Software Support in one of two ways:
Online
Click Submit and track problems on the IBM Software Support site at
http://www.ibm.com/software/support/probsub.html. Type your information into the appropriate
problem submission form.
By phone
For the phone number to call in your country, go to the contacts page of the IBM Software Support
Handbook at http://techsupport.services.ibm.com/guides/contacts.html and click the name of your
geographic region.
If the problem you submit is for a software defect or for missing or inaccurate documentation, IBM
Software Support creates an Authorized Program Analysis Report (APAR). The APAR describes the
problem in detail. Whenever possible, IBM Software Support provides a workaround that you can
implement until the APAR is resolved and a fix is delivered. IBM publishes resolved APARs on the
Software Support Web site daily, so that other users who experience the same problem can benefit from
the same resolution.
Appendix L. Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the
products, services, or features discussed in this document in other countries. Consult your local IBM
representative for information on the products and services currently available in your area. Any reference
to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785 U.S.A.
For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property
Department in your country or send inquiries, in writing, to:
Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan, Ltd.
1623-14, Shimotsuruma, Yamato-shi
Kanagawa 242-8502 Japan
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS"
WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE.
Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this
statement might not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically
made to the information herein; these changes will be incorporated in new editions of the publication. IBM
may make improvements and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in
any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of
the materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose of enabling: (i) the
exchange of information between independently created programs and other programs (including this one)
and (ii) the mutual use of the information which has been exchanged, should contact:
IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758 U.S.A.
Such information may be available, subject to appropriate terms and conditions, including in some cases
payment of a fee.
The licensed program described in this document and all licensed material available for it are provided by
IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any
equivalent agreement between us.
Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been
made on development-level systems and there is no guarantee that these measurements will be the same
on generally available systems. Furthermore, some measurement may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their
published announcements or other publicly available sources. IBM has not tested those products and
cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products.
Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.
All statements regarding IBM's future direction or intent are subject to change or withdrawal without notice,
and represent goals and objectives only.
All IBM prices shown are IBM's suggested retail prices, are current and are subject to change without
notice. Dealer prices may vary.
This information is for planning purposes only. The information herein is subject to change before the
products described become available.
This information contains examples of data and reports used in daily business operations. To illustrate
them as completely as possible, the examples include the names of individuals, companies, brands, and
products. All of these names are fictitious and any similarity to the names and addresses used by an
actual business enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing
application programs conforming to the application programming interface for the operating platform for
which the sample programs are written. These examples have not been thoroughly tested under all
conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these
programs. You may copy, modify, and distribute these sample programs in any form without payment to
IBM for the purposes of developing, using, marketing, or distributing application programs conforming to
IBMs application programming interfaces.
Each copy or any portion of these sample programs or any derivative work, must include a copyright
notice as follows:
(your company name) (year). Portions of this code are derived from IBM Corp. Sample Programs.
Copyright IBM Corp. _enter the year or years_. All rights reserved.
If you are viewing this information in softcopy form, the photographs and color illustrations might not be
displayed.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at
Copyright and trademark information at www.ibm.com/legal/copytrade.shtml.
Java and all Java-based trademarks and logos are trademarks or registered trademarks
of Sun Microsystems, Inc. in the United States, other countries, or both.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, and service names may be trademarks or service marks of others.
Accessibility features
The following list includes the major accessibility features in IBM Tivoli Monitoring:
v Keyboard-only operation
v Interfaces that are commonly used by screen readers
v Keys that are discernible by touch but do not activate just by touching them
v Industry-standard devices for ports and connectors
v The attachment of alternative input and output devices
The Tivoli Monitoring Information Center and its constituent publications are accessibility-enabled. The
accessibility features of the information center are described at http://publib.boulder.ibm.com/infocenter/
tivihelp/v15r1/.
Glossary
A
activity. One phase within a sequence of predefined
steps, called a policy, that automates system responses
to a situation that has fired (that is, become true).
administration mode. See workspace administration
mode on page 623.
Advanced Encryption Standard. An encryption
algorithm for securing sensitive but unclassified material
designed by the National Institute of Standards and
Technology (NIST) of the U.S. Department of
Commerce. AES is intended to be a more robust
replacement for the Data Encryption Standard. The
specification calls for a symmetric algorithm (in which
the same key is used for both encryption and
decryption), using block encryption of 128 bits and
supporting key sizes of 128, 192 and 256 bits. The
algorithm was required to offer security of a sufficient
level to protect data for the next 20 to 30 years. It had
to be easily implemented in hardware and software and
had to offer good defenses against various attack
techniques. AES has been published as Federal
Information Processing Standard (FIPS) 197, which
specifies the encryption algorithm that all sensitive,
unclassified documents must use.
AES. See Advanced Encryption Standard.
affinity. A label that classifies objects by managed
system.
agent. Software installed on systems you want to
monitor that collects data about an operating system,
subsystem, or application running on each such system.
Because an executable file gathers information about a
managed system, there is always a one-to-one
correspondence between them. Also called a Tivoli
Enterprise Monitoring Agent.
agentless monitor. An agentless monitor uses a
standard API (such as SNMP or CIM) to identify and
notify you of common problems with the operating
system running on a remote computer. Thus, as their
name implies, the agentless monitors can retrieve
monitoring and performance data without requiring OS
agents on the computers being monitored. The
agentless monitors provide monitoring, data gathering,
and event management for Windows, Linux, AIX,
HP-UX, and Solaris systems.
agentless monitoring server. A computer that has
one or more agentless monitors running on it. Each
agentless monitoring server can support up to 10 active
instances of the various types of agentless monitors, in
any combination. Each instance can communicate with
B
baroc files. Basic Recorder of Objects in C files define
event classes for a particular IBM Tivoli Enterprise
Console server. Baroc files also validate event formats
based on these event-class definitions.
browser client. The software installed with the Tivoli
Enterprise Portal Server that is downloaded to your
computer when you start Tivoli Enterprise Portal in
browser mode. The browser client runs under the
control of a Web browser.
C
Candle Management Workstation. The client
component of a CandleNet Command Center
environment; it provides the graphical user interface. It
is replaced by the Tivoli Enterprise Portal user interface
in the IBM Tivoli Monitoring environment.
capacity planning. The process of determining the
hardware and software configuration required to
accommodate the anticipated workload on a system.
chart. A graphical view of data returned from a
monitoring agent. A data point is plotted for each
attribute chosen and, for bar and pie charts, a data
series for each row. Types of charts include pie, bar,
plot, and gauge.
D
Data Encryption Standard. A widely used method of
private-key data encryption that originated at IBM in
1977 and was adopted by the U.S. Department of
Defense. DES supports 72 quadrillion or more possible
encryption keys; for each message, the key is chosen at
random from among this enormous number of possible
keys. Like all other private-key cryptographic methods,
both the sender and the receiver must know and use
the same private key.
DES applies a 56-bit key to each 64-bit block of data.
Although this is considered strong encryption, many
companies use triple DES, which applies three keys in
succession.
data source name. The name that is stored in the
database server and that enables you to retrieve
information from the database through ODBC. The DSN
includes such information as the database name,
database driver, user ID, and password.
data sources. Data pertaining to J2EE data sources,
which are logical connections to database subsystems.
data warehouse. A central repository for all or
significant parts of the data that an organization's
business systems collect.
E
EIB. See Enterprise Information Base.
EIF. See Event Integration Facility on page 616.
endcode. You assign endcodes in a policy when you
connect one activity to another. The endcode indicates
the result of this activity that triggers the next activity.
Enterprise Information Base. A database used by the
Tivoli Enterprise Monitoring Server that serves as a
repository of shared objects for all systems across your
enterprise. The EIB stores all persistent data, including
F
filter criteria. These criteria limit the amount of
information returned to the data view in response to a
query. You can apply a prefilter to the query to collect
G
georeferenced map. A special type of graphic that
has built-in knowledge of latitude and longitude and can
be zoomed into and out of quickly. The Tivoli Enterprise
Portal uses proprietary .IVL files generated with the
map-rendering component. These files cannot be
opened or saved in a graphics editor.
GSKit. The Global Security Toolkit provides SSL
(Secure Sockets Layer) processing within protocols
such as SPIPE and HTTPS. On z/OS systems, GSKit is
known as the Integrated Cryptographic Service Facility,
or ICSF.
H
historical collection. A definition for collecting and
storing data samples for historical reporting. The
historical collection identifies the attribute group, any
row filtering you have assigned, the managed system
distribution, frequency of data collection, where to
store it for the short term, and whether to save data
long term (usually to the Tivoli Data Warehouse).
historical data management. The procedures applied
to short-term binary history files that roll off historical
data to either the Tivoli Data Warehouse or to
delimited text files (the krarloff utility on UNIX or
Windows systems; ddname KBDXTRA for the z/OS
Persistent Datastore), and then delete entries in the
short-term history files over 24 hours old, thereby
making room for new entries.
hot standby. A redundant Tivoli Enterprise
Monitoring Server that, if the primary or hub
monitoring server should fail, assumes the
responsibilities of the failed monitoring server.
HTTP. The Hypertext Transfer Protocol is a suite of
Internet protocols that transfer and display hypertext
documents within Web browsers.
HTTP sessions. Data related to invocations of specific
World Wide Web sites.
HTTPS. The Secure Hypertext Transport Protocol is an
implementation of the Hypertext Transport Protocol
(HTTP) that relies on either the Secure Sockets Layer
(SSL) API or the Transport Layer Security (TLS) API to
provide your users with secure access to your site's
Web server. These APIs encrypt and then decrypt user
page requests as well as the pages returned by the
Web server.
hub Tivoli Enterprise Monitoring Server. (1) A
central host system that collects the status of situations
I
IBM Tivoli Monitoring. A client/server
implementation for monitoring enterprise-wide computer
networks that comprises a Tivoli Enterprise
Monitoring Server, an application server known as the
Tivoli Enterprise Portal Server, one or more Tivoli
Enterprise Portal clients, and multiple monitoring
agents that collect and distribute data to the monitoring
server.
IIOP. See Internet Inter-ORB Protocol.
input data. Data provided to the computer for further
processing. See also output data on page 619.
integral Web server. A proprietary Web server
developed for IBM Tivoli Monitoring that is installed and
configured automatically with the Tivoli Enterprise
Portal Server. You enter the URL of the integral Web
server to start the Tivoli Enterprise Portal client in
browser mode.
Internet Inter-ORB Protocol. An Internet
communications protocol that runs on distributed
platforms. Using this protocol, software programs written
in different programming languages and running on
distributed platforms can communicate over the Internet.
IIOP, a part of the CORBA standard, is based on the
client/server computing model, in which a client
program makes requests of a server program that waits
to respond to client requests. With IIOP, you can write
client programs that communicate with your site's
existing server programs wherever they are located
without having to understand anything about the server
other than the service it performs and its address
(called the Interoperable Object Reference, IOR, which
comprises the server's port number and IP address).
Interoperable Object Reference. Connects clients to
the Tivoli Enterprise Portal Server. The IOR identifies
a remote object, including such information as its name,
its capabilities, and how to contact it. Because the URL
passes through the Web server, it can include an IOR;
the portal server uses the URL to tell the client which
IOR to fetch, then extracts the host and port information
and tells the client where to route the request.
interval. The number of seconds that have elapsed
between one sample and the next.
IOR. See Interoperable Object Reference.
J
Java Database Connectivity. A standard API that
application developers use to access and update
relational databases (RDBMSes) from within Java
programs. The JDBC standard is based on the X/Open
SQL Call Level Interface (CLI) and complies with the
SQL-92 Entry Level standard; it provides a
DBMS-independent interface that enables
SQL-compliant database access for Java programmers.
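For illustration only, the following minimal Java sketch shows the JDBC pattern of obtaining a connection, issuing a query, and reading the result set. The DB2 URL, database name, user ID, password, and query are placeholders (assumptions), not IBM Tivoli Monitoring defaults, and the DB2 JDBC driver must be on the classpath.

   import java.sql.Connection;
   import java.sql.DriverManager;
   import java.sql.ResultSet;
   import java.sql.Statement;

   public class JdbcSketch {
       public static void main(String[] args) throws Exception {
           // Placeholder connection URL and credentials; substitute your own values.
           String url = "jdbc:db2://dbhost:50000/SAMPLEDB";
           try (Connection con = DriverManager.getConnection(url, "dbuser", "password");
                Statement stmt = con.createStatement();
                // SYSIBM.SYSDUMMY1 is a standard one-row DB2 catalog table.
                ResultSet rs = stmt.executeQuery(
                    "SELECT CURRENT TIMESTAMP FROM SYSIBM.SYSDUMMY1")) {
               while (rs.next()) {
                   System.out.println(rs.getString(1));
               }
           }
       }
   }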
Java Management Extensions. A set of Java classes
for application and network management in J2EE
environments. JMX provides Java programmers a set of
native Java tools called MBeans (managed beans) that
facilitate network, device, and application management.
JMX provides a Java-based alternative to the Simple
Network Management Protocol.
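As a minimal sketch of the MBean idea (not taken from IBM Tivoli Monitoring), the following Java program defines a standard MBean interface and registers an instance with the platform MBean server. The "example.monitoring" domain and the attribute shown are arbitrary placeholders.

   import java.lang.management.ManagementFactory;
   import javax.management.MBeanServer;
   import javax.management.ObjectName;

   public class JmxSketch {
       // Standard MBean convention: the management interface is named <Class>MBean.
       public interface SampleMBean {
           int getRequestCount();
       }

       public static class Sample implements SampleMBean {
           public int getRequestCount() {
               return 42;   // illustrative value only
           }
       }

       public static void main(String[] args) throws Exception {
           MBeanServer server = ManagementFactory.getPlatformMBeanServer();
           // Placeholder domain and key properties, not ITM names.
           ObjectName name = new ObjectName("example.monitoring:type=Sample");
           server.registerMBean(new Sample(), name);
           System.out.println("Registered " + name);
       }
   }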
JDBC. See Java Database Connectivity.
JMX. See Java Management Extensions.
L
LDAP. See Lightweight Directory Access Protocol.
Lightweight Directory Access Protocol. A protocol,
based on the International Organization for
Standardization (ISO) X.500 directory standard, that
uses TCP/IP to access directory databases where
applications can store and retrieve common naming
and location data. For example, applications can use
LDAP to access such directory information as email
addresses, service configuration parameters, and public
keys.
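For illustration only, the following Java sketch uses JNDI to search an LDAP directory for a user entry. The server address, search base, and filter are placeholders (assumptions); they are not values used by IBM Tivoli Monitoring.

   import java.util.Hashtable;
   import javax.naming.Context;
   import javax.naming.NamingEnumeration;
   import javax.naming.directory.InitialDirContext;
   import javax.naming.directory.SearchControls;
   import javax.naming.directory.SearchResult;

   public class LdapSketch {
       public static void main(String[] args) throws Exception {
           Hashtable<String, String> env = new Hashtable<>();
           env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
           // Placeholder directory server and port.
           env.put(Context.PROVIDER_URL, "ldap://ldaphost:389");

           InitialDirContext ctx = new InitialDirContext(env);
           SearchControls controls = new SearchControls();
           controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
           // Placeholder search base and filter.
           NamingEnumeration<SearchResult> results =
               ctx.search("ou=people,o=example", "(uid=jsmith)", controls);
           while (results.hasMore()) {
               System.out.println(results.next().getNameInNamespace());
           }
           ctx.close();
       }
   }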
location broker. The component that manages
connections for the hub monitoring server, enabling it to
find all other Tivoli Management Services
components, including remote monitoring servers, the
Tivoli Enterprise Portal Server, and monitoring
agents.
M
managed object. An icon created in the Tivoli
Enterprise Portal from a managed object template that
represents resources you monitor using situations.
Managed objects are converted to items in the
Navigator's Logical view.
managed system. A particular operating system,
subsystem, or application in your enterprise where a
monitoring agent is installed and running. A managed
system is any system that IBM Tivoli Monitoring is
monitoring.
managed system group. (Formerly managed system
list.) A named, heterogeneous group of both similar and
dissimilar managed systems organized for the
N
NAT. See Network Address Translation.
O
object. An instance of a class, which comprises an
implementation and an interface. An object reflects the
class from which it was created, holding data and
methods, and responds to requests for services. CORBA
defines an object as a combination of state and a set of
methods characterized by the behavior of relevant
requests.
ODBC. See Open Database Connectivity on page
619.
P
PDS. See Persistent Datastore.
PerfMon. See Performance Monitor.
performance. A major factor in measuring system
productivity. Performance is determined by a
combination of throughput, response time, and
availability.
Performance Monitor (PerfMon). The Windows
Performance Monitor is a performance-monitoring tool
for Windows environments that collects data from
Windows performance counters for resources such as
processors, memory, disks, and network interfaces.
Persistent Datastore. A set of z/OS data sets where
IBM Tivoli Monitoring running on z/OS systems stores
historical monitoring data.
platform. The operating system on which the
managed system is running, such as z/OS or Linux.
The Navigator's Physical mapping places the platform
level under the Enterprise level.
policy. A set of automated system processes that can
perform actions, schedule work for users, or automate
manual tasks, frequently in response to events.
Policies are the IBM Tivoli Monitoring automation tool;
they comprise a series of automated steps, called
activities, whose order of execution you control.
In most cases, a policy links a Take Action command
to a situation that has turned true. When started, the
policy's workflow progresses until all activities have
been completed or until the Tivoli Enterprise Portal user
manually stops the policy. You can create both policies
that fully automate workflow strategies and those that
require user intervention. As with situations, policies are
distributed to the managed systems you want to
monitor and to which you are sending commands.
private situation. A situation that is defined in an
XML-based private configuration file for the local Tivoli
Enterprise Monitoring Agent or Tivoli System Monitor
Agent and that does not interact with a Tivoli
Enterprise Monitoring Server. Such events can be
sent via either EIF or SNMP alerts to a receiver such
as IBM Tivoli Enterprise Console or Netcool/OMNIbus.
See also situation on page 621.
product code. The three-letter code used by IBM
Tivoli Monitoring to identify the product component. For
example, the product code for IBM Tivoli Monitoring for
WebSphere Application Server is KWE.
Q
query. A particular view of specified attributes of
selected instances of a set of managed-object classes,
arranged to satisfy a user request. Queries are written
using SQL.
R
remote deployment. Using IBM Tivoli Monitoring
software, you can deploy agents and other non-agent,
Tivoli Management Services-based components to
remote nodes without your having to sign onto those
nodes and perform the installation and configuration
steps yourself. Remote deployment requires two pieces
on the destination node: (1) a bundle containing the
component code and the instructions for installing and
configuring it and (2) an operating-system agent to read
the bundle and perform the installation and
configuration steps.
Remote Procedure Call. A protocol based on the
Open Software Foundation's Distributed Computing
Environment (DCE) that allows one program to request
services from a program running on another computer
in a network. RPC uses the client/server model: the
requesting program is the client, and the responding
program is the server. As with a local procedure call
(also known as a function call or a subroutine call),
an RPC is a synchronous operation: the requesting
program is suspended until the remote procedure
returns its results.
remote Tivoli Enterprise Monitoring Server. A
remote monitoring server collects monitoring data from
a subset of your site's monitoring agents and passes its
collected data to the hub Tivoli Enterprise Monitoring
Server to be made available to one or more Tivoli
Enterprise Portal clients through the Tivoli Enterprise
Portal Server, thereby creating an enterprise-wide view.
rolloff. The transfer of monitoring data to a data
warehouse.
RPC. See Remote Procedure Call.
RTE. See runtime environment.
S
sample. The data that the monitoring agent collects
for the monitoring server instance. The interval is the
time between data samplings.
sampled event. Sampled events happen when a
situation becomes true. Situations sample data at
regular intervals. When the situation becomes true, it
opens an event, which gets closed automatically when
the situation goes back to false (or when you close it
manually). See also event on page 616.
Secure Sockets Layer. A security protocol for
communication privacy that provides secure
client/server conversations.
seed data. The product-provided situations,
templates, policies, and other sample data included
with a monitoring agent to initialize the Tivoli
Enterprise Monitoring Server's Enterprise
Information Base. Before you can use a monitoring
agent, the monitoring server to which it reports must be
seeded, that is, initialized with application data.
server. An application that satisfies data and service
requests from clients.
SELinux. The National Security Agency's
security-enhanced Linux (SELinux) is a set of patches
to the Linux kernel plus utilities that together incorporate
a strong, flexible mandatory access control (MAC)
architecture into the kernel's major subsystems.
SELinux enforces the separation of information based
on confidentiality and integrity requirements, which
allows attempts to tamper with or bypass application
security mechanisms to be recorded and enables the
confinement of damage caused by malicious or flawed
applications.
Simple Network Management Protocol. A TCP/IP
transport protocol for exchanging network management
data and controlling the monitoring of network nodes in
a TCP/IP environment. The SNMP software protocol
facilitates communications between different types of
networks. IBM Tivoli Monitoring uses SNMP messaging
to discover the devices on your network and their
availability.
Simple Object Access Protocol. The Simple Object
Access Protocol is an XML-based interface that vendors
use to bridge remote procedure calls between
T
Take Action. A Tivoli Enterprise Portal dialog box from
which you can enter a command or choose from a list
of predefined commands. It also lists the systems on
which to run the command, which is usually issued in
response to an event.
Take Action command. A Take Action command
allows you to send commands to your managed
systems, either automatically, in response to a
situation that has fired (that is, turned true), or
manually, as the Tivoli Enterprise Portal operator
requires. With Take Action commands, you can enter a
command or select one of the commands predefined by
your product and run it on any system in your managed
network. Thus you can issue Take Action commands
either against the managed system where the situation
fired or a different managed system in your network.
target libraries. SMP/E-controlled libraries that contain
the data installed from the distribution media.
task. A unit of work representing one of the steps in a
process.
TCP/IP. See Transmission Control Protocol/Internet
Protocol.
TDW. See Tivoli Data Warehouse.
telnet. A terminal emulation program used on TCP/IP
networks. You can start a telnet session with another
system and enter commands that execute on that
system. A valid user ID and password for that remote
system are required.
threshold. (1) A level set in the system at which a
message is sent or an error-handling program is called.
For example, in a user auxiliary storage pool, the user
can set the threshold level in the system values, and
the system notifies the system operator when that level
is reached. (2) A customizable value for defining the
acceptable tolerance limits (maximum, minimum, or
reference limit) for an application resource or system
resource. When the measured value of the resource is
greater than the maximum value, less than the minimum
value, or equal to the reference value, an exception is
raised.
Tivoli Data Warehouse. This member of the IBM
Tivoli Monitoring product family stores Tivoli Monitoring
agents' monitoring data in separate relational
database tables so you can analyze historical trends
using that enterprise-wide data. Reports generated from
Tivoli Data Warehouse data provide information about
the availability and performance of your monitored
environment over different periods of time.
U
User Datagram Protocol. A TCP/IP communications
protocol that exchanges messages ("datagrams")
between networked computers linked by the Internet
Protocol (IP). UDP is an alternative to the Transmission
Control Protocol (TCP), which, like UDP, uses IP to
move a message from one computer to another. Unlike
TCP, however, UDP does not divide the message into
packets and reassemble them at the other end.
The Network File System uses UDP to move file
contents and file updates between the NFS server and
the NFS client.
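As a minimal illustration of datagram delivery (not part of the product), the following Java sketch sends a single UDP datagram. The destination host and port are placeholders.

   import java.net.DatagramPacket;
   import java.net.DatagramSocket;
   import java.net.InetAddress;
   import java.nio.charset.StandardCharsets;

   public class UdpSketch {
       public static void main(String[] args) throws Exception {
           byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
           // Placeholder destination host and port.
           InetAddress target = InetAddress.getByName("localhost");
           DatagramSocket socket = new DatagramSocket();
           DatagramPacket packet =
               new DatagramPacket(payload, payload.length, target, 9999);
           socket.send(packet);   // one datagram; delivery is not guaranteed
           socket.close();
       }
   }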
UDP. See User Datagram Protocol.
V
value of expression. A function in a situation
condition, query specification, or data view filter or
threshold that uses the raw value of an attribute. A
value can be a number, text string, attribute, or modified
attribute. Use this function with any operator.
W
Warehouse Proxy agent. One of the IBM Tivoli
Monitoring base agents, the Warehouse Proxy agent
passes historical monitoring data from either a
monitoring agent or the Tivoli Enterprise Monitoring
Server to the Tivoli Data Warehouse. This
multithreaded server process can handle concurrent
requests from multiple data sources to roll off data from
their short-term history files to the data warehouse.
X
XML. See Extensible Markup Language on page
616.
Z
z/OS. IBM's operating system for its line of zSeries
mainframe computers, known for its processing speed
and its ability to manage large amounts of memory,
direct-access storage, and data.
Index
Special characters
/3GB boot option 12
/etc/hosts file 469
<bind> element 560
<connection> element 561
<gateway> element 559
<interface> element 560
<portpool> element 561
<zone> element 560
A
AC (agent compatibility) component 23, 185, 186
errors 185, 187
access plan 318
accessibility features for this product 611
activating a firewall gateway 558
Active Directory, Microsoft
See Microsoft Active Directory
adding application support
for nonbase agents 199
itmcmd support command 156
Linux desktop client 211
Linux monitoring server 128, 155, 204
Linux portal server 208
to a remote monitoring server from Linux or
UNIX 214
to a remote monitoring server from Windows 212
UNIX monitoring server 128, 155, 204
Windows desktop client 209
Windows monitoring server 128, 200
Windows portal server 206
advanced configuration
Linux 275
UNIX 275
Advanced Encryption Standard 125, 613
AES
See Advanced Encryption Standard
agent autonomous mode 15
fully connected agent 15
Managed Mode 15
partially connected agent 15
Unmanaged Mode 15
Agent Builder CD 7
Agent Compatibility Package
See AC (agent compatibility) component
agent deployment 237
deploying OS agents 242
managing agent depot 240
OS agents, deploying 242
populating agent depot 237
sharing an agent depot 241
tacmd createNode command 242
Tivoli Universal Agent 245
agent depot 237
DEPOTHOME environment variable 240
location 240
B
backing up a current installation
UNIX or Linux 121
Windows 120
backing up the portal server database 122
backing up the Tivoli Data Warehouse database 122
backup monitoring servers 15
baroc files 614
for Tivoli Enterprise Console 64
update history 65
barrier firewall 551
Base DVDs
version 6.2.1 15
C
cache, clearing 318
caching
dynamic SQL 318
Windows system 315
Candle Management Workstation 614
Candle Management Workstation coexistence 133
CandleNet Command Center 72
enabling historical collection 72
CandleNet Portal database, upgrading 134
cardinality 314
cat and atr files
location of on Windows 213
catalog and attribute (cat and atr) files 196
catalog files 13
change user /execute command 86
CHNGPGS_THRESH parameter 322
CIM
See Common Information Model
cinfo command 547, 570
circular logging, definition and purpose 316
Citrix client 86
class, Sentry2_0_Base 84, 468
Classic REORG 316
client/server architecture 614
clustered index 313
clustering of IBM Tivoli Monitoring components 15
coexistence with version 6.2.2 124
Cold Backup 49
COLLECT STATISTICS clause 317
command
sitconfuser 510
tacmd createNode 243
commands 87, 121, 126, 128, 132, 136, 154
change user /execute 86
itmcmd agent 192, 265
itmcmd config 547
commands (continued)
itmcmd manage 261
itmcmd server 155, 156, 265
itmcmd support 156
sitconfig.sh 484
sitconfuser.sh 484
tacmd
installation 21
packaging 21
tacmd addBundles 240
tacmd createNode 242
common event connector
described 12
Common Event Console view 8, 12
Tivoli Enterprise Portal 461
Common Information Model 54, 614
Common Installer 544
Common Object Request Broker Architecture 614, 621
common services components 3
communications
securing 94
security options for 93
components 6, 11, 20, 24, 119, 517, 522
Agent Builder 7
Agents DVD 7, 15, 58, 198
browser client 6
deciding which to upgrade 117
desktop client 6
event synchronization 8
Infrastructure DVD 7, 15, 198
installing an agent 38, 189
monitoring agents 6
monitoring server 5
portal client 5, 6
portal server 5
Summarization and Pruning agent 7
Tivoli Data Warehouse 7
Tivoli Enterprise Portal 5
Tivoli Enterprise Portal Server 5
Tivoli Universal Agent 6
use with System Monitor Agents 521, 527
Warehouse Proxy 7
configuration fields
Netcool/OMNIbus event synchronization 497
Configuration Tool, z/OS 614
configuration values
validation 13
configuration variables, agent 571
configuring
agents 264
hub monitoring server, Linux 154
hub monitoring server, UNIX 154
Linux, silent 547
monitoring agents 264
monitoring agents on Linux 190
monitoring agents on UNIX 190
monitoring server 262
monitoring server, Linux 161
password authentication 94
port number assignments 266
portal server 279
configuring (continued)
portal server on Linux or AIX 171, 175
remote monitoring server, UNIX 161
response file, using 547
UNIX, silent 547
configuring a firewall gateway 558
configuring Netcool/OMNIbus Object Server
error event flow 508
for program execution from scripts 502
updating the Netcool/OMNIbus database
schema 503
configuring the desktop client 221
connection
inbound 318
outbound 318
pooling 318
containers, definition and purpose 314
contents of installation media 11, 13
control block information 321
controllers, using separate 319
CORBA
See Common Object Request Broker Architecture
COUNT and SKIP 38
CPU utilization 319
creating a partition file 555
creating a workspace 454
Crystal Reports 7
CT/Service Console 267
CTIRA_HEARTBEAT environment variable 271
CURRENT DEGREE special register 321
customer support
See Software Support
D
daily health checks 78
Data Encryption Standard 125, 615
data integrity, importance of logs 315
data size, calculating 319
data warehouse
creating database objects for 341
creating functions for 341
creating indexes for 341
creating table inserts for 341
creating tables for 341
creating views for 341
data warehouse ratio 319
Database Managed Space (DMS) table space 314
database objects, creating manually 341
database parameters 321
database server 614
database size, estimating 335
databases
configuration parameters 316
heap 316
reorganization 316
databases, supported
portal server 107
Tivoli Data Warehouse 107
DB2 Control Center 320
E
Eclipse Help Server 12, 110, 120, 121, 125, 145, 164,
171, 567
education 599
offerings 600
EGG1 encryption scheme 96
EIB 31
See Enterprise Information Base
EIB tables 577
EIF
See Event Integration Facility
EIF Probe 36
EIF probe, configuring 505
Enable Multipage File Allocation tool 315
enabling application support
for nonbase agents 199
F
file descriptors limit 93
file system caching 315
file, Sentry.baroc 481
files
atr 213
cat 213
catalog and attribute 196
seedkpp.log file 214
SQL 196
FIPS 197 613
Firefox browser 14
tips for using with Tivoli Monitoring 223
versions supported 22, 98, 112
firewall gateway 37
activation 558
configuration 39, 558
configuration for Warehouse Proxy 562
determine if you require 29
example configuration scenario 562
KDE_GATEWAY configuration variable 558
usage scenarios 558
uses 557
XML configuration document 559
Firewall gateway 37
firewall gateway configuration document 559
firewall support 556
G
georeferenced map 616
GET SNAPSHOT command 324
get_config_parms procedure 509
Global Location Broker 447
Global Security Toolkit
See GSKit
Global Zone 92
globalization, installing support 218
GROUP BY clause 317
GSK_V3_CIPHER_SPECS parameter 125
GSKit 95, 280, 616
32-bit environments 23, 96, 187
64-bit environments 23, 96, 187
installation requirements 106
installation with Solaris zones 92
H
HACMP
See High-Availability Cluster Multiprocessing
hard disk drive footprint 333
hardware design 318
hardware requirements
distributed systems 109
System z 111
hash JOINs 321
health checks 77
daily 78
monthly 78
quarterly 79
weekly 78
heaps 319, 321
heartbeat 31
tracking 31
heartbeat interval 270
CTIRA_HEARTBEAT environment variable 271
performance issues 271
setting the interval 271
heartbeat monitoring 270
performance issues 271
Help Server
See Eclipse Help Server
high availability and disaster recovery 47
agent and remote monitoring server
considerations 49
hub monitoring server considerations 48
portal server considerations 48
Summarization and Pruning agent
considerations 51
Tivoli Data Warehouse considerations 50
Warehouse Proxy agent considerations 50
high speed link 319
High-Availability Cluster Multiprocessing 15
historical data 7
host variables 318
Hot Standby monitoring servers 15
hot-standby monitoring server 616
HTTP 616
port assignments 269
HTTP daemon 267
KDE_TRANSPORT environment variable for 267
HTTPS 92, 94, 95, 266, 267, 280, 616, 621
port assignments 269
HTTPS daemon 267
KDE_TRANSPORT environment variable for 267
hub monitoring server
adding application support on Linux 155
adding application support on UNIX 155
configuring on Linux 154
configuring on UNIX 154
installing 148
installing on Linux 153
installing on UNIX 153
installing on Windows 148
planning worksheet, Linux or UNIX 531
planning worksheet, Windows 530
Hypertext Transport Protocol
See HTTP
I
I/O operations, parallel 320
I/O, minimizing 315, 319
indexes (continued)
clustered 313
columns 317
dropped 318
extensive updates 317
new 317
perfectly ordered 316
rebuilt, reorganizing with REORG 316
statistics 313, 317
synchronized statistics 317
information to gather before installing 83
install
See deployment
installation
hardware requirements, distributed systems 109
hardware requirements, System z 111
information to gather 83
Linux considerations 87
monitoring server name guidelines 84
operating systems, supported 96
order of install 86
overview of the process 83
required information 83
requirements 96
software requirements 111
UNIX considerations 87
Windows considerations 86
installation media
changes to 11, 13
contents of 11, 13
installation steps 147
installing 147
application support 199
desktop client 193
globalization support 218
hub monitoring server 148
language packs 218
Linux monitoring server 153, 160
monitoring agents 183
monitoring agents on Linux 189, 190
monitoring agents on UNIX 189, 190
monitoring agents on Windows 184
planning worksheets 529
portal server 162
remote monitoring server 157
silent 541
UNIX monitoring server 153, 160
Windows monitoring server 148, 157
installing all components on a single Windows
computer 139
installing application support
catalog and attribute files 196
seed data 196
SQL files 196
installing into Solaris zones 92
installing monitoring agent .baroc files on the event
server 481
installing the event synchronization component
on Netcool/OMNIbus Object Server 494
InstallPresentation.sh 125
integral Web server 617
J
Java Database Connectivity 617
Java Management Extensions API 617
Java Runtime Environment, required 131
Java Runtime Environment, Sun 14
Java Web Start 221
using to download the desktop client 233
JDBC
See Java Database Connectivity
JMX
See Java Management Extensions API
JRE
enabling tracing for 234
K
KBB_RAS1 trace setting 448
kcirunas.cfg file 88
KDC_FAMILIES configuration variable 562
L
language packs, installing 218
large environments 40
large-scale deployment 60
LDAP
See Lightweight Directory Access Protocol
ldedit command 275
libraries
IBM Tivoli Monitoring 595
License Manager, IBM Tivoli 3
Lightweight Directory Access Protocol 617
link, high speed 319
Linux
/etc/hosts file 91
adding application support 155
backing up a current installation 121
configuring 547
configuring desktop client 196
configuring monitoring agents 190
configuring portal desktop client 196
configuring the portal server 171, 175
EIB tables 577
file descriptors limit 93
file permissions for monitoring agents 191
fully qualified path names 91
host names, setting 91
hub monitoring server 153, 154
installation considerations 87
installation user account 90
installing as root 90
installing monitoring agents 190
installing the desktop client from installation
media 194
installing the portal server 170
maxfiles parameter 93
monitoring agents 189
network interface cards 91
NFS environment 91
Linux (continued)
planning 87
portal server installation 169
remote monitoring server 160, 161
response file configuration 547
response file installation 545
silent configuration 547
silent installation 544, 545
starting monitoring agents 192
starting portal server 182
TCP/IP network services 91
Linux monitoring server
firewall support 556
KDC_PARTITION 556
NAT 556
network address translation (NAT) 556
LOBs, DB2 315
Local Location Broker 447
localhost requirements
Linux 277
location broker 552
Location Broker
Global 447
Local 447
lock list 322
LOCKLIST parameter 322
Log Buffer parameter 316
log retain logging, definition and purpose 316
LOGBUFSZ parameter 316, 322
LOGFILSIZ parameter 316, 323
logging
circular 316
definition and purpose 315
log retain 316
mirroring 316
performance 316
records 316
LOGPRIMARY parameter 323
logs
for desktop client 233
lz.config file 559
M
mainframe 61
deployment considerations 61
maintenance 75, 77
installing 236
post-upgrade 76
pre-upgrade 75
maintenance utilities for the database 316
Manage Tivoli Enterprise Monitoring Services 261
adding application support 212, 214
agents, configuring 264
application support, adding 212, 214
defining SOAP hubs 293
editing the partition file
UNIX and Linux 556
Windows 555
itmcmd manage command 261
monitoring agents, configuring 264
Monitoring components 30
Firewall gateway 37
IBM Tivoli Monitoring 5.x Endpoint 36
Netcool/OMNIbus integration 36
Summarization and Pruning agent 35, 86
Tivoli Data Warehouse 36
Tivoli Enterprise Console integration 36
Tivoli Enterprise Monitoring Agent 34
Tivoli Enterprise Monitoring Server 31
Tivoli Enterprise Portal client 33
Tivoli Enterprise Portal Server 32
Tivoli Universal Agent 37
Warehouse Proxy agent 35, 86
monitoring environment
Tivoli Enterprise Console 467
monitoring server
adding application support on Linux 128, 155, 204
adding application support on UNIX 128, 155, 204
adding application support on Windows 128, 200
agent depot, location 240
agent depot, populating 237
communications protocol planning worksheet 540
configuration file 150, 158, 164, 185, 194, 240, 241,
249, 250, 271, 446, 447, 553
configuring 262
configuring on Linux 154, 161
configuring on UNIX 154, 161
defined 5
defining additional to the event server
Netcool/OMNIbus 510
DEPOTHOME environment variable 240
EIB tables 577
environment file 150, 158, 164, 185, 194, 240, 241,
249, 250, 271, 446, 447, 553
file descriptor limit on UNIX 93
forwarding events to Netcool/OMNIbus 508
forwarding events to Tivoli Enterprise Console 482
heartbeat interval, configuring 270
installing on Linux 153, 160
installing on UNIX 153, 160
installing on Windows 148, 157
KBBENV configuration file 150, 158, 164, 185, 194,
240, 241, 249, 250, 271, 446, 447, 553
DEPOTHOME keyword 41, 150
naming 84
starting 155, 265
stopping 156, 265
Tivoli Event Integration Facility 482, 508
monthly health checks 78
motherboard 319
MPP (Massively Parallel Processors) 319
MSCS
See Microsoft Cluster Server
MTTRAPD trap receiver 462
multiple network interface cards 91, 286
portal server interface, defining 286, 287
multiple nodes 319
multiprocessor systems 109
N
naming conventions 71
naming the monitoring server 84
NAT 286, 556
See Network Address Translation
Netcool SSM agents
deployment of 251, 252
Netcool/OMNIbus 24, 53, 517, 522, 621
event integration scenarios 17
information required for event forwarding 84
integration 492
MTTRAPD trap receiver for 462, 621
planning 492
verifying installation of event synchronization
component 511
Netcool/OMNIbus configuration, customizing 509
Netcool/OMNIbus event synchronization configuration
fields 497
Netcool/OMNIbus integration 36
Netcool/OMNIbus Object Server
configuring for event integration 502
configuring for program execution from scripts 502
configuring the EIF probe 505
configuring the error event flow 508
updating the database schema 503
Network Address Translation 618
and firewalls 552
network address translation (NAT) 286, 556
portal server interface, defining 286, 287
Network File System 618
network interface cards
specifying 91
network, monitoring bandwidth 320
NFS
See Network File System
NFS environment, installing into 91
required permissions 92
NIC 286
portal server interface, defining 286, 287
NOCACHE option 315
node ID 217
nodes, multiple 319
non-agent bundles 23
Linux and UNIX 531, 533
Windows 530, 532
non-NIS Solaris monitoring server, configuring
permissions for 275
non-OS agents 6
deploying 244
non-sequential physical data pages 316
NUM_IOCLEANERS parameter 322
NUM_IOSERVERS parameter 322
O
ODBC 35
See Open Database Connectivity
ODBC connection
DB2 for the workstation data warehouse 360
Microsoft SQL data warehouse 404
P
packages
cache for dynamic and static SQL 322
invalid 318
static SQL 318
page cleaners, asynchronous 322
pre-deployment (continued)
staffing 67
task estimates 63
Tivoli Universal Agent deployments 59
understanding your network 28
pre-installation 69
prefetch
definition 314
list 314
list sequential 314
quantity 317
sequential 314
prelink SELinux command 105
prerequisites
for upgrading 117
hardware 96
software 96
private situation 619
problem determination and resolution 605
problem resolution 603
processor configurations 319
processor requirements 109
proddsc.tbl 93, 547
product codes 567
Linux 93
UNIX 93
product documentation 599
product overview 3
project management and planning 63
protocols
validation 13
public/private key pair 280
publication tool, schema 340
publications
developerWorks Web site 597
OPAL 597
Redbooks 597
related 597
Technotes 597
types 595
wikis 597
Q
quarterly health checks 79
queries 317
queue delays, causes 319
R
RAID device 315
ratios, scaling up 319
raw data size, calculating 319
rc.itm script 87
read operations, excessive 316
REBIND utility
place in order of utilities 318
purpose 318
SQL types supported 318
rebinding, explicit and implicit 318
recovery, roll-forward 316
resources (continued)
IBM Tivoli Monitoring 6 Welcome Kit 599
other 601
product documentation 599
Redbooks 599
service offerings 600
support Web sites 599
response file
use with schema publication tool 341
response time, delayed 319
reverse synchronization 12
roll-forward recovery 316
RPC
See Remote Procedure Call
rule base
creating new 480
importing existing rule base into 480
modifying 481
RUNSTATS utility
formats 317
list of purposes 317
options 317
precaution 317
S
SAMP
See System Automation for Multiplatforms
sample deployment scenarios
event integration 461
SAMPLE DETAILED clause 317
scalability 318, 319
scaling up ratios 319
scenarios
event integration 461
partition files 554
secureMain 580
upgrade 115
schema publication tool 340
updated mode 342
scripts
autostart 87
secondary monitoring servers 15
Secure Hypertext Transport Protocol
See HTTPS
secure pipe
See SPIPE
secure protocols
Internet Inter-ORB Protocol (IIOP) 94
Interoperable Object Reference (IOR) protocol 94
Secure Hypertext Transport Protocol (HTTPS) 94
Secure Sockets Layer API 53, 92, 94, 95, 173, 280,
287, 517, 616, 620, 621
asymmetric cryptography 280
digital certificates 280
disabling 281
enabling 281
Global Security Toolkit (GSKit) 280
public/private key pair 280
secureMain utility 579
examples 580
SSO
See single sign-on capability
staffing 67
Star JOINs 321
starting
Situation Update Forwarder process 511
starting Manage Tivoli Enterprise Monitoring Services
itmcmd manage command 261
on Linux 261
on UNIX 261
on Windows 261
starting monitoring agents
itmcmd agent command 192
starting the monitoring server
itmcmd server command 155
starting the portal server 182
startSUF command 511
stash file 96
statistics
collected for RUNSTATS 317
comparing current and previous 317
distribution 317
index 317
queries 317
types of collections 317
stopping
Situation Update Forwarder process 511
stopping the monitoring server
itmcmd server command 156
stopSUF command 511
Structured Query Language 621
SUF
See Situation Update Forwarder
Summarization and Pruning agent 7, 35, 86, 123
configuration 73
configuration for autonomous operation 448
configuring communications for DB2 for the
workstation 371
configuring JDBC connection 435
considerations 44
deployment task estimates 64
description of 347
flexible scheduling 13
high availability and disaster recovery
considerations 51
installing on DB2 for the workstation 371
log trimming 13
product code 567
role in Tivoli Data Warehouse 7
starting 444
Sun Java Runtime Environment 14
version 1.6.0_10 or higher
browser requirements 227
support assistant 603
support Web sites 599
supported databases
portal server 107
Tivoli Data Warehouse 107
supported operating systems 96
swapping 319
switches 324
T
table spaces
containers 314
definition and purpose 314
multiple page sizes 315
types 314
tacmd addBundles command 188, 240, 246, 252
tacmd addSystem command 251, 252, 255, 256
tacmd AddSystem command 188
tacmd clearDeployStatus command 256, 257
tacmd commands
errors 77, 189
installation 21
packaging 21
remote deployment 255
tacmd configureSystem command 247, 251, 256
tacmd createNode command 242, 246, 251, 252, 255
requirements 242
using 243
tacmd createsystemlist command 618
tacmd deletesystemlist command 618
tacmd editsystemlist command 618
tacmd executecommand command
installing application support for an IBM Tivoli
Composite Application Manager agent 197
tacmd getDeployStatus command 249, 256
U
UDP
See User Datagram Protocol
Uni-Processor 319
uninstallation 581
agents 587
component 584
Linux 585
UNIX 585
Warehouse Proxy 587
Windows 584
from the portal 587
monitoring component 584
Linux 585
UNIX 585
Warehouse Proxy 587
Windows 584
monitoring environment 581
Linux 583
UNIX 583
Windows 581
Warehouse Proxy 587
uninstalling
event synchronization 590
event synchronization component
manually 591
IBM Tivoli Monitoring environment 581
ODBC data source 587
OMEGAMON Monitoring Agents 585
Warehouse Proxy agent 587
UNIX
/etc/hosts file 91
adding application support 155
backing up a current installation 121
configuring 547
configuring monitoring agents 190
EIB tables 577
file descriptors limit 93
file permissions for configuring monitoring
agents 191
fully qualified path names 91
host names, setting 91
hub monitoring server 153, 154
installation considerations 87
installation user account 90
installing as root 90
installing monitoring agents 190
maxfiles parameter 93
monitoring agents 189
monitoring server parameter 275
network interface cards 91
NFS environment 91
planning 87
remote monitoring server 160, 161
response file configuration 547
response file installation 545
silent configuration 547
silent installation 544, 545
Solaris zones
installing into 92
starting monitoring agents 192
TCP/IP network services 91
UNIX environment 64
UNIX monitoring server
configuring permissions for non-NIS Solaris 275
V
validation
configuration values 13
protocols 13
variables
KDC_FAMILIES 553
KDE_TRANSPORT 553
KPX_WAREHOUSE_LOCATION 553
variables, host 318
versioning 59
view
topology 117
virtual memory
increasing for large environments on AIX 275
VMWare ESX Server 48, 103, 104
W
Warehouse load projection spreadsheet 42
Warehouse Proxy
removing 587
tuning performance 456
uninstalling 587
Warehouse Proxy agent 7, 35, 86, 123
configuration 73, 445
configuration for autonomous operation 448
port assignment for 449
configuration with firewall gateway 562
considerations 42
deployment task estimates 64
file descriptor limit on UNIX 93
high availability and disaster recovery
considerations 50
installing 445
product code 567
role in Tivoli Data Warehouse 7
support for multiple 445
uninstalling 587
verifying configuration 447
Warehouse load projection spreadsheet 42
Warehouse Summarization and Pruning agent
See Summarization and Pruning agent
WAREHOUSEAGGREGLOG table 458, 459
ENDTIME column 459
LOGTMZDIFF column 459
MAXWRITETIME column 459
MINWRITETIME column 459
OBJECT column 459
ORIGINNODE column 459
ROWSREAD column 459
STARTTIME column 459
WAREHOUSELOG query 455
WAREHOUSELOG table 458
ENDQUEUE column 458
ENDTIME column 458
ERRORMSG column 458
EXPORTTIME column 458
OBJECT column 458
ORIGINNODE column 458
ROWSINSERTED column 458
worksheets (continued)
Windows remote monitoring server 532
workspace, WAREHOUSELOG 455
workspaces, creating 454
world permission
required for NFS access 92
X
XML
See Extensible Markup Language
Z
z/OS server 64
Printed in USA
GC32-9407-03