AIX Advanced Admin
2011-06-20
Advanced Administration And Problem Determination
1. The Object Data Manager
2. Error monitoring
3. System initialization
4. Disk management
-The Object Data Manager (ODM) is a database intended for storing system information.
-Physical and logical device information is stored and maintained through the use of objects with associated characteristics.
ODM
SMIT menus
NIM
PdDv - Predefined Devices
PdAt - Predefined Attributes
PdCn - Predefined Connections
CuDv - Customized Devices
CuAt - Customized Attributes
CuVPD - Customized Vital Product Data
CuDvDr - Customized Device Drivers
CuDep - Customized Dependencies
Config_Rules - Configuration rules
[Diagram: the customized ODM database — CuDv, CuAt, CuDep, CuDvDr, CuVPD — driven by Config_Rules]
When an AIX system boots, the Configuration Manager (cfgmgr) is responsible for configuring devices.
There is one ODM object class which the cfgmgr uses to determine the correct sequence when configuring devices: Config_Rules. This ODM object class also contains information about various methods files used for device management.
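The rules can be inspected with the odmget command. A minimal sketch follows; since odmget exists only on AIX, the stanza below is illustrative sample output captured in a shell variable (the exact attributes and rule paths vary by AIX level):

```shell
# On AIX, dump the configuration rules:
#   odmget Config_Rules
# Illustrative stanza of the kind odmget prints (values are examples only):
sample=$(cat <<'EOF'
Config_Rules:
        phase = 1
        seq = 2
        boot_mask = 0
        rule = "/usr/lib/methods/defsys"
EOF
)
echo "$sample"
```

Each stanza names the configuration phase, a sequence number, and the method to run, which is how cfgmgr determines the configuration order.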
chgstatus: This flag indicates whether the device instance has been altered since the last system boot. The diagnostics facility uses this flag to validate the system configuration. The flag can take these values:
- chgstatus = 0 (New device)
- chgstatus = 1 (Don't care)
- chgstatus = 2 (Same)
- chgstatus = 3 (Device is missing)
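Devices flagged as missing can be listed directly from CuDv. The odmget call is AIX-only, so the sketch below applies the same chgstatus filter with awk to sample odmget output (the device names are hypothetical):

```shell
# On AIX: odmget -q "chgstatus=3" CuDv
# Equivalent filter over sample odmget output:
sample='CuDv:
        name = "hdisk2"
        chgstatus = 3
CuDv:
        name = "hdisk0"
        chgstatus = 2'
missing=$(echo "$sample" | awk '/name =/ {gsub(/"/, "", $3); n = $3} /chgstatus = 3/ {print n}')
echo "$missing"
```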
2. Error monitoring
2. Error monitoring

Error labels and their type/class codes:
DISK_ERR2, DISK_ERR3 : P
DISK_ERR4 : T
SCSI_ERR* : S, P
LVM_SA_QUORCLOSE : H, P
2. Error monitoring
# smit errclear
Examples of /etc/syslog.conf entries:

daemon.debug	/tmp/daemon.debug	(collect all daemon messages in /tmp/daemon.debug)
*.debug;mail.none	@server	(send all messages, except mail messages, to host server)
# errpt
IDENTIFIER TIMESTAMP  T C RESOURCE_NAME DESCRIPTION
...
C6ACA566   0505071399 U S syslog        MESSAGE REDIRECTED FROM SYSLOG
...
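The errpt summary can be post-processed with standard tools. Since errpt is AIX-only, this sketch inlines the sample output above as text and extracts the error identifiers:

```shell
sample='IDENTIFIER TIMESTAMP  T C RESOURCE_NAME DESCRIPTION
C6ACA566   0505071399 U S syslog        MESSAGE REDIRECTED FROM SYSLOG'
# Skip the header line and print the first column (the error identifier):
ids=$(echo "$sample" | awk 'NR > 1 {print $1}')
echo "$ids"
```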
3. System initialization
Hardware error.
Bootstrap code :
System p and pSeries systems can run several different operating systems; the hardware is not bound to the software. The first block of the boot disk contains bootstrap code that is loaded into RAM during the boot process. This part is sometimes referred to as the System Read-Only Storage (ROS). The bootstrap code then gets control. The task of this code is to locate the boot logical volume on the disk and load the boot image. In some technical manuals, this second part is called the Software ROS.
RAMFS :
This RAMFS is a reduced or miniature root file system which is loaded into memory and used as if it were a disk-based file system. The contents of the RAMFS are slightly different depending on the type of system boot.
Reduced ODM :
The boot logical volume contains a reduced copy of the ODM. During the boot process, many devices are configured before hd4 is available. For these devices, the corresponding ODM files must be stored in the boot logical volume.
Maintenance
1. Access a Root Volume Group
2. Copy a System Dump to Media
3. Access Advanced Maintenance
4. Install from a System Backup
Rebuild BLV
Maintenance mode
If the boot logical volume is corrupted (for example, bad blocks on a disk might cause a corrupted BLV), the machine will not boot.
To fix this situation, you must boot your machine in maintenance mode, from a CD or tape. If NIM has been set up for a machine, you can also boot the machine from a NIM master in maintenance mode. NIM is actually a common way to do special boots in a logical partition environment.
3. Clear the boot record at the beginning of the disk:
# chpv -c hdisk0
4. Create a new hd5 logical volume: one physical partition in size, in rootvg, with outer edge as intra-policy. Specify boot as the logical volume type:
# mklv -y hd5 -t boot -a e rootvg 1
5. Run the bosboot command as described on the visual:
# bosboot -ad /dev/hdisk0
6. Check the actual boot list and reboot the system:
# bootlist -m normal -o
# sync; sync; shutdown -Fr
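The steps above can be collected into a small script. This is only a sketch (it assumes hdisk0 carries rootvg, as in the example); because the commands exist only on AIX, the script is written to a file and syntax-checked rather than executed:

```shell
cat > /tmp/rebuild_blv.sh <<'EOF'
#!/bin/sh
# Rebuild the boot logical volume on hdisk0 (assumption: rootvg boot disk)
set -e
chpv -c hdisk0                      # clear the old boot record
mklv -y hd5 -t boot -a e rootvg 1   # new hd5: 1 PP, outer edge, type boot
bosboot -ad /dev/hdisk0             # write a new boot image into hd5
bootlist -m normal -o               # display the normal-mode boot list
EOF
sh -n /tmp/rebuild_blv.sh && echo "syntax OK"
```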
[Diagram: boot phases rc.boot 1, rc.boot 2, and rc.boot 3 over the RAM file system (/, etc, dev, mnt, usr)]
3. System initialization
Boot sequence
The visual shows the boot sequence after loading the AIX kernel from the boot image:
1. The kernel restores a RAM file system into memory using information provided in the boot image. At this stage the rootvg is not available, so the kernel works with commands provided in the RAM file system. You can consider this RAM file system a small AIX operating system.
2. The kernel starts the init process provided in the RAM file system (not the one from the root file system). This init process executes the boot script rc.boot.
3. rc.boot controls the boot process. In the first phase (called by init as rc.boot 1), the base devices are configured. In the second phase (rc.boot 2), the rootvg is activated (varied on).
4. After activating the rootvg at the end of rc.boot 2, the kernel overmounts the RAM file system with the file systems from rootvg. The init from the boot image is replaced by the init from the root file system, hd4.
5. This init processes the /etc/inittab file. From this file, rc.boot is called a third time (rc.boot 3), and all remaining devices are configured.
[Diagram: LED codes during rc.boot 1 — 548, 510, 511]
3. System initialization
rc.boot phase 1 actions
The init process started from the RAM file system executes the boot script rc.boot 1. If init fails for some reason (for example, a bad boot logical volume), c06 is shown on the LED display. The following steps are executed when rc.boot 1 is called:
1. The restbase command is called, which copies the ODM from the boot image into the RAM file system. After this step, an ODM is available in the RAM file system. The LED shows 510 if restbase completes successfully; otherwise, LED 548 is shown.
2. When restbase has completed successfully, the configuration manager (cfgmgr) is run with the option -f (first). cfgmgr reads the Config_Rules class and executes all methods that are stored under phase=1. Phase 1 configuration methods configure the base devices into the system, so that the rootvg can be activated in the next rc.boot phase.
3. System initialization
rc.boot phase 1 actions
3. Base devices are all devices that are necessary to access the rootvg. If the rootvg is stored on hdisk0, all devices from the motherboard to the disk itself must be configured in order to access the rootvg.
4. At the end of rc.boot 1, the system determines the last boot device by calling bootinfo -b. The LED shows 511.
[Diagram: rc.boot 2 (part 1) — rootvg file systems (hd4: /, hd6, dev, etc, mnt, usr, var); copycore: if a dump exists, copy it]
3. System initialization
rc.boot phase 2 actions (part 1)
rc.boot is run a second time and is passed the parameter 2. The LED shows 551. The following steps take place in this boot phase:
1. The rootvg is varied on with ipl_varyon, a special version of the varyonvg command designed to handle rootvg. If ipl_varyon completes successfully, 517 is shown on the LED; otherwise, 552, 554, or 556 is shown and the boot process stops.
2. The root file system, hd4, is checked by fsck. The option -f means that the file system is checked only if it was not unmounted cleanly during the last shutdown. This improves boot performance. If the check fails, LED 555 is shown.
3. Afterwards, /dev/hd4 is mounted directly onto the root of the RAM file system. If the mount fails, for example due to a corrupted JFS log, LED 557 is shown and the boot process stops.
3. System initialization
rc.boot phase 2 actions (part 1)
4. Next, /dev/hd2 is checked and mounted (again with the option -f, so it is checked only if the file system wasn't unmounted cleanly). If the mount fails, LED 518 is displayed and the boot stops.
5. Next, the /var file system is checked and mounted. This is necessary at this stage because the copycore command checks whether a dump occurred. If a dump exists in a paging space device, it is copied from the dump device, /dev/hd6, to the copy directory, which by default is /var/adm/ras. /var is unmounted afterwards.
[Diagram: rc.boot 2 (part 2) — rootvg file systems (hd4: /, hd6, dev, etc with ODM, mnt, usr, var); boot messages copied to the alog file]
3. System initialization
rc.boot phase 2 actions (part 2)
After the paging space /dev/hd6 has been made available, the following tasks are executed in rc.boot 2:
1. To understand these steps, remember two things:
- /dev/hd4 is mounted onto the root of the RAM file system.
- In rc.boot 1, cfgmgr was called and all base devices were configured. This configuration data was written into the ODM of the RAM file system.
Now, mergedev is called and all /dev files from the RAM file system are copied to disk.
2. All customized ODM files from the RAM file system ODM are copied to disk as well. At this stage, both ODMs (in hd5 and hd4) are in sync.
3. System initialization
rc.boot phase 2 actions (part 2)
3. The /var file system (hd9var) is mounted.
4. All messages generated during the boot process are copied into a special file. You must use the alog command to view this file:
# alog -t boot -o
As no console is available at this stage, all boot information is collected in this file. When rc.boot 2 is finished, the /, /usr, and /var file systems in rootvg are active.
Final stage
At this stage, the AIX kernel removes the RAM file system (returns the memory to the free memory pool) and starts the init process from the / file system in rootvg.
[Diagram: rc.boot 3 — init reads /etc/inittab (LED 553); LED 551 during file system work; the ODM in /etc/objrepos is synchronized to hd5 via savebase]
3. System initialization
rc.boot phase 3 actions (part 1)
At this boot stage, the /etc/init process is started. It reads the /etc/inittab file (LED 553 is displayed) and executes the commands line by line. It runs rc.boot a third time, passing the argument 3, which indicates the last boot phase. rc.boot 3 executes the following tasks:
1. The /tmp file system is checked and mounted.
2. The rootvg is synchronized with syncvg -v rootvg. If rootvg contains any stale partitions (for example, because a disk that is part of rootvg was not active), these partitions are updated and synchronized. syncvg is started as a background job.
3. The configuration manager is called again. If the key switch or boot mode is normal, cfgmgr is called with option -p 2 (phase 2). If the key switch or boot mode is service, cfgmgr is called with option -p 3 (phase 3).
3. System initialization
rc.boot phase 3 actions (part 1)
4. The configuration manager reads the ODM class Config_Rules and executes all methods for either phase=2 or phase=3. All remaining devices that are not base devices are configured in this step.
5. The console is configured by cfgcon. The numbers c31, c32, c33, or c34 are displayed depending on the type of console:
- c31: Console not yet configured. Provides instructions to select a console.
- c32: Console is an lft terminal.
- c33: Console is a tty.
- c34: Console is a file on the disk.
If CDE is specified in /etc/inittab, the CDE is started and you get a graphical boot on the console.
6. To synchronize the ODM in the boot logical volume with the ODM from the / file system, savebase is called.
[Diagram: rc.boot 3 (part 2) — start syncd 60 and errdemon; turn off the LEDs; rm /etc/nologin; sync the ODM to hd5; if a device that was previously detected could not be found, run diag -a; finally, "System initialization completed"]
3. System initialization
rc.boot phase 3 actions (part 2)
After the ODMs have been synchronized again, the following steps take place:
1. The syncd daemon is started. All data written to disk is first stored in a cache in memory before being written to the disk. The syncd daemon flushes the data from the cache to disk every 60 seconds. The error daemon (errdemon) is also started.
2. The LED display is turned off.
3. If the file /etc/nologin exists, it is removed. If a system administrator creates this file, logins to the AIX machine are not possible. During the boot process, /etc/nologin is removed.
4. If devices exist that are flagged as missing in CuDv (chgstatus=3), a message is displayed on the console. For example, this can happen if external devices are not powered on during the system boot.
5. The last message, System initialization completed, is written to the console. rc.boot 3 is finished. The init process executes the next command in /etc/inittab.
# logform -V jfs2 /dev/hd8
# fsck -y -V jfs2 /dev/hd1
# fsck -y -V jfs2 /dev/hd2
# fsck -y -V jfs2 /dev/hd3
# fsck -y -V jfs2 /dev/hd4
# fsck -y -V jfs2 /dev/hd9var
# fsck -y -V jfs2 /dev/hd10opt
# fsck -y -V jfs2 /dev/hd11admin
Do not use an editor to change /etc/inittab. Use mkitab, chitab, rmitab instead.
3. System initialization
Modifying /etc/inittab
Do not use an editor to change the /etc/inittab file. One small mistake in /etc/inittab, and your machine will not boot. Instead, use the commands mkitab, chitab, and rmitab to edit /etc/inittab. Consider the following examples:
- To add a line to /etc/inittab, use the mkitab command. For example:
# mkitab "myid:2:once:/usr/local/bin/errlog.check"
- To change /etc/inittab so that init will ignore the line tty1, use the chitab command:
# chitab "tty1:2:off:/usr/sbin/getty /dev/tty1"
- To remove the line tty1 from /etc/inittab, use the rmitab command. For example:
# rmitab tty1
Viewing /etc/inittab
The lsitab command can be used to view the /etc/inittab file. For example:
# lsitab dt
dt:2:wait:/etc/rc.dt
If you issue lsitab -a, the complete /etc/inittab file is shown.
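Each /etc/inittab entry has four colon-separated fields: identifier, run level, action, and command. A quick sketch of splitting the sample entry above:

```shell
entry='dt:2:wait:/etc/rc.dt'
id=$(echo "$entry" | cut -d: -f1)        # identifier
action=$(echo "$entry" | cut -d: -f3)    # action keyword (once, wait, off, ...)
command=$(echo "$entry" | cut -d: -f4)   # command to run
echo "$id $action $command"
```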
3. System initialization
Boot problem management
Check → User action

- Bootlist wrong? Power on, press F1, select Multi-Boot, and select the correct boot device.
- /etc/inittab or /etc/environment corrupt? Access the rootvg. Check /etc/inittab (empty, missing, or corrupt). Check /etc/environment.
- Boot logical volume or boot record corrupt? Access the rootvg and re-create the BLV: # bosboot -ad /dev/hdiskXX
- JFS/JFS2 log corrupt? Access the rootvg before mounting the rootvg file systems. Re-create the JFS/JFS2 log (# logform -V jfs /dev/hd8 or # logform -V jfs2 /dev/hd8) and run fsck afterwards.
- Superblock corrupt? (LED 552, 554, 556) Run fsck against all rootvg file systems. If fsck indicates errors (not an AIX file system), repair the superblock as described in the notes.
- rootvg locked? Access the rootvg and unlock it: # chvg -u rootvg
- ODM files missing or inaccessible? Restore the missing files from a system backup.
- File system mount problems? Check /etc/filesystems. Check the network (remote mounts), the file systems (fsck), and the hardware.
3. System initialization
Notes:
Superblock corrupt?
Another thing you can try is to check the superblocks of your rootvg file systems. If you boot into maintenance mode and get error messages like "Not an AIX file system" or "Not a recognized file type", the cause is probably a corrupted superblock in the file system.
Each file system has two superblocks. Executing fsck should automatically recover the primary superblock by copying from the backup superblock. The following is provided in case you need to do this manually.
For JFS, the primary superblock is in logical block 1 and the copy is in logical block 31. To manually copy the superblock from block 31 to block 1 for the root file system (in this example), issue the following command:
# dd count=1 bs=4k skip=31 seek=1 if=/dev/hd4 of=/dev/hd4
For JFS2, the locations are different. To manually recover the primary superblock from the backup superblock for the root file system (in this example), issue the following command:
# dd count=1 bs=4k skip=15 seek=8 if=/dev/hd4 of=/dev/hd4
3. System initialization
Notes:
rootvg locked?
Many LVM commands place a lock into the ODM to prevent other commands from working on the same object at the same time. If a lock remains in the ODM due to a crash of a command, this can lead to a hanging system.
To unlock the rootvg, boot in maintenance mode and access the rootvg with file systems. Then issue the following command to unlock the rootvg:
# chvg -u rootvg
4. Disk management
4. Disk management
Notes:
The LVCB and the getlvcb command
The LVCB stores attributes of a logical volume. The getlvcb command queries an LVCB.
Example on visual
In the example on the visual, the getlvcb command is used to obtain information from the logical volume hd2. The information displayed includes the following:
- Intra-policy
- Number of copies (1 = no mirroring)
- Inter-policy
- Number of logical partitions (10)
- Whether the partitions can be reorganized (relocatable = y)
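On AIX, the call would be getlvcb -AT hd2. Since that command is AIX-only, the sketch below parses illustrative output of that general form (attribute values taken from the notes above; the exact field names can differ by release):

```shell
# On AIX: getlvcb -AT hd2
# Illustrative output, parsed for the copy count:
sample='intrapolicy = c
copies = 1
interpolicy = m
lps = 10
relocatable = y'
copies=$(echo "$sample" | awk '/^copies/ {print $3}')
echo "$copies"
```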
[Diagram: where LVM data lives — VGDA/LVCB on disk versus the ODM and /etc/filesystems; low-level commands change the disk control blocks, names are matched to IDs, and the ODM is updated; exportvg removes the ODM entries]
4. Disk management
Notes:
High-level commands
Most of the LVM commands used when working with volume groups, physical volumes, or logical volumes are high-level commands. These high-level commands (like mkvg, extendvg, mklv, and others listed on the visual) are implemented as shell scripts and use names to reference a certain LVM object. The ODM is consulted to match a name, for example rootvg or hdisk0, to an identifier.
Interaction with disk control blocks and the ODM
The high-level commands call intermediate or low-level commands that query or change the disk control blocks (VGDA or LVCB). Additionally, the ODM has to be updated, for example, to add a new logical volume. The high-level commands contain signal handlers to clean up the configuration if the program is stopped abnormally. If a system crashes, or if high-level commands are stopped by kill -9, the system can end up in a situation where the VGDA/LVCB and the ODM are not in sync. The same situation can occur when low-level commands are used incorrectly.
4. Disk management
Notes:
The importvg and exportvg commands
The importvg command imports a complete new volume group based on the VGDA and LVCB on a disk. The exportvg command removes a complete volume group from the ODM.
VGDA and LVCB corruption
The focus in this course is on situations where the ODM is corrupted, and we assume that the LVM control data (for example, the VGDA or the LVCB) is correct. If an attempted execution of LVM commands (for example lsvg, varyonvg, or reducevg) results in a failure with a core dump, that could be an indication that the LVM control data on one of the disks has become corrupted. In this situation, do not attempt to resync the ODM using the procedures covered here. In most cases, you will need to recover from a volume group backup. If recovery from backup is not a viable option, it is suggested that you work with AIX Support. Attempting to use the procedures covered in this unit will not solve the problem; worse, you will likely propagate the corruption to other disks in the volume group, making the situation even worse.
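For a non-rootvg volume group whose ODM entries are suspect (but whose VGDA and LVCB are intact), the usual resynchronization is an export followed by an import. A sketch, assuming a volume group named datavg on hdisk2 (both hypothetical names); the commands are AIX-only, so the script is only syntax-checked here:

```shell
cat > /tmp/resync_vg.sh <<'EOF'
#!/bin/sh
set -e
varyoffvg datavg            # deactivate the volume group
exportvg datavg             # remove its ODM entries
importvg -y datavg hdisk2   # rebuild the ODM from the VGDA and LVCB
varyonvg datavg             # activate it again
EOF
sh -n /tmp/resync_vg.sh && echo "syntax OK"
```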
4. Disk management
Notes:
Problems in rootvg
For ODM problems in rootvg, finding a solution is more difficult because rootvg cannot be varied off or exported. However, it may be possible to fix the problem using one of the techniques described below.
The rvgrecover procedure
If you detect ODM problems in rootvg, you can try using the procedure called rvgrecover. You may want to code this in a script (shown on the visual) in /bin and mark it executable.
The rvgrecover procedure removes all ODM entries that belong to your rootvg by using odmdelete. That is the same way exportvg works. After deleting all ODM objects for rootvg, it imports the rootvg by reading the VGDA and LVCB from the boot disk. This results in completely new ODM objects that describe your rootvg.
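The rvgrecover script itself is referenced but not reproduced in this text. The following is a hedged reconstruction built from the odmdelete/importvg steps described in this unit (hdisk0 as the boot disk is an assumption); it is syntax-checked only, since the ODM commands exist only on AIX:

```shell
cat > /tmp/rvgrecover <<'EOF'
#!/bin/sh
# Hedged reconstruction of an rvgrecover-style script (assumption: hdisk0)
PV=hdisk0
VG=rootvg
# Delete the rootvg ODM objects, the same way exportvg works:
lqueryvg -p $PV -L | awk '{print $2}' | while read LVname; do
    odmdelete -q "name=$LVname" -o CuAt
    odmdelete -q "name=$LVname" -o CuDv
    odmdelete -q "value3=$LVname" -o CuDvDr
done
odmdelete -q "name=$VG" -o CuAt
odmdelete -q "parent=$VG" -o CuDv
odmdelete -q "name=$VG" -o CuDv
odmdelete -q "name=$VG" -o CuDep
odmdelete -q "dependency=$VG" -o CuDep
odmdelete -q "value1=10" -o CuDvDr
odmdelete -q "value3=$VG" -o CuDvDr
# Rebuild the ODM objects from the VGDA and LVCB on the boot disk:
importvg -y $VG $PV
varyonvg -f $VG
EOF
sh -n /tmp/rvgrecover && echo "syntax OK"
```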
4. Disk management
RAM disk maintenance mode (1 of 3)
With the rootvg, the corruption problem may prevent a normal boot to multiuser mode. Thus, you may need to handle this situation in RAM disk maintenance mode (boot into maintenance mode from the CD-ROM or NIM). Before attempting this, you should make sure you have a current mksysb backup.
Use the steps in the following table (which are similar to those in the rvgrecover script shown on the visual) to recover the rootvg volume group after booting to maintenance mode and mounting the file systems.
1. Delete all of the ODM information about the logical volumes. Get the list of logical volumes from the VGDA of the physical volume:
# lqueryvg -p hdisk0 -L | awk '{print $2}' | while read LVname; do
> odmdelete -q "name=$LVname" -o CuAt
> odmdelete -q "name=$LVname" -o CuDv
> odmdelete -q "value3=$LVname" -o CuDvDr
> done
4. Disk management
RAM disk maintenance mode (2 of 3)
2. Delete the volume group information from the ODM:
# odmdelete -q "name=rootvg" -o CuAt
# odmdelete -q "parent=rootvg" -o CuDv
# odmdelete -q "name=rootvg" -o CuDv
# odmdelete -q "name=rootvg" -o CuDep
# odmdelete -q "dependency=rootvg" -o CuDep
# odmdelete -q "value1=10" -o CuDvDr
# odmdelete -q "value3=rootvg" -o CuDvDr
3. Add the volume group associated with the physical volume back to the ODM:
# importvg -y rootvg hdisk0
4. Recreate the device configuration database in the ODM from the information on the physical volume:
# varyonvg -f rootvg
4. Disk management
RAM disk maintenance mode (3 of 3)
This assumes that hdisk0 is part of rootvg.
In CuDvDr:
- value1 = major number
- value2 = minor number
- value3 = object name for the major/minor number
rootvg always has value1 = 10.
These steps can also be used to recover other volume groups by substituting the appropriate physical volume and volume group information. It is suggested that this example be made into a script.
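The CuDvDr fields can be checked with odmget. As odmget is AIX-only, the sketch below parses an illustrative stanza (the fields follow the value1/value2/value3 layout described above; rootvg's major number 10 comes from the notes):

```shell
# On AIX: odmget -q "value3=rootvg" CuDvDr
sample='CuDvDr:
        resource = "devno"
        value1 = "10"
        value2 = "0"
        value3 = "rootvg"'
# Extract value1, the major number:
major=$(echo "$sample" | awk -F'"' '/value1/ {print $2}')
echo "$major"
```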
4. Disk management
Disk replacement : Starting point
A disk must be replaced...
- Disk mirrored? Yes → Procedure 1
- Disk still working? Yes → Procedure 2
- Disk in missing or removed state? → Procedure 3
- Total rootvg failure? → Procedure 4
- Total non-rootvg failure? → Procedure 5
4. Disk management
Procedure 1 : Disk mirrored
1. Remove all copies from the disk:
# unmirrorvg vg_name hdiskX
2. Remove the disk from the volume group:
# reducevg vg_name hdiskX
3. Remove the disk from the ODM:
# rmdev -dl hdiskX
4. Connect the new disk to the system (you may have to shut down if it is not hot-pluggable).
5. Add the new disk to the volume group:
# extendvg vg_name hdiskY
6. Create new copies:
# mirrorvg vg_name hdiskY
# syncvg -v vg_name
4. Disk management
Procedure 2 : Disk still working
1. Connect the new disk to the system.
2. Add the new disk to the volume group:
# extendvg vg_name hdiskY
3. Migrate the old disk to the new disk:
# migratepv hdiskX hdiskY
4. Remove the old disk from the volume group:
# reducevg vg_name hdiskX
5. Remove the old disk from the ODM:
# rmdev -dl hdiskX
4. Disk management
Procedure 2 : Disk still working (special steps for rootvg)
1. Connect the new disk to the system.
2. Does the disk contain hd5? If so:
# migratepv -l hd5 hdiskX hdiskY
# bosboot -ad /dev/hdiskY
# chpv -c hdiskX
# bootlist -m normal hdiskY
3. If the disk contains the primary dump device, you must deactivate the dump device:
# sysdumpdev -p /dev/sysdumpnull
4. Migrate the old disk to the new disk:
# migratepv hdiskX hdiskY
5. After completing the migration, activate the dump device again:
# sysdumpdev -p /dev/hdX
4. Disk management
Procedure 3 : Disk in missing or removed state
1. Identify all LVs and file systems on the failing disk:
# lspv -l hdiskX
2. Unmount all file systems on the failing disk:
# umount /dev/lv_name
3. Remove all file systems and LVs from the failing disk:
# smit rmfs
# rmlv lv_name
4. Remove the disk from the volume group:
# reducevg vg_name hdiskX
5. Remove the old disk from the system:
# rmdev -dl hdiskX
6. Add the new disk to the volume group:
# extendvg vg_name hdiskZ
7. Recreate all LVs and file systems on the new disk:
# mklv -y lv_name
# smit crfs
8. Restore the file systems from backup:
# restore -rvqf /dev/rmt0
4. Disk management
Procedure 4 : Total rootvg failure
1. Replace the bad disk.
2. Boot into maintenance mode.
3. Restore from a mksysb tape.
4. Import each volume group into the new ODM (importvg), if needed.
4. Disk management
Procedure 5 : Total non-rootvg failure
1. Export the volume group from the system:
# exportvg vg_name
2. Check /etc/filesystems.
3. Remove the bad disk from the ODM and the system:
# rmdev -dl hdiskX
4. Connect the new disk.
5. If a volume group backup is available (savevg):
# restvg -f /dev/rmt0 hdiskY
6. If no volume group backup is available, recreate:
- The volume group (mkvg)
- The logical volumes and file systems (mklv, crfs)