Sol Unix 2
This modifies /etc/coreadm.conf, which is read at boot when /etc/init.d/coreadm is executed from a run-control script. To make permanent changes to coreadm, do not edit the /etc/coreadm.conf file directly; use the coreadm command.
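For example, a hedged illustration (the global core file path pattern is only an example):
# coreadm -e global -g /var/cores/core.%f.%p   (enable global core dumps and set the global core file pattern)
# coreadm   (display the current core dump configuration)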
passwd -l user_login_name   (lock the user account)
passwd -s user_login_name   (show the password status for the account)
(dtlogin script: Solaris does not use the startx command; it uses the dtlogin script located in the /etc/init.d directory. dtlogin is the display manager for the X Window System.)
# /usr/dt/bin/dtconfig -e   (enable)
# /usr/dt/bin/dtconfig -d   (disable)
NFS daemons:
1. mountd - Handles file system mount requests from remote systems and provides access control (server).
2. nfsd - Handles client file system requests (both client and server).
3. statd - Works with the lockd daemon to provide crash recovery functions for the lock manager (server).
4. nfslogd - Provides file system logging; runs only if one or more file systems is mounted with the log attribute.
5. biod - On the client end, handles asynchronous I/O for blocks of NFS files.
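A quick hedged illustration of these daemons at work (host name and paths are placeholders): share a file system on the server (served by mountd/nfsd) and mount it on a client (client-side I/O handled by biod):
# share -F nfs -o rw /export/home        (on the server)
# mount -F nfs server1:/export/home /mnt (on the client)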
SDS (Solstice DiskSuite)
Explain RAID 0, RAID 1, RAID 5
RAID 0 Concatenation/Striping
RAID 1 Mirroring
RAID 5 - Striped array with rotating parity
Concatenation: Concatenation is the joining of two or more disk slices to add up the disk space. Concatenation is serial in nature, i.e. sequential data operations are performed serially on the first disk, then the second disk, and so on. Because of this serial nature, new slices can be added without having to back up the entire concatenated volume, add the slice, and restore the backup.
Striping: Spreading of data over multiple disk drives, mainly to enhance performance by distributing data in alternating chunks (e.g. a 16k interleave) across the stripes. Sequential data operations are performed in parallel on all the stripes by reading/writing 16k data blocks alternately from the disk stripes.
Mirroring: Mirroring provides data redundancy by simultaneously writing data onto two submirrors of a mirrored device. A submirror can be a stripe or a concatenated volume, and a mirror can have up to three submirrors. The main concern here is that a mirror needs as much space again as the volume being mirrored.
RAID 5: RAID 5 provides data redundancy and the advantage of striping, and uses less space than mirroring. A RAID 5 volume is made up of at least three disks, which are striped with parity information written alternately on all the disks. In case of a single disk failure the data can be rebuilt using the parity information from the remaining disks.
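A hedged sketch of how each layout is created with SDS metainit (metadevice names d10/d11/d20-d22/d30 and slice names are placeholders):
Stripe (RAID 0) of two slices with a 16k interleave:
# metainit d10 1 2 c1t1d0s0 c1t2d0s0 -i 16k
Concatenation of two slices:
# metainit d11 2 1 c1t1d0s0 1 c1t2d0s0
Mirror (RAID 1) - create two submirrors, build the mirror on one, then attach the other:
# metainit d21 1 1 c1t1d0s0
# metainit d22 1 1 c1t2d0s0
# metainit d20 -m d21
# metattach d20 d22
RAID 5 across three slices:
# metainit d30 -r c1t1d0s0 c1t2d0s0 c1t3d0s0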
How many state database replicas should be created for RAID 5 in SDS if I have 5 disks?
No. of disk drives     No. of state database replicas to create
One                    Three, all on one slice
Two to four            Two on each drive
Five or more           One on each drive
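So with five disks, one replica per drive is enough. A hedged example (slice names are placeholders; -f is needed only when the first replica is created):
# metadb -a -f c1t0d0s7
# metadb -a c1t1d0s7 c1t2d0s7 c1t3d0s7 c1t4d0s7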
Where is the metadevice configuration stored?
# /etc/lvm/md.tab
or
# /etc/opt/SUNWmd/md.tab
How to grow a file system in SDS
Identify the free disks, the current volume size, and the metadevice name:
# df -h
/dev/md/dsk/d19    27G   1.5G   25G   6%   /rpbkup
Increase the file system by 10 GB:
# metattach d102 10G
# growfs -M /agtmgt/ora1data /dev/md/rdsk/d102
To find the free space on a soft partition:
# metarecover -v -n d40 -p | grep -i free
How to find the disk controller
#cfgadm
Creating a new file system on a LUN and a new mount point for the Oracle file system
# metainit d111 -p d200 20G
d111: Soft Partition is setup
# newfs /dev/md/rdsk/d111
newfs: construct a new file system /dev/md/rdsk/d111: (y/n)? y
# mkdir /ora13data
# chown oracle:dba /ora13data
# ls -la /ora13data
# mount /dev/md/dsk/d111 /ora13data
#df -k
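To make the mount persistent across reboots, an /etc/vfstab entry along these lines can be added (a hedged example using the device and mount point from above):
/dev/md/dsk/d111  /dev/md/rdsk/d111  /ora13data  ufs  2  yes  -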
What is luxadm probe used for?
#luxadm probe
Found Enclosure(s):
SUNWGS INT FCBPL Name:FCloop
Logical Path:/dev/es/ses0
Logical Path:/dev/es/ses1
Node WWN:50800200001bcf28
or
# /usr/sbin/luxadm insert_device <enclosure_name,sx>
luxadm insert_device /dev/rdsk/c1t49d0s2
where sx is the slot number
or
# /usr/sbin/luxadm insert_device (if enclosure name is not known)
Note: In many cases, luxadm insert_device does not require the enclosure
name and slot number.
Use the following to find the slot number:
# luxadm display <enclosure_name>
To find the <enclosure_name> use:
# luxadm probe
Run "ls -ld /dev/dsk/c1t1d*" to verify that the new device paths have
been created.
Update hardware device numbers:
At the end of the metastat command output are the hardware device numbers. After replacement, the metadevadm command should be run to update the new device number:
# metadevadm -u c1t0d0
Write the vtoc to the replacement disk:
# fmthard -s /var/adm/mmddyyc1t0d0.vtoc /dev/rdsk/c1t0d0s2
Or use format to copy the partition table.
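As an alternative to a saved vtoc file, the partition table can be copied straight from the surviving mirror disk with prtvtoc (a hedged example; c1t1d0 is assumed here to be the surviving disk):
# prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2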
Create new meta devices:
#metainit d30 1 1 c1t0d0s0
#metainit d31 1 1 c1t0d0s1
Attach mirrors:
#metattach d0 d30
#metattach d1 d31
Add metadbs to replacement disk:
#metadb -a -c 3 c1t0d0s7
Check that the metadbs are correct:
The lower-case lettered flags may not appear until the server is rebooted. There should be six metadbs in total, three on each of the root-mirrored disks.
# metadb
        flags           first blk       block count
     a m  p  luo        16              8192            /dev/dsk/c1t0d0s7
     a    p  luo        8208            8192            /dev/dsk/c1t0d0s7
     a    p  luo        16400           8192            /dev/dsk/c1t0d0s7
The following file systems cannot be opened; df -k shows an I/O error.
[root drcs1] ksh$ df -k | grep -i /dev/md/meter
/dev/md/meter/dsk/d14    1001382   117016   874353   12%  /appl/TEST
/dev/md/meter/dsk/d6    11329080   681210  9514970    7%  /ora1data/METR
/dev/md/meter/dsk/d10    5664168     1651  5096107    1%  /ora1index/METR
/dev/md/meter/dsk/d22    2002021       10  1981991    1%  /oraredo/METR
/dev/md/meter/dsk/d26    1887813       20  1699013    1%  /redoarch/METR
Step 1
[root drcs1] ksh$
[root drcs1] ksh$ metastat -s meter d18
meter/d18: Trans
State: Hard Error
Size: 4087280 blocks
Master Device: meter/d17
Logging Device: meter/d5
meter/d17: Mirror
Submirror 0: meter/d15
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 4087280 blocks
meter/d15: Submirror of meter/d17
State: Okay
Size: 4087280 blocks
Stripe 0:
    Device      Start Block   Dbase   State   Hot Spare
    c2t5d1s0    0             No      Okay

    Device      Hot Spare
    c1t4d0s6
Step 2:- Analyzed both disks; no errors found, disks are okay.
21. c1t3d4 <SUN2.1G cyl 2733 alt 2 hd 19 sec 80>
22. c1t4d0 <SUN2.1G cyl 2733 alt 2 hd 19 sec 80>
analyze> test
Ready to analyze (won't harm data). This takes a long time,
but is interruptable with CTRL-C. Continue? yes
pass 0 - pattern = 0xc6dec6de
2732/18/14
pass 1 - pattern = 0x6db6db6d
2732/18/14
Total of 0 defective blocks repaired.
Step 3:- Take the output of metadevice configuration for meter
[root drcs1] ksh$ metastat -s meter -p
meter/d6 -t meter/d2 meter/d5
meter/d2 -m meter/d0 1
meter/d0 1 6 c1t4d0s0 c1t4d1s0 c1t4d2s0 c1t4d3s0 c1t4d4s0 c1t5d0s0 -i 256b
meter/d10 -t meter/d9 meter/d5
meter/d9 -m meter/d7 1
meter/d7 1 3 c1t3d1s0 c1t3d2s0 c1t3d3s0 -i 256b
meter/d14 -t meter/d13 meter/d5
meter/d13 -m meter/d11 1
meter/d11 1 1 c1t5d1s0
meter/d5 -m meter/d3 meter/d1 1
meter/d3 1 1 c1t3d4s6
meter/d1 1 1 c1t4d0s6
meter/d4 1 1 c1t5d3s0
[root drcs1] ksh$
Step 4:- Find whether the disks are placed in some other metaset
[root drcs1] ksh$ metastat -s drcs1 -p | grep -i c1t3d4
[root drcs1] ksh$ metastat -s drcs1 -p | grep -i c1t4d0
[root drcs1] ksh$ metastat -s tdcc -p | grep -i c1t3d4
[root drcs1] ksh$ metastat -s tdcc -p | grep -i c1t4d0
[root drcs1] ksh$ metastat -s ssd -p | grep -i c1t3d4
[root drcs1] ksh$ metastat -s msp_cd -p | grep -i c1t3d4
[root drcs1] ksh$ metastat -s meter -p | grep -i c1t4d0
meter/d0 1 6 c1t4d0s0 c1t4d1s0 c1t4d2s0 c1t4d3s0 c1t4d4s0 c1t5d0s0 -i 256b
meter/d1 1 1 c1t4d0s6
[root drcs1] ksh$
Step 5:- Find the entries in /etc/vfstab for meter
/dev/md/meter/dsk/d14   /dev/md/meter/rdsk/d14   /appl/TEST       ufs   1   no
/dev/md/meter/dsk/d6    /dev/md/meter/rdsk/d6    /ora1data/METR   ufs   1   no
/dev/md/meter/dsk/d10   /dev/md/meter/rdsk/d10   /ora1index/METR  ufs   1   no
Step 6:- Unmount all the following file systems
#umount /appl/TEST
#umount /ora1data/METR
#umount /ora1index/METR
#umount /oraredo/METR
#umount /redoarch/METR
Check with df -k whether the file systems are unmounted.
Step 7:- Clear all the trans devices that use d5 as their logging device
meter/d5: Logging device for meter/d6 meter/d10 meter/d14 meter/d18 meter/d22
meter/d26
[root drcs1] ksh$ metaclear -s meter d10
meter/d10: Trans is cleared
[root drcs1] ksh$ metaclear -s meter d14
meter/d14: Trans is cleared
[root drcs1] ksh$ metaclear -s meter d18
meter/d18: Trans is cleared
Step 8:- Verify that all the trans device configurations have been cleared
[root drcs1] ksh$ metastat -s meter -p
Step 9:- Mirror meter/d5 with meter/d3
[root drcs1] ksh$ metainit meter/d5 -m meter/d3
meter/d5: Mirror is setup
Step 10:- Attach the submirror meter/d1 to the mirror device meter/d5
[root drcs1] ksh$ metattach meter/d5 meter/d1
meter/d5: submirror meter/d1 is attached
Step 11:- Re-create all the trans devices that use d5
meter/d5: Logging device for meter/d6 meter/d10 meter/d14 meter/d18 meter/d22
meter/d26
#metainit meter/d6 -t meter/d2 meter/d5
#metainit meter/d10 -t meter/d9 meter/d5
Step 12:- Mount all the following file systems
#mount /dev/md/meter/dsk/d6
#mount /dev/md/meter/dsk/d10
#mount /dev/md/meter/dsk/d14
Most important:- Verification and confirmation
Check with df -k whether the file systems are mounted.
Check that all the trans devices are present: metastat -s meter -p
Check whether any hard error exists: metastat -s meter
Check all the file systems specifically using ls -lrt.
Confirm with the user and close the call.
How to do disk cloning on Solaris
Here is the procedure:
Install the disk. This can be done a few ways; for this scenario, assume the disk is already attached and has been labeled through format.
The primary disk is c1t0d0s2.
#dd if=/dev/dsk/c1t0d0s2 of=/dev/dsk/c1t1d0s2 bs=256k
This will take time, depending on the size of the primary disk.
Verify that the clone disk has a clean file system:
#fsck -y /dev/rdsk/c1t1d0s0
To verify, mount the clone disk:
#mount /dev/dsk/c1t1d0s0 /mnt
Change the clone's /etc/vfstab to point to the clone device:
#vi /mnt/etc/vfstab
After making changes, boot the clone disk
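(A hedged extra step, not part of the original note: if the clone is meant to boot on SPARC, the boot block may also need to be installed on the clone's root slice.)
# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t1d0s0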
-----Done
Backups
How will you run ufsdump and ufsrestore in a single command line?
# ufsdump 0f - /dev/rdsk/c0t0d0s6 | (cd /mnt/prasad; ufsrestore xf -)
To check the status of the media inserted in the tape drive:
# mt -f /dev/rmt/0 status
Syntax to execute a ufsdump:
# ufsdump 0uf /dev/rmt/1 <file_system_to_dump>
Difference between the ufsdump and tar commands
ufsdump:
1. Used for complete file system backups.
2. It copies everything from regular files in a file system to special character and block device files.
3. It can work on mounted or unmounted file systems.
tar:
1. Used for single or multiple file backups.
2. Can't back up special character & block device files.
3. Works only on mounted file systems.
How to copy all the files to a new file system
#cd /export/home
#tar -cf - . | ( cd /mnt ; tar -xpf - )
What is the difference between the crontab and at commands?
crontab: jobs can be scheduled to run repeatedly.
at: a job runs once only.
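For example (the script path is a placeholder):
Crontab entry to run a script every day at 02:00:
0 2 * * * /usr/local/bin/nightly.sh
at job to run the same script once, at the next 02:00:
# echo "/usr/local/bin/nightly.sh" | at 02:00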
What is difference between incremental backup and differential backup?
Incremental: Only those files will be included which have been changed since the last backup.
Differential: Only those files will be included which have been changed since the last Full backup
How many ufsdump levels are there?
Levels 0-9.
0 = full backup
1-9 = incremental backup of files that have changed since the last lower-level backup.
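For example (the file system and tape device are placeholders):
Sunday, level 0 (full) dump:
# ufsdump 0uf /dev/rmt/0 /dev/rdsk/c0t0d0s6
Monday to Saturday, level 9 dump of changes since the last lower-level dump:
# ufsdump 9uf /dev/rmt/0 /dev/rdsk/c0t0d0s6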
Options in ufsdump
S = size estimate; the amount of space needed on tape
l = autoload
o = take the drive offline once the backup completes and, if possible, eject the media
u = update the /etc/dumpdates file (records the name of the file system, the level of the backup 0-9, and the date)
f = specify the tape device name
Options in ufsrestore
t = list the contents of the media
r = restore the entire file system
x = restore only the files named on the command line
i = interactive mode
v = verbose mode
f = specify the tape device name
How to list the sudo privileges of the current user
# sudo -l
How to edit the sudoers file
# /usr/local/sbin/visudo
Veritas
Run the following command to prevent VxVM from starting up after reboot:
touch /etc/vx/reconfig.d/state.d/install-db
#/etc/vx/bin/vxunroot   (un-encapsulate the root disk, removing it from VxVM control)
Reboot the system: #init 6
(Output fragments: VxVM protocol version information - protocol_maximum: 60, protocol_current: 0 - and a listing of disk group configuration backups: testdg.1138894812.396.kirkcmis3, test1.1157725540.188.kirkcmis3, infodg.1134131590.274.kirkcmis3, devdg.1157983965.194.kirkcmis3.)
This approach can be used for both a first-time complete refresh and an ongoing mirroring process.
Step 1: Create a VERITAS snapshot on the same server at the source system.
----------------End----------------
If you want to back up the snapshot files, follow the procedure below.
Back up the files:
#tar cvf /dev/rmt/0 /prasadly
or
#ufsdump 0uf /dev/rmt/0 /dev/vx/rdsk/<dg_name>/snap-db1
The /ora1data/CUSMARP2 file system cannot be mounted on VERITAS.
Solution: the VERITAS volume was made stale and then cleaned. Shell history of the fix:
7001 vxvea
7004 vxrecover -s -g cusmarp2_dg vol_ora1data
7005 vxrecover -v -g cusmarp2_dg vol_ora1data
7006 vxprint -Ath | more
7009 datapath query device | more
7010 vxprint -Ath | more
7011 mount -F vxfs /dev/vx/dsk/cusmarp2_dg/vol_ora1data /ora1data/CUSMARP2
7012 vxdiskadm
7015 vxdisk list
7016 vxprint -Ath | more
7021 ./vxse &
7027 vxdiskadm
7049 mount -F vxfs /dev/vx/dsk/cusmarp2_dg/vol_ora1data /ora1data/CUSMARP2
7050 vxprint -Ath
7051 vxmend -g cusmarp2_dg fix stale vol_ora1data-01
7052 vxprint -Ath
7053 vxmend -g cusmarp2_dg fix clean vol_ora1data-01
7054 vxprint -Ath
7055 vxvol -g cusmarp2_dg start vol_ora1data
7056 vxprint -Ath
7057 mount -F vxfs /dev/vx/dsk/cusmarp2_dg/vol_ora1data /ora1data/CUSMARP2
7058 fsck -F vxfs /dev/vx/rdsk/cusmarp2_dg/vol_ora1data
7059 mount -F vxfs /dev/vx/dsk/cusmarp2_dg/vol_ora1data /ora1data/CUSMARP2
How to change a volume from RAID 0+1 (mirror) to RAID 5, and how?
#vxassist -g <dgname> relayout <volume_name> layout=raid5
Where is the VERITAS disk information stored?
#/kernel/drv/sd.conf
How to find the plex, sub disk, Volume group, disk status, free spaces, disk controller,
Volume controller?
Displays info about plexes
#vxprint -lp
#vxprint -l plex_name
Displays info about subdisks
# vxprint -st
#vxprint -l disk##-##
show disk iops over 10 seconds...
#ssaadm display -p c#
Traces all i/o on a volume..
#vxtrace vol
To report disk stats
#vxstat -d
Displays the free space on the disks
#vxdg free
Display the disk controllers
#vxdisk list
#vxprint -Aht
Tells you how much you can grow a volume by
#vxassist maxgrow vol
In VERITAS, how to recover the mirror disk with data?
How to increase the size of a file system?
# df -k   (identify the VERITAS disk group and volume name of /myhr on EXU407)
# vxassist -g appdg maxsize   (check the free space available in the group appdg on EXU407)
# vxprint -thA -g appdg   (check whether /myhr is mirrored)
# mount -p   (check the file system type of /myhr)
# cp -p /etc/vfstab /etc/vfstab.070223   (take a backup of the /etc/vfstab file)
You can then increase the size of the file system by using vxresize:
# vxresize -F vxfs -g rootdg myapps +5g   (increase the size of /myapps by 5 GB)
How to decrease the size of the file system?
# vxresize -F vxfs -g rootdg myapps -5g   (decrease the size of /myapps by 5 GB)
#df -k
#vxresize -b -F vxfs ora03vol +70g
What is encapsulation?
Encapsulation is used to bring disks that are already present in the system with data, but without Volume Manager, under Volume Manager control. Data on these disks is not disturbed, and if the disks meet certain Volume Manager requirements they are added under Volume Manager.
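Encapsulation is usually driven through the vxdiskadm menu ("Encapsulate one or more disks"); a hedged command-line sketch (disk group, disk media name and device are placeholders):
# /etc/vx/bin/vxencap -c -g rootdg rootdisk=c0t0d0s2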
What is the difference between the VERITAS 3.0 and VERITAS 4.0?
snapstart starts creating an online snapshot mirror of the volume using the available disk space. The snapshot is completed with the vxassist snapshot command, when an offline snapshot volume is created with a user-defined name.
Command syntax: vxassist snapstart volume_name
To create a snapshot mirror of a volume called vol8, type:
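A hedged example (assuming the default disk group; the snapshot volume name snapvol8 is illustrative):
# vxassist snapstart vol8
# vxassist snapshot vol8 snapvol8   (completes the snapshot once the snapstart mirror has synchronized)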
2-Node Cluster
Minimum 2 nodes, 2 Ethernet addresses, shared disk, and HA applications (e.g. Oracle).
What is the purpose of the ha daemons in VCS servers?
The ha daemons are used to start/stop services in VCS servers.
How to check the communication between 2 nodes?
The heartbeat checks the communication between the 2 nodes.
What is a heartbeat?
It is a script that checks the communication between nodes.
A heartbeat is a communication link which is configured at the time a system is created in the cluster, and which can send and receive signals through the designated port.
To check the heartbeat, use the command gabconfig -a.
What are the two types of service groups?
1. Parallel service group
2. Failover service group
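For illustration, a hedged sketch of creating a group and marking it as parallel (group and node names are placeholders; by default a new group is a failover group):
# haconf -makerw
# hagrp -add app_sg
# hagrp -modify app_sg SystemList node1 0 node2 1
# hagrp -modify app_sg Parallel 1
# haconf -dump -makero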
How to unconfigure LLT and GAB
#lltconfig -U
#gabconfig -U
#hastop
How to start LLT and GAB
#lltconfig -c
#gabconfig -c -x
#hastart
How to start a one-node cluster
ok boot -x
How to stop a one-node cluster exclusively
# hastop -local -force   (this brings down VCS only, not the applications; they remain alive)
Where are the VCS logs?
#/var/VRTSvcs/log/engine_A.log
What are the configuration files and how are they configured in VCS?
Configuration files:
Note: before configuring VCS, make sure local-mac-address? is set to true:
#eeprom local-mac-address?=true   (on both nodes)
# /etc/llthosts (maps node IDs to node names)
(eg) 0 sun1
     1 sun2
#/etc/llttab (specifies the node ID, cluster ID, and heartbeat links)
set-node 0
link qfe0 /dev/qfe:0 - ether - -
link qfe1 /dev/qfe:1 - ether - -
set-cluster 10
start
#/etc/gabtab (starts GAB with the expected number of nodes)
/sbin/gabconfig -c -n2
Path to be set in /etc/profile:
PATH=$PATH:/opt/VRTS/bin:/sbin:/opt/VRTSllt
export PATH
#/etc/VRTSvcs/conf/config/main.cf
#/etc/VRTSvcs/conf/config/sysname
How to bring the resource to online and offline
# /opt/VRTSvcs/bin/hagrp -online (service_group) -sys (system_name)
# /opt/VRTSvcs/bin/hagrp -offline (service_group) -sys (system_name)
How to Switch service group between nodes
# /opt/VRTSvcs/bin/hagrp -switch (service_group) -to (system_name)
How to Freeze svcgroup, (disable onl. & offl.)
# /opt/VRTSvcs/bin/hagrp -freeze (service_group) [-persistent]
How to unfreeze the svcgroup, (enable onl. & offl.)
# /opt/VRTSvcs/bin/hagrp -unfreeze (service_group) [-persistent]
What is the command to check the connectivity between 2 nodes?
Get the MAC address from both nodes:
#getmac /dev/qfe:0
-s: run from the server side; -c: run from the client side
#./dlpiping -s /dev/qfe:0
#./dlpiping -c /dev/qfe:0 <macaddress>
How to stop VCS
#hastop -local
#hastop -local -evacuate   (evacuate the service groups to another node and shut down VCS on this system)
#hastop -local -force   (without shutting down the applications; only VCS, the had daemon, is brought down)
What are the service group dependencies?
4 types:
Online local
Online global
Online remote
Offline local
How to delete a service group
1. Bring all the resources offline
2. Disable the resources
3. Delete the resources
Eg: #hares -delete mysun
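A hedged sketch of the full sequence (resource, group, and node names are placeholders):
# hares -offline myres -sys node1
# hares -modify myres Enabled 0
# hares -delete myres
# hagrp -delete mysg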
How to add a group
#haconf -makerw
#hagrp -add groupname
#hagrp -modify groupname SystemList -add node1 0 node2 1
#haconf -dump -makero
If the main.cf file is corrupted, how will you rectify it?
#hastop -all
Create a config file:
#dtpad /etc/VRTSvcs/conf/sysname
#mkdir /etc/VRTSvcs/conf/config
#cp types.cf config
#cd config
#dtpad main.cf
#vi main.cf
include "types.cf"
cluster mycluster
system node1
system node2
snmp mycluster
#hacf -verify .
#hacf -cftocmd .
#hastart
#hastatus -sum
How to back up the VCS configuration files
After configuration of the cluster, VCS creates the following files on each node participating in the cluster configuration:
/etc/llthosts
/etc/llttab
/etc/gabtab
/etc/VRTSvcs/conf/config/main.cf
/etc/VRTSvcs/conf/config/types.cf
To take a backup of the cluster configuration files of all the nodes, use the following procedure:
Go to the directory /opt/VRTS/bin
Stop the cluster on all nodes:
# hastop -all
Run the backup command:
# hasnap -backup
Do you want to dump the VCS configuration before proceeding? : y
Name : snap.bak   (name of the file in which the backup is to be taken)
.   (dot, as terminator)
Do you want each file to be backed up to be confirmed (y/n): n   (choose the option)
The backup will be created in the directory /var/VRTSvcs/hasnap/data/repository/vcs
Now start the cluster on this terminal first by using the following command, and use the same command on each node:
# hastart -force
How to restore the cluster configuration
Stop the cluster on all nodes:
# hastop -all
Start the restore process:
# hasnap -restore
Specify the option [1-4]: 1   (this is the serial number of the cluster configuration backup file)
Now start the cluster on this terminal first by using the following command, and use the same command on each node:
# hastart -force
T3 Storage
1) vol add <volname> data u<n>d<n> raid <n> standby u<n>d<n>
2) vol stat
3) vol init <volname> data
4) vol mount <volname>
5) vol list
6) mkdir /dev/es
7) luxadm insert
8) If above Solaris 7, exclude steps 6 & 7
9) format and partition the new volume
How many controllers are in 3510 storage?
4 channels
2 controllers
Seeding: Seeding is used to protect the cluster in a pre-existing network; one seeded system can run VCS.
Automatic seeding: #gabconfig -c -n <no of nodes>
Manual seeding: #gabconfig -c -x
Amnesia: Guarantees that when a cluster is booted, it has at least one node that was a member of the
most recent cluster membership (and thus has the latest configuration data).
Jeopardy Defined
The design of VCS requires that a minimum of two heartbeat-capable channels be available between nodes to protect
against network failure. When a node is missing a single heartbeat connection, VCS can no longer discriminate
between a system loss and a loss of the last network connection. It must then handle loss of communications on a
single network differently from loss on multiple networks. This procedure is called "jeopardy." As mentioned
previously, low latency transport (LLT) provides notification of reliable versus unreliable network communications to
global atomic broadcast (GAB). GAB uses this information, with or without a functional disk heartbeat, to delegate
cluster membership. If the system heartbeats are lost simultaneously across all channels, VCS determines the system
has failed. The services running on that system are then restarted on another. However, if the node was running with
one heartbeat only (in jeopardy) prior to the loss of a heartbeat, VCS does not restart the applications on a new node.
This action of disabling failover is a safety mechanism that prevents data corruption.
I/O Fencing SCSI III Reservations - I/O Fencing (VxFEN) is scheduled to be included in the VCS 4.0 version. VCS
can have parallel or failover service groups with disk group resources in them. If the cluster has a split-brain, VxFEN
should force one of the subclusters to commit suicide in order to prevent data corruption. The subcluster which
commits suicide should never gain access to the disk groups without joining the cluster again. In parallel service
groups, it is necessary to prevent any active processes from writing to the disks. In failover groups, however, access to
the disk only needs to be prevented when VCS fails over the service group to another node. Some multipathing
products will be supported with I/O Fencing.
If the cluster resource group and resources are showing ERROR_STOP_FAILED, follow the steps below.
1. Check the resource group status (scstat -g output):
-- Resource Groups --
Group Name        Node Name       State
----------        ---------       -----
Group: pspd-rg    phys-pspd1      Error--stop failed
Group: pspd-rg    phys-pspd2      Offline
=======================================================================
For clearing the STOP_FAILED flag (-c is for clearing the flag, -h for the node name, -j for the resource name, -f for the error flag):
root@phys-pspd1 # scswitch -c -h phys-pspd1 -j pspd-oralisten-res -f STOP_FAILED
(If more than one resource is showing the error, use this command for every resource and then go to the next step.)
To bring down the resource group (bringing down the resource group clears the STOP_FAILED error and it goes to the Offline state):
root@phys-pspd1 # scswitch -F -g pspd-rg
=======================================================================
2. root@phys-pspd1 # scstat -g
-- Resource Groups and Resources --
Group Name        Node Name       State
----------        ---------       -----
Group: pspd-rg    phys-pspd1      Offline
Group: pspd-rg    phys-pspd2      Offline

Resource Name                  Node Name     State     Status Message
Resource: pspd-oralisten-res   phys-pspd1    Offline   Offline
root@phys-pspd1 #
=======================================================================
To bring up the resource group:
root@phys-pspd1 # scswitch -Z -g pspd-rg
=======================================================================
root@phys-pspd1 # scstat -g
-- Resource Groups and Resources --
Resources: pspd-rg    pspd pspd-hastorageplus-res pspd-orasrv-res pspd-oralisten-res

Resource Name                  Node Name     State     Status Message
Resource: pspd-oralisten-res   phys-pspd1    Online    Online
Resource: pspd-oralisten-res   phys-pspd2    Offline   Offline