1 Preliminary Note
In this example I have two hard drives, /dev/sda and /dev/sdb, with the
partitions /dev/sda1 and /dev/sda2 as well as /dev/sdb1 and /dev/sdb2.
/dev/sda1 and /dev/sdb1 make up the RAID1 array /dev/md0.
/dev/sda2 and /dev/sdb2 make up the RAID1 array /dev/md1.
/dev/sda1 + /dev/sdb1 = /dev/md0
/dev/sda2 + /dev/sdb2 = /dev/md1
If a disk has failed, you will probably find a lot of error messages in the log files,
e.g. /var/log/messages or /var/log/syslog.
You can also run
cat /proc/mdstat
and instead of the string [UU] you will see [U_] if you have a degraded RAID1 array.
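For example, with the arrays from this tutorial, a degraded /dev/md0 could look something like this (an illustrative excerpt, not captured output; a failed member is flagged with (F)):

md0 : active raid1 sda1[0] sdb1[1](F)
      24418688 blocks [2/1] [U_]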
To remove /dev/sdb, we will mark /dev/sdb1 and /dev/sdb2 as failed and remove them
from their respective RAID arrays (/dev/md0 and /dev/md1).
First we mark /dev/sdb1 as failed:
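We use mdadm's --manage mode for this (the commands in this section assume the device names from the preliminary note above):

mdadm --manage /dev/md0 --fail /dev/sdb1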
The output of

cat /proc/mdstat

should now show /dev/sdb1 marked as failed, with [U_] instead of [UU] for /dev/md0.
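Then we remove the failed /dev/sdb1 from /dev/md0:

mdadm --manage /dev/md0 --remove /dev/sdb1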
And

cat /proc/mdstat

should show that /dev/sdb1 is no longer part of /dev/md0.
Now we do the same steps again for /dev/sdb2 (which is part of /dev/md1):
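mdadm --manage /dev/md1 --fail /dev/sdb2
mdadm --manage /dev/md1 --remove /dev/sdb2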
Afterwards,

cat /proc/mdstat

should show both /dev/md0 and /dev/md1 running on /dev/sda alone ([U_]).
Then we shut down the system:

shutdown -h now
and replace the old /dev/sdb hard drive with a new one (it must be at least the same
size as the old one - even if it's only a few MB smaller, rebuilding the arrays
will fail).
After you have changed the hard disk /dev/sdb, boot the system.
The first thing we must do now is to create the exact same partitioning as on /dev/sda.
On disks with an MBR partition table we can do this with one simple command:

sfdisk -d /dev/sda | sfdisk /dev/sdb

Afterwards you can run

fdisk -l

to check if both hard drives now have the same partitioning.
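The sfdisk commands above are for disks with MBR partition tables. For GPT partitions, use sgdisk to replicate the partition table instead (note the argument order: the target disk /dev/sdb comes first, the source /dev/sda second), and then randomize the partition GUIDs on the new disk so they do not collide with /dev/sda's:

sgdisk -R /dev/sdb /dev/sda
sgdisk -G /dev/sdb

Afterwards we add /dev/sdb1 and /dev/sdb2 to their respective arrays so that they get synchronized:

mdadm --manage /dev/md0 --add /dev/sdb1
mdadm --manage /dev/md1 --add /dev/sdb2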
Take a look at

cat /proc/mdstat

to watch the progress of the synchronization.
When the synchronization is finished, the output will look like this:
server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
24418688 blocks [2/2] [UU]