Software RAID with mdadm
Create a new RAID array
Create mode (mdadm --create) is used to create a new array:
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# or using the compact notation:
mdadm -Cv /dev/md0 -l1 -n2 /dev/sd[ab]1
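A hot spare can also be added at creation time; a minimal variant of the above, assuming a third partition /dev/sdc1 is available (-x / --spare-devices sets the number of spares):
mdadm -Cv /dev/md0 -l1 -n2 -x1 /dev/sd[abc]1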
Create /etc/mdadm.conf
/etc/mdadm.conf is the main configuration file for mdadm. After we create our RAID arrays, we add them to this file using:
mdadm --detail --scan >> /etc/mdadm.conf
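mdadm --detail --scan emits one ARRAY line per array; a hypothetical example of what gets appended (the metadata version and UUID will differ on your system):
ARRAY /dev/md0 metadata=1.2 name=localhost:0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx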
Remove a disk from an array
We can’t remove a disk directly from the array unless it is marked as failed, so we first have to fail it (if the drive has actually failed, it is normally already in the failed state and this step is not needed):
mdadm --fail /dev/md0 /dev/sda1
and now we can remove it:
mdadm --remove /dev/md0 /dev/sda1
This can be done in a single step using:
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
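After failing a device, /proc/mdstat marks it with (F); a hypothetical excerpt for md0 with sda1 failed:
md0 : active raid1 sdb1[1] sda1[0](F)
      104320 blocks [2/1] [_U]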
Add a disk to an existing array
We can add a new disk to an array (for example, to replace a failed one):
mdadm --add /dev/md0 /dev/sdb1
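While the new disk resynchronises, /proc/mdstat shows the recovery progress; a hypothetical excerpt:
md0 : active raid1 sdb1[2] sda1[0]
      104320 blocks [2/1] [U_]
      [=>...................]  recovery =  8.5% (8896/104320) finish=0.1min speed=8896K/sec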
Verifying the status of the RAID arrays
We can check the status of the arrays on the system with:
cat /proc/mdstat
# or
mdadm --detail /dev/md0
[user@localhost ~]$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]
md1 : active raid1 sdb3[1] sda3[0]
      19542976 blocks [2/2] [UU]
md2 : active raid1 sdb4[1] sda4[0]
      223504192 blocks [2/2] [UU]
Here we can see that both drives are up and working fine (U). A failed drive is marked with (F) next to its name, while a degraded array shows the missing disk as _ (for example [2/1] [U_]).
Note: while a RAID rebuild operation is in progress, monitoring its status with watch can be useful:
watch cat /proc/mdstat
Stop and delete a RAID array
If we want to completely remove a RAID array, we have to stop it first and then remove it:
mdadm --stop /dev/md0
mdadm --remove /dev/md0
and finally we can even delete the superblock from the individual drives:
mdadm --zero-superblock /dev/sda1
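To wipe every former member in one go, a small sketch assuming the members were /dev/sda1 and /dev/sdb1:
for d in /dev/sd[ab]1; do mdadm --zero-superblock $d; done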
Finally, when using RAID1 arrays, where we create identical partitions on both drives, it can be useful to copy the partitions from sda to sdb:
sfdisk -d /dev/sda | sfdisk /dev/sdb
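To check that both drives now carry the same layout, listing both partition tables is a quick sanity check:
sfdisk -l /dev/sda
sfdisk -l /dev/sdb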
Example of RAID6 across 24 Drives
Note: All the drives need to have the same partition information.
In this example sda is the OS disk and is ignored.
Script to create all the partitions:
#!/bin/bash
# Partition every /dev/sd* device, skipping the OS disk
# (sda plus its two partitions, sda1 and sda2).
count=0
for i in /dev/sd*
do
    # the first three entries are sda, sda1 and sda2
    if [ $count -lt 3 ]
    then
        echo "skipping $i"
    else
        parted $i -- mklabel msdos
        parted $i -- mkpart primary 0 -0
    fi
    count=$((count + 1))
done
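If the data disks are known to be exactly sdb through sdy, a simpler sketch avoids the counter entirely (it assumes no other sd devices are present):
for i in /dev/sd[b-y]
do
    parted $i -- mklabel msdos
    parted $i -- mkpart primary 0 -0
done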
Create the array (sdb1, sdc1 ... sdy1)
mdadm -Cv /dev/md0 -l6 -n24 /dev/sd[b-y]1
Create the filesystem on md0 (STFC args)
[root@localhost MegaCli]$ mkfs.xfs -f -l version=2 -i size=1024 -n size=65536 -L raid6 /dev/md0
meta-data=/dev/md0               isize=1024   agcount=32, agsize=167678704 blks
         =                       sectsz=4096  attr=0
data     =                       bsize=4096   blocks=5365718336, imaxpct=25
         =                       sunit=16     swidth=352 blks, unwritten=1
naming   =version 2              bsize=65536
log      =internal log           bsize=4096   blocks=32768, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=0
realtime =none                   extsz=1441792 blocks=0, rtextents=0
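Once the filesystem is created it can be mounted; a minimal sketch using a hypothetical /raid6 mount point, with an optional fstab entry keyed on the label set above:
mkdir -p /raid6
mount /dev/md0 /raid6
# hypothetical /etc/fstab entry:
LABEL=raid6  /raid6  xfs  defaults  0 0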
Copy drive partition information
If you need to copy drive partition information, use sfdisk:
# e.g. clone sdc with partition info from sda
[root@compute00 ~]$ sfdisk -d /dev/sda > partition_info.pt
[root@compute00 ~]$ sfdisk /dev/sdc < partition_info.pt
Moving the RAID disks to a new system
Connect the RAID disks to the new system. mdadm will identify and assemble the RAID array with the following command:
sudo mdadm --assemble --scan
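If --scan does not find the array, it can be assembled explicitly and then recorded in mdadm.conf; a sketch, assuming the member partitions are /dev/sdb1 and /dev/sdc1:
sudo mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm.conf'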