Software RAID with mdadm

Latest revision as of 17:02, 18 November 2013

Create a new RAID array

The create mode (mdadm --create) is used to create a new array:

mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# or using the compact notation:
mdadm -Cv /dev/md0 -l1 -n2 /dev/sd[ab]1
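If the second disk is not available yet, mdadm also accepts the keyword missing in place of a device name, creating the mirror in degraded mode. A sketch, with the device names assumed for illustration:

```shell
# Sketch: build a one-sided RAID1 now and complete the mirror later
# (device names /dev/sda1 and /dev/sdb1 are examples).
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing

# later, once the second disk is in place:
mdadm --add /dev/md0 /dev/sdb1   # triggers a resync onto the new disk
```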
Create /etc/mdadm.conf

/etc/mdadm.conf is the main configuration file for mdadm. After we create our RAID arrays we add them to this file using:

mdadm --detail --scan >> /etc/mdadm.conf
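The lines appended by --detail --scan look roughly like this (the UUID below is a made-up placeholder; the real one identifies your array):

```
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=00000000:00000000:00000000:00000000
```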
Remove a disk from an array

We can’t remove a disk directly from the array; it has to be marked as failed first. If the drive actually failed it is normally already in the failed state and this step is not needed:

mdadm --fail /dev/md0 /dev/sda1

and now we can remove it:

mdadm --remove /dev/md0 /dev/sda1

This can be done in a single step using:

mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
Add a disk to an existing array

We can add a new disk to an array (for example, to replace a failed one):

mdadm --add /dev/md0 /dev/sdb1
Verifying the status of the RAID arrays

We can check the status of the arrays on the system with:

cat /proc/mdstat
# or
mdadm --detail /dev/md0

[user@localhost:]$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
104320 blocks [2/2] [UU]

md1 : active raid1 sdb3[1] sda3[0]
19542976 blocks [2/2] [UU]

md2 : active raid1 sdb4[1] sda4[0]
223504192 blocks [2/2] [UU]

Here we can see that both drives are in use and working fine (U). A failed drive shows as F, while a degraded array shows the missing disk as _ (for example [2/1] [U_]).

Note: while monitoring the status of a RAID rebuild operation, using watch can be useful:

watch cat /proc/mdstat
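The [UU]/[U_] markers can also be checked from a script. A small sketch (the check_mdstat helper is ours, not part of mdadm) that flags degraded arrays in /proc/mdstat-style output:

```shell
# Hypothetical helper: print arrays whose status line shows a missing disk.
check_mdstat() {
  awk '
    /^md[0-9]+ :/      { dev = $1 }                 # remember the array name
    /\[[U_]+\]/ && /_/ { print dev " is degraded" } # [U_] means one disk missing
  ' "$@"
}

# e.g.  check_mdstat /proc/mdstat
```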
Stop and delete a RAID array

If we want to completely remove a RAID array we have to stop it first and then remove it:

mdadm --stop /dev/md0
mdadm --remove /dev/md0

and finally we can even delete the superblock from the individual drives:

mdadm --zero-superblock /dev/sda

Finally, when using RAID1 arrays, where we create identical partitions on both drives, it can be useful to copy the partition table from sda to sdb:

sfdisk -d /dev/sda | sfdisk /dev/sdb
Example of RAID6 across 24 Drives

Note: All the drives need to have the same partition information

In this example sda is the OS and is ignored.

Script to create all the partitions:

#!/bin/bash

count=0
for i in /dev/sd*
do
        # skip the first three entries: sda, sda1 and sda2 (the OS disk)
        if [ $count -lt 3 ]
        then
                echo "skipping $i"
        else
                parted $i -- mklabel msdos
                parted $i -- mkpart primary 0 -0
        fi
        count=$((count + 1))
done


Create the array (sdb1, sdc1 ... sdy1)

mdadm -Cv /dev/md0 -l6 -n24 /dev/sd[b-y]1

Create the filesystem on md0 (STFC args)

[root@localhost MegaCli]$ mkfs.xfs -f -l version=2 -i size=1024 -n size=65536 -L raid6 /dev/md0 
meta-data=/dev/md0               isize=1024   agcount=32, agsize=167678704 blks
         =                       sectsz=4096  attr=0
data     =                       bsize=4096   blocks=5365718336, imaxpct=25
         =                       sunit=16     swidth=352 blks, unwritten=1
naming   =version 2              bsize=65536 
log      =internal log           bsize=4096   blocks=32768, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=0
realtime =none                   extsz=1441792 blocks=0, rtextents=0
Copy drive partition information

If you need to copy drive partition information, use sfdisk:

# e.g clone sdc with partition info from sda
[root@compute00 ~]$ sfdisk -d /dev/sda > partition_info.pt
[root@compute00 ~]$ sfdisk /dev/sdc < partition_info.pt
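For reference, the dump produced by sfdisk -d is plain text, roughly of this shape (the start/size values here are invented placeholders; Id=fd is the Linux raid autodetect partition type):

```
# partition table of /dev/sda
unit: sectors

/dev/sda1 : start=     2048, size=   204800, Id=fd
/dev/sda2 : start=   206848, size= 39061504, Id=fd
```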


Moving the RAID disks to a new system

Connect the RAID disks to the new system. mdadm will identify and assemble the RAID with the following command:

 sudo mdadm --assemble --scan

We can then mount it like any other disk using its /dev/md[n] name.
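A minimal mount sketch, assuming the array came up as /dev/md0 and using /mnt/raid as an arbitrarily chosen mount point:

```shell
cat /proc/mdstat                 # confirm the array device name first
sudo mkdir -p /mnt/raid          # /mnt/raid is our choice, not mandated
sudo mount /dev/md0 /mnt/raid
df -h /mnt/raid
```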


You may need to set up the config file first:

echo DEVICE /dev/sd{a,b,c} > /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf