How to Set Up a Software RAID on the Raspberry Pi

I’ve written about my Raspberry Pi setup before. At that time I had just one hard drive plugged into the external power hub. Periodically the thought would occur to me, “I should really make that a RAID. Someday that hard drive’s gonna fail and I will be sad” along with “Hard drives are expensive” and “RAID’s not a true backup anyway.” Then last month the hard drive died and I lost a lot of media. Can’t say I didn’t see it coming. So I set up a two-hard-drived RAID 1 array to replace it.

RAID = Redundant Array of Independent Disks

There are several RAID levels; I wanted RAID 1. It requires a minimum of two hard disks, and the RAID software configures them so that the computer treats them as one disk. All data is fully copied onto both disks, so the available storage is equal to the size of the smallest disk in the array: mirror a 2 TB disk with a 3 TB disk and you get 2 TB of usable space.

Step 1: Wipe and Partition the Disks

Be careful and ensure you are formatting the proper disks. I advise plugging in only the disks you want wiped, to prevent disaster. Run

$ lsblk

or

$ df -h

to see a list of storage devices and their corresponding device names. Make sure the sizes of the drives match what you expect, and identify which devices are the partitions on the Pi’s SD card currently running Raspbian (Raspberry Pi’s variant of Debian). They will be mounted at various places under /; for example, I see /, /media/pi/SETTINGS1, and /boot.

Once the disks you will format have been identified, see that the hard drives can be found in the folder /dev/.

$ ls /dev

Partition each disk individually with the interactive command line software fdisk. Say you have two drives, and they are located at /dev/sda and at /dev/sdb.

$ sudo fdisk /dev/sda
d    (delete an existing partition; repeat if there are several)
n    (create new partition)
p    (make it a primary partition)
1    (make it partition number 1)
t    (select a system type for partition)
L    (pull up list of system types)
fd   (enter Linux RAID autodetect system type)
w    (write all changes to disk, this is permanent)

You should now see /dev/sda1 alongside /dev/sda if it was not there before, and lsblk will show /dev/sda1 nested below /dev/sda.

Write the same changes to /dev/sdb.

Step 2: Format Drives as ext4

$ sudo mkfs.ext4 /dev/sda1
$ sudo mkfs.ext4 /dev/sdb1

You can instead format /dev/md0, your RAID 1 device, after it is created in Step 3.

Step 3: Install mdadm and Create RAID 1 Device

Update your repositories and install mdadm (Multi Disk Admin):

$ sudo apt update
$ sudo apt install mdadm

In the install dialog (which you can re-run later with ‘sudo dpkg-reconfigure mdadm’), enter ‘none’ or leave the field blank when it asks whether any MD arrays are needed for the root filesystem; this RAID will be external storage only, and the SD card will still host the operating system. The next dialog asks whether you’d like to run monthly redundancy checks (your choice), and the one after that offers email notification if a drive goes down.

Create the RAID1 at /dev/md0:

$ sudo mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sda1 /dev/sdb1

The command will exit quickly, but the work is not done; it took a whole day to build the array on my machine. Check on the progress of the array while it is initializing (or see its status when it is done) with:

$ cat /proc/mdstat
$ # or to periodically update status (Ctrl+C to kill):
$ while true; do cat /proc/mdstat; sleep 30; done
$ # or simply: watch -n 30 cat /proc/mdstat
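
If you want a script to block until that initial sync is over, the state can be read straight from /proc/mdstat: a resync or recovery line is present while the build runs and disappears when it is done. Here is a sketch; sync_in_progress is my own helper name, not an mdadm command.

```shell
# Sketch: true while an mdstat snapshot shows a sync/rebuild in progress.
sync_in_progress() {
    grep -qE 'resync|recovery' <<< "$1"
}

# Usage against the live file, e.g. at the end of a setup script:
# while sync_in_progress "$(cat /proc/mdstat)"; do sleep 60; done
```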

When that’s done, you’ve got your array!

Step 4: Mount the Array and Add It to /etc/fstab

Create a place to mount it and give yourself permission to use it (substitute your own username for jacob):

$ sudo mkdir /media/raid
$ sudo chown -R jacob /media/raid

Mount it:

$ sudo mount /dev/md0 /media/raid

Check that all’s good by trying to copy some data over.

You don’t want to have to mount it every time you boot, though, so add a new line to the end of the file /etc/fstab:

# device     mount point    filesystem   options: defaults, plus     dump   fsck order
#                                        nofail so boot continues           at boot
#                                        even if /dev/md0 is absent
/dev/md0     /media/raid    ext4         defaults,nofail             0      2

Step 5: Assemble RAID From Disks at Boot

/etc/fstab will not be able to mount /dev/md0 unless you assemble it manually or assemble it at boot. Save the mdadm configuration permanently in /etc/mdadm/mdadm.conf.

Conveniently, the output of the status command

$ sudo mdadm --detail --scan

is made to work directly as a config file for mdadm. Append the line beginning with “ARRAY /dev/md0” to the end of /etc/mdadm/mdadm.conf manually, or through Bash magic as root user:

$ sudo su
# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# exit

Now mdadm will compare the UUID of the device /dev/md0 to the UUIDs of the drives. Each member drive carries two UUIDs: one shared across the whole array, and a UUID_SUB unique to that disk. To see the UUIDs of the devices:

$ sudo blkid
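
blkid’s output is verbose; if you ever want just the array’s bare UUID (the form --assemble accepts, as used in the appendix below), you can cut it out of the mdadm --detail --scan line instead. A sketch; array_uuid is my own helper name:

```shell
# Sketch: extract the UUID field from an 'ARRAY ...' line as printed
# by 'mdadm --detail --scan'.
array_uuid() {
    sed -n 's/.*UUID=\([0-9a-f:]*\).*/\1/p' <<< "$1"
}

# Usage:
# array_uuid "$(sudo mdadm --detail --scan)"
```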

Now on most systems you would be good to go, and the service mdadm-raid would assemble the RAID at boot. However, for many people on Raspbian there appears to be a race condition at boot between the Linux kernel initializing the drive hardware and mdadm-raid, which scans for drives to assemble. This can be worked around by adding a delay parameter for the kernel in the file /boot/cmdline.txt, which contains a single line. Append “rootdelay=3” to that line.
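
Editing cmdline.txt by hand is easy to get wrong, since everything must stay on one line. Here is a sketch of an idempotent way to do the append; add_rootdelay is my own helper name, and you would pipe its output back into /boot/cmdline.txt as root.

```shell
# Sketch: append rootdelay=3 to a kernel command line, but only if no
# rootdelay parameter is already present. Output stays on one line.
add_rootdelay() {
    case "$1" in
        *rootdelay=*) printf '%s\n' "$1" ;;
        *)            printf '%s rootdelay=3\n' "$1" ;;
    esac
}

# Usage on the real file (as root):
# add_rootdelay "$(cat /boot/cmdline.txt)" | sudo tee /boot/cmdline.txt
```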

If upon reboot /dev/md0 is not present, first confirm that you can manually assemble the array with:

$ sudo mdadm --assemble --verbose /dev/md0

You can stop the array with:

$ sudo mdadm --stop /dev/md0

If that works, try increasing the delay time to 5 or 10 seconds.

And you’ve got a RAID!

Appendix: If Shit Fails

Case 1: The Raspberry Pi SD Card Fails

In this scenario, you have foolishly not backed up the configuration at /etc/mdadm/mdadm.conf or your /etc/fstab. If you had, you would be able to pop a new SD card with NOOBS into the Pi, install mdadm, copy the configurations into place, and add the boot delay. Since you didn’t, you must run:

$ sudo mdadm --assemble --verbose /dev/md0 /dev/sda1 /dev/sdb1
$ # assuming that the two drives are still called sda1 and sdb1

You can instead use the UUID of the RAID:

$ sudo mdadm --assemble --verbose /dev/md0 --uuid=your-uuid-here

And then perform steps 4 and 5 again.

This is the same procedure you would follow if you wanted to move the RAID to another server.
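
To avoid Case 1 entirely, back up the two files this guide has touched. A sketch; BACKUP_DIR is a placeholder, and a directory on the array itself (or somewhere off-site) is a sensible home for it, since the array survives the SD card.

```shell
# Sketch: copy the mdadm and fstab configuration somewhere safe.
BACKUP_DIR="${BACKUP_DIR:-$HOME/pi-config-backup}"  # e.g. /media/raid/pi-config-backup
mkdir -p "$BACKUP_DIR"
for f in /etc/mdadm/mdadm.conf /etc/fstab; do
    [ -e "$f" ] && cp "$f" "$BACKUP_DIR/" || echo "skipping $f (not present)"
done
```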

Case 2: A Disk Fails

Drive down, drive down!

Confirm which drive is bad by looking for the device number (0 or 1, since there are only two in this array). If a device’s state is “active sync,” it is still active in the array.

$ sudo mdadm --detail /dev/md0
$ sudo mdadm --examine /dev/sda1
$ sudo mdadm --examine /dev/sdb1
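
The [UU] field in /proc/mdstat is the quickest tell: one U per healthy member, an underscore for a missing or failed one. That makes a degraded check easy to script; a sketch, where degraded is my own helper name:

```shell
# Sketch: true if an mdstat snapshot shows any failed/missing member,
# i.e. an underscore in the [UU] status field.
degraded() {
    grep -qE '\[[U_]*_[U_]*\]' <<< "$1"
}

# Usage, e.g. from a cron job:
# degraded "$(cat /proc/mdstat)" && echo "md0 is degraded!"
```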

Check the system logs for I/O errors. If you find any, that’s a bad sign and you should definitely replace the drive.

$ less /var/log/messages
$ dmesg | less

Mark the bad drive as failed (if mdadm has not already done so) and remove it from the array:

$ sudo mdadm --fail /dev/md0 /dev/sda1
$ sudo mdadm --remove /dev/md0 /dev/sda1

Then partition a replacement drive as in Step 1 and add it:

$ sudo mdadm --add /dev/md0 /dev/sdc1

Done. Enjoy your data redundancy, but don’t neglect to make an off-site backup!