With mdadm it is possible to create and manage software RAID devices.

First of all, it is not possible (yet) to boot IPFire from a software RAID. A separate boot drive is still needed. This may be a hard drive, but also a USB stick.

From 2 hard drives, it is possible to set up a RAID 0 (for speed) or a RAID 1 (for redundancy).

From 3 hard drives, a RAID 5 (redundancy with less space loss than RAID 1) is possible.
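The usable capacity follows directly from the level: RAID 0 sums all disks, RAID 1 keeps one disk's worth, and RAID 5 loses one disk to parity. A quick sketch of the arithmetic, assuming three disks of 1000 GB each (the sizes are illustrative):

```shell
disks=3
size_gb=1000   # per-disk capacity, assumed for illustration

raid0=$(( disks * size_gb ))        # striping: full capacity, no redundancy
raid1=$(( size_gb ))                # mirroring: one disk's worth
raid5=$(( (disks - 1) * size_gb ))  # one disk's worth goes to parity

echo "RAID 0: ${raid0} GB, RAID 1: ${raid1} GB, RAID 5: ${raid5} GB"
# prints: RAID 0: 3000 GB, RAID 1: 1000 GB, RAID 5: 2000 GB
```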

Creating a RAID 0, 1 or 5

You need to set each partition on each disk to type 'fd' (Linux raid autodetect) using fdisk.

If you are planning to create multiple partitions per disk and several different RAIDs, all partitions belonging to one RAID should have the same size.
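The fdisk step can also be scripted with sfdisk. A sketch that only prints the partition description (one whole-disk partition of type fd) instead of applying it, so nothing is written to disk; the disk names are examples:

```shell
# Print an sfdisk input describing one whole-disk partition of
# type fd (Linux raid autodetect). Printed only, not applied.
raid_layout() {
    printf 'label: dos\ntype=fd\n'
}

for disk in sde sdf sdg; do
    echo "--- layout for /dev/$disk ---"
    raid_layout
    # To apply it for real (DANGEROUS, overwrites the partition table):
    # raid_layout | sfdisk "/dev/$disk"
done
```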

Creating the RAID itself:
2 disks, RAID 0 mode:

mdadm --create --verbose /dev/md0 --auto=yes --level=0 --raid-devices=2 /dev/sde1 /dev/sdf1

2 disks, RAID 1 mode:

mdadm --create --verbose /dev/md0 --auto=yes --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1

3 disks, RAID 5 mode:

mdadm --create --verbose /dev/md0 --auto=yes --level=5 --raid-devices=3 /dev/sde1 /dev/sdf1 /dev/sdg1

Saving settings to mdadm.conf

Note: on IPFire, the mdadm.conf must be created as /etc/mdadm.conf.

This can easily be done with the following commands:

  cd /etc
  echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf
  mdadm --detail --scan >> mdadm.conf

Simply replace the /dev/hd* ... pattern with the previously created RAID partitions.


A setup with 3 disks, each with one partition of type software RAID (fdisk type "fd"):

  • /dev/sda1
  • /dev/sdb1
  • /dev/sdc1

So the script will look like this:

  cd /etc
  echo 'DEVICE /dev/sda1 /dev/sdb1 /dev/sdc1' > mdadm.conf
  mdadm --detail --scan >> mdadm.conf

Attention: an entry in /etc/fstab pointing to a device that does not exist leads to a boot failure. It is worth having a live CD lying around in case something goes wrong.
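One way to make such an entry more forgiving is the standard nofail mount option, which lets the boot continue when the device is missing. A hypothetical /etc/fstab line (the mount point is an example):

```
/dev/md0    /mnt/raid    ext4    defaults,nofail    0 0
```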

Don't forget to create a filesystem on your new /dev/md0 (using mkfs).

Prefer the ExtraHD option to mount your new RAID device.

RAID status monitoring with email alarm

So that an alarm mail can be sent, we must first set up IPFire accordingly.

Encryption on fetchmail can be dispensed with if the mail server is only used to send status messages.

Now we need to set up Postfix with the SMTP and password settings.

We must create the file /etc/postfix/password:

touch /etc/postfix/password
vim /etc/postfix/password

with the following content:

# smtp.isp.com       username:password
mail.ipfire.org     user@ipfire.org:password

We set the necessary permissions:

chown root:root /etc/postfix/password
chmod 0600 /etc/postfix/password

And create a hash database from the password file:

postmap hash:/etc/postfix/password

Then we add the following settings to /etc/postfix/main.cf (note the smtp_ prefix: these are the client-side parameters used when relaying):

relayhost = mail.ipfire.org
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/password
smtp_sasl_security_options =

And reload postfix:

/etc/init.d/postfix reload

IPFire is now able to send mail via our mail account.

Now we create the script which monitors the status of the RAID.

Create the file /usr/bin/statusmail.sh with the following content:

#!/bin/sh
if ! grep -q "UU" /proc/mdstat; then
      echo "Subject: RAID-Statusmail" > /tmp/statusmail
      echo "From: user@ipfire.org" >> /tmp/statusmail
      echo "To: user2@ipfire.org" >> /tmp/statusmail
      mdadm --detail /dev/md0 >> /tmp/statusmail
      /usr/sbin/sendmail -t < /tmp/statusmail
      rm -f /tmp/statusmail
fi
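Note that the grep "UU" test only raises an alarm when no two adjacent drives report healthy, so some degraded states (e.g. a three-disk RAID 5 showing "UU_") slip through. A sketch of a stricter check that looks for the "_" marker inside the status brackets instead; the helper name is my own:

```shell
# Succeeds (exit 0) when the given mdstat-style file shows a
# degraded array, i.e. a "_" inside the [UU...] status brackets.
raid_degraded() {
    grep -q '\[U*_' "$1"
}

# Demonstration with sample /proc/mdstat-style content:
printf 'md0 : active raid5 sdg1[2] sdf1[1] sde1[0]\n      2097152 blocks level 5 [3/2] [UU_]\n' > /tmp/mdstat.sample
if raid_degraded /tmp/mdstat.sample; then
    echo "array is degraded"
fi
```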

and restrict its permissions so that only root can read and execute it (mode 700 already includes the execute bit):

chmod 700 /usr/bin/statusmail.sh

Finally, we add an hourly cron job so it tests whether the RAID is still healthy:

ln -s /usr/bin/statusmail.sh /etc/fcron.hourly/statusmail.sh