
This is a read-only archived version of wiki.centos.org

How to install CentOS 5 & 6 on a software partitionable RAID1.

1. What exactly is a software partitionable RAID1?

Short answer: it is an md RAID volume which can be partitioned like a regular hard drive.

For a long time the traditional way of using md RAID was to create a separate partition on each drive going into the RAID volume, mark these partitions as type 0xfd (Linux raid autodetect), build a volume from these partitions, format it with a filesystem and use it. If you needed several filesystems on RAID, you had to set up several RAID volumes, which meant creating several exactly identical partitions on each of the drives and building a separate RAID volume from each set. Compare this to a true hardware RAID controller, which allows you to build one RAID volume that can be partitioned like a regular hard drive.

Since mdadm 2.6 it has been possible to create partitionable RAID volumes. Such volumes get device names like /dev/md_dX, with /dev/md_dXpY for the partitions on them. The device entries for the partitions are created automatically by mdadm when the volume is assembled.
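
For example, once a partitionable array /dev/md_d0 carrying two partitions has been assembled, you could list the device nodes mdadm created (the names here are illustrative):

    ls -l /dev/md_d0*
    # /dev/md_d0    the whole RAID volume
    # /dev/md_d0p1  first partition on the volume
    # /dev/md_d0p2  second partition on the volume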

2. Why would you want to have a system installed on a partitionable software RAID1?

If you install the system on a partitionable RAID you can use the whole hard drive as a RAID component device. Since RAID1 is a mirror, you will then be able to boot the system from either drive in case of failure, without any additional tricks to keep the bootloader configuration in sync between the disks. And when you need to repair a failed RAID volume whose components are whole hard drives, all you have to do is insert a new drive and run mdadm --add; no partitioning or anything else is required.
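
As a sketch of such a repair, assuming the failed disk in /dev/md_d0 was /dev/sdb and has been physically replaced:

    mdadm /dev/md_d0 --remove /dev/sdb   # drop the failed member, if it is still listed
    mdadm --add /dev/md_d0 /dev/sdb      # add the new whole disk; the mirror rebuilds automatically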

3. Installation procedure

This guide describes how to:

  1. Install CentOS 5 & 6 on a partitionable RAID1 system from scratch.

  2. Migrate an existing non-raid CentOS 5 & 6 installation to a partitionable RAID1 system.

In this howto we assume that your system has two hard drives, /dev/sda and /dev/sdb.

3.1. Part 1: Build the array, mount and chroot

Whether you are performing a new installation or migrating an existing non-RAID one, you need to reserve some unpartitioned space at the end of the drive. This free space is where mdadm will store the array's metadata, i.e. the RAID superblock. The version-0.90 superblock, the one we are going to use, needs only a few KB of disk space, so reserving about 1 MB of unpartitioned space at the end of the disk is perfectly safe.
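
To see how much unpartitioned space is left at the end of a disk, you can for instance ask parted to print the free space (the output layout varies with the parted version):

    parted /dev/sda unit MiB print free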

After reserving space for the superblock, we'll create and start the array with only one disk, /dev/sda. You can add the mirror disk, /dev/sdb, either at that point or later, after having configured and booted up your RAID1 system in degraded mode. The second choice means building the mirror disk in the background while you use the system normally; as building the mirror disk is a very long process, you will probably prefer this way of doing it.

In both cases, having built and started the RAID1 array, we'll manually mount the system and chroot into it so that it can be configured to boot from RAID.

3.1.1. Steps for both CentOS 5 & 6

  1. Reserve space for the Superblock.
    1. New installation. Install CentOS using the standard installer on the first hard disk, /dev/sda. Select manual partitioning during the installation and leave at least 1 MB at the very end of the disk unpartitioned.

    2. Migration of an existing non-raid system. Shrink the last partition, if needed, in order to obtain the 1 MB of free space at the end of the disk. To shrink a partition, you must first shrink the filesystem in it and then resize the partition itself accordingly; use resize2fs and parted for the task, as in the sketch below.
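
    A rough sketch, assuming the last partition is /dev/sda3 with an ext3 filesystem, that you shrink it to 20 GB, and that your parted version provides the resizepart command (older versions may require deleting and recreating the partition with fdisk instead):

      umount /dev/sda3                     # the filesystem must be offline
      e2fsck -f /dev/sda3                  # resize2fs requires a clean filesystem
      resize2fs /dev/sda3 20G              # shrink the filesystem first...
      parted /dev/sda resizepart 3 20.1GB  # ...then the partition (it must stay larger than the filesystem)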

  2. Boot from the CentOS installation disk in rescue mode. The installer will ask whether you wish to mount an existing CentOS installation; you must refuse.

  3. Build the software RAID1 using mdadm in degraded mode, with /dev/sda as the only drive:

    mdadm --create --metadata=0.90 --level=1 --raid-devices=2 /dev/md_d0 /dev/sda missing
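
    You can verify that the degraded array is up (the exact output wording varies with the mdadm version):

    mdadm --detail /dev/md_d0   # should report the array as active but degraded, with one member missing
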
  4. Optionally, add the mirror drive /dev/sdb to the array and check /proc/mdstat to see that the mirror has started building:

    mdadm --add /dev/md_d0 /dev/sdb
    cat /proc/mdstat

    Note: even though this step is eventually required, it can be postponed; if you skip it now, you will perform it after the next boot of your system.

  5. Now you must manually mount the system and chroot into it to modify the boot settings:
    mkdir /mnt/sysimage
    mount /dev/md_d0p1 /mnt/sysimage
    mount -o bind /dev /mnt/sysimage/dev
    mount -o bind /selinux /mnt/sysimage/selinux
    mount -o bind /tmp /mnt/sysimage/tmp
    mount -t proc none /mnt/sysimage/proc
    mount -t sysfs none /mnt/sysimage/sys
    chroot /mnt/sysimage
    The example above assumes that the whole system resides on a single filesystem, as only '/' is mounted. When the system is split across several filesystems, you must mount at least '/', '/boot' and '/usr', as in the following example (partition names will of course differ):
    ...
    mount /dev/md_d0p9 /mnt/sysimage
    mount /dev/md_d0p1 /mnt/sysimage/boot
    mount /dev/md_d0p3 /mnt/sysimage/usr
    ...

3.2. Part 2: Configure the system to boot-up from RAID

Note: the following steps will be performed in the chrooted system.

3.2.1. Steps for CentOS 5

  1. Create the /etc/mdadm.conf file:

    mdadm --detail --scan > /etc/mdadm.conf 
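
    The resulting file contains one ARRAY line per volume, roughly like the following (the exact fields and the UUID vary with the mdadm version and your array):

    ARRAY /dev/md_d0 level=raid1 num-devices=2 metadata=0.90 UUID=a1b2c3d4:e5f6a7b8:c9d0e1f2:a3b4c5d6
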
  2. Edit /etc/fstab: you must change all mounts from LABEL= entries to explicit device names, like /dev/md_d0p1, /dev/md_d0p2, ..., as in the example below.
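
    For instance, the root filesystem line would change roughly like this (filesystem type and options are illustrative):

    # before:
    LABEL=/        /    ext3  defaults  1 1
    # after:
    /dev/md_d0p1   /    ext3  defaults  1 1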

  3. Edit /etc/grub.conf: replace root=LABEL=... with root=/dev/md_d0p1 (or the corresponding partition for your setup).

  4. Now you have to patch the mkinitrd script. Download the patch from this page and do the following:

    cd /sbin
    cp mkinitrd mkinitrd.dist
    patch -p0 < /tmp/mkinitrd-md_d0.patch

    See the corresponding bug report for more details.

  5. Disable updates of the mkinitrd package with yum by appending
    exclude=mkinitrd*
    to /etc/yum.conf.
  6. Build the new initrd image:
    cd /boot
    mv initrd-2.6.18-128.el5.img initrd-2.6.18-128.el5.img.bak
    mkinitrd /boot/initrd-2.6.18-128.el5.img 2.6.18-128.el5
    If you need to update the mkinitrd package some time later and the bug with partitionable RAID detection has not been fixed yet, you will have to reapply the patch to mkinitrd and recreate the initrd after the update.

3.2.2. Steps for CentOS 6

  1. Create the /etc/mdadm.conf file:

    mdadm --detail --scan > /etc/mdadm.conf
  2. Edit /etc/fstab: you must change all mounts from UUID= entries to explicit device names, like /dev/md_d0p1, /dev/md_d0p2, ... (the fstab example in the CentOS 5 section applies here as well).

  3. Edit /etc/grub.conf: replace root=UUID=... with root=/dev/md_d0p1 (or the corresponding partition for your setup). You must also remove the kernel option rd_NO_MD (if present), otherwise no md device (RAID) will be discovered at boot time. See the example below.
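
    For example, a kernel line would change roughly like this (the exact set of rd_NO_* options on your system may differ):

    # before:
    kernel /vmlinuz-2.6.32-220.el6.x86_64 ro root=UUID=... rd_NO_LUKS rd_NO_MD rd_NO_DM
    # after: root points at the RAID partition and rd_NO_MD is gone
    kernel /vmlinuz-2.6.32-220.el6.x86_64 ro root=/dev/md_d0p1 rd_NO_LUKS rd_NO_DM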

  4. Build the new initramfs image:
    cd /boot 
    mv initramfs-2.6.32-220.el6.x86_64.img initramfs-2.6.32-220.el6.x86_64.img.bak 
    dracut /boot/initramfs-2.6.32-220.el6.x86_64.img 2.6.32-220.el6.x86_64
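
    To double-check that the new image will be able to assemble the array at boot, you can list its contents with lsinitrd (part of dracut) and look for mdadm:

    lsinitrd /boot/initramfs-2.6.32-220.el6.x86_64.img | grep -i mdadm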

3.3. Part 3: Reboot

What to do now depends on whether you skipped the step marked as "optional" in Part 1:

  1. If you haven't skipped the optional step, i.e. you have already added the mirror drive to the array, all you have to do is check /proc/mdstat periodically to see whether the mirror has been built. When the build process is finished you may safely reboot the system.

  2. Otherwise, if you have skipped the optional step, reboot directly and then add the mirror drive by performing the skipped step (issue the commands as root, as in the sketch below); the mirror drive will be built in the background while you use your system normally (check /proc/mdstat periodically to know when the build process is finished).
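
    A sketch of the commands for this case (run as root; the array and disk names match the earlier examples):

    mdadm --add /dev/md_d0 /dev/sdb
    watch -n 60 cat /proc/mdstat   # resync progress is shown as a percentage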

