
MDADM Raid5 + LVM migration to Centos 6.3

So the time had come to do something about my raid. I had been limping along for some time now, nursing 100GB free on my 4TB software raid, and had purchased two additional WD 1TB Caviar Black drives, bringing my device count up to 7 physical disks (the two new drives currently added as hot-spares). It then dawned on me that maybe now would be a good time to migrate my raid level to raid6. But as with most things in life, I eventually hit a speed bump. It seems my trusty do-it-all storage server was unable to change the raid level of /dev/md0 due to a limitation between the kernel and mdadm versions I was running on a dated Centos 5.8 install. I decided to do several things:

  1. Move to Centos 6.3
  2. Run a newer version of MDADM (3.0+)
  3. Migrate current raid and LVM config to new install
  4. Not lose years of data.
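
For anyone in a similar spot, the versions in play are easy to confirm up front before deciding whether a reshape will even be possible (exact output will obviously vary by install):
# mdadm --version
# uname -r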

So I first unmounted the raid:
# umount /mnt/nas
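
I didn't hit this, but if umount reports that the target is busy, fuser can show which processes are still holding the mount point:
# fuser -vm /mnt/nas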

Then I deactivated the volume group so nothing could write to it, and exported it so the OS would not access it:
# vgchange -an lvm_data
# vgexport lvm_data
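
It's worth sanity-checking that the export took; vgs marks an exported volume group with an x in the third position of its attribute flags:
# vgs lvm_data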

At this point I made a backup of /etc/mdadm.conf and documented the output of pvdisplay, vgdisplay, and lvdisplay just to cover my bases, then threw in the install media for Centos 6.3 x64. I ran through the install, making sure not to touch any of the WD 1TB drives.

Once the new OS was installed and I had networking and SSH configured, I began the process of getting the raid back online. I ran into an issue with mdadm --assemble --scan not finding the array. Throwing --verbose into the mix presented me with enough info to determine that the new default metadata format for mdadm was 1.2, whereas my array, having been built with mdadm v2.6.9, contained 0.90 metadata. I was able to run the following to force mdadm to look for my array using the older metadata:

# mdadm --assemble --scan -e 0.90
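
If you want to confirm which metadata version a member disk actually carries before forcing anything, mdadm --examine on one of the component devices reports it (the device name below is just a placeholder for one of your raid members):
# mdadm --examine /dev/sdb1 | grep Version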

I verified the array and populated /etc/mdadm.conf using the following commands:
# mdadm --detail /dev/md127
# mdadm --detail --scan >> /etc/mdadm.conf
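
For what it's worth, the line that --detail --scan appends should look roughly like the following, with your array's actual UUID in place of the placeholder:
ARRAY /dev/md127 metadata=0.90 UUID=<your-array-uuid>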

You may have noticed that my array is now identified as /dev/md127 as opposed to /dev/md0 prior to the move to Centos 6.3. From what I could find, the change in identifying number was not going to be an issue.
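
If the md127 name bothers you, my understanding is that a 0.90-metadata array can be reassembled under its old minor number with --update=super-minor, along these lines (I didn't test this myself, and the device glob is just a placeholder for your member disks):
# mdadm --stop /dev/md127
# mdadm --assemble /dev/md0 --update=super-minor /dev/sd[b-h]1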

Running pvscan showed me what I wanted to see:
# pvscan
PV /dev/md127 VG lvm_data lvm2 [3.64 TiB / 0 free]

I was then able to import the volume group using:
# vgimport lvm_data

The volume group was made accessible again using:
# vgchange -ay lvm_data
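
A quick lvscan confirms the logical volume came back; you should see an ACTIVE line for each logical volume in the group:
# lvscan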

All that was left was to make a directory where I was going to mount the filesystem and add an entry for it to /etc/fstab. All in all it went pretty well and my paranoia was unnecessary.
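
As a sketch of that last step, assuming an ext4 filesystem and a logical volume named lv_nas (I never mentioned my actual LV name or filesystem, so treat both as placeholders):
# mkdir -p /mnt/nas
# echo "/dev/lvm_data/lv_nas /mnt/nas ext4 defaults 0 2" >> /etc/fstab
# mount /mnt/nas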

The last thing to do before going to bed was to kick off the following command to change the raid level and incorporate the two new disks sitting there as hot-spares:
# mdadm --grow /dev/md127 --level=6 --raid-devices=7 --backup-file=/root/md127.backup
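
Reshape progress can be watched in /proc/mdstat, which shows a progress bar, speed, and an estimated time to completion:
# watch -n 60 cat /proc/mdstat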

Now all I need to do is wait 29 hours for the reshape to complete, and then I can grow the physical volume, extend the logical volume across the new extents, and resize the filesystem.
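
The rough shape of those follow-up commands, again with lv_nas standing in for the real logical volume name and assuming a filesystem that resize2fs can grow:
# pvresize /dev/md127
# lvextend -l +100%FREE /dev/lvm_data/lv_nas
# resize2fs /dev/lvm_data/lv_nas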

Category: Linux, Public, Servers

