Growing an ext3 partition on RAID1 without rebooting

Posted by joostje on Tue 25 Jul 2006 at 09:56

Although rather straightforward, I couldn't find an easy step-by-step guide, so here I'll describe how I ended up growing my ext3 partition on a RAID-1 array.

For me, the complicating factors were 1) using commands I don't use daily, and 2) the fact that the kernel won't re-read a disk's partition table while the disk still has a mounted partition.

First, the commands you would use if you don't mind rebooting, to grow /dev/md1 on /dev/sda3 and /dev/sdb3:

  #Alter the partition tables, so /dev/sda3 and /dev/sdb3 get their new size:
  fdisk /dev/sda
  fdisk /dev/sdb

  #reboot (make the kernel re-read the new partition tables)

  #then, grow the RAID1 array /dev/md1
  mdadm --grow /dev/md1 --size=max

  #and finally, grow the ext2 (or ext3) fs on /dev/md1
  resize2fs /dev/md1

The trick to avoiding the reboot is to make sure a disk is not in use at all (no active partitions) when we alter its partition table with fdisk. This is possible when every partition on the disk is part of a RAID1 array (or can be unmounted). So, before changing the partition sizes with fdisk, we use mdadm /dev/mdX --fail /dev/sdXX --remove /dev/sdXX to remove each of the disk's partitions from the RAID devices that use them, alter the partition table, then add the partitions back to the RAID devices, and repeat for the other disk.

First, here's the layout of my disks:

/dev/sda:
  sda1    part of /dev/md0
  sda2    SWAP (used)
  sda3    part of /dev/md1
  sda4    UNUSED

/dev/sdb:
  sdb1    part of /dev/md0
  sdb2    SWAP (unused)
  sdb3    part of /dev/md1
  sdb4    UNUSED

Next, the commands I issued:

#Freeing /dev/sda:
#First, removing all RAID1 partitions on /dev/sda:
mdadm /dev/md1 --fail /dev/sda3 --remove /dev/sda3
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1

#Then, stopping the swapspace on /dev/sda:
swapoff /dev/sda2

#Then, altering the partition tables:
fdisk /dev/sda
  #enter 'w' at the end to write the table. With the disk unused, this should now succeed.

#Start using the partitions again:
swapon /dev/sda2
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md1 --add /dev/sda3

# wait for both md devices to be fully synced
# (check /proc/mdstat)
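As a small sketch of how one might wait for that automatically (the md_busy helper and the 10-second polling interval are my own invention, not from the article):

```shell
#!/bin/sh
# Poll /proc/mdstat until no array reports a resync or recovery in
# progress. md_busy takes an optional file argument so it can be
# tried against a sample file; by default it reads /proc/mdstat.
md_busy() {
    grep -qE 'resync|recovery' "${1:-/proc/mdstat}" 2>/dev/null
}

while md_busy; do
    sleep 10
done
echo "all md devices are in sync"
```
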

#same with /dev/sdb
mdadm /dev/md1 --fail /dev/sdb3 --remove /dev/sdb3
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
fdisk /dev/sdb
mdadm /dev/md0 --add /dev/sdb1
#don't add /dev/sdb3 yet, to have a spare copy while resize2fs-ing

mdadm --grow /dev/md1 --size=max

resize2fs /dev/md1

#If the resize2fs-ing went OK, we can now add /dev/sdb3:
mdadm /dev/md1 --add /dev/sdb3
Anyway, that's the theory. In my case, resize2fs reported

  resize2fs: Operation not permitted While trying to add group #384

and that worried me enough to unmount the partition and run fsck.ext3 on it. fsck.ext3, by the way, found "Directories count wrong for group #299 (65535, counted=0)" errors for groups 299 through 383 (but not 384).

So, in the end I still had some effective down-time (with /dev/md1 unmounted), but less than a reboot would have cost. Also, the error message from resize2fs is unknown to Google, so it seems quite rare.

For the 'live' (mounted) growing of ext3 filesystems to work, a kernel newer than 2.6.10 is needed (IIRC). Packages I'm using:

ii  e2fsprogs      1.39-1         ext2 file system utilities and libraries
ii  mdadm          2.4.1-6        tool to administer Linux MD device arrays (s
ii  util-linux     2.12r-10       Miscellaneous system utilities
linux-kernel 2.6.16.18

Posted by Anonymous (212.254.xx.xx) on Tue 25 Jul 2006 at 10:11
Another easier and nicer way of achieving that would be to use LVM:

- Create a RAID1 partition of the entire disk
- Create LVM volume group VG_MAIN
- Add the RAID1 partition in VG_MAIN
- Create partition (/dev/VG_MAIN/part1) in VG_MAIN

When you need to resize:
- Unmount /dev/VG_MAIN/part1 (or resize online)
- resize /dev/VG_MAIN/part1

Done, without reboot!
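A rough sketch of those steps as commands (the device names, sizes, and the part1 mount point are illustrative, not something jec specified):

```shell
# one-time setup (assumes /dev/md0 is the whole-disk RAID1 array):
pvcreate /dev/md0                    # turn the array into an LVM physical volume
vgcreate VG_MAIN /dev/md0            # create the volume group on it
lvcreate -L 50G -n part1 VG_MAIN     # create a logical volume
mkfs.ext3 /dev/VG_MAIN/part1

# later, to grow part1 by 10 GB (offline variant):
umount /dev/VG_MAIN/part1
lvextend -L +10G /dev/VG_MAIN/part1  # grow the logical volume
resize2fs /dev/VG_MAIN/part1         # grow the ext3 fs to fill it
mount /dev/VG_MAIN/part1 /mnt/part1
```
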
-jec


Posted by joostje (193.173.xx.xx) on Wed 26 Jul 2006 at 09:18
I believe that still doesn't work if your disks grow in size. In my case, I had a 60 GB and an 80 GB disk; the 60 GB one broke down, and I replaced it with a 160 GB disk. Now I wanted one of the already existing filesystems to grow (to make use of the full 80 GB that can now be mirrored).

Could also happen if you simply replace both your hard disks with bigger ones.

Anyway, the "Create a RAID1 partition of the entire disk" line may not always work.

But, you're right, I should probably look into this LVM thing some time.


Posted by undefined (192.91.xx.xx) on Fri 28 Jul 2006 at 22:41
if you are replacing your hard drives as in your situation, and unless you have hotswap (hardware & software support), then you are going to have downtime anyway, so what's an extra few minutes to resize the partitions after you boot with the new drives?

using LVM would allow you to resize your partitions without rebooting (though instead of resizing a hard drive partition, we are going to create a new hard drive partition and LVM physical volume, and resize an LVM logical volume).

1. take that extra 20 GB on the old 80 GB drive and format it into a partition
2. format 20 GB on the new 160 GB into a partition
3. RAID those two 20 GB partitions into md2
4. format md2 as an LVM physical volume (PV)
5. add that LVM PV to an LVM volume group (VG)
6. resize the LVM logical volume (LV) sitting on top of the expanded LVM VG
7. resize the ext3 file system sitting on top of the resized LVM LV

all of that can be done without any downtime, except for possibly unmounting the file systems; i can't remember if any ext2/3 tools currently support "online" enlargement. i'm usually shrinking one and enlarging another, so i have to umount at least one, and usually do both while i've re-inited to single user mode.
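The seven steps might look something like this (the partition numbers /dev/hda4 and /dev/hdc4, and the vg00/home names, are my assumptions; adapt to your layout):

```shell
# 1-2. carve ~20 GB partitions out of each drive with fdisk
fdisk /dev/hda        # the old 80 GB drive -> e.g. /dev/hda4
fdisk /dev/hdc        # the new 160 GB drive -> e.g. /dev/hdc4

# 3. mirror the two new partitions as md2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hda4 /dev/hdc4

# 4-5. make md2 a physical volume and add it to the volume group
pvcreate /dev/md2
vgextend vg00 /dev/md2

# 6-7. grow the logical volume into the new space, then the filesystem
lvextend -l +100%FREE /dev/vg00/home
resize2fs /dev/vg00/home
```
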

pretty ascii art would have worked well above, but i don't have the computer art skills, so i'll refer you to the LVM howto.


Posted by Anonymous (80.69.xx.xx) on Wed 26 Jul 2006 at 08:43
perhaps partprobe (from the parted package) can help you let the kernel re-read the partition table without rebooting.


Posted by joostje (193.173.xx.xx) on Wed 26 Jul 2006 at 09:11
Ah, yes, thanks, that should also work (at least that's what it looks like after reading the manual page).


Posted by fangorn (83.14.xx.xx) on Wed 12 Sep 2007 at 18:29
If you want to grow a RAID1 partition used by an LVM group that contains the root partition, you could try this.

My setup:
two ata drives,
software RAID1 md0 for /boot
software RAID1 md1 as Physical Volume for LVM
Volume Group VG00 using md1
root ext3 partition, and other partitions on VG00

I wanted to grow md1 (I got a bigger disk), in order to grow vg00.

So what you need to do is:

1) get a rescue cd, so you can boot into something after the partition resizing

2) run fdisk on the disks (/dev/hdX) and resize the RAID partitions

3) reboot into rescue cd (I used Debian 4.0 netinst cd)
Don't worry if md1 doesn't start

4) umount all partitions on vg00 and deactivate the volume group with the resized partition

vgchange -a n

(it's activated by debian rescue mode automatically)

5) you need to recreate RAID1 array md1 - adapt to your setup and run

mdadm --create /dev/md1 --verbose --level=1 -n 2 /dev/hda2 missing

6) start volume group

vgchange -a y

7) mount root and boot partition

mount /dev/mapper/vg00-root /target
mount /dev/md0 /target/boot

8) supply new array info to mdadm.conf

mdadm --detail --scan >>/target/etc/mdadm/mdadm.conf

and edit mdadm.conf to conform to current setup

9) chroot into /target (root)

chroot /target /bin/bash

10) create new initrd with new RAID configuration

mkinitramfs -o /boot/initrd.img-2.6.18-5-486

11) reboot into your system
12) grow the physical volume (pvresize makes the new space available to the volume group)

pvresize /dev/md1

13) follow the main post and grow the filesystems.



Good luck,

Fangorn


Posted by Anonymous (220.233.xx.xx) on Sat 28 Jun 2008 at 01:45
"mdadm --grow -n X /dev/md0" (where X is a number of RAID partitions) can be used to increase the number of live partitions when adding new disks instead of increasing partition sizes.
This causes the RAID array to rebuild onto both new disks at the same time, if you add the 2 new drives first... Probably saves some copying time (the old disks should only need to be read once, rather than twice).

After it's done, fail the old drives and grow the array back down to 2 devices and up to the bigger size (I'm not sure whether the single grow command shown in the article does both...).
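Pieced together, Tin's notes might look like this for a two-disk RAID1 (the sdc1/sdd1 names and the explicit shrink back to two devices are my reading of the comment, so treat this as a sketch):

```shell
# add both new (larger) partitions, then widen the mirror to 4 devices
# so the array rebuilds onto both new disks in one pass
mdadm /dev/md0 --add /dev/sdc1 /dev/sdd1
mdadm --grow /dev/md0 --raid-devices=4

# ...wait for the resync to finish (check /proc/mdstat)...

# drop the old, smaller disks and shrink back to a 2-way mirror
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm --grow /dev/md0 --raid-devices=2

# finally take up the extra space and grow the filesystem
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0
```
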


The above is more notes for myself than anything, but it might help someone else too.
--Tin


Posted by gavin (64.70.xx.xx) on Tue 2 Aug 2011 at 03:32
this is my way:
1. mdadm /dev/mdX -a /dev/sd_bigger1
2. mdadm /dev/mdX -f /dev/sd_smaller1 -r /dev/sd_smaller1

then wait for the rebuild process to finish.

3. mdadm /dev/mdX -a /dev/sd_bigger2
4. mdadm /dev/mdX -f /dev/sd_smaller2 -r /dev/sd_smaller2

then wait for the rebuild process to finish.

now the mdX RAID1 device contains the two bigger partitions.
now grow the mdX raid device with --grow:

5. mdadm --grow /dev/mdX --size=max

it's time to enlarge the file system with resize2fs:
6. resize2fs -p /dev/mdX

resize2fs now supports online mode for ext3 with a 2.6 kernel (which means you don't need to umount /dev/mdX, and of course don't need to reboot).

this way works fine on my system:
2.6.36-gentoo-r8
mdadm - v3.1.4 - 31st August 2010.


