Migrating To RAID1 Mirror on Sarge

Posted by philcore on Thu 8 Sep 2005 at 21:03

A guide to migrating to RAID1 on a working Debian Sarge installation which was installed on a single drive.

I suggest reading the following links: Migrating to a mirrored raid using Grub, GRUB and RAID mini-HOWTO.

My setup:

/dev/sda == original drive with data
/dev/sdb == new 2nd drive.

(It is assumed that you have RAID1 enabled in your kernel.)

First of all, install the md tools:

apt-get install mdadm
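
If you're not sure whether RAID1 support is actually available, a quick check (a sketch, assuming md and raid1 are built as modules rather than compiled into the kernel) is to load the module and look at /proc/mdstat:

modprobe raid1
cat /proc/mdstat

If /proc/mdstat exists and lists raid1 under Personalities, you're good to go.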

Change the partition types on the old drive to fd (Linux raid autodetect) for every partition you want to mirror, using [s]fdisk. Don't change the swap partition! Your finished drive should resemble this output:

[root@firefoot root]# sfdisk -l /dev/sda

Disk /dev/sda: 8942 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sda1   *      0+    242     243-   1951866   fd  Linux raid autodetect
/dev/sda2        243     485     243    1951897+  fd  Linux raid autodetect
/dev/sda3        486     607     122     979965   82  Linux swap / Solaris
/dev/sda4        608    8923    8316   66798270    5  Extended
/dev/sda5        608+   1823    1216-   9767488+  fd  Linux raid autodetect
/dev/sda6       1824+   4255    2432-  19535008+  fd  Linux raid autodetect
/dev/sda7       4256+   4377     122-    979933+  fd  Linux raid autodetect
/dev/sda8       4378+   8923    4546-  36515713+  fd  Linux raid autodetect
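
If you'd rather change the partition types non-interactively, the sarge-era sfdisk can do it directly; a sketch for the first two partitions of the layout above (newer sfdisk versions replaced --change-id with --part-type):

sfdisk --change-id /dev/sda 1 fd
sfdisk --change-id /dev/sda 2 fd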

Now use sfdisk to duplicate partitions from old drive to new drive:

sfdisk -d /dev/sda | sfdisk /dev/sdb
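
As an optional sanity check, list the new drive the same way as above and make sure the tables match:

sfdisk -l /dev/sdb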

Now use mdadm to create the raid arrays. We mark the first drive (sda) as "missing" so it doesn't wipe out our existing data:

mdadm --create /dev/md0 --level 1 --raid-devices=2 missing /dev/sdb1

Repeat for the remaining raid volumes (md1, md2, etc.):

mdadm --create /dev/md1 --level 1 --raid-devices=2 missing /dev/sdb2
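
For the layout shown above, the remaining arrays pair up like this (a sketch of my particular partitioning; adjust the partition numbers to your own):

mdadm --create /dev/md2 --level 1 --raid-devices=2 missing /dev/sdb5
mdadm --create /dev/md3 --level 1 --raid-devices=2 missing /dev/sdb6
mdadm --create /dev/md4 --level 1 --raid-devices=2 missing /dev/sdb7
mdadm --create /dev/md5 --level 1 --raid-devices=2 missing /dev/sdb8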

Now that the volumes are ready, create filesystems on the raid devices. My example uses ext3, but pick the filesystem of your choice. Again, make sure you have kernel support for your selected filesystem.

mkfs.ext3 /dev/md0
mkfs.ext3 /dev/md1
etc...
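
Spelled out for this layout, to match the fstab further down (a sketch only; mkfs.xfs assumes the xfsprogs package is installed):

mkfs.ext3 /dev/md2
mkfs.xfs  /dev/md3
mkfs.ext3 /dev/md4
mkfs.xfs  /dev/md5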

Now mount the new raid volumes. I mount them under the /mnt directory:

mount /dev/md0 /mnt
cp -dpRx / /mnt

Now copy the remaining partitions. Be careful to match your md devices with your filesystem layout. This example is for my particular setup.

mount /dev/md1 /mnt/var
cp -dpRx /var /mnt
mount /dev/md2 /mnt/usr
cp -dpRx /usr /mnt/
mount /dev/md3 /mnt/home
cp -dpRx /home /mnt
mount /dev/md4 /mnt/tmp
cp -dpRx /tmp /mnt
mount /dev/md5 /mnt/data
cp -dpRx /data /mnt

Format the swap partition on the new drive:

mkswap -v1 /dev/sdb3

Edit /mnt/etc/fstab and change it to use the md devices. Also note the pri=1 on both swap partitions; this should increase swap performance.

# /etc/fstab: static file system information.
#
proc            /proc           proc    defaults                   0       0
/dev/md0        /               ext3    defaults,errors=remount-ro 0       1
/dev/md1        /var            ext3    defaults                   0       2
/dev/md2        /usr            ext3    defaults                   0       2
/dev/md3        /home           xfs     defaults                   0       2
/dev/md4        /tmp            ext3    defaults,noexec            0       2
/dev/md5        /data           xfs     defaults                   0       2
 
/dev/sda3       none            swap    sw,pri=1                   0       0
/dev/sdb3       none            swap    sw,pri=1                   0       0

/dev/hda        /media/cdrom0   iso9660 ro,user,noauto             0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto             0       0

Now to set up the bootloader: edit /mnt/boot/grub/menu.lst and add an entry to boot using raid, plus a recovery entry in case the first drive fails.


title       Custom Kernel 2.6.11.7
root        (hd0,0)
kernel      /boot/vmlinuz-2.6.11.7 root=/dev/md0 md=0,/dev/sda1,/dev/sdb1 ro
boot

title       Custom Kernel 2.6.11.7 (RAID Recovery)
root        (hd1,0)
kernel      /boot/vmlinuz-2.6.11.7 root=/dev/md0 md=0,/dev/sdb1 ro
boot
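
If your kernel needs an initial ramdisk (stock Debian kernels do), keep an initrd line in these entries as well, or the md driver may not be available when the kernel tries to mount /dev/md0. A sketch, with a hypothetical initrd file name to match the custom kernel above:

title       Custom Kernel 2.6.11.7 (with initrd)
root        (hd0,0)
kernel      /boot/vmlinuz-2.6.11.7 root=/dev/md0 md=0,/dev/sda1,/dev/sdb1 ro
initrd      /boot/initrd.img-2.6.11.7
boot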

Install grub on the second drive so that we can still boot if the first drive fails. Refresh grub on the first drive, then use the grub shell to write it to the MBR of the second drive:

grub-install /dev/sda
grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

Copy the live GRUB configuration and fstab files to the old drive:

cp -dp /mnt/etc/fstab /etc/fstab
cp -dp /mnt/boot/grub/menu.lst /boot/grub

Now it is time to reboot and test things.

Once the system comes up, you should see the mounted md devices.

[root@firefoot root]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md0               1921036    304552   1518900  17% /
tmpfs                   193064         4    193060   1% /dev/shm
/dev/md1               1921100    206768   1616744  12% /var
/dev/md2               9614052   2948620   6177064  33% /usr
/dev/md3              19524672    741140  18783532   4% /home
/dev/md4                964408     16448    898968   2% /tmp
/dev/md5              36497820   6683308  29814512  19% /data

At this point, you have all of your original data on the new drive, so we can safely add the original drive to the raid volume.

mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md1 /dev/sda2
... repeat for remaining partitions.
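
For this layout, the remaining partitions pair up as follows (matching the /proc/mdstat output below):

mdadm --add /dev/md2 /dev/sda5
mdadm --add /dev/md3 /dev/sda6
mdadm --add /dev/md4 /dev/sda7
mdadm --add /dev/md5 /dev/sda8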

Check /proc/mdstat for the skinny on what's done and what's not. When everything is finished, all the devices should show [UU]. Don't reboot until it has finished syncing the drives.

[root@firefoot root]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      1951744 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
      1951808 blocks [2/2] [UU]

md2 : active raid1 sdb5[1] sda5[0]
      9767424 blocks [2/2] [UU]

md3 : active raid1 sdb6[1] sda6[0]
      19534912 blocks [2/2] [UU]

md4 : active raid1 sdb7[1] sda7[0]
      979840 blocks [2/2] [UU]

md5 : active raid1 sdb8[1] sda8[0]
      36515648 blocks [2/2] [UU]

 

 


Posted by Anonymous (69.196.xx.xx) on Fri 9 Sep 2005 at 03:43
It is *not* a good idea to have the kernel write directly to the swap partitions. If you lose a drive, your system will crash _hard_.

Put swap on a raid device.

A.

[ Parent | Reply to this comment ]

Posted by Serge (213.118.xx.xx) on Fri 9 Sep 2005 at 07:35
[ View Serge's Scratchpad | View Weblogs ]
Well, this is a question that I've asked myself a long time.

Although with a hard drive crash you will lose memory, and hence some or all processes will be killed, it appears the system would not crash hard. The downside of swap on raid, on the other hand, would be a performance hit.

At least, that's what I was told. I'm still not sure what to think about it. Anybody with concrete experience on this?

[ Parent | Reply to this comment ]

Posted by Anonymous (64.4.xx.xx) on Thu 23 Nov 2006 at 06:34
I have experience -

Depends on how much swap space you have, how much of your stuff is swapped to disk, and which processes are swapped.

If you have a little memory and a lot of swap, and everything is swapped to disk, and some of those processes are moderately important (except init, being the first process to start, and memory being first come, first served...), you can have some problems with a failed disk.

Or, if a really dumb process has a memory leak (that never happens...) and has consumed a lot of swap and the drive fails, you can be left with a frozen machine.

I raid1 a few partitions together and mkswap the md*. There is no performance hit, other than that you only have half the swap you would otherwise, but with hard drive space being so cheap nowadays (750GB for $350, and that's the biggest drive ever. I remember spending $300 for an 850MB drive when 1.2GB was out), it's kinda dumb not to toss a straight 2GB into swap.

At worst, applications swap to disk as fast as they would otherwise, and you can get them pulled back into memory twice as fast. At best... well, sorry, that's your only option :)

[ Parent | Reply to this comment ]

Posted by philcore (216.54.xx.xx) on Fri 9 Sep 2005 at 13:28
[ View Weblogs ]
Interesting. I have never used swap on a raid device. Not many HOWTOs mention that. Thanks for the info. I'm going to convert a system right now.


And here's a link to a LJ article about it.

Thanks again.

[ Parent | Reply to this comment ]

Posted by Anonymous (207.34.xx.xx) on Fri 9 Sep 2005 at 22:10
I've been running swap on raid1 (mirrored) for a long time. I agree 100% with the poster -- why have raid if you're virtually guaranteeing that your machine will crash, or be substantially degraded, in the case of a hard drive failure?

The nice thing about raid1 is that you get virtually 100% write throughput, and virtually 200% read throughput; data is read from both drives simultaneously.

So, in addition to the extra security of knowing that your swap won't get corrupted in a failure, you get double the "swap in" performance, and almost no additional "swap out" cost!

[ Parent | Reply to this comment ]

Posted by philcore (68.107.xx.xx) on Fri 9 Sep 2005 at 23:43
[ View Weblogs ]
'Tis the beauty of open forums.

...and I filled my quota for learning something new today.

[ Parent | Reply to this comment ]

Posted by Anonymous (67.104.xx.xx) on Mon 12 Sep 2005 at 17:50
I set up Raid1 on 2 Sata drives in a Dell poweredge 750.

The raid works, kinda. Compared to the many other RAID1 setups I have made with woody using both IDE and SCSI, the SATA one is a little lacking. If you lose one drive (tested by physically unplugging the drive) the system will crash. You can then reboot on the good drive and the system will come back up.

To make sure it was SATA, I ran the test on another machine using IDE and a crash did not occur. I also have about 20 other machines that have upgraded to sarge with both SCSI and IDE and they still seem to work as planned. For whatever reason, SATA and raid get a score of "better than nothing" rather than a nice RAID1 set.

Be warned and let me know if anybody else has got it to work.

[ Parent | Reply to this comment ]

Posted by Anonymous (210.211.xx.xx) on Tue 6 May 2008 at 11:11
"tested by physically unplugging the drive"

Um. I hope that wasn't intentional. Unplugging a live drive that isn't hotplug
can destroy that drive at the least.

DAMHIKIJKOK?

PJ

[ Parent | Reply to this comment ]

Posted by Anonymous (213.209.xx.xx) on Mon 12 Sep 2005 at 23:25
hello,
I have tried your howto a lot of times, but after the reboot I always get a kernel panic.
Can I do all the steps just like you do in the howto? My partitions are listed below. As far as I understand, it is better to put the swap on the raid1 as well. Is that right?

Filesystem Size Used Avail Use% Mounted on
/dev/sda1 250M 69M 168M 30% /
tmpfs 249M 0 249M 0% /dev/shm
/dev/sda10 136G 33M 129G 1% /files
/dev/sda9 1,6G 8,1M 1,5G 1% /home
/dev/sda8 361M 8,1M 334M 3% /tmp
/dev/sda5 4,6G 288M 4,1G 7% /usr
/dev/sda6 2,8G 79M 2,6G 3% /var

thx for help

[ Parent | Reply to this comment ]

Posted by philcore (68.107.xx.xx) on Mon 12 Sep 2005 at 23:42
[ View Weblogs ]
Can you see exactly what is causing the panic? Is RAID1 enabled in your kernel? Stock debian kernel or custom?

Make sure there are no typos and that you are mapping the right physical disk to the right md device.

And yes, I think the raid swap partitions are the way to go. You'd just set them to type fd in fdisk just like the other partitions, and then modify your /etc/fstab to use the md device instead of the raw disk partition.
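
The resulting fstab line would then look something like this (a sketch; md6 is just a placeholder for whatever md device you create for swap):

/dev/md6        none            swap    sw                         0       0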

philcore

[ Parent | Reply to this comment ]

Posted by Anonymous (213.209.xx.xx) on Tue 13 Sep 2005 at 00:00
-> raid1 enabled
Yes, I suppose so - how can I test it?

-> swap
OK, I did it.

-> the error
Still the same:
pivot_root: no such file or directory sbin/init:432 cannot open dev/console: no such file
kernel panic....

[ Parent | Reply to this comment ]

Posted by Anonymous (213.209.xx.xx) on Tue 13 Sep 2005 at 00:14
I found this while booting:

ext3-fs: unable to read superblock
mount:wrong fs type, bad option, bad superblock on md...

I use Debian Sarge with the current amd64 kernel on a newly installed system with 2 SATA2 disks.

[ Parent | Reply to this comment ]

Posted by philcore (68.107.xx.xx) on Tue 13 Sep 2005 at 00:52
[ View Weblogs ]
double check /boot/grub/menu.lst and make sure you're using the right root partition.

Did everything go ok when you created the raid devices? cat /proc/mdstat reported good things? Are you able to boot at all from another kernel/config?

Simple way to check if raid is enabled is to cat /proc/mdstat. I'm pretty sure mdadm --create would have failed if it wasn't enabled.

[ Parent | Reply to this comment ]

Posted by Anonymous (213.209.xx.xx) on Tue 13 Sep 2005 at 19:42
Hm, cat /proc/mdstat doesn't show anything. How can I load the raid support into the kernel? Can I post more text here, or somewhere else? Thanks

[ Parent | Reply to this comment ]

Posted by Anonymous (213.209.xx.xx) on Tue 13 Sep 2005 at 20:06
It's me again,

cat /proc/mdstat
Personalities :
unused devices:

Yes, I can still boot with my old kernel image.

[ Parent | Reply to this comment ]

Posted by Anonymous (80.203.xx.xx) on Wed 14 Sep 2005 at 09:53
I had the same problem, and fixed it this way:

1) After installing mdadm, remove the empty mdadm.conf; if it is left in place it will stop the boot process from finding the md* devices:
 mv /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.away
2) After the 'mdadm --create' lines, create a new initrd with md0 as a boot drive:
 mkinitrd -r /dev/md0 -o /boot/initrd.img-2.6.11-686
3) In the grub config, make sure you use an initrd in your /boot/grub/menu.lst:
 initrd          /boot/initrd.img-2.6.11-686
Good luck!

[ Parent | Reply to this comment ]

Posted by Anonymous (213.209.xx.xx) on Wed 14 Sep 2005 at 23:06
Many thanks. Now the system boots correctly, but when I run df I see sda1... listed as the devices and not the md devices. What's wrong?
Many thanks

[ Parent | Reply to this comment ]

Posted by Anonymous (213.191.xx.xx) on Thu 15 Sep 2005 at 13:08
I try it again and again, and I get lots of failures while booting (superblock path not right...). After that I log in and the md partitions don't look right: md1 on / and md5 on the /usr dir. So please help.

I need information about the swap on the raid1 drive - can I make it the same way I did with the other partitions (mkfs.ext3?)

Can I create my sda1 as md1, or must I have md0?
Can I number the md's like my partitions - create ... md8 ... /dev/sdb8?
If I create my sda1 as md1, is it right that I have to mount /dev/md1 on /mnt and cp / to /mnt?
How must I handle the swap once the swap is on raid1 (like mkswap...)?

At the point with grub - can I still use grub with the option hd0,0?

Sorry for all these questions, but I have been working on this for 5 days.

[ Parent | Reply to this comment ]

Posted by philcore (68.107.xx.xx) on Sat 17 Sep 2005 at 13:36
[ View Weblogs ]
To make the swap partitions raid devices, do a swapoff -a. Use fdisk to change the partition type of your swap partitions to fd on both disks. Now create an md device using
mdadm --create just like in the original article, using the swap partitions. If your last md device is, say, md5, then create md6.

Check /proc/mdstat to make sure the device was created. Now you can mkswap /dev/md? on the new md device. Now do swapon /dev/md?. At this point you can run swapon -s to see that swap is using the raid device.
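
Put concretely, for the swap partitions used in the article, sda3 and sdb3 (a sketch; md6 is just an example device number, and it assumes both partition types have already been changed to fd):

swapoff -a
mdadm --create /dev/md6 --level 1 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md6
swapon /dev/md6
swapon -s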

Now change your /etc/fstab to tell it to use the md device instead of the raw disk partition.

See the referenced linux journal article for a more complete view.

[ Parent | Reply to this comment ]

Posted by Anonymous (213.209.xx.xx) on Sun 18 Sep 2005 at 22:24
Hi,
and thanks for your help so far. Now I can sync the raid with both devices, but after a reboot there is only one device in the raid.
Any idea?

[ Parent | Reply to this comment ]

Posted by Anonymous (213.209.xx.xx) on Sun 18 Sep 2005 at 22:53
OK, I found the problem. Now I tested it by disconnecting a disk. After a reboot the system boots from the other disk. Then I reconnected the missing disk and hoped that the system would automatically resync the disk into the raid, but it didn't. Is that normal? What can I do?
Thanks

[ Parent | Reply to this comment ]

Posted by philcore (68.107.xx.xx) on Mon 19 Sep 2005 at 12:07
[ View Weblogs ]
you should do a mdadm --add /dev/md? /dev/sd? to add the other disk to the array. Is it the first disk that is not booting? Do you have grub installed on that disk?

[ Parent | Reply to this comment ]

Posted by geomanous (195.167.xx.xx) on Mon 18 Aug 2008 at 09:41
Does this still work when upgrading your kernel from 2.6.18 to 2.6.22-2? I spent my weekend on it and never managed to boot my system with the new kernel.

/dev/md0 is the boot partition
/dev/md1 is swap
/dev/md2 is the root partition

I tried lots of things when recreating the initrd.img, but no matter what I tried the filesystem was not mounted during boot, so a kernel panic prevented the boot from completing. Is there an updated procedure to make this work?
The only big difference from the old (and currently working) kernel is that the new one uses a boot image in RAM to start the process (if I got that right).

any idea???

[ Parent | Reply to this comment ]

Posted by Netsnipe (211.30.xx.xx) on Sat 25 Feb 2006 at 14:06
pivot_root: no such file or directory sbin/init:432 cannot open dev/console: no such file

This got me too when I was migrating. I discovered that in fact /dev/console did not exist on my newly created RAID array.

When copying / (and /dev) over from the old drive to the degraded array, make sure that udev is disabled, or else /dev ends up being remounted on its own virtual filesystem, meaning that cp/rsync will skip over it entirely and leave you without a legacy /dev directory for the system to bootstrap itself from.

So before you copy your data, drop into single user mode

# init 1
and then disable udev
# /etc/init.d/udev stop

--
Andrew "Netsnipe" Lau
http://www.cse.unsw.edu.au/~alau/

Debian GNU/Linux Maintainer
Computer Science, UNSW

[ Parent | Reply to this comment ]

Posted by Anonymous (81.169.xx.xx) on Fri 16 Sep 2005 at 21:59
Debian Sarge (and Woody)
Ubuntu

Problem:
--------
Root is installed on raid, but after a reboot the other drive is always missing and the
logs look like this:

***************************************************
md: md0 stopped.
md: bind
raid1: raid set md0 active with 1 out of 2 mirrors
***************************************************
If you use Debian Sarge's installer to create root on raid you will encounter
this problem. Migrating from a non-raid root may also lead you to this
problem. If you build a degraded array (mdadm /dev/md0 --create -l1 -n2
/dev/hda? missing) and install a kernel-image to the raid root while it's degraded,
you will end up with the other disk missing after every reboot, even if you always
hot-add it to the array.

The problem lies in mkinitrd; the following is from its manpage: "If both mdadm(8)
and raidtools2 are installed, the former is preferred. At the moment,
mkinitrd uses the -D option of mdadm(8) to discover the constituent devices.
!!!This means that only devices that are part of the array at the time that
mkinitrd is run will be used later on.!!! This problem does not exist when
raidtools2 is used."


Solution:
---------
Reinstall the kernel-image (or re-create the initrd.img) after you have hot-added the
partition that the raid lost at boot (and waited for it to resync). This way mkinitrd
creates a new initrd.img which contains both partitions for the raid1.
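
In other words, something along these lines (a sketch using the sarge initrd-tools mkinitrd; the partition and kernel version are only examples, substitute your own):

mdadm /dev/md0 --add /dev/hda1
cat /proc/mdstat                    (wait until the resync has finished)
mkinitrd -o /boot/initrd.img-2.6.8-2-386 2.6.8-2-386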

http://piirakka.com/misc_help/Linux/raid_starts_degraded.txt

[ Parent | Reply to this comment ]

Posted by fphart (24.211.xx.xx) on Tue 20 Sep 2005 at 23:49
Back in March, I installed Debian Sarge RC2+ (2/28 snapshot) from scratch as a RAID-1 install on two identical SCSI disks. All of my md partitions worked after first and subsequent shutdown/boots *except* for the swap partition. It worked only on the first boot after partitioning, but after shutting down and rebooting, it was absent, and I have never been able to figure out why. (Thread with details here.)

I was able to identify a workaround to recreate the swap device after every boot, but unfortunately, the folks at the Debian Testing list weren't able to help with a fuller resolution. So I put the thing aside for a few months, hoping that it would be fixed when Sarge was finally released (and before I really needed the Linux box, which I do now). Recently, I updated my Sarge installation (kept it at Sarge by changing 'testing' to 'sarge' in my apt sources file), but the same behavior remains. Because the problem only showed up on reboots after the first one, I've always thought the problem was in the shutdown scripts.

This installation has only mdadm, so could the initrd.img problem above be the cause of my troubles? I'd appreciate any helpful hints, even pointers to good docs. Unfortunately, docs on the Debian startup and shutdown scripts seem scarce.

[ Parent | Reply to this comment ]

Posted by Anonymous (213.118.xx.xx) on Sun 2 Oct 2005 at 13:34
Hi, when I tried to follow this howto, I got the following error when trying to write the partition table to the second hard disk. Does anyone have any ideas how to solve this?

server:/# sfdisk -d /dev/hda | sfdisk /dev/hdc
Checking that no-one is using this disk right now ...
OK

Disk /dev/hdc: 24321 cylinders, 255 heads, 63 sectors/track
Old situation:
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End #cyls #blocks Id System
/dev/hdc1 0 - 0 0 0 Empty
/dev/hdc2 0 - 0 0 0 Empty
/dev/hdc3 0 - 0 0 0 Empty
/dev/hdc4 0 - 0 0 0 Empty
Warning: given size (268221240) exceeds max allowable size (260654625)

sfdisk: bad input

Thanks!

[ Parent | Reply to this comment ]

Posted by Anonymous (68.107.xx.xx) on Sun 2 Oct 2005 at 18:03
Is the second disk smaller than the first disk?

[ Parent | Reply to this comment ]

Posted by philcore (68.107.xx.xx) on Sun 2 Oct 2005 at 18:19
[ View Weblogs ]
I've realized I missed the boat on putting some boundaries on what this howto was supposed to achieve. I didn't go into depth about possible differences in kernel configs, grub configs, etc. This is really not a good guide for beginners, I'm afraid. It's geared more to advanced users who want to add a second disk without trashing their install. I'm going to try this again sometime soon, geared more towards beginners, with more step-by-step action and more talk of possible gotchas that can occur along the way.

philcore.

[ Parent | Reply to this comment ]

Posted by Anonymous (200.208.xx.xx) on Mon 17 Oct 2005 at 14:53
I've done everything like in the tutorial, but I have a problem. My /dev/hda1 is not in /dev/md0. Every time I boot the server, I have to do mdadm --add /dev/md0 /dev/hda1
Can anyone tell me why this happens, and how I can solve it?
Thanks

[ Parent | Reply to this comment ]

Posted by Anonymous (216.54.xx.xx) on Mon 17 Oct 2005 at 15:07
someone above posted about this issue. See comment 17 above.

[ Parent | Reply to this comment ]

Posted by Anonymous (210.6.xx.xx) on Wed 26 Oct 2005 at 04:49
Hi Philcore,

How's your progress on the HOWTO guide for beginners? I really appreciate your work and look forward to seeing your guide soon, because I'm a newbie and want a step-by-step guide for setting up my server with RAID 1. Thank you for sharing! Good job!

Lawrence

[ Parent | Reply to this comment ]

Posted by philcore (70.161.xx.xx) on Wed 26 Oct 2005 at 12:24
[ View Weblogs ]
Working on it. Been playing with ldap lately, too.

Also remember, sarge has support for raid on fresh installations, so if you have the means to install a new server, you don't have to migrate to raid after the install anymore like you did with woody.

Anyway, it's coming... :)

[ Parent | Reply to this comment ]

Posted by Anonymous (210.6.xx.xx) on Wed 26 Oct 2005 at 12:58
Hi, Philcore,

Thanks for your quick reply. Yes, I know. But I tried it before and just got a lot of errors like "invalid argument", etc. Maybe I don't know how to set it up properly! :( Do you have any reference URL that I can follow? Thanks again!

Here is my server configuration.

AsRock P4VM800 (PM800+VIA8237R)
P4 2.4Ghz with HT (512KB)
1GB DDR400 RAM
Seagate 160GB SATA HDD x 2 (want to setup RAID 1)

Thanks!

Lawrence

[ Parent | Reply to this comment ]

Posted by philcore (216.54.xx.xx) on Fri 25 Nov 2005 at 15:30
[ View Weblogs ]
I've posted an update to my article on my weblog.

http://www.debian-administration.org/users/philcore/weblog/4

Hopefully it's easier to follow.

[ Parent | Reply to this comment ]

Posted by Anonymous (147.197.xx.xx) on Fri 6 May 2011 at 15:18
Thanks for the tutorial, but it would be responsible of you to put a link to the updated article at the beginning of this one. Otherwise you'll end up with people like me that follow this tutorial, then read the comments and (possibly) see the updated one hidden away.

[ Parent | Reply to this comment ]

Posted by Anonymous (82.241.xx.xx) on Sun 8 Jan 2006 at 20:21
This thread helped me get through.
I'll just post some problems and solutions I've encountered:

1 - I had an error the first time I wanted to create a raid device, the system
claiming the node didn't exist, so I had to create the node before
creating the raid device:
mknod /dev/md0 b 9 0

2 - I had to edit /etc/mdadm/mdadm.conf to make the system automount the RAID
partitions, but it seems that I am the only one who had to do this.

3 - I had to add the modules 'md' and 'raid1' to the initrd image, and build the
new image with
mkinitrd -o /boot/initrd.img-2.6.11-386
to have the root partition on RAID.

4 - I had to check the /dev directory, because the device files had not been copied
with the <cp -dpRx>.

5 - I had to run a second <mkinitrd -o /boot/initrd.img-2.6.11-386> after
synchronisation of the two system partitions to have the second system
partition in the corresponding RAID (which had already been reported
above).

So, I now have a working, secure system. Thanks everybody for your tips!

LaMain serves you well!

[ Parent | Reply to this comment ]

Posted by bujecas (194.65.xx.xx) on Mon 6 Nov 2006 at 11:50
Maybe your problem with the RAID automount is because you didn't set the partitions to fd (Linux raid autodetect).

I'm having the same problem with /dev. The cp -dpRx doesn't copy the device files; I have to copy them to the new partition myself.

[ Parent | Reply to this comment ]

Posted by Anonymous (74.101.xx.xx) on Thu 6 Dec 2007 at 19:16
For kernels 2.6.13 and newer, mkinitrd will not work. The two options are initramfs-tools and yaird. Yaird will be your biggest source of problems when you're switching from non-raid to raid. Use initramfs-tools.

apt-get install initramfs-tools
update-initramfs -c -k <kernel_version>
update-grub

[ Parent | Reply to this comment ]

Posted by Anonymous (195.127.xx.xx) on Tue 10 Jan 2006 at 08:51
Hi there,

From my point of view, copying the files with tar (for the / filesystem) should work better.

Instead of the cp commands use:

For the / -filesystem:
tar -clf - -C / . | tar -xpvf - -C /mnt

For the /var -filesystem:
tar -clf - -C /var . | tar -xpvf - -C /mnt

For the /usr -filesystem:
tar -clf - -C /usr . | tar -xpvf - -C /mnt

For the /home -filesystem:
tar -clf - -C /home . | tar -xpvf - -C /mnt

For the /tmp -filesystem:
tar -clf - -C /tmp . | tar -xpvf - -C /mnt

For the /data -filesystem:
tar -clf - -C /data . | tar -xpvf - -C /mnt

Regards,

[ Parent | Reply to this comment ]

Posted by Anonymous (216.155.xx.xx) on Thu 10 Aug 2006 at 16:07
Very good article. My partner and I followed it to create the RAID1 in Debian Sarge. We had some problems, but finally we got the raid working, and now we are testing it. We still have some doubts, but we keep working on it. Thanks, and sorry about my English, we are Chilean. Bye, and again, good article.

Fidel and René

[ Parent | Reply to this comment ]

Posted by Anonymous (71.97.xx.xx) on Sat 6 Jan 2007 at 00:55
I see most of the migration guides use grub. What if my original installation
is lilo based? Convert to grub? Reinstall? Suggestions?

Thanks
Ramesh

[ Parent | Reply to this comment ]

Posted by kvorg (194.249.xx.xx) on Sun 25 Mar 2007 at 21:43
I have had luck with a lilo-based installation by simply setting the new root and adding the required md parameters in the "append" option to be appended to the kernel command line, effectively doing the same thing the Grub setup here does; so, to follow the example:

from:

root=/dev/sda1

default=Linux

image=/boot/vmlinuz-2.6
label=Linux
read-only
initrd=/boot/initrd.img-2.6

to:

root=/dev/md0

default=Linux

image=/boot/vmlinuz-2.6
label=Linux
read-only
initrd=/boot/initrd.img-2.6
append='md=0,/dev/sda1,/dev/sdb1'

Remember that you can stick both root and append in a specific kernel setup or in the general setup, so you can have a raid and non-raid setup at the same time.

This is a very good idea, since lilo is somewhat more limited in its booting versatility once your setup does not work. You can still edit the command line, though.

Of course, all the initrd procedure is still essential!

Best success,
-kvorg

[ Parent | Reply to this comment ]

Posted by m_mancini (159.213.xx.xx) on Mon 7 May 2007 at 17:07
I used LILO in all my raids,

I found (starting from version 22):


raid-extra-boot=[auto|mbr|<list of devices>|...]




mbr should work, even though I always used

raid-extra-boot=/dev/sda,/dev/sdc




have good day
massi

[ Parent | Reply to this comment ]

Posted by Anonymous (80.38.xx.xx) on Thu 14 Jun 2007 at 12:21
Hi, good job.

Only one thing to add, if you'll let me: here is how to migrate to raid1 from LVM.
OS: Debian, kernel 2.6.21
/ ext3
else in LVM
To migrate / do it as above (works great), and to migrate the LVM:
Suppose:
/dev/hda1 /
/dev/hda2 swap
/dev/hda3 LVM
/dev/hdb1 /dev/md0
/dev/hdb2 /dev/md1
/dev/hdb3 /dev/md2
# vgextend datavg /dev/md2
# pvmove /dev/hda3 /dev/md2
# vgreduce datavg /dev/hda3

Simple, but I had been searching Google for perhaps an hour without finding the appropriate answer...

And another thing is the grub config: don't forget to add the same initrd line you have for normal startup to the new entries:

title Custom Kernel
root (hd0,0)
kernel /boot/vmlinuz-2.6.21-1-686 root=/dev/md0 md=0,/dev/hda1,/dev/hdb1 ro
initrd /boot/initrd.img-2.6.21-1-686 # Don't forget this one
boot

title Custom Kernel (RAID Recovery)
root (hd1,0)
kernel /boot/vmlinuz-2.6.21-1-686 root=/dev/md0 md=0,/dev/hdb1 ro
initrd /boot/initrd.img-2.6.21-1-686 # Don't forget this one
boot

title Debian GNU/Linux, kernel 2.6.21-1-686
root (hd0,0)
kernel /boot/vmlinuz-2.6.21-1-686 root=/dev/hda1 ro
initrd /boot/initrd.img-2.6.21-1-686
savedefault

[ Parent | Reply to this comment ]

Posted by Anonymous (76.190.xx.xx) on Tue 31 Jul 2007 at 22:20
My output doesn't seem to be the same, though it looks like everything worked. There is a status bar that I thought would show the percentage copied over from the existing partition to the newly --added partition, but it's just fluctuating between 1% and 20%. How do I know how far along the mirroring is?

$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 dm-4[2] dm-3[1]
      117218176 blocks [2/1] [_U]
      [==>..................]  recovery = 11.4% (13363968/117218176) finish=63.7min speed=27132K/sec
unused devices:

Also, I used drives /dev/hdb1 and /dev/hdc1 but it's showing them as dm-*. What does that mean? Thanks, Luke

[ Parent | Reply to this comment ]

Posted by philcore (70.106.xx.xx) on Wed 1 Aug 2007 at 21:08
[ View Weblogs ]
multipath. Is your box connecting to a fibre channel SAN?

phil

[ Parent | Reply to this comment ]

Posted by Anonymous (68.254.xx.xx) on Fri 3 Aug 2007 at 16:28
Some of the problems reported here may be caused by a typo in the example commands:
Now use mdadm to create the raid arrays. We mark the first drive (sda) as "missing" so it doesn't wipe out our existing data:
But the first command marks the second drive (sdb) as missing:
mdadm --create /dev/md0 --level 1 --raid-devices=2 missing /dev/sdb1
And then the error repeats itself again below:
mdadm --create /dev/md1 --level 1 --raid-devices=2 missing /dev/sdb2
I haven't tried this myself, but my guess is that this would completely destroy the data on the first drive, if the system will even let you do this on a mounted filesystem. After reading other guides, these commands should read as:
mdadm --create /dev/md0 --level 1 --raid-devices=2 missing /dev/sda1
mdadm --create /dev/md1 --level 1 --raid-devices=2 missing /dev/sda2
Anyone having issues should look at the commands they used. If you follow the directions as posted, you may have lost your data. I am going to try this in VMWare first before trying it on a live system.

[ Parent | Reply to this comment ]

Posted by philcore (70.106.xx.xx) on Fri 3 Aug 2007 at 16:39
[ View Weblogs ]
Nope.

The "missing" argument does not refer to /dev/sdb1 or sdb2. It refers to the fact that you are creating an array with 2 devices, the first is a placeholder - described by the word "missing", the second is /dev/sdb1.

If you try your way, you are definitely going to destroy your data.

[ Parent | Reply to this comment ]

Posted by Anonymous (68.254.xx.xx) on Fri 3 Aug 2007 at 17:11
Thanks for the clarification.

[ Parent | Reply to this comment ]

Posted by Anonymous (68.254.xx.xx) on Fri 3 Aug 2007 at 17:34
I have followed your example, to the letter, using vmware, with great success!

Thanks for the excellent walkthrough. Now that I have been successful in the lab, I'm confident I can do the same on my live webserver.

Cheers!

Jon

[ Parent | Reply to this comment ]

Posted by Anonymous (68.23.xx.xx) on Wed 8 Aug 2007 at 23:07
Check out this manual.
How to migrate to raid1 on debian.
http://lucasmanual.com/mywiki/DebianRAID

[ Parent | Reply to this comment ]

Posted by diegobelotti (87.25.xx.xx) on Fri 24 Aug 2007 at 08:19
Hi, I'm using raid on Etch. I've tested some procedure and configuration during the setup of the server.

OK, the server boots correctly after a failure of either of the two disks. But there is a GRUB setting that I don't understand and that doesn't seem to change the system behaviour:
kernel /boot/vmlinuz-2.6.11.7 root=/dev/md0 md=0,/dev/sda1,/dev/sdb1 ro
or
kernel /boot/vmlinuz-2.6.11.7 root=/dev/md0 md=0,/dev/sdb1 ro
I've tried it with and without the bold part of the line and have noticed no difference. By default the Etch installer doesn't put this part in menu.lst.

So the question is: what does this part of the line stand for? Should we use it? Is it a backwards-compatibility parameter that is no longer necessary in Etch?

Please let me know!

Thanks
Diego
http://www.diegobelotti.com

[ Parent | Reply to this comment ]

Posted by Anonymous (94.112.xx.xx) on Sat 13 Dec 2008 at 13:45
This line lets GRUB know about the raid array configuration, so if /dev/sda1 has failed, it will try to boot from the second mirror.

CORRECT SWAP IN RAID1

By the way, following this manual the swap wouldn't come back up,
so I've corrected this with:

a) remove all swap from /etc/fstab
b) reboot
c) recreate swap array:

mkswap /dev/md1
swapon /dev/md1
mdadm --add /dev/md1 /dev/hda5

d)Add raid configuration to mdadm config:

mdadm --examine --scan >> /etc/mdadm/mdadm.conf

e) Repair /etc/fstab from:

/dev/sda5 none swap sw,pri=1 0 0
/dev/sdc5 none swap sw,pri=1 0 0

to

/dev/md1 none swap sw,pri=1 0 0

f) reboot.

-------------------


[ Parent | Reply to this comment ]

Posted by dsevil (74.143.xx.xx) on Thu 12 Mar 2009 at 20:29

This process worked perfectly on a system running etch. Haven't tried on the current stable release.

One thing that wasn't mentioned is that the "# kopt=" line in /boot/grub/menu.lst must be updated in order to prevent update-grub from clobbering your changes.

Once your migration to RAID1 is complete, change the following line:

# kopt=root=/dev/sda1 ro
to:
# kopt=root=/dev/md0 md=0,/dev/sda1,/dev/sdb1 ro

[ Parent | Reply to this comment ]

Posted by Anonymous (67.188.xx.xx) on Wed 3 Feb 2010 at 22:12
Excellent article. Still works on Debian Lenny!

[ Parent | Reply to this comment ]

Posted by philcore (72.218.xx.xx) on Thu 4 Feb 2010 at 11:55
[ View Weblogs ]
Cool!

[ Parent | Reply to this comment ]
