A simple introduction to working with LVM

Posted by Steve on Wed 28 Jun 2006 at 21:22

The Logical Volume Manager (LVM) allows you to create and manage the storage of your servers in a very flexible manner: adding, removing, and resizing partitions on demand. Getting started with LVM can be a little confusing to newcomers, so this guide intends to show the basics in a simple manner.

There are several pieces of terminology that you'll need to understand to make the best use of LVM. The most important things you must know are:

  • physical volumes
    • These are your physical disks, or disk partitions, such as /dev/hda or /dev/hdb1 - the devices you're used to working with when mounting and unmounting filesystems. Using LVM we can combine multiple physical volumes into volume groups.
  • volume groups
    • A volume group is comprised of real physical volumes, and is the storage used to create logical volumes which you can create/resize/remove and use. You can consider a volume group as a "virtual partition" which is comprised of an arbitrary number of physical volumes.
  • logical volumes
    • These are the volumes that you'll ultimately end up mounting upon your system. They can be added, removed, and resized on the fly. Since they are allocated from the volume group they can be bigger than any single physical volume you might have. (e.g. four 5Gb drives can be combined into one 20Gb volume group, from which you could then create two 10Gb logical volumes.)

Logically these are stacked from top to bottom like this:

[Diagram: logical volumes allocated from a volume group, which is in turn built from one or more physical volumes.]
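
To see each layer at a glance once you have things set up, LVM2 ships the pvs, vgs, and lvs commands, which print a one-line summary per physical volume, volume group, and logical volume respectively:

root@lappy:~# pvs
root@lappy:~# vgs
root@lappy:~# lvs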

Creating A Volume Group

To use LVM you need to take at least one partition, initialise it for use with LVM, and then include it in a volume group. Why would you do this? Because it lets you create new partitions on the fly and make better use of your space.

In my case I have a laptop with the following setup:

    Name        Flags      Part Type  FS Type          [Label]        Size (MB)
 ------------------------------------------------------------------------------
    hda1        Boot        Primary   Linux ext3       [/]              8000.01 
    hda2                    Primary   Linux swap / Solaris              1000.20
    hda3                    Primary   Linux                            31007.57

Here I have a 7Gb root partition which contains my Debian GNU/Linux installation. I also have a 28Gb partition which will be used by LVM. I've chosen this setup so that I can create a dedicated /home partition using LVM - and if I need more space I can extend it.

In this example hda1, hda2, and hda3 are all candidates to become physical volumes. We'll initialise hda3 as a physical volume:

root@lappy:~# pvcreate /dev/hda3

If you wanted to combine several disks or partitions you could do the same for each of those:

root@lappy:~# pvcreate /dev/hdb
root@lappy:~# pvcreate /dev/hdc

Once we've initialised the partitions or drives, we can create a volume group built from them:

root@lappy:~# vgcreate skx-vol /dev/hda3

Here "skx-vol" is the name of the volume group. (If you wanted to create a single volume spanning two disks you'd run "vgcreate skx-vol /dev/hdb /dev/hdc".)

If you've done this correctly you'll be able to see it included in the output of vgscan:

root@lappy:~# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "skx-vol" using metadata type lvm2

Now that we have a volume group (called skx-vol) we can actually start using it.

Working With Logical Volumes

What we really want to do is create logical volumes which we can mount and actually use. In the future if we run out of space on this volume we can resize it to gain more storage. Depending on the filesystem you've chosen you can even do this on the fly!

For test purposes we'll create a small volume with the name 'test':

root@lappy:~# lvcreate -n test --size 1g skx-vol
Logical volume "test" created

This command creates a volume of size 1Gb with the name test hosted on the LVM volume group skx-vol.

The logical volume will now be accessible via /dev/skx-vol/test, and may be formatted and mounted just like any other partition:

root@lappy:~# mkfs.ext3 /dev/skx-vol/test
root@lappy:~# mkdir /home/test
root@lappy:~# mount /dev/skx-vol/test  /home/test

Cool, huh?

Now we get onto the fun stuff. Let us pretend that the test partition is full and we want to make it bigger. First of all we can look at how big it is at the moment with lvdisplay:

root@lappy:~# lvdisplay 
  --- Logical volume ---
  LV Name                /dev/skx-vol/test
  VG Name                skx-vol
  LV UUID                J5XlaT-e0Zj-4mHz-wtET-P6MQ-wsDV-Lk2o5A
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                1.00 GB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           254:0
   

We see it is 1Gb in size (no surprise really!). Before we go on to resize the volume, remember we should unmount it first:

root@lappy:~# umount  /home/test/
root@lappy:~# lvextend -L+1g /dev/skx-vol/test 
Extending logical volume test to 2.00 GB
Logical volume test successfully resized

(It is possible to resize ext3 filesystems whilst they're mounted, but I'd still suggest doing it offline as that is less scary.)
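
If you do want to resize online, sufficiently recent kernels and e2fsprogs can grow a mounted ext3 filesystem (older setups used a separate ext2online tool). A sketch, assuming your versions support it:

root@lappy:~# lvextend -L+1g /dev/skx-vol/test
root@lappy:~# resize2fs /dev/skx-vol/test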

Looking at lvdisplay again we can see the volume was resized:

root@lappy:~# lvdisplay 
  --- Logical volume ---
  LV Name                /dev/skx-vol/test
  VG Name                skx-vol
  LV UUID                uh7umg-7DqT-G2Ve-nNSX-03rs-KzFA-4fEwPX
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                2.00 GB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           254:0

The important thing to realise is that although the volume has been resized the ext3 filesystem on it has stayed unchanged. We need to resize the filesystem to actually fill the volume:

root@lappy:~# e2fsck -f /dev/skx-vol/test 
root@lappy:~# resize2fs /dev/skx-vol/test

Remount the logical volume and you'll discover it is now only half full instead of completely full!
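
For completeness, that is:

root@lappy:~# mount /dev/skx-vol/test  /home/test
root@lappy:~# df -h /home/test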

If you get bored of the volume and its contents you can remove it with the lvremove command:

root@lappy:~# lvremove /dev/skx-vol/test
Do you really want to remove active logical volume "test"? [y/n]: y
Logical volume "test" successfully removed

Other useful commands include lvrename to change the name of a volume, and lvreduce to reduce its size.
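
Shrinking is the one operation where order matters: reduce the filesystem first, then the volume, otherwise you will corrupt it. A sketch shrinking our (unmounted) test volume back to 1Gb, then renaming it - the name "scratch" is purely illustrative:

root@lappy:~# umount /home/test
root@lappy:~# e2fsck -f /dev/skx-vol/test
root@lappy:~# resize2fs /dev/skx-vol/test 1G
root@lappy:~# lvreduce -L1g /dev/skx-vol/test
root@lappy:~# lvrename skx-vol test scratch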

Mounting Logical Volumes

In the previous section we showed how you could mount a logical volume, with a command like this:

mount /dev/skx-vol/test  /home/test

If you want your partition to be mounted at boot-time you should update your /etc/fstab to contain an entry like this:

/dev/skx-vol/home    /home       ext3  noatime  0 2
/dev/skx-vol/backups /backups    ext3  noatime  0 2
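
The same volumes also appear under /dev/mapper, with any dashes in their names doubled, so this is an equivalent first entry - either form works:

/dev/mapper/skx--vol-home    /home       ext3  noatime  0 2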

Meta-Data

If you're worried about losing the details of your volumes in the event of problems, don't be: LVM keeps backups of the current state of your setup on disk, so it can be recovered after errors.

Running pvdisplay will show which physical volume(s) make up your volume group. In our example we only used /dev/hda3, but if you're using more volumes it is useful to see them all:

root@lappy:~# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/hda3
  VG Name               skx-vol
  PV Size               28.88 GB / not usable 0   
  Allocatable           yes 
  PE Size (KByte)       4096
  Total PE              7392
  Free PE               5280
  Allocated PE          2112
  PV UUID               WyXQtL-OdT6-GnGd-edKF-tjRU-hoLA-RJuQ6x

If we ever lost this information we could find it contained in the file /etc/lvm/backup/skx-vol.
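
Should the on-disk metadata ever become damaged, vgcfgrestore can write that backup back. A sketch - read its man page before trying this on a volume group you care about:

root@lappy:~# vgcfgrestore -f /etc/lvm/backup/skx-vol skx-vol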

Similarly, if we wanted to know which logical volumes we'd created, we could examine the directory /etc/lvm/archive. This contains numbered files holding backups of the operations we've conducted.

As an example we created the "test" volume, which we went on to resize. Here is the first section of /etc/lvm/archive/skx-vol_000009.vg:

# Generated by LVM2: Sat Jun 10 12:35:57 2006

contents = "Text Format Volume Group"
version = 1

description = "Created *before* executing 'lvcreate -n test --size 1g skx-vg'"

creation_host = "lappy" 
# Linux lappy 2.6.8-2-686 #1 Sat Jan 8 16:50:08 EST 2005 i686

Filesystems

When it comes to using LVM effectively it is worth considering the filesystem that you wish to use upon your logical volumes.

If you choose a filesystem which doesn't support resizing then increasing the size of your LVM volumes would be pointless. Here is a brief list of the resizable filesystems:

filesystem      increase while mounted  increase while unmounted  decrease (unmounted)

ext2fs          no                      yes                       yes
ext3fs          yes                     yes                       yes
ReiserFS        yes                     yes                       yes
JFS             yes                     no                        no
XFS             yes                     no                        no

Note that some filesystems can be increased in size, but cannot be reduced.
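
For the grow-only filesystems the tools differ from the ext3 example above; both of the following operate on a mounted filesystem (mount point from our example):

root@lappy:~# xfs_growfs /home/test                   (XFS)
root@lappy:~# mount -o remount,resize /home/test      (JFS)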

If I've missed one you're familiar with please do let me know.

Closing Comments

If you're ready to make the jump to LVM and don't have a lot of space handy for allocating to LVM then it might make sense to reinstall your system. The Debian installer has excellent support for creating LVM setups.

We've not really covered advanced usage in this introduction but there is a lot of readable and useful documentation available if you're prepared to search for it. The most obvious starting point is the LVM howto.

Posted by Anonymous (84.35.xx.xx) on Wed 28 Jun 2006 at 22:22
This is all really cool stuff but if I understand it correctly you lose all your data as soon as one physical disk in a group dies. Well, you might be able to salvage some data but effectively it's almost as bad as using RAID-0... LVM is very cool if you use it with RAID-1, RAID-5 or some other redundant RAID level.


Posted by Steve (62.30.xx.xx) on Wed 28 Jun 2006 at 22:29

Yes, this is true, and it's why this is a simple introduction rather than an in-depth treatment of LVM.

If you want real safety you should use LVM on top of RAID; that will give you the benefits of striping/mirroring/RAID whilst still allowing you to create and remove partitions on the fly.

This is very simple to achieve and mostly means that instead of running "pvcreate /dev/hda .." you'd use "pvcreate /dev/md0 .." instead.
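
As a sketch (device names hypothetical): first build the mirror, then initialise it for LVM:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda3 /dev/hdb3
pvcreate /dev/md0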

Steve


Posted by noahdain (24.184.xx.xx) on Thu 29 Jun 2006 at 03:47
raid isn't for protecting your data, either. it's for availability.

raid, volume management systems and backups all exist for different reasons and solve different problems.


Posted by Anonymous (70.253.xx.xx) on Thu 29 Jun 2006 at 05:19
RAID can be used for many reasons: increased availability, increased data protection, or even increased data throughput. Don't presume that what you need it for is what everyone needs it for.


Posted by noahdain (24.184.xx.xx) on Thu 29 Jun 2006 at 06:00
"RAID can used for many reasons"

true.


"... increased availability ..."

keep going.


"... increased data protection ..."

bzzt! sorry hans, wrong answer.


"... or even increased data throughput."

very good. 2 out of 3, not too bad.


"Don't presume that what you need it for is what everyone needs it for."

One can presume all they want. However, raid still isn't for data protection.


Posted by Anonymous (84.45.xx.xx) on Thu 29 Jun 2006 at 20:14
Having the data mirrored pretty obviously protects it: without it mirrored, a single hardware failure can nuke it. With it mirrored, you need two failures to nuke it. With P(f) being the probability of a disk dying, in practice P(f) is so small that the probability of two independent failures (P(f) * P(f)) is orders of magnitude less than P(f).

If your point is that RAID is not a useful single backup strategy, fine. But to say it "isn't for data protection" is pretty clear nonsense. Availability and data integrity are actually quite closely related.


Posted by noahdain (24.45.xx.xx) on Thu 29 Jun 2006 at 20:57
"Having the data mirrored pretty obviously protects it ..."
and I would say, 'obviously, it does not, and has very little or nothing to do with the problem of data protection.'

"With it mirrored, you need two failures to nuke it."
false. that's quite a large assumption you make there.

"If your point is that RAID is not a useful single backup strategy, fine."
partially.

"But to say it "isn't for data protection" is pretty clear nonsense."
raid does not protect your data. maybe someday you'll learn this. better still, someone reading this will understand as well.

data can become lost and/or corrupt for many, many reasons like ...

* accidental user deletion
* intentional malicious user deletion
* memory corruption
* filesystem corruption
* cosmic radiation

Accidental user deletion of information is, of course, the number one threat to data protection.

raid just helps to keep your system running in the event that a hard drive fails in a somewhat "polite" manner. It is still possible for a failing hard drive (and most any other piece of hardware) to cause errors to propagate back up into system memory and cause corruption. Even with duplexing this can happen.

but, do feel free to keep believing as you see fit.


Posted by Anonymous (192.168.xx.xx) on Wed 17 Jul 2013 at 21:01
I think you are confusing complete data protection with some data protection.


Posted by Anonymous (212.84.xx.xx) on Sun 2 Jul 2006 at 17:15
"Having the data mirrored pretty obviously protects it"

Maybe, but I wouldn't rely on it.

"With it mirrored, you need two failures to nuke it. With P(f) being the probability of an disk dying, in practice P(f) is so small that the probability of two independent failures (P(f) * P(f)) is orders of magnitude less than P(f)."

Correct, assuming that the failures are independent. Unfortunately, in a PC this assumption is flawed. Although probably true for the disk failure modes, it's not true for the system. Failures of the PC power supply are the most likely to occur:
- overvoltage failures may very well cause common mode failures of multiple disks
- undervoltage failures may cause memory failures which can cause corrupt data to be written to multiple disks.

Then add in external failure modes like fire and theft and it becomes blatantly obvious that RAID isn't for any "meaningful" data protection.


Posted by Anonymous (80.35.xx.xx) on Sun 2 Jul 2006 at 19:30
Once upon a time... there was a customer who relied on his RAID to keep his data safe. "I've got a RAID, dude, I don't need any backup". One day, Windows started doing funny things, so he called a clumsy technician (ok, I admit it, it was me) who ended up formatting the entire Windows partition. There was another partition for the data, but they kept 90% of the data on the Windows partition. The technician thought the data was safe on the other partition, or maybe they had a backup. They didn't, and more than 3 years of work were lost. "But it's a RAID! There's still another disk!". Yeah, but when you format a RAID, you format both disks, man. Sorry.

So you'd better not trust RAID for backup purposes, or the wolf will eat you sooner or later.


Posted by Anonymous (84.133.xx.xx) on Sat 8 Jul 2006 at 22:39
I had a case of a filesystem on an external SCSI-RAID being totally trashed because of some filesystem driver error in an early NT4 version...

The trashed FS was kept perfectly safe on that RAID5 plus one hotspare drive...


Posted by Anonymous (66.190.xx.xx) on Wed 7 Feb 2007 at 00:52
Another way RAID can bite you: If you are using a hardware RAID, and the controller dies, it can take one or more of the drives with it before it quits twitching. Your data is *DEAD* without an external backup. Had this happen to us twice. Second time was an Apple XServe RAID (which as Apple kindly pointed out is NOT a high availability system, and is instead intended for large short-term workspace projects)

Or, if you use Linux software raid, and your motherboard slowly develops an intermittent IDE problem, it can garble data on one or more drives, resulting in a case where you do not have enough redundancy syndrome information to recover your array. Happened to my wife...


Posted by Anonymous (189.160.xx.xx) on Tue 17 Jul 2007 at 19:27
How can I create a hardware RAID on an IntelliStation 9111-285?


Posted by DeadPing (124.148.xx.xx) on Sat 16 Mar 2013 at 16:43
Once upon a time a client asked for help to restore their tape backups. Unfortunately there were no tape cartridges in the drive and they had been backing up to an empty drive for over a decade. They remarked that the backup procedure did seem really fast at copying all their customer records. Fast alright, it took 0 seconds to copy nothing to an empty drive.


Posted by Anonymous (217.64.xx.xx) on Sun 30 Jul 2006 at 10:42
RAID mirror (call it "instant backup") and periodic backups are for different purposes: if a user accidentally deletes a vital file, RAID will not be of much help. It's best to follow both strategies. And/or even use the snapshot feature of said LVM2 on top of a RAID mirror to create redundant frozen snapshots of valuable data at particular times (say, at 8a.m., 11a.m., 3p.m., two days ago, a week ago, etc.).
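
For reference, a sketch of creating such a snapshot with LVM2 - volume names illustrative, and the snapshot only needs enough space to hold changes made after it was taken:

lvcreate -s -n home-0800 -L 1g /dev/skx-vol/home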


Posted by DeadPing (124.148.xx.xx) on Sat 16 Mar 2013 at 16:53
It is also much smarter to copy and paste rather than cut and paste. You can always delete extra files once they have been moved.

Nothing beats a proper regular backup regime with multiple storage media. Important files should also be stored on something like DVD-R or BD-R and replaced every 12 months or so, in case one of your backup drives suddenly decides to die on you.


Posted by Anonymous (213.112.xx.xx) on Sat 10 Jan 2009 at 04:05
*sigh*

Ever heard of the expression "Belt and Suspenders"?

No, you can NEVER have data protection without backup... but come on people don't make yourself look stupid. Raid 1 (or 5 or 10 or 01) OBVIOUSLY protects against some cases of failing hardware. Why else would they have mirroring, parity bits, and the rest? If we're only looking for speed and no data security we'd go with raid 0 and we wouldn't even need any other raid type.

Or are you arguing that all raid types but raid 0 are bogus?

Sure, for a complete newbie even the mention of Raid and "secure data" in the same sentence is probably guaranteed to cause grief and problems.

In my case raid is a comfort. I want it to protect against hardware failures and to limit my backups to once a day (with a certain degree of data security).

However, my advice on data security is to start with a backup solution and then look into raid etc. If you're doing something worth saving on the computer you can use the following rule of thumb:

Decide how much work you're willing to lose (1 day's worth, 1 week's worth, etc.), then make backups that often. If you make no backups at all: watch TV instead, because one day it'll all be gone.


Posted by DeadPing (124.148.xx.xx) on Sat 16 Mar 2013 at 16:37
In an enterprise environment, running mission-critical systems that can afford no downtime, RAID will help increase data protection, as a failed drive can be removed on the fly without shutting down the entire system. RAID 0 won't give you better data protection, but more professional arrays utilising RAID 10 or other variations of RAID 1+0 allow two or more drives in an array to fail while the system keeps running, and the data is restored to the replacement drive in a minimum of time without losing much performance.

A RAID array is not a replacement for a regular backup regime, but all media are subject to occasional failure, and enterprise RAID systems aim to provide redundancy against storage failures that might otherwise lose important data during storage and retrieval operations, thereby increasing data protection.


Posted by chris (217.8.xx.xx) on Thu 29 Jun 2006 at 08:29

I got badly bitten by this. Had a disk die on me. Very luckily only one logical volume was using the space on that physical volume.

Not that this is an in-depth LVM review - but just to make it more visible (I spent a _long_ time googling): if you get a dead PV in your volume group, you can run

vgreduce --removemissing <volgrpname>

to get rid of it. Right enough, you will lose any partitions wholly or partly on that PV, but you should be able to rescue the rest :)


Posted by ajt (204.193.xx.xx) on Thu 29 Jun 2006 at 12:42
LVM on AIX does RAID (mirroring) as well as volume management; AIX doesn't do software RAID on its own, it's only part of the LVM suite.

I believe that Linux LVM2 can also do simple mirroring now, but I'm not sure if it's stable yet. It's been available since LVM 2.01.12, i.e. in Etch but not Sarge.

--
"It's Not Magic, It's Work"
Adam


Posted by marki (89.173.xx.xx) on Thu 29 Jun 2006 at 23:19
I had this problem - one of the disks in LVM died (I wasn't using RAID on this server - but now I do :)
The failed disk contained part of /home. I had a backup, but it was a few days old, so I wanted to try to read the newer files from /home.
I put all the good disks into another machine and booted from a Live CD (INSERT or RiPLINUX, I don't remember which one worked). The problem was the VG refused to activate itself because of the missing PV. I found that the "-P" switch to vgchange allows it to activate in partial mode. That was OK, but it activates only in read-only mode. The problem was the ext3 filesystem on /home, which hadn't been unmounted cleanly and required recovery - which is not possible on a read-only "disk" :(
I had to use mdadm to create a bogus PV (which returns all nulls on read) in place of the missing one (it's described in man vgchange). But I had to google for how to create it.

Finally I created a "replacement" PV in RAM. I just created a big enough file on a ramdisk, used losetup to make a loopback device of it, then used pvcreate --uuid with the uuid of the missing PV. pvscan recognised it, but didn't show that it was part of the VG; running vgcfgrestore solved that too. This allowed vgchange to activate the VG in read-write mode, and I could mount the ext3 fs and read all the data on the good disk.

So using LVM does not make your data unavailable when one of the disks dies (I mean it is possible to get data out of the good ones).
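
Pieced together, this "replacement PV" trick looks roughly like the following - sizes and device names are hypothetical, and the UUID must be that of the missing PV as recorded in /etc/lvm/backup:

dd if=/dev/zero of=/tmp/fake-pv bs=1M count=1 seek=30719
losetup /dev/loop0 /tmp/fake-pv
pvcreate --uuid <missing-pv-uuid> --restorefile /etc/lvm/backup/<vgname> /dev/loop0
vgcfgrestore <vgname>
vgchange -a y <vgname>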


Posted by Anonymous (82.82.xx.xx) on Tue 15 Aug 2006 at 12:09
Thanks big time - after comments like "lvm is as bad as RAID0 for recovery" your words are most welcome. Right now I've lost a 30G HDD out of a 150G LVM and I was really afraid I'd lost the whole thing. I'm still too frightened to touch it yet, as I first want to try to rescue the HDD itself (freezing, dd_rescue, whatever comes to mind), but as soon as I let it go for good, I know that there is a way to get the rest back. You really, really brightened my day. ;)

-Wiebel


Posted by chris (217.8.xx.xx) on Thu 29 Jun 2006 at 08:23
Not sure I understand your filesystem table.

With ext2/ext3 (as you stated in the example) - we need to unmount it first. It's one of the reasons I use XFS. But in the table all filesystems got a Yes for "increase while mounted" - so I'm a little unclear what this column is telling us.


Posted by Steve (62.30.xx.xx) on Thu 29 Jun 2006 at 09:06

Well spotted, that column is a bit bogus. I'll update it now.

Steve


Posted by mcphail (62.6.xx.xx) on Thu 29 Jun 2006 at 12:37
I usually use ext3 as my filesystem, and understand why the filesystem would need to be resized after increasing the size of the volume. However, would the same apply for XFS which (I think) allocates inodes dynamically? What would be the procedure after running lvextend? TIA.

Neil McPhail


Posted by Anonymous (128.173.xx.xx) on Thu 29 Jun 2006 at 12:56
xfs_growfs


Posted by mcphail (62.6.xx.xx) on Thu 29 Jun 2006 at 13:06
Many thanks.


Posted by joshuah (84.238.xx.xx) on Sat 1 Jul 2006 at 16:25
HP-UX also supports LVM mirroring, which is a pretty nice feature. Currently my home Debian box uses LVM + LoopAES + XFS and I must say it's pretty nice. The only problem with this scenario is that you somehow lose the option to grow the XFS online. Actually you don't lose it, you just have to remount the volume so you can see the new size of the partition: you do lvextend, xfs_growfs, umount /volume, mount /volume, and you will see the new size. Otherwise you will grow the fs but you will not see its new size; this is because of the use of LoopAES.


Posted by Anonymous (83.137.xx.xx) on Tue 4 Jul 2006 at 09:35
For JFS:
# mount -o remount,resize /mount/point

In other words you can increase the volume, remount it, and you're done.

It's only possible to grow, not to shrink.

more info: http://www.newsforge.com/article.pl?sid=03/10/07/2028234


Posted by etptupaf (158.227.xx.xx) on Wed 5 Jul 2006 at 08:46
Is there any cost to using LVM in terms of performance?


Posted by Steve (62.30.xx.xx) on Wed 5 Jul 2006 at 09:37

Not that I've ever noticed..

Steve


Posted by Anonymous (84.108.xx.xx) on Tue 8 Aug 2006 at 19:58
I used this guide to create LVM volumes on one Debian box and all is well...

On the 2nd box, I created an LVM volume, and now it goes to an 'inactive' state after each reboot or shutdown, and I have to manually run 'vgchange -a y' before it can be mounted again... any idea how to solve this problem?

thx !


Posted by Steve (62.30.xx.xx) on Tue 8 Aug 2006 at 20:01

On my system this is called automatically via /etc/init.d/lvm - so I'd guess that something is failing.

Take a look at that script and see if you can debug it?

It looks like the file /etc/default/lvm-common is consulted and some other things happen conditionally, so it might be worth going over it and seeing if you can determine where it fails.

(That init script is contained in the lvm-common package which I'm assuming you have installed!)

Steve


Posted by itsec (85.177.xx.xx) on Thu 4 Jan 2007 at 22:53
Sometimes things are documented so thoroughly (lots of reading and abstraction) that a small explanation like this article helps much more.

Your article shows that LVM can be easy to use. Diving into the LVM documentation shows that it is much more flexible than anything else. Now that I've seen how easy LVM can be, I would say it is a must on many/most production servers.

Thanks Steve.

P.S: Already switched 2 servers to LVM thx to your article


Posted by Anonymous (65.191.xx.xx) on Fri 29 Jun 2007 at 01:51
I just set up LVM, and for those of us who are on the slow side (like me): the packages you need to install are lvm2 and lvm-common. I was scratching my head for a sec when it said that I didn't have pvcreate...


Posted by Anonymous (82.228.xx.xx) on Mon 16 Jul 2007 at 14:18
Warning: on Etch, lilo is buggy with /boot on LVM2. Sometimes it fails to find the root filesystem because the udev block device numbers can change at reboot.


Posted by ricoshay (213.113.xx.xx) on Tue 24 Mar 2009 at 14:14
mucho thanx!


Posted by Anonymous (132.252.xx.xx) on Wed 9 Dec 2009 at 03:58
Thank you, excellent introduction for a Linux noob like me.


Posted by Anonymous (20.133.xx.xx) on Wed 27 Jan 2010 at 12:21
Thank you for a good guide for a noob.

I wonder if someone would have advice for me, as I managed to do a stupid thing. I had SMART read errors on one disk, so I bought a new disk to replace it. I decided to reduce the size of the VG, remove the PV, and then reboot. I did not resize the filesystem.

I now get a "superblock is bigger than physical volume size" error; do you know how I might resolve this so I can read the LV?


Posted by ardit121 (79.106.xx.xx) on Wed 10 Feb 2010 at 21:18
This is a great guide for a beginner, I'm not a pro with LVM myself but I knew some of the stuff you mentioned.


Posted by Anonymous (77.211.xx.xx) on Sat 8 May 2010 at 15:42
Hello,

Could you please improve your guide by adding UUID management in /etc/fstab for LVM devices? I am searching for how to do this setup, thanks.

Cheers


Posted by Anonymous (98.216.xx.xx) on Wed 26 May 2010 at 06:17
I read that LVM2 volumes should not be referenced by UUID in /etc/fstab, because the snapshot feature can create multiple devices with the same UUID. Stick with normal name identifiers in fstab for now.


Posted by Anonymous (15.219.xx.xx) on Thu 14 Apr 2011 at 17:43
Hey, thanks for this document! For those that are interested or are not familiar at all with LVM+RAID in Debian: I followed this guide and it worked perfectly for me! jerryweb.org/settings/raid/

