Weblog entry #205 for ajt

Filesystems and layout
Posted by ajt on Wed 3 Apr 2013 at 16:00
Tags: none.

Sometime this spring (when it arrives) I will buy a new desktop system. It will probably have two block devices: a traditional SATA large capacity hard drive and a much smaller and faster flash drive.

In theory, even cheap flash drives are much faster than mechanical spinning disks and will probably outlive them. Flash cells do slowly wear out, so the drive's controller uses wear-levelling to maximise its life. The other problem with flash drives is that they are relatively small, so a larger drive, either in the box or on the network, is required given how much space life takes up...

I've no plan to join the two drives together with LVM; it seems pointless here. Instead they will be kept separate, with one mounted onto the other. At the moment most of my systems use ext3, except one box which uses both ext3 and XFS.

If I install a new box from the Wheezy ISO I'm guessing I'll get ext4 by default. I gather it's the logical upgrade from ext3, a solid incremental step rather than an all-singing, all-dancing next-generation filesystem, to tide us over until something fancier is really ready. Does anyone know how it compares with XFS on large disks or on flash?

I'll probably use ext4 on the flash disk (root and boot filesystems) and XFS on the spinning disk (/srv), since that's where I'll dump my media files. They aren't small, and XFS is supposed to be good at that sort of thing, unless it's not worth the effort.
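A rough sketch of what that layout might look like. The device names (/dev/sda for the SSD, /dev/sdb for the SATA disk) are assumptions and will vary per machine; check with care before running anything, as mkfs destroys existing data:

```shell
# Hypothetical device names: /dev/sda = flash drive, /dev/sdb = spinning disk.
mkfs.ext4 /dev/sda1          # root (and /boot) on the flash drive
mkfs.xfs  /dev/sdb1          # bulk media on the spinning disk

# /etc/fstab — mount the XFS volume onto /srv under the ext4 root:
# <device>    <mountpoint>  <type>  <options>          <dump>  <pass>
# /dev/sda1   /             ext4    errors=remount-ro  0       1
# /dev/sdb1   /srv          xfs     defaults           0       2
```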


Comments on this Entry

Posted by Anonymous (199.15.xx.xx) on Wed 3 Apr 2013 at 16:47
I'm with you on the "not much sense in RAIDing or LVMing" notion.

I recently lost a whole filesystem (fortunately full of relatively replaceable videos) when one drive in an 'array' failed. Everything I've seen about adding redundancy via RAID improves *some* aspects of redundancy whilst worsening others; actually eliminating single points of failure is *very* costly, and in the laptop scenario pretty much impossible.

The direction I'm thinking in is to see whether Joey Hess's "git-annex" could become a component for managing 'reliability atop unreliable components.' It can guarantee that multiple copies of a file exist, which is a different guarantee from the one RAID provides, and it doesn't require low-level inside-the-filesystem-layer bits that are vulnerable to a larger set of problems.

Rather, if a drive fails, that means a git-annex repo fails, which ought to straightforwardly allow running an "oops, make things redundant enough again" step to get things better mirrored again.
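A minimal sketch of that workflow, assuming a second repository on another drive (the remote name "backup" and the paths here are made up, and the exact way to set the copy count varies between git-annex releases):

```shell
# Create a git-annex repository for the media files:
git init ~/media && cd ~/media
git annex init "desktop"

# Require that every annexed file exists in at least 2 repositories
# (in releases of this era this is configured via .gitattributes):
echo '* annex.numcopies=2' >> .gitattributes

git annex add videos/

# Point at a second git-annex repo on another drive and push copies to it:
git remote add backup /mnt/usbdisk/media
git annex copy --to=backup videos/

# After a drive failure, check which files fall below numcopies,
# then re-copy to a fresh repo to restore redundancy:
git annex fsck
```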

-- cbbrowne@acm.org


Posted by mcortese (193.78.xx.xx) on Thu 9 May 2013 at 11:56
Don't forget to tune ext4 for SSD — see options stride (at mkfs time) and discard (at mount time).
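Roughly, those two options look like this. The numbers are illustrative only; the right stride depends on the particular SSD's erase-block size, and /dev/sda1 is a placeholder:

```shell
# At mkfs time: align ext4 allocation to the SSD's erase blocks
# (stride is given in filesystem blocks — check your drive first):
mkfs.ext4 -E stride=128,stripe-width=128 /dev/sda1

# At mount time: enable online TRIM via the discard option in /etc/fstab:
# /dev/sda1  /  ext4  discard,noatime,errors=remount-ro  0  1
```

An alternative to mounting with discard is running fstrim periodically from cron, which avoids TRIM overhead on every delete.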


Posted by ajt (89.240.xx.xx) on Sun 12 May 2013 at 21:38

Yes, I need to investigate ext4 a bit more before I build the system. Most of my boxes are running ext3 at the moment; I didn't see the point in upgrading any of them to ext4 when it first came out.

--
"It's Not Magic, It's Work"
Adam
