Fedora Logical Volume Manager Benchmarks
Last month, when publishing Fedora 15 vs. Ubuntu 11.04 benchmarks, the Fedora Linux release was behind Ubuntu Natty Narwhal in some of the disk workloads. Some users speculated in our forums that SELinux was to blame, but later tests showed that SELinux does not cause a huge performance impact. With Security-Enhanced Linux ruled out, some wondered whether Fedora's default use of LVM, the Logical Volume Manager, was the cause.
What benefit does choosing LVM during a system installation bring me as a casual user? I read the benchmark, so I know there's speed, but you also mentioned there's a higher risk of losing data. Can you please tell me why?
I also read that LVM can dynamically resize a partition if the filesystem supports it. What are the risks of losing data when doing that, and how easy (user-friendly) and fast is it?
AFAIK nearly everything LVM does partition-wise with ext* requires no unmounting whatsoever, except shrinking a volume. The risk of losing data is pretty slim.
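For what it's worth, the online-grow / offline-shrink asymmetry described above looks roughly like this in practice. This is a sketch only: the volume group name `vg0`, logical volume `home`, and sizes are placeholders, and you should have backups before shrinking anything.

```shell
# Grow a mounted ext4 logical volume online (vg0/home are placeholder names):
lvextend --size +10G /dev/vg0/home   # grow the LV by 10 GiB
resize2fs /dev/vg0/home              # grow ext4 to fill it, while still mounted

# Shrinking is the exception: ext4 cannot be shrunk online, so the
# filesystem must be unmounted and checked first.
umount /home
e2fsck -f /dev/vg0/home
resize2fs /dev/vg0/home 20G          # shrink the filesystem to 20 GiB first
lvreduce --size 20G /dev/vg0/home    # then shrink the LV to match
```

The ordering matters in both directions: grow the LV before the filesystem, shrink the filesystem before the LV, or the filesystem ends up larger than the device underneath it.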
Originally Posted by SkyHiRider
Not using barriers = not caring about data = Fedora is unfit for any even slightly serious setup.
Good to know. I thought Fedora had barriers enabled.
Originally Posted by energyman
When the conclusion is "LVM is faster than ext4, probably because ext4 on LVM doesn't enable write barriers", why isn't a second test done with plain ext4 with write barriers disabled? That would give much more usable information. Right now we don't know exactly how much overhead LVM has, and we don't know how much performance ext4 gains when write barriers are disabled.
It seems that it does: http://docs.fedoraproject.org/en-US/...rieronoff.html.
Originally Posted by kraftman
Nobody ever chooses LVM for speed, and you probably shouldn't either. I didn't even know it was faster, and in spite of this benchmark it probably isn't really. The benchmark result is likely an artifact of some other difference, probably the write barrier setting as speculated (what else could it possibly be?). If you've got some mountpoint where you're willing to take some slightly increased risk in exchange for performance, you can turn off write barriers anyway, even without LVM.
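The per-mountpoint toggle described above is just a mount option. A sketch, with placeholder device and mountpoint names; note that the exact option spelling (`barrier=0` vs. `nobarrier`) varies by filesystem and kernel version:

```shell
# /etc/fstab: ext4 with write barriers disabled on one mountpoint only,
# trading crash safety for throughput (device/mountpoint are placeholders).
/dev/sdb1  /var/lib/scratch  ext4  defaults,barrier=0  0 2

# Or toggle it on a live system without rebooting:
mount -o remount,barrier=0 /var/lib/scratch
```

That way the risk is confined to one filesystem instead of being a distribution-wide default.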
Originally Posted by SkyHiRider
The biggest reason to use LVM is convenience and ease when moving stuff around or resizing things. No mucking around with repartitioning or having to reboot to re-read partition tables. Your system just magically stays running as though no one had ever pulled the rug out from beneath your filesystem. It's actually pretty damn cool. But it's only useful if you're anticipating changing things. If "casual user" means you're just divvying up one disk (or raided md) into a swap and root partition, then LVM might be overkill.
The disabled write barrier risks (or lack thereof) (or controversy about "lack thereof") (or controversy about that supposed controversy) are discussed here. IMHO if you're using a UPS you can blow it off and not worry about write barriers. You're going to lose your data to user errors or simply due to using less-than-decade-debugged filesystems like ext4 or btrfs, long before write barriers are a factor. Again, IMHO.
With 2.6.38, 2.6.39, 3.0.0-rc:
disconnect a USB device: boom, kernel panic
add a USB device: boom, kernel panic
some other reason: kernel panic
your PSU gets flaky
your mobo gets flaky
your fans clog up unnoticed
your RAM overheats
your graphics drivers lock up the system
some in-kernel driver locks up the system
There are MANY reasons for a hard reboot. Power fluctuations are such a rare occurrence (in civilized countries) that they do not matter compared to kernel bugs or hardware failures.
Or even the occasional 'oops, tripped over the cord'.
Barriers are a must. Disabling them is a typical ext3/Red Hat/Fedora move to blind the stupid. They want to look good in benchmarks made by people without a clue. Disabling barriers is like hitting the user in the face and telling him 'hey, I don't care if you lose all your files. I want to look good in stupid benchmarks'.
Thanks for the link, it helped me understand what barriers are. But since commits are usually written at the end, when can it happen that one block is not written yet a commit is made? Some kind of disk I/O error? Even if that happens, the filesystem should detect the anomaly pretty soon and fix it, so the risk window is just those few seconds when the filesystem is in an inconsistent state.
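The answer to "when can the commit land before the data" is usually not an I/O error but the drive's volatile write cache persisting queued blocks out of order; a barrier forces a flush so the data is durable before the commit record is issued. Here's a toy Python model of that (pure illustration, not real disk behavior; all names are made up):

```python
import random

def journal_write(barrier: bool, seed: int) -> bool:
    """Toy model of a journal commit hitting a drive with a volatile
    write cache. Returns True if the on-disk state is consistent after
    a simulated power loss. Illustration only, not real disk behavior."""
    rng = random.Random(seed)
    persisted = set()   # blocks that actually reached the platter
    cache = []          # blocks sitting in the drive's volatile cache

    cache.append("journal-data")        # the transaction's data blocks
    if barrier:
        # A write barrier flushes the cache, so the data is durable
        # before the commit record is even handed to the drive.
        persisted.update(cache)
        cache.clear()
    cache.append("commit-record")       # commit record, issued "last"

    # Power loss: the drive had persisted an arbitrary subset of its
    # cache, in no particular order -- this is where reordering bites.
    persisted.update(b for b in cache if rng.random() < 0.5)

    # Inconsistent iff the commit record survived but its data did not:
    # journal replay would then trust a transaction whose blocks are junk.
    return not ("commit-record" in persisted
                and "journal-data" not in persisted)
```

With the barrier the outcome is consistent for every seed; without it, some outcomes have a commit record on disk for data that never landed, which is exactly the case the journal's write-it-last convention was supposed to rule out.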
Originally Posted by Zapitron