
Thread: Fedora Logical Volume Manager Benchmarks

  1. #1
    Join Date
    Jan 2007
    Posts
    14,369

    Default Fedora Logical Volume Manager Benchmarks

    Phoronix: Fedora Logical Volume Manager Benchmarks

    Last month, when publishing Fedora 15 vs. Ubuntu 11.04 benchmarks, the Fedora release trailed Ubuntu Natty Narwhal in some of the disk workloads. Some users speculated in our forums that SELinux was to blame, but later tests showed that SELinux does not cause a huge performance impact. With Security-Enhanced Linux ruled out, some wondered whether Fedora's default use of LVM, the Logical Volume Manager, was the cause.

    http://www.phoronix.com/vr.php?view=16190

  2. #2
    Join Date
    Jan 2010
    Posts
    52

    Default

    What benefit does choosing LVM during a system installation bring me as a casual user? I read the benchmark, so I know there's speed, but you also mentioned there's a higher risk of losing data. Can you please tell me why?

    I also read that LVM can dynamically resize a partition if the filesystem supports it. What are the risks of losing data when doing that, and how easy (user-friendly) and fast is it?

  3. #3
    Join Date
    Dec 2009
    Posts
    110

    Default

    Quote Originally Posted by SkyHiRider View Post
    I also read that LVM can dynamically resize a partition if the filesystem supports it. What are the risks of losing data when doing that, and how easy (user-friendly) and fast is it?
    AFAIK, nearly everything you can do with LVM partition-wise using ext* requires no unmounting whatsoever, except reducing the size of a partition. The risk of losing data is pretty slim.
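    For example, growing a logical volume and the ext4 filesystem on it can be done while it stays mounted; only shrinking needs the filesystem offline. A rough sketch of both cases, assuming a volume group named vg0 and a logical volume named home (the names are made up, adjust to your setup):

        # grow the LV by 10 GiB, then grow ext4 online (no unmount needed)
        lvextend -L +10G /dev/vg0/home
        resize2fs /dev/vg0/home

        # shrinking is the exception: the filesystem must be offline first
        umount /home
        e2fsck -f /dev/vg0/home
        resize2fs /dev/vg0/home 20G
        lvreduce -L 20G /dev/vg0/home
        mount /home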

  4. #4
    Join Date
    Jul 2008
    Posts
    1,718

    Default

    Not using barriers = I don't care about data = Fedora is unfit for any even slightly serious setup.

  5. #5

    Default

    Quote Originally Posted by energyman View Post
    Not using barriers = I don't care about data = Fedora is unfit for any even slightly serious setup.
    Good to know. I thought Fedora had barriers enabled.

  6. #6
    Join Date
    Mar 2009
    Posts
    5

    Default

    When the conclusion is "ext4 on LVM is faster than plain ext4, probably because ext4 on LVM doesn't enable write barriers", why isn't a second test done with plain ext4 and write barriers disabled? That would give much more usable information. Right now we don't know exactly how much overhead LVM has, and we don't know how much performance ext4 gains when write barriers are disabled.
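    For anyone who wants to run that comparison themselves, disabling barriers on plain ext4 is just a mount option; a rough sketch, assuming the filesystem lives on /dev/sda2 and is mounted at /mnt/test (made-up names):

        # mount plain ext4 with write barriers disabled for a benchmark run
        mount -o barrier=0 /dev/sda2 /mnt/test

        # or flip the option on an already-mounted filesystem
        mount -o remount,barrier=0 /mnt/test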

  7. #7
    Join Date
    Jan 2009
    Posts
    1,307

    Default

    Quote Originally Posted by kraftman View Post
    Good to know. I thought Fedora had barriers enabled.
    It seems that it does: http://docs.fedoraproject.org/en-US/...rieronoff.html.
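    You can also check what a given install is actually doing by looking at the active mount options; a quick sketch (the filesystem type is assumed to be ext4):

        # 'nobarrier' or 'barrier=0' in the options would mean barriers
        # were turned off for that filesystem; the default (barriers on)
        # may not be listed explicitly, depending on the kernel version
        grep ext4 /proc/mounts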

  8. #8
    Join Date
    Aug 2009
    Location
    Albuquerque NM USA
    Posts
    42

    Default

    Quote Originally Posted by SkyHiRider View Post
    What benefit does choosing LVM during a system installation bring me as a casual user? I read the benchmark, so I know there's speed, but you also mentioned there's a higher risk of losing data. Can you please tell me why?

    I also read that LVM can dynamically resize a partition if the filesystem supports it. What are the risks of losing data when doing that, and how easy (user-friendly) and fast is it?
    Nobody ever chooses LVM for speed, and you probably shouldn't either. I didn't even know it was faster, and in spite of this benchmark it probably isn't really. The benchmark result is likely an artifact of some other difference, probably the write barrier setting as speculated (what else could it possibly be?). If you've got some mount point where you're willing to take a slightly increased risk in exchange for performance, you can turn off write barriers anyway, even without LVM.

    The biggest reason to use LVM is convenience and ease of use when moving stuff around or resizing things. There's no mucking around with repartitioning or having to reboot to re-read partition tables; your system just magically stays running as though no one had ever pulled the rug out from beneath your filesystem. It's actually pretty damn cool. But it's only useful if you're anticipating changing things. If "casual user" means you're just divvying up one disk (or an md RAID) into a swap and root partition, then LVM might be overkill.

    The disabled write barrier risks (or lack thereof) (or controversy about "lack thereof") (or controversy about that supposed controversy) are discussed here. IMHO if you're using a UPS you can blow it off and not worry about write barriers. You're going to lose your data to user errors or simply due to using less-than-decade-debugged filesystems like ext4 or btrfs, long before write barriers are a factor. Again, IMHO.
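    As a concrete example of the "rug never pulled out" part: you can migrate data off a disk while everything stays mounted. A sketch, assuming a volume group vg0 with an old disk /dev/sdb1 and a newly added /dev/sdc1 (made-up devices):

        # add the new disk to the volume group
        pvcreate /dev/sdc1
        vgextend vg0 /dev/sdc1

        # move all extents off the old disk while filesystems stay mounted
        pvmove /dev/sdb1

        # then retire the old disk from the volume group
        vgreduce vg0 /dev/sdb1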

  9. #9
    Join Date
    Jul 2008
    Posts
    1,718

    Default

    bullshit.
    With 2.6.38, 2.6.39, 3.0.0-rc:

    Disconnect a USB device: boom, kernel panic.
    Add a USB device: boom, kernel panic.

    Some other reason: kernel panic.

    Your PSU gets flaky.
    Your mobo gets flaky.
    Your fans clog up unnoticed.
    Your RAM overheats.
    Your graphics drivers lock up the system.
    Some in-kernel driver locks up the system.

    There are MANY reasons for a hard reboot. Power fluctuations are such a rare occurrence (in civilized countries) that they do not matter compared to kernel bugs or hardware failures.

    Or even the occasional 'oops, tripped over the cord'.

    Barriers are a must. Disabling them is a typical ext3/Red Hat/Fedora move to blind the stupid. They want to look good in benchmarks made by people without a clue. Disabling barriers is like hitting the user in the face and telling him, 'Hey, I don't care if you lose all your files. I want to look good in stupid benchmarks.'

  10. #10
    Join Date
    Jan 2010
    Posts
    52

    Default

    Quote Originally Posted by Zapitron View Post
    The disabled write barrier risks (or lack thereof) (or controversy about "lack thereof") (or controversy about that supposed controversy) are discussed here. IMHO if you're using a UPS you can blow it off and not worry about write barriers. You're going to lose your data to user errors or simply due to using less-than-decade-debugged filesystems like ext4 or btrfs, long before write barriers are a factor. Again, IMHO.
    Thanks for the link, it helped me understand what barriers are. But since the commit record is usually written last, when can it happen that a block is not written yet the commit is made? Some kind of disk I/O error? Even if that happens, the filesystem has to detect the anomaly pretty soon and fix it, so the risk is just those few seconds during which the filesystem is in an inconsistent state.
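    If I understood the link correctly, the issue is that the drive's volatile write cache can reorder writes, so the commit record can reach the platter before the blocks it describes; barriers (cache flushes) are what enforce the ordering. The drive cache itself can be inspected like this (a sketch, assuming the disk is /dev/sda):

        # show whether the drive's volatile write cache is enabled
        hdparm -W /dev/sda

        # disabling it is the heavy-handed alternative to barriers
        # (slower, but completed writes no longer sit in volatile cache)
        hdparm -W0 /dev/sda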
