
Thread: Testing Out Btrfs In Ubuntu 10.10

  1. #21


    Quote Originally Posted by cl333r
    I remember Btrfs works (abnormally) slowly with databases, and such tests are not present in this benchmark.
    To understand what I'm talking about, see the previous Btrfs benchmarks.
    If the other tested file systems didn't flush data to disk while Btrfs did, we don't know how it really performs. Look at the ZFS tests on Phoronix; it was very slow there too.

  2. #22
    Join Date: Oct 2007
    Posts: 22


    OK Phoronix, did you manage to get Ubuntu installed onto a Btrfs partition that was mounted with compress, or did you just run the compress tests afterwards? The Xubuntu daily build has Btrfs, but I didn't see compress in the mount options.

  3. #23
    Join Date: Apr 2008
    Location: Saskatchewan, Canada
    Posts: 444


    Quote Originally Posted by cl333r
    I remember Btrfs works (abnormally) slowly with databases, and such tests are not present in this benchmark.
    That's probably a consequence of copy-on-write combined with aggressive syncing: together they pretty much guarantee that your database file ends up spread around the disk in small chunks, with massive fragmentation.
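    The mechanism can be illustrated with a toy simulation (this is a sketch of the idea, not btrfs internals): a database rewrites random pages in place, but under copy-on-write each rewritten block is relocated to a fresh physical location, so the file's logical blocks end up scattered across many extents.

```python
import random

def cow_extents(blocks: int, rewrites: int, seed: int = 0) -> int:
    """Toy model: count the extents of a file after random page rewrites
    under copy-on-write, where each rewritten block is relocated to the
    next free physical block instead of being updated in place."""
    rng = random.Random(seed)
    location = list(range(blocks))   # file starts as one contiguous extent
    next_free = blocks
    for _ in range(rewrites):
        i = rng.randrange(blocks)    # a database rewrites random pages
        location[i] = next_free      # CoW: the new copy lands elsewhere
        next_free += 1
    # Each pair of logically adjacent blocks that is no longer physically
    # adjacent starts a new extent.
    extents = 1
    for a, b in zip(location, location[1:]):
        if b != a + 1:
            extents += 1
    return extents

# In-place updates leave the file in 1 extent no matter what;
# under CoW the same workload shatters it into many extents.
print("extents with in-place updates:", cow_extents(1024, 0))
print("extents with CoW updates:     ", cow_extents(1024, 4096))
```

    In the model, 4096 random rewrites of a 1024-block file leave almost every block relocated, so the extent count approaches the block count, which is exactly the fragmentation pattern described above.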

  4. #24


    Quote Originally Posted by jetpeach
    Hi, I'm curious about the transparent compression - I searched Google but didn't find much useful information on it (just that it is zlib compression). Does it actually make the files stored on the hard drive smaller? And if so, wouldn't its performance be highly dependent on the type of file and how compressible it is? (Like media files performing badly while text files do well?)

    And as others have asked, I would guess CPU usage is very important with compression enabled. Along with CPU usage, would it support parallel processing? I rarely use both cores on my computer since most tasks can't, and if compression could harness the core that would otherwise sit idle during a single-threaded task, that would be awesome as well.

    Excited for a new FS!
    jetpeach

    It would not be compression if the files were not actually smaller. I believe compression is only applied to small files, as larger ones are usually precompressed.
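    To make the file-type dependence concrete, here is a small sketch using Python's zlib (the same compressor behind btrfs's compress option), comparing repetitive text against random bytes standing in for already-compressed media like JPEG or MP3:

```python
import os
import zlib

def compressed_fraction(data: bytes) -> float:
    """Return the zlib-compressed size as a fraction of the original size."""
    return len(zlib.compress(data, 6)) / len(data)

text = b"the quick brown fox jumps over the lazy dog\n" * 25000  # ~1 MB of text
media_like = os.urandom(len(text))  # high-entropy, like precompressed media

print(f"text:       {compressed_fraction(text):.1%} of original size")
print(f"media-like: {compressed_fraction(media_like):.1%} of original size")
```

    The text shrinks to a few percent of its original size, while the random data actually grows slightly from the compression headers, so whether the CPU cost pays off depends entirely on how much of your data looks like the first case.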

  5. #25
    Join Date: Aug 2009
    Location: Albuquerque NM USA
    Posts: 42


    I really wish these benchmarks included at least one of the older non-ext filesystems in them. Please, throw in JFS (my fave) or at least XFS or Reiser3, the next time you do one of these. When I look at these articles, it's because I'm wondering, "How long until I start using one of the newer filesystems?" so I wanna see how the cool new stuff compares to the old stable stuff.

    ext4 isn't btrfs' only "competitor." Don't forget the legacies, because realistically, I think that's where most of us are, right now.

  6. #26
    Join Date: Nov 2008
    Posts: 418


    Quote Originally Posted by Blue Beard
    I disagree --> It takes a very large amount of satisfactory user testing to get acceptance.

    Few users --> decades
    If you are talking about production systems at large enterprise companies, the only thing that matters is whether they can trust the technology. They only run old, mature stuff, never new bleeding-edge stuff. For those companies it does not matter how many users there are, as long as the technology is mature and safe.

    But sure, if you talk about home users, then it is a different thing.

  7. #27
    Join Date: May 2010
    Location: Australian Capital Territory
    Posts: 14


    Wikipedia explains many of the unknowns listed above. Btrfs seems to me like a primitive version of the closed-source M$ NTFS compression.

    All my data & archive partitions on all drives are M$ NTFS-compressed, and I have little trouble reading from or writing to them.

    I expect that, as M$ claims, compression has a negligible effect on the CPU. Drive I/O is usually the slowest part of computer usage, so M$ NTFS compression speeds up that bottleneck.

    Linux-only file systems wrongly claim that they never need defragmenting. Luckily M$ Windows has many free defrag & undelete programs.

    Pity that NTFS-3G cannot match the official NTFS driver: no encryption, no compression. There are a few different variants of M$ NTFS.

    Using Linux benchmark programs, you can easily test many file system types on any hard disk drive. Create 4GB partitions next to each other on your drive, then repeat the first 4GB partition again at the end of the group. This will show you the effect of cylinder (disk-zone) speed on otherwise-identical partitions.

    Once you've done the tests, it is very easy to remove these unnecessary partitions.
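    The read half of such a per-partition test can be sketched in a few lines of Python; the device names below are hypothetical, and a real run would need root and should bypass the page cache (e.g. drop caches between runs) for honest numbers.

```python
import time

def read_throughput_mb_s(path: str, limit: int = 64 * 1024 * 1024,
                         chunk: int = 1024 * 1024) -> float:
    """Sequentially read up to `limit` bytes from `path`; return MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while total < limit:
            buf = f.read(chunk)
            if not buf:       # stop at end of file/device
                break
            total += len(buf)
    elapsed = time.perf_counter() - start
    return (total / (1024 * 1024)) / elapsed

# Hypothetical partitions at the start and end of the disk:
# print(read_throughput_mb_s("/dev/sda5"))   # outer tracks, usually faster
# print(read_throughput_mb_s("/dev/sda9"))   # inner tracks, usually slower
```

    Comparing the first and last partitions in the group this way shows the outer-versus-inner track difference the post describes.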

    Retired (medical) IT Consultant, Australian Capital Territory
