
Thread: Linux 3.3 Kernel: Btrfs vs. EXT4

  1. #31


    Quote Originally Posted by fackamato:
    Thanks. I can't wait for SSDs to become good and usable.
    That behaviour is specific to Sandforce controllers though, not every SSD does that.

  2. #32

    Or deduplication - the controller does that already. You have 3000 write cycles with 34nm flash chips; 25nm and 20nm have a lot less.

    A good controller with room to spare can use that and give you years of lifetime. But with SandForce, if you compress your stuff you lose a lot of this. Don't use compression with SandForce.
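    A rough back-of-the-envelope sketch of what "years of lifetime" means (all numbers below are illustrative assumptions - a hypothetical 120GB drive, write amplification of 2, 10GB of writes a day - not datasheet values):

    Code:
    # crude endurance estimate: capacity * P/E cycles / write amplification
    capacity_gb=120; pe_cycles=3000; write_amp=2; daily_writes_gb=10
    total_writes_gb=$(( capacity_gb * pe_cycles / write_amp ))
    echo "$(( total_writes_gb / daily_writes_gb / 365 )) years"   # ~49 years with these made-up numbers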

  3. #33

    A good controller with room to spare can use that and give you years of lifetime. But with SandForce, if you compress your stuff you lose a lot of this. Don't use compression with SandForce.
    Well then WTF is SandForce good for?

    JPEG images = Compressed.
    Movies/Videos/Flash = Compressed
    OpenOffice Documents = Compressed

    etc etc.

    90% of all the data that you give a shit about storing on a file system is going to be compressed.

    And Intel (or you) is telling us that we shouldn't use compression? What sort of crap is that?

    It does not make any sense. Having a drive that tries to outguess the operating system on stuff like this is an anti-feature. It's undesirable.
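    You can check this yourself: running an already-compressed file through gzip barely changes its size. A quick sketch (photo.jpg is just a placeholder name):

    Code:
    stat -c %s photo.jpg        # original size in bytes
    gzip -c photo.jpg | wc -c   # bytes after another round of compression
    # for JPEG/MP3/video the two numbers come out nearly identical (sometimes the gzipped one is bigger)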

  4. #34

    If everything is compressed anyway, then why use filesystem compression?

    But you are wrong - most stuff on your disk is not compressed. Your mail database/maildir/mbox is not compressed, config files are not compressed, binaries are not compressed, libs are not compressed, and so on.

  5. #35

    If everything is compressed anyway, then why use filesystem compression?
    Because you have a unique set of circumstances where compression actually helps you, which is not going to be typical for most desktop users or servers.

    Why do you think that not every file system supports compression? Why do you think that even when compression is available it's off by default? If it were such a slam-dunk wonderful thing then it would be on by default for everything. This sort of stuff isn't new, you know. I had file system compression back when I used DOS 6.0.

    http://en.wikipedia.org/wiki/DriveSpace

    Generally speaking, while compression can improve read/write speeds for large uncompressed files very well, it harms random access. And on most systems random access to uncompressed files is going to be more important than raw read/write speeds, especially for desktop performance.

    This is why compression looks so good in benchmarks, but it isn't a great thing in practice for every situation. Benchmarks tend to use large files with repetitive data that is easily compressed. If they used random data then it would make compression look lousy.
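    For what it's worth, Btrfs keeps compression strictly opt-in, per mount. A minimal sketch (the device and mount point are placeholders; zlib and lzo are the algorithms available around the 3.x kernels):

    Code:
    # mount a btrfs volume with transparent compression
    mount -o compress=lzo /dev/sdb1 /mnt/data
    # or permanently, in /etc/fstab:
    #   /dev/sdb1  /mnt/data  btrfs  compress=lzo  0  0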

    But you are wrong - most stuff on your disk is not compressed.
    No, I am right because 'most stuff' is compressed. My operating system takes up about 2-4GB. Out of that...

    configs in /etc ...

    # du -sh /etc/
    11M /etc/

    executable program binaries...

    # du -sh /bin /usr/bin/ /usr/*/bin/
    4.9M /bin
    165M /usr/bin/
    0 /usr/local/bin/


    Library (*.so) files.
    # echo $(($(find /lib /usr/ -type f -name '*.so' -print0 | xargs -0 du -k | cut -f1 | while read i ; do size=$(($size + $i)) ; echo $size ; done | tail -n1) / 1024))M
    179M

    Compare that to mp3 files...

    # echo $(($(find / -type f -name '*.mp3' -print0 | xargs -0 du -k | cut -f1 | while read i ; do size=$(($size + $i)) ; echo $size ; done | tail -n1) / 1024))M
    1691M

    or even jpgs:

    # echo $(($(find / -type f -name '*.jpg' -print0 | xargs -0 du -k | cut -f1 | while read i ; do size=$(($size + $i)) ; echo $size ; done | tail -n1) / 1024))M
    1012M

    Here are even the webm files, mostly downloaded from YouTube for safekeeping:
    # echo $(($(find / -type f -name '*.webm' -print0 | xargs -0 du -k | cut -f1 | while read i ; do size=$(($size + $i)) ; echo $size ; done | tail -n1) / 1024))M
    1929M
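    (The same totals can be computed a bit more simply with GNU find's -printf, which prints disk usage in 1K blocks - a sketch, assuming GNU findutils and awk:)

    # find / -type f -name '*.webm' -printf '%k\n' | awk '{s+=$1} END {printf "%dM\n", s/1024}'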

    Now in my home directory I easily have about 2GB in my ~/Download directory alone. Then in my browser caches I have a few hundred megs more. All that stuff is using compression. With one video game, say StarCraft II, I have to download multiple GB of data, all of which is compressed. Even after it's installed to my Wine directory, all but a few megs of it is going to be compressed. A single HD movie is going to be several times larger than all the uncompressed text files on my system.

    Even a single OpenOffice document file can easily use up more disk space than all the uncompressed binaries or library files on my system if it's a fairly complex one, and those formats use compression also.
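    (ODF files are just zip containers, which is easy to see for yourself - report.odt below is a placeholder name:)

    # unzip -l report.odt

    That lists content.xml, styles.xml and so on; the container is ordinary zip/deflate.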

    .....................


    Which goes back to my point: if Intel says not to use compression with their drives, they are smoking crack. Either Intel is cranking out shitty designs whose main purpose is to look good in benchmarks, or some people are making claims about Intel's drives that are false.

  6. #36

    When I used Reiser4 I had compression on. Reiser4 tests whether a file is compressible before committing, btw, so it does not try to compress stuff it can't compress any further.

    And the advantages were huge. /var, /, even /home or /mnt/movies benefited from it.

    So - most stuff is compressible, and it is a nice feature for an fs - on a hard disk. Not for an fs on a SandForce SSD.
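    The test-before-committing idea is easy to approximate by hand: compress just the first chunk of a file and see whether it shrinks. A rough illustration (not Reiser4's actual heuristic; bigfile is a placeholder):

    Code:
    # compress only the first 64 KiB as a cheap compressibility probe
    sample=$(head -c 65536 bigfile | gzip -c | wc -c)
    if [ "$sample" -lt 58982 ]; then   # under ~90% of 65536 bytes
        echo "looks compressible"
    else
        echo "skip compression"
    fi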

  7. #37

    Quote Originally Posted by AnonymousCoward:
    That behaviour is specific to Sandforce controllers though, not every SSD does that.
    Ignore SF for now. If you can, grab the newish Samsungs. They have the best all-around performance, and are much more load-independent based upon my research.
    Still, if it's the space saving from compression you are interested in, grab some big platter drives and run an fs that supports compression. Much more cost-efficient.

  8. #38
    Performance optimizations for mirrors

    I wonder if Btrfs in a mirror configuration will issue requests to both drives simultaneously. After all, the information resides in two (or more, but I assume just two) places, so it would be of benefit to interleave the reads from the mirrors. With RAID 0 you get that automatically due to its nature, and the writes also benefit from it. For a mirrored system only reads can be optimized, but it should still yield better overall performance.

    I don't know if the other RAID implementations for Linux (or any other OS) do it, but it seems like an obvious optimization. Does anyone have any thoughts on that?

  9. #39

    Quote Originally Posted by kobblestown:
    I wonder if Btrfs in mirror configuration will issue requests to both drives simultaneously. After all, the information resides on two (or more but I assume just two) places so it would be of benefit to interleave the reads from the mirrors. With RAID 0 you get that automatically due to its nature and the writes also benefit from it. For a mirrored system only reads can be optimized but it should still yield better overall performance.

    I don't know if the other RAID implementations for Linux (or any other OS) do it. But it seems like an obvious optimization. Anyone has any thoughts on that?
    mdadm

    See http://en.wikipedia.org/wiki/Non-sta...nux_MD_RAID_10

    Code:
    The driver also supports a "far" layout where all the drives are divided into f sections. All the chunks are repeated in each section but offset by one device. For example, f2 layouts on 2-, 3-, and 4-drive arrays would look like:
    
    2 drives             3 drives             4 drives
    --------             --------------       --------------------
    A1  A2               A1   A2   A3         A1   A2   A3   A4
    A3  A4               A4   A5   A6         A5   A6   A7   A8
    A5  A6               A7   A8   A9         A9   A10  A11  A12
    ..  ..               ..   ..   ..         ..   ..   ..   ..
    A2  A1               A3   A1   A2         A4   A1   A2   A3
    A4  A3               A6   A4   A5         A8   A5   A6   A7
    A6  A5               A9   A7   A8         A12  A9   A10  A11
    ..  ..               ..   ..   ..         ..   ..   ..   ..
    This is designed for striping performance of a mirrored array; sequential reads can be striped, as in RAID-0, random reads are somewhat faster (maybe 10-20 % due to using the faster outer disk sectors, and smaller average seek times), and sequential and random writes offer about equal performance to other mirrored raids. The layout performs well for systems where reads are more frequent than writes, which is common. The first 1/f of each drive is a standard RAID-0 array. This offers striping performance on a mirrored set of only 2 drives.
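    Setting that layout up is a one-liner with mdadm (a sketch - /dev/sda1 and /dev/sdb1 are placeholder devices):

    Code:
    # two-disk mirror using the "far 2" layout described above
    mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sda1 /dev/sdb1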

  10. #40
    xfs?

    The benchmark is missing XFS, which is the sane default pick of a fs for Linux.
