
Thread: Ubuntu Linux Considers Greater Usage Of zRAM


  1. #1
    Join Date
    Jan 2007
    Posts
    14,910

    Default Ubuntu Linux Considers Greater Usage Of zRAM

    Phoronix: Ubuntu Linux Considers Greater Usage Of zRAM

    Ubuntu developers and users have brought back up the matter of zRAM and using it as part of the default Ubuntu Linux installation in some intelligent manner...

    http://www.phoronix.com/vr.php?view=MTI0NjQ

  2. #2
    Join Date
    Sep 2008
    Location
    Vilnius, Lithuania
    Posts
    2,563

    Default

    for those with modern computers having several Gigabytes of system memory.
    Nitpick: memory size is typically measured in gibibytes, not gigabytes. Otherwise you'd have to say "I have 8.6 GB of RAM!".

  3. #3
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,463

    Default

    Yeah, strictly speaking a gigabyte is 10^9, not 2^30, although it gets used both ways. The term "gibibyte" (binary gigabyte, presumably) is being promoted for the 2^30 definition.

    That feels wrong somehow, although I guess "you know what I mean" isn't a good foundation for a technical standard

  4. #4
    Join Date
    Sep 2008
    Location
    Vilnius, Lithuania
    Posts
    2,563

    Default

    Quote Originally Posted by bridgman View Post
    That feels wrong somehow, although I guess "you know what I mean" isn't a good foundation for a technical standard
    Personally I set KDE to use the SI system (1 kB = 1000 B) for all the file sizes. It's apparently default on Mac OS X as well. It makes little sense to count file size in powers of two to begin with, it just causes confusion, especially when dealing with large files. Though it does make sense for RAM, because its modules come in sizes of powers of two.

    And yea, Windows, last time I checked, still incorrectly labels "MB" and such even if it really means "MiB". Though I'm not sure if it was changed in Windows 8 or not. Probably not.

  5. #5

    Default

    I need an explanation about zRAM.

    If I understand it well, it will use RAM instead of disk for swapping/paging, right?

    Swapping happens when the system runs out of RAM and needs more. Then it will save some of the RAM contents on disk, free that RAM portion and use it. OK so far?

    So a system with a lot of RAM will never need to swap, or need it very rarely. A system with very little RAM will swap pretty soon. Now, if you use zRAM, you make swapping just happen earlier. How is this good at all? Yes, you're swapping to RAM, but you might as well use that RAM directly and have no need for swapping.

    Say you have 1GB of RAM and devote 256MB to zRAM. Now you have 768MB available. When the system needs 900MB, it will use part of your zRAM as its swap area. If you didn't have zRAM, the system would have accessed RAM directly: no need to swap. If the system needs 2GB, it'll swap to disk anyway, since the 256MB of zRAM will be of no use in that case.

    I'm confused. I'm definitely missing something. I'd appreciate an explanation.
    Last edited by Aleve Sicofante; 12-08-2012 at 09:22 PM.

  6. #6
    Join Date
    Apr 2011
    Location
    Slovakia
    Posts
    77

    Default

    Quote Originally Posted by Aleve Sicofante View Post
    I need an explanation about zRAM.
    Say you have 1GB of RAM and devote 256MB to zRAM. Now you have 768MB available. When the system needs 900MB, it will use part of your zRAM as its swap area. If you didn't have zRAM, the system would have accessed RAM directly: no need to swap. If the system needs 2GB, it'll swap to disk anyway, since the 256MB of zRAM will be of no use in that case.

    I'm confused. I'm definitely missing something. I'd appreciate an explanation.
    But zRAM is COMPRESSED. So imagine the system can put 512MB of data into that 256MB pool. Suddenly you have something like 1.25GB of RAM. Yes, there is a tradeoff: those 512MB can't be accessed directly by the kernel, and the compression/decompression takes some CPU time. However, the theory says that even this is still faster than swapping to a physical disk.
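The arithmetic in the post above can be sketched in a few lines. The 2:1 compression ratio is purely an illustrative assumption; real ratios depend entirely on what kind of data gets swapped:

```python
# Rough sketch of the effective-memory arithmetic described above.
# The 2:1 compression ratio is an assumption for illustration only.

def effective_ram_mb(total_ram_mb, zram_pool_mb, compression_ratio=2.0):
    """Estimate usable memory when a zRAM pool holds compressed pages."""
    uncompressed_data = zram_pool_mb * compression_ratio  # data the pool can hold
    return (total_ram_mb - zram_pool_mb) + uncompressed_data

# 1 GB of RAM with a 256 MB zRAM pool at 2:1 compression:
print(effective_ram_mb(1024, 256))  # -> 1280.0, i.e. roughly 1.25 GB
```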

  7. #7

    Default

    Quote Originally Posted by Aleve Sicofante View Post
    Say you have 1GB of RAM and devote 256MB to zRAM. Now you have 768MB available. When the system needs 900MB, it will use part or your zRAM as its swap area. If you didn't have zRAM, the system would have accessed RAM directly: no need to swap. If the system needs 2GB, it'll swap to disk anyway, since the 256MB of zRAM will be of no use in that case.
    No

    zRAM consumes RAM only when used, not when merely initialized, so if you have 1 GB of RAM and devote 256 MB to zRAM, you'll virtually see 1 GB of RAM + 256 MB of compressed swap (clearly visible when running 'free'). When the system needs 900 MB, it still only uses RAM and no swap. If the system needs 1.2 GB, it will start moving pages from RAM to zRAM. The available RAM will decrease, since some of it is now backing zRAM; available zRAM will also decrease, but more slowly, since the data in it is compressed.

    This is why zRAM works well in almost every scenario.
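A toy model of that on-demand behaviour, using the same 1 GB / 256 MB example. All the numbers and the 2:1 ratio are illustrative assumptions, and the model deliberately ignores that the backing RAM comes out of the same physical pool:

```python
# Toy model of zRAM's on-demand behaviour: the device consumes real RAM
# only for pages actually swapped into it, and compression (assumed 2:1
# here) makes those pages occupy less physical memory than their size.

def memory_state(demand_mb, ram_mb=1024, zram_limit_mb=256, ratio=2.0):
    """Return (ram_used, zram_backing_ram) for a given memory demand."""
    if demand_mb <= ram_mb:
        # Below physical RAM: zRAM sits idle and costs (almost) nothing.
        return demand_mb, 0.0
    overflow = demand_mb - ram_mb                     # pages pushed to zRAM
    backing = min(overflow, zram_limit_mb * ratio) / ratio
    return ram_mb, backing                            # backing RAM is compressed

print(memory_state(900))   # -> (900, 0.0): no swapping yet
print(memory_state(1200))  # -> (1024, 88.0): 176 MB of pages held in ~88 MB
```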

  8. #8
    Join Date
    Oct 2012
    Posts
    148

    Default

    Quote Originally Posted by GreatEmerald View Post
    Personally I set KDE to use the SI system (1 kB = 1000 B) for all the file sizes. It's apparently default on Mac OS X as well. It makes little sense to count file size in powers of two to begin with, it just causes confusion, especially when dealing with large files. Though it does make sense for RAM, because its modules come in sizes of powers of two.

    And yea, Windows, last time I checked, still incorrectly labels "MB" and such even if it really means "MiB". Though I'm not sure if it was changed in Windows 8 or not. Probably not.
    And yet HDDs internally have 512-byte (2^9) or 4096-byte (2^12) sectors and can't process data in smaller units than that... File systems (usually) allocate space for files in 4096-, 8192- or 16384-byte chunks, and while other cluster sizes are available for different file systems, there are no file systems that operate on decimal 4 KB, 8 KB or 16 KB clusters.

    The only place in computing that sees SI prefix usage is network speeds, but that's because we measure them in bits, not bytes, since bytes don't always have to be 8 bits long...

    Computers deal with binary numbers, so get used to it. Decimal prefixes for HDDs are just the result of marketdroids messing with stuff they (as always) don't understand. If they could use 908-byte kilobytes, by God, they would.

  9. #9
    Join Date
    Sep 2008
    Location
    Vilnius, Lithuania
    Posts
    2,563

    Default

    Quote Originally Posted by tomato View Post
    And yet HDDs internally have 512-byte (2^9) or 4096-byte (2^12) sectors and can't process data in smaller units than that... File systems (usually) allocate space for files in 4096-, 8192- or 16384-byte chunks, and while other cluster sizes are available for different file systems, there are no file systems that operate on decimal 4 KB, 8 KB or 16 KB clusters.
    Which is good and all, but it's completely pointless to use that system for humans. We don't think in powers of two, so the fact that 8 GiB = 8589934592 B is something normal people will never calculate off the top of their heads. But 8 GB = 8000000000 B makes perfect sense and causes no confusion. It doesn't mean that things have to be counted in decimal internally, but when sizes are presented to the user, it would be nice if they were calculated in a way we understand. Similarly, all the configuration options you see in programs have descriptive names instead of direct variable names, even if the latter could be easier and is how things work internally...
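The gap between the two conventions is plain arithmetic, nothing assumed:

```python
# Decimal (SI) vs binary (IEC) prefixes for the same "8 gigs" of memory.
GB = 10**9    # gigabyte (SI)
GiB = 2**30   # gibibyte (IEC)

print(8 * GB)            # 8000000000
print(8 * GiB)           # 8589934592
print(8 * GiB - 8 * GB)  # 589934592 bytes of difference
print(round(8 * GiB / (8 * GB) - 1, 4))  # 0.0737, i.e. a ~7.4% gap
```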

  10. #10
    Join Date
    Jun 2012
    Posts
    350

    Default

    Quote Originally Posted by GreatEmerald View Post
    Personally I set KDE to use the SI system (1 kB = 1000 B) for all the file sizes. It's apparently default on Mac OS X as well. It makes little sense to count file size in powers of two to begin with, it just causes confusion, especially when dealing with large files. Though it does make sense for RAM, because its modules come in sizes of powers of two.
    Yes, let's instead say a file needs "100 MB" of space when it really needs "100 MiB". Consumers are incapable of telling the difference, and HDDs have been using MB for years to oversell their capacity.
