zRAM Is Still Hoping For A Promotion


  • #11
    Another "great" use of zram I found is this:
    I have a new notebook with 6 GB of RAM, 3 GB of which I'm using as a zram block device.
    Then I mount this zram block device as a cache for my home partition (using dm-cache).

    This way I can have a cache of my home partition in zram (which, with a compression ratio of about 3x, can hold up to 9 GB of cached files).
    It's a poor alternative to having an SSD when you're stuck with a slow disk (5400 rpm), but this way I can achieve low latency.
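For anyone who wants to try something similar, creating the zram device itself is the first step. A minimal sketch, assuming a reasonably recent kernel with the zram module; the device name and size are illustrative, and all commands need root:

```shell
# Load the zram module and create one device.
modprobe zram num_devices=1
# Set the uncompressed capacity of /dev/zram0; at roughly 3x compression,
# a full 3G device should cost only about 1 GB of physical RAM.
echo 3G > /sys/block/zram0/disksize
# /dev/zram0 can now be used like any other block device,
# e.g. as a dm-cache cache device or formatted with a filesystem.
```

On newer kernels, compression statistics are exposed under /sys/block/zram0/ (e.g. mm_stat).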

    Comment


    • #12
      Originally posted by gilboa View Post
      Thinking about it, the obvious use for this in a non-mobile machine is tmpfs.
      I wonder if this use case was ever considered....

      - Gilboa
      Yes, you can format the zram block device with a filesystem and mount it at /tmp (or anywhere else you'd use tmpfs), or you can just use tmpfs normally and let its pages swap to zram as needed.
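A minimal sketch of the /tmp variant (device name, size, and filesystem choice are assumptions; run as root):

```shell
# Create a zram device, put a filesystem on it, and mount it at /tmp.
modprobe zram num_devices=1
echo 2G > /sys/block/zram0/disksize
mkfs.ext4 -q /dev/zram0
# "discard" lets the filesystem hand freed blocks back to zram.
mount -o discard /dev/zram0 /tmp
chmod 1777 /tmp   # restore the sticky, world-writable /tmp permissions
```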

      Comment


      • #13
        Originally posted by marco View Post
        Another "great" use of zram I found is this:
        I have a new notebook with 6 GB of RAM, 3 GB of which I'm using as a zram block device.
        Then I mount this zram block device as a cache for my home partition (using dm-cache).

        This way I can have a cache of my home partition in zram (which, with a compression ratio of about 3x, can hold up to 9 GB of cached files).
        It's a poor alternative to having an SSD when you're stuck with a slow disk (5400 rpm), but this way I can achieve low latency.
        This is a *great* idea.
        1. How well does it work?
        (And for what do you use this machine?)
        2. What type of write caching does this solution employ? (E.g., what happens if the machine dies / blows up?)

        - Gilboa
        oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
        oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
        oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
        Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.

        Comment


        • #14
          Originally posted by Ericg View Post
          Also, if you believe Apple, compressed swap was one of their 'power saving' features in Mavericks, because for them the CPU time to compress/decompress and write to RAM used less power than spinning the disk up, writing to it, then later spinning it back up again and reading from it. Even for SSDs, compressing to RAM was still faster (and less power hungry) than doing it to disk.
          That's pretty interesting.
          So in OS X they use a method similar to zRAM to compress and store swap in RAM rather than on disk?

          Comment


          • #15
            Originally posted by marco View Post
            Another "great" use of zram I found is this:
            I have a new notebook with 6 GB of RAM, 3 GB of which I'm using as a zram block device.
            Then I mount this zram block device as a cache for my home partition (using dm-cache).

            This way I can have a cache of my home partition in zram (which, with a compression ratio of about 3x, can hold up to 9 GB of cached files).
            It's a poor alternative to having an SSD when you're stuck with a slow disk (5400 rpm), but this way I can achieve low latency.
            Very nice!
            So it has practical value even for new PCs with plenty of RAM.

            Comment


            • #16
              Originally posted by Apopas View Post
              That's pretty interesting.
              So in OS X they use a method similar to zRAM to compress and store swap in RAM rather than on disk?
              Unless I misunderstood: http://arstechnica.com/apple/2013/06...-saving-magic/ , yes.
              All opinions are my own not those of my employer if you know who they are.

              Comment


              • #17
                Originally posted by BO$$ View Post
                OS X has this feature. This is exactly what I'm talking about. Kernel devs keeping Linux behind with their nonsensical reasoning. Dudes, they have it. You don't. It's important even for marketing purposes. Just put it in ASAP. Too many techies getting in the way of Linux advancement. Sometimes you put shit in only for the hype. Techies don't understand this. That is why a marketing team must exist to guide the techies.
                It has been in the linux kernel for more than a year, at least...

                Comment


                • #18
                  Originally posted by JanC View Post
                  It has been in the linux kernel for more than a year, at least...
                  As you can see (http://en.wikipedia.org/wiki/Zram, and the mentions here in the comments), there has not been a lot of adoption so far.

                  Comment


                  • #19
                    Originally posted by gilboa View Post
                    This is a *great* idea.
                    1. How well does it work?
                    (And for what do you use this machine?)
                    2. What type of write caching does this solution employ? (E.g., what happens if the machine dies / blows up?)

                    - Gilboa
                    The original idea was taken from here
                    https://lkml.org/lkml/2013/8/21/54
                    but the operative idea comes from
                    SSD Caching Using dm-cache Tutorial
                    http://blog.kylemanna.com/linux/2013/06/30/ssd-caching-using-dmcache-tutorial/

                    Instead of using two SSD partitions, you can use two zram devices (zram0 for metadata and zram1 for cache).
                    dm-cache is the only solution that doesn't require reformatting the disk you are caching (bcache needs its own on-disk format).

                    You can use dm-cache in two modes:
                    1. writethrough: every write is also written to the original disk;
                    2. writeback: every write remains in RAM (until a sync or flush command, or a periodic timer).

                    Using it in writeback mode is really fast, with very low latency, but it is prone to corruption if the machine hangs or loses power.
                    In that case, at the next reboot device-mapper can't fsck the cached device, but you can always fsck the original disk (manually, before mounting).
                    Writethrough mode guarantees the data is always consistent, even in case of failure.

                    After testing both modes (and verifying that writeback really is fast), I'm now using writethrough mode, because I have an SI Radeon card that sometimes hangs my PC, but I'm planning to switch once radeon reaches enough stability.
                    I'm using it to compile sources (kernel, mesa, whatever from git), but I think the whole GNOME environment benefits from it by not having to reread its configuration.
                    Consider also that, in case of memory pressure, dm-cache can swap to disk normally.
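A sketch of the setup described above, loosely following the dm-cache tutorial linked earlier. The device names, sizes, and origin partition are assumptions, not the poster's exact values; run as root:

```shell
# Two zram devices: one for dm-cache metadata, one for the cache itself.
modprobe zram num_devices=2
echo 256M > /sys/block/zram0/disksize   # metadata device
echo 3G   > /sys/block/zram1/disksize   # cache device

ORIGIN=/dev/sdXN                        # the partition being cached (illustrative)
ORIGIN_SECTORS=$(blockdev --getsz "$ORIGIN")

# dm-cache table: start length cache <metadata> <cache> <origin>
#   <block size in sectors> <#features> <features> <policy> <#policy args>
# "writethrough" keeps the origin consistent after a crash;
# swap in "writeback" for the faster (but riskier) mode.
dmsetup create home-cached --table \
  "0 $ORIGIN_SECTORS cache /dev/zram0 /dev/zram1 $ORIGIN 512 1 writethrough default 0"

mount /dev/mapper/home-cached /home
```

Since zram contents vanish on reboot, the cache has to be recreated (and, for writeback, cleanly flushed with dmsetup suspend/remove before shutdown) every boot cycle.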

                    Comment


                    • #20
                      Originally posted by BO$$ View Post
                      OS X has this feature. This is exactly what I'm talking about. Kernel devs keeping Linux behind with their nonsensical reasoning.
                      Yeah, totally. Kinda like the other new feature Apple announced, timer coalescing, which has been in the Linux kernel since 2010.

                      Comment
