
Thread: Oracle Rewrites Linux ZCache Compression Code

  1. #1
    Join Date
    Jan 2007
    Posts
    15,111

    Default Oracle Rewrites Linux ZCache Compression Code

    Phoronix: Oracle Rewrites Linux ZCache Compression Code

    Seth Jennings of IBM proposed that ZCache be moved out of the Linux kernel's staging area and accepted officially into the mainline tree. However, that proposal is being criticized by Oracle engineers, who have evidently "completely rewritten zcache" and will share it soon, but who still don't see a reason for the memory compression code to leave staging...

    http://www.phoronix.com/vr.php?view=MTE0ODQ

  2. #2
    Join Date
    Nov 2009
    Location
    Madrid, Spain
    Posts
    398

    Default

    @Michael
    that proposal is being criticized by an Oracle engineers as they have evidently "completely rewritten zcache"
    What makes it so evident? Apart from that, this seems to be mostly a fairly open and clean development and release discussion, with no need for flashy news, doesn't it?

    This piece of news is like an article saying:
    "New IonMonkey virtual machine has a patch to dramatically improve performance", which points to ARFY and shows the revision http://hg.mozilla.org/projects/ionmo...v/eef915d5a18f, claiming it improves SunSpider performance by 20%.
    But if you look closer and click the link, you will see that the bug actually shows a regression. Also, SunSpider is not the most important JavaScript benchmark, and is considered one of the worst (V8, Kraken, Dromaeo and Peacekeeper are considered better).
    In the end, reporting on background development talk (even when it is posted on a public forum) makes little sense either way. Take LTO in GCC on Phoronix: LTO reduces binary size in most cases, but Phoronix never says how big or small the binaries are. LTO is still buggy for some projects even today, and has less impact on performance. Even though it has been discussed since GCC 4.5, it has had little tangible impact for users, and certainly for Phoronix.
    If PTS is interested in final performance, it has to do a few things:
    - use flag mining (write a program that searches for the best compiler flags) for computationally intensive programs, and do it for every GCC version. If I had code that I knew ran 15% faster with auto-vectorization in GCC 4.5, but not under GCC 4.6 because of a regression, I would change the flags accordingly
    - use PGO on every final product: in scientific code, or in any other situation where you compare GCC versions (mostly database code, or anything where the "kernel" of the code is complex), PGO gives a tangible and visible performance boost, because it supplies inlining strategy information. What if PGO works too well, say a 100% speedup in a particular test that is unachievable in practice? Then that test should be removed, because it is not relevant in practice
    - compilation time for a full build of a big project (whether it is Apache, the Linux kernel or GCC) is not as relevant as compilation time after changing a file. Pick a file that can be considered important in the baseline code and measure the difference accordingly. Write a script that applies a patch (and, if it is already applied, applies the reverse patch) and runs a build. The initial compilation doesn't count for much, even in LibreOffice. I point this out because in many cases the "configure" step is serial while the "make" step is mostly parallel, so the differences in compilation time should be bigger than usual
    - for enterprise code, Java is the king, so if the code is performance driven, try using extra flags, just as you would with GCC. Even on a desktop setup I am used to running with something like: -server -XX:+DoEscapeAnalysis
    - I have no relevant experience with video drivers (even though I understand pretty well how they work), so I will skip this part; although I sometimes notice criticism on the forums, I cannot say yes or no either way.
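    The flag-mining idea above could be sketched as a small shell script like this; everything in it (flagbench.c, the candidate flag list) is a made-up stand-in for illustration, not anything PTS actually does:

    ```shell
    #!/bin/sh
    # Hypothetical flag-mining sketch: build one workload with several
    # candidate flag sets and keep the fastest. flagbench.c is a made-up
    # stand-in for a computationally intensive program.
    set -e
    cat > flagbench.c <<'EOF'
    int main(void) {
        volatile long s = 0;
        for (long i = 0; i < 5000000; i++) s += i % 7;
        return 0;
    }
    EOF

    BEST=""
    BEST_MS=9999999
    for FLAGS in "-O1" "-O2" "-O2 -ftree-vectorize" "-O3"; do
        gcc $FLAGS flagbench.c -o flagbench
        START=$(date +%s%N)
        ./flagbench
        END=$(date +%s%N)
        MS=$(( (END - START) / 1000000 ))
        echo "$FLAGS: ${MS}ms"
        if [ "$MS" -lt "$BEST_MS" ]; then
            BEST_MS=$MS
            BEST="$FLAGS"
        fi
    done
    echo "best flags: $BEST"
    ```

    You would rerun the same search for each GCC release, since (as with the auto-vectorization regression I mentioned) the winning flag set can change between versions.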
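    For the PGO point, the standard GCC workflow is a two-pass build; here is a minimal sketch, with pgobench.c as a made-up stand-in for the real workload:

    ```shell
    #!/bin/sh
    # Hypothetical PGO sketch with GCC; pgobench.c is a made-up stand-in
    # for the real workload being benchmarked.
    set -e
    cat > pgobench.c <<'EOF'
    int main(void) {
        volatile long s = 0;
        for (long i = 0; i < 1000000; i++) s += i;
        return 0;
    }
    EOF

    # 1. Build with instrumentation that records an execution profile.
    gcc -O2 -fprofile-generate pgobench.c -o pgobench-instr

    # 2. Run a representative workload; this writes .gcda profile data.
    ./pgobench-instr

    # 3. Rebuild with the profile guiding inlining and code layout.
    gcc -O2 -fprofile-use pgobench.c -o pgobench-pgo
    ```

    The benchmark comparison would then be between the plain -O2 binary and the -fprofile-use one.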
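    The incremental-rebuild measurement I described could look something like the following; the tiny project, Makefile and patch are all invented for illustration:

    ```shell
    #!/bin/sh
    # Hypothetical sketch of timing an incremental rebuild: toggle a
    # one-line patch and time only the resulting `make`. The tiny project,
    # Makefile and patch below are invented for illustration.
    set -e
    printf 'original\n' > src.txt
    printf 'out: src.txt\n\tcp src.txt out\n' > Makefile
    printf -- '--- a/src.txt\n+++ b/src.txt\n@@ -1 +1 @@\n-original\n+patched\n' > change.patch

    # Warm build, so only the patched file needs rebuilding afterwards.
    make >/dev/null

    # Apply the patch if it is not applied yet, otherwise reverse it, so
    # repeated runs always dirty exactly one file.
    if patch -p1 -N --dry-run < change.patch >/dev/null 2>&1; then
        patch -p1 < change.patch >/dev/null
    else
        patch -p1 -R < change.patch >/dev/null
    fi

    # Time only the incremental rebuild, not configure or a full build.
    START=$(date +%s%N)
    make
    END=$(date +%s%N)
    echo "incremental rebuild: $(( (END - START) / 1000000 ))ms"
    ```

    Because the patch is toggled on each run, the script can be rerun as many times as needed to average the rebuild time, and the serial "configure" step never pollutes the measurement.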
