Wikipedia Switches Over To HHVM For Faster Performance


  • Wikipedia Switches Over To HHVM For Faster Performance

    Phoronix: Wikipedia Switches Over To HHVM For Faster Performance

As of a few weeks ago, all of Wikipedia's non-cached API and web traffic is being served by Facebook's HHVM rather than PHP proper...


  • #2
    Wow

A 50% performance gain is really impressive...

Does anyone know if it's possible to use HHVM in place of any PHP web server, or does the code have to be adapted?

As the maintainer of a wiki-powered site I am really interested!



    • #3
Originally posted by Passso
A 50% performance gain is really impressive...

Does anyone know if it's possible to use HHVM in place of any PHP web server, or does the code have to be adapted?

As the maintainer of a wiki-powered site I am really interested!
For the most part things should "just work", as there's only a small set of PHP functions that are not implemented in HHVM.
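If you want a quick first check of your own code, HHVM also has a command-line mode; running your entry script through it should surface fatal errors for any functions HHVM doesn't implement (the path below is just an example):

Code:
# Cheap compatibility smoke test; the path is hypothetical.
# HHVM fatals on calls to PHP functions it doesn't implement.
hhvm /var/www/wiki/index.php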
      Michael Larabel
      https://www.michaellarabel.com/



      • #4
        Thx for the info Michael.

Smells like:

        Code:
        sudo apt-get install nginx
        sudo apt-get install hhvm
        Cannot wait to test it

        https://github.com/facebook/hhvm/wiki/Getting-Started
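Installing the packages alone probably isn't enough, though; nginx also has to be told to hand PHP requests to HHVM over FastCGI. A minimal nginx location block might look like this (assuming HHVM runs in FastCGI mode on its default port 9000; adjust paths and ports to your setup):

Code:
# Hypothetical /etc/nginx/sites-available/default snippet:
# hand every .php request to the HHVM FastCGI server.
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
Then start HHVM in server mode (hhvm -m server, or via its init script) before reloading nginx.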



        • #5
Originally posted by Passso
Does anyone know if it's possible to use HHVM in place of any PHP web server, or does the code have to be adapted?

As the maintainer of a wiki-powered site I am really interested!
          You should really read the linked blog post, here again: http://hhvm.com/blog/7205/wikipedia-on-hhvm
          Some quotes:
you've probably known for a while that Wikipedia has been transitioning to HHVM. This has been a long process involving lots of work from many different people
if you're thinking of switching to HHVM but have some native extensions you depend on, this is a great example to follow. The Lua library used by their extension is written in C and uses setjmp and longjmp to handle errors. In certain situations this was causing a number of C++ object destructors to be skipped, leading to memory corruption.
most of that work has been on the PHP that powers Facebook. Like any large codebase, Facebook's PHP code contains recurring idioms and design patterns, and HHVM is great at optimizing these. Looking at HHVM's performance in the context of another large codebase, with its own recurring idioms and design patterns, was a great way to identify optimization opportunities we hadn't yet discovered. I found two easy wins, the first of which was simply that the default Ubuntu PCRE package didn't have the JIT enabled. We rebuilt PCRE with the JIT option enabled, which HHVM knows how to take advantage of, and saw an 8% improvement in the parsing benchmark.
PHP classes can have destructor functions that are run when instances of that class are destroyed, even if that only happens at the very end of the request because the object was kept alive by a global variable or a cyclic reference. Exactly matching PHP's behavior here requires a small but measurable amount of extra bookkeeping at runtime, so HHVM has an option to not run destructors for objects that are alive at the end of a request. This is fine for PHP codebases like Facebook's, where the application was written with this restriction in mind. MediaWiki, however, was developed using the standard PHP interpreter, so for correctness it should be run in the slower but more correct configuration.
switching all API/web servers over to HHVM is by no means the end of this process. There are a number of other tweaks that can be made to further improve HHVM's performance
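Side note: the destructor behavior mentioned above is exposed as a runtime option. If I'm reading HHVM's docs right it's hhvm.enable_obj_destruct_call (the exact name is my assumption, so double-check it), and a MediaWiki-style deployment would keep stock PHP semantics with something like:

Code:
; /etc/hhvm/server.ini -- option name assumed, verify against your
; HHVM version. Runs destructors even for objects still alive at the
; end of a request, matching the standard PHP interpreter at a small
; runtime cost.
hhvm.enable_obj_destruct_call = true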



          • #6
Originally posted by TAXI
            You should really read the linked blog post, here again: http://hhvm.com/blog/7205/wikipedia-on-hhvm
            Some quotes:
            Imagine if the sites had originally used some less shitty languages like

At least there wouldn't be big news about performance, because the initial version would have had better performance than it does now with HHVM.



            • #7
Originally posted by caligula
              Imagine if the sites had originally used some less shitty languages like

At least there wouldn't be big news about performance, because the initial version would have had better performance than it does now with HHVM.
Proof? First off, most of the linked languages are useless as web languages. Second, the Wikipedia article is out of date; PHP with HHVM is not interpreted, and it can beat statically compiled code:
Rather than interpret PHP code directly or compile it to C++, HHVM compiles Hack and PHP into an intermediate bytecode. This bytecode is then translated into x64 machine code dynamically at runtime by a just-in-time (JIT) compiler. This compilation process allows for all sorts of optimizations that cannot be made in a statically compiled binary, thus enabling higher performance.
              (Source: http://hhvm.com/)

              //EDIT: See also: http://hhvm.com/blog/2027/faster-and...f-the-hhvm-jit
              Last edited by V10lator; 07 January 2015, 06:20 PM.



              • #8
Originally posted by TAXI
Proof? First off, most of the linked languages are useless as web languages. Second, the Wikipedia article is out of date; PHP with HHVM is not interpreted, and it can beat statically compiled code:
                (Source: http://hhvm.com/)

                //EDIT: See also: http://hhvm.com/blog/2027/faster-and...f-the-hhvm-jit
                "This compilation process allows for all sorts of optimizations that cannot be made in a statically compiled binary, thus enabling higher performance" is bullshit. It might be higher performance than bytecode interpreter. If you look at Mono/.NET/Android, they're all moved to AOT. Still, never seen any real world benchmarks where AOT/JIT is truly faster than GCC/Clang compiled native code.

                Second, you don't seem to have any clue. Web doesn't require any special properties like slow execution speed from the language. Despite all the fanboys gather around crappy languages like Node.js, Ruby, PHP.



                • #9
Originally posted by TAXI
Proof? First off, most of the linked languages are useless as web languages. Second, the Wikipedia article is out of date; PHP with HHVM is not interpreted, and it can beat statically compiled code:
                  (Source: http://hhvm.com/)

                  //EDIT: See also: http://hhvm.com/blog/2027/faster-and...f-the-hhvm-jit
                  "Faster than native" would also require competitive optimizers. GCC is 14,5M lines of code. Linux kernel is huge. They develop it at 2000-3000 LOC per day. That's 1M LOC per year MAX. HHVM isn't big. It's developed by a small team. Even with the rate kernel is developed it would take years to compete with native compilers on performance. In reality it will take years and still won't beat native.

                  BTW that article hardly talked about competing with native code.



                  • #10
Originally posted by caligula
                    "This compilation process allows for all sorts of optimizations that cannot be made in a statically compiled binary, thus enabling higher performance" is bullshit. It might be higher performance than bytecode interpreter. If you look at Mono/.NET/Android, they're all moved to AOT. Still, never seen any real world benchmarks where AOT/JIT is truly faster than GCC/Clang compiled native code.
I think this refers specifically to compiling PHP: like JavaScript, it is a bit too flexible to be compiled efficiently ahead of time, and is better compiled later, when the JIT compiler can observe which types are actually used. It can't be compared with traditionally compiled languages, which were designed for ahead-of-time compilation.

So, big surprise: languages meant to be interpreted work better interpreted or JIT'ed than traditionally compiled.
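Purely as an illustration (my own toy PHP, not from the article) of why observing types at runtime helps:

Code:
<?php
// Nothing here says what $a and $b are: ints, floats, strings,
// arrays... so an ahead-of-time compiler has to emit generic
// "could be anything" code for the + operator.
function add($a, $b) {
    return $a + $b;
}

// A JIT watching the running program only ever sees ints here, so
// it can emit specialized integer machine code behind a cheap type
// guard, and recompile if a float or string ever shows up.
for ($i = 0; $i < 1000000; $i++) {
    add($i, $i + 1);
}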

