Using AVX2 With Android's Bionic Library Can Yield Much Better Chromebook Performance


  • Using AVX2 With Android's Bionic Library Can Yield Much Better Chromebook Performance

    Phoronix: Using AVX2 With Android's Bionic Library Can Yield Much Better Chromebook Performance

    Intel's Open-Source Technology Center has published a whitepaper looking at the Android application performance impact on Intel-powered Chromebooks when the Android Bionic Library is optimized for AVX2...


  • #2
    Now Intel will have to explain to consumers that most of their low cost processors, many of which end up in lower cost Chromebooks, don't have support for AVX.

    Back then, Google did not enable NEON for ARMv7 builds because Nvidia had shipped a Cortex-A9 SoC without NEON support. Because of that *single* SoC lacking the feature, Android apps did not support NEON for years. Of course, native libs can still use more features than the NDK baseline (via runtime detection, as sketched below), but you have to rely on the OEM to do a proper job.

    IMHO Intel can only blame themselves for not having AVX enabled everywhere by default.
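
    To be fair, NDK code has long been able to do its own runtime check through the cpufeatures helper library. A minimal sketch, assuming the NDK's cpu-features.h and a hypothetical pair of NEON/scalar kernels:

    #include <cpu-features.h>  /* from the NDK's "cpufeatures" static library */

    /* Hypothetical NEON and scalar implementations of the same routine. */
    void filter_neon(float *buf, int n);
    void filter_scalar(float *buf, int n);

    void filter(float *buf, int n)
    {
        /* Take the NEON path only when the CPU we are running on actually has it. */
        if (android_getCpuFamily() == ANDROID_CPU_FAMILY_ARM &&
            (android_getCpuFeatures() & ANDROID_CPU_ARM_FEATURE_NEON) != 0)
            filter_neon(buf, n);
        else
            filter_scalar(buf, n);
    }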

    • #3
      Originally posted by ldesnogu View Post
      Now Intel will have to explain to consumers that most of their low cost processors, many of which end up in lower cost Chromebooks, don't have support for AVX.
      Fair point.

      However, perhaps these benchmarks are aimed precisely at the audience who's making the decision about whether to equip various Chromebook models with cores that support AVX vs. those that don't.

      Originally posted by ldesnogu View Post
      Of course, the native libs can still support more features than what the NDK can but you have to rely on the OEM to do a proper job.
      If it's something like AVX, can't we just use runtime checks, as is the norm elsewhere? (A rough sketch is at the end of this post.)

      Originally posted by ldesnogu View Post
      IMHO Intel can only blame themselves for not having AVX enabled everywhere by default.
      That's a bit harsh. Even if you only process AVX in 128-bit chunks, you still have the footprint of the 256-bit register file. So, especially in earlier cost-optimized cores, it might've added significant area.

      Since they could simply use runtime detection and an alternate codepath, would the cost of burning die area on forward-looking features that wouldn't have immediately conferred any performance benefit really have been justified?
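
      To make the runtime-check point concrete, here's a minimal sketch of the usual pattern with GCC/Clang on x86 (the blend_* kernel names are hypothetical):

      #include <stddef.h>

      /* Hypothetical AVX2 and baseline implementations of the same routine. */
      void blend_avx2(unsigned char *dst, const unsigned char *src, size_t n);
      void blend_sse2(unsigned char *dst, const unsigned char *src, size_t n);

      void blend(unsigned char *dst, const unsigned char *src, size_t n)
      {
          /* GCC/Clang cache the CPUID result, so this branch is essentially free. */
          if (__builtin_cpu_supports("avx2"))
              blend_avx2(dst, src, n);
          else
              blend_sse2(dst, src, n);
      }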
      Last edited by coder; 09 February 2019, 01:13 PM.

      • #4
        On the up side, don't all modern AMD processors have AVX enabled?

        • #5
          Originally posted by willmore View Post
          On the up side, don't all modern AMD processors have AVX enabled?
          Yes, since Bulldozer, although due to a different implementation their AVX units were not as fast under certain circumstances. I also hope that they are going to support at least some of the most important AVX-512 subsets with Zen 2.

          • #6
            Originally posted by coder View Post
            This.

            However, perhaps these benchmarks are aimed precisely at the audience who's making the decision about whether to equip various Chromebook models with cores that support AVX vs. those that don't.


            If it's something like AVX, can't we just use runtime checks, as is the norm elsewhere?


            That's a bit harsh. Even if you only process AVX in 128-bit chunks, you still have the footprint of the 256-bit register file. So, especially in earlier cost-optimized cores, it might've added significant area.

            Since they could simply use runtime detection and an alternate codepath, can you really justify the cost of burning die area for forward-looking features that don't immediately confer any performance benefit?
            Well, if glibc can do it, then Bionic should be able to do it as well.
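
            For reference, what glibc does is load-time dispatch via ifunc resolvers, and GCC/Clang expose roughly the same machinery through function multi-versioning. A minimal sketch, assuming GCC 6+ or a recent Clang on x86 and a made-up saxpy routine:

            #include <stddef.h>

            /* The compiler emits an AVX2 clone, a baseline clone, and an ifunc
             * resolver that picks one at load time -- essentially the mechanism
             * glibc uses for its optimized memcpy/strlen variants. */
            __attribute__((target_clones("avx2", "default")))
            void saxpy(float *y, const float *x, float a, size_t n)
            {
                for (size_t i = 0; i < n; i++)
                    y[i] += a * x[i];
            }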

            • #7
              Originally posted by ms178 View Post
              Yes, since Bulldozer.
              You should distinguish between AVX and AVX2. The article is about AVX2, which adds packed-integer support and a few other things (a tiny intrinsics example is at the end of this post).

              https://en.wikipedia.org/wiki/Advanc...r_Extensions_2


              It seems Jaguar (?), Puma, Bulldozer, Piledriver, and Steamroller all supported AVX, but not AVX2:

              https://en.wikipedia.org/wiki/Advanc...#CPUs_with_AVX


              The only pre-Zen uArch from AMD to support AVX2 appears to be Excavator:

              https://en.wikipedia.org/wiki/Advanc...CPUs_with_AVX2
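
              To make the packed-integer point concrete with intrinsics: the 256-bit float add below only needs AVX, while the 256-bit integer add needs AVX2. A rough sketch; compile with -mavx2:

              #include <immintrin.h>

              /* 256-bit float add: available with plain AVX. */
              __m256 add_f32x8(__m256 a, __m256 b)
              {
                  return _mm256_add_ps(a, b);
              }

              /* 256-bit packed-integer add: requires AVX2. */
              __m256i add_i32x8(__m256i a, __m256i b)
              {
                  return _mm256_add_epi32(a, b);
              }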

              • #8
                AVX is basically SSE4.2 re-encoded with the VEX prefix (non-destructive three-operand form, much more efficient), plus 256-bit registers that initially only cover floating point; packed-integer ops stay at 128 bits until AVX2.
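
                The encoding difference shows up in the compiler output for the very same 128-bit intrinsic; the instructions in the comment are what GCC typically emits for this function:

                #include <immintrin.h>

                /* Same 128-bit add either way; only the encoding changes:
                 *   built with -msse4.2:  addps  xmm0, xmm1        (legacy, destructive 2-operand)
                 *   built with -mavx:     vaddps xmm0, xmm0, xmm1  (VEX, non-destructive 3-operand) */
                __m128 add_f32x4(__m128 a, __m128 b)
                {
                    return _mm_add_ps(a, b);
                }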

                • #9
                  I'm sure that'll help when I switch the OS to Ubuntu or Debian.

                  • #10
                    Originally posted by coder View Post
                    However, perhaps these benchmarks are aimed precisely at the audience who's making the decision about whether to equip various Chromebook models with cores that support AVX vs. those that don't.
                    That'd definitely be great! But given that Intel named their CFO as CEO, I'm not sure engineers will be listened to.

                    That's a bit harsh. Even if you only process AVX in 128-bit chunks, you still have the footprint of the 256-bit register file. So, especially in earlier cost-optimized cores, it might've added significant area.
                    All Core-based CPUs have AVX2. Intel just fuses it off to create Celeron and Pentium chips, so the die area is spent either way. Only the Atom-based chips don't have AVX(2).

                    That's market segmentation at its worst.
