Lavapipe CPU-Based Vulkan Performance Looking Good Compared To SwiftShader

  • Lavapipe CPU-Based Vulkan Performance Looking Good Compared To SwiftShader

    Phoronix: Lavapipe CPU-Based Vulkan Performance Looking Good Compared To SwiftShader

    Google's open-source SwiftShader has been supporting a software-based Vulkan implementation for some time, building off its prior OpenGL / GLES and D3D9 support. While SwiftShader's Vulkan implementation has received heavy investment and attention from Google, it turns out Mesa's Lavapipe software implementation is beginning to pull ahead...


  • #2
    FPS is not the only metric by which to measure a software rasterizer, so it's possible Google has their reasons for preferring SwiftShader. In particular if it's being used for Chrome software rendering perhaps it has better startup times or they're more confident that it doesn't have any security vulnerabilities.

    It does make the name a little strange, but it was always strange for a software rasterizer...



    • #3
      Would be nice if Google devs would start working on Lavapipe, and if Dave Airlie would refocus on getting Clover into a good enough state to be usable by apps such as Darktable, Blender, ImageMagick, etc. I know this is subjective, but I think having a working OpenCL solution in Mesa would benefit more people than CPU-based Vulkan.



      • #4
        Originally posted by zcansi View Post
        FPS is not the only metric by which to measure a software rasterizer, so it's possible Google has their reasons for preferring SwiftShader. In particular if it's being used for Chrome software rendering perhaps it has better startup times or they're more confident that it doesn't have any security vulnerabilities.

        It does make the name a little strange, but it was always strange for a software rasterizer...
        SwiftShader came first. AFAIK Lavapipe has only been a thing within the past year.

        So that would have been the primary motivation for using SwiftShader: it existed.

        Now that there is an alternative, things can get interesting. Those using SwiftShader can choose to stick with it, or consider reasons to move to Lavapipe.



        • #5
          SwiftShader was originally started by TransGaming (the guys behind WineX / Cedega).

          I think it was originally a tech to get DirectX 9 (and shaders) running fast in software. We used to use it on our Linux build servers at work whenever we had to deal with consumer trash like Unity3D.



          • #6
            Hi folks, I'm the lead developer of SwiftShader. Those are some nice numbers for Lavapipe, congrats!

            Not to take anything away from that, but I believe Lavapipe makes use of 256-bit vector instructions? SwiftShader currently only uses 128-bit, but will support wide vector instructions in due time, which would allow a closer apples-to-apples comparison.

            There are many reasons Google uses SwiftShader and not Lavapipe. SwiftShader has been a fully conformant Vulkan 1.1 implementation for a year and a half now. We've recently also completed the mandatory feature set for Vulkan 1.2. We work closely with Khronos to keep it a reference-quality implementation as the API evolves. And when used in combination with ANGLE, it's a fully conformant OpenGL ES 3.1 implementation. We also pass the Android CTS requirements, all of Chrome's graphics test suites, and there are numerous internal projects that use it. We support Linux, Windows, macOS, Chrome OS, and Fuchsia, on both x86 and ARM. We have extensive ASan, TSan, and MSan coverage, as well as various fuzzing tests, for security. When compiled to use the Subzero JIT-compiler instead of LLVM, SwiftShader is small enough to ship as part of the Chrome installer, providing WebGL (and soon WebGPU) fallback support.
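            To put the 128-bit vs 256-bit difference in perspective: with 32-bit floats, a 128-bit vector holds 4 lanes per instruction while a 256-bit (AVX2-width) vector holds 8, so wider vectors give roughly a 2x ceiling on shader ALU throughput before memory bandwidth and other effects kick in. A back-of-the-envelope sketch of that lane math (plain arithmetic, not code from either project):

```python
# Rough lane math for SIMD shader throughput. Assumes 32-bit float
# lanes; real-world speedups also depend on memory bandwidth,
# gather/scatter costs, and AVX-related clock behaviour.
def lanes(vector_bits: int, element_bits: int = 32) -> int:
    """Elements processed per SIMD instruction at a given vector width."""
    return vector_bits // element_bits

sse_width_lanes = lanes(128)   # 128-bit vectors: 4 floats per instruction
avx2_width_lanes = lanes(256)  # 256-bit vectors: 8 floats per instruction
ceiling = avx2_width_lanes / sse_width_lanes  # theoretical ALU speedup: 2.0
print(sse_width_lanes, avx2_width_lanes, ceiling)
```

This is only a ceiling, which is why a wider-vector backend explains much but not necessarily all of a benchmark delta.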



            • #7
              Originally posted by xxmitsu View Post
              Would be nice if Google devs would start working on lavapipe
              That's an interesting thought, but I'm not sure (1) what the incentive would be for Google to do so when we've already invested many years in SwiftShader, and (2) what would make others interested in enhancing Lavapipe instead of enhancing SwiftShader?



              • #8
                Originally posted by xxmitsu View Post
                I know this is subjective, but I think having a working OpenCL solution in Mesa would benefit more people than CPU-based Vulkan.
                I'd like to politely disagree. This is not just about giving users who don't have a Vulkan-capable GPU a fallback so they can run Vulkan apps.

                There is a very important use case for CPU-based Vulkan that many people might not have thought of: Continuous Integration (for the uninitiated: in software development, this means having a server that automatically runs tests and stuff, to ensure everything works, with every new contribution / change coming into the git repository).

                Having a CPU Vulkan implementation lets people run tests that require Vulkan on CI servers like GitHub Actions. This is *HUGE* for developers of open-source game engines and similar projects. It makes it possible to automatically verify and test rendering code, and would be an incredible productivity boost once adopted.

                The benefits throughout the FOSS ecosystem for something like this can potentially be enormous!

                Just recently we were discussing this in the development of the Bevy game engine (a new-ish game engine project in Rust). There were some major contributions to the renderer recently, and they ended up introducing a bunch of bugs that then had to be found and fixed. CI could not detect the bugs automatically (the way it does for every other part of the engine), as there can be no automated graphics tests without some sort of Vulkan implementation. The servers that run the tests don't have a GPU. Something like Lavapipe could solve this problem.
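                As a concrete illustration of that workflow, a headless Vulkan test job on a GitHub Actions Ubuntu runner could be sketched roughly as below. The package names and ICD path are assumptions that vary by distro and Mesa version (and the runner's Mesa build must actually ship Lavapipe), and `run-render-tests.sh` is a hypothetical stand-in for a project's own test entry point:

```yaml
# Hypothetical CI job: run Vulkan tests on a GPU-less runner by
# pointing the Vulkan loader at Mesa's Lavapipe ICD.
name: render-tests
on: [push, pull_request]

jobs:
  vulkan-ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install Mesa's software Vulkan driver
        run: sudo apt-get update && sudo apt-get install -y mesa-vulkan-drivers vulkan-tools
      - name: Run rendering tests on Lavapipe
        env:
          # Force the loader to use Lavapipe; the path may differ per distro.
          VK_ICD_FILENAMES: /usr/share/vulkan/icd.d/lvp_icd.x86_64.json
        run: |
          vulkaninfo | head -n 30  # sanity check: device should report as llvmpipe
          ./run-render-tests.sh    # hypothetical project test entry point
```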



                • #9
                  Originally posted by c0d1f1ed View Post
                  Hi folks, I'm the lead developer of SwiftShader. Those are some nice numbers for Lavapipe, congrats!

                  Not to take anything away from that, but I believe Lavapipe makes use of 256-bit vector instructions? SwiftShader currently only uses 128-bit, but will support wide vector instructions in due time, which would allow a closer apples-to-apples comparison.

                  There are many reasons Google uses SwiftShader and not Lavapipe. SwiftShader has been a fully conformant Vulkan 1.1 implementation for a year and a half now. We've recently also completed the mandatory feature set for Vulkan 1.2. We work closely with Khronos to keep it a reference-quality implementation as the API evolves. And when used in combination with ANGLE, it's a fully conformant OpenGL ES 3.1 implementation. We also pass the Android CTS requirements, all of Chrome's graphics test suites, and there are numerous internal projects that use it. We support Linux, Windows, macOS, Chrome OS, and Fuchsia, on both x86 and ARM. We have extensive ASan, TSan, and MSan coverage, as well as various fuzzing tests, for security. When compiled to use the Subzero JIT-compiler instead of LLVM, SwiftShader is small enough to ship as part of the Chrome installer, providing WebGL (and soon WebGPU) fallback support.
                  Oh interesting, I hadn't realised SwiftShader wasn't able to use AVX2 yet; that would definitely explain a lot of the speed difference. That is a lot of CPU time to leave on the table, at least from a CI perspective (granted, llvmpipe has supported it for so long that I hadn't considered it a missing piece).

                  I just had always assumed the investment in SS was due to it being faster or better than llvmpipe, this was mostly me clarifying that assumption to be incorrect. All of the stuff you list above could easily be applied to llvmpipe/lavapipe with a small investment of time. The big advantage of having code in the Mesa project is that you get to share with others. Sharing the compiler, IR optimisations, WSI code, and common Vulkan code/structure is why you get more return on the investment.

                  So llvmpipe is GL 4.5 compliant, and given a small effort could easily be GLES 3.2 compliant, and with a smaller effort would be Vulkan 1.1 compliant now. I'm just trying not to push things to compliance early for Vulkan, as I can probably get baseline VK 1.2 done. It will also at some point provide an OpenCL 3.0 stack (it mostly does already).

                  The main reason for having a Mesa-based Vulkan swrast is to enable things like CI on Zink, and to be an easy-to-install default fallback on Linux where distros don't want to ship another graphics stack for one corner case.

                  I totally understand SwiftShader has reasons for continued existence, but I was genuinely stunned it didn't end up wiping the floor with Lavapipe. If I had to give one reason not to keep working on SS myself, it would be the daunting effort we'd need to implement transform feedback; supporting that would absolutely suck to write.




                  • #10
                    Originally posted by airlied View Post
                    I just had always assumed the investment in SS was due to it being faster or better than llvmpipe, this was mostly me clarifying that assumption to be incorrect.
                    Hi Dave, thanks for joining the conversation! Always great to have other CPU rendering enthusiasts.

                    "Better" is a very subjective term here. Years ago, Chrome used to actually use Mesa LLVMpipe for its GPU-less CI tests. But it ran into flakiness and other issues, which nobody in the team knew how to fix properly. Updating the binary, for all platforms, was painful. SwiftShader addressed these serious concerns by having in-house expertise to get every test passing reliably, and integrating a build from source so every change would immediately take effect.

                    It's a very similar story for Android, as well as for Google's internal repository. Integration and in-house expertise make SwiftShader a far "better" choice for Google. Lavapipe can likewise be much better for other uses, without having to declare one or the other significantly better overall.
                    All of the stuff you list above could easily be applied to llvmpipe/lavapipe with a small investment of time.
                    This might be true for just getting the features in place, but the bulk of our time is actually spent on integration into various projects and analyzing bugs that could be anywhere (in SwiftShader, in platform code, in frameworks, in engines, in applications). Being the lowest layer of these graphics stacks can be a thankless job, because we get blamed every time a pixel doesn't look quite as expected.

                    So even if Lavapipe had all of the same features, or more, it's the service that truly counts. Rendering performance hasn't been a huge priority for us yet because CI tests spend most of their time compiling the source code and compiling shaders. Personally I'd love to spend some attention on it though, to unlock other use cases.
                    The big advantage of having code in Mesa project is you get to share with others. Sharing the compiler, IR optimisations, WSI code, Vulkan common code/structure, it's why you can get more return from the investment.
                    This architectural aspect has always fascinated me. I think it's a double-edged sword. On the one hand you can get some features in place faster because you're sharing code with the GPU drivers, but on the other hand I think it complicates debugging and probably makes it hard to redesign things in a way that makes more sense for a CPU-based implementation. Case in point: the FAQ on Lavapipe's architecture isn't exactly a love song about how to implement Vulkan.

                    For SwiftShader we instead chose to become Vulkan-only, meaning we're treating Vulkan as a HAL and other APIs are implemented on top of it. The advantage is that there are practically no extra abstractions inside of SwiftShader. You can see every Vulkan concept have a direct effect on the lower-level code so anyone with knowledge of Vulkan can understand what's going on. We also process SPIR-V directly instead of going through another shader IR. Note that we do benefit from the optimizations performed by the SPIRV-Tools library (many of its developers are Googlers).

                    I think this argument of sharing code with Mesa made more sense in the heyday of 'desktop' OpenGL. We never got around to supporting it in SwiftShader, because of its vast API surface and numerous extensions (and little interest from Google products). SwiftShader's Vulkan now backs a conformant OpenGL ES 3.1 implementation through ANGLE, which benefits both projects and all the various platforms that make use of it. We'll also layer Dawn on top of it (for WebGPU, but there's also growing demand for it as a native API). So I think this is a forward-looking architecture that will pay dividends for years to come.
                    So llvmpipe is GL 4.5 compliant, and given a small effort could easily be GLES 3.2 compliant
                    Obviously some might care about that a lot, but we've observed that most cross-platform graphics engines support Vulkan now, and most of the revenue-generating applications that don't plan on upgrading to Vulkan directly or indirectly stick to OpenGL ES 3.1 or less.
                    The main reason for having a Mesa-based Vulkan swrast is to enable things like CI on Zink
                    While I'm happy to see Zink replace the need for maintaining a custom OpenGL implementation (in the same way that ANGLE does for OpenGL ES), it's a little unclear to me where the value is. ANGLE has experimental desktop OpenGL support too, but it's not getting much attention because it's mostly for one-off applications that haven't adopted Vulkan yet, but these are dying out quickly. OpenGL ES however is still surprisingly popular, as a friendlier API than Vulkan, and will require long-term support for WebGL (which has gained popularity since the Flash deprecation).
                    and to be an easy-to-install default fallback on Linux where distros don't want to ship another graphics stack for one corner case.
                    SwiftShader is available as a package for some distros. I don't encourage it much though, because it tends to lead to bug reports for things we've already fixed in our main branch. Also, while personally I'd love to help support every "corner case" scenario, there tends to be more value in picking our battles and supporting the actual needs of projects like Chrome and Android which affect billions of people.
                    I totally understand SwiftShader has reasons for continued existence, but I was genuinely stunned it didn't end up wiping the floor with Lavapipe.
                    I'm surprised you're surprised. This makes me wonder... Do you know of substantial inefficiencies in Lavapipe that could be fixed and result in big speedups? I know SwiftShader is no longer as fast as it could be on modern CPUs, but it would be interesting to know the "lightspeed" scenario, i.e. with unlimited effort how fast could things get for a given amount of CPU power. There's nothing more motivating than seeing someone shatter the limits of what you deemed possible. The 128-bit / 256-bit difference pretty much explains the delta between SwiftShader and Lavapipe currently, but isn't very revolutionary to be frank. If there's a lot more headroom for Lavapipe, that would be exciting for CPU-based rendering as a whole.
                    If I had to give one reason not to keep working on SS myself, it would be the daunting effort we'd need to implement transform feedback; supporting that would absolutely suck to write.
                    Meh, that's why we have interns.

                    All kidding aside, transform feedback is another one of those features that is going the way of the dodo, superseded by compute shaders and potentially mesh shaders. So we're trying hard not to spend precious time on it. Instead, I think we're in a good position to start implementing features that are part of the next Vulkan version, and to improve performance where it matters most to our user base.

