
NVIDIA Posts Full PhysX SDK Source Code


  • NVIDIA Posts Full PhysX SDK Source Code

    Phoronix: NVIDIA Posts Full PhysX SDK Source Code

    This past week, as part of Epic Games making UE4 free to developers, Epic managed to get NVIDIA to open up some of the PhysX source code, since Unreal Engine 4 depends on PhysX for its physics handling. NVIDIA this past week ultimately opened up its entire PhysX SDK to everyone...


  • #2
    So this means PhysX can now be supported on AMD and Intel?

    • #3
      Originally posted by Dukenukemx View Post
      So this means PhysX can now be supported on AMD and Intel?
      Probably, with quite a bit of work. I would imagine the best approach for now would be to port it to OpenCL, so that PhysX could then sit as a library on top of any GPU.

      Why is NVIDIA doing this? Because Vulkan/DX12 will make PhysX somewhat obsolete, so they are hoping that as open source it might increase adoption.

      The role PhysX fulfilled until now was to allow game devs to use ready-made accelerated compute functionality without having to use CUDA/OpenCL. But, with Vulkan/DX12, devs can do it much more easily on their own. In the upcoming age of Vulkan/DX12, I expect we're going to see a whole lot of physics libraries out there, open source and "closed" (shipped as compiled SPIR-V code). So, instead of an API, per se, it would be physics code that you can just throw into your game (and modify to suit your needs).
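      To make that concrete, here is a minimal, hypothetical sketch (plain C++, not actual PhysX or Vulkan/DX12 API code) of the kind of self-contained physics routine such a library could ship; the Body struct and integrate() function are made up for illustration, and the same per-body loop is what would map onto a compute-shader dispatch (or an OpenCL kernel) with one thread per body.

      #include <vector>

      // Hypothetical per-body state; the names are illustrative, not PhysX's.
      struct Body {
          float px, py, pz;   // position
          float vx, vy, vz;   // velocity
          float invMass;      // 0 for static geometry
      };

      // One explicit-Euler step under gravity. Every body is independent of the
      // others here, so this loop translates directly into a GPU compute kernel
      // with one invocation per body.
      void integrate(std::vector<Body>& bodies, float dt) {
          const float g = -9.81f;
          for (Body& b : bodies) {
              if (b.invMass == 0.0f) continue;  // static bodies don't move
              b.vy += g * dt;                   // accumulate gravity
              b.px += b.vx * dt;
              b.py += b.vy * dt;
              b.pz += b.vz * dt;
          }
      }

      int main() {
          std::vector<Body> bodies{{0, 10, 0, 0, 0, 0, 1.0f}};
          for (int step = 0; step < 60; ++step)
              integrate(bodies, 1.0f / 60.0f);  // one simulated second at 60 Hz
          return 0;
      }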

      Vulkan/DX12 is the biggest revolution in 3D programming since shaders were invented.

      • #4
        Originally posted by Dukenukemx View Post
        So this means PhysX can now be supported on AMD and Intel?
        Most probably not. Their licence might block using their code for it. Plus, it is just the SDK; nobody can stop anyone from implementing their API for Intel and AMD, but you can't use their implementation. And from a quick look at their source base, it seems they still have some GPU-related code hidden: the raw GPU-access code is only headers (.h), and I can't find any implementation.

        Anyway, from their EULA:

        You are required to notify NVIDIA prior to use of the NVIDIA GameWorks Licensed Software in the development of any commercial Game, Expansion Pack or Demo.
        I have to notify NVIDIA before I even start development? This seems highly illogical. They should have made that clear before publishing or sharing.

        • #5
          About time. If nvidia really wanted to accelerate the adoption of this, they'd have done it when UE3 was first released. On the other hand, since the last-gen consoles didn't support physx, I guess it makes sense why many AAA titles never supported it.

          Originally posted by Dukenukemx View Post
          So this means PhysX can now be supported on AMD and Intel?
          Theoretically, yes, but it would likely run like crap since neither AMD nor Intel have CUDA cores, so they'd have to be emulated. If someone could use these GPUs to emulate a CUDA core, that'd be fantastic.

          To me, intel's IGP is the perfect fit for physx. Physx isn't very memory intensive and, from what I gather, it doesn't require a high core frequency either. It just needs to be very parallel. For most gaming systems, intel IGPs are utterly useless and a waste of money. Ironic, since the IGP in an i7 is significantly better than the one in an i3.

          • #6
            Originally posted by schmidtbag View Post
            About time. If nvidia really wanted to accelerate the adoption of this, they'd have done it when UE3 was first released. On the other hand, since the last-gen consoles didn't support physx, I guess it makes sense why many AAA titles never supported it.


            Theoretically, yes, but it would likely run like crap since neither AMD nor Intel have CUDA cores, so they'd have to be emulated. If someone could use these GPUs to emulate a CUDA core, that'd be fantastic.

            To me, intel's IGP is the perfect fit for physx. Physx isn't very memory intensive and, from what I gather, it doesn't require a high core frequency either. It just needs to be very parallel. For most gaming systems, intel IGPs are utterly useless and a waste of money. Ironic, since the IGP in an i7 is significantly better than the one in an i3.
            I'm not so sure. When Ageia created its PhysX processor, it was basically an in-order, RISC-like pipeline. Then later, when nVidia bought Ageia, the only reason they could emulate PhysX is because their GPU is also basically an in-order, RISC-like pipeline architecture. The other issue is that PhysX is basically serial; in other words, you can't get much parallelism out of it. The most you can hope to do on a GPU is fill one or maybe two pipelines. Also, it's not an FP load; it's almost all integer processing, which makes it less than ideal for a GPU.

            • #7
              Full SDK source code for the CPU path of PhysX, not the GPU path. So it's direct competition for the Havok physics middleware, which is closed and $$$.

              • #8
                Originally posted by duby229 View Post
                I'm not so sure. When Ageia created its PhysX processor, it was basically an in-order, RISC-like pipeline. Then later, when nVidia bought Ageia, the only reason they could emulate PhysX is because their GPU is also basically an in-order, RISC-like pipeline architecture.
                Right, but nvidia got such a pipeline architecture right around the time they started doing CUDA. I'm not sure how Intel's or AMD's architectures compare to nvidia's, but as long as it's RISC, it shouldn't be too inefficient to emulate something like physx instructions.

                The other issue is that PhysX is basically serial; in other words, you can't get much parallelism out of it. The most you can hope to do on a GPU is fill one or maybe two pipelines. Also, it's not an FP load; it's almost all integer processing, which makes it less than ideal for a GPU.
                If that were the case, then why does physx run so badly on the CPU? I really don't see how it'd be any less parallel than 3D rendering. For example, if you're doing cloth physics, you're not only drawing however many thousands of triangles to make it look like cloth, but every vertex has its own math determining where it should move based on the physics rules applied. Serializing that is incredibly inefficient.
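                As a rough illustration of why that per-vertex work parallelizes, here is a hypothetical C++ sketch (not PhysX's actual data layout or API) of a Verlet-style cloth integration step; every vertex is updated from its own state only, so it maps onto one GPU thread per vertex, and it's the distance-constraint relaxation afterwards, not this step, that introduces dependencies.

                #include <vector>

                // Illustrative cloth vertex state; the names are made up for this sketch.
                struct ClothVertex {
                    float x, y, z;      // current position
                    float px, py, pz;   // previous position (for Verlet integration)
                    bool  pinned;       // attachment points never move
                };

                // One Verlet step under gravity. No vertex reads another vertex's data,
                // so the loop is embarrassingly parallel (one GPU thread per vertex).
                void integrateCloth(std::vector<ClothVertex>& verts, float dt) {
                    const float gy = -9.81f * dt * dt;   // gravity term for Verlet
                    for (ClothVertex& v : verts) {
                        if (v.pinned) continue;
                        const float nx = 2.0f * v.x - v.px;
                        const float ny = 2.0f * v.y - v.py + gy;
                        const float nz = 2.0f * v.z - v.pz;
                        v.px = v.x; v.py = v.y; v.pz = v.z;   // keep old position
                        v.x = nx;   v.y = ny;   v.z = nz;     // advance
                    }
                }

                int main() {
                    ClothVertex v0{0, 1, 0, 0, 1, 0, false};
                    std::vector<ClothVertex> cloth(1024, v0);
                    integrateCloth(cloth, 1.0f / 60.0f);
                    return 0;
                }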

                According to Wikipedia, the Ageia PPU consisted of a multi-core MIPS processor which in turn controlled many more SIMD units. Apparently Ageia never disclosed how many cores and SIMD units it used, but clearly it isn't very serial at all.

                • #9
                  Originally posted by schmidtbag View Post
                  Right, but nvidia got such a pipeline architecture right around the time they started doing CUDA. I'm not sure how Intel's or AMD's architectures compare to nvidia's, but as long as it's RISC, it shouldn't be too inefficient to emulate something like physx instructions.


                  If that were the case, then why does physx run so badly on the CPU? I really don't see how it'd be any less parallel than 3D rendering. For example, if you're doing cloth physics, you're not only drawing however many thousands of triangles to make it look like cloth, but every vertex has its own math determining where it should move based on the physics rules applied. Serializing that is incredibly inefficient.

                  According to Wikipedia, the Ageia PPU consisted of a multi-core MIPS processor which in turn controlled many more SIMD units. Apparently Ageia never disclosed how many cores and SIMD units it used, but clearly it isn't very serial at all.
                  I don't know this for a fact, but I think GCN could do it. I'm not so sure about Intel's architecture. And I don't think it could be rewritten in OpenCL either, because physics calculations are pretty much serial. As a game engine designer, I'm sure physics could be multiprocessed, of course; you'd just have to write your physics code with as few inline dependencies as possible, so the work could be split into different workloads.
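                  For what it's worth, a common way to express that splitting is simulation "islands": groups of bodies that share no joints or contacts with any other group, so each group can be solved independently. The sketch below is a hypothetical, stripped-down C++ illustration of that idea (the Island type and solveIsland() are made up, not PhysX code):

                  #include <functional>
                  #include <thread>
                  #include <vector>

                  // A group of mutually interacting bodies; different islands share nothing.
                  struct Island {
                      std::vector<int> bodyIds;   // indices into the scene's body array
                  };

                  // Placeholder: a real engine would run its contact/joint solver here,
                  // touching only the bodies that belong to this island.
                  void solveIsland(const Island& island, float dt) {
                      (void)island;
                      (void)dt;
                  }

                  // The fewer cross-dependencies the physics code has, the more islands
                  // fall out of the scene, and the better the step scales across cores.
                  void stepScene(const std::vector<Island>& islands, float dt) {
                      std::vector<std::thread> workers;
                      workers.reserve(islands.size());
                      for (const Island& island : islands)
                          workers.emplace_back(solveIsland, std::cref(island), dt);
                      for (std::thread& t : workers)
                          t.join();
                  }

                  int main() {
                      std::vector<Island> islands(4);   // e.g. four independent piles of boxes
                      stepScene(islands, 1.0f / 60.0f);
                      return 0;
                  }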

                  • #10
                    Wait, if PhysX isn't very parallelisable, then why would it be faster on the GPU?
