Qualcomm A4xx Gallium3D Support Takes Shape

  • Qualcomm A4xx Gallium3D Support Takes Shape

    Phoronix: Qualcomm A4xx Gallium3D Support Takes Shape

    For those with Qualcomm Adreno A4xx series graphics hardware, the open-source 3D support is coming along nicely...

  • #2
    When I can download a fedora-arm.iso that just boots from a tablet, I'll be happy to use it. Or something similarly easy/general.

    When I look around for tablets with that chip at 8" and larger, the only one I see with this Adreno 400 chip is the Amazon tablet. Even the new Intel Bay Trail devices have problems with 32-bit UEFI, which most distros dropped because the only vendor that used it in recent years was Apple.

    So it would be nice if all these super-duper drivers ended up in usable, installable systems that work more or less perfectly.

    Comment


    • #3
      Rob, are you going to be able to reuse the hw binning support from the earlier series (if you implemented any?) or will a4xx require entirely new code to be written?

      Comment


      • #4
        Originally posted by blackiwid View Post
        When I can download a fedora-arm.iso that just boots from a tablet, I'll be happy to use it. Or something similarly easy/general.

        When I look around for tablets with that chip at 8" and larger, the only one I see with this Adreno 400 chip is the Amazon tablet. Even the new Intel Bay Trail devices have problems with 32-bit UEFI, which most distros dropped because the only vendor that used it in recent years was Apple.

        So it would be nice if all these super-duper drivers ended up in usable, installable systems that work more or less perfectly.
        Nothing can be done until ARM has standardized connections and BIOS/UEFI-like functionality.

        Comment


        • #5
          Originally posted by Ancurio View Post
          Rob, are you going to be able to reuse the hw binning support from the earlier series (if you implemented any?) or will a4xx require entirely new code to be written?
          Some of the logic will be shared, but all the registers have shuffled around, so I'll need to sort out again which bits to set in which registers for the binning pass.

          Should be easier to get going than it was the first time on a3xx, since some of the common bits are already in place (like the same basic shader ISA, so we already have generation of binning-pass shaders). But there is work to do.

          There are quite a number of features in that category. I have all the test progs from a2xx and a3xx to run against the blob driver, but it just takes time to work through the traces. And in general this GPU tends to have plenty of interdependent register settings, so changing one thing in one group of registers without figuring out the equivalent change in another group results in a lockup. That sort of thing takes some time to work through. But at least now I have an idea of what I'm looking for ;-)
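          To give a rough (totally made-up, not actual freedreno code) idea of the kind of split involved: the generation-independent logic decides when the binning state gets emitted, while small per-generation hooks decide which registers actually get written. The struct and hook names below are invented for illustration.

          Code:
          /* Illustrative sketch only, not freedreno code: shared binning logic
           * calls per-generation hooks, since a3xx and a4xx carry the same
           * information in different (shuffled) registers. */
          #include <stdint.h>
          #include <stdio.h>

          struct cmdstream;   /* stand-in for the real command-stream builder */

          struct bin_emit_funcs {
              /* hypothetical hooks, one implementation per GPU generation */
              void (*emit_bin_size)(struct cmdstream *cs, uint32_t w, uint32_t h);
          };

          static void a3xx_emit_bin_size(struct cmdstream *cs, uint32_t w, uint32_t h)
          {
              (void)cs;
              printf("a3xx: bin size %ux%u via the a3xx register group\n", w, h);
          }

          static void a4xx_emit_bin_size(struct cmdstream *cs, uint32_t w, uint32_t h)
          {
              (void)cs;
              printf("a4xx: same information, relocated registers: %ux%u\n", w, h);
          }

          static const struct bin_emit_funcs a3xx_funcs = { a3xx_emit_bin_size };
          static const struct bin_emit_funcs a4xx_funcs = { a4xx_emit_bin_size };

          /* generation-independent part: decides when binning state is emitted */
          static void emit_binning_pass(const struct bin_emit_funcs *f, struct cmdstream *cs)
          {
              f->emit_bin_size(cs, 32, 32);
              /* ... binning-pass shader, visibility stream setup, etc ... */
          }

          int main(void)
          {
              emit_binning_pass(&a3xx_funcs, NULL);
              emit_binning_pass(&a4xx_funcs, NULL);
              return 0;
          }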

          Comment


          • #6
            Reverse engineering?

            Do you think having someone do binary reverse engineering of the blob drivers to come up with a hardware API spec would help things along?

            Comment


            • #7
              Originally posted by LLStarks View Post
              Nothing can be done until ARM has standardized connections and BIOS/UEFI-like functionality.
              Uhm... solving this issue is actually one of the goals of device tree. Rather than having your crazed device description files in arch/arm/mach-whatever/device.c, you are supposed to end up with a generic kernel image that loads the board-specific device tree from the device's memory. The intention is to allow the kernel to remain "the same" rather than having to patch and recompile it for every whack-job device out there. The same kernel should be able to pick up the device specifications and load everything appropriately.
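              As a rough illustration of what that buys you (a sketch, not a real driver, with a made-up compatible string and property name): the same generic kernel binary just matches a node from whatever device tree the bootloader handed over and reads the board-specific values out of it, rather than hard-coding them in a board file.

              Code:
              /* Illustrative sketch, not a real driver: the generic kernel matches a
               * node from whatever device tree the bootloader passed in and reads its
               * board-specific parameters from there, instead of from a hard-coded
               * arch/arm/mach-whatever board file. */
              #include <linux/module.h>
              #include <linux/platform_device.h>
              #include <linux/of.h>

              static int example_probe(struct platform_device *pdev)
              {
                  u32 clk_rate = 0;

                  /* "example,clock-rate" is a made-up property name */
                  of_property_read_u32(pdev->dev.of_node, "example,clock-rate", &clk_rate);
                  dev_info(&pdev->dev, "probed from DT, clock-rate=%u\n", clk_rate);
                  return 0;
              }

              /* made-up compatible string; the matching node lives in the board's DTB */
              static const struct of_device_id example_of_match[] = {
                  { .compatible = "example,some-device" },
                  { }
              };
              MODULE_DEVICE_TABLE(of, example_of_match);

              static struct platform_driver example_driver = {
                  .probe = example_probe,
                  .driver = {
                      .name = "example-dt-device",
                      .of_match_table = example_of_match,
                  },
              };
              module_platform_driver(example_driver);

              MODULE_LICENSE("GPL");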

              Comment


              • #8
                Originally posted by BradN View Post
                Do you think having someone do binary reverse engineering of the blob drivers to come up with a hardware API spec would help things along?
                Not sure if that is the easiest way to figure out all the registers. It's not like you'd have comments or register bitfield #define's in decompiled code.

                In general, I don't particularly want to know how the blob driver works, just what register bits it sets in which cases. For that I have a bunch of simple GLES test programs (freedreno.git/tests-3d mostly, although some things I have to figure out from 2d/libC2D2 or OpenCL). These I can run against the blob driver, tweak some GL parameters, run again, and compare. Help with writing tests for some of the new features (additional shader stages, etc.) would certainly be useful. That, plus help analysing the captured cmdstream traces and implementing features, seems more productive.
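                Something in the spirit of the sketch below (stripped down for illustration, not one of the actual freedreno tests): run it twice against the blob driver, once with a single piece of state toggled, capture the cmdstream from both runs, and diff them to see which register bits moved.

                Code:
                /* Minimal differential test, illustrative only: render once with and
                 * once without a single piece of GL state (dither here), so the two
                 * captured command streams differ in as few register writes as possible. */
                #include <stdio.h>
                #include <string.h>
                #include <EGL/egl.h>
                #include <GLES2/gl2.h>

                int main(int argc, char **argv)
                {
                    /* EGL setup against a small pbuffer surface */
                    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
                    eglInitialize(dpy, NULL, NULL);

                    static const EGLint cfg_attrs[] = {
                        EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
                        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
                        EGL_NONE
                    };
                    EGLConfig cfg;
                    EGLint ncfg;
                    eglChooseConfig(dpy, cfg_attrs, &cfg, 1, &ncfg);

                    static const EGLint surf_attrs[] = { EGL_WIDTH, 64, EGL_HEIGHT, 64, EGL_NONE };
                    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, surf_attrs);

                    static const EGLint ctx_attrs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
                    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attrs);
                    eglMakeCurrent(dpy, surf, surf, ctx);

                    /* the one knob toggled between the two captured runs */
                    int dither = (argc > 1 && strcmp(argv[1], "dither") == 0);
                    if (dither)
                        glEnable(GL_DITHER);
                    else
                        glDisable(GL_DITHER);

                    glClearColor(0.25f, 0.5f, 0.75f, 1.0f);
                    glClear(GL_COLOR_BUFFER_BIT);
                    glFinish();

                    printf("rendered with dither %s\n", dither ? "enabled" : "disabled");

                    eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
                    eglTerminate(dpy);
                    return 0;
                }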

                Comment
