A New Open-Source GPU Comes About

  • A New Open-Source GPU Comes About

    Phoronix: A New Open-Source GPU Comes About

    After writing last month that the open-source graphics card is dead and why the open-source graphics card failed, this weekend I received an email that begins with: "Open Graphics! Here we go again! As our master's thesis work we have implemented an open-source graphics accelerator."...

  • #2
    Aha, I don't think this will really do much for open-source graphics. It appears to be a basic fixed-point, non-programmable pipeline, if I'm not missing a lot by skimming the code. An actually good set of floating point units would probably help out far more.
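
    To be concrete about "fixed-point": here is a minimal sketch, in C, of the 16.16 arithmetic such a pipeline would typically use (illustrative only, not taken from the thesis code):

    #include <stdint.h>
    #include <stdio.h>

    typedef int32_t fx16;               /* 16.16 fixed point: 1.0 == 1 << 16 */
    #define FX_ONE (1 << 16)

    static fx16 fx_from_float(float f) { return (fx16)(f * FX_ONE); }
    static float fx_to_float(fx16 x)   { return (float)x / FX_ONE; }

    /* Multiply in 64 bits so the fractional bits survive, then shift back. */
    static fx16 fx_mul(fx16 a, fx16 b) { return (fx16)(((int64_t)a * b) >> 16); }

    int main(void) {
        fx16 a = fx_from_float(1.5f), b = fx_from_float(0.25f);
        printf("%f\n", fx_to_float(fx_mul(a, b))); /* prints 0.375000 */
        return 0;
    }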

    Kudos for finishing the master's thesis and making something neat, though.

    • #3
      "An actually good set of floating point units would probably help out far more."
      Very true, but that would be a lot more work... you do what you can with the time you have. This is still better than nothing.

      • #4
        So they've implemented a GPU using a CPU.... isn't that essentially what you get by mixing LLVMpipe + GMA500?

        • #5
          Originally posted by droidhacker View Post
          So they've implemented a GPU using a CPU.... isn't that essentially what you get by mixing LLVMpipe + GMA500?
          The pipeline is implemented in hardware; also in the hardware there is a CPU, the OpenRISC processor, which can send instructions to the graphics accelerator. If it's still unclear, please read this: http://en.wikipedia.org/wiki/Field-p...ble_gate_array
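
          As a rough illustration of that split, here is a hypothetical C sketch of the soft CPU queuing work for a memory-mapped accelerator; the addresses, register layout, and opcode below are invented for illustration and are not from the thesis:

          #include <stdint.h>

          /* Hypothetical MMIO map on a 32-bit soft CPU; real values
             would come from the thesis design, not from here. */
          #define GPU_BASE     0x90000000u
          #define GPU_STATUS   (*(volatile uint32_t *)(GPU_BASE + 0x0))
          #define GPU_ARG      (*(volatile uint32_t *)(GPU_BASE + 0x4))
          #define GPU_CMD      (*(volatile uint32_t *)(GPU_BASE + 0x8))
          #define GPU_BUSY     0x1u
          #define CMD_DRAW_TRI 0x2u       /* invented opcode */

          /* The OpenRISC core hands work to the hardware pipeline. */
          static void gpu_draw_triangle(uint32_t vertex_addr) {
              while (GPU_STATUS & GPU_BUSY)  /* wait until the pipeline is idle */
                  ;
              GPU_ARG = vertex_addr;         /* address of the vertex data      */
              GPU_CMD = CMD_DRAW_TRI;        /* kick the fixed-function pipeline */
          }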

          • #6
            They've implemented a GPU using an FPGA...

            • #7
              Hmmm, I think this GPU will not be getting me 100 FPS in Crysis any time soon.

              • #8
                Originally posted by hoohoo View Post
                Hmmm, I think this GPU will not be getting me 100 FPS in Crysis any time soon.
                They just need some good driver optimizations.

                case "crysis.exe":
                //display screenshots
                break;

                • #9
                  Originally posted by smitty3268 View Post
                  They just need some good driver optimizations.

                  case "crysis.exe":
                  //display screenshots
                  break;
                  That's it: prerendered scenes! FPGAs are just a sea of lookup tables, right? This one will only be a little bigger...
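
                  Joking aside, that is basically how FPGAs work: a 4-input LUT is just a 16-entry truth table, so any 4-input boolean function fits in 16 bits. A toy C model of one LUT (a software analogy, not the actual FPGA primitive):

                  #include <stdint.h>
                  #include <stdio.h>

                  /* Evaluate a 4-input LUT: 'table' stores one output bit
                     per input combination, like an FPGA LUT4. */
                  static int lut4(uint16_t table, int a, int b, int c, int d) {
                      int index = (d << 3) | (c << 2) | (b << 1) | a;
                      return (table >> index) & 1;
                  }

                  int main(void) {
                      uint16_t and4 = 0x8000; /* only the all-ones input yields 1 */
                      printf("%d %d\n", lut4(and4, 1, 1, 1, 1),
                                        lut4(and4, 1, 0, 1, 1)); /* prints: 1 0 */
                      return 0;
                  }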

                  • #10
                    This is close to my proposal, but it has some mistakes.

                    1) We already have a good software rasterizer (llvmpipe); just add some 3D instructions to OpenRISC (like MIPS-3D) to accelerate the rasterizer, and write an LLVM backend for it. Don't create ASIC circuits: they are difficult even for companies like Nvidia, who want to phase them out by adding more 3D instructions to the shaders (general cores).

                    2) Use big.LITTLE-style processing: 2-4 cores for general computing at 7 DMIPS (20M transistors) and 32/64/128 mini cores for graphics at 2.5 DMIPS (1M transistors each). Give each mini core a 512-bit FMAC, which is 64 GFLOPS @ 2 GHz per core; on the latest lithography that adds up to many TFLOPS per watt (see the sketch below).

                    3) Add emulation instructions like Godson's MIPS extensions; then you will be able to run "qemu wine" at native speed.
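
                    As a sketch of what that 512-bit FMAC would compute per instruction, here is the same operation in plain C over 16 lanes of 32-bit floats; a real ISA extension would retire this in one instruction, and the lane count is the only number carried over from above:

                    #include <stdio.h>

                    #define LANES 16   /* 512 bits / 32-bit floats */

                    /* acc[i] += a[i] * b[i] across all lanes; a wide FMAC
                       unit would do this in a single instruction. */
                    static void fmac512(float *acc, const float *a, const float *b) {
                        for (int i = 0; i < LANES; i++)
                            acc[i] += a[i] * b[i];
                    }

                    int main(void) {
                        float acc[LANES] = {0}, a[LANES], b[LANES];
                        for (int i = 0; i < LANES; i++) { a[i] = (float)i; b[i] = 2.0f; }
                        fmac512(acc, a, b);
                        printf("%.1f %.1f\n", acc[1], acc[15]); /* prints: 2.0 30.0 */
                        return 0;
                    }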
