
Thread: A New Open-Source GPU Comes About

  1. #1
    Join Date
    Jan 2007
    Posts
    14,613

    Default A New Open-Source GPU Comes About

    Phoronix: A New Open-Source GPU Comes About

    After writing last month that the open-source graphics card is dead and why the open-source graphics card failed, this weekend I received an email that begins with "Open Graphics! Here we go again! As our master thesis work we have implemented a open source graphics accelerator."...

    http://www.phoronix.com/vr.php?view=MTExMzE

  2. #2

    Default

    Aha, I don't think this will really do much for open-source graphics. It appears to be a basic fixed-point, non-programmable pipeline, if I'm not missing a lot by skimming the code. An actually good set of floating point units would probably help out far more.

    Kudos for finishing the master's thesis and making something neat, though.
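
    For anyone who hasn't run into the distinction: a fixed-point pipeline does all its math in plain integers with an implied binary point, instead of IEEE floats. A minimal sketch in C, using a Q16.16 format I picked purely for illustration (not taken from the thesis code):

    #include <stdint.h>
    #include <stdio.h>

    typedef int32_t q16_16;   /* 16 integer bits, 16 fraction bits */
    #define Q_ONE (1 << 16)

    static q16_16 q_from_float(float f) { return (q16_16)(f * Q_ONE); }
    static float  q_to_float(q16_16 q)  { return (float)q / Q_ONE; }

    /* Multiply: widen to 64 bits, then shift back to keep 16 fraction bits. */
    static q16_16 q_mul(q16_16 a, q16_16 b) { return (q16_16)(((int64_t)a * b) >> 16); }

    int main(void)
    {
        q16_16 x = q_from_float(1.5f), y = q_from_float(2.25f);
        printf("%f\n", q_to_float(q_mul(x, y)));   /* prints 3.375000 */
        return 0;
    }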

  3. #3
    Join Date
    May 2012
    Posts
    8

    Default

    "An actually good set of floating point units would probably help out far more."
    Very true, but that would be a lot more work... you do what you can with the time you've got. This is still better than nothing.

  4. #4
    Join Date
    Oct 2009
    Posts
    2,086

    Default

    So they've implemented a GPU using a CPU.... isn't that essentially what you get by mixing LLVMpipe + GMA500?

  5. #5
    Join Date
    May 2012
    Posts
    8

    Default

    Quote Originally Posted by droidhacker View Post
    So they've implemented a GPU using a CPU.... isn't that essentially what you get by mixing LLVMpipe + GMA500?
    The pipeline is implemented in hardware. There is also a CPU in the hardware, an OpenRISC processor, which can send instructions to the graphics accelerator. If it's still unclear, please read this: http://en.wikipedia.org/wiki/Field-p...ble_gate_array
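
    In other words, it's the classic CPU-plus-accelerator split, just with both halves living on the same FPGA. A rough sketch in C of how a soft CPU like OpenRISC typically feeds such a pipeline; the register offsets and status bit here are invented for illustration, not taken from the thesis:

    #include <stdint.h>

    #define GPU_BASE   ((uintptr_t)0x91000000u)   /* hypothetical mapping */
    #define GPU_FIFO   (*(volatile uint32_t *)(GPU_BASE + 0x0))
    #define GPU_STATUS (*(volatile uint32_t *)(GPU_BASE + 0x4))
    #define GPU_BUSY   (1u << 0)

    static void gpu_submit(uint32_t cmd)
    {
        while (GPU_STATUS & GPU_BUSY)
            ;                   /* spin until the command FIFO has room */
        GPU_FIFO = cmd;         /* the hardware pipeline consumes this word */
    }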

  6. #6
    Join Date
    Jan 2012
    Posts
    111

    Default

    They've implemented a GPU using an FPGA...

  7. #7
    Join Date
    Dec 2010
    Location
    Calgary
    Posts
    136

    Smile

    Hmmm, I think this GPU will not be getting me 100 FPS in Crysis any time soon.

  8. #8
    Join Date
    Oct 2008
    Posts
    3,096

    Default

    Quote Originally Posted by hoohoo View Post
    Hmmm, I think this GPU will not be getting me 100 FPS in Crysis any time soon.
    They just need some good driver optimizations.

    case "crysis.exe":
    //display screenshots
    break;

  9. #9

    Default

    Quote Originally Posted by smitty3268 View Post
    They just need some good driver optimizations.

    case "crysis.exe":
    //display screenshots
    break;
    That's it: prerendered scenes! FPGAs are just a sea of lookup tables, right? This one will only be a little bigger...

  10. #10
    Join Date
    Apr 2011
    Posts
    319

    Default

    This is close to my proposal, but it makes a few mistakes.

    1) We already have a good software rasterizer (LLVMpipe). Just add some 3D instructions to OpenRISC (like MIPS-3D) to accelerate the rasterizer, and write an LLVM backend for it. Don't build ASIC circuits; those are hard even for companies like NVIDIA, and they want to phase them out by adding more 3D instructions to the shaders (general-purpose cores).

    2) Use big.LITTLE-style processing: 2-4 cores for general computing at 7 DMIPS (20M transistors each), and 32, 64, or 128 cores for graphics at 2.5 DMIPS (1M transistors each). Each mini core gets a 512-bit FMAC unit, which is 64 GFLOPS at 2 GHz per core; on the latest lithography that adds up to many TFLOPS per watt.

    3) Add emulation instructions like the MIPS-based Godson has; then you could run "qemu wine" at near-native speed.
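
    To show the arithmetic behind the 64 GFLOPS/core figure (peak throughput, assuming single-precision lanes and counting a fused multiply-add as two operations):

    #include <stdio.h>

    int main(void)
    {
        int lanes = 512 / 32;     /* 512-bit vector of 32-bit floats = 16 lanes */
        int flops_per_fma = 2;    /* one fused multiply-add = 2 FLOPs */
        double clock_ghz = 2.0;
        printf("%.0f GFLOPS per core\n", lanes * flops_per_fma * clock_ghz);
        return 0;
    }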
