
Thread: Benchmarks Of Nouveau's Gallium3D Driver

  1. #21

    Default

    Quote Originally Posted by sreyan View Post
    Michael: Any chance we can get a comparison of 2D speed?

    For example, my 8600GTS seems exceedingly slow when I use the NVIDIA blob with KDE 4.4.
    That's a known bug in the original F12 X.org, but has been fixed in updates for ages. Do your updates and reboot, if you haven't already.

  2. #22
    Join Date
    Apr 2007
    Posts
    121

    Default

    Quote Originally Posted by AdamW View Post
    That's a known bug in the original F12 X.org, but has been fixed in updates for ages. Do your updates and reboot, if you haven't already.
    This machine is running Lucid so it probably has the same X.org that F12 had /way/ back. Maybe I'll switch to F13 when it comes out!

    I'm not a gamer so I don't care too much about 3D. Nouveau is mostly feature-complete for my needs, except that the 8600GTS is too loud.

    The only reason I use the nvidia blob is the dual-head support. I've tried the F12 live CD (to try out the nouveau goodness), which detects my monitors perfectly from EDID data, but the fan noise is too much. When the nouveau driver can downclock the card / reduce the fan speed, I'll ditch the blob forever! I can't wait.

  3. #23
    Join Date
    Dec 2008
    Posts
    315

    Default

    Quote Originally Posted by sreyan View Post
    This machine is running Lucid so it probably has the same X.org that F12 had /way/ back. Maybe I'll switch to F13 when it comes out!

    I'm not a gamer so I don't care too much about 3D. Nouveau is mostly feature-complete for my needs, except that the 8600GTS is too loud.

    The only reason I use the nvidia blob is the dual-head support. I've tried the F12 live CD (to try out the nouveau goodness), which detects my monitors perfectly from EDID data, but the fan noise is too much. When the nouveau driver can downclock the card / reduce the fan speed, I'll ditch the blob forever! I can't wait.
    Do it yourself. Use NiBiTor to set the default frequencies down a tad and then drop the voltage. Not every card has voltage modification, and some are technically hard to do through the BIOS, but some are easy. Then just nvflash it. Fan controls are a snap on many of them. Set it at 60 or 70 percent speed and, if you can, drop 0.1 volts; it should run cooler. You lose a few gpixels/sec and a few gtexels/sec from downclocking the core, but some cards will let you keep super-high 2D clocks and downclock 3D only.

  4. #24
    Join Date
    Aug 2007
    Location
    Europe
    Posts
    401

    Default

    Quote Originally Posted by rohcQaH View Post
    no, it's because nvidia spent lots of time/money on driver optimization and nouveau didn't. nouveau can become faster, but that can't be done by just switching out an algorithm. It needs work, lots of it.

    If only driver development was easy, everyone would rejoice
    Is there a way to automagically identify algorithm choices? It would be a nice addition to Valgrind (http://valgrind.org/), which, if I understand it correctly, mainly identifies memory usage, memory leaks, etc.

    Has the art of algorithm identification ever been implemented in any commercial and/or GPL software?

  5. #25
    Join Date
    Apr 2007
    Posts
    121

    Default

    Quote Originally Posted by sabriah View Post
    Is there a way to automagically identify algorithm choices? It would be a nice addition to Valgrind (http://valgrind.org/), which, if I understand it correctly, mainly identifies memory usage, memory leaks, etc.

    Has the art of algorithm identification ever been implemented in any commercial and/or GPL software?
    I'm pretty sure you could implement a Gendarme plugin or a weaver in PostSharp to do some impressive algorithm replacement and switching, for .NET at least.

    For compiled code you could always take the route of compiling to some intermediate representation (IR) and using heuristics to optimize better. Really advanced optimization plugins for LLVM would be great.

    Both of these approaches require more information. Without an AST / simple IR available during compilation, or the rich metadata that's in .NET's IL, it would be very tough to do. Doing something like this in Valgrind would be annoying to implement and would probably offer very little gain.
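    To make the idea concrete, here's a toy sketch of what "algorithm identification" on an AST could look like. This is nothing like a real Valgrind or LLVM pass; the source snippet and function name are made up for illustration, and it only spots one trivial idiom (an accumulation loop that could be replaced by the builtin sum()):

    ```python
    import ast

    SRC = """
    total = 0
    for x in values:
        total += x
    """

    def find_accumulation_loops(tree):
        """Spot 'for x in xs: acc += x' patterns -- the kind of idiom an
        automated algorithm identifier could flag for replacement by sum()."""
        hits = []
        for node in ast.walk(tree):
            if isinstance(node, ast.For) and len(node.body) == 1:
                stmt = node.body[0]
                if (isinstance(stmt, ast.AugAssign)
                        and isinstance(stmt.op, ast.Add)
                        and isinstance(stmt.target, ast.Name)
                        and isinstance(stmt.value, ast.Name)
                        and isinstance(node.target, ast.Name)
                        and stmt.value.id == node.target.id):
                    hits.append(stmt.target.id)  # name of the accumulator
        return hits

    print(find_accumulation_loops(ast.parse(SRC)))  # ['total']
    ```

    Even this toy case shows why the poster above is right that you need an AST or rich IR: on stripped machine code, none of this structure is visible.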

  6. #26
    Join Date
    Apr 2007
    Posts
    121

    Default

    Quote Originally Posted by Hephasteus View Post
    Do it yourself. Use NiBiTor to set the default frequencies down a tad and then drop the voltage. Not every card has voltage modification, and some are technically hard to do through the BIOS, but some are easy. Then just nvflash it. Fan controls are a snap on many of them. Set it at 60 or 70 percent speed and, if you can, drop 0.1 volts; it should run cooler. You lose a few gpixels/sec and a few gtexels/sec from downclocking the core, but some cards will let you keep super-high 2D clocks and downclock 3D only.
    That's far too much effort. If I can't do it as easily as installing the blob with yum or apt, it's not going to happen.

  7. #27

    Default

    Quote Originally Posted by sreyan View Post
    This machine is running Lucid so it probably has the same X.org that F12 had /way/ back. Maybe I'll switch to F13 when it comes out!

    I'm not a gamer so I don't care too much about 3D. Nouveau is mostly feature-complete for my needs, except that the 8600GTS is too loud.

    The only reason I use the nvidia blob is the dual-head support. I've tried the F12 live CD (to try out the nouveau goodness), which detects my monitors perfectly from EDID data, but the fan noise is too much. When the nouveau driver can downclock the card / reduce the fan speed, I'll ditch the blob forever! I can't wait.
    Nah, if you're on Ubuntu it'll be something different. Don't know what, though.

  8. #28
    Join Date
    Dec 2008
    Posts
    315

    Default

    Quote Originally Posted by sreyan View Post
    That's far too much effort. If I can't do it as easily as installing the blob with yum or apt, it's not going to happen.
    No it's not. It takes about 10 minutes to set up your card the way you want it and flash it. Download NiBiTor 5.4 and nvflash, make a boot disk out of a floppy or USB stick, edit the BIOS, and save it. Boot the DOS disk or thumb drive and nvflash the saved BIOS image. Reset, and your card is permanently quiet, power-efficient, or fast. It's only scary the first time. If you don't have a Windows partition it's a giant undertaking, but if you do, it's nothing. The main problem with really modifying a card is that the voltage changes don't work on far too many cards. A 0.1 to 0.15 volt drop on anything that isn't 40nm and doesn't use those funky leaky transistors will take half the heat out of the card and allow super-slow fan speeds with just a very slight declock for safety.

    But I guess the engineers know best, which is why they love to put hardware decoding in video cards that eats 25 or more watts, when my CPU at a 100 MHz declock and 0.125 V devolt can decode 1080i video at 50 to 60 percent CPU usage on about 12 to 15 watts.

  9. #29
    Join Date
    Aug 2007
    Location
    Europe
    Posts
    401

    Default

    I see the benchmarks and I have no issue with them. The nVidia drivers are superior; however, the Nouveau drivers are usable. But, and this is my gripe: the Phoronix graphs just don't show that, as they use the same arbitrary y-axes as ever.

    The graphs in Phoronix have used the same incomprehensible y-scales for years, with "tickmarks" based on multiples of 13, 17, 18, 20, 21, 25, or whatever fitted the standardized graph height, apparently in pixels. Please, if you want to make the y-axes usable, use regular intervals like 1, 2, 3, 4; 5, 10, 15, 20; or 20, 40, 60, etc. Or, as in this case, when the values differ greatly, use log_2, log_10, or log_whatever in order to show us what the differences are!

    Now the Nouveau drivers are crammed at the bottom, near the perhaps 15, 25, or 35 fps line. These fps numbers are relevant, and they are playable. Using a log scale, one would be able to interpret them readily. A log_2 scale may be superior in many more cases than a log_10 scale.

    Please change the scales of the Phoronix graphs to something interpretable! This is not the first request along this line here at Phoronix, and it is the 2nd from me. Please do, or at least explain why you value cosmetic consistency over accuracy. I just don't understand it.
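    A quick sketch of the point about scales, using hypothetical fps numbers (not Phoronix's actual data or plotting code): on a linear axis the slower result collapses into a sliver of the plot, while on a log_2 axis every doubling of fps gets the same visual distance, so both bars stay readable.

    ```python
    import math

    # Hypothetical benchmark results in fps: proprietary blob vs. Nouveau.
    results = {"nvidia blob": 240.0, "nouveau": 15.0}

    # Linear axis topping out at the fastest result: Nouveau's bar
    # occupies only this fraction of the plot height (about 6%).
    linear_fraction = results["nouveau"] / results["nvidia blob"]

    # log_2 axis: each tick is one doubling of fps, so 15 fps and
    # 240 fps land exactly 4 ticks apart (240 / 15 = 16 = 2^4).
    log2_positions = {name: math.log2(fps) for name, fps in results.items()}

    print(f"linear fraction: {linear_fraction:.3f}")
    print(f"log2 positions: {log2_positions['nouveau']:.2f} vs "
          f"{log2_positions['nvidia blob']:.2f}")
    ```

    The same trick is one line in most plotting tools (e.g. matplotlib's set_yscale with a log base), so it wouldn't disturb the standardized graph layout.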




  10. #30
    Join Date
    Oct 2009
    Posts
    7

    Default

    Quote Originally Posted by Hephasteus View Post
    It's kind of moot to bench this, since Gallium doesn't have a working TTM manager yet as far as I can tell. It's doing everything out of memory-mapped framebuffer. Some news on progress for TTM would be good. I can't find any.
    AFAIK TTM/buffer objects are a hard requirement for Gallium3D drivers to work and be possible to implement in the first place.

    So TTM is likely already working, but there's a lot to optimize, and I'd bet a lot of stuff still goes through softpipe software rendering that isn't hooked into hardware-accelerated routines yet.
