
Thread: Ubuntu 12.10: Open-Source Radeon vs. AMD Catalyst Performance

  1. #51
    Join Date
    Oct 2011
    Location
    Rural Alberta, Canada
    Posts
    1,024

    Default

    Quote Originally Posted by GreatEmerald View Post
    Define that... You can be a real gamer while playing 2D games, and all of those are well supported by OSS drivers.
    Indeed, although I do play 3D games with R600g and I do tend to consider myself an enthusiast gamer. Granted, I am an enthusiast Linux gamer, and this does limit the selection of titles I can play somewhat, but for the most part I am quite happy with the performance of my Diamond Radeon HD 4670. Over the past year I have played through titles such as Enemy Territory: Quake Wars, Prey, Trine, Trine 2, and Amnesia: The Dark Descent with acceptable performance. The only game I have not been able to launch, ostensibly due to graphics issues, is Bastion, and the only game where I have real performance problems is Psychonauts, but that is due to bugs in the port and not my driver setup.

    And this is on an, admittedly updated, Fedora 16 setup. I am waiting to see if I will get performance improvements when I update to Fedora 18 or not.

  2. #52
    Join Date
    Jan 2009
    Posts
    619

    Default

    Quote Originally Posted by droidhacker View Post
    So to take that one step further, what you are implying is that PTS is doing something that is crippling the results and is therefore NOT a valid benchmark. I wonder if that would be an oversight, or if the results are intentionally sabotaged...?

    I've never taken PTS seriously.
    I've finally managed to make Reaction Quake pick up the PTS config and I can now reproduce the low framerate. Sorry for the noise.

  3. #53
    Join Date
    Sep 2008
    Location
    Vilnius, Lithuania
    Posts
    2,537

    Default

    Quote Originally Posted by bug77 View Post
    Does it make sense to use high LOD with low textures? I mean, the textures are already low-res, why store even lower res variants?
    Oh, I wasn't clear, I meant mesh LOD.
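    For context, mesh LOD means keeping several versions of a model at different polygon counts and picking one by camera distance, so far-away objects cost fewer vertices. A minimal sketch of distance-based selection, with made-up thresholds:

```python
# Minimal sketch of distance-based mesh LOD selection.
# The thresholds and level count are invented for illustration.

def select_mesh_lod(distance, thresholds=(10.0, 30.0, 80.0)):
    """Return LOD index 0 (full detail) through len(thresholds)
    (coarsest mesh) based on camera-to-object distance."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)  # beyond all thresholds: coarsest mesh
```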

  4. #54
    Join Date
    Nov 2007
    Posts
    1,024

    Default

    Quote Originally Posted by pingufunkybeat View Post
    Once the GPU HARDWARE gets the data, it will do it at the same speed. It's just that getting the right data to the hardware at the right moment takes longer with the open driver -- and that one runs on the CPU.
    Not at all. The CPU driver has a huge impact on what the GPU hardware can do. The driver has to configure a lot of functionality, control clock gating, optimize the shader programs, etc. There are still entire swaths of features on all three major lines of hardware that the FOSS drivers don't enable and the lack of which seriously impacts performance. Examples of such features recently brought up right on this very site are texture tiling, hierarchical Z-buffering, higher PCI transfer speeds, and power state control.

    You can also easily measure where the bottleneck lies. If your CPU is pegged at 100%, the driver (or the app itself) is the bottleneck. If the GPU is pegged at 100%, it's the hardware. Recent tests I've seen show that CPU usage with the FOSS drivers is not all that terrible (though not great), yet the FPS is dramatically worse. Clearly, then, the GPU hardware is the bottleneck; in this case, not because the hardware itself is bad, but because it's running a race with its hands tied behind its back and one leg chopped off.
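    The CPU half of that check is easy to script on Linux by sampling /proc/stat twice and computing the busy fraction (the GPU half needs a tool like radeontop). A rough sketch, not a benchmark tool:

```python
# Sketch: estimate overall CPU utilization from two /proc/stat samples.
# This only covers the CPU half of the "who is pegged at 100%?" test;
# pair it with a GPU monitor such as radeontop for the other half.

def read_cpu_times(path="/proc/stat"):
    """Return (busy, total) jiffies from the aggregate 'cpu' line."""
    with open(path) as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]  # idle + iowait count as not busy
    total = sum(fields)
    return total - idle, total

def cpu_utilization(sample_a, sample_b):
    """Busy fraction of the CPU between two (busy, total) samples."""
    busy = sample_b[0] - sample_a[0]
    total = sample_b[1] - sample_a[1]
    return busy / total if total else 0.0

# Usage on Linux: take read_cpu_times() twice, a second or so apart,
# while the game runs, and feed both samples to cpu_utilization().
```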

  5. #55
    Join Date
    Oct 2012
    Posts
    7

    Default

    Does anybody know what the fundamental reason for the speed difference is? The proprietary driver is at least 3x faster on all tests - that is not a gap you can close with incremental code tweaking alone.
    Last edited by Michael504; 10-31-2012 at 09:18 PM.

  6. #56
    Join Date
    Dec 2007
    Posts
    2,341

    Default

    From the perspective of the 3D engine, there are not really that many features missing between the open and closed drivers. At this point about all that is missing is HiZ. PCIe 2.0 is now enabled by default in recent kernels, and now that Mesa 9.0 is out, 2D tiling is enabled by default in xf86-video-ati (not sure if it was enabled in these tests or not). What the closed driver does have is 15+ years of optimized memory managers, driver heuristics, and micro-optimizations. A lot of what helps performance is driver heuristics for things like buffer placement or minimizing extra transfers of data. E.g., forcing some buffers to prefer VRAM rather than GART increases performance by 4x:
    http://lists.freedesktop.org/archive...er/029415.html

    General non-HW improvements that can make a huge difference:
    - Better buffer placement heuristics
    - Shader compiler improvements
    - Better buffer upload/caching heuristics
    - Utilize both cached and uncached (WC) gart memory
    - Better tiling (1D/2D/linear) selection heuristics
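    To make concrete what a "buffer placement heuristic" looks like at its simplest, here is a hypothetical sketch of the kind of VRAM-vs-GART decision the driver makes. This is not real Mesa or kernel code; the parameter names are invented for illustration:

```python
# Hypothetical sketch of a VRAM-vs-GART buffer placement heuristic.
# Not actual driver code; all names here are invented for illustration.

def choose_domain(gpu_reads_often, cpu_writes_often, buffer_size, vram_free):
    """Pick a memory domain for a buffer.

    VRAM is fastest for GPU access but limited in size and slow for
    the CPU to touch; GART (system memory mapped into the GPU's
    address space) is the fallback.
    """
    if cpu_writes_often and not gpu_reads_often:
        return "GART"  # CPU-streamed data: keep CPU writes cheap
    if gpu_reads_often and buffer_size <= vram_free:
        return "VRAM"  # hot GPU data that fits: pin it in VRAM
    return "GART"      # default, or VRAM-pressure fallback
```

A real driver weighs far more signals (usage flags from the app, eviction history, tiling mode), but the 4x result linked above is essentially this decision made better.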

  7. #57
    Join Date
    Jun 2011
    Posts
    316

    Default

    Quote Originally Posted by GreatEmerald View Post
    Speaking of Piglit tests, are there any OGL conformance tests that work on Catalyst? It would be interesting to compare how both conform to standards.
    Mozilla had a WebGL conformance test and a lot of the proprietary drivers have historically failed it, such as the last proprietary Catalyst driver available for R300 hardware.

    The proprietary Catalyst driver for R300 hardware got blacklisted by Mozilla for WebGL, while the open source r300g driver isn't blacklisted and works fine. You can force your way past the blacklist when using the proprietary driver, but then the PC just locks hard when it hits some WebGL. I think there are also some 2D acceleration blacklists, because scrolling in Firefox is a *LOT* faster on the open source r300g driver than on the old proprietary driver on pages with lots of images.

    Also, newer versions of Compiz and KWin plus the latest proprietary Catalyst driver for R300 hardware lock the system hard as well.

    So pretty much r300g is the only option for R300 hardware users.

    Keep up the great work, Marek!

  8. #58
    Join Date
    Feb 2012
    Posts
    70

    Default

    I installed Ubuntu 12.10 yesterday on a first-gen i3 notebook with an AMD 5470M dGPU.
    The default Unity desktop performance is piss poor. A simple mouse click takes a few seconds to register.

  9. #59
    Join Date
    Jun 2010
    Location
    ฿ 16LDJ6Hrd1oN3nCoFL7BypHSEYL84ca1JR
    Posts
    1,043

    Default

    Quote Originally Posted by Veerappan View Post
    I think that this has possibly been fixed due to a KWin bug.

    https://bugs.freedesktop.org/show_bug.cgi?id=55998
    Fair enough, that's not intel's fault.

    I accidentally clicked on the adblock button in chromium.
    Code:
    [70796.656970] [drm:i915_hangcheck_hung] *ERROR* Hangcheck timer elapsed... GPU hung

  10. #60
    Join Date
    Jun 2009
    Posts
    2,927

    Default

    Quote Originally Posted by elanthis View Post
    Examples of such features recently brought up right on this very site are texture tiling, hierarchical Z-buffering, higher PCI transfer speeds, and power state control.
    Like Alex says, most of this is finished for Radeon cards. Perhaps not turned on by default on most distros, but it's been written.

    And yes, optimizing shaders can bring huge gains, but Quake3 doesn't need much of that, and it's still slower.

    You can also easily measure where the bottleneck lies. If your CPU is pegged at 100%, the driver (or the app itself) is the bottleneck. If the GPU is pegged at 100%, it's the hardware. Recent tests I've seen show that CPU usage with the FOSS drivers is not all that terrible (though not great), yet the FPS is dramatically worse. Clearly, then, the GPU hardware is the bottleneck; in this case, not because the hardware itself is bad, but because it's running a race with its hands tied behind its back and one leg chopped off.
    I don't know enough about GPU drivers to argue with you about this. It will depend on how often the GPU has to wait for the driver to finish doing its stuff, and I can think of scenarios where this induces delays even when the CPU is far from 100% load; I get this regularly when running OpenCL software. But like I said, I'm not a driver guy, so I have no clue how it is really done.
