
Thread: Early Mesa 9.2 Benchmarks With Nouveau

  1. #11
    Join Date
    Jan 2009
    Posts
    1,419

    Default

    Has anyone heard of any movement toward Glamor with Nouveau?
    I can't think of any reason why this wouldn't at least have been attempted, but I've been unable to find any discussion about it (thanks, Google).

  2. #12
    Join Date
    Dec 2012
    Posts
    459

    Default

    Quote Originally Posted by liam
    Has anyone heard of any movement toward Glamor with Nouveau?
    I can't think of any reason why this wouldn't at least have been attempted, but I've been unable to find any discussion about it (thanks, Google).
    Maybe it's viable only after reclocking has been done?

  3. #13
    Join Date
    Feb 2011
    Posts
    67

    Default Glamor

    Quote Originally Posted by liam
    Has anyone heard of any movement toward Glamor with Nouveau?
    I can't think of any reason why this wouldn't at least have been attempted, but I've been unable to find any discussion about it (thanks, Google).
    And why exactly are we supposed to cripple our perfectly good (OK, maybe not perfectly) 2D driver? Going via OpenGL would add considerable overhead and deprive us of the opportunity to use the 2D engine where it's appropriate or helpful.
    Plus, it's extra work with no significant (if any) gain, and we don't exactly have a lot of extra time at our disposal.

    And we wouldn't want to have to finish GL support for a new chipset before anyone can use X. The 2D driver is much, much simpler and thus faster to write.
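
    To make the "simpler to write" point concrete, here is a minimal, self-contained sketch of what an EXA-style solid-fill hook looks like. The hook signatures follow X.Org's exa.h, but the pixmap type, the push helper, and the 2D-class method offsets are hypothetical stand-ins (stubbed so the sketch compiles on its own), not nouveau's actual interface.

        /* Sketch of an EXA solid-fill hook: the whole operation is a
         * handful of command words pushed at the hardware's 2D class.
         * Types, helpers and method offsets are invented stand-ins. */
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        typedef unsigned long Pixel;             /* as in X11 */
        struct stub_pixmap { uint32_t handle; }; /* stand-in, not xorg's */
        typedef struct stub_pixmap *PixmapPtr;

        /* Hypothetical 2D-class method offsets (not real NV50 ones). */
        enum { M2D_ROP = 0x02a0, M2D_FILL_COLOR = 0x0588, M2D_RECT = 0x0600 };

        /* Stubbed push-buffer write; a real driver appends these words
         * to a command buffer that the 2D engine consumes. */
        static void push_method(uint32_t method, uint32_t data)
        {
            printf("  method 0x%04x <- 0x%08x\n", method, data);
        }

        static bool PrepareSolid(PixmapPtr pix, int alu, Pixel planemask, Pixel fg)
        {
            (void)pix; (void)planemask;
            push_method(M2D_ROP, (uint32_t)alu);       /* raster op */
            push_method(M2D_FILL_COLOR, (uint32_t)fg); /* fill color */
            return true;  /* yes, we can accelerate this fill */
        }

        static void Solid(PixmapPtr pix, int x1, int y1, int x2, int y2)
        {
            (void)pix;
            /* Two packed corner coordinates finish the rectangle. */
            push_method(M2D_RECT + 0, (uint32_t)(y1 << 16 | (x1 & 0xffff)));
            push_method(M2D_RECT + 4, (uint32_t)(y2 << 16 | (x2 & 0xffff)));
        }

        int main(void)
        {
            struct stub_pixmap pix = { 42 };
            PrepareSolid(&pix, 0x3 /* GXcopy */, ~0UL, 0x00ff00);
            Solid(&pix, 10, 10, 110, 60);  /* 100x50 rectangle */
            return 0;
        }

    Compare that with having to bring up enough of a GL stack to draw the same rectangle through Glamor.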

  4. #14
    Join Date
    Feb 2011
    Posts
    67

    Default Chipsets

    Quote Originally Posted by Calinou
    Reclocking is painfully hard to do, and you would also have to do it for almost every card (or at least every GPU: there are four Kepler GPUs that I know of, for example GK107, GK106, GK104 and GK110), so we won't see it for a while, sadly. We can always hope, though.
    Luckily the memory type (xDDRy) doesn't change that often, and the interfaces to it tend to only change with each new card generation (Fermi, Kepler, ...).
    The trouble is, the register values that the blob writes *depend* on the specific card you have (which registers set which frequencies to which values, how to extract memory timing information from the VBIOS, where to put it, how to even determine which memory type you have, etc.). I haven't worked on it myself, but memory reclocking looks like the hardest part to get right. You can't just copy and paste from the binary driver; that will, at best, work on the very card you extracted the values from.
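
    As a toy illustration of that per-card dependence (all registers, values and board names below are invented for the example, not taken from any VBIOS): two boards with the same GPU family and the same target clock can still need different values written to different registers, because the timing word is derived from each board's own VBIOS.

        /* Toy model of why blob register writes don't transplant: the
         * (register, value) pairs are a per-board function of chipset,
         * memory type and VBIOS timing data. Everything here is made up. */
        #include <stdint.h>
        #include <stdio.h>

        struct reclock_entry {
            const char *board;      /* chipset + memory type + board */
            uint32_t    timing_reg; /* register taking the timing word */
            uint32_t    timing_val; /* derived from *that* board's VBIOS */
        };

        static const struct reclock_entry table[] = {
            { "GK107 / GDDR5 / board A", 0x100220, 0x23232121 },
            { "GK107 / GDDR5 / board B", 0x100228, 0x42311818 },
        };

        int main(void)
        {
            /* Same family, same target clock, different writes. */
            for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++)
                printf("%s: write 0x%08x to 0x%06x\n", table[i].board,
                       table[i].timing_val, table[i].timing_reg);
            return 0;
        }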

    We also want the performance level to be selected dynamically based on load/temperature/power consumption; all of that is being worked on. And we can't turn it on for users before it really works, because there's always the danger of exposing your card to unhealthy levels of heat (or worse). But don't worry, I haven't heard of any dev's cards getting fried yet, even when experimenting with reclocking.
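
    A minimal sketch of that kind of dynamic selection, assuming a made-up three-level scheme with stubbed sensor reads (none of this is nouveau's code; thresholds and levels are invented). The thermal limit always wins over the load heuristic, which is exactly why it can't be enabled before it really works.

        /* Toy dynamic performance-level governor: pick a level from
         * load and temperature, stepping one level at a time, with the
         * thermal limit overriding everything. Invented thresholds. */
        #include <stdio.h>

        enum perf_level { PERF_LOW, PERF_MID, PERF_HIGH };

        /* Stubbed sensors; a real driver reads GPU counters and the
         * thermal hardware. */
        static int read_load_pct(void) { return 85; }
        static int read_temp_c(void)   { return 71; }

        static enum perf_level pick_level(enum perf_level cur)
        {
            int load = read_load_pct();
            int temp = read_temp_c();

            /* Safety first: drop to the lowest level when hot, and
             * never hold the top level near the limit. */
            if (temp >= 90)
                return PERF_LOW;
            if (temp >= 80 && cur == PERF_HIGH)
                return PERF_MID;

            /* Step one level at a time based on the load sample; a
             * real governor would average over time to avoid bouncing. */
            if (load > 75 && cur < PERF_HIGH)
                return cur + 1;
            if (load < 25 && cur > PERF_LOW)
                return cur - 1;
            return cur;
        }

        int main(void)
        {
            enum perf_level lvl = PERF_LOW;
            for (int tick = 0; tick < 3; tick++) {
                lvl = pick_level(lvl);
                printf("tick %d: level %d\n", tick, lvl);
            }
            return 0;
        }
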
    Last edited by calim; 04-11-2013 at 07:46 AM.

  5. #15
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,464

    Default

    Quote Originally Posted by calim
    And why exactly are we supposed to cripple our perfectly good (OK, maybe not perfectly) 2D driver? Going via OpenGL would add considerable overhead and deprive us of the opportunity to use the 2D engine where it's appropriate or helpful.
    Plus, it's extra work with no significant (if any) gain, and we don't exactly have a lot of extra time at our disposal.

    And we wouldn't want to have to finish GL support for a new chipset before anyone can use X. The 2D driver is much, much simpler and thus faster to write.
    I think the distinction here is the presence of a 2D engine. If the GPU has a 2D engine that can handle EXA-style drawing functions, then writing a traditional 2D driver first makes sense.

    If the GPU uses the 3D engine for 2D, then you need to write "most of a 3D HW driver" in order to run even basic 2D operations, and using something like Glamor or XA makes more sense.
    Last edited by bridgman; 04-11-2013 at 09:30 AM.

  6. #16
    Join Date
    Dec 2012
    Posts
    459

    Default

    Quote Originally Posted by bridgman
    I think the distinction here is the presence of a 2D engine. If the GPU has a 2D engine that can handle EXA-style drawing functions, then writing a traditional 2D driver first makes sense.

    If the GPU uses the 3D engine for 2D, then you need to write "most of a 3D HW driver" in order to run even basic 2D operations, and using something like Glamor or XA makes more sense.
    Does that imply that the AMD cards do not have a 2D engine suitable for EXA-style drawing? Or do they not have a 2D engine at all?

  7. #17
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,464

    Default

    Quote Originally Posted by Rexilion
    Does that imply that the AMD cards do not have a 2D engine suitable for EXA-style drawing? Or do they not have a 2D engine at all?
    No 2D engine at all. We had a 2D engine in 5xx and earlier, but it didn't do blends etc., so we used the 3D engine for EXA anyway.

  8. #18
    Join Date
    Feb 2011
    Posts
    67

    Default

    Quote Originally Posted by bridgman
    No 2D engine at all. We had a 2D engine in 5xx and earlier, but it didn't do blends etc., so we used the 3D engine for EXA anyway.
    Neither does NV's 2D engine. It can do solids (with ROP) and blits. But still, setting up the 3D engine for a single, known operation is much easier than dealing with all of OpenGL. The most significant advantage is that you don't need a shader compiler. And there are little things too: you don't need vertex buffers, because the 3D engine has immediate mode (which is quite sufficient, or even preferable, for drawing a single quad).
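
    A sketch of that immediate-mode point, with invented method names (the real ones live in the hardware's 3D class): for one known quad, the driver just pushes the vertex data inline in the command stream, so there is no vertex buffer to allocate or bind and no shader to compile at draw time.

        /* Immediate-mode quad: four vertices pushed straight into the
         * command stream. Method names/offsets are invented stand-ins. */
        #include <stdint.h>
        #include <stdio.h>

        enum {
            M3D_VERTEX_BEGIN = 0x1500,  /* hypothetical */
            M3D_VERTEX_DATA  = 0x1504,  /* hypothetical */
            M3D_VERTEX_END   = 0x1508,  /* hypothetical */
            PRIM_QUAD        = 8        /* hypothetical primitive id */
        };

        /* Stubbed push-buffer write. */
        static void push(uint32_t method, uint32_t data)
        {
            printf("  0x%04x <- 0x%08x\n", method, data);
        }

        /* Emit one screen-space quad: no VBO, no shader compiler, just
         * begin / 8 coordinate words / end. */
        static void draw_quad(int x, int y, int w, int h)
        {
            const int verts[4][2] = {
                { x, y }, { x + w, y }, { x + w, y + h }, { x, y + h },
            };
            push(M3D_VERTEX_BEGIN, PRIM_QUAD);
            for (int i = 0; i < 4; i++) {
                push(M3D_VERTEX_DATA, (uint32_t)verts[i][0]);
                push(M3D_VERTEX_DATA, (uint32_t)verts[i][1]);
            }
            push(M3D_VERTEX_END, 0);
        }

        int main(void)
        {
            draw_quad(16, 16, 256, 128);
            return 0;
        }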

  9. #19
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    5,130

    Default

    NV still has a 2D engine in Kepler?

    Then why did Nvidia's (blob) 2D performance take a nosedive after 7xxx? I recall that at the time the official reason was that they no longer had a 2D engine and had to do 2D work on the 3D engine from 8xxx onwards, and that it took years to optimize it to the level of the 7xxx 2D engine.

    Google also finds a lot of apparent confirmation that Nvidia dropped the 2D engine starting with 8xxx?

  10. #20
    Join Date
    Feb 2011
    Posts
    67

    Default 2D Engine

    Quote Originally Posted by curaga
    NV still has a 2D engine in Kepler?

    Then why did Nvidia's (blob) 2D performance take a nosedive after 7xxx? I recall that at the time the official reason was that they no longer had a 2D engine and had to do 2D work on the 3D engine from 8xxx onwards, and that it took years to optimize it to the level of the 7xxx 2D engine.

    Google also finds a lot of apparent confirmation that Nvidia dropped the 2D engine starting with 8xxx?
    Do you think we're making this up? See https://github.com/pathscale/envytoo...db/nv50_2d.xml (NV50 = G80; the naming always uses the chipset ID where the class interface first appeared).

    It doesn't do all that much, and it likely uses mostly the same circuits as the 3D engine (but a different interface, separate state, and who knows what the internal details are like), but it's there.
