
Thread: Another Game Studio Backs AMD's Mantle API

  1. #21
    Join Date
    Jan 2013
    Posts
    48

    Default

    Quote Originally Posted by Vim_User View Post
Mantle will not change anything unless chips other than AMD's GCN are supported.
As far as I understand it, you need virtual memory with rights management on the GPU for Mantle, because the VRAM is managed by the program rather than by the driver. You don't want other programs to be able to trash your VRAM contents! And that is the technical reason why Mantle will only work on GCN.

  2. #22
    Join Date
    Jan 2013
    Posts
    1,116

    Default

    Quote Originally Posted by Kraut View Post
As far as I understand it, you need virtual memory with rights management on the GPU for Mantle, because the VRAM is managed by the program rather than by the driver. You don't want other programs to be able to trash your VRAM contents! And that is the technical reason why Mantle will only work on GCN.
    Then it is dead on arrival.

  3. #23
    Join Date
    Oct 2007
    Location
    Under the bridge
    Posts
    2,146

    Default

    All modern GPUs have a virtual memory manager built into hardware. It's part of the DirectX 10 requirements.

  4. #24
    Join Date
    Oct 2007
    Posts
    15

    Default

Yeah, it says Linux/Mac support in these slides, don't worry:
    http://www.planet3dnow.de/cms/5720-a...les-und-jeden/

    And I'm sure there will be a Gallium3D state tracker for it, unless it needs to be more low-level? Perhaps a backend for the proprietary drivers which expose Mantle and have all of Gallium3D's state trackers supported on top of it?

Currently, when you design a modern game engine, you spawn threads, split your work into small jobs/tasks with a well-defined dependency graph, and throw them into a job/task pool to utilise your cores 100% (for CPU-only tasks). With OpenGL (and probably D3D) it is much safer to keep one dedicated thread and throw all your commands at it than to balance many GL contexts across many threads with clumsy fences to sync them, so there is a serious bottleneck just getting your calls to the GPU. On top of that, the driver uses its own thread(s) and probably causes contention, cache flushes and thrashing, and since it is a complete black box, you currently design your engine around it with a profiler.
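The single-submission-thread pattern described above can be sketched roughly like this. This is a hypothetical C++ illustration, not code from any real engine or driver: worker threads record rendering work as closures and push it onto one queue, and a single dedicated thread drains that queue, so the (imaginary) graphics API only ever sees one thread:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Hypothetical sketch: many producer threads, one consumer ("GL") thread.
class SubmitQueue {
public:
    // Called from any worker thread.
    void push(std::function<void()> cmd) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(cmd)); }
        cv_.notify_one();
    }
    // Signal that no more work will arrive.
    void close() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
    }
    // Run by the single dedicated submission thread until close() is called
    // and the queue is empty; only this thread ever "touches the context".
    void drain() {
        for (;;) {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [&] { return done_ || !q_.empty(); });
            if (q_.empty() && done_) return;
            auto cmd = std::move(q_.front());
            q_.pop();
            lk.unlock();
            cmd();  // stand-in for issuing a GL/D3D call
        }
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> q_;
    bool done_ = false;
};
```

The point of the sketch is that every command funnels through one lock and one thread, which is exactly the serialisation bottleneck being complained about.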

From what I see of Mantle, you can throw jobs at the GPU directly from any thread into a queue, and you have full control of your command buffer and any sync points injected into it, or your own 'pipeline' which may do compute here, CPU work there, video codec decoding when this command buffer finishes, or just regular GPU grunt work. When your CPU core count scales up, you get an almost linear performance gain in throwing draw calls at your GPUs; the article above explains it better. Also, treating each GPU as a separate device gives you more control, and there could be a chance of using one GPU for one thing and another for something else, instead of the current SLI/Crossfire implementations just dividing the frame up. Maybe use the weaker APU GPU for physics only and the discrete GPU for pure rendering?
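A minimal sketch of the per-thread command-buffer model described above, assuming a Mantle-like design (the `CommandBuffer` and `DeviceQueue` names here are made up for illustration, not the actual Mantle API): each thread records into its own buffer with no locking at all, and only the final submit to the device queue is serialised:

```cpp
#include <cstddef>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

// Thread-local recording: no locks, no shared state while building commands.
struct CommandBuffer {
    std::vector<std::string> commands;  // stand-in for encoded GPU commands
    void draw(int mesh) { commands.push_back("draw " + std::to_string(mesh)); }
    void dispatchCompute(int kernel) { commands.push_back("compute " + std::to_string(kernel)); }
};

// The device queue is the only contention point, touched once per buffer.
class DeviceQueue {
public:
    void submit(const CommandBuffer& cb) {
        std::lock_guard<std::mutex> lk(m_);
        for (const auto& c : cb.commands) submitted_.push_back(c);
    }
    std::size_t submittedCount() {
        std::lock_guard<std::mutex> lk(m_);
        return submitted_.size();
    }
private:
    std::mutex m_;
    std::vector<std::string> submitted_;
};
```

Recording scales almost linearly with cores because the buffers are thread-local; the lock is held only for the short submit step, which is the property the article is describing.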

I think if the SDK ships with many code examples and great documentation on how to really use this API, it should be more enjoyable to use in a professional engine to squeeze out all the performance, compared to D3D/OGL, where you don't have full control over things like the command buffer.

For smaller indie games it is probably overkill; it is simpler to just stick with OpenGL/ES for now for maximum portability, or to wait for Unity to support it and get the performance win for free as a toggle on supported hardware.

  5. #25
    Join Date
    Jan 2011
    Posts
    1,287

    Default

    Quote Originally Posted by MistaED View Post
    And I'm sure there will be a Gallium3D state tracker for it, unless it needs to be more low-level? Perhaps a backend for the proprietary drivers which expose Mantle and have all of Gallium3D's state trackers supported on top of it?
I don't think any of those approaches make sense. The first one, because Mantle is supposed to be lower level, if I understand it correctly, so the Gallium architecture doesn't fit it well (again, if I get it right, Gallium abstracts away part of the lower level, so you can just write generic state trackers). This is probably not a big problem: being a lower-level API, it might be far easier to implement. Ideally (I didn't read into the API, so I don't know) it might be a thin wrapper around libdrm?
On Gallium3D having a backend for the proprietary drivers, IMO it makes no sense at all: the whole point of using free drivers is avoiding the need for closed ones, not doing the work for them. If the proprietary team wants Mantle support, they had better implement it themselves. If the open source team wants support, they had better keep it open source. Otherwise, there is no reason for that team to exist.
(Note, I actually mean the AMD teams here, as they have people working on both closed and open drivers; as for unpaid individuals, they can do whatever they want, but I would really think they are doing pointless work if they add support to a closed driver for free.)

From what I see of Mantle, you can throw jobs at the GPU directly from any thread into a queue, and you have full control of your command buffer and any sync points injected into it, or your own 'pipeline' which may do compute here, CPU work there, video codec decoding when this command buffer finishes, or just regular GPU grunt work. When your CPU core count scales up, you get an almost linear performance gain in throwing draw calls at your GPUs; the article above explains it better. Also, treating each GPU as a separate device gives you more control, and there could be a chance of using one GPU for one thing and another for something else, instead of the current SLI/Crossfire implementations just dividing the frame up. Maybe use the weaker APU GPU for physics only and the discrete GPU for pure rendering?
I think this all sounds great, but it won't happen at all. You'd have to make too many assumptions about the hardware your users have, and that's something programmers try (with good reason) to keep to a minimum. It's one thing to expect a certain amount of RAM, or a card supporting a given API, but expecting a SLI/Crossfire setup, an APU, or Optimus support is being too specific, and doing the dispatching is too much work. They don't usually do CPU dispatching, so I don't see them doing GPU dispatching either.

  6. #26
    Join Date
    Oct 2007
    Posts
    15

    Default

I don't think any of those approaches make sense. The first one, because Mantle is supposed to be lower level, if I understand it correctly, so the Gallium architecture doesn't fit it well (again, if I get it right, Gallium abstracts away part of the lower level, so you can just write generic state trackers). This is probably not a big problem: being a lower-level API, it might be far easier to implement. Ideally (I didn't read into the API, so I don't know) it might be a thin wrapper around libdrm?
On Gallium3D having a backend for the proprietary drivers, IMO it makes no sense at all: the whole point of using free drivers is avoiding the need for closed ones, not doing the work for them. If the proprietary team wants Mantle support, they had better implement it themselves. If the open source team wants support, they had better keep it open source. Otherwise, there is no reason for that team to exist.
(Note, I actually mean the AMD teams here, as they have people working on both closed and open drivers; as for unpaid individuals, they can do whatever they want, but I would really think they are doing pointless work if they add support to a closed driver for free.)
What I meant is that Gallium3D can, I think, currently use an OpenGL implementation as an optional backend and run all of its state trackers on top of it; that is how VMware's driver works for guest OSes, right? Mantle could be used instead of OpenGL there for better performance in virtual machines, so it could make perfect sense in that scenario, or to better support Direct3D state trackers for Wine, like how that one patch ran the D3D state tracker on the open drivers. SteamOS might want to look into something like that, or OnLive/cloud-gaming types. As for the open drivers, they might want to support the API as a state tracker on top of Gallium3D, to support games that want to use it, or, if it needs to be lower, on top of libdrm.

I think this all sounds great, but it won't happen at all. You'd have to make too many assumptions about the hardware your users have, and that's something programmers try (with good reason) to keep to a minimum. It's one thing to expect a certain amount of RAM, or a card supporting a given API, but expecting a SLI/Crossfire setup, an APU, or Optimus support is being too specific, and doing the dispatching is too much work. They don't usually do CPU dispatching, so I don't see them doing GPU dispatching either.
Yeah, agreed that it is a stretch to cater for random GPU configurations, but at least developers have more control over it and could tailor their engine to be smart about building a pipeline of sorts at runtime, if it's not too complex to do so, or to cater to specific applications.

  7. #27
    Join Date
    Jan 2011
    Posts
    1,287

    Default

    Quote Originally Posted by MistaED View Post
What I meant is that Gallium3D can, I think, currently use an OpenGL implementation as an optional backend and run all of its state trackers on top of it; that is how VMware's driver works for guest OSes, right? Mantle could be used instead of OpenGL there for better performance in virtual machines, so it could make perfect sense in that scenario, or to better support Direct3D state trackers for Wine, like how that one patch ran the D3D state tracker on the open drivers. SteamOS might want to look into something like that, or OnLive/cloud-gaming types. As for the open drivers, they might want to support the API as a state tracker on top of Gallium3D, to support games that want to use it, or, if it needs to be lower, on top of libdrm.
    I don't know exactly which parts of your post answer which parts of mine, so I'll have to guess.
The first part, I assume, discusses using the blobs as a backend. I see how that makes sense for letting VMware's Gallium3D drivers leverage the blobs on computers that use them, and I see the benefit of switching OpenGL for Mantle there (although maybe it bypasses it; I have no idea how it works, but I think an OpenGL-to-OpenGL conversion is more easily worked around by just passing the calls through, provided your virtual and real environments share the same architecture). The thing is that I thought you meant the opposite: enabling Mantle on closed drivers through a Gallium3D state tracker.
On the open drivers, the thing is I don't see game developers dropping the higher-level APIs, but rather adding Mantle support, at least for now, and emulating a lower-level API with state trackers (which are, AFAIK, higher-level abstractions) would probably lose you the extra performance you'd gain by using the Mantle code path. If it sits on top of libdrm, or whatever the lowest-level user-space piece of the drivers is, it makes more sense to me.

Yeah, agreed that it is a stretch to cater for random GPU configurations, but at least developers have more control over it and could tailor their engine to be smart about building a pipeline of sorts at runtime, if it's not too complex to do so, or to cater to specific applications.
    I agree.

  8. #28
    Join Date
    Oct 2008
    Posts
    104

    Default

    Quote Originally Posted by TemplarGR View Post
    Carmack is one of the most overrated* developers in the history of video games. Seriously, he is not THAT great. People keep putting him on a pedestal though...

*Overrated != bad. Overrated is good, probably very good, but still getting far more praise and attention than he/she/it deserves...
John Carmack was good back when the engines he came up with were truly state-of-the-art (Wolf3D and Doom in particular), but since about Quake III, the engines he has worked on have been much of a muchness compared with the competition (just look at the Doom 3 engine next to contemporaries like the Unreal Engine, Source engine and CryEngine).
He does have great knowledge of GPU programming and graphics, but so do many others.
