
Thread: Valve's L4D2 Linux Presentation Slides


  1. #1
    Join Date
    Jan 2007
    Posts
    15,629

    Default Valve's L4D2 Linux Presentation Slides

    Phoronix: Valve's L4D2 Linux Presentation Slides

    Here are the slides that Valve presented at SIGGRAPH LA 2012 about their Left 4 Dead 2 / Source Engine porting to Linux...

    http://www.phoronix.com/vr.php?view=MTE1ODI

  2. #2
    Join Date
    May 2011
    Posts
    1,611

    Default

    "Our performance is currently highest on NVIDIA's GL driver"


  3. #3
    Join Date
    May 2010
    Posts
    190

    Default

    Quote Originally Posted by johnc View Post
    "Our performance is currently highest on NVIDIA's GL driver"

    Way to stir up some flamewar shit Valve!

  4. #4
    Join Date
    Nov 2007
    Posts
    1,024

    Default

    Quote Originally Posted by snuwoods View Post
    Way to stir up some flamewar shit Valve!
    It's really not a secret at all that NVIDIA has the better GL drivers by a huge margin. They are faster, support newer versions of the standard sooner, and are the place to find most of the cutting-edge GL extension proposals that might end up in later versions of the standard, e.g. NV assembly shaders, which are great for higher-level tools to target vs. source-compiling to GLSL, and bindless graphics and DSA, which every serious OpenGL professional is praying to $DEITY will finally be in GL 5.0. Most graphics developers tend to lean towards NVIDIA.

    I've seen game teams hold lotteries to see which "unlucky" developer got stuck with the fastest AMD GPU (to ensure at least one dev was testing on AMD regularly) while the others got more modest NVIDIA GPUs, both because of the difference in driver quality and because of NVIDIA's vastly superior graphics developer tools. As someone who turned into an AMD fanboy back in the days when r300 was The Shiznit for Linux users preferring FOSS drivers, I suppose I might opt for the AMD card if offered a choice (especially as I use AMD on my home workstation for hobby stuff and already have a decent grasp on what breaks their drivers). I suppose these days if I were still a Linux user I'd probably be an Intel fanboy since that driver appears to be the best FOSS one now... but it's close to impossible to be an Intel graphics fan if you're on Windows where their GL driver is the largest steaming pile of horseshit that has ever been produced by a driver team in the history of PC hardware.

    The only issue I have with NVIDIA in direct comparison to other GL drivers is that their GLSL compiler is much looser and accepts invalid code that the other drivers generally reject. It's less painful to develop with NVIDIA because you run into fewer (though not zero) stupid bugs, but it's at times less painful to develop on AMD since your shader code will be forced into being more portable.
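
    For example (totally contrived, and behavior varies by driver version, so treat this as a hypothetical rather than a claim about any specific release), something like the first shader below tends to sail through a lenient compiler but should be rejected by a strict #version 110 implementation, since GLSL 1.10 has no implicit int-to-float conversion:

    Code:
    /* Purely illustrative fragment shaders embedded as C strings.
     * The "sloppy" one relies on an implicit int->float conversion that
     * GLSL 1.10 does not allow; a strict compiler should reject it,
     * while a more forgiving one may quietly accept it. */
    static const char *sloppy_frag =
        "#version 110\n"
        "void main() {\n"
        "    float glow = 1;              /* int literal: not legal in 1.10 */\n"
        "    gl_FragColor = vec4(glow);\n"
        "}\n";

    static const char *portable_frag =
        "#version 110\n"
        "void main() {\n"
        "    float glow = 1.0;            /* explicit float: compiles everywhere */\n"
        "    gl_FragColor = vec4(glow);\n"
        "}\n";

    Which is exactly why it pays to compile your shaders on one of the stricter drivers regularly, even if NVIDIA is your primary development target.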

    In general, if you're doing PC/Windows graphics development, just make sure you have one each of NVIDIA, AMD, and Intel around to test on (sounds obvious, but you'd be surprised how many professionals forget to test regularly on a variety of hardware until late in alpha or even beta). The drivers are just too different in quality and supported features to test on only one. I mean, if it works on Intel it'll probably work anywhere, but if it works on Intel it's probably because all you're doing is calling glClear. (Okay, that's a teensy bit overstated, but seriously, Intel's Windows GL driver sucks.)

    Also, insert usual OpenGL commentary blah blah wish Khronos would have just made a standard cross-platform ICD library with shader frontend and API loader but it's way too late to get the vendors on board even if they did blah blah
    Last edited by elanthis; 08-12-2012 at 04:38 PM.

  5. #5
    Join Date
    Jan 2009
    Posts
    1,762

    Default

    @elanthis

    What's your opinion on the Mesa implementation?

  6. #6
    Join Date
    Nov 2007
    Posts
    1,024

    Default

    On another note related to the slides, I'd like to see what they did for threading GL. I'm assuming they're using the newer GLX_ARB_create_context extension to make shared contexts, pre-assign them to any threads doing rendering, and then synchronize draw calls to the display in the main thread. They might be doing something more funky and creative though.

    Generally the steps I go through these days to get multi-threaded GL rendering to work go something like:

    1) Create dummy context to get access to extensions
    2) Create real context for device using ARB_create_context extension (requires GL 3, basically)
    3) Kill dummy context
    4) Create shared context for main display window (shared with context from step #2)
    5) Create a per-thread context cache, generally just a TLS variable
    6) Create a context pool to accelerate step 8 in the common case
    7) Create several shared GL contexts to store in the context pool
    8) Create a function to check that a context has been bound to the current thread, and if not, pull one off the context queue and bind it; signal main thread and block if the pool is empty, wait for main thread to create a new shared context that we can bind
    9) Ensure that all threads that are ending return their cached context (if they have one) to the pool
    10) Write letters to Khronos asking them to just give us explicit device, surface, and context objects like they promised for Longs Peak

    The point of the separate context in step 4 is that you sometimes need to destroy and recreate your main window. Since OpenGL oh-so-wonderfully ties your device context (which controls the lifetime of your GPU objects) and the output window into a single object, there's no way to recreate an output window without also destroying all your textures, shaders, buffers, etc. Unless you create two shared contexts, which is a relatively newish feature and not yet supported everywhere (Mesa is only just getting support for it in 8.1, iirc). Again, I think it's obvious that the D3D approach is much superior here: separate objects for the device and swap chain, which are explicitly managed by the developer.
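
    Very roughly, the GLX/pthreads version of steps 2 and 5-9 looks something like the sketch below. This isn't anything from Valve's port; the names (ensure_thread_context, POOL_MAX, etc.) and the pool size are invented just for illustration, and the main-thread side that refills the pool with freshly created contexts is left out:

    Code:
    /* Rough sketch of steps 2 and 5-9 above on GLX + pthreads. Assumes
     * GLX_ARB_create_context and that <GL/glx.h> pulls in the ARB tokens
     * and PFNGLXCREATECONTEXTATTRIBSARBPROC (it does with Mesa's headers). */
    #include <GL/glx.h>
    #include <pthread.h>
    #include <stddef.h>

    #define POOL_MAX 8

    static Display     *g_dpy;                 /* opened elsewhere               */
    static GLXFBConfig  g_fbc;                 /* chosen via glXChooseFBConfig   */
    static GLXContext   g_device_ctx;          /* step 2: the "real" context     */

    static GLXContext      g_pool[POOL_MAX];   /* step 7: pool of shared ctxs    */
    static int             g_pool_count;
    static pthread_mutex_t g_pool_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  g_pool_cond = PTHREAD_COND_INITIALIZER;

    static __thread GLXContext tls_ctx;        /* step 5: per-thread cache (TLS) */

    /* Steps 2/4/7: create a context that shares objects with 'share'. The
     * throwaway legacy context needed to fetch the extension entry point
     * (steps 1 and 3) is omitted here. */
    static GLXContext create_shared_context(GLXContext share)
    {
        static PFNGLXCREATECONTEXTATTRIBSARBPROC create_ctx;
        if (!create_ctx)
            create_ctx = (PFNGLXCREATECONTEXTATTRIBSARBPROC)glXGetProcAddressARB(
                (const GLubyte *)"glXCreateContextAttribsARB");

        static const int attribs[] = {
            GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
            GLX_CONTEXT_MINOR_VERSION_ARB, 0,
            None
        };
        return create_ctx(g_dpy, g_fbc, share, True, attribs);
    }

    /* Step 8: make sure the calling worker thread has a context bound,
     * pulling one from the pool or blocking until one is returned (or the
     * main thread creates a new one and puts it in the pool). */
    static void ensure_thread_context(GLXDrawable draw)
    {
        if (tls_ctx)
            return;
        pthread_mutex_lock(&g_pool_lock);
        while (g_pool_count == 0)
            pthread_cond_wait(&g_pool_cond, &g_pool_lock);
        tls_ctx = g_pool[--g_pool_count];
        pthread_mutex_unlock(&g_pool_lock);
        glXMakeCurrent(g_dpy, draw, tls_ctx);
    }

    /* Step 9: a worker that is shutting down returns its cached context. */
    static void release_thread_context(void)
    {
        if (!tls_ctx)
            return;
        glXMakeCurrent(g_dpy, None, NULL);
        pthread_mutex_lock(&g_pool_lock);
        g_pool[g_pool_count++] = tls_ctx;
        tls_ctx = NULL;
        pthread_cond_signal(&g_pool_cond);
        pthread_mutex_unlock(&g_pool_lock);
    }

    The window-facing context from step 4 would just be another create_shared_context(g_device_ctx) made current on the actual output window, which is what lets you tear the window down and recreate it without losing the textures, shaders, and buffers owned by the share group.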

  7. #7
    Join Date
    Nov 2007
    Posts
    1,024

    Default

    Quote Originally Posted by 89c51 View Post
    @elanthis

    What's your opinion on the Mesa implementation?
    It's been some time since I've actually had a Linux desktop machine (as you may have already picked up on, I became very disillusioned with Linux as a non-toy desktop OS last year), so I don't have much of an opinion on Mesa right now. What I've read and seen in the code all looks fairly good, but it's hard to say without actually using it for anything serious.

    I am impressed with the general speed of development on Mesa, especially from the Intel team lately, and I enjoy keeping up on the git changesets. I want to make clear that while I very strongly despise the Intel Windows GL driver, I harbor no disrespect for the Intel Linux driver team. Seems like all good work so far.

    I have been mulling getting a small Mac Mini-like machine with Ivy Bridge (maybe the Giada i53 if it's available soon) specifically for Linux as I'd like to do some porting work, so I'll be experimenting with Mesa's features and quality quite a bit then. If I find problems there, though, expect bug reports rather than forum bitching.

    The only problem I'm aware of with Intel's Linux support right now is that I'd really like Intel to switch to Gallium, so that any "better than OpenGL but not D3D" API/state-tracker experiments I might decide to try could actually be done with Intel hardware and not just softpipe/llvmpipe.

    [edit: I can't remember if you're with AMD or not, but if you are, I also had good experiences with r600. it was definitely very buggy and the DRI/Mesa model makes it way too easy for bugs to cause kernel oopses, but this was over a year ago, so again I don't have an educated opinion on today's state of the driver.]
    Last edited by elanthis; 08-12-2012 at 06:09 PM.

  8. #8
    Join Date
    Jan 2009
    Posts
    1,762

    Default

    Quote Originally Posted by elanthis View Post
    It's been some time since I've actually had a Linux desktop machine (as you may have already picked up on, I became very disillusioned with Linux as a non-toy desktop OS last year), so I don't have much of an opinion on Mesa right now. What I've read and seen in the code all looks fairly good, but it's hard to say without actually using it for anything serious.

    I am impressed with the general speed of development on Mesa, especially from the Intel team lately, and I enjoy keeping up on the git changesets. I want to make clear that while I very strongly despise the Intel Windows GL driver, I harbor no disrespect for the Intel Linux driver team. Seems like all good work so far.

    I have been mulling getting a small Mac Mini-like machine with Ivy Bridge (maybe the Giada i53 if it's available soon) specifically for Linux as I'd like to do some porting work, so I'll be experimenting with Mesa's features and quality quite a bit then. If I find problems there, though, expect bug reports rather than forum bitching.

    The only problem I'm aware of with Intel's Linux support right now is that I'd really like Intel to switch to Gallium, so that any "better than OpenGL but not D3D" API/state-tracker experiments I might decide to try could actually be done with Intel hardware and not just softpipe/llvmpipe.

    [edit: I can't remember if you're with AMD or not, but if you are, I also had good experiences with r600. it was definitely very buggy and the DRI/Mesa model makes it way too easy for bugs to cause kernel oopses, but this was over a year ago, so again I don't have an educated opinion on today's state of the driver.]
    Probably your best bet if you want to experiment with Gallium is AMD. There are some tiny little machines (e.g. Zotac) that are cheap (and probably slow), but if you get serious about your "better than OpenGL but not D3D" experiments you'll probably buy something better.

  9. #9

    Default

    Quote Originally Posted by elanthis View Post
    It's been some time since I've actually had a Linux desktop machine (as you may have already picked up on, I became very disillusioned with Linux as a non-toy desktop OS last year), so I don't have much of an opinion on Mesa right now. What I've read and seen in the code all looks fairly good, but it's hard to say without actually using it for anything serious.
    And who cares about a stupid troll's thoughts? If there's a toy OS it's Windows and this was proven many times (nobody serious puts a GUI into ring 0!). It was also proven you're a dumb troll:

    http://phoronix.com/forums/showthrea...427#post280427

    The sad thing is you're still trolling even after being proven wrong.
    Last edited by kraftman; 08-13-2012 at 05:00 AM.

  10. #10
    Join Date
    Jul 2012
    Posts
    667

    Default

    STOP THE PRESS !!!

    Gabe will be at GT.TV ( http://www.gametrailers.com ) to give an interview !!!
