
Thread: How Valve Made L4D2 Faster On Linux Than Windows

  1. #81
    Join Date
    Jan 2009
    Posts
    462

    Default

    Quote Originally Posted by curaga View Post
    /me waits for gamerk2 to disclose where he works, and the ensuing lulz.
    He is correct but his phrasing is unclear, and he appears to be blaming the wrong entity.

    More security issues are detected in the GNU/Linux ecosystem because the detection methods are better (everyone has source access). He is also correct that the release timeframes are problematic when you have to go through a tiered change-management system. I am in a similar situation, and had to implement a recurring change request for kernel updates (I also did the same to manage SSL certificate expiration/renewal and new versions of Apache httpd) on the 1st and 15th of each month.

    Basically, the problem is not Linux, it is change management, and it can be mitigated via the use of pre-approved, recurring change requests.

    F

  2. #82
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    4,995

    Default

    @russofris

    You missed my point. The lulz are from two things: a defense contractor pretending to be professional yet not even having its own house in order, and from having a known target with vulnerabilities.

    Now that the name is out, someone is bound to do something, and perhaps we'll be seeing that name in the news next week.


    All this has nothing to do with the number of vulnerabilities found or existing in open or closed software. Entirely different debate, that.

  4. #84
    Join Date
    Aug 2012
    Posts
    2

    Default

    Quote Originally Posted by elanthis View Post
    Web, database, email, nas, etc.; I'm too lazy to write out "Internet-enabled multi-function server application hosting OS" every freaking time I need to mention what Linux does well, and "Web" sums it up easily enough for everyone except the nerdcore pedantics. I am very well aware of Linux's database capabilities, given I also used to manage very large databases for several organizations many years back (auto trade industry and government contracting for the most part).
    "Web server OS", you said. "Server OS" is what you're looking for. Three letters fewer; can't be that hard. Linux can do pretty much everything in the enterprise server market, which your post doesn't say. It's not being pedantic. It's just correcting disingenuous statements.
    Last edited by remst; 08-18-2012 at 05:40 AM.

  5. #85
    Join Date
    Apr 2011
    Posts
    309

    Default

    Quote Originally Posted by elanthis View Post
    Which, again, is simply not true in the way you're phrasing it, and doesn't even make sense. There is no such thing as an OpenGL protocol, and there aren't any special GL-specific opcodes that somehow do something magic that the opcodes used by D3D's shader compiler don't do.



    Again, that's not OpenGL. There is no GLSL compiler for the PS3; their GL-derived API uses Cg... which is more or less HLSL. In any case, instruction sets are not a part of OpenGL, GLSL, D3D, or HLSL. Direct3D has all the same properties you're mentioning if you want to stretch things like that, since HLSL is just a high-level language that's compiled into an intermediary and then fed to hardware-specific code generators... just like GLSL.



    This statement about D3D is simply not true. There is absolutely nothing hardware-specific about the D3D API, and implementations exist for x86/x86_64, arm32/64, ppc32/64, sparc, and so on. Most of those are not from Microsoft, and are generally incomplete (just like every implementation of OpenGL is incomplete), but they exist (Wine, Transgaming, Gallium's D3D state tracker, Valve's custom translation layer, etc.).



    Yes, yes there are. You apparently are new here and to the state of Linux graphics in general. Since you obviously don't trust me, why don't you ask the forums in general why OpenGL 3.0 is supported by Mesa but disabled in every distro that ships it?

    1. Yes, there are special instructions and functions specific to OpenGL inside GPUs, each company with its own approach, and that's because it's """Open""".

    2. OpenGL is a royalty-free and patent-free specification (how it works, not code; there is some pre-code). There are third-party patents like S3TC or floating-point textures, but some will be replaced by open standards (S3TC), some will go invalid as patents, and some will not matter (LLVM on-the-fly floating-point textures). Also, someone's particular approach can be patented, like PowerVR's tiled rasterizer.

    3. OpenGL runs on all instruction sets because it's """Open""", not because D3D cannot. A machine with 1000 x86 CPUs and 1000 CPUs of another instruction set can run the same LLVMized rasterizer with just two back-ends.

    4. The PS3 uses Cg as the compiler for OpenGL ES 2 shaders (GLSL). Cg compiles like the HLSL compiler, but on Nvidia cards like the GLSL compiler, using the NV extensions.

    5. It's not that I don't trust you; it's just that I have a way of knowing things from the inside, and you're simply wrong.

    6. D3D and everything closed is a dead end eventually. If you have something """Open""" you can develop for it without losing your property. So Nvidia's next voxel ray-tracing tracer (not rasterizer) will be OpenGL (see id Tech 6).

  7. #87
    Join Date
    Nov 2007
    Posts
    1,024

    Default

    Quote Originally Posted by remst View Post
    "Web server OS", you said. "Server OS" is what you're looking for. Three letters fewer; can't be that hard. Linux can do pretty much everything in the enterprise server market, which your post doesn't say. It's not being pedantic. It's just correcting disingenuous statements.
    You are right, "server OS" is much clearer. My apologies to you for my snarky reply.

    Quote Originally Posted by artivision
    <snip/>
    ... No. You are still massively confused by some basic terminology, outright wrong on several points, and misunderstanding how certain things work. You're not making any new points, so I'm not going to repeat myself any more.

  8. #88
    Join Date
    Apr 2011
    Posts
    309

    Default

    Quote Originally Posted by elanthis View Post
    You are right, "server OS" is much clearer. My apologies to you for my snarky reply.



    ... No. You are still massively confused by some basic terminology, outright wrong on several points, and misunderstanding how certain things work. You're not making any new points, so I'm not going to repeat myself any more.

    You must learn the basics first. For example, a GLSL compiler does not produce byte-code lower than assembly level. There is no "to the metal" access for GPUs. If I remember correctly, only Kepler exposes some atomic operations. For example, MADD is a dual-operand unit; at the same time it is part of the instruction set, and at the same time it's assembly.

  9. #89
    Join Date
    Oct 2007
    Posts
    15

    Default

    Hi all,

    I'd like to begin by saying I am no senior expert at graphics programming and abstraction. However, I have spent numerous hours porting Horde3D to OpenGL ES 2.0 in my spare time and have learned a lot about abstraction constructs, so maybe I can explain a few things.

    Someone mentioned applying good OOP design in C++ via subclassing or virtual functions. These are indeed nice, clean, readable design approaches in an ideal world. If you're a hardcore 'to-the-metal' coder, you'll find that in Ogre3D (and, I assume, Irrlicht) these don't really fit the target hardware. On x86, with an out-of-order CPU and large caches, it isn't a huge issue speed-wise, but on in-order CPU platforms like the PS3 or Xbox 360, virtual calls that go through a vtable have horrendous performance: the vtable lookup hits memory, causing cache misses and many wasted clock cycles. In a codebase making a hell of a lot of calls per frame at 16/33 ms, this is a massive bottleneck. For more info look here (on Unity's abstraction layer, from Aras P):
    http://www.altdevblogaday.com/2011/0...nd-no-virtual/

    The #ifdef approach is not nice to read (for an example, look at Panda3D's OpenGL codepath/abstraction), but it can be advantageous speed-wise if you're doing it correctly. The catch is that you may need to build one executable per backend, since you're compiling for a specific target at build time (which makes sense on platforms with only one API to target). I think id Tech 2 and 3 use the approach outlined in the link above: they define the abstraction as a high-level struct with function pointers inside, then build a library (.dll/.so) for each backend they want to target, whether that's a particular OpenGL version or even a software renderer. The engine chooses the best one using the target system's library-loader calls, grabs the function pointers that way, and then just calls through them while the backend handles the rest, without the engine knowing the specifics. On this subject, Fabien has made excellent breakdowns of how id Tech engines are written: http://fabiensanglard.net/ Doom 3 is particularly interesting: it supports quite a few code paths for OpenGL alone, to take advantage of extensions that some hardware vendors support, like Nvidia's UltraShadow or the depth bounds test (unfortunately there was no real support for OpenGL 2.x, so it's mostly OpenGL 1.x fixed function with a few ARB shader programs on some hardware).

    I can't really comment on how Valve does its DX9-to-OpenGL translation, but I'd guess they have a frontend that matches DX9 and then, under the hood, a 'smart' way to deal with OpenGL's terms and state machine. Horde3D has a similar approach, where the 'RendererDeviceInterface' has more DX10-like names but uses OpenGL's functions, e.g. a 'RenderBuffer' is really a framebuffer object bound to a texture. Some things are unavoidable, such as .dds textures needing to be flipped (D3D and GL differ here): you either remake the assets, or reuse the Windows ones and flip them in the shader or when loading them, which might cause some performance penalties.

    I hope this explains a bit...

  10. #90
    Join Date
    Nov 2007
    Posts
    1,024

    Default

    Quote Originally Posted by artivision View Post
    You must learn the basics first.
    ... That's my line.

    For example a GLSL compiler does not produce byte-code lower than assembly-level. There is not "to metal" access for GPUs.
    I'm not sure what you're trying to say (there's clearly a language-barrier issue here), but it still sounds like you may be confused about how this all works. A GLSL compiler generally produces some kind of intermediary format which is then fed to a codegen pass that does generate real "to the metal" machine code for the GPU execution cores (it has to happen somewhere, after all). Some drivers share this intermediary format with their HLSL compiler (which explicitly generates a Microsoft-defined cross-IHV intermediary format, unlike GLSL); others do not. In either case, all shader languages are at some point compiled to raw machine code, but that code is not specified by any API or language standard, because it varies not only per vendor but even per product cycle, and the APIs are intended to work on all hardware of the appropriate generation. That is why OpenGL mandates GLSL source code as the lowest level in the standard (NVIDIA defines its assembly-program extension, but that itself is just another intermediary format) and why D3D mandates its intermediary format as the lowest level in the standard (basically the same general concept as NVIDIA's GL assembly extension, but part of the API specification rather than a vendor add-on).

    Quote Originally Posted by MistaED
    Someone mentioned applying good OOP-design via C++ using subclassing or virtual functions.
    Most game developers certainly know that you can have good OOP design _without_ excessive subclassing or virtual functions. The wonderful thing about C++ is that it makes static polymorphism almost as easy as dynamic polymorphism, so you can write a compile-time abstraction layer (without nasty #ifdefs) that is still good OOP. Even at the C level, you can write a single API with multiple backends by simply compiling in different translation units that implement the API in different fashions.

    For example, in a graphics API abstraction layer I am using now, there is a header with non-virtualized class definitions and nothing is inlined. There are then multiple sets of .cpp files, e.g. GfxWin32D3D11*.cpp, GfxDarwinGL*.cpp, GfxiOSGLES*.cpp, etc. The compiler inlines the smaller functions in release builds thanks to LTO, and everything else is just a regular function call. Sure, there are platforms that support multiple APIs, but that is very rarely worth even caring about. Each platform has a primary well-supported API which most users' hardware is compatible with, so just use that. And if you're a small-time indie developer, just write for GL ES and ifdef the few bits that need to change to run on regular GL. You probably don't have the time and money to write a high-end fully-featured D3D11 renderer, a D3D9 renderer, a GL3/4 Core renderer, a GL 2.1 renderer, a GLES1 renderer, and a GLES2 renderer... it's not just the API differences, but all the shaders, art content enhancements, testing and performance work, etc. As an indie dev, you'll be making stylized but simplistic graphics, so a single least-common-denominator API is preferable. If you're writing a big engine like Unity or Unreal or whatnot, well... you're going to have a LOT of problems to solve besides the easy stuff like abstracting the "create vertex buffer" API call efficiently.
