
Thread: Intel Hits "Almost There" GL 3.0 Support In Mesa

  1. #31 (Join Date: Feb 2009, Posts: 161)

    Quote Originally Posted by cl333r View Post
    "redundant API functions" and "bloated API" is pretty much the same, just as the term "legacy stuff", most of which has been deprecated, and some stuff is still there, which will likely be gradually fixed. No revolutions, only positive evolution like we have witnessed with GL's transition from 2.1 to 4.2.

    I'm not emotionally attached to GL. I'm saying that cloning/using DX11 is likely to fail for the reasons I listed somewhere above, and that creating something new is too early; see the explanation below.

    GL needs to be fixed, but it's nowhere near pressing enough to say "screw it, we're going to create a new standard or use DX11". That's silly.

    However, I'm in favor of rewriting GL completely by the time next-gen hardware shows up (next-gen as in technology, some real breakthrough, not as in marketing), which might happen within 5 to 15 years. So I'm not saying we should keep upgrading it forever; I'm saying GL is good enough for the time being, and abandoning it over some non-critical issues would be reckless.

    To me, a critical point is when we stop drawing stuff with triangles and slapped-on textures and start using real points/atoms/whatever, which I think might happen in 5 to 15 years. That would be a good enough reason to rewrite it, and DX11 might need a rewrite too; that way the industry isn't forced through an extra rewrite of the API in between.
    Happy to see a more serious post.

    I don't think people want to clone D3D11 (please note that D3D is not the same thing as DX). What people are saying is that OpenGL no longer fits, and that D3D11 is a nice example of what a good API should look like.

    As you say, OGL needs to be fixed. And actually, it's so urgent that it's almost already too late. Why? Because of GP-GPUs. OpenGL and most of Direct3D are tied to the current paradigm of polygon rasterization used in most (all?) games/apps out there. But rasterization has its limits, and currently we waste tons of "brain cycles" figuring out tricks to work around these limitations and still improve graphics. Things like simple reflections on materials are pure "magic" tricks that you don't seem to be aware of.

    We will not be building geometry mimicking the real behaviour of atoms and so on for quite some time (decades, probably...). The next step is realtime raytracing. GP-GPUs are bringing us closer and closer to being able to render very photorealistic frames faster and faster. You don't need very complicated tricks to render perfect reflections, depth of field and other realistic effects. OpenGL covers nothing here. OpenCL does. It's only a matter of time before OpenGL as it is dies. Meanwhile it has wasted people's brain cycles that could have been better invested in making actual functionality (gameplay, for example) better.

    Actually, the term GPU will probably die within your 5-15 year timespan as the GPU merges with the common CPU (look at AMD and Intel with their new "GPUs" integrated on the CPU).

    From then on, people will no longer build software around a (less and less) limited pipeline like current GPU offerings have. People will create their own custom software pipelines that will give "unlimited" flexibility.

    Coming back to today:
    Today we are wasting brain cycles because OpenGL is evolving too slowly.

  2. #32 (Join Date: Oct 2009, Posts: 353)

    Quote Originally Posted by gigaplex View Post
    No, I think cl333r meant a completely different approach along the lines of voxels. What you described was tessellation, which is an enhancement that still uses triangles under the hood.
    Yes, I mean something like voxels, but more than that since I heard they have some severe limitations.

    And no, I'm using threads in Java (with Java's threads) and in C and Gtk (with pthreads), and I didn't find it so mind-bogglingly difficult. I don't think it's that difficult in GL either; you just have to use more brain cycles, by which I mean being more careful and writing additional code. If you think that's too much for you, then don't use GL, go with DX or create DX on Linux, whatever, but for me it's not a critical issue, since I've worked with threads in other environments and it wasn't a big deal. I didn't say: "Jesus Christ! Gtk is not thread safe! We need to rewrite it! OMG! I can't afford to create my threads and manage them - I'm too feeble minded for that!" Did I ever have such an attitude? No. So why do others claim it's a critical issue with GL? Not to mention that in C you actually gain speed because you use extra CPU cores if available, while in GL/DX multithreading uses the same amount of cores no matter what, because it's still being serialized under the hood.
    Last edited by cl333r; 12-22-2011 at 08:56 AM.

  3. #33 (Join Date: Jul 2008, Location: Germany, Posts: 651)

    Quote Originally Posted by Temar View Post
    Why not just implement a DirectX 11 state tracker instead of inventing a new API?
    Because it's from Microsoft...

    Quote Originally Posted by Wilfred View Post
    Isn't there already a d3d 11 state tracker for gallium and X? AFAIK nobody uses it.
    It is not finished; it's still incomplete...

  4. #34 (Join Date: Nov 2007, Posts: 1,024)

    Quote Originally Posted by Temar View Post
    Why not just implement a DirectX 11 state tracker instead of inventing a new API?
    Politics. Linux distros might start carrying it (or might not -- look at Mono), but you'll likely never see it implemented on iOS or OS X. Since game developers care FAR more about those platforms than they do about Linux, the end result will be almost no real improvement for portable graphics.

    Also, D3D is C++, and (again, mostly due to politics) that means certain OSes or developers are going to shun it even if Microsoft LGPL'd the whole stack today.

    Quote Originally Posted by cl333r
    "Proper threading" "easy threading" is a reiteration of the same issue under different names - that you can use threads in GL if you use extra brain cycles
    These are not the same. It is _impossible_ to thread GL the same way you do D3D. Impossible. Unfixable. The workarounds possible in GL are not complete and do not give you all the same features and advantages D3D11 offers. This cannot be changed without completely breaking the API and making a new one that has explicit context object handles passed to all resource API calls, and even if we called that "OpenGL" it would still be an entirely new API, incompatible with what OpenGL is today.

    You _can_ thread GL, but only in a way that mandates the use of global process-wide locks and tons of explicit state resetting and management inside those locks, and hence it is not possible to get the same level of performance as D3D (and performance is the whole damn reason we want threading). Using the DSA extension (which is a mess, and just a giant pile of hacks to work around GL's dumb-as-mud API) would fix part of that, but not all of it.
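
    To make that concrete, here is a rough C++ sketch of my own (not from any real engine) of what "threading" GL from a worker thread tends to look like today, assuming a WGL setup; g_glLock, g_hdc and g_ctx are hypothetical globals:

    Code:
    // Hypothetical sketch: a process-wide lock, MakeCurrent churn and manual
    // state restoration just to touch GL from a second thread.
    #include <windows.h>
    #include <GL/gl.h>
    #include <mutex>

    std::mutex g_glLock;   // global lock that every GL-touching thread must take
    HDC   g_hdc;           // shared drawable (assumed to be set up elsewhere)
    HGLRC g_ctx;           // the one context everybody fights over

    void upload_texture_from_worker(GLuint tex, int w, int h, const void* pixels)
    {
        // The render thread has to take the same lock and release the context
        // first, otherwise wglMakeCurrent below fails - i.e. everything serializes.
        std::lock_guard<std::mutex> guard(g_glLock);
        wglMakeCurrent(g_hdc, g_ctx);

        GLint prevBound = 0;                           // save state we are about to clobber
        glGetIntegerv(GL_TEXTURE_BINDING_2D, &prevBound);

        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        glBindTexture(GL_TEXTURE_2D, prevBound);       // restore state for the render thread
        wglMakeCurrent(nullptr, nullptr);              // give the context back
    }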

    It's also worth noting that the OpenGL API has other deficiencies that make it impossible (again, literally impossible) to achieve the same level of performance as D3D. The state management APIs are one, although they can be _partially_ worked around with the DSA extension or a possible future OpenGL API addition. The mutable nature of objects is another, which is not fixable. The object naming scheme (integer ids for all objects) is another design foible that requires extra driver-side work that shouldn't be necessary in any sensibly designed API.
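
    For what it's worth, here is a small sketch of the state-management point, assuming a context where EXT_direct_state_access is exposed and loaded through GLEW; the DSA function name is from that extension, the rest is illustrative:

    Code:
    #include <GL/glew.h>   // assumed loader exposing the EXT_direct_state_access entry points

    void set_filtering_classic(GLuint tex)
    {
        // Classic "bind to edit": mutating an object means trampling a shared
        // bind point, so well-behaved code saves and restores it every time.
        GLint prev = 0;
        glGetIntegerv(GL_TEXTURE_BINDING_2D, &prev);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glBindTexture(GL_TEXTURE_2D, prev);
    }

    void set_filtering_dsa(GLuint tex)
    {
        // DSA: the object is named directly, no bind-point side effects.
        glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    }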

    I know the Linux/FOSS users are still getting hard-ons about _finally_ maybe having OpenGL 3.0, but in the rest of the graphics programming world many people were outright pissed when OpenGL 3.0 initially arrived, because everybody and their mother desperately wanted (and expected, after promises from Khronos) to get a new object-based API, a.k.a. Longs Peak. All of the complaints with OpenGL that led to the Longs Peak proposal are still 100% valid today. Only nobody at Khronos is even pretending to care anymore.

    Longs Peak was not simply a "make a nicer API" deal. It was actually going to fix the problems I outlined above that leave OpenGL incapable of matching D3D's performance and features. These articles from Khronos about the new API design spell out those problems in better detail and show how a completely new API could fix them: http://www.opengl.org/pipeline/article/vol002_3/ and http://www.opengl.org/pipeline/article/vol003_1/. An example of their proposed API (which I personally still don't think is as nice as D3D's, but at least it would not be objectively worse like the current API): http://www.opengl.org/pipeline/article/vol003_4/.

    But hey, let's all pretend that the rest of the graphics industry and even Khronos themselves haven't directly countered your complaints about my posts. I'm sorry for posting "emotional" hate posts about OpenGL, I'm just a Microsoft fanboy, and I can't possibly be speaking from any real experience or in-depth knowledge of games programming. You win the Internet, sir. </sarcasm>

    Quote Originally Posted by cl333r
    Yes, I mean something like voxels, but more than that since I heard they have some severe limitations.
    Thank you for posting insightful commentary about graphics technologies that you've "heard about." Your knowledge and insight are invaluable to the graphics community. I'll talk to some colleagues about getting you an invitation to speak at SIGGRAPH next year about this and other exciting innovations you've seen someone post about on Reddit once. </more-sarcasm>

  5. #35 (Join Date: Oct 2007, Location: Under the bridge, Posts: 2,129)

    Quote Originally Posted by cl333r View Post
    And no, I'm using threads in Java (with Java's threads) and in C and Gtk (with pthreads), and I didn't find it so mind-bogglingly difficult. I don't think it's that difficult in GL [...]
    Aha, it seems that you are not aware of how OpenGL deals with threading. Elanthis covered this at some length, but here is a breakdown of the problems:

    OpenGL rendering is routed through an implicit, thread-local OpenGL context (i.e. you can only issue OpenGL commands on the single thread that 'owns' the context). This means you can use multiple threads in two ways:

    (a) move tasks such as texture loading to a background thread but issue all OpenGL commands from a single thread. This works quite well and it's something every game developer worth his salt does.
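
    A minimal C++ sketch of (a), with decode_image() standing in for whatever image library you actually use (the name is made up; every GL call stays on the render thread):

    Code:
    #include <GL/gl.h>
    #include <mutex>
    #include <queue>
    #include <vector>

    struct PendingTexture { int w, h; std::vector<unsigned char> rgba; };

    PendingTexture decode_image(const char* path);   // hypothetical decoder, no GL inside

    std::mutex                 g_queueLock;
    std::queue<PendingTexture> g_ready;   // filled by the loader, drained by the GL thread

    void loader_thread(const char* path)
    {
        PendingTexture p = decode_image(path);        // pure I/O + decoding, safe on any thread
        std::lock_guard<std::mutex> lock(g_queueLock);
        g_ready.push(std::move(p));
    }

    void gl_thread_tick()   // called each frame on the one thread that owns the context
    {
        std::lock_guard<std::mutex> lock(g_queueLock);
        while (!g_ready.empty()) {
            PendingTexture p = std::move(g_ready.front());
            g_ready.pop();
            GLuint tex = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, p.w, p.h, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, p.rgba.data());
        }
    }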

    (b) create multiple OpenGL contexts, one for each thread you wish to use OpenGL on. This is where the fun starts! OpenGL contexts don't share resources by default (i.e. a texture created on context #1 is not available on context #2), making this approach pretty much useless on its own. That's not good enough, of course, so platform vendors offer ways to share resources (wglShareLists, glXShareLists etc.) - the problem is that the exact behavior of these functions is ill-defined. Different drivers behave differently: some drivers work; others work but only if you don't create any resources prior to calling *ShareLists (or the command fails); some others don't support context sharing at all (e.g. Intel); and a few claim to support sharing but crash or behave weirdly if you try to use it.

    What's worse is that even when a driver claims to support sharing, it may still use global locks internally, making this slower than using just a single thread. An actual instance I've encountered: thread #1 renders, thread #2 compiles a new shader in the background. Even though this is a new shader (not used anywhere yet), thread #1 is stalled while thread #2 is compiling. Not good, not good at all.
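
    As an illustration, here is roughly what (b) looks like with GLX (sketch only; whether the glCompileShader below actually runs in parallel, serializes behind a driver lock, or fails outright depends entirely on the driver, as described above):

    Code:
    #define GL_GLEXT_PROTOTYPES   // Mesa-style prototypes assumed for glCreateShader & co.
    #include <GL/gl.h>
    #include <GL/glx.h>

    GLXContext make_worker_context(Display* dpy, XVisualInfo* vis, GLXContext renderCtx)
    {
        // Third argument is the share list: objects created on one context are
        // supposed to become visible on the other - if the driver honours it.
        return glXCreateContext(dpy, vis, renderCtx, True);
    }

    void worker_compile(Display* dpy, GLXDrawable drawable, GLXContext workerCtx,
                        const char* src)
    {
        glXMakeCurrent(dpy, drawable, workerCtx);   // this thread gets its own current context
        GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(sh, 1, &src, nullptr);
        glCompileShader(sh);                        // may still stall the render thread
        glXMakeCurrent(dpy, None, nullptr);
    }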

    What does D3D11 do differently? It offers a third, more efficient way to do multi-threading:

    (c) thread #1 renders, threads #2-n create queues of commands that are sent to the first thread for execution, keeping all cores happy. There is no way to do this in OpenGL (the only candidate, display lists, was removed in GL 3.1, and it wasn't flexible enough anyway).
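
    In D3D11 terms, (c) is deferred contexts and command lists; a minimal sketch, assuming device and immediateCtx come from D3D11CreateDevice elsewhere:

    Code:
    #include <d3d11.h>

    ID3D11CommandList* record_on_worker(ID3D11Device* device)
    {
        ID3D11DeviceContext* deferred = nullptr;
        device->CreateDeferredContext(0, &deferred);      // one deferred context per worker

        // ... issue normal state/draw calls on 'deferred' here ...

        ID3D11CommandList* cmdList = nullptr;
        deferred->FinishCommandList(FALSE, &cmdList);     // bake the recorded commands
        deferred->Release();
        return cmdList;                                   // handed over to the render thread
    }

    void replay_on_render_thread(ID3D11DeviceContext* immediateCtx,
                                 ID3D11CommandList* cmdList)
    {
        immediateCtx->ExecuteCommandList(cmdList, FALSE); // cheap replay on the immediate context
        cmdList->Release();
    }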

    What's worse, this is impossible to implement as an extension on top of the current OpenGL semantics (mutable objects, causing all kinds of undefined behavior). It requires an actual rewrite - something that Khronos is loath to do.

    It's sad that the only cross-platform 3D API is in such a sorry state. Unless this changes in a fundamental way, it pretty much ensures Microsoft's dominance in 3D gaming (which kinda puts that "5 to 15 years until we see something different" comment into a new light).

  6. #36

    Quote Originally Posted by Sidicas View Post
    id Software wrote a game in OpenGL called "RAGE", but it requires OpenGL 3.3 as a minimum. id Software really didn't have any hopes of releasing the game for Linux any time soon because of the lack of graphics driver support (and graphics driver performance problems). It was released for Mac OS X, Windows, and the PS3 game console. It's a GREAT game.
    Released for MacOS? Really? Where?

    So what about libs and drivers on MacOS?
