Is there any work towards "sanitizing" OpenGL? It is very hard to implement OpenGL bindings properly in languages like Java, because you can pass arguments to OpenGL functions that will crash your program outright. That sort of thing does not fly in Java. But if OpenGL or its wrapper did proper bounds checking on every argument, it would slow everything down. I wonder if there is anything that can be done.
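One mitigation is for the binding itself to validate arguments before crossing the JNI boundary, turning a would-be native crash into a catchable exception. A minimal sketch of the idea, assuming a vertex-array-style call; all names here (`SafeGL`, `vertexPointer`, `nativeVertexPointer`) are hypothetical and not from JOGL, LWJGL, or any real binding:

```java
// Sketch of argument validation a Java OpenGL binding could perform
// before handing a buffer to native code. The native entry point is
// simulated: a real one, declared via JNI, could crash the whole JVM
// on bad input instead of throwing.
class SafeGL {
    // Stand-in for the unchecked native call.
    private static void nativeVertexPointer(int size, float[] data) {
        // a real binding would pass the array into C code here
    }

    // Validates everything the native side would otherwise trust blindly.
    static void vertexPointer(int size, int count, float[] data) {
        if (size < 2 || size > 4) {
            throw new IllegalArgumentException("size must be 2..4, got " + size);
        }
        if (data == null || data.length < size * count) {
            int have = (data == null) ? 0 : data.length;
            throw new IllegalArgumentException(
                "buffer too small: need " + (size * count)
                + " floats, have " + have);
        }
        nativeVertexPointer(size, data); // safe to cross into native code now
    }
}
```

With this pattern, a call like `vertexPointer(3, 2, new float[4])` throws an `IllegalArgumentException` instead of dumping core. The cost is a few comparisons per call, which is the trade-off the original question is asking about.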
Have you ever profiled a 3-D app? Even with a fancy high-end card, your program spends the vast majority of its cycles inside OpenGL calls. This means you can write your app in a slow interpreted language and it really will not slow things down much at all.
3-D apps are all about look, and look is very subjective, so you do a lot of fussing with object properties to get things to look "right". That means lots of recompiling if you write in C or C++. With an interpreted language you can quickly tweak the look and the action until they are the way you want, and THEN port to C or C++ if you really need the speed.
"When writing a program, you should plan to throw the first version away, because you inevitably will, whether you planned to or not" - paraphrasing Fred Brooks in The Mythical Man-Month ("plan to throw one away; you will, anyhow"). The advice is often misattributed to others, including Gerald Sussman, co-inventor of Scheme.
If one follows his advice, then one should always prototype in a nice, easy-to-work-with language. There are enough headaches in developing new code; why give yourself the extra burden of worrying about pointers and memory allocation when you should be focused on how your game is going to play or how your visualization is going to show the tumor cells?
The problem with OpenGL is that even your prototype app will dump core if you mess up, and you will still find yourself in gdb looking at stack traces even though you made an effort to keep your head out of the bits and bytes.
Last edited by frantaylor; 08-04-2009 at 10:58 PM.
OpenGL is very low-level for performance reasons - it is not well suited to being driven directly from a higher-level language, which is why wrappers / bindings exist for it. OpenGL does return error values, however, and these can be checked easily enough. If the error values are not properly reported, that is not the fault of OpenGL but of the binding that wraps it.
Interpreted languages already do an awful lot of error checking internally, so wrappers could do the same quite easily as well.
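A wrapper could fold that checking into every call by draining `glGetError` after each native invocation (GL keeps a queue of error flags, so there may be more than one pending). The sketch below is a hypothetical managed wrapper, not real binding code: the error queue is simulated with an `ArrayDeque` so the pattern runs without a GL context, and only the constant values mirror the real API.

```java
import java.util.ArrayDeque;

// Sketch of glGetError-style polling a wrapper could do after each
// GL call. getError()/recordError() simulate the driver's error
// queue; a real wrapper would call the native glGetError instead.
class GLErrorChecker {
    static final int GL_NO_ERROR = 0;
    static final int GL_INVALID_VALUE = 0x0501;

    private final ArrayDeque<Integer> errorQueue = new ArrayDeque<>();

    // Stand-in for glGetError(): returns and clears the oldest
    // recorded error, or GL_NO_ERROR when none are pending.
    int getError() {
        return errorQueue.isEmpty() ? GL_NO_ERROR : errorQueue.poll();
    }

    // Lets the "driver" side of this simulation queue an error.
    void recordError(int code) {
        errorQueue.add(code);
    }

    // Drain every pending error and surface the first one as an
    // exception - the managed-language idiom for GL's error flags.
    void checkErrors(String callName) {
        int first = GL_NO_ERROR;
        for (int err = getError(); err != GL_NO_ERROR; err = getError()) {
            if (first == GL_NO_ERROR) {
                first = err;
            }
        }
        if (first != GL_NO_ERROR) {
            throw new IllegalStateException(callName
                + " failed with GL error 0x" + Integer.toHexString(first));
        }
    }
}
```

Wrapping every call this way costs one extra round trip into the driver per call, which is exactly the overhead-versus-safety trade the thread is debating.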
Why is it a mistake to get the most out of existing hardware? ATI is free to add the same extensions.
It's not, as long as you're mentally prepared to do things the Right Way (tm) and write distinct code paths for each vendor - read: Intel, nVidia, ATi, etc. - like Microsoft's DirectX afaik does. Wine's devs seem to adamantly believe they can get by with one uniform solution, which is obviously wrong: from what I've heard, they think it's enough to implement the nVidia codepath and leave it to the driver vendors to adapt their drivers. That is very likely to give you reduced performance on non-nVidia hardware even if the driver implementation is as good as nVidia's. Of course, who wants to do three times as much work - or five, if we count the open ATi and nVidia drivers?
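In practice, codepath selection usually keys off the string a real app would get from `glGetString(GL_VENDOR)`. A minimal sketch of that dispatch, assuming substring matching on the vendor string; the enum, class name, and the particular substrings are illustrative, and real vendor strings vary by driver:

```java
// Sketch of per-vendor codepath selection. The vendor string would
// come from glGetString(GL_VENDOR) in a real app; here it is just a
// parameter so the dispatch logic can run without a GL context.
class RendererSelector {
    enum CodePath { NVIDIA, INTEL, ATI, GENERIC }

    static CodePath select(String vendorString) {
        String v = vendorString.toLowerCase();
        // Order matters with crude substring matching: "Corporation"
        // contains "ati", so check nVidia and Intel before ATi/AMD.
        if (v.contains("nvidia")) {
            return CodePath.NVIDIA;
        }
        if (v.contains("intel")) {
            return CodePath.INTEL;
        }
        if (v.contains("ati") || v.contains("amd")) {
            return CodePath.ATI;
        }
        return CodePath.GENERIC; // conservative fallback path
    }
}
```

Each enum value would then select its own rendering path tuned for that vendor's driver - the "three or five times as much work" the post describes.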
I am sure that will be done in time for fglrx with OpenGL 3.2 support. Since NV already provides those test drivers, Wine could adopt the new codepath now, and fglrx users will be happy when ATI manages to provide that too. At the moment it does not matter which way the functions are used, since they only run on NV hardware anyway.