Quote Originally Posted by XorEaxEax View Post
Not really following this, you can create a macro which reports any glError codes during runtime together with which source file and line number the error took place on. You can also make it so that the checking can be disabled and thus not have any performance impact on final builds. Something like this:

#ifndef _GLERROR_H_
#define _GLERROR_H_

#include <stdio.h>
#include <GL/gl.h>

#define _GLERROR_ENABLED_ // comment out to disable glGetError() checking

#ifdef _GLERROR_ENABLED_
#define GLERROR() { GLenum err = glGetError(); if(err != GL_NO_ERROR) printf("GLError:%d in file:%s at line:%d\n",(int)err,__FILE__,__LINE__); }
#else
#define GLERROR()
#endif

#endif // _GLERROR_H_

This would catch faulty parameters during runtime, giving you the file and line in which they occurred, rather than forcing you to wait for some tester to report a texture bug. Granted, it's nicer if the compiler catches the error, but it's not something I would switch languages for.
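To make the mechanism concrete, here is a minimal, self-contained sketch of how such a macro reports file and line. Since the real glGetError() needs a GL context, this sketch substitutes a hypothetical stub variable fake_gl_error (and a counter errors_reported) purely so the behavior can be demonstrated; in real code you would call glGetError() and include <GL/gl.h> instead:

```c
#include <stdio.h>

/* Hypothetical stand-ins for demonstration only: fake_gl_error plays the
 * role of glGetError()'s return value; errors_reported counts firings. */
static int fake_gl_error = 0;
static int errors_reported = 0;

#define GL_NO_ERROR 0   /* matches the value of the real GL constant */

#define _GLERROR_ENABLED_ /* comment out to compile the macro away */

#ifdef _GLERROR_ENABLED_
#define GLERROR() do { \
        int err = fake_gl_error; /* real code: glGetError() */ \
        if (err != GL_NO_ERROR) { \
            printf("GLError:%d in file:%s at line:%d\n", \
                   err, __FILE__, __LINE__); \
            ++errors_reported; \
        } \
    } while (0)
#else
#define GLERROR() do { } while (0)
#endif
```

The do/while(0) wrapper makes the macro behave like a single statement, so it stays safe inside unbraced if/else bodies; when _GLERROR_ENABLED_ is commented out, every GLERROR() compiles to nothing.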
(a) This relies on the programmer to insert GLERROR() calls at proper places.
(b) This only detects errors at runtime, even though they could be detected by the compiler.

These issues might not matter for trivial applications. But what if you are developing a non-trivial game and the faulty code is only executed near the end of level 5, when the player tries to enter a non-essential secret area? It might be weeks before a tester encounters and reports the issue!

Bugs like this do happen and do go unnoticed (ever played any of the Elder Scrolls series?). The larger the application, the higher the chance of obscure bugs, and the higher the value of compile-time error checking. This is precisely why we have moved from assembly to C, to C++, and beyond.

Btw, C#/OpenTK not only detects errors at compile-time, it also inserts GL.GetError() calls automatically when running in debug mode. This is a huge safety net that you can't really appreciate until you actually use it. OpenGL is much smoother in C# than in any other language I've ever used.