Over on Shamus Young's blog, he recently said this when talking about a programming project of his:
One of the things I like about this project is that it is uncluttered by goofy, awkwardly-designed libraries.

Shamus is working on a procedurally generated 3D world using OpenGL. Now, I know what he means. He is trying to avoid relying on things like graphics and physics engines, or 3D model importers, or any of a number of other tools that often have asinine and byzantine APIs. I am, in fact, trying to do the same thing in my project (in my case, it is because I am using this project to learn graphics programming).

However, I have to object: libgl is a goofy, awkwardly-designed library. Of course, it has to be, in order to do what it does. Still, code like this:

[sourcecode language="cpp" gutter="false"]
glBegin(GL_QUADS);
glColor3f(r1, g1, b1);
glVertex2i(x1, y1);
glColor3f(r_mid, g_mid, b_mid);
glVertex2i(x2, y1);
glColor3f(r2, g2, b2);
glVertex2i(x2, y2);
glColor3f(r_mid, g_mid, b_mid);
glVertex2i(x1, y2);
glEnd();
[/sourcecode]

is pretty goofy. Anyone with experience writing GUI code using a windowing toolkit would be appalled to learn that this is how you draw a rectangle. A more reasonable API would let you create a 'rectangle' object, then set things like its x/y position, its width and height, its colour, and so on. Then you might make a call like:

[sourcecode language="cpp" gutter="false"]
window->add(rectangle);
window->update(); // to draw the window
[/sourcecode]
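OpenGL offers nothing like that, but here is a minimal sketch of the idea (hypothetical Rectangle and Window types in plain C++, not modelled on any real toolkit):

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// A hypothetical retained-mode API: shapes are objects that carry their
// own state, and the window redraws whatever it holds. This is a sketch
// of the idea, not any real toolkit's interface.
struct Color { float r, g, b; };

struct Rectangle {
    int x, y;           // top-left corner
    int width, height;
    Color color;
};

class Window {
public:
    void add(const Rectangle& rect) { shapes_.push_back(rect); }

    // Redraw everything; in a real implementation, this is the one place
    // that would issue the glBegin/glColor3f/glVertex2i sequence per shape.
    void update() const {
        for (const Rectangle& r : shapes_)
            std::printf("rect at (%d, %d), %dx%d\n", r.x, r.y, r.width, r.height);
    }

    std::size_t shapeCount() const { return shapes_.size(); }

private:
    std::vector<Rectangle> shapes_;
};
```

The caller describes what it wants drawn and never touches vertices or call ordering; all of that bookkeeping lives behind update().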

But in OpenGL, we have to tell OpenGL that we want to start drawing a polygon, then tell it the colour and position of each vertex on the polygon, and then tell it when we're done, all with different function calls. And gods help you if you get them out of order:



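For instance, suppose I hand OpenGL the corners in what seems like natural reading order (top-left, top-right, bottom-left, bottom-right):

[sourcecode language="cpp" gutter="false"]
glBegin(GL_QUADS);
glVertex2i(x1, y1); // top-left
glVertex2i(x2, y1); // top-right
glVertex2i(x1, y2); // bottom-left
glVertex2i(x2, y2); // bottom-right
glEnd();
[/sourcecode]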
What happened here? I told OpenGL to draw a rectangle, and I gave it the top-left vertex first, then the top-right, then the bottom-left, then the bottom-right. This is a pretty obvious way to think about listing the points on a rectangle, right?

Except that OpenGL actually constructs the polygon's outline from the order you list the vertices: each vertex is connected to the next one in the list. Hand it the corners in a zigzag, and it happily draws a bowtie instead of a rectangle.

So, OpenGL did exactly what I told it to do; we just weren't speaking exactly the same language. OpenGL requires that I list the vertices in clockwise (or counter-clockwise) order around the edge of the polygon.
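You can see the effect without touching OpenGL at all. The shoelace formula gives the signed area of the polygon you get by connecting a list of points in order; trace the corners around the edge and you get the rectangle's full area, but trace them in the 'obvious' zigzag order and the bowtie's two triangles cancel out. A quick sketch in plain C++:

```cpp
// Shoelace formula: signed area of the polygon formed by connecting
// the given points in order (closing back to the first point).
struct Pt { int x, y; };

double signedArea(const Pt* pts, int n) {
    long twice = 0;
    for (int i = 0; i < n; ++i) {
        const Pt& a = pts[i];
        const Pt& b = pts[(i + 1) % n];
        twice += static_cast<long>(a.x) * b.y - static_cast<long>(b.x) * a.y;
    }
    return twice / 2.0;
}
```

For a 2x1 rectangle with corners (0,0), (2,0), (0,1), (2,1): listing them around the edge, (0,0), (2,0), (2,1), (0,1), gives a signed area of 2, the whole rectangle with a consistent winding; listing them top-left, top-right, bottom-left, bottom-right gives 0, because the two halves of the bowtie wind in opposite directions and cancel.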

OpenGL is an iceberg, though, and this is just the tip. There are Display Lists, Vertex Buffer Objects, shaders, 3D objects, normal vectors, projection matrices - it is a very complex beast, and all of that complexity is exposed directly to the user. So why does Shamus knock all those other libraries, but give OpenGL a pass?

The answer is that OpenGL *has* to be this complicated. The reason? OpenGL is, in effect, a thin interface to the graphics card. It's easy to miss the ramifications of that, but they're huge: OpenGL calls map closely onto operations the graphics hardware performs, and when you call glVertex2i(), that vertex data is on its way to your video card. OpenGL is fast; it's what lets us have advanced graphical environments that change, that we can interact with, rendering in realtime. Consequently, OpenGL's API is complicated; it has to be, to let you take full advantage of what the hardware is designed to do.

That doesn't make it any less goofy, though.