One of the biggest headaches for me is debugging. As the author says, the facilities for reading state back out are often questionable. Even where they're not, I'd rather not spend all my time rolling my own custom OpenGL debugging tools. I'd love a cross-platform OpenGL debugger--even if it only handled basic stuff.
For example, when nothing renders, I don't want to waste an hour staring at my code with no direction until I realize I forgot a call to glEnableVertexAttribArray. Instead, I'd like to boot up my trusty debugger and go through a sane process of narrowing down the problem, like I do for just about every other class of bug.
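For anyone who hasn't hit this particular trap, here's roughly what that setup looks like; the names are placeholders and the last attribute call is the one that's easy to forget (nothing errors, nothing draws):

    /* typical vertex attribute setup for attribute location 0 */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);           /* vbo: a previously created buffer */
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void *)0);
    glEnableVertexAttribArray(0);                 /* forget this and the draw below silently renders nothing */

    glDrawArrays(GL_TRIANGLES, 0, vertex_count);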
Also a sane way to debug shaders would be fantastic. The usual advice is to write debug info out as color values. The fact that anyone considers that a healthy debugging strategy just illustrates how far behind graphics programming is in terms of developer friendliness.
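For the uninitiated, the "debug strategy" in question amounts to something like this (a sketch; the varying name is made up): you overwrite the output color with whatever value you want to inspect and squint at the pixels.

    /* hypothetical fragment shader, embedded as a C string, that visualizes a
       normal instead of doing real shading -- the state of the art in shader "printf" */
    const char *debug_frag_src =
        "#version 120\n"
        "varying vec3 v_normal;\n"
        "void main() {\n"
        "    gl_FragColor = vec4(normalize(v_normal) * 0.5 + 0.5, 1.0);\n"
        "}\n";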
I don't know if it's better on other APIs. OpenGL is the only one I use, because I never have occasion to develop Windows-only apps.
You should definitely check out bgfx (https://github.com/bkaradzic/bgfx). It abstracts various graphics APIs for you, making development and debugging markedly easier. It also stops you from having to deal with the complex state-machine known as GL.
If you're on an Nvidia GPU, check out Nvidia Nsight. And this is also one problem with GL: when good debugging/profiling tools exist, they often only work on one OS and/or for specific GPUs (and I hope this is where VOGL will come in and fix that).
Other than that, I don't even think of OpenGL as a single standard anymore. It's more like "Nvidia GL", "AMD GL", "Intel GL", "Apple GL", etc... there is a core set of functionality which works across all implementations (and which could be cleaner), but if performance is more important than easy portability, you need to implement driver-specific code paths anyway. Whether this is good or bad, I haven't yet completely made up my mind. At least on GL you have an environment where GPU vendors can experiment and compete through extensions.
> if performance is more important than easy portability, you need to implement driver-specific code paths anyway
I think that's fine, because as you said, it encourages GPU vendors to innovate. Plus, it probably affords more of an opportunity to squeeze every last drop of performance out of each card.
On the other hand, I'd start to get upset if we had to write card-specific code just to do the absolute basics. (Which isn't the case right now.) There are tons of apps that need only a tiny fraction of the performance a GPU offers. For those apps, it would be insane having to maintain multiple codebases just to, say, draw a box with a texture and Lambert reflectance.
I just gave this a try. Unfortunately, when it runs my app, the app crashes at some point; the console briefly displays what went wrong, but then the console exits. I can't figure out whether the debugger kept a log somewhere.
Anyway, this seems closed-source. Are there any open-source equivalents?
* GL extensions are written as diffs vs. the official spec, so if you're not an OpenGL specification expert it can be extremely difficult to understand some/many extensions.
This is spot on. I was thinking last week about writing a webapp that displayed the spec with a list of extensions you could check to have them merged in.
Then I decided I was yak shaving and went back to actually working on my project.
That's really useful in itself, but no, I mean something that lets you read the actual OpenGL spec with the extension text patched in.
If you skim the text of an extension [1], you'll see an awful lot of this:
> Add a new section "Buffer Objects" between sections 2.8 and 2.9:
It'd be very nice to read those sections without having to have the full spec open in another window. Especially because some other extension you're using might be modifying the text as well.
After spending a few months porting a Direct3D9 game to work on OpenGL 2.x, this article certainly resonates with me. It's quite easy to get code working nice and fast on one driver, while on another it stalls and drops down to 1 fps because you didn't anticipate that the driver developer designed their implementation around different performance characteristics.
Meanwhile in Direct3D9, the game still runs smooth at 60+ fps on all major drivers on most recent hardware. Granted, it's a bit of an Apples vs Oranges comparison, but it certainly causes a lot of headaches especially when you need to go so far as to modify the art so it batches better.
There is also still a lot of conflicting information on how best to use OpenGL. OpenGL 3.x certainly helped by consolidating a lot of stuff that was in extensions, but in my case it's not really that good for me, as I still have to put up with the land of OpenGL 2.x.
> OpenGL 3.x certainly helped by consolidating a lot of stuff that was in extensions, but in my case it's not really that good for me, as I still have to put up with the land of OpenGL 2.x.
Ha, where I work we still get support tickets about our ancient GL1.5 renderer from time to time. If only we could drop it.
And then I get home and see people on /r/gamedev suggesting that OpenGL 3.2 is outdated and not even worth supporting anymore. I even got downvoted for saying that my less-than-three year old laptop ran GL3.2. Maybe I'm just in need of an upgrade...
> Mantle and D3D12 are going to thoroughly leave GL behind (again!) on the performance and developer "mindshare" axes very soon.
Huh? Performance, maybe, but how is "mindshare" being measured here?
DirectX "beat" OpenGL a long time ago - does the author claim OpenGL beat its way back to the top? If so, I can only assume that was due to GL on mobile platforms. But those mobile platforms - iOS and Android - still use GL, and are growing? How can D3D12 and Mantle beat those when they don't even run on those platforms, while Windows - the platform they do work on - is anyhow already under DirectX control?
Furthermore, GL is seeing another area of growth through WebGL, which now works on even Microsoft's browser.
Am I missing something? That mindshare statement seems completely off base.
He's referring to mindshare among professional (game) engine developers - those people who try to get maximum rendering performance out of multiple platforms. His statement is totally on target for reasons I enumerated with a blog article on this subject back in December: http://inovaekeith.blogspot.com/2013/12/why-opengl-probably-...
As for WebGL, Apple intentionally disables WebGL support on iOS - except in iAds - so game devs can't circumvent the App Store. Since WebGL is needlessly restricted to the feature set of the OpenGL ES specification, its usefulness is severely limited. WebGL also has many other problems, such as the fact that JavaScript is slow as hell.
I read the blogpost, but it seems to focus on why OpenGL isn't good - not on measuring mindshare of OpenGL? Yes, perhaps Mantle and DirectX are better, and that might in theory lead to more mindshare. But the fact remains that OpenGL is standard on mobile and on the web, which should increase OpenGL's mindshare.
Yes, WebGL is disabled on iOS. It's disabled on OS X desktop too, for now. Hopefully that will change soon.
> JavaScript is slow as hell.
I wouldn't say that 67% of native speed is "slow as hell", and that's where things currently stand. Perhaps you have a specific workload in mind that happens to be slow - how did you measure?
The blog states that if the Mantle specification is adopted by other manufacturers, particularly on mobile, then it will become the new cross platform standard and OpenGL will cease to be relevant. 67% of native speed is astronomically slow in the world of real-time games where improving per-frame performance by 3ms is considered a huge win. Getting all of your nice special effects and post-processing implemented while still running at a smooth 60fps is very difficult - especially on mobile.
I see, thanks. Yes, if that happens it could certainly upset OpenGL's position. What, though, is the likelihood of that happening? Have there been signs of cross-vendor Mantle adoption, and of support by the OS vendors Google and Apple?
> Getting all of your nice special effects and post-processing implemented while still running at a smooth 60fps is very difficult
And has very little to do with CPU speed. There are some games where CPU speed is important but there's a large subset (I'd be willing to bet 95%) of AAA games where only GPU speed is important and they run just fine on low end CPUs.
The whole point of D3D12 and Mantle is to decrease driver overhead, improve multi-threading support, and increase the number of per-frame draw calls that can be made. Those are all CPU performance improvements so I think it's safe to say that CPU speed is quite important or else AMD and Microsoft wouldn't be going through the trouble.
There is a reason why review sites use games for CPU benchmarks. As you can see, some games are indeed GPU bound. But the vast majority are actually a bit of both, e.g. one section of a game is purely CPU bound while another is purely GPU bound. So improving either component will increase FPS.
My experience with low-latency programming suggests that often the biggest cost is memory management -- garbage collection. In some cases the GC compactor may even be limited by CPU, although usually it's the collection pauses themselves that hurt the most.
I kind of inserted "because Valve, Steam, and Linux" into his statements about OpenGL versus DirectX, and then I was better able to understand where he's coming from.
I mean, you don't really think he has no idea what he's talking about, right? The game dev world is really, really big. It's worth noting he worked at Valve for some time, so he's coming from the Gabe Newell world.
If I had to guess, I'd say Linux gaming (and by extension the SteamBox) could suffer because of OpenGL. Right now we have Xbox/Windows. From what I hear, Valve intends to create some SteamBox/GNU-Linux competition. If OpenGL sucks compared to D3D and Mantle (more difficult to develop in, or just less efficient at runtime), this competition may not take off. Also, if porting existing D3D (or PS3/PS4) games to OpenGL is hard, it likely won't happen.
Of course, I'm only talking about smokin' fast and shiny AAA gaming, where D3D currently dominates. OpenGL owns mobile and the web, but they don't compete in the same category.
I don't get it: here's what Wikipedia currently says:
> To support backwards compatibility, the old state based API would still be available, but no new functionality would be exposed via the old API in later versions of OpenGL. This would have allowed legacy code bases, such as the majority of CAD products, to continue to run while other software could be written against or ported to the new API.
CAD dinosaurs would still have their old API. What was the matter, then?
The HTML5 canvas was created by Apple alone for use in their “Dashboard” product, and later ported to their Safari browser. Then other browsers decided to adopt it as well, because it supported a number of use cases which the browsers didn’t have another good solution for.
This is pretty much the opposite of “design by committee”.
Yeah, if only the HTML5 canvas were "designed by committee" (it was not, it was an Apple design) and it were in any way useless (huh? what's the problem with it?)
I think it's instructive to compare with OpenCL. It was designed to be very similar to GL, but didn't have a backwards-compatibility legacy. It's still not exactly friendly, and some of the nicer bits are optional, but it does serve the purpose of being a hardware abstraction layer for parallel processing. It strikes me as being many times simpler than GL, even though its fundamental purpose is only a bit simpler.
OpenCL is actually much worse. It requires way more querying of what the hardware does in order to use it, rather than abstracting that away like other GPGPU libraries do.
While OpenGL can be full of extensions, if you ignore the extensions and stick to the base, it's pretty easy to write a program that doesn't have to care what system it's on.
OpenCL requires a lot of querying the hardware because that information can't be abstracted away without removing the ability to do effective performance tuning. If it were possible to make effective use of a GPU given a well-tuned BLAS, OpenCL wouldn't have been created in the first place.
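To make that concrete, the tuning information typically gets pulled out with clGetDeviceInfo; a minimal sketch (the device handle is assumed to have been obtained already, error handling omitted):

    #include <CL/cl.h>

    size_t   max_wg_size;    /* largest work-group this device accepts */
    cl_ulong local_mem;      /* on-chip local memory per work-group */
    cl_uint  compute_units;  /* number of compute units */

    clGetDeviceInfo(device, CL_DEVICE_MAX_WORK_GROUP_SIZE, sizeof(max_wg_size), &max_wg_size, NULL);
    clGetDeviceInfo(device, CL_DEVICE_LOCAL_MEM_SIZE, sizeof(local_mem), &local_mem, NULL);
    clGetDeviceInfo(device, CL_DEVICE_MAX_COMPUTE_UNITS, sizeof(compute_units), &compute_units, NULL);

    /* work-group sizes and local-memory tiling are then chosen per device --
       exactly the information a fully abstracted API would have to hide */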
OpenGL extensions are likewise absolutely necessary to make full use of the hardware, but the extensions mess is way worse than the range of optional OpenCL features.
You don't need full use of the hardware most of the time. Sure, you get some flashy extras, but few apps/games, if any, need those extensions to be fun and beautiful. I know of no games that require extensions to run. Those that use them are rarely all that much better with them than without.
So my point stands, you can generally use GL without worrying about hardware. Not so with CL.
I truly hope the next GL version will be similar to OpenCL. OpenCL has no mutable global state, so it's thread-safe (one function in OpenCL 1.0 had some, but it was swiftly deprecated and is no longer used).
Most importantly, there is no global binding. Instead of binding, one just sets kernel arguments and executes the kernel.
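Roughly, the difference looks like this (a sketch with placeholder names; the familiar GL pattern is shown in the comment for contrast):

    /* OpenCL: no "currently bound" object -- the state travels with the call */
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &input_buf);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &output_buf);
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_size, NULL, 0, NULL, NULL);

    /* OpenGL instead mutates global bindings before the draw:
         glBindBuffer(GL_ARRAY_BUFFER, vbo);
         glBindTexture(GL_TEXTURE_2D, tex);
         glUseProgram(prog);
         glDrawArrays(GL_TRIANGLES, 0, n);                          */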
So is there a reason graphics programming is still in the dark ages? This sounds like assembly programming, where you have N different add instructions with slightly different semantics for how flags are set.
The primary reason is that these APIs are very thin abstractions over the actual assembly-level instructions sent to the graphics card. It's certainly possible to write a higher-level abstraction; it's been done many times, in the form of game engines and so forth. The risk of doing so is that you narrow down the possible types of visual effects you can implement, and historically, the implementation of unique and original visual effects was a big part of the way AAA games competed with each other.
In principle there's no longer any reason why it has to work this way, because now we have shaders. These are a more domain-specific tool than a C API, and can thus safely present large cross-sections of graphics card functionality in a higher-level way without taking power away from the graphics programmer. So, for example, people modding Minecraft can introduce their own visual effects stages by hotloading shaders into the standard Minecraft graphics pipeline.
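The hotloading part is cheap to build precisely because shader compilation is just another runtime call; a minimal sketch (file reading and error reporting elided, new_source is assumed to be the freshly loaded GLSL text):

    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &new_source, NULL);   /* new_source: const GLchar* read from disk */
    glCompileShader(shader);

    GLint ok = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (ok) {
        /* attach to a fresh program, relink, and swap it into the pipeline */
    }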
EDIT: I haven't talked about performance. For game developers, rendering a frame quickly is as important as having total control over the output of the rendering process. For the most part, building an API higher-level than OpenGL also means dictating a particular scene graph structure, as with OS-specific window system APIs. If the API imposes a certain way of organizing your scene graph, this can have serious impacts on your game's performance, because the scene graph traversal is optimized for one type of scene and you're using it for another. I'm not sure I'm simplifying this explanation very well, but that's the gist of it.
I don't disagree with what you're trying to say, but this is my line of work so I have a few nits, sorry.
> these APIs are very thin abstractions over the actual assembly-level instructions sent to the graphics card
If only this were the case... The drivers end up doing quite a bit, to the point where most renderers spend most of their time waiting for the driver to return. This is part of the reason Mantle was a big deal, and why DX12 and GL5 promise to be lower level.
> In principle there's no longer any reason why it has to work this way, because now we have shaders
Shaders don't replace all the fixed function parts of the GPU, and they don't try to. Maybe someday they'll replace more of it, but there are still several fixed function stages in the rendering pipeline.
Not to mention, setting GPU state will pretty much always be completely independent from shaders, and required for many visual effects. D3D or CG FX files try to abstract this, but I don't think they're popular anymore.
> For the most part, building an API higher-level than OpenGL also means dictating a particular scene graph structure...
The general premise that any layer on top of the driver will limit on the way you structure the renderer is accurate, but scene graphs are an antipattern inside a modern renderer. Plenty of engines use them for higher level organization, but keep it far away from the renderer. All that pointer chasing murders the cache.
And an "assembly-level graphics API" is exactly what is needed at the moment. The high-level abstractions are taken care of by game engines like Unity or UE, or programming frameworks for a specific scenario (e.g. 2D UI rendering vs. 2.5D side-scroller games vs. AAA first-person-shooters). One problem with GL (also D3D) is that it provides fairly highlevel abstractions which don't exist on the hardware level and either limit the flexibility or performance (e.g. OpenGL has "texture objects", when the GPU actually just sees a couple of sampler attributes and a blob of memory). Mantle provides exactly that simplified view on the GPU, and as a result is a much smaller API (but may be harder to code to).
Well the author also contradicts themselves. For example, this complaint:
> Drivers should not crash the GPU or CPU, or lock up when called in undefined ways via the API
runs directly counter to this complaint:
> They will not bother to re-write their entire rendering pipeline to use super-aggressive batching, etc. like the GL community has been recently recommending to get perf up.
Error checking = higher per call overhead = slower performance.
If you think there should be a way to turn on a sort of safe-mode or something where the driver holds your hand that sounds reasonable, except for this:
> I've seen major shipped GL apps with per-frame GL errors. (Is this normal? Does the developer even know?)
In other words, the error checking the driver already does is largely ignored. Adding additional error checking at the cost of performance? That won't help at all.
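For reference, the usual compromise today is an opt-in wrapper the developer compiles out of release builds; a minimal sketch (the macro name is made up, and the GL header include depends on your platform/loader):

    #include <stdio.h>

    #ifdef DEBUG_GL
    #define GL_CHECK(call) do { \
            call; \
            GLenum gl_check_err_ = glGetError(); \
            if (gl_check_err_ != GL_NO_ERROR) \
                fprintf(stderr, "%s -> 0x%x (%s:%d)\n", #call, gl_check_err_, __FILE__, __LINE__); \
        } while (0)
    #else
    #define GL_CHECK(call) call
    #endif

    /* usage: GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, vbo)); */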
He's referring to the difference between the D3D debug layer (http://blogs.msdn.com/b/chuckw/archive/2012/11/30/direct3d-s...), which can be toggled on/off for debug/release builds (or even forced on in the driver), and OpenGL's near-worthless glGetError() and debug output callback. Not only does D3D come with better debugging capabilities built directly into the API/driver, its tools are also vastly superior to OpenGL's - though with Valve's VOGL and NVIDIA's Nsight that's beginning to change.
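For what it's worth, the debug output callback he mentions (KHR_debug, core in GL 4.3) can be made reasonably useful when the driver cooperates; a rough sketch, assuming a context created with the debug flag and a loader that exposes the entry points:

    static void APIENTRY on_gl_debug(GLenum source, GLenum type, GLuint id,
                                     GLenum severity, GLsizei length,
                                     const GLchar *message, const void *user)
    {
        (void)source; (void)type; (void)id; (void)length; (void)user;
        fprintf(stderr, "GL debug [severity 0x%x]: %s\n", severity, message);
    }

    /* after context creation: */
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);      /* report on the offending call, not later */
    glDebugMessageCallback(on_gl_debug, NULL);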
> 20 years of legacy, needs a reboot and major simplification pass
Since OpenGL 3.2
> http://en.wikipedia.org/wiki/OpenGL#OpenGL_3.2
there is a strict division between the core profile and the compatibility profile. If you want a majorly simplified API, just request a core context instead of a (default) compatibility context.
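For example, with GLFW that's just a few window hints at context-creation time (a sketch; glfwInit() and error handling omitted, and 3.2 is the first version that has profiles at all):

    #include <GLFW/glfw3.h>

    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);   /* drop the legacy API */
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);             /* required on OS X */

    GLFWwindow *window = glfwCreateWindow(1280, 720, "core profile", NULL, NULL);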
The only graphics cards from recent years that don't support OpenGL >= 3.1 are Intel GPUs from before Sandy Bridge. Since Sandy Bridge, OpenGL 3.1 is supported (for which my remark is also true; the only difference is that with OpenGL 3.0/3.1 you probably want to request a forward-compatible context instead, a notion that only exists for those specific versions), and since Ivy Bridge, OpenGL 4.0.
Thus you only drop support for computers with nothing but an Intel GPU and a processor older than Sandy Bridge. When I consider how many app developers drop support for older iPad/iPhone generations that are much more recent than pre-Sandy-Bridge CPUs, and hardly anybody complains (the same is, of course, true on Android), I really have difficulty understanding where the problem is.
Do you have any info on "Autodesk's entire product line" migrating to D3D-only?
I know many Autodesk apps run on Linux, Mac OS X, Windows, and iOS--so I'm curious. The only reference I can find is a long reply from an Autodesk Inventor developer [1] from what looks like around 2007, where OpenGL was removed from their Windows-only app (Autodesk Inventor). It sounded like his beef was with the sub-par driver support for the OpenGL spec among video cards and not with the API itself (which I appreciate as an issue, but is very different from this discussion about it being a poorer API).
>>The API should be simplified and standardized so using a 3rd party lib shouldn't be a requirement just to get a real context going.
Nonsense! OpenGL is an evolving specification for the best of the best. Why take away the tools that let games approach the theoretical performance of a GPU? Removing features is unjustified if we know how to use them.
What the author needs is a wrapper and many exist. For example, Qt will let you write graphics code that runs on the desktop and mobile.
Certainly nobody would call for the end of WinAPI just because that's how we wrote the other APIs!
Simplification does not mean removal of performance or features. In fact, most of the recommendations he makes (such as DSA) could perform better than current OpenGL.
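To illustrate with DSA: in the direct-state-access style (EXT_direct_state_access, later promoted to core in GL 4.5), objects are edited by name instead of through whatever happens to be bound, which removes redundant bind calls rather than adding overhead. A rough sketch:

    /* classic bind-to-edit GL: the texture must be bound just to set a parameter */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    /* DSA style (GL 4.5 spelling): the object is named directly, no binding state touched */
    glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_LINEAR);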
>Removing features is unjustified if we know how to use them.
This is not the path that OpenGL has taken, as shown by all the deprecations and removals done in previous versions.
>>shown by all the deprecations
Yes, but they also added a bunch of stuff. Going from #version 120 to #version 430 feels like a whole different language.
Some good points here, but what it misses is that generally the time wasted due to a crappy API is less than the time wasted having to learn a different API for each platform. Performance is a bigger issue, and it will be interesting to see whether OpenGL can improve here, or whether Mantle develops on other platforms (and is reasonably fast on them).
I disagree; crappy API design wastes an enormous amount of developer time and effort. OpenGL is full of legacy cruft and has lots of room for improvement, streamlining, and simplification (the same goes for D3D12, for that matter). There is a lot of needless complexity, hoop jumping, and wheel reinventing when it comes to 3D graphics programming (not to mention countless common "gotchas" and tricks of the trade that are not as well documented and easy to learn as they could be).
I don't disagree. The point I'm trying to make is just that using a non-cross-platform API is likely to be just as time wasting (in having to port a program, or just learn something new for the next one). And that speed for the end user is effectively more important than both of these. The article is right, but it needs perspective. There's a reason for OpenGL's success - it's just that this reason is not that it's fun to use.
We should be much more worried about the higher level interfaces/languages that app developers actually program to. They're stuck in the stone age. The level of abstraction needs to be raised many notches and programmers need to be freed from running around in circles after performance tricks.
Nah, just use a higher-level framework or a 3D engine for this. You give up some control and performance, but gain productivity. What we need at the moment is fewer abstractions, because D3D's and OpenGL's abstractions don't fit current GPUs very well (D3D works around this somewhat by breaking API backward compatibility with each new release).
That's what I was talking about, those frameworks/engines aren't on a trajectory that's going to lead away from the current graphics programming quagmire.