Quote Originally Posted by WILL
Mistakes, as you've mentioned in the post on your site, can happen, and in an environment such as 3D/2D graphics processing they can cause your system to suddenly slow to a crawl, especially if it's an older or lower-spec system.
Indeed, all parts of the application you're testing have to be fault-tolerant (not to mention capable of being restored to a known state quickly and reliably).
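
To make that concrete, here's a minimal sketch in TypeScript of the restore-from-a-known-state idea (the AppState shape and the renderFrame callback are placeholders I made up for illustration): checkpoint after every clean frame, roll back when one faults.

interface AppState { cameraPosition: [number, number, number]; level: number; }

let lastGoodState: AppState = { cameraPosition: [0, 0, 0], level: 1 };

function runFrame(state: AppState, renderFrame: (s: AppState) => void): AppState {
  try {
    renderFrame(state);
    lastGoodState = structuredClone(state); // checkpoint after a clean frame
    return state;
  } catch {
    return structuredClone(lastGoodState);  // quick, reliable rollback
  }
}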

DirectX and OpenGL are indeed not very tolerant: if you mess up, you can quickly slow the system to a crawl, exhaust all of its memory, or crash it outright.
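
For example, neither API will stop you from over-committing memory; the best you can do is check for errors yourself after every risky allocation. A minimal sketch in TypeScript against the WebGL flavour of the API (desktop OpenGL's glGetError works the same way):

function tryCreateLargeBuffer(gl: WebGLRenderingContext, bytes: number): WebGLBuffer | null {
  const buffer = gl.createBuffer();
  if (!buffer) return null;
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.bufferData(gl.ARRAY_BUFFER, bytes, gl.STATIC_DRAW);
  if (gl.getError() === gl.OUT_OF_MEMORY) {
    gl.deleteBuffer(buffer); // back out cleanly instead of letting the system crawl
    return null;
  }
  return buffer;
}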

On the other hand, this is exactly the issue currently being worked on for the web-based 3D standards, because it is the main problem with WebGL: get a shader wrong, and the system will crawl or crash. So I guess it's only a matter of time before things evolve. There is a massive push in that direction from Apple and Google, and they have arguably already made a portion of 3D acceleration stable enough to be used in browser compositing (and for the basic 3D stuff exposed through CSS 3D).
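
WebGL's answer so far is the context-loss mechanism: when a bad shader hangs the GPU, the browser kills the context and the page gets a chance to recover instead of taking the whole system down. A sketch in TypeScript of the recovery hooks (initResources and render are placeholders for whatever rebuilds your GPU objects and draws a frame):

let frameHandle = 0;

declare function initResources(): void;  // placeholder: rebuild shaders, buffers, textures
declare function render(time: number): void;

const canvas = document.querySelector('canvas')!;

canvas.addEventListener('webglcontextlost', (event) => {
  event.preventDefault();            // tell the browser we intend to recover
  cancelAnimationFrame(frameHandle); // stop drawing against a dead context
});

canvas.addEventListener('webglcontextrestored', () => {
  initResources();                   // all GPU-side objects are gone; rebuild them
  frameHandle = requestAnimationFrame(render);
});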

The most problematic part is DirectX and OpenGL's lack of constraints, such as memory limits, fill-rate limits, or shader execution time limits, but that's something that could technically be introduced in the drivers.
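
Until that happens, the only option is to impose such constraints on yourself. A sketch in TypeScript of an application-side memory budget (the 256 MB limit is a made-up number; neither API offers a portable way to query a real one):

class GpuMemoryBudget {
  private used = 0;
  constructor(private readonly limitBytes: number) {}

  tryAllocate(bytes: number): boolean {
    if (this.used + bytes > this.limitBytes) return false; // refuse up front
    this.used += bytes;
    return true;
  }

  release(bytes: number): void {
    this.used = Math.max(0, this.used - bytes);
  }
}

const budget = new GpuMemoryBudget(256 * 1024 * 1024);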

Another option would be a revival of fixed-function pipelines, but much richer than before, with all the common lighting and rendering techniques exposed.
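
Purely as a thought experiment, such an API could look like this in TypeScript terms (none of these types or calls exist in any current API; the point is declaring intent and leaving the shading to the driver):

type LightKind = 'point' | 'directional' | 'spot';

interface Light {
  kind: LightKind;
  color: [number, number, number];
  position: [number, number, number]; // ignored for directional lights
}

interface Material {
  lighting: 'lambert' | 'phong' | 'blinn-phong'; // the driver decides how to implement it
  bumpMapped: boolean;
}

// Hypothetical entry point: the driver picks and compiles whatever shaders it needs.
declare function drawMesh(mesh: WebGLBuffer, material: Material, lights: Light[]): void;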

That's somewhat the direction taken by CSS, and it has merit, as it shoves the low-level, hardware-specific complexity back into the hands of the drivers and composition engines. That is arguably more future-proof than the existing shader approaches: bump-mapped Phong lighting will always be bump-mapped Phong lighting, rather than the optimized-for-special-cases bump-mapped Phong shaders you see these days, which, for instance, don't scale to an arbitrary number of lights, handle only point lights or directional lights, and so on.
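
That scaling problem is baked in at compile time: a GLSL ES loop over lights needs a constant bound, so engines typically stamp out one shader variant per light count. A sketch of that pattern in TypeScript (simplified to diffuse-only lighting; the uniform names are made up):

function lightingShaderSource(numLights: number): string {
  return `
    precision mediump float;
    #define NUM_LIGHTS ${numLights}
    uniform vec3 lightDirections[NUM_LIGHTS];
    uniform vec3 lightColors[NUM_LIGHTS];
    varying vec3 vNormal;
    void main() {
      vec3 n = normalize(vNormal);
      vec3 c = vec3(0.0);
      for (int i = 0; i < NUM_LIGHTS; i++) {
        c += lightColors[i] * max(dot(n, normalize(lightDirections[i])), 0.0);
      }
      gl_FragColor = vec4(c, 1.0);
    }`;
}

// Three lights and four lights end up as two separately compiled programs.
const threeLights = lightingShaderSource(3);
const fourLights = lightingShaderSource(4);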