I must say I'm disappointed to see that they're issuing 317 draw calls per frame just for the UI. And it isn't even a complex UI, for that matter.
Yes, optimizing the UI code alone wouldn't yield a significant performance improvement, but rendering the UI into a cached texture and reusing that texture could reduce those 300+ draw calls per frame down to about 10. Why? The UI updates far less frequently than the overall scene, so those 300+ calls would only be necessary when the UI actually changes.
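To make the idea concrete, here's a minimal dirty-flag sketch (hypothetical names, no real graphics API; the lists just stand in for issued draw calls):

```python
# Hypothetical sketch: compose the UI into an off-screen texture and only
# redraw the individual widgets when something actually changed.
class CachedUI:
    def __init__(self, widgets):
        self.widgets = widgets   # each widget costs one draw call
        self.dirty = True        # force the first composition
        self.texture = None      # stands in for a render target

    def render(self, draw_calls):
        if self.dirty:
            # re-compose every widget into the cached texture (~300 calls)
            for w in self.widgets:
                draw_calls.append(("widget", w))
            self.texture = tuple(self.widgets)
            self.dirty = False
        # blit the cached texture: one call instead of 300+
        draw_calls.append(("blit", self.texture))

ui = CachedUI(widgets=list(range(300)))
frame1, frame2 = [], []
ui.render(frame1)   # UI changed this frame: 300 widget draws + 1 blit
ui.render(frame2)   # UI unchanged: just the 1 blit
```

Setting `ui.dirty = True` whenever a widget changes is all the bookkeeping needed; every other frame pays only for the blit.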
I was reading this earlier actually; it's a great breakdown of a classic pipeline. (A great exercise for people looking to learn would be to study the Doom3/Quake4 rendering pipeline and compare it to this one, so you can get a feel for the evolution of graphics: what's old, what's hot, etc.)
Of course, with the rise of general-purpose computing on the GPU, many cutting-edge pipelines/graphics algorithms are starting to leverage DX11 Compute / OpenCL for certain rendering techniques. This means that pretty soon, shader tricks will be a thing of the past. (I'm amazed that a DX11 title fully supporting tessellation would employ such crude SSAO + shadow algorithms!)
Give it a few more years still and we'll see engines and GPUs make the move to true real-time ray tracing, making all of our complicated raster 'short-cuts' and approximations a thing of the past. This is likely to be developed in tandem with voxel-based static scenery at first (enabling the same LOD calculations to work on both 'meshes' and 'textures', and lending itself very well to the ray-tracing process), with the later development of fully deformable/skinnable dynamic voxel objects.
As animating detailed voxel models (with multiple blend weights plus animation-path smoothing) is currently the primary bottleneck, expect to see a native voxel representation in hardware and APIs, with the ability to apply weighted transformations to large groups of voxels.
Combine this with the very different approach of ray tracing, and GPUs of the near future will be very different beasts from what we see today, to the point that I expect the current raster approach to real-time 3D graphics will simply be 'emulated' on these new cores via complex GPU-bound code. There will no doubt be a point during the transition where dedicated raster cards beat ray-tracing cards at the raster game, and for a short time the ability of a newer card to perform well with old raster-style rendering will be a big factor.
It's a very, very exciting time to be in game development!
Originally Posted by phibermon
Give it a few more years still and we'll see engines and GPUs make the move to true real-time ray tracing, making all of our complicated raster 'short-cuts' and approximations a thing of the past. This is likely to be developed in tandem with voxel-based static scenery at first (enabling the same LOD calculations to work on both 'meshes' and 'textures', and lending itself very well to the ray-tracing process), with the later development of fully deformable/skinnable dynamic voxel objects.
I don't think these "complicated raster 'short-cuts' and approximations", as you call them, will be a thing of the past any time soon. Why?
Nowadays GPUs aren't used only for processing graphics but also for processing other things, like physics.
And the main reason many of today's top-notch games still use these raster short-cuts is to save some GPU performance so that it can be used for physics.
But it doesn't stop with physics. I have read an article about someone trying to use GPU power for processing complex AI algorithms. And one guy even used GPU power for speech synthesis.
So while GPU processing power is increasing a lot, so is the number of applications competing for it. Meaning that we will still have the ability, and probably also the need, to use these raster short-cuts.
As for dynamic deformable voxels: I don't think this concept is really sound. Definitely not as some people are imagining it.
You see, the biggest advantage of fixed-size voxels is that you can greatly simplify the math when doing things like ray tracing, collision detection, etc. But as soon as you deform some of these voxels, you lose that ability.
It is like comparing collision detection on a fixed-size 2D grid (where you can use simplified math) with some 2D net of differently sized and shaped cells (where you can't use any simplified math).
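The grid comparison can be shown in a couple of lines. With a fixed cell size, finding which cell contains a point is pure arithmetic, with no searching; with arbitrarily sized cells you'd have to walk or search a data structure instead. A toy sketch (CELL is a hypothetical grid resolution):

```python
# With a uniform grid, point-to-cell lookup is O(1): divide and floor.
CELL = 1.0

def cell_of(x, y):
    # works because every cell has the same known size; ray stepping
    # (DDA-style) likewise reduces to incrementing integer indices
    return (int(x // CELL), int(y // CELL))

print(cell_of(2.5, 0.9))   # (2, 0)
print(cell_of(-0.5, 0.0))  # (-1, 0)
```

Once cells can have different sizes and shapes, this constant-time lookup is exactly what you lose.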
@phibermon If you want to have a more thorough discussion about dynamic voxels with me, feel free to contact me through Skype. I'll probably be online for the rest of the day.
Feel free to contact me if you want to discuss something else instead.
By deformable voxel objects I mean groups of voxels; the size of any one voxel is arbitrary, depending on the LOD.
The reason we don't already see high-resolution, animated voxel objects is that there are one to two orders of magnitude more calculations to perform, even on a model that is hollow, with voxels only on the surface.
Currently we can apply matrices/quaternions to the vertices of our polygons, and we can use blend weights so that some vertices belong to more than one 'bone' (think of shoulder deformation on a running character).
Unless you're happy with Lego-style characters that don't deform at the joints, you have to calculate the deformations for every single voxel, perhaps a thousand in place of 30 vertices.
When you're talking about AAA games with multiple characters, this is borderline infeasible. But that's changing as we speak; very, very soon hardware will be powerful enough to allow for true voxel representations of levels/characters.
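The per-point cost is easy to see in a minimal linear-blend-skinning sketch (2D, translation-only "bones" to keep it short; the names are mine, not from any engine). The same weighted sum runs once per vertex for a polygon mesh, but once per voxel for a voxel model, which is where the order-of-magnitude blowup comes from:

```python
# Minimal linear blend skinning: each point moves by a weighted sum of
# its bones' offsets; weights sum to 1 to get smooth joint deformation.
def skin(point, bones, weights):
    # bones: list of (dx, dy) offsets per influencing bone
    x, y = point
    return (x + sum(w * b[0] for w, b in zip(weights, bones)),
            y + sum(w * b[1] for w, b in zip(weights, bones)))

bones = [(1.0, 0.0), (0.0, 1.0)]
# a shoulder-like point influenced 50/50 by two bones
print(skin((0.0, 0.0), bones, [0.5, 0.5]))  # (0.5, 0.5)
```

Thirty vertices means thirty of these evaluations; a surface-voxel character means thousands, every frame, per character.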
I was reading an interesting paper earlier on one proposed technique that you might want to look at:
So we're very nearly at the stage where polygon models will be a thing of the past. Expect to see AAA games with deformable voxel objects soon, and not long after that, ray-traced games. And trust me, the screen-space tricks currently used for real-time graphics will most definitely be a thing of the past. (Shadow maps, SSAO, environment maps and all such screen-space tricks are not needed in ray tracing; all of their effects you get for free with ray tracing.)
Originally Posted by phibermon
By deformable voxel objects I mean groups of voxels; the size of any one voxel is arbitrary, depending on the LOD.
Are you referring here to the approach where the LOD defines the density of the voxel grid (higher LOD = smaller voxels, but more of them forming a given object)?
Or are you referring to the heuristic voxelization approach, where you can replace a certain number of existing voxels that form a perfect cube with one voxel whose size is the size of the cube those voxels would form?
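For the second reading, here's a toy sketch of what I mean (the idea behind sparse voxel octrees; names and the material-id scheme are mine, just for illustration):

```python
# If all eight children of a 2x2x2 block carry the same material,
# replace them with one double-sized voxel; otherwise keep the children.
def merge_block(block):
    # block: dict mapping (x, y, z) in {0,1}^3 to a material id
    materials = set(block.values())
    if len(block) == 8 and len(materials) == 1:
        return materials.pop()   # one merged voxel stands in for eight
    return None                  # mixed or incomplete block: keep children

solid = {(x, y, z): 7 for x in (0, 1) for y in (0, 1) for z in (0, 1)}
print(merge_block(solid))  # 7
```

Applied recursively bottom-up, this is what lets voxel size vary with LOD while the underlying grid stays regular.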
Originally Posted by phibermon
The reason we don't already see high resolution, animated voxel objects is that there's an order to a couple of orders greater magnitude of calculations to perform. Even on a model that is hollow with voxels only on the surface.
That is correct. High-resolution voxelization requires a huge amount of processing power. And while processing power is increasing steadily, I think we are still far from the required capabilities.
You also need to understand that processing power is no longer increasing as rapidly as it did in the past. The main reason for this is the physical limitations of the transistors that are currently used everywhere. So unless a new type of transistor is introduced, the only way to keep increasing processing power will be to increase the number of processing units, as is already done nowadays. But that drives up the price, putting it out of reach for normal people.