Hello everyone,
Recently I've been thinking about how to create an abstract graphics API for rendering 3D graphics, one which could be implemented on top of Direct3D 11, Direct3D 9, or OpenGL 3.3. It's quite possible that I won't end up building it at all, but I'm still very curious about different, efficient approaches to achieving this.
I guess that since most of you here are currently working on the community game engine, you have faced similar challenges.
Here are a few of my thoughts, and I'd be very interested if someone with experience in similar things would share their advice.
The first question I face is: at what level should the API work (higher or lower)? That is, should I design the API to let the user draw primitives in a BeginDraw/EndDraw style? Or should I create abstract classes for Meshes or Models, Textures, and Materials, implemented differently for each graphics provider, and then have a Context object that renders the Mesh objects? Should the mesh's per-vertex data (position, normals, etc.) be predefined as a list of variants, or should the user be able to implement their own descendant of the Mesh class?
- Maybe a good approach is to build my abstract API at a level similar to the fixed-function pipelines of the legacy DirectX and OpenGL versions (but implemented via shaders)?
- For now I think it is a good idea to create abstract Context, Mesh, Texture, Shader/Material, Model (a group of meshes with materials), and Camera classes (the Camera class should make it easy to manipulate the camera and feed the matrices to the shaders).
- Maybe support some kind of scene graph, where the user can add a hierarchy of Models and light-source objects.
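To make the idea above concrete, here is a minimal Object Pascal sketch of what such an abstract class layout might look like. All names (TGALContext, TGALMesh, the scenario enum values, etc.) are hypothetical, just to illustrate the structure; each backend would derive concrete classes like TD3D11Context or TGL33Context:

```pascal
type
  // Hypothetical enum of hardcoded rendering scenarios (see shader question below)
  TGALScenario = (csFlat, csBlinnPhong, csBlinnPhongNormalMapping, csShadowProjection);

  TGALTexture = class
  public
    procedure LoadFromFile(const AFileName: string); virtual; abstract;
  end;

  TGALMesh = class
  public
    // Each backend uploads the data into its own vertex buffers
    procedure SetVertices(const APositions, ANormals: array of Single); virtual; abstract;
  end;

  TGALContext = class
  public
    // Factory methods, so user code never names a concrete backend class
    function CreateMesh: TGALMesh; virtual; abstract;
    function CreateTexture: TGALTexture; virtual; abstract;
    procedure SetScenario(AScenario: TGALScenario); virtual; abstract;
    procedure Render(AMesh: TGALMesh); virtual; abstract;
  end;
```

The factory methods are the key point: the user asks the Context to create resources, so the only place where a concrete backend is chosen is wherever the Context itself is instantiated (this is essentially the abstract factory pattern).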
Another question is: what about shaders?
- One solution would be simply not to let the API user create their own shaders, and instead implement (hardcode) different rendering scenarios (for example flat shading, Blinn-Phong, Blinn-Phong with bump mapping, shadow projection, and others). Let's say I have an abstract Context object (say, TGALContext) which has a predefined enum of scenarios, and I call TGALContext.SetScenario(csBlinnPhongNormalMapping) or something like this, which internally would use a particular group of shaders to render in that particular way.
- I have read that some engines (like Ogre3D) have their own material meta-language, which can be translated to either GLSL or HLSL depending on the graphics provider used.
- Another thing I've read about is a material description file (maybe XML) which describes the inputs and outputs of the shader, followed by implementations of the vertex and fragment shaders in both HLSL and GLSL (both languages using the same data types as inputs). So when API users write shaders, they would write such a material description file containing implementations in both GLSL and HLSL.
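As a rough illustration of that last idea, such a material description file might look something like this (the format and tag names here are made up for the example, not taken from any existing engine):

```xml
<material name="BlinnPhongNormalMapped">
  <inputs>
    <param name="WorldViewProj" type="float4x4"/>
    <param name="DiffuseTex"    type="texture2d"/>
    <param name="NormalTex"     type="texture2d"/>
  </inputs>
  <vertexshader>
    <hlsl><![CDATA[ /* HLSL vertex shader source here */ ]]></hlsl>
    <glsl><![CDATA[ /* GLSL vertex shader source here */ ]]></glsl>
  </vertexshader>
  <fragmentshader>
    <hlsl><![CDATA[ /* HLSL pixel shader source here */ ]]></hlsl>
    <glsl><![CDATA[ /* GLSL fragment shader source here */ ]]></glsl>
  </fragmentshader>
</material>
```

The loader would parse the shared `<inputs>` section to bind uniforms/constants uniformly, and pick the `<hlsl>` or `<glsl>` body depending on which backend is active. The obvious downside is that every material has to be written twice.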
Has someone actually implemented this kind of (3D) graphics abstraction in their Pascal engine? And what do you think is the best way to do it?
P.S.: Sorry for the poor organization of the post and my not-so-good English.