
Thread: Jengine - OpenGL 3.3/4.x engine & JUI progress

  1. #1
    I believe iOS still uses OpenGL ES 2.x.

  2. #2
    PGD Staff / News Reporter phibermon's Avatar
    Join Date
    Sep 2009
    Location
    England
    Posts
    524
    I'm afraid that iOS doesn't interest me enough at the moment. There can be no doubt that iOS, with its staggering user base, is attracting a large number of developers, but that's not where the cutting edge of graphics is, and that's where I would like to be personally.

    If anything, iOS is a huge step backwards for developers in terms of GPU technology.

    While the mobile platforms have performance characteristics that are very impressive for their size and power footprint, how do they compare to the high-end PC spectrum, and indeed to the soon-to-be-released next generation of consoles?

    The iPad 2, for instance, with its out-of-order, dual-core 1 GHz Cortex-A9 and PowerVR GPU, would barely hold its own against an average gaming PC of 8 years ago (especially given the small caches that normally accompany A9 implementations; 'GHz to GHz' comparisons against desktop CPUs of similar frequencies are woefully ill-founded).

    I feel there's work to be done on the Object Pascal front, targeting the newer versions of OpenGL.

    And assuming I continue to work alone, I should be finished right about the time that mobile devices see the next generation of OpenGL ES.

    My reasoning would be this: the major companies have bought into ES fully; Apple, Samsung, Sony etc. They're all now bound to OpenGL (something I bet Microsoft is regretting, given they've had plenty of opportunity to make DirectX cross-platform).

    When the next iteration of ES comes, that's what these companies will use, and it will almost certainly be a subset of GL 4.0 or some future version. Khronos will not diverge ES further from the mainline GL versions; if anything, they'll aim to merge again.

    But I digress.

    Really, it's because I want to. I'm not interested in 'following the money' or even targeting the largest audience; if that were my goal I'd be coding in C/C++.
    Last edited by phibermon; 13-06-2011 at 07:53 PM.
    When the moon hits your eye like a big pizza pie - that's an extinction level impact event.

  3. #3
    PGD Staff code_glitch's Avatar
    Join Date
    Oct 2009
    Location
    UK (England, the bigger bit)
    Posts
    933
    Blog Entries
    45
    Ah, but the iPad 2 is supposed to be '9x faster' than the iPad 1, which is 'faster' than the iPhone, which is 'faster' than... well, you get the idea. Overall, looking pretty good; some very nice performance there (455fps). What card is this running on? My HD4330 only has OpenGL 3.2, so I'm at a loss there, and most GMA chips come with 2.1 or somewhere in that ballpark. Even the newest Sandy Bridge 'cards' shipping with i7s etc. only sport OpenGL 3.1/3.2. Unfortunately, i7 is 'the future' according to Intel. So yes, I agree it won't be an issue for gamers with a GTX 460, or even for casual people like me who buy a mid-range discrete card (no debates please, I get >30fps out of almost everything on medium settings, so I'll just be on that bandwagon). Just don't make the game too popular, k? lol. You might end up getting Intel to make real graphics chips one day.

    Anyway, I get what you're saying in a sense: if no one uses the latest technology, then why is it there in the first place, and why make new things, right? Shame no one likes the word 'new'. In programming, new = crashes, bugs, trouble and more trouble, so I can understand some of the reasons. But OpenGL 1.5 on Windows 7 platforms? Come on...
    I once tried to change the world. But they wouldn't give me the source code. Damned evil cunning.

  4. #4
    PGDCE Developer Carver413's Avatar
    Join Date
    Jun 2010
    Location
    Spokane,WA,Usa
    Posts
    206
    I have to agree with phibermon; it is not very wise to build a new engine on old code.

  5. #5
    PGD Staff / News Reporter phibermon's Avatar
    Join Date
    Sep 2009
    Location
    England
    Posts
    524
    Hmm, well, giving it some thought, the one feature I just can't justify losing is Uniform Buffer Objects (change the buffer once and it applies to all shaders that use it, as opposed to setting uniforms for each individual shader; think of them like a single instance of a customizable record that you can share across multiple shaders), which were introduced in GL 3.1. So I'll do some damage control and see how much work it would be to make 3.1 the lower dependency. You made an excellent point about Sandy Bridge: I was not aware that the on-chip graphics were 3.1/3.2; I had assumed 2.1. For that reason I shall have to look into it. I can sit pretty knowing that cheap 4.x cards will soon dominate, but that on-die Intel monstrosity is going to be the only option a lot of laptop users will have for the next few years.
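
    Roughly, the setup looks like this; a minimal sketch assuming the dglOpenGL header, where 'CameraBlock' is just a made-up example block shared by several shaders. You fill the one buffer per frame and every program wired to the same binding point sees the new values:

    Code:
    // assumes: uses dglOpenGL;
    // Shared GLSL declaration in every shader that uses it:
    //   layout(std140) uniform CameraBlock { mat4 View; mat4 Projection; };

    const
      CAMERA_BINDING_POINT = 0;

    var
      CameraUBO: GLuint;

    procedure CreateCameraUBO(SizeInBytes: GLsizeiptr);
    begin
      glGenBuffers(1, @CameraUBO);
      glBindBuffer(GL_UNIFORM_BUFFER, CameraUBO);
      glBufferData(GL_UNIFORM_BUFFER, SizeInBytes, nil, GL_DYNAMIC_DRAW);
      // Attach the buffer to its binding point once
      glBindBufferBase(GL_UNIFORM_BUFFER, CAMERA_BINDING_POINT, CameraUBO);
    end;

    procedure AttachProgramToCameraBlock(Prog: GLuint);
    var
      BlockIndex: GLuint;
    begin
      // Each shader program only needs its block index wired to the binding point
      BlockIndex := glGetUniformBlockIndex(Prog, PAnsiChar('CameraBlock'));
      if BlockIndex <> GL_INVALID_INDEX then
        glUniformBlockBinding(Prog, BlockIndex, CAMERA_BINDING_POINT);
    end;

    procedure UpdateCameraUBO(const Data; SizeInBytes: GLsizeiptr);
    begin
      // One upload per frame; every attached program sees the change
      glBindBuffer(GL_UNIFORM_BUFFER, CameraUBO);
      glBufferSubData(GL_UNIFORM_BUFFER, 0, SizeInBytes, @Data);
    end;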

    Carver: I wouldn't like to offend those pursuing ES, as their route to GL3/4 will be a lot easier than for those coding in immediate-mode 2.x, but yes, I'd agree with that statement. It's not just the performance gains; it's the usability too.

    My terrain engine was nearly effortless with GL4.0.

    LOD, low-level culling etc. are all done on the GPU and as a result can sit exactly where they need to for the simplest approach. Older CLOD systems (ROAM etc.) are far more complex, doing all they can to minimize the bottleneck of constantly transferring vertices from the system to the card. That's just not an issue with tessellation; you just send a sparse patch mesh and tessellation + displacement does the rest, with more or less free seamless welding of patch edges.
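
    For the GL4 path the draw call really is just the sparse patch mesh; a rough sketch with dglOpenGL, where the bound program is assumed to carry a tessellation control shader (picking per-edge LOD, which keeps neighbouring patches welded) and an evaluation shader (displacing by the heightmap):

    Code:
    // assumes: uses dglOpenGL;
    const
      VERTS_PER_PATCH = 4; // quad patches of the sparse terrain grid

    procedure DrawTerrainPatches(PatchVAO: GLuint; PatchVertexCount: GLsizei);
    begin
      // Tell GL how many control points make up one patch
      glPatchParameteri(GL_PATCH_VERTICES, VERTS_PER_PATCH);

      // LOD selection, displacement and edge welding all happen on the GPU
      glBindVertexArray(PatchVAO);
      glDrawArrays(GL_PATCHES, 0, PatchVertexCount);
    end;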
    Last edited by phibermon; 14-06-2011 at 02:21 PM.
    When the moon hits your eye like a big pizza pie - that's an extinction level impact event.

  6. #6
    PGD Staff code_glitch's Avatar
    Join Date
    Oct 2009
    Location
    UK (England, the bigger bit)
    Posts
    933
    Blog Entries
    45
    Or you could take a crack at Intel's market share with another strategy: do it all in 4.x, add a 'crapo' mode with a very basic, quickly implemented set of shaders for 2.x/3.x, and make a really good game. That way either Intel gets some serious OpenGL oomph, or ATI/NVidia get a market boost. Either way, everybody wins.
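
    Picking the path could be as simple as checking the reported context version at startup; a sketch with dglOpenGL, where the path names are made up:

    Code:
    // assumes: uses dglOpenGL;
    type
      TRenderPath = (rpLegacy, rpModern); // basic 2.x/3.x shaders vs. full GL4 path

    function ChooseRenderPath: TRenderPath;
    var
      Major: GLint;
    begin
      // GL_MAJOR_VERSION is only queryable on GL 3.0+ contexts;
      // on anything older the query leaves Major at 0 and we fall back anyway.
      Major := 0;
      glGetIntegerv(GL_MAJOR_VERSION, @Major);

      if Major >= 4 then
        Result := rpModern   // tessellation, UBOs, the lot
      else
        Result := rpLegacy;  // 'crapo' mode
    end;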

    But yes, I was disappointed when Sandy Bridge (the crème de la crème from Intel) came out with only 3.1/3.2 support, while ATI/NVidia cards have had that since... well, the dawn of time. OK, not really, but a good while now.

    Mind you, Sandy Bridge is the only GMA chip that can render something fast enough for it to even be visible to humans. (Sorry, GMA fans, whoever you may be.)

    Anyway, good luck and those features do indeed sound tempting.
    I once tried to change the world. But they wouldn't give me the source code. Damned evil cunning.

  7. #7
    PGD Staff / News Reporter phibermon's Avatar
    Join Date
    Sep 2009
    Location
    England
    Posts
    524
    Hehe, you might have something there. I've been looking at various CLOD techniques that could be used for <GL4.0. The only ones I'd be happy with from a technical standpoint are either GPU-optimized geometry clipmapping:

    http://research.microsoft.com/~hoppe/gpugcm.pdf

    or this:

    http://vertexasylum.com/2010/07/11/o...ndering-paper/

    The latter looks surprisingly similar in wireframe mode to my GL4 technique and, by my rough estimates, is not that far off in FPS. However, it requires extensive pre-processing of the terrain dataset and does *far* more work on the CPU (which is not quite a fair comparison; the technique I've employed very nearly doesn't use the CPU at all).

    And to top it all off, it's complicated to implement, although a port of the source provided would be possible given enough Direct3D research.

    So to support older cards for the terrain, I'll simply brute-force render (with a bit of culling) and drop both the poly count and the draw distance (like I'm doing now) until the FPS matches.
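
    Something like this is all the tuning it needs; a sketch assuming a per-frame timer, where TerrainDrawDistance is a made-up engine variable:

    Code:
    const
      TARGET_FPS   = 60.0;
      MIN_DISTANCE = 256.0;    // world units
      MAX_DISTANCE = 4096.0;

    var
      TerrainDrawDistance: Single = 2048.0;

    // Call once per frame with the measured frame time in seconds.
    procedure TuneDrawDistance(FrameTime: Double);
    var
      CurrentFPS: Double;
    begin
      if FrameTime <= 0 then Exit;
      CurrentFPS := 1.0 / FrameTime;

      // Shrink the visible range when below target, grow it when there's
      // headroom; 2% steps keep the change unnoticeable frame to frame.
      if CurrentFPS < TARGET_FPS * 0.95 then
        TerrainDrawDistance := TerrainDrawDistance * 0.98
      else if CurrentFPS > TARGET_FPS * 1.10 then
        TerrainDrawDistance := TerrainDrawDistance * 1.02;

      if TerrainDrawDistance < MIN_DISTANCE then TerrainDrawDistance := MIN_DISTANCE;
      if TerrainDrawDistance > MAX_DISTANCE then TerrainDrawDistance := MAX_DISTANCE;
    end;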

    If I did implement an alternate CLOD for older cards, I would most likely choose geo-clipmapping, as I could use the same dataset I use now without the pre-processing the preferred technique requires. (It really is very impressive though; check it out if you have the time.)
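
    For reference, the heart of geometry clipmaps is just a set of nested grids whose spacing doubles per level and whose origin snaps to the grid so the vertices don't swim as the camera moves; a tiny sketch of that placement (constants and names are made up):

    Code:
    // assumes: uses Math; (for Floor)
    const
      BASE_SPACING = 1.0; // world units between vertices at the finest level

    // Spacing and snapped origin for clipmap level 'Level', centred on the
    // camera; snapping to twice the spacing keeps each level aligned with
    // the next coarser one.
    procedure ClipmapLevelPlacement(Level: Integer; CamX, CamZ: Single;
                                    out Spacing, OriginX, OriginZ: Single);
    begin
      Spacing := BASE_SPACING * (1 shl Level);
      OriginX := Floor(CamX / (2 * Spacing)) * (2 * Spacing);
      OriginZ := Floor(CamZ / (2 * Spacing)) * (2 * Spacing);
    end;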
    Last edited by phibermon; 14-06-2011 at 04:47 PM.
    When the moon hits your eye like a big pizza pie - that's an extinction level impact event.
