
Thread: pascal and learning 3d

  1. #31
    Quote Originally Posted by Carver413:
    I built my framework using a custom linked list design. All things are connected, so I worry little about memory leaks. I worked on it for over a year before I started checking for leaks, and I wasn't really surprised that I had none. So much more is possible with such a design.
    Are you saying that you add all in-game classes to a single TList with which you free all of them in the end? It would be nice if you could give us more information on this approach, or even show us some code examples of it.

  2. #32
    PGDCE Developer Carver413's Avatar
    Join Date
    Jun 2010
    Location
    Spokane,WA,Usa
    Posts
    206
    This will probably explain it much better than I could:
    Mine is a singly linked, looped list, and I am using an array of children, which makes it possible for each and every class to maintain any number of lists without a container class to manage them. I will try to extract it from my code if you really want it. I plan to release my framework when it is finished, but I can't really say how long that will be.
    http://en.wikipedia.org/wiki/Linked_list
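    Very roughly, the shape of it is something like this (an off-the-cuff sketch to show the idea, not the real framework code):

    Code:
    type
      TNode = class
      public
        Next: TNode;                      // singly linked, loops back around to the list head
        Children: array of TNode;         // one head per child list this node owns
        procedure AddChild(Slot: Integer; Node: TNode);
        destructor Destroy; override;
      end;

    procedure TNode.AddChild(Slot: Integer; Node: TNode);
    begin
      if Slot >= Length(Children) then
        SetLength(Children, Slot + 1);
      if Children[Slot] = nil then
        Node.Next := Node                 // first entry loops onto itself
      else
      begin
        Node.Next := Children[Slot].Next; // splice in behind the current head
        Children[Slot].Next := Node;
      end;
      Children[Slot] := Node;
    end;

    destructor TNode.Destroy;
    var
      i: Integer;
      Cur, Start, Tmp: TNode;
    begin
      // Everything is reachable from the root, so freeing the root walks every
      // loop exactly once and nothing leaks.
      for i := 0 to High(Children) do
        if Children[i] <> nil then
        begin
          Start := Children[i];
          Cur := Start.Next;
          Start.Next := nil;              // break the loop so the walk terminates
          while Cur <> nil do
          begin
            Tmp := Cur.Next;
            Cur.Free;                     // recursively frees that node's own child lists
            Cur := Tmp;
          end;
        end;
      inherited;
    end;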



  3. #33
    Another headache: object picking!
    I've read about different methods and it looks like ray casting is the way to go in my case. But I'm having problems with gluUnProject; I'm not sure what to feed into it.
    The samples I've seen use something like this to obtain values for it:
    glGetIntegerv(GL_VIEWPORT, @viewport);
    glGetDoublev(GL_PROJECTION_MATRIX, @projectionMatrix);
    glGetDoublev(GL_MODELVIEW_MATRIX, @modelMatrix);

    Those give me identity matrices for projection and modelview. I guess it's because I'm doing all the 'projection' computations in shaders only. I think I know how to fill in the projection matrix and the viewport, but every model has its own model matrix. Should I just use any model's matrix for gluUnProject? (Tried it and it didn't work ;p)

  4. #34
    Do you really need ray casting? If you only want to select objects at, say, a given cursor position, just go for color selection. Especially with shaders that's pretty simple and fast. I'm using this technique for selection too (see here). It's easy to implement and extremely fast.

    It works like this:
    - Render your scene to the backbuffer (so it's not visible), but only with colors and basic geometry. E.g. one building red, another yellow
    - Limit the viewport to the cursor position (for performance)
    - Read the pixel under the cursor (glReadPixels)
    - Compare read colors with your color-to-object-table

    It's very fast and can be implemented in just a few minutes. Especially when using only shaders, you just need a new shader that outputs the selection colors to the fragment buffer.
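    In Pascal it boils down to something like this (a sketch only; RenderSceneWithPickColours stands in for whatever routine draws your scene in the flat ID colours):

    Code:
    // Needs your GL header (gl/glext or dglOpenGL) in the uses clause.
    function PickObject(mouseX, mouseY, viewportHeight: Integer): Integer;
    var
      pixel: array[0..2] of GLubyte;
    begin
      // Only the pixel under the cursor matters, so scissor everything else away.
      glScissor(mouseX, viewportHeight - mouseY, 1, 1);
      glEnable(GL_SCISSOR_TEST);
      glClear(GL_COLOR_BUFFER_BIT or GL_DEPTH_BUFFER_BIT);

      RenderSceneWithPickColours;   // hypothetical: each object drawn in its unique colour

      glReadPixels(mouseX, viewportHeight - mouseY, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, @pixel);
      glDisable(GL_SCISSOR_TEST);

      // Rebuild the 24-bit ID that was encoded into the colour at draw time.
      Result := pixel[0] or (pixel[1] shl 8) or (pixel[2] shl 16);
    end;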

  5. #35
    If you want to use ray casting, you can check the nxPascal demos. The Model and Picking demos both use that math. Doing it mathematically has the advantage of knowing exactly which model face the ray hits, and what the normal vector is at that point.

    There is also a GL_SELECT-based function in the works, but it isn't working in my tests yet. I used that style many years ago, so at least there's some code left.

  6. #36
    @Sascha: I'd use color picking if I could, but I'm not texturing the voxels and I don't plan to. To do color picking in this scenario I guess I'd have to render the geometry with unique colours and then swap them for the proper values from some reference texture?

    @User137: yeah, I'm using your engine as a reference from time to time, but picking there is done the way I described in my previous post. Where do I get the model matrix from if glGetDoublev(GL_MODELVIEW_MATRIX) doesn't seem to do its job?
    The internet says GL_SELECT is obsolete and shouldn't be used anymore.

    Edit: another problem with color picking is that I have tons of cubes in the scene, so the lookup would be costly unless I encode the cube coords into the colour.
    Last edited by laggyluk; 09-01-2013 at 06:37 PM.

  7. #37
    The old OpenGL selection mode (GL_SELECT) shouldn't be used anymore, and as far as I know it isn't available in newer OpenGL versions anyway. And even if it were, hardware vendors decided to drop hardware support for it some time ago, so rendering in selection mode is done in software and is therefore often extremely slow.

    As for color selection via shaders: no, you don't need textures at all. Just assign a color to each voxel when creating it (you've got 24 bits to encode your color, which should be sufficient), render with that color, pick it from the backbuffer and compare. If you render a lot of cubes you should have some kind of visibility check (e.g. an octree), so you can use that to speed this process up.
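    If the lookup table bothers you, you can skip it entirely and pack the cube coordinates straight into the colour; for example (a sketch, assuming chunk-local coordinates that each fit in a byte):

    Code:
    function CoordsToColour(x, y, z: Byte): Cardinal;
    begin
      // One byte per axis fills the 24 bits exactly: R = x, G = y, B = z.
      Result := x or (y shl 8) or (z shl 16);
    end;

    procedure ColourToCoords(pixel: Cardinal; out x, y, z: Byte);
    begin
      x := pixel and $FF;
      y := (pixel shr 8) and $FF;
      z := (pixel shr 16) and $FF;
    end;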

  8. #38
    What I meant is that the voxels/cubes won't be covered with textures; the vertex color is used to represent the different block types. With color picking I'd be using the vertex color to identify the cube and would lose the info about how the block should look. Maybe I could pass two attributes to the shader, one with the 'block color' and another with the 'id color', and then render to two different textures, one for picking and the other for showing on screen?

    Edit: I did it, geometry picking that is. Used some unprojecting functions from GLScene.
    Last edited by laggyluk; 10-01-2013 at 06:23 PM.

  9. #39
    PGD Staff / News Reporter phibermon's Avatar
    Join Date
    Sep 2009
    Location
    England
    Posts
    524
    Colour picking should absolutely not be used for a voxel engine (please forgive me Sascha, I've got an awesome amount of respect for you). You're hitting the limits of the hardware, or at least eventually will, and the last thing you want is to render an additional pass of the scene for only a single task. Regardless of whether you remove textures or not, that's a hell of a lot of geometry to render twice.

    You could alleviate much of the performance hit by using MRTs (multiple render targets) and render the picking data to a back-buffer while doing the normal rasterization into your main buffer (or a further back-buffer if you're aiming for a deferred renderer). If you were looking to do Minecraft-style rendering, i.e. ambient occlusion, there may be additional data you can store in this back-buffer that would make the method more attractive.
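    Setting that up is basically an FBO with two colour attachments, roughly like this (a sketch assuming a GL3-capable header such as dglOpenGL, with colourTex, pickTex and depthRbo created elsewhere):

    Code:
    var
      fbo: GLuint;
      bufs: array[0..1] of GLenum;
    begin
      glGenFramebuffers(1, @fbo);
      glBindFramebuffer(GL_FRAMEBUFFER, fbo);
      glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colourTex, 0);
      glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, pickTex, 0);
      glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRbo);

      // Fragment shader output 0 goes to the shaded image, output 1 to the picking IDs.
      bufs[0] := GL_COLOR_ATTACHMENT0;
      bufs[1] := GL_COLOR_ATTACHMENT1;
      glDrawBuffers(2, @bufs);

      // Render the scene once, then glReadBuffer(GL_COLOR_ATTACHMENT1) and
      // glReadPixels under the cursor to fetch the picking ID.
    end;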

    However, you're just not going to beat ray casting when you've got a lot of geometry, especially as you're storing your data in an oct-tree. There are oct-tree-optimized raycast algorithms that you really want to use, and not just for ray-casting the view direction to select things: if you want to do oct-tree-style ambient occlusion you'll need rays, and if you want to do any form of path finding in voxel terrain, you'll need rays.

    The math isn't too complex. Jink (my soon-to-be-released game engine) has a full range of ray-casting functions and algorithms, optimised for oct-trees, kd-trees, etc.

    Ray-casting is a requirement for rendering 3D graphics onto a 2D display (I didn't say ray-tracing, before anybody jumps on me). It's all happening in the API even if you don't use it yourself, projecting the 3D vertices into screen-space coordinates.

    You're just casting in the opposite direction to do picking: you cast a line outwards from the position where the mouse intersects the camera clipping plane, with the combination of view and projection matrices associated with the camera determining the two intersecting planes the ray is defined by (or a vector and a position, whatever is most useful for your spatial partitioning scheme).
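    In practice you can let GLU do the matrix work as a shortcut, something along these lines (a sketch only; the exact gluUnProject parameter types differ between Pascal GL headers, so the @s may need adjusting):

    Code:
    type
      TVec3d = record x, y, z: GLdouble; end;
      TMat4d = array[0..15] of GLdouble;

    // proj and view are the CPU-side copies of the matrices you upload as shader
    // uniforms; glGetDoublev only returns identity when the fixed-function matrix
    // stacks are unused. Passing only the camera's view matrix (no model matrix)
    // gives a world-space ray, which you can then transform into each model's
    // local space with that model's inverse matrix.
    procedure MouseToWorldRay(mouseX, mouseY: Integer; const proj, view: TMat4d;
      out rayOrigin, rayDir: TVec3d);
    var
      viewport: array[0..3] of GLint;
      nearPt, farPt: array[0..2] of GLdouble;
      winY, len: GLdouble;
    begin
      glGetIntegerv(GL_VIEWPORT, @viewport);
      winY := viewport[3] - mouseY;   // OpenGL's window origin is bottom-left

      // Un-project the cursor at the near (winz = 0) and far (winz = 1) plane.
      gluUnProject(mouseX, winY, 0.0, @view, @proj, @viewport,
                   @nearPt[0], @nearPt[1], @nearPt[2]);
      gluUnProject(mouseX, winY, 1.0, @view, @proj, @viewport,
                   @farPt[0], @farPt[1], @farPt[2]);

      rayOrigin.x := nearPt[0]; rayOrigin.y := nearPt[1]; rayOrigin.z := nearPt[2];
      rayDir.x := farPt[0] - nearPt[0];
      rayDir.y := farPt[1] - nearPt[1];
      rayDir.z := farPt[2] - nearPt[2];
      len := Sqrt(Sqr(rayDir.x) + Sqr(rayDir.y) + Sqr(rayDir.z));
      rayDir.x := rayDir.x / len;
      rayDir.y := rayDir.y / len;
      rayDir.z := rayDir.z / len;
    end;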

    Once that is done you have a vector in 3D space; for raycasting, think of it as an infinite line.

    Then you're either finding the closest 3D object to that line, or doing line/BBox followed by line/triangle intersection tests to determine the 'hit' object.

    Obviously the oct-tree optimizations come into play at this stage: you test the intersection of this line against the bounding boxes of your tree nodes, traversing down towards the leaves just like you would when testing your camera frustum against the tree, except that line/BBox is a lot faster than frustum/BBox intersection testing.

    You further optimize the traversal because you should have stored your visible nodes during the frustum test, so you only need to test against that set of nodes.
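    For reference, a line/BBox test is only a handful of lines. Here's a minimal version of the usual slab method, with illustrative types that aren't lifted from Jink or nxPascal:

    Code:
    // Needs the Math unit for Min/Max.
    type
      TVec3 = record x, y, z: Single; end;

    // invDir holds 1/direction per axis, precomputed once per ray.
    function RayIntersectsBox(const orig, invDir, bMin, bMax: TVec3): Boolean;
    var
      t1, t2, tMin, tMax: Single;
    begin
      t1 := (bMin.x - orig.x) * invDir.x;
      t2 := (bMax.x - orig.x) * invDir.x;
      tMin := Min(t1, t2);
      tMax := Max(t1, t2);

      t1 := (bMin.y - orig.y) * invDir.y;
      t2 := (bMax.y - orig.y) * invDir.y;
      tMin := Max(tMin, Min(t1, t2));
      tMax := Min(tMax, Max(t1, t2));

      t1 := (bMin.z - orig.z) * invDir.z;
      t2 := (bMax.z - orig.z) * invDir.z;
      tMin := Max(tMin, Min(t1, t2));
      tMax := Min(tMax, Max(t1, t2));

      // Hit if the slab intervals overlap and the box isn't entirely behind the ray.
      Result := (tMax >= tMin) and (tMax >= 0);
    end;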
    When the moon hits your eye like a big pizza pie - that's an extinction level impact event.

  10. #40
    PGD Staff / News Reporter phibermon's Avatar
    Join Date
    Sep 2009
    Location
    England
    Posts
    524
    Oh, and anybody who doesn't think that record alignment is important for OpenGL could not be more wrong. In situations like loading single/dual-channel image data or using uniform buffers, you'll very quickly discover that you need to set up either your record alignment or OpenGL's packing/unpacking options. You might not have come across these issues, but that'll be because the default alignment on your platform matches that of your hardware/OpenGL driver implementation (see the std140 block layout for an example); deal with the more exotic GL features on multiple architectures and it matters a lot.

    You can't just use things like NumX * sizeof(X) and expect the layout on the target hardware to be the same.

    Oh, and I've come across plenty of formats that store arrays of structures that must be tightly packed (aligned to 1-byte boundaries). If Pascal pads a field out with a few bytes for optimized memory access and you just stream the bytes from a file across the array, you'll have written your last byte from the file before you reach the last byte of the array.
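    A small illustration of that padding difference (made-up field names; the aligned size can vary with compiler settings):

    Code:
    type
      TEntryAligned = record
        Flag: Byte;
        Value: Single;   // the compiler may insert 3 padding bytes before this
      end;

      TEntryPacked = packed record
        Flag: Byte;
        Value: Single;   // no padding: exactly 5 bytes, matching a tightly packed file
      end;

    begin
      WriteLn(SizeOf(TEntryAligned)); // typically 8 with default alignment
      WriteLn(SizeOf(TEntryPacked));  // always 5
    end.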

    Unfortunately, because of Intel CPUs and things like standards, most things are just 4-byte aligned, so many programmers never learn about it and will one day spend weeks trying to find the bug.

    The point is that records are laid out by the compiler so that memory operations run as fast as possible.

    Assuming 32-bit floats (Single), it's actually faster on Intel hardware to load TVec3 data (4+4+4 = 12 bytes) that's padded out to TVec4 size (4+4+4+4 = 16 bytes) into OpenGL with a stride parameter than it is to load tightly packed TVec3s (assuming you've got hardware that comes with decent drivers). The Vec3 data will only be aligned to vec4 boundaries on the hardware anyway, so any space you save by tightly packing your data you lose in A) system read performance and B) the GPU's unpacking operation.

    You will only see a performance difference between a packed and a non-packed record/array if the combined size of the record/array's elements doesn't already fall on the boundary; and if it doesn't, packing the data will slow memory operations down.

    In fact, on the latest cards it's pretty pointless to use anything but a vec4: it takes no longer to copy the data (the bus is essentially transferring your vec3 to the GPU in a vec4 'box'), and it's all operating on 32/64-byte vectors in silicon anyway.
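    To make the vec4-sized stride concrete, a sketch (illustrative names, assuming a GL3-style attribute setup; depending on your header the Boolean parameter may want False rather than GL_FALSE):

    Code:
    type
      TVec3Padded = record
        x, y, z: Single;
        pad: Single;               // unused; keeps SizeOf(TVec3Padded) = 16
      end;

    procedure UploadPositions(const verts: array of TVec3Padded);
    begin
      glBufferData(GL_ARRAY_BUFFER, Length(verts) * SizeOf(TVec3Padded),
                   @verts[0], GL_STATIC_DRAW);
      // Three floats per vertex, but the stride steps a full 16 bytes each time,
      // so every position starts on a vec4-sized boundary.
      glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, SizeOf(TVec3Padded), nil);
      glEnableVertexAttribArray(0);
    end;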
    Last edited by phibermon; 30-01-2013 at 08:05 PM.
    When the moon hits your eye like a big pizza pie - that's an extinction level impact event.
