implementing a 'Circlevision' strip



deathshadow
03-02-2007, 09:40 PM
Wasn't sure whether to put this in mathematics or graphics, since I think it draws heavily from both... should be an interesting 'first post'

For some time (hell, approaching a decade now) I've been playing with implementing a 3d engine that displays all 360 degrees around the user in a single 'strip' at the top of the screen, as described in the older Battletech fiction - the goal being a "MechWarrior" style game that perhaps actually has something to do with the parent product instead of just ripping off a few names and unit appearances. In the Battletech universe it's one of the 'big features' of the BattleMech (and of the helmets of certain elite foot soldiers) that has been strangely missing from every attempt at a computer game of it.

Of course, the reason for it to be missing is simple - no 'popular' 3d perspective math is set up to do this well, if at all. You cannot tell OpenGL or DirectX to have a FoV of more than 90 degrees without horrible fish-eye effects... sure, you can try splitting it into multiple views, but even with 16 of them across the top of the screen you STILL get weird 'points' dropping down from the fish-eye effect, and if you compensate for the fish-eye the edges of the polys don't line up... much less how agonizingly slow it can get when you are rendering the same thing 17 times per FRAME.
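(To put a number on the fish-eye problem - with a flat projection plane, a point at horizontal bearing theta lands at roughly:

screen_x = (screen_width/2) * tan(theta) / tan(fov/2)

...so as the bearing heads toward 90 degrees, tan(theta) heads for infinity and the coordinate runs right off the screen, while pushing fov toward 180 squeezes the middle of the view down to nothing. A cylindrical mapping sidesteps that entirely, which is the whole point of the strip.)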

So my question is, how would you guys implement this? I've asked in the past, and the answer I've gotten most often is the 'multiple windows' approach that got rejected for the reasons above.

I was able to write a working demo in Delphi six years ago using Glide that showed the concept in action, and I ported it to OpenGL and FPC a few years back... The .exe and needed files of that port are here:

http://battletech.hopto.org/circle/

I'm still looking for my old source... I'll post it when I find it

...but at the time I deemed it to be too slow to be practical on the hardware (of that era) for anything more than a very small map and one unit, with no textures and little more than simple (non shadow) shading. Now that the hardware has jumped forward I'm wondering if it might be practical for me to revisit again.

The technique I'm currently using is to use ATAN2 (actually, a workalike I wrote which is WAY faster but only works with 32 bit dwords and has 'screen' granularity) to convert the x and z coordinates of a point to a 'heading', which becomes my screen X coordinate - then I use the Pythagorean theorem to get the distance on that axis, then use that distance and the y to get a 'pitch' value, which I project as screen y. One more sqrt(A^2+B^2) gets me the actual distance for z-buffering... I then project this one triangle at a time in orthogonal view in OpenGL (much as I did in Glide). There are issues that have to be checked for - like at the edges of the screen, where any triangle that has pixels on opposite sides of the screen has to be rendered twice, once for each side (+360 degrees for the first, -360 for the second) - but nothing insurmountable... just more if statements to slow things down :(
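For the record, that seam check is simple enough to sketch out (the names here are made up for illustration, not from my actual code, and DrawShifted is a stand-in for whatever pushes the triangle to GL with its x coords offset - note that +/-360 degrees becomes +/- a full screen width once you're in pixels):

type
  TProjTri = record
    sx, sy, dist: array[0..2] of Single;  { projected vertices }
  end;

procedure DrawWrapped(const t: TProjTri; screenW: Single);
var
  lo, hi: Single;
  i: Integer;
begin
  lo := t.sx[0]; hi := t.sx[0];
  for i := 1 to 2 do
  begin
    if t.sx[i] < lo then lo := t.sx[i];
    if t.sx[i] > hi then hi := t.sx[i];
  end;
  { a span wider than half the strip means the triangle crossed the seam }
  if hi - lo > screenW / 2 then
  begin
    DrawShifted(t, -screenW);  { copy for the left edge }
    DrawShifted(t, +screenW);  { copy for the right edge }
  end
  else
    DrawShifted(t, 0);
end;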

Basically the math boils down to:
ra_2_screen:=(screen_width/2)/pi;   { pixels per radian - 2*pi radians spans the full strip }
dxa:=sqrt(x*x+z*z);                 { distance in the ground (x,z) plane }
screen_x:=atan2(x,z)*ra_2_screen;   { heading becomes screen x }
screen_y:=atan2(y,dxa)*ra_2_screen; { pitch becomes screen y }
distance:=sqrt(y*y+dxa*dxa);        { true range for z-buffering }
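Wrapped up as a compilable routine it'd look something like this (an FPC-style sketch using the Math unit's ArcTan2 instead of my table workalike; the names are mine):

uses Math;  { for ArcTan2 }

procedure ProjectPoint(x, y, z, screenW: Single; out sx, sy, dist: Single);
var
  pxPerRad, dxa: Single;
begin
  pxPerRad := (screenW / 2) / Pi;    { pixels per radian }
  dxa := Sqrt(x * x + z * z);        { distance in the ground (x,z) plane }
  sx := ArcTan2(x, z) * pxPerRad;    { heading -> screen x }
  sy := ArcTan2(y, dxa) * pxPerRad;  { pitch -> screen y }
  dist := Sqrt(y * y + dxa * dxa);   { true range for z-buffering }
end;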

The method itself has some advantages - like rotating the world around the 'camera', at least on two axes (yaw and pitch), becomes simple addition... (though roll is basically going back to sin/cos) - but there are also some disadvantages: because I'm not using OpenGL's matrices, applying lighting and shadows is going to be a 'from scratch' affair, textures when applied may not 'wrap' to depth properly, etc, etc. (I never really got that far though)... It's a technique I pretty much had to come up with on my own, since I could find no real texts on handling this sort of thing - everything is either matrices or flat multiplies, with full rotations for the final camera and then a simple z divide to do the perspective.
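To illustrate the shortcut (a rough sketch with my own variable names - the yaw and pitch offsets here are pre-converted to pixels, and I'm hand-waving whether a flat 2D rotate is exactly right for roll on a cylinder):

uses Math;  { for SinCos }

procedure ApplyCamera(var sx, sy: Single; yawPx, pitchPx, roll: Single);
var
  s, c, rx, ry: Single;
begin
  sx := sx + yawPx;    { yaw: slide the whole strip sideways }
  sy := sy + pitchPx;  { pitch: slide it up or down }
  SinCos(roll, s, c);  { roll: back to sin/cos, rotating in screen space }
  rx := sx * c - sy * s;
  ry := sx * s + sy * c;
  sx := rx;
  sy := ry;
end;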

That reminds me - can anyone explain to me the raging chodo for matrices? I still have trouble wrapping my brain around the concept of 16 variables (meaning more memory access AND less likely to fit in the L2 cache, much less the L1) and 64 multiplies being FASTER than the classic 6 and 6, or this technique... hell, it's so grossly inefficient they ended up having to make an entire register set and series of opcodes (disabling the mathco in the process) just to make it USABLE... Doesn't seem right to me.

Anywho, if someone can come up with other ways of implementing this, I'd love to hear them. It's been, well... almost a decade since I put any serious thought in this direction, but now that I'm recently semi-successfully retired AND it looks like people are taking my favorite language seriously again, I'm thinking of dusting off the hat and taking a serious stab at it.

Oh, I've also been (for the past year) playing on and off with the math from above to do simple 3d in javascript and SVG - it's got a LOT more possibilities in that environment, since matmults and world rotations are agonizing in an interpreted language run atop a web browser.

LP
03-02-2007, 10:42 PM
So my question is, how would you guys implement this? I've asked in the past, and the answer I've gotten most often is the 'multiple windows' approach that got rejected for the reasons above.

I think that the approach you are suggesting:
ra_2_screen:=(screen_width/2)/pi;   { pixels per radian - 2*pi radians spans the full strip }
dxa:=sqrt(x*x+z*z);                 { distance in the ground (x,z) plane }
screen_x:=atan2(x,z)*ra_2_screen;   { heading becomes screen x }
screen_y:=atan2(y,dxa)*ra_2_screen; { pitch becomes screen y }
distance:=sqrt(y*y+dxa*dxa);        { true range for z-buffering }

...can be implemented using a vertex shader. However, my shader knowledge is still a little rusty (I have just started messing around with vertex/pixel shaders). I am very interested in this approach and would like to attempt it one of these days.


That reminds me - can anyone explain to me the raging chodo for matrices?
If you ask why matrices are used in modern 3D hardware: it is because it is relatively easy to pack scale/shear/rotation/translation information into a single matrix, and the multiplication can be parallelized.
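For example (a generic sketch, not tied to any particular API): fold a translation matrix and a rotation matrix together once, and from then on every vertex costs a single matrix*vector - and each of the four output components can be computed independently, i.e. in parallel.

type
  TMat4 = array[0..3, 0..3] of Single;

{ compose two transforms: applying Result equals applying b, then a }
function MatMul(const a, b: TMat4): TMat4;
var
  i, j, k: Integer;
begin
  for i := 0 to 3 do
    for j := 0 to 3 do
    begin
      Result[i, j] := 0;
      for k := 0 to 3 do
        Result[i, j] := Result[i, j] + a[i, k] * b[k, j];
    end;
end;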


I still have trouble wrapping my brain around the concept of 16 variables (meaning more memory access AND less likely to fit in the L2 cache, much less the L1) and 64 multiplies being FASTER than the classic 6 and 6, or this technique... hell, it's so grossly inefficient
Why don't you try to implement this on GPU? If your algorithm ends up being much faster, you may well win a Nobel Prize and a guaranteed job at your favorite company. ;)

Clootie
04-02-2007, 10:37 AM
Interesting...
I still have to look at your approach to understand it better, but off the top of my head - the main problem you will encounter is that HW _rasterization_ happens under the assumption that screen coordinates are linear and all other triangle attributes are perspective-correct (using the common camera model). With your approach that's not true. So, to correctly draw the scene you need to either implement a ray-tracer (on the GPU :) ) or tessellate all the triangles you draw down to sub-pixel (or maybe a couple of pixels) size.

Summary: after your transformation, lines are no longer lines, but arcs of circles.
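To illustrate the tessellation route (a rough sketch only - SmallOnScreen and EmitTriangle are placeholders for your projection/draw code): split each triangle at its edge midpoints until it is small enough on screen that the linear interpolation error stops mattering.

type
  TVec3 = record x, y, z: Single; end;

function Mid(const a, b: TVec3): TVec3;
begin
  Result.x := (a.x + b.x) * 0.5;
  Result.y := (a.y + b.y) * 0.5;
  Result.z := (a.z + b.z) * 0.5;
end;

procedure Subdivide(const a, b, c: TVec3; depth: Integer);
var
  ab, bc, ca: TVec3;
begin
  if (depth = 0) or SmallOnScreen(a, b, c) then
    EmitTriangle(a, b, c)  { project and draw as before }
  else
  begin
    ab := Mid(a, b); bc := Mid(b, c); ca := Mid(c, a);
    Subdivide(a, ab, ca, depth - 1);
    Subdivide(ab, b, bc, depth - 1);
    Subdivide(ca, bc, c, depth - 1);
    Subdivide(ab, bc, ca, depth - 1);
  end;
end;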

I think for a typical Mech scene (a lot of robots in a fill-in-the-blank desert :D ) ray-tracing can be implemented on the GPU (as you don't need a lot of dynamic sorting of the scene).

deathshadow
04-02-2007, 02:01 PM
...can be implemented using a vertex shader. However, my shader knowledge is still a little rusty (I have just started messing around with vertex/pixel shaders).

I don't see how, since vertex shaders still seem to be subject to the standard perspective projection - which isn't compatible with what I'm doing... unless you are saying to use one to apply a correction to compensate for the fish-eye effect.



If you ask why matrices are used in modern 3D hardware: it is because it is relatively easy to pack scale/shear/rotation/translation information into a single matrix, and the multiplication can be parallelized.

That's not really an answer - any series of repetitive calculations can be parallelized, and I find the application of everything via a matrix to be very wasteful of both CPU and programming effort. It reeks of trying to fit a square peg into a round hole.


Why don't you try to implement this on GPU?
I'm not certain you can - unless there's some way to program a GPU through OpenGL or DirectX to do a different type of perspective calculation than I'm aware of. It seems almost hardcoded to either do flat z-index perspective off x and y, or to not do perspective at all (orthogonal).

The 'normal' process for rendering a perspective based landscape is:

world transform
matmult world rotate
screenx=x/z;
screeny=y/z;

Almost all of the above being handled by openGL and/or directX with no code changes.

whereas mine runs in a different order:
world transform
screenx=atan(x,z)+yaw
screeny=atan(y,z)+pitch
roll screenx,screeny

ALL of it having to be done on the CPU, since there are no interfaces IN OpenGL or DirectX (that I'm aware of) to actually write your own projections. The vertex shader is cute once you've sent it vertexes TO process, but doesn't help to actually change the projection itself.

I am discovering something interesting in orthogonal view though - z-index DOES appear to scale the texture properly... I also think I can manually calculate normals old-school to apply lighting, but I'm still completely in the dark on how I'm going to make this cast shadows.
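(The old-school normal calculation I mean is just the cross product of two edges, normalized - a sketch below, assuming a simple vector record and consistent winding:)

type
  TVec3 = record x, y, z: Single; end;

function FaceNormal(const a, b, c: TVec3): TVec3;
var
  ux, uy, uz, vx, vy, vz, len: Single;
begin
  ux := b.x - a.x; uy := b.y - a.y; uz := b.z - a.z;
  vx := c.x - a.x; vy := c.y - a.y; vz := c.z - a.z;
  Result.x := uy * vz - uz * vy;  { cross product of the two edges }
  Result.y := uz * vx - ux * vz;
  Result.z := ux * vy - uy * vx;
  len := Sqrt(Sqr(Result.x) + Sqr(Result.y) + Sqr(Result.z));
  if len > 0 then
  begin
    Result.x := Result.x / len;   { normalize to unit length }
    Result.y := Result.y / len;
    Result.z := Result.z / len;
  end;
end;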

It also appears the vertex distortion is low enough that most people wouldn't notice it.

JSoftware
04-02-2007, 04:14 PM
Why don't you think you could handle the projection in a vertex shader?

The reason to use matrices instead of your method is the atan call. If you didn't have a lookup table it would take a long time to calculate, and the lookup table wouldn't be precise enough unless you used a whole lot of memory for it.

deathshadow
04-02-2007, 04:40 PM
Why don't you think you could handle the projection in a vertex shader?
Since vertex shaders seem to be applied before projection - requiring you to still use a final projection - and don't seem well suited to changing things based on the master camera. I'm not certain you can input a vertex that, at the current rotation, ends up behind you and have it come out at the side of the screen... unless I'm missing a major part of how vertex shaders work. Can you actually override the final camera projection via shaders? If so, how? (I'm really lost on that.) Would that include the ability to handle the left and right edge 'wraps' of elements?


The reason to use matrices instead of your method is the atan call. If you didn't have a lookup table it would take a long time to calculate, and the lookup table wouldn't be precise enough unless you used a whole lot of memory for it.
You only need the array accurate to screen granularity, therefore you'd only need as many elements as there are pixels across the screen - and let's face it, in the modern computing environment 1024 dwords (4k) or even double precision (8k) is bupkis... If you keep the range below 45 degrees your precision remains high (always remember high angle skew on slopes - the same problem the classic divide-by-depth for z-index has), meaning you can 'reuse' the same 45 degree stripe 8 times with a minimizing compare chain (or a modulo/integer divide). In fact it ends up so small you can fit it into the L1 cache during calculation if you plan the routine properly ;)
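To show the shape of the thing (NOT my actual workalike - that one's integer-only - just a floating point sketch of the one-octant table and the compare chain):

const
  TABLE_SIZE = 1024;  { roughly one entry per horizontal pixel }
var
  AtanTab: array[0..TABLE_SIZE] of Single;

procedure InitAtanTab;
var
  i: Integer;
begin
  for i := 0 to TABLE_SIZE do
    AtanTab[i] := ArcTan(i / TABLE_SIZE);  { ratios 0..1 cover 0..45 degrees }
end;

function FastAtan2(y, x: Single): Single;
var
  ax, ay, a: Single;
begin
  ax := Abs(x); ay := Abs(y);
  if (ax = 0) and (ay = 0) then
  begin
    FastAtan2 := 0; Exit;
  end;
  if ax >= ay then
    a := AtanTab[Round(ay / ax * TABLE_SIZE)]            { octant 0..45 }
  else
    a := Pi / 2 - AtanTab[Round(ax / ay * TABLE_SIZE)];  { fold 45..90 }
  if x < 0 then a := Pi - a;  { fold into the left half }
  if y < 0 then a := -a;      { fold into the bottom half }
  FastAtan2 := a;
end;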