
Normal mapping



User137
14-07-2012, 04:03 PM
Has anyone done a simple demo of normal mapping with OpenGL? All it would need to do is draw a cube, using a texture and a normal-map texture, and, as a bonus, a specular-map texture.

It would be nice if it's possible without getting into pixel shaders, but I know there are some demos using GL_ARB_multitexture. I have looked up some C sources but just can't get it working. I must be missing some detail. The best I have come up with is shown in the attached screenshot (you can see that it's actually just drawing the normal map, instead of using it for lighting, which in itself goes wrong).

Also, I am aware there are 2 different kinds of normal maps. One kind perturbs the model's own face normals with the normal map, and in the other the normal map alone drives the lighting calculation. I'm actually more interested in the latter, because it would mean less memory use if I don't need vector arrays for normals.

Well, I can give some converted code which probably has a ton of bugs (it's written for the nxPascal engine):

model: TGLModel;
texture, normalMap, cubemap: cardinal;
dl: TDisplayList;
tangentSpaceLight: array of TVector;
sTangent, tTangent, normal: array of TVector;


procedure GenerateNormalisationCubeMap;
var size: integer; offset, halfSize: single; data: array of byte;

  procedure Generate(arb: byte);
  var i, j, dp: integer; target: cardinal; tempVector: TVector;
  begin
    dp:=0;
    for j:=0 to size-1 do
      for i:=0 to size-1 do begin
        case arb of
          0: begin // +X
            tempVector.x:=halfSize;
            tempVector.y:=-(j+offset-halfSize);
            tempVector.z:=-(i+offset-halfSize);
          end;
          1: begin // -X
            tempVector.x:=-halfSize;
            tempVector.y:=-(j+offset-halfSize);
            tempVector.z:=(i+offset-halfSize);
          end;
          2: begin // +Y
            tempVector.x:=(i+offset-halfSize);
            tempVector.y:=halfSize;
            tempVector.z:=(j+offset-halfSize);
          end;
          3: begin // -Y
            tempVector.x:=(i+offset-halfSize);
            tempVector.y:=-halfSize;
            tempVector.z:=-(j+offset-halfSize);
          end;
          4: begin // +Z
            tempVector.x:=(i+offset-halfSize);
            tempVector.y:=-(j+offset-halfSize);
            tempVector.z:=halfSize;
          end;
          5: begin // -Z
            tempVector.x:=-(i+offset-halfSize);
            tempVector.y:=-(j+offset-halfSize);
            tempVector.z:=-halfSize;
          end;
        end;
        tempVector:=Norm2(tempVector);
        // pack the unit vector from [-1..1] into the [0..1] color range
        tempVector.x:=0.5*tempVector.x+0.5;
        tempVector.y:=0.5*tempVector.y+0.5;
        tempVector.z:=0.5*tempVector.z+0.5;
        data[dp]:=round(tempVector.x*255);
        data[dp+1]:=round(tempVector.y*255);
        data[dp+2]:=round(tempVector.z*255);
        inc(dp,3);
      end;
    // GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB doesn't fit in a byte,
    // so store the face target in a separate variable
    target:=GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB+arb;
    glTexImage2D(target, 0, GL_RGBA8, size, size, 0, GL_RGB,
      GL_UNSIGNED_BYTE, @data[0]);
  end;

var i: integer;
begin
  size:=32; offset:=0.5; halfSize:=size/2;
  setlength(data, size*size*3);
  for i:=0 to 5 do Generate(i);
end;


procedure TForm1.FormCreate(Sender: TObject);
var bmp: TBitmap; i: integer;
begin
  clientwidth:=800; clientheight:=600;
  nx.CreateGlWindow(self);
  nx.Perspective(false); nx.DefaultLights;

  if not nx.GLInfo('GL_ARB_multitexture') then begin
    showmessage('GL_ARB_multitexture not supported!'); exit;
  end;

  tex.Options:=tex.Options+[toKeepData];

  model:=TGLModel.Create;
  model.LoadFromW3D('data\donut.w3d');
  model.LoadTextures('data');
  texture:=model.mat[0].texIndex;
  model.MakeDisplayList(dl);
  setlength(tangentSpaceLight, model.vCount);
  setlength(sTangent, model.fCount);
  setlength(tTangent, model.fCount);
  setlength(normal, model.fCount);
  for i:=0 to model.fCount-1 do CalculateTangentSpace(i);

  bmp:=TBitmap.Create;
  MakeBumpTexture(0, bmp);

  normalMap:=tex.AddTexture('bump','');
  tex.LoadBMPData(normalMap, bmp);
  tex.Restore(normalMap);

  normalMap:=tex.texture[normalMap].index;
  model.mat[0].texIndex:=normalMap;
  bmp.Free;

  cubemap:=tex.AddTexture('_cubemap_','');
  cubemap:=tex.texture[cubemap].index;
  glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, cubemap);
  GenerateNormalisationCubeMap;
  glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
  glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
  glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
  ...
end;


procedure TForm1.Timer1Timer(Sender: TObject);
var i, j: integer; objectLightPosition, lightVector: TVector;
begin
  ...
  objectLightPosition:=vector(0, 5, -10);

  for i:=0 to model.fCount-1 do
    for j:=0 to 2 do begin
      lightVector:=VectorSub2(objectLightPosition, model.va[model.fa[i, j]]);
      tangentSpaceLight[model.fa[i, j]].x:=Dot(sTangent[i], lightVector);
      tangentSpaceLight[model.fa[i, j]].y:=Dot(tTangent[i], lightVector);
      tangentSpaceLight[model.fa[i, j]].z:=Dot(normal[i], lightVector);
    end;

  // Bind normal map to texture unit 0
  glBindTexture(GL_TEXTURE_2D, normalMap);
  glEnable(GL_TEXTURE_2D);

  // Bind normalisation cube map to texture unit 1
  glActiveTextureARB(GL_TEXTURE1_ARB);
  glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, CubeMap);
  glEnable(GL_TEXTURE_CUBE_MAP_ARB);
  glActiveTextureARB(GL_TEXTURE0_ARB);

  glVertexPointer(3, GL_FLOAT, 0, @model.va[0]);
  glEnableClientState(GL_VERTEX_ARRAY);

  glTexCoordPointer(2, GL_FLOAT, 0, @model.ta[0]);
  glEnableClientState(GL_TEXTURE_COORD_ARRAY);

  glClientActiveTextureARB(GL_TEXTURE1_ARB);
  glTexCoordPointer(3, GL_FLOAT, 0, @tangentSpaceLight[0]);
  glEnableClientState(GL_TEXTURE_COORD_ARRAY);
  glClientActiveTextureARB(GL_TEXTURE0_ARB);

  glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
  glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE);
  glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_REPLACE);

  glActiveTextureARB(GL_TEXTURE1_ARB);

  glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
  glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE);
  glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_DOT3_RGB_ARB);
  glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_PREVIOUS_ARB);

  glActiveTextureARB(GL_TEXTURE0_ARB);

  // Render bumps
  glDrawElements(GL_TRIANGLES, model.fCount*3, GL_UNSIGNED_SHORT, @model.fa[0]);

  glDisable(GL_TEXTURE_2D);

  glActiveTextureARB(GL_TEXTURE1_ARB);
  glDisable(GL_TEXTURE_CUBE_MAP_ARB);
  glActiveTextureARB(GL_TEXTURE0_ARB);

  // Disable vertex arrays
  glDisableClientState(GL_VERTEX_ARRAY);
  glDisableClientState(GL_TEXTURE_COORD_ARRAY);

  glClientActiveTextureARB(GL_TEXTURE1_ARB);
  glDisableClientState(GL_TEXTURE_COORD_ARRAY);
  glClientActiveTextureARB(GL_TEXTURE0_ARB);

  // Return to standard modulate texenv
  glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

  // Enable multiplicative blending
  glBlendFunc(GL_DST_COLOR, GL_ZERO);
  glEnable(GL_BLEND);

  glBindTexture(GL_TEXTURE_2D, Texture);
  glEnable(GL_TEXTURE_2D);

  glVertexPointer(3, GL_FLOAT, 0, @model.va[0]);
  glEnableClientState(GL_VERTEX_ARRAY);

  // note: this points at the vertex array; it should likely point
  // at the model's per-vertex normal array instead
  glNormalPointer(GL_FLOAT, 0, @model.va[0]);
  glEnableClientState(GL_NORMAL_ARRAY);

  glTexCoordPointer(2, GL_FLOAT, 0, @model.ta[0]);
  glEnableClientState(GL_TEXTURE_COORD_ARRAY);

  // Render object
  glDrawElements(GL_TRIANGLES, model.fCount*3, GL_UNSIGNED_SHORT, @model.fa[0]);

  // Disable texture
  glDisable(GL_TEXTURE_2D);

  // Disable vertex arrays
  glDisableClientState(GL_VERTEX_ARRAY);
  glDisableClientState(GL_NORMAL_ARRAY);
  glDisableClientState(GL_TEXTURE_COORD_ARRAY);

  // Disable blending
  glDisable(GL_BLEND);

  nx.Flip;
end;

But of course, if this would be simpler or more efficient to do with pixel shaders, then I will try that instead. I just don't know much about this topic.

User137
17-07-2012, 03:14 PM
Started researching GLSL, which seems to be the mainstream pixel/fragment-shading language. What I have gathered so far:

http://oss.sgi.com/projects/ogl-sample/registry/ARB/fragment_shader.txt
http://en.wikipedia.org/wiki/OpenGL_Shading_Language#A_sample_trivial_GLSL_fragment_shader
http://nehe.gamedev.net/article/glsl_an_introduction/25007/

GL functions that are related:
glCreateProgram
glShaderSource
glUseProgram
glBindFragDataLocation
glValidateProgram
glLinkProgram
glAttachShader
glGetFragDataLocation
glDetachShader
glDeleteProgram

I'm also assuming that I still need to use multitexturing, but not cube-mapping.

edit: Finally it's working. The GL code is much easier with GLSL than in the early attempt above. But parts of the shader code are beyond my understanding; luckily there is ready-made code on the net. The shader in the attached screenshot only supports 1 light source, with its ambient, diffuse and position.

LP
17-07-2012, 10:48 PM
If you want to use normal mapping to simulate bumpy surfaces, the typical approach would be:

1) Create the normal map. You seem to do this in code, but you can also use free tools like this one (http://www.blendernation.com/2009/11/20/open-source-normal-map-generator/). The normal map can be loaded using a typical format like GL_RGBA.
2) In addition to vertex normals, you will also need to generate tangent and binormal vectors. This can be tricky; search for it on the Internet.
3) Pass vertex normals, tangents and binormals to the vertex shader, where you will also receive information about your light sources. You will need to convert your light source parameters, such as direction, into tangent space (this is why you need normals, tangents and binormals: to form the needed 3D matrix). Send these parameters to the fragment shader.
4) In the fragment shader, you receive the interpolated light source parameters in tangent space, along with the normal map texture. Load/decode the normal value from the normal map texture, which should by default be in tangent space; using this normal and the interpolated light parameters, all in tangent space, do the illumination calculation. You should get a perfectly normal-mapped surface.

I once provided a bump-mapping example in Asphyre 4 (http://www.afterwarp.net/index.php?option=com_content&id=48), which used DX9 and HLSL shaders. You could probably translate the shader code into GLSL easily.

P.S. You don't need multitexturing, cube texturing or other weird stuff for this.

User137
17-07-2012, 11:30 PM
1) Yeah, that was done in code, but for this demo I after-edited it with Gimp to smooth it, and saved to PNG. And I did notice there's a lot going on with the normal map. It seems like the lighting is a little different on each face of the cube; they take the light as coming from different angles... I guess it's about what you explain in 2).
edit: But here I am assuming that everything would be fixed automatically if I only had 1 texture covering the whole object. Not repeating itself, but each face being a separate part of the texture.

And luckily I did not even need multitexturing :) Those examples were just that good. I just updated the nxPascal SVN, and here is the source code, if someone wants to try it.
( It's there btw: http://code.google.com/p/nxpascal/ )

Then this just needs to be wrapped in a class for even easier use. Most future additions to this should go into the separate custom shader file and its Uniform parameters.

1 more thing: I had to use a vertex shader as well to get anything showing. Is this really necessary? Maybe it's preparing the data into some format that's usable by the fragment shader. Maybe even a performance boost of some sort?

edit2: I have now tested that it still works on the surface of a sphere. This shader takes the vertex normals into account, so it's not a true normal map, but there are uses for both styles.

User137
02-10-2012, 12:33 PM
I was trying to display some Skyrim objects that have both a color map and a normal map. The technique it uses is not bump-mapping, I believe. You can see an example here. (http://amber.rc.arizona.edu/lw/normalmaps.html)
whereas a typical bump-map looks like this. (http://t1.gstatic.com/images?q=tbn:ANd9GcRnrHcR86gb9CCy7_OhZccfcyCj52zkzUlSXLPHeEilegH-yotNNV30PRtFYw)

It is hard to find shaders that can render it >:( I have tried modifying the code myself a bit too, but the math behind them is still complicated. Basically, in my opinion, neither shader should include the word "gl_Normal". The normal at a pixel position is exactly that of the normalMap texture at those coords. Yet every shader that I have found uses gl_Normal.

This also explains a bit:
http://mvarts.wordpress.com/2011/03/18/real-time-normal-map-dxt-compression/

Normal mapping is an application of bump mapping, and was introduced by Peercy et al. [2]. While bump mapping perturbs the existing surface normals of an object, normal mapping replaces the normals entirely. A normal map is a texture that stores normals. These normals are usually stored as unit-length vectors with three components: X, Y and Z. Normal mapping has significant performance benefits over bump mapping, in that far fewer operations are required to calculate the surface lighting.

LP
02-10-2012, 01:08 PM
It is hard to find shaders that can render it >:( I have tried modifying the code myself a bit too, but the math behind them is still complicated. Basically, in my opinion, neither shader should include the word "gl_Normal". The normal at a pixel position is exactly that of the normalMap texture at those coords. Yet every shader that I have found uses gl_Normal.
In order to use normal mapping, you either need to work in tangent space altogether or transform the texture normal from tangent space to world space in the pixel shader. Because of this, at one point or another you will have to pass vertex normal, tangent and binormal vectors to the vertex shader only or to both, while the texture normal is used in the pixel shader as a final step.

User137
02-10-2012, 01:31 PM
Those are still too complicated instructions for me. Why would I need to pass the vertex normal? I do not see why the vertex shader would use it. All it needs to do is pass the light vector to the fragment shader, probably multiplied by the modelview matrix, or its inverse. But I don't know how to do any of that, especially the part in the fragment shader.

Well, I have something like this in the fragment shader (overall it doesn't work yet, but I feel like this might be close to right):

uniform sampler2D colorMap;
uniform sampler2D normalMap;
varying vec3 lightDir;

void main()
{
    vec3 l = lightDir;
    vec3 n = normalize(texture2D(normalMap, gl_TexCoord[0].st).xyz * 2.0 - 1.0);
    float power = dot(n, l);
    if (power < 0.0) power = 0.0;
    vec4 ambient  = vec4(0.2, 0.2, 0.2, 1.0);
    vec4 diffuse  = vec4(0.6, 0.6, 0.6, 1.0);
    vec4 specular = vec4(1.0, 1.0, 1.0, 1.0) * power;
    gl_FragColor = (ambient + diffuse) * texture2D(colorMap, gl_TexCoord[0].st) + specular;
}
(Yes, I took out all materials and light settings for now... and am trying to use just 1 directional light.)

Update: I got much further after I tried to render normalMap in place of colorMap. To my surprise it wasn't showing correctly. I changed the code from this:

tex.TextureUnit:=1; tex.SetByName('normalmap');
tex.TextureUnit:=0; tex.SetByName('texture');

to this, and it started working better. It is using the normals for specular, but I still need to multiply by the modelview matrix or something in the vertex shader:

tex.TextureUnit:=0; tex.SetByName('texture');
tex.TextureUnit:=1; tex.SetByName('normalmap');

I mean, I don't understand why that order would matter. All it does is:
glActiveTexture(GL_TEXTURE0 + n);
edit2: Oh, I think nxPascal decided the texture index didn't change and skipped the bind... Bug!

LP
02-10-2012, 02:49 PM
Those are still too complicated instructions for me. Why would I need to pass the vertex normal? I do not see why the vertex shader would use it.
Your normal map texture has normal values that are specified in tangent space (http://en.wikipedia.org/wiki/Normal_mapping#Calculating_tangent_space), while light direction and other lighting parameters are typically specified in world space (http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/3d-basics-r673). You need vertex normal, vertex tangent and vertex binormal for doing conversions between tangent space and world space.

User137
02-10-2012, 03:09 PM
Yeah, I know what you mean by world space. I now pass the modelview rotation to the vertex shader with a uniform:

uniform mat3 mv_rotation;
varying vec3 lightDir;

void main()
{
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_MultiTexCoord0;
    vec3 vertexPos = vec3(gl_ModelViewMatrix * gl_Vertex);
    lightDir = (gl_LightSource[0].position.xyz - vertexPos); // / lightRadius;
    lightDir = mv_rotation * lightDir;
    lightDir = normalize(lightDir);
}
I applied materials to the fragment shader, used shininess power etc., and the end result is already quite impressive. But it's not working with all rotations.

If the model is facing me and I rotate it 360 degrees around the Y-axis, everything looks perfect.

If the model is facing me and I rotate it 360 degrees around the X-axis, the normals feel inverted.

If the model is facing away from me and I rotate it 360 degrees around the X-axis, everything looks perfect.

edit: Actually, I'm starting to think this is behaving like a point light now, not a directional light. I have to check the math.

Dan
02-10-2012, 03:40 PM
You can see example here.
If your normal maps look like the one in this example, then you are simply dealing with another approach to bump mapping. Instead of storing the normal map values in tangent space, they are here converted to model space (the space where the 3d model is unmodified by any transformations). In a way this approach is actually simpler than the one with tangent-space normal maps. What's good about it is that you don't need to deal with tangent space at all, and you don't even need the vertex normals anymore, because all the normals are stored in the texture. So all you need to do in the pixel shader is fetch the normal map texel, get the model-space normal vector (NormalMapPixel.xyz * 2 - 1), transform it by the world-space transformation, normalize, and you've got your world-space normal vector. The major disadvantage of this method, and the reason tangent space is normally used, is that these normal map textures are not reusable between different meshes. So it's ok for character models and such, but it's not going to work with the various environment models.

User137
02-10-2012, 04:34 PM
This was a good image explaining the difference between the 2 kinds of normal maps:
http://mvarts.files.wordpress.com/2011/03/real-time-normal-map-dxt-compression.jpg?w=980

I know it's situational, but I like being able to use models that others make for games :) So far I've only encountered the kind shown on the left.

Dan
02-10-2012, 06:21 PM
So did you get it to work? If you need help with the shaders, you can post both the vertex and pixel shaders and I'll fix them for you.

User137
02-10-2012, 07:33 PM
No, though it's frustratingly close to working :D I can see that the direction of the light rotates around the object faster than the object itself rotates, which causes some angles to work and some not. It would be nice if you can spot the error in this.

From my tests, the result is the same whether I multiply the normal by mv_rotation or multiply the light in the vertex shader. But I assume it's more efficient on a per-vertex basis than per-pixel, so I put it on the light.
mv_rotation is the 3x3 part of the modelview matrix. So the main program has this after the camera is set:

glGetFloatv(GL_MODELVIEW_MATRIX, @m4);
m:=GetRotation3(m4);
glUniformMatrix3fv(locRotation, 1, bytebool(GL_FALSE), @m);

Vertex shader:

uniform mat3 mv_rotation;
varying vec3 lightDir;

void main()
{
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_MultiTexCoord0;
    lightDir = mv_rotation * gl_LightSource[0].position.xyz;
    lightDir = normalize(lightDir);
}

Fragment shader:

uniform sampler2D colorMap;
uniform sampler2D normalMap;
//uniform mat3 mv_rotation;
varying vec3 lightDir;

void main()
{
    vec3 l = lightDir;
    vec3 n = texture2D(normalMap, gl_TexCoord[0].st).xyz * 2.0 - 1.0;
    n = normalize(n);
    float specular = max(dot(l, n), 0.0);
    specular = pow(specular, gl_FrontMaterial.shininess);
    vec4 vAmbient = gl_LightSource[0].ambient * gl_FrontMaterial.ambient;
    float diffuse = max(dot(l, n), 0.0);
    vec4 vDiffuse = gl_LightSource[0].diffuse * gl_FrontMaterial.diffuse * diffuse;
    vec4 vSpecular = gl_LightSource[0].specular * gl_FrontMaterial.specular * specular;
    gl_FragColor = (vAmbient + vDiffuse) * texture2D(colorMap, gl_TexCoord[0].st) + vSpecular;
}

Dan
03-10-2012, 03:55 AM
If you want to work in model space, aka transform the light vector instead of the normal, you are asking for a world of pain, especially if you are planning to add skinning to this code. Btw, if you still want to work in model space, you need to multiply your light direction by the transpose of the rotation matrix (or just swap the order of multiplication: instead of l = mat * vec, do l = vec * mat). Also, there is no need to normalize the light direction vector in the vertex shader, because you still have to normalize it in the pixel shader (which you are not doing, btw).
So here's the code that I think should work:
vertex shader


//it is a good idea to specify which version of the GLSL
//specification you want to work with
#version 120
uniform mat4 WVP;
uniform mat4 W;
uniform vec3 CamPos; //this is needed for specular reflections (camera position)
varying vec3 LightDir;
varying vec3 CamDir;

void main () {
    //this way of transforming the position is deprecated in later versions
    //of GLSL so I strongly suggest that you use the attributes of the vertex
    //gl_Position = ftransform();
    gl_Position = WVP * gl_Vertex;
    //again I recommend using the texture coord attribute here
    //but lets just keep it as is for now
    gl_TexCoord[0] = gl_MultiTexCoord0;
    vec3 VertexPosition = vec3(W * gl_Vertex);
    LightDir = gl_LightSource[0].position.xyz - VertexPosition;
    CamDir = VertexPosition - CamPos; //direction from camera to vertex
    //absolutely no need to do this in vertex shader;
    //it gets normalized in the pixel shader instead
    //LightDir = normalize(LightDir);
}


pixel shader


#version 120
uniform sampler2D colorMap;
uniform sampler2D normalMap;
uniform mat4 W;
varying vec3 LightDir;
varying vec3 CamDir;

void main() {
    vec3 l = normalize(LightDir);
    vec3 n = texture2D(normalMap, gl_TexCoord[0].xy).xyz * 2.0 - 1.0;
    n = normalize(mat3(W) * n);
    vec3 v = normalize(CamDir);
    vec3 r = v - 2.0 * n * dot(v, n); //reflect the view direction about the normal
    float DiffuseLight = clamp(dot(n, l), 0.0, 1.0);
    float SpecularLight = pow(clamp(dot(l, r), 0.0, 1.0), gl_FrontMaterial.shininess);
    vec4 AmbientColor = vec4(gl_LightSource[0].ambient.xyz * gl_FrontMaterial.ambient.xyz, 1.0);
    vec4 DiffuseColor = vec4(gl_LightSource[0].diffuse.xyz * gl_FrontMaterial.diffuse.xyz * DiffuseLight, 1.0);
    vec4 SpecularColor = vec4(gl_LightSource[0].specular.xyz * gl_FrontMaterial.specular.xyz * SpecularLight, 1.0);
    gl_FragColor = texture2D(colorMap, gl_TexCoord[0].xy) * (DiffuseColor + AmbientColor) + SpecularColor;
}


For these shaders you will need to send additional uniforms: WVP is the combined ModelView * Projection matrix, W is the transformation of the model, and CamPos is just your camera position.

User137
03-10-2012, 11:15 AM
You didn't test it, I assume? There seem to be a lot of new things, and it showed up black for me.

First I changed just this line, and I was able to see the ambient color:

//vec4 AmbientColor = vec4(gl_LightSource[0].ambient.xyz * gl_FrontMaterial.ambient.xyz, 1);
vec4 vAmbient = gl_LightSource[0].ambient;
edit: What was I thinking here... When I add " * gl_FrontMaterial.ambient", I get black. The main program has this (I tried GL_BACK just in case):

v4:=vector4f(0.6, 0.6, 0.6, 1.0);
glMaterialfv(GL_FRONT, GL_AMBIENT, @v4);
glMaterialfv(GL_BACK, GL_AMBIENT, @v4);
*Pulls hair*
I proceeded to change the other materials to this format too, but all I see is still the ambient color. That means some part of the normal calculation or number feeding is failing.

That failure can be partly my fault too, because when I swap these comments around I get a differently shaped model:

//gl_Position = ftransform();
gl_Position = WVP * gl_Vertex;
I was previously doing just glTranslate/glRotate commands, without my own camera handler in use at all, so I'm now experimenting with that as well. The object didn't have its own rotation, but now I removed the glRotates and rotated the object matrix instead. Oddly, I had replaced ftransform() with a custom multiplication at one point before, and it worked back then. I have now tried each multiplication both ways around; it just always makes a deformed model.


btw if you still want to work in model space you need to multiply your light direction by a transpose of the rotation matrix (or just swap the order of multiplication, instead of l = mat * vec do l = vec * mat)
That was one of the things I tried before :)

PS. I was able to make a specular-map texture work in yesterday's version, though with the normal problems it had. It was simply a new uniform and:

specular = pow(specular, gl_FrontMaterial.shininess) * texture2D(specMap, gl_TexCoord[0].st);

I'll see if I can make a standalone test project with Lazarus, but I might be busy with things other than programming today.

Dan
03-10-2012, 11:23 AM
If you could post a standalone project, that would be helpful, so I could test the code.

User137
03-10-2012, 11:59 AM
I put back exactly the shaders you posted after I got the main program into better shape. The deforming and some other problems were due to not passing the right uniforms. I can see the pixel-shaded model (with no specular at all), but with the same light-direction problems I had in my own version before. I'm not sure the code is right, but this is what I'm doing in the loop:

glGetFloatv(GL_PROJECTION_MATRIX, @p4);
glLoadIdentity;
glTranslatef(0, 0, -2.2);
glMultMatrixf(@mat);

glGetFloatv(GL_MODELVIEW_MATRIX, @m4);
m4:=m4*p4;
// glGetUniformLocation returns -1 when not found; location 0 is valid
if locWVP>=0 then glUniformMatrix4fv(locWVP, 1, bytebool(GL_FALSE), @m4);
if locW>=0 then glUniformMatrix4fv(locW, 1, bytebool(GL_FALSE), @mat);
campos:=vector(0, 0, 2.2);
if locCam>=0 then glUniform3fv(locCam, 1, @campos);

//glMultMatrixf(@mat); // I think this must be called before getting the modelview
model.Render; // (model diameter is 0.5)

User137
15-10-2012, 05:21 PM
I made a tool for nxPascal that can create a normal-map texture when the object normals are known. Naturally it's rough, but if a high-quality model is available in addition to the low-quality one, the result can be good. After a little blurring, maybe... I created a 12k-triangle sphere for the normal map, and am using an 800-triangle model in the attached demo. This way I can be sure that the color red means normal (1, 0, 0), exactly where the texture coordinate says. I added some noise on a gray background for the specular map.

I already met 2 testers who got just a black screen. The reason for one was that the gl_Vertex stuff isn't even supported by his new card anymore, I think. The other had a slightly different error; I assume his card is older than mine, though. But I have improved the shaders' error logging greatly since then; it should tell exactly what's wrong and write it to errorlog.txt.

The attached project is a standalone Lazarus project, with the nxPascal files needed to compile and run included. It's still buggy with light directions, and to help testing I'm using 3 objects in the scene, 1 of which is rotating. Also, I do not know why the ambient light isn't taken into account. The dark side of each object should not be completely black, but it seems to be.
Control keys are: W, S, A, D, Spacebar, C, and mouse dragging.

I also just narrowed the ambient color down to this in the fragment shader:

gl_FragColor = gl_FrontMaterial.ambient * texture2D(colorMap, gl_TexCoord[0].xy);
So gl_FrontMaterial.ambient returns black. I don't know why; it is set to a brighter color in the main program.