I tried to implement normal mapping in my OpenGL application but I can't get it to work.

This is the diffuse map (which I add a brown color to) and this is the normal map.

In order to get the tangent and bitangent (in other places called binormals?) vectors, I run this function for every triangle in my mesh:

void getTangent(const glm::vec3 &v0, const glm::vec3 &v1, const glm::vec3 &v2,
                const glm::vec2 &uv0, const glm::vec2 &uv1, const glm::vec2 &uv2,
                std::vector<glm::vec3> &vTangents, std::vector<glm::vec3> &vBitangents)
{
    // Edges of the triangle : position deltas
    glm::vec3 deltaPos1 = v1 - v0;
    glm::vec3 deltaPos2 = v2 - v0;

    // UV deltas
    glm::vec2 deltaUV1 = uv1 - uv0;
    glm::vec2 deltaUV2 = uv2 - uv0;

    float r = 1.0f / (deltaUV1.x * deltaUV2.y - deltaUV1.y * deltaUV2.x);
    glm::vec3 tangent   = (deltaPos1 * deltaUV2.y - deltaPos2 * deltaUV1.y) * r;
    glm::vec3 bitangent = (deltaPos2 * deltaUV1.x - deltaPos1 * deltaUV2.x) * r;

    // Same tangent/bitangent for all three vertices of the triangle
    for(int i = 0; i < 3; i++) {
        vTangents.push_back(tangent);
        vBitangents.push_back(bitangent);
    }
}

After that, I call glBufferData to upload the vertices, normals, uvs, tangents and bitangents to the GPU. The vertex shader:

#version 430

uniform mat4 ProjectionMatrix;
uniform mat4 CameraMatrix;
uniform mat4 ModelMatrix;

in vec3 vertex;
in vec3 normal;
in vec2 uv;
in vec3 tangent;
in vec3 bitangent;

out vec2 fsCoords;
out vec3 fsVertex;
out mat3 TBNMatrix;

void main()
{
    gl_Position = ProjectionMatrix * CameraMatrix * ModelMatrix * vec4(vertex, 1.0);

    fsCoords = uv;
    fsVertex = vertex;

    TBNMatrix = mat3(tangent, bitangent, normal);
}

Fragment shader:

#version 430

uniform sampler2D diffuseMap;
uniform sampler2D normalMap;
uniform mat4 ModelMatrix;
uniform vec3 CameraPosition;

uniform struct Light {
    float ambient;
    vec3 position;
} light;
uniform float shininess;

in vec2 fsCoords;
in vec3 fsVertex;
in mat3 TBNMatrix;

out vec4 color;

void main()
{
    //base color
    const vec3 brownColor = vec3(153.0 / 255.0, 102.0 / 255.0, 51.0 / 255.0);
    color = vec4(brownColor * (texture(diffuseMap, fsCoords).rgb + 0.25), 1.0); //add a fixed base color (0.25), because it's dark as hell

    //general vars
    vec3 normal = texture(normalMap, fsCoords).rgb * 2.0 - 1.0;
    vec3 surfacePos = vec3(ModelMatrix * vec4(fsVertex, 1.0));
    vec3 surfaceToLight = normalize(TBNMatrix * (light.position - surfacePos)); //unit vector
    vec3 eyePos = TBNMatrix * CameraPosition;

    float diffuse = max(0.0, dot(normal, surfaceToLight));

    float specular = 0.0;
    vec3 incidentVector = -surfaceToLight; //unit vector
    vec3 reflectionVector = reflect(incidentVector, normal); //unit vector
    vec3 surfaceToCamera = normalize(eyePos - surfacePos); //unit vector
    float cosAngle = max(0.0, dot(surfaceToCamera, reflectionVector));
    if(diffuse > 0.0)
        specular = pow(cosAngle, shininess);

    //add lighting to the fragment color (no attenuation for now)
    color.rgb *= light.ambient;
    color.rgb += diffuse + specular;
}

The image I get is completely incorrect (light positioned at the camera).

What am I doing wrong here?

My bet is on the color setting/mixing in the fragment shader... 1. you are setting the output color more than once (if I remember correctly, on some gfx drivers that causes big problems) 2. you are adding color and intensities instead of color*intensity, but I could have overlooked something. 3. try just normal/bump shading at first (ignore ambient, reflect, specular...) and then, if it works, add the rest one by one... always check the shader compilation logs – Spektre
Regarding 3, do you mean I should use only diffuse lighting? Also, how do I check shader compilation logs? My compiler isn't familiar with GLSL, only with C++. – Pilpel
Yes, you should set the color variable only once, and for starters with something like this: color.rgb = brownColor.rgb * abs(dot(surface_normal, light_direction)); I have also added an answer with my shaders that do something similar to what you want to achieve. If you do not check the logs, you can easily miss things like optimized-out I/O variables, syntax errors, etc... – Spektre

2 Answers

My bet is on the color setting/mixing in the fragment shader...

  1. you are setting the output color more than once

    If I remember correctly, on some gfx drivers that causes big problems; for example, everything after the line

    color = vec4(brownColor * (texture(diffuseMap, fsCoords).rgb + 0.25), 1.0);//add a fixed base color (0.25), because its dark as hell

    could be deleted by the driver ...

  2. you are adding color and intensities instead of color*intensity

    but I could have overlooked something.

  3. try just normal/bump shading at first

    Ignore ambient, reflect, specular... and then, if it works, add the rest one by one. Always check the shader compilation logs.

Too lazy to further analyze your code, so here is how I do it:

bump mapping example

The left side is a space ship object (similar to ZXS Elite's Viper) rendered with the fixed-function pipeline. The right side is the same object (with a slightly different rotation) rendered with the GLSL shaders in place and this normal/bump map:



#version 420 core
// texture units:
// 0 - texture0 map 2D rgba
// 1 - texture1 map 2D rgba
// 2 - normal map 2D xyz
// 3 - specular map 2D i
// 4 - light map 2D rgb rgb
// 5 - environment/skybox cube map 3D rgb

uniform mat4x4 tm_l2g;
uniform mat4x4 tm_l2g_dir;
uniform mat4x4 tm_g2s;

uniform mat4x4 tm_l2s_per;
uniform mat4x4 tm_per;

layout(location=0) in vec3 pos;
layout(location=1) in vec4 col;
layout(location=2) in vec2 txr;
layout(location=3) in vec3 tan;
layout(location=4) in vec3 bin;
layout(location=5) in vec3 nor;

out smooth vec3 pixel_pos;
out smooth vec4 pixel_col;
out smooth vec2 pixel_txr;
//out flat   mat3 pixel_TBN;
out smooth mat3 pixel_TBN;
void main(void)
    {
    vec4 p;
    p.xyz=tan.xyz; p.w=1.0; pixel_TBN[0]=normalize((tm_l2g_dir*p).xyz);
    p.xyz=bin.xyz; p.w=1.0; pixel_TBN[1]=normalize((tm_l2g_dir*p).xyz);
    p.xyz=nor.xyz; p.w=1.0; pixel_TBN[2]=normalize((tm_l2g_dir*p).xyz);
    // outputs and projected position (reconstructed from the declared i/o)
    p.xyz=pos.xyz; p.w=1.0;
    pixel_pos=pos.xyz;
    pixel_col=col;
    pixel_txr=txr;
    gl_Position=tm_l2s_per*p;
    }


#version 420 core
in smooth vec3 pixel_pos;
in smooth vec4 pixel_col;
in smooth vec2 pixel_txr;
//in flat   mat3 pixel_TBN;
in smooth mat3 pixel_TBN;

uniform sampler2D   txr_texture0;
uniform sampler2D   txr_texture1;
uniform sampler2D   txr_normal;
uniform sampler2D   txr_specular;
uniform sampler2D   txr_light;
uniform samplerCube txr_skybox;

const int _lights=3;
uniform vec3 light_col0=vec3(0.1,0.1,0.1);
uniform vec3 light_dir[_lights];         // direction to local star in ellipsoid space (initializers omitted)
uniform vec3 light_col[_lights];         // local star color * visual intensity (initializers omitted)

out layout(location=0) vec4 frag_col;

const vec4 v05=vec4(0.5,0.5,0.5,0.5);
const bool _blend=false;
const bool _reflect=true;
void main(void)
    {
    float a=0.0,b,li;
    vec4 col,blend0,blend1,specul,skybox;
    vec3 normal;
    col=(texture2D(txr_normal,pixel_txr.st)-v05)*2.0;       // normal/bump mapping
    normal=pixel_TBN*col.xyz;

    if (_blend)
        {
        // (texture0/texture1 blending omitted)
        }

    col.xyz=light_col0; col.a=0.0; li=0.0;                  // normal shading (also with bump mapping)
    for (int i=0;i<_lights;i++)
        {
        b=dot(normal,light_dir[i]);                         // per-light diffuse term
        if (b<0.0) b=0.0;
//      b*=specul.r;
        li+=b; col.xyz+=light_col[i]*b;
        }
    // (specular/skybox sampling and low-light handling omitted)
    if (li<=0.1) col.xyz=light_col0;
    if (_reflect) col+=skybox*specul.r;
    if (col.r<0.0) col.r=0.0;
    if (col.g<0.0) col.g=0.0;
    if (col.b<0.0) col.b=0.0;
    if (a<col.r) a=col.r;
    if (a<col.g) a=col.g;
    if (a<col.b) a=col.b;
    if (a>1.0) col.rgb/=a;                                  // keep brightest channel at 1.0
    frag_col=col;
    }

These source codes are a bit old and a mix of different things for a specific application.

So extract only what you need from it. If you are confused by the variable names, leave me a comment...

  • tm_ stands for transform matrix
  • l2g stands for local coordinate system to global coordinate system transform
  • dir means that transformation changes just direction (offset is 0,0,0)
  • g2s stands for global to screen ...
  • per is perspective transform ...

The GLSL compilation log

You have to obtain its content programmatically after compiling your shaders at runtime (your C++ compiler never sees GLSL). I do it by calling glGetShaderInfoLog for every shader and glGetProgramInfoLog for every program I use ...


Some drivers optimize away "unused" variables. For example, txr_texture1 can be reported as not found even though the fragment shader declares it: blending is not used in this app, so the driver removed it on its own...

Shader logs can tell you a lot (syntax errors, warnings...).
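For completeness, here is a minimal sketch of fetching both logs (my own helper names; it assumes a current GL context and a GLEW-style loader exposing glGetShaderiv/glGetShaderInfoLog and their program counterparts):

```cpp
#include <GL/glew.h>
#include <cstdio>
#include <vector>

// Hypothetical helper: print a shader's compile log if it is non-empty.
void printShaderLog(GLuint shader)
{
    GLint len = 0;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &len);
    if (len > 1) {
        std::vector<char> log(len);
        glGetShaderInfoLog(shader, len, nullptr, log.data());
        std::fprintf(stderr, "shader log:\n%s\n", log.data());
    }
}

// Hypothetical helper: print a program's link log if it is non-empty.
void printProgramLog(GLuint program)
{
    GLint len = 0;
    glGetProgramiv(program, GL_INFO_LOG_LENGTH, &len);
    if (len > 1) {
        std::vector<char> log(len);
        glGetProgramInfoLog(program, len, nullptr, log.data());
        std::fprintf(stderr, "program log:\n%s\n", log.data());
    }
}
```

Call the first after glCompileShader and the second after glLinkProgram.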

There are a few GLSL IDEs that make writing shaders easier, but I prefer my own because I can use the target app's code in it directly. Mine looks like this:


Each text window is a shader source (vertex, fragment, ...); the bottom right is the clipboard, the top left is the shader log after the last compilation, and the bottom left is the preview. I coded it in the style of a Borland IDE (with matching keys and syntax highlighting); the other IDEs I have seen look similar (different colors, of course :)). Anyway, if you want to play with shaders, download such an app or write one yourself; it will help a lot...

There could also be a problem with TBN creation

You should visually check whether the TBN vectors (tangent, binormal, normal) correspond to the object's surface by drawing colored lines at each vertex position, just to be sure... something like this:


@Pilpel I finally finished editing; check the answer and comment if you need further help... – Spektre
Thanks for your reply, I didn't have much time to analyze your code (it isn't very readable either), but I will as soon as I can. Just to let you know... – Pilpel

I will try to make your code work. Have you tried it with a moving camera?

I cannot see anywhere that you have transformed the TBNMatrix with the projection, view and model matrices. Did you try using the original normals, i.e. vec3 normal = TBNMatrix[2];, in the fragment shader?

The following might help. In the Vertex shader you have:

uniform mat4 ProjectionMatrix;
uniform mat4 CameraMatrix;
uniform mat4 ModelMatrix;

However, only these three matrices should be needed here:

uniform mat4 PCM;
uniform mat4 MIT;         //could be mat3
uniform mat4 ModelMatrix; //could be mat3

It is more efficient to calculate the product of those matrices on the CPU (and it yields the same result, because matrix multiplication is associative). Then this product, the PCM, can be used to calculate the new position with one multiplication per vertex:

gl_Position = PCM * vec4(vertex, 1.0);

The MIT is the inverse transpose of the ModelMatrix; you have to calculate it on the CPU. It can be used to transform the normals:

vec4 tang = ModelMatrix*vec4(tangent,0);
vec4 bita = ModelMatrix*vec4(bitangent,0);
vec4 norm = MIT*vec4(normal,0);
TBNMatrix = mat3(normalize(tang.xyz), normalize(bita.xyz), normalize(norm.xyz));

I am not sure what happens to the tangent and bitangent, but this way the normal will stay perpendicular to them. It is easy to prove. Here I use a ° b as the scalar product of vectors a and b. Let n be some normal, a some vector on the surface (e.g. a {bi}tangent, or an edge of a triangle), and let A be any invertible transformation. Then:

0 = a ° n = (A^(-1) A a) ° n = (A a) ° (A^(-T) n)

Here I used the identity (A x) ° y = x ° (A^T y). Therefore, if a is perpendicular to n, then A a is perpendicular to A^(-T) n, so we have to transform the normal with the matrix's inverse transpose. However, the normal should have a length of 1, so it should be normalized after the transformation.

You can also get a perpendicular normal by doing this:

vec3 normal = normalize(cross(tangent, bitangent));

Where cross(a,b) is the function that calculates the cross product of a and b, which is always perpendicular to both a and b.

Sorry for my English :)
