<h1 id="content-title">Normal Mapping</h1>
<h1 id="content-url" style='display:none;'>Advanced-Lighting/Normal-Mapping</h1>
<p>
  All of our scenes are filled with meshes, each consisting of hundreds or maybe thousands of triangles. We boosted the realism by wrapping 2D textures over these triangles, hiding the fact that the polygons are just tiny flat primitives. Textures help, but when you take a good close look at the meshes it is still quite easy to see the underlying flat surfaces. Most real-life surfaces aren't flat however, and exhibit a lot of (bumpy) detail.
</p>

<p>
  For instance, take a brick surface. A brick surface is quite rough and obviously not completely flat: it contains sunken cement stripes and a lot of detailed little holes and cracks. If we were to view such a brick surface in a lit scene, the immersion is easily broken. Below we can see a brick texture applied to a flat surface lit by a point light.
</p>

<img src="/img/advanced-lighting/normal_mapping_flat.png" class="clean" alt="Brick surface lit by a point light in OpenGL. It's not too realistic; its flat structure is now quite obvious"/>

<p>
  The lighting doesn't take any of the small cracks and holes into account and completely ignores the deep stripes between the bricks; the surface looks perfectly flat. We can partly fix the flat look by using a specular map to pretend some surfaces are less lit due to depth or other details, but that's more of a hack than a real solution. What we need is some way to inform the lighting system about all the little depth-like details of the surface.
</p>

<p>
  If we think about this from a light's perspective: how come the surface is lit as a completely flat surface? The answer is the surface's normal vector. From the lighting technique's point of view, the only way it determines the shape of an object is by its perpendicular normal vector. The brick surface only has a single normal vector, and as a result the surface is uniformly lit based on this normal vector's direction. What if we, instead of a per-surface normal that is the same for each fragment, use a per-fragment normal that is different for each fragment? This way we can slightly deviate the normal vector based on a surface's little details; this gives the illusion the surface is a lot more complex:
</p>

<img src="/img/advanced-lighting/normal_mapping_surfaces.png" class="clean" alt="Surfaces displaying per-surface normal and per-fragment normals for normal mapping in OpenGL"/>

<p>
  By using per-fragment normals we can trick the lighting into believing a surface consists of tiny little planes (perpendicular to the normal vectors), giving the surface an enormous boost in detail. This technique of using per-fragment normals instead of per-surface normals is called <def>normal mapping</def> or <def>bump mapping</def>. Applied to the brick plane it looks a bit like this:
</p>

<img src="/img/advanced-lighting/normal_mapping_compare.png" alt="Surface without and with normal mapping in OpenGL"/>

<p>
  As you can see, it gives an enormous boost in detail for a relatively low cost. Since we only change the normal vectors per fragment there is no need to change the lighting equation. We now pass a per-fragment normal, instead of an interpolated surface normal, to the lighting algorithm. The lighting then does the rest.
</p>

<h2>Normal mapping</h2>
<p>
  To get normal mapping to work we're going to need a per-fragment normal. Similar to what we did with diffuse and specular maps, we can use a 2D texture to store per-fragment normal data. This way we can sample a 2D texture to get a normal vector for that specific fragment.
</p>

<p>
  Because normal vectors are geometric entities and textures are generally only used for color information, storing normal vectors in a texture may not be immediately obvious. If you think about color vectors in a texture, they are represented as a 3D vector with an <code>r</code>, <code>g</code>, and <code>b</code> component. We can similarly store a normal vector's <code>x</code>, <code>y</code> and <code>z</code> component in the respective color components. Normal vectors range between <code>-1</code> and <code>1</code> so they're first mapped to [<code>0</code>,<code>1</code>]:
</p>

<pre><code>
vec3 rgb_normal = normal * 0.5 + 0.5; // transforms from [-1,1] to [0,1]
</code></pre>

<p>
  With normal vectors transformed to an RGB color component like this, we can store a per-fragment normal derived from the shape of a surface onto a 2D texture. An example <def>normal map</def> of the brick surface at the start of this chapter is shown below:
</p>

<img src="/img/advanced-lighting/normal_mapping_normal_map.png" alt="Image of a normal map in OpenGL normal mapping"/>

<p>
  This (and almost all normal maps you find online) will have a blue-ish tint. This is because the normals all closely point outwards towards the positive z-axis \((0, 0, 1)\): a blue-ish color. The deviations in color represent normal vectors that are slightly offset from the general positive z direction, giving a sense of depth to the texture. For example, you can see that at the top of each brick the color tends to be more greenish, which makes sense as the top side of a brick would have normals pointing more in the positive y direction \((0, 1, 0)\), which happens to be the color green!
</p>
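
<p>
  We can make the blue tint concrete with the color mapping from earlier. A normal pointing straight along the positive z-axis encodes to:
</p>

\[(0, 0, 1) \cdot 0.5 + (0.5, 0.5, 0.5) = (0.5, 0.5, 1.0)\]

<p>
  An RGB color of (<code>0.5</code>,<code>0.5</code>,<code>1.0</code>) is exactly the light blue that dominates most normal maps.
</p>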

<p>
  With a simple plane, looking at the positive z-axis, we can take <a href="/img/textures/brickwall.jpg" target="_blank">this</a> diffuse texture and <a href="/img/textures/brickwall_normal.jpg" target="_blank">this</a> normal map to render the image from the previous section. Note that the linked normal map is different from the one shown above. The reason for this is that OpenGL reads texture coordinates with the y (or v) coordinate reversed from how textures are generally created. The linked normal map thus has its y (or green) component inverted (you can see the green colors now point downwards); if you fail to take this into account, the lighting will be incorrect. Load both textures and bind them to the proper texture units.
</p>
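
<p>
  A minimal sketch of that setup, assuming the <fun>Shader</fun> class and the <fun>loadTexture</fun> helper from earlier chapters (the sampler and path names are just examples):
</p>

<pre><code>
// load the diffuse and normal map
unsigned int diffuseMap = loadTexture("textures/brickwall.jpg");
unsigned int normalMap  = loadTexture("textures/brickwall_normal.jpg");

// tell the shader which texture unit each sampler belongs to (done once)
shader.use();
shader.setInt("diffuseMap", 0);
shader.setInt("normalMap", 1);

// bind both textures before rendering the plane
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, diffuseMap);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, normalMap);
</code></pre>

<p>
  Then render the plane with the following changes in the lighting fragment shader:
</p>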

<pre><code>
uniform sampler2D normalMap;

void main()
{
    // obtain normal from normal map in range [0,1]
    vec3 normal = texture(normalMap, fs_in.TexCoords).rgb;
    // transform normal vector to range [-1,1]
    normal = normalize(normal * 2.0 - 1.0);

    [...]
    // proceed with lighting as normal
}
</code></pre>

<p>
  Here we reverse the process of mapping normals to RGB colors by remapping the sampled normal color from [<code>0</code>,<code>1</code>] back to [<code>-1</code>,<code>1</code>], and then use the sampled normal vectors for the upcoming lighting calculations. In this case we used a Blinn-Phong shader.
</p>

<p>
  By slowly moving the light source over time you really get a sense of depth using the normal map. Running this normal mapping example gives the exact same results as shown at the start of this chapter:
</p>

<img src="/img/advanced-lighting/normal_mapping_correct.png" class="clean" alt="Surface without and with normal mapping in OpenGL"/>

<p>
  There is one issue however that greatly limits this use of normal maps. The normal map we used had normal vectors that all pointed somewhat in the positive z direction. This worked because the plane's surface normal was also pointing in the positive z direction. However, what would happen if we used the same normal map on a plane laying on the ground with a surface normal vector pointing in the positive y direction?
</p>

<img src="/img/advanced-lighting/normal_mapping_ground.png" class="clean" alt="Image of plane with normal mapping without tangent space transformation, looks off in OpenGL"/>

<p>
  The lighting doesn't look right! This happens because the sampled normals of this plane still roughly point in the positive z direction even though they should mostly point in the positive y direction. As a result, the lighting thinks the surface's normals are the same as before, when the plane was pointing towards the positive z direction; the lighting is incorrect. The image below shows what the sampled normals approximately look like on this surface:
</p>

<img src="/img/advanced-lighting/normal_mapping_ground_normals.png" class="clean" alt="Image of plane with normal mapping without tangent space transformation with displayed normals, looks off in OpenGL"/>

<p>
  You can see that all the normals point somewhat in the positive z direction even though they should be pointing towards the positive y direction. One solution to this problem is to define a normal map for each possible direction of the surface; in the case of a cube we would need 6 normal maps. However, with more complex meshes that can have hundreds of possible surface directions, this quickly becomes an infeasible approach.
</p>

<p>
  A different solution exists that does all the lighting in a different coordinate space: a coordinate space where the normal map vectors always point towards the positive z direction; all other lighting vectors are then transformed relative to this positive z direction. This way we can always use the same normal map, regardless of orientation. This coordinate space is called <def>tangent space</def>.
</p>

<h2>Tangent space</h2>
<p>
  Normal vectors in a normal map are expressed in tangent space, where normals always point roughly in the positive z direction. Tangent space is a space that's local to the surface of a triangle: the normals are relative to the local reference frame of the individual triangles. Think of it as the local space of the normal map's vectors; they're all defined pointing in the positive z direction regardless of the final transformed direction. Using a specific matrix we can then transform normal vectors from this <em>local</em> tangent space to world or view coordinates, orienting them along the final mapped surface's direction.
</p>

<p>
  Let's say we have the incorrect normal mapped surface from the previous section looking in the positive y direction. The normal map is defined in tangent space, so one way to solve the problem is to calculate a matrix to transform normals from tangent space to a different space such that they're aligned with the surface's normal direction: the normal vectors are then all pointing roughly in the positive y direction. The great thing about tangent space is that we can calculate this matrix for any type of surface so that we can properly align the tangent space's z direction to the surface's normal direction.
</p>

<p>
  Such a matrix is called a <def>TBN</def> matrix where the letters depict a <def>Tangent</def>, <def>Bitangent</def> and <def>Normal</def> vector. These are the vectors we need to construct this matrix. To construct such a <em>change-of-basis</em> matrix, that transforms a tangent-space vector to a different coordinate space, we need three perpendicular vectors that are aligned along the surface of a normal map: an up, right, and forward vector; similar to what we did in the <a href="https://learnopengl.com/Getting-Started/Camera" target="_blank">camera</a> chapter.
</p>

<p>
  We already know the up vector, which is the surface's normal vector. The right and forward vector are the tangent and bitangent vector respectively. The following image shows all three vectors on a surface:
</p>

<img src="/img/advanced-lighting/normal_mapping_tbn_vectors.png" class="clean" alt="Normal mapping tangent, bitangent and normal vectors on a surface in OpenGL"/>

<p>
  Calculating the tangent and bitangent vectors is not as straightforward as calculating the normal vector. We can see from the image that the direction of the normal map's tangent and bitangent vector align with the direction in which we define a surface's texture coordinates. We'll use this fact to calculate tangent and bitangent vectors for each surface. Retrieving them does require a bit of math; take a look at the following image:
</p>

<img src="/img/advanced-lighting/normal_mapping_surface_edges.png" class="clean" alt="Edges of a surface in OpenGL required for calculating TBN matrix"/>

<p>
  From the image we can see that the texture coordinate differences of an edge \(E_2\) of a triangle (denoted as \(\Delta U_2\) and \(\Delta V_2\)) are expressed in the same direction as the tangent vector \(T\) and bitangent vector \(B\). Because of this, we can write both displayed edges \(E_1\) and \(E_2\) of the triangle as a linear combination of the tangent vector \(T\) and the bitangent vector \(B\):
</p>

\[E_1 = \Delta U_1T + \Delta V_1B\]
\[E_2 = \Delta U_2T + \Delta V_2B\]

<p>
  Which we can also write as:
</p>

\[(E_{1x}, E_{1y}, E_{1z}) = \Delta U_1(T_x, T_y, T_z) + \Delta V_1(B_x, B_y, B_z)\]
\[(E_{2x}, E_{2y}, E_{2z}) = \Delta U_2(T_x, T_y, T_z) + \Delta V_2(B_x, B_y, B_z)\]

<p>
  We can calculate \(E\) as the difference vector between two of the triangle's positions, and \(\Delta U\) and \(\Delta V\) as their texture coordinate differences. We're then left with two unknowns (tangent \(T\) and bitangent \(B\)) and two equations. You may remember from your algebra classes that this allows us to solve for \(T\) and \(B\).
</p>

<p>
  The last equation allows us to rewrite it in a different form: that of matrix multiplication:
</p>

\[\begin{bmatrix} E_{1x} & E_{1y} & E_{1z} \\ E_{2x} & E_{2y} & E_{2z} \end{bmatrix} = \begin{bmatrix} \Delta U_1 & \Delta V_1 \\ \Delta U_2 & \Delta V_2 \end{bmatrix} \begin{bmatrix} T_x & T_y & T_z \\ B_x & B_y & B_z \end{bmatrix} \]

<p>
  Try to visualize the matrix multiplications in your head and confirm that this is indeed the same equation. An advantage of rewriting the equations in matrix form is that solving for \(T\) and \(B\) is easier to understand. If we multiply both sides of the equation by the inverse of the \(\Delta U \Delta V\) matrix we get:
</p>

\[ \begin{bmatrix} \Delta U_1 & \Delta V_1 \\ \Delta U_2 & \Delta V_2 \end{bmatrix}^{-1} \begin{bmatrix} E_{1x} & E_{1y} & E_{1z} \\ E_{2x} & E_{2y} & E_{2z} \end{bmatrix} = \begin{bmatrix} T_x & T_y & T_z \\ B_x & B_y & B_z \end{bmatrix} \]

<p>
  This allows us to solve for \(T\) and \(B\). This does require us to calculate the inverse of the delta texture coordinate matrix. I won't go into the mathematical details of calculating a matrix's inverse, but it roughly translates to 1 over the determinant of the matrix, multiplied by its adjugate matrix:
</p>

\[ \begin{bmatrix} T_x & T_y & T_z \\ B_x & B_y & B_z \end{bmatrix} = \frac{1}{\Delta U_1 \Delta V_2 - \Delta U_2 \Delta V_1} \begin{bmatrix} \Delta V_2 & -\Delta V_1 \\ -\Delta U_2 & \Delta U_1 \end{bmatrix} \begin{bmatrix} E_{1x} & E_{1y} & E_{1z} \\ E_{2x} & E_{2y} & E_{2z} \end{bmatrix} \]

<p>
  This final equation gives us a formula for calculating the tangent vector \(T\) and bitangent vector \(B\) from a triangle's two edges and its texture coordinates.
</p>
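
<p>
  To see how this maps to code later on, it helps to write out a single component. Abbreviating the fraction in front as \(f = \frac{1}{\Delta U_1 \Delta V_2 - \Delta U_2 \Delta V_1}\), the first column of the result expands to:
</p>

\[T_x = f(\Delta V_2 E_{1x} - \Delta V_1 E_{2x})\]
\[B_x = f(-\Delta U_2 E_{1x} + \Delta U_1 E_{2x})\]

<p>
  The \(y\) and \(z\) components follow the exact same pattern with \(E_{1y}, E_{2y}\) and \(E_{1z}, E_{2z}\).
</p>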

<p>
  Don't worry if you do not fully understand the mathematics behind this. As long as you understand that we can calculate tangents and bitangents from a triangle's vertices and its texture coordinates (since texture coordinates are in the same space as tangent vectors) you're halfway there.
</p>

<h3>Manual calculation of tangents and bitangents</h3>
<p>
  In the previous demo we had a simple normal mapped plane facing the positive z direction. This time we want to implement normal mapping using tangent space so we can orient this plane however we want and normal mapping would still work. Using the previously discussed mathematics we're going to manually calculate this surface's tangent and bitangent vectors.
</p>

<p>
  Let's assume the plane is built up from the following vectors (with 1, 2, 3 and 1, 3, 4 as its two triangles):
</p>

<pre><code>
// positions
glm::vec3 pos1(-1.0,  1.0, 0.0);
glm::vec3 pos2(-1.0, -1.0, 0.0);
glm::vec3 pos3( 1.0, -1.0, 0.0);
glm::vec3 pos4( 1.0,  1.0, 0.0);
// texture coordinates
glm::vec2 uv1(0.0, 1.0);
glm::vec2 uv2(0.0, 0.0);
glm::vec2 uv3(1.0, 0.0);
glm::vec2 uv4(1.0, 1.0);
// normal vector
glm::vec3 nm(0.0, 0.0, 1.0);
</code></pre>

<p>
  We first calculate the first triangle's edges and delta UV coordinates:
</p>

<pre><code>
glm::vec3 edge1 = pos2 - pos1;
glm::vec3 edge2 = pos3 - pos1;
glm::vec2 deltaUV1 = uv2 - uv1;
glm::vec2 deltaUV2 = uv3 - uv1;
</code></pre>

<p>
  With the required data for calculating tangents and bitangents we can start following the equation from the previous section:
</p>

<pre><code>
glm::vec3 tangent1, bitangent1;

float f = 1.0f / (deltaUV1.x * deltaUV2.y - deltaUV2.x * deltaUV1.y);

tangent1.x = f * (deltaUV2.y * edge1.x - deltaUV1.y * edge2.x);
tangent1.y = f * (deltaUV2.y * edge1.y - deltaUV1.y * edge2.y);
tangent1.z = f * (deltaUV2.y * edge1.z - deltaUV1.y * edge2.z);

bitangent1.x = f * (-deltaUV2.x * edge1.x + deltaUV1.x * edge2.x);
bitangent1.y = f * (-deltaUV2.x * edge1.y + deltaUV1.x * edge2.y);
bitangent1.z = f * (-deltaUV2.x * edge1.z + deltaUV1.x * edge2.z);

[...] // similar procedure for calculating tangent/bitangent for plane's second triangle
</code></pre>

<p>
  Here we first pre-calculate the fraction \(\frac{1}{\Delta U_1 \Delta V_2 - \Delta U_2 \Delta V_1}\) of the equation as <var>f</var>, and then for each vector component we do the corresponding matrix multiplication, multiplied by <var>f</var>. If you compare this code with the final equation you can see it is a direct translation. Because a triangle is always a flat shape, we only need to calculate a single tangent/bitangent pair per triangle as they will be the same for each of the triangle's vertices.
</p>
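
<p>
  As a sanity check, let's work the first triangle out by hand. Its edges and UV deltas are \(E_1 = (0,-2,0)\), \(E_2 = (2,-2,0)\), \((\Delta U_1, \Delta V_1) = (0,-1)\) and \((\Delta U_2, \Delta V_2) = (1,-1)\), so:
</p>

\[f = \frac{1}{0 \cdot -1 - 1 \cdot -1} = 1\]
\[T = f(\Delta V_2 E_1 - \Delta V_1 E_2) = -(0,-2,0) + (2,-2,0) = (2,0,0)\]
\[B = f(-\Delta U_2 E_1 + \Delta U_1 E_2) = -(0,-2,0) + 0 \cdot (2,-2,0) = (0,2,0)\]

<p>
  After normalization these point exactly along the positive x and y axis.
</p>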

<p>
  The resulting tangent and bitangent vector should have a value of (<code>1</code>,<code>0</code>,<code>0</code>) and (<code>0</code>,<code>1</code>,<code>0</code>) respectively that together with the normal (<code>0</code>,<code>0</code>,<code>1</code>) forms an orthogonal TBN matrix. Visualized on the plane, the TBN vectors would look like this:
</p>

<img src="/img/advanced-lighting/normal_mapping_tbn_shown.png" class="clean" alt="Image of TBN vectors visualized on a plane in OpenGL"/>

<p>
  With tangent and bitangent vectors defined per vertex we can start implementing <em>proper</em> normal mapping.
</p>

<h3>Tangent space normal mapping</h3>
<p>
  To get normal mapping working, we first have to create a TBN matrix in the shaders. To do that, we pass the earlier calculated tangent and bitangent vectors to the vertex shader as vertex attributes:
</p>

<pre><code>
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
layout (location = 2) in vec2 aTexCoords;
layout (location = 3) in vec3 aTangent;
layout (location = 4) in vec3 aBitangent;
</code></pre>
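
<p>
  On the application side this means the vertex buffer needs two extra attributes. A minimal sketch of the attribute setup, assuming an interleaved layout of 14 floats per vertex (position, normal, texture coordinates, tangent, bitangent):
</p>

<pre><code>
GLsizei stride = 14 * sizeof(float);
// position
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);
// normal
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(float)));
// texture coordinates
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, stride, (void*)(6 * sizeof(float)));
// tangent
glEnableVertexAttribArray(3);
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, stride, (void*)(8 * sizeof(float)));
// bitangent
glEnableVertexAttribArray(4);
glVertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, stride, (void*)(11 * sizeof(float)));
</code></pre>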

<p>
  Then within the vertex shader's <fun>main</fun> function we create the TBN matrix:
</p>

<pre><code>
void main()
{
   [...]
   vec3 T = normalize(vec3(model * vec4(aTangent,   0.0)));
   vec3 B = normalize(vec3(model * vec4(aBitangent, 0.0)));
   vec3 N = normalize(vec3(model * vec4(aNormal,    0.0)));
   mat3 TBN = mat3(T, B, N);
}
</code></pre>

<p>
  Here we first transform all the TBN vectors to the coordinate system we'd like to work in, which in this case is world space as we multiply them with the <var>model</var> matrix. Then we create the actual TBN matrix by directly supplying <fun>mat3</fun>'s constructor with the relevant column vectors. Note that if we want to be really precise, we would multiply the TBN vectors with the normal matrix, as we only care about the orientation of the vectors.
</p>
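
<p>
  Computing that normal matrix is something we can do once on the CPU and upload as a uniform; a small sketch with GLM (the uniform name <var>normalMatrix</var> is just an example):
</p>

<pre><code>
// the normal matrix: transpose of the inverse of the upper-left 3x3 of the model matrix
glm::mat3 normalMatrix = glm::transpose(glm::inverse(glm::mat3(model)));
shader.setMat3("normalMatrix", normalMatrix);
// in the vertex shader: vec3 N = normalize(normalMatrix * aNormal); (same for T and B)
</code></pre>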

<note>
  Technically there is no need for the <var>bitangent</var> variable in the vertex shader. All three TBN vectors are perpendicular to each other so we can calculate the <var>bitangent</var> ourselves in the vertex shader by taking the cross product of the <var>T</var> and <var>N</var> vector: <code>vec3 B = cross(N, T);</code>
</note>

<p>
  So now that we have a TBN matrix, how are we going to use it? There are two ways we can use a TBN matrix for normal mapping, and we'll demonstrate both of them:
</p>

<ol>
  <li>We take the TBN matrix that transforms any vector from tangent to world space, give it to the fragment shader, and transform the sampled normal from tangent space to world space using the TBN matrix; the normal is then in the same space as the other lighting variables.</li>
  <li>We take the inverse of the TBN matrix that transforms any vector from world space to tangent space, and use this matrix to transform not the normal, but the other relevant lighting variables to tangent space; the normal is then again in the same space as the other lighting variables.</li>
</ol>

<p>
  Let's review the first case. The normal vector we sample from the normal map is expressed in tangent space, whereas the other lighting vectors (light and view direction) are expressed in world space. By passing the TBN matrix to the fragment shader we can multiply the sampled tangent space normal with this TBN matrix to transform the normal vector to the same reference space as the other lighting vectors. This way, all the lighting calculations (specifically the dot product) make sense.
</p>

<p>
  Sending the TBN matrix to the fragment shader is easy:
</p>

<pre><code>
out VS_OUT {
    vec3 FragPos;
    vec2 TexCoords;
    mat3 TBN;
} vs_out;

void main()
{
    [...]
    vs_out.TBN = mat3(T, B, N);
}
</code></pre>

<p>
  In the fragment shader we similarly take a <code>mat3</code> as an input variable:
</p>

<pre><code>
in VS_OUT {
    vec3 FragPos;
    vec2 TexCoords;
    mat3 TBN;
} fs_in;
</code></pre>

<p>
  With this TBN matrix we can now update the normal mapping code to include the tangent-to-world space transformation:
</p>

<pre class="cpp"><code>
normal = texture(normalMap, fs_in.TexCoords).rgb;
normal = normal * 2.0 - 1.0;
normal = normalize(fs_in.TBN * normal);
</code></pre>

<p>
  Because the resulting <var>normal</var> is now in world space, there is no need to change any of the other fragment shader code, as the lighting code assumes the normal vector to be in world space.
</p>

<p>
  Let's also review the second case, where we take the inverse of the TBN matrix to transform all relevant world-space vectors to the space the sampled normal vectors are in: tangent space. The construction of the TBN matrix remains the same, but we first invert the matrix before sending it to the fragment shader:
</p>

<pre><code>
vs_out.TBN = transpose(mat3(T, B, N));
</code></pre>

<p>
  Note that we use the <fun>transpose</fun> function instead of the <fun>inverse</fun> function here. A great property of orthogonal matrices (where each axis is a perpendicular unit vector) is that the transpose of an orthogonal matrix equals its inverse. This is useful, as computing an <fun>inverse</fun> is expensive while a transpose isn't.
</p>
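
<p>
  We can convince ourselves of this property with a quick CPU-side check using GLM; a small sketch, assuming orthonormal <var>T</var>, <var>B</var> and <var>N</var> vectors like the ones we calculated earlier:
</p>

<pre><code>
glm::vec3 T(1.0f, 0.0f, 0.0f);
glm::vec3 B(0.0f, 1.0f, 0.0f);
glm::vec3 N(0.0f, 0.0f, 1.0f);
glm::mat3 TBN(T, B, N); // column vectors

// for an orthogonal matrix the transpose equals the inverse,
// so this product should be (close to) the identity matrix
glm::mat3 shouldBeIdentity = glm::transpose(TBN) * TBN;
</code></pre>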

<p>
  Within the fragment shader we do not transform the normal vector, but we transform the other relevant vectors to tangent space, namely the <var>lightDir</var> and <var>viewDir</var> vectors. That way, each vector is in the same coordinate space: tangent space.
</p>

<pre><code>
void main()
{
    vec3 normal = texture(normalMap, fs_in.TexCoords).rgb;
    normal = normalize(normal * 2.0 - 1.0);

    vec3 lightDir = fs_in.TBN * normalize(lightPos - fs_in.FragPos);
    vec3 viewDir  = fs_in.TBN * normalize(viewPos - fs_in.FragPos);
    [...]
}
</code></pre>

<p>
  The second approach looks like more work and also requires matrix multiplications in the fragment shader, so why would we bother with the second approach?
</p>

<p>
  Well, transforming vectors from world to tangent space has an added advantage in that we can transform all the relevant lighting vectors to tangent space in the vertex shader instead of in the fragment shader. This works, because <var>lightPos</var> and <var>viewPos</var> don't change per fragment, and for <var>fs_in.FragPos</var> we can calculate its tangent-space position in the vertex shader and let fragment interpolation do its work. There is effectively no need to transform any vector to tangent space in the fragment shader, while it is necessary with the first approach as sampled normal vectors are specific to each fragment shader run.
</p>

<p>
  So instead of sending the inverse of the TBN matrix to the fragment shader, we send a tangent-space light position, view position, and vertex position to the fragment shader. This saves us from having to do matrix multiplications in the fragment shader. This is a nice optimization as the vertex shader runs considerably less often than the fragment shader. This is also why this approach is often preferred.
</p>

<pre><code>
out VS_OUT {
    vec3 FragPos;
    vec2 TexCoords;
    vec3 TangentLightPos;
    vec3 TangentViewPos;
    vec3 TangentFragPos;
} vs_out;

uniform vec3 lightPos;
uniform vec3 viewPos;

[...]

void main()
{
    [...]
    mat3 TBN = transpose(mat3(T, B, N));
    vs_out.TangentLightPos = TBN * lightPos;
    vs_out.TangentViewPos  = TBN * viewPos;
    vs_out.TangentFragPos  = TBN * vec3(model * vec4(aPos, 1.0));
}
</code></pre>

<p>
  In the fragment shader we then use these new input variables to calculate lighting in tangent space. As the normal vector is already in tangent space, the lighting makes sense.
</p>
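
<p>
  The lighting math itself is unchanged; only the inputs are now tangent-space positions. To make that concrete, here is the same Blinn-Phong direction setup written out with GLM on the CPU; a sketch of the math only, not actual shader code, with example input values:
</p>

<pre><code>
// tangent-space inputs, mirroring the shader's TangentLightPos,
// TangentViewPos, TangentFragPos and the sampled normal
glm::vec3 tangentLightPos(0.5f, 1.0f, 0.3f);
glm::vec3 tangentViewPos (0.0f, 0.0f, 3.0f);
glm::vec3 tangentFragPos (0.0f, 0.0f, 0.0f);
glm::vec3 normal(0.0f, 0.0f, 1.0f); // tangent-space normals point roughly along +z

glm::vec3 lightDir = glm::normalize(tangentLightPos - tangentFragPos);
glm::vec3 viewDir  = glm::normalize(tangentViewPos  - tangentFragPos);
glm::vec3 halfway  = glm::normalize(lightDir + viewDir);

// the usual Blinn-Phong diffuse and specular terms
float diff = std::max(glm::dot(normal, lightDir), 0.0f);
float spec = std::pow(std::max(glm::dot(normal, halfway), 0.0f), 32.0f);
</code></pre>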

<p>
  With normal mapping applied in tangent space, we should get similar results to what we had at the start of this chapter. This time however, we can orient our plane in any way we'd like and the lighting would still be correct:
</p>

<pre><code>
glm::mat4 model = glm::mat4(1.0f);
model = <function id='57'>glm::rotate</function>(model, glm::radians((float)<function id='47'>glfwGetTime</function>() * -10.0f), glm::normalize(glm::vec3(1.0, 0.0, 1.0)));
shader.setMat4("model", model);
RenderQuad();
</code></pre>

<p>
  Which indeed looks like proper normal mapping:
</p>

<img src="/img/advanced-lighting/normal_mapping_correct_tangent.png" class="clean" alt="Correct normal mapping with tangent space transformations in OpenGL"/>

<p>
  You can find the source code <a href="/code_viewer_gh.php?code=src/5.advanced_lighting/4.normal_mapping/normal_mapping.cpp" target="_blank">here</a>.
</p>

<h2>Complex objects</h2>
<p>
  We've demonstrated how we can use normal mapping, together with tangent space transformations, by manually calculating the tangent and bitangent vectors. Luckily for us, having to manually calculate these tangent and bitangent vectors is not something we do too often. Most of the time you implement it once in a custom model loader, or in our case use a <a href="https://learnopengl.com/Model-Loading/Assimp" target="_blank">model loader</a> using Assimp.
</p>

<p>
  Assimp has a very useful configuration bit we can set when loading a model called <var>aiProcess_CalcTangentSpace</var>. When the <var>aiProcess_CalcTangentSpace</var> bit is supplied to Assimp's <fun>ReadFile</fun> function, Assimp calculates smooth tangent and bitangent vectors for each of the loaded vertices, similarly to how we did it in this chapter.
</p>

<pre><code>
const aiScene *scene = importer.ReadFile(
    path, aiProcess_Triangulate | aiProcess_FlipUVs | aiProcess_CalcTangentSpace
);
</code></pre>

<p>
  Within Assimp we can then retrieve the calculated tangents via:
</p>

<pre><code>
vector.x = mesh->mTangents[i].x;
vector.y = mesh->mTangents[i].y;
vector.z = mesh->mTangents[i].z;
vertex.Tangent = vector;
</code></pre>
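
<p>
  Assimp stores the matching bitangents in <var>mBitangents</var>, which we can retrieve in exactly the same way (the <var>vertex.Bitangent</var> member is an assumed extension of our <fun>Vertex</fun> struct):
</p>

<pre><code>
vector.x = mesh->mBitangents[i].x;
vector.y = mesh->mBitangents[i].y;
vector.z = mesh->mBitangents[i].z;
vertex.Bitangent = vector;
</code></pre>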

<p>
  Then you'll have to update the model loader to also load normal maps from a textured model. The wavefront object format (.obj) exports normal maps slightly differently from Assimp's conventions, as <var>aiTextureType_NORMAL</var> doesn't load its normal maps, while <var>aiTextureType_HEIGHT</var> does:
</p>

<pre><code>
vector&lt;Texture&gt; normalMaps = loadMaterialTextures(material, aiTextureType_HEIGHT, "texture_normal");
</code></pre>

<p>
  Of course, this is different for each type of loaded model and file format.
</p>
<p>
  Running the application on a model with specular and normal maps, using an updated model loader, gives the following result:
</p>

<img src="/img/advanced-lighting/normal_mapping_complex_compare.png" alt="Normal mapping in OpenGL on a complex object loaded with Assimp"/>

<p>
  As you can see, normal mapping boosts the detail of an object by an incredible amount without too much extra cost.
</p>

<p>
  Using normal maps is also a great way to boost performance. Before normal mapping, you had to use a large number of vertices to get a high level of detail on a mesh. With normal mapping, we can get the same level of detail on a mesh using a lot fewer vertices. The image below from Paolo Cignoni shows a nice comparison of both methods:
</p>

<img src="/img/advanced-lighting/normal_mapping_comparison.png" alt="Comparison of visualizing details on a mesh with and without normal mapping"/>

<p>
  The details on both the high-vertex mesh and the low-vertex mesh with normal mapping are almost indistinguishable. So normal mapping doesn't only look nice, it's also a great tool to replace high-vertex meshes with low-vertex meshes without losing (too much) detail.
</p>

<h2>One last thing</h2>
<p>
  There is one last trick left to discuss that slightly improves quality without too much extra cost.
</p>

<p>
  When tangent vectors are calculated on larger meshes that share a considerable amount of vertices, the tangent vectors are generally averaged to give nice and smooth results. A problem with this approach is that the three TBN vectors could end up non-perpendicular, which means the resulting TBN matrix would no longer be orthogonal. Normal mapping would only be slightly off with a non-orthogonal TBN matrix, but it's still something we can improve.
</p>

<p>
  Using a mathematical trick called the <def>Gram-Schmidt process</def>, we can <def>re-orthogonalize</def> the TBN vectors such that each vector is again perpendicular to the other vectors. Within the vertex shader we would do it like this:
</p>

<pre><code>
vec3 T = normalize(vec3(model * vec4(aTangent, 0.0)));
vec3 N = normalize(vec3(model * vec4(aNormal, 0.0)));
// re-orthogonalize T with respect to N
T = normalize(T - dot(T, N) * N);
// then retrieve perpendicular vector B with the cross product of T and N
vec3 B = cross(N, T);

mat3 TBN = mat3(T, B, N);
</code></pre>

<p>
  This generally improves the normal mapping results, albeit only by a little, at a little extra cost. Take a look at the end of the <em>Normal Mapping Mathematics</em> video in the additional resources for a great explanation of how this process actually works.
</p>

<h2>Additional resources</h2>
<ul>
  <li><a href="http://ogldev.atspace.co.uk/www/tutorial26/tutorial26.html" target="_blank">Tutorial 26: Normal Mapping</a>: normal mapping tutorial by ogldev.</li>
  <li><a href="https://www.youtube.com/watch?v=LIOPYmknj5Q" target="_blank">How Normal Mapping Works</a>: a nice video tutorial of how normal mapping works by TheBennyBox.</li>
  <li><a href="https://www.youtube.com/watch?v=4FaWLgsctqY" target="_blank">Normal Mapping Mathematics</a>: a similar video by TheBennyBox about the mathematics behind normal mapping.</li>
  <li><a href="http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-13-normal-mapping/" target="_blank">Tutorial 13: Normal Mapping</a>: normal mapping tutorial by opengl-tutorial.org.</li>
</ul>