LearnOpenGL

Translation in progress of learnopengl.com.
git clone https://git.mtkn.jp/LearnOpenGL

Parallax-Mapping.html (28726B)


      1     <h1 id="content-title">Parallax Mapping</h1>
      2 <h1 id="content-url" style='display:none;'>Advanced-Lighting/Parallax-Mapping</h1>
      3 <p>
      4   Parallax mapping is a technique similar to normal mapping, but based on different principles. Just like normal mapping it is a technique that significantly boosts a textured surface's detail and gives it a sense of depth. While also an illusion, parallax mapping is a lot better at conveying a sense of depth and, together with normal mapping, gives incredibly realistic results. While parallax mapping isn't necessarily a technique directly related to (advanced) lighting, I'll still discuss it here as the technique is a logical follow-up of normal mapping. Note that getting an understanding of normal mapping, specifically tangent space, is strongly advised before learning parallax mapping.
      5 </p>
      6 
      7 <p>
      8   Parallax mapping is closely related to the family of <def>displacement mapping</def> techniques that <em>displace</em> or <em>offset</em> vertices based on geometrical information stored inside a texture. One way to do this is to take a plane with roughly 1000 vertices and displace each of these vertices based on a value in a texture that tells us the height of the plane at that specific area. Such a texture that contains height values per texel is called a <def>height map</def>. An example height map derived from the geometric properties of a simple brick surface looks a bit like this:
      9 </p>
     10 
     11 <img src="/img/advanced-lighting/parallax_mapping_height_map.png" alt="Height map used in OpenGL for parallax mapping"/>
     12   
     13 <p>
     14  When spanned over a plane, each vertex is displaced based on the sampled height value in the height map, transforming a flat plane to a rough bumpy surface based on a material's geometric properties. For instance, taking a flat plane displaced with the above heightmap results in the following image:
     15 </p>
     16   
     17   <img src="/img/advanced-lighting/parallax_mapping_plane_heightmap.png" class="clean" alt="Height map applied to simple plane"/>
     18     
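<p>
  To get a feel for what this per-vertex displacement would look like in practice, here is a minimal vertex-shader sketch. It only illustrates the approach described above and is not code used in this chapter; the <var>heightMap</var> sampler and the scale uniform are placeholder names for the example:
</p>

<pre><code>
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
layout (location = 2) in vec2 aTexCoords;

uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
uniform sampler2D heightMap; // placeholder: the height map sampled per vertex
uniform float height_scale;  // placeholder: controls how strong the displacement is

void main()
{
    // displace the vertex along its normal by the sampled height value
    float height = texture(heightMap, aTexCoords).r;
    vec3 displaced = aPos + aNormal * height * height_scale;
    gl_Position = projection * view * model * vec4(displaced, 1.0);
}
</code></pre>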
     19 <p>
     20   A problem with displacing vertices this way is that a plane needs to contain a huge amount of triangles to get a realistic displacement, otherwise the displacement looks too blocky. As each flat surface may then require over 10000 vertices this quickly becomes computationally infeasible. What if we could somehow achieve similar realism without the need for extra vertices? In fact, what if I were to tell you that the previously shown displaced surface is actually rendered with only 2 triangles? The brick surface shown here is rendered with <def>parallax mapping</def>, a displacement mapping technique that doesn't require extra vertex data to convey depth, but (similar to normal mapping) uses a clever technique to trick the user.
     21 </p>
     22     
     23 <p>
     24   The idea behind parallax mapping is to alter the texture coordinates in such a way that it looks like a fragment's surface is higher or lower than it actually is, all based on the view direction and a heightmap. To understand how it works, take a look at the following image of our brick surface:
     25  </p>
     26     
     27     <img src="/img/advanced-lighting/parallax_mapping_plane_height.png" class="clean" alt="Diagram of how parallax mapping works in OpenGL"/>
     28       
     29 <p>
     30   Here the rough red line represents the values in the heightmap as the geometric surface representation of the brick surface and the vector \(\color{orange}{\bar{V}}\) represents the surface to view direction (<var>viewDir</var>). If the plane had actual displacement, the viewer would see the surface at point \(\color{blue}B\). However, as our plane has no actual displacement the view direction hits the flat surface at point \(\color{green}A\) as we'd expect. Parallax mapping aims to offset the texture coordinates at fragment position \(\color{green}A\) in such a way that we get texture coordinates at point \(\color{blue}B\). We then use the texture coordinates at point \(\color{blue}B\) for all subsequent texture samples, making it look like the viewer is actually looking at point \(\color{blue}B\).
     31 </p>
     32       
     33 <p>
     34   The trick is to figure out how to get the texture coordinates at point \(\color{blue}B\) from point \(\color{green}A\). Parallax mapping tries to solve this by scaling the fragment-to-view direction vector \(\color{orange}{\bar{V}}\) by the height at fragment \(\color{green}A\). So we're scaling the length of \(\color{orange}{\bar{V}}\) to be equal to a sampled value from the heightmap \(\color{green}{H(A)}\) at fragment position \(\color{green}A\). The image below shows this scaled vector \(\color{brown}{\bar{P}}\):
     35 </p>
     36       
     37       <img src="/img/advanced-lighting/parallax_mapping_scaled_height.png" class="clean" alt="Diagram of how parallax mapping works in OpenGL with vector scaled by fragment's height."/>
     38         
     39 <p>
     40   We then take this vector \(\color{brown}{\bar{P}}\) and use the coordinates that align with the plane as the texture coordinate offset. This works because vector \(\color{brown}{\bar{P}}\) is calculated using a height value from the heightmap. So the higher a fragment's height, the more it effectively gets displaced. 
     41 </p>
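<p>
  Written out, with \(T_A\) the original texture coordinates at fragment \(\color{green}A\) and \(T_B\) the coordinates we're after (this \(T\) shorthand is only used here), the basic idea amounts to:
  \[ \color{brown}{\bar{P}} = \color{orange}{\bar{V}} \cdot \color{green}{H(A)}, \qquad T_B \approx T_A + \color{brown}{\bar{P}}_{xy} \]
  With the depth-map variant used later in this chapter, the offset is subtracted instead of added.
</p>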
     42         
     43 <p>
     44   This little trick gives good results most of the time, but it is still a really crude approximation to get to point \(\color{blue}B\). When heights change rapidly over a surface the results tend to look unrealistic as the vector \(\color{brown}{\bar{P}}\) will not end up close to \(\color{blue}B\) as you can see below:
     45 </p>
     46         
     47         <img src="/img/advanced-lighting/parallax_mapping_incorrect_p.png" class="clean" alt="Diagram of why basic parallax mapping gives incorrect result at steep height changes."/>
     48         
     49 <p>
     50   Another issue with parallax mapping is that it's difficult to figure out which coordinates to retrieve from \(\color{brown}{\bar{P}}\) when the surface is arbitrarily rotated in some way. We'd rather do this in a different coordinate space where the <code>x</code> and <code>y</code> component of vector \(\color{brown}{\bar{P}}\) always align with the texture's surface. If you've followed along in the <a href="https://learnopengl.com/Advanced-Lighting/Normal-Mapping"  target="_blank">normal mapping</a> chapter you probably guessed how we can accomplish this. And yes, we would like to do parallax mapping in tangent space.
     51 </p>
     52           
     53 <p>
     54   By transforming the fragment-to-view direction vector \(\color{orange}{\bar{V}}\) to tangent space, the transformed \(\color{brown}{\bar{P}}\) vector will have its <code>x</code> and <code>y</code> component aligned to the surface's tangent and bitangent vectors. As the tangent and bitangent vectors are pointing in the same direction as the surface's texture coordinates we can take the <code>x</code> and <code>y</code> components of \(\color{brown}{\bar{P}}\) as the texture coordinate offset, regardless of the surface's orientation.
     55 </p>
     56           
     57 <p>
     58   But enough about the theory, let's get our feet wet and start implementing actual parallax mapping.
     59 </p>
     60           
     61 <h2>Parallax mapping</h2>
     62 <p>
     63   For parallax mapping we're going to use a simple 2D plane for which we calculated its tangent and bitangent vectors before sending it to the GPU; similar to what we did in the normal mapping chapter. Onto the plane we're going to attach a <a href="/img/textures/bricks2.jpg" target="_blank">diffuse texture</a>, a <a href="/img/textures/bricks2_normal.jpg" target="_blank">normal map</a>, and a <a href="/img/textures/bricks2_disp.jpg" target="_blank">displacement map</a> that you can download from their URLs. For this example we're going to use parallax mapping in conjunction with normal mapping. Because parallax mapping gives the illusion of displacing a surface, the illusion breaks when the lighting doesn't match. As normal maps are often generated from heightmaps, using a normal map together with the heightmap makes sure the lighting matches the displacement.
     64 </p>
     65           
     66 <p>
     67   You may have already noted that the displacement map linked above is the inverse of the heightmap shown at the start of this chapter. With parallax mapping it makes more sense to use the inverse of the heightmap as it's easier to fake depth than height on flat surfaces. This slightly changes how we perceive parallax mapping as shown below:
     68 </p>
     69           
     70 <img src="/img/advanced-lighting/parallax_mapping_depth.png" class="clean" alt="Parallax mapping using a depth map instead of a heightmap"/>
     71   
     72 <p>
     73   We again have points \(\color{green}A\) and \(\color{blue}B\), but this time we obtain vector \(\color{brown}{\bar{P}}\) by <strong>subtracting</strong> vector \(\color{orange}{\bar{V}}\) from the texture coordinates at point \(\color{green}A\). We can obtain depth values instead of height values by subtracting the sampled heightmap values from <code>1.0</code> in the shaders, or by simply inverting its texture values in image-editing software, as we did with the depthmap linked above.
     74 </p>
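<p>
  If you only have the original heightmap available, a minimal sketch of doing that inversion in the shader (assuming the heightmap is bound to the <var>depthMap</var> sampler from the fragment shader shown below) looks like this:
</p>

<pre><code>
// convert a sampled height value into a depth value by inverting it
float height = texture(depthMap, texCoords).r;
float depth  = 1.0 - height;
</code></pre>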
     75           
     76           
     77 <p>
     78   Parallax mapping is implemented in the fragment shader as the displacement effect is different all over a triangle's surface. In the fragment shader we're then going to need to calculate the fragment-to-view direction vector \(\color{orange}{\bar{V}}\) so we need the view position and a fragment position in tangent space. In the normal mapping chapter we already had a vertex shader that sends these vectors in tangent space so we can take an exact copy of that chapter's vertex shader:
     79 </p>
     80           
     81 <pre><code>
     82 #version 330 core
     83 layout (location = 0) in vec3 aPos;
     84 layout (location = 1) in vec3 aNormal;
     85 layout (location = 2) in vec2 aTexCoords;
     86 layout (location = 3) in vec3 aTangent;
     87 layout (location = 4) in vec3 aBitangent;
     88 
     89 out VS_OUT {
     90     vec3 FragPos;
     91     vec2 TexCoords;
     92     vec3 TangentLightPos;
     93     vec3 TangentViewPos;
     94     vec3 TangentFragPos;
     95 } vs_out;
     96 
     97 uniform mat4 projection;
     98 uniform mat4 view;
     99 uniform mat4 model;
    100 
    101 uniform vec3 lightPos;
    102 uniform vec3 viewPos;
    103 
    104 void main()
    105 {
    106     gl_Position      = projection * view * model * vec4(aPos, 1.0);
    107     vs_out.FragPos   = vec3(model * vec4(aPos, 1.0));   
    108     vs_out.TexCoords = aTexCoords;    
    109     
    110     vec3 T   = normalize(mat3(model) * aTangent);
    111     vec3 B   = normalize(mat3(model) * aBitangent);
    112     vec3 N   = normalize(mat3(model) * aNormal);
    113     mat3 TBN = transpose(mat3(T, B, N));
    114 
    115     vs_out.TangentLightPos = TBN * lightPos;
    116     vs_out.TangentViewPos  = TBN * viewPos;
    117     vs_out.TangentFragPos  = TBN * vs_out.FragPos;
    118 }   
    119 </code></pre>
    120 
    121 <p>
    122   Within the fragment shader we then implement the parallax mapping logic. The fragment shader looks a bit like this:
    123 </p>
    124           
    125 <pre><code>
    126 #version 330 core
    127 out vec4 FragColor;
    128 
    129 in VS_OUT {
    130     vec3 FragPos;
    131     vec2 TexCoords;
    132     vec3 TangentLightPos;
    133     vec3 TangentViewPos;
    134     vec3 TangentFragPos;
    135 } fs_in;
    136 
    137 uniform sampler2D diffuseMap;
    138 uniform sampler2D normalMap;
    139 uniform sampler2D depthMap;
    140   
    141 uniform float height_scale;
    142   
    143 vec2 ParallaxMapping(vec2 texCoords, vec3 viewDir);
    144   
    145 void main()
    146 {           
    147     // offset texture coordinates with Parallax Mapping
    148     vec3 viewDir   = normalize(fs_in.TangentViewPos - fs_in.TangentFragPos);
    149     vec2 texCoords = ParallaxMapping(fs_in.TexCoords,  viewDir);
    150 
    151     // then sample textures with new texture coords
    152     vec3 diffuse = texture(diffuseMap, texCoords).rgb;
    153     vec3 normal  = texture(normalMap, texCoords).rgb;
    154     normal = normalize(normal * 2.0 - 1.0);
    155     // proceed with lighting code
    156     [...]    
    157 }
    158   
    159 </code></pre>
    160           
    161 <p>
    162   We defined a function called <fun>ParallaxMapping</fun> that takes as input the fragment's texture coordinates and the fragment-to-view direction \(\color{orange}{\bar{V}}\) in tangent space. The function returns the displaced texture coordinates. We then use these <em>displaced</em> texture coordinates as the texture coordinates for sampling the diffuse and normal map. As a result, the fragment's diffuse color and normal vector correctly correspond to the surface's displaced geometry.
    163 </p>
    164           
    165 <p>
    166   Let's take a look inside the <fun>ParallaxMapping</fun> function:
    167 </p>
    168           
    169 <pre><code>
    170 vec2 ParallaxMapping(vec2 texCoords, vec3 viewDir)
    171 { 
    172     float height =  texture(depthMap, texCoords).r;    
    173     vec2 p = viewDir.xy / viewDir.z * (height * height_scale);
    174     return texCoords - p;    
    175 } 
    176 </code></pre>
    177   
    178 <p>
    179   This relatively simple function is a direct translation of what we've discussed so far. We take the original texture coordinates <var>texCoords</var> and use these to sample the height (or depth) from the <var>depthMap</var> at the current fragment \(\color{green}{A}\) as \(\color{green}{H(A)}\). We then calculate \(\color{brown}{\bar{P}}\) as the <code>x</code> and <code>y</code> component of the tangent-space <var>viewDir</var> vector divided by its <code>z</code> component and scaled by \(\color{green}{H(A)}\). We also introduced a <var>height_scale</var> uniform for some extra control as the parallax effect is usually too strong without an extra scale parameter. We then subtract this vector \(\color{brown}{\bar{P}}\) from the texture coordinates to get the final displaced texture coordinates.
    180   </p>
    181   
    182 <p>
    183   What is interesting to note here is the division of <var>viewDir.xy</var> by <var>viewDir.z</var>. As the <var>viewDir</var> vector is normalized, <var>viewDir.z</var> will be somewhere in the range between <code>0.0</code> and <code>1.0</code>. When <var>viewDir</var> is largely parallel to the surface, its <code>z</code> component is close to <code>0.0</code> and the division returns a much larger vector \(\color{brown}{\bar{P}}\) compared to when <var>viewDir</var> is largely perpendicular to the surface. We're adjusting the size of \(\color{brown}{\bar{P}}\) in such a way that it offsets the texture coordinates at a larger scale when looking at a surface from an angle compared to when looking at it from the top; this gives more realistic results at angles. <br/>
    184   Some prefer to leave the division by <var>viewDir.z</var> out of the equation as default Parallax Mapping could produce undesirable results at angles; the technique is then called <def>Parallax Mapping with Offset Limiting</def>. Choosing which technique to pick is usually a matter of personal preference.
    185 </p>
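<p>
  A minimal sketch of that offset-limited variant, with everything else unchanged, simply drops the division by <var>viewDir.z</var> so the offset can never grow larger than <code>height * height_scale</code> (the function name below is just for illustration):
</p>

<pre><code>
vec2 ParallaxMappingOffsetLimiting(vec2 texCoords, vec3 viewDir)
{ 
    float height = texture(depthMap, texCoords).r;
    // no division by viewDir.z: the offset stays bounded at grazing angles
    vec2 p = viewDir.xy * (height * height_scale);
    return texCoords - p;    
} 
</code></pre>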
    186     
    187 <p>
    188   The resulting texture coordinates are then used to sample the other textures (diffuse and normal) and this gives a very neat displaced effect as you can see below with a <var>height_scale</var> of roughly <code>0.1</code>:
    189 </p>
    190   
    191 <img src="/img/advanced-lighting/parallax_mapping.png" alt="Image of parallax mapping in OpenGL"/>
    192     
    193 <p>
    194   Here you can see the difference between normal mapping and parallax mapping combined with normal mapping. Because parallax mapping tries to simulate depth it is actually possible to have bricks overlap other bricks based on the direction you view them. 
    195 </p>
    196     
    197 <p>
    198   You can still see a few weird border artifacts at the edge of the parallax mapped plane. This happens because at the edges of the plane the displaced texture coordinates can oversample outside the range [<code>0</code>, <code>1</code>]. This gives unrealistic results based on the texture's wrapping mode(s). A cool trick to solve this issue is to discard the fragment whenever it samples outside the default texture coordinate range:
    199 </p>
    200                   
    201 <pre><code>
    202 texCoords = ParallaxMapping(fs_in.TexCoords,  viewDir);
    203 if(texCoords.x &gt; 1.0 || texCoords.y &gt; 1.0 || texCoords.x &lt; 0.0 || texCoords.y &lt; 0.0)
    204     discard;
    205 </code></pre>
    206                 
    207 <p>
    208   All fragments with (displaced) texture coordinates outside the default range are discarded and Parallax Mapping then gives proper results around the edges of a surface. Note that this trick doesn't work on all types of surfaces, but when applied to a plane it gives great results:
    209 </p>
    210   
    211   <img src="/img/advanced-lighting/parallax_mapping_edge_fix.png" class="clean" alt="Parallax mapping with fragments discarded at the borders, fixing edge artifacts in OpenGL"/>
    212   
    213 <p>
    214   You can find the source code <a href="/code_viewer_gh.php?code=src/5.advanced_lighting/5.1.parallax_mapping/parallax_mapping.cpp" target="_blank">here</a>.
    215 </p>
    216           
    217 <p>
    218   It looks great and is quite fast as well, as we only need a single extra texture sample for parallax mapping to work. It does come with a few issues though as it sort of breaks down when looking at it from an angle (similar to normal mapping) and gives incorrect results with steep height changes, as you can see below:
    219 </p>
    220     
    221     <img src="/img/advanced-lighting/parallax_mapping_issues.png" alt="Three images displaying the issues with standard parallax mapping: breaks down at angles and incorrect results with steep height changes."/>
    222       
    223 <p>
    224   The reason that it doesn't work properly at times is that it's just a crude approximation of displacement mapping. There are some extra tricks however that still allow us to get almost perfect results with steep height changes, even when looking at an angle. For instance, what if, instead of one sample, we take multiple samples to find the closest point to \(\color{blue}B\)?
    225 </p>
    226       
    227 <h2>Steep Parallax Mapping</h2>
    228 <p>
    229   Steep Parallax Mapping is an extension on top of Parallax Mapping in that it uses the same principles, but instead of 1 sample it takes multiple samples to better pinpoint vector \(\color{brown}{\bar{P}}\) to \(\color{blue}B\). This gives much better results, even with steep height changes, as the accuracy of the technique is improved by the number of samples.
    230 </p>
    231       
    232 <p>
    233   The general idea of Steep Parallax Mapping is that it divides the total depth range into multiple layers of the same height/depth. For each of these layers we sample the depthmap, shifting the texture coordinates along the direction of \(\color{brown}{\bar{P}}\), until we find a sampled depth value that is less than the depth value of the current layer. Take a look at the following image:
    234 </p>
    235       
    236       <img src="/img/advanced-lighting/parallax_mapping_steep_parallax_mapping_diagram.png" class="clean" alt="Diagram of how steep Parallax Mapping works in OpenGL"/>
    237         
    238 <p>
    239   We traverse the depth layers from the top down and for each layer we compare its depth value to the depth value stored in the depthmap. If the layer's depth value is less than the depthmap's value it means this layer's part of vector \(\color{brown}{\bar{P}}\) is not below the surface. We continue this process until the layer's depth is higher than the value stored in the depthmap: this point is then below the (displaced) geometric surface. 
    240 </p>
    241         
    242 <p>
    243   In this example we can see that the depthmap value at the second layer (D(2) = 0.73) is still larger than the second layer's depth value <code>0.4</code> so we continue. In the next iteration, the layer's depth value <code>0.6</code> finally becomes higher than the depthmap's sampled depth value (D(3) = 0.37). We can thus assume vector \(\color{brown}{\bar{P}}\) at the third layer to be the most viable position of the displaced geometry. We then take the texture coordinate offset \(T_3\) from vector \(\color{brown}{\bar{P_3}}\) to displace the fragment's texture coordinates. You can see how the accuracy increases with more depth layers. 
    244 </p>
    245         
    246 <p>
    247   To implement this technique we only have to change the <fun>ParallaxMapping</fun> function as we already have all the variables we need:
    248 </p>
    249         
    250 <pre><code>
    251 vec2 ParallaxMapping(vec2 texCoords, vec3 viewDir)
    252 { 
    253     // number of depth layers
    254     const float numLayers = 10;
    255     // calculate the size of each layer
    256     float layerDepth = 1.0 / numLayers;
    257     // depth of current layer
    258     float currentLayerDepth = 0.0;
    259     // the amount to shift the texture coordinates per layer (from vector P)
    260     vec2 P = viewDir.xy * height_scale; 
    261     vec2 deltaTexCoords = P / numLayers;
    262   
    263     [...]     
    264 }   
    265 </code></pre>
    266         
    267 <p>
    268   Here we first set things up: we specify the number of layers, calculate the depth offset of each layer, and finally calculate the texture coordinate offset that we have to shift along the direction of \(\color{brown}{\bar{P}}\) per layer.
    269 </p>
    270         
    271 <p>
    272   We then iterate through all the layers, starting from the top, until we find a depthmap value less than the layer's depth value:
    273 </p>
    274         
    275 <pre><code>
    276 // get initial values
    277 vec2  currentTexCoords     = texCoords;
    278 float currentDepthMapValue = texture(depthMap, currentTexCoords).r;
    279   
    280 while(currentLayerDepth &lt; currentDepthMapValue)
    281 {
    282     // shift texture coordinates along direction of P
    283     currentTexCoords -= deltaTexCoords;
    284     // get depthmap value at current texture coordinates
    285     currentDepthMapValue = texture(depthMap, currentTexCoords).r;  
    286     // get depth of next layer
    287     currentLayerDepth += layerDepth;  
    288 }
    289 
    290 return currentTexCoords;
    291 </code></pre>
    292         
    293 <p>
    294   Here we loop over each depth layer and stop as soon as we find the texture coordinate offset along vector \(\color{brown}{\bar{P}}\) that first returns a depth that's below the (displaced) surface. The resulting offset is subtracted from the fragment's texture coordinates to get a final displaced texture coordinate vector, this time with much more accuracy compared to traditional parallax mapping.
    295 </p>
    296         
    297 <p>
    298   With around <code>10</code> samples the brick surface already looks a lot more convincing even when looking at it from an angle, but steep parallax mapping really shines when having a complex surface with steep height changes; like the wooden toy surface displayed earlier:
    299 </p>
    300         
    301         <img src="/img/advanced-lighting/parallax_mapping_steep_parallax_mapping.png" class="clean" alt="Steep Parallax Mapping implemented in OpenGL"/>
    302           
    303 <p>
    304   We can improve the algorithm a bit by exploiting one of Parallax Mapping's properties. When looking straight onto a surface there isn't much texture displacement going on while there is a lot of displacement when looking at a surface from an angle (visualize the view direction in both cases). By taking fewer samples when looking straight at a surface and more samples when looking at an angle we only sample the necessary amount:
    305 </p>
    306           
    307 <pre><code>
    308 const float minLayers = 8.0;
    309 const float maxLayers = 32.0;
    310 float numLayers = mix(maxLayers, minLayers, max(dot(vec3(0.0, 0.0, 1.0), viewDir), 0.0));  
    311 </code></pre>
    312           
    313 <p>
    314   Here we take the dot product of <var>viewDir</var> and the positive z direction and use its result to align the number of samples to <var>minLayers</var> or <var>maxLayers</var> based on the angle we're looking towards a surface (note that the positive z direction equals the surface's normal vector in tangent space). If we were to look at a direction parallel to the surface we'd use a total of <code>32</code> layers.
    315 </p>
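<p>
  To make the effect of <fun>mix</fun> concrete: with \(d = \max((0, 0, 1) \cdot \color{orange}{\bar{V}}, 0)\) the result is \(\textit{numLayers} = \textit{maxLayers} \cdot (1 - d) + \textit{minLayers} \cdot d\). Looking straight at the surface gives \(d = 1\) and thus <code>8</code> layers, while grazing angles approach \(d = 0\) and thus <code>32</code> layers.
</p>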
    316           
    317 <p>
    318   You can find the updated source code <a href="/code_viewer_gh.php?code=src/5.advanced_lighting/5.2.steep_parallax_mapping/steep_parallax_mapping.cpp" target="_blank">here</a>. You can also find the wooden toy box surface here: <a href="/img/textures/wood.png" target="_blank">diffuse</a>, <a href="/img/textures/toy_box_normal.png" target="_blank">normal</a> and <a href="/img/textures/toy_box_disp.png" target="_blank">depth</a>.
    319 </p>
    320           
    321 <p>
    322   Steep Parallax Mapping also comes with its problems though. Because the technique is based on a finite number of samples, we get aliasing effects and the clear distinctions between layers can easily be spotted:
    323 </p>
    324           
    325           <img src="/img/advanced-lighting/parallax_mapping_steep_artifact.png"  class="clean" alt="The visible layers of Steep Parallax Mapping can easily be detected with small numbers"/>
    326             
    327 <p>
    328   We can reduce the issue by taking a larger number of samples, but this quickly becomes too heavy a burden on performance. There are several approaches that aim to fix this issue by not taking the first position that's below the (displaced) surface, but by <em>interpolating</em> between the position's two closest depth layers to find a much closer match to \(\color{blue}B\). 
    329 </p>
    330             
    331 <p>
    332   Two of the more popular of these approaches are called <def>Relief Parallax Mapping</def> and <def>Parallax Occlusion Mapping</def> of which Relief Parallax Mapping gives the most accurate results, but is also more performance heavy compared to Parallax Occlusion Mapping. Because Parallax Occlusion Mapping gives almost the same results as Relief Parallax Mapping and is also more efficient it is often the preferred approach.
    333 </p>
    334             
    335 <h2>Parallax Occlusion Mapping</h2>
    336 <p>
    337     Parallax Occlusion Mapping is based on the same principles as Steep Parallax Mapping, but instead of taking the texture coordinates of the first depth layer after a collision, we're going to linearly interpolate between the depth layer after and before the collision. We base the weight of the linear interpolation on how far the surface's depth is from each of the two layers' depth values. Take a look at the following picture to get a grasp of how it works:
    338 </p>
    339             
    340 <img src="/img/advanced-lighting/parallax_mapping_parallax_occlusion_mapping_diagram.png" class="clean" alt="How Parallax Occlusion Mapping works in OpenGL"/>
    341               
    342 <p>
    343   As you can see, it's largely similar to Steep Parallax Mapping with as an extra step the linear interpolation between the two depth layers' texture coordinates surrounding the intersected point. This is again an approximation, but significantly more accurate than Steep Parallax Mapping.
    344 </p>
    345               
    346 <p>
    347   The code for Parallax Occlusion Mapping is an extension on top of Steep Parallax Mapping and not too difficult:
    348 </p>
    349               
    350 <pre><code>
    351 [...] // steep parallax mapping code here
    352   
    353 // get texture coordinates before collision (reverse operations)
    354 vec2 prevTexCoords = currentTexCoords + deltaTexCoords;
    355 
    356 // get depth after and before collision for linear interpolation
    357 float afterDepth  = currentDepthMapValue - currentLayerDepth;
    358 float beforeDepth = texture(depthMap, prevTexCoords).r - currentLayerDepth + layerDepth;
    359  
    360 // interpolation of texture coordinates
    361 float weight = afterDepth / (afterDepth - beforeDepth);
    362 vec2 finalTexCoords = prevTexCoords * weight + currentTexCoords * (1.0 - weight);
    363 
    364 return finalTexCoords;  
    365 </code></pre>
    366               
    367 <p>
    368   After we've found the depth layer after intersecting the (displaced) surface geometry, we also retrieve the texture coordinates of the depth layer before intersection. Then we calculate the distance of the (displaced) geometry's depth from the corresponding depth layers and interpolate between these two values. The linear interpolation is a basic interpolation between both layers' texture coordinates. The function then finally returns the final interpolated texture coordinates.
    369 </p>
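<p>
  To see why this weight lands between the two layers, write \(a\) for <var>afterDepth</var> (zero or negative once the loop has exited, since the layer depth has passed the surface depth) and \(b\) for <var>beforeDepth</var> (positive, since the previous layer was still above the surface). The weight then becomes
  \[ w = \frac{a}{a - b} = \frac{|a|}{|a| + b} \]
  which always lies between \(0\) and \(1\): when the ray only barely dips below the surface at the current layer, \(|a|\) is small and the result stays close to <var>currentTexCoords</var>; when it overshoots by a lot, the result moves towards <var>prevTexCoords</var>.
</p>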
    370               
    371 <p>
    372   Parallax Occlusion Mapping gives surprisingly good results and although some slight artifacts and aliasing issues are still visible, it's generally a good trade-off; they're only really visible when heavily zoomed in or looking at very steep angles. 
    373 </p>
    374               
    375               <img src="/img/advanced-lighting/parallax_mapping_parallax_occlusion_mapping.png" alt="Image of Parallax Occlusion Mapping in OpenGL"/>                                 
    376                  
    377 <p>
    378  You can find the source code <a href="/code_viewer_gh.php?code=src/5.advanced_lighting/5.3.parallax_occlusion_mapping/parallax_occlusion_mapping.cpp" target="_blank">here</a>.
    379 </p>
    380                     
    381 <p>
    382   Parallax Mapping is a great technique to boost the detail of your scene, but does come with a few artifacts you'll have to consider when using it. Most often, parallax mapping is used on floor or wall-like surfaces where it's not as easy to determine the surface's outline and the viewing angle is most often roughly perpendicular to the surface. This way, the artifacts of Parallax Mapping aren't as noticeable and make it an incredibly interesting technique for boosting your objects' details. 
    383 </p>            
    384                 
    385 <h2>Additional resources</h2>
    386 <ul>
    387      <li><a href="http://sunandblackcat.com/tipFullView.php?topicid=28" target="_blank">Parallax Occlusion Mapping in GLSL</a>: great parallax mapping tutorial by sunandblackcat.com.</li>
    388     <li><a href="https://www.youtube.com/watch?v=xvOT62L-fQI" target="_blank">How Parallax Displacement Mapping Works</a>: a nice video tutorial of how parallax mapping works by TheBennyBox.</li>
    389 </ul>       
    390 
    391     </div>
    392     
    400 
    401     
    402 
    403 
    404 </div> <!-- container div -->
    405 
    406 
    407 </div> <!-- super container div -->
    408 </body>
    409 </html>