      1     <h1 id="content-title">Shadow Mapping</h1>
      2 <h1 id="content-url" style='display:none;'>Advanced-Lighting/Shadows/Shadow-Mapping</h1>
      3 <p>
      4   Shadows are a result of the absence of light due to occlusion. When a light source's light rays do not hit an object because it gets occluded by some other object, the object is in shadow. Shadows add a great deal of realism to a lit scene and make it easier for a viewer to observe spatial relationships between objects. They give a greater sense of depth to our scene and objects. For example, take a look at the following image of a scene with and without shadows:
      5   
      6 </p>
      7 
      8 <img src="/img/advanced-lighting/shadow_mapping_with_without.png" alt="comparison of shadows in a scene with and without in OpenGL"/>
      9   
     10 <p>
     11   You can see that with shadows it becomes much more obvious how the objects relate to each other. For instance, the fact that one of the cubes is floating above the others is only really noticeable when we have shadows.
     12 </p>
     13   
     14 <p>
     15   Shadows are a bit tricky to implement though, specifically because in current real-time (rasterized graphics) research a perfect shadow algorithm hasn't been developed yet. There are several good shadow approximation techniques, but they all have their little quirks and annoyances which we have to take into account.
     16 </p>
     17   
     18 <p>
     19   One technique used by most videogames that gives decent results and is relatively easy to implement is <def>shadow mapping</def>. Shadow mapping is not too difficult to understand, doesn't cost too much in performance and quite easily extends into more advanced algorithms (like <a href="https://learnopengl.com/Advanced-Lighting/Shadows/Point-Shadows" target="_blank">Omnidirectional Shadow Maps</a> and Cascaded Shadow Maps). 
     20 </p>
     21   
     22 <h2>Shadow mapping</h2>
     23 <p>
     24    The idea behind shadow mapping is quite simple: we render the scene from the light's point of view and everything we see from the light's perspective is lit and everything we can't see must be in shadow. Imagine a floor section with a large box between itself and a light source. Since the light source will see this box and not the floor section when looking in its direction, that specific floor section should be in shadow. 
     25 </p>
     26   
     27   <img src="/img/advanced-lighting/shadow_mapping_theory.png" class="clean" alt="Shadow mapping illustrated."/>
     28     
     29 <p>
     30    Here all the blue lines represent the fragments that the light source can see. The occluded fragments are shown as black lines: these are rendered as being shadowed. If we were to draw a line or <def>ray</def> from the light source to a fragment on the right-most box we can see the ray first hits the floating container before hitting the right-most container. As a result, the floating container's fragment is lit and the right-most container's fragment is not lit and thus in shadow.
     31 </p>
     32     
     33 <p>
     34   We want to get the point on the ray where it first hits an object and compare this <em>closest point</em> to other points on this ray. We then do a basic test to see if a test point's ray position is further down the ray than the closest point and if so, the test point must be in shadow. Iterating through possibly thousands of light rays from such a light source is an extremely inefficient approach and doesn't lend itself too well to real-time rendering. We can do something similar, but without casting light rays. Instead, we use something we're quite familiar with: the depth buffer.
     35 </p>
     36     
     37 <p>
     38   You may remember from the <a href="https://learnopengl.com/Advanced-OpenGL/Depth-testing" target="_blank">depth testing</a> chapter that a value in the depth buffer corresponds to the depth of a fragment clamped to [0,1] from the camera's point of view. What if we were to render the scene from the light's perspective and store the resulting depth values in a texture? This way, we can sample the closest depth values as seen from the light's perspective. After all, the depth values show the first fragment visible from the light's perspective. We store all these depth values in a texture that we call a <def>depth map</def> or <def>shadow map</def>.
     39 </p>
     40     
     41     <img src="/img/advanced-lighting/shadow_mapping_theory_spaces.png" class="clean" alt="Different coordinate transforms / spaces for shadow mapping."/>
     42       
     43 <p>
     44   The left image shows a directional light source (all light rays are parallel) casting a shadow on the surface below the cube. Using the depth values stored in the depth map we find the closest point and use that to determine whether fragments are in shadow. We create the depth map by rendering the scene (from the light's perspective) using a view and projection matrix specific to that light source. This projection and view matrix together form a transformation \(T\) that transforms any 3D position to the light's (visible) coordinate space.
     45 </p>
     46             
     47 <note>
     48   A directional light doesn't have a position as it's modelled to be infinitely far away. However, for the sake of shadow mapping we need to render the scene from a light's perspective and thus render the scene from a position somewhere along the light's direction.
     49 </note>
     50       
     51  <p>
     52   In the right image we see the same directional light and the viewer. We render a fragment at point \(\bar{\color{red}{P}}\) for which we have to determine whether it is in shadow. To do this, we first transform point \(\bar{\color{red}{P}}\) to the light's coordinate space using \(T\). Since point \(\bar{\color{red}{P}}\) is now as seen from the light's perspective, its <code>z</code> coordinate corresponds to its depth which in this example is <code>0.9</code>. Using point \(\bar{\color{red}{P}}\) we can also index the depth/shadow map to obtain the closest visible depth from the light's perspective, which is at point \(\bar{\color{green}{C}}\) with a sampled depth of <code>0.4</code>. Since indexing the depth map returns a depth smaller than the depth at point \(\bar{\color{red}{P}}\) we can conclude point \(\bar{\color{red}{P}}\) is occluded and thus in shadow. 
     53 </p>
     54 
     55     
     56 <p>
     57   Shadow mapping therefore consists of two passes: first we render the depth map, and in the second pass we render the scene as normal and use the generated depth map to calculate whether fragments are in shadow. It may sound a bit complicated, but as soon as we walk through the technique step-by-step it'll likely start to make sense. 
     58 </p>
     59     
     60 <h2>The depth map</h2>
     61 <p>
     62   The first pass requires us to generate a depth map. The depth map is the depth texture as rendered from the light's perspective that we'll be using for testing for shadows. Because we need to store the rendered result of a scene into a texture we're going to need <a href="https://learnopengl.com/Advanced-OpenGL/Framebuffers" target="_blank">framebuffers</a> again.
     63 </p>
     64       
     65 <p>
     66   First we'll create a framebuffer object for rendering the depth map:
     67 </p>
     68       
     69 <pre><code>
     70 unsigned int depthMapFBO;
     71 <function id='76'>glGenFramebuffers</function>(1, &depthMapFBO);  
     72 </code></pre>
     73       
     74 <p>
     75   Next we create a 2D texture that we'll use as the framebuffer's depth buffer:
     76 </p>
     77       
     78 <pre><code>
     79 const unsigned int SHADOW_WIDTH = 1024, SHADOW_HEIGHT = 1024;
     80 
     81 unsigned int depthMap;
     82 <function id='50'>glGenTextures</function>(1, &depthMap);
     83 <function id='48'>glBindTexture</function>(GL_TEXTURE_2D, depthMap);
     84 <function id='52'>glTexImage2D</function>(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 
     85              SHADOW_WIDTH, SHADOW_HEIGHT, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
     86 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
     87 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
     88 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); 
     89 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);  
     90 </code></pre>
     91       
     92 <p>
     93   Generating the depth map shouldn't look too complicated. Because we only care about depth values we specify the texture's formats as <var>GL_DEPTH_COMPONENT</var>. We also give the texture a width and height of <code>1024</code>: this is the resolution of the depth map. 
     94 </p>
     95       
     96 <p>
     97   With the generated depth texture we can attach it as the framebuffer's depth buffer:
     98 </p>
     99 
    100 <pre class="cpp"><code>
    101 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, depthMapFBO);
    102 <function id='81'>glFramebufferTexture2D</function>(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthMap, 0);
    103 glDrawBuffer(GL_NONE);
    104 glReadBuffer(GL_NONE);
    105 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, 0);  
    106 </code></pre>
    107 
    108 <p>
    109   We only need the depth information when rendering the scene from the light's perspective so there is no need for a color buffer. A framebuffer object however is not complete without a color buffer so we need to explicitly tell OpenGL we're not going to render any color data. We do this by setting both the read and draw buffer to <var>GL_NONE</var> with <fun>glDrawBuffer</fun> and <fun>glReadBuffer</fun>. 
    110 </p>
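<p>
  Although not strictly required, it's good practice to verify that this depth-only framebuffer is actually complete before unbinding it. A minimal sanity check (run while <var>depthMapFBO</var> is still bound) could look like the snippet below; with the draw and read buffers set to <var>GL_NONE</var>, <fun>glCheckFramebufferStatus</fun> should still report completeness:
</p>

<pre><code>
// optional sanity check while depthMapFBO is still bound
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    std::cout &lt;&lt; "Depth map framebuffer is not complete!" &lt;&lt; std::endl;
</code></pre>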
    111       
    112 <p>
    113   With a properly configured framebuffer that renders depth values to a texture we can start the first pass: generate the depth map. When combined with the second pass, the complete rendering stage will look a bit like this:
    114 </p>
    115       
    116 <pre><code>
    117 // 1. first render to depth map
    118 <function id='22'>glViewport</function>(0, 0, SHADOW_WIDTH, SHADOW_HEIGHT);
    119 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, depthMapFBO);
    120     <function id='10'>glClear</function>(GL_DEPTH_BUFFER_BIT);
    121     ConfigureShaderAndMatrices();
    122     RenderScene();
    123 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, 0);
    124 // 2. then render scene as normal with shadow mapping (using depth map)
    125 <function id='22'>glViewport</function>(0, 0, SCR_WIDTH, SCR_HEIGHT);
    126 <function id='10'>glClear</function>(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    127 ConfigureShaderAndMatrices();
    128 <function id='48'>glBindTexture</function>(GL_TEXTURE_2D, depthMap);
    129 RenderScene();
    130 </code></pre>
    131       
    132 <p>
    133   This code leaves out some details, but it'll give you the general idea of shadow mapping. What is important to note here are the calls to <fun><function id='22'>glViewport</function></fun>. Because shadow maps often have a different resolution compared to what we originally render the scene in (usually the window resolution), we need to change the viewport parameters to accommodate the size of the shadow map. If we forget to update the viewport parameters, the resulting depth map will be either incomplete or too small.
    134 </p>
    135     
    136 <h3>Light space transform</h3>
    137 <p>
    138   An unknown in the previous snippet of code is the <fun>ConfigureShaderAndMatrices</fun> function. In the second pass this is business as usual: make sure proper projection and view matrices are set, and set the relevant model matrices per object. However, in the first pass we need to use a different projection and view matrix to render the scene from the light's point of view.
    139 </p>
    140       
    141 <p>
    142   Because we're modelling a directional light source, all its light rays are parallel. For this reason, we're going to use an orthographic projection matrix for the light source where there is no perspective deform:
    143 </p>
    144 
    145 <pre><code>
    146 float near_plane = 1.0f, far_plane = 7.5f;
    147 glm::mat4 lightProjection = <function id='59'>glm::ortho</function>(-10.0f, 10.0f, -10.0f, 10.0f, near_plane, far_plane);  
    148 </code></pre>
    149       
    150 <p>
    151   Here is an example orthographic projection matrix as used in this chapter's demo scene. Because a projection matrix indirectly determines the range of what is visible (e.g. what is not clipped) you want to make sure the size of the projection frustum correctly contains the objects you want to be in the depth map. When objects or fragments are not in the depth map they will not produce shadows.
    152 </p>
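<p>
  The values above are hand-picked for this chapter's small demo scene. As an illustration of the idea (this is not something the chapter's code does), if you track an axis-aligned bounding box of your shadow casters in the light's view space, you could derive the orthographic bounds from it. The helper below is a hypothetical sketch that assumes such a box (<var>minLS</var>/<var>maxLS</var>) is already available:
</p>

<pre><code>
// hypothetical sketch: build the light's orthographic projection from a
// light-view-space bounding box of the shadow-casting geometry
glm::mat4 lightProjectionFromBounds(const glm::vec3 &minLS, const glm::vec3 &maxLS)
{
    return glm::ortho(minLS.x, maxLS.x,    // left, right
                      minLS.y, maxLS.y,    // bottom, top
                      -maxLS.z, -minLS.z); // near, far (the light looks down -z)
}
</code></pre>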
    153       
    154 <p>
    155   To create a view matrix to transform each object so they're visible from the light's point of view, we're going to use the infamous <fun><function id='62'>glm::lookAt</function></fun> function; this time with the light source's position looking at the scene's center. 
    156 </p>
    157       
    158 <pre><code>
    159 glm::mat4 lightView = <function id='62'>glm::lookAt</function>(glm::vec3(-2.0f, 4.0f, -1.0f), 
    160                                   glm::vec3( 0.0f, 0.0f,  0.0f), 
    161                                   glm::vec3( 0.0f, 1.0f,  0.0f));  
    162 </code></pre>
    163       
    164 <p>
    165   Combining these two gives us a light space transformation matrix that transforms each world-space vector into the space as visible from the light source; exactly what we need to render the depth map.
    166 </p>
    167       
    168 <pre><code>
    169 glm::mat4 lightSpaceMatrix = lightProjection * lightView; 
    170 </code></pre>
    171       
    172 <p>
    173   This <var>lightSpaceMatrix</var> is the transformation matrix that we earlier denoted as \(T\). With this <var>lightSpaceMatrix</var>, we can render the scene as usual as long as we give each shader the light-space equivalents of the projection and view matrices. However, we only care about depth values and not all the expensive fragment (lighting) calculations. To save performance we're going to use a different, but much simpler shader for rendering to the depth map.
    174 </p>
    175       
    176 <h3>Render to depth map</h3>
    177 <p>
    178   When we render the scene from the light's perspective we'd much rather use a simple shader that only transforms the vertices to light space and not much more. For such a simple shader called <var>simpleDepthShader</var> we'll use the following vertex shader:
    179 </p>
    180       
    181 <pre><code>
    182 #version 330 core
    183 layout (location = 0) in vec3 aPos;
    184 
    185 uniform mat4 lightSpaceMatrix;
    186 uniform mat4 model;
    187 
    188 void main()
    189 {
    190     gl_Position = lightSpaceMatrix * model * vec4(aPos, 1.0);
    191 }  
    192 </code></pre>
    193       
    194 <p>
    195   This vertex shader takes a per-object model matrix and a vertex position, and transforms each vertex to light space using <var>lightSpaceMatrix</var>.
    196 </p>      
    197       
    198 <p>
    199   Since we have no color buffer and disabled the draw and read buffers, the resulting fragments do not require any processing so we can simply use an empty fragment shader:
    200 </p>
    201 
    202 <pre><code>
    203 #version 330 core
    204 
    205 void main()
    206 {             
    207     // gl_FragDepth = gl_FragCoord.z;
    208 }  
    209 </code></pre>
    210       
    211 <p>
    212   This empty fragment shader does no processing whatsoever, and at the end of its run the depth buffer is updated. We could explicitly set the depth by uncommenting its one line, but this is effectively what happens behind the scenes anyway.
    213 </p>
    214       
    215 <p>
    216   Rendering the depth/shadow map now effectively becomes:
    217 </p>
    218       
    219 <pre><code>
    220 simpleDepthShader.use();
    221 <function id='44'>glUniform</function>Matrix4fv(lightSpaceMatrixLocation, 1, GL_FALSE, glm::value_ptr(lightSpaceMatrix));
    222 
    223 <function id='22'>glViewport</function>(0, 0, SHADOW_WIDTH, SHADOW_HEIGHT);
    224 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, depthMapFBO);
    225     <function id='10'>glClear</function>(GL_DEPTH_BUFFER_BIT);
    226     RenderScene(simpleDepthShader);
    227 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, 0);  
    228 </code></pre>
    229       
    230 <p>
    231   Here the <fun>RenderScene</fun> function takes a shader program, calls all relevant drawing functions and sets the corresponding model matrices where necessary. 
    232 </p>
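<p>
  The chapter doesn't list <fun>RenderScene</fun> itself, but a minimal sketch of what it could look like is shown below. It assumes a <var>planeVAO</var> holding the floor geometry and a <fun>renderCube</fun> helper that draws a unit cube (both names are illustrative here), and uses the book's <var>Shader</var> class to set the model matrix per object:
</p>

<pre><code>
// hypothetical sketch of RenderScene: draws the floor and two cubes,
// setting the model matrix on whatever shader was passed in
void RenderScene(Shader &shader)
{
    // floor
    glm::mat4 model = glm::mat4(1.0f);
    shader.setMat4("model", model);
    glBindVertexArray(planeVAO);
    glDrawArrays(GL_TRIANGLES, 0, 6);
    glBindVertexArray(0);
    // cubes
    model = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 1.5f, 0.0f));
    shader.setMat4("model", model);
    renderCube();
    model = glm::translate(glm::mat4(1.0f), glm::vec3(2.0f, 0.0f, 1.0f));
    shader.setMat4("model", model);
    renderCube();
}
</code></pre>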
    233 
    234 <p>
    235   The result is a nicely filled depth buffer holding the closest depth of each visible fragment from the light's perspective. By rendering this texture onto a 2D quad that fills the screen (similar to what we did in the post-processing section at the end of the <a href="https://learnopengl.com/Advanced-OpenGL/Framebuffers" target="_blank">framebuffers</a> chapter) we get something like this:
    236 </p>
    237       
    238 <img src="/img/advanced-lighting/shadow_mapping_depth_map.png" class="clean" alt="Depth (or shadow) map of shadow mapping technique"/>
    239       
    240 <p>
    241   For rendering the depth map onto a quad we used the following fragment shader:
    242 </p>
    243   
    244 <pre><code>
    245 #version 330 core
    246 out vec4 FragColor;
    247   
    248 in vec2 TexCoords;
    249 
    250 uniform sampler2D depthMap;
    251 
    252 void main()
    253 {             
    254     float depthValue = texture(depthMap, TexCoords).r;
    255     FragColor = vec4(vec3(depthValue), 1.0);
    256 }  
    257 </code></pre>
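<p>
  To actually see this on screen you still need to draw a screen-filling quad with that shader. A minimal sketch of such a debug pass (assuming a <var>debugDepthQuad</var> shader wrapper and a <fun>renderQuad</fun> helper similar to the quad from the framebuffers chapter; both names are illustrative) could be:
</p>

<pre><code>
// debug pass: visualize the depth map on a full-screen quad
debugDepthQuad.use();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, depthMap);
renderQuad(); // draws a quad in NDC that fills the screen
</code></pre>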
    258   
    259 <p>
    260   Note that there are some subtle changes when displaying depth using a perspective projection matrix instead of an orthographic projection matrix as depth is non-linear when using perspective projection. At the end of this chapter we'll discuss some of these subtle differences.
    261 </p>
    262   
    263 <p>
    264   You can find the source code for rendering a scene to a depth map <a href="/code_viewer_gh.php?code=src/5.advanced_lighting/3.1.1.shadow_mapping_depth/shadow_mapping_depth.cpp" target="_blank">here</a>.
    265 </p>
    266       
    267 <h2>Rendering shadows</h2>  
    268 <p>
    269   With a properly generated depth map we can start rendering the actual shadows. The code to check if a fragment is in shadow is (quite obviously) executed in the fragment shader, but we do the light-space transformation in the vertex shader:
    270 </p>
    271   
    272 <pre><code>
    273 #version 330 core
    274 layout (location = 0) in vec3 aPos;
    275 layout (location = 1) in vec3 aNormal;
    276 layout (location = 2) in vec2 aTexCoords;
    277 
    278 out VS_OUT {
    279     vec3 FragPos;
    280     vec3 Normal;
    281     vec2 TexCoords;
    282     vec4 FragPosLightSpace;
    283 } vs_out;
    284 
    285 uniform mat4 projection;
    286 uniform mat4 view;
    287 uniform mat4 model;
    288 uniform mat4 lightSpaceMatrix;
    289 
    290 void main()
    291 {    
    292     vs_out.FragPos = vec3(model * vec4(aPos, 1.0));
    293     vs_out.Normal = transpose(inverse(mat3(model))) * aNormal;
    294     vs_out.TexCoords = aTexCoords;
    295     vs_out.FragPosLightSpace = lightSpaceMatrix * vec4(vs_out.FragPos, 1.0);
    296     gl_Position = projection * view * vec4(vs_out.FragPos, 1.0);
    297 }
    298 </code></pre>
    299   
    300 <p>
    301   What is new here is the extra output vector <var>FragPosLightSpace</var>. We take the same <var>lightSpaceMatrix</var> (used to transform vertices to light space in the depth map stage) and transform the world-space vertex position to light space for use in the fragment shader. 
    302 </p>
    303   
    304 <p>
    305   The main fragment shader we'll use to render the scene uses the Blinn-Phong lighting model. Within the fragment shader we then calculate a <var>shadow</var> value that is either <code>1.0</code> when the fragment is in shadow or <code>0.0</code> when not in shadow. The resulting <var>diffuse</var> and <var>specular</var> components are then multiplied by this shadow component. Because shadows are rarely completely dark (due to light scattering) we leave the <var>ambient</var> component out of the shadow multiplications. 
    306 </p>
    307   
    308 <pre><code>
    309 #version 330 core
    310 out vec4 FragColor;
    311 
    312 in VS_OUT {
    313     vec3 FragPos;
    314     vec3 Normal;
    315     vec2 TexCoords;
    316     vec4 FragPosLightSpace;
    317 } fs_in;
    318 
    319 uniform sampler2D diffuseTexture;
    320 uniform sampler2D shadowMap;
    321 
    322 uniform vec3 lightPos;
    323 uniform vec3 viewPos;
    324 
    325 float ShadowCalculation(vec4 fragPosLightSpace)
    326 {
    327     [...]
    328 }
    329 
    330 void main()
    331 {           
    332     vec3 color = texture(diffuseTexture, fs_in.TexCoords).rgb;
    333     vec3 normal = normalize(fs_in.Normal);
    334     vec3 lightColor = vec3(1.0);
    335     // ambient
    336     vec3 ambient = 0.15 * color;
    337     // diffuse
    338     vec3 lightDir = normalize(lightPos - fs_in.FragPos);
    339     float diff = max(dot(lightDir, normal), 0.0);
    340     vec3 diffuse = diff * lightColor;
    341     // specular
    342     vec3 viewDir = normalize(viewPos - fs_in.FragPos);
    343     float spec = 0.0;
    344     vec3 halfwayDir = normalize(lightDir + viewDir);  
    345     spec = pow(max(dot(normal, halfwayDir), 0.0), 64.0);
    346     vec3 specular = spec * lightColor;    
    347     // calculate shadow
    348     float shadow = ShadowCalculation(fs_in.FragPosLightSpace);       
    349     vec3 lighting = (ambient + (1.0 - shadow) * (diffuse + specular)) * color;    
    350     
    351     FragColor = vec4(lighting, 1.0);
    352 }
    353 </code></pre>
    354   
    355 <p>
    356   The fragment shader is largely a copy from what we used in the <a href="https://learnopengl.com/Advanced-Lighting/Advanced-Lighting" target="_blank">advanced lighting</a> chapter, but with an added shadow calculation. We declared a function <fun>ShadowCalculation</fun> that does most of the shadow work. At the end of the fragment shader, we multiply the diffuse and specular contributions by the inverse of the <var>shadow</var> component, i.e. how much the fragment is <em>not</em> in shadow. This fragment shader takes as extra input the light-space fragment position and the depth map generated from the first render pass.
    357 </p>
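<p>
  On the application side this means the second pass needs the depth map bound to a texture unit next to the diffuse texture, plus the light-space matrix and the light/view positions as uniforms. A rough sketch of that setup, using the book's <var>Shader</var> helper functions (here <var>woodTexture</var> and <var>camera</var> stand in for the demo's floor texture and viewer camera), could be:
</p>

<pre><code>
shader.use();
shader.setMat4("projection", projection);
shader.setMat4("view", view);
shader.setMat4("lightSpaceMatrix", lightSpaceMatrix);
shader.setVec3("lightPos", lightPos);
shader.setVec3("viewPos", camera.Position);
// sampler uniforms: diffuse texture on unit 0, shadow map on unit 1
shader.setInt("diffuseTexture", 0);
shader.setInt("shadowMap", 1);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, woodTexture);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, depthMap);
RenderScene(shader);
</code></pre>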
    358 
    359 <p>
    360   The first thing to do to check whether a fragment is in shadow, is transform the light-space fragment position in clip-space to normalized device coordinates. When we output a clip-space vertex position to <var>gl_Position</var> in the vertex shader, OpenGL automatically does a perspective divide, i.e. it transforms clip-space coordinates in the range [<code>-w</code>,<code>w</code>] to [<code>-1</code>,<code>1</code>] by dividing the <code>x</code>, <code>y</code> and <code>z</code> component by the vector's <code>w</code> component. As the clip-space <var>FragPosLightSpace</var> is not passed to the fragment shader through <var>gl_Position</var>, we have to do this perspective divide ourselves:
    361 </p>
    362   
    363 <pre><code>
    364 float ShadowCalculation(vec4 fragPosLightSpace)
    365 {
    366     // perform perspective divide
    367     vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
    368     [...]
    369 }
    370 </code></pre>
    371   
    372 <p>
    373   This returns the fragment's light-space position in the range [<code>-1</code>,<code>1</code>].
    374 </p>
    375 
    376   <note>
    377     When using an orthographic projection matrix the <code>w</code> component of a vertex remains untouched so this step is actually quite meaningless. However, it is necessary when using perspective projection so keeping this line ensures it works with both projection matrices.
    378   </note>
    379   
    380 <p>
    381     Because the depth from the depth map is in the range [<code>0</code>,<code>1</code>] and we also want to use <var>projCoords</var> to sample from the depth map, we transform the NDC coordinates to the range [<code>0</code>,<code>1</code>]:
    382 </p>
    383   
    384 <pre class="cpp"><code>
    385 projCoords = projCoords * 0.5 + 0.5; 
    386 </code></pre>
    387   
    388 <p>
    389   With these projected coordinates we can sample the depth map as the resulting [<code>0</code>,<code>1</code>] coordinates from <var>projCoords</var> directly correspond to the transformed NDC coordinates from the first render pass. This gives us the closest depth from the light's point of view:
    390 </p>
    391   
    392 <pre><code>
    393 float closestDepth = texture(shadowMap, projCoords.xy).r;   
    394 </code></pre>
    395   
    396 <p>
    397   To get the current depth at this fragment we simply retrieve the projected vector's <code>z</code> coordinate which equals the depth of this fragment from the light's perspective.
    398 </p>
    399   
    400 <pre><code>
    401 float currentDepth = projCoords.z;  
    402 </code></pre>
    403   
    404 <p>
    405   The actual comparison is then simply a check whether <var>currentDepth</var> is higher than <var>closestDepth</var> and if so, the fragment is in shadow:
    406 </p>
    407   
    408 <pre><code>
    409 float shadow = currentDepth > closestDepth  ? 1.0 : 0.0;  
    410 </code></pre>
    411   
    412 <p>
    413   The complete <fun>ShadowCalculation</fun> function then becomes:
    414 </p>
    415   
    416 <pre><code>
    417 float ShadowCalculation(vec4 fragPosLightSpace)
    418 {
    419     // perform perspective divide
    420     vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
    421     // transform to [0,1] range
    422     projCoords = projCoords * 0.5 + 0.5;
    423     // get closest depth value from light's perspective (using [0,1] range fragPosLight as coords)
    424     float closestDepth = texture(shadowMap, projCoords.xy).r; 
    425     // get depth of current fragment from light's perspective
    426     float currentDepth = projCoords.z;
    427     // check whether current frag pos is in shadow
    428     float shadow = currentDepth > closestDepth  ? 1.0 : 0.0;
    429 
    430     return shadow;
    431 }  
    432 </code></pre>
    433   
    434 <p>
    435   Activating this shader, binding the proper textures, and activating the default projection and view matrices in the second render pass should give you a result similar to the image below:
    436 </p>
    437   
    438   <img src="/img/advanced-lighting/shadow_mapping_shadows.png" class="clean" alt="Shadow mapped images, without improvements."/>
    439     
    440 <p>
    441   If you did things right you should indeed see (albeit with quite a few artifacts) shadows on the floor and the cubes. You can find the source code of the demo application <a href="/code_viewer_gh.php?code=src/5.advanced_lighting/3.1.2.shadow_mapping_base/shadow_mapping_base.cpp" target="_blank">here</a>.
    442 </p>    
    443   
    444 <h2>Improving shadow maps</h2>
    445 <p>
    446   We managed to get the basics of shadow mapping working, but as you can see, we're not there yet due to several (clearly visible) artifacts related to shadow mapping that we need to fix. We'll focus on fixing these artifacts in the next sections.
    447 </p> 
    448     
    449 <h3>Shadow acne</h3>
    450 <p>
    451     It is obvious something is wrong from the previous image. A closer zoom shows us a very obvious Moiré-like pattern:
    452 </p>
    453     
    454     <img src="/img/advanced-lighting/shadow_mapping_acne.png" alt="Image of shadow acne as Moiré pattern with shadow mapping"/>
    455       
    456 <p>
    457   We can see a large part of the floor quad rendered with obvious black lines in an alternating fashion. This shadow mapping artifact is called <def>shadow acne</def> and can be explained by the following image:
    458 </p>
    459       
    460 <img src="/img/advanced-lighting/shadow_mapping_acne_diagram.png" class="clean" alt="Shadow acne explained"/>
    461         
    462 <p>
    463   Because the shadow map is limited by resolution, multiple fragments can sample the same value from the depth map when they're relatively far away from the light source. The image shows the floor where each yellow tilted panel represents a single texel of the depth map. As you can see, several fragments sample the same depth sample. 
    464   </p>
    465   
    466 <p>
    467   While this is generally okay, it becomes an issue when the light source looks at an angle towards the surface as in that case the depth map is also rendered from an angle. Several fragments then access the same tilted depth texel while some are above and some below the floor; we get a shadow discrepancy. Because of this, some fragments are considered to be in shadow and some are not, giving the striped pattern from the image.
    468 </p>
    469   
    470 <p>
    471   We can solve this issue with a small hack called a <def>shadow bias</def> where we simply offset the depth of the surface (or the shadow map) by a small bias amount such that the fragments are not incorrectly considered above the surface. 
    472 </p>
    473   
    474  <img src="/img/advanced-lighting/shadow_mapping_acne_bias.png" class="clean" alt="Shadow mapping, with shadow acne fixed using shadow bias."/>
    475    
    476 <p>
    477   With the bias applied, all the samples get a depth smaller than the surface's depth and thus the entire surface is correctly lit without any shadows. We can implement such a bias as follows:
    478 </p>
    479   
    480 <pre><code>
    481 float bias = 0.005;
    482 float shadow = currentDepth - bias > closestDepth  ? 1.0 : 0.0;  
    483 </code></pre>
    484   
    485 <p>
    486   A shadow bias of <code>0.005</code> solves the issues of our scene to a large extent, but you can imagine the bias value is highly dependent on the angle between the light source and the surface. If the surface is at a steep angle to the light source, the shadows may still display shadow acne. A more solid approach would be to change the amount of bias based on the surface angle towards the light: something we can solve with the dot product:
    487 </p>
    488   
    489 <pre><code>
    490 float bias = max(0.05 * (1.0 - dot(normal, lightDir)), 0.005);  
    491 </code></pre>
    492   
    493 <p>
    494   Here we have a maximum bias of <code>0.05</code> and a minimum of <code>0.005</code> based on the surface's normal and light direction. This way, surfaces like the floor that face the light almost head-on get a small bias, while surfaces like the cube's side-faces, which are at a grazing angle to the light, get a much larger bias. The following image shows the same scene but now with a shadow bias:
    495 </p>
    496   
    497    
    498   <img src="/img/advanced-lighting/shadow_mapping_with_bias.png" class="clean" alt="Shadow mapped images, with (sloped) shadow bias applied."/>
    499   
    500 <p>
    501   Choosing the correct bias value(s) requires some tweaking as this will be different for each scene, but most of the time it's simply a matter of slowly incrementing the bias until all acne is removed.
    502 </p>
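<p>
  One convenient way to do that tweaking is to expose the bias limits as uniforms so you can adjust them at runtime without recompiling the shader. A small sketch (the uniform names are made up for illustration):
</p>

<pre><code>
// fragment shader: tweakable bias limits, set from the application
uniform float minBias; // e.g. 0.005
uniform float maxBias; // e.g. 0.05

// inside ShadowCalculation:
float bias = max(maxBias * (1.0 - dot(normal, lightDir)), minBias);
</code></pre>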
    503   
    504 <h3>Peter panning</h3>
    505 <p>
    506   A disadvantage of using a shadow bias is that you're applying an offset to the actual depth of objects. As a result, the bias may become large enough to see a visible offset of shadows compared to the actual object locations as you can see below (with an exaggerated bias value):
    507 </p>
    508   
    509   <img src="/img/advanced-lighting/shadow_mapping_peter_panning.png" class="clean" alt="Peter panning with shadow mapping implementation"/>
    510     
    511 <p>
    512   This shadow artifact is called <def>peter panning</def> since objects seem slightly <em>detached</em> from their shadows. We can use a little trick to solve most of the peter panning issue by using front face culling when rendering the depth map. You may remember from the <a href="https://learnopengl.com/Advanced-OpenGL/Face-Culling" target="_blank">face culling</a> chapter that OpenGL by default culls back-faces. By telling OpenGL we want to cull front faces during the shadow map stage we're switching that order around. 
    513 </p>
    514     
    515 <p>
    516   Because we only need depth values for the depth map it shouldn't matter for solid objects whether we take the depth of their front faces or their back faces. Using their back face depths doesn't give wrong results as it doesn't matter if we have shadows inside objects; we can't see there anyway.
    517 </p>
    518     
    519     <img src="/img/advanced-lighting/shadow_mapping_culling.png" class="clean" alt="Shadow mapping showing how front face culling helps solve peter panning."/>
    520     
    521 <p>
    522   To fix peter panning we cull all front faces during the shadow map generation. Note that you need to enable <var>GL_CULL_FACE</var> first.
    523 </p>
    524     
    525 <pre><code>
    526 <function id='74'>glCullFace</function>(GL_FRONT);
    527 RenderSceneToDepthMap();
    528 <function id='74'>glCullFace</function>(GL_BACK); // don't forget to reset original culling face
    529 </code></pre>
    530     
    531 <p>
    532   This effectively solves the peter panning issues, but <strong>only for solid</strong> objects that actually have an inside without openings. In our scene for example, this works perfectly fine on the cubes. However, on the floor it won't work as well as culling the front face completely removes the floor from the equation. The floor is a single plane and would thus be completely culled. If one wants to solve peter panning with this trick, care has to be taken to only cull the front faces of objects where it makes sense. 
    533 </p>    
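<p>
  In practice this means toggling culling per draw call during the depth pass. A rough sketch of that idea (the two render helpers are hypothetical; in the demo you would simply split <fun>RenderScene</fun> accordingly) could look like this:
</p>

<pre><code>
// depth map pass: cull front faces only for closed meshes like the cubes
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);
RenderCubesToDepthMap();
glDisable(GL_CULL_FACE); // the floor is a single plane; culling would remove it entirely
RenderFloorToDepthMap();
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);     // restore the default culling order for the rest of the frame
</code></pre>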
    534     
    535 <p>
    536   Another consideration is that objects that are close to the shadow receiver (like the distant cube) may still give incorrect results. However, with normal bias values you can generally avoid peter panning. 
    537 </p>
    538       
    539 <h3>Over sampling</h3>
    540 <p>
    541   Another visual discrepancy which you may like or dislike is that regions outside the light's visible frustum are considered to be in shadow while they're (usually) not. This happens because projected coordinates outside the light's frustum end up outside the [<code>0</code>,<code>1</code>] range and will thus sample the depth texture outside its default range. Based on the texture's wrapping method, we will get incorrect depth results not based on the real depth values from the light source.
    542 </p>
    543       
    544        <img src="/img/advanced-lighting/shadow_mapping_outside_frustum.png" class="clean" alt="Shadow mapping with edges of depth map visible, texture wrapping"/>
    545 <p>
    546   You can see in the image that there is some sort of imaginary region of light, and a large part outside this area is in shadow; this area represents the size of the depth map projected onto the floor. The reason this happens is that we earlier set the depth map's wrapping options to <var>GL_REPEAT</var>.
    547 </p>
    548          
    549 <p>
    550   What we'd rather have is that all coordinates outside the depth map's range have a depth of <code>1.0</code> which as a result means these coordinates will never be in shadow (as no object will have a depth larger than <code>1.0</code>). We can do this by configuring a texture border color and setting the depth map's texture wrap options to <var>GL_CLAMP_TO_BORDER</var>:
    551 </p>
    552       
    553 <pre><code>
    554 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
    555 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
    556 float borderColor[] = { 1.0f, 1.0f, 1.0f, 1.0f };
    557 <function id='15'>glTexParameter</function>fv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, borderColor);  
    558 </code></pre>
    559       
    560 <p>
    561   Now whenever we sample outside the depth map's [<code>0</code>,<code>1</code>] coordinate range, the <fun>texture</fun> function will always return a depth of <code>1.0</code>, producing a <var>shadow</var> value of <code>0.0</code>. The result now looks more plausible:
    562 </p>
    563       
    564       <img src="/img/advanced-lighting/shadow_mapping_clamp_edge.png" class="clean" alt="Shadow mapping with texture wrapping set to clamp to border color"/>
    565         
    566 <p>
    567   There seems to still be one part showing a dark region. Those are the coordinates outside the far plane of the light's orthographic frustum. You can see that this dark region always occurs at the far end of the light source's frustum by looking at the shadow directions.
    568 </p>
    569         
    570 <p>
    571    A light-space projected fragment coordinate is further than the light's far plane when its <code>z</code> coordinate is larger than <code>1.0</code>. In that case the <var>GL_CLAMP_TO_BORDER</var> wrapping method doesn't work anymore as we compare the coordinate's <code>z</code> component with the depth map values; this always returns true for <code>z</code> larger than <code>1.0</code>.
    572 </p>
    573         
    574 <p>
    575   The fix for this is also relatively easy as we simply force the <var>shadow</var> value to <code>0.0</code> whenever the projected vector's <code>z</code> coordinate is larger than <code>1.0</code>:
    576 </p>
    577  
    578 <pre><code>
    579 float ShadowCalculation(vec4 fragPosLightSpace)
    580 {
    581     [...]
    582     if(projCoords.z > 1.0)
    583         shadow = 0.0;
    584     
    585     return shadow;
    586 }  
    587 </code></pre>
    588        
    589 <p>
    590   Checking the far plane and clamping the depth map to a manually specified border color solves the over-sampling of the depth map. This finally gives us the result we are looking for:
    591 </p>
    592         
    593 <img src="/img/advanced-lighting/shadow_mapping_over_sampling_fixed.png" class="clean" alt="Shadow mapping with over sampling fixed with border clamp to color and far plane fix."/>               
    594         
    595 <p>
    596   The result of all this does mean that we only have shadows where the projected fragment coordinates sit inside the depth map range so anything outside the light frustum will have no visible shadows. As games usually make sure this only occurs in the distance it is a much more plausible effect than the obvious black regions we had before.
    597 </p>      
    598   
    599 <h2>PCF</h2>
    600 <p>
    601   The shadows right now are a nice addition to the scenery, but it's still not exactly what we want. If you were to zoom in on the shadows the resolution dependency of shadow mapping quickly becomes apparent.
    602 </p>
    603   
    604   <img src="/img/advanced-lighting/shadow_mapping_zoom.png" alt="Zoomed-in view of shadows with the shadow mapping technique shows jagged edges."/>
    605     
    606 <p>
    607   Because the depth map has a fixed resolution, the depth frequently spans more than one fragment per texel. As a result, multiple fragments sample the same depth value from the depth map and come to the same shadow conclusions, which produces these jagged blocky edges.
    608 </p>      
    609     
    610 <p>
    611   You can reduce these blocky shadows by increasing the depth map resolution, or by trying to fit the light frustum as closely to the scene as possible.
    612 </p>
    613     
    614 <p>
    615   Another (partial) solution to these jagged edges is called PCF, or <def>percentage-closer filtering</def>, which is a term that hosts many different filtering functions that produce <em>softer</em> shadows, making them appear less blocky or hard. The idea is to sample more than once from the depth map, each time with slightly different texture coordinates. For each individual sample we check whether it is in shadow or not. All the sub-results are then combined and averaged and we get a nice soft looking shadow.
    616 </p>
    617     
    618 <p>
    619   One simple implementation of PCF is to simply sample the surrounding texels of the depth map and average the results:
    620 </p>
    621     
    622 <pre><code>
    623 float shadow = 0.0;
    624 vec2 texelSize = 1.0 / textureSize(shadowMap, 0);
    625 for(int x = -1; x &lt;= 1; ++x)
    626 {
    627     for(int y = -1; y &lt;= 1; ++y)
    628     {
    629         float pcfDepth = texture(shadowMap, projCoords.xy + vec2(x, y) * texelSize).r; 
    630         shadow += currentDepth - bias > pcfDepth ? 1.0 : 0.0;        
    631     }    
    632 }
    633 shadow /= 9.0;
    634 </code></pre>
    635     
    636 <p>
    637   Here <fun>textureSize</fun> returns an <code>ivec2</code> of the width and height of the given sampler texture at mipmap level <code>0</code>. Dividing 1 by this gives us the size of a single texel that we use to offset the texture coordinates, making sure each new sample samples a different depth value. Here we sample 9 values around the projected coordinate's <code>x</code> and <code>y</code> value, test for shadow occlusion, and finally average the results by the total number of samples taken.
    638 </p>
    639     
    640 <p>
    641   By using more samples and/or varying the <var>texelSize</var> variable you can increase the quality of the soft shadows. Below you can see the shadows with simple PCF applied:
    642 </p>
    643     
    644     <img src="/img/advanced-lighting/shadow_mapping_soft_shadows.png" alt="Soft shadows with PCF using shadow mapping"/>
    645       
    646 <p>
    647   From a distance the shadows look a lot better and less hard. If you zoom in you can still see the resolution artifacts of shadow mapping, but in general this gives good results for most applications. 
    648 </p>
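<p>
  If you want the penumbra to be even wider you can, for example, enlarge the PCF kernel. Below is a sketch of a generalized version of the loop above, where the <var>halfKernel</var> variable is made up purely for illustration:
</p>

<pre><code>
// larger PCF kernel: (2 * halfKernel + 1)^2 samples instead of 9
float shadow = 0.0;
int halfKernel = 2;
vec2 texelSize = 1.0 / textureSize(shadowMap, 0);
for(int x = -halfKernel; x &lt;= halfKernel; ++x)
{
    for(int y = -halfKernel; y &lt;= halfKernel; ++y)
    {
        float pcfDepth = texture(shadowMap, projCoords.xy + vec2(x, y) * texelSize).r; 
        shadow += currentDepth - bias > pcfDepth ? 1.0 : 0.0;        
    }    
}
shadow /= float((2 * halfKernel + 1) * (2 * halfKernel + 1));
</code></pre>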
    649       
    650 <p>
    651   You can find the complete source code of the example <a href="/code_viewer_gh.php?code=src/5.advanced_lighting/3.1.3.shadow_mapping/shadow_mapping.cpp" target="_blank">here</a>.
    652 </p>
    653       
    654 <p>
    655   There is actually much more to PCF and quite a few techniques to considerably improve the quality of soft shadows, but for the sake of this chapter's length we'll leave that for a later discussion.
    656 </p>
    657     
    658 <h2>Orthographic vs perspective</h2>
    659 <p>
    660   There is a difference between rendering the depth map with an orthographic or a perspective projection matrix. An orthographic projection matrix does not deform the scene with perspective so all view/light rays are parallel. This makes it a great projection matrix for directional lights. A perspective projection matrix however does deform all vertices based on perspective which gives different results. The following image shows the different shadow regions of both projection methods:
    661 </p>
    662       
    663 <img src="/img/advanced-lighting/shadow_mapping_projection.png" class="clean" alt="Shadow mapping difference between orthographic and perspective projection."/>
    664   
    665 <p>
    666   Perspective projections make most sense for light sources that have actual locations, unlike directional lights. Perspective projections are most often used with spotlights and point lights, while orthographic projections are used for directional lights.
    667 </p>
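<p>
  For a spotlight-style source you would therefore swap the orthographic matrix for a perspective one. A rough sketch of such a light-space matrix (the 45 degree field of view and <code>25.0</code> far plane are example values, not taken from this chapter's demo) could be:
</p>

<pre><code>
// example perspective projection for a spotlight-style shadow map
float near_plane = 1.0f, far_plane = 25.0f;
glm::mat4 lightProjection = glm::perspective(glm::radians(45.0f), 
                            (float)SHADOW_WIDTH / (float)SHADOW_HEIGHT, near_plane, far_plane);
glm::mat4 lightView = glm::lookAt(lightPos, glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
glm::mat4 lightSpaceMatrix = lightProjection * lightView;
</code></pre>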
    668   
    669 <p>
    670   Another subtle difference with using a perspective projection matrix is that visualizing the depth buffer will often give an almost completely white result. This happens because with perspective projection the depth is transformed to non-linear depth values with most of its noticeable range close to the near plane. To be able to properly view the depth values as we did with the orthographic projection you first want to transform the non-linear depth values to linear as we discussed in the <a href="https://learnopengl.com/Advanced-OpenGL/Depth-testing" target="_blank">depth testing</a> chapter:
    671 </p>
    672   
    673 <pre><code>
    674 #version 330 core
    675 out vec4 FragColor;
    676   
    677 in vec2 TexCoords;
    678 
    679 uniform sampler2D depthMap;
    680 uniform float near_plane;
    681 uniform float far_plane;
    682 
    683 float LinearizeDepth(float depth)
    684 {
    685     float z = depth * 2.0 - 1.0; // Back to NDC 
    686     return (2.0 * near_plane * far_plane) / (far_plane + near_plane - z * (far_plane - near_plane));
    687 }
    688 
    689 void main()
    690 {             
    691     float depthValue = texture(depthMap, TexCoords).r;
    692     FragColor = vec4(vec3(LinearizeDepth(depthValue) / far_plane), 1.0); // perspective
    693     // FragColor = vec4(vec3(depthValue), 1.0); // orthographic
    694 }  
    695 </code></pre>
    696   
    697 <p>
    698   This shows depth values similar to what we've seen with orthographic projection. Note that this is only useful for debugging; the depth checks remain the same with orthographic or perspective projection matrices as the relative depths do not change.
    699 </p>
    700 
    701 <h2>Additional resources</h2>
    702   <ul>
    703   <li><a href="http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-16-shadow-mapping/" target="_blank">Tutorial 16 : Shadow mapping</a>: similar shadow mapping tutorial by opengl-tutorial.org with a few extra notes.</li>
    704     <li><a href="http://ogldev.atspace.co.uk/www/tutorial23/tutorial23.html" target="_blank">Shadow Mapping - Part 1</a>: another shadow mapping tutorial by ogldev.</li>
    705     <li><a href="https://www.youtube.com/watch?v=EsccgeUpdsM" target="_blank">How Shadow Mapping Works</a>: a 3-part YouTube tutorial by TheBennyBox on shadow mapping and its implementation.</li>
    706     <li><a href="https://msdn.microsoft.com/en-us/library/windows/desktop/ee416324%28v=vs.85%29.aspx" target="_blank">Common Techniques to Improve Shadow Depth Maps</a>: a great article by Microsoft listing a large number of techniques to improve the quality of shadow maps.</li>
    707 </ul>       
    708 