LearnOpenGL

Translation in progress of learnopengl.com.
git clone https://git.mtkn.jp/LearnOpenGL

Deferred-Shading.html (46430B)


      1 <!DOCTYPE html>
      2 <html lang="ja"> 
      3 <head>
      4     <meta charset="utf-8"/>
      5     <title>LearnOpenGL</title>
      6     <link rel="shortcut icon" type="image/ico" href="/favicon.ico"  />
      7 	<link rel="stylesheet" href="../static/style.css" />
      8 	<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js"> </script>
      9 	<script src="/static/functions.js"></script>
     10 </head>
     11 <body>
     12 	<nav>
     13 <ol>
     14 	<li id="Introduction">
     15 		<a href="https://learnopengl.com/Introduction">Introduction</a>
     16 	</li>
     17 	<li id="Getting-started">
     18 		<span class="closed">Getting started</span>
     19 		<ol>
     20 			<li id="Getting-started/OpenGL">
     21 				<a href="https://learnopengl.com/Getting-started/OpenGL">OpenGL </a>
     22 			</li>
     23 			<li id="Getting-started/Creating-a-window">
     24 				<a href="https://learnopengl.com/Getting-started/Creating-a-window">Creating a window</a>
     25 			</li>
     26 			<li id="Getting-started/Hello-Window">
     27 				<a href="https://learnopengl.com/Getting-started/Hello-Window">Hello Window</a>
     28 			</li>
     29 			<li id="Getting-started/Hello-Triangle">
     30 				<a href="https://learnopengl.com/Getting-started/Hello-Triangle">Hello Triangle</a>
     31 			</li>
     32 			<li id="Getting-started/Shaders">
     33 				<a href="https://learnopengl.com/Getting-started/Shaders">Shaders</a>
     34 			</li>
     35 			<li id="Getting-started/Textures">
     36 				<a href="https://learnopengl.com/Getting-started/Textures">Textures</a>
     37 			</li>
     38 			<li id="Getting-started/Transformations">
     39 				<a href="https://learnopengl.com/Getting-started/Transformations">Transformations</a>
     40 			</li>
     41 			<li id="Getting-started/Coordinate-Systems">
     42 				<a href="https://learnopengl.com/Getting-started/Coordinate-Systems">Coordinate Systems</a>
     43 			</li>
     44 			<li id="Getting-started/Camera">
     45 				<a href="https://learnopengl.com/Getting-started/Camera">Camera</a>
     46 			</li>
     47 			<li id="Getting-started/Review">
     48 				<a href="https://learnopengl.com/Getting-started/Review">Review</a>
     49 			</li>
     50 		</ol>
     51 	</li>
     52 	<li id="Lighting">
     53 		<span class="closed">Lighting </span>
     54 		<ol>
     55 			<li id="Lighting/Colors">
     56 				<a href="https://learnopengl.com/Lighting/Colors">Colors </a>
     57 			</li>
     58 			<li id="Lighting/Basic-Lighting">
     59 				<a href="https://learnopengl.com/Lighting/Basic-Lighting">Basic Lighting </a>
     60 			</li>
     61 			<li id="Lighting/Materials">
     62 				<a href="https://learnopengl.com/Lighting/Materials">Materials </a>
     63 			</li>
     64 			<li id="Lighting/Lighting-maps">
     65 				<a href="https://learnopengl.com/Lighting/Lighting-maps">Lighting maps </a>
     66 			</li>
     67 			<li id="Lighting/Light-casters">
     68 				<a href="https://learnopengl.com/Lighting/Light-casters">Light casters </a>
     69 			</li>
     70 			<li id="Lighting/Multiple-lights">
     71 				<a href="https://learnopengl.com/Lighting/Multiple-lights">Multiple lights </a>
     72 			</li>
     73 			<li id="Lighting/Review">
     74 				<a href="https://learnopengl.com/Lighting/Review">Review </a>
     75 			</li>
     76 		</ol>
     77 	</li>
     78 	<li id="Model-Loading">
     79 		<span class="closed">Model Loading </span>
     80 		<ol>
     81 			<li id="Model-Loading/Assimp">
     82 				<a href="https://learnopengl.com/Model-Loading/Assimp">Assimp </a>
     83 			</li>
     84 			<li id="Model-Loading/Mesh">
     85 				<a href="https://learnopengl.com/Model-Loading/Mesh">Mesh </a>
     86 			</li>
     87 			<li id="Model-Loading/Model">
     88 				<a href="https://learnopengl.com/Model-Loading/Model">Model </a>
     89 			</li>
     90 		</ol>
     91 	</li>
     92 	<li id="Advanced-OpenGL">
     93 		<span class="closed">Advanced OpenGL </span>
     94 		<ol>
     95 			<li id="Advanced-OpenGL/Depth-testing">
     96 				<a href="https://learnopengl.com/Advanced-OpenGL/Depth-testing">Depth testing </a>
     97 			</li>
     98 			<li id="Advanced-OpenGL/Stencil-testing">
     99 				<a href="https://learnopengl.com/Advanced-OpenGL/Stencil-testing">Stencil testing </a>
    100 			</li>
    101 			<li id="Advanced-OpenGL/Blending">
    102 				<a href="https://learnopengl.com/Advanced-OpenGL/Blending">Blending </a>
    103 			</li>
    104 			<li id="Advanced-OpenGL/Face-culling">
    105 				<a href="https://learnopengl.com/Advanced-OpenGL/Face-culling">Face culling </a>
    106 			</li>
    107 			<li id="Advanced-OpenGL/Framebuffers">
    108 				<a href="https://learnopengl.com/Advanced-OpenGL/Framebuffers">Framebuffers </a>
    109 			</li>
    110 			<li id="Advanced-OpenGL/Cubemaps">
    111 				<a href="https://learnopengl.com/Advanced-OpenGL/Cubemaps">Cubemaps </a>
    112 			</li>
    113 			<li id="Advanced-OpenGL/Advanced-Data">
    114 				<a href="https://learnopengl.com/Advanced-OpenGL/Advanced-Data">Advanced Data </a>
    115 			</li>
    116 			<li id="Advanced-OpenGL/Advanced-GLSL">
    117 				<a href="https://learnopengl.com/Advanced-OpenGL/Advanced-GLSL">Advanced GLSL </a>
    118 			</li>
    119 			<li id="Advanced-OpenGL/Geometry-Shader">
    120 				<a href="https://learnopengl.com/Advanced-OpenGL/Geometry-Shader">Geometry Shader </a>
    121 			</li>
    122 			<li id="Advanced-OpenGL/Instancing">
    123 				<a href="https://learnopengl.com/Advanced-OpenGL/Instancing">Instancing </a>
    124 			</li>
    125 			<li id="Advanced-OpenGL/Anti-Aliasing">
    126 				<a href="https://learnopengl.com/Advanced-OpenGL/Anti-Aliasing">Anti Aliasing </a>
    127 			</li>
    128 		</ol>
    129 	</li>
    130 	<li id="Advanced-Lighting">
    131 		<span class="closed">Advanced Lighting </span>
    132 		<ol>
    133 			<li id="Advanced-Lighting/Advanced-Lighting">
    134 				<a href="https://learnopengl.com/Advanced-Lighting/Advanced-Lighting">Advanced Lighting </a>
    135 			</li>
    136 			<li id="Advanced-Lighting/Gamma-Correction">
    137 				<a href="https://learnopengl.com/Advanced-Lighting/Gamma-Correction">Gamma Correction </a>
    138 			</li>
    139 			<li id="Advanced-Lighting/Shadows">
    140 				<span class="closed">Shadows </span>
    141 				<ol>
    142 					<li id="Advanced-Lighting/Shadows/Shadow-Mapping">
    143 						<a href="https://learnopengl.com/Advanced-Lighting/Shadows/Shadow-Mapping">Shadow Mapping </a>
    144 					</li>
    145 					<li id="Advanced-Lighting/Shadows/Point-Shadows">
    146 						<a href="https://learnopengl.com/Advanced-Lighting/Shadows/Point-Shadows">Point Shadows </a>
    147 					</li>
    148 				</ol>
    149 			</li>
    150 			<li id="Advanced-Lighting/Normal-Mapping">
    151 				<a href="https://learnopengl.com/Advanced-Lighting/Normal-Mapping">Normal Mapping </a>
    152 			</li>
    153 			<li id="Advanced-Lighting/Parallax-Mapping">
    154 				<a href="https://learnopengl.com/Advanced-Lighting/Parallax-Mapping">Parallax Mapping </a>
    155 			</li>
    156 			<li id="Advanced-Lighting/HDR">
    157 				<a href="https://learnopengl.com/Advanced-Lighting/HDR">HDR </a>
    158 			</li>
    159 			<li id="Advanced-Lighting/Bloom">
    160 				<a href="https://learnopengl.com/Advanced-Lighting/Bloom">Bloom </a>
    161 			</li>
    162 			<li id="Advanced-Lighting/Deferred-Shading">
    163 				<a href="https://learnopengl.com/Advanced-Lighting/Deferred-Shading">Deferred Shading </a>
    164 			</li>
    165 			<li id="Advanced-Lighting/SSAO">
    166 				<a href="https://learnopengl.com/Advanced-Lighting/SSAO">SSAO </a>
    167 			</li>
    168 		</ol>
    169 	</li>
    170 	<li id="PBR">
    171 		<span class="closed">PBR </span>
    172 		<ol>
    173 			<li id="PBR/Theory">
    174 				<a href="https://learnopengl.com/PBR/Theory">Theory </a>
    175 			</li>
    176 			<li id="PBR/Lighting">
    177 				<a href="https://learnopengl.com/PBR/Lighting">Lighting </a>
    178 			</li>
    179 			<li id="PBR/IBL">
    180 				<span class="closed">IBL </span>
    181 				<ol>
    182 					<li id="PBR/IBL/Diffuse-irradiance">
    183 						<a href="https://learnopengl.com/PBR/IBL/Diffuse-irradiance">Diffuse irradiance </a>
    184 					</li>
    185 					<li id="PBR/IBL/Specular-IBL">
    186 						<a href="https://learnopengl.com/PBR/IBL/Specular-IBL">Specular IBL </a>
    187 					</li>
    188 				</ol>
    189 			</li>
    190 		</ol>
    191 	</li>
    192 	<li id="In-Practice">
    193 		<span class="closed">In Practice </span>
    194 		<ol>
    195 			<li id="In-Practice/Debugging">
    196 				<a href="https://learnopengl.com/In-Practice/Debugging">Debugging </a>
    197 			</li>
    198 			<li id="In-Practice/Text-Rendering">
    199 				<a href="https://learnopengl.com/In-Practice/Text-Rendering">Text Rendering </a>
    200 			</li>
    201 			<li id="In-Practice/2D-Game">
    202 				<span class="closed">2D Game </span>
    203 				<ol>
    204 					<li id="In-Practice/2D-Game/Breakout">
    205 						<a href="https://learnopengl.com/In-Practice/2D-Game/Breakout">Breakout </a>
    206 					</li>
    207 					<li id="In-Practice/2D-Game/Setting-up">
    208 						<a href="https://learnopengl.com/In-Practice/2D-Game/Setting-up">Setting up </a>
    209 					</li>
    210 					<li id="In-Practice/2D-Game/Rendering-Sprites">
    211 						<a href="https://learnopengl.com/In-Practice/2D-Game/Rendering-Sprites">Rendering Sprites </a>
    212 					</li>
    213 					<li id="In-Practice/2D-Game/Levels">
    214 						<a href="https://learnopengl.com/In-Practice/2D-Game/Levels">Levels </a>
    215 					</li>
    216 					<li id="In-Practice/2D-Game/Collisions">
    217 						<span class="closed">Collisions </span>
    218 						<ol>
    219 							<li id="In-Practice/2D-Game/Collisions/Ball">
    220 								<a href="https://learnopengl.com/In-Practice/2D-Game/Collisions/Ball">Ball </a>
    221 							</li>
    222 							<li id="In-Practice/2D-Game/Collisions/Collision-detection">
    223 								<a href="https://learnopengl.com/In-Practice/2D-Game/Collisions/Collision-detection">Collision detection </a>
    224 							</li>
    225 							<li id="In-Practice/2D-Game/Collisions/Collision-resolution">
    226 								<a href="https://learnopengl.com/In-Practice/2D-Game/Collisions/Collision-resolution">Collision resolution </a>
    227 							</li>
    228 						</ol>
    229 					</li>
    230 					<li id="In-Practice/2D-Game/Particles">
    231 						<a href="https://learnopengl.com/In-Practice/2D-Game/Particles">Particles </a>
    232 					</li>
    233 					<li id="In-Practice/2D-Game/Postprocessing">
    234 						<a href="https://learnopengl.com/In-Practice/2D-Game/Postprocessing">Postprocessing </a>
    235 					</li>
    236 					<li id="In-Practice/2D-Game/Powerups">
    237 						<a href="https://learnopengl.com/In-Practice/2D-Game/Powerups">Powerups </a>
    238 					</li>
    239 					<li id="In-Practice/2D-Game/Audio">
    240 						<a href="https://learnopengl.com/In-Practice/2D-Game/Audio">Audio </a>
    241 					</li>
    242 					<li id="In-Practice/2D-Game/Render-text">
    243 						<a href="https://learnopengl.com/In-Practice/2D-Game/Render-text">Render text </a>
    244 					</li>
    245 					<li id="In-Practice/2D-Game/Final-thoughts">
    246 						<a href="https://learnopengl.com/In-Practice/2D-Game/Final-thoughts">Final thoughts </a>
    247 					</li>
    248 				</ol>
    249 			</li>
    250 		</ol>
    251 	</li>
    252 	<li id="Guest-Articles">
    253 		<span class="closed">Guest Articles </span>
    254 		<ol>
    255 			<li id="Guest-Articles/How-to-publish">
    256 				<a href="https://learnopengl.com/Guest-Articles/How-to-publish">How to publish </a>
    257 			</li>
    258 			<li id="Guest-Articles/2020">
    259 				<span class="closed">2020 </span>
    260 				<ol>
    261 					<li id="Guest-Articles/2020/OIT">
    262 						<span class="closed">OIT </span>
    263 						<ol>
    264 							<li id="Guest-Articles/2020/OIT/Introduction">
    265 								<a href="https://learnopengl.com/Guest-Articles/2020/OIT/Introduction">Introduction </a>
    266 							</li>
    267 							<li id="Guest-Articles/2020/OIT/Weighted-Blended">
    268 								<a href="https://learnopengl.com/Guest-Articles/2020/OIT/Weighted-Blended">Weighted Blended </a>
    269 							</li>
    270 						</ol>
    271 					</li>
    272 					<li id="Guest-Articles/2020/Skeletal-Animation">
    273 						<a href="https://learnopengl.com/Guest-Articles/2020/Skeletal-Animation">Skeletal Animation </a>
    274 					</li>
    275 				</ol>
    276 			</li>
    277 			<li id="Guest-Articles/2021">
    278 				<span class="closed">2021 </span>
    279 				<ol>
    280 					<li id="Guest-Articles/2021/CSM">
    281 						<a href="https://learnopengl.com/Guest-Articles/2021/CSM">CSM </a>
    282 					</li>
    283 					<li id="Guest-Articles/2021/Scene">
    284 						<span class="closed">Scene </span>
    285 						<ol>
    286 							<li id="Guest-Articles/2021/Scene/Scene-Graph">
    287 								<a href="https://learnopengl.com/Guest-Articles/2021/Scene/Scene-Graph">Scene Graph </a>
    288 							</li>
    289 							<li id="Guest-Articles/2021/Scene/Frustum-Culling">
    290 								<a href="https://learnopengl.com/Guest-Articles/2021/Scene/Frustum-Culling">Frustum Culling </a>
    291 							</li>
    292 						</ol>
    293 					</li>
    294 					<li id="Guest-Articles/2021/Tessellation">
    295 						<span class="closed">Tessellation </span>
    296 						<ol>
    297 							<li id="Guest-Articles/2021/Tessellation/Height-map">
    298 								<a href="https://learnopengl.com/Guest-Articles/2021/Tessellation/Height-map">Height map </a>
    299 							</li>
    300 						</ol>
    301 					</li>
    302 				</ol>
    303 			</li>
    304 		</ol>
    305 	</li>
    306 	<li id="Code-repository">
    307 		<a href="https://learnopengl.com/Code-repository">Code repository </a>
    308 	</li>
    309 	<li id="Translations">
    310 		<a href="https://learnopengl.com/Translations">Translations </a>
    311 	</li>
    312 	<li id="About">
    313 		<a href="https://learnopengl.com/About">About </a>
    314 	</li>
    315 </ol>
    316 	</nav>
    317 	<main>
    318     <h1 id="content-title">Deferred Shading</h1>
    319 <h1 id="content-url" style='display:none;'>Advanced-Lighting/Deferred-Shading</h1>
    320 <p>
    321   The way we did lighting so far is called <def>forward rendering</def> or <def>forward shading</def>: a straightforward approach where we render an object and light it according to all light sources in the scene, repeating this for every object individually. While quite easy to understand and implement, it is also quite heavy on performance as each rendered object has to iterate over each light source for every rendered fragment, which is a lot! Forward rendering also tends to waste a lot of fragment shader runs in scenes with a high depth complexity (multiple objects covering the same screen pixel) as fragment shader outputs are overwritten.
    322 </p>
    323 
    324 <p>
    325   <def>Deferred shading</def> or <def>deferred rendering</def> aims to overcome these issues by drastically changing the way we render objects. This gives us several new options to significantly optimize scenes with large numbers of lights, allowing us to render hundreds (or even thousands) of lights with an acceptable framerate. The following image is a scene with 1847 point lights rendered with deferred shading (image courtesy of Hannes Nevalainen); something that wouldn't be possible with forward rendering.
    326 </p>
    327 
    328 <img src="/img/advanced-lighting/deferred_example.png" alt="Example of the power of deferred shading in OpenGL as we can easily render 1000s lights with an acceptable framerate"/>
    329   
    330 <p>
    331   Deferred shading is based on the idea that we <em>defer</em> or <em>postpone</em> most of the heavy rendering (like lighting) to a later stage. Deferred shading consists of two passes: in the first pass, called the <def>geometry pass</def>, we render the scene once and retrieve all kinds of geometrical information from the objects, which we store in a collection of textures called the <def>G-buffer</def>; think of position vectors, color vectors, normal vectors, and/or specular values. The geometric information of a scene stored in the <def>G-buffer</def> is then later used for (more complex) lighting calculations. Below is the content of a G-buffer of a single frame:
    332 </p>
    333   
    334   <img src="/img/advanced-lighting/deferred_g_buffer.png" alt="An example of a G-Buffer filled with geometrical data of a scene in OpenGL"/>
    335     
    336 <p>
    337   We use the textures from the G-buffer in a second pass called the <def>lighting pass</def> where we render a screen-filled quad and calculate the scene's lighting for each fragment using the geometrical information stored in the G-buffer; pixel by pixel we iterate over the G-buffer. Instead of taking each object all the way from the vertex shader to the fragment shader, we defer its advanced fragment processing to a later stage. The lighting calculations are exactly the same, but this time we take all required input variables from the corresponding G-buffer textures instead of the vertex shader (plus some uniform variables).
    338 </p>
    339     
    340 <p>
    341   The image below nicely illustrates the process of deferred shading.
    342 </p>
    343     
    344     <img src="/img/advanced-lighting/deferred_overview.png" class="clean" alt="Overview of the deferred shading technique in OpenGL"/>    
    345     
    346 <p>
    347   A major advantage of this approach is that whatever fragment ends up in the G-buffer is the actual fragment information that ends up as a screen pixel. The depth test already concluded this fragment to be the last and top-most fragment. This ensures that for each pixel we process in the lighting pass, we only calculate lighting once. Furthermore, deferred rendering opens up the possibility for further optimizations that allow us to render a much larger amount of light sources compared to forward rendering.
    348 </p>
    349     
    350 <p>
    351   It also comes with some disadvantages though as the G-buffer requires us to store a relatively large amount of scene data in its texture color buffers. This eats memory, especially since scene data like position vectors require a high precision. Another disadvantage is that it doesn't support blending (as we only have information of the top-most fragment) and MSAA no longer works. There are several workarounds for this that we'll get to at the end of the chapter.
    352 </p>
    353     
    354 <p>
    355   Filling the G-buffer (in the geometry pass) isn't too expensive as we directly store object information like position, color, or normals into a framebuffer with a small or zero amount of processing. By using <def>multiple render targets</def> (MRT) we can even do all of this in a single render pass.
    356 </p>
    357     
    358 <h2>The G-buffer</h2>
    359 <p>
    360   The <def>G-buffer</def> is the collective term for all textures used to store lighting-relevant data for the final lighting pass. Let's take this moment to briefly review all the data we need to light a fragment with forward rendering:
    361 </p>
    362     
    363 <ul>
    364   <li>A 3D world-space <strong>position</strong> vector to calculate the (interpolated) fragment position variable used for <var>lightDir</var> and <var>viewDir</var>. </li>  
    365   <li>An RGB diffuse <strong>color</strong> vector also known as <def>albedo</def>.</li>
    366   <li>A 3D <strong>normal</strong> vector for determining a surface's slope.</li>
    367   <li>A <strong>specular intensity</strong> float.</li>
    368   <li>All light source position and color vectors.</li>
    369   <li>The player or viewer's position vector.</li>
    370 </ul>
    371     
    372 <p>
    373   With these (per-fragment) variables at our disposal we are able to calculate the (Blinn-)Phong lighting we're accustomed to. The light source positions and colors, and the player's view position, can be configured using uniform variables, but the other variables are all fragment specific. If we can somehow pass the exact same data to the final deferred lighting pass we can calculate the same lighting effects, even though we're rendering fragments of a 2D quad. 
    374 </p>
    375     
    376 <p>
    377   There is no limit in OpenGL to what we can store in a texture, so it makes sense to store all per-fragment data in one or multiple screen-filled textures of the G-buffer and use these later in the lighting pass. As the G-buffer textures will have the same size as the lighting pass's 2D quad, we get the exact same fragment data we would have had in a forward rendering setting, but this time in the lighting pass; there is a one-on-one mapping.
    378 </p>
    379 
    380 <p>
    381   In pseudocode the entire process will look a bit like this:
    382 </p>
    383       
    384 <pre><code>
    385 while(...) // render loop
    386 {
    387     // 1. geometry pass: render all geometric/color data to g-buffer 
    388     <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, gBuffer);
    389     <function id='13'><function id='10'>glClear</function>Color</function>(0.0, 0.0, 0.0, 1.0); // keep it black so it doesn't leak into g-buffer
    390     <function id='10'>glClear</function>(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    391     gBufferShader.use();
    392     for(Object obj : Objects)
    393     {
    394         ConfigureShaderTransformsAndUniforms();
    395         obj.Draw();
    396     }  
    397     // 2. lighting pass: use g-buffer to calculate the scene's lighting
    398     <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, 0);
    399     lightingPassShader.use();
    400     BindAllGBufferTextures();
    401     SetLightingUniforms();
    402     RenderQuad();
    403 }
    404 </code></pre>
    405     
    406 <p>
    407   The data we'll need to store of each fragment is a <strong>position</strong> vector, a <strong>normal</strong> vector, a <strong>color</strong> vector, and a <strong>specular intensity</strong> value. In the geometry pass we need to render all objects of the scene and store these data components in the G-buffer. We can again use <def>multiple render targets</def> to render to multiple color buffers in a single render pass; this was briefly discussed in the <a href="https://learnopengl.com/Advanced-Lighting/Bloom" target="_blank">Bloom</a> chapter.
    408 </p>
    409       
    410 <p>
    411   For the geometry pass we'll need to initialize a framebuffer object that we'll call <var>gBuffer</var> that has multiple color buffers attached and a single depth renderbuffer object. For the position and normal texture we'd preferably use a high-precision texture (16 or 32-bit float per component). For the albedo and specular values we'll be fine with the default texture precision (8-bit precision per component). Note that we use <var>GL_RGBA16F</var> over <var>GL_RGB16F</var> as GPUs generally prefer 4-component formats over 3-component formats due to byte alignment; some drivers may fail to complete the framebuffer otherwise.
    412 </p>
    413       
    414 <pre><code>
    415 unsigned int gBuffer;
    416 <function id='76'>glGenFramebuffers</function>(1, &gBuffer);
    417 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, gBuffer);
    418 unsigned int gPosition, gNormal, gAlbedoSpec;
    419   
    420 // - position color buffer
    421 <function id='50'>glGenTextures</function>(1, &gPosition);
    422 <function id='48'>glBindTexture</function>(GL_TEXTURE_2D, gPosition);
    423 <function id='52'>glTexImage2D</function>(GL_TEXTURE_2D, 0, GL_RGBA16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGBA, GL_FLOAT, NULL);
    424 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    425 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    426 <function id='81'>glFramebufferTexture2D</function>(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, gPosition, 0);
    427   
    428 // - normal color buffer
    429 <function id='50'>glGenTextures</function>(1, &gNormal);
    430 <function id='48'>glBindTexture</function>(GL_TEXTURE_2D, gNormal);
    431 <function id='52'>glTexImage2D</function>(GL_TEXTURE_2D, 0, GL_RGBA16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGBA, GL_FLOAT, NULL);
    432 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    433 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    434 <function id='81'>glFramebufferTexture2D</function>(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, gNormal, 0);
    435   
    436 // - color + specular color buffer
    437 <function id='50'>glGenTextures</function>(1, &gAlbedoSpec);
    438 <function id='48'>glBindTexture</function>(GL_TEXTURE_2D, gAlbedoSpec);
    439 <function id='52'>glTexImage2D</function>(GL_TEXTURE_2D, 0, GL_RGBA, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    440 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    441 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    442 <function id='81'>glFramebufferTexture2D</function>(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, gAlbedoSpec, 0);
    443   
    444 // - tell OpenGL which color attachments we'll use (of this framebuffer) for rendering 
    445 unsigned int attachments[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
    446 glDrawBuffers(3, attachments);
    447   
    448 // then also add render buffer object as depth buffer and check for completeness.
    449 [...]
    450 </code></pre>
    451       
    452 <p>
    453   Since we use multiple render targets, we have to explicitly tell OpenGL which of the color buffers associated with <var>gBuffer</var> we'd like to render to with <fun>glDrawBuffers</fun>. Also interesting to note here is that we combine the color and specular intensity data in a single <code>RGBA</code> texture; this saves us from having to declare an additional color buffer texture. As your deferred shading pipeline gets more complex and needs more data, you'll quickly find new ways to combine data in individual textures.
    454 </p>
    455       
    456 <p>
    457  Next we need to render into the G-buffer. Assuming each object has a diffuse, normal, and specular texture we'd use something like the following fragment shader to render into the G-buffer:
    458 </p>
    459     
    460 <pre><code>
    461 #version 330 core
    462 layout (location = 0) out vec3 gPosition;
    463 layout (location = 1) out vec3 gNormal;
    464 layout (location = 2) out vec4 gAlbedoSpec;
    465 
    466 in vec2 TexCoords;
    467 in vec3 FragPos;
    468 in vec3 Normal;
    469 
    470 uniform sampler2D texture_diffuse1;
    471 uniform sampler2D texture_specular1;
    472 
    473 void main()
    474 {    
    475     // store the fragment position vector in the first gbuffer texture
    476     gPosition = FragPos;
    477     // also store the per-fragment normals into the gbuffer
    478     gNormal = normalize(Normal);
    479     // and the diffuse per-fragment color
    480     gAlbedoSpec.rgb = texture(texture_diffuse1, TexCoords).rgb;
    481     // store specular intensity in gAlbedoSpec's alpha component
    482     gAlbedoSpec.a = texture(texture_specular1, TexCoords).r;
    483 }  
    484 </code></pre>
    485       
    486 <p>
    487    As we use multiple render targets, the layout specifier tells OpenGL which color buffer of the active framebuffer we render to. Note that we do not store the specular intensity in a separate color buffer texture, as we can store its single float value in the alpha component of one of the other color buffer textures.
    488 </p>
    489       
    490           
    491 <warning>
    492   Keep in mind that with lighting calculations it is extremely important to keep all relevant variables in the same coordinate space. In this case we store (and calculate) all variables in world-space.
    493 </warning>
    494       
    495 <p>
    496   If we were now to render a large collection of backpack objects into the <var>gBuffer</var> framebuffer and visualize its content by projecting each color buffer one by one onto a screen-filled quad, we'd see something like this:
    497 </p>
    498       
    499       <img src="/img/advanced-lighting/deferred_g_buffer.png" alt="Image of a G-Buffer in OpenGL with several backpacks"/>
    500       
    501 <p>
    502   Try to verify for yourself that the world-space position and normal vectors are indeed correct. For instance, normal vectors pointing to the right would be more aligned to a red color; similarly for position vectors that point from the scene's origin to the right. As soon as you're satisfied with the content of the G-buffer, it's time to move to the next step: the lighting pass.
    503 </p>
    504       
    505 <h2>The deferred lighting pass</h2>
    506 <p>
    507   With a large collection of fragment data in the G-Buffer at our disposal we have the option to completely calculate the scene's final lit colors. We do this by iterating over each of the G-Buffer textures pixel by pixel and using their content as input to the lighting algorithms. Because the G-buffer texture values all represent the final transformed fragment values, we only have to do the expensive lighting operations once per pixel. This is especially useful in complex scenes where we'd easily invoke multiple expensive fragment shader calls per pixel in a forward rendering setting.
    508 </p>
    509         
    510 <p>
    511   For the lighting pass we're going to render a 2D screen-filled quad (a bit like a post-processing effect) and execute an expensive lighting fragment shader on each pixel:
    512 </p>
    513         
    514 <pre><code>
    515 <function id='10'>glClear</function>(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    516 <function id='49'>glActiveTexture</function>(GL_TEXTURE0);
    517 <function id='48'>glBindTexture</function>(GL_TEXTURE_2D, gPosition);
    518 <function id='49'>glActiveTexture</function>(GL_TEXTURE1);
    519 <function id='48'>glBindTexture</function>(GL_TEXTURE_2D, gNormal);
    520 <function id='49'>glActiveTexture</function>(GL_TEXTURE2);
    521 <function id='48'>glBindTexture</function>(GL_TEXTURE_2D, gAlbedoSpec);
    522 // also send light relevant uniforms
    523 shaderLightingPass.use();
    524 SendAllLightUniformsToShader(shaderLightingPass);
    525 shaderLightingPass.setVec3("viewPos", camera.Position);
    526 RenderQuad();  
    527 </code></pre>
    528         
    529 <p>
    530   We bind all relevant textures of the G-buffer before rendering and also send the lighting-relevant uniform variables to the shader.
    531 </p>
    532         
    533 <p>
    534   The fragment shader of the lighting pass is largely similar to the lighting chapter shaders we've used so far. What is new is the method in which we obtain the lighting's input variables, which we now directly sample from the G-buffer:
    535 </p>
    536         
    537 <pre><code>
    538 #version 330 core
    539 out vec4 FragColor;
    540   
    541 in vec2 TexCoords;
    542 
    543 uniform sampler2D gPosition;
    544 uniform sampler2D gNormal;
    545 uniform sampler2D gAlbedoSpec;
    546 
    547 struct Light {
    548     vec3 Position;
    549     vec3 Color;
    550 };
    551 const int NR_LIGHTS = 32;
    552 uniform Light lights[NR_LIGHTS];
    553 uniform vec3 viewPos;
    554 
    555 void main()
    556 {             
    557     // retrieve data from G-buffer
    558     vec3 FragPos = texture(gPosition, TexCoords).rgb;
    559     vec3 Normal = texture(gNormal, TexCoords).rgb;
    560     vec3 Albedo = texture(gAlbedoSpec, TexCoords).rgb;
    561     float Specular = texture(gAlbedoSpec, TexCoords).a;
    562     
    563     // then calculate lighting as usual
    564     vec3 lighting = Albedo * 0.1; // hard-coded ambient component
    565     vec3 viewDir = normalize(viewPos - FragPos);
    566     for(int i = 0; i &lt; NR_LIGHTS; ++i)
    567     {
    568         // diffuse
    569         vec3 lightDir = normalize(lights[i].Position - FragPos);
    570         vec3 diffuse = max(dot(Normal, lightDir), 0.0) * Albedo * lights[i].Color;
    571         lighting += diffuse;
    572     }
    573     
    574     FragColor = vec4(lighting, 1.0);
    575 }  
    576 </code></pre>
    577         
    578 <p>
    579   The lighting pass shader accepts 3 uniform textures that represent the G-buffer and hold all the data we've stored in the geometry pass. If we were to sample these with the current fragment's texture coordinates we'd get the exact same fragment values as if we were rendering the geometry directly. Note that we retrieve both the <var>Albedo</var> color and the <var>Specular</var> intensity from the single <var>gAlbedoSpec</var> texture.
    580 </p>
    581         
    582 <p>
    583   As we now have the per-fragment variables (and the relevant uniform variables) necessary to calculate Blinn-Phong lighting, we don't have to make any changes to the lighting code. The only thing we change in deferred shading here is the method of obtaining lighting input variables.
    584 </p>
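<p>
  Note that the simplified shader above samples the <var>Specular</var> intensity but never uses it; its loop only computes a diffuse term. If you want the stored specular intensity to contribute as well, a Blinn-Phong specular term could be sketched inside the light loop like this (the shininess exponent of <code>16.0</code> is an arbitrary choice here):
</p>

<pre><code>
// inside the light loop, after the diffuse term
vec3 halfwayDir = normalize(lightDir + viewDir);
float spec      = pow(max(dot(Normal, halfwayDir), 0.0), 16.0);
vec3 specular   = lights[i].Color * spec * Specular;
lighting += specular;
</code></pre>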
    585         
    586 <p>
    587   Running a simple demo with a total of <code>32</code> small lights looks a bit like this:
    588 </p>
    589         
    590         <img src="/img/advanced-lighting/deferred_shading.png" class="clean" alt="Example of Deferred Shading in OpenGL"/>
    591                     
    592 <p>
  One of the disadvantages of deferred shading is that it is not possible to do <a href="https://learnopengl.com/Advanced-OpenGL/Blending" target="_blank">blending</a> as all values in the G-buffer are from single fragments, and blending operates on the combination of multiple fragments. Another disadvantage is that deferred shading forces you to use the same lighting algorithm for most of your scene's lighting; you can alleviate this somewhat by including more material-specific data in the G-buffer.
    594 </p>
    595           
    596 <p>
    597   To overcome these disadvantages (especially blending) we often split the renderer into two parts: one deferred rendering part, and the other a forward rendering part specifically meant for blending or special shader effects not suited for a deferred rendering pipeline. To illustrate how this works, we'll render the light sources as small cubes using a forward renderer as the light cubes require a special shader (simply output a single light color).
    598 </p>
    599       
    600 <h2>Combining deferred rendering with forward rendering</h2>
    601 <p>
    602   Say we want to render each of the light sources as a 3D cube positioned at the light source's position emitting the color of the light. A first idea that comes to mind is to simply forward render all the light sources on top of the deferred lighting quad at the end of the deferred shading pipeline. So basically render the cubes as we'd normally do, but only after we've finished the deferred rendering operations. In code this will look a bit like this:
    603 </p>
    604   
    605 <pre><code>
    606 // deferred lighting pass
    607 [...]
    608 RenderQuad();
    609   
    610 // now render all light cubes with forward rendering as we'd normally do
    611 shaderLightBox.use();
    612 shaderLightBox.setMat4("projection", projection);
    613 shaderLightBox.setMat4("view", view);
    614 for (unsigned int i = 0; i &lt; lightPositions.size(); i++)
    615 {
    616     model = glm::mat4(1.0f);
    617     model = <function id='55'>glm::translate</function>(model, lightPositions[i]);
    618     model = <function id='56'>glm::scale</function>(model, glm::vec3(0.25f));
    619     shaderLightBox.setMat4("model", model);
    620     shaderLightBox.setVec3("lightColor", lightColors[i]);
    621     RenderCube();
    622 }
    623 </code></pre>
    624   
    625 <p>
    626    However, these rendered cubes do not take any of the stored geometry depth of the deferred renderer into account and are, as a result, always rendered on top of the previously rendered objects; this isn't the result we were looking for.
    627 </p>
    628   
    629   <img src="/img/advanced-lighting/deferred_lights_no_depth.png" class="clean" alt="Image of deferred rendering with forward rendering where we didn't copy depth buffer data and lights are rendered on top of all geometry in OpenGL"/>
    630     
    631 <p>
  What we need to do is first copy the depth information stored in the geometry pass into the default framebuffer's depth buffer and only then render the light cubes. This way the light cubes' fragments are only rendered when they pass the depth test against the previously rendered geometry.
    633 </p>
    634     
    635 <p>
    636   We can copy the content of a framebuffer to the content of another framebuffer with the help of <fun><function id='103'>glBlitFramebuffer</function></fun>, a function we also used in the <a href="https://learnopengl.com/Advanced-OpenGL/Anti-Aliasing" target="_blank">anti-aliasing</a> chapter to resolve multisampled framebuffers. The <fun><function id='103'>glBlitFramebuffer</function></fun> function allows us to copy a user-defined region of a framebuffer to a user-defined region of another framebuffer. 
    637 </p>
    638     
    639 <p>
    640   We stored the depth of all the objects rendered in the deferred geometry pass in the <var>gBuffer</var> FBO. If we were to copy the content of its depth buffer to the depth buffer of the default framebuffer, the light cubes would then render as if all of the scene's geometry was rendered with forward rendering. As briefly explained in the anti-aliasing chapter, we have to specify a framebuffer as the read framebuffer and similarly specify a framebuffer as the write framebuffer:
    641 </p>
    642     
    643 <pre><code>
    644 <function id='77'>glBindFramebuffer</function>(GL_READ_FRAMEBUFFER, gBuffer);
    645 <function id='77'>glBindFramebuffer</function>(GL_DRAW_FRAMEBUFFER, 0); // write to default framebuffer
    646 <function id='103'>glBlitFramebuffer</function>(
    647   0, 0, SCR_WIDTH, SCR_HEIGHT, 0, 0, SCR_WIDTH, SCR_HEIGHT, GL_DEPTH_BUFFER_BIT, GL_NEAREST
    648 );
    649 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, 0);
    650 // now render light cubes as before
    651 [...]  
    652 </code></pre>
    653     
    654 <p>
    655   Here we copy the entire read framebuffer's depth buffer content to the default framebuffer's depth buffer; this can similarly be done for color buffers and stencil buffers. If we then render the light cubes, the cubes indeed render correctly over the scene's geometry:
    656 </p>
    657     
    658   
    659   <img src="/img/advanced-lighting/deferred_lights_depth.png" class="clean" alt="Image of deferred rendering with forward rendering where we copied the depth buffer data and lights are rendered properly with all  geometry in OpenGL"/>
    660     
    661 <p>
    662   You can find the full source code of the demo <a href="/code_viewer_gh.php?code=src/5.advanced_lighting/8.1.deferred_shading/deferred_shading.cpp" target="_blank">here</a>.
    663 </p>
    664     
    665 <p>
    666   With this approach we can easily combine deferred shading with forward shading. This is great as we can now still apply blending and render objects that require special shader effects, something that isn't possible in a pure deferred rendering context.
    667 </p>
    668     
    669 <h2>A larger number of lights</h2>
    670 <p>
    671   What deferred rendering is often praised for, is its ability to render an enormous amount of light sources without a heavy cost on performance. Deferred rendering by itself doesn't allow for a very large amount of light sources as we'd still have to calculate each fragment's lighting component for each of the scene's light sources. What makes a large amount of light sources possible is a very neat optimization we can apply to the deferred rendering pipeline: that of <def>light volumes</def>.
    672 </p>
    673     
    674 <p>
    675   Normally when we render a fragment in a large lit scene we'd calculate the contribution of <strong>each</strong> light source in a scene, regardless of their distance to the fragment. A large portion of these light sources will never reach the fragment, so why waste all these lighting computations? 
    676 </p>
    677         
    678 <p>
    679   The idea behind light volumes is to calculate the radius, or volume, of a light source i.e. the area where its light is able to reach fragments. As most light sources use some form of attenuation, we can use that to calculate the maximum distance or radius their light is able to reach. We then only do the expensive lighting calculations if a fragment is inside one or more of these light volumes. This can save us a considerable amount of computation as we now only calculate lighting where it's necessary.
    680 </p>
    681     
    682 <p>
    683   The trick to this approach is mostly figuring out the size or radius of the light volume of a light source.
    684 </p>
    685     
    686 <h3>Calculating a light's volume or radius</h3>
    687 <p>
    688   To obtain a light's volume radius we have to solve the attenuation equation for when its light contribution becomes <code>0.0</code>. For the attenuation function we'll use the function introduced in the <a href="https://learnopengl.com/Lighting/Light-casters" target="_blank">light casters</a> chapter:
    689 </p>
    690     
    691     \[F_{light} = \frac{I}{K_c + K_l * d + K_q * d^2}\]
    692     
    693 <p>
    694   What we want to do is solve this equation for when \(F_{light}\) is <code>0.0</code>. However, this equation will never exactly reach the value <code>0.0</code>, so there won't be a solution. What we can do however, is not solve the equation for <code>0.0</code>, but solve it for a brightness value that is close to <code>0.0</code> but still perceived as dark. The brightness value of \(5/256\) would be acceptable for this chapter's demo scene; divided by 256 as the default 8-bit framebuffer can only display that many intensities per component.
    695 </p>
    696     
    697 <note>
    698   The attenuation function used is mostly dark in its visible range. If we were to limit it to an even darker brightness than \(5/256\), the light volume would become too large and thus less effective. As long as a user cannot see a sudden cut-off of a light source at its volume borders we'll be fine. Of course this always depends on the type of scene; a higher brightness threshold results in smaller light volumes and thus a better efficiency, but can produce noticeable artifacts where lighting seems to break at a volume's borders.
    699 </note>
    700     
    701 <p>
    702   The attenuation equation we have to solve becomes:
    703 </p>
    704     
    705     \[\frac{5}{256} = \frac{I_{max}}{Attenuation}\]
    706     
    707 <p>
    708   Here \(I_{max}\) is the light source's brightest color component. We use a light source's brightest color component as solving the equation for a light's brightest intensity value best reflects the ideal light volume radius.
    709 </p>
    710     
    711 <p>
    712   From here on we continue solving the equation:
    713 </p>
    714     
    715     \[\frac{5}{256} * Attenuation = I_{max} \]
    716     
    717     \[5 * Attenuation = I_{max} * 256 \]
    718     
    719     \[Attenuation = I_{max} * \frac{256}{5} \]
    720     
    721      \[K_c + K_l * d + K_q * d^2 = I_{max} * \frac{256}{5} \]
    722     
    723     \[K_q * d^2 + K_l * d + K_c - I_{max} * \frac{256}{5} = 0 \]
    724     
    725 <p>
  The last equation is an equation of the form \(ax^2 + bx + c = 0\), which we can solve using the quadratic formula:
    727 </p>
    728     
    729     \[x = \frac{-K_l + \sqrt{K_l^2 - 4 * K_q * (K_c - I_{max} * \frac{256}{5})}}{2 * K_q} \]
    730     
    731 <p>
    732   This gives us a general equation that allows us to calculate \(x\) i.e. the light volume's radius for the light source given a constant, linear, and quadratic parameter:
    733 </p>
    734     
    735 <pre><code>
    736 float constant  = 1.0; 
    737 float linear    = 0.7;
    738 float quadratic = 1.8;
    739 float lightMax  = std::fmaxf(std::fmaxf(lightColor.r, lightColor.g), lightColor.b);
    740 float radius    = 
    741   (-linear +  std::sqrtf(linear * linear - 4 * quadratic * (constant - (256.0 / 5.0) * lightMax))) 
    742   / (2 * quadratic);  
    743 </code></pre>
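<p>
  As a quick sanity check (not part of the demo itself), we can wrap the formula in a small standalone function; with the attenuation constants above and a white light (<code>lightMax = 1.0</code>) the radius comes out at roughly <code>5.09</code> units:
</p>

<pre><code>
#include &lt;cmath&gt;
#include &lt;cassert&gt;

// solve K_q * d^2 + K_l * d + (K_c - I_max * 256/5) = 0 for d
float lightVolumeRadius(float constant, float linear, float quadratic, float lightMax)
{
    return (-linear + std::sqrt(linear * linear
            - 4.0f * quadratic * (constant - (256.0f / 5.0f) * lightMax)))
           / (2.0f * quadratic);
}

int main()
{
    // the constants used above, white light: radius is roughly 5.09
    float radius = lightVolumeRadius(1.0f, 0.7f, 1.8f, 1.0f);
    assert(radius > 5.0f);
    assert(5.2f > radius);
    return 0;
}
</code></pre>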
    744 
    745 <p>
    746   We calculate this radius for each light source of the scene and use it to only calculate lighting for that light source if a fragment is inside the light source's volume. Below is the updated lighting pass fragment shader that takes the calculated light volumes into account. Note that this approach is merely done for teaching purposes and not viable in a practical setting as we'll soon discuss:
    747 </p>
    748     
    749 <pre><code>
    750 struct Light {
    751     [...]
    752     float Radius;
    753 }; 
    754   
    755 void main()
    756 {
    757     [...]
    758     for(int i = 0; i &lt; NR_LIGHTS; ++i)
    759     {
    760         // calculate distance between light source and current fragment
    761         float distance = length(lights[i].Position - FragPos);
    762         if(distance &lt; lights[i].Radius)
    763         {
    764             // do expensive lighting
    765             [...]
    766         }
    767     }   
    768 }
    769 </code></pre>
    770 
    771 <p>
  The results are exactly the same as before, but this time each fragment only calculates lighting for the light sources whose volume it resides in.
    773 </p>
    774     
    775 <p>
    776   You can find the final source code of the demo <a href="/code_viewer_gh.php?code=src/5.advanced_lighting/8.2.deferred_shading_volumes/deferred_shading_volumes.cpp" target="_blank">here</a>.
    777 </p>
    778       
    779 <h3>How we really use light volumes</h3>
    780 <p>
  The fragment shader shown above doesn't really work in practice and only illustrates how we can <em>sort of</em> use a light's volume to reduce lighting calculations. The reality is that your GPU and GLSL are pretty bad at optimizing loops and branches. The reason for this is that shader execution on the GPU is highly parallel, and most architectures require that a large collection of threads runs the exact same shader code for it to be efficient. This often means that a shader executes <strong>all</strong> branches of an <code>if</code> statement to ensure the shader runs are identical for that group of threads, making our previous <em>radius check</em> optimization completely useless; we'd still calculate lighting for all light sources!
    782 </p>
    783       
    784 <p>
    785   The appropriate approach to using light volumes is to render actual spheres, scaled by the light volume radius. The centers of these spheres are positioned at the light source's position, and as it is scaled by the light volume radius the sphere exactly encompasses the light's visible volume. This is where the trick comes in: we use the deferred lighting shader for rendering the spheres.  As a rendered sphere produces fragment shader invocations that exactly match the pixels the light source affects, we only render the relevant pixels and skip all other pixels. The image below illustrates this:
    786 </p>
    787       
    788       <img src="/img/advanced-lighting/deferred_light_volume_rendered.png" class="clean" alt="Image of a light volume rendered with a deferred fragment shader in OpenGL"/>
    789         
    790 <p>
    791   This is done for each light source in the scene, and the resulting fragments are additively blended together. The result is then the exact same scene as before, but this time rendering only the relevant fragments per light source. This effectively reduces the computations from <code>nr_objects * nr_lights</code> to <code>nr_objects + nr_lights</code>, which makes it incredibly efficient in scenes with a large number of lights. This approach is what makes deferred rendering so suitable for rendering a large number of lights.
    792 </p>
    793         
    794 <p>
    795   There is still an issue with this approach: face culling should be enabled (otherwise we'd render a light's effect twice) and when it is enabled the user may enter a light source's volume after which the volume isn't rendered anymore (due to back-face culling), removing the light source's influence; we can solve that by only rendering the spheres' back faces. 
    796 </p>
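<p>
  Putting the above together, such a light-volume pass could be sketched as follows. Note that this is an illustration, not the chapter's demo code: <fun>RenderSphere</fun> and <var>lightRadii</var> are hypothetical helpers, and the lighting-pass shader would additionally need the usual projection, view, and model matrices as it now renders actual geometry instead of a screen-filling quad:
</p>

<pre><code>
// additive blending: each light volume adds its contribution
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
// render only the spheres' back faces so a light keeps working
// when the camera moves inside its volume
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);
shaderLightingPass.use();
for (unsigned int i = 0; i &lt; lightPositions.size(); i++)
{
    glm::mat4 model = glm::mat4(1.0f);
    model = glm::translate(model, lightPositions[i]);
    model = glm::scale(model, glm::vec3(lightRadii[i]));
    shaderLightingPass.setMat4("model", model);
    RenderSphere();
}
glCullFace(GL_BACK);
glDisable(GL_BLEND);
</code></pre>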
    797         
    798  <p>
    799     Rendering light volumes does take its toll on performance, and while it is generally much faster than normal deferred shading for rendering a large number of lights, there's still more we can optimize. Two other popular (and more efficient) extensions on top of deferred shading exist called <def>deferred lighting</def> and <def>tile-based deferred shading</def>. These are even more efficient at rendering large amounts of light and also allow for relatively efficient MSAA.  
    800 </p>
    801         
    802 <h2>Deferred rendering vs forward rendering</h2>
    803 <p>
    804   By itself (without light volumes), deferred shading is a nice optimization as each pixel only runs a single fragment shader, compared to forward rendering where we'd often run the fragment shader multiple times per pixel. Deferred rendering does come with a few disadvantages though: a large memory overhead, no MSAA, and blending still has to be done with forward rendering.
    805 </p>
    806         
    807 <p>
  When you have a small scene and not too many lights, deferred rendering is not necessarily faster and can even be slower as the overhead then outweighs its benefits. In more complex scenes, deferred rendering quickly becomes a significant optimization; especially with the more advanced optimization extensions. In addition, some render effects (especially post-processing effects) become cheaper in a deferred render pipeline as a lot of scene inputs are already available from the G-buffer.
    809 </p>       
    810         
    811 <p>
    812   As a final note I'd like to mention that basically all effects that can be accomplished with forward rendering can also be implemented in a deferred rendering context; this often only requires a small translation step. For instance, if we want to use normal mapping in a deferred renderer, we'd change the geometry pass shaders to output a world-space normal extracted from a normal map (using a TBN matrix) instead of the surface normal; the lighting calculations in the lighting pass don't need to change at all. And if you want parallax mapping to work, you'd want to first displace the texture coordinates in the geometry pass before sampling an object's diffuse, specular, and normal textures. Once you understand the idea behind deferred rendering, it's not too difficult to get creative.
    813 </p>
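<p>
  As a concrete sketch of the normal mapping example (assuming a <var>TBN</var> matrix and a <var>texture_normal1</var> sampler are available in the geometry-pass fragment shader, as in the normal mapping chapter), the only change is what we write into <var>gNormal</var>:
</p>

<pre><code>
// geometry pass: store the normal-mapped normal in the G-buffer
vec3 normal = texture(texture_normal1, TexCoords).rgb;
normal = normal * 2.0 - 1.0;        // from [0,1] to [-1,1]
gNormal = normalize(TBN * normal);  // to world space
</code></pre>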
    814         
    815 <h2>Additional resources</h2>
    816 <ul>
    817     <li><a href="http://ogldev.atspace.co.uk/www/tutorial35/tutorial35.html" target="_blank">Tutorial 35: Deferred Shading - Part 1</a>: a three-part deferred shading tutorial by OGLDev.</li> 
    <li><a href="https://software.intel.com/sites/default/files/m/d/4/1/d/8/lauritzen_deferred_shading_siggraph_2010.pdf" target="_blank">Deferred Rendering for Current and Future Rendering Pipelines</a>: slides by Andrew Lauritzen discussing high-level tile-based deferred shading and deferred lighting.</li>
    820  </ul>
    821        
    822 
    823     </div>
    824     
    825     <div id="hover">
    826         HI
    827     </div>
    830 
    832 
    833     
    834 
    835 
    836 </div> <!-- container div -->
    837 
    838 
    839 </div> <!-- super container div -->
    842 	</main>
    843 </body>
    844 </html>