LearnOpenGL

Translation in progress of learnopengl.com.
git clone https://git.mtkn.jp/LearnOpenGL

SSAO.html (46737B)


      1 <!DOCTYPE html>
      2 <html lang="ja"> 
      3 <head>
      4     <meta charset="utf-8"/>
      5     <title>LearnOpenGL</title>
      6     <link rel="shortcut icon" type="image/ico" href="/favicon.ico"  />
      7 	<link rel="stylesheet" href="../static/style.css" />
      8 	<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js"> </script>
      9 	<script src="/static/functions.js"></script>
     10 </head>
     11 <body>
     12 	<nav>
     13 <ol>
     14 	<li id="Introduction">
     15 		<a href="https://learnopengl.com/Introduction">はじめに</a>
     16 	</li>
     17 	<li id="Getting-started">
     18 		<span class="closed">入門</span>
     19 		<ol>
     20 			<li id="Getting-started/OpenGL">
     21 				<a href="https://learnopengl.com/Getting-started/OpenGL">OpenGL </a>
     22 			</li>
     23 			<li id="Getting-started/Creating-a-window">
     24 				<a href="https://learnopengl.com/Getting-started/Creating-a-window">ウィンドウの作成</a>
     25 			</li>
     26 			<li id="Getting-started/Hello-Window">
     27 				<a href="https://learnopengl.com/Getting-started/Hello-Window">最初のウィンドウ</a>
     28 			</li>
     29 			<li id="Getting-started/Hello-Triangle">
     30 				<a href="https://learnopengl.com/Getting-started/Hello-Triangle">最初の三角形</a>
     31 			</li>
     32 			<li id="Getting-started/Shaders">
     33 				<a href="https://learnopengl.com/Getting-started/Shaders">シェーダー</a>
     34 			</li>
     35 			<li id="Getting-started/Textures">
     36 				<a href="https://learnopengl.com/Getting-started/Textures">テクスチャ</a>
     37 			</li>
     38 			<li id="Getting-started/Transformations">
     39 				<a href="https://learnopengl.com/Getting-started/Transformations">座標変換</a>
     40 			</li>
     41 			<li id="Getting-started/Coordinate-Systems">
     42 				<a href="https://learnopengl.com/Getting-started/Coordinate-Systems">座標系</a>
     43 			</li>
     44 			<li id="Getting-started/Camera">
     45 				<a href="https://learnopengl.com/Getting-started/Camera">カメラ</a>
     46 			</li>
     47 			<li id="Getting-started/Review">
     48 				<a href="https://learnopengl.com/Getting-started/Review">まとめ</a>
     49 			</li>
     50 		</ol>
     51 	</li>
     52 	<li id="Lighting">
     53 		<span class="closed">Lighting </span>
     54 		<ol>
     55 			<li id="Lighting/Colors">
     56 				<a href="https://learnopengl.com/Lighting/Colors">Colors </a>
     57 			</li>
     58 			<li id="Lighting/Basic-Lighting">
     59 				<a href="https://learnopengl.com/Lighting/Basic-Lighting">Basic Lighting </a>
     60 			</li>
     61 			<li id="Lighting/Materials">
     62 				<a href="https://learnopengl.com/Lighting/Materials">Materials </a>
     63 			</li>
     64 			<li id="Lighting/Lighting-maps">
     65 				<a href="https://learnopengl.com/Lighting/Lighting-maps">Lighting maps </a>
     66 			</li>
     67 			<li id="Lighting/Light-casters">
     68 				<a href="https://learnopengl.com/Lighting/Light-casters">Light casters </a>
     69 			</li>
     70 			<li id="Lighting/Multiple-lights">
     71 				<a href="https://learnopengl.com/Lighting/Multiple-lights">Multiple lights </a>
     72 			</li>
     73 			<li id="Lighting/Review">
     74 				<a href="https://learnopengl.com/Lighting/Review">Review </a>
     75 			</li>
     76 		</ol>
     77 	</li>
     78 	<li id="Model-Loading">
     79 		<span class="closed">Model Loading </span>
     80 		<ol>
     81 			<li id="Model-Loading/Assimp">
     82 				<a href="https://learnopengl.com/Model-Loading/Assimp">Assimp </a>
     83 			</li>
     84 			<li id="Model-Loading/Mesh">
     85 				<a href="https://learnopengl.com/Model-Loading/Mesh">Mesh </a>
     86 			</li>
     87 			<li id="Model-Loading/Model">
     88 				<a href="https://learnopengl.com/Model-Loading/Model">Model </a>
     89 			</li>
     90 		</ol>
     91 	</li>
     92 	<li id="Advanced-OpenGL">
     93 		<span class="closed">Advanced OpenGL </span>
     94 		<ol>
     95 			<li id="Advanced-OpenGL/Depth-testing">
     96 				<a href="https://learnopengl.com/Advanced-OpenGL/Depth-testing">Depth testing </a>
     97 			</li>
     98 			<li id="Advanced-OpenGL/Stencil-testing">
     99 				<a href="https://learnopengl.com/Advanced-OpenGL/Stencil-testing">Stencil testing </a>
    100 			</li>
    101 			<li id="Advanced-OpenGL/Blending">
    102 				<a href="https://learnopengl.com/Advanced-OpenGL/Blending">Blending </a>
    103 			</li>
    104 			<li id="Advanced-OpenGL/Face-culling">
    105 				<a href="https://learnopengl.com/Advanced-OpenGL/Face-culling">Face culling </a>
    106 			</li>
    107 			<li id="Advanced-OpenGL/Framebuffers">
    108 				<a href="https://learnopengl.com/Advanced-OpenGL/Framebuffers">Framebuffers </a>
    109 			</li>
    110 			<li id="Advanced-OpenGL/Cubemaps">
    111 				<a href="https://learnopengl.com/Advanced-OpenGL/Cubemaps">Cubemaps </a>
    112 			</li>
    113 			<li id="Advanced-OpenGL/Advanced-Data">
    114 				<a href="https://learnopengl.com/Advanced-OpenGL/Advanced-Data">Advanced Data </a>
    115 			</li>
    116 			<li id="Advanced-OpenGL/Advanced-GLSL">
    117 				<a href="https://learnopengl.com/Advanced-OpenGL/Advanced-GLSL">Advanced GLSL </a>
    118 			</li>
    119 			<li id="Advanced-OpenGL/Geometry-Shader">
    120 				<a href="https://learnopengl.com/Advanced-OpenGL/Geometry-Shader">Geometry Shader </a>
    121 			</li>
    122 			<li id="Advanced-OpenGL/Instancing">
    123 				<a href="https://learnopengl.com/Advanced-OpenGL/Instancing">Instancing </a>
    124 			</li>
    125 			<li id="Advanced-OpenGL/Anti-Aliasing">
    126 				<a href="https://learnopengl.com/Advanced-OpenGL/Anti-Aliasing">Anti Aliasing </a>
    127 			</li>
    128 		</ol>
    129 	</li>
    130 	<li id="Advanced-Lighting">
    131 		<span class="closed">Advanced Lighting </span>
    132 		<ol>
    133 			<li id="Advanced-Lighting/Advanced-Lighting">
    134 				<a href="https://learnopengl.com/Advanced-Lighting/Advanced-Lighting">Advanced Lighting </a>
    135 			</li>
    136 			<li id="Advanced-Lighting/Gamma-Correction">
    137 				<a href="https://learnopengl.com/Advanced-Lighting/Gamma-Correction">Gamma Correction </a>
    138 			</li>
    139 			<li id="Advanced-Lighting/Shadows">
    140 				<span class="closed">Shadows </span>
    141 				<ol>
    142 					<li id="Advanced-Lighting/Shadows/Shadow-Mapping">
    143 						<a href="https://learnopengl.com/Advanced-Lighting/Shadows/Shadow-Mapping">Shadow Mapping </a>
    144 					</li>
    145 					<li id="Advanced-Lighting/Shadows/Point-Shadows">
    146 						<a href="https://learnopengl.com/Advanced-Lighting/Shadows/Point-Shadows">Point Shadows </a>
    147 					</li>
    148 				</ol>
    149 			</li>
    150 			<li id="Advanced-Lighting/Normal-Mapping">
    151 				<a href="https://learnopengl.com/Advanced-Lighting/Normal-Mapping">Normal Mapping </a>
    152 			</li>
    153 			<li id="Advanced-Lighting/Parallax-Mapping">
    154 				<a href="https://learnopengl.com/Advanced-Lighting/Parallax-Mapping">Parallax Mapping </a>
    155 			</li>
    156 			<li id="Advanced-Lighting/HDR">
    157 				<a href="https://learnopengl.com/Advanced-Lighting/HDR">HDR </a>
    158 			</li>
    159 			<li id="Advanced-Lighting/Bloom">
    160 				<a href="https://learnopengl.com/Advanced-Lighting/Bloom">Bloom </a>
    161 			</li>
    162 			<li id="Advanced-Lighting/Deferred-Shading">
    163 				<a href="https://learnopengl.com/Advanced-Lighting/Deferred-Shading">Deferred Shading </a>
    164 			</li>
    165 			<li id="Advanced-Lighting/SSAO">
    166 				<a href="https://learnopengl.com/Advanced-Lighting/SSAO">SSAO </a>
    167 			</li>
    168 		</ol>
    169 	</li>
    170 	<li id="PBR">
    171 		<span class="closed">PBR </span>
    172 		<ol>
    173 			<li id="PBR/Theory">
    174 				<a href="https://learnopengl.com/PBR/Theory">Theory </a>
    175 			</li>
    176 			<li id="PBR/Lighting">
    177 				<a href="https://learnopengl.com/PBR/Lighting">Lighting </a>
    178 			</li>
    179 			<li id="PBR/IBL">
    180 				<span class="closed">IBL </span>
    181 				<ol>
    182 					<li id="PBR/IBL/Diffuse-irradiance">
    183 						<a href="https://learnopengl.com/PBR/IBL/Diffuse-irradiance">Diffuse irradiance </a>
    184 					</li>
    185 					<li id="PBR/IBL/Specular-IBL">
    186 						<a href="https://learnopengl.com/PBR/IBL/Specular-IBL">Specular IBL </a>
    187 					</li>
    188 				</ol>
    189 			</li>
    190 		</ol>
    191 	</li>
    192 	<li id="In-Practice">
    193 		<span class="closed">In Practice </span>
    194 		<ol>
    195 			<li id="In-Practice/Debugging">
    196 				<a href="https://learnopengl.com/In-Practice/Debugging">Debugging </a>
    197 			</li>
    198 			<li id="In-Practice/Text-Rendering">
    199 				<a href="https://learnopengl.com/In-Practice/Text-Rendering">Text Rendering </a>
    200 			</li>
    201 			<li id="In-Practice/2D-Game">
    202 				<span class="closed">2D Game </span>
    203 				<ol>
    204 					<li id="In-Practice/2D-Game/Breakout">
    205 						<a href="https://learnopengl.com/In-Practice/2D-Game/Breakout">Breakout </a>
    206 					</li>
    207 					<li id="In-Practice/2D-Game/Setting-up">
    208 						<a href="https://learnopengl.com/In-Practice/2D-Game/Setting-up">Setting up </a>
    209 					</li>
    210 					<li id="In-Practice/2D-Game/Rendering-Sprites">
    211 						<a href="https://learnopengl.com/In-Practice/2D-Game/Rendering-Sprites">Rendering Sprites </a>
    212 					</li>
    213 					<li id="In-Practice/2D-Game/Levels">
    214 						<a href="https://learnopengl.com/In-Practice/2D-Game/Levels">Levels </a>
    215 					</li>
    216 					<li id="In-Practice/2D-Game/Collisions">
    217 						<span class="closed">Collisions </span>
    218 						<ol>
    219 							<li id="In-Practice/2D-Game/Collisions/Ball">
    220 								<a href="https://learnopengl.com/In-Practice/2D-Game/Collisions/Ball">Ball </a>
    221 							</li>
    222 							<li id="In-Practice/2D-Game/Collisions/Collision-detection">
    223 								<a href="https://learnopengl.com/In-Practice/2D-Game/Collisions/Collision-detection">Collision detection </a>
    224 							</li>
    225 							<li id="In-Practice/2D-Game/Collisions/Collision-resolution">
    226 								<a href="https://learnopengl.com/In-Practice/2D-Game/Collisions/Collision-resolution">Collision resolution </a>
    227 							</li>
    228 						</ol>
    229 					</li>
    230 					<li id="In-Practice/2D-Game/Particles">
    231 						<a href="https://learnopengl.com/In-Practice/2D-Game/Particles">Particles </a>
    232 					</li>
    233 					<li id="In-Practice/2D-Game/Postprocessing">
    234 						<a href="https://learnopengl.com/In-Practice/2D-Game/Postprocessing">Postprocessing </a>
    235 					</li>
    236 					<li id="In-Practice/2D-Game/Powerups">
    237 						<a href="https://learnopengl.com/In-Practice/2D-Game/Powerups">Powerups </a>
    238 					</li>
    239 					<li id="In-Practice/2D-Game/Audio">
    240 						<a href="https://learnopengl.com/In-Practice/2D-Game/Audio">Audio </a>
    241 					</li>
    242 					<li id="In-Practice/2D-Game/Render-text">
    243 						<a href="https://learnopengl.com/In-Practice/2D-Game/Render-text">Render text </a>
    244 					</li>
    245 					<li id="In-Practice/2D-Game/Final-thoughts">
    246 						<a href="https://learnopengl.com/In-Practice/2D-Game/Final-thoughts">Final thoughts </a>
    247 					</li>
    248 				</ol>
    249 			</li>
    250 		</ol>
    251 	</li>
    252 	<li id="Guest-Articles">
    253 		<span class="closed">Guest Articles </span>
    254 		<ol>
    255 			<li id="Guest-Articles/How-to-publish">
    256 				<a href="https://learnopengl.com/Guest-Articles/How-to-publish">How to publish </a>
    257 			</li>
    258 			<li id="Guest-Articles/2020">
    259 				<span class="closed">2020 </span>
    260 				<ol>
    261 					<li id="Guest-Articles/2020/OIT">
    262 						<span class="closed">OIT </span>
    263 						<ol>
    264 							<li id="Guest-Articles/2020/OIT/Introduction">
    265 								<a href="https://learnopengl.com/Guest-Articles/2020/OIT/Introduction">Introduction </a>
    266 							</li>
    267 							<li id="Guest-Articles/2020/OIT/Weighted-Blended">
    268 								<a href="https://learnopengl.com/Guest-Articles/2020/OIT/Weighted-Blended">Weighted Blended </a>
    269 							</li>
    270 						</ol>
    271 					</li>
    272 					<li id="Guest-Articles/2020/Skeletal-Animation">
    273 						<a href="https://learnopengl.com/Guest-Articles/2020/Skeletal-Animation">Skeletal Animation </a>
    274 					</li>
    275 				</ol>
    276 			</li>
    277 			<li id="Guest-Articles/2021">
    278 				<span class="closed">2021 </span>
    279 				<ol>
    280 					<li id="Guest-Articles/2021/CSM">
    281 						<a href="https://learnopengl.com/Guest-Articles/2021/CSM">CSM </a>
    282 					</li>
    283 					<li id="Guest-Articles/2021/Scene">
    284 						<span class="closed">Scene </span>
    285 						<ol>
    286 							<li id="Guest-Articles/2021/Scene/Scene-Graph">
    287 								<a href="https://learnopengl.com/Guest-Articles/2021/Scene/Scene-Graph">Scene Graph </a>
    288 							</li>
    289 							<li id="Guest-Articles/2021/Scene/Frustum-Culling">
    290 								<a href="https://learnopengl.com/Guest-Articles/2021/Scene/Frustum-Culling">Frustum Culling </a>
    291 							</li>
    292 						</ol>
    293 					</li>
    294 					<li id="Guest-Articles/2021/Tessellation">
    295 						<span class="closed">Tessellation </span>
    296 						<ol>
    297 							<li id="Guest-Articles/2021/Tessellation/Height-map">
    298 								<a href="https://learnopengl.com/Guest-Articles/2021/Tessellation/Height-map">Height map </a>
    299 							</li>
    300 						</ol>
    301 					</li>
    302 				</ol>
    303 			</li>
    304 		</ol>
    305 	</li>
    306 	<li id="Code-repository">
    307 		<a href="https://learnopengl.com/Code-repository">Code repository </a>
    308 	</li>
    309 	<li id="Translations">
    310 		<a href="https://learnopengl.com/Translations">Translations </a>
    311 	</li>
    312 	<li id="About">
    313 		<a href="https://learnopengl.com/About">About </a>
    314 	</li>
    315 </ol>
    316 	</nav>
    317 	<main>
    318     <h1 id="content-title">SSAO</h1>
    319 <h1 id="content-url" style='display:none;'>Advanced-Lighting/SSAO</h1>
    320 <p>
    321   We've briefly touched on this topic in the basic lighting chapter: ambient lighting. Ambient lighting is a fixed light constant we add to the overall lighting of a scene to simulate the <def>scattering</def> of light. In reality, light scatters in all kinds of directions with varying intensities, so the indirectly lit parts of a scene should likewise have varying intensities. One type of indirect lighting approximation is <def>ambient occlusion</def>, which tries to approximate indirect lighting by darkening creases, holes, and surfaces that are close to each other. These areas are largely occluded by surrounding geometry, so light rays have fewer places to escape to and the areas appear darker. Take a look at the corners and creases of your room to see that the light there seems just a little darker. 
    322 </p>
    323 
    324 <p>
    325   Below is an example image of a scene with and without ambient occlusion. Notice how especially between the creases, the (ambient) light is more occluded:
    326 </p>
    327 
    328 <img src="/img/advanced-lighting/ssao_example.png" alt="Example image of SSAO with and without"/>
    329 
    330 <p>
    331   While not an incredibly obvious effect, the image with ambient occlusion enabled does feel a lot more realistic due to these small occlusion-like details, giving the entire scene a greater feel of depth.
    332 </p>
    333   
    334 <p>
    335   Ambient occlusion techniques are expensive as they have to take surrounding geometry into account. One could shoot a large number of rays for each point in space to determine its amount of occlusion, but that quickly becomes computationally infeasible for real-time solutions. In 2007, Crytek published a technique called <def>screen-space ambient occlusion</def> (SSAO) for use in their title <em>Crysis</em>. The technique uses a scene's depth buffer in screen-space to determine the amount of occlusion instead of real geometrical data. This approach is incredibly fast compared to real ambient occlusion and gives plausible results, making it the de facto standard for approximating real-time ambient occlusion.  
    336 </p>
    337 
    338 <p>
    339   The basics behind screen-space ambient occlusion are simple: for each fragment on a screen-filled quad we calculate an <def>occlusion factor</def> based on the fragment's surrounding depth values. The occlusion factor is then used to reduce or nullify the fragment's ambient lighting component. The occlusion factor is obtained by taking multiple depth samples in a sphere sample kernel surrounding the fragment position and comparing each of the samples with the current fragment's depth value. The number of samples that have a higher depth value than the fragment's depth represents the occlusion factor.
    340 </p>
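<p>
  The counting scheme described above can be sketched in plain C++ (this is an illustration, not the chapter's shader code; the function name, the vector-based inputs, and the depth convention spelled out in the comments are assumptions for this demo):
</p>

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy sketch of the occlusion-factor counting described above. For every
// kernel sample we have the depth stored in the scene's position buffer at
// that sample's location and the sample's own depth; following the text's
// convention, a sample counts as occluded when the buffer depth is greater
// than or equal to the sample's depth (the sample lies inside geometry).
float occlusionFactor(const std::vector<float>& bufferDepth,
                      const std::vector<float>& sampleDepth)
{
    std::size_t occluded = 0;
    for (std::size_t i = 0; i < bufferDepth.size(); ++i)
        if (bufferDepth[i] >= sampleDepth[i])
            ++occluded;
    // fraction of occluded samples; later used to scale the ambient term
    return static_cast<float>(occluded) / static_cast<float>(bufferDepth.size());
}
```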
    341 
    342 <img src="/img/advanced-lighting/ssao_crysis_circle.png" class="clean" alt="Image of circle based SSAO technique as done by Crysis"/>
    343   
    344 <p>
    345   Each of the gray depth samples that are inside geometry contribute to the total occlusion factor; the more samples we find inside geometry, the less ambient lighting the fragment should eventually receive.
    346 </p>
    347   
    348 <p>
    349   It is clear the quality and precision of the effect directly relate to the number of surrounding samples we take. If the sample count is too low, the precision drastically reduces and we get an artifact called <def>banding</def>; if it is too high, we lose performance. We can reduce the number of samples we have to test by introducing some randomness into the sample kernel. By randomly rotating the sample kernel for each fragment we can get high quality results with a much smaller number of samples. This does come at a price as the randomness introduces a noticeable <def>noise pattern</def> that we'll have to fix by blurring the results. Below is an image (courtesy of <a href="http://john-chapman-graphics.blogspot.com/" target="_blank">John Chapman</a>) showcasing the banding effect and the effect randomness has on the results:
    350 </p>
    351   
    352 <img src="/img/advanced-lighting/ssao_banding_noise.jpg" alt="The SSAO image quality with multiple samples and a blur added"/>
    353   
    354 <p>
    355   As you can see, even though we get noticeable banding on the SSAO results due to a low sample count, by introducing some randomness the banding effects are completely gone.
    356 </p>
    357   
    358 <p>
    359   The SSAO method developed by Crytek had a certain visual style. Because the sample kernel used was a sphere, it caused flat walls to look gray as half of the kernel samples end up being in the surrounding geometry. Below is an image of Crysis's screen-space ambient occlusion that clearly portrays this gray feel:
    360 </p>
    361   
    362 <img src="/img/advanced-lighting/ssao_crysis.jpg" alt="Screen space ambient occlusion in the Crysis game by Crytek showing a gray feel due to them using a sphere kernel instead of a normal oriented hemisphere sample kernel in OpenGL"/>
    363   
    364 <p>
    365   For that reason we won't be using a sphere sample kernel, but rather a hemisphere sample kernel oriented along a surface's normal vector. 
    366 </p>
    367 
    368   <img src="/img/advanced-lighting/ssao_hemisphere.png" class="clean" alt="Image of normal oriented hemisphere sample kernel for SSAO in OpenGL"/>
    369   
    370   <p>
    371     By sampling around this <def>normal-oriented hemisphere</def> we do not consider the fragment's underlying geometry to be a contribution to the occlusion factor. This removes the gray-feel of ambient occlusion and generally produces more realistic results.
    372     This chapter's technique is based on this normal-oriented hemisphere method and a slightly modified version of John Chapman's brilliant <a href="http://john-chapman-graphics.blogspot.nl/2013/01/ssao-tutorial.html" target="_blank">SSAO tutorial</a>.
    373   </p>
    374   
    375 <h2>Sample buffers</h2>
    376 <p>
    377    SSAO requires geometrical info as we need some way to determine the occlusion factor of a fragment. For each fragment, we're going to need the following data:
    378 </p>
    379     
    380 <ul>
    381   <li>A per-fragment <strong>position</strong> vector.</li>  
    382   <li>A per-fragment <strong>normal</strong> vector.</li>  
    383   <li>A per-fragment <strong>albedo</strong> color.</li>  
    384   <li>A <strong>sample kernel</strong>.</li>
    385   <li>A per-fragment <strong>random rotation</strong> vector used to rotate the sample kernel.</li>
    386 </ul>
    387 
    388 <p>
    389   Using a per-fragment view-space position we can orient a sample hemisphere kernel around the fragment's view-space surface normal and use this kernel to sample the position buffer texture at varying offsets. For each kernel sample we compare the sample's depth with the depth stored in the position buffer to determine the amount of occlusion. The resulting occlusion factor is then used to limit the final ambient lighting component. By also including a per-fragment rotation vector we can significantly reduce the number of samples we'll need to take, as we'll soon see. 
    390 </p>
    391     
    392     <img src="/img/advanced-lighting/ssao_overview.png" class="clean" alt="An overview of the SSAO screen-space OpenGL technique"/>
    393       
    394 <p>
    395   As SSAO is a screen-space technique we calculate its effect on each fragment on a screen-filled 2D quad. This does mean we have no geometrical information of the scene. What we could do, is render the geometrical per-fragment data into screen-space textures that we then later send to the SSAO shader so we have access to the per-fragment geometrical data. If you've followed along with the previous chapter you'll realize this looks quite like a deferred renderer's G-buffer setup. For that reason SSAO is perfectly suited in combination with deferred rendering as we already have the position and normal vectors in the G-buffer.
    396 </p>
    397       
    398 <note>
    399   In this chapter we're going to implement SSAO on top of a slightly simplified version of the deferred renderer from the <a href="https://learnopengl.com/Advanced-Lighting/Deferred-Shading" target="_blank">deferred shading</a> chapter. If you're not sure what deferred shading is, be sure to first read up on that.
    400  </note>
    401       
    402 <p>
    403   As we should have per-fragment position and normal data available from the scene objects, the fragment shader of the geometry stage is fairly simple: 
    404 </p>
    405       
    406 <pre><code>
    407 #version 330 core
    408 layout (location = 0) out vec3 gPosition;
    409 layout (location = 1) out vec3 gNormal;
    410 layout (location = 2) out vec4 gAlbedoSpec;
    411 
    412 in vec2 TexCoords;
    413 in vec3 FragPos;
    414 in vec3 Normal;
    415 
    416 void main()
    417 {    
    418     // store the fragment position vector in the first gbuffer texture
    419     gPosition = FragPos;
    420     // also store the per-fragment normals into the gbuffer
    421     gNormal = normalize(Normal);
    422     // and the diffuse per-fragment color, ignore specular
    423     gAlbedoSpec.rgb = vec3(0.95);
    424 }  
    425 </code></pre>
    426       
    427 <p>
    428   Since SSAO is a screen-space technique where occlusion is calculated from the visible view, it makes sense to implement the algorithm in view-space. Therefore, <var>FragPos</var> and <var>Normal</var> as supplied by the geometry stage's vertex shader are transformed to view space (multiplied by the view matrix as well). 
    429 </p>
    430       
    431 <note>
    432   It is possible to reconstruct the position vectors from depth values alone, using some clever tricks as Matt Pettineo described in his <a href="https://mynameismjp.wordpress.com/2010/09/05/position-from-depth-3/" target="_blank">blog</a>. This requires a few extra calculations in the shaders, but saves us from having to store position data in the G-buffer (which costs a lot of memory). For the sake of a simpler example, we'll leave these optimizations out of the chapter.
    433 </note>
    434       
    435 <p>
    436   The <var>gPosition</var> color buffer texture is configured as follows:
    437 </p>
    438       
    439 <pre><code>
    440 <function id='50'>glGenTextures</function>(1, &gPosition);
    441 <function id='48'>glBindTexture</function>(GL_TEXTURE_2D, gPosition);
    442 <function id='52'>glTexImage2D</function>(GL_TEXTURE_2D, 0, GL_RGBA16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGBA, GL_FLOAT, NULL);
    443 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    444 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    445 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    446 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);  
    447 </code></pre>
    448       
    449 <p>
    450   This gives us a position texture that we can use to obtain depth values for each of the kernel samples. Note that we store the positions in a floating point data format; this way position values aren't clamped to [<code>0.0</code>,<code>1.0</code>] and we get the higher precision we need. Also note the texture wrapping method of <var>GL_CLAMP_TO_EDGE</var>. This ensures we don't accidentally oversample position/depth values in screen-space outside the texture's default coordinate region.
    451 </p>
    452       
    453 <p>
    454   Next, we need the actual hemisphere sample kernel and some method to randomly rotate it.
    455 </p>
    456       
    457 <h2>Normal-oriented hemisphere</h2>
    458 <p>
    459   We need to generate a number of samples oriented along the normal of a surface. As we briefly discussed at the start of this chapter, we want to generate samples that form a hemisphere. As it is neither practical nor plausible to generate a sample kernel for each surface normal direction, we're going to generate a sample kernel in <a href="https://learnopengl.com/Advanced-Lighting/Normal-Mapping" target="_blank">tangent space</a>, with the normal vector pointing in the positive z direction.
    460 </p>
    461       
    462       <img src="/img/advanced-lighting/ssao_hemisphere.png" class="clean" alt="Image of normal oriented hemisphere sample kernel for use in SSAO in OpenGL"/>
    463         
    464 <p>
    465   Assuming we have a unit hemisphere, we can obtain a sample kernel with a maximum of <code>64</code> sample values as follows: 
    466 </p>
    467         
    468 <pre><code>
    469 std::uniform_real_distribution&lt;float&gt; randomFloats(0.0, 1.0); // random floats between [0.0, 1.0]
    470 std::default_random_engine generator;
    471 std::vector&lt;glm::vec3&gt; ssaoKernel;
    472 for (unsigned int i = 0; i &lt; 64; ++i)
    473 {
    474     glm::vec3 sample(
    475         randomFloats(generator) * 2.0 - 1.0, 
    476         randomFloats(generator) * 2.0 - 1.0, 
    477         randomFloats(generator)
    478     );
    479     sample  = glm::normalize(sample);
    480     sample *= randomFloats(generator);
    481     ssaoKernel.push_back(sample);  
    482 }
    483 </code></pre>
    484         
    485 <p>
    486   We vary the <code>x</code> and <code>y</code> direction in tangent space between <code>-1.0</code> and <code>1.0</code>, and vary the <code>z</code> direction of the samples between <code>0.0</code> and <code>1.0</code> (if we varied the <code>z</code> direction between <code>-1.0</code> and <code>1.0</code> as well we'd have a sphere sample kernel). As the sample kernel will be oriented along the surface normal, the resulting sample vectors will all end up in the hemisphere.
    487 </p>
    488         
    489 <p>
    490   Currently, all samples are randomly distributed in the sample kernel, but we'd rather place a larger weight on occlusions close to the actual fragment. We want to distribute more kernel samples closer to the origin. We can do this with an accelerating interpolation function:
    491 </p>
    492         
    493 <pre><code>
    494    float scale = (float)i / 64.0; 
    495    scale   = lerp(0.1f, 1.0f, scale * scale);
    496    sample *= scale;
    497    ssaoKernel.push_back(sample);  
    498 }
    499 </code></pre>
    500         
    501 <p>
    502   Where <fun>lerp</fun> is defined as:
    503 </p>
    504         
    505 <pre><code>
    506 float lerp(float a, float b, float f)
    507 {
    508     return a + f * (b - a);
    509 }  
    510 </code></pre>
    511         
    512 <p>
    513   This gives us a kernel distribution that places most samples closer to its origin.
    514 </p>
    515         
    516 <img src="/img/advanced-lighting/ssao_kernel_weight.png" class="clean" alt="SSAO sample kernels (normal-oriented hemisphere) with samples aligned more closely to the fragment's center position in OpenGL"/>
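<p>
  Putting the two snippets together, the whole kernel generation can be written as a standalone sketch. The tiny <code>Vec3</code> struct below stands in for <code>glm::vec3</code> so the sample compiles without GLM; the function name and parameter are illustrative assumptions:
</p>

```cpp
#include <cmath>
#include <random>
#include <vector>

// Minimal stand-in for glm::vec3 (an assumption so this demo needs no GLM).
struct Vec3 { float x, y, z; };

float lerp(float a, float b, float f) { return a + f * (b - a); }

// Generate `count` hemisphere samples in tangent space (z >= 0), with an
// accelerating lerp so most samples cluster near the kernel's origin.
std::vector<Vec3> makeSsaoKernel(unsigned int count)
{
    std::uniform_real_distribution<float> randomFloats(0.0f, 1.0f);
    std::default_random_engine generator;
    std::vector<Vec3> kernel;
    for (unsigned int i = 0; i < count; ++i)
    {
        Vec3 s{ randomFloats(generator) * 2.0f - 1.0f,
                randomFloats(generator) * 2.0f - 1.0f,
                randomFloats(generator) };             // z in [0, 1]: hemisphere
        float len = std::sqrt(s.x * s.x + s.y * s.y + s.z * s.z);
        s = { s.x / len, s.y / len, s.z / len };       // normalize onto the sphere
        float r = randomFloats(generator);             // random radius inside it
        float scale = (float)i / (float)count;
        scale = lerp(0.1f, 1.0f, scale * scale);       // bias samples toward origin
        s = { s.x * r * scale, s.y * r * scale, s.z * r * scale };
        kernel.push_back(s);
    }
    return kernel;
}
```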
    517   
    518           
    519 <p>
    520   Each of the kernel samples will be used to offset the view-space fragment position to sample surrounding geometry. We do need quite a lot of samples in view-space in order to get realistic results, which may be too heavy on performance. However, if we can introduce some semi-random rotation/noise on a per-fragment basis, we can significantly reduce the number of samples required.
    521 </p>
    522   
    523 <h2>Random kernel rotations</h2>
    524 <p>
    525   By introducing some randomness into the sample kernels we largely reduce the number of samples necessary to get good results. We could create a random rotation vector for each fragment of a scene, but that quickly eats up memory. It makes more sense to create a small texture of random rotation vectors that we tile over the screen.
    526 </p>
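<p>
  With <var>GL_REPEAT</var> wrapping (set up further below), the small texture repeats across the whole screen, so every fragment effectively reads the noise texel at its pixel coordinate modulo the texture size. A small C++ sketch of that mapping (the function name and pixel-based lookup are illustrative assumptions, not GL code):
</p>

```cpp
#include <utility>

// Which texel of an NxN repeating noise texture a fragment at pixel (px, py)
// samples: GL_REPEAT wraps the coordinate, which reduces to a modulo.
std::pair<int, int> noiseTexel(int px, int py, int noiseSize = 4)
{
    return { px % noiseSize, py % noiseSize };
}
```

<p>
  This is why a 4x4 texture suffices: fragments 4 pixels apart reuse the same rotation vector, and the resulting repetitive pattern is later hidden by the blur pass.
</p>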
    527   
    528 <p>
    529   We create a 4x4 array of random rotation vectors oriented around the tangent-space surface normal:
    530 </p>
    531   
    532 <pre><code>
    533 std::vector&lt;glm::vec3&gt; ssaoNoise;
    534 for (unsigned int i = 0; i &lt; 16; i++)
    535 {
    536     glm::vec3 noise(
    537         randomFloats(generator) * 2.0 - 1.0, 
    538         randomFloats(generator) * 2.0 - 1.0, 
    539         0.0f); 
    540     ssaoNoise.push_back(noise);
    541 }  
    542 </code></pre>
    543       
    544 <p>
    545   As the sample kernel is oriented along the positive z direction in tangent space, we leave the <code>z</code> component at <code>0.0</code> so we rotate around the <code>z</code> axis.
    546 </p>
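<p>
  To see why a zero <code>z</code> component implies a rotation about the <code>z</code> axis: such a rotation only moves a sample's <code>x</code> and <code>y</code> components, so the sample stays inside the normal-oriented hemisphere. The helper below is a minimal numeric illustration of this (the shader itself applies the rotation differently, via a basis built from the noise vector, so treat this as a demonstration only):
</p>

```cpp
#include <cmath>

// Minimal stand-in vector type (an assumption for this demo).
struct Vec3 { float x, y, z; };

// Rotate a tangent-space vector about the z axis by `angle` radians.
// Note the z component passes through unchanged.
Vec3 rotateAroundZ(Vec3 v, float angle)
{
    float c = std::cos(angle), s = std::sin(angle);
    return { c * v.x - s * v.y, s * v.x + c * v.y, v.z };
}
```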
    547   
    548 <p>
    549   We then create a 4x4 texture that holds the random rotation vectors; make sure to set its wrapping method to <var>GL_REPEAT</var> so it properly tiles over the screen.
    550 </p>
    551   
    552 <pre><code>
    553 unsigned int noiseTexture; 
    554 <function id='50'>glGenTextures</function>(1, &noiseTexture);
    555 <function id='48'>glBindTexture</function>(GL_TEXTURE_2D, noiseTexture);
    556 <function id='52'>glTexImage2D</function>(GL_TEXTURE_2D, 0, GL_RGBA16F, 4, 4, 0, GL_RGB, GL_FLOAT, &ssaoNoise[0]);
    557 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    558 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    559 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    560 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);  
    561 </code></pre>
    562   
    563 <p>
    564   We now have all the relevant input data we need to implement SSAO.
    565 </p>
    566   
    567 <h2>The SSAO shader</h2>
    568 <p>
    569    The SSAO shader runs on a 2D screen-filled quad that calculates the occlusion value for each of its fragments. As we need to store the result of the SSAO stage (for use in the final lighting shader), we create yet another framebuffer object:
    570   </p>
    571   
    572 <pre><code>
    573 unsigned int ssaoFBO;
    574 <function id='76'>glGenFramebuffers</function>(1, &ssaoFBO);  
    575 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, ssaoFBO);
    576   
    577 unsigned int ssaoColorBuffer;
    578 <function id='50'>glGenTextures</function>(1, &ssaoColorBuffer);
    579 <function id='48'>glBindTexture</function>(GL_TEXTURE_2D, ssaoColorBuffer);
    580 <function id='52'>glTexImage2D</function>(GL_TEXTURE_2D, 0, GL_RED, SCR_WIDTH, SCR_HEIGHT, 0, GL_RED, GL_FLOAT, NULL);
    581 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    582 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    583   
    584 <function id='81'>glFramebufferTexture2D</function>(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, ssaoColorBuffer, 0);  
    585 </code></pre>
    586   
    587 <p>
    588   As the ambient occlusion result is a single grayscale value we'll only need a texture's red component, so we set the color buffer's internal format to <var>GL_RED</var>. 
    589 </p>
    590   
    591 <p>
    592   The complete process for rendering SSAO then looks a bit like this:
    593 </p>
    594   
    595 <pre><code>
    596 // geometry pass: render stuff into G-buffer
    597 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, gBuffer);
    598     [...]
    599 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, 0);  
    600   
    601 // use G-buffer to render SSAO texture
    602 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, ssaoFBO);
    603     <function id='10'>glClear</function>(GL_COLOR_BUFFER_BIT);    
    604     <function id='49'>glActiveTexture</function>(GL_TEXTURE0);
    605     <function id='48'>glBindTexture</function>(GL_TEXTURE_2D, gPosition);
    606     <function id='49'>glActiveTexture</function>(GL_TEXTURE1);
    607     <function id='48'>glBindTexture</function>(GL_TEXTURE_2D, gNormal);
    608     <function id='49'>glActiveTexture</function>(GL_TEXTURE2);
    609     <function id='48'>glBindTexture</function>(GL_TEXTURE_2D, noiseTexture);
    610     shaderSSAO.use();
    611     SendKernelSamplesToShader();
    612     shaderSSAO.setMat4("projection", projection);
    613     RenderQuad();
    614 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, 0);
    615   
    616 // lighting pass: render scene lighting
    617 <function id='10'>glClear</function>(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    618 shaderLightingPass.use();
    619 [...]
    620 <function id='49'>glActiveTexture</function>(GL_TEXTURE3);
    621 <function id='48'>glBindTexture</function>(GL_TEXTURE_2D, ssaoColorBuffer);
    622 [...]
    623 RenderQuad();  
    624 </code></pre>
    625   
    626 <p>
    627   The <var>shaderSSAO</var> shader takes as input the relevant G-buffer textures, the noise texture, and the normal-oriented hemisphere kernel samples:
    628 </p>
    629   
    630 <pre><code>
    631 #version 330 core
    632 out float FragColor;
    633   
    634 in vec2 TexCoords;
    635 
    636 uniform sampler2D gPosition;
    637 uniform sampler2D gNormal;
    638 uniform sampler2D texNoise;
    639 
    640 uniform vec3 samples[64];
    641 uniform mat4 projection;
    642 
    643 // tile noise texture over screen, based on screen dimensions divided by noise size
    644 const vec2 noiseScale = vec2(800.0/4.0, 600.0/4.0); // screen = 800x600
    645 
    646 void main()
    647 {
    648     [...]
    649 }
    650 </code></pre>
    651   
    652 <p>
    653   Interesting to note here is the <var>noiseScale</var> variable. We want to tile the noise texture all over the screen, but as the <var>TexCoords</var> vary between <code>0.0</code> and <code>1.0</code>, the <var>texNoise</var> texture won't tile at all. So we'll calculate the required amount to scale <var>TexCoords</var> by dividing the screen's dimensions by the noise texture size.
    654 </p>
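<p>
  If the framebuffer can be resized, this scale doesn't have to be hard-coded; it could be computed from the actual dimensions and uploaded as a uniform. A hypothetical helper (assuming the <fun>Shader</fun> class's <fun>setVec2</fun> from earlier chapters):
</p>

```cpp
// Scale factor that makes a noiseSize x noiseSize texture repeat
// once per noiseSize pixels of screen (hypothetical helper).
struct Vec2 { float x, y; };

Vec2 computeNoiseScale(int screenWidth, int screenHeight, int noiseSize)
{
    return { screenWidth  / (float)noiseSize,
             screenHeight / (float)noiseSize };
}
// usage: shaderSSAO.setVec2("noiseScale",
//            computeNoiseScale(SCR_WIDTH, SCR_HEIGHT, 4));
```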
    655   
    656 <pre><code>
    657 vec3 fragPos   = texture(gPosition, TexCoords).xyz;
    658 vec3 normal    = texture(gNormal, TexCoords).rgb;
    659 vec3 randomVec = texture(texNoise, TexCoords * noiseScale).xyz;  
    660 </code></pre>
    661   
    662 <p>
    663   As we set the tiling parameters of <var>texNoise</var> to <var>GL_REPEAT</var>, the random values will be repeated all over the screen. Together with the <var>fragPos</var> and <var>normal</var> vector, we then have enough data to create a TBN matrix that transforms any vector from tangent-space to view-space:
    664 </p>
    665   
    666 <pre><code>
    667 vec3 tangent   = normalize(randomVec - normal * dot(randomVec, normal));
    668 vec3 bitangent = cross(normal, tangent);
    669 mat3 TBN       = mat3(tangent, bitangent, normal);  
    670 </code></pre>
    671   
    672 <p>
    673   Using a process called the <def>Gram-Schmidt process</def> we create an orthogonal basis, each time slightly tilted based on the value of <var>randomVec</var>. Note that because we use a random vector for constructing the tangent vector, there is no need to have the TBN matrix exactly aligned to the geometry's surface and thus no need for per-vertex tangent (and bitangent) vectors.
    674 </p>
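<p>
  The Gram-Schmidt step is easy to verify in isolation. A CPU-side sketch (plain structs instead of GLSL types; all names hypothetical) subtracts from <var>randomVec</var> its component along the normal, leaving a tangent that is exactly perpendicular to it:
</p>

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Gram-Schmidt: remove randomVec's component along normal
// so the remainder is orthogonal to normal.
Vec3 makeTangent(Vec3 randomVec, Vec3 normal)
{
    float d = dot(randomVec, normal);
    return normalize({ randomVec.x - normal.x * d,
                       randomVec.y - normal.y * d,
                       randomVec.z - normal.z * d });
}
```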
    675   
    676 <p>
    677   Next we iterate over each of the kernel samples, transform the samples from tangent to view-space, add them to the current fragment position, and compare the fragment position's depth with the sample depth stored in the view-space position buffer. Let's discuss this in a step-by-step fashion:
    678 </p>
    679   
    680 <pre><code>
    681 float occlusion = 0.0;
    682 for(int i = 0; i &lt; kernelSize; ++i)
    683 {
    684     // get sample position
    685     vec3 samplePos = TBN * samples[i]; // from tangent to view-space
    686     samplePos = fragPos + samplePos * radius; 
    687     
    688     [...]
    689 }  
    690 </code></pre>
    691   
    692 <p>
    693   Here <var>kernelSize</var> and <var>radius</var> are variables that we can use to tweak the effect; in this case a value of <var>64</var> and <var>0.5</var> respectively.
    694   For each iteration we first transform the respective sample to view-space. We then scale the view-space kernel offset by <var>radius</var> to increase (or decrease) the effective sample radius of SSAO, and add the result to the view-space fragment position.
    695 </p>
    696   
    697 <p>
    698   Next we want to transform <var>samplePos</var> to screen-space so we can sample the position/depth value of <var>samplePos</var> as if we were rendering its position directly to the screen. As the vector is currently in view-space, we'll transform it to clip-space first using the <var>projection</var> matrix uniform:
    699 </p>
    700   
    701 <pre><code>
    702 vec4 offset = vec4(samplePos, 1.0);
    703 offset      = projection * offset;    // from view to clip-space
    704 offset.xyz /= offset.w;               // perspective divide
    705 offset.xyz  = offset.xyz * 0.5 + 0.5; // transform to range 0.0 - 1.0  
    706 </code></pre>
    707   
    708 <p>
    709   After the variable is transformed to clip-space, we perform the perspective divide step by dividing its <code>xyz</code> components by its <code>w</code> component. The resulting normalized device coordinates are then transformed to the [<code>0.0</code>, <code>1.0</code>] range so we can use them to sample the position texture:
    710 </p>
    711   
    712 <pre><code>
    713 float sampleDepth = texture(gPosition, offset.xy).z; 
    714 </code></pre>
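<p>
  The full view-to-screen transform can be checked on the CPU. This sketch uses a hand-rolled perspective matrix (the same formula <fun>glm::perspective</fun> implements; helper names are hypothetical) and exploits the zero entries of a perspective matrix when multiplying:
</p>

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[4][4]; }; // column-major: m[col][row], like glm

Mat4 perspective(float fovy, float aspect, float zNear, float zFar)
{
    float f = 1.0f / std::tan(fovy / 2.0f);
    Mat4 p = {};
    p.m[0][0] = f / aspect;
    p.m[1][1] = f;
    p.m[2][2] = (zFar + zNear) / (zNear - zFar);
    p.m[2][3] = -1.0f;
    p.m[3][2] = (2.0f * zFar * zNear) / (zNear - zFar);
    return p;
}

// view-space -> clip-space -> perspective divide -> [0.0, 1.0]
Vec3 viewToScreen(const Mat4& p, Vec3 v)
{
    float cx = p.m[0][0] * v.x;
    float cy = p.m[1][1] * v.y;
    float cz = p.m[2][2] * v.z + p.m[3][2];
    float cw = p.m[2][3] * v.z;          // w = -z in view-space
    return { (cx / cw) * 0.5f + 0.5f,
             (cy / cw) * 0.5f + 0.5f,
             (cz / cw) * 0.5f + 0.5f };
}
```

<p>
  A point straight ahead of the camera, e.g. <code>(0.0, 0.0, -10.0)</code>, ends up at the screen center <code>(0.5, 0.5)</code> with a depth between <code>0.0</code> and <code>1.0</code>.
</p>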
    715 
    716 <p>
    717   We use the <var>offset</var> vector's <code>x</code> and <code>y</code> components to sample the position texture and retrieve the depth (or <code>z</code> value) of the sample position as seen from the viewer's perspective (the first non-occluded visible fragment). We then check if the stored depth value is larger than or equal to the sample's depth value; if so, the sample lies inside geometry and we add to the final occlusion factor:
    718 </p>
    719   
    720 <pre class="cpp"><code>
    721 occlusion += (sampleDepth >= samplePos.z + bias ? 1.0 : 0.0);  
    722 </code></pre>
    723   
    724 <p>
    725   Note that we add a small <code>bias</code> here to the original fragment's depth value (set to <code>0.025</code> in this example). A bias isn't always necessary, but it helps visually tweak the SSAO effect and solves acne effects that may occur based on the scene's complexity.  
    726 </p>
    727   
    728 <p>
    729   We're not completely finished yet as there is still a small issue we have to take into account. Whenever a fragment is tested for ambient occlusion that is aligned close to the edge of a surface, it will also consider depth values of surfaces far behind the test surface; these values will (incorrectly) contribute to the occlusion factor. We can solve this by introducing a range check as the following image (courtesy of <a href="http://john-chapman-graphics.blogspot.com/" target="_blank">John Chapman</a>) illustrates:
    730 </p>
    731   
    732   <img src="/img/advanced-lighting/ssao_range_check.png" alt="Image with and without range check of SSAO surface in OpenGL"/>
    733     
    734 <p>
    735   We introduce a range check that makes sure a fragment only contributes to the occlusion factor if its depth value is within the sample radius. We change the last line to:
    736 </p>
    737     
    738 <pre><code>
    739 float rangeCheck = smoothstep(0.0, 1.0, radius / abs(fragPos.z - sampleDepth));
    740 occlusion       += (sampleDepth >= samplePos.z + bias ? 1.0 : 0.0) * rangeCheck;         
    741 </code></pre>
    742   
    743 <p>
    744   Here we used GLSL's <fun>smoothstep</fun> function that smoothly interpolates its third parameter over the range set by its first and second parameter, returning <code>0.0</code> if the third parameter is less than or equal to the first and <code>1.0</code> if it is greater than or equal to the second. Once the depth difference grows beyond <var>radius</var>, its contribution is smoothly scaled from <code>1.0</code> down to <code>0.0</code> by the following curve:
    745 </p>
    746     
    747     <img src="/img/advanced-lighting/ssao_smoothstep.png" class="clean" alt="Image of smoothstep function in OpenGL used for rangecheck in SSAO in OpenGL"/>
    748       
    749 <p>
    750   If we were to use a hard cut-off range check that would abruptly remove occlusion contributions if the depth values are outside <var>radius</var>, we'd see obvious (unattractive) borders where the range check is applied. 
    751 </p>
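<p>
  The fall-off is easy to reproduce on the CPU; a sketch of GLSL's <fun>smoothstep</fun> (clamp the ratio, then evaluate 3t&sup2; - 2t&sup3;) shows full contribution inside <var>radius</var> fading out smoothly beyond it:
</p>

```cpp
#include <algorithm>
#include <cmath>

// CPU version of GLSL's smoothstep: clamp t to [0, 1], then 3t^2 - 2t^3.
float smoothstepf(float edge0, float edge1, float x)
{
    float t = std::min(std::max((x - edge0) / (edge1 - edge0), 0.0f), 1.0f);
    return t * t * (3.0f - 2.0f * t);
}

// Occlusion weight of a sample whose depth differs from the
// fragment's by depthDiff, given the SSAO sample radius.
float rangeCheck(float radius, float depthDiff)
{
    return smoothstepf(0.0f, 1.0f, radius / std::fabs(depthDiff));
}
```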
    752       
    753 <p>
    754   As a final step we normalize the occlusion contribution by the size of the kernel and output the results. Note that we subtract the occlusion factor from <code>1.0</code> so we can directly use the occlusion factor to scale the ambient lighting component.
    755 </p>
    756       
    757 <pre class="cpp"><code>
    758 }
    759 occlusion = 1.0 - (occlusion / kernelSize);
    760 FragColor = occlusion;  
    761 </code></pre>
    762       
    763 <p>
    764   If we'd imagine a scene where our favorite backpack model is taking a little nap, the ambient occlusion shader produces the following texture:
    765 </p>
    766       
    767 <img src="/img/advanced-lighting/ssao_without_blur.png" class="clean" alt="Image of SSAO shader result in OpenGL"/>
    768         
    769 <p>
    770   As we can see, ambient occlusion gives a great sense of depth. With just the ambient occlusion texture we can already clearly see the model is indeed lying on the floor, instead of hovering slightly above it. 
    771 </p>
    772         
    773 <p>
    774   It still doesn't look perfect, as the repeating pattern of the noise texture is clearly visible. To create a smooth ambient occlusion result we need to blur the ambient occlusion texture.
    775 </p>
    776         
    777 <h2>Ambient occlusion blur</h2>
    778 <p>
    779   Between the SSAO pass and the lighting pass, we first want to blur the SSAO texture. So let's create yet another framebuffer object for storing the blur result:
    780 </p>
    781         
    782 <pre><code>
    783 unsigned int ssaoBlurFBO, ssaoColorBufferBlur;
    784 <function id='76'>glGenFramebuffers</function>(1, &ssaoBlurFBO);
    785 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, ssaoBlurFBO);
    786 <function id='50'>glGenTextures</function>(1, &ssaoColorBufferBlur);
    787 <function id='48'>glBindTexture</function>(GL_TEXTURE_2D, ssaoColorBufferBlur);
    788 <function id='52'>glTexImage2D</function>(GL_TEXTURE_2D, 0, GL_RED, SCR_WIDTH, SCR_HEIGHT, 0, GL_RED, GL_FLOAT, NULL);
    789 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    790 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    791 <function id='81'>glFramebufferTexture2D</function>(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, ssaoColorBufferBlur, 0);
    792 </code></pre>
    793         
    794 <p>
    795   Because the tiled random vector texture gives us a consistent randomness, we can use this property to our advantage to create a simple blur shader:
    796 </p>
    797         
    798 <pre><code>
    799 #version 330 core
    800 out float FragColor;
    801   
    802 in vec2 TexCoords;
    803   
    804 uniform sampler2D ssaoInput;
    805 
    806 void main() {
    807     vec2 texelSize = 1.0 / vec2(textureSize(ssaoInput, 0));
    808     float result = 0.0;
    809     for (int x = -2; x &lt; 2; ++x) 
    810     {
    811         for (int y = -2; y &lt; 2; ++y) 
    812         {
    813             vec2 offset = vec2(float(x), float(y)) * texelSize;
    814             result += texture(ssaoInput, TexCoords + offset).r;
    815         }
    816     }
    817     FragColor = result / (4.0 * 4.0);
    818 }  
    819 </code></pre>
    820         
    821 <p>
    822   Here we traverse the surrounding SSAO texels between <code>-2.0</code> and <code>2.0</code>, sampling the SSAO texture a number of times equal to the noise texture's dimensions (4x4 = 16 samples). We offset each texture coordinate by the exact size of a single texel using <fun>textureSize</fun>, which returns a <code>vec2</code> of the given texture's dimensions. We average the obtained results to get a simple, but effective blur:
    823 </p>
    824         
    825         <img src="/img/advanced-lighting/ssao.png" class="clean" alt="Image of SSAO texture with blur applied in OpenGL"/>
    826           
    827 <p>
    828   And there we go, a texture with per-fragment ambient occlusion data; ready for use in the lighting pass.
    829 </p>
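<p>
  The blur is equally easy to verify on the CPU. This hypothetical reference walks the same <code>-2..1</code> offsets over a single-channel image, clamping coordinates at the borders instead of relying on texture addressing:
</p>

```cpp
#include <algorithm>
#include <vector>

// 4x4 box blur of a single-channel image at pixel (px, py),
// mirroring the shader's x, y in [-2, 1] loop.
float blurTexel(const std::vector<float>& img, int w, int h, int px, int py)
{
    float result = 0.0f;
    for (int x = -2; x < 2; ++x)
        for (int y = -2; y < 2; ++y)
        {
            int sx = std::min(std::max(px + x, 0), w - 1);
            int sy = std::min(std::max(py + y, 0), h - 1);
            result += img[sy * w + sx];
        }
    return result / (4.0f * 4.0f);
}
```

<p>
  Blurring a constant image leaves it unchanged, which is a quick sanity check that the 16 samples are weighted correctly.
</p>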
    830           
    831 <h2>Applying ambient occlusion</h2>
    832 <p>
    833   Applying the occlusion factors to the lighting equation is incredibly easy: all we have to do is multiply the lighting's ambient component by the per-fragment ambient occlusion factor and we're done. If we take the Blinn-Phong deferred lighting shader of the previous chapter and adjust it a bit, we get the following fragment shader: 
    834 </p>                  
    835           
    836 <pre><code>
    837 #version 330 core
    838 out vec4 FragColor;
    839   
    840 in vec2 TexCoords;
    841 
    842 uniform sampler2D gPosition;
    843 uniform sampler2D gNormal;
    844 uniform sampler2D gAlbedo;
    845 uniform sampler2D ssao;
    846 
    847 struct Light {
    848     vec3 Position;
    849     vec3 Color;
    850     
    851     float Linear;
    852     float Quadratic;
    853     float Radius;
    854 };
    855 uniform Light light;
    856 
    857 void main()
    858 {             
    859     // retrieve data from gbuffer
    860     vec3 FragPos = texture(gPosition, TexCoords).rgb;
    861     vec3 Normal = texture(gNormal, TexCoords).rgb;
    862     vec3 Diffuse = texture(gAlbedo, TexCoords).rgb;
    863     float AmbientOcclusion = texture(ssao, TexCoords).r;
    864     
    865     // blinn-phong (in view-space)
    866     vec3 ambient = vec3(0.3 * Diffuse * AmbientOcclusion); // here we add occlusion factor
    867     vec3 lighting  = ambient; 
    868     vec3 viewDir  = normalize(-FragPos); // viewpos is (0.0, 0.0, 0.0) in view-space
    869     // diffuse
    870     vec3 lightDir = normalize(light.Position - FragPos);
    871     vec3 diffuse = max(dot(Normal, lightDir), 0.0) * Diffuse * light.Color;
    872     // specular
    873     vec3 halfwayDir = normalize(lightDir + viewDir);  
    874     float spec = pow(max(dot(Normal, halfwayDir), 0.0), 8.0);
    875     vec3 specular = light.Color * spec;
    876     // attenuation
    877     float dist = length(light.Position - FragPos);
    878     float attenuation = 1.0 / (1.0 + light.Linear * dist + light.Quadratic * dist * dist);
    879     diffuse  *= attenuation;
    880     specular *= attenuation;
    881     lighting += diffuse + specular;
    882 
    883     FragColor = vec4(lighting, 1.0);
    884 }
    885 </code></pre>
    886           
    887 <p>
    888   The only thing (aside from the change to view-space) we really changed is the multiplication of the scene's ambient component by <var>AmbientOcclusion</var>. With a single blue-ish point light in the scene we'd get the following result:
    889 </p>
    890           
    891 <img src="/img/advanced-lighting/ssao_final.png" class="clean" alt="Image of SSAO applied in OpenGL"/>
    892             
    893 <p>
    894   You can find the full source code of the demo scene <a href="/code_viewer_gh.php?code=src/5.advanced_lighting/9.ssao/ssao.cpp" target="_blank">here</a>.
    895 </p>
    896               
    897 <!--<ul>
    898   <li><strong>geometry</strong>: <a href="/code_viewer.php?code=advanced-lighting/ssao_geometry&type=vertex" target="_blank">vertex</a>, <a href="/code_viewer.php?code=advanced-lighting/ssao_geometry&type=fragment" target="_blank">fragment</a>.</li>
    899  <li><strong>SSAO</strong>: <a href="/code_viewer.php?code=advanced-lighting/ssao&type=vertex" target="_blank">vertex</a>, <a href="/code_viewer.php?code=advanced-lighting/ssao&type=fragment" target="_blank">fragment</a>.</li>
    900   <li><strong>blur</strong>: <a href="/code_viewer.php?code=advanced-lighting/ssao&type=vertex" target="_blank">vertex</a>, <a href="/code_viewer.php?code=advanced-lighting/ssao_blur&type=fragment" target="_blank">fragment</a>.</li>
    901   <li><strong>lighting</strong>: <a href="/code_viewer.php?code=advanced-lighting/ssao&type=vertex" target="_blank">vertex</a>, <a href="/code_viewer.php?code=advanced-lighting/ssao_lighting&type=fragment" target="_blank">fragment</a>.</li>
    902 </ul>
    903 -->
    904   
    905 <p>
    906   Screen-space ambient occlusion is a highly customizable effect that relies heavily on tweaking its parameters based on the type of scene. There is no perfect combination of parameters for every type of scene. Some scenes only work with a small radius, while other scenes require a larger radius and a larger sample count for them to look realistic. The current demo uses <code>64</code> samples, which is a bit much; play around with a smaller kernel size and try to get good results.
    907 </p>
    908             
    909 <p>
    910   Some parameters you can tweak (by using uniforms for example): kernel size, radius, bias, and/or the size of the noise kernel. You can also raise the final occlusion value to a user-defined power to increase its strength:
    911 </p>
    912             
    913 <pre><code>
    914 occlusion = 1.0 - (occlusion / kernelSize);       
    915 FragColor = pow(occlusion, power);
    916 </code></pre>
    917             
    918 <p>
    919   Play around with different scenes and different parameters to appreciate the customizability of SSAO.</p>
    920             
    921 <p>
    922   Even though SSAO is a subtle effect that isn't too clearly noticeable, it adds a great deal of realism to properly lit scenes and is definitely a technique you'd want to have in your toolkit.
    923 </p>
    924             
    925 <h2>Additional resources</h2>
    926 <ul>
    927     <li><a href="http://john-chapman-graphics.blogspot.nl/2013/01/ssao-tutorial.html" target="_blank">SSAO Tutorial</a>: excellent SSAO tutorial by John Chapman; a large portion of this chapter's code and techniques are based on his article.</li> 
    928   <li><a href="https://mtnphil.wordpress.com/2013/06/26/know-your-ssao-artifacts/" target="_blank">Know your SSAO artifacts</a>: great article about improving SSAO specific artifacts.</li>
    929   <li><a href="http://ogldev.atspace.co.uk/www/tutorial46/tutorial46.html" target="_blank">SSAO With Depth Reconstruction</a>: extension tutorial on top of SSAO from OGLDev about reconstructing position vectors from depth alone, saving us from storing the expensive position vectors in the G-buffer.</li>
    930  </ul>         
    931            
    932 
    933     </div>
    934     
    935     <div id="hover">
    936         HI
    937     </div>
    940 
    941    <div id="disqus_thread"></div>
    942 
    943     
    944 
    945 
    946 </div> <!-- container div -->
    947 
    948 
    949 </div> <!-- super container div -->
    950 </body>
    951 </html>
    952 	</main>
    953 </body>
    954 </html>