LearnOpenGL

Translation in progress of learnopengl.com.
git clone https://git.mtkn.jp/LearnOpenGL



      1 <!DOCTYPE html>
      2 <html lang="ja"> 
      3 <head>
      4     <meta charset="utf-8"/>
      5     <title>LearnOpenGL</title>
      6     <link rel="shortcut icon" type="image/ico" href="/favicon.ico"  />
      7 	<link rel="stylesheet" href="../static/style.css" />
      8 	<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js"> </script>
      9 	<script src="/static/functions.js"></script>
     10 </head>
     11 <body>
     12 	<nav>
     13 <ol>
     14 	<li id="Introduction">
     15 		<a href="https://learnopengl.com/Introduction">はじめに</a>
     16 	</li>
     17 	<li id="Getting-started">
     18 		<span class="closed">入門</span>
     19 		<ol>
     20 			<li id="Getting-started/OpenGL">
     21 				<a href="https://learnopengl.com/Getting-started/OpenGL">OpenGL </a>
     22 			</li>
     23 			<li id="Getting-started/Creating-a-window">
     24 				<a href="https://learnopengl.com/Getting-started/Creating-a-window">ウィンドウの作成</a>
     25 			</li>
     26 			<li id="Getting-started/Hello-Window">
     27 				<a href="https://learnopengl.com/Getting-started/Hello-Window">最初のウィンドウ</a>
     28 			</li>
     29 			<li id="Getting-started/Hello-Triangle">
     30 				<a href="https://learnopengl.com/Getting-started/Hello-Triangle">最初の三角形</a>
     31 			</li>
     32 			<li id="Getting-started/Shaders">
     33 				<a href="https://learnopengl.com/Getting-started/Shaders">シェーダー</a>
     34 			</li>
     35 			<li id="Getting-started/Textures">
     36 				<a href="https://learnopengl.com/Getting-started/Textures">テクスチャ</a>
     37 			</li>
     38 			<li id="Getting-started/Transformations">
     39 				<a href="https://learnopengl.com/Getting-started/Transformations">座標変換</a>
     40 			</li>
     41 			<li id="Getting-started/Coordinate-Systems">
     42 				<a href="https://learnopengl.com/Getting-started/Coordinate-Systems">座標系</a>
     43 			</li>
     44 			<li id="Getting-started/Camera">
     45 				<a href="https://learnopengl.com/Getting-started/Camera">カメラ</a>
     46 			</li>
     47 			<li id="Getting-started/Review">
     48 				<a href="https://learnopengl.com/Getting-started/Review">まとめ</a>
     49 			</li>
     50 		</ol>
     51 	</li>
     52 	<li id="Lighting">
     53 		<span class="closed">Lighting </span>
     54 		<ol>
     55 			<li id="Lighting/Colors">
     56 				<a href="https://learnopengl.com/Lighting/Colors">Colors </a>
     57 			</li>
     58 			<li id="Lighting/Basic-Lighting">
     59 				<a href="https://learnopengl.com/Lighting/Basic-Lighting">Basic Lighting </a>
     60 			</li>
     61 			<li id="Lighting/Materials">
     62 				<a href="https://learnopengl.com/Lighting/Materials">Materials </a>
     63 			</li>
     64 			<li id="Lighting/Lighting-maps">
     65 				<a href="https://learnopengl.com/Lighting/Lighting-maps">Lighting maps </a>
     66 			</li>
     67 			<li id="Lighting/Light-casters">
     68 				<a href="https://learnopengl.com/Lighting/Light-casters">Light casters </a>
     69 			</li>
     70 			<li id="Lighting/Multiple-lights">
     71 				<a href="https://learnopengl.com/Lighting/Multiple-lights">Multiple lights </a>
     72 			</li>
     73 			<li id="Lighting/Review">
     74 				<a href="https://learnopengl.com/Lighting/Review">Review </a>
     75 			</li>
     76 		</ol>
     77 	</li>
     78 	<li id="Model-Loading">
     79 		<span class="closed">Model Loading </span>
     80 		<ol>
     81 			<li id="Model-Loading/Assimp">
     82 				<a href="https://learnopengl.com/Model-Loading/Assimp">Assimp </a>
     83 			</li>
     84 			<li id="Model-Loading/Mesh">
     85 				<a href="https://learnopengl.com/Model-Loading/Mesh">Mesh </a>
     86 			</li>
     87 			<li id="Model-Loading/Model">
     88 				<a href="https://learnopengl.com/Model-Loading/Model">Model </a>
     89 			</li>
     90 		</ol>
     91 	</li>
     92 	<li id="Advanced-OpenGL">
     93 		<span class="closed">Advanced OpenGL </span>
     94 		<ol>
     95 			<li id="Advanced-OpenGL/Depth-testing">
     96 				<a href="https://learnopengl.com/Advanced-OpenGL/Depth-testing">Depth testing </a>
     97 			</li>
     98 			<li id="Advanced-OpenGL/Stencil-testing">
     99 				<a href="https://learnopengl.com/Advanced-OpenGL/Stencil-testing">Stencil testing </a>
    100 			</li>
    101 			<li id="Advanced-OpenGL/Blending">
    102 				<a href="https://learnopengl.com/Advanced-OpenGL/Blending">Blending </a>
    103 			</li>
    104 			<li id="Advanced-OpenGL/Face-culling">
    105 				<a href="https://learnopengl.com/Advanced-OpenGL/Face-culling">Face culling </a>
    106 			</li>
    107 			<li id="Advanced-OpenGL/Framebuffers">
    108 				<a href="https://learnopengl.com/Advanced-OpenGL/Framebuffers">Framebuffers </a>
    109 			</li>
    110 			<li id="Advanced-OpenGL/Cubemaps">
    111 				<a href="https://learnopengl.com/Advanced-OpenGL/Cubemaps">Cubemaps </a>
    112 			</li>
    113 			<li id="Advanced-OpenGL/Advanced-Data">
    114 				<a href="https://learnopengl.com/Advanced-OpenGL/Advanced-Data">Advanced Data </a>
    115 			</li>
    116 			<li id="Advanced-OpenGL/Advanced-GLSL">
    117 				<a href="https://learnopengl.com/Advanced-OpenGL/Advanced-GLSL">Advanced GLSL </a>
    118 			</li>
    119 			<li id="Advanced-OpenGL/Geometry-Shader">
    120 				<a href="https://learnopengl.com/Advanced-OpenGL/Geometry-Shader">Geometry Shader </a>
    121 			</li>
    122 			<li id="Advanced-OpenGL/Instancing">
    123 				<a href="https://learnopengl.com/Advanced-OpenGL/Instancing">Instancing </a>
    124 			</li>
    125 			<li id="Advanced-OpenGL/Anti-Aliasing">
    126 				<a href="https://learnopengl.com/Advanced-OpenGL/Anti-Aliasing">Anti Aliasing </a>
    127 			</li>
    128 		</ol>
    129 	</li>
    130 	<li id="Advanced-Lighting">
    131 		<span class="closed">Advanced Lighting </span>
    132 		<ol>
    133 			<li id="Advanced-Lighting/Advanced-Lighting">
    134 				<a href="https://learnopengl.com/Advanced-Lighting/Advanced-Lighting">Advanced Lighting </a>
    135 			</li>
    136 			<li id="Advanced-Lighting/Gamma-Correction">
    137 				<a href="https://learnopengl.com/Advanced-Lighting/Gamma-Correction">Gamma Correction </a>
    138 			</li>
    139 			<li id="Advanced-Lighting/Shadows">
    140 				<span class="closed">Shadows </span>
    141 				<ol>
    142 					<li id="Advanced-Lighting/Shadows/Shadow-Mapping">
    143 						<a href="https://learnopengl.com/Advanced-Lighting/Shadows/Shadow-Mapping">Shadow Mapping </a>
    144 					</li>
    145 					<li id="Advanced-Lighting/Shadows/Point-Shadows">
    146 						<a href="https://learnopengl.com/Advanced-Lighting/Shadows/Point-Shadows">Point Shadows </a>
    147 					</li>
    148 				</ol>
    149 			</li>
    150 			<li id="Advanced-Lighting/Normal-Mapping">
    151 				<a href="https://learnopengl.com/Advanced-Lighting/Normal-Mapping">Normal Mapping </a>
    152 			</li>
    153 			<li id="Advanced-Lighting/Parallax-Mapping">
    154 				<a href="https://learnopengl.com/Advanced-Lighting/Parallax-Mapping">Parallax Mapping </a>
    155 			</li>
    156 			<li id="Advanced-Lighting/HDR">
    157 				<a href="https://learnopengl.com/Advanced-Lighting/HDR">HDR </a>
    158 			</li>
    159 			<li id="Advanced-Lighting/Bloom">
    160 				<a href="https://learnopengl.com/Advanced-Lighting/Bloom">Bloom </a>
    161 			</li>
    162 			<li id="Advanced-Lighting/Deferred-Shading">
    163 				<a href="https://learnopengl.com/Advanced-Lighting/Deferred-Shading">Deferred Shading </a>
    164 			</li>
    165 			<li id="Advanced-Lighting/SSAO">
    166 				<a href="https://learnopengl.com/Advanced-Lighting/SSAO">SSAO </a>
    167 			</li>
    168 		</ol>
    169 	</li>
    170 	<li id="PBR">
    171 		<span class="closed">PBR </span>
    172 		<ol>
    173 			<li id="PBR/Theory">
    174 				<a href="https://learnopengl.com/PBR/Theory">Theory </a>
    175 			</li>
    176 			<li id="PBR/Lighting">
    177 				<a href="https://learnopengl.com/PBR/Lighting">Lighting </a>
    178 			</li>
    179 			<li id="PBR/IBL">
    180 				<span class="closed">IBL </span>
    181 				<ol>
    182 					<li id="PBR/IBL/Diffuse-irradiance">
    183 						<a href="https://learnopengl.com/PBR/IBL/Diffuse-irradiance">Diffuse irradiance </a>
    184 					</li>
    185 					<li id="PBR/IBL/Specular-IBL">
    186 						<a href="https://learnopengl.com/PBR/IBL/Specular-IBL">Specular IBL </a>
    187 					</li>
    188 				</ol>
    189 			</li>
    190 		</ol>
    191 	</li>
    192 	<li id="In-Practice">
    193 		<span class="closed">In Practice </span>
    194 		<ol>
    195 			<li id="In-Practice/Debugging">
    196 				<a href="https://learnopengl.com/In-Practice/Debugging">Debugging </a>
    197 			</li>
    198 			<li id="In-Practice/Text-Rendering">
    199 				<a href="https://learnopengl.com/In-Practice/Text-Rendering">Text Rendering </a>
    200 			</li>
    201 			<li id="In-Practice/2D-Game">
    202 				<span class="closed">2D Game </span>
    203 				<ol>
    204 					<li id="In-Practice/2D-Game/Breakout">
    205 						<a href="https://learnopengl.com/In-Practice/2D-Game/Breakout">Breakout </a>
    206 					</li>
    207 					<li id="In-Practice/2D-Game/Setting-up">
    208 						<a href="https://learnopengl.com/In-Practice/2D-Game/Setting-up">Setting up </a>
    209 					</li>
    210 					<li id="In-Practice/2D-Game/Rendering-Sprites">
    211 						<a href="https://learnopengl.com/In-Practice/2D-Game/Rendering-Sprites">Rendering Sprites </a>
    212 					</li>
    213 					<li id="In-Practice/2D-Game/Levels">
    214 						<a href="https://learnopengl.com/In-Practice/2D-Game/Levels">Levels </a>
    215 					</li>
    216 					<li id="In-Practice/2D-Game/Collisions">
    217 						<span class="closed">Collisions </span>
    218 						<ol>
    219 							<li id="In-Practice/2D-Game/Collisions/Ball">
    220 								<a href="https://learnopengl.com/In-Practice/2D-Game/Collisions/Ball">Ball </a>
    221 							</li>
    222 							<li id="In-Practice/2D-Game/Collisions/Collision-detection">
    223 								<a href="https://learnopengl.com/In-Practice/2D-Game/Collisions/Collision-detection">Collision detection </a>
    224 							</li>
    225 							<li id="In-Practice/2D-Game/Collisions/Collision-resolution">
    226 								<a href="https://learnopengl.com/In-Practice/2D-Game/Collisions/Collision-resolution">Collision resolution </a>
    227 							</li>
    228 						</ol>
    229 					</li>
    230 					<li id="In-Practice/2D-Game/Particles">
    231 						<a href="https://learnopengl.com/In-Practice/2D-Game/Particles">Particles </a>
    232 					</li>
    233 					<li id="In-Practice/2D-Game/Postprocessing">
    234 						<a href="https://learnopengl.com/In-Practice/2D-Game/Postprocessing">Postprocessing </a>
    235 					</li>
    236 					<li id="In-Practice/2D-Game/Powerups">
    237 						<a href="https://learnopengl.com/In-Practice/2D-Game/Powerups">Powerups </a>
    238 					</li>
    239 					<li id="In-Practice/2D-Game/Audio">
    240 						<a href="https://learnopengl.com/In-Practice/2D-Game/Audio">Audio </a>
    241 					</li>
    242 					<li id="In-Practice/2D-Game/Render-text">
    243 						<a href="https://learnopengl.com/In-Practice/2D-Game/Render-text">Render text </a>
    244 					</li>
    245 					<li id="In-Practice/2D-Game/Final-thoughts">
    246 						<a href="https://learnopengl.com/In-Practice/2D-Game/Final-thoughts">Final thoughts </a>
    247 					</li>
    248 				</ol>
    249 			</li>
    250 		</ol>
    251 	</li>
    252 	<li id="Guest-Articles">
    253 		<span class="closed">Guest Articles </span>
    254 		<ol>
    255 			<li id="Guest-Articles/How-to-publish">
    256 				<a href="https://learnopengl.com/Guest-Articles/How-to-publish">How to publish </a>
    257 			</li>
    258 			<li id="Guest-Articles/2020">
    259 				<span class="closed">2020 </span>
    260 				<ol>
    261 					<li id="Guest-Articles/2020/OIT">
    262 						<span class="closed">OIT </span>
    263 						<ol>
    264 							<li id="Guest-Articles/2020/OIT/Introduction">
    265 								<a href="https://learnopengl.com/Guest-Articles/2020/OIT/Introduction">Introduction </a>
    266 							</li>
    267 							<li id="Guest-Articles/2020/OIT/Weighted-Blended">
    268 								<a href="https://learnopengl.com/Guest-Articles/2020/OIT/Weighted-Blended">Weighted Blended </a>
    269 							</li>
    270 						</ol>
    271 					</li>
    272 					<li id="Guest-Articles/2020/Skeletal-Animation">
    273 						<a href="https://learnopengl.com/Guest-Articles/2020/Skeletal-Animation">Skeletal Animation </a>
    274 					</li>
    275 				</ol>
    276 			</li>
    277 			<li id="Guest-Articles/2021">
    278 				<span class="closed">2021 </span>
    279 				<ol>
    280 					<li id="Guest-Articles/2021/CSM">
    281 						<a href="https://learnopengl.com/Guest-Articles/2021/CSM">CSM </a>
    282 					</li>
    283 					<li id="Guest-Articles/2021/Scene">
    284 						<span class="closed">Scene </span>
    285 						<ol>
    286 							<li id="Guest-Articles/2021/Scene/Scene-Graph">
    287 								<a href="https://learnopengl.com/Guest-Articles/2021/Scene/Scene-Graph">Scene Graph </a>
    288 							</li>
    289 							<li id="Guest-Articles/2021/Scene/Frustum-Culling">
    290 								<a href="https://learnopengl.com/Guest-Articles/2021/Scene/Frustum-Culling">Frustum Culling </a>
    291 							</li>
    292 						</ol>
    293 					</li>
    294 					<li id="Guest-Articles/2021/Tessellation">
    295 						<span class="closed">Tessellation </span>
    296 						<ol>
    297 							<li id="Guest-Articles/2021/Tessellation/Height-map">
    298 								<a href="https://learnopengl.com/Guest-Articles/2021/Tessellation/Height-map">Height map </a>
    299 							</li>
    300 						</ol>
    301 					</li>
    302 				</ol>
    303 			</li>
    304 		</ol>
    305 	</li>
    306 	<li id="Code-repository">
    307 		<a href="https://learnopengl.com/Code-repository">Code repository </a>
    308 	</li>
    309 	<li id="Translations">
    310 		<a href="https://learnopengl.com/Translations">Translations </a>
    311 	</li>
    312 	<li id="About">
    313 		<a href="https://learnopengl.com/About">About </a>
    314 	</li>
    315 </ol>
    316 	</nav>
    317 	<main>
    318     <h1 id="content-title">Diffuse irradiance</h1>
    319 <h1 id="content-url" style='display:none;'>PBR/IBL/Diffuse-irradiance</h1>
    320 <p>
    321   IBL, or <def>image based lighting</def>, is a collection of techniques to light objects, not by direct analytical lights as in the <a href="https://learnopengl.com/PBR/Lighting" target="_blank">previous</a> chapter, but by treating the surrounding environment as one big light source. This is generally accomplished by manipulating a cubemap environment map (taken from the real world or generated from a 3D scene) such that we can directly use it in our lighting equations: treating each cubemap texel as a light emitter. This way we can effectively capture an environment's global lighting and general feel, giving objects a better sense of <em>belonging</em> in their environment.
    322 </p>
    323 
    324 <p>
    325   As image based lighting algorithms capture the lighting of some (global) environment, their input can be considered a more precise form of ambient lighting, even a crude approximation of global illumination. This makes IBL interesting for PBR as objects look significantly more physically accurate when we take the environment's lighting into account.
    326 </p>
    327 
    328 <p>
    329   To start introducing IBL into our PBR system let's again take a quick look at the reflectance equation:
    330 </p>
    331 
    332 
    333 \[
    334  	L_o(p,\omega_o) = \int\limits_{\Omega} 
    335     	(k_d\frac{c}{\pi} + k_s\frac{DFG}{4(\omega_o \cdot n)(\omega_i \cdot n)})
    336     	L_i(p,\omega_i) n \cdot \omega_i  d\omega_i
    337 \]
    338 
    339 <p>
    340   As described before, our main goal is to solve the integral of all incoming light directions \(w_i\) over the hemisphere \(\Omega\) . Solving the integral in the previous chapter was easy as we knew beforehand the exact few light directions \(w_i\) that contributed to the integral.
    341   This time, however, <strong>every</strong> incoming light direction \(w_i\) from the surrounding environment could potentially have some radiance, making it less trivial to solve the integral. This gives us two main requirements for solving the integral:  
    342 </p>
    343 
    344 <ul>
    345   <li>We need some way to retrieve the scene's radiance given any direction vector \(w_i\).</li>
    346   <li>Solving the integral needs to be fast and real-time.</li>
    347 </ul>
    348 
    349 <p>
    350   Now, the first requirement is relatively easy. We've already hinted at it, but one way of representing an environment or scene's irradiance is in the form of a (processed) environment cubemap. Given such a cubemap, we can visualize every texel of the cubemap as one single emitting light source. By sampling this cubemap with any direction vector \(w_i\), we retrieve the scene's radiance from that direction.
    351 </p>
    352 
    353 <p>
    354   Getting the scene's radiance given any direction vector \(w_i\) is then as simple as:
    355 </p>
    356 
    357 <pre><code>
    358 vec3 radiance = texture(_cubemapEnvironment, w_i).rgb;  
    359 </code></pre>
    360 
    361 <p>
    362   Still, solving the integral requires us to sample the environment map from not just one direction, but all possible directions \(w_i\) over the hemisphere \(\Omega\) which is far too expensive for each fragment shader invocation. To solve the integral in a more efficient fashion we'll want to <em>pre-process</em> or <def>pre-compute</def> most of the computations. For this we'll have to delve a bit deeper into the reflectance equation:
    363 </p>
    364 
    365 \[
    366  	L_o(p,\omega_o) = \int\limits_{\Omega} 
    367     	(k_d\frac{c}{\pi} + k_s\frac{DFG}{4(\omega_o \cdot n)(\omega_i \cdot n)})
    368     	L_i(p,\omega_i) n \cdot \omega_i  d\omega_i
    369 \]
    370       
    371 <p>
    372   Taking a good look at the reflectance equation we find that the diffuse \(k_d\) and specular \(k_s\) term of the BRDF are independent from each other and we can split the integral in two:
    373       </p>
    374       
    375 \[
    376  	L_o(p,\omega_o) = 
    377 		\int\limits_{\Omega} (k_d\frac{c}{\pi}) L_i(p,\omega_i) n \cdot \omega_i  d\omega_i
    378 		+ 
    379 		\int\limits_{\Omega} (k_s\frac{DFG}{4(\omega_o \cdot n)(\omega_i \cdot n)})
    380 			L_i(p,\omega_i) n \cdot \omega_i  d\omega_i
    381 \]
    382       
    383 <p>
    384   By splitting the integral in two parts we can focus on both the diffuse and specular term individually; the focus of this chapter being on the diffuse integral. 
    385 </p>
    386 
    387 <p>
    388   Taking a closer look at the diffuse integral we find that the diffuse lambert term is a constant term (the color \(c\), the refraction ratio \(k_d\), and \(\pi\) are constant over the integral) and not dependent on any of the integral variables. Given this, we can move the constant term out of the diffuse integral:
    389 </p>
    390       
    391 \[
    392       L_o(p,\omega_o) = 
    393 		k_d\frac{c}{\pi} \int\limits_{\Omega} L_i(p,\omega_i) n \cdot \omega_i  d\omega_i
    394 \]
    395       
    396 <p>
    397   This gives us an integral that only depends on \(w_i\) (assuming \(p\) is at the center of the environment map). With this knowledge, we can calculate or <em>pre-compute</em> a new cubemap that stores, for each sample direction (or texel) \(w_o\), the diffuse integral's result, computed by <def>convolution</def>.
    398 </p>
    399       
    400 <p>
    401 	Convolution is applying some computation to each entry in a data set considering all other entries in the data set; the data set being the scene's radiance or environment map. Thus for every sample direction in the cubemap, we take all other sample directions over the hemisphere \(\Omega\) into account.   
    402 </p>
    403 
    404 <p>
    405   To convolute an environment map we solve the integral for each output \(w_o\) sample direction by discretely sampling a large number of directions \(w_i\) over the hemisphere \(\Omega\) and averaging their radiance. The hemisphere we build the sample directions \(w_i\) from is oriented towards the output \(w_o\) sample direction we're convoluting. 
    406 </p>
    407 
    408 <img src="/img/pbr/ibl_hemisphere_sample.png" class="clean" alt="Convoluting a cubemap on a hemisphere for a PBR irradiance map."/>
    409 
    410 <p>
    411   This pre-computed cubemap, which stores the integral result for each sample direction \(w_o\), can be thought of as the pre-computed sum of all indirect diffuse light of the scene hitting some surface aligned along direction \(w_o\). Such a cubemap is known as an <def>irradiance map</def> seeing as the convoluted cubemap effectively allows us to directly sample the scene's (pre-computed) irradiance from any direction \(w_o\). 
    412 </p>
    413 
    414 <note>
    415   The radiance equation also depends on a position \(p\), which we've assumed to be at the center of the irradiance map. This does mean all diffuse indirect light must come from a single environment map, which may break the illusion of reality (especially indoors). Render engines solve this by placing <def>reflection probes</def> all over the scene where each reflection probe calculates its own irradiance map of its surroundings. This way, the irradiance (and radiance) at position \(p\) is the interpolated irradiance between its closest reflection probes. For now, we assume we always sample the environment map from its center.
    416 </note>
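<p>
  The chapter's code sticks to a single, centered environment map, but to give an idea of the interpolation mentioned in the note above, blending the irradiance of the two nearest reflection probes could look roughly like the following sketch. The probe samplers and the <code>probeBlend</code> weight (which a renderer would derive from the fragment's distance to each probe) are hypothetical and not part of this chapter's code:
</p>

<pre><code>
uniform samplerCube irradianceProbeA; // irradiance map of the nearest probe
uniform samplerCube irradianceProbeB; // irradiance map of the second nearest probe
uniform float probeBlend;             // 0.0 = fully probe A, 1.0 = fully probe B

vec3 blendedIrradiance(vec3 N)
{
    vec3 irradianceA = texture(irradianceProbeA, N).rgb;
    vec3 irradianceB = texture(irradianceProbeB, N).rgb;
    return mix(irradianceA, irradianceB, probeBlend);
}
</code></pre>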
    417 
    418 <p>
    419   Below is an example of a cubemap environment map and its resulting irradiance map (courtesy of <a href="http://www.indiedb.com/features/using-image-based-lighting-ibl" target="_blank">wave engine</a>), averaging the scene's radiance for every direction \(w_o\).
    420 </p>
    421 
    422 <img src="/img/pbr/ibl_irradiance.png" class="clean" alt="The effect of convoluting a cubemap environment map."/>
    423   
    424 <p>
    425   By storing the convoluted result in each cubemap texel (in the direction of \(w_o\)), the irradiance map looks somewhat like an average color or lighting representation of the environment. Sampling any direction from this map will give us the scene's irradiance in that particular direction. 
    426 </p>
    427 
    428 
    429 <h2>PBR and HDR</h2>
    430 <p>
    431   We've briefly touched upon it in the <a href="https://learnopengl.com/PBR/Lighting" target="_blank">previous</a> chapter: taking the high dynamic range of your scene's lighting into account in a PBR pipeline is incredibly important. As PBR bases most of its inputs on real physical properties and measurements it makes sense to closely match the incoming light values to their physical equivalents. Whether we make educated guesses about each light's radiant flux or use its <a href="https://en.wikipedia.org/wiki/Lumen_(unit)" target="_blank">direct physical equivalent</a>, the difference between a simple light bulb and the sun is significant either way. Without working in an <a href="https://learnopengl.com/Advanced-Lighting/HDR" target="_blank">HDR</a> render environment it's impossible to correctly specify each light's relative intensity.
    432 </p>
    433 
    434 <p>
    435   So, PBR and HDR go hand in hand, but how does it all relate to image based lighting? We've seen in the previous chapter that it's relatively easy to get PBR working in HDR. However, seeing as for image based lighting we base the environment's indirect light intensity on the color values of an environment cubemap, we need some way to store the lighting's high dynamic range in an environment map.
    436 </p>
    437 
    438 <p>
    439   The environment maps we've been using so far as cubemaps (used as <a href="https://learnopengl.com/Advanced-OpenGL/Cubemaps" target="_blank">skyboxes</a> for instance) are in low dynamic range (LDR). We directly used their color values from the individual face images, ranged between <code>0.0</code> and <code>1.0</code>, and processed them as is. While this may work fine for visual output, when taking them as physical input parameters it's not going to work.
    440 </p>
    441 
    442 <h3>The radiance HDR file format</h3>
    443 <p>
    444   Enter the radiance file format. The radiance file format (with the <code>.hdr</code> extension) stores a full cubemap with all 6 faces as floating point data. This allows us to specify color values outside the <code>0.0</code> to <code>1.0</code> range to give lights their correct color intensities. The file format also uses a clever trick to store each floating point value, not as a 32 bit value per channel, but 8 bits per channel using the color's alpha channel as an exponent (this does come with a loss of precision). This works quite well, but requires the parsing program to re-convert each color to its floating point equivalent. 
    445 </p>
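<p>
  To give an idea of what that re-conversion looks like, below is a rough sketch of the shared-exponent (RGBE) decode in GLSL, assuming the four 8-bit channels were uploaded as a normalized RGBA texture; in practice a loader like <code>stb_image</code> does this for us on the CPU, including the exact mantissa scaling and edge cases glossed over here:
</p>

<pre><code>
// rough RGBE decode sketch: rgb holds the mantissas, alpha a biased exponent
vec3 decodeRGBE(vec4 rgbe)
{
    float exponent = rgbe.a * 255.0 - 128.0; // undo normalization and the bias of 128
    return rgbe.rgb * exp2(exponent);        // scale the mantissas by 2^exponent
}
</code></pre>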
    446 
    447 <p>
    448   There are quite a few radiance HDR environment maps freely available from sources like <a href="http://www.hdrlabs.com/sibl/archive.html" target="_blank">sIBL archive</a> of which you can see an example below:
    449 </p>
    450 
    451 <img src="/img/pbr/ibl_hdr_radiance.png" alt="Example of an equirectangular map"/>
    452   
    453 <p>
    454     This may not be exactly what you were expecting, as the image appears distorted and doesn't show any of the 6 individual cubemap faces of environment maps we've seen before. This environment map is projected from a sphere onto a flat plane such that we can more easily store the environment into a single image known as an <def>equirectangular map</def>. This does come with a small caveat as most of the visual resolution is stored in the horizontal view direction, while less is preserved in the bottom and top directions. In most cases this is a decent compromise as with almost any renderer you'll find most of the interesting lighting and surroundings in the horizontal viewing directions.
    455 </p>
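<p>
  For a unit direction vector \(d\), the mapping from direction to equirectangular texture coordinates boils down to two inverse trigonometric functions, normalized from their \([-\pi, \pi]\) and \([-\frac{1}{2}\pi, \frac{1}{2}\pi]\) ranges into \([0, 1]\):
</p>

\[
  (u, v) = \left( \frac{\operatorname{atan2}(d_z, d_x)}{2\pi} + 0.5, \; \frac{\arcsin(d_y)}{\pi} + 0.5 \right)
\]

<p>
  This is exactly what the <code>SampleSphericalMap</code> function further below implements; its <code>invAtan</code> constant holds the two normalization factors \(\frac{1}{2\pi}\) and \(\frac{1}{\pi}\).
</p>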
    456 
    457 <h3>HDR and stb_image.h</h3>
    458 <p>
    459   Loading radiance HDR images directly requires some knowledge of the <a href="http://radsite.lbl.gov/radiance/refer/Notes/picture_format.html" target="_blank">file format</a> which isn't too difficult, but cumbersome nonetheless. Lucky for us, the popular single-header library <a href="https://github.com/nothings/stb/blob/master/stb_image.h" target="_blank">stb_image.h</a> supports loading radiance HDR images directly as an array of floating point values which perfectly fits our needs. With <code>stb_image</code> added to your project, loading an HDR image is now as simple as follows:
    460 </p>
    461   
    462 <pre><code>
    463 #include "stb_image.h"
    464 [...]
    465 
    466 stbi_set_flip_vertically_on_load(true);
    467 int width, height, nrComponents;
    468 float *data = stbi_loadf("newport_loft.hdr", &width, &height, &nrComponents, 0);
    469 unsigned int hdrTexture;
    470 if (data)
    471 {
    472     <function id='50'>glGenTextures</function>(1, &hdrTexture);
    473     <function id='48'>glBindTexture</function>(GL_TEXTURE_2D, hdrTexture);
    474     <function id='52'>glTexImage2D</function>(GL_TEXTURE_2D, 0, GL_RGB16F, width, height, 0, GL_RGB, GL_FLOAT, data); 
    475 
    476     <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    477     <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    478     <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    479     <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    480 
    481     stbi_image_free(data);
    482 }
    483 else
    484 {
    485     std::cout &lt;&lt; "Failed to load HDR image." &lt;&lt; std::endl;
    486 }  
    487 </code></pre>
    488   
    489 <p>
    490   <code>stb_image.h</code> automatically maps the HDR values to a list of floating point values: 32 bits per channel and 3 channels per color by default. This is all we need to store the equirectangular HDR environment map into a 2D floating point texture.
    491 </p>
    492 
    493 <h3>From Equirectangular to Cubemap</h3>
    494 <p>
    495   It is possible to use the equirectangular map directly for environment lookups, but these operations can be relatively expensive, in which case a direct cubemap sample is more performant. Therefore, in this chapter we'll first convert the equirectangular image to a cubemap for further processing. Note that in the process we also show how to sample an equirectangular map as if it were a 3D environment map, in which case you're free to pick whichever solution you prefer.
    496 </p>
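<p>
  For reference, directly sampling the equirectangular map with some world space direction vector <code>direction</code> (a hypothetical variable here) would then look like the sketch below, re-using the <code>SampleSphericalMap</code> function defined later in this chapter; converting to a cubemap first merely bakes this trigonometry away:
</p>

<pre><code>
// direct equirectangular lookup (sketch)
vec2 uv = SampleSphericalMap(normalize(direction));
vec3 radiance = texture(equirectangularMap, uv).rgb;
</code></pre>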
    497   
    498 <p>
    499   To convert an equirectangular image into a cubemap, we need to render a (unit) cube, project the equirectangular map onto all of the cube's faces from the inside, and take 6 images, one of each of the cube's sides, as the cubemap faces. The vertex shader of this cube simply renders the cube as is and passes its local position to the fragment shader as a 3D sample vector:
    500 </p>
    501   
    502 <pre><code>
    503 #version 330 core
    504 layout (location = 0) in vec3 aPos;
    505 
    506 out vec3 localPos;
    507 
    508 uniform mat4 projection;
    509 uniform mat4 view;
    510 
    511 void main()
    512 {
    513     localPos = aPos;  
    514     gl_Position =  projection * view * vec4(localPos, 1.0);
    515 }
    516 </code></pre>
    517   
    518 <p>
    519   For the fragment shader, we color each part of the cube as if we neatly folded the equirectangular map onto each side of the cube. To accomplish this, we take the fragment's sample direction as interpolated from the cube's local position and then use this direction vector and some trigonometry magic (from cartesian to spherical coordinates) to sample the equirectangular map as if it's a cubemap itself. We directly store the result onto the cube-face's fragment which should be all we need to do:
    520 </p>
    521   
    522 <pre><code>
    523 #version 330 core
    524 out vec4 FragColor;
    525 in vec3 localPos;
    526 
    527 uniform sampler2D equirectangularMap;
    528 
    529 const vec2 invAtan = vec2(0.1591, 0.3183); // (1/(2*pi), 1/pi)
    530 vec2 SampleSphericalMap(vec3 v)
    531 {
    532     vec2 uv = vec2(atan(v.z, v.x), asin(v.y));
    533     uv *= invAtan;
    534     uv += 0.5;
    535     return uv;
    536 }
    537 
    538 void main()
    539 {		
    540     vec2 uv = SampleSphericalMap(normalize(localPos)); // make sure to normalize localPos
    541     vec3 color = texture(equirectangularMap, uv).rgb;
    542     
    543     FragColor = vec4(color, 1.0);
    544 }
    545 
    546 </code></pre>
    547     
    548 <p>
    549   If you render a cube at the center of the scene given an HDR equirectangular map you'll get something that looks like this:
    550 </p>
    551   
    552   <img src="/img/pbr/ibl_equirectangular_projection.png" alt="OpenGL render of an equirectangular map converted to a cubemap."/>
    553   
    554 <p>
    555   This demonstrates that we effectively mapped an equirectangular image onto a cubic shape, but doesn't yet help us in converting the source HDR image to a cubemap texture. To accomplish this we have to render the same cube 6 times, looking at each individual face of the cube, while recording its visual result with a <a href="https://learnopengl.com/Advanced-OpenGL/Framebuffers" target="_blank">framebuffer</a> object:
    556 </p>
    557     
    558 <pre><code>
    559 unsigned int captureFBO, captureRBO;
    560 <function id='76'>glGenFramebuffers</function>(1, &captureFBO);
    561 <function id='82'>glGenRenderbuffers</function>(1, &captureRBO);
    562 
    563 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, captureFBO);
    564 <function id='83'>glBindRenderbuffer</function>(GL_RENDERBUFFER, captureRBO);
    565 <function id='88'>glRenderbufferStorage</function>(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 512, 512);
    566 <function id='89'>glFramebufferRenderbuffer</function>(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, captureRBO);  
    567 </code></pre>  
    568     
    569 <p>
    570   Of course, we then also generate the corresponding cubemap color texture, pre-allocating memory for each of its 6 faces:
    571 </p>
    572   
    573 <pre><code>
    574 unsigned int envCubemap;
    575 <function id='50'>glGenTextures</function>(1, &envCubemap);
    576 <function id='48'>glBindTexture</function>(GL_TEXTURE_CUBE_MAP, envCubemap);
    577 for (unsigned int i = 0; i &lt; 6; ++i)
    578 {
    579     // note that we store each face with 16 bit floating point values
    580     <function id='52'>glTexImage2D</function>(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB16F, 
    581                  512, 512, 0, GL_RGB, GL_FLOAT, nullptr);
    582 }
    583 <function id='15'>glTexParameter</function>i(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    584 <function id='15'>glTexParameter</function>i(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    585 <function id='15'>glTexParameter</function>i(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    586 <function id='15'>glTexParameter</function>i(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    587 <function id='15'>glTexParameter</function>i(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    588 </code></pre>
    589     
    590 <p>
    591   Then what's left to do is capture the equirectangular 2D texture onto the cubemap faces.
    592 </p>
    593     
    594 <p>
    595   I won't go over the details as the code covers topics previously discussed in the <a href="https://learnopengl.com/Advanced-OpenGL/Framebuffers" target="_blank">framebuffer</a> and <a href="https://learnopengl.com/Advanced-Lighting/Shadows/Point-Shadows" target="_blank">point shadows</a> chapters, but it effectively boils down to setting up 6 different view matrices (facing each side of the cube), setting up a projection matrix with a fov of <code>90</code> degrees to capture the entire face, and rendering the cube 6 times, storing the results in a floating point framebuffer: 
    596 </p>
    597     
    598 <pre><code>
    599 glm::mat4 captureProjection = <function id='58'>glm::perspective</function>(<function id='63'>glm::radians</function>(90.0f), 1.0f, 0.1f, 10.0f);
    600 glm::mat4 captureViews[] = 
    601 {
    602    <function id='62'>glm::lookAt</function>(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 1.0f,  0.0f,  0.0f), glm::vec3(0.0f, -1.0f,  0.0f)),
    603    <function id='62'>glm::lookAt</function>(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(-1.0f,  0.0f,  0.0f), glm::vec3(0.0f, -1.0f,  0.0f)),
    604    <function id='62'>glm::lookAt</function>(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 0.0f,  1.0f,  0.0f), glm::vec3(0.0f,  0.0f,  1.0f)),
    605    <function id='62'>glm::lookAt</function>(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 0.0f, -1.0f,  0.0f), glm::vec3(0.0f,  0.0f, -1.0f)),
    606    <function id='62'>glm::lookAt</function>(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 0.0f,  0.0f,  1.0f), glm::vec3(0.0f, -1.0f,  0.0f)),
    607    <function id='62'>glm::lookAt</function>(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 0.0f,  0.0f, -1.0f), glm::vec3(0.0f, -1.0f,  0.0f))
    608 };
    609 
    610 // convert HDR equirectangular environment map to cubemap equivalent
    611 equirectangularToCubemapShader.use();
    612 equirectangularToCubemapShader.setInt("equirectangularMap", 0);
    613 equirectangularToCubemapShader.setMat4("projection", captureProjection);
    614 <function id='49'>glActiveTexture</function>(GL_TEXTURE0);
    615 <function id='48'>glBindTexture</function>(GL_TEXTURE_2D, hdrTexture);
    616 
    617 <function id='22'>glViewport</function>(0, 0, 512, 512); // don't forget to configure the viewport to the capture dimensions.
    618 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, captureFBO);
    619 for (unsigned int i = 0; i &lt; 6; ++i)
    620 {
    621     equirectangularToCubemapShader.setMat4("view", captureViews[i]);
    622     <function id='81'>glFramebufferTexture2D</function>(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, 
    623                            GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, envCubemap, 0);
    624     <function id='10'>glClear</function>(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    625 
    626     renderCube(); // renders a 1x1 cube
    627 }
    628 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, 0);  
    629 </code></pre>
    630     
    631 <p>
    632   We take the color attachment of the framebuffer and switch its texture target around for every face of the cubemap, directly rendering the scene into one of the cubemap's faces. Once this routine has finished (which we only have to do once), the cubemap <var>envCubemap</var> should be the cubemapped environment version of our original HDR image.
    633 </p>
    634     
    635 <p>
    636   Let's test the cubemap by writing a very simple skybox shader to display the cubemap around us:
    637 </p>
    638     
    639 <pre><code>
    640 #version 330 core
    641 layout (location = 0) in vec3 aPos;
    642 
    643 uniform mat4 projection;
    644 uniform mat4 view;
    645 
    646 out vec3 localPos;
    647 
    648 void main()
    649 {
    650     localPos = aPos;
    651 
    652     mat4 rotView = mat4(mat3(view)); // remove translation from the view matrix
    653     vec4 clipPos = projection * rotView * vec4(localPos, 1.0);
    654 
    655     gl_Position = clipPos.xyww;
    656 }
    657 </code></pre>
    658     
    659 <p>
    660   Note the <code>xyww</code> trick here that ensures the depth value of the rendered cube fragments always ends up at <code>1.0</code>, the maximum depth value (after the perspective divide, the depth becomes \(w/w = 1.0\)), as described in the <a href="https://learnopengl.com/Advanced-OpenGL/Cubemaps" target="_blank">cubemap</a> chapter. Do note that we need to change the depth comparison function to <var>GL_LEQUAL</var>:
    661 </p>
    662     
    663 <pre><code>
    664 <function id='66'>glDepthFunc</function>(GL_LEQUAL);  
    665 </code></pre>
    666     
    667 <p>
    668   The fragment shader then directly samples the cubemap environment map using the cube's local fragment position:
    669 </p>
    670   
    671 <pre><code>
    672 #version 330 core
    673 out vec4 FragColor;
    674 
    675 in vec3 localPos;
    676   
    677 uniform samplerCube environmentMap;
    678   
    679 void main()
    680 {
    681     vec3 envColor = texture(environmentMap, localPos).rgb;
    682     
    683     envColor = envColor / (envColor + vec3(1.0));
    684     envColor = pow(envColor, vec3(1.0/2.2)); 
    685   
    686     FragColor = vec4(envColor, 1.0);
    687 }
    688 </code></pre>
    689     
    690 <p>
    691   We sample the environment map using the cube's interpolated local positions, which directly correspond to the direction vectors to sample. Seeing as the camera's translation components are ignored, rendering this shader over a cube should give you the environment map as a non-moving background. Also, as we directly output the environment map's HDR values to the default LDR framebuffer, we want to properly tone map the color values. Furthermore, almost all HDR maps are in linear color space by default so we need to apply <a href="https://learnopengl.com/Advanced-Lighting/Gamma-Correction" target="_blank">gamma correction</a> before writing to the default framebuffer.
    692 </p>
    693     
    694 <p>
    695   Now rendering the sampled environment map over the previously rendered spheres should look something like this:
    696 </p>
    697     
    698     <img src="/img/pbr/ibl_hdr_environment_mapped.png" alt="Render the converted cubemap as a skybox."/>
    699       
    700 <p>
    701   Well... it took us quite a bit of setup to get here, but we successfully managed to read an HDR environment map, convert it from its equirectangular mapping to a cubemap, and render the HDR cubemap into the scene as a skybox. Furthermore, we set up a small system to render onto all 6 faces of a cubemap, which we'll need again when <def>convoluting</def> the environment map. You can find the source code of the entire conversion process <a href="/code_viewer_gh.php?code=src/6.pbr/2.1.1.ibl_irradiance_conversion/ibl_irradiance_conversion.cpp" target="_blank">here</a>.
    702 </p>
    703       
    704 <h2>Cubemap convolution</h2>
    705 <p>
    706   As described at the start of the chapter, our main goal is to solve the integral for all diffuse indirect lighting given the scene's radiance in the form of a cubemap environment map. We know that we can get the radiance of the scene \(L(p, w_i)\) in a particular direction by sampling an HDR environment map in direction \(w_i\). To solve the integral, we have to sample the scene's radiance from all possible directions within the hemisphere \(\Omega\) for each fragment.
    707       </p>
    708       
    709 <p>
    710 	It is however computationally impossible to sample the environment's lighting from every possible direction in \(\Omega\): the number of possible directions is theoretically infinite. We can, however, approximate it by taking a finite number of directions or samples, spaced uniformly or taken randomly from within the hemisphere, to get a fairly accurate approximation of the irradiance; effectively solving the integral \(\int\) discretely.
    711 </p>
    712       
    713 <p>
    714 	It is, however, still too expensive to do this for every fragment in real-time, as the number of samples needs to be significantly large for decent results, so we want to <def>pre-compute</def> this. Since the orientation of the hemisphere decides where we capture the irradiance, we can pre-calculate the irradiance for every possible hemisphere orientation oriented around all outgoing directions \(w_o\):
    715 </p>
    716       
    717 \[
    718       L_o(p,\omega_o) = 
    719 		k_d\frac{c}{\pi} \int\limits_{\Omega} L_i(p,\omega_i) n \cdot \omega_i  d\omega_i
    720 \]
    721       
    722 <p>
    723   Given any direction vector \(w_i\) in the lighting pass, we can then sample the pre-computed irradiance map to retrieve the total diffuse irradiance from direction \(w_i\). To determine the amount of indirect diffuse (irradiant) light at a fragment surface, we retrieve the total irradiance from the hemisphere oriented around its surface normal. Obtaining the scene's irradiance is then as simple as:
    724 </p>
    725       
    726 <pre><code>
    727 vec3 irradiance = texture(irradianceMap, N).rgb;
    728 </code></pre>
    729       
    730 <p>
    731   Now, to generate the irradiance map, we need to convolute the environment's lighting as converted to a cubemap. Given that for each fragment the surface's hemisphere is oriented along the normal vector \(N\), convoluting a cubemap equals calculating the total averaged radiance of each direction \(w_i\) in the hemisphere \(\Omega\) oriented along \(N\). 
    732 </p>
    733       
    734 <img src="/img/pbr/ibl_hemisphere_sample_normal.png" class="clean" alt="Convoluting a cubemap on a hemisphere (oriented around the normal) for a PBR irradiance map."/>
    735       
    736 <p>
    737   Thankfully, all of the cumbersome setup of this chapter isn't for nothing, as we can now directly take the converted cubemap, convolute it in a fragment shader, and capture its result in a new cubemap using a framebuffer that renders to all 6 face directions. As we've already set this up for converting the equirectangular environment map to a cubemap, we can take the exact same approach but use a different fragment shader:
    738 </p>
    739       
    740 <pre><code>
    741 #version 330 core
    742 out vec4 FragColor;
    743 in vec3 localPos;
    744 
    745 uniform samplerCube environmentMap;
    746 
    747 const float PI = 3.14159265359;
    748 
    749 void main()
    750 {		
    751     // the sample direction equals the hemisphere's orientation 
    752     vec3 normal = normalize(localPos);
    753   
    754     vec3 irradiance = vec3(0.0);
    755   
    756     [...] // convolution code
    757   
    758     FragColor = vec4(irradiance, 1.0);
    759 }
    760 </code></pre>
    761    
    762 <p>
    763   Here, <var>environmentMap</var> is the HDR cubemap as converted from the equirectangular HDR environment map. 
    764 </p>
    765       
    766 <p>
    767 	There are many ways to convolute the environment map, but for this chapter we're going to generate a fixed number of sample vectors for each cubemap texel along a hemisphere \(\Omega\) oriented around the sample direction and average the results. The fixed number of sample vectors will be uniformly spread inside the hemisphere. Note that an integral is a continuous function and discretely sampling it with a fixed number of sample vectors will be an approximation. The more sample vectors we use, the better we approximate the integral.
    768 </p>
    769       
    770 <p>
    771   The integral \(\int\) of the reflectance equation revolves around the solid angle \(dw\) which is rather difficult to work with. Instead of integrating over the solid angle \(dw\) we'll integrate over its equivalent spherical coordinates \(\theta\) and \(\phi\).
    772 </p>
    773 
    774       
    775       <img src="/img/pbr/ibl_spherical_integrate.png" class="clean" alt="Converting the solid angle over the equivalent polar azimuth and inclination angle for PBR"/>
    776         
    777 <p>
    778   We use the polar azimuth \(\phi\) angle to sample around the ring of the hemisphere between \(0\) and \(2\pi\), and use the inclination zenith \(\theta\) angle between \(0\) and \(\frac{1}{2}\pi\) to sample the increasing rings of the hemisphere. This will give us the updated reflectance integral: 
    779 </p>
    780 
    781 \[
    782       L_o(p,\phi_o, \theta_o) = 
    783         k_d\frac{c}{\pi} \int_{\phi = 0}^{2\pi} \int_{\theta = 0}^{\frac{1}{2}\pi} L_i(p,\phi_i, \theta_i) \cos(\theta) \sin(\theta)  d\phi d\theta
    784 \]
    785         
    786 <p>
    787   Solving the integral requires us to take a fixed number of discrete samples within the hemisphere \(\Omega\) and average their results. This translates the integral to the following discrete version as based on the <a href="https://en.wikipedia.org/wiki/Riemann_sum" target="_blank">Riemann sum</a> given \(n_1\) and \(n_2\) discrete samples on each spherical coordinate respectively:
    788 </p>
    789         
    790 \[
    791       L_o(p,\phi_o, \theta_o) = 
    792         k_d \frac{c\pi}{n_1 n_2} \sum_{\phi = 0}^{n_1} \sum_{\theta = 0}^{n_2} L_i(p,\phi_i, \theta_i) \cos(\theta) \sin(\theta)
    793 \]   
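<p>
  In case the constant in front of the sum looks like magic: each discrete sample covers a patch of \(\Delta\phi = \frac{2\pi}{n_1}\) by \(\Delta\theta = \frac{\pi/2}{n_2}\) on the hemisphere, and substituting these for \(d\phi\) and \(d\theta\) folds neatly into the prefactor:
</p>

\[
      k_d\frac{c}{\pi} \, \frac{2\pi}{n_1} \, \frac{\pi}{2 n_2} = k_d\frac{c\pi}{n_1 n_2}
\]

<p>
  This is also where the <code>PI *</code> factor in the convolution code further below comes from: with \(k_d c\) factored out (it's applied later using the surface's albedo), what remains is \(\pi\) times the average of the weighted samples.
</p>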
    794 
    795         
    796 <p>
    797    As we sample both spherical values discretely, each sample will approximate or average an area on the hemisphere as the image before shows. Note that (due to the general properties of a spherical shape) the hemisphere's discrete sample area gets smaller the higher the zenith angle \(\theta\) as the sample regions converge towards the center top. To compensate for the smaller areas, we weigh each sample's contribution by scaling it by \(\sin \theta\).
    798 </p>
    799       
    800 <p>
    801   Discretely sampling the hemisphere given the integral's spherical coordinates translates to the following fragment code:
    802 </p>
    803       
    804 <pre><code>
    805 vec3 irradiance = vec3(0.0);  
    806 
    807 vec3 up    = vec3(0.0, 1.0, 0.0); // arbitrary up vector to build a tangent basis (breaks down if normal is (anti)parallel to it)
    808 vec3 right = normalize(cross(up, normal));
    809 up         = normalize(cross(normal, right));
    810 
    811 float sampleDelta = 0.025;
    812 float nrSamples = 0.0; 
    813 for(float phi = 0.0; phi &lt; 2.0 * PI; phi += sampleDelta)
    814 {
    815     for(float theta = 0.0; theta &lt; 0.5 * PI; theta += sampleDelta)
    816     {
    817         // spherical to cartesian (in tangent space)
    818         vec3 tangentSample = vec3(sin(theta) * cos(phi),  sin(theta) * sin(phi), cos(theta));
    819         // tangent space to world
    820         vec3 sampleVec = tangentSample.x * right + tangentSample.y * up + tangentSample.z * normal; 
    821 
    822         irradiance += texture(environmentMap, sampleVec).rgb * cos(theta) * sin(theta);
    823         nrSamples++;
    824     }
    825 }
    826 irradiance = PI * irradiance * (1.0 / float(nrSamples));
    827 </code></pre>
    828       
    829 <p>
    830   We specify a fixed <var>sampleDelta</var> delta value to traverse the hemisphere; decreasing or increasing the sample delta will increase or decrease the accuracy respectively.
    831 </p>
    832         
    833 <p>
    834   From within both loops, we take both spherical coordinates to convert them to a 3D Cartesian sample vector, convert the sample from tangent to world space oriented around the normal, and use this sample vector to directly sample the HDR environment map. We add each sample result to <var>irradiance</var>, which at the end we divide by the total number of samples taken, giving us the average sampled irradiance. Note that we scale the sampled color value by <code>cos(theta)</code> due to the light being weaker at larger angles and by <code>sin(theta)</code> to account for the smaller sample areas in the higher hemisphere areas.
    835 </p>
    836               
    837 <p>
    838   Now what's left to do is to set up the OpenGL rendering code such that we can convolute the earlier captured <var>envCubemap</var>. First we create the irradiance cubemap (again, we only have to do this once before the render loop):
    839 </p>
    840       
    841 <pre><code>
    842 unsigned int irradianceMap;
    843 <function id='50'>glGenTextures</function>(1, &irradianceMap);
    844 <function id='48'>glBindTexture</function>(GL_TEXTURE_CUBE_MAP, irradianceMap);
    845 for (unsigned int i = 0; i &lt; 6; ++i)
    846 {
    847     <function id='52'>glTexImage2D</function>(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB16F, 32, 32, 0, 
    848                  GL_RGB, GL_FLOAT, nullptr);
    849 }
    850 <function id='15'>glTexParameter</function>i(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    851 <function id='15'>glTexParameter</function>i(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    852 <function id='15'>glTexParameter</function>i(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    853 <function id='15'>glTexParameter</function>i(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    854 <function id='15'>glTexParameter</function>i(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    855 </code></pre>
    856               
    857 <p>
    858   As the irradiance map averages all surrounding radiance uniformly it doesn't have a lot of high frequency details, so we can store the map at a low resolution (32x32) and let OpenGL's linear filtering do most of the work. Next, we re-scale the capture framebuffer to the new resolution:
    859 </p>
    860       
    861 <pre class="cpp"><code>
    862 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, captureFBO);
    863 <function id='83'>glBindRenderbuffer</function>(GL_RENDERBUFFER, captureRBO);
    864 <function id='88'>glRenderbufferStorage</function>(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 32, 32);  
    865 </code></pre>
    866         
    867 <p>
    868    Using the convolution shader, we render the environment map in a similar way to how we captured the environment cubemap:
    869 </p>
    870         
    871 <pre><code>
    872 irradianceShader.use();
    873 irradianceShader.setInt("environmentMap", 0);
    874 irradianceShader.setMat4("projection", captureProjection);
    875 <function id='49'>glActiveTexture</function>(GL_TEXTURE0);
    876 <function id='48'>glBindTexture</function>(GL_TEXTURE_CUBE_MAP, envCubemap);
    877 
    878 <function id='22'>glViewport</function>(0, 0, 32, 32); // don't forget to configure the viewport to the capture dimensions.
    879 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, captureFBO);
    880 for (unsigned int i = 0; i &lt; 6; ++i)
    881 {
    882     irradianceShader.setMat4("view", captureViews[i]);
    883     <function id='81'>glFramebufferTexture2D</function>(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, 
    884                            GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, irradianceMap, 0);
    885     <function id='10'>glClear</function>(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    886 
    887     renderCube();
    888 }
    889 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, 0);  
    890 </code></pre>
    891   
    892 <p>
    893   Now after this routine we should have a pre-computed irradiance map that we can directly use for our diffuse image based lighting. To see if we successfully convoluted the environment map, we'll substitute the irradiance map for the environment map as the skybox's environment sampler:
    894 </p>
    895       
    896       <img src="/img/pbr/ibl_irradiance_map_background.png" alt="Displaying a PBR irradiance map as the skybox background."/>
    897         
    898 <p>
    899   If it looks like a heavily blurred version of the environment map, you've successfully convoluted it.
    900 </p>
    901         
    902 <h2>PBR and indirect irradiance lighting</h2>
    903 <p>
    904   The irradiance map represents the diffuse part of the reflectance integral as accumulated from all surrounding indirect light. Seeing as the light doesn't come from direct light sources, but from the surrounding environment, we treat both the diffuse and specular indirect lighting as the ambient lighting, replacing our previously set constant term. 
    905 </p>
    906         
    907 <p>
    908   First, be sure to add the pre-calculated irradiance map as a cube sampler:
    909 </p>
    910         
    911 <pre><code>
    912 uniform samplerCube irradianceMap;
    913 </code></pre>
    914 
    915 <p>
    916   Given the irradiance map that holds all of the scene's indirect diffuse light, retrieving the irradiance influencing the fragment is as simple as a single texture sample given the surface normal:
    917 </p>
    918         
    919 <pre><code>
    920 // vec3 ambient = vec3(0.03);
    921 vec3 ambient = texture(irradianceMap, N).rgb;
    922 </code></pre>
    923         
    924 <p>
    925   However, as the indirect lighting contains both a diffuse and specular part (as we've seen from the split version of the reflectance equation) we need to weigh the diffuse part accordingly. Similar to what we did in the previous chapter, we use the Fresnel equation to determine the surface's indirect reflectance ratio from which we derive the refractive (or diffuse) ratio:
    926 </p>
    927         
    928 <pre><code>
    929 vec3 kS = fresnelSchlick(max(dot(N, V), 0.0), F0);
    930 vec3 kD = 1.0 - kS;
    931 vec3 irradiance = texture(irradianceMap, N).rgb;
    932 vec3 diffuse    = irradiance * albedo;
    933 vec3 ambient    = (kD * diffuse) * ao; 
    934 </code></pre>
    935         
    936 <p>
    937   As the ambient light comes from all directions within the hemisphere oriented around the normal <var>N</var>, there's no single halfway vector to determine the Fresnel response. To still simulate Fresnel, we calculate the Fresnel from the angle between the normal and view vector. However, earlier we used the micro-surface halfway vector, influenced by the roughness of the surface, as input to the Fresnel equation. As we currently don't take roughness into account, the surface's reflective ratio will always end up relatively high. Indirect light follows the same properties of direct light so we expect rougher surfaces to reflect less strongly on the surface edges. Because of this, the indirect Fresnel reflection strength looks off on rough non-metal surfaces (slightly exaggerated for demonstration purposes):
    938 </p>
    939       <img src="/img/pbr/lighting_fresnel_no_roughness.png" alt="The Fresnel equation for IBL without taking roughness into account."/>
    940           
    941 <p>
    942    We can alleviate the issue by injecting a roughness term in the Fresnel-Schlick equation as described by <a href="https://seblagarde.wordpress.com/2011/08/17/hello-world/" target="_blank">Sébastien Lagarde</a>:
    943 </p>
    944           
    945 <pre><code>
    946 vec3 fresnelSchlickRoughness(float cosTheta, vec3 F0, float roughness)
    947 {
    948     return F0 + (max(vec3(1.0 - roughness), F0) - F0) * pow(clamp(1.0 - cosTheta, 0.0, 1.0), 5.0);
    949 }   
    950 </code></pre>
    951           
    952 <p>
    953   By taking the surface's roughness into account when calculating the Fresnel response, the ambient code ends up as:
    954 </p>
    955         
    956 <pre><code>
    957 vec3 kS = fresnelSchlickRoughness(max(dot(N, V), 0.0), F0, roughness); 
    958 vec3 kD = 1.0 - kS;
    959 vec3 irradiance = texture(irradianceMap, N).rgb;
    960 vec3 diffuse    = irradiance * albedo;
    961 vec3 ambient    = (kD * diffuse) * ao; 
    962 </code></pre>
    963         
    964 <p>
    965   As you can see, the actual image based lighting computation is quite simple and only requires a single cubemap texture lookup; most of the work is in pre-computing or convoluting the irradiance map. 
    966 </p>
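<p>
  As a reminder of where this ambient term ends up: just as in the previous chapter, it is added to the direct lighting sum before tone mapping and gamma correction. A rough sketch of the tail end of the PBR fragment shader, with <var>Lo</var> being the direct lighting result from the light loop of the previous chapter:
</p>

<pre><code>
vec3 color = ambient + Lo; // indirect (ambient) plus direct lighting

color = color / (color + vec3(1.0)); // Reinhard tone mapping
color = pow(color, vec3(1.0/2.2));   // gamma correction

FragColor = vec4(color, 1.0);
</code></pre>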
    967           
    968 <p>
    969   If we take the initial scene from the PBR <a href="https://learnopengl.com/PBR/Lighting" target="_blank">lighting</a> chapter, where each sphere has a vertically increasing metallic and a horizontally increasing roughness value, and add the diffuse image based lighting it'll look a bit like this:
    970 </p>
    971           
    972           <img src="/img/pbr/ibl_irradiance_result.png" alt="Result of convoluting an irradiance map in OpenGL used by the PBR shader."/>
    973             
    974 <p>
    975   It still looks a bit weird as the more metallic spheres <strong>require</strong> some form of reflection to properly start looking like metallic surfaces (as metallic surfaces don't reflect diffuse light), which at the moment only (barely) comes from the point light sources. Nevertheless, you can already tell the spheres do feel more <em>in place</em> within the environment (especially if you switch between environment maps) as the surface response reacts accordingly to the environment's ambient lighting.
    976 </p>
    977             
    978 <p>
    979   You can find the complete source code of the discussed topics <a href="/code_viewer_gh.php?code=src/6.pbr/2.1.2.ibl_irradiance/ibl_irradiance.cpp" target="_blank">here</a>. In the <a href="https://learnopengl.com/PBR/IBL/Specular-IBL"  target="_blank">next</a> chapter we'll add the indirect specular part of the reflectance integral at which point we're really going to see the power of PBR.
    980 </p>
    981   
    982 <h2>Further reading</h2>
    983 <ul>
    984   <li><a href="http://www.codinglabs.net/article_physically_based_rendering.aspx" target="_blank">Coding Labs: Physically based rendering</a>: an introduction to PBR and how and why to generate an irradiance map. </li>
    985   <li><a href="http://www.scratchapixel.com/lessons/mathematics-physics-for-computer-graphics/mathematics-of-shading" target="_blank">The Mathematics of Shading</a>: a brief introduction by ScratchAPixel on several of the mathematics described in this tutorial, specifically on polar coordinates and integrals.</li>
    986 </ul>       
    987 
    989     
    990 	</main>
    991 </body>
    992 </html>