LearnOpenGL

Translation in progress of learnopengl.com.
git clone https://git.mtkn.jp/LearnOpenGL

Specular-IBL.html (62243B)


      1 <!DOCTYPE html>
      2 <html lang="ja"> 
      3 <head>
      4     <meta charset="utf-8"/>
      5     <title>LearnOpenGL</title>
      6     <link rel="shortcut icon" type="image/ico" href="/favicon.ico"  />
      7 	<link rel="stylesheet" href="../static/style.css" />
      8 	<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js"> </script>
      9 	<script src="/static/functions.js"></script>
     10 </head>
     11 <body>
     12 	<nav>
     13 <ol>
     14 	<li id="Introduction">
     15 		<a href="https://learnopengl.com/Introduction">Introduction</a>
     16 	</li>
     17 	<li id="Getting-started">
     18 		<span class="closed">Getting started</span>
     19 		<ol>
     20 			<li id="Getting-started/OpenGL">
     21 				<a href="https://learnopengl.com/Getting-started/OpenGL">OpenGL </a>
     22 			</li>
     23 			<li id="Getting-started/Creating-a-window">
     24 				<a href="https://learnopengl.com/Getting-started/Creating-a-window">Creating a window</a>
     25 			</li>
     26 			<li id="Getting-started/Hello-Window">
     27 				<a href="https://learnopengl.com/Getting-started/Hello-Window">Hello Window</a>
     28 			</li>
     29 			<li id="Getting-started/Hello-Triangle">
     30 				<a href="https://learnopengl.com/Getting-started/Hello-Triangle">Hello Triangle</a>
     31 			</li>
     32 			<li id="Getting-started/Shaders">
     33 				<a href="https://learnopengl.com/Getting-started/Shaders">Shaders</a>
     34 			</li>
     35 			<li id="Getting-started/Textures">
     36 				<a href="https://learnopengl.com/Getting-started/Textures">Textures</a>
     37 			</li>
     38 			<li id="Getting-started/Transformations">
     39 				<a href="https://learnopengl.com/Getting-started/Transformations">Transformations</a>
     40 			</li>
     41 			<li id="Getting-started/Coordinate-Systems">
     42 				<a href="https://learnopengl.com/Getting-started/Coordinate-Systems">Coordinate Systems</a>
     43 			</li>
     44 			<li id="Getting-started/Camera">
     45 				<a href="https://learnopengl.com/Getting-started/Camera">Camera</a>
     46 			</li>
     47 			<li id="Getting-started/Review">
     48 				<a href="https://learnopengl.com/Getting-started/Review">Review</a>
     49 			</li>
     50 		</ol>
     51 	</li>
     52 	<li id="Lighting">
     53 		<span class="closed">Lighting </span>
     54 		<ol>
     55 			<li id="Lighting/Colors">
     56 				<a href="https://learnopengl.com/Lighting/Colors">Colors </a>
     57 			</li>
     58 			<li id="Lighting/Basic-Lighting">
     59 				<a href="https://learnopengl.com/Lighting/Basic-Lighting">Basic Lighting </a>
     60 			</li>
     61 			<li id="Lighting/Materials">
     62 				<a href="https://learnopengl.com/Lighting/Materials">Materials </a>
     63 			</li>
     64 			<li id="Lighting/Lighting-maps">
     65 				<a href="https://learnopengl.com/Lighting/Lighting-maps">Lighting maps </a>
     66 			</li>
     67 			<li id="Lighting/Light-casters">
     68 				<a href="https://learnopengl.com/Lighting/Light-casters">Light casters </a>
     69 			</li>
     70 			<li id="Lighting/Multiple-lights">
     71 				<a href="https://learnopengl.com/Lighting/Multiple-lights">Multiple lights </a>
     72 			</li>
     73 			<li id="Lighting/Review">
     74 				<a href="https://learnopengl.com/Lighting/Review">Review </a>
     75 			</li>
     76 		</ol>
     77 	</li>
     78 	<li id="Model-Loading">
     79 		<span class="closed">Model Loading </span>
     80 		<ol>
     81 			<li id="Model-Loading/Assimp">
     82 				<a href="https://learnopengl.com/Model-Loading/Assimp">Assimp </a>
     83 			</li>
     84 			<li id="Model-Loading/Mesh">
     85 				<a href="https://learnopengl.com/Model-Loading/Mesh">Mesh </a>
     86 			</li>
     87 			<li id="Model-Loading/Model">
     88 				<a href="https://learnopengl.com/Model-Loading/Model">Model </a>
     89 			</li>
     90 		</ol>
     91 	</li>
     92 	<li id="Advanced-OpenGL">
     93 		<span class="closed">Advanced OpenGL </span>
     94 		<ol>
     95 			<li id="Advanced-OpenGL/Depth-testing">
     96 				<a href="https://learnopengl.com/Advanced-OpenGL/Depth-testing">Depth testing </a>
     97 			</li>
     98 			<li id="Advanced-OpenGL/Stencil-testing">
     99 				<a href="https://learnopengl.com/Advanced-OpenGL/Stencil-testing">Stencil testing </a>
    100 			</li>
    101 			<li id="Advanced-OpenGL/Blending">
    102 				<a href="https://learnopengl.com/Advanced-OpenGL/Blending">Blending </a>
    103 			</li>
    104 			<li id="Advanced-OpenGL/Face-culling">
     105 				<a href="https://learnopengl.com/Advanced-OpenGL/Face-culling">Face culling </a>
    106 			</li>
    107 			<li id="Advanced-OpenGL/Framebuffers">
    108 				<a href="https://learnopengl.com/Advanced-OpenGL/Framebuffers">Framebuffers </a>
    109 			</li>
    110 			<li id="Advanced-OpenGL/Cubemaps">
    111 				<a href="https://learnopengl.com/Advanced-OpenGL/Cubemaps">Cubemaps </a>
    112 			</li>
    113 			<li id="Advanced-OpenGL/Advanced-Data">
    114 				<a href="https://learnopengl.com/Advanced-OpenGL/Advanced-Data">Advanced Data </a>
    115 			</li>
    116 			<li id="Advanced-OpenGL/Advanced-GLSL">
    117 				<a href="https://learnopengl.com/Advanced-OpenGL/Advanced-GLSL">Advanced GLSL </a>
    118 			</li>
    119 			<li id="Advanced-OpenGL/Geometry-Shader">
    120 				<a href="https://learnopengl.com/Advanced-OpenGL/Geometry-Shader">Geometry Shader </a>
    121 			</li>
    122 			<li id="Advanced-OpenGL/Instancing">
    123 				<a href="https://learnopengl.com/Advanced-OpenGL/Instancing">Instancing </a>
    124 			</li>
    125 			<li id="Advanced-OpenGL/Anti-Aliasing">
    126 				<a href="https://learnopengl.com/Advanced-OpenGL/Anti-Aliasing">Anti Aliasing </a>
    127 			</li>
    128 		</ol>
    129 	</li>
    130 	<li id="Advanced-Lighting">
    131 		<span class="closed">Advanced Lighting </span>
    132 		<ol>
    133 			<li id="Advanced-Lighting/Advanced-Lighting">
    134 				<a href="https://learnopengl.com/Advanced-Lighting/Advanced-Lighting">Advanced Lighting </a>
    135 			</li>
    136 			<li id="Advanced-Lighting/Gamma-Correction">
    137 				<a href="https://learnopengl.com/Advanced-Lighting/Gamma-Correction">Gamma Correction </a>
    138 			</li>
    139 			<li id="Advanced-Lighting/Shadows">
    140 				<span class="closed">Shadows </span>
    141 				<ol>
    142 					<li id="Advanced-Lighting/Shadows/Shadow-Mapping">
    143 						<a href="https://learnopengl.com/Advanced-Lighting/Shadows/Shadow-Mapping">Shadow Mapping </a>
    144 					</li>
    145 					<li id="Advanced-Lighting/Shadows/Point-Shadows">
    146 						<a href="https://learnopengl.com/Advanced-Lighting/Shadows/Point-Shadows">Point Shadows </a>
    147 					</li>
    148 				</ol>
    149 			</li>
    150 			<li id="Advanced-Lighting/Normal-Mapping">
    151 				<a href="https://learnopengl.com/Advanced-Lighting/Normal-Mapping">Normal Mapping </a>
    152 			</li>
    153 			<li id="Advanced-Lighting/Parallax-Mapping">
    154 				<a href="https://learnopengl.com/Advanced-Lighting/Parallax-Mapping">Parallax Mapping </a>
    155 			</li>
    156 			<li id="Advanced-Lighting/HDR">
    157 				<a href="https://learnopengl.com/Advanced-Lighting/HDR">HDR </a>
    158 			</li>
    159 			<li id="Advanced-Lighting/Bloom">
    160 				<a href="https://learnopengl.com/Advanced-Lighting/Bloom">Bloom </a>
    161 			</li>
    162 			<li id="Advanced-Lighting/Deferred-Shading">
    163 				<a href="https://learnopengl.com/Advanced-Lighting/Deferred-Shading">Deferred Shading </a>
    164 			</li>
    165 			<li id="Advanced-Lighting/SSAO">
    166 				<a href="https://learnopengl.com/Advanced-Lighting/SSAO">SSAO </a>
    167 			</li>
    168 		</ol>
    169 	</li>
    170 	<li id="PBR">
    171 		<span class="closed">PBR </span>
    172 		<ol>
    173 			<li id="PBR/Theory">
    174 				<a href="https://learnopengl.com/PBR/Theory">Theory </a>
    175 			</li>
    176 			<li id="PBR/Lighting">
    177 				<a href="https://learnopengl.com/PBR/Lighting">Lighting </a>
    178 			</li>
    179 			<li id="PBR/IBL">
    180 				<span class="closed">IBL </span>
    181 				<ol>
    182 					<li id="PBR/IBL/Diffuse-irradiance">
    183 						<a href="https://learnopengl.com/PBR/IBL/Diffuse-irradiance">Diffuse irradiance </a>
    184 					</li>
    185 					<li id="PBR/IBL/Specular-IBL">
    186 						<a href="https://learnopengl.com/PBR/IBL/Specular-IBL">Specular IBL </a>
    187 					</li>
    188 				</ol>
    189 			</li>
    190 		</ol>
    191 	</li>
    192 	<li id="In-Practice">
    193 		<span class="closed">In Practice </span>
    194 		<ol>
    195 			<li id="In-Practice/Debugging">
    196 				<a href="https://learnopengl.com/In-Practice/Debugging">Debugging </a>
    197 			</li>
    198 			<li id="In-Practice/Text-Rendering">
    199 				<a href="https://learnopengl.com/In-Practice/Text-Rendering">Text Rendering </a>
    200 			</li>
    201 			<li id="In-Practice/2D-Game">
    202 				<span class="closed">2D Game </span>
    203 				<ol>
    204 					<li id="In-Practice/2D-Game/Breakout">
    205 						<a href="https://learnopengl.com/In-Practice/2D-Game/Breakout">Breakout </a>
    206 					</li>
    207 					<li id="In-Practice/2D-Game/Setting-up">
    208 						<a href="https://learnopengl.com/In-Practice/2D-Game/Setting-up">Setting up </a>
    209 					</li>
    210 					<li id="In-Practice/2D-Game/Rendering-Sprites">
    211 						<a href="https://learnopengl.com/In-Practice/2D-Game/Rendering-Sprites">Rendering Sprites </a>
    212 					</li>
    213 					<li id="In-Practice/2D-Game/Levels">
    214 						<a href="https://learnopengl.com/In-Practice/2D-Game/Levels">Levels </a>
    215 					</li>
    216 					<li id="In-Practice/2D-Game/Collisions">
    217 						<span class="closed">Collisions </span>
    218 						<ol>
    219 							<li id="In-Practice/2D-Game/Collisions/Ball">
    220 								<a href="https://learnopengl.com/In-Practice/2D-Game/Collisions/Ball">Ball </a>
    221 							</li>
    222 							<li id="In-Practice/2D-Game/Collisions/Collision-detection">
    223 								<a href="https://learnopengl.com/In-Practice/2D-Game/Collisions/Collision-detection">Collision detection </a>
    224 							</li>
    225 							<li id="In-Practice/2D-Game/Collisions/Collision-resolution">
    226 								<a href="https://learnopengl.com/In-Practice/2D-Game/Collisions/Collision-resolution">Collision resolution </a>
    227 							</li>
    228 						</ol>
    229 					</li>
    230 					<li id="In-Practice/2D-Game/Particles">
    231 						<a href="https://learnopengl.com/In-Practice/2D-Game/Particles">Particles </a>
    232 					</li>
    233 					<li id="In-Practice/2D-Game/Postprocessing">
    234 						<a href="https://learnopengl.com/In-Practice/2D-Game/Postprocessing">Postprocessing </a>
    235 					</li>
    236 					<li id="In-Practice/2D-Game/Powerups">
    237 						<a href="https://learnopengl.com/In-Practice/2D-Game/Powerups">Powerups </a>
    238 					</li>
    239 					<li id="In-Practice/2D-Game/Audio">
    240 						<a href="https://learnopengl.com/In-Practice/2D-Game/Audio">Audio </a>
    241 					</li>
    242 					<li id="In-Practice/2D-Game/Render-text">
    243 						<a href="https://learnopengl.com/In-Practice/2D-Game/Render-text">Render text </a>
    244 					</li>
    245 					<li id="In-Practice/2D-Game/Final-thoughts">
    246 						<a href="https://learnopengl.com/In-Practice/2D-Game/Final-thoughts">Final thoughts </a>
    247 					</li>
    248 				</ol>
    249 			</li>
    250 		</ol>
    251 	</li>
    252 	<li id="Guest-Articles">
    253 		<span class="closed">Guest Articles </span>
    254 		<ol>
    255 			<li id="Guest-Articles/How-to-publish">
    256 				<a href="https://learnopengl.com/Guest-Articles/How-to-publish">How to publish </a>
    257 			</li>
    258 			<li id="Guest-Articles/2020">
    259 				<span class="closed">2020 </span>
    260 				<ol>
    261 					<li id="Guest-Articles/2020/OIT">
    262 						<span class="closed">OIT </span>
    263 						<ol>
    264 							<li id="Guest-Articles/2020/OIT/Introduction">
    265 								<a href="https://learnopengl.com/Guest-Articles/2020/OIT/Introduction">Introduction </a>
    266 							</li>
    267 							<li id="Guest-Articles/2020/OIT/Weighted-Blended">
    268 								<a href="https://learnopengl.com/Guest-Articles/2020/OIT/Weighted-Blended">Weighted Blended </a>
    269 							</li>
    270 						</ol>
    271 					</li>
    272 					<li id="Guest-Articles/2020/Skeletal-Animation">
    273 						<a href="https://learnopengl.com/Guest-Articles/2020/Skeletal-Animation">Skeletal Animation </a>
    274 					</li>
    275 				</ol>
    276 			</li>
    277 			<li id="Guest-Articles/2021">
    278 				<span class="closed">2021 </span>
    279 				<ol>
    280 					<li id="Guest-Articles/2021/CSM">
    281 						<a href="https://learnopengl.com/Guest-Articles/2021/CSM">CSM </a>
    282 					</li>
    283 					<li id="Guest-Articles/2021/Scene">
    284 						<span class="closed">Scene </span>
    285 						<ol>
    286 							<li id="Guest-Articles/2021/Scene/Scene-Graph">
    287 								<a href="https://learnopengl.com/Guest-Articles/2021/Scene/Scene-Graph">Scene Graph </a>
    288 							</li>
    289 							<li id="Guest-Articles/2021/Scene/Frustum-Culling">
    290 								<a href="https://learnopengl.com/Guest-Articles/2021/Scene/Frustum-Culling">Frustum Culling </a>
    291 							</li>
    292 						</ol>
    293 					</li>
    294 					<li id="Guest-Articles/2021/Tessellation">
    295 						<span class="closed">Tessellation </span>
    296 						<ol>
    297 							<li id="Guest-Articles/2021/Tessellation/Height-map">
    298 								<a href="https://learnopengl.com/Guest-Articles/2021/Tessellation/Height-map">Height map </a>
    299 							</li>
    300 						</ol>
    301 					</li>
    302 				</ol>
    303 			</li>
    304 		</ol>
    305 	</li>
    306 	<li id="Code-repository">
    307 		<a href="https://learnopengl.com/Code-repository">Code repository </a>
    308 	</li>
    309 	<li id="Translations">
    310 		<a href="https://learnopengl.com/Translations">Translations </a>
    311 	</li>
    312 	<li id="About">
    313 		<a href="https://learnopengl.com/About">About </a>
    314 	</li>
    315 </ol>
    316 	</nav>
    317 	<main>
    318     <h1 id="content-title">Specular IBL</h1>
    319 <h1 id="content-url" style='display:none;'>PBR/IBL/Specular-IBL</h1>
    320 <p>
    321   In the <a href="https://learnopengl.com/PBR/IBL/Diffuse-irradiance" target="_blank">previous</a> chapter we've set up PBR in combination with image based lighting by pre-computing an irradiance map as the lighting's indirect diffuse portion. In this chapter we'll focus on the specular part of the reflectance equation:
    322 </p>
    323   
    324  \[
    325     L_o(p,\omega_o) = \int\limits_{\Omega} 
    326     	(k_d\frac{c}{\pi} + k_s\frac{DFG}{4(\omega_o \cdot n)(\omega_i \cdot n)})
    327     	L_i(p,\omega_i) n \cdot \omega_i  d\omega_i
    328  \]
    329 
    330 <p>
    331   You'll notice that the Cook-Torrance specular portion (multiplied by \(k_s\)) isn't constant over the integral and depends not only on the incoming light direction, but <strong>also</strong> on the incoming view direction. Trying to solve the integral for all incoming light directions combined with all possible view directions is a combinatorial overload and way too expensive to calculate on a real-time basis. Epic Games proposed a solution where they were able to pre-convolute the specular part for real-time purposes, given a few compromises, known as the <def>split sum approximation</def>.
    332 </p>
    333 
    334 <p>
    335   The split sum approximation splits the specular part of the reflectance equation into two separate parts that we can individually convolute and later combine in the PBR shader for specular indirect image based lighting. Similar to how we pre-convoluted the irradiance map, the split sum approximation requires an HDR environment map as its convolution input. To understand the split sum approximation we'll again look at the reflectance equation, but this time focus on the specular part:
    336 </p>
    337 
    338 \[
    339 	L_o(p,\omega_o) = 
    340 		\int\limits_{\Omega} k_s\frac{DFG}{4(\omega_o \cdot n)(\omega_i \cdot n)}
    341 			L_i(p,\omega_i) n \cdot \omega_i  d\omega_i
    342 			=
    343 		\int\limits_{\Omega} f_r(p, \omega_i, \omega_o) L_i(p,\omega_i) n \cdot \omega_i  d\omega_i
    344 \]
    345 
    346 <p>
    347   For the same (performance) reasons as the irradiance convolution, we can't solve the specular part of the integral in real time and expect reasonable performance. So preferably we'd pre-compute this integral to get something like a specular IBL map, sample this map with the fragment's normal, and be done with it. However, this is where it gets a bit tricky. We were able to pre-compute the irradiance map as the integral only depended on \(\omega_i\) and we could move the constant diffuse albedo terms out of the integral. This time, the integral depends on more than just \(\omega_i\) as evident from the BRDF:
    348 </p>
    349 
    350 \[
    351 	f_r(p, \omega_i, \omega_o) = \frac{DFG}{4(\omega_o \cdot n)(\omega_i \cdot n)}
    352 \]
    353 
    354 <p>
    355   The integral also depends on \(\omega_o\), and we can't really sample a pre-computed cubemap with two direction vectors. The position \(p\) is irrelevant here as described in the previous chapter. Pre-computing this integral for every possible combination of \(\omega_i\) and \(\omega_o\) isn't practical in a real-time setting.
    356 </p>
    357 
    358 <p>
    359   Epic Games' split sum approximation solves the issue by splitting the pre-computation into 2 individual parts that we can later combine to get the resulting pre-computed result we're after. The split sum approximation splits the specular integral into two separate integrals:
    360 </p>
    361 
    362 \[
    363      L_o(p,\omega_o) = 
    364 		\int\limits_{\Omega} L_i(p,\omega_i) d\omega_i
    365 		*
    366 		\int\limits_{\Omega} f_r(p, \omega_i, \omega_o) n \cdot \omega_i d\omega_i
    367 \]
    368 
    369 <p>
    370   The first part (when convoluted) is known as the <def>pre-filtered environment map</def> which is (similar to the irradiance map) a pre-computed environment convolution map, but this time taking roughness into account. For increasing roughness levels, the environment map is convoluted with more scattered sample vectors, creating blurrier reflections. For each roughness level we convolute, we store the sequentially blurrier results in the pre-filtered map's mipmap levels. For instance, a pre-filtered environment map storing the pre-convoluted result of 5 different roughness values in its 5 mipmap levels looks as follows: 
    371 </p>
    372 
    373 <img src="/img/pbr/ibl_prefilter_map.png" class="clean" alt="Pre-convoluted environment map over 5 roughness levels for PBR"/>
    374   
    375   
    376   <p>
    377     We generate the sample vectors and their scattering amount using the normal distribution function (NDF) of the Cook-Torrance BRDF, which takes as input both a normal and view direction. As we don't know the view direction beforehand when convoluting the environment map, Epic Games makes a further approximation by assuming the view direction (and thus the specular reflection direction) to be equal to the output sample direction \(\omega_o\). This translates to the following code:
    378 </p>
    379 
    380 <pre><code>
    381 vec3 N = normalize(w_o);
    382 vec3 R = N;
    383 vec3 V = R;
    384 </code></pre>
    385 
    386 <p>
    387   This way, the pre-filtered environment convolution doesn't need to be aware of the view direction. This does mean we don't get nice grazing specular reflections when looking at specular surface reflections from an angle as seen in the image below (courtesy of the <em>Moving Frostbite to PBR</em> article); this is however generally considered an acceptable compromise:
    388 </p>
    389 
    390 <img src="/img/pbr/ibl_grazing_angles.png" class="clean" alt="Removing grazing specular reflections with the split sum approximation of V = R = N."/>
    391   
    392 <p>
    393   The second part of the split sum equation equals the BRDF part of the specular integral. If we pretend the incoming radiance is completely white for every direction (thus \(L(p, x) = 1.0\)) we can pre-calculate the BRDF's response given an input roughness and an input angle between the normal \(n\) and light direction \(\omega_i\), or \(n \cdot \omega_i\). Epic Games stores the pre-computed BRDF's response to each normal and light direction combination on varying roughness values in a 2D lookup texture (LUT) known as the <def>BRDF integration</def> map. The 2D lookup texture outputs a scale (red) and a bias value (green) to the surface's Fresnel response giving us the second part of the split specular integral:
    394 </p>
    395   
    396   <img src="/img/pbr/ibl_brdf_lut.png" alt="Visualization of the 2D BRDF LUT according to the split sum approximation for PBR in OpenGL."/>
    397     
    398 <p>
    399   We generate the lookup texture by treating the horizontal texture coordinate (ranging between <code>0.0</code> and <code>1.0</code>) of a plane as the BRDF's input \(n \cdot \omega_i\), and its vertical texture coordinate as the input roughness value. With this BRDF integration map and the pre-filtered environment map we can combine both to get the result of the specular integral:
    400 </p>
    401     
    402 <pre><code>
    403 float lod             = getMipLevelFromRoughness(roughness);
    404 vec3 prefilteredColor = textureCubeLod(PrefilteredEnvMap, refVec, lod);
    405 vec2 envBRDF          = texture2D(BRDFIntegrationMap, vec2(NdotV, roughness)).xy;
    406 vec3 indirectSpecular = prefilteredColor * (F * envBRDF.x + envBRDF.y); 
    407 </code></pre>
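
<p>
  Here \(F\) is the surface's Fresnel response; as a minimal sketch, assuming the <fun>fresnelSchlickRoughness</fun> function introduced in the previous chapter (with <var>F0</var> being the surface's base reflectivity), it could be obtained as:
</p>

<pre><code>
vec3 F = fresnelSchlickRoughness(max(dot(N, V), 0.0), F0, roughness);
</code></pre>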
    408     
    409 <p>
    410   This should give you an overview of how Epic Games' split sum approximation roughly approaches the indirect specular part of the reflectance equation. Let's now try and build the pre-convoluted parts ourselves.
    411 </p>
    412 
    413 <h2>Pre-filtering an HDR environment map</h2>
    414 <p>
    415   Pre-filtering an environment map is quite similar to how we convoluted an irradiance map. The difference is that we now account for roughness and store sequentially rougher reflections in the pre-filtered map's mip levels.
    416 </p>
    417     
    418 <p>
    419   First, we need to generate a new cubemap to hold the pre-filtered environment map data. To make sure we allocate enough memory for its mip levels we call <fun><function id='51'>glGenerateMipmap</function></fun> as an easy way to allocate the required amount of memory:
    420 </p>
    421     
    422 <pre><code>
    423 unsigned int prefilterMap;
    424 <function id='50'>glGenTextures</function>(1, &prefilterMap);
    425 <function id='48'>glBindTexture</function>(GL_TEXTURE_CUBE_MAP, prefilterMap);
    426 for (unsigned int i = 0; i &lt; 6; ++i)
    427 {
    428     <function id='52'>glTexImage2D</function>(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB16F, 128, 128, 0, GL_RGB, GL_FLOAT, nullptr);
    429 }
    430 <function id='15'>glTexParameter</function>i(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    431 <function id='15'>glTexParameter</function>i(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    432 <function id='15'>glTexParameter</function>i(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    433 <function id='15'>glTexParameter</function>i(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); 
    434 <function id='15'>glTexParameter</function>i(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    435 
    436 <function id='51'>glGenerateMipmap</function>(GL_TEXTURE_CUBE_MAP);
    437 </code></pre>
    438     
    439 <p>
    440   Note that because we plan to sample <var>prefilterMap</var>'s mipmaps you'll need to make sure its minification filter is set to <var>GL_LINEAR_MIPMAP_LINEAR</var> to enable trilinear filtering. We store the pre-filtered specular reflections in a per-face resolution of 128 by 128 at its base mip level. This is likely to be enough for most reflections, but if you have a large number of smooth materials (think of car reflections) you may want to increase the resolution.
    441 </p>
    442     
    443 <p>
    444   In the previous chapter we convoluted the environment map by generating sample vectors uniformly spread over the hemisphere \(\Omega\) using spherical coordinates. While this works just fine for irradiance, for specular reflections it's less efficient. When it comes to specular reflections, based on the roughness of a surface, the light reflects closely or roughly around a reflection vector \(r\) over a normal \(n\), but (unless the surface is extremely rough) around the reflection vector nonetheless:
    445 </p>
    446     
    447     <img src="/img/pbr/ibl_specular_lobe.png" class="clean" alt="Specular lobe according to the PBR microfacet surface model."/>
    448       
    449 <p>
    450   The general shape of possible outgoing light reflections is known as the <def>specular lobe</def>. As roughness increases, the specular lobe's size increases, and its shape changes on varying incoming light directions. The shape of the specular lobe is thus highly dependent on the material.
    451 </p>
    452       
    453 <p>
    454   When it comes to the microsurface model, we can imagine the specular lobe as the reflection  orientation about the microfacet halfway vectors given some incoming light direction. Seeing as most light rays end up in a specular lobe reflected around the microfacet halfway vectors, it makes sense to generate the sample vectors in a similar fashion as most would otherwise be wasted. This process is known as <def>importance sampling</def>.
    455 </p>
    456 
    457 <h3>Monte Carlo integration and importance sampling</h3>
    458 <p>
    459   To fully get a grasp of importance sampling, it's relevant that we first delve into the mathematical construct known as <def>Monte Carlo integration</def>. Monte Carlo integration revolves mostly around a combination of statistics and probability theory. Monte Carlo helps us in discretely solving the problem of figuring out some statistic or value of a population without having to take <strong>all</strong> of the population into consideration.
    460 </p>
    461       
    462 <p>
    463   For instance, let's say you want to know the average height of all citizens of a country. To get your result, you could measure <strong>every</strong> citizen and average their height, which will give you the <strong>exact</strong> answer you're looking for. However, since most countries have a considerable population this isn't a realistic approach: it would take too much effort and time.
    464 </p>
    465       
    466 <p>
    467   A different approach is to pick a much smaller <strong>completely random</strong> (unbiased) subset of this population, measure their height, and average the result. This population could be as small as 100 people. While not as accurate as the exact answer, you'll get an answer that is relatively close to the ground truth. This is known as the <def>law of large numbers</def>. The idea is that if you measure a smaller set of size \(N\) of truly random samples from the total population, the result will be relatively close to the true answer and gets closer as the number of samples \(N\) increases.
    468 </p>
    469       
    470 <p>
    471   Monte Carlo integration builds on this law of large numbers and takes the same approach in solving an integral. Rather than solving an integral for all possible (theoretically infinite) sample values \(x\), simply generate \(N\) sample values randomly picked from the total population and average. As \(N\) increases, we're guaranteed to get a result closer to the exact answer of the integral: 
    472 </p>
    473       
    474 \[
    475   	O =  \int\limits_{a}^{b} f(x) dx 
    476       = 
    477       \frac{1}{N} \sum_{i=0}^{N-1} \frac{f(x_i)}{pdf(x_i)}
    478 \]
    479 
    480 <p>
    481   To solve the integral, we take \(N\) random samples over the population \(a\) to \(b\), add them together, and divide by the total number of samples to average them. The \(pdf\) stands for the <def>probability density function</def> that tells us the probability a specific sample occurs over the total sample set. For instance, the pdf of the height of a population would look a bit like this:
    482 </p>
    483 
    484 <img src="/img/pbr/ibl_pdf.png" class="clean" alt="Example PDF (probability distribution function)."/>
    485 
    486 <p>
    487   From this graph we can see that if we take any random sample of the population, there is a higher chance of picking a sample of someone of height 1.70, compared to the lower probability of the sample being of height 1.50.
    488   </p>
    489   
    490 <p>
    491 When it comes to Monte Carlo integration, some samples may have a higher probability of being generated than others. This is why for any general Monte Carlo estimation we divide the sampled value by the probability of that sample occurring according to a pdf. So far, in each of our cases of estimating an integral, the samples we've generated were uniform, having the exact same chance of being generated. Our estimations so far were <def>unbiased</def>, meaning that given an ever-increasing amount of samples we will eventually <def>converge</def> to the <strong>exact</strong> solution of the integral. 
    492 </p>
    493 
    494 <p>
    495     However, some Monte Carlo estimators are <def>biased</def>, meaning that the generated samples aren't completely random, but focused towards a specific value or direction. These biased Monte Carlo estimators have a <def>faster rate of convergence</def>, meaning they can converge to the exact solution at a much faster rate, but due to their biased nature it's likely they won't ever converge to the exact solution. This is generally an acceptable tradeoff, especially in computer graphics, as the exact solution isn't too important as long as the results are visually acceptable.
    496     As we'll soon see with importance sampling (which uses a biased estimator), the generated samples are biased towards specific directions, in which case we account for this by dividing each sample by its corresponding pdf.
    497   </p>
    498       
    499 <p>
    500   Monte Carlo integration is quite prevalent in computer graphics as it's a fairly intuitive way to approximate continuous integrals in a discrete and efficient fashion: take any area/volume to sample over (like the hemisphere \(\Omega\)), generate \(N\) random samples within the area/volume, and sum and weigh every sample contribution to the final result. 
    501 </p>
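
<p>
  To make this concrete, here is a minimal standalone sketch (plain C++, not part of our renderer; the numbers are just for illustration) that estimates \(\int_{0}^{1} x^2 dx = 1/3\) with uniform random samples. Since the samples are uniform over \([0, 1]\), the pdf of each sample is a constant \(1.0\):
</p>

<pre><code>
#include &lt;cstdio&gt;
#include &lt;random&gt;

// Estimate the integral of f(x) = x*x over [0,1] (exact value: 1/3)
// with N uniform random samples.
int main()
{
    std::mt19937 rng(42);
    std::uniform_real_distribution&lt;double&gt; dist(0.0, 1.0);

    const int N = 100000;
    double sum = 0.0;
    for (int i = 0; i &lt; N; ++i)
    {
        double x   = dist(rng);
        double pdf = 1.0;          // uniform pdf over [0,1]
        sum += (x * x) / pdf;      // f(x_i) / pdf(x_i)
    }
    std::printf("estimate: %f (exact: 0.333...)\n", sum / N);
}
</code></pre>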
    502             
    503 <p>
    504   Monte Carlo integration is an extensive mathematical topic and I won't delve much further into the specifics, but we'll mention that there are multiple ways of generating the <em>random samples</em>. By default, each sample is completely (pseudo)random as we're used to, but by utilizing certain properties of semi-random sequences we can generate sample vectors that are still random, but have interesting properties. For instance, we can do Monte Carlo integration on something called <def>low-discrepancy sequences</def> which still generate random samples, but each sample is more evenly distributed (image courtesy of James Heald):
    505 </p>
    506       
    507       <img src="/img/pbr/ibl_low_discrepancy_sequence.png" class="clean" alt="Low discrepancy sequence."/>
    508         
    509 <p>
    510   When using a low-discrepancy sequence for generating the Monte Carlo sample vectors, the process is known as <def>Quasi-Monte Carlo integration</def>. Quasi-Monte Carlo methods have a faster <def>rate of convergence</def>, which makes them interesting for performance-heavy applications.
    511 </p>
    512         
    513 <p>
    514   Given our newly obtained knowledge of Monte Carlo and Quasi-Monte Carlo integration, there is an interesting property we can use for an even faster rate of convergence known as <def>importance sampling</def>. We've mentioned it before in this chapter, but when it comes to specular reflections of light, the reflected light vectors are constrained in a specular lobe with its size determined by the roughness of the surface. Seeing as any (quasi-)randomly generated sample outside the specular lobe isn't relevant to the specular integral it makes sense to focus the sample generation to within the specular lobe, at the cost of making the Monte Carlo estimator biased.
    515 </p>
    516 
    517 <p>
    518   This is in essence what importance sampling is about: generate sample vectors in some region constrained by the roughness, oriented around the microfacet's halfway vector. By combining Quasi-Monte Carlo sampling with a low-discrepancy sequence and biasing the sample vectors using importance sampling, we get a high rate of convergence. Because we reach the solution at a faster rate, we'll need significantly fewer samples to reach a sufficiently accurate approximation. 
    519 </p>
    520         
    521 <h3>A low-discrepancy sequence</h3>
    522 <p>
    523   In this chapter we'll pre-compute the specular portion of the indirect reflectance equation using importance sampling given a low-discrepancy sequence, based on the Quasi-Monte Carlo method. The sequence we'll be using is known as the <def>Hammersley Sequence</def> as carefully described by <a href="http://holger.dammertz.org/stuff/notes_HammersleyOnHemisphere.html" target="_blank">Holger Dammertz</a>. The Hammersley sequence is based on the <def>Van Der Corput</def> sequence, which mirrors the binary representation of a number around its radix point.
    524 </p>
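
<p>
  For example, take \(i = 6\), or \(110\) in binary: mirroring its bits around the radix point gives \(0.011_2 = 0.375\). Applied to \(i = 1, 2, 3, 4, \dots\) this produces \(0.5, 0.25, 0.75, 0.125, \dots\), filling the \([0, 1)\) interval ever more evenly.
</p>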
    525         
    526 <p>
    527   Given some neat bit tricks, we can quite efficiently generate the Van Der Corput sequence in a shader program which we'll use to get a Hammersley sequence sample <var>i</var> over <code>N</code> total samples:
    528 </p>
    529         
    530 <pre><code>
    531 float RadicalInverse_VdC(uint bits) 
    532 {
    533     bits = (bits &lt;&lt; 16u) | (bits &gt;&gt; 16u);
    534     bits = ((bits & 0x55555555u) &lt;&lt; 1u) | ((bits & 0xAAAAAAAAu) &gt;&gt; 1u);
    535     bits = ((bits & 0x33333333u) &lt;&lt; 2u) | ((bits & 0xCCCCCCCCu) &gt;&gt; 2u);
    536     bits = ((bits & 0x0F0F0F0Fu) &lt;&lt; 4u) | ((bits & 0xF0F0F0F0u) &gt;&gt; 4u);
    537     bits = ((bits & 0x00FF00FFu) &lt;&lt; 8u) | ((bits & 0xFF00FF00u) &gt;&gt; 8u);
    538     return float(bits) * 2.3283064365386963e-10; // / 0x100000000
    539 }
    540 // ----------------------------------------------------------------------------
    541 vec2 Hammersley(uint i, uint N)
    542 {
    543     return vec2(float(i)/float(N), RadicalInverse_VdC(i));
    544 }  
    545 </code></pre>
    546         
    547 <p>
    548   The GLSL <fun>Hammersley</fun> function gives us the low-discrepancy sample <var>i</var> of the total sample set of size <var>N</var>. 
    549 </p>
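
<p>
  As a quick sanity check: for \(N = 4\) the first four <fun>Hammersley</fun> samples are \((0.0, 0.0)\), \((0.25, 0.5)\), \((0.5, 0.25)\), and \((0.75, 0.75)\), evenly spread over the unit square rather than clumped together like purely random points could be.
</p>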
    550 
    551 <note>
    552 <strong>Hammersley sequence without bit operator support</strong><br/>
    553 <p>
    554   Not all OpenGL drivers support bit operators (WebGL and OpenGL ES 2.0 for instance), in which case you may want to use an alternative version of the Van Der Corput Sequence that doesn't rely on bit operators:
    555 </p>
    556   
    557 <pre><code>
    558 float VanDerCorput(uint n, uint base)
    559 {
    560     float invBase = 1.0 / float(base);
    561     float denom   = 1.0;
    562     float result  = 0.0;
    563 
    564     for(uint i = 0u; i &lt; 32u; ++i)
    565     {
    566         if(n > 0u)
    567         {
    568             denom   = mod(float(n), 2.0);
    569             result += denom * invBase;
    570             invBase = invBase / 2.0;
    571             n       = uint(float(n) / 2.0);
    572         }
    573     }
    574 
    575     return result;
    576 }
    577 // ----------------------------------------------------------------------------
    578 vec2 HammersleyNoBitOps(uint i, uint N)
    579 {
    580     return vec2(float(i)/float(N), VanDerCorput(i, 2u));
    581 }
    582 </code></pre>
    583   
    584 <p>
    585   Note that due to GLSL loop restrictions in older hardware, the sequence loops over all possible <code>32</code> bits. This version is less performant, but does work on all hardware if you ever find yourself without bit operators.
    586 </p>
    587 </note>
    588         
    589 <h3>GGX Importance sampling</h3>
    590 <p>
    591   Instead of uniformly or randomly (Monte Carlo) generating sample vectors over the integral's hemisphere \(\Omega\), we'll generate sample vectors biased towards the general reflection orientation of the microsurface halfway vector based on the surface's roughness. The sampling process will be  similar to what we've seen before: begin a large loop, generate a random (low-discrepancy) sequence value, take the sequence value to generate a sample vector in tangent space, transform to world space, and sample the scene's radiance. What's different is that we now use a low-discrepancy sequence value as input to generate a sample vector:
    592 </p>
    593         
    594 <pre><code>
    595 const uint SAMPLE_COUNT = 4096u;
    596 for(uint i = 0u; i &lt; SAMPLE_COUNT; ++i)
    597 {
    598     vec2 Xi = Hammersley(i, SAMPLE_COUNT);   
    599 </code></pre>
    600         
    601 <p>
    602   Additionally, to build a sample vector, we need some way of orienting and biasing it towards the specular lobe of some surface roughness. We can take the NDF as described in the <a href="https://learnopengl.com/PBR/Theory" target="_blank">theory</a> chapter and combine it with the GGX NDF in the spherical sample vector process as described by Epic Games:
    603 </p>
    604         
    605 <pre><code>
    606 vec3 ImportanceSampleGGX(vec2 Xi, vec3 N, float roughness)
    607 {
    608     float a = roughness*roughness;
    609 	
    610     float phi = 2.0 * PI * Xi.x;
    611     float cosTheta = sqrt((1.0 - Xi.y) / (1.0 + (a*a - 1.0) * Xi.y));
    612     float sinTheta = sqrt(1.0 - cosTheta*cosTheta);
    613 	
    614     // from spherical coordinates to cartesian coordinates
    615     vec3 H;
    616     H.x = cos(phi) * sinTheta;
    617     H.y = sin(phi) * sinTheta;
    618     H.z = cosTheta;
    619 	
    620     // from tangent-space vector to world-space sample vector
    621     vec3 up        = abs(N.z) &lt; 0.999 ? vec3(0.0, 0.0, 1.0) : vec3(1.0, 0.0, 0.0);
    622     vec3 tangent   = normalize(cross(up, N));
    623     vec3 bitangent = cross(N, tangent);
    624 	
    625     vec3 sampleVec = tangent * H.x + bitangent * H.y + N * H.z;
    626     return normalize(sampleVec);
    627 }  
    628 </code></pre>
    629   
    630 <p>
    631   This gives us a sample vector somewhat oriented around the expected microsurface's halfway vector based on some input roughness and the low-discrepancy sequence value <var>Xi</var>. Note that Epic Games uses the squared roughness for better visual results, based on Disney's original PBR research. 
    632 </p>
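
<p>
  For reference, the <var>cosTheta</var> expression above is what inverse transform sampling of the GGX NDF yields: feeding the uniform value \(\xi\) (here <var>Xi.y</var>) through the inverse of the distribution's cumulative distribution function gives:
</p>

\[
	\cos\theta = \sqrt{\frac{1 - \xi}{1 + (a^2 - 1)\xi}}
\]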
    633         
    634 <p>
    635   With the low-discrepancy Hammersley sequence and sample generation defined, we can finalize the pre-filter convolution shader:
    636 </p>
    637         
    638 <pre><code>
    639 #version 330 core
    640 out vec4 FragColor;
    641 in vec3 localPos;
    642 
    643 uniform samplerCube environmentMap;
    644 uniform float roughness;
    645 
    646 const float PI = 3.14159265359;
    647 
    648 float RadicalInverse_VdC(uint bits);
    649 vec2 Hammersley(uint i, uint N);
    650 vec3 ImportanceSampleGGX(vec2 Xi, vec3 N, float roughness);
    651   
    652 void main()
    653 {		
    654     vec3 N = normalize(localPos);    
    655     vec3 R = N;
    656     vec3 V = R;
    657 
    658     const uint SAMPLE_COUNT = 1024u;
    659     float totalWeight = 0.0;   
    660     vec3 prefilteredColor = vec3(0.0);     
    661     for(uint i = 0u; i &lt; SAMPLE_COUNT; ++i)
    662     {
    663         vec2 Xi = Hammersley(i, SAMPLE_COUNT);
    664         vec3 H  = ImportanceSampleGGX(Xi, N, roughness);
    665         vec3 L  = normalize(2.0 * dot(V, H) * H - V);
    666 
    667         float NdotL = max(dot(N, L), 0.0);
    668         if(NdotL > 0.0)
    669         {
    670             prefilteredColor += texture(environmentMap, L).rgb * NdotL;
    671             totalWeight      += NdotL;
    672         }
    673     }
    674     prefilteredColor = prefilteredColor / totalWeight;
    675 
    676     FragColor = vec4(prefilteredColor, 1.0);
    677 }  
    678   
    679 </code></pre>
    680         
    681 <p>
    682   We pre-filter the environment based on an input roughness that varies over each mipmap level of the pre-filter cubemap (from <code>0.0</code> to <code>1.0</code>), and store the result in <var>prefilteredColor</var>. The resulting <var>prefilteredColor</var> is divided by the total sample weight, where samples with less influence on the final result (for small <var>NdotL</var>) contribute less to the final weight.
    683 </p>
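
<p>
  In equation form, the loop effectively computes the \(n \cdot \omega_i\) weighted average of the sampled radiance, with the sum taken over the samples facing the surface (\(n \cdot \omega_i > 0\)):
</p>

\[
	\text{prefilteredColor} = \frac{\sum_{i} L_i(p, \omega_i) (n \cdot \omega_i)}{\sum_{i} (n \cdot \omega_i)}
\]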
    684         
    685 <h3>Capturing pre-filter mipmap levels</h3>
    686 <p>
    687   What's left to do is let OpenGL pre-filter the environment map with different roughness values over multiple mipmap levels. This is actually fairly easy to do with the original setup of the <a href="https://learnopengl.com/PBR/IBL/Diffuse-irradiance" target="_blank">irradiance</a> chapter:
    688 </p>
    689         
    690 <pre><code>
    691 prefilterShader.use();
    692 prefilterShader.setInt("environmentMap", 0);
    693 prefilterShader.setMat4("projection", captureProjection);
    694 <function id='49'>glActiveTexture</function>(GL_TEXTURE0);
    695 <function id='48'>glBindTexture</function>(GL_TEXTURE_CUBE_MAP, envCubemap);
    696 
    697 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, captureFBO);
    698 unsigned int maxMipLevels = 5;
    699 for (unsigned int mip = 0; mip &lt; maxMipLevels; ++mip)
    700 {
    701     // resize framebuffer according to mip-level size.
    702     unsigned int mipWidth  = 128 * std::pow(0.5, mip);
    703     unsigned int mipHeight = 128 * std::pow(0.5, mip);
    704     <function id='83'>glBindRenderbuffer</function>(GL_RENDERBUFFER, captureRBO);
    705     <function id='88'>glRenderbufferStorage</function>(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, mipWidth, mipHeight);
    706     <function id='22'>glViewport</function>(0, 0, mipWidth, mipHeight);
    707 
    708     float roughness = (float)mip / (float)(maxMipLevels - 1);
    709     prefilterShader.setFloat("roughness", roughness);
    710     for (unsigned int i = 0; i &lt; 6; ++i)
    711     {
    712         prefilterShader.setMat4("view", captureViews[i]);
    713         <function id='81'>glFramebufferTexture2D</function>(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, 
    714                                GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, prefilterMap, mip);
    715 
    716         <function id='10'>glClear</function>(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    717         renderCube();
    718     }
    719 }
    720 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, 0);   
    721 </code></pre>
    722         
    723 <p>
    724   The process is similar to the irradiance map convolution, but this time we scale the framebuffer's dimensions to the appropriate mipmap scale, with each mip level halving the dimensions. Additionally, we specify the mip level we're rendering into in <fun><function id='81'>glFramebufferTexture2D</function></fun>'s last parameter and pass the roughness we're pre-filtering for to the pre-filter shader.
    725 </p>
    726         
    727 <p>
    728   This should give us a properly pre-filtered environment map that returns blurrier reflections the higher mip level we access it from. If we use the pre-filtered environment cubemap in the skybox shader and forcefully sample somewhat above its first mip level like so:
    729 </p>
    730         
    731 <pre><code>
    732 vec3 envColor = textureLod(environmentMap, WorldPos, 1.2).rgb; 
    733 </code></pre>
    734         
    735 <p>
    736   We get a result that indeed looks like a blurrier version of the original environment:
    737 </p>
    738         
    739 <img src="/img/pbr/ibl_prefilter_map_sample.png" alt="Visualizing a LOD mip level of the pre-filtered environment map in the skybox."/>
    740           
    741 <p>
    742   If it looks somewhat similar, you've successfully pre-filtered the HDR environment map. Play around with different mipmap levels to see the pre-filter map gradually change from sharp to blurry reflections on increasing mip levels.
    743 </p>
    744        
    745       
    746 <h2>Pre-filter convolution artifacts</h2>
    747 <p>
    748   While the current pre-filter map works fine for most purposes, sooner or later you'll come across several render artifacts that are directly related to the pre-filter convolution. I'll list the most common ones here, including how to fix them.
    749 </p>
    750           
    751 <h3>Cubemap seams at high roughness</h3>
    752 <p>
    753   Sampling the pre-filter map on rough surfaces means sampling some of its lower mip levels. When sampling cubemaps, OpenGL by default doesn't linearly interpolate <strong>across</strong> cubemap faces. Because the lower mip levels are both of a lower resolution and the pre-filter map is convoluted with a much larger sample lobe, the lack of <em>between-cube-face filtering</em> becomes quite apparent:
    754 </p>
    755           
    756 <img src="/img/pbr/ibl_prefilter_seams.png" alt="Visible cubemap seams in the pre-filter map."/>
    757             
    758 <p>
    759   Luckily for us, OpenGL gives us the option to properly filter across cubemap faces by enabling <var>GL_TEXTURE_CUBE_MAP_SEAMLESS</var>:
    760 </p>
    761             
    762 <pre><code>
    763 <function id='60'>glEnable</function>(GL_TEXTURE_CUBE_MAP_SEAMLESS);  
    764 </code></pre>
    765             
    766 <p>
    767   Simply enable this property somewhere at the start of your application and the seams will be gone.
    768 </p>
    769             
    770 <h3>Bright dots in the pre-filter convolution</h3>
    771 <p>
    772   Due to high-frequency details and wildly varying light intensities in specular reflections, convoluting the specular reflections requires a large number of samples to properly account for the varying nature of HDR environmental reflections. We already take a very large number of samples, but on some environments it may still not be enough at some of the rougher mip levels, in which case you'll start seeing dotted patterns emerge around bright areas:
    773 </p>
    774             
    775             <img src="/img/pbr/ibl_prefilter_dots.png" alt="Visible dots on high frequency HDR maps in the deeper mip LOD levels of a pre-filter map."/>
    776               
    777 <p>
    778   One option is to further increase the sample count, but this won't be enough for all environments. As described by <a href="https://chetanjags.wordpress.com/2015/08/26/image-based-lighting/" target="_blank">Chetan Jags</a> we can reduce this artifact by (during the pre-filter convolution) not directly sampling the environment map, but sampling a mip level of the environment map based on the integral's PDF and the roughness:
    779 </p>
    780               
    781 <pre><code>
    782 float D   = DistributionGGX(NdotH, roughness);
    783 float pdf = (D * NdotH / (4.0 * HdotV)) + 0.0001; 
    784 
    785 float resolution = 512.0; // resolution of source cubemap (per face)
    786 float saTexel  = 4.0 * PI / (6.0 * resolution * resolution);
    787 float saSample = 1.0 / (float(SAMPLE_COUNT) * pdf + 0.0001);
    788 
    789 float mipLevel = roughness == 0.0 ? 0.0 : 0.5 * log2(saSample / saTexel); 
    790 </code></pre>
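
<p>
  To illustrate where this lives: inside the pre-filter shader's sample loop, the computed <var>mipLevel</var> replaces the direct environment sample. A sketch, assuming the <fun>DistributionGGX</fun>(N, H, roughness) function from the lighting chapter:
</p>

<pre><code>
// inside the pre-filter sample loop, after computing H, L, and NdotL:
float NdotH = max(dot(N, H), 0.0);
float HdotV = max(dot(H, V), 0.0);
float D     = DistributionGGX(N, H, roughness);
float pdf   = (D * NdotH / (4.0 * HdotV)) + 0.0001; 

float resolution = 512.0; // resolution of source cubemap (per face)
float saTexel  = 4.0 * PI / (6.0 * resolution * resolution);
float saSample = 1.0 / (float(SAMPLE_COUNT) * pdf + 0.0001);
float mipLevel = roughness == 0.0 ? 0.0 : 0.5 * log2(saSample / saTexel); 

prefilteredColor += textureLod(environmentMap, L, mipLevel).rgb * NdotL;
</code></pre>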
    791               
    792 <p>
    793   Don't forget to enable trilinear filtering on the environment map whose mip levels you want to sample:
    794 </p>
    795               
    796 <pre><code>
    797 <function id='48'>glBindTexture</function>(GL_TEXTURE_CUBE_MAP, envCubemap);
    798 <function id='15'>glTexParameter</function>i(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); 
    799 </code></pre>
    800               
    801 <p>
    802   And let OpenGL generate the mipmaps <strong>after</strong> the cubemap's base texture is set:
    803 </p>
    804 
    805 <pre><code>
    806 // convert HDR equirectangular environment map to cubemap equivalent
    807 [...]
    808 // then generate mipmaps
    809 <function id='48'>glBindTexture</function>(GL_TEXTURE_CUBE_MAP, envCubemap);
    810 <function id='51'>glGenerateMipmap</function>(GL_TEXTURE_CUBE_MAP);
    811 </code></pre>
    812               
    813 <p>
    814   This works surprisingly well and should remove most, if not all, dots in your pre-filter map on rougher surfaces. 
    815 </p>
    816 
    817 <h2>Pre-computing the BRDF</h2>
    818 <p>
    819   With the pre-filtered environment map up and running, we can focus on the second part of the split sum approximation: the BRDF. Let's briefly review the specular split sum approximation again:
    820 </p>
    821           
    822 \[
    823      L_o(p,\omega_o) = 
    824 		\int\limits_{\Omega} L_i(p,\omega_i) d\omega_i
    825 		*
    826 		\int\limits_{\Omega} f_r(p, \omega_i, \omega_o) n \cdot \omega_i d\omega_i
    827 \]
    828           
    829 <p>
    830   We've pre-computed the left part of the split sum approximation in the pre-filter map over different roughness levels. The right side requires us to convolute the BRDF equation over the angle \(n \cdot \omega_o\), the surface roughness, and Fresnel's \(F_0\). This is similar to integrating the specular BRDF with a solid-white environment or a constant radiance \(L_i\) of <code>1.0</code>. Convoluting the BRDF over 3 variables is a bit much, but we can try to move \(F_0\) out of the specular BRDF equation:
    831 </p>
    832         
    833 \[
    834 	\int\limits_{\Omega} f_r(p, \omega_i, \omega_o) n \cdot \omega_i d\omega_i = \int\limits_{\Omega} f_r(p, \omega_i, \omega_o) \frac{F(\omega_o, h)}{F(\omega_o, h)} n \cdot \omega_i d\omega_i 
    835 \]
    836           
    837 <p>
    838   Here \(F\) is the Fresnel equation. Moving the Fresnel denominator to the BRDF gives us the following equivalent equation: 
    839 </p>
    840   
    841 \[        
    842 	\int\limits_{\Omega} \frac{f_r(p, \omega_i, \omega_o)}{F(\omega_o, h)} F(\omega_o, h)  n \cdot \omega_i d\omega_i
    843 \]
    844           
    845 <p>
    846   Substituting the right-most \(F\) with the Fresnel-Schlick approximation gives us:
    847   </p>
    848           
    849 \[        
    850 	\int\limits_{\Omega} \frac{f_r(p, \omega_i, \omega_o)}{F(\omega_o, h)} (F_0 + (1 - F_0){(1 - \omega_o \cdot h)}^5)  n \cdot \omega_i d\omega_i
    851 \]
    852   
    853 <p>
    854   Let's replace \({(1 - \omega_o \cdot h)}^5\) by \(\alpha\) to make it easier to solve for \(F_0\):
    855 </p>
    856           
    857 \[        
    858 	\int\limits_{\Omega} \frac{f_r(p, \omega_i, \omega_o)}{F(\omega_o, h)} (F_0 + (1 - F_0)\alpha)  n \cdot \omega_i d\omega_i
    859 \]
    860    
    861 \[        
    862 	\int\limits_{\Omega} \frac{f_r(p, \omega_i, \omega_o)}{F(\omega_o, h)} (F_0 + 1*\alpha - F_0*\alpha)  n \cdot \omega_i d\omega_i
    863 \]        
    864               
    865 \[        
    866 	\int\limits_{\Omega} \frac{f_r(p, \omega_i, \omega_o)}{F(\omega_o, h)} (F_0 * (1 - \alpha) + \alpha)  n \cdot \omega_i d\omega_i
    867 \]  
    868           
    869 <p>
    870   Then we split the Fresnel function \(F\) over two integrals:
    871 </p>
    872           
    873 \[        
    874 	\int\limits_{\Omega} \frac{f_r(p, \omega_i, \omega_o)}{F(\omega_o, h)} (F_0 * (1 - \alpha))  n \cdot \omega_i d\omega_i
    875               +
    876 	\int\limits_{\Omega} \frac{f_r(p, \omega_i, \omega_o)}{F(\omega_o, h)} (\alpha)  n \cdot \omega_i d\omega_i              
    877 \]  
    878               
    879 <p>
    880   This way, \(F_0\) is constant over the integral and we can take \(F_0\) out of the integral. Next,  we substitute \(\alpha\) back to its original form giving us the final split sum BRDF equation:
    881 </p>
    882               
    883 \[        
    884 	F_0 \int\limits_{\Omega} f_r(p, \omega_i, \omega_o)(1 - {(1 - \omega_o \cdot h)}^5)  n \cdot \omega_i d\omega_i
    885               +
    886 	\int\limits_{\Omega} f_r(p, \omega_i, \omega_o) {(1 - \omega_o \cdot h)}^5  n \cdot \omega_i d\omega_i              
    887 \]  
    888           
    889 <p>
    890    The two resulting integrals represent a scale and a bias to \(F_0\) respectively. Note that as \(f_r(p, \omega_i, \omega_o)\) already contains a term for \(F\), the two cancel out, removing \(F\) from \(f_r\). 
    891 </p>                  
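
<p>
  Written out, with \(f_r = \frac{DFG}{4(\omega_o \cdot n)(\omega_i \cdot n)}\), the division leaves:
</p>

\[
	\frac{f_r(p, \omega_i, \omega_o)}{F(\omega_o, h)} = \frac{DG}{4(\omega_o \cdot n)(\omega_i \cdot n)}
\]

<p>
  This is why only the distribution and geometry terms appear in the convolution code below.
</p>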
    892   
    893   <p>
    894     In a similar fashion to the earlier convoluted environment maps, we can convolute the BRDF equation over its inputs: the angle between \(n\) and \(\omega_o\), and the roughness. We store the convoluted results in a 2D lookup texture (LUT) known as a <def>BRDF integration</def> map that we later use in our PBR lighting shader to get the final convoluted indirect specular result.
    895 </p>        
    896           
    897 <p>
    898   The BRDF convolution shader operates on a 2D plane, using its 2D texture coordinates directly as inputs to the BRDF convolution (<var>NdotV</var> and <var>roughness</var>). The convolution code is largely similar to the pre-filter convolution, except that it now processes the sample vector according to our BRDF's geometry function and Fresnel-Schlick's approximation:
    899 </p>
    900           
    901 <pre><code>          
    902 vec2 IntegrateBRDF(float NdotV, float roughness)
    903 {
    904     vec3 V;
    905     V.x = sqrt(1.0 - NdotV*NdotV);
    906     V.y = 0.0;
    907     V.z = NdotV;
    908 
    909     float A = 0.0;
    910     float B = 0.0;
    911 
    912     vec3 N = vec3(0.0, 0.0, 1.0);
    913 
    914     const uint SAMPLE_COUNT = 1024u;
    915     for(uint i = 0u; i &lt; SAMPLE_COUNT; ++i)
    916     {
    917         vec2 Xi = Hammersley(i, SAMPLE_COUNT);
    918         vec3 H  = ImportanceSampleGGX(Xi, N, roughness);
    919         vec3 L  = normalize(2.0 * dot(V, H) * H - V);
    920 
    921         float NdotL = max(L.z, 0.0);
    922         float NdotH = max(H.z, 0.0);
    923         float VdotH = max(dot(V, H), 0.0);
    924 
    925         if(NdotL > 0.0)
    926         {
    927             float G = GeometrySmith(N, V, L, roughness);
    928             float G_Vis = (G * VdotH) / (NdotH * NdotV);
    929             float Fc = pow(1.0 - VdotH, 5.0);
    930 
    931             A += (1.0 - Fc) * G_Vis;
    932             B += Fc * G_Vis;
    933         }
    934     }
    935     A /= float(SAMPLE_COUNT);
    936     B /= float(SAMPLE_COUNT);
    937     return vec2(A, B);
    938 }
    939 // ----------------------------------------------------------------------------
    940 void main() 
    941 {
    942     vec2 integratedBRDF = IntegrateBRDF(TexCoords.x, TexCoords.y);
    943     FragColor = integratedBRDF;
    944 }
    945 </code></pre>
    946               
    947 <p>
    948   As you can see, the BRDF convolution is a direct translation from the mathematics to code. We take both the angle \(\theta\) and the roughness as input, generate a sample vector with importance sampling, process it over the geometry and the derived Fresnel term of the BRDF, and output both a scale and a bias to \(F_0\) for each sample, averaging them in the end. 
    949 </p>
    950               
    951 <p>
    952   You may recall from the <a href="https://learnopengl.com/PBR/Theory" target="_blank">theory</a> chapter that the geometry term of the BRDF is slightly different when used alongside IBL as its \(k\) variable has a slightly different interpretation:
    953 </p>
    954               
    955 \[
    956     k_{direct} = \frac{(\alpha + 1)^2}{8}
    957 \]
    958     
    959 \[
    960     k_{IBL} = \frac{\alpha^2}{2}
    961 \]
    962               
    963 <p>
    964   Since the BRDF convolution is part of the specular IBL integral we'll use \(k_{IBL}\) for the Schlick-GGX geometry function:
    965 </p>
    966               
<pre><code>
float GeometrySchlickGGX(float NdotV, float roughness)
{
    // note the different k remap for IBL: k = a^2 / 2
    float a = roughness;
    float k = (a * a) / 2.0;

    float nom   = NdotV;
    float denom = NdotV * (1.0 - k) + k;

    return nom / denom;
}
// ----------------------------------------------------------------------------
float GeometrySmith(vec3 N, vec3 V, vec3 L, float roughness)
{
    // geometry obstruction (view) and geometry shadowing (light)
    float NdotV = max(dot(N, V), 0.0);
    float NdotL = max(dot(N, L), 0.0);
    float ggx2 = GeometrySchlickGGX(NdotV, roughness);
    float ggx1 = GeometrySchlickGGX(NdotL, roughness);

    return ggx1 * ggx2;
}  
</code></pre>
    989               
    990 <p>
  Note that while \(k\) takes <var>a</var> as its parameter, we didn't square <var>roughness</var> before assigning it to <var>a</var> as we originally did for other interpretations of <var>a</var>; likely because <var>a</var> is already squared here. I'm not sure whether this is an inconsistency on Epic Games' part or in the original Disney paper, but directly translating <var>roughness</var> to <var>a</var> gives a BRDF integration map identical to Epic Games' version.
    992 </p>
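<p>
  For comparison, the direct lighting version of this function from the earlier lighting chapters uses the \(k_{direct}\) remap instead; shown here as a reminder, matching the formula above:
</p>

<pre><code>
float GeometrySchlickGGX(float NdotV, float roughness)
{
    // k_direct = (a + 1)^2 / 8
    float r = (roughness + 1.0);
    float k = (r * r) / 8.0;

    return NdotV / (NdotV * (1.0 - k) + k);
}
</code></pre>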
    993           
    994 <p>
    995   Finally, to store the BRDF convolution result we'll generate a 2D texture of a 512 by 512 resolution:
    996 </p>
    997           
    998 <pre><code>
    999 unsigned int brdfLUTTexture;
   1000 <function id='50'>glGenTextures</function>(1, &brdfLUTTexture);
   1001 
   1002 // pre-allocate enough memory for the LUT texture.
   1003 <function id='48'>glBindTexture</function>(GL_TEXTURE_2D, brdfLUTTexture);
   1004 <function id='52'>glTexImage2D</function>(GL_TEXTURE_2D, 0, GL_RG16F, 512, 512, 0, GL_RG, GL_FLOAT, 0);
   1005 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
   1006 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
   1007 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
   1008 <function id='15'>glTexParameter</function>i(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); 
   1009 </code></pre>
   1010           
   1011 <p>
   1012   Note that we use a 16-bit precision floating format as recommended by Epic Games. Be sure to set the wrapping mode to <var>GL_CLAMP_TO_EDGE</var> to prevent edge sampling artifacts.
   1013 </p>
   1014           
   1015 <p>
   1016   Then, we re-use the same framebuffer object and run this shader over an NDC screen-space quad:
   1017 </p>
   1018           
   1019 <pre class="cpp"><code>
   1020 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, captureFBO);
   1021 <function id='83'>glBindRenderbuffer</function>(GL_RENDERBUFFER, captureRBO);
   1022 <function id='88'>glRenderbufferStorage</function>(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 512, 512);
   1023 <function id='81'>glFramebufferTexture2D</function>(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, brdfLUTTexture, 0);
   1024 
   1025 <function id='22'>glViewport</function>(0, 0, 512, 512);
   1026 brdfShader.use();
   1027 <function id='10'>glClear</function>(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
   1028 RenderQuad();
   1029 
   1030 <function id='77'>glBindFramebuffer</function>(GL_FRAMEBUFFER, 0);  
   1031 </code></pre>
   1032           
   1033 <p>
   1034   The convoluted BRDF part of the split sum integral should give you the following result:
   1035 </p>
   1036           
   1037 <img src="/img/pbr/ibl_brdf_lut.png" alt="BRDF LUT"/>
   1038 
   1039 <p>
   1040   With both the pre-filtered environment map and the BRDF 2D LUT we can re-construct the indirect specular integral according to the split sum approximation. The combined result then acts as the indirect or ambient specular light. 
   1041 </p>
   1042 
   1043   <h2>Completing the IBL reflectance</h2>
   1044 <p>
   1045   To get the indirect specular part of the reflectance equation up and running we need to stitch both parts of the split sum approximation together. Let's start by adding the pre-computed lighting data to the top of our PBR shader:
   1046 </p>
   1047 
   1048 <pre><code>
   1049 uniform samplerCube prefilterMap;
   1050 uniform sampler2D   brdfLUT;  
   1051 </code></pre>
   1052   
   1053 <p>
   1054   First, we get the indirect specular reflections of the surface by sampling the pre-filtered environment map using the reflection vector. Note that we sample the appropriate mip level based on the surface roughness, giving rougher surfaces <em>blurrier</em> specular reflections:
   1055 </p>
   1056   
   1057 <pre><code>
   1058 void main()
   1059 {
   1060     [...]
   1061     vec3 R = reflect(-V, N);   
   1062 
   1063     const float MAX_REFLECTION_LOD = 4.0;
   1064     vec3 prefilteredColor = textureLod(prefilterMap, R,  roughness * MAX_REFLECTION_LOD).rgb;    
   1065     [...]
   1066 }
   1067 </code></pre>
   1068   
   1069 <p>
   1070   In the pre-filter step we only convoluted the environment map up to a maximum of 5 mip levels (0 to 4), which we denote here as <var>MAX_REFLECTION_LOD</var> to ensure we don't sample a mip level where there's no (relevant) data.
   1071 </p>
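<p>
  As a quick example: a surface with a <var>roughness</var> of 0.6 samples the pre-filter map at mip level \(0.6 \cdot 4.0 = 2.4\). With trilinear filtering enabled on the pre-filter map, the hardware then blends between mip levels 2 and 3, giving a smooth transition between the convoluted roughness levels.
</p>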
   1072   
   1073 <p>
   1074 	Then we sample from the BRDF lookup texture given the material's roughness and the angle between the normal and view vector:
   1075 </p>
   1076   
   1077 <pre><code>
   1078 vec3 F        = FresnelSchlickRoughness(max(dot(N, V), 0.0), F0, roughness);
   1079 vec2 envBRDF  = texture(brdfLUT, vec2(max(dot(N, V), 0.0), roughness)).rg;
   1080 vec3 specular = prefilteredColor * (F * envBRDF.x + envBRDF.y);
   1081 </code></pre>
   1082   
   1083 <p>
  Given the scale and bias to \(F_0\) (here we're directly using the indirect Fresnel result <var>F</var>) from the BRDF lookup texture, we combine it with the pre-filter portion (the left part of the split sum) of the IBL reflectance equation and re-construct the approximated integral result as <var>specular</var>.
   1085 </p>
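<p>
  In equation form, the <var>specular</var> line re-constructs the right-hand side of the split sum approximation:
</p>

\[
    \int\limits_{\Omega} L_i(p, \omega_i) f_r(p, \omega_i, \omega_o) n \cdot \omega_i  d\omega_i
        \approx
    \text{prefilteredColor} * (F * \text{envBRDF.x} + \text{envBRDF.y})
\]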
   1086   
   1087 <p>
   1088   This gives us the indirect specular part of the reflectance equation. Now, combine this with the diffuse IBL part of the reflectance equation from the <a href="https://learnopengl.com/PBR/IBL/Diffuse-irradiance" target="_blank">last</a> chapter and we get the full PBR IBL result:
   1089 </p>
   1090   
   1091 <pre><code>
   1092 vec3 F = FresnelSchlickRoughness(max(dot(N, V), 0.0), F0, roughness);
   1093 
   1094 vec3 kS = F;
   1095 vec3 kD = 1.0 - kS;
   1096 kD *= 1.0 - metallic;	  
   1097   
   1098 vec3 irradiance = texture(irradianceMap, N).rgb;
   1099 vec3 diffuse    = irradiance * albedo;
   1100   
   1101 const float MAX_REFLECTION_LOD = 4.0;
   1102 vec3 prefilteredColor = textureLod(prefilterMap, R,  roughness * MAX_REFLECTION_LOD).rgb;   
   1103 vec2 envBRDF  = texture(brdfLUT, vec2(max(dot(N, V), 0.0), roughness)).rg;
   1104 vec3 specular = prefilteredColor * (F * envBRDF.x + envBRDF.y);
   1105   
   1106 vec3 ambient = (kD * diffuse + specular) * ao; 
   1107 </code></pre>
   1108   
   1109 <p>
   1110   Note that we don't multiply <var>specular</var> by <var>kS</var> as we already have a Fresnel multiplication in there.
   1111 </p>
   1112   
   1113 <p>
   1114   Now, running this exact code on the series of spheres that differ by their roughness and metallic properties, we finally get to see their true colors in the final PBR renderer: 
   1115 </p>
   1116   
   1117   <img src="/img/pbr/ibl_specular_result.png" alt="Render in OpenGL of full PBR with IBL (image based lighting) on spheres with varying roughness and metallic properties."/>
   1118     
   1119 <p>
  We could even go wild and use some cool textured <a href="http://freepbr.com" target="_blank">PBR materials</a>: 
   1121 </p>
   1122     
   1123     <img src="/img/pbr/ibl_specular_result_textured.png" alt="Render in OpenGL of full PBR with IBL (image based lighting) on textured spheres."/>
   1124       
   1125 <p>
   1126   Or load <a href="http://artisaverb.info/PBT.html" target="_blank">this awesome free 3D PBR model</a> by Andrew Maximov:
   1127 </p>
   1128       
   1129       <img src="/img/pbr/ibl_specular_result_model.png" alt="Render in OpenGL of full PBR with IBL (image based lighting) on a 3D PBR model."/>
   1130         
   1131 <p>
  I'm sure we can all agree that our lighting now looks a lot more convincing. What's even better is that our lighting looks physically correct regardless of which environment map we use. Below you'll see several different pre-computed HDR maps, completely changing the lighting dynamics, but still looking physically correct without changing a single lighting variable!
   1133 </p>
   1134         
   1135 <img src="/img/pbr/ibl_specular_result_different_environments.png" alt="Render in OpenGL of full PBR with IBL (image based lighting) on a 3D PBR model over multiple different environments (with changing light conditions)."/>
   1136           
   1137   
   1138 <p>
   Well, this PBR adventure turned out to be quite a long journey. There are a lot of steps and thus a lot that could go wrong, so carefully work your way through the <a href="/code_viewer_gh.php?code=src/6.pbr/2.2.1.ibl_specular/ibl_specular.cpp" target="_blank">sphere scene</a> or <a href="/code_viewer_gh.php?code=src/6.pbr/2.2.2.ibl_specular_textured/ibl_specular_textured.cpp" target="_blank">textured scene</a> code samples (including all shaders) if you're stuck, or check and ask around in the comments. 
   1140 </p>
   1141   
   1142 <h3>What's next?</h3>
   1143 <p>
	Hopefully, by the end of this tutorial you have a pretty clear understanding of what PBR is about, and even have an actual PBR renderer up and running. In these tutorials, we've pre-computed all the relevant PBR image-based lighting data at the start of our application, before the render loop. This is fine for educational purposes, but not too great for any practical use of PBR. First, the pre-computation only really has to be done once, not at every startup. And second, the moment you use multiple environment maps you'll have to pre-compute each and every one of them at every startup, which tends to add up.
   1145 </p>
   1146 
   1147 <p>
	For this reason you'd generally pre-compute an environment map into an irradiance and pre-filter map just once, and then store them on disk (note that the BRDF integration map isn't dependent on an environment map, so you only need to calculate or load it once). This does mean you'll need to come up with a custom image format to store HDR cubemaps, including their mip levels, or store (and load) them in one of the available formats (like .dds, which supports storing mip levels). 
   1149 </p>
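<p>
  As a minimal illustration of storing pre-computed data (a sketch, not an established file format: the raw-float layout and the <var>saveBRDFLUT</var> helper are made up for this example), you could read the BRDF LUT back once and dump it to disk; the same idea extends to each face and mip level of the cubemaps:
</p>

<pre class="cpp"><code>
#include &lt;cstdio&gt;
#include &lt;vector&gt;

// dump the 512x512 GL_RG16F BRDF LUT as tightly packed 32-bit RG floats
void saveBRDFLUT(unsigned int brdfLUTTexture, const char* path)
{
    std::vector&lt;float&gt; data(512 * 512 * 2);
    <function id='48'>glBindTexture</function>(GL_TEXTURE_2D, brdfLUTTexture);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RG, GL_FLOAT, data.data());

    FILE* file = std::fopen(path, "wb");
    if (file)
    {
        std::fwrite(data.data(), sizeof(float), data.size(), file);
        std::fclose(file);
    }
}
</code></pre>

<p>
  Loading is then the mirror image: read the floats back from disk and upload them with <function id='52'>glTexImage2D</function> instead of re-running the convolution shader.
</p>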
   1150 
   1151 <p>
 Furthermore, we've described the <strong>total</strong> process in these tutorials, including generating the pre-computed IBL images, to help further our understanding of the PBR pipeline. But you'll be just as fine using one of several great tools like <a href="https://github.com/dariomanesku/cmftStudio" target="_blank">cmftStudio</a> or <a href="https://github.com/derkreature/IBLBaker" target="_blank">IBLBaker</a> to generate these pre-computed maps for you.
   1153 </p>
   1154 
   1155 <p>
  One point we've skipped over is pre-computed cubemaps as <def>reflection probes</def>: cubemap interpolation and parallax correction. This is the process of placing several reflection probes in your scene that each take a cubemap snapshot of the scene at their specific location, which we can then convolute as IBL data for that part of the scene. By interpolating between several of these probes based on the camera's vicinity we can achieve local high-detail image-based lighting that is limited only by the number of reflection probes we're willing to place. This way, the image-based lighting can correctly update when moving from a bright outdoor section of a scene to a darker indoor section, for instance. I'll write a tutorial about reflection probes somewhere in the future, but for now I recommend the article by Chetan Jags below to give you a head start.
   1157 </p>
   1158 
   1159   
   1160 <h2>Further reading</h2>
<ul>
  <li><a href="http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_notes_v2.pdf" target="_blank">Real Shading in Unreal Engine 4</a>: explains Epic Games' split sum approximation. This is the article the IBL PBR code is based on.</li>
  <li><a href="http://www.trentreed.net/blog/physically-based-shading-and-image-based-lighting/" target="_blank">Physically Based Shading and Image Based Lighting</a>: great blog post by Trent Reed about integrating specular IBL into a PBR pipeline in real time.</li>
  <li><a href="https://chetanjags.wordpress.com/2015/08/26/image-based-lighting/" target="_blank">Image Based Lighting</a>: very extensive write-up by Chetan Jags about specular image-based lighting and several of its caveats, including light probe interpolation.</li>
  <li><a href="https://seblagarde.files.wordpress.com/2015/07/course_notes_moving_frostbite_to_pbr_v32.pdf" target="_blank">Moving Frostbite to PBR</a>: well-written and in-depth overview of integrating PBR into a AAA game engine by Sébastien Lagarde and Charles de Rousiers.</li>
  <li><a href="https://jmonkeyengine.github.io/wiki/jme3/advanced/pbr_part3.html" target="_blank">Physically Based Rendering – Part Three</a>: high-level overview of IBL lighting and PBR by the JMonkeyEngine team.</li>
  <li><a href="https://placeholderart.wordpress.com/2015/07/28/implementation-notes-runtime-environment-map-filtering-for-image-based-lighting/" target="_blank">Implementation Notes: Runtime Environment Map Filtering for Image Based Lighting</a>: extensive write-up by Padraic Hennessy about pre-filtering HDR environment maps and significantly optimizing the sample process.</li>
</ul>
   1169 
   1170     </div>
   1171     
   1172 	</main>
   1173 </body>
   1174 </html>