
      1     <div id="content">
      2     <h1 id="content-title">Geometry Shader</h1>
      3 <h1 id="content-url" style='display:none;'>Advanced-OpenGL/Geometry-Shader</h1>
      4 <p>
  Between the vertex and the fragment shader there is an optional shader stage called the <def>geometry shader</def>. A geometry shader takes as input the set of vertices that form a single primitive, e.g. a point or a triangle. The geometry shader can then transform these vertices as it sees fit before sending them to the next shader stage. What makes the geometry shader interesting is that it is able to convert the original primitive (set of vertices) to completely different primitives, possibly generating more vertices than were initially given.
      6 </p>
      7 
      8 <p>
  We're going to throw you right into the deep end by showing you an example of a geometry shader:
     10 </p>
     11 
     12 <pre><code>
     13 #version 330 core
     14 layout (points) in;
     15 layout (line_strip, max_vertices = 2) out;
     16 
     17 void main() {    
     18     gl_Position = gl_in[0].gl_Position + vec4(-0.1, 0.0, 0.0, 0.0); 
     19     EmitVertex();
     20 
     21     gl_Position = gl_in[0].gl_Position + vec4( 0.1, 0.0, 0.0, 0.0);
     22     EmitVertex();
     23     
     24     EndPrimitive();
     25 }  
     26 </code></pre>
     27 
     28 <p>
     29   At the start of a geometry shader we need to declare the type of primitive input we're receiving from the vertex shader. We do this by declaring a layout specifier in front of the <fun>in</fun> keyword. This input layout qualifier can take any of the following primitive values:
     30 </p>
     31 
     32 <ul>
     33   <li><code>points</code>: when drawing <var>GL_POINTS</var> primitives (<code>1</code>).</li>
     34   <li><code>lines</code>: when drawing <var>GL_LINES</var> or <var>GL_LINE_STRIP</var> (<code>2</code>).</li>
     35   <li><code>lines_adjacency</code>: <var>GL_LINES_ADJACENCY</var> or <var>GL_LINE_STRIP_ADJACENCY</var> (<code>4</code>).</li>
     36   <li><code>triangles</code>: <var>GL_TRIANGLES</var>, <var>GL_TRIANGLE_STRIP</var> or <var>GL_TRIANGLE_FAN</var> (<code>3</code>).</li>
  <li><code>triangles_adjacency</code>: <var>GL_TRIANGLES_ADJACENCY</var> or <var>GL_TRIANGLE_STRIP_ADJACENCY</var> (<code>6</code>).</li>
     38 </ul>
     39 
     40 <p>
  These are almost all the rendering primitives we're able to give to rendering calls like <fun><function id='1'>glDrawArrays</function></fun>. If we'd chosen to draw vertices as <var>GL_TRIANGLES</var> we should set the input qualifier to <code>triangles</code>. The number within the parentheses represents the minimum number of vertices a single primitive contains.
     42 </p>
     43 
     44 <p>
     45   We also need to specify a primitive type that the geometry shader will output and we do this via a layout specifier in front of the <fun>out</fun> keyword. Like the input layout qualifier, the output layout qualifier can take several primitive values:
     46 </p>
     47 
     48 <ul>
     49   <li><code>points</code></li>
     50   <li><code>line_strip</code></li>
     51   <li><code>triangle_strip</code></li>
     52 </ul>
     53 
     54 <p>
     55 	With just these 3 output specifiers we can create almost any shape we want from the input primitives. To generate a single triangle for example we'd specify <code>triangle_strip</code> as the output and output 3 vertices.
     56 </p>
     57 
     58 <p>
     59   The geometry shader also expects us to set a maximum number of vertices it outputs (if you exceed this number, OpenGL won't draw the <em>extra</em> vertices) which we can also do within the layout qualifier of the <fun>out</fun> keyword. In this particular case we're going to output a <code>line_strip</code> with a maximum number of 2 vertices. 
     60 </p>
     61 
     62 <note>
     63   In case you're wondering what a line strip is: a line strip binds together a set of points to form one continuous line between them with a minimum of 2 points. Each extra point results in a new line between the new point and the previous point as you can see in the following image with 5 point vertices:
     64 
     65 <img src="/img/advanced/geometry_shader_line_strip.png" class="clean" alt="Image of line_strip primitive in geometry shader"/>
     66 </note>
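<p>
  Note that the value you can declare for <code>max_vertices</code> is limited by the hardware. As a small aside (this query isn't part of the chapter's example code), you can ask your implementation for its limit; the OpenGL spec guarantees at least 256:
</p>

<pre class="cpp"><code>
GLint maxVertices;
glGetIntegerv(GL_MAX_GEOMETRY_OUTPUT_VERTICES, &maxVertices);
std::cout << "Max geometry output vertices: " << maxVertices << std::endl;
</code></pre>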
     67 
     68 <p>
  To generate meaningful results we need some way to retrieve the output from the previous shader stage. GLSL gives us a <def>built-in</def> variable called <var>gl_in</var> that internally (probably) looks something like this:
     70 </p>
     71 
     72 <pre><code>
in gl_PerVertex
     74 {
     75     vec4  gl_Position;
     76     float gl_PointSize;
     77     float gl_ClipDistance[];
     78 } gl_in[];  
     79 </code></pre>
     80 
     81 <p>
     82   Here it is declared as an <def>interface block</def> (as discussed in the <a href="https://learnopengl.com/Advanced-OpenGL/Advanced-GLSL" target="_blank">previous</a> chapter) that contains a few interesting variables of which the most interesting one is <var>gl_Position</var> that contains the vector we set as the vertex shader's output.
     83 </p>
     84 
     85 <p>
     86   Note that it is declared as an array, because most render primitives contain more than 1 vertex. The geometry shader receives <strong>all</strong> vertices of a primitive as its input.
     87 </p>
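<p>
  Since <var>gl_in</var> is an array we can index each of the primitive's vertices, and its size can even be queried with <fun>length()</fun>. As a quick illustration (not part of the chapter's example), a geometry shader receiving <code>triangles</code> could compute the center of each input triangle like so:
</p>

<pre><code>
// with 'layout (triangles) in;' gl_in.length() equals 3
vec4 center = vec4(0.0);
for (int i = 0; i < gl_in.length(); i++)
    center += gl_in[i].gl_Position;
center /= float(gl_in.length());
</code></pre>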
     88 
     89 <p>
     90   Using the vertex data from the vertex shader stage we can generate new data with 2 geometry shader functions called <fun>EmitVertex</fun> and <fun>EndPrimitive</fun>. The geometry shader expects you to generate/output at least one of the primitives you specified as output. In our case we want to at least generate one line strip primitive.
     91 </p>
     92 
     93 <pre><code>
     94 #version 330 core
     95 layout (points) in;
     96 layout (line_strip, max_vertices = 2) out;
     97   
     98 void main() {    
     99     gl_Position = gl_in[0].gl_Position + vec4(-0.1, 0.0, 0.0, 0.0); 
    100     EmitVertex();
    101 
    102     gl_Position = gl_in[0].gl_Position + vec4( 0.1, 0.0, 0.0, 0.0);
    103     EmitVertex();
    104     
    105     EndPrimitive();
    106 }    
    107 </code></pre>
    108 
    109 <p>
    110   Each time we call <fun>EmitVertex</fun>, the vector currently set to <var>gl_Position</var> is added to the output primitive. Whenever <fun>EndPrimitive</fun> is called, all emitted vertices for this primitive are combined into the specified output render primitive. By repeatedly calling <fun>EndPrimitive</fun>, after one or more <fun>EmitVertex</fun> calls, multiple primitives can be generated. This particular case emits two vertices that were translated by a small offset from the original vertex position and then calls <fun>EndPrimitive</fun>, combining the two vertices into a single line strip of 2 vertices.
    111 </p>
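<p>
  To make the multiple-primitives point concrete, here's a hypothetical variation (not used further in this chapter) that calls <fun>EndPrimitive</fun> twice to generate <strong>two</strong> separate line primitives - a small cross - from a single input point. Note that <code>max_vertices</code> has to account for all vertices emitted over all primitives:
</p>

<pre><code>
#version 330 core
layout (points) in;
layout (line_strip, max_vertices = 4) out;

void main() {
    // first primitive: a horizontal line
    gl_Position = gl_in[0].gl_Position + vec4(-0.1, 0.0, 0.0, 0.0);
    EmitVertex();
    gl_Position = gl_in[0].gl_Position + vec4( 0.1, 0.0, 0.0, 0.0);
    EmitVertex();
    EndPrimitive();

    // second primitive: a vertical line
    gl_Position = gl_in[0].gl_Position + vec4(0.0, -0.1, 0.0, 0.0);
    EmitVertex();
    gl_Position = gl_in[0].gl_Position + vec4(0.0,  0.1, 0.0, 0.0);
    EmitVertex();
    EndPrimitive();
}
</code></pre>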
    112 
    113 <p>
  Back to the example shader: now that you (sort of) know how geometry shaders work, you can probably guess what it does. It takes a point primitive as its input and creates a horizontal line primitive with the input point at its center. If we were to render this it would look something like this:
    115 </p>
    116 
    117 <img src="/img/advanced/geometry_shader_lines.png" class="clean" alt="Geometry shader drawing lines out of points in OpenGL"/>
    118 
    119 <p>
    120   Not very impressive yet, but it's interesting to consider that this output was generated using just the following render call:
    121 </p>
    122 
    123 <pre class="cpp"><code>
    124 <function id='1'>glDrawArrays</function>(GL_POINTS, 0, 4);  
    125 </code></pre>
    126 
    127 <p>
    128   While this is a relatively simple example, it does show you how we can use geometry shaders to (dynamically) generate new shapes on the fly. Later in this chapter we'll discuss a few interesting effects that we can create using geometry shaders, but for now we're going to start with a simple example.
    129 </p>
    130 
    131 <h2>Using geometry shaders</h2>
    132 <p>
  To demonstrate the use of a geometry shader we're going to render a really simple scene where we draw 4 points on the z = 0 plane in normalized device coordinates. The coordinates of the points are:
    134 </p>
    135 
    136 <pre><code>
    137 float points[] = {
    138 	-0.5f,  0.5f, // top-left
    139 	 0.5f,  0.5f, // top-right
    140 	 0.5f, -0.5f, // bottom-right
    141 	-0.5f, -0.5f  // bottom-left
    142 };  
    143 </code></pre>
    144 
    145 <p>
  The vertex shader only needs to position the points on the z = 0 plane, so a basic vertex shader suffices:
    147 </p>
    148 
    149 <pre><code>
    150 #version 330 core
    151 layout (location = 0) in vec2 aPos;
    152 
    153 void main()
    154 {
    155     gl_Position = vec4(aPos.x, aPos.y, 0.0, 1.0); 
    156 }
    157 </code></pre>
    158 
    159 <p>
  And we'll output the color green for all points, which we hard-code directly in the fragment shader:
    161 </p>
    162 
    163 <pre><code>
    164 #version 330 core
    165 out vec4 FragColor;
    166 
    167 void main()
    168 {
    169     FragColor = vec4(0.0, 1.0, 0.0, 1.0);   
    170 }  
    171 </code></pre>
    172 
    173 <p>
    174   Generate a VAO and a VBO for the points' vertex data and then draw them via <fun><function id='1'>glDrawArrays</function></fun>:
    175 </p>
    176 
    177 <pre class="cpp"><code>
    178 shader.use();
    179 <function id='27'>glBindVertexArray</function>(VAO);
    180 <function id='1'>glDrawArrays</function>(GL_POINTS, 0, 4); 
    181 </code></pre>
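<p>
  The paragraph above glossed over the buffer setup; in case it's hazy, it could look something like the following sketch (following the pattern from the <a href="https://learnopengl.com/Getting-started/Hello-Triangle" target="_blank">Hello Triangle</a> chapter; the variable names are our own):
</p>

<pre class="cpp"><code>
unsigned int VBO, VAO;
glGenVertexArrays(1, &VAO);
glGenBuffers(1, &VBO);
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(points), points, GL_STATIC_DRAW);
// each vertex is a single vec2 position
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), (void*)0);
glBindVertexArray(0);
</code></pre>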
    182 
    183 <p>
    184   The result is a dark scene with 4 (difficult to see) green points:
    185 </p>
    186 
    187 <img src="/img/advanced/geometry_shader_points.png" class="clean" alt="4 Points drawn using OpenGL"/>
    188 
    189 <p>
  But didn't we already learn to do all this? Yes, and now we're going to spice this little scene up by adding some geometry shader magic.
    191 </p>
    192 
    193 <p>
    194   For learning purposes we're first going to create what is called a <def>pass-through</def> geometry shader that takes a point primitive as its input and <em>passes</em> it to the next shader unmodified:
    195 </p>
    196 
    197 <pre><code>
    198 #version 330 core
    199 layout (points) in;
    200 layout (points, max_vertices = 1) out;
    201 
    202 void main() {    
    203     gl_Position = gl_in[0].gl_Position; 
    204     EmitVertex();
    205     EndPrimitive();
    206 }  
    207 </code></pre>
    208 
    209 <p>
    210   By now this geometry shader should be fairly easy to understand. It simply emits the unmodified vertex position it received as input and generates a point primitive.
    211 </p>
    212 
    213 <p>
    214   A geometry shader needs to be compiled and linked to a program just like the vertex and fragment shader, but this time we'll create the shader using <var>GL_GEOMETRY_SHADER</var> as the shader type:
    215 </p>
    216 
    217 <pre class="cpp"><code>
    218 geometryShader = <function id='37'>glCreateShader</function>(GL_GEOMETRY_SHADER);
    219 <function id='42'>glShaderSource</function>(geometryShader, 1, &gShaderCode, NULL);
    220 <function id='38'>glCompileShader</function>(geometryShader);  
    221 [...]
    222 <function id='34'>glAttachShader</function>(program, geometryShader);
    223 <function id='35'>glLinkProgram</function>(program);  
    224 </code></pre>
    225 
    226 <p>
  The shader compilation code is the same as for the vertex and fragment shaders. Be sure to check for compile or linking errors!
    228 </p>
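<p>
  Such a check could look something like the sketch below, analogous to what we did for the other shader stages:
</p>

<pre class="cpp"><code>
int success;
char infoLog[512];
glGetShaderiv(geometryShader, GL_COMPILE_STATUS, &success);
if (!success)
{
    glGetShaderInfoLog(geometryShader, 512, NULL, infoLog);
    std::cout << "ERROR::SHADER::GEOMETRY::COMPILATION_FAILED\n" << infoLog << std::endl;
}
// and after glLinkProgram:
glGetProgramiv(program, GL_LINK_STATUS, &success);
if (!success)
{
    glGetProgramInfoLog(program, 512, NULL, infoLog);
    std::cout << "ERROR::PROGRAM::LINKING_FAILED\n" << infoLog << std::endl;
}
</code></pre>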
    229 
    230 <p>
  If you now compile and run, you should be looking at a result that looks a bit like this:
    232 </p>
    233 
    234 <img src="/img/advanced/geometry_shader_points.png" class="clean" alt="4 Points drawn using OpenGL (with geometry shader this time!)"/>
    235 
    236 <p>
  It's exactly the same as without the geometry shader! It's a bit dull, I'll admit that, but the fact that we were still able to draw the points means the geometry shader works, so now it's time for the funkier stuff!
    238 </p>
    239 
    240 <h2>Let's build houses</h2>
    241 <p>
  Drawing points and lines isn't <strong>that</strong> interesting so we're going to get a little creative by using the geometry shader to draw a house for us at the location of each point. We can accomplish this by setting the output of the geometry shader to <code>triangle_strip</code> and drawing a total of three triangles: two for the square house and one for the roof.
    243 </p>
    244 
    245 <p>
  A triangle strip in OpenGL is a more efficient way to draw triangles using fewer vertices. After the first triangle is drawn, each subsequent vertex generates another triangle adjacent to the previous one: every 3 consecutive vertices form a triangle. If we have a total of 6 vertices that form a triangle strip we'd get the following triangles: (1,2,3), (2,3,4), (3,4,5) and (4,5,6); forming a total of 4 triangles. A triangle strip needs at least 3 vertices and generates N-2 triangles; with 6 vertices we created 6-2 = 4 triangles. The following image illustrates this:
    247 </p>
    248 
    249 <img src="/img/advanced/geometry_shader_triangle_strip.png" class="clean" alt="Image of a triangle strip with their index order in OpenGL"/>
    250 
    251 <p>
    252   Using a triangle strip as the output of the geometry shader we can easily create the house shape we're after by generating 3 adjacent triangles in the correct order. The following image shows in what order we need to draw what vertices to get the triangles we need with the blue dot being the input point:
    253 </p>
    254 
    255 <img src="/img/advanced/geometry_shader_house.png" class="clean" alt="How a house figure should be drawn from a single point using geometry shaders"/>
    256 
    257 <p>
    258   This translates to the following geometry shader:
    259 </p>
    260 
    261 <pre><code>
    262 #version 330 core
    263 layout (points) in;
    264 layout (triangle_strip, max_vertices = 5) out;
    265 
    266 void build_house(vec4 position)
    267 {    
    268     gl_Position = position + vec4(-0.2, -0.2, 0.0, 0.0);    // 1:bottom-left
    269     EmitVertex();   
    270     gl_Position = position + vec4( 0.2, -0.2, 0.0, 0.0);    // 2:bottom-right
    271     EmitVertex();
    272     gl_Position = position + vec4(-0.2,  0.2, 0.0, 0.0);    // 3:top-left
    273     EmitVertex();
    274     gl_Position = position + vec4( 0.2,  0.2, 0.0, 0.0);    // 4:top-right
    275     EmitVertex();
    276     gl_Position = position + vec4( 0.0,  0.4, 0.0, 0.0);    // 5:top
    277     EmitVertex();
    278     EndPrimitive();
    279 }
    280 
    281 void main() {    
    282     build_house(gl_in[0].gl_Position);
    283 }  
    284 </code></pre>
    285 
    286 <p>
    287   This geometry shader generates 5 vertices, with each vertex being the point's position plus an offset to form one large triangle strip. The resulting primitive is then rasterized and the fragment shader runs on the entire triangle strip, resulting in a green house for each point we've rendered:
    288 </p>
    289 
    290 <img src="/img/advanced/geometry_shader_houses.png" class="clean" alt="Houses drawn with points using geometry shader in OpenGL"/>
    291 
    292 <p>
  You can see that each house indeed consists of 3 triangles - all drawn using a single point in space. The green houses do look a bit boring though, so let's liven it up a bit by giving each house a unique color. To do this we're going to add an extra vertex attribute in the vertex shader with color information per vertex and direct it to the geometry shader, which in turn forwards it to the fragment shader.
    294 </p>
    295 
    296 <p>
    297   The updated vertex data is given below:
    298 </p>
    299 
    300 <pre><code>
    301 float points[] = {
    302     -0.5f,  0.5f, 1.0f, 0.0f, 0.0f, // top-left
    303      0.5f,  0.5f, 0.0f, 1.0f, 0.0f, // top-right
    304      0.5f, -0.5f, 0.0f, 0.0f, 1.0f, // bottom-right
    305     -0.5f, -0.5f, 1.0f, 1.0f, 0.0f  // bottom-left
    306 };  
    307 </code></pre>
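<p>
  Since each vertex now consists of 5 floats (2 for position, 3 for color), the vertex attribute pointers need updating as well; a sketch, assuming the VBO setup from before:
</p>

<pre class="cpp"><code>
// position attribute: 2 floats at the start of each 5-float vertex
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)0);
// color attribute: 3 floats, offset past the position
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)(2 * sizeof(float)));
</code></pre>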
    308 
    309 <p>
    310   Then we update the vertex shader to forward the color attribute to the geometry shader using an interface block:
    311 </p>
    312 
    313 <pre><code>
    314 #version 330 core
    315 layout (location = 0) in vec2 aPos;
    316 layout (location = 1) in vec3 aColor;
    317 
    318 out VS_OUT {
    319     vec3 color;
    320 } vs_out;
    321 
    322 void main()
    323 {
    324     gl_Position = vec4(aPos.x, aPos.y, 0.0, 1.0); 
    325     vs_out.color = aColor;
    326 }  
    327 </code></pre>
    328 
    329 <p>
  Then we also need to declare the same interface block in the geometry shader (the block name <code>VS_OUT</code> has to match across stages; only the instance name may differ):
    331 </p>
    332 
    333 <pre><code>
    334 in VS_OUT {
    335     vec3 color;
    336 } gs_in[];  
    337 </code></pre>
    338 
    339 <p>
    340   Because the geometry shader acts on a set of vertices as its input, its input data from the vertex shader is always represented as arrays of vertex data even though we only have a single vertex right now.
    341 </p>
    342 
    343 <note>
    344   We don't necessarily have to use interface blocks to transfer data to the geometry shader. We could have also written it as:
    345 <pre><code>
    346 in vec3 outColor[];
    347 </code></pre>
  This works if the vertex shader forwarded the color vector as <code>out vec3 outColor</code>. However, interface blocks are easier to work with in shaders like the geometry shader. In practice, geometry shader inputs can get quite large and grouping them in one large interface block array makes a lot more sense.  
    349 </note>
    350 
    351 <p>
    352   We should also declare an output color vector for the next fragment shader stage:
    353 </p>
    354 
    355 <pre><code>
    356 out vec3 fColor;  
    357 </code></pre>
    358 
    359 <p>
  Because the fragment shader expects only a single (interpolated) color it doesn't make sense to forward multiple colors. The <var>fColor</var> vector is thus not an array, but a single vector. When emitting a vertex, that vertex takes whatever value was last written to <var>fColor</var> as its output value. For the houses, we can fill <var>fColor</var> once with the color from the vertex shader before the first vertex is emitted to color the entire house:
    361 </p>
    362 
    363 <pre><code>
    364 fColor = gs_in[0].color; // gs_in[0] since there's only one input vertex
    365 gl_Position = position + vec4(-0.2, -0.2, 0.0, 0.0);    // 1:bottom-left   
    366 EmitVertex();   
    367 gl_Position = position + vec4( 0.2, -0.2, 0.0, 0.0);    // 2:bottom-right
    368 EmitVertex();
    369 gl_Position = position + vec4(-0.2,  0.2, 0.0, 0.0);    // 3:top-left
    370 EmitVertex();
    371 gl_Position = position + vec4( 0.2,  0.2, 0.0, 0.0);    // 4:top-right
    372 EmitVertex();
    373 gl_Position = position + vec4( 0.0,  0.4, 0.0, 0.0);    // 5:top
    374 EmitVertex();
    375 EndPrimitive();  
    376 </code></pre>
    377 
    378 <p>
    379   All the emitted vertices will have the last stored value in <var>fColor</var> embedded into their data, which is equal to the input vertex's color as we defined in its attributes. All the houses will now have a color of their own:
    380 </p>
    381 
    382 <img src="/img/advanced/geometry_shader_houses_colored.png" class="clean" alt="Colored houses, generating using points with geometry shaders in OpenGL"/>
    383 
    384 <p>
    385   Just for fun we could also pretend it's winter and give their roofs a little snow by giving the last vertex a color of its own: 
    386 </p>
    387 
    388 <pre><code>
    389 fColor = gs_in[0].color; 
    390 gl_Position = position + vec4(-0.2, -0.2, 0.0, 0.0);    // 1:bottom-left   
    391 EmitVertex();   
    392 gl_Position = position + vec4( 0.2, -0.2, 0.0, 0.0);    // 2:bottom-right
    393 EmitVertex();
    394 gl_Position = position + vec4(-0.2,  0.2, 0.0, 0.0);    // 3:top-left
    395 EmitVertex();
    396 gl_Position = position + vec4( 0.2,  0.2, 0.0, 0.0);    // 4:top-right
    397 EmitVertex();
    398 gl_Position = position + vec4( 0.0,  0.4, 0.0, 0.0);    // 5:top
    399 fColor = vec3(1.0, 1.0, 1.0);
    400 EmitVertex();
    401 EndPrimitive();  
    402 </code></pre>
    403 
    404 <p>
    405   The result now looks something like this:
    406 </p>
    407 
    408 <img src="/img/advanced/geometry_shader_houses_snow.png" class="clean" alt="Snow-colored houses, generating using points with geometry shaders in OpenGL"/>
    409 
    410 <p>
    411   You can compare your source code with the OpenGL code <a href="/code_viewer_gh.php?code=src/4.advanced_opengl/9.1.geometry_shader_houses/geometry_shader_houses.cpp" target="_blank">here</a>.
    412 </p>
    413 
    414 <p>
    415   You can see that with geometry shaders you can get pretty creative, even with the simplest primitives. Because the shapes are generated dynamically on the ultra-fast hardware of your GPU this can be a lot more powerful than defining these shapes yourself within vertex buffers. Geometry shaders are a great tool for simple (often-repeating) shapes, like cubes in a voxel world or grass leaves on a large outdoor field.
    416 </p>
    417 
<h2>Exploding objects</h2>
    419 <p>
  While drawing houses is fun and all, it's not something we're going to use that much. That's why we're now going to take it up a notch and explode objects! That is probably not something we're going to use that much either, but it's definitely fun to do!
    421 </p>
    422 
    423 <p>
  When we say <em>exploding</em> an object we're not actually going to blow up our precious bundled sets of vertices, but we're going to move each triangle along the direction of its normal vector over a small period of time. The effect is that the entire object's triangles seem to <em>explode</em>. The effect of exploding triangles on the backpack model looks a bit like this:
    425 </p>
    426 
    427 <img src="/img/advanced/geometry_shader_explosion.png" class="clean" alt="Explosion effect with geometry shaders in OpenGL"/>
    428 
    429 <p>
    430   The great thing about such a geometry shader effect is that it works on all objects, regardless of their complexity.
    431 </p>
    432 
    433 <p>
  Because we're going to translate each vertex in the direction of the triangle's normal vector we first need to calculate this normal vector. What we need to do is calculate a vector that is perpendicular to the surface of the triangle, using just the 3 vertices we have access to. You may remember from the <a href="https://learnopengl.com/Getting-started/Transformations" target="_blank">transformations</a> chapter that we can obtain a vector perpendicular to two other vectors using the <def>cross product</def>. If we take two vectors <var>a</var> and <var>b</var> that are parallel to the surface of the triangle, we can compute its normal vector by taking their cross product. The following geometry shader function does exactly this to retrieve the normal vector from the 3 input vertex coordinates:
    435 </p>
    436 
    437 <pre><code>
    438 vec3 GetNormal()
    439 {
    440    vec3 a = vec3(gl_in[0].gl_Position) - vec3(gl_in[1].gl_Position);
    441    vec3 b = vec3(gl_in[2].gl_Position) - vec3(gl_in[1].gl_Position);
    442    return normalize(cross(a, b));
    443 }  
    444 </code></pre>
    445 
    446 <p>
  Here we retrieve two vectors <var>a</var> and <var>b</var> that are parallel to the surface of the triangle using vector subtraction. Subtracting two position vectors results in the difference vector between them, and since all 3 points lie on the triangle's plane, any such difference vector is parallel to the plane. Do note that if we switched <var>a</var> and <var>b</var> in the <fun>cross</fun> function we'd get a normal vector that points in the opposite direction - order is important here! Crossing (1,0,0) with (0,1,0), for example, gives (0,0,1), while crossing them in the reverse order gives (0,0,-1).
    448 </p>
    449 
    450 <p>
    451   Now that we know how to calculate a normal vector we can create an <fun>explode</fun> function that takes this normal vector along with a vertex position vector. The function returns a new vector that translates the position vector along the direction of the normal vector:
    452 </p>
    453 
    454 <pre><code>
    455 vec4 explode(vec4 position, vec3 normal)
    456 {
    457     float magnitude = 2.0;
    458     vec3 direction = normal * ((sin(time) + 1.0) / 2.0) * magnitude; 
    459     return position + vec4(direction, 0.0);
    460 } 
    461 </code></pre>
    462 
    463 <p>
  The function itself shouldn't be too complicated. The <fun>sin</fun> function receives a <var>time</var> uniform variable as its argument that, based on the time, returns a value between <code>-1.0</code> and <code>1.0</code>. Because we don't want to <em>implode</em> the object we transform the sin value to the <code>[0,1]</code> range: <code>(sin(time) + 1.0) / 2.0</code> equals <code>0.0</code> when the sine is at its lowest and <code>1.0</code> when it is at its highest. The resulting value is used to scale the <var>normal</var> vector and the resulting <var>direction</var> vector is added to the position vector.
    465 </p>
    466 
    467 <p>
    468   The complete geometry shader for the <def>explode</def> effect, while drawing a model loaded using our <a href="https://learnopengl.com/Model-Loading/Assimp" target="_blank">model loader</a>, looks a bit like this:
    469 </p>
    470 
    471 <pre><code>
    472 #version 330 core
    473 layout (triangles) in;
    474 layout (triangle_strip, max_vertices = 3) out;
    475 
    476 in VS_OUT {
    477     vec2 texCoords;
    478 } gs_in[];
    479 
    480 out vec2 TexCoords; 
    481 
    482 uniform float time;
    483 
    484 vec4 explode(vec4 position, vec3 normal) { ... }
    485 
    486 vec3 GetNormal() { ... }
    487 
    488 void main() {    
    489     vec3 normal = GetNormal();
    490 
    491     gl_Position = explode(gl_in[0].gl_Position, normal);
    492     TexCoords = gs_in[0].texCoords;
    493     EmitVertex();
    494     gl_Position = explode(gl_in[1].gl_Position, normal);
    495     TexCoords = gs_in[1].texCoords;
    496     EmitVertex();
    497     gl_Position = explode(gl_in[2].gl_Position, normal);
    498     TexCoords = gs_in[2].texCoords;
    499     EmitVertex();
    500     EndPrimitive();
    501 }  
    502 </code></pre>
    503 
    504 <p>
    505   Note that we're also outputting the appropriate texture coordinates before emitting a vertex. 
    506 </p>
    507 
    508 <p>
    509   Also don't forget to actually set the <var>time</var> uniform in your OpenGL code:
    510 </p>
    511 
    512 <pre><code>
    513 shader.setFloat("time", <function id='47'>glfwGetTime</function>());  
    514 </code></pre>
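<p>
  In context this call would sit in the render loop together with the usual matrix uniforms. A sketch, assuming the <fun>Shader</fun> and <fun>Model</fun> helper classes from earlier chapters (the variable names here are our own):
</p>

<pre class="cpp"><code>
// inside the render loop
shader.use();
shader.setMat4("projection", projection);
shader.setMat4("view", view);
shader.setMat4("model", model);
shader.setFloat("time", glfwGetTime());
backpack.Draw(shader);
</code></pre>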
    515 
    516 <p>
    517   The result is a 3D model that seems to continually explode its vertices over time after which it returns to normal again. Although not exactly super useful, it does show you a more advanced use of the geometry shader. You can compare your source code with the complete source code <a href="/code_viewer_gh.php?code=src/4.advanced_opengl/9.2.geometry_shader_exploding/geometry_shader_exploding.cpp" target="_blank">here</a>.
    518 </p>
    519 
<h2>Visualizing normal vectors</h2>
    521 <p>
  To shake things up we're now going to discuss an example of using the geometry shader that is actually useful: visualizing the normal vectors of any object. When programming lighting shaders you will eventually run into weird visual outputs of which the cause is hard to determine. A common cause of lighting errors is incorrect normal vectors, whether caused by incorrectly loading vertex data, improperly specifying them as vertex attributes, or by incorrectly managing them in the shaders. What we want is some way to detect if the normal vectors we supplied are correct, and a great way to do that is by visualizing them. It just so happens that the geometry shader is an extremely useful tool for this purpose.
    523 </p>
    524 
    525 <p>
  The idea is as follows: we first draw the scene as normal without a geometry shader, and then we draw the scene a second time, but this time only displaying normal vectors that we generate via a geometry shader. The geometry shader takes a triangle primitive as input and generates 3 lines from it - one line along the normal vector of each vertex. In code it'll look something like this:
    527 </p>
    528 
    529 <pre><code>
    530 shader.use();
    531 DrawScene();
    532 normalDisplayShader.use();
    533 DrawScene();
    534 </code></pre>
    535 
    536 <p>
  This time we're creating a geometry shader that uses the vertex normals supplied by the model instead of generating them ourselves. To accommodate scaling and rotations (due to the view and model matrix) we'll transform the normals with a normal matrix. The geometry shader receives its position vectors as view-space coordinates so we should also transform the normal vectors to the same space. This can all be done in the vertex shader:
    538 </p>
    539 
    540 <pre><code>
    541 #version 330 core
    542 layout (location = 0) in vec3 aPos;
    543 layout (location = 1) in vec3 aNormal;
    544 
    545 out VS_OUT {
    546     vec3 normal;
    547 } vs_out;
    548 
    549 uniform mat4 view;
    550 uniform mat4 model;
    551 
    552 void main()
    553 {
    554     gl_Position = view * model * vec4(aPos, 1.0); 
    555     mat3 normalMatrix = mat3(transpose(inverse(view * model)));
    vs_out.normal = normalize(normalMatrix * aNormal);
    557 }
    558 </code></pre>
    559 
    560 <p>
  The transformed view-space normal vector is then passed to the next shader stage via an interface block. The geometry shader then takes each vertex (with a position and a normal vector) and draws a line along the normal vector starting from each position vector:
    562 </p>
    563 
    564 <pre><code>
    565 #version 330 core
    566 layout (triangles) in;
    567 layout (line_strip, max_vertices = 6) out;
    568 
    569 in VS_OUT {
    570     vec3 normal;
    571 } gs_in[];
    572 
    573 const float MAGNITUDE = 0.4;
    574   
    575 uniform mat4 projection;
    576 
    577 void GenerateLine(int index)
    578 {
    579     gl_Position = projection * gl_in[index].gl_Position;
    580     EmitVertex();
    581     gl_Position = projection * (gl_in[index].gl_Position + 
    582                                 vec4(gs_in[index].normal, 0.0) * MAGNITUDE);
    583     EmitVertex();
    584     EndPrimitive();
    585 }
    586 
    587 void main()
    588 {
    589     GenerateLine(0); // first vertex normal
    590     GenerateLine(1); // second vertex normal
    591     GenerateLine(2); // third vertex normal
    592 }  
    593 </code></pre>
    594 
    595 <p>
  The contents of geometry shaders like these should be self-explanatory by now. Note that we're multiplying the normal vector by a <var>MAGNITUDE</var> constant to restrain the size of the displayed normal vectors (otherwise they'd be a bit too large). 
    597 </p>
    598 
    599 <p>
  Since visualizing normals is mostly used for debugging purposes we can just display them as mono-colored lines (or super-fancy lines if you feel like it) with the help of the fragment shader:
    601 </p>
    602 
    603 <pre><code>
    604 #version 330 core
    605 out vec4 FragColor;
    606 
    607 void main()
    608 {
    609     FragColor = vec4(1.0, 1.0, 0.0, 1.0);
    610 }  
    611 </code></pre>
    612 
    613 <p>
  Now, rendering your model with the normal shaders first and then with the special <em>normal-visualizing</em> shader, you'll see something like this:
    615 </p>
    616 
    617 <img src="/img/advanced/geometry_shader_normals.png" class="clean" alt="Image of geometry shader displaying normal vectors in OpenGL"/>
    618 
    619 <p>
    620   Apart from the fact that our backpack now looks a bit hairy, it gives us a really useful method for determining if the normal vectors of a model are indeed correct. You can imagine that geometry shaders like this could also be used for adding <def>fur</def> to objects.
    621 </p>
    622 
    623 <p>
  You can find the full source code <a href="/code_viewer_gh.php?code=src/4.advanced_opengl/9.3.geometry_shader_normals/normal_visualization.cpp" target="_blank">here</a>.
    625 </p>       
    626 
    627     </div>