Lua Game Development Cookbook - Sample Chapter
This book will guide you through each part of building your game engine and will help you understand how computer games are built. The book starts with simple game concepts used mainly in 2D side-scroller games and moves on to advanced 3D games. In addition, the scripting capabilities of the Lua language give you full control over your game.
By the end of this book, you will have learned all about the components that go into a game, created a game, and solved the problems that may arise along the way.
Use GLSL shaders
Use lighting and graphical effects
Create animated game characters using the Box2D library
Load and use textures, fonts, and 3D models
Design and implement a graphical user interface
Use algorithms for pathfinding
Implement networking support
The Lua language allows developers to create everything from simple to advanced applications and to create the games they want. Creating a good game is an art, and using the right tools and knowledge is essential in making game development easier.

Mário Kašuba
Preface
Game development is one of the most complex processes in the world as it requires a wide
set of skills such as programming, math, physics, art, sound engineering, management,
marketing, and many more. Even with modern technologies, it may take anywhere from a few hours to several years to create a game, depending on the game's complexity and the tools available.
Computer games are usually based on a mix of simple concepts, which are turned into an
enjoyable experience. The first step in making a good game is a game prototype, which can be made with the help of various game engines. However, learning how to use a game engine to its full extent may require you to study how it actually works; until then, you have to rely on the documentation and features that the game engine provides. Many game engines
today provide a scripting language as a tool to implement certain game mechanics or to
extend the game engine itself with new features.
The Lua programming language is gaining popularity in the game industry mainly due to its
simplicity and efficiency. Most of the time, it's used only for simple tasks such as NPC dialogs,
user interface, or custom game events. However, with additional Lua modules, you can create
your own full-fledged game engine that can use almost all the capabilities of the modern
computer hardware.
In this book, you'll find a set of recipes with solutions to the most common problems you may
encounter while creating games with the Lua language.
The best way to learn something is to play with it. Therefore, each recipe is paired with
simple demo applications that will help you understand the topic covered. You may even
use these demo samples to create your own game prototype in no time.
All sample applications are available in the digital content of this book.
Graphics – Modern Method with OpenGL 3.0+
This chapter will cover the following recipes:
Rendering to texture
Bumpmapping
Introduction
This chapter will deal with programming and using the dynamic rendering pipeline in OpenGL.
While shaders have been available since OpenGL 2.0, their first versions are now considered
deprecated. A wide variety of graphics cards now support at least OpenGL 3.3, which implements
the currently valid specification of GLSL shaders. This chapter will focus on GLSL version 3.3,
which corresponds to OpenGL 3.3.
Vertex shader: This performs operations on vertex attributes such as the vertex color, position,
normal vector, and many others
Only the vertex and fragment shaders are mandatory for basic rendering operations.
The following diagram shows the complete rendering pipeline:
[Diagram: the OpenGL rendering pipeline – Vertex Shader, Tessellation Control Shader, Tessellator, Tessellation Evaluation Shader, Geometry Shader, Transform Feedback, Clipping, Rasterization, Fragment Shader, fragment tests, framebuffer blending and logic, write masking, and the final write to the framebuffer.]
The red parts are mandatory shaders; the optional shaders are in orange. The blue and white parts represent steps that aren't fully controllable by the user.
Getting ready
Before using GLSL shaders, you should always check whether the current graphics card
supports them. For this, you can use the gl.IsSupported function. It accepts one string
parameter that consists of the OpenGL extension names and version names. For example,
the following code tests whether there is support for OpenGL 3.0, vertex and fragment
shaders in the current system:
assert(gl.IsSupported("GL_VERSION_3_0 GL_ARB_vertex_shader GL_ARB_fragment_shader"))
Each string part is delimited with one space and always starts with the GL_ prefix. After this check, you can safely use GLSL shaders or any other extension. Otherwise, you might end up with a memory access violation or a segmentation fault, because the required functions aren't available.
A list of valid extension names can be found at http://glew.sourceforge.net/glew.html.
This vertex shader uses GLSL version 3.3 and does basic preparation of vertex attributes for
the next stage.
How to do it
GLSL shaders and programs use special OpenGL objects. These must be created before use. You can create a shader object with the gl.CreateShader function. It accepts the shader stage identifier and returns a numerical object identifier. Let's assume that this shader object identifier is stored in the shader_object variable, as in the following code:
local shader_stage = gl_enum.GL_VERTEX_SHADER
local shader_object = gl.CreateShader(shader_stage)
Now you can use this shader object to load your shader's source code:
gl.ShaderSource(shader_object, shader_source)
After this step, you can compile the shader with the gl.CompileShader function. You can
check the shader compilation status with this code:
local compilation_status = ""
local status = gl.GetShaderiv(shader_object, gl_enum.GL_COMPILE_STATUS)
if status == gl_enum.GL_FALSE then
  compilation_status = gl.GetShaderInfoLog(shader_object)
end
The status variable contains a numerical value, which is set to GL_TRUE if the compilation
is successful. Otherwise, it's set to GL_FALSE and you can obtain the textual error message
with the gl.GetShaderInfoLog function.
After successful compilation, you can link shader objects into a shader program, but first you must create one with the gl.CreateProgram function. It returns a numerical identifier for the shader program. Let's store this value in the shader_program variable as shown in the following code:
local shader_program = gl.CreateProgram()
Now you can attach the shader objects into the shader program with the following command:
gl.AttachShader(shader_program, shader_object)
With this step done, you can finally link shaders into the program with the command:
gl.LinkProgram(shader_program)
You should always check for the last linking operation status with the following code:
local link_status = ""
local status = gl.GetProgramiv(shader_program, gl_enum.GL_LINK_STATUS)
if status == gl_enum.GL_FALSE then
  link_status = gl.GetProgramInfoLog(shader_program)
end
After the shader program is linked, the shader objects are not needed anymore and you can
safely delete them with:
gl.DeleteShader(shader_object)
If there's no need for the shader program, you can delete it with the following code:
gl.DeleteProgram(shader_program)
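For convenience, all of these steps can be wrapped into a single helper function. The following is only a sketch built from the LuaGL calls shown above:
-- compiles both shader stages and links them into a new shader program
local function buildProgram(vertex_source, fragment_source)
  local function compile(stage, source)
    local shader_object = gl.CreateShader(stage)
    gl.ShaderSource(shader_object, source)
    gl.CompileShader(shader_object)
    if gl.GetShaderiv(shader_object, gl_enum.GL_COMPILE_STATUS) == gl_enum.GL_FALSE then
      error(gl.GetShaderInfoLog(shader_object))
    end
    return shader_object
  end
  local vertex_shader = compile(gl_enum.GL_VERTEX_SHADER, vertex_source)
  local fragment_shader = compile(gl_enum.GL_FRAGMENT_SHADER, fragment_source)
  local shader_program = gl.CreateProgram()
  gl.AttachShader(shader_program, vertex_shader)
  gl.AttachShader(shader_program, fragment_shader)
  gl.LinkProgram(shader_program)
  if gl.GetProgramiv(shader_program, gl_enum.GL_LINK_STATUS) == gl_enum.GL_FALSE then
    error(gl.GetProgramInfoLog(shader_program))
  end
  -- shader objects are no longer needed once the program is linked
  gl.DeleteShader(vertex_shader)
  gl.DeleteShader(fragment_shader)
  return shader_program
end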
How it works
The GLSL shader loading process consists of two steps. The first step is the shader stage
compilation into the shader object. It works in a similar fashion as in a C compiler, where the
source code is compiled into binary object files. The compilation is followed by the linking
process. Shader objects are linked into one shader program. This presents the final result of
the GLSL shader preparation process. Of course, your application might contain more than
one shader program, and you can switch between them. On some rare occasions, it's better to merge several shaders into one and separate them with conditional blocks. This approach introduces additional overhead to the shader code, especially in the fragment shader, but it might still be cheaper than switching shaders. There's no general rule for this, so you'll need to experiment.
When you're writing your own shaders, you should always take into account the number of shader invocations for each element. For instance, the vertex shader runs once for every vertex, whereas the fragment shader almost always runs many more times, as it operates on fragment elements. You can think of fragments as pixels in the frame buffer. So, whenever
you're writing a program for the fragment shader, try to think about implementing it in the
vertex shader first. This way you can further optimize your shaders, especially if you intend
to use them in an application on mobile devices.
See also
Getting ready
Each uniform variable has its own numerical location identifier. This identifier is used to access almost any uniform variable. Access through the location identifier is limited to primitive values such as integers, floats, and vectors. Matrices present a special case: you can upload the whole matrix in one step, but you can retrieve only one element from the shader program at a time. You can obtain a uniform variable location with the gl.GetUniformLocation function. There are three ways to use this function:
Let's assume that shader_program is the valid identifier for the shader program. This function
returns the location identifier of the specified uniform variable. If such a variable doesn't exist
in the shader program or is discarded in the process of compilation, the returned value is -1.
The uniform variable is discarded if it isn't actively used in the shader program.
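For instance, assuming the shader program declares a uniform variable named diffuseTexture, you could obtain and validate its location like this:
local location = gl.GetUniformLocation(shader_program, "diffuseTexture")
if location == -1 then
  error("Uniform variable 'diffuseTexture' was not found or was optimized out")
end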
How to do it
Now that you've got the location of the uniform variable, you can either set the content of the
uniform variable or obtain its value.
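For a simple value, you can set the uniform variable directly with one of the gl.Uniform* functions; the following sketch assumes a vec4 uniform and that the binding accepts a Lua table with the vector components, as the setUniform helper later in this chapter does:
-- upload a vec4 value (for example, a light position) to the uniform variable
gl.Uniformf(location, {-1, 0, -1, 1})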
Setting up matrices is a bit more difficult. Matrix values have to be stored in a flat Lua table. Matrix sizes can vary from 2 x 2 to 4 x 4 elements. You can also let the gl.UniformMatrix function transpose your matrix, which means that matrix rows will be swapped with matrix columns. This is useful if you're supplying matrices that consist of multiple vectors. The following example shows how to upload a whole matrix of size 4 x 4:
local x, y, z = 1, 2, 3
local translation = {
  1, 0, 0, x,
  0, 1, 0, y,
  0, 0, 1, z,
  0, 0, 0, 1,
}
local rows, columns = 4, 4
local transpose = false
gl.UniformMatrix(location, translation, rows, columns, transpose)
Function            Return type
gl.GetUniformi      Integer
gl.GetUniformui     Unsigned integer
gl.GetUniformf      Float
gl.GetUniformd      Double
For example, if you'd want to obtain a 3D vector from the shader program, you'd use the
following code:
local x,y,z = gl.GetUniformf(shader_program, location)
How it works
Uniform variables are available to all parts of the shader program. For instance, you can access the same uniform variable from the vertex and fragment shaders. You should always try to minimize the number of uniform variable updates. Every update consumes a small amount of bandwidth between CPU memory and GPU memory.
Getting ready
This recipe will use the GLSL shading language with version 3.3. It assumes that all the
vertices are stored in Vertex Buffer Object (VBO). The vertex shader program is applied
on every vertex that is contained within VBO.
To prepare the vertex shader, you'll need to create the shader object first:
local shader_stage = gl_enum.GL_VERTEX_SHADER
local shader_object = gl.CreateShader(shader_stage)
How to do it
The shader program's code can be stored in a text file, or you can submit it directly as a string value. This recipe will use the latter method. The following source code defines a basic vertex shader:
local shader_source = [[
//Requires GLSL 3.3 at least
#version 330

//Input variables - vertex attributes
layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec4 VertexColor;
layout (location = 2) in vec2 VertexTexCoord;
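//NOTE: the remainder of this listing is a sketch reconstructed from the
//interface block and main function discussed in the following sections
//Output interface block passed to the next shader stage
out VertexData {
  vec4 Color;
  vec2 TexCoord;
} outData;

//transformation matrix supplied by the application as a uniform variable
uniform mat4 matrix;

void main() {
  outData.Color = VertexColor;
  outData.TexCoord = VertexTexCoord;
  gl_Position = matrix * vec4(VertexPosition.xyz, 1.0);
}
]]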
Now you can load and compile this source code into the shader object:
gl.ShaderSource(shader_object, shader_source)
gl.CompileShader(shader_object)
Be sure to always check the compilation status. The production version of the game should use at least some kind of message logging mechanism so that you can store error messages in a bug report file, which is always handy. In order to store the messages, use the following code:
local status = gl.GetShaderiv(shader_object, gl_enum.GL_COMPILE_STATUS)
if status == gl_enum.GL_FALSE then
  local compilation_status = gl.GetShaderInfoLog(shader_object)
  error("Vertex Shader compilation failed: "..compilation_status)
end
After these steps, you can finally link the vertex shader with the shader program.
How it works
It's recommended to specify the required shader specification version at the beginning of the shader source code. This is done with a preprocessor macro:
#version VERSION_NUMBER
The version number is always in the form of three digits. For example, for GLSL version 1.5, you would use the number 150. The good thing is that OpenGL shaders are backwards compatible, so you can use older GLSL specifications even on newer graphics cards.
The input variables for the vertex shader can have two forms. You can use either uniform variables or vertex attributes stored in a VBO. This recipe uses vertex attributes with a layout specification. Each vertex attribute's layout location is an attribute index that is associated with a VBO, so the GLSL shader knows which VBO to read from:
layout (location = 0) in vec3 VertexPosition;
Optionally, layouts can be set explicitly from Lua with the following code:
local attribute_name = "VertexPosition"
gl.BindAttribLocation(shader_program, layout_index, attribute_name)
The vertex shader has to pass results to the next stage. The output variables can be specified
in two ways. The first one uses direct output variable specification:
out vec4 VertexColor;
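The second way uses an interface block. In this case it might look like the following, matching the VertexData block consumed by the fragment shader later in this chapter:
out VertexData {
  vec4 Color;
  vec2 TexCoord;
} outData;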
Interface blocks are shared between shader stages. However, this works only if both stages declare the same block name, the same variable names, and the same order and types. Notice that the interface block name VertexData is specified right after the out qualifier. The local interface name outData is valid only in the local context. You can refer to these variables as if you were using C structures. Therefore, to set the vertex color, you would use the following code:
outData.Color = vec4(...);
You may also omit the local interface name. In that case, you can refer to the interface
variables in this fashion:
Color = vec4(...);
The last and most important part of the vertex shader is the main function. This sample does a simple matrix transformation on the vertex position:
gl_Position = matrix * vec4(VertexPosition.xyz, 1.0);
There's more
The vector data type in GLSL can contain 2, 3, or 4 components. As you've already seen, components are accessed by their names x, y, z, and w. This is also called swizzling, because you can use any combination of components as long as you maintain the correct output data type. Therefore, the following code is completely valid:
vec2 vector1;
vec3 vector2 = vector1.xxy;
vec4 vector3 = vector2.zzyx;
vec4 vector4 = vector1.xxxx;
You can use swizzling even on the left side (also known as l-value) of the value assignment:
vec4 vector1;
vector1.xz = vec2(1.0, 2.0);
Alternatively, you can use color component names r, g, b, and a; or even texture coordinate
names s, t, p, and q.
See also
Getting ready
The preparation of the fragment shader is fairly similar to the preparation of the vertex shader:
local shader_stage = gl_enum.GL_FRAGMENT_SHADER
local shader_object = gl.CreateShader(shader_stage)
This will create the shader object, which you can use to load and compile the shader
source code.
How to do it
This recipe will use the shader code stored in a string variable:
#version 330
in VertexData {
vec4 Color;
vec2 TexCoord;
} inData;
uniform sampler2D texID;
uniform int textured;
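//NOTE: the rest of this listing is a sketch based on the description
//in the How it works section below
layout(location = 0) out vec4 diffuseColor;

void main() {
  if (textured != 0) {
    //textured mode: combine the sampled texel with the interpolated vertex color
    vec4 texel = texture(texID, inData.TexCoord);
    diffuseColor = inData.Color * texel;
  } else {
    //untextured mode: use the interpolated vertex color only
    diffuseColor = inData.Color;
  }
}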
This fragment shader doesn't do anything special. It can draw a colored primitive on screen, where the vertex colors are automatically interpolated. Optionally, you can set the uniform variable textured to draw a textured primitive.
How it works
Firstly, you should always set the required GLSL version. It's considered good practice because this way you can safely expect and use certain features that are available from that version of GLSL. If the version is not supported on the system, the compilation process will fail, and therefore you can apply a fallback mechanism. To set the version, use the following code:
#version 330
Notice that the in VertexData block contains the same variables as the vertex shader's interface block. This block is used as data input; therefore, the in qualifier comes before the block name. Every variable inside this block is accessible via the local block name inData, so to access the vertex color, you'd use inData.Color. Another thing to mention is that these variables are linearly interpolated by default.
This shader makes use of uniform variables. The first one, called texID, points at one texture, which is two-dimensional in this case and uses float numbers. Therefore, it's defined to use the sampler2D type. As you already know, there are many types of textures. A list of the sampler types is shown in the following table:
Sampler type        OpenGL texture type              Description
gsampler1D          GL_TEXTURE_1D                    This is a 1D texture
gsampler2D          GL_TEXTURE_2D                    This is a 2D texture
gsampler3D          GL_TEXTURE_3D                    This is a 3D texture
gsamplerCube        GL_TEXTURE_CUBE_MAP              This is a cubemap texture
gsampler2DRect      GL_TEXTURE_RECTANGLE             This is a rectangle texture
gsampler1DArray     GL_TEXTURE_1D_ARRAY              This is a 1D array texture
gsampler2DArray     GL_TEXTURE_2D_ARRAY              This is a 2D array texture
gsamplerCubeArray   GL_TEXTURE_CUBE_MAP_ARRAY        This is a cubemap array texture
gsamplerBuffer      GL_TEXTURE_BUFFER                This is a buffer texture
gsampler2DMS        GL_TEXTURE_2D_MULTISAMPLE        This is a multisample texture
gsampler2DMSArray   GL_TEXTURE_2D_MULTISAMPLE_ARRAY  This is a multisample texture array
You may wonder why all sampler types have the prefix g. This prefix placeholder specifies the element data type. If you omit the prefix, GLSL assumes that the texture contains float values.

Prefix    Data type
(none)    float
i         int
u         unsigned int

You can omit the textured uniform variable if you don't need to control texturing in your fragment shader.
On certain occasions, you might want to use multiple outputs in the fragment shader. Each output variable must have its own location, which in turn can be used to bind the frame buffer attachment. This is often used to split the output into the color and the depth buffer.
As in the case of the vertex shader, the fragment shader also uses the main function. This function is divided into two modes of operation by the control variable textured. When texturing is enabled, you can access the texture elements (texels) in two ways. Either you use the normalized float texture coordinates that are within the range (0, 1), or you use the exact texture coordinates specified as integer offset values from the origin point. The first way is used more often, as you can directly use the texture coordinates produced by the vertex shader. With this method, you can also query subpixel values that are calculated with linear interpolation:
vec4 texel1 = texture(texID, inData.TexCoord);
The second method is more exact but you'll need to know the texture size in pixels:
vec4 texel0 = texelFetch(texID, tc, LOD);
The LOD, or Level of Detail, value is used in conjunction with mipmapping. It defines the mipmap level, where level 0 is the base texture. Be aware that the texelFetch function uses ivec texture coordinates with integer values. You can obtain the texture size with the textureSize function:
ivec2 texSize = textureSize(texID, LOD);
If you want to use the texture coordinates from the vertex shader with pixel perfect
coordinates, you can use the following code:
ivec2 tc = ivec2(inData.TexCoord * texSize);
It uses float number coordinates that are in the range (0, 1) and multiplies them with texture
dimensions. This will produce the vec2 type vector, which is not what you want to use in this
case. Therefore, you'll need to cast the vec2 vector into the ivec2 vector. All values in the
vector are truncated.
You can apply the texel value directly to the fragment shader output. Alternatively, you can combine it with the vertex color (inData.Color). This value is obtained from the vertex shader and it's the only output variable if texturing is turned off.
The following code contains a complete example of the simple fragment shader program that
fills the graphical primitive with the texture:
//this shader program requires at least OpenGL 3.3
#version 330

//diffuseTexture will contain the texture unit identifier (integer)
uniform sampler2D diffuseTexture;

//structure contains values from the previous stage (vertex shader)
//all values use linear interpolation by default
in VertexData {
  vec4 Color;    //vertex color value
  vec2 TexCoord; //texture coordinates
} inData;

//fragment shader output variable
layout(location = 0) out vec4 diffuseColor;

//main procedure will be called for each fragment
void main() {
  //texel will be filled with the color value from a texture
  vec4 texel = texture(diffuseTexture, inData.TexCoord);
  //texel value is multiplied with the vertex color in this case
  diffuseColor = inData.Color * texel;
}
Texture rendering can be controlled by setting vertex colors. The original form of the texture
will be rendered if you use white color on all vertices.
There's more
If you're using the depth or depth-stencil texture format, you'll need to use a special kind of sampler. These are called shadow samplers. The following table shows the list of shadow samplers depending on the OpenGL texture type:
Shadow sampler type
sampler1DShadow
sampler2DShadow
samplerCubeShadow
sampler2DRectShadow
sampler1DArrayShadow
sampler2DArrayShadow
samplerCubeArrayShadow
These textures use only float numbers that are in the range (0, 1).
See also
[Diagram: three possible vertex attribute layouts in VBOs – separate buffers per attribute, one buffer grouped by attribute type, and one buffer grouped by vertex.]
Vertex attributes can be stored in VBOs in one of these layouts:
Each vertex attribute has its own VBO: vertices, normal vectors, and vertex colors.
All the attributes are stored in one VBO, grouped by the attribute type.
All the attributes are stored in one VBO, grouped by the vertex.
Note that if you plan to update vertex attributes frequently, it's better to reserve a whole VBO for this purpose. This way, OpenGL can optimize memory access to vertex attributes.
Getting ready
This recipe will use data layout where each vertex attribute will use its own VBO. You'll be using
the vertex position, the texture coordinates, and the vertex color. Therefore, you'll need to create
three VBOs. You can create the vertex buffer objects with the gl.GenBuffers function:
local vertex_buffer_object = gl.GenBuffers(3)
It accepts one parameter that presents the number of vertex buffer objects to be created.
You'll also be using the vertex array object that specifies the vertex data layout and references
to all used VBOs. The vertex array object can be created using the gl.GenVertexArrays
function. This function accepts the number of vertex array objects to be reserved:
local vertex_array_object = gl.GenVertexArrays(1)
How to do it
You'll need one vertex buffer object for each vertex attribute. In this case, you'll be using
three vertex buffer objects for the vertex position, the vertex color, and the vertex texture
coordinates. Now, you can fill each one with the corresponding vertex data.
Vertex positions
We will use four vertices to draw the rectangular polygon. The following code will define the
vertex positions for one rectangle:
-- vertex positions are specified by X, Y pairs
local vertex_positions = {
  -1, -1,
   1, -1,
   1,  1,
  -1,  1,
}
gl.BindBuffer(gl_enum.GL_ARRAY_BUFFER, vertex_buffer_object[1])
gl.BufferData(gl_enum.GL_ARRAY_BUFFER, vertex_positions, gl_enum.GL_STATIC_DRAW)
Vertex colors
You can use this code to store the vertex colors:
-- vertex colors use RGBA quadruplets
local vertex_colors = {
  1, 0, 0, 1,
  0, 1, 0, 1,
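  -- NOTE: the values below are placeholders; the original listing continues
  -- with colors for the remaining two vertices and the buffer upload calls
  0, 0, 1, 1,
  1, 1, 1, 1,
}
gl.BindBuffer(gl_enum.GL_ARRAY_BUFFER, vertex_buffer_object[2])
gl.BufferData(gl_enum.GL_ARRAY_BUFFER, vertex_colors, gl_enum.GL_STATIC_DRAW)

-- texture coordinates use (s, t) pairs; these values are an assumption
local vertex_texture_coordinates = {
  0, 0,
  1, 0,
  1, 1,
  0, 1,
}
gl.BindBuffer(gl_enum.GL_ARRAY_BUFFER, vertex_buffer_object[3])
gl.BufferData(gl_enum.GL_ARRAY_BUFFER, vertex_texture_coordinates, gl_enum.GL_STATIC_DRAW)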
Now that you have data stored in VBOs, you'll have to bind them into VAO. The vertex
array object contains data layout information. For instance, if the vertex position consists
of three dimensions, each vertex will use three subsequent values from VBO that contains
vertex positions.
Before using the vertex array object, you'll need to bind it with the gl.BindVertexArray
function:
gl.BindVertexArray(vertex_array_object[1])
Another step is enabling and mapping vertex attributes to buffers. In this recipe, each
vertex contains three vertex attributes: the vertex position, the vertex color and the texture
coordinate. Each vertex attribute will use different attribute index. This index will correspond
to the location value in the shader source:
layout (location = 0) in vec3 VertexPosition;
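The mapping itself is done by binding each buffer and describing its layout with the gl.VertexAttribPointer function; the following sketch assumes the two-component positions, four-component colors, and two-component texture coordinates used in this recipe:
-- vertex positions: two components (x, y)
gl.BindBuffer(gl_enum.GL_ARRAY_BUFFER, vertex_buffer_object[1])
gl.VertexAttribPointer(0, 2, false, 0)
-- vertex colors: four components (r, g, b, a)
gl.BindBuffer(gl_enum.GL_ARRAY_BUFFER, vertex_buffer_object[2])
gl.VertexAttribPointer(1, 4, false, 0)
-- texture coordinates: two components (s, t)
gl.BindBuffer(gl_enum.GL_ARRAY_BUFFER, vertex_buffer_object[3])
gl.VertexAttribPointer(2, 2, false, 0)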
Notice that the vertex position is specified by two elements (x, y), the vertex color by four elements (r, g, b, a), and the texture coordinates by two elements (s, t), as in the sketch above.
The last thing you'll need to do before drawing is to enable the vertex attributes with the gl.EnableVertexAttribArray function:
gl.EnableVertexAttribArray(0)
gl.EnableVertexAttribArray(1)
gl.EnableVertexAttribArray(2)
Alternatively, you can disable certain vertex attributes with the gl.DisableVertexAttribArray function:
gl.DisableVertexAttribArray(attribute_index)
After all these steps, you are ready to use VBOs and VAO to efficiently draw vertices. Don't
forget to bind the currently used vertex array object before drawing. Otherwise, OpenGL
wouldn't know what data to use and you could get unpredictable results.
Vertices can be drawn by using the gl.DrawArrays function:
gl.DrawArrays(gl_enum.GL_QUADS, 0, 4)
The first parameter specifies which graphic primitive will be used. It uses the same constants as the gl.Begin function. The second parameter sets the vertex offset, and the last one is the number of vertices to be used.
How it works
Vertex buffer objects can contain arbitrary data. A vertex itself can use more than one vertex attribute, and attributes usually contain more than one element. For instance, the vertex position uses two coordinates in 2D space, but three coordinates in 3D space. OpenGL doesn't know how many coordinates you use for vertices, so vertex array objects are used to help with this issue. A vertex array object defines how to get the attributes for each vertex. Keep in mind that it contains only references to VBOs, so you'll need to keep those VBOs around.
LuaGL uses the float data type for VBO elements.
[Diagram: each vertex buffer object (VBO) feeds a vertex attribute location (0, 1, 2), which maps to a shader input variable declared with the matching layout (location = N) qualifier.]
There's more
A VBO presents common data storage. The available storage is limited and depends on the implementation and the current machine. Some parts can be cached in system RAM, while the currently used parts reside in graphics memory.
Another thing is that the gl.BufferData function reserves a certain amount of memory
to store data. You can use only a reserved range for data updates. There might be situations
where you know exactly how much storage you'll need, but you don't want to upload data right
away. For this case, you can use the gl.BufferData function, but instead of submitting data
in a Lua table, you'll be using elements count:
local element_count = 12
gl.BufferData(gl_enum.GL_ARRAY_BUFFER, element_count, gl_enum.GL_STATIC_DRAW)
This will reserve memory space for 12 elements, which you can update with the
gl.BufferSubData function:
local offset = 0
local data = {1,2,3,4}
gl.BufferSubData(gl_enum.GL_ARRAY_BUFFER, offset, data)
See also
Rendering to texture
The render-to-texture technique is used whenever you need to apply some kind of post-processing to the screen or to produce dynamic textures, for example for reflections.
Over the past few years, OpenGL has introduced a number of ways to obtain the screen content and transfer it to a texture. You could read directly from the frame buffer and store all the data in a texture with the gl.TexSubImage2D function. This approach is slow because all rendering must be stalled in order to obtain a copy of the whole frame. For this kind of operation, the P-buffer was introduced around 2000. It presented a more efficient way of transferring larger blocks of pixel data. However, this kind of buffer wasn't available everywhere and, what's more, it was hard to use. Later, it was deprecated in OpenGL 3.0 and subsequently removed from OpenGL 3.1. Currently, the standardized way of working with the frame buffer is to use render buffers. Render buffer objects have been available since OpenGL 3.0. They use a native pixel format, which makes them an optimized offscreen rendering target. The older technique used a texture as a target and performed a pixel format conversion on each update, which is slow.
This recipe will show you how to prepare and use a render buffer object.
Getting ready
You can attach render buffers to the various kinds of data that the frame buffer produces. A render buffer can store color data, depth information, or stencil data.
Each render buffer needs to know its dimensions. Let's assume that you have this information already, since you need an application window in order to display anything. The size of the application window is stored in the variables screen_width and screen_height.
How to do it
First, you'll need to create the frame buffer object or FBO:
local fbos = gl.GenFrameBuffers(1)
With this set, you can proceed to individual render buffers. This recipe will show you how to
create and use the render buffer for color data and depth information.
local render_buffers = gl.GenRenderBuffers(1)
local internal_format = gl_enum.GL_RGBA8
local rb_target = gl_enum.GL_RENDERBUFFER
local fb_target = gl_enum.GL_FRAMEBUFFER
local attachment = gl_enum.GL_COLOR_ATTACHMENT0
gl.BindRenderBuffer(rb_target, render_buffers[1])
gl.RenderBufferStorage(rb_target, internal_format, screen_width, screen_height)
gl.FramebufferRenderbuffer(fb_target, attachment, rb_target, render_buffers[1])
The render buffer for depth information is prepared in a similar way; it needs its own render buffer object and a depth internal format (GL_DEPTH_COMPONENT24 is assumed here):
local depth_render_buffers = gl.GenRenderBuffers(1)
local internal_format = gl_enum.GL_DEPTH_COMPONENT24
local fb_target = gl_enum.GL_FRAMEBUFFER
local attachment = gl_enum.GL_DEPTH_ATTACHMENT
gl.BindRenderBuffer(rb_target, depth_render_buffers[1])
gl.RenderBufferStorage(rb_target, internal_format, screen_width, screen_height)
gl.FramebufferRenderbuffer(fb_target, attachment, rb_target, depth_render_buffers[1])
You should always check that the frame buffer has been prepared properly:
local status = gl.CheckFramebufferStatus(gl_enum.GL_DRAW_FRAMEBUFFER)
if status ~= gl_enum.GL_FRAMEBUFFER_COMPLETE then
error('Frame buffer is not complete!')
end
After this step, you can switch rendering to this frame buffer with the gl.BindFramebuffer
function:
gl.BindFramebuffer(gl_enum.GL_FRAMEBUFFER, fbos[1])
Alternatively, you can turn off rendering to this frame buffer with the following code:
gl.BindFramebuffer(gl_enum.GL_FRAMEBUFFER, 0)
Optionally, you can copy the render buffer content into a texture (stored here in the screen_texture variable) with the gl.CopyImageSubData function:
local src_level = 0
local src_x, src_y, src_z = 0, 0, 0
local dest_level = 0
local dest_x, dest_y, dest_z = 0, 0, 0
local src_width, src_height = screen_width, screen_height
local src_depth = 1
gl.CopyImageSubData(
  render_buffers[1], gl_enum.GL_RENDERBUFFER, src_level, src_x, src_y, src_z,
  screen_texture, gl_enum.GL_TEXTURE_2D, dest_level, dest_x, dest_y, dest_z,
  src_width, src_height, src_depth
)
How it works
OpenGL, by default, uses its own frame buffer. A frame buffer represents an abstract structure that sets the output for color data, depth information, and other data. A render buffer, on the other hand, contains real data that has to be allocated in memory.
A render buffer uses a native data format. Therefore, its content can be drawn directly on screen. Optionally, the render buffer content can be copied into a texture, which performs a data format conversion. This approach is faster than rendering into a texture in every frame.
See also
Getting ready
Before starting, you'll need to set up the camera position, the object state in a scene, light sources, and materials. The camera position is stored in a structure, cameraState. It includes three matrices: position, rotation, and perspective correction. You could multiply these matrices into one, but keep in mind that not every matrix is updated every frame. What's more, the GPU can do matrix multiplication much faster than the CPU.
The object state is defined by object position. The position is computed from translation and
rotation matrices stored in the positionState structure.
Light sources use a structure, lightState, that stores all the needed information about
the light source such as light position, direction, attenuation, and spotlight parameters.
The scene uses ambient light color, sceneAmbient, to emulate global illumination.
The last thing you'll need to set up is material parameters stored in the materialState
structure.
You'll be setting uniform variables quite a lot, which means you'd be fetching a uniform variable location on every access. To make uniform variable manipulation easier, you can bundle these operations into one function that caches location identifiers in a table:
local uniformLocations = {}
local uniformTypeFn = {
f = gl.Uniformf, -- float number
d = gl.Uniformd, -- double float number
i = gl.Uniformi, -- integer number
ui = gl.Uniformui, -- unsigned integer number
m = gl.UniformMatrix, -- matrix
}
local function setUniform(var_type, name, ...)
  -- uniform variable location is cached to speed up the process
  local location = uniformLocations[name]
  if not location then
    location = gl.GetUniformLocation(shader_program, name)
    uniformLocations[name] = location
  end
  local uniformFn = uniformTypeFn[var_type]
  if type(uniformFn) == "function" then
    uniformFn(location, ...)
  end
end
How to do it
The first step is to supply the initial values to all uniform variables. This recipe will use one
positional light source that is placed just next to the camera. The scene object is positioned
in front of the camera:
-- camera parameters
setUniform('m', 'camera.translation', {
1,0,0,0,
0,1,0,0,
0,0,1,0,
0,0,0,1,
}, 4, 4, true)
setUniform('m', 'camera.rotation', {
1,0,0,0,
0,1,0,0,
0,0,1,0,
0,0,0,1,
}, 4, 4, true)
setUniform('m', 'camera.perspective', projectionMatrix(60, 1, 1, 10), 4, 4, true)
-- object parameters
setUniform('m', 'object.translation', {
1,0,0,-0.5,
0,1,0,-0.5,
0,0,1,-0.5,
0,0,0,1,
}, 4, 4, true)
setUniform('m', 'object.rotation', {
1,0,0,0,
0,1,0,0,
0,0,1,0,
0,0,0,1,
}, 4, 4, true)
-- light parameters
setUniform('f', 'lights[0].position', {-1, 0, -1, 1})
setUniform('f', 'lights[0].diffuse', {1, 0.8, 0.8, 1})
setUniform('f', 'lights[0].specular', {1, 1, 1, 1})
setUniform('f', 'lights[0].spotCutoff', 180.0)
setUniform('f', 'lights[0].spotExponent', 1.2)
setUniform('f', 'lights[0].constantAttenuation', 0)
-- material parameters
setUniform('f', 'material.ambient', {0.2, 0.2, 0.2, 1})
setUniform('f', 'material.diffuse', {1, 1, 1, 1})
setUniform('f', 'material.specular', {1, 1, 1, 1})
setUniform('f', 'material.shininess', 5.0)
-- scene ambient color
setUniform('f', 'sceneAmbient', {0.2, 0.2, 0.2, 1})
-- textures
setUniform('i', 'diffuseTexture', 0)
The next important thing is having the correct vertex attributes. You'll need the vertex position, the vertex texture coordinates, and the vertex normal vector. Therefore, you'll need three vertex buffer objects, one for each vertex attribute:
local positionVBO = gl.GenBuffers(1)
local texcoordVBO = gl.GenBuffers(1)
local normalVBO = gl.GenBuffers(1)
local vertex_array_object = gl.GenVertexArrays(1)

-- vertex coordinates
gl.BindBuffer(gl_enum.GL_ARRAY_BUFFER, positionVBO)
gl.BufferData(gl_enum.GL_ARRAY_BUFFER, vertexPositions, gl_enum.GL_STATIC_DRAW)
-- texture coordinates
gl.BindBuffer(gl_enum.GL_ARRAY_BUFFER, texcoordVBO)
gl.BufferData(gl_enum.GL_ARRAY_BUFFER, texcoords, gl_enum.GL_STATIC_DRAW)
-- normal vector coordinates
gl.BindBuffer(gl_enum.GL_ARRAY_BUFFER, normalVBO)
gl.BufferData(gl_enum.GL_ARRAY_BUFFER, normals, gl_enum.GL_STATIC_DRAW)
gl.BindBuffer(gl_enum.GL_ARRAY_BUFFER, positionVBO)
gl.VertexAttribPointer(0, 3, false, 0)
-- vertex texture coordinates
gl.BindBuffer(gl_enum.GL_ARRAY_BUFFER, texcoordVBO)
gl.VertexAttribPointer(1, 2, false, 0)
-- vertex normal vector
gl.BindBuffer(gl_enum.GL_ARRAY_BUFFER, normalVBO)
gl.VertexAttribPointer(2, 3, false, 0)
Vertex shader
The vertex shader code would look like this:
#version 330

struct cameraState {
  mat4 perspective;
  mat4 translation;
  mat4 rotation;
};

struct positionState {
  mat4 translation;
  mat4 rotation;
};

layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec2 VertexTexCoord;
layout (location = 2) in vec3 VertexNormal;

out VertexData {
  vec2 texCoord;
  vec3 normal;
  vec3 position;
} outData;

uniform float time;
uniform cameraState camera;
uniform positionState object;

void main(){
  // model-view matrix
  mat4 objMatrix = (object.translation * object.rotation);
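  //NOTE: the rest of main() is a sketch; the exact transformation order is an
  //assumption based on the camera and object matrices declared above
  vec4 vertexPos = objMatrix * vec4(VertexPosition.xyz, 1.0);
  gl_Position = camera.perspective * camera.rotation * camera.translation * vertexPos;
  outData.texCoord = VertexTexCoord.st;
  outData.normal = normalize((objMatrix * vec4(VertexNormal.xyz, 0.0)).xyz);
  outData.position = vertexPos.xyz;
}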
Fragment shader
The fragment shader code would contain these definitions:
#version 330
// a structure for light parameters
struct lightState {
  vec4 position;
  vec4 diffuse;
  vec4 specular;
  float constantAttenuation, linearAttenuation, quadraticAttenuation;
  float spotCutoff, spotExponent;
  vec3 spotDirection;
};
// structure with material properties
struct materialState {
vec4 ambient;
vec4 diffuse;
vec4 specular;
float shininess;
};
// camera position and orientation matrices
struct cameraState{
mat4 perspective;
mat4 translation;
mat4 rotation;
};
// diffuseTexture contains texture unit identifier (integer)
uniform sampler2D diffuseTexture;
uniform cameraState camera;
uniform materialState material;
// ambient light color
uniform vec4 sceneAmbient;
//total number of lights, currently 8 is the maximum
uniform int totalLights;
uniform lightState lights[8];
in VertexData {
vec2 texCoord;
vec3 normal;
vec3 position;
} inData;
layout(location = 0) out vec4 diffuseColor;
The whole light reflection algorithm is packed into one function, processLighting. It accepts three parameters: the material parameters, the current point on the surface, and the normal vector. This makes the entire code much easier to read. Note that the processLighting function operates on voxels (points in space):
/*
Input:
material - material type specification
surface - voxel position in world space
normalDirection - normal vector for current voxel
*/
vec4 processLighting(in materialState material, in vec3 surface, in vec3 normalDirection){
// camera position in world space
vec4 cam = camera.translation * vec4(0,0,0,1);
// directional vector from the surface to the camera
// it's used primarily to determine highlights
vec3 camDirection = normalize(cam.xyz - surface);
vec3 lightDirection;
float attenuation;
// ambient light
vec3 ambientLighting = sceneAmbient.rgb * material.ambient.rgb;
vec3 totalLighting = ambientLighting;
// iterate over all lights on the scene
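  //NOTE: a sketch of the start of the per-light loop, following the attenuation
  //and diffuse formulas explained in the How it works section
  for (int i = 0; i < totalLights; i++) {
    lightState light = lights[i];
    if (light.position.w == 0.0) {
      //directional light: no attenuation
      attenuation = 1.0;
      lightDirection = normalize(light.position.xyz);
    } else {
      //positional light: attenuation depends on the distance to the surface
      vec3 toLight = light.position.xyz - surface;
      float lightDistance = length(toLight);
      lightDirection = normalize(toLight);
      attenuation = 1.0 / (light.constantAttenuation
        + light.linearAttenuation * lightDistance
        + light.quadraticAttenuation * lightDistance * lightDistance);
    }
    //diffuse reflection depends on the angle between the surface normal
    //and the light direction
    vec3 diffuseReflection = attenuation
      * light.diffuse.rgb * material.diffuse.rgb
      * max(0.0, dot(normalDirection, lightDirection));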
/*
Specular reflection is present only if the light ray
reflects almost directly to camera lenses.
*/
vec3 specularReflection;
// There's no specular reflection on the dark side
if (dot(normalDirection, lightDirection) < 0.0) {
specularReflection = vec3(0.0, 0.0, 0.0);
} else {
// Specular reflection
specularReflection = attenuation *
light.specular.rgb * material.specular.rgb *
pow(
max(0.0,
dot(reflect(-lightDirection, normalDirection),
camDirection)
),
material.shininess
);
}
// Add to total lighting contribution
totalLighting += diffuseReflection + specularReflection;
}
/*
Material transparency is controlled by alpha channel
of diffuse color
*/
return vec4(totalLighting, material.diffuse.a);
}
Now you can summarize everything in the main function for fragment shader.
void main() {
vec4 texel = texture(diffuseTexture, inData.texCoord.st);
materialState localMaterial = material;
// Texel color is directly applied to current diffuse color
localMaterial.diffuse *= texel;
// Compute output color for current voxel
diffuseColor = processLighting(
localMaterial,
inData.position,
normalize(inData.normal)
);
}
How it works
The total light contribution is divided into three parts: ambient light, diffuse light, and specular light. Ambient light is a constant light produced by the surrounding environment. This lighting is simply added to the total light contribution. Diffuse lighting is produced by the light source. It's scattered in all directions in response to a rough material surface. Therefore, it mainly reflects the light that isn't absorbed by the material; in this case, the material color is reflected to the viewer. The specular light is the part of the lighting where the light reflects directly from the surface to the viewer with minimum scattering. This also means that specular reflection consists mainly of the light color. You can observe this when you're looking at a water surface at a low angle.
[Diagram: light reflection on a surface – ambient, diffuse, and specular components.]
The light source position is defined by a vector with four components. If the last component equals 1, the vector defines the light position. Otherwise, the vector defines the orientation of a directional light. A directional light doesn't have a source, so its attenuation factor is 1.
A positional light uses the distance between the light source and the surface to adjust the light intensity. The light intensity can be described by the following attenuation formula:

attenuation = 1 / (C + L*d + Q*d^2)

This formula uses three parameters: C (constant attenuation), L (linear attenuation), and Q (quadratic attenuation), where d is the distance between the light source and the surface.
The spotlight's cut-off value specifies the angular size of the light cone. The omnidirectional light
has the spotlight's cut-off value greater than 90 degrees. The light spot intensity decreases with
the second power of the angular distance from the light spot direction.
After these steps, you should have the final attenuation value, which will be used on diffuse
and specular reflection.
Diffuse reflection uses the surface normal vector and the light direction vector to calculate the amount of light reflected. Note that this type of reflection is independent of the camera position. The final diffuse color is the result of multiplying the material color value with the light color value and the dot product of the surface normal vector with the light direction vector. The dot product always produces values in the range (-1, 1). If the two vectors are parallel, it results in a value of 1; if they are perpendicular, it's 0. Negative values are produced when the two vectors enclose an angle greater than 90 degrees. The final diffuse color is also modified by the attenuation value, so the parts of the surface that are out of the light source's range stay dark.
Specular reflection occurs only on surface parts that reflect light almost directly to the camera. The total amount of specular reflection is modified by the result of this formula:

specular = attenuation * lightSpecular * materialSpecular * max(0, dot(reflect(-L, N), V))^shininess

Here, L is the light direction, N is the surface normal, and V is the direction from the surface to the camera.
Finally, the diffuse and specular reflections are added to total light contribution on the
selected part of the surface.
See also
Bumpmapping
Bumpmapping presents a way to increase the level of detail without increasing the total polygon count. This technique relies on using normal maps applied to surfaces. Without it, each surface or polygon would have only one normal vector and would therefore look flat. It uses the term mapping because, in addition to the basic texture map, it uses another texture that represents a normal map. A normal map contains normal vectors in tangent space and can be encoded as a simple RGB texture, where each color component represents a normal vector component. It makes the surface look rough, with bumps.
Bumpmap textures usually consist of a grayscale image, where dark areas represent lower regions and lighter areas represent higher regions. Such images need to be converted into a color normal map. You can use NVidia Texture Tools for Adobe Photoshop or a normal map plugin for the GIMP image editor. There's even a free online conversion tool called NormalMap Online, available at http://cpetry.github.io/NormalMap-Online/.
Getting ready
This recipe uses a slightly modified version of the shaders from the previous recipe. While the vertex shader is almost the same, the fragment shader uses two texture units instead of one. The first one is used for the texture map and the second one for the normal map. Therefore, you'll need to set up two texture units as follows:
local texture_target = gl_enum.GL_TEXTURE_2D
gl.ActiveTexture(gl_enum.GL_TEXTURE0)
gl.BindTexture(texture_target, texture_map)
gl.ActiveTexture(gl_enum.GL_TEXTURE1)
gl.BindTexture(texture_target, normal_map)
-- textures
setUniform('i', 'diffuseTexture', 0)
setUniform('i', 'normalTexture', 1)
You'll also need to prepare lights in your scene. You can copy the light setup from the previous
recipe about lighting basics.
You could try to apply a normal map as an ordinary texture, but you would soon discover certain artifacts in normal vector orientations. That's why you'll need to know the triangle tangent vectors in addition to the existing vertex attributes, such as the normal vector. These vectors describe the direction of the triangle plane, and you'll need them to apply a vector correction to the normal map. Otherwise, the normal map would cause distortions and incorrect light reflections. You can supply tangent vectors for each vertex via a vertex buffer.
How to do it
First, you'll have to prepare the vertex buffer objects and vertex attributes that supply all the data for the shaders:
local positionVBO = gl.GenBuffers(1)
local texcoordVBO = gl.GenBuffers(1)
local normalVBO = gl.GenBuffers(1)
local tangentVBO = gl.GenBuffers(1)
local vertex_array_object = gl.GenVertexArrays(1)

-- vertex coordinates
gl.BindBuffer(gl_enum.GL_ARRAY_BUFFER, positionVBO)
gl.BufferData(gl_enum.GL_ARRAY_BUFFER, vertexPositions, gl_enum.GL_STATIC_DRAW)
-- texture coordinates
gl.BindBuffer(gl_enum.GL_ARRAY_BUFFER, texcoordVBO)
gl.BufferData(gl_enum.GL_ARRAY_BUFFER, texcoords, gl_enum.GL_STATIC_DRAW)
-- normal vector coordinates
gl.BindBuffer(gl_enum.GL_ARRAY_BUFFER, normalVBO)
gl.BufferData(gl_enum.GL_ARRAY_BUFFER, normals, gl_enum.GL_STATIC_DRAW)
-- tangent vector coordinates
gl.BindBuffer(gl_enum.GL_ARRAY_BUFFER, tangentVBO)
gl.BufferData(gl_enum.GL_ARRAY_BUFFER, tangents, gl_enum.GL_STATIC_DRAW)
There are two vectors, U and V, that describe the plane defined by the triangle points A, B, and C. You can compute these two vectors with the following code:
local U = {
  x = C.x - A.x,
  y = C.y - A.y,
  z = C.z - A.z,
}
local V = {
  x = B.x - A.x,
  y = B.y - A.y,
  z = B.z - A.z,
}
You'll need to do the same with the texture coordinates as well. The texture coordinate vectors will use the letters S and T:
local S = {
  x = C.tx - A.tx,
  y = C.ty - A.ty,
}
local T = {
  x = B.tx - A.tx,
  y = B.ty - A.ty,
}
Now that you have the U and V triangle edge vectors and the texel direction vectors S and T, you can compute the tangent and bitangent vectors with the following code:
local r = 1 / (S.x*T.y - S.y*T.x)
local tangent = {
  x = (T.y*U.x - S.y*V.x)*r,
  y = (T.y*U.y - S.y*V.y)*r,
  z = (T.y*U.z - S.y*V.z)*r,
}
local bitangent = {
  x = (S.x*V.x - T.x*U.x)*r,
  y = (S.x*V.y - T.x*U.y)*r,
  z = (S.x*V.z - T.x*U.z)*r,
}
Note that these tangent and bitangent vectors are related to the edge vectors and the texture space vectors. You could use these vectors in normal mapping directly, but on certain occasions you would get incorrect results. That's because these tangent space vectors aren't orthogonal, or because they have a different orientation. You can solve these problems with Gram-Schmidt orthogonalization. For this operation, you'll need the normal vector N. The Gram-Schmidt orthogonalization formula looks like this:

tangent' = normalize(tangent - N * dot(N, tangent))
You can rewrite it in the Lua language with the following code:
-- subtract the projection of the tangent onto the normal vector N
local NdotT = N.x*tangent.x + N.y*tangent.y + N.z*tangent.z
local tangentOrthogonal = {
  x = tangent.x - N.x*NdotT,
  y = tangent.y - N.y*NdotT,
  z = tangent.z - N.z*NdotT,
}
-- normalize the result
local len = math.sqrt(tangentOrthogonal.x^2 + tangentOrthogonal.y^2 + tangentOrthogonal.z^2)
tangentOrthogonal.x = tangentOrthogonal.x / len
tangentOrthogonal.y = tangentOrthogonal.y / len
tangentOrthogonal.z = tangentOrthogonal.z / len
Now you're left with determining the triangle winding direction. The winding direction defines the order of the triangle vertices: the side on which the vertices appear in this order is regarded as the front face, and the back face of the triangle uses the opposite winding direction. The winding direction helps to determine the direction of the orthogonal tangent vector in the final step; an invalid (opposite) tangent direction would mirror the texture on the triangle. In most cases, you'll be using counterclockwise winding, but this can differ if you're using triangle strips, where the triangle winding alternates, and this can pose a problem. You can obtain the winding direction from the following formula:
-- the sign of dot(cross(N, tangent), bitangent) gives the winding direction
local winding =
  (N.y*tangent.z - N.z*tangent.y) * bitangent.x +
  (N.z*tangent.x - N.x*tangent.z) * bitangent.y +
  (N.x*tangent.y - N.y*tangent.x) * bitangent.z
The last step in producing tangent vectors is to include the winding information in the tangent
vector itself. You can store this information in the fourth element w of the tangent vector:
tangentOrthogonal.w = (winding < 0) and 1 or -1
Do note that this tangent vector has four elements: x, y, z, and w. The last one is used in the vertex shader to correct the TBN matrix orientation. Fortunately, you only have to compute the tangent vectors once.
To produce a bumpmapping effect, you can reuse the shader code introduced in previous
samples with a few changes.
Vertex shader
The vertex shader code will need to include another vertex attribute that will contain the tangent vector for each vertex. You can do this by including this vertex layout specification, matching the attribute index used in the full listing below:
layout (location = 3) in vec4 VertexTangent;
After this step, you'll have to compute the so-called TBN matrix of size 3 x 3. This matrix contains three columns: the first contains the tangent vector, the second the bitangent vector, and the last one the normal vector. It represents a new vector space, often known as tangent space. The TBN matrix will be used in the fragment shader to correct the normal vector orientation. To build the TBN matrix, you'll need to know the bitangent vector as well. Fortunately, you can compute the bitangent vector from the normal and tangent vectors, since the bitangent vector is perpendicular to both. Note that it's important to adjust the vector orientation in this matrix to correspond with your coordinate system.
The TBN matrix will be passed to the fragment shader by the modified VertexData structure:
out VertexData {
vec2 texCoord;
vec3 position;
mat3 tbn;
} outData;
layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexNormal;
layout (location = 2) in vec2 VertexTexCoord;
layout (location = 3) in vec4 VertexTangent;
out VertexData {
vec2 texCoord;
vec3 position;
mat3 tbn;
} outData;
uniform cameraState camera;
uniform positionState object;
void main(){
mat4 objMatrix = object.position;
vec4 vertexPos = objMatrix * vec4(VertexPosition.xyz, 1.0);
gl_Position = camera.perspective * camera.position * objMatrix *
vec4(VertexPosition.xyz, 1.0);
outData.texCoord = vec2(VertexTexCoord.st);
outData.position = vertexPos.xyz;
outData.tbn = mat3(
normalize((objMatrix * vec4(VertexTangent.xyz, 0.0)).xyz),
normalize((objMatrix * vec4(cross(VertexNormal,
VertexTangent.xyz)*VertexTangent.w, 0.0)).xyz),
normalize((objMatrix * vec4(VertexNormal.xyz, 0.0)).xyz)
);
}
Fragment shader
First, you'll need to modify the fragment shader code to include the TBN matrix from the
vertex shader:
in VertexData {
vec2 texCoord;
vec3 position;
mat3 tbn;
} inData;
Now, you can read the normal map texel value from the normalTexture texture unit:
vec3 normalTexel = texture(normalTexture, inData.texCoord.st).xyz;
The normalTexel vector contains the raw values of the normal vector from the normal map texture for the current texel. All values are in the range (0, 1), which is also the color component range in OpenGL. You need to convert these values into the range (-1, 1), so you can use them to produce a valid normal vector. You can do this with the following formula:
normalTexel = 2*normalTexel.xyz - vec3(1.0);
In addition to this conversion, you can apply a vector orientation correction by multiplying the normalTexel vector with a vec3 correction vector:
normalTexel *= vec3(1, 1, 1);
Values in the vector multiplier are related to normal map values. Normal maps aren't
standardized, so you'll need to find out what kind of normal map suits you the best. The
normal maps that are generated from bumpmaps are usually fine. However, they are not
very accurate for more complex 3D models. Such an example might be a 3D model with a
low polygon count while using a normal map to define fine details. This is usually the result
of using the sculpting tool in the Blender application. Fortunately, you can use the normal
map baking tool to generate accurate normal maps from the sculpture.
Remember to always set up the correct mapping of normal vector coordinates to color channels in a normal map. In most cases, normal maps use the blue channel to represent the vector facing the viewer.
This vector can be used instead of the per-vertex normal vector in the processLighting
function.
In the end, the fragment shader code would look like this:
#version 330
struct lightState {
  vec4 position;
  vec4 diffuse;
  vec4 specular;
  float constantAttenuation, linearAttenuation, quadraticAttenuation;
  float spotCutoff, spotExponent;
  vec3 spotDirection;
};

struct materialState {
  vec4 ambient;
  vec4 diffuse;
  vec4 specular;
  float shininess;
};

struct cameraState {
  mat4 perspective;
  mat4 translation;
  mat4 rotation;
};

uniform sampler2D diffuseTexture;
uniform sampler2D normalTexture;
uniform cameraState camera;
uniform materialState material;
uniform vec4 sceneAmbient;
in VertexData {
  vec2 texCoord;
  vec3 normal;
  vec3 position;
  mat3 tbn;
} inData;
layout(location = 0) out vec4 diffuseColor;
vec4 processLighting(in materialState material, in vec3 surface, in vec3 normalDirection){
...
}
void main() {
  //local copy of material
  materialState localMaterial = material;
  //texture texel
  vec4 texel = texture(diffuseTexture, inData.texCoord.st);
  localMaterial.diffuse *= texel;
  //normalmap texel
  vec3 normalTexel = texture(normalTexture, inData.texCoord.st).xyz;
  //normalize range
  normalTexel = (2*normalTexel.xyz - vec3(1.0));
  //change normal vector orientation
  normalTexel *= vec3(-1, -1, 1);
  //convert normal map vector into world space
  vec3 perTexelNormal = inData.tbn * normalize(normalTexel);
  diffuseColor = processLighting(
    localMaterial,
    inData.position,
    normalize(perTexelNormal)
  );
}
How it works
Bumpmapping affects the normal vector direction at each point of the polygon. Without it,
normal vectors would use only linear interpolation between vertices and the surface would
look smooth.
A normal map is usually represented by a 2D texture, where each pixel contains an encoded
normal vector. A normal vector consists of three axes: x, y, and z, while in a normal texture
map, they are mapped to R, G, and B color channels. A perfectly flat normal map would have
a bluish look. That's because every pixel would use (128,128,255) RGB colors, which also
means it will use a normal vector with XYZ coordinates (0,0,1).
The difficult part is using these normal map values to produce a usable normal vector. You can't use a normal map directly as a simple texture, because then every polygon would have the same normal vectors; it would be as if all polygons were facing you, which is rarely the case. Therefore, you'll need to rotate these normal vectors so that the normal vector (0, 0, 1) in the normal map matches the normal vector of the polygon. You can achieve this by multiplying the vector from the normal map with a matrix that contains the tangent, bitangent, and normal vector values. Each of them corresponds to an axis of the local coordinate system on each polygon.
After multiplication with a normal vector from the normal map texture, you'll get the correct
normal vector, which can be used with the lighting function.
There's more
There's a simple way to debug normal vectors by using the perTexelNormal vector in place
of the output color:
diffuseColor = vec4((normalize(perTexelNormal)+1.0)/2.0, 1);
Note that you'll need to adjust the value range of the vector, because the normal vector can contain negative values, and without the adjustment the output would mostly appear black.
See also