Lighting and Shading
Translucent surfaces
• Allow some light to penetrate the surface and to emerge from another location on the object.
• This process of refraction characterizes glass and water.
• Some incident light may also be reflected at the surface.
Bidirectional Reflectance Distribution Function (BRDF)
From a physical perspective, the reflection, absorption, and transmission of light at the surface of a
material are described by a single function called the Bidirectional Reflectance Distribution Function (BRDF).
The BRDF is a function of five variables:
The frequency of light.
The two angles required to describe the direction of the input vector.
The two angles required to describe the direction of the output vector.
For a perfectly diffuse surface, the BRDF is simplified because it has the same value for all possible output
vectors.
For a perfect reflector, the BRDF is nonzero only when the angle of incidence equals the angle of
reflection.
LIGHT SOURCES
• Light can leave a surface through two fundamental processes:
self-emission and reflection.
• Any light source can be considered as an object with a surface.
• Each point (x, y, z) on the surface can emit light.
• Light emission is characterized by the direction of emission (θ , φ)
• The intensity of energy emitted at each wavelength λ.
• A general light source can be characterized by a six-variable illumination function I(x, y, z , θ , φ, λ).
• We need two angles to specify a direction, and we are assuming that each frequency can be considered
independently.
• From the perspective of a surface illuminated by the source, we can obtain
the total contribution of the source by integrating over its surface,
a process that accounts for the emission angles that reach this surface
and must also account for the distance between the source and the surface
Four basic types of light sources: ambient lighting, point sources, spotlights, and distant light sources.
Color sources
Light sources emit different amounts of light at different frequencies
Their directional properties can vary with frequency
• We can model light sources as having three components—red, green, and blue—and can use each of the three color
sources to obtain the corresponding color components that a human observer sees.
• A light source can thus be modelled through a three-component intensity, or illumination, function,
each component of which is the intensity of the independent red, green, and blue sources.
• We use the red component of a light source for the calculation of the red component of the image.
• Although the inverse-square distance term is correct for point sources, in practice it is usually replaced by a term of the form
(a + b*d + c*d^2)^(-1),
where d is the distance between p and p0. The constants a, b, and c can be chosen to soften the lighting.
• Note that if the light source is far from the surfaces in the scene, the intensity of the light from the source is sufficiently
uniform that the distance term is constant over each surface.
Spotlights
• Spotlights are characterized by a narrow range of angles through which light is emitted.
• We can construct a simple spotlight from a point source by limiting the angles at which light from the
source can be seen.
• We can use a cone whose apex is at ps, which points in the direction ls, and whose width is determined by an angle θ.
• If θ = 180°, the spotlight becomes a point source.
• The intensity is a function of the angle φ between the direction of the source and a vector s to a point on the
surface.
• Although this function could be defined in many ways, it is usually defined by cos^e φ, where the exponent e determines
how rapidly the light intensity drops off.
• If u and v are any unit-length vectors, we can compute the cosine of the angle θ between them with the dot product
cos θ = u . v, a calculation that requires only three multiplications and two additions.
Distant light sources
• In homogeneous coordinates, a point light source at p0 is represented internally as a four-dimensional
column matrix whose fourth component is 1: p0 = [x y z 1]^T.
• In contrast, a distant light source is described by a direction vector whose representation in homogeneous
coordinates has a fourth component of 0: [x y z 0]^T.
• The graphics system can carry out rendering calculations more efficiently for distant light sources than for
near ones.
• A scene rendered with distant light sources looks different from a scene rendered with near sources.
• OpenGL will allow both types of sources.
THE PHONG LIGHTING MODEL
• This lighting model was introduced by Phong and later modified by Blinn.
• It has proved to be an efficient and close enough approximation to physical reality to produce good renderings
under a variety of lighting conditions and material properties.
• The Blinn-Phong, or modified Phong, model is the basis for lighting and shading in graphics APIs and is
implemented on virtually all graphics cards.
• The Phong model uses four vectors to calculate a color for an arbitrary
point p on a surface.
If the surface is curved, all four vectors can change as we move from point to point.
The vector n is the normal at p.
The vector v is in the direction from p to the viewer or COP.
The vector l is in the direction of a line from p to an arbitrary point on the source for a distributed light
source, or to a point light source.
The vector r is in the direction that a perfectly reflected ray from l would take. r is determined by n and
l.
• The Phong model supports the three types of material–light interactions— ambient, diffuse, and specular.
• Suppose that we have a set of point sources. We assume that each source can have separate ambient, diffuse, and
specular components for each of the three primary colors.
• Our goal is to create realistic shading effects in as close to real time as possible; we use a local model to
simulate effects that can be global in nature.
• The light-source model has ambient, diffuse, and specular terms.
• We need nine coefficients to characterize these terms at any point p on the surface. We can place these nine
coefficients in a 3 × 3 illumination matrix for the ith light source.
• The first row of the matrix contains the ambient intensities for the red, green, and blue terms from source i.
• The second row contains the diffuse terms; the third contains the specular terms.
• The lighting model is built by summing contributions for all light sources at each point we wish to light. For each
light source we have to compute the amount of light reflected for each of nine terms.
• For example, for the red diffuse term from source i, Lird, we can compute a reflection term Rird, and the latter's
contribution to the intensity at p is RirdLird. The value of Rird depends on the material properties, the orientation
of the surface, the direction of the light source, and the distance between the light source and the surface. Thus,
for each point, we have nine coefficients that we can place in a matrix of reflection terms.
• We obtain the total intensity by adding the contributions of all sources and, possibly, a global ambient term.
Thus, the red term is
Ir = Σi (Rira Lira + Rird Lird + Rirs Lirs) + Iar,
where Iar is the red component of the global ambient term.
• Perfectly diffuse surfaces are so rough that there is no preferred angle of reflection. Such surfaces, sometimes
called Lambertian surfaces, can be modelled mathematically with Lambert’s law.
• Consider a diffuse planar surface illuminated by the sun. The surface is brightest at noon and dimmest at dawn
and dusk.
• According to Lambert’s law, we see only the vertical component of the incoming light.
• Consider a small parallel light source striking a plane. As the source is lowered in the (artificial) sky, the same
amount of light is spread over a larger area, and the surface appears dimmer.
• we can characterize diffuse reflections mathematically. Lambert’s law states that
Rd∝ cos θ ,
where θ is the angle between the normal at the point of interest n and the direction of the light
source l.
• If both l and n are unit-length vectors, then
cos θ = l . n.
• If we add in a reflection coefficient kd representing the fraction of incoming diffuse light that is
reflected, we have the diffuse reflection term:
Id= kd(l . n)Ld.
• If we wish to incorporate a distance term, to account for attenuation as the light travels a distance d
from the source to the surface, we can again use the quadratic attenuation term:
Id = (kd / (a + b*d + c*d^2)) (l . n) Ld.
• If the light source is below the horizon, (l . n)Ld will be negative, and we want to use zero rather than a
negative value. Hence, in practice, max((l . n)Ld, 0) is used.
Specular Reflection
• If we employ only ambient and diffuse reflections, our images will be shaded and will appear three-
dimensional, but all the surfaces will look dull; the highlights that we see reflected from shiny objects will
be missed.
• These highlights usually show a color different from the color of the reflected ambient and diffuse light.
• Whereas a diffuse surface is rough, a specular surface is smooth. The smoother the surface is, the more it
resembles a mirror.
• As the surface gets smoother, the reflected light is concentrated in a smaller range of angles centered
about the angle of a perfect reflector—a mirror or a perfectly specular surface.
• Modeling specular surfaces realistically can be complex because the pattern by which the light is
scattered is not symmetric. It depends on the wavelength of the incident light, and it changes with the
reflection angle.
• Phong proposed an approximate model that can be computed with only a slight increase over the work
done for diffuse surfaces.
• The model adds a term for specular reflection.
• we consider the surface as being rough for the diffuse term and smooth for the specular term.
• The amount of light that the viewer sees depends on the angle φ between r, the direction of a perfect
reflector, and v, the direction of the viewer.
• The Phong model uses the equation
Is = ks Ls cos^α φ.
The coefficient ks (0 ≤ ks ≤ 1) is the fraction of the incoming specular light that is reflected.
The exponent α is a shininess coefficient.
As α is increased, the reflected light is concentrated in a narrower region centered on the
angle of a perfect reflector.
As α goes to infinity, we get a mirror; values in the range 100 to 500 correspond to most metallic
surfaces, and smaller values (< 100) correspond to materials that show broad highlights.
• The computational advantage of the Phong model is that if we have normalized r and v to unit length,
we can again use the dot product, and the specular term becomes
Is = ks Ls (r . v)^α.
• We can add a distance term, as we did with diffuse reflections. What is referred to as the Phong model,
including the distance term, is written
I = (1 / (a + b*d + c*d^2)) (kd Ld max(l . n, 0) + ks Ls max((r . v)^α, 0)) + ka La.
• This formula is computed for each light source and for each primary.
The Modified phong model
• If we use the Phong model with specular reflections in our rendering, the dot product r . v must be recalculated
at every point on the surface.
• We can obtain an approximation by using the unit vector halfway between the viewer vector and the light-source
vector, the halfway vector:
h = (l + v) / |l + v|.
• Denote by ψ the angle between n and h, the halfway angle. When v lies in the same plane as l, n, and r, we can
show that 2ψ = φ.
• If we replace r . v with n . h, we avoid the calculation of r.
• Because the halfway angle ψ is smaller than φ, if we use the same exponent e in (n . h)^e that we used in
(r . v)^e, then the size of the specular highlights will be smaller.
• We can mitigate this problem by replacing the value of the exponent e with a value e' so that (n . h)^e'
is closer to (r . v)^e.
• For smooth surfaces, the vector normal to the surface exists at every point and gives the local orientation of the
surface. Its calculation depends on how the surface is represented mathematically. Two simple cases—the plane
and the sphere—illustrate both how we compute normals and where the difficulties lie.
• A plane can be described by the equation: ax + by + cz + d = 0.
• This equation could also be written in terms of the normal to the plane, n, and a point, p0, known to be on the
plane as
n . (p − p0) = 0,
• where p is any point (x, y, z) on the plane. Comparing the two forms, we see that the vector n is given by
n = [a b c]^T.
• For a flat polygon, n is constant. If we assume a distant viewer, v is constant over the polygon; if
the light source is distant, l is constant.
• If the three vectors are constant, then the shading calculation needs to be carried out only once for
each polygon, and each point on the polygon is assigned the same shade. This technique is known
as flat, or constant, shading.
• In OpenGL we specify flat shading as glShadeModel(GL_FLAT);
• OpenGL uses the normal associated with the first vertex of a single polygon for the shading
calculation.
• For primitives such as the triangle strip, OpenGL uses the normal of the third vertex for the first triangle, the
normal of the fourth vertex for the second triangle, and so on.
• Flat shading will show differences in shading between the polygons in a mesh; because of lateral inhibition in
the human visual system, we perceive even small differences in shading between adjacent polygons as exaggerated
edges known as Mach bands.
Smooth and Gouraud Shading
• Smooth shading is the default in OpenGL.
• We can also set the mode explicitly as follows:
glShadeModel(GL_SMOOTH);
• Suppose that we have enabled smooth shading and lighting and that we assign to each vertex
the normal of the polygon being shaded. The lighting calculation is made at each
vertex using the material properties and the vectors n, v, and l computed for each
vertex.
• If the light source and the viewer are distant, the vertex shades are then identical, and smooth
shading shades the polygon in a constant color.
• The normal at a vertex should therefore be defined in such a way that smoother shading is
achieved through interpolation.
• In Gouraud shading, we define the normal at a vertex to be the normalized average of
the normals of the polygons that share the vertex.
Phong Shading
• Consider a polygon that shares edges and vertices with other polygons in the mesh.
• We can compute vertex normals by averaging the normals of the polygons that share the vertex.
Next, we can use bilinear interpolation to interpolate the normals over the polygon. For example, we can use the
normals at vertices A and B to interpolate a normal nC along the edge between them:
nC(α) = (1 − α) nA + α nB.
• We can do a similar interpolation on all the edges. The normal at any interior point can be obtained
from points C and D on two edges by
n(α, β) = (1 − β) nC + β nD.
Once we have the normal at each point, we can make an independent shading calculation.
• This process can be combined with rasterization of the polygon.
• Phong shading long had to be carried out off-line because it requires the interpolation of normals across
each polygon.
• Phong shading requires that the lighting model be applied to each fragment; hence the name
per-fragment shading.
APPROXIMATION OF A SPHERE BY RECURSIVE SUBDIVISION
Polygonal approximation of the sphere illustrates the interactions between shading parameters and polygonal
approximations to curved surfaces.
Our starting point is a tetrahedron, although we could start with any regular polyhedron whose facets could be
divided initially into triangles. The regular tetrahedron is composed of four equilateral triangles, determined by
four vertices. We start with the four vertices; all four lie on the unit sphere, centered at the origin.
We get a first approximation by drawing a wireframe for the tetrahedron.
GLfloat v[4][3]={{0.0,0.0,1.0},{0.0,0.943,-0.33},{-0.816,-0.471,-0.33},{0.816,-0.471,-0.33}};
void triangle(GLfloat *a, GLfloat *b, GLfloat *c)
{
glBegin(GL_LINE_LOOP);
glVertex3fv(a);
glVertex3fv(b);
glVertex3fv(c);
glEnd();
}
void tetrahedron()
{
triangle (v[0],v[1],v[2]);
triangle (v[3],v[2],v[1]);
triangle(v[0],v[3],v[1]);
triangle (v[0],v[2],v[3]);
}
• The order of vertices obeys the right-hand rule.
• We can get a closer approximation to the sphere by subdividing each facet of the tetrahedron into smaller triangles.
Subdividing into triangles will ensure that all the new facets are flat.
There are various ways to subdivide
• We can bisect each of the angles of the triangle and draw the three bisectors, which meet
at a common point, generating three new triangles.
• We can compute the center of mass of the vertices by simply averaging them and
draw lines from this point to the three vertices, again generating three triangles.
• We can connect the midpoints of the sides of the triangle, which forms four equilateral
triangles.
After we have subdivided a facet as just described, the four new triangles will still be in the same
plane as the original triangle. We can move the new vertices that we created by bisection to the
unit sphere by normalizing each bisected vertex, using the normalization function
void normalize(GLfloat *p)
{
double d=0.0;
int i;
for(i=0;i<3;i++)
d+=p[i]*p[i];
d=sqrt(d);
if(d>0.0)
for(i=0;i<3;i++)
p[i]/=d;
}
GLfloat v1[3], v2[3], v3[3];
int j;
for(j=0;j<3;j++) v1[j]=a[j]+b[j];
normalize(v1);
for(j=0;j<3;j++) v2[j]=a[j]+c[j];
normalize(v2);
for(j=0;j<3;j++) v3[j]=b[j]+c[j];
normalize(v3);
triangle(a, v2, v1);
triangle(c, v3, v2);
triangle(b, v1, v3);
triangle(v1, v2, v3);
We make the tetrahedron routine depend on the depth of recursion by adding an argument n:
void tetrahedron(int n)
{
divide_tri(v[0],v[1],v[2],n);
divide_tri(v[3],v[2],v[1],n);
divide_tri(v[0],v[3],v[1],n);
divide_tri(v[0],v[2],v[3],n);
}
void divide_tri(GLfloat *a, GLfloat *b, GLfloat *c, int n)
{
GLfloat v1[3], v2[3], v3[3];
int j;
if(n>0)
{
for(j=0;j<3;j++) v1[j]=a[j]+b[j];
normalize(v1);
for(j=0;j<3;j++) v2[j]=a[j]+c[j];
normalize(v2);
for(j=0;j<3;j++) v3[j]=b[j]+c[j];
normalize(v3);
divide_tri(a, v2, v1, n-1);
divide_tri(c, v3, v2, n-1);
divide_tri(b, v1, v3, n-1);
divide_tri(v1, v2, v3, n-1);
}
else triangle(a,b,c);
}
LIGHT SOURCES IN OPENGL
• The OpenGL functions glLightfv and glLightf specify the vector and scalar light parameters, respectively.
• There are four vector parameters that we can set:
The position of the light source.
The amounts of ambient, diffuse, and specular light associated with the source.
glLightfv(GLenum source, GLenum parameter, GLfloat *pointer_to_array)
glLightf(GLenum source, GLenum parameter, GLfloat value)
• Suppose that we wish to specify the first source, GL_LIGHT0, and to locate it at the
point (1.0, 2.0, 3.0). We specify its position as a point in homogeneous coordinates as
follows:
GLfloat light0_pos[] = {1.0, 2.0, 3.0, 1.0};
With the fourth component set to zero, the source instead becomes a distant (directional) source:
GLfloat light0_pos[] = {1.0, 2.0, 3.0, 0.0};
For our single light source, if we want a white specular component and red ambient and
diffuse components, we can use
GLfloat diffuse0[]= {1.0, 0.0, 0.0, 1.0};
GLfloat ambient0[]= {1.0, 0.0, 0.0, 1.0};
GLfloat specular0[]= {1.0, 1.0, 1.0, 1.0};
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glLightfv( GL_LIGHT0, GL_POSITION, light0_pos);
glLightfv( GL_LIGHT0, GL_AMBIENT,ambient0);
glLightfv( GL_LIGHT0, GL_DIFFUSE,diffuse0);
glLightfv( GL_LIGHT0, GL_SPECULAR,specular0);
• We can add a global ambient term:
GLfloat global_ambient[] = {0.1, 0.1, 0.1, 1.0};
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, global_ambient);
• To have lighting calculations performed for both the front and back faces of surfaces, we enable
two-sided lighting with GL_LIGHT_MODEL_TWO_SIDE:
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);
SPECIFICATION OF MATERIALS IN OPENGL
• Material properties in OpenGL match up directly with the supported light sources and with the
modified Phong lighting model.
• We can also specify different material properties for the front and back faces of a surface.
• Material parameters are specified as
glMaterialfv(GLenum face, GLenum type, GLfloat *pointer_to_array)
glMaterialf(GLenum face, GLenum type, GLfloat value)
• We can specify the ambient, diffuse, and specular reflectivity coefficients (ka, kd, ks) for
each primary color through three arrays:
GLfloat ambient []={0.2, 0.2, 0.2, 1.0};
GLfloat diffuse []={1.0, 0.8, 0.0, 1.0};
GLfloat specular[]={1.0, 1.0, 1.0, 1.0};
These arrays define a small amount of white ambient reflectivity, yellow diffuse reflectivity, and white
specular reflectivity.
• We set the material properties for both front and back faces by following function
calls.
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, ambient);
glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, diffuse);
glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, specular);
• If the ambient and diffuse coefficients are the same, we can specify both at once by using
GL_AMBIENT_AND_DIFFUSE.
• To specify different front and back face properties we use GL_FRONT and
GL_BACK.
• The shininess of the surface, the exponent in the specular reflection term, is specified by
glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, 100.0);
• Surfaces can have an emissive component that characterizes self-luminous sources. It is useful
if we want a light source to appear in the image. This term is unaffected by any of the light sources,
and it is not affected by the other material properties.
GLfloat emission []= { 0.0, 0.3, 0.3, 1.0};
glMaterialfv(GL_FRONT_AND_BACK, GL_EMISSION, emission);
This code defines a small amount of blue-green (cyan) emission. The emissive term does not
contribute any light to the environment, so it does not affect the shading of other surfaces.
• OpenGL contains a function, glColorMaterial, that is used to change a single material
property.
We can define material objects in the application using structs or classes:
typedef struct materialStruct {
GLfloat ambient[4];
GLfloat diffuse[4];
GLfloat specular[4];
GLfloat shininess;
} materialStruct;
Specify materials by code
materialStruct brassMaterials =
{
0.33, 0.22, 0.03, 1.0, 0.78, 0.57, 0.11, 1.0, 0.99, 0.91, 0.81, 1.0, 27.8
};
Access the material through a pointer:
materialStruct *currentMaterial = &brassMaterials;
This allows us to set the material properties through a function:
void materials(materialStruct *materials)
{
currentMaterial = materials;
glMaterialfv(GL_FRONT, GL_AMBIENT, currentMaterial->ambient);
glMaterialfv(GL_FRONT, GL_DIFFUSE, currentMaterial->diffuse);
glMaterialfv(GL_FRONT, GL_SPECULAR, currentMaterial->specular);
glMaterialf(GL_FRONT, GL_SHININESS, currentMaterial->shininess);
}
Shading of the sphere model
void cross(GLfloat *a, GLfloat *b, GLfloat *c, GLfloat *d)
{
d[0]=(b[1]-a[1])*(c[2]-a[2])-(b[2]-a[2])*(c[1]-a[1]);
d[1]=(b[2]-a[2])*(c[0]-a[0])-(b[0]-a[0])*(c[2]-a[2]);
d[2]=(b[0]-a[0])*(c[1]-a[1])-(b[1]-a[1])*(c[0]-a[0]);
normalize(d);
}
• Assuming the light sources have been specified and enabled, we can change the triangle
routine to produce shaded spheres.
void triangle(GLfloat *a, GLfloat *b, GLfloat *c)
{
GLfloat n[3];
cross(a,b,c,n);
glBegin(GL_POLYGON);
glNormal3fv(n);
glVertex3fv(a);
glVertex3fv(b);
glVertex3fv(c);
glEnd();
}
• As we increase the number of subdivisions, the interiors of the spheres appear smooth, but we can still see the
edges of the polygons around the outside. We get a smoother appearance by using the true normals, which for a
unit sphere centered at the origin are simply the normalized vertex positions themselves.
void triangle(GLfloat *a, GLfloat *b, GLfloat *c)
{
GLfloat n[3];
int i;
glBegin(GL_POLYGON);
for(i=0; i<3; i++) n[i]=a[i];
normalize(n);
glNormal3fv(n);
glVertex3fv(a);
for(i=0; i<3; i++) n[i]=b[i];
normalize(n);
glNormal3fv(n);
glVertex3fv(b);
for(i=0; i<3; i++) n[i]=c[i];
normalize(n);
glNormal3fv(n);
glVertex3fv(c);
glEnd();
}
GLOBAL ILLUMINATION
• Phenomena like shadows, reflections, and blockage of light are global effects and require
global lighting models.
• Global lighting models are incompatible with the pipeline model, which imposes restrictions:
We must render each polygon independently of the other polygons.
We want our image to be the same regardless of the order in which the application produces
the polygons.
• There are alternative rendering strategies that can handle global effects.
Ray tracing and radiosity.
• Ray tracing starts with the synthetic-camera model but determines, for
each projector that strikes a polygon, whether that point is indeed
illuminated by one or more sources before computing
the local shading at each point.
• A radiosity renderer is based upon energy considerations:
from a physical point of view, all the light energy
in a scene is conserved.
• we can use our knowledge of OpenGL and of the effects that global lighting produces
to approximate what a global renderer would do.
• Many of the most exciting advances in computer graphics over the past few years have
been in the use of pipeline renderers for global effects including mapping methods,
multipass rendering and transparency.
• One of the other advantages of offline renderers was the ability to do lighting calculations using at
least 32-bit arithmetic. Commodity graphics cards now support high-dynamic-range
rendering (HDRR).
Thank you