Computer Graphics Lectures - 1 To 25
Lecture No:3-4
What is computer graphics?
• Computer graphics is the art of drawing
pictures, lines, charts, etc., on a computer
with the help of programming.
• A computer graphics image is made up of a
number of pixels.
• A pixel is the smallest graphical unit that can
be represented on the computer screen.
• There are two types of computer graphics:
– Raster graphics:
• A raster graphic (also called a “bitmap”) is basically a large grid
filled with boxes called pixels,
• where each pixel is defined separately, as in a digital photograph.
– Vector graphics:
• Vector graphics are defined by math (mathematical
formulas are used to draw lines and shapes).
• They are points connected by lines of various shapes.
Raster Graphics
• Raster images are created with pixel-based
software or captured with a camera or
scanner.
• They are the more common type in general use –
formats such as JPG, GIF, and PNG are widely used on the web.
Raster Graphics
• When using a raster
program you paint an
image, much like
dipping a brush in paint
and painting.
• You can blend colors to
soften the transition
from one color to
another. FIG.1
• Raster images are made
of pixels.
• A pixel is a single point
or the smallest single
element in a display
device.
• If you zoom in on a
raster image you will
start to see lots of
tiny squares.
Vector Graphics
• Vector graphics are math-defined shapes
created with vector software and are less
common.
• They are used in CAD/engineering, 3D animation, and
graphic design, particularly for processes that reproduce
an image onto an object, such as engraving,
etching, and cut stencils.
• When using a vector
program you draw the
outlines of shapes; it is
similar to creating an
image with tiles of
different shapes and
sizes, e.g. an eye shape, a
nose shape, a lip shape.
These shapes, called
objects, each display one
single color. FIG.2
• Vector images are
mathematical
calculations from one
point to another that
form lines and shapes.
If you zoom in on a
vector graphic it will
always look the same.
• Vector displays are driven by display commands
(move(x, y), char(“A”), line(x, y), …).
Raster Image vs Vector Image
Raster:
• A raster image has a specific number of pixels.
• When you enlarge the image file without changing the
number of pixels, the image will look blurry.
• When you enlarge the file by adding more pixels, the pixels
are added randomly throughout the image, rarely producing good results.
Vector:
• When you enlarge a vector graphic, the math formulas stay
the same, rendering the same visual graphic no matter the size.
• Vector graphics can be scaled to any size without losing quality.
• Because vector graphics are not composed of pixels, they are
resolution-independent.
Graphics Display Hardware
Bitmap graphics
Bitmap vs. Vector graphics
Bitmap vs. vector graphics
Graphics Output Primitives
Drawing Line, Circle and Ellipse
Dr. Ali Raza Baloch
Objectives
Introduction to Primitives
Points & Lines
Line Drawing Algorithms
Digital Differential Analyzer (DDA)
Bresenham’s Algorithm
Mid-Point Algorithm
Circle Generating Algorithms
Properties of Circles
Bresenham’s Algorithm
Mid-Point Algorithm
Ellipse Generating Algorithms
Properties of Ellipse
Bresenham’s Algorithm
Mid-Point Algorithm
Other Curves
Conic Sections
Polynomial Curves
Spline Curves
Rasterization
Rasterization is the act of converting an image described in a vector
format (shapes) into a raster image (pixels or dots) so it can be displayed
on a video device, printed, or saved in a bitmap file format.
The slope m and intercept b of a line through (x0, y0) and (x1, y1):
m = dy / dx = (y1 − y0) / (x1 − x0)
b = y0 − m · x0
[Figure: a line from (x0, y0) to (x1, y1), with rise dy and run dx]
Line Drawing Algorithm
m = (By − Ay) / (Bx − Ax) = (96 − 41) / (125 − 23) = 55 / 102 = 0.5392
Points
A point is shown by
illuminating a pixel on
the screen
Lines
A line segment is completely defined in terms of its two endpoints.
A line segment is thus defined as:
Line_Seg = { (x1, y1), (x2, y2) }
Lines
A line is produced by means of illuminating a set of intermediary
pixels between the two endpoints.
[Figure: a line from (x1, y1) to (x2, y2) in the xy plane]
Lines
A line is digitized into a set of discrete integer positions that
approximate the actual line path.
Example: A computed line position of (10.48, 20.51) is converted to
pixel position (10, 21).
Line
The rounding of coordinate values to integers causes all but horizontal
and vertical lines to be displayed with a stair-step appearance, known as
“the jaggies”.
Line Drawing Algorithms
A straight line segment is defined by the coordinate positions of the
endpoints of the segment.
Given Points (x1, y1) and (x2, y2)
Line
All line drawing algorithms sample the line at unit x intervals and
compute each successive y value from the fundamental equation:
yk+1 = yk + m (1)
Subscript k takes integer values starting from 1, for the first point, and
increases by 1 until the final end point is reached.
Since 0.0 < m ≤ 1.0, the calculated y values must be rounded to the
nearest integer pixel position.
DDA
If m > 1, reverse the role of x and y and take Δy = 1, calculate successive
x from
xk+1 = xk + 1/m (2)
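The DDA procedure described above can be sketched in Python (a minimal illustration; the function name is not from the slides):

```python
def dda_line(x1, y1, x2, y2):
    """Digital Differential Analyzer: return the pixel positions
    approximating the line from (x1, y1) to (x2, y2)."""
    dx, dy = x2 - x1, y2 - y1
    steps = max(abs(dx), abs(dy))  # sample along the major axis
    if steps == 0:
        return [(x1, y1)]
    x_inc, y_inc = dx / steps, dy / steps  # one of these is +/-1
    x, y = float(x1), float(y1)
    points = []
    for _ in range(steps + 1):
        points.append((round(x), round(y)))  # round to nearest pixel
        x += x_inc
        y += y_inc
    return points
```

Because the increments are floating-point, DDA needs a rounding operation at every step; Bresenham's algorithm, covered next, avoids this with integer arithmetic.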
Scanning Display
[Figure: sampling a function with spacing p, and reconstruction with a kernel]
Signal Processing
Sampling a function: what do you notice?
Jagged, not smooth
Loses information!
Anti-aliasing
• Aliasing: distortion artifacts produced when representing a high-resolution signal at a lower
resolution.
• Anti-aliasing : techniques to remove aliasing
[Figure: “jaggies” on a digitized line]
Bresenham’s Line
[Figure: distances dlower and dupper from the two candidate pixels to the line at sampling position xk + 1]
The y coordinate on the mathematical line at xk+1 is calculated as
y = m(xk +1)+ b
then
dlower = y − yk
= m (xk +1) + b − yk
and
dupper = (yk+1) − y
= yk+1− m(xk+1)− b
Bresenham’s Line
To determine which of the two pixels is closest to the line path, we set
an efficient test based on the difference between the two pixel
separations
dlower - dupper = 2m (xk +1) − 2yk + 2b - 1
= 2 (Δy / Δx) (xk +1) − 2yk + 2b - 1
Multiplying both sides by Δx to avoid floating-point numbers:
Δx(dlower - dupper) = 2Δy(xk + 1) − 2Δx.yk + Δx(2b − 1)
Consider a decision parameter pk such that
pk = Δx (dlower - dupper )
= 2Δy.xk − 2Δx.yk + c
where
c = 2Δy + Δx(2b −1)
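The derivation above yields an integer-only loop: starting from p0 = 2Δy − Δx, each step adds 2Δy (keep the same scan line) or 2Δy − 2Δx (step up). A Python sketch for the case 0 ≤ m ≤ 1 with x1 < x2 (illustrative, not from the slides):

```python
def bresenham_line(x1, y1, x2, y2):
    """Bresenham's line algorithm for slope 0 <= m <= 1 and x1 < x2.
    Only integer additions and subtractions inside the loop."""
    dx, dy = x2 - x1, y2 - y1
    p = 2 * dy - dx              # initial decision parameter p0
    points = [(x1, y1)]
    x, y = x1, y1
    while x < x2:
        x += 1
        if p < 0:
            p += 2 * dy                  # stay on the same scan line
        else:
            y += 1
            p += 2 * dy - 2 * dx         # move to the next scan line
        points.append((x, y))
    return points
```

For other slopes, the roles of x and y (and the signs of the steps) are swapped, as in the DDA discussion.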
Circle Generating Algorithms
For any circle point (x, y), this distance is expressed by the
Equation
(x − xc)2 + (y − yc)2 = r 2
We calculate the points by stepping along the x-axis in unit
steps from xc − r to xc + r and calculating the y values as
y = yc ± √(r² − (x − xc)²)
Circle Generating Algorithms
There are some problems with this approach:
1. Considerable computation at each step.
2. Non-uniform spacing between plotted pixels as in this
Figure.
Circle Generating Algorithms
Problem 2 can be removed using the polar form:
x = xc + r cos θ
y = yc + r sin θ
using a fixed angular step size, a circle is plotted with equally spaced
points along the circumference.
Circle Generating Algorithms
Problem 1 can be overcome by considering the symmetry of
circles
Efficient Solutions
Midpoint Circle Algorithm
Mid point Circle Algorithm
To apply the midpoint method, we define a circle function:
fcircle(x, y) = x² + y² − r² (3)
Any point (x,y) on the boundary of the circle with radius r satisfies the
equation fcircle(x, y)= 0.
Mid point Circle Algorithm
If the point is in the interior of the circle, the circle function is negative.
If the point is outside the circle, the circle function is positive.
The circle function tests in (3) are performed for the mid
positions between pixels near the circle path at each
sampling step. Thus, the circle function is the decision
parameter in the midpoint algorithm, and we can set up
incremental calculations for this function as we did in the
line algorithm.
Mid point Circle Algorithm
Figure shows the midpoint between the two candidate
pixels at sampling position xk +1. Assuming we have just
plotted the pixel at (xk , yk), we next need to determine
whether the pixel at position (xk +1, yk) or the one at
position (xk +1, yk −1) is closer to the circle.
Mid point Circle Algorithm
Our decision parameter is the circle function evaluated at the midpoint
between these two pixels:
Mid point Circle Algorithm
If pk < 0, this midpoint is inside the circle and the pixel on scan line yk is
closer to the circle boundary.
Otherwise, the midpoint is outside or on the circle boundary, and we
select the pixel on scan line yk −1.
Successive decision parameters are obtained using incremental
calculations.
Summary of the Algorithm
As in Bresenham’s line algorithm, the midpoint method calculates pixel
positions along the circumference of a circle using integer additions and
subtractions, assuming that the circle parameters are specified in
screen coordinates. We can summarize the steps in the midpoint circle
algorithm as follows.
Algorithm
Midpoint Circle Drawing program
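The program listing on this slide did not survive extraction. Below is a minimal Python sketch of the algorithm summarized above (the course works in C; Python is used here for brevity, and the function name is illustrative):

```python
def midpoint_circle(r, xc=0, yc=0):
    """Midpoint circle algorithm: compute the pixels of one octant and
    reflect them into all eight octants. Integer arithmetic only."""
    x, y = 0, r
    p = 1 - r                       # initial decision parameter p0
    points = set()
    while x <= y:
        # plot symmetric points in all eight octants
        for px, py in [(x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)]:
            points.add((xc + px, yc + py))
        x += 1
        if p < 0:
            p += 2 * x + 1          # midpoint inside: keep y
        else:
            y -= 1
            p += 2 * x + 1 - 2 * y  # midpoint outside: decrement y
    return points
```

For r = 10 this generates the octant positions (0, 10), (1, 10), (2, 10), (3, 10), (4, 9), (5, 9), (6, 8), (7, 7), matching the worked example.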
Assignment # 1
Discuss in detail the Midpoint Ellipse Algorithm with an example and write down
its C program.
Example: Given a circle radius r = 10, we demonstrate the midpoint circle
algorithm by determining positions along the circle octant in the first
quadrant from x = 0 to x = y. The initial value of the decision parameter is
p0 = 1 − r = −9.
k      pk     (xk+1, yk+1)
0      −9     (1, 10)
1      −6     (2, 10)
2      −1     (3, 10)
3       6     (4, 9)
…      …      …
Example
A plot of the generated pixel positions in the first quadrant is shown in
Figure
Midpoint Ellipse Algorithm
Ellipse equations are greatly simplified if the major and minor axes are
oriented to align with the coordinate axes.
In “standard position” major and minor axes are oriented parallel to x
and y axes.
Parameter rx labels the semi-major axis, and parameter ry labels the
semi-minor axis.
The equation for the ellipse can be written in terms of the ellipse center
coordinates and parameters rx and ry as
((x − xc) / rx)² + ((y − yc) / ry)² = 1
Midpoint Ellipse Algorithm
Using polar coordinates r and θ, we can also describe the ellipse in
standard position with the parametric equations:
x = xc + rx cos θ
y = yc + ry sin θ
• Two-Dimensional translation
– Moving objects without deformation
– Translating an object by Adding offsets to coordinates to generate
new coordinates positions
– Let (tx, ty) be the translation distances; then
x' = x + tx    y' = y + ty
– In matrix format, where T is the translation vector:
[x']   [x]   [tx]
[y'] = [y] + [ty]
i.e. P' = P + T
• Example: Given a circle C with radius 10 and center
coordinates (1, 4). Apply the translation with distance 5
towards X axis and 1 towards Y axis. Obtain the new
coordinates of C without changing its radius.
• Solution:
– Given-
• Old center coordinates of C = (Xold, Yold) = (1, 4)
• Translation vector = (Tx, Ty) = (5, 1)
• New Coordinates X’, Y’ ?
– X’= X+Tx = 1+5=6
– Y’=Y+Ty = 4+1=5
– Thus new coordinates are
– X’ , Y’ = 6,5
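The worked example can be checked with a short Python sketch (function name illustrative):

```python
def translate(point, tx, ty):
    """2D translation: P' = P + T."""
    x, y = point
    return (x + tx, y + ty)

# The example above: center (1, 4) translated by (5, 1)
print(translate((1, 4), 5, 1))   # -> (6, 5)
```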
2D Translation
• 2D Rotation
– 2D Rotation is a process of rotating an object with
respect to an angle in a two dimensional plane.
Basic two-dimensional geometric transformations (2/1)
• Two-Dimensional rotation
– A rotation axis and a rotation angle are specified for the rotation
– Convert coordinates into polar form for the calculation:
x = r cos φ    y = r sin φ
– Example: to rotate an object by angle θ, the new position coordinates are
x' = r cos(φ + θ) = r cos φ cos θ − r sin φ sin θ = x cos θ − y sin θ
y' = r sin(φ + θ) = r cos φ sin θ + r sin φ cos θ = x sin θ + y cos θ
• In matrix format:
R = [cos θ   −sin θ]
    [sin θ    cos θ]
P' = R · P
• Rotation about a point (xr, yr)
[Figure: a line with endpoint (4, 4) rotated 30° about the origin]
• Given-
• Old ending coordinates of the line = (Xold, Yold) = (4, 4)
• Rotation angle = θ = 30º
• Xnew
• = Xold x cosθ – Yold x sinθ
• = 4 x cos30º – 4 x sin30º
• = 4 x (√3 / 2) – 4 x (1 / 2)
• = 2√3 – 2
• = 2(√3 – 1)
• = 2(1.73 – 1)
• = 1.46
• Ynew
• = Xold x sinθ + Yold x cosθ
• = 4 x sin30º + 4 x cos30º
• = 4 x (1 / 2) + 4 x (√3 / 2)
• = 2 + 2√3
• = 2(1 + √3)
• = 2(1 + 1.73)
• = 5.46
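The rotation example above can be verified with a Python sketch (rotation about the origin; function name illustrative):

```python
import math

def rotate(point, theta_deg):
    """Rotate a 2D point about the origin by theta degrees (counter-clockwise)."""
    x, y = point
    t = math.radians(theta_deg)
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

x_new, y_new = rotate((4, 4), 30)
print(round(x_new, 2), round(y_new, 2))   # -> 1.46 5.46
```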
• Two-Dimensional scaling
– To alter the size of an object by multiplying the coordinates
with scaling factor sx and sy
x' = x · sx    y' = y · sy
– In matrix format, where S is a 2x2 scaling matrix:
[x']   [sx   0] [x]
[y'] = [ 0  sy] [y]
i.e. P' = S · P
– Choosing a fixed point (xf, yf), e.g. the object's centroid, to perform scaling:
x' = x · sx + xf (1 − sx)
y' = y · sy + yf (1 − sy)
Basic two-dimensional geometric transformations (3/2)
[Figure: a house scaled relative to the origin]
Note: House shifts position relative to origin
• Problem-01:
• Given a square object with coordinate points
A(0, 3), B(3, 3), C(3, 0), D(0, 0). Apply the
scaling parameter 2 towards X axis and 3
towards Y axis and obtain the new coordinates
of the object.
• Solution-
• Given-
• Old corner coordinates of the square = A (0,
3), B(3, 3), C(3, 0), D(0, 0)
• Scaling factor along X axis = 2
• Scaling factor along Y axis = 3
• For corner A(0, 3), the new coordinates after scaling are
Xnew = Xold x sx = 0 x 2 = 0, Ynew = Yold x sy = 3 x 3 = 9, so A’ = (0, 9).
Similarly B’ = (6, 9), C’ = (6, 0), and D’ = (0, 0).
P' = S · P:
[x']   [sx   0] [x]
[y'] = [ 0  sy] [y]
In homogeneous coordinates:
[x']   [sx   0   0] [x]
[y'] = [ 0  sy   0] [y]
[1 ]   [ 0   0   1] [1]
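The fixed-point scaling formula reduces to plain multiplication when the fixed point is the origin; a Python sketch that reproduces the square example above (function name illustrative):

```python
def scale(point, sx, sy, fixed=(0, 0)):
    """Scale a 2D point by (sx, sy) about a fixed point (default: origin)."""
    x, y = point
    xf, yf = fixed
    return (x * sx + xf * (1 - sx), y * sy + yf * (1 - sy))

square = [(0, 3), (3, 3), (3, 0), (0, 0)]
print([scale(p, 2, 3) for p in square])   # -> [(0, 9), (6, 9), (6, 0), (0, 0)]
```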
3D Transformation
• In the 2D system, we use only two coordinates X and
Y but in 3D, an extra coordinate Z is added.
• 3D graphics techniques and their application are
fundamental to the entertainment, games, and
computer-aided design industries
Translation
• In 3D translation, we transfer the Z coordinate
along with the X and Y coordinates.
• The process for translation in 3D is similar to
2D translation.
• A translation moves an object into a different
position on the screen.
• The following figure shows the effect of
translation −
Geometric transformations in three-dimensional space (2)
• Three-dimensional translation
– A point P (x,y,z) in three-dimensional space translate to new
location with the translation distance T (tx, ty, tz)
x' = x + tx    y' = y + ty    z' = z + tz
– In matrix (homogeneous) format:
[x']   [1 0 0 tx] [x]
[y'] = [0 1 0 ty] [y]
[z']   [0 0 1 tz] [z]
[1 ]   [0 0 0  1] [1]
i.e. P' = T · P
Three-dimensional scaling multiplies each coordinate by its scale factor:
[X' Y' Z' 1] = [X·Sx  Y·Sy  Z·Sz  1]
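The homogeneous 4x4 translation can be multiplied out by hand; a Python sketch (function name illustrative):

```python
def translate3d(point, tx, ty, tz):
    """Homogeneous 3D translation: P' = T . P with a 4x4 matrix."""
    x, y, z = point
    T = [[1, 0, 0, tx],
         [0, 1, 0, ty],
         [0, 0, 1, tz],
         [0, 0, 0, 1]]
    P = [x, y, z, 1]
    # row-by-row matrix-vector product
    xp, yp, zp, w = [sum(T[i][j] * P[j] for j in range(4)) for i in range(4)]
    return (xp, yp, zp)
```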
3D Rotation
• 3D rotation is not the same as 2D rotation.
• In 3D rotation, we have to specify the angle of
rotation along with the axis of rotation.
• We can perform 3D rotation about X, Y, and Z
axes.
Geometric transformations in three-dimensional space (5)
• The clipping window selects what we want to
see
• The viewport indicates where it is to be
viewed on the output device
• By changing the position of a viewport, we can
view objects at different positions on the
display area of an output device
The Viewing Pipeline (1/3)
• In many cases the window and viewport are rectangles
[Figure: a clipping window in world coordinates mapped to a viewport in viewport coordinates]
Viewing Pipeline
Modeling Coordinates → World Coordinates → Viewing Coordinates →
Normalized Coordinates → Device Coordinates
[Diagram: the viewing pipeline; clipping against the window (XMIN, XMAX, YMIN, YMAX)
is performed in normalized coordinates]
• The mapping of a 2D world-coordinate scene
description to device coordinates is called a
two-dimensional viewing transformation
• Once a world-coordinate scene has been constructed, we
could set up a separate 2D viewing-coordinate reference
frame for specifying the clipping window.
• To make the viewing process independent of the
requirements of any output device, graphics systems convert
object descriptions to normalized coordinates and apply the
clipping routines.
• Clipping is usually performed in normalized
coordinates
• At the final step of the viewing
transformation, the contents of the viewport
are transferred to positions within the display
window
Viewing Coordinate Reference Frame
M(WC, VC) = R · T
where R is the rotation matrix and T is the translation matrix.
• The mapping of window coordinates to the viewport is
called the window-to-viewport transformation
• We do this using a transformation that
maintains the relative position of each window
coordinate in the viewport
• That means a point at the center of the window
must remain at the center of the viewport
• We find the relative position by the equations
that follow:
Window-To-Viewport Coordinate
Transformation (1/5)
• Window-to-viewport mapping
– A point at position (xw, yw) in a designated window is
mapped to viewport coordinates (xv, yv) so that
relative positions in the two areas are the same
Window-To-Viewport Coordinate
Transformation (2/5)
Window-To-Viewport Coordinate
Transformation (3/5)
• To maintain the same relative placement
(xv − xvmin) / (xvmax − xvmin) = (xw − xwmin) / (xwmax − xwmin)
(yv − yvmin) / (yvmax − yvmin) = (yw − ywmin) / (ywmax − ywmin)
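Solving the proportions for (xv, yv) gives a direct mapping; a Python sketch (function name and the (xmin, ymin, xmax, ymax) tuple convention are illustrative):

```python
def window_to_viewport(xw, yw, window, viewport):
    """Map a point (xw, yw) from the clipping window to the viewport,
    preserving its relative position. Both regions are given as
    (xmin, ymin, xmax, ymax) tuples."""
    xwmin, ywmin, xwmax, ywmax = window
    xvmin, yvmin, xvmax, yvmax = viewport
    sx = (xvmax - xvmin) / (xwmax - xwmin)   # horizontal scale factor
    sy = (yvmax - yvmin) / (ywmax - ywmin)   # vertical scale factor
    return (xvmin + (xw - xwmin) * sx,
            yvmin + (yw - ywmin) * sy)

# The window center maps to the viewport center:
print(window_to_viewport(5, 5, (0, 0, 10, 10), (0, 0, 100, 50)))  # -> (50.0, 25.0)
```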
Window-To-Viewport Coordinate Transformation (4/5)
Clipping Operations
Clipping:
• Any procedure that identifies those portions of a picture that
are either inside or outside of a specified region of space.
• The region against which an object is to be clipped is called a clip
window.
• Applications of clipping include extracting parts of a defined
scene for viewing, identifying visible surfaces in 3D views, etc.
• Depending on the application, the clip window can be a
general polygon or it can even have curved boundaries.
• We consider clipping methods using rectangular clip regions.
• Applied in World Coordinates
• Adapting Primitive Types
– Point clipping
– Line clipping
– Area clipping (Polygons)
– Curve clipping
– Text clipping
Point Clipping
• In point clipping we eliminate those points which
are outside the clipping window and draw points
which are inside the clipping window
• Here we consider a clipping window with a rectangular
boundary defined by (Xwmin, Xwmax, Ywmin, Ywmax)
• For finding whether the given point is inside or
outside the clipping window we use the following
equations
Point Clipping
• Assuming that the clip window is a rectangle in
standard position, we save a point P = (x, y) for
display if the following inequalities are satisfied:
xwmin ≤ x ≤ xwmax
ywmin ≤ y ≤ ywmax
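The point-clipping test is a direct translation of the window inequalities; a Python sketch (function name illustrative):

```python
def clip_point(x, y, xwmin, ywmin, xwmax, ywmax):
    """Keep the point only if it lies inside the rectangular clip window."""
    return xwmin <= x <= xwmax and ywmin <= y <= ywmax
```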
Applications of point clipping
• Although point clipping is applied less often than
line or polygon clipping, some applications may
require a point clipping procedure
For example:
• Point clipping can be applied to scenes involving
explosions or sea foam that are modeled with
particles (points) distributed in some region of
the scene.
Line Clipping
• The part of the line that lies inside the window is
kept, and the part of the line appearing
outside of the window is removed
Line Clipping (1/3)
• Line clipping against a rectangular clip window
Line Clipping (2/3)
Line Clipping (3/3)
• Cohen-Sutherland Line Clipping
• Liang-Barsky Line Clipping
• NLN(Nicholl-Lee-Nicholl) Line Clipping
• Line Clipping Using Nonrectangular Clip
Windows
• Splitting Concave Polygons
Lecture No. 21-22
Point Clipping
xwmin ≤ x ≤ xwmax
ywmin ≤ y ≤ ywmax
Line Clipping
• A line is specified with its end-points.
• There are three possible cases for a line that
we need to consider.
– A line is completely inside the window
– A line is completely outside the window
– A line is neither completely inside nor completely
outside
Cohen Sutherland Line Clipping Algorithm
[Figures: region codes and a step-by-step Cohen-Sutherland clipping example]
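The figures for this algorithm did not survive extraction, but the procedure itself can be sketched: each endpoint gets a 4-bit region code, lines are trivially accepted (both codes 0) or trivially rejected (codes share a bit), and otherwise an outside endpoint is moved to a window boundary and the test repeats. A Python sketch (constant and function names are illustrative):

```python
# Region-code bits for the Cohen-Sutherland algorithm
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def region_code(x, y, xmin, ymin, xmax, ymax):
    code = INSIDE
    if x < xmin:   code |= LEFT
    elif x > xmax: code |= RIGHT
    if y < ymin:   code |= BOTTOM
    elif y > ymax: code |= TOP
    return code

def cohen_sutherland(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    """Clip the segment (x1,y1)-(x2,y2) against the window;
    return the clipped endpoints, or None if fully outside."""
    c1 = region_code(x1, y1, xmin, ymin, xmax, ymax)
    c2 = region_code(x2, y2, xmin, ymin, xmax, ymax)
    while True:
        if not (c1 | c2):          # trivially accept: both inside
            return (x1, y1, x2, y2)
        if c1 & c2:                # trivially reject: share an outside zone
            return None
        c = c1 or c2               # pick an endpoint that is outside
        if c & TOP:
            x = x1 + (x2 - x1) * (ymax - y1) / (y2 - y1); y = ymax
        elif c & BOTTOM:
            x = x1 + (x2 - x1) * (ymin - y1) / (y2 - y1); y = ymin
        elif c & RIGHT:
            y = y1 + (y2 - y1) * (xmax - x1) / (x2 - x1); x = xmax
        else:                      # LEFT
            y = y1 + (y2 - y1) * (xmin - x1) / (x2 - x1); x = xmin
        if c == c1:
            x1, y1 = x, y
            c1 = region_code(x1, y1, xmin, ymin, xmax, ymax)
        else:
            x2, y2 = x, y
            c2 = region_code(x2, y2, xmin, ymin, xmax, ymax)
```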
Illumination Models & Surface
Rendering Methods
Ambient Light
• The amount of light reflected from an object's
surface is determined by Ka , the ambient
reflection coefficient. Ka ranges from 0 to 1.
I = ka Ia
where
I is the resulting intensity,
Ia is the intensity of the ambient light, and
ka is the object’s ambient-reflection coefficient.
Ambient light source
• A scene lit only with an ambient light
source
Diffuse Reflection
• Diffuse reflections are constant over each surface
in a scene, independent of the viewing direction.
• The amount of the incident light that is diffusely
reflected can be set for each surface with
parameter kd, the diffuse-reflection coefficient, or
diffuse reflectivity.
0 ≤ kd ≤ 1;
kd near 1 – highly reflective surface;
kd near 0 – surface that absorbs most of the
incident light;
kd is a function of surface color.
Diffuse Reflection
Even though there is equal light scattering in all directions
from a surface, the brightness of the surface does depend
on the orientation of the surface relative to the light
source:
(a) (b)
Fig. 8
A surface perpendicular to the direction of the incident light (a) is more illuminated than an equal-sized
surface at an oblique angle (b) to the incoming light direction.
Diffuse Reflection
• As the angle between the surface normal and the
incoming light direction increases, less of the
incident light falls on the surface.
Diffuse Reflection
If Il is the intensity of the point light source, then the
diffuse-reflection equation for a point on the surface can be written as
Il,diff = kd Il (N · L)
where
N is the unit normal vector to the surface and L is the
unit direction vector to the point light source from a
position on the surface.
Diffuse Reflection
Figure 10 illustrates the illumination with
diffuse reflection, using various values of
parameter kd between 0 and 1.
Fig. 10
Series of pictures of sphere illuminated by diffuse reflection model only using different kd values (0.4,
0.55, 0.7, 0.85,1.0).
Diffuse Reflection
We can combine the ambient and point-source
intensity calculations to obtain an expression for the
total diffuse reflection.
Idiff = kaIa+kdIl(N.L)
where both ka and kd depend on surface material
properties and are assigned values in the range from
0 to 1.
Fig. 11
Series of pictures of sphere illuminated by ambient and diffuse reflection model.
Ia = Il = 1.0, kd = 0.4 and ka values (0.0, 0.15, 0.30, 0.45, 0.60).
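The combined ambient-plus-diffuse formula can be evaluated directly; a Python sketch (the clamp of N · L at zero for surfaces facing away from the light is a standard practical addition, not stated on the slide):

```python
def diffuse_intensity(ka, Ia, kd, Il, N, L):
    """Total diffuse reflection: Idiff = ka*Ia + kd*Il*(N . L).
    N and L are unit 3D vectors; the dot product is clamped at 0
    so surfaces facing away from the light get no diffuse term."""
    ndotl = max(0.0, sum(n * l for n, l in zip(N, L)))
    return ka * Ia + kd * Il * ndotl

# Surface facing the light head-on:
print(diffuse_intensity(0.25, 1.0, 0.5, 1.0, (0, 0, 1), (0, 0, 1)))  # -> 0.75
```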
Specular Reflection and the
Phong Model
• Specular reflection is the result of total, or near total,
reflection of the incident light in a concentrated region
around the specular-reflection angle.
• Shiny surfaces have a narrow specular-reflection range.
• Dull surfaces have a wider reflection range.
Specular Reflection
Figure 13 shows the specular-reflection direction R at a
point on an illuminated surface, where N is the unit surface
normal, L points toward the light source, and V points
toward the viewer.
Specular reflection
Phong Reflection Model
• A simple model supports three types of light-
matter interaction
– Diffuse
– Specular
– Ambient
• and uses four vectors
– N: normal to the surface
– L: to the light source
– V: to the viewer
– R: direction of perfect reflection
Phong Model
Phong Model
The Phong model is an empirical model for
calculating the specular-reflection range:
• It sets the intensity of specular reflection
proportional to cos^ns φ, where φ is the angle
between the viewing direction V and the
specular-reflection direction R;
• φ is assigned values in the range 0° to
90°, so that cos φ varies from 0 to 1;
• The specular-reflection exponent ns is
determined by the type of surface that we
want to display;
• The specular-reflection coefficient ks is set to
some value in the range 0 to 1 for each
surface.
Phong Model
• Very shiny surface is modeled with a large value for ns
(say, 100 or more);
• Small values are used for duller surfaces.
• For perfect reflector (perfect mirror), ns is infinite;
[Fig. 13: Modeling specular reflection with vectors N, L, R, and V]
Using the unit vectors L and N for the incoming-light and normal
directions, we can calculate the specular-reflection vector R by
considering projections of L onto the direction of the normal vector N:
R + L = (2 N·L) N
R = (2 N·L) N − L
Phong Model
[Fig. 17: Halfway vector H along the bisector of the angle between L and V]
H = (L + V) / |L + V|
Combine Diffuse & Specular
Reflections
For a single point light source, we can model
the combined diffuse and specular reflections
from a point on an illuminated surface as
I = Idiff + Ispec
Combine Diffuse & Specular
Reflections with Multiple Light
Sources
If we place more than one point source in a
scene, we obtain the light reflection at any
surface point by summing the contributions
from the individual sources:
I = ka Ia + Σl Il [ kd (N · Ll) + ks (V · Rl)^ns ]
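The full model with multiple sources can be sketched in Python (function names are illustrative; the clamping of negative dot products is a standard practical addition):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong(ka, Ia, kd, ks, ns, N, V, lights):
    """Phong illumination with multiple point sources:
    I = ka*Ia + sum_l Il * (kd*(N.L) + ks*(R.V)^ns),
    where R = 2(N.L)N - L is the specular-reflection direction.
    All vectors are unit 3D vectors; lights is a list of (Il, L) pairs."""
    I = ka * Ia                        # ambient term
    for Il, L in lights:
        ndotl = dot(N, L)
        if ndotl <= 0:
            continue                   # light is behind the surface
        R = tuple(2 * ndotl * n - l for n, l in zip(N, L))
        rdotv = max(0.0, dot(R, V))
        I += Il * (kd * ndotl + ks * rdotv ** ns)
    return I
```

With the light, normal, and viewer all aligned (N = L = V), the diffuse and specular terms both reach their maxima, and each extra source simply adds its own contribution to the sum.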