Computer Graphics
Introduction
A picture is completely specified by the set of intensities for the pixel positions in the
display. Shapes and colors of the objects can be described internally with pixel arrays
in the frame buffer or with a set of basic geometric structures, such as straight
line segments and polygon color areas. These basic structures used to describe the
components of a picture are referred to as output primitives.
Each output primitive is specified with input coordinate data and other information about
the way the object is to be displayed. Additional output primitives that can be used to
construct a picture include circles and other conic sections, quadric surfaces, spline curves
and surfaces, polygon color areas, and character strings.
Line drawing is accomplished by calculating intermediate positions along the line path
between two specified endpoint positions. An output device is then directed to fill in
these positions between the endpoints.
Digital devices display a straight line segment by plotting discrete points between the two
end points. Discrete coordinate positions along the line path are calculated from the
equation of the line. For a raster video display, the line color (intensity) is then loaded
into the frame buffer at the corresponding pixel coordinates. Reading from the frame
buffer, the video controller then plots “the screen pixels”.
Pixel positions are referenced according to scan-line number and column number (pixel
position across a scan line). Scan lines are numbered consecutively from 0, starting at the
bottom of the screen; and pixel columns are numbered from 0, left to right across each
scan line
Figure: Pixel positions referenced by scan-line number and column number
To load an intensity value into the frame buffer at a position corresponding to column x
along scan line y,
setpixel (x, y)
To retrieve the current frame buffer intensity setting for a specified location we use a low
level function
getpixel (x, y)
The Cartesian slope-intercept equation for a straight line is
y = m.x + b (1)
with m as the slope of the line and b as the y intercept.
Given that the two endpoints of a line segment are specified at positions (x1,y1) and
(x2,y2) as in figure we can determine the values for the slope m and y intercept b with the
following calculations
Figure : Line Path between endpoint positions (x1,y1) and (x2,y2)
m = ∆y / ∆x = (y2 - y1) / (x2 - x1) (2)
b= y1 - m . x1 (3)
For any given x interval ∆x along a line, we can compute the corresponding y interval
∆y
∆y= m ∆x (4)
∆ x = ∆ y/m (5)
For lines with slope magnitudes |m| < 1, ∆x can be set proportional to a small
horizontal deflection voltage and the corresponding vertical deflection is then set
proportional to ∆y as calculated from Eq (4).
For lines whose slopes have magnitudes |m | >1 , ∆y can be set proportional to a small
vertical deflection voltage with the corresponding horizontal deflection voltage set
proportional to ∆x, calculated from Eq (5)
For lines with m = 1, ∆x = ∆y and the horizontal and vertical deflection voltages
are equal.
Figure : Straight line Segment with five sampling positions along the x axis between x1 and x2
Digital Differential Analyzer (DDA) Algorithm
The DDA algorithm samples the line at unit intervals in one coordinate and determines
the corresponding integer values nearest the line path for the other coordinate.
Consider first a line with positive slope. If the slope is less than or equal to 1, we sample
at unit x intervals (∆x = 1) and compute each successive y value as
yk+1 = yk + m (6)
Subscript k takes integer values starting from 1 for the first point and increases by 1 until
the final endpoint is reached. Since m can be any real number between 0 and 1, the
calculated y values must be rounded to the nearest integer.
For lines with a positive slope greater than 1, we reverse the roles of x and y; that is,
we sample at unit y intervals (∆y = 1) and calculate each succeeding x value as
xk+1 = xk + 1/m (7)
Equations (6) and (7) are based on the assumption that lines are to be processed from the
left endpoint to the right endpoint. If this processing is reversed, so that the starting
endpoint is at the right, then we set ∆x = -1 and
yk+1 = yk - m (8)
or, when the slope is greater than 1, we set ∆y = -1 with
xk+1 = xk - 1/m (9)
If the absolute value of the slope is less than 1 and the start endpoint is at the left, we set
∆x = 1 and calculate y values with Eq. (6)
When the start endpoint is at the right (for the same slope), we set ∆x = -1 and obtain y
positions from Eq. (8). Similarly, when the absolute value of a negative slope is greater
than 1, we use ∆y = -1 and Eq. (9) or we use ∆y = 1 and Eq. (7).
Algorithm
Algorithm Description:
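A minimal C sketch of the DDA routine described above. It assumes a setPixel(x, y) routine that loads the current color into the frame buffer at that pixel (the same routine used in the Bresenham listing later), and ROUND is an assumed rounding helper; the name lineDDA matches the calls used later in the clipping examples.

#include <stdlib.h>                     /* abs() */

#define ROUND(a) ((int) ((a) + 0.5))    /* assumed rounding helper */

void lineDDA (int xa, int ya, int xb, int yb)
{
    int dx = xb - xa, dy = yb - ya, steps, k;
    float xIncrement, yIncrement, x = xa, y = ya;

    /* Sample at unit intervals along the coordinate with the larger change */
    if (abs (dx) > abs (dy))
        steps = abs (dx);
    else
        steps = abs (dy);
    xIncrement = dx / (float) steps;
    yIncrement = dy / (float) steps;

    setPixel (ROUND (x), ROUND (y));
    for (k = 0; k < steps; k++) {
        x += xIncrement;
        y += yIncrement;
        setPixel (ROUND (x), ROUND (y));
    }
}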
Plotting points (this example traces a line with slope m greater than 1, so ∆y = 1 and x is
incremented by 1/m = 0.66 at each step; the plotted pixel is the rounded (x, y) position)

k    x                      y          Plotted pixel
0    0 + 0.66 = 0.66        0 + 1 = 1   (1, 1)
1    0.66 + 0.66 = 1.32     1 + 1 = 2   (1, 2)
2    1.32 + 0.66 = 1.98     2 + 1 = 3   (2, 3)
3    1.98 + 0.66 = 2.64     3 + 1 = 4   (3, 4)
4    2.64 + 0.66 = 3.3      4 + 1 = 5   (3, 5)
5    3.3 + 0.66 = 3.96      5 + 1 = 6   (4, 6)
Result :
Bresenham's Line Algorithm
An accurate and efficient raster line-generating algorithm, developed by Bresenham,
scan-converts lines using only incremental integer calculations.
In addition, Bresenham’s line algorithm can be adapted to display circles and other
curves.
To illustrate Bresenham's approach, we first consider the scan-conversion process for
lines with positive slope less than 1.
Pixel positions along a line path are then determined by sampling at unit x intervals.
Starting from the left endpoint (x0,y0) of a given line, we step to each successive column
(x position) and plot the pixel whose scan-line y value is closest to the line path.
Assuming the pixel at (xk, yk) has been displayed, we next need to decide which pixel to
plot in column xk+1 = xk + 1: the choices are (xk+1, yk) and (xk+1, yk+1). At sampling
position xk+1, we label the vertical pixel separations from the mathematical line path as
d1 and d2. The y coordinate on the mathematical line at pixel column position xk+1 is
calculated as
y =m(xk+1)+b (1)
Then
d1 = y-yk
= m(xk+1)+b-yk
d2 = (yk+1)-y
= yk+1-m(xk+1)-b
To determine which of the two pixels is closest to the line path, an efficient test is based
on the difference between the two pixel separations:
d1 - d2 = 2m(xk + 1) - 2yk + 2b - 1 (2)
A decision parameter pk for the kth step in the line algorithm can be obtained by
rearranging equation (2) so that it involves only integer calculations. We substitute
m = ∆y/∆x, where ∆x and ∆y are the horizontal and vertical separations of the endpoint
positions, and define the decision parameter as
pk = ∆x (d1 - d2)
   = 2∆y·xk - 2∆x·yk + c (3)
Parameter c is constant and has the value 2∆y + ∆x(2b - 1), which is independent of the
pixel position and will be eliminated in the recursive calculations for pk.
If the pixel at yk is closer to the line path than the pixel at yk+1 (that is, d1 < d2), then
decision parameter pk is negative. In that case we plot the lower pixel; otherwise we plot
the upper pixel.
Coordinate changes along the line occur in unit steps in either the x or y directions.
We obtain the values of successive decision parameters using incremental integer
calculations. At step k + 1, the decision parameter is evaluated from equation (3) as
pk+1 = 2∆y·xk+1 - 2∆x·yk+1 + c
Subtracting equation (3) from this, and noting that xk+1 = xk + 1, gives
pk+1 = pk + 2∆y - 2∆x (yk+1 - yk) (4)
where the term yk+1 - yk is either 0 or 1, depending on the sign of parameter pk.
The first parameter p0 is evaluated from equation (3) at the starting pixel position
(x0, y0), with m evaluated as ∆y/∆x:
p0 = 2∆y - ∆x (5)
Bresenham's line drawing for a line with a positive slope less than 1 is outlined in the
following algorithm. The constants 2∆y and 2∆y - 2∆x are calculated once for each line
to be scan converted.
1. Input the two line endpoints and store the left end point in (x0,y0)
2. load (x0,y0) into frame buffer, ie. Plot the first point.
3. Calculate the constants ∆x, ∆y, 2∆y and obtain the starting value for the decision
parameter as P0 = 2∆y-∆x
4. At each xk along the line, starting at k=0, perform the following test:
If Pk < 0, the next point to plot is (xk+1, yk) and
Pk+1 = Pk + 2∆y
otherwise, the next point to plot is (xk+1, yk+1) and
Pk+1 = Pk + 2∆y - 2∆x
5. Repeat step 4 ∆x times.
Implementation of Bresenham Line drawing Algorithm
void lineBres (int xa, int ya, int xb, int yb)
{
int dx = abs (xa - xb), dy = abs (ya - yb);
int p = 2 * dy - dx;
int twoDy = 2 * dy, twoDyDx = 2 * (dy - dx);
int x, y, xEnd;

/* Determine which endpoint to use as the start position */
if (xa > xb)
{
x = xb;
y = yb;
xEnd = xa;
}
else
{
x = xa;
y = ya;
xEnd = xb;
}
setPixel (x, y);

while (x < xEnd)
{
x++;
if (p < 0)
p += twoDy;
else
{
y++;
p += twoDyDx;
}
setPixel (x, y);
}
}
Example: To illustrate the algorithm, we digitize the line with endpoints (20, 10) and (30, 18). This line has
∆x = 10,  ∆y = 8
and the initial decision parameter has the value
p0 = 2∆y - ∆x = 6
We plot the initial point (x0, y0) = (20, 10) and determine successive pixel positions along
the line path from the decision parameter as tabulated below.
Tabulation

k    pk    (xk+1, yk+1)
0     6    (21, 11)
1     2    (22, 12)
2    -2    (23, 12)
3    14    (24, 13)
4    10    (25, 14)
5     6    (26, 15)
6     2    (27, 16)
7    -2    (28, 16)
8    14    (29, 17)
9    10    (30, 18)
Result
Advantages
Algorithm is Fast
Uses only integer calculations
Disadvantages
Line Function
Straight line segments are displayed with the polyline function, polyline (n, wcPoints),
where parameter n is assigned an integer value equal to the number of coordinate positions
to be input and wcPoints is the array of input world-coordinate values. To display a single
straight-line segment, we set n = 2 and list the x and y values of the two endpoint
coordinates in wcPoints.
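For example, a short sketch (assuming the wcPt2 point type with float x and y fields that appears in the clipping code later in these notes, and illustrative coordinate values):

wcPt2 wcPoints[2];

/* endpoints of the segment in world coordinates */
wcPoints[0].x = 50.0;  wcPoints[0].y = 100.0;
wcPoints[1].x = 150.0; wcPoints[1].y = 250.0;

polyline (2, wcPoints);   /* n = 2: a single straight-line segment */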
Circle-Generating Algorithms
A general function for displaying various kinds of curves, including circles and ellipses,
is typically available in a graphics library.
Properties of a circle
A circle is defined as the set of points that are all at a given distance r from a center position (xc, yc).
This distance relationship is expressed by the Pythagorean theorem in Cartesian
coordinates as
(x - xc)² + (y - yc)² = r²
We could use this equation to calculate circle points by stepping along the x axis and
solving for the corresponding y values, but this is not the best method for generating a
circle: it involves considerable computation at each step, and the spacing between plotted
points is not uniform.
One way to eliminate the unequal spacing is to calculate points along the circle boundary
using polar coordinates r and θ. Expressing the circle equation in parametric polar form
yields the pair of equations
x = xc + rcos θ y = yc + rsin θ
When a display is generated with these equations using a fixed angular step size, a circle
is plotted with equally spaced points along the circumference. To reduce calculations use
a large angular separation between points along the circumference and connect the points
with straight line segments to approximate the circular path.
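A brief sketch of this approach in C, assuming the setPixel routine introduced earlier; the angular step dtheta is a parameter of the approximation (set to 1/r below, as discussed next, so the plotted points are roughly one pixel apart):

#include <math.h>

void circlePolar (int xc, int yc, double r)
{
    double theta;
    double dtheta = 1.0 / r;          /* angular step size */
    double twoPi = 6.2831853;

    for (theta = 0.0; theta < twoPi; theta += dtheta)
        setPixel ((int) (xc + r * cos (theta) + 0.5),
                  (int) (yc + r * sin (theta) + 0.5));
}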
Setting the angular step size to 1/r plots pixel positions that are approximately one unit
apart. The shape of the circle is similar in each quadrant: once we have determined curve
positions in the first quadrant, we can generate the circle section in the second quadrant
of the xy plane by noting that the two circle sections are symmetric with respect to the
y axis, and the circle sections in the third and fourth quadrants can be obtained from the
sections in the first and second quadrants by considering symmetry about the x axis.
Circle sections in adjacent octants within one quadrant are symmetric with respect to the
45° line dividing the two octants, so a point at position (x, y) on a one-eighth circle
sector is mapped into the seven circle points in the other octants of the xy plane.
Therefore we can generate all pixel positions around a circle by calculating only the
points within the sector from x = 0 to x = y. The slope of the curve in this octant has a
magnitude less than or equal to 1.0: at x = 0 the circle slope is 0, and at x = y the slope
is -1.0.
Bresenham's line algorithm for raster displays is adapted to circle generation by setting
up decision parameters for finding the closest pixel to the circumference at each sampling
step. A direct adaptation, however, would require square-root evaluations to compute
pixel distances from the circular path.
The midpoint approach instead tests the halfway position between two pixels to determine
whether this midpoint is inside or outside the circle boundary. This method is more easily
applied to other conics, and for an integer circle radius the midpoint approach generates
the same pixel positions as the Bresenham circle algorithm.
For a straight line segment the midpoint method is equivalent to the bresenham line
algorithm. The error involved in locating pixel positions along any conic section using
the midpoint test is limited to one half the pixel separations.
Midpoint circle Algorithm:
As in the raster line algorithm, we sample at unit intervals and determine the closest
pixel position to the specified circle path at each step. For a given radius r and screen
center position (xc, yc), we first set up our algorithm to calculate pixel positions around
a circle path centered at the coordinate origin (0, 0); each calculated position (x, y) is
then moved to its proper screen position by adding xc to x and yc to y.
To apply the midpoint method we define a circle function as
fcircle(x, y) = x² + y² - r²
Any point (x,y) on the boundary of the circle with radius r satisfies the equation fcircle
(x,y)=0. If the point is in the interior of the circle, the circle function is negative. And if
the point is outside the circle the, circle function is positive
The tests in the above equation are performed for the midpositions between pixels near
the circle path at each sampling step. The circle function is thus the decision parameter
in the midpoint algorithm.
Figure: Midpoint between candidate pixels at sampling position xk+1 along a circular path.
The figure shows the midpoint between the two candidate pixels at sampling position xk + 1.
Having plotted the pixel at (xk, yk), we next need to determine whether the pixel at
position (xk + 1, yk) or the one at position (xk + 1, yk - 1) is closer to the circle.
Our decision parameter is the circle function evaluated at the midpoint between
these two pixels
Pk = fcircle (xk + 1, yk - ½)
   = (xk + 1)² + (yk - ½)² - r²
If Pk <0, this midpoint is inside the circle and the pixel on scan line yk is closer to the
circle boundary. Otherwise the mid position is outside or on the circle boundary and
select the pixel on scan line yk -1.
Successive decision parameters are obtained incrementally: the increment for obtaining
Pk+1 is either 2xk+1 + 1 (if Pk is negative) or 2xk+1 + 1 - 2yk+1 (otherwise).
Evaluation of the terms 2xk+1 and 2 yk+1 can also be done incrementally as
2xk+1=2xk+2
2 yk+1=2 yk-2
At the Start position (0,r) these two terms have the values 0 and 2r respectively. Each
successive value for the 2xk+1 term is obtained by adding 2 to the previous value and each
successive value for the 2yk+1 term is obtained by subtracting 2 from the previous value.
The initial decision parameter is obtained by evaluating the circle function at the
start position (x0, y0) = (0, r):
P0 = fcircle(1, r - ½) = 5/4 - r
or P0 = 1 - r for r an integer.
1. Input radius r and circle center (xc,yc) and obtain the first point on the circumference
of the circle centered on the origin as
(x0,y0) = (0,r)
2. Calculate the initial value of the decision parameter as P0=(5/4)-r
3. At each xk position, starting at k=0, perform the following test. If Pk <0 the next point
along the circle centered on (0,0) is (xk+1,yk) and Pk+1=Pk+2xk+1+1
Otherwise the next point along the circle is (xk+1,yk-1) and Pk+1=Pk+2xk+1+1-2 yk+1
Where 2xk+1=2xk+2 and 2yk+1=2yk-2
4. Determine symmetry points in the other seven octants.
5. Move each calculated pixel position (x,y) onto the circular path centered at (xc,yc) and
plot the coordinate values.
x=x+xc y=y+yc
6. Repeat steps 3 through 5 until x >= y.
Example : Midpoint Circle Drawing
Given a circle radius r = 10, we determine positions along the circle octant in the first
quadrant from x = 0 to x = y. The initial value of the decision parameter is
P0 = 1 - r = -9
For the circle centered on the coordinate origin, the initial point is (x0,y0)=(0,10) and
initial increment terms for calculating the decision parameters are
2x0=0 , 2y0=20
Implementation of Midpoint Circle Algorithm
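No listing survives in these notes at this point, so the following is a sketch of a midpoint circle routine that follows the steps above. setPixel is assumed as before, and circlePlotPoints is a helper added here to plot the eight symmetric points:

void circlePlotPoints (int xCenter, int yCenter, int x, int y)
{
    setPixel (xCenter + x, yCenter + y);
    setPixel (xCenter - x, yCenter + y);
    setPixel (xCenter + x, yCenter - y);
    setPixel (xCenter - x, yCenter - y);
    setPixel (xCenter + y, yCenter + x);
    setPixel (xCenter - y, yCenter + x);
    setPixel (xCenter + y, yCenter - x);
    setPixel (xCenter - y, yCenter - x);
}

void circleMidpoint (int xCenter, int yCenter, int radius)
{
    int x = 0;
    int y = radius;
    int p = 1 - radius;                  /* initial decision parameter */

    circlePlotPoints (xCenter, yCenter, x, y);

    while (x < y) {
        x++;
        if (p < 0)
            p += 2 * x + 1;              /* midpoint inside the circle  */
        else {
            y--;
            p += 2 * (x - y) + 1;        /* midpoint outside the circle */
        }
        circlePlotPoints (xCenter, yCenter, x, y);
    }
}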
Ellipse-Generating Algorithms
Properties of ellipses
An ellipse can be given in terms of the distances from any point on the ellipse to two
fixed positions called the foci of the ellipse. The sum of these two distances is the same
value for all points on the ellipse.
If the distances to the two focus positions from any point p=(x,y) on the ellipse are
labeled d1 and d2, then the general equation of an ellipse can be stated as
d1+d2=constant
By squaring this equation, isolating the remaining radical, and squaring again, we can
write the general ellipse equation in the form
Ax² + By² + Cxy + Dx + Ey + F = 0
The coefficients A,B,C,D,E, and F are evaluated in terms of the focal coordinates and the
dimensions of the major and minor axes of the ellipse.
The major axis is the straight line segment extending from one side of the ellipse
to the other through the foci. The minor axis spans the shorter dimension of the ellipse,
perpendicularly bisecting the major axis at the halfway position (ellipse center) between
the two foci.
Ellipse equations are greatly simplified if the major and minor axes are oriented to align
with the coordinate axes. With the major and minor axes oriented parallel to the x and y
axes, parameter rx labels the semimajor axis and parameter ry labels the semiminor axis:
((x - xc) / rx)² + ((y - yc) / ry)² = 1
Using polar coordinates r and θ, we can also describe the ellipse in standard position
with the parametric equations
x = xc + rx cos θ
y = yc + ry sin θ
Angle θ called the eccentric angle of the ellipse is measured around the perimeter of a
bounding circle.
We must calculate pixel positions along the elliptical arc throughout one quadrant, and
then we obtain positions in the remaining three quadrants by symmetry
The midpoint ellipse method is applied throughout the first quadrant in two parts.
The below figure show the division of the first quadrant according to the slope of an
ellipse with rx<ry.
In region 1 we take unit steps in the x direction, where the slope of the curve has a
magnitude less than 1; in region 2 we take unit steps in the y direction, where the slope
has a magnitude greater than 1.
1. Start at position (0,ry) and step clockwise along the elliptical path in the first
quadrant shifting from unit steps in x to unit steps in y when the slope becomes less than
-1
2. Start at (rx,0) and select points in a counter clockwise order.
2.1 Shifting from unit steps in y to unit steps in x when the slope becomes
greater than -1.0
2.2 Using parallel processors calculate pixel positions in the two regions
simultaneously
3. Start at (0, ry) and step along the ellipse path in clockwise order throughout the
first quadrant.
With the ellipse center at (xc, yc) = (0, 0), the ellipse function is
fellipse (x, y) = ry²x² + rx²y² - rx²ry²
which has the following properties:
fellipse (x,y) <0, if (x,y) is inside the ellipse boundary
=0, if(x,y) is on ellipse boundary
>0, if(x,y) is outside the ellipse boundary
Thus, the ellipse function fellipse (x,y) serves as the decision parameter in the
midpoint algorithm.
Starting at (0,ry):
Unit steps in the x direction until to reach the boundary between region 1 and
region 2. Then switch to unit steps in the y direction over the remainder of the curve in
the first quadrant.
At each step we test the value of the slope of the curve. The ellipse slope is calculated
as
dy/dx = -(2ry²x) / (2rx²y)
At the boundary between region 1 and region 2, dy/dx = -1.0 and 2ry²x = 2rx²y; we move
out of region 1 whenever
2ry²x >= 2rx²y
The following figure shows the midpoint between two candidate pixels at sampling
position xk+1 in the first region.
We determine the next position along the ellipse path by evaluating the decision
parameter at this midpoint:
p1k = fellipse(xk + 1, yk - ½)
    = ry²(xk + 1)² + rx²(yk - ½)² - rx²ry²
If p1k < 0, the midpoint is inside the ellipse and the pixel on scan line yk is closer to
the ellipse boundary. Otherwise the midpoint is outside or on the ellipse boundary, and
we select the pixel on scan line yk - 1.
At the next sampling position (xk+1+1=xk+2) the decision parameter for region 1 is
calculated as
p1k+1 = fellipse(xk+1 +1,yk+1 -½ )
Or
p1k+1 = p1k +2 ry2(xk +1) + ry2 + rx2 [(yk+1 -½)2 - (yk -½)2]
Increments for the decision parameters can be calculated using only addition and
subtraction as in the circle algorithm.
The terms 2ry2 x and 2rx2 y can be obtained incrementally. At the initial position
(0,ry) these two terms evaluate to
2 ry2x = 0
2rx2 y =2rx2 ry
As x and y are incremented, updated values are obtained by adding 2ry² to the current
value of the x-increment term and subtracting 2rx² from the current value of the
y-increment term. The updated increment values are compared at each step, and we move
from region 1 to region 2 when the condition 2ry²x >= 2rx²y is satisfied.
In region 1, the initial value of the decision parameter is obtained by evaluating the
ellipse function at the start position (x0, y0) = (0, ry):
p10 = fellipse(1, ry - ½)
    = ry² - rx²ry + ¼rx²
Over region 2, we sample at unit steps in the negative y direction, and the midpoint is
now taken between horizontal pixels at each step. For this region, the decision parameter
is evaluated as
p2k = fellipse(xk + ½, yk - 1)
    = ry²(xk + ½)² + rx²(yk - 1)² - rx²ry²
1. If P2k >0, the mid point position is outside the ellipse boundary, and select the
pixel at xk.
2. If P2k <=0, the mid point is inside the ellipse boundary and select pixel position
xk+1.
To determine the relationship between successive decision parameters in region 2, we
evaluate the ellipse function at the next sampling step yk+1 - 1 = yk - 2:
p2k+1 = fellipse(xk+1 + ½, yk+1 - 1)
or
p2k+1 = p2k -2 rx2(yk -1) + rx2 + ry2 [(xk+1 +½)2 - (xk +½)2]
with xk+1 set either to xk or to xk + 1, depending on the sign of p2k. When we enter
region 2, the initial position (x0, y0) is taken as the last position selected in region 1,
and the initial decision parameter in region 2 is then
p20 = fellipse(x0 + ½, y0 - 1)
To simplify the calculation of p20, we could instead select pixel positions in
counterclockwise order, starting at (rx, 0); unit steps would then be taken in the positive
y direction up to the last position selected in region 1.
Midpoint Ellipse Algorithm
1. Input rx, ry and ellipse center (xc, yc), and obtain the first point on an ellipse
centered on the origin as
(x0, y0) = (0, ry)
2. Calculate the initial value of the decision parameter in region 1 as
p10 = ry² - rx²ry + ¼rx²
3. At each xk position in region 1, starting at k = 0, perform the following test:
if p1k < 0, the next point along the ellipse centered on (0,0) is (xk + 1, yk) and
p1k+1 = p1k + 2ry²xk+1 + ry²
Otherwise the next point along the ellipse is (xk + 1, yk - 1) and
p1k+1 = p1k + 2ry²xk+1 - 2rx²yk+1 + ry²
with
2ry²xk+1 = 2ry²xk + 2ry²,  2rx²yk+1 = 2rx²yk - 2rx²
and continue until 2ry²x >= 2rx²y.
4. Calculate the initial value of the decision parameter in region 2 using the last point
(x0, y0) calculated in region 1 as
p20 = fellipse(x0 + ½, y0 - 1)
5. At each yk position in region 2, starting at k = 0, perform the following test:
if p2k > 0, the next point along the ellipse centered on (0,0) is (xk, yk - 1) and
p2k+1 = p2k - 2rx²yk+1 + rx²
Otherwise the next point along the ellipse is (xk + 1, yk - 1) and
p2k+1 = p2k + 2ry²xk+1 - 2rx²yk+1 + rx²
using the same incremental calculations for x and y as in region 1, and continue until
y = 0.
6. Determine symmetry points in the other three quadrants.
7. Move each calculated pixel position (x, y) onto the elliptical path centered at (xc, yc)
and plot the coordinate values
x = x + xc,  y = y + yc
Example : Mid point ellipse drawing
Given input ellipse parameters rx = 8 and ry = 6, we illustrate the midpoint ellipse
algorithm by determining raster positions along the ellipse path in the first quadrant.
The initial values and increments for the decision parameter calculations are
2ry²x = 0 (with increment 2ry² = 72)
2rx²y = 2rx²ry (with increment -2rx² = -128)
For region 1, the initial point for the ellipse centered on the origin is (x0, y0) = (0, 6)
and the initial decision parameter value is
p10 = ry² - rx²ry + ¼rx² = -332
Successive midpoint decision parameter values and the pixel positions along the
ellipse are listed in the following table.
For region 2, the initial point is (x0, y0) = (7, 3) and the initial decision parameter
is
p20 = fellipse(7 + ½, 2) = -151
The remaining positions along the ellipse path in the first quadrant are then
calculated as
Implementation of Midpoint Ellipse drawing
void ellipsePlotPoints (int xCenter, int yCenter, int x, int y);

/* ROUND is assumed to be defined, e.g. #define ROUND(a) ((int)((a)+0.5)) */

void ellipseMidpoint (int xCenter, int yCenter, int Rx, int Ry)
{
int Rx2 = Rx * Rx;
int Ry2 = Ry * Ry;
int twoRx2 = 2 * Rx2;
int twoRy2 = 2 * Ry2;
int p;
int x = 0;
int y = Ry;
int px = 0;              /* running value of 2*Ry2*x */
int py = twoRx2 * y;     /* running value of 2*Rx2*y */

/* Plot the first set of points */
ellipsePlotPoints (xCenter, yCenter, x, y);

/* Region 1 */
p = ROUND (Ry2 - (Rx2 * Ry) + (0.25 * Rx2));
while (px < py)
{
x++;
px += twoRy2;
if (p < 0)
p += Ry2 + px;
else
{
y--;
py -= twoRx2;
p += Ry2 + px - py;
}
ellipsePlotPoints (xCenter, yCenter, x, y);
}

/* Region 2 */
p = ROUND (Ry2 * (x + 0.5) * (x + 0.5) + Rx2 * (y - 1) * (y - 1) - Rx2 * Ry2);
while (y > 0)
{
y--;
py -= twoRx2;
if (p > 0)
p += Rx2 - py;
else
{
x++;
px += twoRy2;
p += Rx2 - py + px;
}
ellipsePlotPoints (xCenter, yCenter, x, y);
}
}

void ellipsePlotPoints (int xCenter, int yCenter, int x, int y)
{
setpixel (xCenter + x, yCenter + y);
setpixel (xCenter - x, yCenter + y);
setpixel (xCenter + x, yCenter - y);
setpixel (xCenter - x, yCenter - y);
}
Attributes of Output Primitives
1. Line Attributes
2. Curve Attributes
3. Color and Grayscale Levels
4. Area Fill Attributes
5. Character Attributes
6. Bundled Attributes
Line Attributes
Basic attributes of a straight line segment are its type, its width, and its color. In some
graphics packages, lines can also be displayed using selected pen or brush options
Line Type
Line Width
Pen and Brush Options
Line Color
Line type
Possible selection of line type attribute includes solid lines, dashed lines and dotted lines.
To set line type attributes in a PHIGS application program, a user invokes the function
setLinetype (lt)
Line width
Implementation of line width option depends on the capabilities of the output device to
set the line width attributes.
setLinewidthScaleFactor(lw)
Line width parameter lw is assigned a positive number to indicate the relative width of
line to be displayed. A value of 1 specifies a standard width line. A user could set lw to a
value of 0.5 to plot a line whose width is half that of the standard line. Values greater
than 1 produce lines thicker than the standard.
Line Cap
We can adjust the shape of the line ends to give them a better appearance by adding line
caps.
A butt cap is obtained by adjusting the end positions of the component parallel lines so
that the thick line is displayed with square ends that are perpendicular to the line path.
A round cap is obtained by adding a filled semicircle to each butt cap. The circular arcs
are centered on the line endpoints and have a diameter equal to the line thickness.
A projecting square cap extends the line and adds butt caps that are positioned one-half
of the line width beyond the specified endpoints.
Three methods for smoothly joining two connected line segments are:
Miter Join
Round Join
Bevel Join
1. A miter join is accomplished by extending the outer boundaries of each of the two lines
until they meet.
2. A round join is produced by capping the connection between the two segments with a
circular boundary whose diameter is equal to the line width.
3. A bevel join is generated by displaying the line segments with butt caps and filling in
the triangular gap where the segments meet.
Pen and Brush Options
With some packages, lines can be displayed with pen or brush selections. Options in this
category include shape, size, and pattern. Some possible pen or brush shapes are given in
Figure
Line color
A polyline routine displays a line in the current color by setting this color value in the
frame buffer at pixel locations along the line path using the setPixel procedure.
We set the line color value in PHlGS with the function
setPolylineColourIndex (lc)
Nonnegative integer values, corresponding to allowed color choices, are assigned to the
line color parameter lc
setLinetype (2);
setLinewidthScaleFactor (2);
setPolylineColourIndex (5);
polyline (n1, wcPoints1);
setPolylineColourIndex (6);
polyline (n2, wcPoints2);
This program segment would display two figures, drawn with double-wide dashed lines.
The first is displayed in a color corresponding to code 5, and the second in color 6.
Curve attributes
Parameters for curve attribute are same as those for line segments. Curves displayed with
varying colors, widths, dot –dash patterns and available pen or brush options
Color and Grayscale Levels
Various color and intensity-level options can be made available to a user, depending on
the capabilities and design objectives of a particular system.
In a color raster system, the number of color choices available depends on the amount of
storage provided per pixel in the frame buffer
With the direct storage scheme, whenever a particular color code is specified in an
application program, the corresponding binary value is placed in the frame buffer for
each-component pixel in the output primitives to be displayed in that color.
A minimum number of colors can be provided in this scheme with 3 bits of storage per
pixel, as shown in Table
Color tables(Color Lookup Tables) are an alternate means for providing extended color
capabilities to a user without requiring large frame buffers
A user can set color-table entries in a PHIGS application program with the function
setColourRepresentation (ws, ci, colorptr)
Parameter ws identifies the workstation output device; parameter ci specifies the color
index, which is the color-table position number (0 to 255) and parameter colorptr points
to a trio of RGB color values (r, g, b) each specified in the range from 0 to 1
Grayscale
With monitors that have no color capability, color functions can be used in an application
program to set the shades of gray, or grayscale, for displayed primitives. Numeric values
over the range from 0 to 1 can be used to specify grayscale levels, which are then
converted to appropriate binary codes for storage in the raster.
Intensity = 0.5[min(r,g,b)+max(r,g,b)]
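As a small illustration (a sketch, not a standard library routine), converting an RGB triple with components in the range 0 to 1 to a grayscale level using the formula above:

float rgbToGray (float r, float g, float b)
{
    float min = r, max = r;

    if (g < min) min = g;
    if (b < min) min = b;
    if (g > max) max = g;
    if (b > max) max = b;

    /* average of the smallest and largest components */
    return 0.5f * (min + max);
}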
Area Fill Attributes
Options for filling a defined region include a choice between a solid color or a patterned
fill, and choices for the particular colors and patterns.
Fill Styles
Areas are displayed with three basic fill styles: hollow with a color border, filled with a
solid color, or filled with a specified pattern or design. A basic fill style is selected in a
PHIGS program with the function
setInteriorStyle(fs)
Values for the fill-style parameter fs include hollow, solid, and pattern. Another value for
fill style is hatch, which is used to fill an area with selected hatching patterns-parallel
lines or crossed lines
The color for a solid interior or for a hollow area outline is chosen with
setInteriorColourIndex (fc)
where the fill-color parameter fc is set to the desired color code.
Pattern Fill
We select fill patterns with setInteriorStyleIndex (pi) where pattern index parameter pi
specifies a table position
For example, the following set of statements would fill the area defined in the fillArea
command with the second pattern type stored in the pattern table:
setInteriorStyle (pattern);
setInteriorStyleIndex (2);
fillArea (n, points);
Character Attributes
The appearance of displayed character is controlled by attributes such as font, size, color
and orientation. Attributes can be set both for entire character strings (text) and for
individual characters defined as marker symbols
Text Attributes
A font (or typeface) is a set of characters with a particular design style, such as
Courier, Helvetica, Times Roman, and various symbol groups.
The characters in a selected font can also be displayed with assorted underlining styles
(solid, dotted, double), in boldface, in italics, and in outline or shadow styles.
setTextFont(tf)
Control of text color (or intensity) is managed from an application program with
setTextColourIndex(tc)
Text size can be adjusted without changing the width to height ratio of characters with
setCharacterHeight (ch)
Parameter ch is assigned a real value greater than 0 to set the coordinate height of capital
letters
setCharacterExpansionFactor (cw)
Where the character width parameter cw is set to a positive real value that scales the body
width of character
setCharacterSpacing(cs)
The orientation for a displayed character string is set according to the direction of the
character up vector
setCharacterUpVector(upvect)
Parameter upvect in this function is assigned two values that specify the x and y vector
components. For example, with upvect = (1, 1), the direction of the up vector is 45° and
text would be displayed as shown in the figure.
setTextPath (tp)
Where the text path parameter tp can be assigned the value: right, left, up, or down
Another handy attribute for character strings is alignment. This attribute specifies how
text is to be positioned with respect to the start coordinates. Alignment attributes are
set with
setTextAlignment (h,v)
where parameters h and v control horizontal and vertical alignment. Horizontal alignment
is set by assigning h a value of left, center, or right. Vertical alignment is set by
assigning v a value of top, cap, half, base or bottom.
setTextPrecision (tpr)
Marker Attributes
A marker symbol is a single character that can he displayed in different colors and in
different sizes. Marker attributes are implemented by procedures that load the chosen
character into the raster at the defined positions with the specified color and size. We
select a particular character to be the marker symbol with
setMarkerType(mt)
where marker type parameter mt is set to an integer code. Typical codes for marker type
are the integers 1 through 5, specifying, respectively, a dot (.) a vertical cross (+), an
asterisk (*), a circle (o), and a diagonal cross (X).
We set the marker size with
setMarkerSizeScaleFactor(ms)
with parameter marker size ms assigned a positive number. This scaling parameter is
applied to the nominal size for the particular marker symbol chosen. Values greater than
1 produce character enlargement; values less than 1 reduce the marker size.
setPolymarkerColourIndex(mc)
A selected color code parameter mc is stored in the current attribute list and used to
display subsequently specified marker primitives
Bundled Attributes
Each of the procedures considered so far references a single attribute that specifies
exactly how a primitive is to be displayed; these specifications are called individual
attributes.
A particular set of attributes values for a primitive on each output device is chosen by
specifying appropriate table index. Attributes specified in this manner are called bundled
attributes. The choice between a bundled or an unbundled specification is made by setting
a switch called the aspect source flag for each of these attributes
setIndividualASF (attributeptr, flagptr)
where parameter attributeptr points to a list of attributes and parameter flagptr points to
the corresponding list of aspect source flags. Each aspect source flag can be assigned a
value of individual or bundled.
Entries in the bundle table for line attributes on a specified workstation are set with the
function
setPolylineRepresentation (ws, li, lt, lw, lc)
Parameter ws is the workstation identifier, and line index parameter li defines the bundle
table position. Parameters lt, lw, and lc are then bundled and assigned values to set the
line type, line width, and line color specifications for the designated table index.
Example
setPolylineRepresentation(1,3,2,0.5,1)
setPolylineRepresentation (4,3,1,1,7)
A polyline that is assigned a table index value of 3 would be displayed using dashed lines
at half thickness in a blue color on workstation 1, while on workstation 4 this same index
generates solid, standard-sized white lines.
Other bundled-attribute functions are defined similarly. setInteriorRepresentation
(ws, fi, fs, pi, fc) defines the attribute list corresponding to fill index fi on
workstation ws; parameters fs, pi, and fc are assigned values for the fill style, pattern
index, and fill color. setTextRepresentation (ws, ti, tf, tp, te, ts, tc) bundles values
for text font, precision, expansion factor, size, and color in a table position for
workstation ws that is specified by the value assigned to text index parameter ti.
setPolymarkerRepresentation (ws, mi, mt, ms, mc) defines the marker type, marker scale
factor, and marker color for index mi on workstation ws.
Inquiry functions
Current settings for attributes and other parameters, such as workstation types and status,
in the system lists can be retrieved with inquiry functions, for example
inquirePolylineIndex (lastli) and inquireInteriorColourIndex (lastfc)
which copy the current values for line index and fill color into parameters lastli and
lastfc.
Two Dimensional Geometric Transformations
Changes in orientations, size and shape are accomplished with geometric transformations
that alter the coordinate description of objects.
Basic transformations
Translation: T(tx, ty), with translation distances tx and ty
Scaling: S(sx, sy), with scale factors sx and sy
Rotation: R(θ), with rotation angle θ
Translation
x’ = x + tx, y’ = y + ty
The translation distance point (tx,ty) is called translation vector or shift vector.
Translation equation can be expressed as single matrix equation by using column vectors
to represent the coordinate position and the translation vector as
P = (x, y),  T = (tx, ty)

x' = x + tx
y' = y + ty

In column-vector form:

[x']   [x]   [tx]
[y'] = [y] + [ty]

P' = P + T
Moving a polygon from one position to another position with the translation
vector (-5.5, 3.75)
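A sketch of a polygon translation along these lines, assuming the wcPt2 point type used elsewhere in these notes (regenerating the display after the move is omitted):

void translatePolygon (wcPt2 *verts, int nVerts, float tx, float ty)
{
    int k;

    /* add the translation vector to every vertex */
    for (k = 0; k < nVerts; k++) {
        verts[k].x += tx;
        verts[k].y += ty;
    }
}

/* e.g. translatePolygon (verts, n, -5.5, 3.75); */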
Rotations:
Positive values for the rotation angle define counter clock wise rotation about
pivot point. Negative value of angle rotate objects in clock wise direction. The
transformation can also be described as a rotation about a rotation axis perpendicular to
xy plane and passes through pivot point
Rotation of a point from position (x,y) to position (x’,y’) through angle θ relative to
coordinate origin
The transformation equations for rotation of a point position P when the pivot point is at
the coordinate origin are derived as follows. In the figure, r is the constant distance of
the point from the origin, Ф is the original angular position of the point from the
horizontal, and θ is the rotation angle.
x = rcosФ, y = rsinФ
the transformation equation for rotating a point at position (x,y) through an angle θ about
origin
x’ = xcosθ – ysinθ
y’ = xsinθ + ycosθ
Rotation equation: P' = R . P

Rotation matrix:

R = [ cosθ  -sinθ ]
    [ sinθ   cosθ ]

In matrix form:

[x']   [ cosθ  -sinθ ] [x]
[y'] = [ sinθ   cosθ ] [y]
Note : Positive values for the rotation angle define counterclockwise rotations about the
rotation point and negative values rotate objects in the clockwise.
Scaling
A scaling transformation alters the size of an object. This operation can be carried out
for polygons by multiplying the coordinate values (x, y) of each vertex by scaling factors
sx and sy to produce the transformed coordinates (x', y'):
x' = x . sx,  y' = y . sy

[x']   [sx  0 ] [x]
[y'] = [0   sy] [y]
or
P’ = S. P
Turning a square (a) Into a rectangle (b) with scaling factors sx = 2 and sy= 1.
Any positive numeric values are valid for scaling factors sx and sy. Values less than 1
reduce the size of the objects and values greater than 1 produce an enlarged object.
There are two types of Scaling. They are
Uniform scaling
Non Uniform Scaling
To get uniform scaling it is necessary to assign same value for sx and sy. Unequal values
for sx and sy result in a non uniform scaling.
Matrix Representation and Homogeneous Coordinates
Expressing positions in homogeneous coordinates (x, y, 1) allows each of the basic
transformations to be written as a single matrix multiplication. For translation:

[x']   [1  0  tx] [x]
[y'] = [0  1  ty] [y]
[1 ]   [0  0  1 ] [1]

P' = T(tx, ty) . P

For scaling:

[x']   [sx  0   0] [x]
[y'] = [0   sy  0] [y]
[1 ]   [0   0   1] [1]

P' = S(sx, sy) . P
For rotation:

[x']   [cosθ  -sinθ  0] [x]
[y'] = [sinθ   cosθ  0] [y]
[1 ]   [0      0     1] [1]

P' = R(θ) . P
Composite Transformations
Translation
If two successive translation vectors (tx1,ty1) and (tx2,ty2) are applied to a coordinate
position P, the final transformed location P’ is calculated as
P’=T(tx2,ty2).{T(tx1,ty1).P}
={T(tx2,ty2).T(tx1,ty1)}.P
Or
T(tx2,ty2).T(tx1,ty1) = T(tx1+tx2,ty1+ty2)
Rotations
P’=R(θ2).{R(θ1).P}={R(θ2).R(θ1)}.P
By multiplying the two rotation matrices, we can verify that two successive rotations are
additive:
R(θ2) . R(θ1) = R(θ1 + θ2)
so that the final rotated coordinates can be calculated with the composite rotation matrix
as
P' = R(θ1 + θ2) . P
Scaling
Concatenating transformation matrices for two successive scaling operations produces the
composite scaling matrix
S(sx2, sy2) . S(sx1, sy1) = S(sx1 . sx2, sy1 . sy2)
so successive scalings are multiplicative.

General Fixed-Point Scaling
To scale an object with respect to a chosen fixed point, the transformation sequence is:
1. Translate the object so that the fixed point coincides with the coordinate origin.
2. Scale the object with respect to the coordinate origin.
3. Use the inverse translation of step 1 to return the object to its original position.
Concatenating the matrices for these three operations produces the required fixed-point
scaling matrix.
#include <math.h>
#include <graphics.h>
typedef float Matrix3x3 [3][3];
Matrix3x3 thematrix;
/* Multiplies matrix a times matrix b, putting the result in b */
void matrix3x3PreMultiply (Matrix3x3 a, Matrix3x3 b)
{
int r, c;
Matrix3x3 tmp;

for (r = 0; r < 3; r++)
for (c = 0; c < 3; c++)
tmp[r][c] = a[r][0] * b[0][c] + a[r][1] * b[1][c] + a[r][2] * b[2][c];
for (r = 0; r < 3; r++)
for (c = 0; c < 3; c++)
b[r][c] = tmp[r][c];
}
/* Fragment: last element of a rotation matrix m set up about the reference point refPt */
m[1][2] = refPt.y * (1 - cosf (a)) - refPt.x * sinf (a);
matrix3x3PreMultiply (m, theMatrix);
}
Other Transformations
1. Reflection
2. Shear
Reflection

Reflection of an object about the x axis is accomplished with the transformation matrix

[1   0  0]
[0  -1  0]
[0   0  1]

Reflection of an object about the y axis is accomplished with the transformation matrix

[-1  0  0]
[ 0  1  0]
[ 0  0  1]

Reflection about the origin is accomplished with the transformation matrix

[-1   0  0]
[ 0  -1  0]
[ 0   0  1]
To obtain the transformation matrix for reflection about the diagonal y = x, the
transformation sequence is
1. Clockwise rotation by 45°
2. Reflection about the x axis
3. Counterclockwise rotation by 45°
Reflection about the diagonal line y = x is accomplished with the transformation matrix

[0  1  0]
[1  0  0]
[0  0  1]
To obtain the transformation matrix for reflection about the diagonal y = -x, the
transformation sequence is
1. Clockwise rotation by 45°
2. Reflection about the y axis
3. Counterclockwise rotation by 45°
Reflection about the diagonal line y = -x is accomplished with the transformation matrix

[ 0  -1  0]
[-1   0  0]
[ 0   0  1]
Shear
A transformation that slants the shape of an object is called a shear transformation. Two
common shearing transformations are used: one shifts x-coordinate values and the other
shifts y-coordinate values. In both cases, however, only one coordinate (x or y) changes
and the other retains its value.
X- Shear
The x shear preserves the y coordinates, but changes the x values which cause vertical
lines to tilt right or left as shown in figure
[1  shx  0]
[0   1   0]
[0   0   1]
x’ =x+ shx .y
y’ = y
Y Shear
The y shear preserves the x coordinates, but changes the y values which cause horizontal
lines which slope up or down
[1    0  0]
[shy  1  0]
[0    0  1]
x’ =x
y’ = y+ shy .x
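A small sketch applying these shears to a single point (wcPt2 as before; shx and shy are the shear parameters):

wcPt2 xShear (wcPt2 p, float shx)
{
    wcPt2 q;
    q.x = p.x + shx * p.y;   /* x' = x + shx * y */
    q.y = p.y;               /* y' = y           */
    return q;
}

wcPt2 yShear (wcPt2 p, float shy)
{
    wcPt2 q;
    q.x = p.x;               /* x' = x           */
    q.y = p.y + shy * p.x;   /* y' = y + shy * x */
    return q;
}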
XY-Shear
[x']   [1    shx  0] [x]
[y'] = [shy  1    0] [y]
[1 ]   [0    0    1] [1]
x’ =x+ shx .y
y’ = y+ shy .x
We can apply x shear and y shear transformations relative to other reference lines. In x
shear transformations we can use y reference line and in y shear we can use x reference
line.
We can generate x-direction shears relative to a reference line y = yref with the
transformation matrix

[1  shx  -shx . yref]
[0   1        0     ]
[0   0        1     ]

so that
x' = x + shx (y - yref)
y' = y
Example
Y shear with x reference line
We can generate y-direction shears relative to a reference line x = xref with the
transformation matrix

[1     0        0     ]
[shy   1   -shy . xref]
[0     0        1     ]

so that
x' = x
y' = y + shy (x - xref)
Example
Two dimensional viewing
The viewing pipeline
A world coordinate area selected for display is called a window. An area on a display
device to which a window is mapped is called a view port. The window defines what is to
be viewed the view port defines where it is to be displayed.
The mapping of a part of a world-coordinate scene to device coordinates is referred to as a
viewing transformation. The two-dimensional viewing transformation is also referred to as
the window-to-viewport transformation or the windowing transformation.
A viewing transformation using standard rectangles for the window and viewport
The viewing- coordinate reference frame is used to provide a method for setting up
arbitrary orientations for rectangular windows. Once the viewing reference frame is
established, we can transform descriptions in world coordinates to viewing coordinates.
We then define a viewport in normalized coordinates (in the range from 0 to 1) and map
the viewing-coordinate description of the scene to normalized coordinates.
At the final step all parts of the picture that lie outside the viewport are clipped, and the
contents of the viewport are transferred to device coordinates. By changing the position
of the viewport, we can view objects at different positions on the display area of an
output device.
A point at position (xw,yw) in the window is mapped into position (xv,yv) in the associated
view port. To maintain the same relative placement in view port as in window
(xv - xvmin) / (xvmax - xvmin) = (xw - xwmin) / (xwmax - xwmin)
(yv - yvmin) / (yvmax - yvmin) = (yw - ywmin) / (ywmax - ywmin)

Solving these expressions for the viewport position (xv, yv):

xv = xvmin + (xw - xwmin) . sx
yv = yvmin + (yw - ywmin) . sy

where the scaling factors are

sx = (xvmax - xvmin) / (xwmax - xwmin)
sy = (yvmax - yvmin) / (ywmax - ywmin)
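A sketch of the point mapping implied by these equations (illustrative names, not a PHIGS routine):

/* Map a world-coordinate point (xw, yw) in the window to the viewport. */
void windowToViewport (float xw, float yw,
                       float xwmin, float xwmax, float ywmin, float ywmax,
                       float xvmin, float xvmax, float yvmin, float yvmax,
                       float *xv, float *yv)
{
    float sx = (xvmax - xvmin) / (xwmax - xwmin);   /* x scale factor */
    float sy = (yvmax - yvmin) / (ywmax - ywmin);   /* y scale factor */

    *xv = xvmin + (xw - xwmin) * sx;
    *yv = yvmin + (yw - ywmin) * sy;
}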
1. Perform a scaling transformation using a fixed-point position of (xwmin, ywmin) that
scales the window area to the size of the viewport.
2. Translate the scaled window area to the position of view port. Relative
proportions of objects are maintained if scaling factor are the same(Sx=Sy).
Otherwise world objects will be stretched or contracted in either the x or y direction when
displayed on output device. For normalized coordinates, object descriptions are mapped
to various display devices.
Any number of output devices can be open in a particular application, and another
window-to-viewport transformation can be performed for each open output device. This
mapping, called the workstation transformation, is accomplished by selecting a window area
in normalized space and a viewport area in the coordinates of the display device.
Two Dimensional viewing functions
evaluateViewOrientationMatrix(x0,y0,xv,yv,error, viewMatrix)
where x0, y0 are the coordinates of the viewing origin and parameters xv, yv are the
world-coordinate positions for the view up vector. An integer error code is generated if
the input parameters are in error; otherwise the view matrix for the world-to-viewing
transformation is calculated. Any number of viewing transformation matrices can be
defined in an
application.
To set up the elements of a window-to-viewport mapping matrix, we invoke the function
evaluateViewMappingMatrix (xwmin, xwmax, ywmin, ywmax, xvmin, xvmax, yvmin, yvmax,
error, viewMappingMatrix)
Here the window limits in viewing coordinates are chosen with parameters xwmin, xwmax,
ywmin, ywmax, and the viewport limits are set with the normalized coordinate positions
xvmin, xvmax, yvmin, yvmax.
The combinations of viewing and window-viewport mapping for various workstations are
stored in a viewing table with
setViewRepresentation(ws,viewIndex,viewMatrix,viewMappingMatrix,
xclipmin, xclipmax, yclipmin, yclipmax, clipxy)
Where parameter ws designates the output device and parameter view index sets an
integer identifier for this window-view port point. The matrices viewMatrix and
viewMappingMatrix can be concatenated and referenced by viewIndex.
setViewIndex (viewIndex)
selects a particular set of options from the viewing table. Finally, the workstation
transformation is set with the setWorkstationWindow and setWorkstationViewport functions,
where parameter ws gives the workstation number. Window-coordinate extents are specified
in the range from 0 to 1 (normalized space) and viewport limits are in integer device
coordinates.
Clipping operation
Any procedure that identifies those portions of a picture that are inside or outside of a
specified region of space is referred to as clipping algorithm or clipping. The region
against which an object is to be clipped is called clip window.
Point clipping
Line clipping (Straight-line segment)
Area clipping
Curve clipping
Text clipping
Line and polygon clipping routines are standard components of graphics packages.
Point Clipping
Clip window is a rectangle in standard position. A point P = (x, y) is saved for display
if the following inequalities are satisfied:
xwmin <= x <= xwmax
ywmin <= y <= ywmax
where the edges of the clip window (xwmin, xwmax, ywmin, ywmax) can be either the
world-coordinate window boundaries or viewport boundaries. If any one of these four
inequalities is not satisfied, the point is clipped (not saved for display).
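A minimal sketch of this point-clipping test (the window limits are passed in explicitly):

/* Returns 1 if the point (x, y) lies inside the clip window, 0 otherwise. */
int clipPoint (float x, float y,
               float xwmin, float xwmax, float ywmin, float ywmax)
{
    return (x >= xwmin && x <= xwmax &&
            y >= ywmin && y <= ywmax);
}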
Line Clipping
A line clipping procedure involves several parts. First we test a given line segment to
determine whether it lies completely inside the clipping window. If it does not, we try to
determine whether it lies completely outside the window. Finally, if we cannot identify a
line as completely inside or completely outside, we perform intersection calculations with
one or more clipping boundaries.
Process lines through “inside-outside” tests by checking the line endpoints. A line with
both endpoints inside all clipping boundaries such as line from P1 to P2 is saved. A line
with both end point outside any one of the clip boundaries line P3P4 is outside the
window.
All other lines cross one or more clipping boundaries. For a line segment with endpoints
(x1, y1) and (x2, y2), with one or both endpoints outside the clipping rectangle, the
parametric representation
x = x1 + u (x2 - x1)
y = y1 + u (y2 - y1),   0 <= u <= 1
could be used to determine values of u for an intersection with the clipping boundary
coordinates. If the value of u for an intersection with a rectangle boundary edge is
outside the range of 0 to 1, the line does not enter the interior of the window at that
boundary. If the value of u is within the range from 0 to 1, the line segment does indeed
cross into the clipping area. This method can be applied to each clipping boundary edge in
turn to determine whether any part of the line segment is to be displayed.
Cohen-Sutherland Line Clipping
This is one of the oldest and most popular line-clipping procedures. The method speeds up
the processing of line segments by performing initial tests that reduce the number of
intersections that must be calculated.
Every line endpoint in a picture is assigned a four digit binary code called a region
code that identifies the location of the point relative to the boundaries of the clipping
rectangle.
Binary region codes assigned to line end points according to relative position with
respect to the clipping rectangle.
Regions are set up in reference to the boundaries. Each bit position in region code is used
to indicate one of four relative coordinate positions of points with respect to clip window:
to the left, right, top or bottom. By numbering the bit positions in the region code as 1
through 4 from right to left, the coordinate regions can be correlated with the bit
positions as
bit 1: left
bit 2: right
bit 3: below
bit4: above
A value of 1 in any bit position indicates that the point is in that relative position.
Otherwise the bit position is set to 0. If a point is within the clipping rectangle the region
code is 0000. A point that is below and to the left of the rectangle has a region code of
0101.
Bit values in the region code are determined by comparing endpoint coordinate values
(x, y) to the clip boundaries. Bit 1 is set to 1 if x < xwmin, and the other three bit
values are determined similarly. For languages in which bit manipulation is possible, the
region-code bit values can be determined with the following two steps:
(1) Calculate differences between endpoint coordinates and clipping boundaries.
(2) Use the resultant sign bit of each difference calculation to set the corresponding value
in the region code.
Once we have established region codes for all line endpoints, we can quickly determine
which lines are completely inside the clip window and which are clearly outside.
Any lines that are completely contained within the window boundaries have a
region code of 0000 for both endpoints, and we accept
these lines. Any lines that have a 1 in the same bit position in the region codes for each
endpoint are completely outside the clipping rectangle, and we reject these lines.
We would discard the line that has a region code of 1001 for one endpoint and a
code of 0101 for the other endpoint. Both endpoints of this line are left of the clipping
rectangle, as indicated by the 1 in the first bit position of each region code.
A method that can be used to test lines for total clipping is to perform the logical
and operation with both region codes. If the result is not 0000,the line is completely
outside the clipping region.
Lines extending from one coordinate region to another may pass through the clip window,
or they may intersect clipping boundaries without entering the window.
Starting with the bottom endpoint of the line from P1 to P2, we check P1 against
the left, right, and bottom boundaries in turn and find that this point is below the
clipping rectangle. We then find the intersection point P1' with the bottom boundary and
discard the line section from P1 to P1'.
The line has now been reduced to the section from P1' to P2. Since P2 is outside the clip
window, we check this endpoint against the boundaries and find that it is to the left of
the window. Intersection point P2' is calculated, but this point is above the window. So
the final intersection calculation yields P2'', and the line from P1' to P2'' is saved.
This completes processing for this line, so we save this part and go on to the next line.
Point P3 in the next line is to the left of the clipping rectangle, so we determine the
intersection P3’, and eliminate the line section from P3 to P3'. By checking region codes
for the line section from P3'to P4 we find that the remainder of the line is below the clip
window and can be discarded also.
Intersection points with a clipping boundary can be calculated using the slope-intercept
form of the line equation. For a line with endpoint coordinates (x1, y1) and (x2, y2), the
y coordinate of the intersection point with a vertical boundary can be obtained with the
calculation
y = y1 + m (x - x1)
where the x value is set either to xwmin or to xwmax, and the slope of the line is
calculated as m = (y2 - y1) / (x2 - x1). Similarly, for the intersection with a horizontal
boundary, the x coordinate can be calculated as
x = x1 + (y - y1) / m
with y set either to ywmin or to ywmax.
unsigned char encode (wcPt2 pt, dcPt winmin, dcPt winmax)
{
unsigned char code = 0x00;

if (pt.x < winmin.x)
code = code | LEFT_EDGE;
if (pt.x > winmax.x)
code = code | RIGHT_EDGE;
if (pt.y < winmin.y)
code = code | BOTTOM_EDGE;
if (pt.y > winmax.y)
code = code | TOP_EDGE;
return (code);
}
void swappts (wcPt2 *p1, wcPt2 *p2)
{
wcPt2 tmp;

tmp = *p1;
*p1 = *p2;
*p2 = tmp;
}
void swapcodes(unsigned char *c1,unsigned char *c2)
{
unsigned char tmp;
tmp=*c1;
*c1=*c2;
*c2=tmp;
}
void clipLine (dcPt winmin, dcPt winmax, wcPt2 p1, wcPt2 p2)
{
unsigned char code1,code2;
int done=FALSE, draw=FALSE;
float m;
while(!done)
{
code1=encode(p1,winmin,winmax);
code2=encode(p2,winmin,winmax);
if(ACCEPT(code1,code2))
{
done=TRUE;
draw=TRUE;
}
else if(REJECT(code1,code2))
done=TRUE;
else
{
if(INSIDE(code1))
{
swappts(&p1,&p2);
swapcodes(&code1,&code2);
}
if(p2.x!=p1.x)
m=(p2.y-p1.y)/(p2.x-p1.x);
if(code1 &LEFT_EDGE)
{
p1.y+=(winmin.x-p1.x)*m;
p1.x=winmin.x;
}
else if(code1 &RIGHT_EDGE)
{
p1.y+=(winmax.x-p1.x)*m;
p1.x=winmax.x;
}
else if(code1 &BOTTOM_EDGE)
{
if(p2.x!=p1.x)
p1.x+=(winmin.y-p1.y)/m;
p1.y=winmin.y;
}
else if(code1 &TOP_EDGE)
{
if(p2.x!=p1.x)
p1.x+=(winmax.y-p1.y)/m;
p1.y=winmax.y;
}
}
}
if(draw)
lineDDA(ROUND(p1.x),ROUND(p1.y),ROUND(p2.x),ROUND(p2.y));
}
Liang-Barsky Line Clipping
Faster line clipping is possible by writing the line segment in the parametric form
x = x1 + u ∆x
y = y1 + u ∆y,   0 <= u <= 1
where ∆x = (x2 - x1) and ∆y = (y2 - y1). The point-clipping conditions can then be written
in the form u . pk <= qk for k = 1, 2, 3, 4, where
p1 = -∆x,   q1 = x1 - xwmin
p2 =  ∆x,   q2 = xwmax - x1
p3 = -∆y,   q3 = y1 - ywmin
p4 =  ∆y,   q4 = ywmax - y1
Any line that is parallel to one of the clipping boundaries has pk = 0 for the value of k
corresponding to that boundary (k = 1, 2, 3, 4 correspond to the left, right, bottom, and
top boundaries, respectively). If, for that value of k, we also find qk < 0, the line is
completely outside the boundary and can be eliminated.
When pk < 0, the infinite extension of the line proceeds from the outside to the inside of
the infinite extension of this clipping boundary.
If pk>0, the line proceeds from inside to outside, for non zero value of pk calculate
the value of u, that corresponds to the point where the infinitely extended line intersect
the extension of boundary k as
u = qk / pk
For each line, we calculate values for parameters u1 and u2 that define the part of the
line that lies within the clip rectangle. The value of u1 is determined by looking at the
rectangle edges for which the line proceeds from the outside to the inside (p < 0). For
these edges we calculate
rk = qk / pk
The value of u1 is taken as the largest of the set consisting of 0 and the various values
of r. The value of u2 is determined by examining the boundaries for which the line
proceeds from inside to outside (p > 0). A value of rk is calculated for each of these
boundaries, and the value of u2 is the minimum of the set consisting of 1 and the
calculated r values.
If u1>u2, the line is completely outside the clip window and it can be rejected.
Line intersection parameters are initialized to values u1=0 and u2=1. for each
clipping boundary, the appropriate values for P and q are calculated and used by function
Cliptest to determine whether the line can be rejected or whether the intersection
parameter can be adjusted.
If updating u1 or u2 results in u1>u2 reject the line, when p=0 and q<0, discard the line,
it is parallel to and outside the boundary.If the line has not been rejected after all four
value of p and q have been tested , the end points of clipped lines are determined from
values of u1 and u2.
The Liang-Barsky algorithm is more efficient than the Cohen-Sutherland
algorithm since intersections calculations are reduced. Each update of parameters u1 and
u2 require only one division and window intersections of these lines are computed only
once.
Cohen-Sutherland algorithm, can repeatedly calculate intersections along a line
path, even through line may be completely outside the clip window. Each intersection
calculations require both a division and a multiplication.
int clipTest (float p, float q, float *u1, float *u2)
{
float r;
int retVal = TRUE;

if (p < 0.0)                 /* line proceeds from outside to inside */
{
r = q / p;
if (r > *u2)
retVal = FALSE;
else
if (r > *u1)
*u1 = r;
}
else
if (p > 0.0)                 /* line proceeds from inside to outside */
{
r = q / p;
if (r < *u1)
retVal = FALSE;
else
if (r < *u2)
*u2 = r;
}
else
/* p = 0, so the line is parallel to this clipping boundary */
if (q < 0.0)
retVal = FALSE;              /* line is outside the boundary */

return (retVal);
}
void clipLine (dcPt winMin, dcPt winMax, wcPt2 p1, wcPt2 p2)
{
float u1=0.0, u2=1.0, dx=p2.x-p1.x,dy;
if (clipTest (-dx, p1.x-winMin.x, &u1, &u2))
if (clipTest (dx, winMax.x-p1.x, &u1, &u2))
{
dy=p2.y-p1.y;
if (clipTest (-dy, p1.y-winMin.y, &u1, &u2))
if (clipTest (dy, winMax.y-p1.y, &u1, &u2))
{
if (u2<1.0)
{
p2.x=p1.x+u2*dx;
p2.y=p1.y+u2*dy;
}
if (u1>0.0)
{
p1.x=p1.x+u1*dx;
p1.y=p1.y+u1*dy;
}
lineDDA(ROUND(p1.x),ROUND(p1.y),ROUND(p2.x),ROUND(p2.y));
}
}
}
Nicholl-Lee-Nicholl (NLN) Line Clipping
For a line with endpoints P1 and P2, we first determine the position of point P1 for the
nine possible regions relative to the clipping rectangle. Only the three regions shown in
Fig. need to be considered. If P1 lies in any one of the other six regions, we can move it
to one of the three regions in Fig. using a symmetry transformation. For
example, the region directly above the clip window can be transformed to the region left
of the clip window using a reflection about the line y = -x, or we could use a 90 degree
counterclockwise rotation.
Three possible positions for a line endpoint p1(a) in the NLN algorithm
Case 1: p1 inside region
Next, we determine the position of P2 relative to P1. To do this, we create some new
regions in the plane, depending on the location of P1. Boundaries of the new regions are
half-infinite line segments that start at the position of P1 and pass through the window
corners. If P1 is inside the clip window and P2 is outside, we set up the four regions
shown in Fig
The four clipping regions used in NLN alg when p1 is inside and p2 outside the clip
window
The intersection with the appropriate window boundary is then carried out,
depending on which one of the four regions (L, T, R, or B) contains P2. If both P1 and P2
are inside the clipping rectangle, we simply save the entire line.
If P1 is in the region to the left of the window, we set up the four regions, L, LT, LR, and
LB, shown in Fig.
These four regions determine a unique boundary for the line segment. For instance, if P2
is in region L, we clip the line at the left boundary and save the line segment from this
intersection point to P2. But if P2 is in region LT, we save the line segment from the left
window boundary to the top boundary. If P2 is not in any of the four regions, L, LT, LR,
or LB, the entire line is clipped.
For the third case, when P1 is to the left and above the clip window, we use the clipping
regions shown in Fig.
Fig : The two possible sets of clipping regions used in NLN algorithm when P1 is
above and to the left of the clip window
In this case, we have the two possibilities shown, depending on the position of P1,
relative to the top left corner of the window. If P2, is in one of the regions T, L, TR, TB,
LR, or LB, this determines a unique clip window edge for the intersection calculations.
Otherwise, the entire line is rejected.
To determine the region in which P2 lies, we compare the slope of the line to the slopes
of the boundaries of the clip regions. For example, if P1 is left of the clipping
rectangle, then P2 is in region LT if
slope P1PTR < slope P1P2 < slope P1PTL
or
(yT - y1) / (xR - x1) < (y2 - y1) / (x2 - x1) < (yT - y1) / (xL - x1)
The coordinate difference and product calculations used in the slope tests are saved and
also used in the intersection calculations. From the parametric equations
x = x1 + (x2 - x1) u
y = y1 + (y2 - y1) u
an x-intersection position on the left window boundary is x = xL, with
u = (xL - x1)/(x2 - x1), so that the y-intersection position is
y = y1 + [(y2 - y1) / (x2 - x1)] (xL - x1)
And an intersection position on the top boundary has y = yT and u = (yT - y1)/(y2 - y1),
with
x = x1 + [(x2 - x1) / (y2 - y1)] (yT - y1)
POLYGON CLIPPING
Display of a polygon processed by a line clipping algorithm
For polygon clipping, we require an algorithm that will generate one or more closed areas
that are then scan converted for the appropriate area fill. The output of a polygon clipper
should be a sequence of vertices that defines the clipped polygon boundaries.
There are four possible cases when processing vertices in sequence around the perimeter of a polygon. As each pair of adjacent polygon vertices is passed to a window boundary clipper, we make the following tests:
1. If the first vertex is outside the window boundary and second vertex is inside,
both the intersection point of the polygon edge with window boundary and
second vertex are added to output vertex list.
2. If both input vertices are inside the window boundary, only the second vertex
is added to the output vertex list.
3. If first vertex is inside the window boundary and second vertex is outside only
the edge intersection with window boundary is added to output vertex list.
4. If both input vertices are outside the window boundary nothing is added to the
output list.
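These four cases can be captured in a small per-boundary routine. The sketch below is a simplified, non-pipelined illustration (it assumes the wcPt2, dcPt and Edge types and the inside and intersect helpers used in the implementation listed later; the function name is made up):

/* Sketch: clip the polygon pin[0..n-1] against one window boundary b,
   writing the resulting vertices to pout and returning their count. */
int clipAgainstBoundary (Edge b, dcPt wmin, dcPt wmax,
                         int n, wcPt2 *pin, wcPt2 *pout)
{
  int i, cnt = 0;
  wcPt2 s = pin[n-1];                      /* start with the last vertex  */
  for (i = 0; i < n; i++) {
    wcPt2 p = pin[i];
    if (inside (p, b, wmin, wmax)) {
      if (!inside (s, b, wmin, wmax))      /* case 1: outside -> inside   */
        pout[cnt++] = intersect (s, p, b, wmin, wmax);
      pout[cnt++] = p;                     /* case 2: inside -> inside    */
    }
    else if (inside (s, b, wmin, wmax))    /* case 3: inside -> outside   */
      pout[cnt++] = intersect (s, p, b, wmin, wmax);
    /* case 4: outside -> outside, nothing is added to the output list   */
    s = p;
  }
  return cnt;
}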
Clipping a polygon against successive window boundaries.
Successive processing of pairs of polygon vertices against the left window boundary
Clipping a polygon against the left boundary of a window, starting with vertex 1.
Primed numbers are used to label the points in the output vertex list for this window
boundary.
Vertices 1 and 2 are found to be outside the boundary. Moving along to vertex 3, which is inside, we calculate the intersection and save both the intersection point and vertex 3. Vertices 4 and 5 are determined to be inside and are saved. Vertex 6 is outside, so we find and save the intersection point. Using the five saved points, we repeat the process for the next window boundary.
Implementing the algorithm as described requires setting up storage for an output list of
vertices as a polygon clipped against each window boundary. We eliminate the
intermediate output vertex lists simply by clipping individual vertices at each step and
passing the clipped vertices on to the next boundary clipper.
A point is added to the output vertex list only after it has been determined to be inside or
on a window boundary by all boundary clippers. Otherwise the point does not continue in
the pipeline.
Processing the vertices of the polygon in the above fig. through a boundary clipping
pipeline. After all vertices are processed through the pipeline, the vertex list is {
v2”, v2’, v3,v3’}
Implementation of Sutherland-Hodgeman Polygon Clipping
wcPt2 intersect (wcPt2 p1, wcPt2 p2, Edge b, dcPt wmin, dcPt wmax)
{
  wcPt2 ipt;
  float m;
  /* Slope is needed except for vertical line segments. */
  if (p1.x != p2.x)
    m = (p1.y - p2.y) / (p1.x - p2.x);
  switch (b)
  {
    case Left:
      ipt.x = wmin.x;
      ipt.y = p2.y + (wmin.x - p2.x) * m;
      break;
    case Right:
      ipt.x = wmax.x;
      ipt.y = p2.y + (wmax.x - p2.x) * m;
      break;
    case Bottom:
      ipt.y = wmin.y;
      if (p1.x != p2.x)
        ipt.x = p2.x + (wmin.y - p2.y) / m;
      else
        ipt.x = p2.x;
      break;
    case Top:
      ipt.y = wmax.y;
      if (p1.x != p2.x)
        ipt.x = p2.x + (wmax.y - p2.y) / m;
      else
        ipt.x = p2.x;
      break;
  }
  return (ipt);
}
void clippoint (wcPt2 p, Edge b, dcPt wmin, dcPt wmax, wcPt2 *pout, int *cnt,
                wcPt2 *first[], wcPt2 *s)
{
  wcPt2 ipt;
  /* If no previous point exists for this clip boundary, save this point. */
  if (!first[b])
    first[b] = &p;
  else
    /* A previous point exists. If p and the previous point cross the
       boundary, find the intersection and clip it against the next
       boundary, if any; otherwise add the intersection to the output. */
    if (cross (p, s[b], b, wmin, wmax))
    {
      ipt = intersect (p, s[b], b, wmin, wmax);
      if (b < Top)
        clippoint (ipt, b + 1, wmin, wmax, pout, cnt, first, s);
      else
      {
        pout[*cnt] = ipt;
        (*cnt)++;
      }
    }
  /* Save p as the most recent point for this clip boundary. */
  s[b] = p;
  /* If p is inside this boundary, pass it on to the next boundary clipper. */
  if (inside (p, b, wmin, wmax))
    if (b < Top)
      clippoint (p, b + 1, wmin, wmax, pout, cnt, first, s);
    else
    {
      pout[*cnt] = p;
      (*cnt)++;
    }
}
void closeclip (dcPt wmin, dcPt wmax, wcPt2 *pout, int *cnt,
                wcPt2 *first[], wcPt2 *s)
{
  wcPt2 ipt;
  Edge b;
  for (b = Left; b <= Top; b++)
  {
    /* If the saved point and the first point cross this boundary,
       process the intersection through the remaining boundaries. */
    if (cross (s[b], *first[b], b, wmin, wmax))
    {
      ipt = intersect (s[b], *first[b], b, wmin, wmax);
      if (b < Top)
        clippoint (ipt, b + 1, wmin, wmax, pout, cnt, first, s);
      else
      {
        pout[*cnt] = ipt;
        (*cnt)++;
      }
    }
  }
}
int clippolygon (dcPt wmin, dcPt wmax, int n, wcPt2 *pin, wcPt2 *pout)
{
  wcPt2 *first[N_EDGE] = { 0, 0, 0, 0 }, s[N_EDGE];
  int i, cnt = 0;
  for (i = 0; i < n; i++)
    clippoint (pin[i], Left, wmin, wmax, pout, &cnt, first, s);
  closeclip (wmin, wmax, pout, &cnt, first, s);
  return (cnt);
}
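A possible usage sketch of this routine (the window limits and polygon coordinates here are made-up illustrative values, not from the text):

/* Sketch: clip a triangle against a clipping window. */
dcPt  wmin = { 100, 100 }, wmax = { 300, 300 };
wcPt2 triangle[3] = { { 50, 150 }, { 350, 150 }, { 200, 400 } };
wcPt2 clipped[10];                    /* clipping can add vertices        */
int m = clippolygon (wmin, wmax, 3, triangle, clipped);
/* clipped[0..m-1] now holds the vertices of the clipped polygon. */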
The basic idea in the Weiler-Atherton algorithm is that instead of always proceeding around the
polygon edges as vertices are processed, we sometimes want to follow the window
boundaries. Which path we follow depends on the polygon-processing direction
(clockwise or counterclockwise) and whether the pair of polygon vertices currently being
processed represents an outside-to-inside pair or an inside- to-outside pair. For clockwise
processing of polygon vertices, we use the following rules:
For an outside-to-inside pair of vertices, follow the polygon boundary.
For an inside-to-outside pair of vertices, follow the window boundary in a clockwise direction.
In the below Fig. the processing direction in the Weiler-Atherton algorithm and the
resulting clipped polygon is shown for a rectangular clipping window.
Curve Clipping
Curve clipping involves nonlinear equations, so a first test is to compare the coordinate extents (bounding rectangle) of the curved object against the clip window. But if the bounding rectangle test fails, we can look for other computation-saving approaches. For a circle, we can use the coordinate extents of individual quadrants and then octants for preliminary testing before calculating curve-window intersections.
The below figure illustrates circle clipping against a rectangular window. On the
first pass, we can clip the bounding rectangle of the object against the bounding rectangle
of the clip region. If the two regions overlap, we will need to solve the simultaneous line-
curve equations to obtain the clipping intersection points.
Text clipping
There are several techniques that can be used to provide text clipping in a graphics
package. The clipping technique used will depend on the methods used to
generate characters and the requirements of a particular application.
Text clipping using a bounding rectangle about individual characters.
A final method for handling text clipping is to clip the components of individual
characters. We now treat characters in much the same way that we treated lines. If an
individual character overlaps a clip window boundary, we clip off the parts of the
character that are outside the window.
Exterior clipping:
Objects within a window are clipped to the interior of that window when other, higher-priority windows overlap these objects. The objects are also clipped to the exterior of the overlapping windows.
Unit II – Computer Graphics
Fig. Three parallel projection views of an object, showing
relative proportions from different viewing positions.
Perspective Projection:
It is a method for generating a view of a three dimensional scene by projecting points to the display plane along converging paths.
This causes objects farther from the viewing position to be displayed smaller than objects of the same size that are nearer to the viewing position.
In a perspective projection, parallel lines in a scene that are not
parallel to the display plane are projected into converging lines.
Scenes displayed using perspective projections appear more
realistic, since this is the way that our eyes and a camera lens
form images.
Depth Cueing:
Depth information is important for identifying, for a particular viewing direction, which is the front and which is the back of a displayed object.
Depth cueing is a method for indicating depth in wireframe displays by varying the intensity of objects according to their distance from the viewing position.
Depth cueing is applied by choosing maximum and minimum
intensity (or color) values and a range of distance over which the
intensities are to vary.
Visible line and surface identification:
The simplest way to identify the visible lines is to highlight them or to display them in a different color.
Another method is to display the non visible lines as dashed lines.
Surface Rendering:
Surface rendering methods are used to generate a degree of realism in a displayed scene.
Realism is attained in displays by setting the surface intensity of
objects according to the lighting conditions in the scene and
surface characteristics.
Lighting conditions include the intensity and positions of light
sources and the background illumination.
Surface characteristics include degree of transparency and how
rough or smooth the surfaces are to be.
Exploded and Cutaway views:
Exploded and cutaway views of objects can be used to show the internal structure and relationships of the object's parts.
An alternative to exploding an object into its component parts is
the cut away view which removes part of the visible surfaces to
show internal structure.
Three-dimensional and Stereoscopic Views:
In Stereoscopic views, three dimensional views can be obtained by
reflecting a raster image from a vibrating flexible mirror.
The vibrations of the mirror are synchronized with the display of
the scene on the CRT.
As the mirror vibrates, the focal length varies so that each point in
the scene is projected to a position corresponding to its depth.
Stereoscopic devices present two views of a scene; one for the left
eye and the other for the right eye.
The two views are generated by selecting viewing positions that
corresponds to the two eye positions of a single viewer.
These two views can be displayed on alternate refresh cycles of a
raster monitor, and viewed through glasses that alternately darken
first one lens then the other in synchronization with the monitor
refresh cycles.
2.1.2 Three Dimensional Graphics Packages
The 3D package must include methods for mapping scene
descriptions onto a flat viewing surface.
There should be some consideration on how surfaces of solid
objects are to be modeled, how visible surfaces can be identified,
how transformations of objects are performed in space, and how to
describe the additional spatial properties.
World coordinate descriptions are extended to 3D, and users are
provided with output and input routines accessed with
specifications such as
o Polyline3(n, WcPoints)
o Fillarea3(n, WcPoints)
o Text3(WcPoint, string)
o Getlocator3(WcPoint)
o Translate3(translateVector, matrix Translate)
Where points and vectors are specified with 3 components and
transformation matrices have 4 rows and 4 columns.
2.2 Three Dimensional Object Representations
Representation schemes for solid objects are divided into two
categories as follows:
1. Boundary Representation ( B-reps)
It describes a three dimensional object as a set of surfaces that
separate the object interior from the environment. Examples are
polygon facets and spline patches.
2. Space Partitioning representation
It describes the interior properties, by partitioning the spatial
region containing an object into a set of small, nonoverlapping,
contiguous solids (usually cubes).
Example: octree representation
2.2.1 Polygon Surfaces
A polygon-surface representation is a boundary representation in which a 3D graphics object is described by a set of polygons that enclose the object interior.
Polygon Tables
The polygon surface is specified with a set of vertex coordinates
and associated attribute parameters.
For each polygon input, the data are placed into tables that are to
be used in the subsequent processing.
Polygon data tables can be organized into two groups: Geometric
tables and attribute tables.
Geometric Tables
Contain vertex coordinates and parameters to identify the spatial
orientation of the polygon surfaces.
Attribute tables
Contain attribute information for an object such as parameters
specifying the degree of transparency of the object and its surface
reflectivity and texture characteristics.
A convenient organization for storing geometric data is to create three
lists:
1. The Vertex Table
Coordinate values for each vertex in the object are stored in
this table.
2. The Edge Table
It contains pointers back into the vertex table to identify the
vertices for each polygon edge.
3. The Polygon Table
It contains pointers back into the edge table to identify the
edges for each polygon.
This is shown in fig
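A minimal C sketch of this three-list organization (the struct names, field names and array sizes here are illustrative assumptions, not taken from the text):

/* Sketch: geometric data tables for polygon surfaces. */
typedef struct { float x, y, z; } Vertex;             /* vertex table entry */
typedef struct { int v1, v2; } EdgeEntry;             /* indices of vertices */
typedef struct { int edges[10]; int nEdges; } Poly;   /* indices of edges    */

Vertex    vertexTable[100];
EdgeEntry edgeTable[200];
Poly      polygonTable[50];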
The edge table can be expanded to include forward pointers into the polygon table so that common edges between polygons can be identified more rapidly, for example:
E1 : V1, V2, S1
E2 : V2, V3, S1
E3 : V3, V1, S1, S2
E4 : V3, V4, S2
E5 : V4, V5, S2
E6 : V5, V1, S2
This is useful for the rendering procedure that must vary surface
shading smoothly across the edges from one polygon to the next.
Similarly, the vertex table can be expanded so that vertices are
cross-referenced to corresponding edges.
Additional geometric information that is stored in the data tables includes the slope of each edge and the coordinate extents of each polygon. As vertices are input, we can calculate edge slopes, and we can scan the coordinate values to identify the minimum and maximum x, y and z values for individual polygons.
The more information included in the data tables, the easier it is to check for errors.
Some of the tests that could be performed by a graphics package
are:
1. That every vertex is listed as an endpoint for at least two
edges.
2. That every edge is part of at least one polygon.
3. That every polygon is closed.
4. That each polygon has at least one shared edge.
5. That if the edge table contains pointers to polygons, every
edge referenced by a polygon pointer has a reciprocal
pointer back to the polygon.
Plane Equations:
To produce a display of a 3D object, we must process the input data
representation for the object through several procedures such as,
- Transformation of the modeling and world coordinate
descriptions to viewing coordinates.
- Then to device coordinates:
- Identification of visible surfaces
- The application of surface-rendering procedures.
For these processes, we need information about the spatial
orientation of the individual surface components of the object. This
information is obtained from the vertex coordinate value and the
equations that describe the polygon planes.
The equation for a plane surface is
Ax + By+ Cz + D = 0 ----(1)
Where (x, y, z) is any point on the plane, and the coefficients A,B,C
and D are constants describing the spatial properties of the plane.
We can obtain the values of A, B,C and D by solving a set of three
plane equations using the coordinate values for three non collinear
points in the plane.
For that, we can select three successive polygon vertices (x1, y1, z1),
(x2, y2, z2) and (x3, y3, z3) and solve the following set of
simultaneous linear plane equations for the ratios A/D, B/D and
C/D.
(A/D)xk + (B/D)yk + (C/D)zk = -1,   k = 1, 2, 3 -----(2)
The solution for this set of equations can be obtained in determinant
form, using Cramer’s rule as
        | 1  y1  z1 |          | x1  1  z1 |
  A  =  | 1  y2  z2 |    B  =  | x2  1  z2 |
        | 1  y3  z3 |          | x3  1  z3 |

        | x1  y1  1 |           | x1  y1  z1 |
  C  =  | x2  y2  1 |    D  = - | x2  y2  z2 |      ------(3)
        | x3  y3  1 |           | x3  y3  z3 |
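Expanding these determinants gives the familiar cross-product form, which might be coded as a small sketch like the following (the function name and the Vertex struct assumed earlier are illustrative, not from the text):

/* Sketch: plane coefficients A, B, C, D from three noncollinear vertices,
   using the expanded determinant form of Eq. (3). */
void planeCoefficients (Vertex v1, Vertex v2, Vertex v3,
                        float *A, float *B, float *C, float *D)
{
  *A = v1.y*(v2.z - v3.z) + v2.y*(v3.z - v1.z) + v3.y*(v1.z - v2.z);
  *B = v1.z*(v2.x - v3.x) + v2.z*(v3.x - v1.x) + v3.z*(v1.x - v2.x);
  *C = v1.x*(v2.y - v3.y) + v2.x*(v3.y - v1.y) + v3.x*(v1.y - v2.y);
  *D = -(*A)*v1.x - (*B)*v1.y - (*C)*v1.z;   /* D from Ax + By + Cz + D = 0 */
}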
We can identify a point as either inside or outside the plane surface according to the sign (negative or positive) of Ax + By + Cz + D:
If Ax + By + Cz + D < 0, the point (x, y, z) is inside the
surface.
If Ax + By + Cz + D > 0, the point (x, y, z) is outside the
surface.
These inequality tests are valid in a right-handed Cartesian system, provided the plane parameters A, B, C and D were calculated using vertices selected in a counterclockwise order when viewing the surface in an outside-to-inside direction.
Polygon Meshes
A single plane surface can be specified with a function such as
fillArea. But when object surfaces are to be tiled, it is more
convenient to specify the surface facets with a mesh function.
One type of polygon mesh is the triangle strip; a triangle strip formed with 11 triangles connects 13 vertices.
When functions are specified, a package can project the defining
equations for a curve to the display plane and plot pixel positions
along the path of the projected function.
For surfaces, a functional description is typically tessellated to produce a polygon-mesh approximation to the surface.
2.2.3 Quadric Surfaces
The quadric surfaces are described with second degree equations
(quadratics).
They include spheres, ellipsoids, tori, paraboloids, and hyperboloids.
Sphere
In Cartesian coordinates, a spherical surface with radius r
centered on the coordinates origin is defined as the set of points (x,
y, z) that satisfy the equation.
x2 + y2 + z2 = r2 -------------------------(1)
The Cartesian representation for points over the surface of an ellipsoid centered on the origin is
(x/rx)^2 + (y/ry)^2 + (z/rz)^2 = 1
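As an illustrative sketch (the latitude/longitude parametric form used here is standard but is not given in the text), points on such an ellipsoid can be generated as:

#include <math.h>

/* Sketch: a point on an ellipsoid with semi-axes rx, ry, rz, from
   latitude phi (-pi/2..pi/2) and longitude theta (-pi..pi). */
void ellipsoidPoint (float rx, float ry, float rz, float phi, float theta,
                     float *x, float *y, float *z)
{
  *x = rx * cosf (phi) * cosf (theta);
  *y = ry * cosf (phi) * sinf (theta);
  *z = rz * sinf (phi);
}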
Torus
Torus is a doughnut shaped object.
It can be generated by rotating a circle or other conic about a
specified axis.
A torus with a circular cross section centered on the
coordinate origin
Splines are used in graphics applications to design curve and surface shapes, to digitize drawings, and to specify animation paths for the objects or the camera in the scene. CAD applications for splines include the design of automobile bodies, aircraft and spacecraft surfaces, and ship hulls.
Interpolation and Approximation Splines
Spline curve can be specified by a set of coordinate positions called
control points which indicates the general shape of the curve.
These control points are fitted with piecewise continuous
parametric polynomial functions in one of two ways:
1. When polynomial sections are fitted so that the curve passes
through each control point the resulting curve is said to
interpolate the set of control points.
A set of six control points interpolated with piecewise
continuous polynomial sections
2. When the polynomials are fitted to the general control-point path without necessarily passing through any control point, the resulting curve is said to approximate the set of control points.
A spline curve is defined, modified, and manipulated with operations on the control points. The curve can be translated, rotated or scaled with transformations applied to the control points.
The convex polygon boundary that encloses a set of control points is called the convex hull.
One way to envision the shape of the convex hull is to imagine a rubber band stretched around the positions of the control points, so that each control point is either on the perimeter of the hull or inside it.
Convex hull shapes (dashed lines) for two sets of control points
Spline specifications
There are three methods to specify a spline representation:
1. We can state the set of boundary conditions that are imposed on the
spline; (or)
2. We can state the matrix that characterizes the spline; (or)
3. We can state the set of blending functions that determine how
specified geometric constraints on the curve are combined to calculate
positions along the curve path.
To illustrate these three equivalent specifications, suppose we have
the following parametric cubic polynomial representation for the x
coordinate along the path of a spline section.
x(u) = ax u^3 + bx u^2 + cx u + dx,   0 <= u <= 1 ----------(1)
Boundary conditions for this curve might be set on the
endpoint coordinates x(0) and x(1) and on the parametric first derivatives
at the endpoints x'(0) and x'(1). These boundary conditions are sufficient to determine the values of the four coefficients ax, bx, cx and dx.
From the boundary conditions we can obtain the matrix that
characterizes this spline curve by first rewriting eq(1) as the matrix
product
x(u) = [u^3  u^2  u  1] . [ax  bx  cx  dx]^T -------(2)
     = U . C
where U is the row matrix of powers of the parameter u and C is the coefficient column matrix.
Using equation (2) we can write the boundary conditions in matrix
form and solve for the coefficient matrix C as
C = Mspline . Mgeom -----(3)
where Mgeom is a four-element column matrix containing the geometric constraint values on the spline, and Mspline is the 4 x 4 matrix that transforms the geometric constraint values to the polynomial coefficients and provides a characterization for the spline curve.
Matrix Mgeom contains control point coordinate values and other
geometric constraints.
We can substitute the matrix representation for C into equation (2)
to obtain.
x (u) = U . Mspline . Mgeom ------(4)
The matrix Mspline, characterizing a spline representation, is called the basis matrix and is useful for transforming from one spline representation to another.
Finally we can expand equation (4) to obtain a polynomial
representation for coordinate x in terms of the geometric
constraint parameters.
x(u) = ∑ gk. BFk(u)
where gk are the constraint parameters, such as the control point
coordinates and slope of the curve at the control points and BFk(u) are
the polynomial blending functions.
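As a small illustrative sketch of Eq. (4) (the basis matrix and geometry vector below are generic placeholders for whatever spline is chosen; they are not a specific spline from the text), the x coordinate of one spline section can be evaluated as:

/* Sketch: evaluate x(u) = U . Mspline . Mgeom for one cubic spline section.
   mSpline is an assumed 4x4 basis matrix and gx holds four geometric
   constraints (e.g., control-point x coordinates). */
float splineX (float u, float mSpline[4][4], float gx[4])
{
  float U[4] = { u*u*u, u*u, u, 1.0f };
  float c[4], x = 0.0f;
  int i, j;
  for (i = 0; i < 4; i++) {            /* C = Mspline . Mgeom              */
    c[i] = 0.0f;
    for (j = 0; j < 4; j++)
      c[i] += mSpline[i][j] * gx[j];
  }
  for (i = 0; i < 4; i++)              /* x(u) = U . C                     */
    x += U[i] * c[i];
  return x;
}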
Visualization techniques are useful for analyzing processes that occur over a long period of time or that cannot be observed directly, for example quantum mechanical phenomena and special relativity effects produced by objects traveling near the speed of light.
Scientific visualization is used to visually display, enhance and manipulate information to allow better understanding of the data.
Similar methods employed by commerce , industry and other
nonscientific areas are sometimes referred to as business
visualization.
Data sets are classified according to their spatial distribution ( 2D
or 3D ) and according to data type (scalars , vectors , tensors and
multivariate data ).
Sometimes isolines are plotted with spline curves, but spline fitting can lead to misinterpretation of the data sets: two spline isolines could cross, or curved isoline paths might not be a true indicator of data trends, since data values are known only at the cell corners.
For 3D scalar data fields we can take cross sectional slices and
display the 2D data distributions over the slices. Visualization
packages provide a slicer routine that allows cross sections to be
taken at any angle.
Instead of looking at 2D cross sections we plot one or more
isosurfaces which are simply 3D contour plots. When two
overlapping isosurfaces are displayed the outer surface is made
transparent so that we can view the shape of both isosurfaces.
Volume rendering, which is like an X-ray picture, is another method for visualizing a 3D data set. The interior information about a data set is projected to a display screen using the ray-casting method: along the ray path from each screen pixel, interior data values are examined and encoded for display.
Volume visualization of a regular, Cartesian data grid
using ray casting to examine interior data values
For this volume visualization, a color-coded plot of the
distance to the maximum voxel value along each pixel ray was
displayed.
2.4.1 Translation
In a three dimensional homogeneous coordinate representation, a point or an object is translated from position P = (x, y, z) to position P' = (x', y', z') with the matrix operation
  | x' |   | 1  0  0  tx |   | x |
  | y' | = | 0  1  0  ty | . | y |      --------(1)
  | z' |   | 0  0  1  tz |   | z |
  | 1  |   | 0  0  0  1  |   | 1 |
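A minimal C sketch of this operation (the point type, function names and 4x4 matrix representation are assumptions for illustration):

/* Sketch: apply a 4x4 homogeneous transformation matrix to a 3D point. */
typedef struct { float x, y, z; } Pt3;

Pt3 transformPoint (float m[4][4], Pt3 p)
{
  Pt3 q;
  q.x = m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z + m[0][3];
  q.y = m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z + m[1][3];
  q.z = m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z + m[2][3];
  return q;                 /* bottom row is (0, 0, 0, 1) for affine cases */
}

/* Translation matrix of Eq. (1): identity with tx, ty, tz in the last column. */
void setTranslation (float m[4][4], float tx, float ty, float tz)
{
  int i, j;
  for (i = 0; i < 4; i++)
    for (j = 0; j < 4; j++)
      m[i][j] = (i == j) ? 1.0f : 0.0f;
  m[0][3] = tx;  m[1][3] = ty;  m[2][3] = tz;
}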
2.4.2 Rotation
To generate a rotation transformation for an object, an axis of rotation must be designated, about which the object will be rotated, and the amount of angular rotation must also be specified.
Positive rotation angles produce counter clockwise rotations about
a coordinate axis.
  | x' |   | cosθ  -sinθ  0  0 |   | x |
  | y' | = | sinθ   cosθ  0  0 | . | y |      -------(3)
  | z' |   |  0      0    1  0 |   | z |
  | 1  |   |  0      0    0  1 |   | 1 |
which we can write more compactly as
P’ = Rz (θ) . P ------------------(4)
The below figure illustrates rotation of an object about the z axis.
  | x' |   | 1    0      0    0 |   | x |
  | y' | = | 0  cosθ  -sinθ   0 | . | y |      -------(7)
  | z' |   | 0  sinθ   cosθ   0 |   | z |
  | 1  |   | 0    0      0    1 |   | 1 |
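A small sketch of Eqs. (3) and (7), reusing the assumed 4x4 matrix helpers from the translation sketch above:

#include <math.h>

/* Sketch: rotation about the z axis (Eq. 3) and about the x axis (Eq. 7). */
void setRotationZ (float m[4][4], float theta)
{
  setTranslation (m, 0.0f, 0.0f, 0.0f);        /* start from the identity */
  m[0][0] = cosf (theta);  m[0][1] = -sinf (theta);
  m[1][0] = sinf (theta);  m[1][1] =  cosf (theta);
}

void setRotationX (float m[4][4], float theta)
{
  setTranslation (m, 0.0f, 0.0f, 0.0f);
  m[1][1] = cosf (theta);  m[1][2] = -sinf (theta);
  m[2][1] = sinf (theta);  m[2][2] =  cosf (theta);
}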
3. Obtaining the inverse transformation sequence that returns
the rotation axis to its original position.
In the special case where an object is to be rotated about an axis
that is parallel to one of the coordinate axes, we can attain the
desired rotation with the following transformation sequence:
1. Translate the object so that the rotation axis coincides with
the parallel coordinate axis.
2. Perform the specified rotation about the axis.
3. Translate the object so that the rotation axis is moved back
to its original position.
When an object is to be rotated about an axis that is not parallel to
one of the coordinate axes, we need to perform some additional
transformations.
In such case, we need rotations to align the axis with a selected
coordinate axis and to bring the axis back to its original orientation.
Given the specifications for the rotation axis and the rotation angle,
we can accomplish the required rotation in five steps:
1. Translate the object so that the rotation axis passes through
the coordinate origin.
2. Rotate the object so that the axis of rotation coincides with
one of the coordinate axes.
3. Perform the specified rotation about that coordinate axis.
4. Apply inverse rotations to bring the rotation axis back to its
original orientation.
5. Apply the inverse translation to bring the rotation axis back
to its original position.
Five transformation steps
2.4.3 Scaling
The matrix expression for the scaling transformation of a position
P = (x,y,.z) relative to the coordinate origin can be written as
  | x' |   | sx  0   0   0 |   | x |
  | y' | = | 0   sy  0   0 | . | y |      --------(11)
  | z' |   | 0   0   sz  0 |   | z |
  | 1  |   | 0   0   0   1 |   | 1 |
Scaling with respect to a selected fixed position (xf, yf, zf) can be represented with the following transformation sequence:
1. Translate the fixed point to the coordinate origin.
2. Scale the object relative to the coordinate origin using Eq. (11).
3. Translate the fixed point back to its original position. This sequence of transformations is shown in the below figure.
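A sketch of this composite scaling, written out in closed form (the product T(xf, yf, zf) . S(sx, sy, sz) . T(-xf, -yf, -zf)), again reusing the assumed matrix helper from the translation sketch:

/* Sketch: composite matrix for scaling about a fixed point (xf, yf, zf). */
void setFixedPointScaling (float m[4][4], float sx, float sy, float sz,
                           float xf, float yf, float zf)
{
  setTranslation (m, 0.0f, 0.0f, 0.0f);        /* identity                 */
  m[0][0] = sx;  m[0][3] = (1.0f - sx) * xf;
  m[1][1] = sy;  m[1][3] = (1.0f - sy) * yf;
  m[2][2] = sz;  m[2][3] = (1.0f - sz) * zf;
}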
This transformation matrix is used to alter x and y coordinate
values by an amount that is proportional to the z value, and the z
coordinate will be unchanged.
Boundaries of planes that are perpendicular to the z axis are shifted by an amount proportional to z. The figure shows the effect of the shearing matrix on a unit cube for the values a = b = 1.
Parameter scale vector specifies the three scaling parameters sx, sy
and sz.
Rotate and scale matrices transform objects with respect to the
coordinate origin.
Composite transformation can be constructed with the following
functions:
composeMatrix3
buildTransformationMatrix3
composeTransformationMatrix3
The order of the transformation sequence for the buildTransformationMatrix3 and composeTransformationMatrix3 functions is the same as in two dimensions:
1. scale
2. rotate
3. translate
Once a transformation matrix is specified, the matrix can be
applied to specified points with
transformPoint3 (inPoint, matrix, outpoint)
The transformations for hierarchical construction can be set using
structures with the function
setLocalTransformation3 (matrix, type)
where parameter matrix specifies the elements of a 4 by 4
transformation matrix and parameter type can be assigned one of the
values of:
Preconcatenate,
Postconcatenate, or replace.
2.4.7 Modeling and Coordinate Transformations
In modeling, objects are described in a local (modeling) coordinate
reference frame, then the objects are repositioned into a world
coordinate scene.
For instance, tables, chairs and other furniture, each defined in a
local coordinate system, can be placed into the description of a
room defined in another reference frame, by transforming the
furniture coordinates to room coordinates. Then the room might be
transformed into a larger scene constructed in world coordinate.
Three dimensional objects and scenes are constructed using
structure operations.
Object description is transformed from modeling coordinate to
world coordinate or to another system in the hierarchy.
Coordinate descriptions of objects are transferred from one system
to another system with the same procedures used to obtain two
dimensional coordinate transformations.
Transformation matrix has to be set up to bring the two coordinate
systems into alignment:
- First, a translation is set up to bring the new coordinate
origin to the position of the other coordinate origin.
- Then a sequence of rotations is made to align the corresponding coordinate axes.
- If different scales are used in the two coordinate systems, a
scaling transformation may also be necessary to compensate
for the differences in coordinate intervals.
If a second coordinate system is defined with origin (x0, y0,z0) and
axis vectors as shown in the figure relative to an existing
Cartesian reference frame, then first construct the translation
matrix T(-x0, -y0, -z0), then we can use the unit axis vectors to form
the coordinate rotation matrix
      | u'x1  u'x2  u'x3  0 |
  R = | u'y1  u'y2  u'y3  0 |
      | u'z1  u'z2  u'z3  0 |
      |   0     0     0   1 |
which transforms the unit vectors u'x, u'y and u'z onto the x, y and z axes, respectively.
Transformation of an object description from one
coordinate system to another.
2.5 Three-Dimensional Viewing
In three dimensional graphics applications,
- we can view an object from any spatial position, from the
front, from above or from the back.
- We could generate a view of what we could see if we were
standing in the middle of a group of objects or inside object,
such as a building.
2.5.1 Viewing Pipeline
In the view of a three dimensional scene, to take a snapshot we
need to do the following steps.
1. Positioning the camera at a particular point in space.
2. Deciding the camera orientation, i.e., pointing the camera and rotating it around the line of sight to set up the direction for the picture.
3. When snap the shutter, the scene is cropped to the size of
the ‘window’ of the camera and light from the visible
surfaces is projected into the camera film.
In such a way the below figure shows the three dimensional
transformation pipeline, from modeling coordinates to final device
coordinate.
(Figure: the three dimensional viewing pipeline, from modeling coordinates through world, viewing and projection coordinates, then a workstation transformation to device coordinates.)
Processing Steps
1. Once the scene has been modeled, world coordinates position is
converted to viewing coordinates.
2. The viewing coordinates system is used in graphics packages as
a reference for specifying the observer viewing position and the
position of the projection plane.
3. Projection operations are performed to convert the viewing
coordinate description of the scene to coordinate positions on
the projection plane, which will then be mapped to the output
device.
4. Objects outside the viewing limits are clipped from further
consideration, and the remaining objects are processed through
visible surface identification and surface rendering procedures
to produce the display within the device viewport.
2.5.2 Viewing Coordinates
Specifying the view plane
The view for a scene is chosen by establishing the viewing
coordinate system, also called the view reference coordinate
system.
u = (V × N) / |V × N| = (u1, u2, u3)
v = n × u = (v1, v2, v3)
This method automatically adjusts the direction for v, so that v is
perpendicular to n.
The composite rotation matrix for the viewing transformation is
      | u1  u2  u3  0 |
  R = | v1  v2  v3  0 |
      | n1  n2  n3  0 |
      | 0   0   0   1 |
which transforms u into the world xw axis, v onto the yw axis and n
onto the zw axis.
The complete world-to-viewing transformation matrix is obtained
as the matrix product. Mwc, vc = R.T
This transformation is applied to coordinate descriptions of objects in the scene to transfer them to the viewing reference frame.
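A sketch of how the viewing axes might be constructed from a view-plane normal N and view-up vector V (the vector type and helper functions below are assumptions for illustration; the cross-product order follows the equations above):

#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Assumed small vector helpers. */
static Vec3 vecCross (Vec3 a, Vec3 b)
{
  Vec3 c = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
  return c;
}
static Vec3 vecNormalize (Vec3 a)
{
  float len = sqrtf (a.x*a.x + a.y*a.y + a.z*a.z);
  Vec3 c = { a.x/len, a.y/len, a.z/len };
  return c;
}

/* Sketch: the u, v, n axes that form the rows of the viewing rotation R. */
void viewingAxes (Vec3 N, Vec3 V, Vec3 *u, Vec3 *v, Vec3 *n)
{
  *n = vecNormalize (N);                 /* viewing zv axis                */
  *u = vecNormalize (vecCross (V, *n));  /* xv axis                        */
  *v = vecCross (*n, *u);                /* yv axis, perpendicular to n, u */
}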
2.5.3 Projections
Once world coordinate descriptions of the objects are converted to
viewing coordinates, we can project the 3 dimensional objects onto
the two dimensional view planes.
There are two basic types of projection.
1. Parallel Projection - Here the coordinate positions are
transformed to the view plane along parallel lines.
Parallel projection of an object to the view plane
Perspective projection of an object to the view
plane
Parallel Projections
Parallel projections are specified with a projection vector that
defines the direction for the projection lines.
When the projection is perpendicular to the view plane, it is said to be an orthographic parallel projection; otherwise it is said to be an oblique parallel projection.
Orientation of the projection vector Vp to produce an
orthographic projection (a) and an oblique projection (b)
Orthographic Projection
Orthographic projections are used to produce the front, side and
top views of an object.
Front, side and rear orthographic projections of an object are
called elevations.
A top orthographic projection is called a plan view.
This projection gives the measurement of lengths and angles
accurately.
If the view plane is placed at position zvp along the zv axis then any
point (x,y,z) in viewing coordinates is transformed to projection
coordinates as
xp = x, yp = y
where the original z-coordinate value is kept for the depth information needed in depth-cueing and visible-surface determination procedures.
Oblique Projection
An oblique projection in obtained by projecting points along
parallel lines that are not perpendicular to the projection plane.
In the figure below, α and φ are the two angles that describe the oblique projection line direction.
The oblique projection equation (1) can be written as
xp = x + z(L1cosφ)
yp = y + z(L1sinφ)
The transformation matrix for producing any parallel projection
onto the xvyv plane is
              | 1  0  L1 cosφ  0 |
  Mparallel = | 0  1  L1 sinφ  0 |
              | 0  0     1     0 |
              | 0  0     0     1 |
An orthographic projection is obtained when L1 = 0 (which occurs at a projection angle α of 90°).
Oblique projections are generated with non zero values for L1.
Perspective Projections
To obtain perspective projection of a 3D object, we transform
points along projection lines that meet at the projection reference
point.
If the projection reference point is set at position zprp along the zv
axis and the view plane is placed at zvp as in fig , we can write
equations describing coordinate positions along this perspective
projection line in parametric form as
x’ = x - xu
y’ = y - yu
z’ = z – (z – zprp) u
Perspective projection of a point P with coordinates (x, y, z) to position (xp, yp, zvp) on the view plane.
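Setting the parameter u so that z' = zvp gives the projected coordinates on the view plane. A minimal sketch of that calculation (a derivation from the parametric equations above; the function name is an assumption):

/* Sketch: perspective projection of (x, y, z) onto the view plane z = zvp,
   with the projection reference point at zprp on the zv axis. */
void perspectiveProject (float x, float y, float z, float zprp, float zvp,
                         float *xp, float *yp)
{
  float dp = zprp - zvp;            /* distance from prp to the view plane */
  float s  = dp / (zprp - z);       /* scale factor (zprp - zvp)/(zprp - z) */
  *xp = x * s;
  *yp = y * s;
}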
2.6 CLIPPING
y1 = y1 + (y2 - y1) (zvmin - z1) / (z2 - z1)
If either x1 or y1 is not in the range of the boundaries of the
viewport, then this line intersects the front plane beyond the
boundaries of the volume (line B in Fig.)
- Window limits on the view plane are given in viewing
coordinates with parameters xwmin, xwmax, ywmin and
ywmax.
- Limits of the 3D view port within the unit cube are set with
normalized coordinates xvmin, xvmax, yvmin, yvmax, zvmin
and zvmax.
- Parameter projType is used to choose the projection type
either parallel or perspective.
- Coordinate position (xprojRef, yprojRef, zprojRef) sets the
projection reference point. This point is used as the center of
projection if projType is set to perspective; otherwise, this
point and the center of the viewplane window define the
parallel projection vector.
- The position of the view plane along the viewing zv axis is set with parameter zview.
- Positions along the viewing zv axis for the front and back planes of the view volume are given with parameters zfront and zback.
- The error parameter returns an integer error code indicating
erroneous input data.
2.8 VISIBLE SURFACE IDENTIFICATION
A major consideration in the generation of realistic
graphics displays is identifying those parts of a scene that are
visible from a chosen viewing position.
2.8.1 Classification of Visible Surface Detection Algorithms
These are classified into two types based on whether
they deal with object definitions directly or with their
projected images
1. Object space methods: compares objects and parts of objects
to each other within the scene definition to determine which
surfaces as a whole we should label as visible.
2. Image space methods: visibility is decided point by point at each
pixel position on the projection plane. Most Visible Surface
Detection Algorithms use image space methods.
2.8.2 Back Face Detection
A point (x, y,z) is "inside" a polygon surface with plane
parameters A, B, C, and D if
Ax + By + Cz + D < 0 ----------------(1 )
We can simplify this test by considering the normal vector N
to a polygon surface, which has Cartesian components (A, B, C). In
general, if V is a vector in the viewing direction from the eye
position, as shown in Fig., then this polygon is a back face if V · N > 0.
If object descriptions have been converted to projection coordinates and the viewing direction is parallel to the viewing zv axis, we need only consider the sign of C, the z component of the normal vector N.
Thus, in general, we can label any polygon as a back face if its normal vector has a z component value
C <= 0
By examining parameter C for the different planes defining an
object, we can immediately identify all the back faces.
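A one-line sketch of this test (reusing the Vec3 type assumed earlier; the sign convention follows the discussion above, with V along the viewing direction):

/* Sketch: back-face test. N = (A, B, C) is the surface normal and V is a
   vector in the viewing direction; the polygon is a back face if V.N > 0. */
int isBackFace (Vec3 N, Vec3 V)
{
  return (V.x*N.x + V.y*N.y + V.z*N.z > 0.0f);
}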
2.8.3 Depth Buffer Method
A commonly used image-space approach to detecting visible
surfaces is the depth-buffer method, which compares surface depths at
each pixel position on the projection plane. This procedure is also
referred to as the z-buffer method.
Each surface of a scene is processed separately, one point at a
time across the surface. The method is usually applied to scenes
containing only polygon surfaces, because depth values can be computed
very quickly and the method is easy to implement. But the method can also be applied to nonplanar surfaces.
With object descriptions converted to projection coordinates, each
(x, y, z) position on a polygon surface corresponds to the orthographic
projection point (x, y) on the view plane.
Therefore, for each pixel position (x, y) on the view plane, object
depths can be compared by comparing z values. The figure shows
three surfaces at varying distances along the orthographic projection line
from position (x,y ) in a view plane taken as the (xv,yv) plane. Surface S1,
is closest at this position, so its surface intensity value at (x, y) is saved.
We first initialize the buffers so that, for all positions (x, y), depth(x, y) = 0 and refresh(x, y) = Ibackgnd; then, for each surface point, the calculated depth z replaces the stored value whenever z > depth(x, y), and refresh(x, y) is set to Isurf(x, y). Here Ibackgnd is the value for the background intensity, and Isurf(x, y) is the projected intensity value for the surface at pixel position (x, y).
After all surfaces have been processed, the depth buffer contains
depth values for the visible surfaces and the refresh buffer contains
the corresponding intensity values for those surfaces.
Depth values for a surface position (x, y) are calculated from the
plane equation for each surface:
z = (-Ax - By - D) / C -----------------------------(1)
For any scan line, adjacent horizontal positions across the line differ by 1, and the y value on an adjacent scan line differs by 1. If the depth of position (x, y) has been determined to be z, then the depth z'
of the next position (x +1, y) along the scan line is obtained from Eq. (1)
as
z' = [-A(x + 1) - By - D] / C -----------------------(2)
or
z' = z - A/C -----------------------(3)
On each scan line, we start by calculating the depth on a left edge
of the polygon that intersects that scan line in the below fig. Depth
values at each successive position across the scan line are then
calculated by Eq. (3).
Scan lines intersecting a polygon surface
z' = z + (A/m + B) / C ----------------------(4)
Intersection positions on successive scan lines along a left polygon edge
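A condensed sketch of the depth-buffer inner loop for one polygon span (the buffer names, sizes and the normalized depth convention are assumptions; Eq. (3) supplies the incremental depth update):

#define XMAX 640
#define YMAX 480

float depthBuff[YMAX][XMAX];     /* assumed depth buffer, initialized to 0  */
int   refreshBuff[YMAX][XMAX];   /* assumed refresh (intensity) buffer      */

/* Sketch: process one scan-line span of a polygon with plane (A, B, C, D). */
void zBufferSpan (int y, int xLeft, int xRight,
                  float A, float B, float C, float D, int surfColor)
{
  float z = (-A * xLeft - B * y - D) / C;      /* depth at the left edge    */
  int x;
  for (x = xLeft; x <= xRight; x++) {
    if (z > depthBuff[y][x]) {                 /* closer than stored depth  */
      depthBuff[y][x]   = z;
      refreshBuff[y][x] = surfColor;
    }
    z -= A / C;                                /* Eq. (3): z' = z - A/C     */
  }
}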
2.8.4 A-Buffer Method
A drawback of the depth-buffer method is that it can only find one
visible surface at each pixel position. The A-buffer method expands
the depth buffer so that each position in the buffer can reference a linked
list of surfaces.
Thus, more than one surface intensity can be taken into
consideration at each pixel position, and object edges can be antialiased.
Each position in the A-buffer has two fields:
1)depth field - stores a positive or negative real number
2)intensity field - stores surface-intensity information or a pointer
value.
If the depth field is positive, the number stored at that position is
the depth of a single surface overlapping the corresponding pixel area.
The intensity field then stores the RGB components of the surface color at that point and the percent of pixel coverage, as illustrated in Fig.
2.8.5 SCAN-LINE METHOD
This image-space method for removing hidden surfaces is an
extension of the scan-line algorithm for filling polygon interiors. As each
scan line is processed, all polygon surfaces intersecting that line are
examined to determine which are visible. Across each scan line, depth
calculations are made for each overlapping surface to determine which is
nearest to the view plane. When the visible surface has been determined,
the intensity value for that position is entered into the refresh buffer.
We assume that tables are set up for the various surfaces, which include
both an edge table and a polygon table. The edge table contains
coordinate endpoints for each line in the scene, the inverse slope of each
line, and pointers into the polygon table to identify the surfaces bounded
by each line.
The polygon table contains coefficients of the plane equation for
each surface, intensity information for the surfaces, and possibly
pointers into the edge table.
To facilitate the search for surfaces crossing a given scan line, we
can set up an active list of edges from information in the edge table. This
active list will contain only edges that cross the current scan line, sorted
in order of increasing x.
In addition, we define a flag for each surface that is set on or off to
indicate whether a position along a scan line is inside or outside of the
surface. Scan lines are processed from left to right. At the leftmost
boundary of a surface, the surface flag is turned on; and at the rightmost
boundary, it is turned off.
Scan lines crossing the projection of two surfaces S1 and S2 in the
view plane. Dashed lines indicate the boundaries of hidden
surfaces
2.8.6 Depth-Sorting Method
Using both image-space and object-space operations, the depth-sorting method performs two basic functions: first, surfaces are sorted in order of decreasing depth; second, surfaces are scan converted in order, starting with the surface of greatest depth. Taking each surface
in turn we "paint" the surface intensities onto the frame buffer over the
intensities of the previously processed surfaces.
Painting polygon surfaces onto the frame buffer according to depth
is carried out in several steps. Assuming we are viewing along the -z direction:
1. Surfaces are ordered on the first pass according to the smallest z value on each surface.
2. The surface S with the greatest depth is then compared to the other surfaces in the list to determine whether there are any overlaps in depth. If no depth overlaps occur, S is scan converted. The figure shows two surfaces that overlap in the xy plane but have no depth overlap.
3. This process is then repeated for the next surface in the list. As long as no overlaps occur, each surface is processed in depth order until all have been scan converted.
4. If a depth overlap is detected at any point in the list, we need to make some additional comparisons to determine whether any of the surfaces should be reordered.
Two surfaces with no depth overlap
We make the following tests for each surface that overlaps with S. If
any one of these tests is true, no reordering is necessary for that surface.
The tests are listed in order of increasing difficulty.
1. The bounding rectangles in the xy plane for the two surfaces do not
overlap
2. Surface S is completely behind the overlapping surface relative to the
viewing position.
3. The overlapping surface is completely in front of S relative to the viewing position.
4. The projections of the two surfaces onto the view plane do not overlap.
An example of two surfaces that overlap in the z direction but not in the x direction is shown in Fig.
Overlapping surface S' is completely in front (outside) of surface S, but S is not completely behind S'.
2.8.7 BSP-Tree Method
A binary space-partitioning (BSP) tree is an efficient method for determining object visibility by painting surfaces onto the screen from back to front, as in the painter's algorithm. The BSP tree is particularly useful when the view reference point changes, but the objects in a scene are at fixed positions.
Applying a BSP tree to visibility testing involves identifying
surfaces that are "inside" and "outside" the partitioning plane at each
step of the space subdivision, relative to the viewing direction. The
figure(a) illustrates the basic concept in this algorithm.
A region of space (a) is partitioned with two planes P1 and P2 to form the
BSP tree representation in (b)
With plane P1,we first partition the space into two sets of objects.
One set of objects is behind, or in back of, plane P1, relative to the
viewing direction, and the other set is in front of P1. Since one object is
intersected by plane P1, we divide that object into two separate objects,
labeled A and B.
Objects A and C are in front of P1 and objects B and D are behind
P1. We next partition the space again with plane P2 and construct the
binary tree representation shown in Fig.(b).
In this tree, the objects are represented as terminal nodes, with
front objects as left branches and back objects as right branches.
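A sketch of how such a tree might be traversed to paint surfaces back to front (the node layout, the viewer-side test and the drawSurface routine are assumptions for illustration, reusing the Vec3 type assumed earlier):

/* Sketch: back-to-front traversal of a BSP tree for painting surfaces. */
typedef struct BspNode {
  float A, B, C, D;              /* partitioning plane of this node        */
  struct BspNode *front, *back;  /* subtrees on each side of the plane     */
  int surface;                   /* surface stored at this node            */
} BspNode;

void paintBackToFront (BspNode *node, Vec3 eye)
{
  float side;
  if (node == 0) return;
  /* Which side of the partitioning plane is the viewer on? */
  side = node->A*eye.x + node->B*eye.y + node->C*eye.z + node->D;
  if (side > 0.0f) {                     /* viewer is on the front side    */
    paintBackToFront (node->back, eye);  /* paint the far side first       */
    drawSurface (node->surface);         /* assumed drawing routine        */
    paintBackToFront (node->front, eye);
  } else {                               /* viewer is on the back side     */
    paintBackToFront (node->front, eye);
    drawSurface (node->surface);
    paintBackToFront (node->back, eye);
  }
}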
2.8.8 Area – Subdivision Method
This technique for hidden-surface removal is essentially an image-space method, but object-space operations can be used to accomplish
depth ordering of surfaces.
The area-subdivision method takes advantage of area coherence in
a scene by locating those view areas that represent part of a single
surface. We apply this method by successively dividing the total viewing
area into smaller and smaller rectangles until each small area is the
projection of part of a single visible surface or no surface at all.
To implement this method, we need to establish tests that can
quickly identify the area as part of a single surface or tell us that the
area is too complex to analyze easily. Starting with the total view, we
apply the tests to determine whether we should subdivide the total area
into smaller rectangles. If the tests indicate that the view is sufficiently
complex, we subdivide it. Next, we apply the tests to each of the smaller
areas, subdividing these if the tests indicate that visibility of a single
surface is still uncertain. We continue this process until the subdivisions
are easily analyzed as belonging to a single surface or until they are
reduced to the size of a single pixel. An easy way to do this is to
successively divide the area into four equal parts at each step.
Tests to determine the visibility of a single surface within a
specified area are made by comparing surfaces to the boundary of the
area. There are four possible relationships that a surface can have with a
specified area boundary. We can describe these relative surface
characteristics in the following way (Fig. ):
Surrounding surface-One that completely encloses the area.
Overlapping surface-One that is partly inside and partly outside
the area.
Inside surface-One that is completely inside the area.
Outside surface-One that is completely outside the area.
Possible relationships between polygon surfaces and a rectangular
area
No further subdivisions of a specified area are needed if one of the following conditions is true:
1. All surfaces are outside surfaces with respect to the area.
2. Only one inside, overlapping, or surrounding surface is in the area.
3. A surrounding surface obscures all other surfaces within the area boundaries.
Another method for carrying out test 3 that does not require depth sorting is to use plane equations to calculate depth values at the four
vertices of the area for all surrounding, overlapping, and inside surfaces,
If the calculated depths for one of the surrounding surfaces is less than
the calculated depths for all other surfaces, test 3 is true. Then the area can be filled with the intensity values of the surrounding surface.
For some situations, both methods of implementing test 3 will fail
to identify correctly a surrounding surface that obscures all the other
surfaces. It is faster to subdivide the area than to continue with more
complex testing.
Once outside and surrounding surfaces have been identified for an
area, they will remain outside and surrounding surfaces for all
subdivisions of the area. Furthermore, some inside and overlapping
surfaces can be expected to be eliminated as the subdivision process
continues, so that the areas become easier to analyze.
In the limiting case, when a subdivision the size of a pixel is
produced, we simply calculate the depth of each relevant surface at that
point and transfer the intensity of the nearest surface to the frame
buffer.
As a variation on the basic subdivision process, we could subdivide
areas along surface boundaries instead of dividing them in half. The
below Figure illustrates this method for subdividing areas. The
projection of the boundary of surface S is used to partition the original
area into the subdivisions A1 and A2. Surface S is then a surrounding
surface for A1, and visibility tests 2 and 3 can be applied to determine
whether further subdividing is necessary.
In general, fewer subdivisions are required using this approach,
but more processing is needed to subdivide areas and to analyze the
relation of surfaces to the subdivision boundaries.
2.8.9 Octree Methods
When an octree is mapped onto a quadtree for display, each front octant is processed before the corresponding back octant. If the front octant is empty, the rear octant is processed. Otherwise, two recursive calls are made, one for the rear octant and one for the front octant.
#include <stdlib.h>

typedef enum { SOLID, MIXED } Status;
#define EMPTY -1

typedef struct tOctree {
  int id;
  Status status;
  union {
    int color;
    struct tOctree *children[8];
  } data;
} Octree;

typedef struct tQuadtree {
  int id;
  Status status;
  union {
    int color;
    struct tQuadtree *children[4];
  } data;
} Quadtree;

int nQuadtree = 0;

void octreeToQuadtree (Octree *oTree, Quadtree *qTree)
{
  Octree *front, *back;
  Quadtree *newQuadtree;
  int i;

  if (oTree->status == SOLID) {
    qTree->status = SOLID;
    qTree->data.color = oTree->data.color;
    return;
  }
  qTree->status = MIXED;
  /* Fill in each quad of the quadtree. */
  for (i = 0; i < 4; i++) {
    front = oTree->data.children[i];
    back  = oTree->data.children[i+4];
    newQuadtree = (Quadtree *) malloc (sizeof (Quadtree));
    newQuadtree->id = nQuadtree++;
    newQuadtree->status = SOLID;
    qTree->data.children[i] = newQuadtree;
    if (front->status == SOLID)
      if (front->data.color != EMPTY)
        qTree->data.children[i]->data.color = front->data.color;
      else
        if (back->status == SOLID)
          if (back->data.color != EMPTY)
            qTree->data.children[i]->data.color = back->data.color;
          else
            qTree->data.children[i]->data.color = EMPTY;
        else {  /* back node is mixed */
          newQuadtree->status = MIXED;
          octreeToQuadtree (back, newQuadtree);
        }
    else {  /* front node is mixed */
      newQuadtree->status = MIXED;
      octreeToQuadtree (back, newQuadtree);
      octreeToQuadtree (front, newQuadtree);
    }
  }
}
CS2401 – Computer Graphics Unit - III
Color Models – RGB, YIQ, CMY, HSV – Animations – General Computer Animation,
Raster, Keyframe - Graphics programming using OPENGL – Basic graphics primitives –
Drawing three dimensional objects - Drawing three dimensional scenes
Color Models
Color Model is a method for explaining the properties or behavior of color within some
particular context. No single color model can explain all aspects of color, so we make use
of different models to help describe the different perceived characteristics of color.
Properties of Light
Light is a narrow frequency band within the electromagnetic spectrum. Other frequency bands within this spectrum are called radio waves, micro waves, infrared waves and x-rays. The below fig shows the frequency ranges for some of the electromagnetic bands.
Each frequency value within the visible band corresponds to a distinct color.
At the low frequency end is a red color (4.3 x 10^14 Hz) and the highest frequency is a violet color (7.5 x 10^14 Hz).
Spectral colors range from the reds through orange and yellow at the low
frequency end to greens, blues and violet at the high end.
Since light is an electromagnetic wave, the various colors are described in terms of either the frequency f or the wavelength λ of the wave.
A light source such as the sun or a light bulb emits all frequencies within the
visible range to produce white light. When white light is incident upon an object,
some frequencies are reflected and some are absorbed by the object. The
combination of frequencies present in the reflected light determines what we
perceive as the color of the object.
If low frequencies are predominant in the reflected light, the object is described as
red. In this case, the perceived light has the dominant frequency at the red end of
the spectrum. The dominant frequency is also called the hue, or simply the color of
the light.
Intensity is the radiant energy emitted per unit time, per unit solid angle, and per unit projected area of the source.
Purity describes how washed out or how pure the color of the light appears.
The term chromaticity is used to refer collectively to the two properties, purity and
dominant frequency.
Two different color light sources with suitably chosen intensities can be used to
produce a range of other colors.
If the 2 color sources combine to produce white light, they are called
complementary colors. E.g., Red and Cyan, green and magenta, and blue and
yellow.
Color models that are used to describe combinations of light in terms of dominant
frequency use 3 colors to obtain a wide range of colors, called the color gamut.
The 2 or 3 colors used to produce other colors in a color model are called primary
colors.
Standard Primaries
The set of primaries is generally referred to as the XYZ or (X,Y,Z) color model
where X,Y and Z represent vectors in a 3D, additive color space.
Cλ = XX + YY + ZZ -------------(1)
Where X,Y and Z designates the amounts of the standard primaries needed
to match Cλ.
Normalized amounts are calculated as x = X/(X+Y+Z), y = Y/(X+Y+Z), z = Z/(X+Y+Z), with x + y + z = 1.
Any color can be represented with just the x and y amounts. The parameters x and
y are called the chromaticity values because they depend only on hue and purity.
If we specify colors only with x and y, we cannot obtain the amounts X, Y and Z. So a complete description of a color is given with the three values x, y and Y.
X = (x/y) Y,   Z = (z/y) Y
where z = 1 - x - y.
Color paintings can be created by mixing color pigments with white and black
pigments to form the various shades, tints and tones.
Starting with the pigment for a 'pure color' (pure hue), black pigment is added to produce different shades; the more black pigment, the darker the shade.
Different tints of the color are obtained by adding a white pigment to the original
color, making it lighter as more white is added.
Tones of the color are produced by adding both black and white pigments.
Based on the tristimulus theory of vision, our eyes perceive color through the stimulation of three visual pigments in the cones of the retina.
These visual pigments have a peak sensitivity at wavelengths of about 630 nm (red),
530 nm (green) and 450 nm (blue).
This is the basis for displaying color output on a video monitor using the 3 color
primaries, red, green, and blue referred to as the RGB color model. It is represented
in the below figure.
Vertices of the cube on the axes represent the primary colors, the remaining vertices
represents the complementary color for each of the primary colors.
The RGB color scheme is an additive model. (i.e.,) Intensities of the primary colors
are added to produce other colors.
Each color point within the bounds of the cube can be represented as the triple
(R,G,B) where values for R, G and B are assigned in the range from 0 to1.
Cλ = RR + GG + BB
The magenta vertex is obtained by adding red and blue to produce the triple (1,0,1), and white at (1,1,1) is the sum of the red, green and blue vertices.
Shades of gray are represented along the main diagonal of the cube from the origin
(black) to the white vertex.
YIQ Color Model
The National Television System Committee (NTSC) color model for forming the composite video signal is the YIQ model.
A combination of red, green and blue intensities is chosen for the Y parameter to yield the standard luminosity curve.
Since Y contains the luminance information, black and white TV monitors use only
the Y signal.
An NTSC video signal can be converted to an RGB signal using an NTSC decoder, which separates the video signal into the YIQ components, then converts to RGB values, as follows:
CMY Color Model
A color model defined with the primary colors cyan, magenta, and yellow (CMY) is useful for describing color output to hard-copy devices.
It is a subtractive color model: for example, cyan can be formed by adding green and blue light. When white light is reflected from cyan-colored ink, the reflected light must have no red component; i.e., red light is absorbed, or subtracted, by the ink.
Magenta ink subtracts the green component from incident light, and yellow subtracts the blue component.
Equal amounts of each of the primary colors produce grays along the main
diagonal of the cube.
A combination of cyan and magenta ink produces blue light because the red and
green components of the incident light are absorbed.
The printing process often used with the CMY model generates a color point with a collection of four ink dots: one dot is used for each of the primary colors (cyan, magenta and yellow) and one dot is black.
The conversion from an RGB representation to a CMY representation can be expressed with the matrix transformation
  | C |   | 1 |   | R |
  | M | = | 1 | - | G |
  | Y |   | 1 |   | B |
where the white is represented in the RGB system as the unit column vector. Similarly, the conversion from CMY to RGB representation is
  | R |   | 1 |   | C |
  | G | = | 1 | - | M |
  | B |   | 1 |   | Y |
where black is represented in the CMY system as the unit column vector.
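A trivial sketch of these conversions in C (component values are assumed to be normalized to the range 0..1):

/* Sketch: RGB <-> CMY conversion, with components in the range 0..1. */
void rgbToCmy (float r, float g, float b, float *c, float *m, float *y)
{
  *c = 1.0f - r;
  *m = 1.0f - g;
  *y = 1.0f - b;
}

void cmyToRgb (float c, float m, float y, float *r, float *g, float *b)
{
  *r = 1.0f - c;
  *g = 1.0f - m;
  *b = 1.0f - y;
}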
HSV Color Model
The HSV model uses color descriptions that have a more intuitive appeal to a user.
Color parameters in this model are hue (H), saturation (S), and value (V).
The 3D representation of the HSV model is derived from the RGB cube. The
outline of the cube has the hexagon shape.
The boundary of the hexagon represents the various hues, and it is used as the top
of the HSV hexcone.
In the hexcone, saturation is measured along a horizontal axis, and value is along a
vertical axis through the center of the hexcone.
Hue is represented as an angle about the vertical axis, ranging from 0° at red through 360°. Vertices of the hexagon are separated by 60° intervals: yellow is at 60°, green at 120°, and cyan opposite red at H = 180°. Complementary colors are 180° apart.
The HLS color model has the double-cone representation shown in the below figure.
The three parameters in this model are called hue (H), lightness (L) and saturation (S).
Hue specifies an angle about the vertical axis that locates a chosen hue. In this
model, H = 0° corresponds to blue.
The remaining colors are specified around the perimeter of the cone in the same
order as in the HSV model.
The vertical axis is called lightness (L). At L = 0 we have black, and white is at
L = 1. Gray scale is along the L axis, and the "pure hues" lie on the L = 0.5 plane.
Animation
Example : Advertising animations often transition one object shape into another.
Frame-by-Frame animation
Each frame of the scene is separately generated and stored. Later, the frames can be
recorded on film, or they can be consecutively displayed in "real-time playback" mode.
An animation sequence is designed with the following steps: storyboard layout,
object definitions, key-frame specifications, and generation of in-between frames.
Story board
It defines the motion sequences as a set of basic events that are to take place.
Depending on the type of animation to be produced, the story board could consist
of a set of rough sketches or a list of the basic ideas for the motion.
Object Definition
The associated movements of each object are specified along with the shape.
Key frame
A key frame is a detailed drawing of the scene at a certain time in the animation
sequence.
Within each key frame, each object is positioned according to the time for that
frame.
Some key frames are chosen at extreme positions in the action; others are spaced
so that the time interval between key frames is not too great.
In-betweens
Film requires 24 frames per second, and graphics terminals are refreshed at the rate
of 30 to 60 frames per second.
Time intervals for the motion are set up so that there are from three to five in-betweens
for each pair of key frames.
Depending on the speed of the motion, some key frames can be duplicated.
For a 1-minute film sequence with no duplication, 1440 frames are needed.
- Motion verification
- Editing
General computer-animation functions include:
1. Object manipulation and rendering
2. Camera motion
3. Generation of in-betweens
Animation packages such as Wavefront provide special functions for designing
the animation and processing individual objects.
Object shapes and associated parameters are stored and updated in the database.
Standard functions can be applied to identify visible surfaces and apply the
rendering algorithms.
Camera movement functions such as zooming, panning and tilting are used for
motion simulation.
Given the specification for the key frames, the in-betweens can be automatically
generated.
Raster Animations
- Predefine the object at successive positions along the motion path, and set the
successive blocks of pixel values to color-table entries.
- Set the pixels at the first position of the object to 'on' values, and set the
pixels at the other object positions to the background color.
Animation functions include a graphics editor, a key frame generator and standard
graphics routines.
The graphics editor allows designing and modifying object shapes, using spline
surfaces, constructive solid geometry methods or other representation schemes.
Scene description includes the positioning of objects and light sources, defining the
photometric parameters, and setting the camera parameters.
Action specification involves the layout of motion paths for the objects and
camera.
Keyframe Systems
Each set of in-betweens is generated from the specification of two keyframes.
For complex scenes, we can separate the frames into individual components or
objects called cels, a term borrowed from the celluloid transparencies of cartoon animation.
Morphing
Transformation of object shapes from one form to another is called morphing.
The general preprocessing rules equalize the keyframes in terms of either the number
of edges or the number of vertices to be added to a keyframe.
Suppose we equalize the edge count, and let parameters Lk and Lk+1 denote the
number of line segments in two consecutive frames. With Lmax = max(Lk, Lk+1) and
Lmin = min(Lk, Lk+1), we define
Ns = int (Lmax / Lmin)
If instead the vertex count is equalized, parameters Vk and Vk+1 denote the number
of vertices in the two consecutive frames, and analogous quantities are defined from
Vmax and Vmin.
Simulating Accelerations
Curve-fitting techniques are often used to specify the animation paths between key
frames. Given the vertex positions at the key frames, we can fit the positions with linear
or nonlinear paths. Figure illustrates a nonlinear fit of key-frame positions. This
determines the trajectories for the in-betweens. To simulate accelerations, we can adjust
the time spacing for the in-betweens.
For constant speed (zero acceleration), we use equal-interval time spacing for the in-
betweens. Suppose we want n in-betweens for key frames at times t1 and t2.
The time interval between key frames is then divided into n + 1 subintervals, yielding an
in-between spacing of
∆t = (t2 - t1) / (n + 1)
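As a small illustration of this spacing, the following sketch computes the times of the n in-betweens (the function name is illustrative):

// Fill times[0..n-1] with the times of the n in-betweens between key frames
// at t1 and t2, using equal-interval spacing (constant speed).
void inbetweenTimes(double t1, double t2, int n, double times[])
{
    double dt = (t2 - t1) / (n + 1);   // spacing between successive frames
    for (int j = 1; j <= n; j++)
        times[j - 1] = t1 + j * dt;
}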
Motion Specification
There are several ways in which the motions of objects can be specified in an
animation system.
With direct motion specification, the rotation angles and translation vectors are
explicitly given. For example, we can approximate the path of a bouncing ball with a
damped, rectified sine curve.
Alternatively, we can specify the motions that are to take place in general terms
that abstractly describe the actions. These systems are called goal-directed, because
they determine specific motion parameters given the goals of the animation.
In a kinematic description, for constant velocity (zero acceleration) we designate the
motions of rigid bodies in a scene by giving an initial position and velocity vector for
each object.
We can also specify accelerations (rate of change of velocity), speed-ups, slow-downs
and curved motion paths.
An alternative approach is to use inverse kinematics; where the initial and final
positions of the object are specified at specified times and the motion parameters
are computed by the system.
OpenGL is a software interface that allows you to access the graphics hardware without
taking care of the hardware details or which graphics adapter is in the system. OpenGL is
a low-level graphics library specification. It makes available to the programmer a small
set of geometric primitives - points, lines, polygons, images, and bitmaps. OpenGL
provides a set of commands that allow the specification of geometric objects in two or
three dimensions, using the provided primitives, together with commands that control
how these objects are rendered (drawn).
Libraries
OpenGL Utility Library (GLU) contains several routines that use lower-level
OpenGL commands to perform such tasks as setting up matrices for specific
viewing orientations and projections and rendering surfaces.
OpenGL Utility Toolkit (GLUT) is a window-system-independent toolkit, written
by Mark Kilgard, to hide the complexities of differing window APIs.
Include Files
For all OpenGL applications, you want to include the gl.h header file in every file.
Almost all OpenGL applications use GLU, the aforementioned OpenGL Utility Library,
which also requires inclusion of the glu.h header file. So almost every OpenGL source
file begins with:
#include <GL/gl.h>
#include <GL/glu.h>
If you are using the OpenGL Utility Toolkit (GLUT) for managing your window
manager tasks, you should include:
#include <GL/glut.h>
The following files must be placed in the proper folders in order to run an OpenGL program.
opengl32.lib
glu32.lib
glut32.lib
gl.h
glu.h
glut.h
opengl32.dll
glu32.dll
glut32.dll
The First task in making pictures is to open a screen window for drawing. The following
five functions initialize and display the screen window in our program.
1. glutInit(&argc, argv)
The first thing we need to do is call the glutInit() procedure. It should be called before
any other GLUT routine because it initializes the GLUT library. The parameters to
glutInit() should be the same as those to main(), specifically main(int argc, char** argv)
and glutInit(&argc, argv).
2. glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB)
The next thing we need to do is call the glutInitDisplayMode() procedure to specify the
display mode for a window.
Another decision we need to make when setting up the display mode is whether we want
to use single buffering (GLUT_SINGLE) or double buffering (GLUT_DOUBLE). If we
aren't using animation, stick with single buffering, which is the default.
3. glutInitWindowSize(640,480)
4. glutInitWindowPosition(100,150)
Similarly, glutInitWindowPosition() is used to specify the screen location for the upper-
left corner of our initial window. The arguments, x and y, indicate the location of the
window relative to the entire display. This call positions the window 100 pixels over
from the left edge and 150 pixels down from the top.
5. glutCreateWindow(“Example”)
To create a window with the previously set characteristics (display mode, size,
location, etc.), the programmer uses the glutCreateWindow() command. The command
takes a string as a parameter, which may appear in the title bar.
6. glutMainLoop()
The window is not actually displayed until glutMainLoop() is entered. This must be
the very last function called.
The method of associating a callback function with a particular type of event is called
event-driven programming. OpenGL provides tools to assist with the event management.
1. glutDisplayFunc(mydisplay)
The glutDisplayFunc() procedure is the first and most important event callback function.
A callback function is one where a programmer-specified routine can be registered to be
called in response to a specific type of event. For example, the argument of
glutDisplayFunc(mydisplay) is the function that is called whenever GLUT determines
that the contents of the window need to be redisplayed. Therefore, we should put all the
routines that we need to draw a scene in this display callback function.
2. glutReshapeFunc(myreshape)
The glutReshapeFunc() is a callback function that specifies the function that is called
whenever the window is resized or moved. Typically, the function that is called by the
reshape callback redraws the window at the new size and redefines the viewing
characteristics as desired.
3. glutKeyboardFunc(mykeyboard)
This registers the callback function that is called when an ordinary (ASCII) key is
pressed. Special keys can also be used as triggers (through glutSpecialFunc()); the key
passed to the callback function, in this case, takes one of the following values (defined in glut.h).
GLUT_KEY_UP          Up Arrow
GLUT_KEY_RIGHT       Right Arrow
GLUT_KEY_DOWN        Down Arrow
GLUT_KEY_PAGE_UP     Page Up
GLUT_KEY_PAGE_DOWN   Page Down
GLUT_KEY_HOME        Home
GLUT_KEY_END         End
GLUT_KEY_INSERT      Insert
4. glutMouseFunc(mymouse)
GLUT supports interaction with the computer mouse that is triggered when one of the
three typical buttons is pressed. A mouse callback function can be initiated when a given
mouse button is pressed or released. The command glutMouseFunc() is used to specify
the callback function to use when a specified button is in a given state at a certain
location. The buttons are defined as GLUT_LEFT_BUTTON,
GLUT_RIGHT_BUTTON, or GLUT_MIDDLE_BUTTON, and the states for a button are
either GLUT_DOWN (when pressed) or GLUT_UP (when released). Finally, the x and y
callback parameters indicate the location (in window-relative coordinates) of the mouse
at the time of the event.
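A minimal sketch of such a callback (the function and variable names are illustrative; only the GLUT constants and calls shown are part of the API):

int lastX = 0, lastY = 0;   // last left-click position (illustrative globals)

void myMouse(int button, int state, int x, int y)
{
    if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
    {
        lastX = x;              // window-relative coordinates; y runs down from the top
        lastY = y;
        glutPostRedisplay();    // ask GLUT to redraw the window with the new data
    }
}
// registered in main() with: glutMouseFunc(myMouse);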
OpenGL provides tools for drawing all the output primitives such as points, lines,
triangles, polygons, quads, etc., and each is defined by one or more vertices.
To draw such objects in OpenGL we pass it a list of vertices. The list occurs between the
two OpenGL function calls glBegin() and glEnd(). The argument of glBegin() determines
which object is drawn.
void glBegin(GLenum mode);
void glEnd(void);
The parameter mode of the function glBegin can be one of the following:
GL_POINTS
GL_LINES
GL_LINE_STRIP
GL_LINE_LOOP
GL_TRIANGLES
GL_TRIANGLE_STRIP
GL_TRIANGLE_FAN
GL_QUADS
GL_QUAD_STRIP
GL_POLYGON
glVertex( ) : The main function used to draw objects is glVertex. This function
defines a point (or a vertex), and its variants take from two up to four coordinates.
glVertex*();
Example
glBegin(GL_POINTS);
glVertex2i(100, 50);
glVertex2i(100, 130);
glVertex2i(150, 130);
glEnd( );
glBegin(GL_TRIANGLES);
glVertex3f(100.0f, 100.0f, 0.0f);
glVertex3f(150.0f, 100.0f, 0.0f);
glVertex3f(125.0f, 50.0f, 0.0f);
glEnd( );
glBegin(GL_LINES);
glVertex3f(100.0f, 100.0f, 0.0f); // origin of the line
glVertex3f(200.0f, 140.0f, 5.0f); // ending point of the line
glEnd( );
OpenGL State
OpenGL keeps track of many state variables, such as the current size of a point, the current
color of a drawing, the current background color, etc.
The value of a state variable remains active until a new value is given.
glPointSize() : The size of a point can be set with glPointSize(), which takes one floating
point argument
Example : glPointSize(4.0);
glClearColor() : establishes what color the window will be cleared to. The background
color is set with glClearColor(red, green, blue, alpha), where alpha
specifies a degree of transparency
Example : glClearColor (0.0, 0.0, 0.0, 0.0); //set black background color
glClear() : To clear the entire window to the background color, we use glClear
(GL_COLOR_BUFFER_BIT). The argument GL_COLOR_BUFFER_BIT is another
constant built into OpenGL
Example : glClear(GL_COLOR_BUFFER_BIT)
glColor3f() : establishes the color to use for drawing objects. All objects drawn after this
point use this color, until it's changed with another call to set the color.
Example : glColor3f(1.0, 0.0, 0.0); //set red drawing color
glFlush() : ensures that the drawing commands are actually executed rather than stored
in a buffer awaiting execution, i.e., it forces all issued OpenGL commands to be executed.
glShadeModel : Sets the shading model. The mode parameter can be either
GL_SMOOTH (the default) or GL_FLAT.
With flat shading, the color of one particular vertex of an independent primitive is
duplicated across all the primitive's vertices to render that primitive. With smooth
shading, the color at each vertex is treated individually.
#include "stdafx.h"
#include "gl/glut.h"
#include <gl/gl.h>
void myInit(void)
{
glClearColor (1.0, 1.0, 1.0, 0.0);
glColor3f (0.0, 0.0, 0.0);
glPointSize(4.0);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0.0, 640.0, 0.0, 480.0);
}
void Display(void)
{
glClear (GL_COLOR_BUFFER_BIT);
glBegin(GL_POINTS);
glVertex2i(100, 50);
glVertex2i(100, 130);
glVertex2i(150, 130);
glEnd( );
glFlush();
}
int main (int argc, char **argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize(640,480);
glutInitWindowPosition(100,150);
glutCreateWindow("Example");
glutDisplayFunc(Display);
myInit();
glutMainLoop();
return 0;
}
#include "stdafx.h"
#include "gl/glut.h"
#include <gl/gl.h>
void Display(void)
{
glClearColor (0.0, 0.0, 0.0, 0.0);
glClear (GL_COLOR_BUFFER_BIT);
glColor3f (1.0, 1.0, 1.0);
glLoadIdentity();
glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0); // set the view volume (reset the matrix first so repeated redisplays do not accumulate)
glBegin(GL_POLYGON);
glVertex3f (0.25, 0.25, 0.0);
glVertex3f (0.75, 0.25, 0.0);
glVertex3f (0.75, 0.75, 0.0);
glVertex3f (0.25, 0.75, 0.0);
glEnd();
glFlush();
}
int main (int argc, char **argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize(640,480);
glutCreateWindow("Intro");
glClearColor(0.0,0.0,0.0,0.0);
glutDisplayFunc(Display);
glutMainLoop();
return 0;
}
#include "stdafx.h"
#include "gl/glut.h"
#include <gl/gl.h>
void myInit(void)
{
glClearColor (0.0, 0.0, 0.0, 0.0);
glColor3f (1.0, 1.0, 1.0);
glPointSize(4.0);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0.0, 640.0, 0.0, 480.0);
}
void Display(void)
{
glClear (GL_COLOR_BUFFER_BIT);
glBegin(GL_POINTS);
glVertex2i(289, 190);
glVertex2i(320, 128);
glVertex2i(239, 67);
glVertex2i(194, 101);
glVertex2i(129, 83);
glVertex2i(75, 73);
glVertex2i(74, 74);
glVertex2i(20, 10);
glEnd( );
glFlush();
}
OpenGL makes it easy to draw a line: use GL_LINES as the argument to glBegin(), and
pass it the two end points as vertices. Thus to draw a line between (40,100) and (202,96)
use:
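glBegin(GL_LINES);
glVertex2i(40, 100);   // first endpoint
glVertex2i(202, 96);   // second endpoint
glEnd();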
A line‟s color is set in the same way as for points, using glColor3f().
To make stippled (dotted or dashed) lines, you use the command glLineStipple() to
define the stipple pattern, and then we enable line stippling with glEnable()
glLineStipple(1, 0x3F07);
glEnable(GL_LINE_STIPPLE);
In OpenGL a polyline is called a “line strip”, and is drawn by specifying the vertices in
turn between glBegin(GL_LINE_STRIP) and glEnd().
Attributes such as color, thickness and stippling may be applied to polylines in the same
way they are applied to single lines. If it is desired to connect the last point with the first
point to make the polyline into a polygon simply replace GL_LINE_STRIP with
GL_LINE_LOOP.
Polygons drawn using GL_LINE_LOOP cannot be filled with a color or pattern. To draw
filled polygons we have to use glBegin(GL_POLYGON)
A special case of a polygon is the aligned rectangle, so called because its sides are
aligned with the coordinate axes.
glRectf(x1, y1, x2, y2); // draw a rectangle with opposite corners (x1, y1) and (x2, y2), filled with the current color
Polygons
Polygons are the areas enclosed by single closed loops of line segments, where the line
segments are specified by the vertices at their endpoints
Polygons are typically drawn by filling in all the pixels enclosed within the boundary, but
you can also draw them as outlined polygons or simply as points at the vertices. A filled
polygon might be solidly filled, or stippled with a certain pattern
OpenGL also supports filling more general polygons with a pattern or color.
To draw a convex polygon based on vertices (x0, y0), (x1, y1), …, (xn, yn) use the usual
list of vertices, but place them between a glBegin(GL_POLYGON) and an glEnd():
glBegin(GL_POLYGON);
glVertex2f(x0, y0);
glVertex2f(x1, y1);
. . . . ..
glVertex2f(xn, yn);
glEnd();
The following list explains the function of some of these constants:
GL_TRIANGLES: takes the listed vertices three at a time, and draws a separate triangle
for each;
GL_QUADS: takes the vertices four at a time and draws a separate quadrilateral for each
#include "stdafx.h"
#include "gl/glut.h"
#include <gl/gl.h>
void init(void)
{
glClearColor (0.0, 0.0, 0.0, 0.0);
glShadeModel (GL_SMOOTH);
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
gluOrtho2D (0.0, 640.0, 0.0, 480.0);
}
void display(void)
{
glClear (GL_COLOR_BUFFER_BIT);
glBegin (GL_TRIANGLES);
glColor3f (1.0, 0.0, 0.0);
glVertex2f (50.0, 50.0);
glColor3f (0.0, 1.0, 0.0);
glVertex2f (250.0, 50.0);
glColor3f (0.0, 0.0, 1.0);
glVertex2f (50.0, 250.0);
glEnd();
glFlush ();
}
int main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize (500, 500);
glutInitWindowPosition (100, 100);
glutCreateWindow ("Shade");
init ();
glutDisplayFunc(display);
glutMainLoop();
return 0;
}
Polygon Filling
The pattern is specified with 128-byte array of data type GLubyte. The 128 bytes
provides the bits for a mask that is 32 bits wide and 32 bits high.
The first 4 bytes prescribe the 32 bits across the bottom row from left to right; the next 4
bytes give the next row up, etc..
Example
#include "stdafx.h"
#include "gl/glut.h"
#include <gl/gl.h>
GLubyte mask[]={
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x03, 0x80, 0x01, 0xC0, 0x06, 0xC0, 0x03, 0x60,
0x04, 0x60, 0x06, 0x20, 0x04, 0x30, 0x0C, 0x20,
0x04, 0x18, 0x18, 0x20, 0x04, 0x0C, 0x30, 0x20,
0x04, 0x06, 0x60, 0x20, 0x44, 0x03, 0xC0, 0x22,
0x44, 0x01, 0x80, 0x22, 0x44, 0x01, 0x80, 0x22,
0x44, 0x01, 0x80, 0x22, 0x44, 0x01, 0x80, 0x22,
0x44, 0x01, 0x80, 0x22, 0x44, 0x01, 0x80, 0x22,
0x66, 0x01, 0x80, 0x66, 0x33, 0x01, 0x80, 0xCC,
0x19, 0x81, 0x81, 0x98, 0x0C, 0xC1, 0x83, 0x30,
0x07, 0xe1, 0x87, 0xe0, 0x03, 0x3f, 0xfc, 0xc0,
0x03, 0x31, 0x8c, 0xc0, 0x03, 0x33, 0xcc, 0xc0,
0x06, 0x64, 0x26, 0x60, 0x0c, 0xcc, 0x33, 0x30,
0x18, 0xcc, 0x33, 0x18, 0x10, 0xc4, 0x23, 0x08,
0x10, 0x63, 0xC6, 0x08, 0x10, 0x30, 0x0c, 0x08,
0x10, 0x18, 0x18, 0x08, 0x10, 0x00, 0x00, 0x08};
void myInit(void)
{
glClearColor (0.0, 0.0, 0.0, 0.0);
glColor3f (1.0, 1.0, 1.0);
glPointSize(4.0);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0.0, 640.0, 0.0, 480.0);
}
void Display(void)
{
glClearColor(0.0, 0.0, 0.0, 0.0); // black background
glClear(GL_COLOR_BUFFER_BIT);
glColor3f(1.0, 1.0, 1.0);
glRectf(25.0, 25.0, 125.0, 125.0);
glEnable(GL_POLYGON_STIPPLE);
glPolygonStipple(mask);
glRectf (125.0, 25.0, 225.0, 125.0);
glDisable(GL_POLYGON_STIPPLE);
glFlush();
}
int main (int argc, char **argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize(640,480);
glutInitWindowPosition(100,150);
glutCreateWindow("Polygon Stipple");
glutDisplayFunc(Display);
myInit();
glutMainLoop();
return 0;
}
When the user presses or releases a mouse button, moves the mouse, or presses a
keyboard key, an event occurs. Using the OpenGL Utility Toolkit (GLUT) the
programmer can register a callback function with each of these events by using the
following commands:
glutMouseFunc(myMouse) which registers myMouse() with the event that occurs when
the mouse button is pressed or released;
Mouse interaction.
When a mouse event occurs the system calls the registered function, supplying it with
values for its parameters (button, state, x and y). The value of button will be one of:
GLUT_LEFT_BUTTON,
GLUT_MIDDLE_BUTTON,
GLUT_RIGHT_BUTTON,
with the obvious interpretation, and the value of state will be one of: GLUT_UP or
GLUT_DOWN. The values x and y report the position of the mouse at the time of the
event.
Keyboard interaction.
As mentioned earlier, pressing a key on the keyboard queues a keyboard event. The
callback function myKeyboard() is registered with this type of event through
glutKeyboardFunc(myKeyboard).
The value of key is the ASCII value of the key pressed. The values x and y report the
position of the mouse at the time that the event occurred. (As before, y measures the
number of pixels down from the top of the window.)
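A minimal sketch of such a keyboard callback (the function name is illustrative):

void myKeyboard(unsigned char key, int x, int y)
{
    if (key == 'q' || key == 27)   // 'q' or the Escape key
        exit(0);                   // exit() requires <stdlib.h>
}
// registered in main() with: glutKeyboardFunc(myKeyboard);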
The camera can be positioned with gluLookAt(eyeX, eyeY, eyeZ, lookX, lookY, lookZ,
upX, upY, upZ), which places the eye at the given point, aims it at the look point, and
sets the up direction.
Example : gluLookAt(3.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
// Triangle
glBegin( GL_TRIANGLES );
glVertex3f( -0.5f, -0.5f, -10.0 );
glVertex3f( 0.5f, -0.5f, -10.0 );
glVertex3f( 0.0f, 0.5f, -10.0 );
glEnd();
glBegin(GL_QUADS);
glColor3f(1,0,0); //red
glVertex3f(-0.5, -0.5, 0.0);
glColor3f(0,1,0); //green
glVertex3f(-0.5, 0.5, 0.0);
glColor3f(0,0,1); //blue
glVertex3f(0.5, 0.5, 0.0);
glColor3f(1,1,1); //white
glVertex3f(0.5, -0.5, 0.0);
glEnd();
GLUT provides functions for drawing the following three-dimensional shapes:
cone
icosahedron
teapot
cube
octahedron
tetrahedron
dodecahedron
sphere
torus
glutWireCube(double size);
glutSolidCube(double size);
glutWireSphere(double radius, int slices, int stacks);
glutSolidSphere(double radius, int slices, int stacks);
glutWireCone(double radius, double height, int slices, int stacks);
glutSolidCone(double radius, double height, int slices, int stacks);
glutWireTorus(double inner_radius, double outer_radius, int sides, int rings);
glutSolidTorus(double inner_radius, double outer_radius, int sides, int rings);
glutWireTeapot(double size);
glutSolidTeapot(double size);
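A small sketch of drawing two of these shapes inside a display callback (it assumes the projection and modelview matrices have already been set up elsewhere):

glColor3f(0.0, 0.0, 1.0);
glutWireSphere(1.0, 20, 16);    // radius 1.0, 20 slices, 16 stacks
glTranslated(2.5, 0.0, 0.0);    // move over before drawing the next object
glutSolidTeapot(1.0);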
3D Transformation in OpenGL
If the matrix mode is either GL_MODELVIEW or GL_PROJECTION, all objects drawn after a
call to glTranslate are translated.
Use glPushMatrix and glPopMatrix to save and restore the untranslated coordinate system.
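For example, a minimal sketch of translating a single object without disturbing the rest of the scene (drawObject() is a hypothetical routine that issues the object's vertices):

glMatrixMode(GL_MODELVIEW);
glPushMatrix();                  // save the current (untranslated) coordinate system
glTranslatef(1.0f, 0.0f, 0.0f);  // move 1 unit along x
drawObject();                    // drawn in the translated system
glPopMatrix();                   // restore the saved coordinate system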
#include "stdafx.h"
#include "gl/glut.h"
#include <gl/gl.h>
void Display(void)
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
glColor3f(0.0, 1.0, 0.0);
glBegin(GL_POLYGON);
glVertex3f( 0.0, 0.0, 0.0); // V0 ( 0, 0, 0)
glVertex3f( 1.0f, 0.0, 0.0); // V1 ( 1, 0, 0)
UNIT IV – RENDERING
Three vectors computed at a point P on a surface are important in the shading calculation:
1. The normal vector, m, to the surface at P.
2. The vector v from P to the viewer's eye.
3. The vector s from P to the light source.
The angles between these three vectors form the basis of computing light intensities.
These angles are normally calculated using world coordinates.
Each face of a mesh object has two sides. If the object is solid, one is inside and the
other is outside. The eye can see only the outside, and it is this side for which we must
compute light contributions.
We shall develop the shading model for a given side of a face. If that side of the face
is turned away from the eye, there is no light contribution.
4.1.2 How to Compute the Diffuse Component
Suppose that light falls from a point source onto one side of a face; a fraction of it is
re-radiated diffusely in all directions from this side. Some fraction of the re-radiated part
reaches the eye, with an intensity denoted by Id.
An important property assumed for diffuse scattering is that
it is independent of the direction from the point P, to the location of
the viewer‟s eye. This is called omnidirectional scattering ,
because scattering is uniform in all directions. Therefore Id is
independent of the angle between m and v.
Fig (a) shows a cross section of a point source illuminating a face S when m is
aligned with s.
In Fig (b) the face is turned partially away from the light source through angle θ. The
area subtended is now only cos(θ) times what it was before, so the brightness of S is
reduced by this same factor. This relationship between the brightness and surface
orientation is called Lambert's law.
cos(θ) is the dot product between the normalized versions of s and m. Therefore the
strength of the diffuse component is
Id = Is ρd (s·m) / (|s| |m|)
Is is the intensity of the light source and ρd is the diffuse reflection coefficient. If the
facet is aimed away from the eye, this dot product is negative, so we set Id to 0. A more
precise computation of the diffuse component is
Id = Is ρd max( (s·m) / (|s| |m|), 0 )
The reflection coefficient ρd depends on the wavelength of the
incident light , the angle θ and various physical properties of the
surface. But for simplicity and to reduce computation time, these
effects are usually suppressed when rendering images. A
reasonable value for ρd is chosen for each surface.
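A small sketch of this computation (Vec3, dot() and len() are tiny illustrative helpers, not OpenGL calls; sqrt() comes from <math.h>):

typedef struct { double x, y, z; } Vec3;
double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
double len(Vec3 a)         { return sqrt(dot(a, a)); }

// Id = Is * rhoD * max(s.m / (|s||m|), 0)
double diffuseTerm(double Is, double rhoD, Vec3 s, Vec3 m)
{
    double c = dot(s, m) / (len(s) * len(m));
    return Is * rhoD * (c > 0.0 ? c : 0.0);   // no contribution if the face is turned away
}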
4.1.3 Specular Reflection
Real objects do not scatter light uniformly in all directions, and so a specular
component is added to the shading model. Specular reflection causes highlights, which
can add reality to a picture when objects are shiny. The behavior of specular light can
be explained with the Phong model.
Phong Model
It is easy to apply, and the highlights generated by the Phong model give a
plastic-like appearance, so the Phong model is good when the object is made of shiny
plastic or glass.
The Phong model is less successful with objects that have a shiny metallic surface.
Fig (a) shows a situation where light from a source impinges on a surface and is
reflected in different directions. In this model the reflection is strongest along the mirror
direction r, for which the angle of incidence equals the angle of reflection. This is the
direction in which all light would travel if the surface were a perfect mirror. At other
nearby angles the amount of light reflected diminishes rapidly; Fig (b) shows this with
beam patterns. The distance from P to the beam envelope shows the relative strength of
the light scattered in that direction.
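The specular term of the Phong model is commonly written as Isp = Is ρs (r·v / (|r||v|))^f, where f is the shininess exponent. A sketch, reusing the Vec3 helpers from the diffuse sketch above and assuming the mirror direction r has already been computed:

double specularTerm(double Is, double rhoS, Vec3 r, Vec3 v, double f)
{
    double c = dot(r, v) / (len(r) * len(v));
    if (c < 0.0) return 0.0;        // highlight not visible from this direction
    return Is * rhoS * pow(c, f);   // pow() from <math.h>
}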
The key idea is that the vertices of a mesh are sent down the
pipeline along with their associated vertex normals, and all shading
calculations are done on vertices.
The above fig. shows a triangle with vertices v0,v1 and v2 being
rendered. Vertex vi has the normal vector mi associated with it. These
quantities are sent down the pipeline with calls such as :
glBegin(GL_POLYGON);
for( int i=0 ;i< 3; i++)
{
glNormal3f(m[i].x, m[i].y, m[i].z);
glVertex3f(v[i].x, v[i].y, v[i].z);
}
glEnd();
The call to glNormal3f() sets the “current normal vector” which
is applied to all vertices sent using glVertex3f(). The current normal
remains current until it is changed with another call to glNormal3f().
The vertices are transformed by the modelview matrix, M so
they are then expressed in camera coordinates. The normal vectors are
also transformed. Transforming points of a surface by a matrix M causes
the normal m at any point to become the normal M^-T m on the transformed surface,
where M^-T is the transpose of the inverse of M.
All quantities
after the modelview transformation are expressed in camera
coordinates. At this point the shading model equation (1) is applied and a
color is attached to each vertex.
The clipping step is performed in homogenous coordinates.
This may alter some of the vertices. The below figure shows the case
where vertex v1 of a triangle is clipped off and two new vertices a and b
are created. The triangle becomes a quadrilateral. The color at each new vertex must be
computed, since it is needed in the actual rendering step.
Clipping a polygon against the view volume
Spotlights
Light sources are point sources by default, meaning that they
emit light uniformly in all directions. But OpenGL allows you to make
them into spotlights, so they emit light in a restricted set of directions.
The fig. shows a spotlight aimed in direction d with a “cutoff angle” of α.
Properties of an OpenGL spotlight
The default values for these parameters are d= (0,0,-1) , α=180 degree
and ε=0, which makes a source an omni directional point source.
This code sets the ambient source to the color (0.2, 0.3, 0.1).
The default value is (0.2, 0.2, 0.2,1.0) so the ambient is always present.
Setting the ambient source to a non-zero value makes objects in a scene visible even if
you have not invoked any of the lighting functions.
glLightModeli(GL_LIGHT_MODEL_LOCAL_VIEWER, GL_TRUE);
In OpenGL
the terms “front faces” and “back faces” are used for “inside” and
“outside”. A face is a front face if its vertices are listed in
counterclockwise order as seen by the eye.
The fig (a) shows an eye viewing a cube which is modeled using the counterclockwise-
order notion. The arrows indicate the order in which the vertices are passed to OpenGL.
For an object that encloses some space, all faces that are visible to the eye are front faces,
and OpenGL draws them with the correct shading. OpenGL also draws back faces, but
they are hidden by closer front faces.
OpenGL’s definition of a front face
void display()
{
GLfloat position[]={2,1,3,1}; //initial light position
clear the color and depth buffers
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glPushMatrix();
glRotated(….); //move the light
glTranslated(…);
glLightfv(GL_LIGHT0,GL_POSITION,position);
glPopMatrix();
Material properties are set with the function glMaterial(), and they can be specified
individually for front and back faces.
Painting a Face
A face is colored using a polygon fill routine. A polygon fill routine is sometimes
called a tiler, because it moves over a polygon pixel by pixel, coloring each pixel. The
pixels in a polygon are visited in a regular order, usually from bottom to top of the
polygon and from left to right.
The polygons of interest here are convex. A tiler designed to fill only convex
polygons can be very efficient, because at each scan line there is an unbroken run of
pixels that lie inside the polygon. OpenGL exploits this property and always fills convex
polygons correctly, whereas nonconvex polygons are not guaranteed to be filled
correctly.
A convex quadrilateral whose face is filled with color
Gouraud Shading
Gouraud shading computes a different value of c for each pixel. For the scan line ys
in the fig., it finds the color at the leftmost pixel, colorleft, by linear interpolation of the
colors at the top and bottom of the left edge of the polygon. For the same scan line the
color at the top is color4, and that at the bottom is color1, so colorleft will be calculated as
colorleft = lerp(color1, color4, f) ----------(1)
where the fraction f = (ys - ybott) / (y4 - ybott) varies between 0 and 1 as ys varies from
ybott to y4. Eq (1) involves three calculations, since each color quantity has a red, green
and blue component.
Colorright is found by interpolating the colors at the top and bottom of the right edge.
The tiler then fills across the scan line, linearly interpolating between colorleft and
colorright to obtain the color at pixel x:
c(x) = lerp(colorleft, colorright, (x - xleft) / (xright - xleft))
To increase the efficiency of the fill, this color is computed incrementally at each pixel;
that is, there is a constant difference between c(x+1) and c(x), so that
c(x+1) = c(x) + ∆c
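A small sketch of the lerp operation applied per color component (Color3 here is a minimal illustrative struct):

typedef struct { double r, g, b; } Color3;

// lerp(a, b, f) = a + (b - a) * f, applied to each of the red, green and blue components
Color3 lerpColor(Color3 a, Color3 b, double f)
{
    Color3 c;
    c.r = a.r + (b.r - a.r) * f;
    c.g = a.g + (b.g - a.g) * f;
    c.b = a.b + (b.b - a.b) * f;
    return c;
}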
Phong Shading
Highlights are better reproduced using Phong shading. Greater realism can be
achieved with regard to highlights on shiny objects by a better approximation of the
normal vector to the face at each pixel; this type of shading is called Phong shading.
In Phong shading we find the normal vector at each point on the face of the object
and apply the shading model there to find the color. We compute the normal vector at
each pixel by interpolating the normal vectors at the vertices of the polygon.
The fig shows a projected face with the normal vectors m1, m2,
m3 and m4 indicated at the four vertices.
Interpolating normals
For the scan line ys, the vectors mleft and mright are found by linear interpolation.
Bitmap Textures
Textures are formed from bitmap representations of images, such as a digitized
photo. Such a representation consists of an array txtr[c][r] of color values. If the array
has C columns and R rows, the indices c and r vary from 0 to C-1 and R-1 respectively.
The function texture(s,t) accesses samples in the array as in the code:
Color3 texture (float s, float t)
{
return txtr[ (int) (s * C)][(int) (t * R)];
}
Where Color3 holds an RGB triple.
Example: If R=400 and C=600, then the texture (0.261, 0.783)
evaluates to txtr[156][313]. Note that a variation in s from 0 to 1
encompasses 600 pixels, the variation in t encompasses 400 pixels. To
avoid distortion during rendering , this texture must be mapped onto a
rectangle with aspect ratio 6/4.
Procedural Textures
Textures are defined by a mathematical function or procedure.
For example a spherical shape could be generated by a function:
float fakesphere( float s, float t)
{
float r= sqrt((s-0.5) * (s-0.5)+ (t-0.5) * (t-0.5));
if (r < 0.3) return 1-r/0.3; //sphere intensity
else return 0.2; //dark background
}
This function varies from 1(white) at the center to 0 (black) at
the edges of the sphere.
4.3.1 Painting the Textures onto a Flat Surface
Texture space is flat so it is simple to paste texture on a flat
surface.
Mapping texture onto a planar polygon
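A minimal sketch of pasting a texture onto a flat quadrilateral: each vertex is given a texture coordinate with glTexCoord2f() just before its position is sent (the vertex coordinates are illustrative, and the texture itself is assumed to have been created and bound beforehand):

glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
glEnd();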
Mapping a Square to a Rectangle
The fig. shows a case where the four corners of the texture square are associated
with the four corners of a rectangle. In this example, the texture is a 640-by-480 pixel
bitmap and it is pasted onto a rectangle with aspect ratio 640/480, so it appears without
distortion.
The fig. shows the use of texture coordinates that tile the texture, making it repeat.
To do this, some texture coordinates that lie outside the interval [0,1] are used. When
the rendering routine encounters a value of s or t outside the unit square, such as
s = 2.67, it ignores the integral part and uses only the fractional part, 0.67. A point on a
face that requires (s,t) = (2.6, 3.77) is textured with texture(0.6, 0.77).
The points inside F will be filled with texture values lying
inside P, by finding the internal coordinate values (s,t) through the use of
interpolation.
The fig. shows the face of a barn. The left edge of the projected face
has endpoints a and b. The face extends from xleft to xright across scan line
y. We need to find appropriate texture coordinates (sleft, tleft) and
(sright, tright) to attach to xleft and xright, which we can then interpolate
across the scan line
Consider finding sleft(y), the value of sleft at scan line y.We know
that texture coordinate sA is attached to point a and sB is attached to
point b. If the scan line at y is a fraction f of the way between ybott and
ytop so that f=(y – ybott)/ (ytop – ybott), the proper texture coordinate to use
is
Fig(a) shows a box casting a shadow onto the floor. The shape of the
shadow is determined by the projections of each of the faces of the box
onto the plane of the floor, using the light source as the center of
projection.
Fig(b) shows the superposed projections of two of the faces: the top face projects to
top' and the front face to front'.
This provides the key to drawing the shadow. After drawing the plane by the use of
ambient, diffuse and specular light contributions, draw the six projections of the box's
faces on the plane, using only the ambient light. This technique will draw the shadow in
the right shape and color. Finally draw the box.
Building the “Projected” Face
To make the new face F' produced by F, we project each of the vertices of F onto the
plane. Suppose that the plane passes through point A and has normal vector n. Consider
projecting vertex V, producing V'. V' is the point where the ray from the source at S
through V hits the plane. This point is
V' = S + (V - S) [ n·(A - S) / n·(V - S) ]
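A small sketch of this projection, reusing the Vec3 and dot() helpers sketched earlier for the diffuse term (add(), sub() and scale() are further illustrative helpers):

Vec3 add(Vec3 a, Vec3 b)    { Vec3 c = { a.x + b.x, a.y + b.y, a.z + b.z }; return c; }
Vec3 sub(Vec3 a, Vec3 b)    { Vec3 c = { a.x - b.x, a.y - b.y, a.z - b.z }; return c; }
Vec3 scale(Vec3 a, double s){ Vec3 c = { a.x * s, a.y * s, a.z * s }; return c; }

// Project vertex V onto the plane through A with normal n, from the light source S.
Vec3 projectToPlane(Vec3 V, Vec3 S, Vec3 A, Vec3 n)
{
    double t = dot(n, sub(A, S)) / dot(n, sub(V, S));  // fraction along the ray from S through V
    return add(S, scale(sub(V, S), t));                // V' = S + t (V - S)
}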
4.4.2 Creating Shadows with the use of a Shadow buffer
This method uses a variant of the depth buffer that performs the
removal of hidden surfaces. An auxiliary second depth buffer called a
shadow buffer is used for each light source. This requires lot of memory.
This method is based on the principle that any points in a scene
that are hidden from the light source must be in shadow. If no object lies
between a point and the light source, the point is not in shadow.
The shadow buffer contains a depth picture of the scene from the
point of view of the light source. Each of the elements of the buffer
records the distance from the source to the closest object in the
associated direction. Rendering is done in two stages:
1) Loading the shadow buffer
The shadow buffer is initialized with 1.0 in each element, the largest pseudodepth
possible. Then, through a camera positioned at the light source, each face of the scene is
rasterized, but only the pseudodepth of the point on the face is tested. Each element of
the shadow buffer keeps track of the smallest pseudodepth seen so far.
Using the shadow buffer
The fig. shows a scene being viewed by the usual eye camera
and a source camera located at the light source. Suppose that point P is
on the ray from the source through the shadow buffer pixel d[i][j] and
that point B on the pyramid is also on this ray. If the pyramid is present
d[i][j] contains the pseudodepth to B; if the pyramid happens to be
absent d[i][j] contains the pseudodepth to P.
The shadow buffer calculation is independent of the eye position,
so in an animation in which only the eye moves, the shadow buffer is
loaded only once. The shadow buffer must be recalculated whenever the
objects move relative to the light source.
2) Rendering the scene
Each face in the scene is rendered using the eye camera.
Suppose the eye camera sees point P through pixel p[c][r]. When
rendering p[c][r], we need to find
class Camera {
private:
Point3 eye;
Vector3 u, v, n;
double viewAngle, aspect, nearDist, farDist; //view volume shape
void setModelViewMatrix(); //tell OpenGL where the camera is
public:
Camera(); //default constructor
void set(Point3 eye, Point3 look, Vector3 up); //like gluLookAt()
void roll(float angle); //roll it
void pitch(float angle); // increase the pitch
void yaw(float angle); //yaw it
void slide(float delU, float delV, float delN); //slide it
void setShape(float vAng, float asp, float nearD, float farD);
};
The Camera class definition contains fields for eye and the
directions u, v and n. Point3 and Vector3 are the basic data types. It also
has fields that describe the shape of the view volume: viewAngle, aspect,
nearDist and farDist.
The method set() acts like gluLookAt(): It uses the values of eye,
look and up to compute u, v and n according to equation:
n= eye – look,
u = up X n
and
v = n X u. It places this information in the camera's fields and
communicates it to OpenGL.
Two new axes are formed, u' and v', that lie in the same plane as u and v and have
been rotated through the angle α.
We form u' as the appropriate linear combination of u and v, and similarly for v':
u' = cos(α) u + sin(α) v
v' = -sin(α) u + cos(α) v
The new axes u' and v' then replace u and v respectively in the camera. The angles
are measured in degrees.
Implementation of roll()
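A sketch of roll() that follows the equations above, assuming Vector3 has public fields x, y, z and a set(x, y, z) method (an assumption about the helper class), with cos() and sin() from <cmath>:

void Camera::roll(float angle)
{
    float cs = cos(3.14159265f / 180.0f * angle);   // angle is given in degrees
    float sn = sin(3.14159265f / 180.0f * angle);
    Vector3 t = u;                                  // remember the old u
    u.set(cs * t.x + sn * v.x, cs * t.y + sn * v.y, cs * t.z + sn * v.z);
    v.set(-sn * t.x + cs * v.x, -sn * t.y + cs * v.y, -sn * t.z + cs * v.z);
    setModelViewMatrix();                           // tell OpenGL about the new axes
}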
Implementation of yaw()
#include "camera.h"
Camera cam; //global camera object
//---------------------- myKeyboard-------------------------------
void myKeyboard(unsigned char key, int x, int y)
{
switch(key)
{
//controls for the camera
case 'F': //slide camera forward
cam.slide(0, 0, 0.2);
break;
case 'F'-64: //slide camera back
cam.slide(0, 0,-0.2);
break;
case 'P':
cam.pitch(-1.0);
break;
case 'P'-64:
cam.pitch(1.0);
break;
//add roll and yaw controls
}
glutPostRedisplay(); //draw it again
}
//--------------------------myDisplay------------------------------
void myDisplay(void)
{
glClear(GL_COLOR_BUFFER_BIT |GL_DEPTH_BUFFER_BIT);
glutWireTeapot(1.0); // draw the teapot
glFlush();
glutSwapBuffers(); //display the screen just made
}
//--------------------------main----------------------------
void main(int argc, char **argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB); //double buffering
glutInitWindowSize(640, 480);
glutInitWindowPosition(50, 50);
glutCreateWindow("fly a camera around a teapot");
glutKeyboardFunc(myKeyboard);
glutDisplayFunc(myDisplay);
glClearColor(1.0f, 1.0f, 1.0f, 1.0f); //background is white
glColor3f(0.0f, 0.0f, 0.0f); //set color of stuff
glViewport(0, 0, 640, 480);
cam.set(4, 4, 4, 0, 0, 0, 0, 1, 0); //make the initial camera
cam.setShape(30.0f, 64.0f/48.0f, 0.5f, 50.0f);
glutMainLoop();
}
UNIT V FRACTALS
Fractals and Self similarity – Peano curves – Creating image by
iterated functions –Mandelbrot sets – Julia Sets – Random Fractals –
Overview of Ray Tracing –Intersecting rays with other primitives – Adding
Surface texture – Reflections and Transparency – Boolean operations on
Objects
To create K1 , divide the line K0 into three equal parts and replace
the middle section with a triangular bump having sides of length 1/3.
The total length of the line is 4/3. The second order curve K2, is formed
by building a bump on each of the four line segments of K1.
To form Kn+1 from Kn:
Subdivide each segment of Kn into three equal parts and replace
the middle part with a bump in the shape of an equilateral triangle.
In this process each segment is increased in length by a factor of 4/3, so the total
length of the curve is 4/3 larger than that of the previous generation. Thus Ki has total
length (4/3)^i, which increases as i increases. As i tends to infinity, the length of the
curve becomes infinite.
The Koch snowflake of the above figure is formed out of three Koch curves joined
together. The perimeter of the ith-generation shape Si is three times the length of a Koch
curve, and so is 3(4/3)^i, which grows forever as i increases. But the area inside the Koch
snowflake grows quite slowly, so the edge of the Koch snowflake gets rougher and
rougher and longer and longer, but the area remains bounded.
Koch snowflake s3, s4 and s5
We call n the order of the curve Kn, and we say the order-n Koch curve consists of
four versions of the order (n-1) Koch curve. To draw K2 we draw a smaller version of
K1, then turn left 60°, draw K1 again, turn right 120°, draw K1 again, turn left 60°, and
draw K1 a final time. For the snowflake this routine is performed just three times, with
a 120° turn in between.
The recursive method for drawing any order Koch curve is
given in the following pseudocode:
To draw Kn:
if ( n equals 0 ) Draw a straight line;
else {
Draw Kn-1;
Turn left 60°;
Draw Kn-1;
Turn right 120°;
Draw Kn-1;
Turn left 60°;
Draw Kn-1;
}
Drawing a Koch Curve
void drawKoch(double dir, double len, int n)
{
// Koch to order n the line of length len
// from CP in the direction dir
The current drawing direction is maintained in the parameter dir. To keep track of
the direction of each child generation, the parameter dir is passed to subsequent calls of
drawKoch().
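A sketch of the body of drawKoch(), consistent with the recursive pseudocode above; it assumes a Canvas-like object cvs with a lineRel(dx, dy) routine that draws from the current point by the given offset (cvs and lineRel are assumptions here), with cos() and sin() from <math.h>:

void drawKoch(double dir, double len, int n)
{
    double dirRad = 0.0174533 * dir;                         // degrees to radians
    if (n == 0)
        cvs.lineRel(len * cos(dirRad), len * sin(dirRad));   // draw one straight segment
    else
    {
        n--;          // draw four copies of the order n-1 curve,
        len /= 3.0;   // each one-third as long
        drawKoch(dir, len, n);
        dir += 60;  drawKoch(dir, len, n);
        dir -= 120; drawKoch(dir, len, n);
        dir += 60;  drawKoch(dir, len, n);
    }
}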
that it draws gray scale and color images of objects. The image is viewed
as a collection of pixels and at each iteration the transformed point lands
in one of the pixels. A counter is kept for each pixel and at the
completion of the game the number of times each pixel has been visited
is converted into a color according to some mapping.
The IFS works well with both complex and real numbers. Both s and c are complex
numbers, and at each iteration we square the previous result and add c. Squaring a
complex number z = x + yi yields the new complex number
(x + yi)^2 = (x^2 - y^2) + (2xy)i ----------------------------------(3)
having real part x^2 - y^2 and imaginary part 2xy.
Some Notes on the Fixed Points of the System
It is useful to examine the fixed points of the system
f(.) =(.)2 + c . The behavior of the orbits depends on these fixed points
that is those complex numbers z that map into themselves, so that
z2 + c = z. This gives us the quadratic equation z2 – z + c = 0 and the fixed
points of the system are the two solutions of this equation, given by
1 1
p+, p- = c --------------------------------(4)
2 4
If an orbit reaches a fixed point p, it gets trapped there forever. The fixed point can
be characterized as attracting or repelling. If an orbit flies close to a fixed point p, the
next point along the orbit will be forced
closer to p if p is an attracting fixed point, and
farther away from p if p is a repelling fixed point.
If an orbit gets close to an attracting fixed point, it is sucked
into the point. In contrast, a repelling fixed point keeps the orbit away
from it.
5.4.2 Defining the Mandelbrot Set
The Mandelbrot set considers different values of c, always
using the starting point s =0. For each value of c, the set reports on the
nature of the orbit of 0, whose first few values are as follows:
orbit of 0: 0, c, c^2+c, (c^2+c)^2+c, ((c^2+c)^2+c)^2+c, ...
For each complex number c, either the orbit is finite so that
how far along the orbit one goes, the values remain finite or the orbit
explodes that is the values get larger without limit. The Mandelbrot set
denoted by M, contains just those values of c that result in finite orbits:
The point c is in M if 0 has a finite orbit.
The point c is not in M if the orbit of 0 explodes.
Definition:
The Mandelbrot set M is the set of all complex numbers c
that produce a finite orbit of 0.
If c is chosen outside of M, the resulting orbit explodes. If c
is chosen just beyond the border of M, the orbit usually thrashes around
the plane and goes to infinity.
If the value of c is chosen inside M, the orbit can do a variety
of things. For some c’s it goes immediately to a fixed point or spirals into
such a point.
and the dwell for that c value is found. A color is assigned to the pixel,
depending on whether the dwell is finite or has reached its limit.
The simplest picture of the Mandelbrot set just assigns black to points inside M and
white to those outside. But pictures are more appealing to the eye if a range of colors is
associated with points outside M. Such points all have dwells less than the maximum,
and we assign different colors to them on the basis of dwell size.
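A small sketch of the standard dwell computation: starting from s = 0, iterate z = z^2 + c and count iterations until |z| exceeds 2 (the orbit is then known to explode) or a chosen maximum is reached (Num is illustrative):

int dwell(double cx, double cy)
{
    const int Num = 100;                  // maximum number of iterations
    double x = 0.0, y = 0.0;              // the orbit starts at s = 0
    for (int count = 0; count < Num; count++)
    {
        double x2 = x * x, y2 = y * y;
        if (x2 + y2 > 4.0) return count;  // |z| > 2: the orbit explodes
        y = 2.0 * x * y + cy;             // imaginary part of z*z + c
        x = x2 - y2 + cx;                 // real part of z*z + c
    }
    return Num;                           // dwell reached its limit; c is taken to be in M
}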
Assigning colors according to the orbit’s dwell
subsequent iterations are there, so point after point builds up inside Jc,
and a picture emerges.
form points D and E. In the third and final stage, the new points F…..I
are added.
To perform fractalization in a program
Line L passes through the midpoint M of segment S and is
perpendicular to it. Any point C along L has the parametric form:
C(t) = M + (B-A) t -----------------------------------(7)
for some values of t, where the midpoint M= (A+B)/2.
The distance of C from M is |B-A||t|, which is proportional
to both t and the length of S. So to produce a point C on the random
elbow, we let t be computed randomly. If t is positive, the elbow lies to
one side of AB; if t is negative it lies to the other side.
For most fractal curves, t is modeled as a Gaussian random
variable with a zero mean and some standard deviation. Using a mean of
zero causes, with equal probability, the elbow to lie above or below the
parent segment.
Fractalizing a Line segment
void fract(Point2 A, Point2 B, double stdDev)
// generate a fractal curve from A to B
{
double xDiff = A.x - B.x, yDiff = A.y - B.y;
Point2 C;
if(xDiff * xDiff + yDiff * yDiff < minLenSq)
cvs.lineTo(B.x, B.y);
else
{
stdDev *= factor; //scale stdDev by factor
double t = 0;
// make a gaussian variate t lying between 0 and 12.0
for(int i = 0; i < 12; i++)
t += rand() / 32768.0;
t = (t - 6) * stdDev; //shift the mean to 0 and scale by stdDev
C.x = 0.5 * (A.x + B.x) - t * (B.y - A.y);
C.y = 0.5 * (A.y + B.y) + t * (B.x - A.x);
fract(A, C, stdDev);
fract(C, B, stdDev);
}
}
The routine fract() generates curves that approximate actual
fractals. The routine recursively replaces each segment in a random
elbow with a smaller random elbow. The stopping criterion used is: when the length of
the segment is small enough, the segment is drawn using
cvs.lineTo(), where cvs is a Canvas object. The variable t is made to be
approximately Gaussian in its distribution by summing together 12
uniformly distributed random values lying between 0 and 1. The result
has a mean value of 6 and a variance of 1. The mean value is then
shifted to 0 and the variance is scaled as necessary.
The depth of recursion in fract() is controlled by the length of
the line segment.
S(f) = 1/f^β
This function is positive at those values of t for which the ray is outside the object,
zero when the ray coincides with the surface of the object, and negative when the ray is
inside the surface.
For quadrics such as the sphere, d(t) has a parabolic shape; for the torus, it has a
quartic shape. For other surfaces d(t) may be so complicated that we have to search
numerically to locate t's for which d(.) equals zero. The function for the superellipsoid is
d(t) = ((Sx + Cx t)^n + (Sy + Cy t)^n)^(m/n) + (Sz + Cz t)^m - 1
where n and m are constants that govern the shape of the surface.
5.8 ADDING SURFACE TEXTURE
A fast method for approximating global illumination effect is environmental
mapping. An environment array is used to store background intensity information for a
scene. This array is then mapped to the objects in a scene based on the specified viewing
direction. This is called as environment mapping or reflection mapping.
To render the surface of an object, we project pixel areas on to surface and then reflect
the projected pixel area on to the environment map to pick up the surface shading
attributes for each pixel. If the object is transparent, we can also refract the projected
pixel are also the environment map. The environment mapping process for reflection of a
projected pixel area is shown in figure. Pixel intensity is determined by averaging the
intensity values within the intersected region of the environment map.
A simple method for adding surface detail is to model structure and patterns with
polygon facets. For large-scale detail, polygon modeling can give good results. Also we
could model an irregular surface with small, randomly oriented polygon facets, provided
the facets were not too small.
Surface pattern polygons are generally overlaid on a larger surface polygon and are
processed with the parent’s surface. Only the parent polygon is processed by the visible
surface algorithms, but the illumination parameters for the surface-detail polygons take
precedence over the parent polygon. When fine surface detail is to be modeled, polygons
are not practical.
The object to image space mapping is accomplished with the concatenation of the
viewing and projection transformations.
A disadvantage of mapping from texture space to pixel space is that a selected
texture patch usually does not match up with the pixel boundaries, thus requiring
calculation of the fractional area of pixel coverage. Therefore, mapping from pixel space
to texture space is the most commonly used texture mapping method. This avoids pixel
subdivision calculations, and allows anti aliasing procedures to be easily applied.
The mapping from image space to texture space does require calculation of the
inverse viewing-projection transformation M_VP^-1 and the inverse texture-map
transformation M_T^-1.
5.8.2 Procedural Texturing Methods
Another method for adding surface texture is to use procedural definitions of the
color variations that are to be applied to the objects in a scene. This approach avoids the
transformation calculations involved in transferring two-dimensional texture patterns to
object surfaces.
When values are assigned throughout a region of three-dimensional space, the
object color variations are referred to as solid textures. Values from texture space are
transferred to object surfaces using procedural methods, since it is usually impossible to
store texture values for all points throughout a region of space (e.g., wood-grain or
marble patterns).
Bump Mapping
Although texture mapping can be used to add fine surface detail, it is not a good
method for modeling the surface roughness that appears on objects such as oranges,
strawberries and raisins. The illumination detail in the texture pattern usually does not
correspond to the illumination direction in the scene.
A better method for creating surface bumpiness is to apply a perturbation function to
the surface normal and then use the perturbed normal in the illumination-model
calculations. This technique is called bump mapping.
If P(u,v) represents a position on a parametric surface, we can obtain the surface normal
at that point with the calculation
N = Pu × Pv
Where Pu and Pv are the partial derivatives of P with respect to parameters u and v.
To obtain a perturbed normal, we modify the surface position vector by adding a small
perturbation function called a bump function:
P'(u,v) = P(u,v) + b(u,v) n
This adds bumps to the surface in the direction of the unit surface normal n = N/|N|. The
perturbed surface normal is then obtained as
N' = Pu' × Pv'
We calculate the partial derivative with respect to u of the perturbed position vector as
Pu' = ∂(P + bn)/∂u = Pu + bu n + b nu
Assuming the bump function b is small, we can neglect the last term and write
Pu' ≈ Pu + bu n
Similarly, Pv' ≈ Pv + bv n,
and the perturbed surface normal is
N' = Pu × Pv + bv (Pu × n) + bu (n × Pv) + bu bv (n × n).
A turbulence function turb() generates more interesting noise by mixing together
several noise components: one that fluctuates slowly as you move through space,
one that fluctuates twice as rapidly, one that fluctuates four times as rapidly, and so on. The more
rapidly varying components are given progressively smaller strengths:
turb(s, x, y, z) = (1/2) noise(s, x, y, z) + (1/4) noise(2s, x, y, z) + (1/8) noise(4s, x, y, z)
The function adds three such components, each half as strong and varying twice
as rapidly as its predecessor.
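A direct transcription of this three-term turbulence sum might look like the sketch below; noise() is assumed to be whatever smooth 3D noise function the renderer already provides, and only its call shape is assumed here.

    float noise(float s, float x, float y, float z);   // assumed to be provided elsewhere

    float turb(float s, float x, float y, float z)
    {
        // Each term is half as strong and varies twice as rapidly as its predecessor.
        return 0.5f   * noise(s,        x, y, z)
             + 0.25f  * noise(2.0f * s, x, y, z)
             + 0.125f * noise(4.0f * s, x, y, z);
    }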
A common term of turb() is a
Similarly, the transmitted light component IT is found by casting a ray in the direction t and seeing what
surface is hit first, then computing the light contributions.
5.9.1 The Refraction of Light
When a ray of light strikes a transparent object, a portion of the ray penetrates the
object. The ray will change direction from dir to t if the speed of light is different in
medium 1 than in medium 2. If the angle of incidence of the ray is θ1, Snell's law states
that the angle of refraction θ2 satisfies
sin(θ2) / C2 = sin(θ1) / C1
where C1 is the speed of light in medium 1 and C2 is the speed of light in medium
2. Only the ratio C2/C1 is important. It is often called the index of refraction of medium 2
with respect to medium 1. Note that if θ1 equals zero, so does θ2: light hitting an
interface at right angles is not bent.
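As a small worked sketch of this relation, the function below computes θ2 from θ1 and the two speeds of light; the check for a sine larger than 1 (total internal reflection, where no transmitted ray exists) is an added safeguard rather than something stated in the text above.

    #include <cmath>
    #include <optional>

    // Returns the refraction angle theta2 (radians), or nothing when sin(theta2)
    // would exceed 1 and no transmitted ray exists (total internal reflection).
    std::optional<double> refractionAngle(double theta1, double c1, double c2)
    {
        double sinTheta2 = (c2 / c1) * std::sin(theta1);   // from sin(θ2)/C2 = sin(θ1)/C1
        if (std::fabs(sinTheta2) > 1.0)
            return std::nullopt;
        return std::asin(sinTheta2);
    }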
In ray tracing scenes that include transparent objects, we must keep track of the
medium through which a ray is passing, so that we can determine the value C2/C1 at the
next intersection, where the ray either exits from the current object or enters another one.
This tracking is most easily accomplished by adding a field to the ray that holds a pointer
to the object within which the ray is travelling.
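A minimal sketch of that bookkeeping is shown below: the ray simply carries a pointer to the object it is currently travelling inside, so C2/C1 can be looked up at the next intersection. The struct and field names are illustrative, not the book's actual classes.

    struct GeomObj;                         // whatever object representation the tracer uses

    struct Ray {
        double startX, startY, startZ;      // ray origin
        double dirX,   dirY,   dirZ;        // ray direction
        const GeomObj* insideObj = nullptr; // object the ray is travelling through
                                            // (nullptr meaning the surrounding air)
    };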
Several design policies are used:
1) Design Policy 1: No two transparent objects may interpenetrate.
2) Design Policy 2: Transparent objects may interpenetrate.
A ray-tracing method for combining simple shapes into more complex ones is known
as Constructive Solid Geometry (CSG). In CSG, arbitrarily complex shapes are defined by set
operations on simpler shapes. Objects such as lenses and hollow fish bowls, as
well as objects with holes, are easily formed by combining the generic shapes. Such
objects are called compound, Boolean or CSG objects.
The Boolean operators union, intersection and difference are shown in figure
5.17.
Two compound objects built from spheres: the intersection of two spheres is
shown as a lens shape. A point is in the lens if and only if it is in both spheres. L, the
intersection of S1 and S2, is written as
L = S1 ∩ S2
The difference operation is shown as a bowl. A point is in the difference of sets A
and B, denoted A - B, if it is in A and not in B. Applying the difference operation is
analogous to removing material, to cutting or carving. The bowl is specified by
B = (S1 - S2) - C
The solid globe S1 is hollowed out by removing all the points of the inner sphere
S2, forming a hollow spherical shell. The top is then opened by removing all points in the
cone C.
A point is in the union of two sets A and B, denoted A ∪ B, if it is in A or in B or
in both. Forming the union of two objects is analogous to gluing them together.
The union of two cones and two cylinders is shown as a rocket:
R = C1 ∪ C2 ∪ C3 ∪ C4
Cone C1 rests on cylinder C2. Cone C3 is partially embedded in C2 and rests on
the fatter cylinder C4.
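As a small illustration of these set operations, the sketch below classifies a point against sphere-based CSG combinations; the sphere-only representation keeps the example self-contained, whereas a full ray tracer works with hit-time intervals along a ray rather than single points.

    #include <cmath>

    struct Point  { double x, y, z; };
    struct Sphere { Point center; double radius; };

    bool inSphere(const Sphere& s, const Point& p)
    {
        double dx = p.x - s.center.x, dy = p.y - s.center.y, dz = p.z - s.center.z;
        return dx * dx + dy * dy + dz * dz <= s.radius * s.radius;
    }

    // L = S1 ∩ S2 : in the lens only if in both spheres
    bool inLens(const Sphere& s1, const Sphere& s2, const Point& p)
    {
        return inSphere(s1, p) && inSphere(s2, p);
    }

    // A - B : in the difference if in A and not in B (the bowl stacks two of these)
    bool inDifference(const Sphere& a, const Sphere& b, const Point& p)
    {
        return inSphere(a, p) && !inSphere(b, p);
    }

    // A ∪ B : in the union if in A or in B (or both)
    bool inUnion(const Sphere& a, const Sphere& b, const Point& p)
    {
        return inSphere(a, p) || inSphere(b, p);
    }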
We can ray trace objects that are Boolean combinations of simpler objects. The ray is inside
the lens L from t3 to t2, so the hit time is t3. If the lens is opaque, the familiar shading rules
are applied to find what color the lens has at the hit spot. If the lens is mirrorlike or
transparent, spawned rays are generated with the proper directions and are traced, as
shown in figure 5.18.
Ray 1 first strikes the bowl at t1, the smallest of the times for which it is in S1 but
in neither S2 nor C. Ray 2, on the other hand, first hits the bowl at t5. Again this is the
smallest time for which the ray is in S1 but in neither the other sphere nor the cone. The
hits at earlier times are hits with component parts of the bowl, but not with the bowl
itself.
Extent tests are first made to see if there is an early out. Then the proper hit()
routine is called for the left subtree and, unless the ray misses this subtree, the hit list lftInter
is formed. If there is a miss, hit() returns the value false immediately, because the ray must
hit both subtrees in order to hit their intersection. Then the hit list rtInter is formed.
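The core of forming the combined hit list for an intersection node is merging the two children's in-out intervals along the ray; the sketch below does this with plain (entry, exit) pairs, which is a simplification of whatever hit-list records the book's classes actually carry.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Span { double tEnter, tExit; };          // one in-out interval along the ray

    // The ray is inside the intersection object only where it is inside both children.
    std::vector<Span> intersectLists(const std::vector<Span>& a, const std::vector<Span>& b)
    {
        std::vector<Span> out;
        std::size_t i = 0, j = 0;
        while (i < a.size() && j < b.size()) {
            double lo = std::max(a[i].tEnter, b[j].tEnter);
            double hi = std::min(a[i].tExit,  b[j].tExit);
            if (lo < hi) out.push_back({lo, hi});   // the overlapping piece survives
            (a[i].tExit < b[j].tExit) ? ++i : ++j;  // advance the interval that ends first
        }
        return out;
    }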
The code is similar for the UnionBool and DifferenceBool classes. For
UnionBool::hit(), the two hit lists are formed using
if ((!left->hit(r, lftInter)) && (!right->hit(r, rtInter)))
    return false;
which provides an early out only if both hit lists are empty.
For DifferenceBool::hit(), we use the code
if (!left->hit(r, lftInter))
    return false;
if (!right->hit(r, rtInter))
{
    inter = lftInter;
    return true;
}
which gives an early out if the ray misses the left subtree, since it must then miss the
whole object.
5.10.4 Building and Using Extents for CSG Objects
Projection, sphere and box extents can be created for a CSG object. During a
preprocessing step, the tree for the CSG object is scanned and extents are built for each
node and stored within the node itself. During ray tracing, the ray can be tested against
each extent encountered, with the potential benefit of an early out in the intersection process
if it becomes clear that the ray cannot hit the object.
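As one concrete possibility, the sketch below tests a ray against a bounding-sphere extent stored in a node; the names and the choice of a sphere extent are illustrative. The test is conservative: it only answers false when the ray certainly cannot hit anything inside the extent.

    #include <cmath>

    struct Vec3 { double x, y, z; };
    struct SphereExtent { Vec3 center; double radius; };    // stored in the CSG node

    // Early-out test: false means the ray's line passes farther from the extent's
    // center than the radius, so the node's subtree need not be examined at all.
    bool rayMayHitExtent(const SphereExtent& e, const Vec3& origin, const Vec3& unitDir)
    {
        Vec3 oc { e.center.x - origin.x, e.center.y - origin.y, e.center.z - origin.z };
        double along = oc.x * unitDir.x + oc.y * unitDir.y + oc.z * unitDir.z; // projection onto the ray
        double dist2 = (oc.x * oc.x + oc.y * oc.y + oc.z * oc.z) - along * along;
        return dist2 <= e.radius * e.radius;
    }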