Line Algorithm
DDA Algorithm
In computer graphics, a hardware or software implementation of a digital differential
analyzer (DDA) is used for linear interpolation of variables over an interval between start and end
point. DDAs are used for rasterization of lines, triangles and polygons. In its simplest
implementation the DDA line drawing algorithm interpolates values in the interval [(xstart, ystart),
(xend, yend)] by computing for each xi the equations xi = xi-1 + 1/m, yi = yi-1 + m, where
Δx = xend - xstart, Δy = yend - ystart and m = Δy/Δx.
The DDA is a scan-conversion line algorithm based on calculating either Δy or Δx. A line is
sampled at unit intervals in one coordinate and the corresponding integer values nearest the line
path are determined for the other coordinate.
Considering a line with positive slope, if the slope is less than or equal to 1, we sample at unit x
intervals (Δx = 1) and compute successive y values as
yk+1 = yk + m
Subscript k takes integer values starting from 0 for the first point, and increases by 1 until the
endpoint is reached. Each computed y value is rounded off to the nearest integer to correspond to
a screen pixel.
For lines with slope greater than 1, we reverse the roles of x and y, i.e. we sample at unit y
intervals (Δy = 1) and calculate consecutive x values as
xk+1 = xk + 1/m
Similar calculations are carried out to determine pixel positions along a line with negative slope.
Thus, if the absolute value of the slope is less than 1, we set Δx = 1 when the starting endpoint
is at the left (and Δx = -1 when it is at the right).
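As a concrete illustration, here is a minimal Python sketch of the DDA procedure described above (the function name and the pixel-list representation are ours, chosen for illustration):

def dda_line(x_start, y_start, x_end, y_end):
    """Rasterize a line with the DDA algorithm; returns the list of pixels."""
    dx = x_end - x_start
    dy = y_end - y_start
    # Sample at unit intervals along the coordinate of greatest change.
    steps = max(abs(dx), abs(dy))
    if steps == 0:
        return [(round(x_start), round(y_start))]
    x_inc = dx / steps   # +/-1 or the slope m, depending on which of |dx|, |dy| is larger
    y_inc = dy / steps
    x, y = x_start, y_start
    pixels = []
    for _ in range(steps + 1):
        pixels.append((round(x), round(y)))  # round to the nearest screen pixel
        x += x_inc
        y += y_inc
    return pixels

print(dda_line(0, 0, 8, 3))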
The basic concept is:
- Let m be between 0 and 1; then the slope of the line is between 0 and 45 degrees.
- For the x-coordinate of the left end point of the line, compute the corresponding y value
according to the line equation. Thus we get the left end point as (x1, y1), where y1 may not
be an integer.
- Calculate the distance of (x1, y1) from the center of the pixel immediately above it and call it
D1.
- Calculate the distance of (x1, y1) from the center of the pixel immediately below it and call it
D2.
- If D1 is smaller than D2, the line is closer to the upper pixel than the lower pixel, so we set
the upper pixel on; otherwise we set the lower pixel on.
- Then increment x by 1 and repeat the same process until x reaches the right end point of
the line.
This method assumes the width of the line to be zero.
Bitmap
A graphics pattern such as an icon or a character may be needed frequently, or may need to be
re-used.
Generating the pattern every time when needed may waste a lot of processing time.
A bitmap can be used to store a pattern and duplicate it to many places on the image or on
the screen with simple copying operations.
Similarly to the case with lines, there is an incremental algorithm for drawing circles: the
midpoint circle algorithm.
In the midpoint circle algorithm we use eight-way symmetry, so we only ever calculate the points
for the top-right eighth of a circle and then use symmetry to get the rest of the points.
Assume that we have just plotted point (xk, yk). The next point is a choice between (xk + 1, yk)
and (xk + 1, yk - 1). Consider the circle function
fcirc(x, y) = x^2 + y^2 - r^2
which is negative for points inside the circle, zero on the circle, and positive outside. By
evaluating this function at the midpoint between the candidate pixels we can make our decision.
The decision parameter is
pk = fcirc(xk + 1, yk - 1/2) = (xk + 1)^2 + (yk - 1/2)^2 - r^2
If pk < 0, the midpoint is inside the circle, so the pixel (xk + 1, yk) is closer to the circle
boundary; otherwise we choose (xk + 1, yk - 1).
To ensure things are as efficient as possible we can do all of our calculations incrementally.
First consider
pk+1 = fcirc(xk+1 + 1, yk+1 - 1/2) = [(xk + 1) + 1]^2 + (yk+1 - 1/2)^2 - r^2
Subtracting pk gives
pk+1 = pk + 2(xk + 1) + (yk+1^2 - yk^2) - (yk+1 - yk) + 1
where yk+1 is either yk or yk - 1 depending on the sign of pk:
if pk < 0:  pk+1 = pk + 2xk+1 + 1
otherwise:  pk+1 = pk + 2xk+1 + 1 - 2yk+1
(Here xk+1 and yk+1 denote the coordinates of the next point, and 2xk+1 = 2xk + 2,
2yk+1 = 2yk - 2 can themselves be updated incrementally.)
The first decision parameter is evaluated at the start position (0, r):
p0 = fcirc(1, r - 1/2) = 1 + (r - 1/2)^2 - r^2 = 5/4 - r
The algorithm:
1. Input radius r and circle centre (xc, yc), then set the coordinates for the first point on
the circumference of a circle centred on the origin as (x0, y0) = (0, r).
2. Calculate the initial value of the decision parameter as p0 = 5/4 - r.
3. Starting with k = 0, at each position xk perform the following test. If pk < 0, the next
point along the circle centred on (0, 0) is (xk + 1, yk) and
pk+1 = pk + 2xk+1 + 1
Otherwise the next point along the circle is (xk + 1, yk - 1) and
pk+1 = pk + 2xk+1 + 1 - 2yk+1
4. Determine symmetry points in the other seven octants.
5. Move each calculated pixel position (x, y) onto the circular path centred at (xc, yc):
x = x + xc, y = y + yc.
6. Repeat steps 3 to 5 until x >= y.
To see the midpoint circle algorithm in action, let's use it to draw a circle centred at (0,0) with
radius 10.
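The following Python sketch follows the algorithm above, using integer arithmetic and the initial value p0 = 1 - r (the usual integer rounding of 5/4 - r); the names and the pixel-set representation are illustrative:

def midpoint_circle(xc, yc, r):
    """Plot a circle of radius r centred at (xc, yc) with the midpoint algorithm."""
    pixels = set()
    x, y = 0, r
    p = 1 - r          # initial decision parameter, 5/4 - r rounded for integer r
    while x <= y:
        # Eight-way symmetry: each computed point yields eight pixels.
        for px, py in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            pixels.add((xc + px, yc + py))
        x += 1
        if p < 0:
            p += 2 * x + 1            # midpoint inside the circle: keep y
        else:
            y -= 1
            p += 2 * x + 1 - 2 * y    # midpoint outside: step y down
    return pixels

print(sorted(midpoint_circle(0, 0, 10)))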
UNIT-2
2.1 Scan-Line Polygon Fill Algorithm
- Basic idea: For each scan line crossing a polygon, this algorithm locates the intersection
points of the scan line with the polygon edges. These intersection points are sorted from
left to right. Then, we fill the pixels between each intersection pair.
- Some scan-line intersections at polygon vertices require special handling. A scan line
passing through a vertex appears to intersect the polygon twice. In this case we may add
either one or two points to the list of intersections. This decision depends on whether the
two edges on either side of the vertex are both above, both below, or one above and one
below the scan line. Only when both edges are on the same side of the scan line (both
above or both below) do we add two points.
- The above algorithm only works for standard polygon shapes. For cases in which the
edges of the polygon intersect themselves, we need to identify whether a point is an interior
or exterior point. Students may find interesting descriptions of two methods to solve this
problem in many textbooks: the odd-even rule and the nonzero winding number rule.
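A minimal Python sketch of the scan-line fill idea; the half-open interval test (y1 <= y < y2) is one common way to implement the vertex rule discussed above, and the callback-based pixel output is our own choice:

def scanline_fill(vertices, set_pixel):
    """Fill a polygon (list of (x, y) vertex tuples) one scan line at a time."""
    ys = [y for _, y in vertices]
    for y in range(min(ys), max(ys) + 1):
        xs = []
        n = len(vertices)
        for i in range(n):
            (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
            # Half-open rule: a vertex shared by one upward and one downward
            # edge is counted exactly once, preserving even parity.
            if (y1 <= y < y2) or (y2 <= y < y1):
                xs.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
        xs.sort()
        # Fill between each intersection pair (odd-even rule).
        for left, right in zip(xs[::2], xs[1::2]):
            for x in range(round(left), round(right) + 1):
                set_pixel(x, y)

# Example: fill a triangle, collecting pixels in a set.
filled = set()
scanline_fill([(2, 1), (10, 3), (4, 8)], lambda x, y: filled.add((x, y)))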
Boundary-fill (flood-fill) algorithms start at a point inside a region and paint the interior
outward towards the boundary.
This is a simple method, but not efficient: it is a recursive method which may occupy a large
stack in main memory.
More efficient methods fill horizontal pixel spans across scan lines, instead of proceeding to
neighbouring points one at a time.
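A sketch of such a span-filling approach in Python, assuming the image is a simple list-of-lists grid (an illustrative representation, not from the text); an explicit stack replaces recursion:

def flood_fill_spans(grid, x, y, new_color):
    """Iterative span-based flood fill: fills whole horizontal spans and
    seeds the rows above and below, avoiding deep recursion."""
    old = grid[y][x]
    if old == new_color:
        return
    stack = [(x, y)]
    while stack:
        sx, sy = stack.pop()
        if grid[sy][sx] != old:
            continue
        # Expand the span left and right on this scan line.
        left = sx
        while left > 0 and grid[sy][left - 1] == old:
            left -= 1
        right = sx
        while right < len(grid[sy]) - 1 and grid[sy][right + 1] == old:
            right += 1
        for cx in range(left, right + 1):
            grid[sy][cx] = new_color
            # Seed the rows above and below the filled span.
            for ny in (sy - 1, sy + 1):
                if 0 <= ny < len(grid) and grid[ny][cx] == old:
                    stack.append((cx, ny))

grid = [[0] * 8 for _ in range(5)]
flood_fill_spans(grid, 3, 2, 7)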
UNIT 3
A translation with distances tx and ty can be specified by the transformation matrix:
x'     1  0  tx     x
y'  =  0  1  ty  *  y
1      0  0  1      1
For example, to translate a triangle with vertices at original coordinates (10,20), (10,10),
(20,10) by tx=5, ty=10, we compute as follows:
Translation of vertex (10,20):
x'     1  0  5      10     1*10 + 0*20 + 5*1      15
y'  =  0  1  10  *  20  =  0*10 + 1*20 + 10*1  =  30
1      0  0  1      1      0*10 + 0*20 + 1*1      1
Translation of vertex (10,10):
x'     1  0  5      10     1*10 + 0*10 + 5*1      15
y'  =  0  1  10  *  10  =  0*10 + 1*10 + 10*1  =  20
1      0  0  1      1      0*10 + 0*10 + 1*1      1
Translation of vertex (20,10):
x'     1  0  5      20     1*20 + 0*10 + 5*1      25
y'  =  0  1  10  *  10  =  0*20 + 1*10 + 10*1  =  20
1      0  0  1      1      0*20 + 0*10 + 1*1      1
The resultant coordinates of the triangle vertices are (15,30), (15,20), and (25,20) respectively.
Exercise: translate a triangle with vertices at original coordinates (10,25), (5,10), (20,10)
by tx=15, ty=5. Roughly plot the original and resultant triangles.
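The same translation can be checked with a few lines of Python (using NumPy, with homogeneous coordinates as above; the helper names are ours):

import numpy as np

def translate(tx, ty):
    """Homogeneous 2D translation matrix."""
    return np.array([[1, 0, tx],
                     [0, 1, ty],
                     [0, 0, 1]])

# Triangle vertices as homogeneous column vectors.
triangle = np.array([[10, 10, 20],
                     [20, 10, 10],
                     [ 1,  1,  1]])
print(translate(5, 10) @ triangle)
# Columns give (15, 30), (15, 20), (25, 20), matching the worked example.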
Alternatively, this rotation can also be specified by the following transformation matrix:
x'     cos θ  -sin θ  0     x
y'  =  sin θ   cos θ  0  *  y
1      0       0      1     1
For example, to rotate a triangle about the origin with vertices at original coordinates
(10,20), (10,10), (20,10) by 30 degrees, we compute as follows:
The rotation matrix is:
cos 30  -sin 30  0     0.866  -0.5    0
sin 30   cos 30  0  =  0.5     0.866  0
0        0       1     0       0      1
Rotation of vertex (10,20):
x'     0.866  -0.5    0     10     0.866*10 + (-0.5)*20 + 0*1     -1.34
y'  =  0.5     0.866  0  *  20  =  0.5*10 + 0.866*20 + 0*1     =  22.32
1      0       0      1     1      0*10 + 0*20 + 1*1              1
Rotation of vertex (10,10):
x'     0.866  -0.5    0     10     0.866*10 + (-0.5)*10 + 0*1     3.66
y'  =  0.5     0.866  0  *  10  =  0.5*10 + 0.866*10 + 0*1     =  13.66
1      0       0      1     1      0*10 + 0*10 + 1*1              1
Rotation of vertex (20,10):
x'     0.866  -0.5    0     20     0.866*20 + (-0.5)*10 + 0*1     12.32
y'  =  0.5     0.866  0  *  10  =  0.5*20 + 0.866*10 + 0*1     =  18.66
1      0       0      1     1      0*20 + 0*10 + 1*1              1
The resultant coordinates of the triangle vertices are (-1.34,22.32), (3.66,13.66), and
(12.32,18.66) respectively.
Exercise: Rotate a triangle with vertices at original coordinates (10,20), (5,10),
(20,10) by 45 degrees. Roughly plot the original and resultant triangles.
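A short NumPy check of the rotation example above (helper names are illustrative):

import numpy as np

def rotate(theta_deg):
    """Homogeneous 2D rotation about the origin (counter-clockwise)."""
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0,          0,         1]])

triangle = np.array([[10, 10, 20],
                     [20, 10, 10],
                     [ 1,  1,  1]])
print(np.round(rotate(30) @ triangle, 2))
# Columns: (-1.34, 22.32), (3.66, 13.66), (12.32, 18.66).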
Alternatively, this scaling can also be specified by the following transformation matrix:
x'     sx  0   0     x
y'  =  0   sy  0  *  y
1      0   0   1     1
For example, to scale a triangle with respect to the origin, with vertices at original
coordinates (10,20), (10,10), (20,10) by sx=2, sy=1.5, we compute as follows:
Scaling of vertex (10,20):
x'     2  0    0     10     2*10 + 0*20 + 0*1       20
y'  =  0  1.5  0  *  20  =  0*10 + 1.5*20 + 0*1  =  30
1      0  0    1     1      0*10 + 0*20 + 1*1       1
Scaling of vertex (10,10):
x'     2  0    0     10     2*10 + 0*10 + 0*1       20
y'  =  0  1.5  0  *  10  =  0*10 + 1.5*10 + 0*1  =  15
1      0  0    1     1      0*10 + 0*10 + 1*1       1
Scaling of vertex (20,10):
x'     2  0    0     20     2*20 + 0*10 + 0*1       40
y'  =  0  1.5  0  *  10  =  0*20 + 1.5*10 + 0*1  =  15
1      0  0    1     1      0*20 + 0*10 + 1*1       1
The resultant coordinates of the triangle vertices are (20,30), (20,15), and (40,15) respectively.
Exercise: Scale a triangle with vertices at original coordinates (10,25), (5,10), (20,10) by
sx=1.5, sy=2, with respect to the origin. Roughly plot the original and
resultant triangles.
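And the corresponding NumPy check for scaling (again with illustrative helper names):

import numpy as np

def scale(sx, sy):
    """Homogeneous 2D scaling with respect to the origin."""
    return np.array([[sx, 0,  0],
                     [0,  sy, 0],
                     [0,  0,  1]])

triangle = np.array([[10, 10, 20],
                     [20, 10, 10],
                     [ 1,  1,  1]])
print(scale(2, 1.5) @ triangle)
# Columns: (20, 30), (20, 15), (40, 15).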
Using C' = A(BC), each coordinate C is first transformed by B and the result is then
transformed by A. Since matrix multiplication is associative, C' = A(BC) = (AB)C, so we can
instead concatenate the two transformations into a single composite matrix M = AB and apply
M to every coordinate in one step.
Example: Rotate a triangle with vertices (10,20), (10,10), (20,10) about the origin by 30
degrees and then translate it by tx=5, ty=10.
We compute the rotation matrix:
B =  cos 30  -sin 30  0     0.866  -0.5    0
     sin 30   cos 30  0  =  0.5     0.866  0
     0        0       1     0       0      1
and the translation matrix:
A =  1  0  5
     0  1  10
     0  0  1
The composite transformation matrix is M = AB:
M =  1  0  5      0.866  -0.5    0
     0  1  10  *  0.5     0.866  0
     0  0  1      0       0      1
     1*0.866 + 0*0.5 + 5*0    1*(-0.5) + 0*0.866 + 5*0    1*0 + 0*0 + 5*1
  =  0*0.866 + 1*0.5 + 10*0   0*(-0.5) + 1*0.866 + 10*0   0*0 + 1*0 + 10*1
     0*0.866 + 0*0.5 + 1*0    0*(-0.5) + 0*0.866 + 1*0    0*0 + 0*0 + 1*1
     0.866  -0.5    5
  =  0.5     0.866  10
     0       0      1
Transformation of vertex (10,20):
x'     0.866  -0.5    5      10     0.866*10 + (-0.5)*20 + 5*1      3.66
y'  =  0.5     0.866  10  *  20  =  0.5*10 + 0.866*20 + 10*1    =  32.32
1      0       0      1      1      0*10 + 0*20 + 1*1              1
Transformation of vertex (10,10):
x'     0.866  -0.5    5      10     0.866*10 + (-0.5)*10 + 5*1      8.66
y'  =  0.5     0.866  10  *  10  =  0.5*10 + 0.866*10 + 10*1    =  23.66
1      0       0      1      1      0*10 + 0*10 + 1*1              1
Transformation of vertex (20,10):
x'     0.866  -0.5    5      20     0.866*20 + (-0.5)*10 + 5*1      17.32
y'  =  0.5     0.866  10  *  10  =  0.5*20 + 0.866*10 + 10*1    =  28.66
1      0       0      1      1      0*20 + 0*10 + 1*1              1
The resultant coordinates of the triangle vertices are (3.66,32.32), (8.66,23.66), and
(17.32,28.66) respectively.
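A NumPy sketch of the composite transformation, confirming that concatenating the matrices once and applying M = AB to all vertices reproduces the results above (helper names are ours):

import numpy as np

def translate(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]])

def rotate(theta_deg):
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0, 0, 1]])

# M = A * B: rotate by 30 degrees first, then translate by (5, 10).
M = translate(5, 10) @ rotate(30)
triangle = np.array([[10, 10, 20],
                     [20, 10, 10],
                     [ 1,  1,  1]])
print(np.round(M @ triangle, 2))
# Columns: (3.66, 32.32), (8.66, 23.66), (17.32, 28.66).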
Translations
If two successive translations (tx1, ty1) and (tx2, ty2) are applied, the composite
transformation matrix is:
1  0  tx2     1  0  tx1     1*1 + 0*0 + tx2*0   1*0 + 0*1 + tx2*0   1*tx1 + 0*ty1 + tx2*1
0  1  ty2  *  0  1  ty1  =  0*1 + 1*0 + ty2*0   0*0 + 1*1 + ty2*0   0*tx1 + 1*ty1 + ty2*1
0  0  1       0  0  1       0*1 + 0*0 + 1*0     0*0 + 0*1 + 1*0     0*tx1 + 0*ty1 + 1*1
     1  0  tx1 + tx2
  =  0  1  ty1 + ty2
     0  0  1
This demonstrates that two successive translations are additive.
Rotations
By common sense, if we rotate a shape with two successive rotation angles, θ1 and θ2, about
the origin, it is equal to rotating the shape once by the angle θ1 + θ2 about the origin.
Similarly, this additive property can be demonstrated by the composite transformation matrix:
cos θ2  -sin θ2  0     cos θ1  -sin θ1  0
sin θ2   cos θ2  0  *  sin θ1   cos θ1  0
0        0       1     0        0       1
     cos θ1 cos θ2 - sin θ1 sin θ2    -(sin θ1 cos θ2 + cos θ1 sin θ2)   0
  =  sin θ1 cos θ2 + cos θ1 sin θ2      cos θ1 cos θ2 - sin θ1 sin θ2    0
     0                                  0                                1
     cos(θ1 + θ2)  -sin(θ1 + θ2)  0
  =  sin(θ1 + θ2)   cos(θ1 + θ2)  0
     0              0             1
using the angle-addition identities. This demonstrates that two successive rotations about the
origin are additive.
Scalings
If two successive scalings (sx1, sy1) and (sx2, sy2), both with respect to the origin, are
applied, the composite transformation matrix is:
sx2  0    0     sx1  0    0     sx1*sx2  0        0
0    sy2  0  *  0    sy1  0  =  0        sy1*sy2  0
0    0    1     0    0    1     0        0        1
This demonstrates that 2 successive scalings with respect to the origin are multiplicative.
General pivot-point rotation can be performed as the following composite sequence:
1. Translate the object so that the pivot-point position is moved to the origin.
2. Rotate the object about the origin.
3. Translate the object so that the pivot point is returned to its original position.
The composite transformation matrix for rotation by an angle θ about a pivot point (xr, yr) is:
1  0  xr     cos θ  -sin θ  0     1  0  -xr
0  1  yr  *  sin θ   cos θ  0  *  0  1  -yr
0  0  1      0       0      1     0  0   1
     cos θ  -sin θ  xr(1 - cos θ) + yr sin θ
  =  sin θ   cos θ  yr(1 - cos θ) - xr sin θ
     0       0      1
Similarly, scaling with respect to a fixed point (xf, yf) can be specified as the composite
matrix:
1  0  xf     sx  0   0     1  0  -xf     sx  0   xf(1 - sx)
0  1  yf  *  0   sy  0  *  0  1  -yf  =  0   sy  yf(1 - sy)
0  0  1      0   0   1     0  0   1      0   0   1
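Both composites can be expressed compactly in Python by concatenating the basic matrices (helper names are ours; this is a sketch, not a full library):

import numpy as np

def translate(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]])

def rotate(theta_deg):
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0, 0, 1]])

def scale(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]])

def rotate_about(theta_deg, xr, yr):
    """Translate pivot to origin, rotate, translate back."""
    return translate(xr, yr) @ rotate(theta_deg) @ translate(-xr, -yr)

def scale_about(sx, sy, xf, yf):
    """Scaling with respect to a fixed point (xf, yf)."""
    return translate(xf, yf) @ scale(sx, sy) @ translate(-xf, -yf)

# Rotating the pivot itself leaves it unchanged:
print(np.round(rotate_about(30, 4, 3) @ np.array([4, 3, 1]), 2))  # -> [4. 3. 1.]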
Reflection
Reflection about the x axis:
x'     1   0  0     x
y'  =  0  -1  0  *  y
1      0   0  1     1
ie. x'=x; y'=-y
Reflection about the y axis:
x'    -1  0  0     x
y'  =  0  1  0  *   y
1      0  0  1     1
ie. x'=-x; y'=y
Reflection about the origin (equivalent to a rotation of 180 degrees):
x'    -1   0  0     x
y'  =  0  -1  0  *  y
1      0   0  1     1
ie. x'=-x; y'=-y
Reflection about the line y = x:
x'     0  1  0     x
y'  =  1  0  0  *  y
1      0  0  1     1
ie. x'=y; y'=x
Reflection about the line y = -x:
x'     0  -1  0     x
y'  =  -1  0  0  *  y
1      0   0  1     1
ie. x'=-y; y'=-x
Shear
An x-direction shear relative to the x axis, with shearing parameter shx, is specified by the
matrix:
x'     1  shx  0     x
y'  =  0  1    0  *  y
1      0  0    1     1
ie. x'=x+shx*y; y'=y
Exercise: Think of a y-direction shear, with a shearing parameter shy, relative to the y-axis.
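A small NumPy illustration of the x-direction shear (the unit-square example is our own):

import numpy as np

def shear_x(shx):
    """x-direction shear relative to the x axis: x' = x + shx*y, y' = y."""
    return np.array([[1, shx, 0],
                     [0, 1,   0],
                     [0, 0,   1]])

# A unit square is sheared into a parallelogram; y values are unchanged.
square = np.array([[0, 1, 1, 0],
                   [0, 0, 1, 1],
                   [1, 1, 1, 1]])
print(shear_x(2) @ square)   # x-coordinates of the top edge shift by 2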
UNIT-4
2-Dimensional Viewing
4.1 Images on the Screen
All objects in the real world have size. We use a unit of measure to describe both the size of an
object as well as the location of the object in the real world. For example, meters can be used to
specify both size and distance. When showing an image of an object on the screen, we use a
screen coordinate system that defines the location of the object in the same relative position as in
the real world. After we select the screen coordinate system, we convert the picture into this
coordinate system so that it can be displayed on the screen.
A Viewport is the section of the screen where the images encompassed by the window on the
world coordinate system will be drawn. A coordinate transformation is required to display the
image, encompassed by the window, in the viewport. The viewport uses the screen coordinate
system so this transformation is from the world coordinate system to the screen coordinate
system.
When a window is "placed" on the world, only certain objects and parts of objects can be seen.
Points and lines which are outside the window are "cut off" from view. This process of "cutting
off" parts of the image of the world is called Clipping. In clipping, we examine each line to
determine whether or not it is completely inside the window, completely outside the window, or
crosses a window boundary. If inside the window, the line is displayed. If outside the
window,the lines and points are not displayed. If a line crosses the boundary, we must determine
the point of intersection and display only the part which lies inside the window.
As you proceed around the window, extending each edge and defining an inside half-space and
an outside half-space, nine regions are created - the eight "outside" regions and the one "inside"
region. Each of the nine regions associated with the window is assigned a 4-bit code to identify
the region. Each bit in the code is set to either 1 (true) or 0 (false). If the region is to the left of
the window, the first bit of the code is set to 1. If the region is to the right of the window,
the second bit of the code is set to 1. If below the window, the third bit is set, and if above,
the fourth bit is set. The 4 bits in the code then identify each of the nine regions as shown below.
For any endpoint (x, y) of a line, the code that identifies in which region the endpoint lies can
be determined. The code's bits are set according to the following conditions:
The sequence for reading the codes' bits is LRBT (Left, Right, Bottom, Top).
Once the codes for each endpoint of a line are determined, the logical AND operation of the
codes determines if the line is completely outside of the window. If the logical AND of the
endpoint codes is not zero, the line can be trivially rejected. For example, if an endpoint had a
code of 1001 while the other endpoint had a code of 1010, the logical AND would be 1000
which indicates the line segment lies outside of the window. On the other hand, if the endpoints
had codes of 1001 and 0110, the logical AND would be 0000, and the line could not be trivially
rejected.
The logical OR of the endpoint codes determines if the line is completely inside the window. If
the logical OR is zero, the line can be trivially accepted. For example, if the endpoint codes are
0000 and 0000, the logical OR is 0000 - the line can be trivially accepted. If the endpoint codes
are 0000 and 0110, the logical OR is 0110 and the line cannot be trivially accepted.
Algorithm
The Cohen-Sutherland algorithm uses a divide-and-conquer strategy. The line segment's
endpoints are tested to see if the line can be trivially accepted or rejected. If the line cannot be
trivially accepted or rejected, an intersection of the line with a window edge is determined and
the trivial reject/accept test is repeated. This process is continued until the line is accepted.
To perform the trivial acceptance and rejection tests, we extend the edges of the window to
divide the plane of the window into the nine regions. Each end point of the line segment is
then assigned the code of the region in which it lies.
1. Given a line segment with endpoints P1 = (x1, y1) and P2 = (x2, y2):
2. Compute the 4-bit codes for each endpoint.
If both codes are 0000 (bitwise OR of the codes yields 0000), the line lies
completely inside the window: pass the endpoints to the draw routine.
If both codes have a 1 in the same bit position (bitwise AND of the codes is not 0000),
the line lies completely outside the window: it can be trivially rejected.
3. If a line cannot be trivially accepted or rejected, at least one of the two endpoints must lie
outside the window and the line segment crosses a window edge. This line must
be clipped at the window edge before being passed to the drawing routine.
4. Examine one of the endpoints, say P1. Read P1's 4-bit code in order:
Left-to-Right, Bottom-to-Top.
5. When a set bit (1) is found, compute the intersection I of the corresponding window edge
with the line from P1 to P2. Replace P1 with I and repeat the algorithm.
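A Python sketch of the Cohen-Sutherland algorithm as described, with region-code bits in the LRBT order used above (function names and the None-for-rejected convention are ours):

# Region-code bits in LRBT order: Left, Right, Bottom, Top.
LEFT, RIGHT, BOTTOM, TOP = 8, 4, 2, 1

def out_code(x, y, xmin, xmax, ymin, ymax):
    code = 0
    if x < xmin: code |= LEFT
    elif x > xmax: code |= RIGHT
    if y < ymin: code |= BOTTOM
    elif y > ymax: code |= TOP
    return code

def cohen_sutherland(x1, y1, x2, y2, xmin, xmax, ymin, ymax):
    """Return the clipped segment, or None if it lies outside the window."""
    c1 = out_code(x1, y1, xmin, xmax, ymin, ymax)
    c2 = out_code(x2, y2, xmin, xmax, ymin, ymax)
    while True:
        if c1 | c2 == 0:                 # trivial accept
            return (x1, y1, x2, y2)
        if c1 & c2 != 0:                 # trivial reject
            return None
        # Pick an endpoint outside the window and clip at one edge.
        c = c1 if c1 else c2
        if c & LEFT:     x, y = xmin, y1 + (y2 - y1) * (xmin - x1) / (x2 - x1)
        elif c & RIGHT:  x, y = xmax, y1 + (y2 - y1) * (xmax - x1) / (x2 - x1)
        elif c & BOTTOM: y, x = ymin, x1 + (x2 - x1) * (ymin - y1) / (y2 - y1)
        else:            y, x = ymax, x1 + (x2 - x1) * (ymax - y1) / (y2 - y1)
        if c == c1:
            x1, y1 = x, y
            c1 = out_code(x1, y1, xmin, xmax, ymin, ymax)
        else:
            x2, y2 = x, y
            c2 = out_code(x2, y2, xmin, xmax, ymin, ymax)

print(cohen_sutherland(-5, 5, 15, 5, 0, 10, 0, 10))  # -> (0, 5.0, 10, 5.0)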
Let P(x1,y1), Q(x2,y2) be the line which we want to study. The parametric equation of the
line segment gives x-values and y-values for every point in terms of a parameter t that
ranges from 0 to 1. The equations are
x = x1 + (x2 - x1) * t
y = y1 + (y2 - y1) * t
We can see that when t = 0, the point computed is P(x1,y1); and when t = 1, the point computed
is Q(x2,y2).
Algorithm
1. Set tmin = 0 and tmax = 1.
2. Calculate the values of tL, tR, tT, and tB (the t-values at which the line crosses the left,
right, top and bottom window edges).
o If a t-value is outside the range [tmin, tmax], ignore it and go to the next edge;
o otherwise classify the t-value as an entering or exiting value (using the inner product to
classify);
o if t is an entering value, set tmin = t; if t is an exiting value, set tmax = t.
3. If tmin < tmax, then draw a line from (x1 + dx*tmin, y1 + dy*tmin) to (x1 + dx*tmax, y1
+ dy*tmax).
4. If the line crosses over the window, (x1 + dx*tmin, y1 + dy*tmin) and (x1 +
dx*tmax, y1 + dy*tmax) are the intersections between the line and the window edges.
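A Python sketch of this parametric (Liang-Barsky style) clipping; here the entering/exiting classification uses the sign of p rather than an explicit inner product, which is equivalent for an axis-aligned window (names are illustrative):

def liang_barsky(x1, y1, x2, y2, xmin, xmax, ymin, ymax):
    """Parametric line clipping; returns the clipped segment or None."""
    dx, dy = x2 - x1, y2 - y1
    t_min, t_max = 0.0, 1.0
    # (p, q) pairs for the left, right, bottom and top window edges.
    for p, q in ((-dx, x1 - xmin), (dx, xmax - x1),
                 (-dy, y1 - ymin), (dy, ymax - y1)):
        if p == 0:                 # line parallel to this edge
            if q < 0:
                return None        # outside the edge: reject
            continue
        t = q / p
        if p < 0:                  # entering value
            t_min = max(t_min, t)
        else:                      # exiting value
            t_max = min(t_max, t)
    if t_min < t_max:
        return (x1 + dx * t_min, y1 + dy * t_min,
                x1 + dx * t_max, y1 + dy * t_max)
    return None

print(liang_barsky(-5, 5, 15, 5, 0, 10, 0, 10))  # -> (0.0, 5.0, 10.0, 5.0)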
When clipping a polygon against a window boundary (as in Sutherland-Hodgman polygon
clipping), each polygon edge is handled by one of four cases:
a. Edges that are totally inside the clip window - add the second (inside) vertex point.
b. Edges that are leaving the clip window - add the intersection point as a vertex.
c. Edges that are entirely outside the clip window - add nothing to the vertex output list.
d. Edges that are entering the clip window - save the intersection and inside points as
vertices.
The location of the intersection of the edge with the left side of the window is:
i. IX = xmin
ii. IY = slope*(xmin-x1) + y1, where the slope = (y2-y1)/(x2-x1)
The location of the intersection of the edge with the right side of the window is:
i. IX = xmax
ii. IY = slope*(xmax-x1) + y1, where the slope = (y2-y1)/(x2-x1)
The intersection of the polygon's edge with the top side of the window is:
i. IX = x1 + (ymax-y1)/slope
ii. IY = ymax
Finally, the intersection of the edge with the bottom side of the window is:
i. IX = x1 + (ymin-y1)/slope
ii. IY = ymin
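A Python sketch of these cases for a single boundary (the left edge); a full Sutherland-Hodgman clipper would apply the same routine against the right, top, and bottom edges in turn (names and the vertex-list representation are ours):

def clip_polygon_left(vertices, xmin):
    """Clip a polygon against one boundary (here, the left edge x = xmin),
    applying the four edge cases above; repeat for the other three edges."""
    output = []
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        inside1, inside2 = x1 >= xmin, x2 >= xmin
        if inside1 and inside2:          # case a: wholly inside
            output.append((x2, y2))
        elif inside1 and not inside2:    # case b: leaving the window
            slope = (y2 - y1) / (x2 - x1)
            output.append((xmin, slope * (xmin - x1) + y1))
        elif not inside1 and inside2:    # case d: entering the window
            slope = (y2 - y1) / (x2 - x1)
            output.append((xmin, slope * (xmin - x1) + y1))
            output.append((x2, y2))
        # case c: wholly outside -> add nothing
    return output

print(clip_polygon_left([(-5, 0), (5, 0), (5, 10), (-5, 10)], 0))
# -> [(0, 0.0), (5, 0), (5, 10), (0, 10.0)]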
UNIT-5
3D Object Representations
Methods:
Octree Encoding
Classification:
Boundary Representations (B-reps) eg. Polygon facets and spline patches
Space-partitioning representations eg. Octree Representation
Objects may also be associated with other properties, such as mass and volume, so as to
determine their response to stress and temperature, etc.
Every vertex is listed as an endpoint for at least two edges.
The plane coefficients A, B, C, D can be computed from three non-collinear points
(x1,y1,z1), (x2,y2,z2), (x3,y3,z3) on the surface as the determinants:
    | 1 y1 z1 |        | x1 1 z1 |        | x1 y1 1 |          | x1 y1 z1 |
A = | 1 y2 z2 |    B = | x2 1 z2 |    C = | x2 y2 1 |    D = - | x2 y2 z2 |
    | 1 y3 z3 |        | x3 1 z3 |        | x3 y3 1 |          | x3 y3 z3 |
Then, the plane equation of the form Ax+By+Cz+D=0 has the property that, if we substitute
any arbitrary point (x,y,z) into it:
Ax + By + Cz + D < 0 implies that the point (x,y,z) is inside the surface, and
Ax + By + Cz + D > 0 implies that the point (x,y,z) is outside the surface.
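A Python sketch that expands the determinants above and applies the sign test (function names are ours; the sample plane is illustrative):

def plane_coefficients(p1, p2, p3):
    """Compute A, B, C, D for the plane through three vertices."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    a = y1 * (z2 - z3) + y2 * (z3 - z1) + y3 * (z1 - z2)
    b = z1 * (x2 - x3) + z2 * (x3 - x1) + z3 * (x1 - x2)
    c = x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)
    d = (-x1 * (y2 * z3 - y3 * z2) - x2 * (y3 * z1 - y1 * z3)
         - x3 * (y1 * z2 - y2 * z1))
    return a, b, c, d

def side_of_plane(coeffs, point):
    """Negative: inside the surface; positive: outside."""
    a, b, c, d = coeffs
    x, y, z = point
    return a * x + b * y + c * z + d

coeffs = plane_coefficients((1, 0, 0), (0, 1, 0), (0, 0, 1))  # plane x+y+z=1
print(side_of_plane(coeffs, (0, 0, 0)))   # -1: the origin is on the inside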
Polygon Meshes
Common types of polygon meshes are triangle strip and quadrilateral mesh.
Curved Surfaces
1. Regular curved surfaces can be generated as
- Quadric Surfaces, eg. Sphere, Ellipsoid, or
- Superquadrics, eg. Superellipsoids
These surfaces can be represented by simple parametric equations, e.g. for the superellipsoid:
x = rx cos^s1(φ) cos^s2(θ),   -π/2 <= φ <= π/2
y = ry cos^s1(φ) sin^s2(θ),   -π <= θ <= π
z = rz sin^s1(φ)
where s1, s2, rx, ry, and rz are constants (s1 = s2 = 1 gives an ordinary ellipsoid). By varying
the values of φ and θ, points on the surface can be computed.
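A small Python sketch that evaluates these parametric equations (the signed-power helper is an implementation detail we add so that non-integer exponents s1, s2 behave sensibly):

import math

def superellipsoid_point(rx, ry, rz, s1, s2, phi, theta):
    """Point on a superellipsoid for latitude phi and longitude theta."""
    def spow(base, exp):
        # Signed power keeps the sign of cos/sin for non-integer exponents.
        return math.copysign(abs(base) ** exp, base)
    x = rx * spow(math.cos(phi), s1) * spow(math.cos(theta), s2)
    y = ry * spow(math.cos(phi), s1) * spow(math.sin(theta), s2)
    z = rz * spow(math.sin(phi), s1)
    return x, y, z

# s1 = s2 = 1 gives an ordinary ellipsoid; phi = theta = 0 -> (rx, 0, 0).
print(superellipsoid_point(2, 1, 1, 1, 1, 0.0, 0.0))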
Sweep Representations
Sweep representations mean sweeping a 2D surface in 3D space to create an object.
However, the objects created by this method are usually converted into polygon meshes
and/or parametric surfaces before storing.
A Translational Sweep:
A Rotational Sweep:
Other variations:
- We can specify special path for the sweep as some curve function.
- We can vary the shape or size of the cross section along the sweep path.
We can also vary the orientation of the cross section relative to the sweep path.
Unit-6
Three Dimensional Transformations:
Methods for geometric transformations and object modelling in 3D are extended from 2D
methods by including considerations for the z coordinate.
Basic geometric transformations are: Translation, Rotation, Scaling
Translation:
x'     1  0  0  tx     x
y'  =  0  1  0  ty  *  y
z'     0  0  1  tz     z
1      0  0  0  1      1
Scaling with respect to the origin:
x'     sx  0   0   0     x
y'  =  0   sy  0   0  *  y
z'     0   0   sz  0     z
1      0   0   0   1     1
Rotation about an axis that is parallel to one of the coordinate axes:
Step 1. Translate the object so that the rotation axis coincides with the parallel coordinate axis.
Step 2. Perform the specified rotation about that axis.
Step 3. Translate the object so that the rotation axis is moved back to its original position.
General 3D Rotations
Step 1. Translate the object so that the rotation axis passes through the coordinate origin.
Step 2. Rotate the object so that the axis of rotation coincides with one of the coordinate axes.
Step 3. Perform the specified rotation about that coordinate axis.
Step 4. Rotate the object so that the rotation axis is brought back to its original orientation.
Step 5. Translate the object so that the rotation axis is brought back to its original position.
The viewing pipeline:
Viewing Transformation -> Viewing Coordinates -> Projection Transformation ->
Projection Coordinates -> Workstation Transformation -> Device Coordinates
Modelling Transformation and Viewing Transformation can be done by 3D transformations.
The viewing-coordinate system is used in graphics packages as a reference for specifying the
observer viewing position and the position of the projection plane. Projection operations convert
the viewing-coordinate description (3D) to coordinate positions on the projection plane (2D).
(Usually combined with clipping, visible-surface identification, and surface rendering.)
Workstation transformation maps the coordinate positions on the projection plane to the
output device.
6.6 Projections
Projection operations convert the viewing-coordinate description (3D) to coordinate positions on
the
projection plane (2D). There are 2 basic projection methods:
1. Parallel Projection transforms object positions to the view plane along parallel lines.
A parallel projection preserves relative proportions of objects. Accurate views of the various
sides of an object are obtained with a parallel projection, but it does not give a realistic
representation.
2. Perspective Projection transforms object positions to the view plane while converging to a
center
point of projection. Perspective projection produces realistic views but does not preserve relative
proportions. Projections of distant objects are smaller than the projections of objects of the same
size that are closer to the
projection plane.
Orthographic parallel projections are done by projecting points along parallel lines that are
perpendicular to the projection plane.
Oblique projections are obtained by projecting along parallel lines that are NOT perpendicular
to the projection plane. Some special orthographic parallel projections involve Plan View (top
projection), Side Elevations, and Isometric Projection:
After the perspective transformation, the view volume in the shape of a frustum becomes a
regular parallelepiped. The transformation equations are shown as follows and are applied to
every vertex of each object:
x' = x * (d/z),
y' = y * (d/z),
z' = z
Where (x,y,z) is the original position of a vertex, (x',y',z') is the transformed position of the
vertex, and d is the distance of the image plane from the center of projection. Note that:
Perspective transformation is different from perspective projection: Perspective projection
projects a
3D object onto a 2D plane perspectively. Perspective transformation converts a 3D object into a
deformed 3D object. After the transformation, the depth value of an object remains unchanged.
Before the perspective transformation, all the projection lines converge to the center of
projection.
After the transformation, all the projection lines are parallel to each other. Finally we can apply
parallel projection to project the object onto a 2D image plane. Perspective Projection =
Perspective Transformation + Parallel Projection
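The two stages can be sketched in a few lines of Python (names are ours):

def perspective_transform(vertex, d):
    """Perspective transformation: x and y are scaled by d/z, depth is kept."""
    x, y, z = vertex
    return (x * d / z, y * d / z, z)

def parallel_project(vertex):
    """Parallel (orthographic) projection onto the image plane: drop the depth."""
    x, y, _ = vertex
    return (x, y)

# Perspective projection = perspective transformation + parallel projection.
print(parallel_project(perspective_transform((4.0, 2.0, 8.0), 2.0)))  # (1.0, 0.5)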
Some facts:
Perspective effects depend on the positioning of the center point of projection. If it is close to the
view plane, perspective effects are emphasized, ie. closer objects will appear larger than more
distant
objects of the same size. The projected size of an object is also affected by the relative position
of the object and the view plane.
'Viewing' a static view:
The view plane is usually placed at the viewing-coordinate origin and the center of projection is
positioned to obtain the amount of perspective desired.
'Viewing' an animation sequence:
Usually the center of projection point is placed at the viewing-coordinate origin and the view
plane is
placed in front of the scene. The size of the view window is adjusted to obtain the amount of
scene
desired. We move through the scene by moving the viewing reference frame (ie. the viewing
coordinate system).
6.8 Clipping
The purpose of 3D clipping is to identify and save all surface segments within the view volume
for display on the output device. All parts of objects that are outside the view volume are
discarded, and the computing time is thus saved. 3D clipping is based on 2D clipping.
We repeat the clipping process with the new polygon against the next border line of the clip
region. This clipping operation results in a polygon which is totally inside the clip region.
6.9 Hardware Implementations
Most graphics processes are now implemented in graphics chip sets. Hardware systems are
now designed to transform, clip, and project objects to the output device for either 3D or 2D
applications.
In a typical arrangement, each of the individual chips in a chip set is responsible for geometric
transformations, projection transformation, clipping, visible-surface identification, surface-shading
procedures, octree representation processing, or ray tracing, etc., in a pipelined fashion.
Unit-7
Visible-Surface Detection Methods
More information about Modelling and Perspective Viewing:
Before going to visible surface detection, we first review and discuss the following: objects in
a scene are first transformed into the viewing coordinate system. Afterwards, objects in the
scene are further perspectively transformed. The effect of such an operation is that after the
transformation, the view volume in the shape of a frustum becomes a regular parallelepiped.
The transformation equations are shown as follows and are applied to every vertex of each
object:
x' = x * (d/z),
y' = y * (d/z),
z' = z
Where (x,y,z) is the original position of a vertex, (x',y',z') is the transformed position of the
vertex, and d is the distance of the image plane from the center of projection.
7.3 Clipping:
In 3D clipping, we remove all objects and parts of objects which are
outside of the view volume. Since we have done perspective transformation, the 6 clipping
planes,
which form the parallelepiped, are parallel to the 3 axes and hence clipping is straightforward.
Hence the clipping operation can be performed in 2D. For example, we may first perform the
clipping operations on the x-y plane and then on the x-z plane.
Characteristics of approaches:
- Require large memory size?
- Require long processing time?
- Applicable to which types of objects?
Considerations:
- Complexity of the scene
- Type of objects in the scene
- Available equipment
- Static or animated?
Classification of Visible-Surface Detection Algorithms:
Begin
1. Determine the object closest to the viewer that is pierced by the projector through the
pixel
2. Draw the pixel in the object colour.
End
- For each pixel, examine all n objects to determine the one closest to the viewer.
- If there are p pixels in the image, complexity depends on n and p ( O(np) ).
- Accuracy of the calculation is bounded by the display resolution.
- A change of display resolution requires re-calculation
Application of Coherence in Visible Surface Detection Methods:
- Making use of the results calculated for one part of the scene or image for other nearby parts.
- Coherence is the result of local similarity
- As objects have continuous spatial extent, object properties vary smoothly within a small local
region in the scene. Calculations can then be made incremental.
Types of coherence:
1. Object Coherence:
Visibility of an object can often be decided by examining a circumscribing solid (which may be
of simple form, e.g. a sphere or a polyhedron).
2. Face Coherence:
Surface properties computed for one part of a face can be applied to adjacent parts after small
incremental modification. (eg. If the face is small, we sometimes can assume if one part of the
face is
invisible to the viewer, the entire face is also invisible).
3. Edge Coherence:
The visibility of an edge changes only when it crosses another edge, so if one segment of a
non-intersecting edge is visible, the entire edge is also visible.
4. Scan line Coherence:
Line or surface segments visible in one scan line are also likely to be visible in adjacent scan
lines.
Consequently, the image of a scan line is similar to the image of adjacent scan lines.
5. Area and Span Coherence:
A group of adjacent pixels in an image is often covered by the same visible object. This
coherence is
based on the assumption that a small enough region of pixels will most likely lie within a single
polygon. This reduces the computation effort in searching for those polygons which contain a
given pixel position.
As surfaces are processed, the image buffer is used to store the color values of each
pixel position and the z-buffer is used to store the depth values for each (x,y) position.
Algorithm:
1. Initially each pixel of the z-buffer is set to the maximum depth value (the depth of the
back clipping plane).
2. The image buffer is set to the background color.
3. Surfaces are rendered one at a time.
4. For the first surface, the depth value of each pixel is calculated.
5. If this depth value is smaller than the corresponding depth value in the z-buffer (ie. it is closer to
the view point), both the depth value in the z-buffer and the color value in the image buffer are
replaced by the depth value and the color value of this surface calculated at the pixel position.
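A Python sketch of this depth-buffer loop; the depth_fn-per-surface representation is our own simplification standing in for real surface rasterization:

INFINITY = float("inf")

def zbuffer_render(width, height, surfaces, background):
    """surfaces: list of (color, depth_fn) where depth_fn(x, y) returns the
    surface depth at a pixel, or None if the surface does not cover it."""
    depth = [[INFINITY] * width for _ in range(height)]   # back clipping plane
    image = [[background] * width for _ in range(height)]
    for color, depth_fn in surfaces:        # render one surface at a time
        for y in range(height):
            for x in range(width):
                z = depth_fn(x, y)
                # Keep the surface only where it is closer to the viewer.
                if z is not None and z < depth[y][x]:
                    depth[y][x] = z
                    image[y][x] = color
    return image

# Two constant-depth "surfaces"; the nearer one (z = 1) wins everywhere.
img = zbuffer_render(4, 3, [("red", lambda x, y: 5.0),
                            ("blue", lambda x, y: 1.0)], "black")
print(img)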
- Step 2 is not efficient because not all polygons necessarily intersect with the scan line.
- Depth calculation in 2a is not needed if only 1 polygon in the scene is mapped onto a segment
of
the scan line.
- To speed up the process:
Recall the basic idea of polygon filling: For each scan line crossing a polygon, this algorithm
locates the intersection points of the scan line with the polygon edges. These intersection points
are sorted from left to right. Then, we fill the pixels between each intersection pair.
With similar idea, we fill every scan line span by span. When polygon overlaps on a scan line,
we perform depth calculations at their edges to determine which polygon should be visible at
which span. Any number of overlapping polygon surfaces can be processed with this method.
Depth calculations are performed only when there are polygons overlapping. We can take
advantage of coherence along the scan lines as we pass from one scan line to the next. If there
are no changes in the pattern of the intersections of polygon edges with the successive scan
lines, it is not necessary to do depth calculations. This works only if surfaces do not cut
through or otherwise
cyclically overlap each other. If cyclic overlap happens, we can divide the surfaces to eliminate
the overlaps.
- The algorithm is applicable to non-polygonal surfaces (using a surface table and an
active-surface table; the z-value is computed from the surface representation).
- The memory requirement is less than that for the depth-buffer method.
- A lot of sorting is done on x-y coordinates and on depths.
If no depth overlaps occur, S can be scan converted. This process is repeated for the next
surface in the list. However, if depth overlap is detected, we need to make some additional
comparisons to determine whether any of the surfaces should be reordered.
- the subdivision is repeated until the half-space contains a single polygon (leaf node of the tree)
- the same is done for the back space of the polygon.
Discussion:
- Back face removal is achieved by not displaying a polygon if the viewer is located in its back
half-space
- It is an object space algorithm (sorting and intersection calculations are done in object
space precision)
- If the view point changes, the BSP needs only minor re-arrangement.
- A new BSP tree is built if the scene changes
- The algorithm displays polygon back to front (cf. Depth-sort)
The procedure to determine whether we should subdivide an area into smaller rectangle is:
1. We first classify each of the surfaces, according to their relations with the area:
Surrounding surface - a single surface completely encloses the area.
Overlapping surface - a single surface that is partly inside and partly outside the area.
Inside surface - a single surface that is completely inside the area.
Outside surface - a single surface that is completely outside the area.
To improve the speed of classification, we can make use of the bounding rectangles of
surfaces for early confirmation or rejection of the type to which a surface belongs.
2. Check the result from step 1: if any of the following conditions is true, then no subdivision
of this area is needed.
a. All surfaces are outside the area.
b. Only one inside, overlapping, or surrounding surface is in the area.
c. A surrounding surface obscures all other surfaces within the area boundaries.
For cases b and c, the color of the area can be determined from that single surface.
4,5,6,7. Similarly the nodes for the front four suboctants of octant 0 are visited before the nodes
for the four back suboctants. When a colour is encountered in an octree node, the corresponding
pixel in the frame buffer is painted only if no previous color has been
loaded into the same pixel position. In most cases, both a front and a back octant must be
considered in determining the correct color values for a quadrant. But
- If the front octant is homogeneously filled with some color, we do not process the back octant.
- If the front is empty, it is necessary only to process the rear octant.
- If the front octant has heterogeneous regions, it has to be subdivided and the sub-octants are
handled recursively.
Unit-8
Computer Animation
8.1 Overview
Motion can bring the simplest of characters to life. Even simple polygonal shapes can convey
a number of human qualities when animated: identity, character, gender, mood, intention,
emotion, and so on.
In general, animation may be achieved by specifying a model with n parameters that identify
degrees of freedom that an animator may be interested in such as
polygon vertices,
spline control,
joint angles,
muscle contraction,
camera parameters, or
color.
With n parameters, this results in a vector ~q in n-dimensional state space. Parameters may be
varied to generate animation. A model's motion is a trajectory through its state space or a set of
motion curves for each parameter over time, i.e. ~q(t), where t is the time of the current frame.
Every animation technique reduces to specifying the state space trajectory.
The basic animation algorithm is then: for t=t1 to tend: render(~q(t)).
Modeling and animation are loosely coupled. Modeling describes control values and
their actions.
Animation describes how to vary the control values. There are a number of animation
techniques,
including the following:
User driven animation
Keyframing
Motion capture
Procedural animation
Physical simulation
Particle systems
Crowd behaviors
Data-driven animation
8.2 Keyframing
Keyframing is an animation technique where motion curves are interpolated through states at
given times, (~q1, ..., ~qT), called keyframes, specified by a user.
Catmull-Rom splines are well suited for keyframe animation because they pass through their
control points.
Pros:
Very expressive
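A sketch of Catmull-Rom interpolation for a single motion-curve parameter in Python (the blending polynomial is the standard one with tension 0.5; the keyframe values are illustrative):

def catmull_rom(q0, q1, q2, q3, u):
    """Catmull-Rom interpolation between keyframes q1 and q2 (u in [0, 1]).
    The curve passes through every keyframe, as noted above."""
    return 0.5 * ((2 * q1) +
                  (-q0 + q2) * u +
                  (2 * q0 - 5 * q1 + 4 * q2 - q3) * u ** 2 +
                  (-q0 + 3 * q1 - 3 * q2 + q3) * u ** 3)

# One parameter of the state vector q(t), sampled between keyframes q1 and q2.
keys = [0.0, 1.0, 3.0, 4.0]
frames = [catmull_rom(*keys, u / 4) for u in range(5)]
print(frames)   # starts at 1.0 (keyframe q1) and ends at 3.0 (keyframe q2)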
8.3 Kinematics
Kinematics describe the properties of shape and motion independent of physical forces that
cause motion. Kinematic techniques are used often in keyframing, with an animator either setting
joint parameters explicitly with forward kinematics or specifying a few key joint orientations
and having the rest computed automatically with inverse kinematics.
8.3.1 Forward Kinematics
With forward kinematics, a point p is positioned by p = f(θ) where θ is a state vector
(θ1, θ2, ..., θn) specifying the position, orientation, and rotation of all joints.
For the example of a two-link arm with link lengths l1 and l2,
p = (l1 cos(θ1) + l2 cos(θ1 + θ2), l1 sin(θ1) + l2 sin(θ1 + θ2)).
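The two-link example translates directly into Python (names are ours):

import math

def forward_kinematics(l1, l2, theta1, theta2):
    """End-effector position of a two-link arm (angles in radians)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Fully extended along the x axis: p = (l1 + l2, 0).
print(forward_kinematics(1.0, 1.0, 0.0, 0.0))   # (2.0, 0.0)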
Inverse Kinematics
With inverse kinematics, a user specifies the position of the end effector, p, and the algorithm
has to evaluate the required θ given p. That is, θ = f^-1(p).
Usually, numerical methods are used to solve this problem, as it is often nonlinear and either
underdetermined or overdetermined. A system is underdetermined when there is not a unique
solution, such as when there are more unknowns than equations. A system is overdetermined
when it is inconsistent and has no solutions.
Extra constraints are necessary to obtain unique and stable solutions. For example, constraints
may be placed on the range of joint motion and the solution may be required to minimize the
kinetic energy of the system.
With enough cameras, it is possible to reconstruct the position of the markers accurately in 3D.
In practice, this is a laborious process. Markers tend to be hidden from cameras and 3D
reconstructions fail, requiring a user to manually fix such drop outs. The resulting motion curves
are often noisy, requiring yet more effort to clean up the motion data to more accurately match
what an animator wants. Despite the labor involved, motion capture has become a popular
technique in the movie and game industries, as it allows fairly accurate animations to be created
from the motion of actors. However, this is limited by the density of markers that can be placed
on a single actor. Faces, for example, are still very difficult to convincingly reconstruct.
Pros:
Captures specific style of real actors
Cons:
Often not expressive enough
Time consuming and expensive
Difficult to edit
Uses:
Character animation
Medicine, such as kinesiology and biomechanics