Comp Graph UNIT 3
3. CLIPPING
▪ To achieve a particular viewing effect in an application program, we can design our own clipping
window with any shape, size, and orientation we choose.
▪ But clipping a scene using a concave polygon or a clipping window with nonlinear boundaries requires
more processing than clipping against a rectangle.
▪ Rectangular clipping windows in standard position are easily defined by giving the coordinates of two
opposite corners of each rectangle.
▪ Some systems provide options for selecting a rotated, two-dimensional viewing frame, but usually the
clipping window must be specified in world coordinates.
i) Viewing-Coordinate Clipping Window
• A general approach to the two-dimensional viewing transformation is to set up a viewing coordinate
system within the world-coordinate frame.
• We choose an origin for a two-dimensional viewing-coordinate frame at some world position P0 =
(x0, y0), and we can establish the orientation using a world vector V that defines the y view direction.
• Vector V is called the two-dimensional view up vector.
• An alternative method for specifying the orientation of the viewing frame is to give a rotation angle
relative to either the x or y axis in the world frame.
• The first step in the transformation sequence is to translate the viewing origin to the world origin.
• Next, we rotate the viewing system to align it with the world frame.
• Given the orientation vector V, we can calculate the components of unit vectors v = (vx, vy) and u =
(ux, uy) for the y view and x view axes, respectively.
• where T is the translation matrix that takes the viewing origin P0 to the world origin, and R is the
rotation matrix that rotates the viewing frame of reference into coincidence with the world-coordinate
system.
• Figure 4 illustrates the steps in this coordinate transformation.
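The T-then-R composition described above can be sketched in Python. This is an illustrative helper, not a library routine; 3×3 homogeneous matrices are stored as row-major nested lists, and the x view axis u is taken perpendicular to the normalized view up vector v.

```python
import math

def matmul3(A, B):
    """Multiply two 3x3 matrices stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def world_to_viewing(x0, y0, Vx, Vy):
    """Build M = R . T mapping world coordinates into the viewing frame
    with origin P0 = (x0, y0) and view up vector V = (Vx, Vy)."""
    n = math.hypot(Vx, Vy)
    vx, vy = Vx / n, Vy / n          # unit y-view axis v
    ux, uy = vy, -vx                 # unit x-view axis u, perpendicular to v
    T = [[1, 0, -x0], [0, 1, -y0], [0, 0, 1]]   # translate P0 to world origin
    R = [[ux, uy, 0], [vx, vy, 0], [0, 0, 1]]   # align viewing axes with world axes
    return matmul3(R, T)
```

With V = (0, 1) and P0 at the origin, the result is simply the identity, since the viewing frame already coincides with the world frame.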
ii) World-Coordinate Clipping Window
• A routine for defining a standard, rectangular clipping window in world coordinates is typically
provided in a graphics-programming library.
• We simply specify two world-coordinate positions, which are then assigned to the two opposite corners
of a standard rectangle.
• Once the clipping window has been established, the scene description is processed through the viewing
routines to the output device.
• Thus, we simply rotate (and possibly translate) objects to a desired position and set up the clipping
window all in world coordinates.
▪ Solving these expressions for the viewport position (xv, yv), we have
▪ We could obtain the transformation from world coordinates to viewport coordinates with the following
sequence:
1. Scale the clipping window to the size of the viewport using a fixed-point position of (xwmin, ywmin).
2. Translate (xwmin, ywmin) to (xvmin, yvmin).
▪ The scaling transformation in step (1) can be represented with the two-dimensional matrix
▪ The two-dimensional matrix representation for the translation of the lower-left corner of the clipping
window to the lower-left viewport corner is
▪ And the composite matrix representation for the transformation to the normalized viewport is
▪ Similarly, after the clipping algorithms have been applied, the normalized square with edge length
equal to 2 is transformed into a specified viewport.
▪ This time, we get the transformation matrix from Equation 8 by substituting −1 for xwmin and ywmin
and substituting +1 for xwmax and ywmax:
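Putting the scale and translate steps together, the complete window-to-viewport mapping of a single point can be sketched as follows (an illustrative helper, not a library routine; each rectangle is given as a `(xmin, ymin, xmax, ymax)` tuple):

```python
def window_to_viewport(xw, yw, win, vp):
    """Map a world point (xw, yw) inside the clipping window `win` to
    the viewport `vp`, using the scale-then-translate sequence with
    fixed point (xwmin, ywmin)."""
    xwmin, ywmin, xwmax, ywmax = win
    xvmin, yvmin, xvmax, yvmax = vp
    sx = (xvmax - xvmin) / (xwmax - xwmin)   # scaling factors
    sy = (yvmax - yvmin) / (ywmax - ywmin)
    xv = xvmin + (xw - xwmin) * sx           # scale, then translate
    yv = yvmin + (yw - ywmin) * sy
    return xv, yv
```

Choosing equal aspect ratios for `win` and `vp` makes sx = sy, which is exactly the condition that preserves object proportions.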
▪ The last step in the viewing process is to position the viewport area in the display window.
▪ Typically, the lower-left corner of the viewport is placed at a coordinate position specified relative to
the lower-left corner of the display window.
▪ Figure 8 demonstrates the positioning of a viewport within a display window. As before, we maintain
the initial proportions of objects by choosing the aspect ratio of the viewport to be the same as the
clipping window. Otherwise, objects will be stretched or contracted in either the x or y direction when displayed.
▪ A possible ordering for the clipping window boundaries corresponding to the bit positions in the
Cohen-Sutherland endpoint region code.
▪ Thus, for this ordering, the rightmost position (bit 1) references the left clipping-window boundary,
and the leftmost position (bit 4) references the top window boundary.
▪ A value of 1 (or true) in any bit position indicates that the endpoint is outside that window border.
Similarly, a value of 0 (or false) in any bit position indicates that the endpoint is not outside (it is inside
or on) the corresponding window edge.
▪ Sometimes, a region code is referred to as an “out” code because a value of 1 in any bit position
indicates that the spatial point is outside the corresponding clipping boundary.
▪ The nine binary region codes for identifying the position of a line endpoint, relative to the clipping-
window boundaries.
▪ Bit values in a region code are determined by comparing the coordinate values (x, y) of an endpoint to
the clipping boundaries.
▪ Bit 1 is set to 1 if x < xwmin, and the other three bit values are determined similarly.
▪ Instead of using inequality testing, we can determine the values for a region-code more efficiently
using bit-processing operations and the following two steps:
(1) Calculate differences between endpoint coordinates and clipping boundaries.
(2) Use the resultant sign bit of each difference calculation to set the corresponding value in
the region code.
▪ For the ordering scheme shown in Figure 10, bit 1 is the sign bit of x − xw min; bit 2 is the sign bit of
xwmax − x; bit 3 is the sign bit of y − ywmin; and bit 4 is the sign bit of ywmax − y.
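The region code can be sketched with straightforward inequality tests (the sign-bit technique above is a bit-level optimization of these same comparisons). The bit masks follow the ordering of Figure 10: bit 1 = left, bit 2 = right, bit 3 = bottom, bit 4 = top. The function name is illustrative.

```python
def region_code(x, y, xwmin, ywmin, xwmax, ywmax):
    """Cohen-Sutherland region code for endpoint (x, y)."""
    code = 0
    if x < xwmin: code |= 1   # bit 1: left of window
    if x > xwmax: code |= 2   # bit 2: right of window
    if y < ywmin: code |= 4   # bit 3: below window
    if y > ywmax: code |= 8   # bit 4: above window
    return code
```

For example, an endpoint to the left of and below the window receives code 0101, matching the example in the text.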
▪ To determine a boundary intersection for a line segment, we can use the slope intercept form of the
line equation.
▪ Once we have established region codes for all line endpoints, we can quickly determine which lines are completely inside the clipping window and which are completely outside.
▪ Any lines that are completely contained within the window edges have a region code of 0000 for both
endpoints, and we save these line segments.
▪ Any line that has a region-code value of 1 in the same bit position for each endpoint is completely
outside the clipping rectangle, and we eliminate that line segment.
▪ As an example, a line that has a region code of 1001 for one endpoint and a code of 0101 for the other
endpoint is completely to the left of the clipping window, as indicated by the value of 1 in the first bit
position of each region code.
▪ We can perform the inside-outside tests for line segments using logical operators. When the or
operation between two endpoint region codes for a line segment is false (0000), the line is inside the
clipping window. Therefore, we save the line and proceed to test the next line in the scene description.
▪ When the and operation between the two endpoint region codes for a line is true (not 0000), the line
is completely outside the clipping window, and we can eliminate it from the scene description.
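The inside-outside tests map directly onto bitwise operators. These are illustrative helpers operating on the integer region codes of a line's two endpoints:

```python
def trivially_accept(code1, code2):
    """Line entirely inside the window: OR of the codes is 0000."""
    return (code1 | code2) == 0

def trivially_reject(code1, code2):
    """Line entirely outside one boundary: AND of the codes is nonzero."""
    return (code1 & code2) != 0
```

For the example above, codes 1001 and 0101 share bit 1, so the AND is nonzero and the line is rejected without any intersection calculations.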
▪ Lines that cannot be identified as completely inside or completely outside the clipping window by the
region-code tests are next checked for intersection with the window border lines. As shown in Figure 12,
line segments can intersect clipping boundary lines without entering the interior of the window.
▪ Therefore, several intersection calculations might be necessary to clip a line segment, depending on
the order in which we process the clipping boundaries. As we process each clipping-window edge, a
section of the line is clipped, and the remaining part of the line is checked against the other window
borders.
▪ We continue eliminating sections until either the line is totally clipped or the remaining part of the line
is inside the clipping window.
We assume that the window edges are processed in the following order: left, right, bottom, top. To
determine whether a line crosses a selected clipping boundary, we can check corresponding bit values in
the two endpoint region codes. If one of these bit values is 1 and the other is 0, the line segment crosses
that boundary.
o Figure 12 illustrates two line segments that cannot be identified immediately as completely inside or
completely outside the clipping window.
o The region codes for the line from P1 to P2 are 0100 and 1001. Thus, P1 is inside the left clipping
boundary and P2 is outside that boundary. We then calculate the intersection position P’2, and we clip
off the line section from P2 to P’2.
o The remaining portion of the line is inside the right border line, and so we next check the bottom
border. Endpoint P1 is below the bottom clipping edge and P’2 is above it, so we determine the
intersection position at this boundary (P’1).
o We eliminate the line section from P1 to P’1 and proceed to the top window edge. There we determine
the intersection position to be P”2.
o The final step is to clip off the section above the top boundary and save the interior segment from P’1
to P”2.
o For the second line, we find that point P3 is outside the left boundary and P4 is inside. Thus, we
calculate the intersection position P’3 and eliminate the line section from P3 to P’3.
o By checking region codes for the endpoints P’3 and P4, we find that the remainder of the line is below
the clipping window and can be eliminated as well.
o Figure 13 shows the four intersection positions that could be calculated for a line segment that is
processed against the clipping-window edges in the order left, right, bottom, top. Some of these
intersection points lie outside the window and are discarded; therefore, variations of this basic
approach have been developed in an effort to reduce the intersection calculations.
▪ For a line with endpoint coordinates (x0, y0) and (xend, yend), the y coordinate of the intersection point
with a vertical clipping border line can be obtained with the calculation
▪ where the x value is set to either xwmin or xwmax, and the slope of the line is calculated as m = (yend −
y0)/(xend − x0).
▪ Similarly, if we are looking for the intersection with a horizontal border, the x coordinate can be
calculated as
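These two intersection formulas can be sketched as follows (illustrative helpers; a vertical border is x = xwmin or x = xwmax, a horizontal border is y = ywmin or y = ywmax, and the line is assumed not to be parallel to the border being tested):

```python
def intersect_vertical(x0, y0, xend, yend, x_border):
    """Intersection of the segment with a vertical border x = x_border:
    y = y0 + m (x_border - x0)."""
    m = (yend - y0) / (xend - x0)
    return x_border, y0 + m * (x_border - x0)

def intersect_horizontal(x0, y0, xend, yend, y_border):
    """Intersection with a horizontal border y = y_border:
    x = x0 + (y_border - y0) / m."""
    m = (yend - y0) / (xend - x0)
    return x0 + (y_border - y0) / m, y_border
```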
• What we require is a procedure that will output one or more closed polylines for the boundaries of the
clipped fill area, so that the polygons can be scan-converted to fill the interiors with the assigned color
or pattern, as in Figure 20.
• We can process a polygon fill area against the borders of a clipping window using the same general
approach as in line clipping.
• We need to maintain a fill area as an entity as it is processed through the clipping stages.
• Thus, we can clip a polygon fill area by determining the new shape for the polygon as each clipping-
window edge is processed, as demonstrated in Figure 21.
• If the minimum and maximum coordinate values for the fill area are inside all four clipping boundaries,
the fill area is saved for further processing. If these coordinate extents are all outside any of the
clipping-window borders, we eliminate the polygon from the scene description (Figure 22).
• When we cannot identify a fill area as being completely inside or completely outside the clipping
window, we then need to locate the polygon intersection positions with the clipping boundaries.
• One way to implement convex-polygon clipping is to create a new vertex list at each clipping
boundary, and then pass this new vertex list to the next boundary clipper.
• The output of the final clipping stage is the vertex list for the clipped polygon.
▪ The selection of output vertices and edge-intersection points for each clipper is given as follows:
1. If the first input vertex is outside this clipping-window border and the second vertex is inside,
both the intersection point of the polygon edge with the window border and the second vertex
are sent to the next clipper.
2. If both input vertices are inside this clipping-window border, only the second vertex is sent
to the next clipper.
3. If the first vertex is inside this clipping-window border and the second vertex is outside, only
the polygon edge-intersection position with the clipping-window border is sent to the next
clipper.
4. If both input vertices are outside this clipping-window border, no vertices are sent to the next
clipper.
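One stage of this clipping pipeline can be sketched as a hypothetical Python helper. The callbacks `inside` and `intersect` are supplied per window border (the left-border example below is one such pair), and vertices are visited edge by edge around the polygon, implementing the four cases above:

```python
def clip_against_edge(vertices, inside, intersect):
    """One Sutherland-Hodgman clipper stage. `inside(p)` tests a vertex
    against this window border; `intersect(p, q)` returns the point where
    edge p->q crosses the border."""
    out = []
    for i in range(len(vertices)):
        p, q = vertices[i - 1], vertices[i]   # polygon edge from p to q
        if inside(q):
            if not inside(p):                 # case 1: entering the window
                out.append(intersect(p, q))
            out.append(q)                     # cases 1 and 2: keep second vertex
        elif inside(p):                       # case 3: leaving the window
            out.append(intersect(p, q))
        # case 4: both vertices outside -> output nothing
    return out

# Example clipper callbacks for a left border at x = 0:
inside_left = lambda p: p[0] >= 0.0

def isect_left(p, q):
    t = (0.0 - p[0]) / (q[0] - p[0])
    return (0.0, p[1] + t * (q[1] - p[1]))
```

Chaining four such stages (left, right, bottom, top), with each stage's output list fed to the next, yields the final clipped vertex list.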
▪ The last clipper in this series generates a vertex list that describes the final clipped fill area.
▪ Figure 25 provides an example of the Sutherland-Hodgman polygon clipping algorithm for a fill area
defined with the vertex set {1, 2, 3}.
▪ As soon as a clipper receives a pair of endpoints, it determines the appropriate output using the tests
illustrated in Figure 24. These outputs are passed in succession from the left clipper to the right, bottom,
and top clippers. The output from the top clipper is the set of vertices defining the clipped fill area. For
this example, the output vertex list is {1’ , 2, 2’, 2”}.
▪ When a concave polygon is clipped with the Sutherland-Hodgman algorithm, extraneous lines may be
displayed.
▪ This occurs when the clipped polygon should have two or more separate sections. But since there is
only one output vertex list, the last vertex in the list is always joined to the first vertex.
▪ There are several things we can do to display clipped concave polygons correctly.
▪ For one, we could split a concave polygon into two or more convex polygons and process each convex
polygon separately using the Sutherland- Hodgman algorithm.
▪ Another possibility is to modify the Sutherland- Hodgman method so that the final vertex list is
checked for multiple intersection points along any clipping-window boundary.
▪ If we find more than two vertex positions along any clipping boundary, we can separate the list of
vertices into two or more lists that correctly identify the separate sections of the clipped fill area.
▪ A third possibility is to use a more general polygon clipper that has been designed to process concave
polygons correctly.
3.6 Three-Dimensional Geometric Transformations
• Methods for geometric transformations in three dimensions are extended from two-dimensional
methods by including considerations for the z coordinate.
• A three-dimensional position, expressed in homogeneous coordinates, is represented as a four-element
column vector.
3.6.1 Three-Dimensional Translation
▪ A position P = (x, y, z) in three-dimensional space is translated to a location P’= (x’, y’, z’) by adding
translation distances tx, ty, and tz to the Cartesian coordinates of P:
▪ The following program segment illustrates construction of a translation matrix, given an input set of
translation parameters.
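The program segment itself is not reproduced in these notes; a minimal Python sketch (hypothetical helpers, with 4×4 homogeneous matrices as row-major nested lists) might look like:

```python
def translate3d(tx, ty, tz):
    """Construct a 4x4 translation matrix from translation distances."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def apply(M, p):
    """Transform a homogeneous column vector p = [x, y, z, 1] by M."""
    return [sum(M[i][j] * p[j] for j in range(4)) for i in range(4)]
```

Negating tx, ty, and tz gives the inverse matrix, so translating a point forward and back returns it to its original position.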
▪ The inverse of a three-dimensional translation matrix is obtained by negating the translation
distances tx, ty, and tz. This produces a translation in the opposite direction, and the product of a
translation matrix and its inverse is the identity matrix.
3.6.2 Three-Dimensional Rotation
▪ By convention, positive rotation angles produce counterclockwise rotations about a coordinate axis
(Figure 3).
▪ This agrees with our earlier discussion of rotations in two dimensions, where positive rotations in the
xy plane are counterclockwise about a pivot point (an axis that is parallel to the z axis).
▪ Parameter θ specifies the rotation angle about the z axis, and z-coordinate values are unchanged by
this transformation. In homogeneous-coordinate form, the three-dimensional z-axis rotation equations
are
▪ Thus, to obtain the x-axis and y-axis rotation transformations, we cyclically replace x with y, y with z,
and z with x, as illustrated in Figure 5.
▪ Substituting the cyclic permutation 7 into the z-axis rotation equations, we get the equations for an x-axis rotation:
▪ A cyclic permutation of coordinates in Equations 8 gives us the transformation equations for a y-axis
rotation:
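The three coordinate-axis rotation matrices can be sketched as follows (illustrative helpers, again using 4×4 row-major nested lists; note how each matrix follows from the previous one by the cyclic replacement x → y → z → x):

```python
import math

def rot_x(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]]

def rot_y(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]]

def rot_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def xform(M, p):
    """Apply a 4x4 matrix to a homogeneous column vector [x, y, z, 1]."""
    return [sum(M[i][j] * p[j] for j in range(4)) for i in range(4)]
```

As a check, a 90° z-axis rotation carries the point (1, 0, 0) onto (0, 1, 0), the expected counterclockwise result.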
1. Translate the object so that the rotation axis coincides with the parallel coordinate axis.
2. Perform the specified rotation about that axis.
3. Translate the object so that the rotation axis is moved back to its original position.
▪ The steps in this sequence are illustrated in Figure 8. A coordinate position P is transformed with the
sequence shown in this figure as
▪ When an object is to be rotated about an axis that is not parallel to one of the coordinate axes, we must
perform some additional transformations. We can accomplish the required rotation in five steps:
1. Translate the object so that the rotation axis passes through the coordinate origin.
2. Rotate the object so that the axis of rotation coincides with one of the coordinate axes.
3. Perform the specified rotation about the selected coordinate axis.
4. Apply inverse rotations to bring the rotation axis back to its original orientation.
5. Apply the inverse translation to bring the rotation axis back to its original spatial position.
▪ We can transform the rotation axis onto any one of the three coordinate axes. The z axis is often a
convenient choice, and we next consider a transformation sequence using the z-axis rotation matrix
(Figure 9).
▪ A rotation axis can be defined with two coordinate positions, as in Figure 10, or with one coordinate
point and direction angles (or direction cosines) between the rotation axis and two of the coordinate
axes.
▪ We assume that the rotation axis is defined by two points, as illustrated, and that the direction of
rotation is to be counter clockwise when looking along the axis from P2 to P1.
▪ The components of the rotation-axis vector are then computed as
▪ where the components a, b, and c are the direction cosines for the rotation axis:
▪ The first step in the rotation sequence is to set up the translation matrix that repositions the rotation
axis so that it passes through the coordinate origin, because we want a counterclockwise rotation
when viewing along the axis from P2 to P1 (Figure 10).
▪ Because rotation calculations involve sine and cosine functions, we can use standard vector operations
to obtain elements of the two rotation matrices.
▪ These two rotations are illustrated in Figure 12 for one possible orientation of vector u.
▪ A vector dot product can be used to determine the cosine term, and a vector cross product can be used
to calculate the sine term.
▪ Rotation of u around the x axis into the xz plane is accomplished by rotating u’ (the projection of u in
the yz plane) through angle α onto the z axis.
▪ If we represent the projection of u in the yz plane as the vector u’= (0, b, c), then the cosine of the
rotation angle α can be determined from the dot product of u’ and the unit vector uz along the z axis
Figure 13:
▪ Similarly, we can determine the sine of α from the cross-product of u and uz. The coordinate-
independent form of this cross-product is
▪ Equating the right sides of Equations 18 and 19, and noting that |uz| = 1 and |u’| = d, we have
▪ Having determined the values for cos α and sin α in terms of the components of vector u, we can set
up the matrix elements for rotation of this vector about the x axis and into the xz plane.
▪ Figure 14 shows the orientation of the unit vector in the xz plane, resulting from the rotation about the
x axis. This vector, labeled u”, has the value a for its x component, because rotation about the x axis
leaves the x component unchanged.
▪ Rotation of unit vector u” (vector u after rotation into the xz plane) about the y axis. Positive rotation
angle β aligns u” with vector uz.
▪ We can determine the cosine of rotation angle β from the dot product of unit vectors u’’ and uz. Thus,
▪ The specified rotation angle θ can now be applied as a rotation about the z axis as follows:
▪ The transformation matrix for rotation about an arbitrary axis can then be expressed as the composition
of these seven individual transformations:
▪ The composite matrix for any sequence of three-dimensional rotations is of the form
▪ Assuming that the rotation axis is not parallel to any coordinate axis, we could form the following set
of local unit vectors (Figure 15).
▪ If we express the elements of the unit local vectors for the rotation axis as
▪ then the required composite matrix, which is equal to the product Ry(β) · Rx(α), is
▪ This matrix transforms the unit vectors u’x, u’y, and u’z onto the x, y, and z axes, respectively. This
aligns the rotation axis with the z axis, because u’z = u.
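The seven-step composite rotation about an arbitrary axis can be sketched as a single Python helper. This is an illustrative implementation under the conventions above (direction cosines a, b, c; d = √(b² + c²); cos α = c/d, sin α = b/d; cos β = d, sin β = −a), not code from the source text:

```python
import math

IDENT4 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def mat4_mul(A, B):
    """Multiply two 4x4 row-major matrices (nested lists)."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rotate_about_axis(p1, p2, theta):
    """Composite matrix for a counterclockwise rotation by theta about the
    axis from p1 toward p2: translate the axis to the origin, align it with
    the z axis via Rx(alpha) then Ry(beta), rotate by theta, then undo the
    alignment and the translation."""
    x1, y1, z1 = p1
    dx, dy, dz = p2[0] - x1, p2[1] - y1, p2[2] - z1
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    a, b, c = dx / length, dy / length, dz / length   # direction cosines
    d = math.sqrt(b * b + c * c)
    if d != 0:
        # Rx(alpha): cos(alpha) = c/d, sin(alpha) = b/d
        Rx  = [[1, 0, 0, 0], [0, c / d, -b / d, 0], [0, b / d, c / d, 0], [0, 0, 0, 1]]
        Rxi = [[1, 0, 0, 0], [0, c / d, b / d, 0], [0, -b / d, c / d, 0], [0, 0, 0, 1]]
    else:                        # axis already lies in the xz plane along x
        Rx = Rxi = IDENT4
    # Ry(beta): cos(beta) = d, sin(beta) = -a
    Ry  = [[d, 0, -a, 0], [0, 1, 0, 0], [a, 0, d, 0], [0, 0, 0, 1]]
    Ryi = [[d, 0, a, 0], [0, 1, 0, 0], [-a, 0, d, 0], [0, 0, 0, 1]]
    ct, st = math.cos(theta), math.sin(theta)
    Rz = [[ct, -st, 0, 0], [st, ct, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
    # M = T . Rx^-1 . Ry^-1 . Rz(theta) . Ry . Rx . T^-1
    M = translate(x1, y1, z1)
    for step in (Rxi, Ryi, Rz, Ry, Rx, translate(-x1, -y1, -z1)):
        M = mat4_mul(M, step)
    return M
```

When the axis is the z axis itself, the alignment matrices collapse to the identity and the composite reduces to the plain z-axis rotation, as expected.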
c) Quaternion Methods for Three-Dimensional Rotations
▪ A more efficient method for generating a rotation about an arbitrarily selected axis is to use a
quaternion representation for the rotation transformation.
▪ Quaternions, which are extensions of two-dimensional complex numbers, are useful in a number of
computer-graphics procedures, including the generation of fractal objects.
▪ One way to characterize a quaternion is as an ordered pair, consisting of a scalar part and a vector part:
▪ A rotation about any axis passing through the coordinate origin is accomplished by first setting up a
unit quaternion with the scalar and vector parts as follows:
▪ where u is a unit vector along the selected rotation axis and θ is the specified rotation angle about this
axis (Figure 16).
▪ Any point position P that is to be rotated by this quaternion can be represented in quaternion notation
as
▪ Rotation of the point is then carried out with the quaternion operation
▪ The second term in this ordered pair is the rotated point position p’, which is evaluated with vector dot
and cross-products as
▪ Designating the components of the vector part of q as v = (a, b, c), we obtain the elements for the
composite rotation matrix
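The quaternion rotation P’ = qPq⁻¹ can be sketched directly from the quaternion product (illustrative helpers; quaternions are stored as (scalar, vector) pairs, and the rotation axis is assumed to be a unit vector through the origin):

```python
import math

def qmul(q1, q2):
    """Product of two quaternions stored as (s, (x, y, z))."""
    s1, (x1, y1, z1) = q1
    s2, (x2, y2, z2) = q2
    s = s1 * s2 - (x1 * x2 + y1 * y2 + z1 * z2)      # s1 s2 - v1 . v2
    x = s1 * x2 + s2 * x1 + (y1 * z2 - z1 * y2)      # s1 v2 + s2 v1 + v1 x v2
    y = s1 * y2 + s2 * y1 + (z1 * x2 - x1 * z2)
    z = s1 * z2 + s2 * z1 + (x1 * y2 - y1 * x2)
    return (s, (x, y, z))

def rotate_point(p, axis, theta):
    """Rotate point p about unit vector `axis` by theta, using the unit
    quaternion q = (cos(theta/2), sin(theta/2) u)."""
    s = math.cos(theta / 2)
    v = tuple(math.sin(theta / 2) * a for a in axis)
    q = (s, v)
    qinv = (s, tuple(-a for a in v))   # inverse of a unit quaternion
    P = (0.0, p)                       # the point in quaternion notation
    return qmul(qmul(q, P), qinv)[1]   # vector part of q P q^-1
```

Rotating (1, 0, 0) by 90° about the z axis again yields (0, 1, 0), in agreement with the matrix methods above.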
• Because some graphics packages provide only a routine that scales relative to the coordinate origin,
we can always construct a scaling transformation with respect to any selected fixed position (xf, yf,zf)
using the following transformation sequence:
1. Translate the fixed point to the origin.
2. Apply the scaling transformation relative to the coordinate origin
3. Translate the fixed point back to its original position.
• This sequence of transformations is demonstrated in Figure 18.
• The matrix representation for an arbitrary fixed-point scaling can then be expressed as the
concatenation of these translate-scale-translate transformations:
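Multiplying out the translate-scale-translate concatenation gives a single matrix, sketched here as an illustrative helper (4×4 row-major nested lists):

```python
def scale_about_point(sx, sy, sz, xf, yf, zf):
    """Composite T(xf, yf, zf) . S(sx, sy, sz) . T(-xf, -yf, -zf),
    written out as one 4x4 matrix."""
    return [[sx, 0, 0, (1 - sx) * xf],
            [0, sy, 0, (1 - sy) * yf],
            [0, 0, sz, (1 - sz) * zf],
            [0, 0, 0, 1]]
```

The fixed point (xf, yf, zf) maps to itself under this matrix, which is exactly what the three-step sequence guarantees.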
b) Three-Dimensional Shear
▪ These transformations can be used to modify object shapes.
▪ In three dimensions, we can also generate shears relative to the z axis.
▪ A general z-axis shearing transformation relative to a selected reference position is produced with the
following matrix:
▪ A unit cube (a) is sheared relative to the origin (b) by above Matrix, with shzx = shzy = 1.
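The z-axis shearing transformation can be sketched as follows (an illustrative helper; zref stands in for the selected reference position on the z axis, and the unit-cube example corresponds to shzx = shzy = 1 with zref = 0):

```python
def shear_z(shzx, shzy, zref=0.0):
    """z-axis shear relative to the reference plane z = zref:
    x' = x + shzx * (z - zref), y' = y + shzy * (z - zref), z' = z."""
    return [[1, 0, shzx, -shzx * zref],
            [0, 1, shzy, -shzy * zref],
            [0, 0, 1, 0],
            [0, 0, 0, 1]]
```

Applied to the unit cube with shzx = shzy = 1, points in the z = 0 plane are unchanged while the top face at z = 1 slides by one unit in both x and y.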