Computer Graphics - UNIT 3

3. CLIPPING
▪ To achieve a particular viewing effect in an application program, we can design our own clipping
window with any shape, size, and orientation we choose.
▪ But clipping a scene using a concave polygon or a clipping window with nonlinear boundaries requires
more processing than clipping against a rectangle.
▪ Rectangular clipping windows in standard position are easily defined by giving the coordinates of two
opposite corners of each rectangle.
▪ Some systems provide options for selecting a rotated, two-dimensional viewing frame, but usually the
clipping window must be specified in world coordinates.
i) Viewing-Coordinate Clipping Window
• A general approach to the two-dimensional viewing transformation is to set up a viewing coordinate
system within the world-coordinate frame.
• We choose an origin for a two-dimensional viewing-coordinate frame at some world position P0 =
(x0, y0), and we can establish the orientation using a world vector V that defines the y view direction.
• Vector V is called the two-dimensional view up vector.

• An alternative method for specifying the orientation of the viewing frame is to give a rotation angle
relative to either the x or y axis in the world frame.
• The first step in the transformation sequence is to translate the viewing origin to the world origin.
• Next, we rotate the viewing system to align it with the world frame.
• Given the orientation vector V, we can calculate the components of unit vectors v = (vx, vy) and u =
(ux, uy) for the y view and x view axes, respectively.

MWC,VC = R · T

• where T is the translation matrix that takes the viewing origin P0 to the world origin, and R is
the rotation matrix that rotates the viewing frame of reference into coincidence with the
world-coordinate system.
• Figure 4 illustrates the steps in this coordinate transformation.
ii) World-Coordinate Clipping Window
• A routine for defining a standard, rectangular clipping window in world coordinates is typically
provided in a graphics-programming library.
• We simply specify two world-coordinate positions, which are then assigned to the two opposite corners
of a standard rectangle.
• Once the clipping window has been established, the scene description is processed through the viewing
routines to the output device.
• Thus, we simply rotate (and possibly translate) objects to a desired position and set up the clipping
window all in world coordinates.

3.1 Normalization and Viewport Transformations


• The viewport coordinates are often given in the range from 0 to 1 so that the viewport is positioned
within a unit square.
• After clipping, the unit square containing the viewport is mapped to the output display device.
a) Mapping the Clipping Window into a Normalized Viewport
▪ We first consider a viewport defined with normalized coordinate values between 0 and 1.
▪ Object descriptions are transferred to this normalized space using a transformation that maintains the
same relative placement of a point in the viewport as it had in the clipping window.
▪ Figure 6 illustrates this window-to-viewport mapping. Position (xw, yw) in the clipping window is
mapped to position (xv, yv) in the associated viewport.
▪ To transform the world-coordinate point into the same relative position within the viewport, we require
that

(xv − xvmin) / (xvmax − xvmin) = (xw − xwmin) / (xwmax − xwmin)
(yv − yvmin) / (yvmax − yvmin) = (yw − ywmin) / (ywmax − ywmin)
▪ Solving these expressions for the viewport position (xv, yv), we have

xv = sx · xw + tx
yv = sy · yw + ty
▪ where the scaling factors are

sx = (xvmax − xvmin) / (xwmax − xwmin)
sy = (yvmax − yvmin) / (ywmax − ywmin)
▪ and the translation factors are

tx = (xwmax · xvmin − xwmin · xvmax) / (xwmax − xwmin)
ty = (ywmax · yvmin − ywmin · yvmax) / (ywmax − ywmin)
▪ We could obtain the transformation from world coordinates to viewport coordinates with the following
sequence:
1. Scale the clipping window to the size of the viewport using a fixed-point position of (xwmin, ywmin).
2. Translate (xwmin, ywmin) to (xvmin, yvmin).
▪ The scaling transformation in step (1) can be represented with the two-dimensional matrix

▪ The two-dimensional matrix representation for the translation of the lower-left corner of the clipping
window to the lower-left viewport corner is

▪ And the composite matrix representation for the transformation to the normalized viewport is

Mwindow,normviewp = T · S
b) Mapping the Clipping Window into a Normalized Square


▪ Another approach to two-dimensional viewing is to transform the clipping window into a normalized
square, clip in normalized coordinates, and then transfer the scene description to a viewport specified
in screen coordinates.
▪ This transformation is illustrated in Figure 7 with normalized coordinates in the range from −1 to 1.
▪ Making these substitutions in the expressions for tx, ty, sx, and sy, we have

▪ Similarly, after the clipping algorithms have been applied, the normalized square with edge length
equal to 2 is transformed into a specified viewport.
▪ This time, we get the transformation matrix from Equation 8 by substituting −1 for xwmin and ywmin
and substituting +1 for xwmax and ywmax:

▪ The last step in the viewing process is to position the viewport area in the display window.
▪ Typically, the lower-left corner of the viewport is placed at a coordinate position specified relative to
the lower-left corner of the display window.
▪ Figure 8 demonstrates the positioning of a viewport within a display window. As before, we maintain
the initial proportions of objects by choosing the aspect ratio of the viewport to be the same as the
clipping window. Otherwise, objects will be stretched or contracted in either the x or y direction
when displayed.

c) Display of Character Strings


▪ Character strings can be handled in one of two ways when they are mapped through the viewing
pipeline to a viewport.
▪ The simplest mapping maintains a constant character size.
▪ This method could be employed with bitmap character patterns.
▪ But outline fonts could be transformed the same as other primitives; we just need to transform the
defining positions for the line segments in the outline character shape.
d) Split-Screen Effects and Multiple Output Devices
▪ By selecting different clipping windows and associated viewports for a scene, we can provide
simultaneous display of two or more objects, multiple picture parts, or different views of a single scene.
▪ It is also possible that two or more output devices could be operating concurrently on a particular
system, and we can set up a clipping-window/viewport pair for each output device.
▪ A mapping to a selected output device is sometimes referred to as a workstation transformation.
3.2 Clipping Algorithms
• Any procedure that eliminates those portions of a picture that are either inside or outside a specified
region of space is referred to as a clipping algorithm or simply clipping.
• The most common application of clipping is in the viewing pipeline, where clipping is applied to
extract a designated portion of a scene (either two-dimensional or three-dimensional) for display on
an output device.
• Clipping can be applied to different types of objects:
1. Point clipping
2. Line clipping (straight-line segments)
3. Fill-area clipping (polygons)
4. Curve clipping
5. Text Clipping
3.3 Two-Dimensional Point Clipping
• For a clipping rectangle in standard position, we save a two-dimensional point P = (x, y) for display if
the following inequalities are satisfied:

xwmin ≤ x ≤ xwmax
ywmin ≤ y ≤ ywmax
• If any of these four inequalities is not satisfied, the point is clipped.
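The four-inequality test is a one-liner in code (Python sketch; the function name is my own):

```python
def clip_point(x, y, xwmin, ywmin, xwmax, ywmax):
    """Save the point only if it satisfies all four inequalities."""
    return xwmin <= x <= xwmax and ywmin <= y <= ywmax
```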


• Although point clipping is applied less often than line or polygon clipping, it is useful in various
situations, particularly when pictures are modeled with particle systems.
• For example, point clipping can be applied to scenes involving clouds, sea foam, smoke, or explosions
that are modeled with “particles,” such as the center coordinates for small circles or spheres.
3.4 Two-Dimensional Line Clipping
• We now consider clipping straight-line segments against a standard rectangular clipping window.
• A line-clipping algorithm processes each line in a scene through a series of tests and intersection
calculations to determine whether the entire line or any part of it is to be saved.
• The expensive part of a line-clipping procedure is in calculating the intersection positions of a line
with the window edges.
• Therefore, a major goal for any line-clipping algorithm is to minimize the intersection calculations. 
To do this, we can first perform tests to determine whether a line segment is completely inside the
clipping window or completely outside.
• It is easy to determine whether a line is completely inside a clipping window, but it is more difficult
to identify all lines that are entirely outside the window.
• One way to formulate the equation for a straight-line segment is to use the following parametric
representation, where the coordinate positions (x0, y0) and (xend, yend) designate the two line endpoints:

x = x0 + u (xend − x0)
y = y0 + u (yend − y0),    0 ≤ u ≤ 1
3.4.1 Cohen-Sutherland Line Clipping


▪ Processing time is reduced in the Cohen-Sutherland method by performing more tests before
proceeding to the intersection calculations.
▪ Initially, every line endpoint in a picture is assigned a four-digit binary value, called a region code,
and each bit position is used to indicate whether the point is inside or outside one of the clipping-
window boundaries.

▪ Figure 10 shows one possible ordering for the clipping-window boundaries corresponding to the bit
positions in the Cohen-Sutherland endpoint region code.
▪ Thus, for this ordering, the rightmost position (bit 1) references the left clipping-window boundary,
and the leftmost position (bit 4) references the top window boundary.
▪ A value of 1 (or true) in any bit position indicates that the endpoint is outside that window border.
Similarly, a value of 0 (or false) in any bit position indicates that the endpoint is not outside (it is inside
or on) the corresponding window edge.
▪ Sometimes, a region code is referred to as an “out” code because a value of 1 in any bit position
indicates that the spatial point is outside the corresponding clipping boundary.
▪ There are nine binary region codes for identifying the position of a line endpoint relative to the
clipping-window boundaries.
▪ Bit values in a region code are determined by comparing the coordinate values (x, y) of an endpoint to
the clipping boundaries.
▪ Bit 1 is set to 1 if x < xwmin, and the other three bit values are determined similarly.
▪ Instead of using inequality testing, we can determine the values for a region-code more efficiently
using bit-processing operations and the following two steps:
(1) Calculate differences between endpoint coordinates and clipping boundaries.
(2) Use the resultant sign bit of each difference calculation to set the corresponding value in
the region code.
▪ For the ordering scheme shown in Figure 10, bit 1 is the sign bit of x − xw min; bit 2 is the sign bit of
xwmax − x; bit 3 is the sign bit of y − ywmin; and bit 4 is the sign bit of ywmax − y.
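As a sketch, the bit assignments described above can be computed with simple comparisons (Python; a low-level implementation would instead pack the sign bits of the four differences, as the text notes; the constant and function names are my own):

```python
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8   # bit 1 .. bit 4 of the region code

def region_code(x, y, xwmin, ywmin, xwmax, ywmax):
    """Return the four-bit Cohen-Sutherland region code for (x, y)."""
    code = 0
    if x < xwmin:        # sign bit of x - xwmin set -> outside left border
        code |= LEFT
    elif x > xwmax:      # sign bit of xwmax - x set -> outside right border
        code |= RIGHT
    if y < ywmin:        # sign bit of y - ywmin set -> below bottom border
        code |= BOTTOM
    elif y > ywmax:      # sign bit of ywmax - y set -> above top border
        code |= TOP
    return code
```

A point above and to the left of the window, for example, gets code 1001 (TOP | LEFT).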
▪ To determine a boundary intersection for a line segment, we can use the slope intercept form of the
line equation.
▪ Once we have established region codes for all line endpoints, we can determine which lines are
completely inside or completely outside the clipping window.
▪ Any lines that are completely contained within the window edges have a region code of 0000 for both
endpoints, and we save these line segments.
▪ Any line that has a region-code value of 1 in the same bit position for each endpoint is completely
outside the clipping rectangle, and we eliminate that line segment.
▪ As an example, a line that has a region code of 1001 for one endpoint and a code of 0101 for the other
endpoint is completely to the left of the clipping window, as indicated by the value of 1 in the first bit
position of each region code.
▪ We can perform the inside-outside tests for line segments using logical operators. When the or
operation between two endpoint region codes for a line segment is false (0000), the line is inside the
clipping window. Therefore, we save the line and proceed to test the next line in the scene description.
▪ When the and operation between the two endpoint region codes for a line is true (not 0000), the line
is completely outside the clipping window, and we can eliminate it from the scene description.
▪ Lines that cannot be identified as completely inside or completely outside the window by these
region-code tests are next checked for intersection with the window border lines. As shown in
Figure 12, line segments can intersect clipping boundary lines without entering the interior of
the window.

▪ Therefore, several intersection calculations might be necessary to clip a line segment, depending on
the order in which we process the clipping boundaries. As we process each clipping-window edge, a
section of the line is clipped, and the remaining part of the line is checked against the other window
borders.
▪ We continue eliminating sections until either the line is totally clipped or the remaining part of the line
is inside the clipping window.
We assume that the window edges are processed in the following order: left, right, bottom, top. To
determine whether a line crosses a selected clipping boundary, we can check corresponding bit values in
the two endpoint region codes. If one of these bit values is 1 and the other is 0, the line segment crosses
that boundary.
o Figure 12 illustrates two line segments that cannot be identified immediately as completely inside or
completely outside the clipping window.
o The region codes for the line from P1 to P2 are 0100 and 1001. Thus, P1 is inside the left clipping
boundary and P2 is outside that boundary. We then calculate the intersection position P’2, and we clip
off the line section from P2 to P’2.
o The remaining portion of the line is inside the right border line, and so we next check the bottom
border. Endpoint P1 is below the bottom clipping edge and P’2 is above it, so we determine the
intersection position at this boundary (P’1).
o We eliminate the line section from P1 to P’1 and proceed to the top window edge. There we determine
the intersection position to be P”2.
o The final step is to clip off the section above the top boundary and save the interior segment from P’1
to P”2.
o For the second line, we find that point P3 is outside the left boundary and P4 is inside. Thus, we
calculate the intersection position P’3 and eliminate the line section from P3 to P’3.
o By checking region codes for the endpoints P’3 and P4, we find that the remainder of the line is below
the clipping window and can be eliminated as well.
o Figure 13 shows the four intersection positions that could be calculated for a line segment that is
processed against the clipping-window edges in the order left, right, bottom, top. Therefore, variations
of this basic approach have been developed in an effort to reduce the intersection calculations.

▪ For a line with endpoint coordinates (x0, y0) and (xend, yend), the y coordinate of the intersection point
with a vertical clipping border line can be obtained with the calculation

y = y0 + m (x − x0)
▪ where the x value is set to either xwmin or xwmax, and the slope of the line is calculated as m = (yend −
y0)/(xend − x0).
▪ Similarly, if we are looking for the intersection with a horizontal border, the x coordinate can be
calculated as

x = x0 + (y − y0) / m
▪ with y set either to ywmin or to ywmax.
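Combining the trivial accept/reject tests with these intersection formulas, the whole method can be sketched as follows (Python; the names and exact control flow are my own, and the boundaries are processed in the order left, right, bottom, top as in the text):

```python
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def cohen_sutherland_clip(x0, y0, xend, yend, xwmin, ywmin, xwmax, ywmax):
    """Clip the segment (x0, y0)-(xend, yend) against the window;
    return the clipped endpoints, or None if the segment is entirely outside."""
    def code(x, y):
        c = 0
        if x < xwmin:
            c |= LEFT
        elif x > xwmax:
            c |= RIGHT
        if y < ywmin:
            c |= BOTTOM
        elif y > ywmax:
            c |= TOP
        return c

    c0, c1 = code(x0, y0), code(xend, yend)
    while True:
        if not (c0 | c1):            # trivial accept: both codes are 0000
            return (x0, y0, xend, yend)
        if c0 & c1:                  # trivial reject: a shared outside bit
            return None
        c = c0 or c1                 # pick an endpoint that is outside
        if c & LEFT:
            x, y = xwmin, y0 + (yend - y0) * (xwmin - x0) / (xend - x0)
        elif c & RIGHT:
            x, y = xwmax, y0 + (yend - y0) * (xwmax - x0) / (xend - x0)
        elif c & BOTTOM:
            x, y = x0 + (xend - x0) * (ywmin - y0) / (yend - y0), ywmin
        else:                        # TOP
            x, y = x0 + (xend - x0) * (ywmax - y0) / (yend - y0), ywmax
        if c == c0:                  # replace the outside endpoint and retest
            x0, y0, c0 = x, y, code(x, y)
        else:
            xend, yend, c1 = x, y, code(x, y)
```

Each pass clips off one outside section, so at most a few iterations are needed per segment.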


3.5 Polygon Fill-Area Clipping
• To clip a polygon fill area, we cannot apply a line-clipping method to the individual polygon edges
directly because this approach would not, in general, produce a closed polyline.
• Instead, a line clipper would often produce a disjoint set of lines with no complete information about
how we might form a closed boundary around the clipped fill area. Figure 19 illustrates a possible
output from a line-clipping procedure applied to the edges of a polygon fill area.

• What we require is a procedure that will output one or more closed polylines for the boundaries of the
clipped fill area, so that the polygons can be scan-converted to fill their interiors with the assigned color
or pattern, as in Figure 20.

• We can process a polygon fill area against the borders of a clipping window using the same general
approach as in line clipping.
• We need to maintain a fill area as an entity as it is processed through the clipping stages.
• Thus, we can clip a polygon fill area by determining the new shape for the polygon as each clipping-
window edge is processed, as demonstrated in Figure 21.
• If the minimum and maximum coordinate values for the fill area are inside all four clipping boundaries,
the fill area is saved for further processing. If these coordinate extents are all outside any of the
clipping-window borders, we eliminate the polygon from the scene description (Figure 22).

• When we cannot identify a fill area as being completely inside or completely outside the clipping
window, we then need to locate the polygon intersection positions with the clipping boundaries.
• One way to implement convex-polygon clipping is to create a new vertex list at each clipping
boundary, and then pass this new vertex list to the next boundary clipper.
• The output of the final clipping stage is the vertex list for the clipped polygon.

3.5.1 Sutherland-Hodgman Polygon Clipping


▪ An efficient method for clipping a convex-polygon fill area, developed by Sutherland and Hodgman,
is to send the polygon vertices through each clipping stage so that a single clipped vertex can be
immediately passed to the next stage.
▪ The final output is a list of vertices that describe the edges of the clipped polygon fill area. The basic
Sutherland-Hodgman algorithm is also able to process concave polygons when the clipped fill area can be
described with a single vertex list.
▪ The general strategy in this algorithm is to send the pair of endpoints for each successive polygon line
segment through the series of clippers (left, right, bottom, and top).
▪ There are four possible cases that need to be considered when processing a polygon edge against one
of the clipping boundaries.
1. One possibility is that the first edge endpoint is outside the clipping boundary and the second
endpoint is inside.
2. Or, both endpoints could be inside this clipping boundary.
3. Another possibility is that the first endpoint is inside the clipping boundary and the second
endpoint is outside.
4. And, finally, both endpoints could be outside the clipping boundary.
▪ To facilitate the passing of vertices from one clipping stage to the next, the output from each clipper
can be formulated as shown in Figure 24 below.

▪ The output of each clipper, for each successive pair of input vertices, is selected as follows:
1. If the first input vertex is outside this clipping-window border and the second vertex is inside,
both the intersection point of the polygon edge with the window border and the second vertex
are sent to the next clipper.
2. If both input vertices are inside this clipping-window border, only the second vertex is sent
to the next clipper.
3. If the first vertex is inside this clipping-window border and the second vertex is outside, only
the polygon edge-intersection position with the clipping-window border is sent to the next
clipper.
4. If both input vertices are outside this clipping-window border, no vertices are sent to the next
clipper.
▪ The last clipper in this series generates a vertex list that describes the final clipped fill area.
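The four output rules above can be sketched as a single-boundary clipper applied four times (Python; all names are my own, and this minimal version handles only a rectangular window):

```python
def sutherland_hodgman(polygon, xwmin, ywmin, xwmax, ywmax):
    """Clip `polygon` (a list of (x, y) vertices) against a rectangular
    window, one boundary at a time: left, right, bottom, top."""
    def clip_edge(poly, inside, intersect):
        out = []
        for i, p2 in enumerate(poly):
            p1 = poly[i - 1]            # previous vertex (wraps around)
            if inside(p2):
                if not inside(p1):      # case 1: out -> in, emit intersection + p2
                    out.append(intersect(p1, p2))
                out.append(p2)          # case 2: in -> in, emit p2 only
            elif inside(p1):            # case 3: in -> out, emit intersection only
                out.append(intersect(p1, p2))
            # case 4: out -> out, emit nothing
        return out

    def cross_x(x):                     # intersection with vertical border x = const
        def f(p1, p2):
            t = (x - p1[0]) / (p2[0] - p1[0])
            return (x, p1[1] + t * (p2[1] - p1[1]))
        return f

    def cross_y(y):                     # intersection with horizontal border y = const
        def f(p1, p2):
            t = (y - p1[1]) / (p2[1] - p1[1])
            return (p1[0] + t * (p2[0] - p1[0]), y)
        return f

    poly = clip_edge(polygon, lambda p: p[0] >= xwmin, cross_x(xwmin))  # left
    poly = clip_edge(poly, lambda p: p[0] <= xwmax, cross_x(xwmax))     # right
    poly = clip_edge(poly, lambda p: p[1] >= ywmin, cross_y(ywmin))     # bottom
    poly = clip_edge(poly, lambda p: p[1] <= ywmax, cross_y(ywmax))     # top
    return poly
```

A square straddling the lower-left window corner, for example, is clipped to the quadrant that lies inside the window.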
▪ Figure 25 provides an example of the Sutherland-Hodgman polygon clipping algorithm for a fill area
defined with the vertex set {1, 2, 3}.
▪ As soon as a clipper receives a pair of endpoints, it determines the appropriate output using the tests
illustrated in Figure 24. These outputs are passed in succession from the left clipper to the right, bottom,
and top clippers. The output from the top clipper is the set of vertices defining the clipped fill area. For
this example, the output vertex list is {1’ , 2, 2’, 2”}.
▪ When a concave polygon is clipped with the Sutherland-Hodgman algorithm, extraneous lines may be
displayed.
▪ This occurs when the clipped polygon should have two or more separate sections. But since there is
only one output vertex list, the last vertex in the list is always joined to the first vertex.
▪ There are several things we can do to display clipped concave polygons correctly.
▪ For one, we could split a concave polygon into two or more convex polygons and process each convex
polygon separately using the Sutherland-Hodgman algorithm.
▪ Another possibility is to modify the Sutherland-Hodgman method so that the final vertex list is
checked for multiple intersection points along any clipping-window boundary.
▪ If we find more than two vertex positions along any clipping boundary, we can separate the list of
vertices into two or more lists that correctly identify the separate sections of the clipped fill area.
▪ A third possibility is to use a more general polygon clipper that has been designed to process concave
polygons correctly.
3.6 Three-Dimensional Geometric Transformations
• Methods for geometric transformations in three dimensions are extended from two-dimensional
methods by including considerations for the z coordinate.
• A three-dimensional position, expressed in homogeneous coordinates, is represented as a four-element
column vector.
3.6.1 Three-Dimensional Translation
▪ A position P = (x, y, z) in three-dimensional space is translated to a location P’ = (x’, y’, z’) by adding
translation distances tx, ty, and tz to the Cartesian coordinates of P:

x’ = x + tx,   y’ = y + ty,   z’ = z + tz
▪ Figure 1 illustrates three-dimensional point translation.


▪ We can express these three-dimensional translation operations in matrix form. But now the coordinate
positions, P and P’ , are represented in homogeneous coordinates with four-element column matrices,
and the translation operator T is a 4 × 4 matrix:
▪ An object is translated in three dimensions by transforming each of the defining coordinate positions
for the object, then reconstructing the object at the new location.
▪ For an object represented as a set of polygon surfaces, we translate each vertex for each surface (Figure
2) and redisplay the polygon facets at the translated positions.

▪ The following program segment illustrates construction of a translation matrix, given an input set of
translation parameters.
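The program segment itself is not reproduced in these notes; a rough Python equivalent (function names are my own) might look like this:

```python
def translate3d(tx, ty, tz):
    """Build the 4x4 homogeneous translation matrix for (tx, ty, tz)."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def apply(m, p):
    """Multiply a 4x4 matrix into a homogeneous column vector (x, y, z, 1)."""
    return tuple(sum(m[i][j] * p[j] for j in range(4)) for i in range(4))
```

Translating an object then means applying the matrix to each defining vertex; negating tx, ty, and tz builds the inverse matrix.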

▪ The inverse of a three-dimensional translation matrix is obtained by negating the translation
distances tx, ty, and tz. This produces a translation in the opposite direction, and the product of a
translation matrix and its inverse is the identity matrix.
3.6.2 Three-Dimensional Rotation
▪ By convention, positive rotation angles produce counterclockwise rotations about a coordinate axis
(Figure 3).
▪ This agrees with our earlier discussion of rotations in two dimensions, where positive rotations in the
xy plane are counterclockwise about a pivot point (an axis that is parallel to the z axis).

a) Three-Dimensional Coordinate-Axis Rotation


▪ The two-dimensional z-axis rotation equations are easily extended to three dimensions, as follows:

x’ = x cos θ − y sin θ
y’ = x sin θ + y cos θ
z’ = z
▪ Parameter θ specifies the rotation angle about the z axis, and z-coordinate values are unchanged by
this transformation. In homogeneous-coordinate form, the three-dimensional z-axis rotation equations
are

▪ which we can write more compactly as

P’ = Rz(θ) · P
▪ Figure 4 illustrates rotation of an object about the z axis.
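A sketch of the corresponding 4 × 4 matrix constructor (Python; the function name is my own):

```python
from math import cos, sin, radians

def rotate_z(theta):
    """4x4 homogeneous rotation about the z axis by theta (radians).

    Implements x' = x cos(theta) - y sin(theta),
               y' = x sin(theta) + y cos(theta),  z' = z."""
    c, s = cos(theta), sin(theta)
    return [[c, -s, 0, 0],
            [s,  c, 0, 0],
            [0,  0, 1, 0],
            [0,  0, 0, 1]]
```

A positive 90° rotation carries the point (1, 0, 0) counterclockwise to (0, 1, 0), matching the sign convention above.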


▪ Transformation equations for rotations about the other two coordinate axes can be obtained with a
cyclic permutation of the coordinate parameters x, y, and z in above Equations:

▪ Thus, to obtain the x-axis and y-axis rotation transformations, we cyclically replace x with y, y with z,
and z with x, as illustrated in Figure 5.

▪ Substituting the cyclic permutations (7) into these equations, we get the equations for an x-axis
rotation:

y’ = y cos θ − z sin θ
z’ = y sin θ + z cos θ
x’ = x
▪ Rotation of an object around the x axis is demonstrated in Figure 6.

▪ A cyclic permutation of coordinates in Equations (8) gives us the transformation equations for a y-axis
rotation:

z’ = z cos θ − x sin θ
x’ = z sin θ + x cos θ
y’ = y
▪ An example of y-axis rotation is shown in Figure 7.


▪ An inverse three-dimensional rotation matrix is obtained in the same way as the inverse rotations in
two dimensions. We just replace the angle θ with −θ.
▪ Because only the sine function is affected by the change in sign of the rotation angle, the inverse matrix
can also be obtained by interchanging rows and columns.
▪ That is, we can calculate the inverse of any rotation matrix R by forming its transpose (R−1 = RT ).
b) General Three-Dimensional Rotations
▪ A rotation matrix for any axis that does not coincide with a coordinate axis can be set up as a composite
transformation involving combinations of translations and coordinate-axis rotations.
▪ In the special case where the rotation axis is parallel to one of the coordinate axes, we can attain the
desired rotation with the following transformation sequence:

1. Translate the object so that the rotation axis coincides with the parallel coordinate axis.
2. Perform the specified rotation about that axis.
3. Translate the object so that the rotation axis is moved back to its original position.
▪ The steps in this sequence are illustrated in Figure 8. A coordinate position P is transformed with the
sequence shown in this figure as

P’ = T−1 · R(θ) · T · P
▪ When an object is to be rotated about an axis that is not parallel to one of the coordinate axes, we must
perform some additional transformations. We can accomplish the required rotation in five steps:
1. Translate the object so that the rotation axis passes through the coordinate origin.
2. Rotate the object so that the axis of rotation coincides with one of the coordinate axes.
3. Perform the specified rotation about the selected coordinate axis.
4. Apply inverse rotations to bring the rotation axis back to its original orientation.
5. Apply the inverse translation to bring the rotation axis back to its original spatial position.
▪ We can transform the rotation axis onto any one of the three coordinate axes. The z axis is often a
convenient choice, and we next consider a transformation sequence using the z-axis rotation matrix
(Figure 9).
▪ A rotation axis can be defined with two coordinate positions, as in Figure 10, or with one coordinate
point and direction angles (or direction cosines) between the rotation axis and two of the coordinate
axes.
▪ We assume that the rotation axis is defined by two points, as illustrated, and that the direction of
rotation is to be counterclockwise when looking along the axis from P2 to P1.
▪ The components of the rotation-axis vector are then computed as

V = P2 − P1 = (x2 − x1, y2 − y1, z2 − z1)

▪ where the components a, b, and c are the direction cosines for the rotation axis:

u = V / |V| = (a, b, c)
▪ The first step in the rotation sequence is to set up the translation matrix that repositions the rotation
axis so that it passes through the coordinate origin (we want a counterclockwise rotation when viewing
along the axis from P2 to P1, as in Figure 10).
▪ The translation matrix, with translation distances (−x1, −y1, −z1), is

T(−x1, −y1, −z1)

▪ which repositions the rotation axis and the object as shown in Figure 11.

▪ Because rotation calculations involve sine and cosine functions, we can use standard vector operations
to obtain elements of the two rotation matrices.
▪ These two rotations are illustrated in Figure 12 for one possible orientation of vector u.

▪ A vector dot product can be used to determine the cosine term, and a vector cross product can be used
to calculate the sine term.
▪ Rotation of u around the x axis into the x z plane is accomplished by rotating u’ (the projection of u in
the y z plane) through angle α onto the z axis.
▪ If we represent the projection of u in the yz plane as the vector u’ = (0, b, c), then the cosine of the
rotation angle α can be determined from the dot product of u’ and the unit vector uz along the z axis
(Figure 13):

cos α = (u’ · uz) / (|u’| |uz|) = c / d,   where d = |u’| = √(b² + c²)

▪ Similarly, we can determine the sine of α from the cross-product of u’ and uz. The coordinate-
independent form of this cross-product is

u’ × uz = ux |u’| |uz| sin α

▪ and the Cartesian form of the cross-product gives us

u’ × uz = ux · b

▪ Equating the right sides of Equations 18 and 19, and noting that |uz| = 1 and |u’| = d, we have

sin α = b / d
▪ Having determined the values for cos α and sin α in terms of the components of vector u, we can set
up the matrix elements for rotation of this vector about the x axis and into the xz plane.

▪ Figure 14 shows the orientation of the unit vector in the xz plane, resulting from the rotation about the
x axis. This vector, labeled u”, has the value a for its x component, because rotation about the x axis
leaves the x component unchanged.
▪ Rotation of unit vector u” (vector u after rotation into the x z plane) about the y axis. Positive rotation
angle β aligns u” with vector uz .

▪ We can determine the cosine of rotation angle β from the dot product of unit vectors u” and uz. Thus,

cos β = u” · uz = d

▪ because |uz| = |u”| = 1. Comparing the coordinate-independent form of the cross-product of u” and uz
with its Cartesian form, we can likewise obtain the sine of β in terms of the components a and d.
▪ Therefore, the transformation matrix for rotation of u” about the y axis is

▪ The specified rotation angle θ can now be applied as a rotation about the z axis as follows:

▪ The transformation matrix for rotation about an arbitrary axis can then be expressed as the composition
of these seven individual transformations:

R(θ) = T−1 · Rx−1(α) · Ry−1(β) · Rz(θ) · Ry(β) · Rx(α) · T
▪ The composite matrix for any sequence of three-dimensional rotations is of the form

▪ The upper-left 3 × 3 submatrix of this matrix is orthogonal

▪ Assuming that the rotation axis is not parallel to any coordinate axis, we could form the following set
of local unit vectors (Figure 15).
▪ If we express the elements of the unit local vectors for the rotation axis as

▪ then the required composite matrix, which is equal to the product Ry(β) · Rx(α), is

▪ This matrix transforms the unit vectors u’x, u’y, and u’z onto the x, y, and z axes, respectively. This
aligns the rotation axis with the z axis, because u’z = u.
c) Quaternion Methods for Three-Dimensional Rotations
▪ A more efficient method for generating a rotation about an arbitrarily selected axis is to use a
quaternion representation for the rotation transformation.
▪ Quaternions, which are extensions of complex numbers to four dimensions, are useful in a number of
computer-graphics procedures, including the generation of fractal objects.
▪ One way to characterize a quaternion is as an ordered pair, consisting of a scalar part and a vector part:

▪ A rotation about any axis passing through the coordinate origin is accomplished by first setting up a
unit quaternion with the scalar and vector parts as follows:

s = cos(θ/2),   v = u sin(θ/2)
▪ where u is a unit vector along the selected rotation axis and θ is the specified rotation angle about this
axis (Figure 16).
▪ Any point position P that is to be rotated by this quaternion can be represented in quaternion notation
as

P = (0, p)

▪ where the vector part is the point position p = (x, y, z).
▪ Rotation of the point is then carried out with the quaternion operation

P’ = q P q−1
▪ where q−1 = (s, −v) is the inverse of the unit quaternion q.


▪ This transformation produces the following new quaternion:

P’ = (0, p’)
▪ The second term in this ordered pair is the rotated point position p’, which is evaluated with vector dot
and cross-products as

p’ = s²p + v(p · v) + 2s(v × p) + v × (v × p)
▪ Designating the components of the vector part of q as v = (a, b, c) , we obtain the elements for the
composite rotation matrix

▪ Using the following trigonometric identities to simplify the terms

▪ Thus, we can rewrite above Matrix as

▪ This gives the complete quaternion rotation matrix for rotation about an arbitrary axis passing through
the origin.
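The quaternion operations above can be sketched directly (Python; the (scalar, vector) tuple layout and all function names are my own):

```python
from math import cos, sin, radians

def q_mult(q1, q2):
    """Hamilton product of quaternions stored as (s, (x, y, z))."""
    s1, (x1, y1, z1) = q1
    s2, (x2, y2, z2) = q2
    return (s1 * s2 - x1 * x2 - y1 * y2 - z1 * z2,
            (s1 * x2 + s2 * x1 + y1 * z2 - z1 * y2,
             s1 * y2 + s2 * y1 + z1 * x2 - x1 * z2,
             s1 * z2 + s2 * z1 + x1 * y2 - y1 * x2))

def rotate_point(p, axis, theta):
    """Rotate p about the unit-vector `axis` (through the origin) by theta,
    using P' = q P q^-1 with q = (cos(theta/2), sin(theta/2) * u)."""
    s = cos(theta / 2)
    v = tuple(sin(theta / 2) * a for a in axis)
    q, q_inv = (s, v), (s, tuple(-c for c in v))  # inverse of a unit quaternion
    return q_mult(q_mult(q, (0.0, p)), q_inv)[1]  # vector part of (0, p')
```

Rotating (1, 0, 0) by 90° about the z axis this way reproduces the coordinate-axis result (0, 1, 0).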
3.6.3 Three-Dimensional Scaling


• The matrix expression for the three-dimensional scaling transformation of a position P = (x, y, z) is
given by

P’ = S · P
• The three-dimensional scaling transformation for a point position can be represented as


• where scaling parameters sx, sy, and sz are assigned any positive values. Explicit expressions for the
scaling transformation relative to the origin are

x’ = x · sx,   y’ = y · sy,   z’ = z · sz
• Because some graphics packages provide only a routine that scales relative to the coordinate origin,
we can always construct a scaling transformation with respect to any selected fixed position (xf, yf,zf)
using the following transformation sequence:
1. Translate the fixed point to the origin.
2. Apply the scaling transformation relative to the coordinate origin.
3. Translate the fixed point back to its original position.
• This sequence of transformations is demonstrated in Figure 18.

• The matrix representation for an arbitrary fixed-point scaling can then be expressed as the
concatenation of these translate-scale-translate transformations:

• In the following code example, we demonstrate a direct construction of a three-dimensional scaling
matrix relative to a selected fixed point, using the calculations in the above equation.
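The code example referred to here is not reproduced in these notes; a rough Python equivalent (names are my own) that builds the composite matrix directly from the translate-scale-translate result:

```python
def scale_about_point(sx, sy, sz, xf, yf, zf):
    """Composite T(xf, yf, zf) . S(sx, sy, sz) . T(-xf, -yf, -zf):
    the diagonal carries the scale factors and the last column the
    fixed-point terms (1 - s) * f, so (xf, yf, zf) maps to itself."""
    return [[sx, 0, 0, (1 - sx) * xf],
            [0, sy, 0, (1 - sy) * yf],
            [0, 0, sz, (1 - sz) * zf],
            [0, 0, 0, 1]]
```

Scaling by 2 about the fixed point (1, 1, 1), for example, leaves (1, 1, 1) unchanged while moving x = 3 out to x = 5.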
3.6.4 Composite Three-Dimensional Transformations
• We form a composite three dimensional transformation by multiplying the matrix representations for
the individual operations in the transformation sequence.
• We can implement a transformation sequence by concatenating the individual matrices from right to
left or from left to right, depending on the order in which the matrix representations are specified.
• The rightmost term in a matrix product is always the first transformation to be applied to an object and
the leftmost term is always the last transformation.
• We need to use this ordering for the matrix product because coordinate positions are represented as
four-element column vectors, which are premultiplied by the composite 4 × 4 transformation matrix.
• Example routines for constructing a three-dimensional composite transformation matrix combine the
three basic geometric transformations in a selected order to produce a single composite matrix, which
is initialized to the identity matrix.
• Refer to the textbook for an example.
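The textbook routines are not reproduced here, but the right-to-left ordering can be sketched with a small matrix-product helper (Python; all names are my own):

```python
def mat_mult(a, b):
    """4x4 product a · b; when applied to a point, b acts first, a last."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def scale(sx, sy, sz):
    return [[sx, 0, 0, 0], [0, sy, 0, 0], [0, 0, sz, 0], [0, 0, 0, 1]]

def transform(m, p):
    """Premultiply the homogeneous column vector p = (x, y, z, 1) by m."""
    return tuple(sum(m[i][j] * p[j] for j in range(4)) for i in range(4))
```

Applying mat_mult(translate(5, 0, 0), scale(2, 2, 2)) to (1, 0, 0, 1) gives (7, 0, 0, 1): the rightmost matrix (the scale) acts first. Reversing the product order gives (12, 0, 0, 1), showing that the order of concatenation matters.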
3.6.5 Other Three-Dimensional Transformations
a) Three-Dimensional Reflections
▪ A reflection in a three-dimensional space can be performed relative to a selected reflection axis or with
respect to a reflection plane.
▪ Reflections with respect to a plane are similar; when the reflection plane is a coordinate plane (xy, xz,
or yz), we can think of the transformation as a 180° rotation in four-dimensional space with a
conversion between a left-handed frame and a right-handed frame.
▪ An example of a reflection that converts coordinate specifications from a right-handed system to a
left-handed system is shown below.

▪ The matrix representation for this reflection relative to the xy plane is the 4 × 4 identity matrix with
the third diagonal element replaced by −1, so that the sign of each z coordinate is reversed.
b) Three-Dimensional Shear
▪ These transformations can be used to modify object shapes.
▪ In three dimensions, we can also generate shears relative to the z axis.
▪ A general z-axis shearing transformation relative to a selected reference position is produced with the
following matrix:

▪ Figure 20 below shows the shear transformation of a cube.

▪ A unit cube (a) is sheared relative to the origin (b) by above Matrix, with shzx = shzy = 1.
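A sketch of this shear matrix as code (Python; the parameter names shzx, shzy, and zref follow the notation above, the rest are my own):

```python
def z_shear(shzx, shzy, zref=0):
    """z-axis shear relative to reference position zref: x and y are
    offset in proportion to (z - zref), while z itself is unchanged."""
    return [[1, 0, shzx, -shzx * zref],
            [0, 1, shzy, -shzy * zref],
            [0, 0, 1, 0],
            [0, 0, 0, 1]]
```

With shzx = shzy = 1 and zref = 0, the cube vertex (0, 0, 1) moves to (1, 1, 1) while vertices in the z = 0 plane stay fixed, matching the sheared cube of Figure 20.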
