
Computer Graphics Lectures - 1 To 25


Computer Graphics

Lecture No. 3-4
What is computer graphics
• Computer graphics is the art of drawing pictures, lines, charts, etc. using computers with the help of programming.
• A computer graphics image is made up of a number of pixels.
• A pixel is the smallest graphical unit represented on the computer screen.
• There are two types of computer graphics:
– Raster graphics:
• A raster graphic (also called a “bitmap”) is basically a large grid filled with boxes called pixels,
• where each pixel is defined separately, as in a digital photograph.
– Vector graphics:
• Vector graphics are defined by math (mathematical formulas are used to draw lines and shapes).
• They are points connected by lines to form various shapes.
Raster Graphics
• Raster images are created with pixel-based software or captured with a camera or scanner.
• They are the more common type in general, in formats such as JPG, GIF, and PNG, and are widely used on the web.
Raster Graphics
• When using a raster program you paint an image; it is similar to dipping a brush in paint and painting.
• You can blend colors to soften the transition from one color to another. FIG.1
• Raster images are made
of pixels.
• A pixel is a single point
or the smallest single
element in a display
device.
• If you zoom in to a
raster image you may
start to see a lot of little
tiny squares.
Vector Graphics
• Vector graphics are math-defined shapes created with vector software and are not as common.
• They are used in CAD/engineering, 3D animation, and graphic design for processes that reproduce an image onto an object, such as engraving, etching, and cut stencils.
• When using a vector program you draw the outline of shapes; it is similar to creating an image with tiles of all different shapes and sizes, e.g. an eye shape, a nose shape, a lip shape. These shapes, called objects, each display one single color. FIG.2
• Vector images are mathematical calculations from one point to another that form lines and shapes. If you zoom into a vector graphic it will always look the same.
• Vector displays are driven by display commands (move(x, y), char(“A”), line(x, y), …).
Raster Image vs Vector Image
Raster image:
• A raster image has a specific number of pixels.
• When you enlarge the image file without changing the number of pixels, the image will look blurry.
• When you enlarge the file by adding more pixels, the pixels are added randomly throughout the image, rarely producing good results.
Vector image:
• When you enlarge a vector graphic, the math formulas stay the same, rendering the same visual graphic no matter the size.
• Vector graphics can be scaled to any size without losing quality.
• Because vector graphics are not composed of pixels, they are resolution-independent.
Graphics Display Hardware

Bitmap graphics
Bitmap vs. Vector graphics
Graphics Output Primitives
Drawing Line, Circle and Ellipse
Dr. Ali Raza Baloch
Objectives
 Introduction to Primitives
 Points & Lines
 Line Drawing Algorithms
 Digital Differential Analyzer (DDA)
 Bresenham’s Algorithm
 Mid-Point Algorithm
 Circle Generating Algorithms
 Properties of Circles
 Bresenham’s Algorithm
 Mid-Point Algorithm
 Ellipse Generating Algorithms
 Properties of Ellipse
 Bresenham’s Algorithm
 Mid-Point Algorithm
 Other Curves
 Conic Sections
 Polynomial Curves
 Spline Curves
Rasterization
 Rasterization is the act of converting an image described in a vector format (shapes) into a raster image (pixels or dots), to display it on a video device, print it, or save it in a bitmap file format.

Raster graphic image


Line drawing algorithm

 Programmer specifies (x,y) values of end pixels


 Need algorithm to figure out which intermediate pixels are on line
path
 Pixel (x,y) values constrained to integer values
 Actual computed intermediate line values may be floats
 Rounding may be required. E.g. computed point
(10.48, 20.51) rounded to (10, 21)
 Rounded pixel value is off actual line path (jaggy!!)
 Sloped lines end up having jaggies
 Vertical, horizontal lines, no jaggies
Line Drawing Algorithm

 Slope-intercept line equation:
   y = mx + b
 Given two end points (x0, y0), (x1, y1), how do we compute m and b?
   m = dy/dx = (y1 − y0) / (x1 − x0)
   b = y0 − m · x0
[Figure: a line from (x0, y0) to (x1, y1) with horizontal run dx and vertical rise dy]
Line Drawing Algorithm

 Numerical example of finding slope m:
 (Ax, Ay) = (23, 41), (Bx, By) = (125, 96)

   m = (By − Ay) / (Bx − Ax) = (96 − 41) / (125 − 23) = 55 / 102 = 0.5392
Points

 A point is shown by
illuminating a pixel on
the screen
Lines
 A line segment is completely defined in terms of its two endpoints.
 A line segment is thus defined as:
Line_Seg = { (x1, y1), (x2, y2) }
Lines
 A line is produced by means of illuminating a set of intermediary pixels between the two endpoints.
[Figure: pixels illuminated along a line from (x1, y1) to (x2, y2) in the xy plane]
Lines
 A line is digitized into a set of discrete integer positions that approximate the actual line path.
 Example: a computed line position of (10.48, 20.51) is converted to pixel position (10, 21).
Line
 The rounding of coordinate values to integers causes all but horizontal and vertical lines to be displayed with a stair-step appearance, “the jaggies”.
Line Drawing Algorithms
 A straight line segment is defined by the coordinate position for the end
points of the segment.
 Given Points (x1, y1) and (x2, y2)
Line
 All line drawing algorithms make use of the fundamental equations:

 Line eqn.:    y = m·x + b
 Slope:        m = (y2 − y1) / (x2 − x1) = Δy / Δx
 y-intercept:  b = y1 − m·x1
 x-interval:   Δx = Δy / m
 y-interval:   Δy = m·Δx
DDA Algorithm (Digital Differential Analyzer)
 A line algorithm based on calculating either Δy or Δx using the above equations.
 There are two cases:
 Positive slope
 Negative slope
DDA- Line with positive Slope
If m ≤ 1 then take Δx = 1
 Compute successive y by

yk+1 = yk + m (1)
 Subscript k takes integer values starting from 1, for the first point, and
increases by 1 until the final end point is reached.
 Since 0.0 < m ≤ 1.0, the calculated y values must be rounded to the
nearest integer pixel position.
DDA
 If m > 1, reverse the role of x and y and take Δy = 1, calculate successive
x from
xk+1 = xk + 1/m (2)

 In this case, each computed x value is rounded to the nearest integer


pixel position.
 The above equations are based on the assumption that lines are to be
processed from left endpoint to right endpoint.
DDA
 In case the line is processed from Right endpoint to Left endpoint, then
Δx = −1, yk+1 = yk − m for m ≤ 1 (3)
or
Δy = −1, xk+1 = xk −1/m for m > 1 (4)
DDA- Line with negative Slope
 If −1 ≤ m < 0 (i.e., |m| ≤ 1):
 use yk+1 = yk + m [provided the line is calculated from left to right], or
 use Δx = −1, yk+1 = yk − m [provided the line is calculated from right to left].
 If m < −1 (i.e., |m| > 1):
 use Δy = −1, xk+1 = xk − 1/m [left to right], or Δy = 1, xk+1 = xk + 1/m [right to left].
Merits + Demerits
 Faster than the direct use of the line equation.
 It eliminates the multiplication in the line equation.
 For long line segments, the plotted positions may drift away from the true line path due to accumulated round-off error.
 Rounding operations and floating-point arithmetic are still time consuming.
 The algorithm can still be improved.
 Other algorithms, with better performance, also exist.
DDA Line Drawing Algorithm in C
DDA Algorithm
DDA Algorithm in C
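The C listing from this slide did not survive extraction. As a stand-in, here is a minimal sketch of the DDA algorithm in C; plot(x, y) is a hypothetical pixel routine (e.g., a wrapper around a BGI-style putpixel()):

#include <stdlib.h>   /* abs */
#include <math.h>     /* roundf */

void plot(int x, int y);   /* hypothetical pixel routine, assumed elsewhere */

/* DDA: step one unit along the major axis and advance the other
   coordinate by the (inverse) slope at each step. */
void dda_line(int x0, int y0, int x1, int y1)
{
    int dx = x1 - x0, dy = y1 - y0;
    int steps = abs(dx) > abs(dy) ? abs(dx) : abs(dy);
    if (steps == 0) { plot(x0, y0); return; }   /* degenerate: a single point */

    float xinc = dx / (float)steps;   /* per-step x increment */
    float yinc = dy / (float)steps;   /* per-step y increment */
    float x = (float)x0, y = (float)y0;

    for (int k = 0; k <= steps; k++) {
        plot((int)roundf(x), (int)roundf(y));   /* round to nearest pixel */
        x += xinc;
        y += yinc;
    }
}

Note how the rounding and floating-point additions in the loop are exactly the operations the demerits above refer to.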
Rasterization
 Rasterization is the process of converting a vector image into a raster image.
 The rasterized image may then be displayed on a video display or printer, or stored in a bitmap format.
 Rasterization may refer either to the conversion of models into raster files, or to the conversion of 2D rendering primitives, such as polygons or line segments, into a rasterized format.
Rasterization (Scan Conversion)

 Convert high-level geometry description to pixel colors in the frame


buffer
 Example: given vertex x,y coordinates determine pixel colors to
draw line
 Two ways to create an image:
 Scan existing photograph
 Procedurally compute values (rendering)
[Figure: geometry is transformed, rasterized, and mapped to the viewport]
Rasterization

 A fundamental computer graphics function


 Determine the pixels’ colors, illuminations, textures, etc.
 Implemented by graphics hardware
 Rasterization algorithms
 Lines
 Circles
 Triangles
 Polygons
Rasterization Operations

 Drawing lines on the screen


 Manipulating pixel maps (pixmaps): copying, scaling, rotating, etc
 Compositing images, defining and modifying regions
 Drawing and filling polygons
 Aliasing and anti-aliasing methods
Sampling and Reconstruction (1)
[Figure: scanning converts an analog image into a digital image; display converts the digital image back to an analog image]
Sampling and Reconstruction (2)
[Figure: sampling a continuous image with a sampling function produces discrete image samples; reconstruction with a kernel recovers a continuous image]
Signal Processing
 Sampling a function:
Signal Processing
 Sampling a function:
 What do you notice?
Signal Processing
 Sampling a function: what do you notice?
 Jagged, not smooth
Signal Processing
 Sampling a function: what do you notice?
 Jagged, not smooth
 Loses information!
Anti-aliasing
• Aliasing: distortion artifacts produced when representing a high-resolution signal at a lower
resolution.
• Anti-aliasing : techniques to remove aliasing

[Figure: aliased polygons (jagged edges) vs. anti-aliased polygons; aliased and anti-aliased images]
Antialiasing
[Figure: an analog line vs. its digitized version, where aliasing (jaggies) occurs; an aliased 'W' vs. an antialiased 'W']
 Lecture No. 9-10
 Bresenham’s Line Drawing Algorithm
Bresenham’s Line Algorithm
 It is an efficient raster line generation algorithm.
 It can be adapted to display circles and other curves.
 The algorithm
 After plotting a pixel position (xk, yk) , what is the next pixel to plot?
 Consider lines with positive slope.
Bresenham’s Line
 For a positive slope, 0 < m < 1 and line is starting from left to right.
 After plotting a pixel position (xk, yk) we have two choices for next pixel:
 (xk +1, yk)
 (xk +1, yk+1)
Bresenham’s Line
 At position xk +1, we label
vertical pixel separations from
the mathematical line path as

dlower , dupper.
Bresenham’s Line
 The y coordinate on the mathematical line at xk+1 is calculated as
y = m(xk +1)+ b
then
dlower = y − yk
= m (xk +1) + b − yk
and
dupper = (yk+1) − y
= yk+1− m(xk+1)− b
Bresenham’s Line
 To determine which of the two pixels is closer to the line path, we set up an efficient test based on the difference between the two pixel separations:
   dlower − dupper = 2m(xk + 1) − 2yk + 2b − 1
               = 2(Δy/Δx)(xk + 1) − 2yk + 2b − 1
 Multiplying both sides by Δx to avoid floating-point numbers:
   Δx(dlower − dupper) = 2Δy·xk − 2Δx·yk + 2Δy + Δx(2b − 1)
 Consider a decision parameter pk such that
   pk = Δx(dlower − dupper)
      = 2Δy·xk − 2Δx·yk + c
 where
   c = 2Δy + Δx(2b − 1)
Bresenham’s Line
 Comparing dlower and dupper tells which pixel is closer to the line path: yk or yk + 1.
 If dlower < dupper,
 then pk is negative;
 hence plot the lower pixel.
 Otherwise,
 plot the upper pixel.
Bresenham’s Line
 We can obtain the values of successive decision parameter as follows:
pk = 2Δy.xk − 2Δx.yk + c
pk+1=2Δy.xk+1−2Δx.yk+1+c
 Subtracting these two equations
pk+1− pk = 2Δy (xk+1 − xk) − 2Δx ( yk+1 − yk)
 But xk+1 − xk = 1, Therefore
pk+1 = pk +2Δy − 2Δx (yk+1 − yk)
Bresenham’s Line
 ( yk+1 − yk) is either 0 or 1, depending on the sign of pk (plotting lower
or upper pixel).
 The recursive calculation of pk is performed at integer x position,
starting at the left endpoint.
 p0 can be evaluated as:
p0 = 2Δy − Δx
Bresenham’s Line-Drawing Algorithm for m < 1
1. Input the two line end points and store the left end point
in (x0 , y0 ).
2. Load (x0 , y0 ) into the frame buffer; that is, plot the first
point.
3. Calculate the constants Δx, Δy, 2Δy, and 2Δy − 2Δx, and
obtain the starting value for the decision parameter as
p0 = 2Δy − Δx
4. At each xk along the line, starting at k = 0 , perform the
following test: If pk < 0,the next point to plot is (xk +1, yk)
and
pk+1=pk+2Δy
Otherwise, the next point to plot is (xk +1, yk +1) and
pk+1=pk+2Δy−2Δx
5. Repeat step 4, Δx−1 times.
Bresenham’s Line Drawing Program for m < 1
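The program listing is not reproduced in the extracted text. Below is a minimal sketch of Bresenham's algorithm for 0 < m < 1 in C, following steps 1-5 above; plot(x, y) is the same hypothetical pixel routine as in the DDA sketch:

void plot(int x, int y);   /* hypothetical pixel routine */

/* Bresenham's line algorithm for slopes 0 < m < 1: integer arithmetic only. */
void bresenham_line(int x0, int y0, int x1, int y1)
{
    if (x0 > x1) {                    /* step 1: store the left endpoint in (x0, y0) */
        int t;
        t = x0; x0 = x1; x1 = t;
        t = y0; y0 = y1; y1 = t;
    }
    int dx = x1 - x0, dy = y1 - y0;
    int p = 2 * dy - dx;              /* step 3: p0 = 2dy − dx */
    int x = x0, y = y0;

    plot(x, y);                       /* step 2: plot the first point */
    while (x < x1) {                  /* steps 4-5: repeat dx times */
        x++;
        if (p < 0) {
            p += 2 * dy;              /* next point (xk+1, yk), pk+1 = pk + 2dy */
        } else {
            y++;
            p += 2 * dy - 2 * dx;     /* next point (xk+1, yk+1), pk+1 = pk + 2dy − 2dx */
        }
        plot(x, y);
    }
}

Running this on the example endpoints (20, 10) and (30, 18) reproduces the table below.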
Summary
 The constants 2Δy and 2Δy − 2Δx are calculated once for each line to be
scan converted.
 Hence the arithmetic involves only integer addition and subtraction of
these two constants.
Example
 To illustrate the algorithm, we digitize the line with
endpoints (20,10) and (30,18). This line has slope
of 0.8, with
Δx = 10
Δy =8
 The initial decision parameter has the value
p0 = 2Δy − Δx = 6
 and the increments for calculating successive
decision parameters are
2 Δy = 16
2 Δy - 2 Δx = -4
Example
 We plot the initial point (x0 , y0)=(20,10) and
determine successive pixel positions along the line
path from the decision parameter as

k   pk   (xk+1, yk+1)
0    6   (21, 11)
1    2   (22, 12)
2   −2   (23, 12)
3   14   (24, 13)
4   10   (25, 14)
5    6   (26, 15)
6    2   (27, 16)
7   −2   (28, 16)
8   14   (29, 17)
9   10   (30, 18)
Example
Lecture No. 11-12
Circle Generating Algorithm
Circle Generating Algorithms
 A circle is defined as the set of points that are all at a given distance r from a center point (xc, yc).
 For any circle point (x, y), this distance is expressed by the equation
   (x − xc)² + (y − yc)² = r²
 We could calculate the points by stepping along the x axis in unit steps from xc − r to xc + r and calculating y values as
   y = yc ± √(r² − (x − xc)²)
Circle Generating Algorithms
 There are some problems with this approach:
1. Considerable computation at each step.
2. Non-uniform spacing between plotted pixels as in this
Figure.
Circle Generating Algorithms
 Problem 2 can be removed using the polar form:
x = xc + r cos θ
y = yc + r sin θ
 using a fixed angular step size, a circle is plotted with equally spaced
points along the circumference.
Circle Generating Algorithms
 Problem 1 can be overcome by considering the symmetry of
circles

 Efficient Solutions
 Midpoint Circle Algorithm
Mid point Circle Algorithm
 To apply the midpoint method, we define a circle function:
   fcircle(x, y) = x² + y² − r²
 Any point (x, y) on the boundary of the circle with radius r satisfies the equation fcircle(x, y) = 0.
Mid point Circle Algorithm
 If a point is in the interior of the circle, the circle function is negative.
 If a point is outside the circle, the circle function is positive.

 To summarize, the relative position of any point (x, y) can be determined by checking the sign of the circle function:
   fcircle(x, y) < 0  if (x, y) is inside the circle boundary
   fcircle(x, y) = 0  if (x, y) is on the circle boundary
   fcircle(x, y) > 0  if (x, y) is outside the circle boundary
Mid point Circle Algorithm

 The circle function tests in (3) are performed for the mid
positions between pixels near the circle path at each
sampling step. Thus, the circle function is the decision
parameter in the midpoint algorithm, and we can set up
incremental calculations for this function as we did in the
line algorithm.
Mid point Circle Algorithm
 Figure shows the midpoint between the two candidate
pixels at sampling position xk +1. Assuming we have just
plotted the pixel at (xk , yk), we next need to determine
whether the pixel at position (xk +1, yk) or the one at
position (xk +1, yk −1) is closer to the circle.
Mid point Circle Algorithm
 Our decision parameter is the circle function evaluated at the midpoint between these two pixels:
   pk = fcircle(xk + 1, yk − 1/2) = (xk + 1)² + (yk − 1/2)² − r²
Mid point Circle Algorithm
 If pk < 0, this midpoint is inside the circle and the pixel on scan line yk is
closer to the circle boundary.
 Otherwise, the midpoint is outside or on the circle boundary, and we
select the pixel on scan line yk −1.
 Successive decision parameters are obtained using incremental
calculations.
Mid point Circle Algorithm
 Evaluating the circle function at the next midpoint gives the incremental update:
   pk+1 = pk + 2xk+1 + 1             if pk < 0 (select yk+1 = yk)
   pk+1 = pk + 2xk+1 + 1 − 2yk+1     otherwise (select yk+1 = yk − 1)
 where 2xk+1 = 2xk + 2 and 2yk+1 = 2yk − 2.
 For a circle centered on the origin with start position (0, r), the initial decision parameter is
   p0 = fcircle(1, r − 1/2) = 5/4 − r ≈ 1 − r (for integer r)
Summary of the Algorithm
 As in Bresenham’s line algorithm, the midpoint method calculates pixel
positions along the circumference of a circle using integer additions and
subtractions, assuming that the circle parameters are specified in
screen coordinates. We can summarize the steps in the midpoint circle
algorithm as follows.
Algorithm
Midpoint Circle Drawing program
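The listing itself is missing from the extraction; the following is a minimal sketch of the midpoint circle algorithm in C, using the decision-parameter updates above and eight-way symmetry (plot(x, y) is again an assumed pixel routine):

void plot(int x, int y);   /* hypothetical pixel routine */

/* Plot the eight symmetric points for a circle centered at (xc, yc). */
static void plot_circle_points(int xc, int yc, int x, int y)
{
    plot(xc + x, yc + y); plot(xc - x, yc + y);
    plot(xc + x, yc - y); plot(xc - x, yc - y);
    plot(xc + y, yc + x); plot(xc - y, yc + x);
    plot(xc + y, yc - x); plot(xc - y, yc - x);
}

/* Midpoint circle algorithm: walk one octant from x = 0 to x = y. */
void midpoint_circle(int xc, int yc, int r)
{
    int x = 0, y = r;
    int p = 1 - r;                      /* p0 = 5/4 − r, rounded to 1 − r */

    plot_circle_points(xc, yc, x, y);
    while (x < y) {
        x++;
        if (p < 0) {
            p += 2 * x + 1;             /* midpoint inside: keep y */
        } else {
            y--;
            p += 2 * x + 1 - 2 * y;     /* midpoint outside or on: decrement y */
        }
        plot_circle_points(xc, yc, x, y);
    }
}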
Assignment # 1
Discuss in detail Midpoint Ellipse Algorithm with example and write down
its c program.

Last date: 19-11-21


Example
 Given a circle radius r = 10, we demonstrate the midpoint circle algorithm by determining positions along the circle octant in the first quadrant from x = 0 to x = y. The initial value of the decision parameter is
   p0 = 1 − r = −9
Example
 For the circle centered on the coordinate origin, the initial point is (x0, y0) = (0, 10), and the initial increment terms for calculating the decision parameters are
   2x0 = 0,  2y0 = 20
 Successive decision parameter values and positions along the circle path are calculated using the midpoint method as shown in the table.
Example

k   pk   (xk+1, yk+1)
0   −9   (1, 10)
1   −6   (2, 10)
2   −1   (3, 10)
3    6   (4, 9)
4   −3   (5, 9)
5    8   (6, 8)
6    5   (7, 7)
Example
 A plot of the generated pixel positions in the first quadrant is shown in
Figure
Midpoint Ellipse Algorithm
 Ellipse equations are greatly simplified if the major and minor axes are oriented to align with the coordinate axes.
 In “standard position” the major and minor axes are oriented parallel to the x and y axes.
 Parameter rx labels the semi-axis in the x direction, and parameter ry labels the semi-axis in the y direction.
 The equation for the ellipse can be written in terms of the ellipse center coordinates and parameters rx and ry as
   ((x − xc) / rx)² + ((y − yc) / ry)² = 1
Midpoint Ellipse Algorithm
 Using polar coordinates r and θ, we can also describe the ellipse in standard position with the parametric equations:
   x = xc + rx cos θ
   y = yc + ry sin θ
Midpoint Ellipse Algorithm

 The midpoint ellipse method is applied


throughout the first quadrant in two parts.
 Figure shows the division of the first quadrant
according to the slope of an ellipse with rx < ry.
Midpoint Ellipse Algorithm
 Regions 1 and 2 can be processed in various ways.
 We can start at position (0, ry) and step clockwise
along the elliptical path in the first quadrant,
shifting from unit steps in x to unit steps in y when
the slope becomes less than −1.0.
 Alternatively, we could start at (rx, 0) and select
points in a counterclockwise order, shifting from
unit steps in y to unit steps in x when the slope
becomes greater than −1.0.
Midpoint Ellipse Algorithm
 We define an ellipse function with (xc, yc) = (0, 0),
   fellipse(x, y) = ry²x² + rx²y² − rx²ry²
 which has the following properties:
   fellipse(x, y) < 0  if (x, y) is inside the ellipse boundary
   fellipse(x, y) = 0  if (x, y) is on the ellipse boundary
   fellipse(x, y) > 0  if (x, y) is outside the ellipse boundary
Midpoint Ellipse Algorithm
 Starting at (0, ry), we take unit steps in the x direction until we reach the
boundary between region 1 and region 2
 Then we switch to unit steps in the y direction over the remainder of
the curve in the first quadrant.
 At each step we need to test the value of the slope of the curve.
Midpoint Ellipse Algorithm
 The ellipse slope is calculated as
   dy/dx = −2ry²x / (2rx²y)
 At the boundary between region 1 and region 2, dy/dx = −1.0 and
   2ry²x = 2rx²y
 Therefore, we move out of region 1 whenever
   2ry²x ≥ 2rx²y
Midpoint Ellipse Algorithm
 The midpoint between the two candidate pixels at sampling position xk + 1 is in the first region.
 Assuming position (xk, yk) has been selected in the previous step, we determine the next position along the ellipse path by evaluating the decision parameter (the ellipse function) at this midpoint:
   p1k = fellipse(xk + 1, yk − 1/2) = ry²(xk + 1)² + rx²(yk − 1/2)² − rx²ry²
Midpoint Ellipse Algorithm
 If p1k < 0, the midpoint is inside the ellipse and the pixel on scan line yk
is closer to the ellipse boundary.
 Otherwise, the midposition is outside or on the ellipse boundary, and
we select the pixel on scan line yk − 1.
Midpoint Ellipse Algorithm
 At the next sampling position (xk+1 + 1 = xk + 2), the decision parameter for region 1 is evaluated as
   p1k+1 = p1k + 2ry²xk+1 + ry²                  if p1k < 0
   p1k+1 = p1k + 2ry²xk+1 + ry² − 2rx²yk+1       otherwise
Midpoint Ellipse Algorithm
 Decision parameters are incremented by the amounts above, where the running terms are updated as
   2ry²xk+1 = 2ry²xk + 2ry²  and  2rx²yk+1 = 2rx²yk − 2rx²
Midpoint Ellipse Algorithm
 In region 1, the initial value of the decision parameter is obtained by evaluating the ellipse function at the start position (x0, y0) = (0, ry):
   p10 = fellipse(1, ry − 1/2) = ry² − rx²ry + rx²/4
Midpoint Ellipse Algorithm
 Over region 2, we sample at unit intervals in the negative y direction, and the midpoint is now taken between horizontal pixels at each step.
 For this region, the decision parameter is evaluated as
   p2k = fellipse(xk + 1/2, yk − 1) = ry²(xk + 1/2)² + rx²(yk − 1)² − rx²ry²
Midpoint Ellipse Algorithm
 If p2k > 0, the midposition is outside the ellipse boundary, and we select the pixel at xk.
 If p2k ≤ 0, the midpoint is inside or on the ellipse boundary, and we select pixel position xk + 1.
Midpoint Ellipse Algorithm
 To determine the relationship between successive decision parameters in region 2, we evaluate the ellipse function at the next sampling step yk+1 − 1 = yk − 2:
   p2k+1 = p2k − 2rx²yk+1 + rx²                  if p2k > 0
   p2k+1 = p2k − 2rx²yk+1 + rx² + 2ry²xk+1       otherwise
Midpoint Ellipse Algorithm
 When we enter region 2, the initial position (x0, y0) is taken as the last position selected in region 1, and the initial decision parameter in region 2 is then
   p20 = fellipse(x0 + 1/2, y0 − 1) = ry²(x0 + 1/2)² + rx²(y0 − 1)² − rx²ry²
Algorithm
Example
 Given input ellipse parameters rx = 8 and ry = 6, we illustrate the steps in the midpoint ellipse algorithm by determining raster positions along the ellipse path in the first quadrant.
 Initial values and increments for the decision parameter calculations are
   2ry²x = 0   (with increment 2ry² = 72)
   2rx²y = 2rx²ry = 768   (with decrement 2rx² = 128)
Example
 For region 1, the initial point for the ellipse centered on the origin is (x0, y0) = (0, 6), and the initial decision parameter value is
   p10 = ry² − rx²ry + rx²/4 = 36 − 384 + 16 = −332
 Successive midpoint decision parameter values and the pixel positions along the ellipse are listed in the following table.
Example

k   p1k    (xk+1, yk+1)
0   −332   (1, 6)
1   −224   (2, 6)
2   −44    (3, 6)
3    208   (4, 5)
4   −108   (5, 5)
5    288   (6, 4)
6    244   (7, 3)
Example
 We now move out of region 1, since 2ry²x > 2rx²y.
 For region 2, the initial point is (x0, y0) = (7, 3), and the initial decision parameter is
   p20 = fellipse(7 + 1/2, 2) = −23
Example
 The remaining positions along the ellipse path in the first quadrant are then calculated as

k   p2k   (xk+1, yk+1)
0   −23   (8, 2)
1   361   (8, 1)
2   297   (8, 0)
Example
 A plot of the calculated positions for the ellipse within the first quadrant is shown below:
C Code
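The C listing is not reproduced in the extracted text. A minimal sketch of the midpoint ellipse algorithm in C follows, built directly from the region 1 and region 2 decision parameters derived above; plot(x, y) is an assumed pixel routine, and four-way symmetry covers the other quadrants:

void plot(int x, int y);   /* hypothetical pixel routine */

/* Plot the four symmetric points for an ellipse centered at (xc, yc). */
static void plot_ellipse_points(int xc, int yc, int x, int y)
{
    plot(xc + x, yc + y); plot(xc - x, yc + y);
    plot(xc + x, yc - y); plot(xc - x, yc - y);
}

/* Midpoint ellipse algorithm over the first quadrant:
   region 1 steps in x, region 2 steps in y. */
void midpoint_ellipse(int xc, int yc, int rx, int ry)
{
    long rx2 = (long)rx * rx, ry2 = (long)ry * ry;
    long x = 0, y = ry;
    long px = 0;                 /* running term 2*ry2*x */
    long py = 2 * rx2 * y;       /* running term 2*rx2*y */

    /* Region 1: p1_0 = ry^2 - rx^2*ry + rx^2/4 */
    long p = ry2 - rx2 * ry + rx2 / 4;
    plot_ellipse_points(xc, yc, (int)x, (int)y);
    while (px < py) {            /* until 2*ry2*x >= 2*rx2*y */
        x++;
        px += 2 * ry2;
        if (p < 0) {
            p += ry2 + px;               /* midpoint inside: keep y */
        } else {
            y--;
            py -= 2 * rx2;
            p += ry2 + px - py;          /* midpoint outside or on: decrement y */
        }
        plot_ellipse_points(xc, yc, (int)x, (int)y);
    }

    /* Region 2: p2_0 = ry^2*(x + 1/2)^2 + rx^2*(y - 1)^2 - rx^2*ry^2 */
    p = (long)(ry2 * (x + 0.5) * (x + 0.5) + rx2 * (y - 1) * (y - 1) - rx2 * ry2);
    while (y > 0) {
        y--;
        py -= 2 * rx2;
        if (p > 0) {
            p += rx2 - py;               /* midpoint outside: keep x */
        } else {
            x++;
            px += 2 * ry2;
            p += rx2 - py + px;          /* midpoint inside or on: increment x */
        }
        plot_ellipse_points(xc, yc, (int)x, (int)y);
    }
}

For rx = 8 and ry = 6 this reproduces the tables in the example above (p1_0 = −332, p2_0 = −23).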
2-Dimensional Geometric
Transformations
Lecture#15-16
2-Dimensional Geometric
Transformations
• In computer graphics, transformation is a process of modifying and re-positioning existing graphics.
– 2D transformations take place in a two dimensional plane.
– Transformations are helpful in changing the position, size, orientation, shape, etc. of an object.
Transformation Techniques
• Translation
• Rotation
• Scaling
Geometric Transformations
• Basic transformations:
– Translation
– Scaling
– Rotation
• Purposes:
– To move the position of objects
– To alter the shape / size of objects
– To change the orientation of objects
2D geometric Transformation
• Translation:
– 2D Translation is a process of moving an object
from one position to another in a two dimensional
plane.
Basic two-dimensional geometric transformations (1/1)

• Two-dimensional translation
– Moving objects without deformation
– Translating an object by adding offsets to its coordinates to generate new coordinate positions
– Let (tx, ty) be the translation distances; then
    x' = x + tx,  y' = y + ty
– In matrix form, where T is the translation vector:
    P' = [x'] ,  P = [x] ,  T = [tx]
         [y']       [y]       [ty]
    P' = P + T
• Example: Given a circle C with radius 10 and center
coordinates (1, 4). Apply the translation with distance 5
towards X axis and 1 towards Y axis. Obtain the new
coordinates of C without changing its radius.
• Solution:
– Given-
• Old center coordinates of C = (Xold, Yold) = (1, 4)
• Translation vector = (Tx, Ty) = (5, 1)
• New coordinates (X’, Y’)?
– X’ = X + Tx = 1 + 5 = 6
– Y’ = Y + Ty = 4 + 1 = 5
– Thus the new center coordinates are (X’, Y’) = (6, 5).
2D Translation
• 2D Rotation
– 2D Rotation is a process of rotating an object with
respect to an angle in a two dimensional plane.
Basic two-dimensional geometric transformations (2/1)

• Two-dimensional rotation
– A rotation axis and a rotation angle are specified
– Convert coordinates into polar form for the calculation:
    x = r cos φ,  y = r sin φ
– Example: to rotate an object by an angle θ, the new position coordinates are
    x' = r cos(φ + θ) = r cos φ cos θ − r sin φ sin θ = x cos θ − y sin θ
    y' = r sin(φ + θ) = r cos φ sin θ + r sin φ cos θ = x sin θ + y cos θ
• In matrix form:
    R = [cos θ  −sin θ]     P' = R · P
        [sin θ   cos θ]
• Rotation about a point (xr, yr):
    x' = xr + (x − xr) cos θ − (y − yr) sin θ
    y' = yr + (x − xr) sin θ + (y − yr) cos θ
Basic two-dimensional geometric transformations (2/2)

– This figure shows the rotation of the house by 45 degrees.
[Figure: house rotated 45° counterclockwise about the origin]

• Positive angles are measured counterclockwise (from x towards y)
• For negative angles, you can use the identities:
– cos(−θ) = cos(θ) and sin(−θ) = −sin(θ)
• Problem-01:

• Given a line segment with starting point as (0,


0) and ending point as (4, 4). Apply 30 degree
rotation anticlockwise direction on the line
segment and find out the new coordinates of
the line.
• Solution-

• We rotate a straight line by its end points with the same


angle. Then, we re-draw a line between the new end
points.

• Given-
• Old ending coordinates of the line = (Xold, Yold) = (4, 4)
• Rotation angle = θ = 30º

• Let new ending coordinates of the line after rotation =


(Xnew, Ynew).
• Applying the rotation equations, we have-

• Xnew
• = Xold x cosθ – Yold x sinθ
• = 4 x cos30º – 4 x sin30º
• = 4 x (√3 / 2) – 4 x (1 / 2)
• = 2√3 – 2
• = 2(√3 – 1)
• = 2(1.73 – 1)
• = 1.46
• Ynew
• = Xold x sinθ + Yold x cosθ
• = 4 x sin30º + 4 x cos30º
• = 4 x (1 / 2) + 4 x (√3 / 2)
• = 2 + 2√3
• = 2(1 + √3)
• = 2(1 + 1.73)
• = 5.46

• Thus, New ending coordinates of the line after rotation


= (1.46, 5.46).
Scaling
• scaling is a process of modifying or altering the
size of objects.
– Scaling may be used to increase or reduce the size of object.
– Scaling subjects the coordinate points of the original object
to change.
– Scaling factor determines whether the object size is to be
increased or reduced.
– If scaling factor > 1, then the object size is increased.
– If scaling factor < 1, then the object size is reduced.
Basic two-dimensional geometric transformations (3/1)

• Two-dimensional scaling
– To alter the size of an object, multiply its coordinates by scaling factors sx and sy:
    x' = x · sx,  y' = y · sy
– In matrix form, where S is a 2×2 scaling matrix:
    [x']   [sx  0 ] [x]
    [y'] = [0   sy] [y]     P' = S · P
– Choosing a fixed point (xf, yf) as the centroid about which to scale:
    x' = x · sx + xf (1 − sx)
    y' = y · sy + yf (1 − sy)
Basic two-dimensional geometric transformations (3/2)

– In this figure, the house is scaled by 1/2 in x and 1/4 in y
• Notice that the scaling is about the origin:
– The house is smaller and closer to the origin
Scaling
– If the scale factor had been greater than 1, it would be larger and farther away.
WATCH OUT: Objects grow and move!
[Figure: house scaled about the origin; note the house shifts position relative to the origin]
• Problem-01:
• Given a square object with coordinate points
A(0, 3), B(3, 3), C(3, 0), D(0, 0). Apply the
scaling parameter 2 towards X axis and 3
towards Y axis and obtain the new coordinates
of the object.
• Solution-
• Given-
• Old corner coordinates of the square = A (0,
3), B(3, 3), C(3, 0), D(0, 0)
• Scaling factor along X axis = 2
• Scaling factor along Y axis = 3
• For Coordinates A(0, 3)
• Let the new coordinates of corner A after scaling
= (Xnew, Ynew).

• Applying the scaling equations, we have-


– Xnew = Xold x Sx = 0 x 2 = 0
– Ynew = Yold x Sy = 3 x 3 = 9

• Thus, New coordinates of corner A after scaling =


(0, 9).
• For Coordinates B(3, 3)

• Let the new coordinates of corner B after scaling =


(Xnew, Ynew).

• Applying the scaling equations, we have-


• Xnew = Xold x Sx = 3 x 2 = 6
• Ynew = Yold x Sy = 3 x 3 = 9

• Thus, New coordinates of corner B after scaling = (6, 9).


• For Coordinates C(3, 0)

• Let the new coordinates of corner C after scaling =


(Xnew, Ynew).

• Applying the scaling equations, we have-


• Xnew = Xold x Sx = 3 x 2 = 6
• Ynew = Yold x Sy = 0 x 3 = 0

• Thus, New coordinates of corner C after scaling = (6, 0).


• For Coordinates D(0, 0)

• Let the new coordinates of corner D after scaling =


(Xnew, Ynew).

• Applying the scaling equations, we have-


• Xnew = Xold x Sx = 0 x 2 = 0
• Ynew = Yold x Sy = 0 x 3 = 0

• Thus, New coordinates of corner D after scaling = (0, 0).


Lecture No. 17-18
• The matrix representations for translation, scaling and rotation are:

  Translation, P' = T + P:
    [x']   [x]   [tx]
    [y'] = [y] + [ty]

  Scaling, P' = S · P:
    [x']   [sx  0 ] [x]
    [y'] = [0   sy] [y]

  Rotation, P' = R · P:
    [x']   [cos θ  −sin θ] [x]
    [y'] = [sin θ   cos θ] [y]
• In a 2-D transformation we have represented each transformation, like translation, rotation and scaling, with the help of a 2 × 2 matrix.
• What if we apply these transformations in a sequence, for example rotation first and then translation to the same object?
• Can we represent this sequence of transformations as a single 2 × 2 matrix?
• Unfortunately, translation is treated as an
addition whereas scaling and rotation as a
multiplication.
• But we would like to be able to treat all three
transformations in a consistent way, so that
they can be combined easily.
• The use of homogeneous coordinates allows
to treat all three transformations as
multiplications.
• But what are homogeneous coordinates?
• In homogeneous coordinates, we add a third
coordinate to a point.
• Instead of being represented by a pair of
numbers (x,y), each point is represented by a
triple (x,y,h).
Homogeneous Coordinates
• A point (x, y) can be re-written in homogeneous coordinates as (xh, yh, h)
• The homogeneous parameter h is a non-zero value such that:
    x = xh / h,  y = yh / h
• We can then write any point (x, y) as (hx, hy, h)
• We can conveniently choose h = 1, so that (x, y) becomes (x, y, 1)
Why Homogeneous Coordinates?
• All of the transformations we discussed
previously can be represented as 3x3 matrices
• Using homogeneous coordinates allows us use
matrix multiplication to calculate
transformations – extremely efficient!
Homogeneous Coordinates
• Combine the geometric transformations into single 3×3 matrices
• Expand each 2D coordinate to a 3D coordinate with the homogeneous parameter

• Two-dimensional translation matrix:
    [x']   [1  0  tx] [x]
    [y'] = [0  1  ty] [y]
    [1 ]   [0  0  1 ] [1]

• Two-dimensional rotation matrix:
    [x']   [cos θ  −sin θ  0] [x]
    [y'] = [sin θ   cos θ  0] [y]
    [1 ]   [  0       0    1] [1]

• Two-dimensional scaling matrix:
    [x']   [sx  0   0] [x]
    [y'] = [0   sy  0] [y]
    [1 ]   [0   0   1] [1]
3D Transformation
• In the 2D system, we use only two coordinates X and
Y but in 3D, an extra coordinate Z is added.
• 3D graphics techniques and their application are
fundamental to the entertainment, games, and
computer-aided design industries
Translation
• In 3D translation, we transfer the Z coordinate
along with the X and Y coordinates.
• The process for translation in 3D is similar to
2D translation.
• A translation moves an object into a different
position on the screen.
• The following figure shows the effect of
translation −
Geometric transformations in three-dimensional space (2)

• Three-dimensional translation
– A point P (x, y, z) in three-dimensional space is translated to a new location with the translation distances T = (tx, ty, tz):
    x' = x + tx,  y' = y + ty,  z' = z + tz
– In matrix form:
    [x']   [1 0 0 tx] [x]
    [y'] = [0 1 0 ty] [y]     P' = T · P
    [z']   [0 0 1 tz] [z]
    [1 ]   [0 0 0 1 ] [1]
         = [x + tx  y + ty  z + tz  1]ᵀ
Scaling
• You can change the size of an object using
scaling transformation.
• In the scaling process, you either expand or
compress the dimensions of the object.
• Scaling can be achieved by multiplying the
original coordinates of the object with the
scaling factor to get the desired result.
3D Scaling
• In 3D scaling operation, three coordinates are
used.
• Let us assume that the original coordinates
are X,Y,Z, scaling factors are (Sx, Sy, Sz)
respectively, and the produced coordinates
are X′,Y′,Z′.
• This can be mathematically represented as
    [x']   [sx 0  0  0] [x]
    [y'] = [0  sy 0  0] [y]     P' = S · P
    [z']   [0  0  sz 0] [z]
    [1 ]   [0  0  0  1] [1]
         = [X·Sx  Y·Sy  Z·Sz  1]ᵀ
3D Rotation
• 3D rotation is not same as 2D rotation.
• In 3D rotation, we have to specify the angle of
rotation along with the axis of rotation.
• We can perform 3D rotation about X, Y, and Z
axes.
Geometric transformations in three-dimensional space (5)

• Three-dimensional coordinate-axis rotation
– Z-axis rotation equations:
    x' = x cos θ − y sin θ
    y' = x sin θ + y cos θ
    z' = z
    [x']   [cos θ  −sin θ  0] [x]
    [y'] = [sin θ   cos θ  0] [y]
    [z']   [  0       0    1] [z]
– Transformation equations for rotation about the other two coordinate axes can be obtained by a cyclic permutation: x → y → z → x
– X-axis rotation equations:
    y' = y cos θ − z sin θ
    z' = y sin θ + z cos θ
    x' = x
    Rx = Rx(θ) = [1     0       0   ]
                 [0   cos θ  −sin θ ]
                 [0   sin θ   cos θ ]
Geometric transformations in three-dimensional space (6)

• Three-dimensional coordinate-axis rotation
– Y-axis rotation equations:
    z' = z cos θ − x sin θ
    x' = z sin θ + x cos θ
    y' = y
    Ry = Ry(θ) = [ cos θ  0  sin θ]
                 [   0    1    0  ]
                 [−sin θ  0  cos θ]
Lecture No. 19-20
2D Viewing
Contents
• The Viewing Pipeline
• Viewing Coordinate Reference Frame
• Window-To-Viewport Coordinate Transformation
• Clipping Operations
• Point Clipping
• Line Clipping
• Polygon Clipping
• How are views of a picture displayed on the output device?
• Typically graphics package allows a user to specify
which part of a defined picture is to be displayed and
where that part is to be placed on the display device.

• For a two dimensional picture, a view is selected by


specifying a subarea of a total picture area.
• The picture parts within the selected area are
then mapped onto specified area of the device
coordinates

• The transformation from world coordinates to


device coordinates involve translation,
rotation, and scaling operations
• Window: A world-coordinate area selected for display is
called a window.

• Viewport: An area on a display device to which a


window is mapped is called a viewport.

• Viewing transformation: The mapping of the part of


the world coordinate scene to device coordinates is
referred to as viewing transformation.
The 2D Viewing Pipeline
SCENE: The computer generated scene is composed of many geometrical objects, like triangles, rectangles, circles.
Clipping Window
• The scene here represented by world
coordinates

• A section of 2-Dimensional scene that is


selected for display is called a clipping
Window.
Viewport:
An area on a display device to which a window is mapped
is called a viewport.

[Figure: the selected window contents displayed in the center of the viewport]
• The clipping window selects what we want to
see
• The viewport indicates where it is to be
viewed on the output device
• By changing the position of a viewport, we can
view objects at different positions on the
display area of an output device
The Viewing Pipeline (1/3)
• In many cases the window and the viewport are rectangles

[Figure: a clipping window bounded by xwmin, xwmax, ywmin, ywmax in world coordinates is mapped to a viewport bounded by xvmin, xvmax, yvmin, yvmax in viewport coordinates]
Viewing Pipeline

[Figure: the two-dimensional viewing-transformation pipeline, from modeling coordinates to world coordinates, viewing coordinates, normalized coordinates, and finally device coordinates; clipping is applied along the way]
• The mapping of a 2D world-coordinate scene description to device coordinates is called a 2-dimensional viewing transformation.
• Once a world coordinate scene has been constructed, We
could set up a separate 2-D viewing coordinate reference
frame for specifying the clipping window.
• To make the viewing process independent of the
requirements of any output device, graphic system convert
object descriptions to normalized coordinates and apply the
clipping routines.
• Clipping is usually performed in normalized
coordinates
• At the final step of the viewing
transformation, the contents of the viewport
are transferred to positions within the display
window
Viewing Coordinate Reference Frame

• Used to provide a method for setting up


arbitrary orientations for rectangular windows
• Matrix for converting world-coordinate positions to viewing coordinates:
    MWC,VC = R · T
  R: rotation matrix
  T: translation matrix
Viewing Coordinate Reference Frame

• The steps in this coordinate transformation


– A viewing coordinate frame is moved into coincidence
with the world frame in two steps
a) Translate the viewing origin to the world origin, then
b) Rotate to align the axes of the two systems

• Mapping of window coordinates to viewport is
called window to viewport transformation
• We do this using transformation that
maintains relative position of window
coordinate into viewport
• That means center coordinates in window
must be remain at center point in viewport
• We find relative position by equation as
follow:
Window-To-Viewport Coordinate
Transformation (1/5)
• Window-to-viewport mapping
– A point at position (xw, yw) in a designated window is
mapped to viewport coordinates (xv, yv) so that
relative positions in the two areas are the same

Window-To-Viewport Coordinate
Transformation (2/5)

Window-To-Viewport Coordinate
Transformation (3/5)
• To maintain the same relative placement:
    (xv − xvmin) / (xvmax − xvmin) = (xw − xwmin) / (xwmax − xwmin)
    (yv − yvmin) / (yvmax − yvmin) = (yw − ywmin) / (ywmax − ywmin)
• Solving these expressions for the viewport position (xv, yv):
    xv = xvmin + (xw − xwmin) · sx
    yv = yvmin + (yw − ywmin) · sy
  where sx and sy are the scaling factors given on the next slide.
Window-To-Viewport Coordinate Transformation (4/5)

• The scaling factors:
    sx = (xvmax − xvmin) / (xwmax − xwmin)
    sy = (yvmax − yvmin) / (ywmax − ywmin)
• Conversion sequence of transformations
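As a concrete illustration (not from the slides), a short C sketch of the window-to-viewport mapping using these scaling factors; window_to_viewport is a hypothetical helper name:

/* Map a world-coordinate point (xw, yw) inside the clipping window to
   viewport coordinates (xv, yv), preserving its relative position. */
void window_to_viewport(double xw, double yw,
                        double xwmin, double xwmax, double ywmin, double ywmax,
                        double xvmin, double xvmax, double yvmin, double yvmax,
                        double *xv, double *yv)
{
    double sx = (xvmax - xvmin) / (xwmax - xwmin);   /* x scaling factor */
    double sy = (yvmax - yvmin) / (ywmax - ywmin);   /* y scaling factor */
    *xv = xvmin + (xw - xwmin) * sx;
    *yv = yvmin + (yw - ywmin) * sy;
}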
Clipping Operations
Clipping:
• Any procedure that identifies those portions of a picture that are either inside or outside of a specified region of space.
• The region against which an object is to be clipped is called a clip window.
• Applications of clipping include extracting parts of a defined scene for viewing, identifying visible surfaces in 3D views, etc.
• Depending on the application, the clip window can be a general polygon or it can even have curved boundaries.
• We consider clipping methods using rectangular clip regions.
• Applied in World Coordinates
• Adapting Primitive Types
– Point clipping
– Line clipping
– Area clipping (Polygons)
– Curve clipping
– Text clipping
Point Clipping
• In point clipping we eliminate those points which
are outside the clipping window and draw points
which are inside the clipping window
• Here we consider clipping window is rectangular
boundary with edge (Xwmin, Xwmax, Ywmin, Ywmax)
• For finding whether the given point is inside or
outside the clipping window we use the following
equations
Point Clipping
• Assuming that the clip window is a rectangle in standard position, we save a point P = (x, y) for display if the following inequalities are satisfied:
    xwmin ≤ x ≤ xwmax
    ywmin ≤ y ≤ ywmax
Applications of point clipping
• Although point clipping is applied less often than
line or polygon clipping, some applications may
require a point clipping procedure
For example:
• Point clipping can be applied to scenes involving
explosions or sea foam that are modeled with
particles (points) distributed in some region of
the scene.
Line Clipping
• The part of the line that lies inside the window is kept, and the part of the line that falls outside of the window is removed.
Line Clipping (1/3)
• Line clipping against a rectangular clip window
[Figure: a) before clipping, b) after clipping]
Line Clipping (2/3)
[Figure: line-clipping cases against the window boundaries]
Line Clipping (3/3)
• Cohen-Sutherland Line Clipping
• Liang-Barsky Line Clipping
• NLN(Nicholl-Lee-Nicholl) Line Clipping
• Line Clipping Using Nonrectangular Clip
Windows
• Splitting Concave Polygons

Lecture No. 21-22

Line Clipping Algorithms


Cohen-Sutherland Line Clipping and Important
applications of Clipping

• The Cohen-Sutherland Line Clipping Algorithm is the simplest and oldest method of line clipping.
• It performs some primary tests and checks some initial conditions.
• It then identifies and divides lines into three types. This makes the algorithm faster and easier.
Clipping and Clip Window
• First of all we define a region. We can call this region
Clip Window.
• Then, we will keep all the portions or objects which are
inside this window. While all the things outside this
window are discarded.
• So we select a particular portion of a scene and then
we display only that part. Rest of the part is not
displayed.
• So clipping involves, identifying which portion is
outside clip window. Then discarding that portion.
• Clip window may be rectangular or any other polygon
or it may be a curve also.
Applications of Clipping
• To select a part from a scene for displaying
• To identify visible surfaces in three dimensional
views.
• Solid-modeling procedures also use Clipping.
• We can use clipping for multiwindow environment
• To copy, move, or delete specific part of the picture
Approaches of Clipping
• Clipping can be performed in two ways:
1. We can perform Clipping in world coordinates before
mapping it to device coordinates.
2. While in Viewport clipping we perform clipping after
mapping it to device coordinates.
Types of Clipping
• There are various types of Clipping:
– Point Clipping
– Line Clipping
– Polygon Clipping
– Text Clipping
– Curve Clipping
Point Clipping
• First of all consider a point P(x, y).
• We can specify Edges of Clip Window by
(xwmin, xwmax, ywmin, ywmax).

• If the following four inequalities are not all satisfied, the point is clipped; otherwise it is displayed:

    xwmin ≤ x ≤ xwmax
    ywmin ≤ y ≤ ywmax
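A point-clipping test is a direct transcription of these inequalities; a minimal C sketch (clip_point is a hypothetical helper name):

/* Return 1 if P = (x, y) should be displayed, 0 if it is clipped. */
int clip_point(double x, double y,
               double xwmin, double xwmax, double ywmin, double ywmax)
{
    return xwmin <= x && x <= xwmax && ywmin <= y && y <= ywmax;
}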
Point Clipping
Line Clipping
• A line is specified with its end-points.
• There are three possible cases for a line that
we need to consider.
– A line is completely inside the window
– A line is completely outside the window
– A line is neither completely inside nor completely
outside
Cohen Sutherland Line Clipping Algorithm

• It uses a four-bit binary code to make clipping decisions. This code is the region code. We can specify the bit pattern as TBRL (top, bottom, right, left).
• Now, consider the following cases:
1. If a point is inside the clip window, the region code is 0000.
2. If the point is above the window, T is set to 1.
3. If the point is below the window, B is set to 1.
4. If the point is to the right of the window, R is set to 1.
5. If the point is to the left of the window, L is set to 1.
Cohen Sutherland Line Clipping Algorithm
[Figure: the nine regions and their TBRL region codes around the clip window]
• So, basically we specify nine regions:
1. Inside the clip window: 0000
2. Top of the window: 1000 (T is 1)
3. Bottom of the window: 0100 (B is 1)
4. Right of the window: 0010 (R is 1)
5. Left of the window: 0001 (L is 1)
6. Top left of the window: 1001 (T and L both are 1)
7. Top right of the window: 1010 (T and R both are 1)
8. Bottom left of the window: 0101 (B and L both are 1)
9. Bottom right of the window: 0110 (B and R both are 1)
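These region codes take only a few comparisons to compute; a minimal C sketch follows, encoding TBRL as four bits (the clip-window boundaries are assumed to be defined elsewhere):

/* Region-code bits in TBRL order, as on the slide. */
#define TOP    8   /* 1000 */
#define BOTTOM 4   /* 0100 */
#define RIGHT  2   /* 0010 */
#define LEFT   1   /* 0001 */

extern double xwmin, xwmax, ywmin, ywmax;   /* clip window, assumed defined elsewhere */

/* Compute the four-bit region code of point (x, y). */
int compute_outcode(double x, double y)
{
    int code = 0;                       /* 0000: inside the clip window */
    if (y > ywmax) code |= TOP;
    else if (y < ywmin) code |= BOTTOM;
    if (x > xwmax) code |= RIGHT;
    else if (x < xwmin) code |= LEFT;
    return code;
}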
Steps of Algorithm
• In this algorithm, we first find the region codes of the endpoints of the given lines. Then, check each line.
• Step 1:
– If the region codes of both endpoints of a line are 0000, the line is completely inside; display the line completely and exit.
• Step 2:
– Otherwise, calculate the logical AND of the region codes of the line's endpoints.
– Then, there are two cases possible:
• If the result is non-zero, the line is completely outside: clip (discard) the line completely and exit.
• If the result is zero, the line is partially visible. So, calculate the intersection points of the line segment with the clipping boundaries.
How to calculate intersection points
• We specify the clipping boundary by (xwmin, xwmax, ywmin, ywmax).
• The line end-points are (x1, y1) and (x2, y2).
• We can calculate the intersection point with each clipping boundary:
– Left vertical clipping boundary:
    y = y1 + (y2 − y1)/(x2 − x1) · (xwmin − x1)
– Right vertical clipping boundary:
    y = y1 + (y2 − y1)/(x2 − x1) · (xwmax − x1)
– Top horizontal clipping boundary:
    x = x1 + (x2 − x1)/(y2 − y1) · (ywmax − y1)
– Bottom horizontal clipping boundary:
    x = x1 + (x2 − x1)/(y2 − y1) · (ywmin − y1)
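Putting the region codes and intersection formulas together, a minimal sketch of the Cohen-Sutherland loop in C, reusing compute_outcode() from the sketch above (a full version would also return the clipped endpoints through pointers):

/* Clip segment (x1,y1)-(x2,y2); return 1 if any part is visible. */
int cohen_sutherland_clip(double x1, double y1, double x2, double y2)
{
    int code1 = compute_outcode(x1, y1);
    int code2 = compute_outcode(x2, y2);

    for (;;) {
        if ((code1 | code2) == 0) {
            return 1;                   /* both codes 0000: completely inside */
        } else if (code1 & code2) {
            return 0;                   /* logical AND non-zero: completely outside */
        } else {
            /* Partially visible: clip the outside endpoint against one
               boundary, using the intersection formulas above. */
            int out = code1 ? code1 : code2;
            double x, y;
            if (out & TOP)         { x = x1 + (x2 - x1) / (y2 - y1) * (ywmax - y1); y = ywmax; }
            else if (out & BOTTOM) { x = x1 + (x2 - x1) / (y2 - y1) * (ywmin - y1); y = ywmin; }
            else if (out & RIGHT)  { y = y1 + (y2 - y1) / (x2 - x1) * (xwmax - x1); x = xwmax; }
            else                   { y = y1 + (y2 - y1) / (x2 - x1) * (xwmin - x1); x = xwmin; }

            if (out == code1) { x1 = x; y1 = y; code1 = compute_outcode(x1, y1); }
            else              { x2 = x; y2 = y; code2 = compute_outcode(x2, y2); }
        }
    }
}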
Cohen-Sutherland Line Clipping (2/3)
[Worked-example slides: region codes and successive boundary intersections for sample line segments]
Illumination Models & Surface
Rendering Methods

Lecture No. 23-24


Agenda
• Shading:
– Shading is something that we need while rendering or displaying objects.
– It changes the feel of how an object is displayed.
– Basically, the first effect on any surface is illumination, i.e. how much light has been given to the object.
Light, Surface and Imaging
• Illumination has strong impact on appearance
of the surface
Specular Surfaces
• Specular surfaces appear shiny because most of the light that is reflected or scattered lies in a narrow range of angles close to the angle of reflection.
• Mirrors are perfectly specular surfaces.
Diffusive Surfaces
• Diffuse surfaces are characterized by reflected light
being scattered in all directions.
• Walls painted with matte or flat paint are diffuse
reflectors.
– Perfectly diffused surfaces scatter light equally in all the
directions,
– Flat perfectly diffuse surface appears the same to all
viewers.
Translucent Surfaces
• Translucent surfaces allow some light to
penetrate the surface and to emerge from
another location on the object.
– This process of refraction characterizes glass and water.
– Some incident light may also be reflected at the
surface.
Shadows
• Shadows created by
finite-size light source.
– Umbra
• full shadow
– Penumbra
• partial shadow
Light Sources
• Color sources
• Ambient Light(uniform lighting)
• Point Source(emits light equally in all
directions)
• Spot lights(Reflect light from ideal point
source)
What shading can do?
• Suppose we draw a circle
Illumination Models and surface
rendering methods
• In order to achieve realism in computer
generated images, we need to apply natural
lighting effects to the surfaces of objects.
• Illumination models are used to calculate the
amount of light reflected from a certain
position on a surface.
– In other words, they provide ways to compute the
color on a pixel of a surface.
Illumination Models and Surface
Rendering Methods
• An illumination model, also called a lighting
model/shading model is the model for calculating
light intensity at a single surface point.
• A surface-rendering algorithm uses the intensity
calculations from an illumination model to determine
the light intensity for all projected pixel positions for
the various surfaces in a scene.
Components of Illumination Model
• Light Sources: type , color, and direction of the light
source
• Surface Properties: reflectance, opaque/transparent,
shiny/dull.
Light Source
• Objects that radiate energy are called light
sources, such as the sun, lamps, bulbs, fluorescent
tubes, etc.
• Sometimes light sources are classified as light-
emitting objects and light reflectors.
• Generally, "light source" means an
object that is emitting radiant energy, e.g. the Sun.
• Total Reflected Light = Contribution from light
sources + contribution from reflecting surfaces
Light Source
• Point sources: sources that emit rays in all
directions.
• Point source emit light from a single point in
all directions, with the intensity of the light
decreasing with distance.
• An example of a point source is a standalone
light bulb.
Light Source
• Distributed light source: all light rays originate
at a finite area in space (e.g. a tube light).
Distributed Light Source
• A nearby source, such as
a long fluorescent light, is a distributed source.
• All of the rays from a
directional/distributed
light source have the
same direction and no
point of origin.
• It is as if the light source were infinitely far away
from the surface that it is illuminating.
• Sunlight is an example of an infinite light source.
Materials
• When light is incident on an opaque surface,
part of it is reflected and part is absorbed.
• The amount of incident light reflected by a
surface depends on the type of material
• Shiny materials reflect more of the incident
light, and dull surfaces absorb more of the
incident light.
• For an illuminated transparent surface, some
of the incident light will be reflected and some
will be transmitted through the material.
Diffuse reflection
• Surfaces that are rough, or grainy, tend to scatter the
reflected light in all directions.
• This scattered light is called diffuse reflection.
The surface appears equally bright from all viewing
directions.
• What we call the color of an object is the color of the
diffuse reflection of the incident light.
Specular Reflection
• When light falls on a shiny object, a white
highlight seen on the object is called
specular reflection.
OR
• Light sources create highlights, or bright
spots, called specular reflections.
Basic Illumination Models
• Illumination models are used to calculate light
intensities that we should see at a given point on
the surface of an object.
• Lighting calculations are based on the
– optical properties of surfaces,
– the background lighting conditions
– and the light source specifications.
• All light sources are considered to be point
sources, specified with a co-ordinate position and
an intensity value (color).
Basic Illumination Models
• Some illumination models are
– Ambient Light
– Diffuse Reflection
– Specular Reflection
– Phong model
Ambient Light
• Even though an object in a scene
is not directly lit, it will still be
visible, because light is
reflected from nearby objects.
• Ambient light has no spatial or
directional characteristics.
(Fig. 6: Ambient light shading.)
• The amount of ambient light incident on each object is
a constant for all surfaces and over all directions.
• The amount of ambient light that is reflected by an
object is independent of the object's position or
orientation and depends only on the optical properties
of the surface.
Ambient Light
• The amount of light reflected from an object's
surface is determined by ka, the ambient-
reflection coefficient. ka ranges from 0 to 1.
• The illumination equation for ambient light is

I = ka Ia

where
I is the resulting intensity,
Ia is the intensity of the ambient light, and
ka is the object's ambient-reflection coefficient.
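As a quick worked example (values chosen purely for illustration): with Ia = 1.0 and ka = 0.3, every point on the surface receives the same intensity I = 0.3 × 1.0 = 0.3, independent of its position and orientation.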
Ambient light source
• A scene lit only with an ambient light
source
Diffuse Reflection
• Diffuse reflections are constant over each surface
in a scene, independent of the viewing direction.
• The amount of the incident light that is diffusely
reflected can be set for each surface with
parameter kd, the diffuse-reflection coefficient, or
diffuse reflectivity.
0 ≤ kd ≤ 1;
kd near 1 – a highly reflective surface;
kd near 0 – a surface that absorbs most of the
incident light;
kd is a function of surface color.
Diffuse Reflection
Even though there is equal light scattering in all direction
from a surface, the brightness of the surface does depend
on the orientation of the surface relative to the light
source:
(Fig. 8: A surface perpendicular to the direction of the incident light (a) is more
illuminated than an equal-sized surface at an oblique angle (b) to the incoming
light direction.)
Diffuse Reflection
• As the angle between the surface normal and the
incoming light direction increases, less of the
incident light falls on the surface.
• We denote the angle of incidence between the
incoming light direction and the surface normal as
θ. Thus, the amount of illumination depends on
cos θ.
• If the incoming light from the source is
perpendicular to the surface at a particular point,
that point is fully illuminated.
Diffuse Reflection
If Il is the intensity of the point
light source, then the diffuse
reflection equation for a point
on the surface can be written as

Il,diff = kd Il cos θ
or
Il,diff = kd Il (N · L)

where
N is the unit normal vector to the surface and L is the
unit direction vector to the point light source from a
position on the surface.
(Fig. 9: Angle of incidence θ between the unit light-source
direction vector L and the unit surface normal N.)
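A minimal sketch of this diffuse term in Python (the helper names are illustrative; N and L are assumed to be unit-length 3-tuples, and the max() clamp, which zeroes the term when the light is behind the surface, is a standard addition not stated on the slide):

def dot(a, b):
    """Dot product of two equal-length vectors given as tuples."""
    return sum(x * y for x, y in zip(a, b))

def diffuse_term(k_d, I_l, N, L):
    """Lambertian diffuse term: k_d * I_l * cos(theta) = k_d * I_l * (N . L)."""
    return k_d * I_l * max(0.0, dot(N, L))

# Light arriving at 60 degrees to the normal, so N . L = cos 60 = 0.5:
print(diffuse_term(0.7, 1.0, (0.0, 0.0, 1.0), (0.0, 0.866, 0.5)))  # ~0.35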
Diffuse Reflection
Figure 10 illustrates illumination with
diffuse reflection, using various values of the
parameter kd between 0 and 1.
(Fig. 10: Series of pictures of a sphere illuminated by the diffuse reflection model
only, using different kd values (0.4, 0.55, 0.7, 0.85, 1.0).)
Diffuse Reflection
We can combine the ambient and point-source
intensity calculations to obtain an expression for the
total diffuse reflection:

Idiff = ka Ia + kd Il (N · L)

where both ka and kd depend on surface material
properties and are assigned values in the range from
0 to 1.
(Fig. 11: Series of pictures of a sphere illuminated by the ambient and diffuse
reflection model. Ia = Il = 1.0, kd = 0.4, and ka values (0.0, 0.15, 0.30, 0.45, 0.60).)
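Continuing the sketch (reusing the diffuse_term() helper above; names remain illustrative):

def ambient_plus_diffuse(k_a, I_a, k_d, I_l, N, L):
    """I_diff = k_a * I_a + k_d * I_l * (N . L)."""
    return k_a * I_a + diffuse_term(k_d, I_l, N, L)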
Specular Reflection and the
Phong Model
• Specular reflection is the result of total, or near total,
reflection of the incident light in a concentrated region
around the specular-reflection angle.
• Shiny surfaces have a narrow specular-reflection range.
• Dull surfaces have a wider reflection range.
Specular Reflection
Figure 13 shows the specular-reflection
direction at a point on the
illuminated surface. In this figure,
• R represents the unit vector in
the direction of specular reflection;
• L – the unit vector directed toward the
point light source;
• V – the unit vector pointing to the viewer from the
surface position;
• angle φ is the viewing angle relative to the specular-
reflection direction R.
(Fig. 13: Modeling specular reflection.)
Specular reflection
Phong Reflection Model
• A simple model that supports three types of light-
matter interaction:
– Diffuse
– Specular
– Ambient
• and uses four vectors:
– Normal to the surface (N)
– To the light source (L)
– To the viewer (V)
– Perfect reflector (R)
Phong Model
The Phong model is an empirical model for
calculating the specular-reflection range:
• It sets the intensity of specular reflection
proportional to cos^ns φ;
• angle φ is assigned values in the range 0° to
90°, so that cos φ ranges from 0 to 1;
• the specular-reflection parameter ns is
determined by the type of surface that we
want to display;
• the specular-reflection coefficient ks is equal to
some value in the range 0 to 1 for each
surface.
Phong Model
• A very shiny surface is modeled with a large value for ns
(say, 100 or more);
• small values are used for duller surfaces.
• For a perfect reflector (a perfect mirror), ns is infinite.
(Fig. 14: Modeling specular reflection with parameter ns: a shiny surface has a
large ns; a dull surface has a small ns.)
Phong Model
The Phong specular-reflection model:

Ispec = ks Il cos^ns φ

Since V and R are unit vectors in the viewing
and specular-reflection directions, we can
calculate the value of cos φ with the dot
product V · R:

Ispec = ks Il (V · R)^ns
(Fig. 13: Modeling specular reflection.)
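A matching sketch of this specular term (reusing the dot() helper from earlier; clamping V . R at zero is an assumption that avoids raising a negative base to a power):

def specular_term(k_s, I_l, V, R, n_s):
    """Phong specular term: k_s * I_l * (V . R)^ns, for unit vectors V and R."""
    return k_s * I_l * max(0.0, dot(V, R)) ** n_s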
Phong Model
Vector R can be calculated by considering projections
onto the direction of the normal vector N:

R + L = (2 N · L) N
R = (2 N · L) N - L
Phong Model
The halfway vector H lies along the bisector of the
angle between L and V:

H = (L + V) / |L + V|

Ispec = ks Il (N · H)^ns
(Fig. 17: Halfway vector H along the bisector of the angle between L and V.)
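Both vectors follow directly from these formulas (a sketch assuming unit-length 3-tuples and the dot() helper defined earlier):

import math

def reflect(N, L):
    """Perfect-reflection direction R = 2 (N . L) N - L."""
    d = dot(N, L)
    return tuple(2.0 * d * n - l for n, l in zip(N, L))

def halfway(L, V):
    """Halfway vector H = (L + V) / |L + V|."""
    s = tuple(l + v for l, v in zip(L, V))
    length = math.sqrt(dot(s, s))
    return tuple(c / length for c in s)

Using N · H in place of V · R avoids computing the reflection vector R at every surface point.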
Combine Diffuse & Specular
Reflections
For a single point light source, we can model
the combined diffuse and specular reflections
from a point on an illuminated surface as
I = Idiff + Ispec
  = ka Ia + kd Il (N · L) + ks Il (N · H)^ns
Combine Diffuse & Specular
Reflections with Multiple Light
Sources
If we place more than one point source in a
scene, we obtain the light reflection at any
surface point by summing the contributions
from the individual sources:

I = ka Ia + Σ(i=1..n) Ili [ kd (N · Li) + ks (N · Hi)^ns ]
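Putting the pieces together, the full model for several point sources can be sketched as follows (reusing the earlier helpers; the lights list format and the clamps are illustrative assumptions):

def phong_intensity(k_a, I_a, k_d, k_s, n_s, N, V, lights):
    """I = ka*Ia + sum over lights of Il * [kd (N . L) + ks (N . H)^ns].
    `lights` is a list of (Il, L) pairs, L being a unit vector toward the light."""
    I = k_a * I_a
    for I_l, L in lights:
        H = halfway(L, V)
        I += I_l * (k_d * max(0.0, dot(N, L))
                    + k_s * max(0.0, dot(N, H)) ** n_s)
    return I

# One overhead light, surface facing up, viewer directly above:
print(phong_intensity(0.2, 1.0, 0.6, 0.5, 50,
                      (0.0, 0.0, 1.0), (0.0, 0.0, 1.0),
                      [(1.0, (0.0, 0.0, 1.0))]))  # 0.2 + 0.6 + 0.5 = 1.3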