Computer Graphics - Study Material
COURSE MATERIAL
Faculty Name: Mrs. M. Priya Class: III B.Sc(cs)
Mrs.M.Priya,MCA.,M.Phil.,
Computer Graphics
MAJOR BASED ELECTIVE II (A)
COMPUTER GRAPHICS
Objective:
To understand the concepts on basic Graphical Techniques, Raster Graphics, Two Dimensional
and Three Dimensional Graphics
Unit I
Overview of Computer Graphics System: Video Display Devices – Raster Scan Systems –
Random-Scan Systems – Graphics Monitors and Workstations – Input Devices – Hardcopy Devices –
Graphics Software.
Unit II
Output Primitives: Line Drawing Algorithms – Loading the Frame Buffer –Line Function –
Circle-Generating Algorithms. Attributes of Output Primitives: Line Attributes – Curve Attributes –
Color and Grayscale levels– Area fill Attributes – Character Attributes – Bundled Attributes – Inquiry
Functions.
Unit III
2D Geometric Transformations: Basic Transformation – Matrix Representations – Composite
Transformations – Window to Viewport Co-Ordinate Transformations. Clipping: Point Clipping – Line
Clipping – Cohen-Sutherland Line Clipping – Liang Barsky Line Clipping – Polygon Clipping –
Sutherland – Hodgman Polygon Clipping – Curve Clipping – Text Clipping.
Unit IV
Graphical User Interfaces and Interactive Input Methods: The User Dialogue – Input of Graphical
Data – Input Functions – Interactive Picture Construction Techniques. Three Dimensional Concepts: 3D-
Display Methods – Three Dimensional Graphics Packages.
Unit V
3D Geometric and Modeling Transformations: Translation – Scaling – Rotation – Other
Transformations. Visible Surface Detection Methods: Classification of Visible Surface Detection
Algorithm –Backface Detection – Depth-Buffer Method – A-Buffer Method –Scan-Line Method –
Applications of Computer Graphics.
Text Book:
1. Donald Hearn and M. Pauline Baker, Computer Graphics, C Version, Second Edition,
Pearson Education, 2014.
1. OVERVIEW OF GRAPHICS SYSTEMS
Definition:
Computer graphics is the art of drawing pictures on computer screens with the help of
programming. It involves the computation, creation, and manipulation of data. In essence, computer
graphics is a rendering tool for the generation and manipulation of images.
In other words, it is a visual representation of data displayed on a monitor. It can be a
series of images (most often called video) or a single image. It is used for making movies, video
games, scientific models, designs for catalogs, and other commercial art.
Classification of Computer Graphics:
Drawing pictures on computers is also called rendering. 2D computer graphics are usually
split into two categories:
1. Vector Graphics
2. Raster graphics
Vector Graphics
Vector graphics is the creation of digital images through a sequence of commands or
mathematical statements.
In vector graphics, lines, shapes, and text are combined to create a more complex image.
Vector graphics are made with programs such as Adobe Illustrator and Inkscape.
A vector graphics image is shown in Fig 1.1.
Raster Graphics
Raster graphics, or a bitmap image, is a dot-matrix data structure representing a rectangular
grid of pixels, or points of color.
Raster images are stored in image files with varying formats.
Raster graphics use pixels to make up a larger image.
Raster images are made with programs such as Adobe Photoshop and Corel Paint Shop Pro.
Sometimes people use only individual pixels to make an image. This is called pixel art, and it
has a distinctive style. A raster image is shown in Fig 1.2.
VIDEO DISPLAY DEVICES:
The primary output device in a graphics system is a video monitor (Fig 1.3). The
operation of most video monitors is based on the standard cathode-ray tube (CRT) design.
Refresh Cathode-Ray Tubes
A beam of electrons (cathode rays), emitted by an electron gun, passes through focusing and
deflection systems that direct the beam toward specified positions on the phosphor-coated screen.
Sometimes the electron gun is built to contain the accelerating anode and focusing system within
the same unit.
Persistence:
Persistence means how long the phosphors continue to emit light after the CRT beam is removed.
A phosphor with low persistence is useful for animation.
A higher persistence phosphor is useful for displaying highly complex, static pictures.
Raster-Scan Displays
In a raster-scan system, the electron beam is swept across the screen, one row at a time from top
to bottom.
As the electron beam moves across each row, the beam intensity is turned on and off to create a
pattern of illuminated spots.
Refresh Buffer or Frame Buffer
Picture definition is stored in a memory area called the refresh buffer or frame buffer.
Scan Line
The frame buffer holds the set of intensity values for all the screen points. Stored intensity values
are then retrieved from the refresh buffer and "painted" on the screen one row at a time; each row
is called a scan line (Fig 1.7).
Pixel
Each screen point is referred to as a pixel or pel or picture element.
Intensity range for pixel positions depends on the capability of the raster system.
In a simple black-and-white system, each screen point is either on or off. So only one bit per pixel
is needed to control the intensity of screen positions.
For a bi-level system, a bit value of 1 indicates that the electron beam is to be turned on at that
position, and a value of 0 indicates that the beam intensity is to be off.
Bitmap
On a black-and-white system with one bit per pixel, the frame buffer is commonly called a
bitmap.
Pixmap
On systems with multiple bits per pixel, the frame buffer is often referred to as a pixmap.
Refreshing on raster scan displays is carried out at the rate of 60 to 80 frames per second. Refresh
rates are described in units of cycles per second or Hertz.
Using these units, we would describe a refresh rate of 60 frames per second as simply 60 Hz.
Horizontal Retrace
At the end of each scan line, the electron beam returns to the left side of the screen to begin
displaying the next scan line.
The return to the left of the screen, after refreshing each scan line, is called the horizontal retrace
of the electron beam.
Vertical Retrace
At the end of each frame, the return of the electron beam to the top left corner of the screen to
begin the next frame is called the vertical retrace.
Fig 1.8 Interlacing scan lines on a Raster Scan Display
Interlace
Each frame is displayed in two passes using an interlaced refresh procedure.
In the first pass, the beam sweeps across every other scan line from top to bottom.
Then after the vertical retrace, the beam sweeps out the remaining scan lines (Fig 1.8).
Random-Scan Displays
In a random-scan display unit, a CRT has the electron beam directed only to the parts of the
screen where a picture is to be drawn (Fig 1.9).
Random-scan monitors draw a picture one line at a time, and for this reason they are also referred
to as vector displays, stroke-writing displays, or calligraphic displays.
Refresh rate on a random-scan system depends on the number of lines to be displayed.
Color CRT Monitors
A CRT monitor displays color pictures by using a combination of phosphors that emit
different-colored light. By combining the emitted light from the different phosphors, a range of colors
can be generated.
There are two basic techniques for producing color displays:
1. Shadow-mask method.
2. Beam-Penetration Method
Beam-Penetration Method
In the beam-penetration method, two layers of phosphor, usually red and green, are coated onto
the inside of the CRT screen.
The displayed color depends on how far the electron beam penetrates into the phosphor layers.
A beam of slow electrons excites only the outer red layer.
A beam of very fast electrons penetrates through the red layer and excites the inner green layer.
At intermediate beam speeds, combinations of red and green light are emitted to show two
additional colors, orange and yellow.
Shadow-mask method
Shadow-mask methods are commonly used in raster-scan systems because they produce a much
wider range of colors than the beam-penetration method.
A shadow-mask CRT has three phosphor color dots at each pixel position.
One phosphor dot emits a red light, another emits a green light, and the third emits a blue light.
This type of CRT has three electron guns, one for each color dot, and a shadow-mask grid just
behind the phosphor-coated screen.
Types of Shadow-mask:
There are two types of shadow mask
1. Delta – delta Shadow mask
2. Inline shadow mask
The delta-delta shadow-mask method is commonly used in color CRT systems. The three electron
beams are deflected and focused as a group onto the shadow mask, which contains a series of holes
aligned with the phosphor-dot patterns (Fig 1.10).
When the three beams pass through a hole in the shadow mask, they activate a dot triangle, which
appears as a small color spot on the screen.
Another arrangement for the electron guns is the in-line arrangement, in which the three electron
guns and the corresponding red-green-blue color dots on the screen are aligned along one scan line
instead of in a triangular pattern.
This in-line arrangement of electron guns is used in high-resolution color CRTs.
Full Color System or True Color System
An RGB color system with 24 bits of storage per pixel is generally referred to as a full-color
system or a true-color system.
Direct-View Storage Tubes
A direct-view storage tube (DVST) stores the picture information inside the CRT instead of
refreshing the screen. Two electron guns are used in a DVST:
1. Primary Gun
2. Flood Gun
The primary gun is used to store the picture pattern; the flood gun maintains the
picture display. The DVST has both advantages and disadvantages compared to the refresh CRT.
Advantages
Because no refreshing is needed, very complex pictures can be displayed at very high resolutions
without flicker.
Disadvantages
DVSTs ordinarily do not display color, and selected parts of a picture cannot be erased.
To eliminate a picture section, the entire screen must be erased. The erasing and redrawing
process can take several seconds for a complex picture.
Flat-Panel Displays
The term flat-panel display refers to a class of video devices that have reduced volume, weight,
and power requirements compared to a CRT.
A significant feature of flat-panel displays is that they are thinner than CRTs, and we can hang
them on walls or wear them on our wrists.
Current uses for flat-panel displays include small TV monitors, calculators, pocket video games,
and laptop computers.
There are two categories:
1. Emissive displays (emitters)
2. Non-emissive displays (non-emitters)
1. Emissive displays or Emitters
Emissive displays are devices that convert electrical energy into light. Plasma panels, thin-film
electroluminescent displays, and light-emitting diodes are examples of emissive displays.
Plasma panels:
Plasma panels, also called gas-discharge displays, are constructed by filling the region between
two glass plates with a mixture of gases that usually includes neon.
A series of vertical conducting ribbons is placed on one glass panel, and a set of horizontal
conducting ribbons is built into the other glass panel.
When firing voltages are applied to a pair of horizontal and vertical conductors, the gas at the
intersection of the two conductors breaks down into a glowing plasma of electrons and ions.
Picture definition is stored in a refresh buffer, and the firing voltages are applied to refresh the
pixel positions 60 times per second. Alternating-current methods are used to provide faster
application of the firing voltages and brighter displays.
Disadvantages:
Plasma panels were basically monochromatic devices, but systems have since been developed for
displaying color and grayscale.
Thin-Film Electroluminescent Displays:
The construction of a thin-film electroluminescent display is similar to that of a plasma panel.
The difference is that the region between the glass plates is filled with a phosphor, such as zinc
sulphide doped with manganese, instead of a gas.
When a high voltage is applied to a pair of crossing electrodes, electrical energy is absorbed by
the manganese atoms, which then release a spot of light, similar to the glowing plasma effect in a
plasma panel.
Electroluminescent displays require more power than plasma panels, and good color and gray-scale displays are difficult to achieve.
LED (Light –emitting diode):
A matrix of diodes is arranged to form the pixel positions in the display, and picture definition is
stored in a refresh buffer.
Information is read from the refresh buffer and converted to voltage levels that are applied to the
diodes to produce the light patterns in the display.
2. Non emissive displays or non emitters
Non-emissive displays use optical effects to convert sunlight or light from some other source into
graphics patterns.
Example: LCD
Liquid-crystal displays (LCDs) are commonly used in small systems, such as calculators and
portable laptop computers.
Each pixel of an LCD consists of a layer of molecules aligned between two transparent electrodes
with light polarizers.
A passive-matrix LCD uses a grid of vertical and horizontal conductors made of indium tin
oxide (ITO) to create an image.
Another method for constructing an LCD is to place a transistor at each pixel position, using
thin-film transistor technology. The transistors control the voltage at the pixel locations, and
such devices are called active-matrix displays.
RASTER-SCAN SYSTEM:
In raster graphics, in addition to the central processing unit (CPU), a special-purpose processor
called the video controller or display controller is used to control the operation of the display device.
The organization of a simple raster system is shown in Fig 1.11.
Fig 1.12 Architecture of raster system with a fixed portion of the system memory
A fixed area of the system memory is reserved for the frame buffer. So the video controller is
given direct access to the frame-buffer memory (Fig 1.12).
Frame-buffer locations, and the corresponding screen positions, are referenced in Cartesian
coordinates. The coordinate origin is defined at the lower left screen corner.
The screen surface is then represented as the first quadrant of a two-dimensional system, with
positive x values increasing to the right and positive y values increasing from bottom to top.
Scan lines are labeled from ymax at the top of the screen to 0 at the bottom; along each scan line,
screen pixel positions are labeled from 0 to xmax.
Two registers are used to store the coordinates of the screen pixels.
Initially, the x register is set to 0 and the y register is set to ymax.
The value stored in the frame buffer for this pixel position is then retrieved and used to set the
intensity of the CRT beam.
Then the x register is incremented by 1, and the process is repeated for the next pixel on the top
scan line.
In high-quality systems, two frame buffers are provided: one for refreshing and the other for
being filled with intensity values.
Raster-Scan Display Processor
A raster system may contain a separate display processor, sometimes referred to as a graphics
controller or a display coprocessor (Fig 1.13).
Scan Conversion
A major task of the display processor is digitizing a picture definition given in an application
program into a set of pixel-intensity values for storage in the frame buffer. This digitization
process is called scan conversion.
Display processors are also designed to perform a number of additional operations.
These functions include generating various line styles (dashed, dotted, or solid), displaying color
areas, and performing certain transformations and manipulations on displayed objects.
RANDOM SCAN SYSTEMS
The organization of a simple random-scan system is shown in Fig 1.14.
An application program is input and is stored in the system memory.
Graphics commands in the application program are translated by the graphics package into a
display file stored in the system memory.
This display file is then accessed by the display processor to refresh the screen.
The display processor in a random-scan system is referred to as a display processing unit or a
graphics controller.
Fig 1.15 General Purpose Computer System that can be used for Graphics Application
For example, laser printers and plotters are graphics devices because they permit the computer
to output pictures.
The lower spherical ball of a joystick moves in a socket.
The joystick can be moved in all four directions.
It is mainly used in Computer-Aided Design (CAD) and for playing computer games.
Digitizer
A digitizer is an input device that converts analog information into digital form.
A digitizer can convert a signal from a television camera into a series of numbers that can be
stored in a computer.
These numbers can then be used by the computer to create a picture of whatever the camera was pointed at.
A digitizer is also known as a tablet or graphics tablet because it converts graphics and pictorial
data into binary inputs.
Printers
A printer is the most important output device; it is used to print information on paper.
There are two types of printers:
1. Impact Printers
2. Non-Impact Printers
Impact Printers
The printers that print the characters by striking against the ribbon and onto the paper are called
impact printers.
1. Character Printer
2. Line Printer
Character Printer:
It prints only one character at a time and has a relatively slow speed, e.g., dot-matrix printers.
Dot Matrix Printer:
It prints characters as combinations of dots.
Dot-matrix printers are the most popular among serial printers.
These printers have a matrix of pins on the print head, which form the characters.
The computer memory sends one character at a time to be printed. There is a carbon ribbon
between the pins and the paper; the characters are printed on the paper when the pins strike the
carbon ribbon. There are generally 24 pins.
Non-Impact Printers:
These printers use non-impact technology, such as ink-jet or laser technology.
They provide better quality output at higher speed.
There are two types:
1. Ink-Jet Printer
2. Laser Printer
Ink-Jet Printer:
It prints characters by spraying patterns of ink on the paper from a nozzle or jet.
It prints from nozzles having very fine holes, from which a specially made ink is pumped out to
create various letters and shapes.
Laser Printer:
It is a type of printer that utilizes a laser beam to produce an image on a drum.
This is also the way copy machines work. Because an entire page is transmitted to a drum before
the toner is applied, laser printers are sometimes called page printers.
GRAPHICS SOFTWARE
There are two general classifications for graphics software:
General Programming Packages
A general graphics programming package provides an extensive set of graphics functions that can
be used in a high-level programming language, such as C or FORTRAN.
Example: generating picture components such as straight lines, polygons, circles, and other figures.
Special-Purpose Applications Packages
Application graphics packages are designed for nonprogrammers, so that users can generate
displays without worrying about how graphics operations work.
Example: Artist’s painting programs and various business, medical, and CAD systems
Coordinate Representations
Coordinate values for a picture are converted to Cartesian coordinates before they can be input to
the graphics package.
Different Cartesian reference frames are used to construct and display a scene.
Modeling Coordinates
We can construct the shape of individual objects, such as trees or furniture, in a scene within
separate coordinate reference frames called modeling coordinates, or sometimes local coordinates
or master coordinates.
World Coordinates
Once individual object shapes have been specified, we can place the objects into appropriate
positions within the scene using a reference frame called world coordinates.
Graphics Functions
A general-purpose graphics package provides users with a variety of functions for creating and
manipulating pictures.
The basic building blocks for pictures are referred to as output primitives. They include character
strings and geometric entities, such as points, straight lines, curved lines, filled areas (polygons,
circles, etc.).
Attributes are the properties of the output primitives. They include intensity and color
specifications, line styles, text styles, and area-filling patterns.
Geometric Transformations
Geometric transformations are used to change the size, position, or orientation of an object
within a scene.
Modeling Transformations
Modeling transformations are used to construct a scene from object descriptions given in modeling coordinates.
Viewing Transformations
Viewing transformations are used to specify the view that is to be presented and the portion of the
output display area that is to be used.
Pictures can be subdivided into component parts, called structures or segments or objects,
depending on the software package in use.
Interactive graphics applications use various kinds of input devices, such as a mouse, a tablet, or a
joystick.
Software Standards
The primary goal of standardized graphics software is portability.
When packages are designed with standard graphics functions, software can be moved easily
from one hardware system to another and used in different implementations and applications.
Graphical Kernel System (GKS)
GKS was the first graphics software standard, adopted by the International Standards
Organization (ISO) and also by the American National Standards Institute (ANSI).
PHIGS (Programmer's Hierarchical Interactive Graphics standard)
It is the second software standard to be developed and approved by the standards organizations.
It is an extension of GKS.
PHIGS Workstations
Workstation refers to a computer system with a combination of input and output devices that is
designed for a single user.
In PHIGS and GKS, however, the term workstation is used to identify various combinations of
graphics hardware and software.
A PHIGS workstation can be a single output device, a single input device, a combination of input
and output devices, a file, or even a window displayed on a video monitor.
2. OUTPUT PRIMITIVES
Introduction
Primitives are the simple geometric functions that are used to generate the various pictures
required by the user.
The basic output primitives are the point position (pixel) and the straight line.
Other output primitives include the rectangle, conic section, circle, and surface.
POINT AND LINES
Point Function
A point function is the most basic Output primitive in the graphic package.
A point function specifies a location using x and y coordinates, and the user may also pass other
attributes such as intensity and color.
The location is stored as a tuple of two integers, and the color is defined using a hex code.
The size of a point is equal to the size of a pixel on the display monitor.
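As a minimal sketch of how such a point primitive might be represented in C (the struct and helper names here are illustrative, not from the text):

```c
/* Illustrative sketch: a point primitive stored as a two-integer
   location plus a color given as a 0xRRGGBB hex code. */
typedef struct {
    int x, y;            /* location as a tuple of two integers */
    unsigned int color;  /* color as a 0xRRGGBB hex code        */
} Point;

/* Pack red, green, and blue components (0-255) into one hex code. */
unsigned int rgb(unsigned int r, unsigned int g, unsigned int b) {
    return (r << 16) | (g << 8) | b;
}
```

For example, rgb(255, 0, 0) produces the hex code 0xFF0000 for pure red.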
Line Drawing Algorithms
Line-drawing algorithms determine pixel positions along a straight-line path from the geometric
properties of the line. The Cartesian slope-intercept equation for a straight line is
y = m · x + b -------------------- 1
where m is the slope of the line and b is the y intercept.
Given that the two endpoints of a line segment are specified at positions (x1, y1) and (x2, y2), as
shown in Fig 2.2.
Fig 2.3 Line path between endpoint positions (x1, y1) and (x2, y2)
We can determine values for the slope m and y intercept b with the following calculations:
m = (y2 − y1) / (x2 − x1) -------------------- 2
b = y1 − m · x1 -------------------- 3
Algorithms for displaying straight lines are based on the line equation 1 and the calculations
given in Eqs. 2 and 3.
For any given x interval ∆x along a line, we can compute the corresponding y interval ∆y from
Eq.2 as
∆y = m · ∆x -------------------- 4
Similarly, we can obtain the x interval ∆x corresponding to a specified ∆y as
∆x = ∆y / m -------------------- 5
For lines with slope magnitudes |m| < 1, ∆x can be set proportional to a small horizontal
deflection voltage, and the corresponding vertical deflection is then set proportional to ∆y as calculated
from Eq-4.
For lines whose slopes have magnitudes |m| > 1, ∆y can be set proportional to a small vertical
deflection voltage with the corresponding horizontal deflection voltage set proportional to ∆x, calculated
from Eq.5.
For lines with m = 1, ∆x = ∆y and the horizontal and vertical deflection voltages are equal. In
each case, a smooth line with slope m is generated between the specified endpoints.
On raster systems, lines are plotted with pixels, and step sizes in the horizontal and vertical
directions are constrained by the pixel separations.
Scan conversion process for straight lines is illustrated in Fig 2.3.
DDA Algorithm
The digital differential analyzer (DDA) is a scan-conversion line algorithm based on calculating
either ∆y or ∆x, using Eq. 4 or Eq. 5.
First consider a line with positive slope, as shown in Fig 2.4. If the slope is less than or equal to 1,
we sample at unit x intervals (∆x = 1) and compute successive y values as
yk+1 = yk + m -------------------- 6
Fig 2.4 straight line segment with five sampling positions along the x axis between x1 and x2.
Subscript k takes integer values starting from 0, for the first point, and increases by 1 until the
final endpoint is reached.
Since m can be any real number between 0.0 and 1.0, each calculated y value must be rounded to
the nearest integer corresponding to a screen pixel position in the x column.
For lines with a positive slope greater than 1.0, reverse the roles of x and y. That is, we sample at
unit y intervals (∆y = 1) and calculate consecutive x values as
xk+1 = xk + 1/m -------------------- 7
In this case, each computed x value is rounded to the nearest pixel position along the current y
scan line.
Equations 6 and 7 are based on the assumption that lines are to be processed from the left
endpoint to the right endpoint (Fig 2.2). If this processing is reversed, so that the starting endpoint is at
the right, then either we have ∆x = −1 and
yk+1 = yk − m -------------------- 8
or (when the slope is greater than 1) we have ∆y = −1 with
xk+1 = xk − 1/m -------------------- 9
Lines with negative slopes are also handled with Equations 6 through 9. If the absolute value of
the slope is less than 1 and the starting endpoint is at the left, we set ∆x = 1 and calculate y values with Eq. 6.
When the starting endpoint is at the right (for the same slope), we set ∆x = −1 and obtain y
positions using Eq. 8.
For a negative slope with absolute value greater than 1, we use ∆y = −1 and Eq. 9, or we use
∆y = 1 and Eq. 7.
Algorithm
Step 1: Get the input line endpoints (xa, ya) and (xb, yb).
Step 2: Calculate dx = xb − xa and dy = yb − ya.
Step 3: Initialize x = xa and y = ya.
Step 4: If abs(dx) > abs(dy), then steps = abs(dx); otherwise steps = abs(dy).
Step 5: Loop the following process steps number of times:
1. Use a unit increment or decrement in the x and y directions.
2. If xa is less than xb, the increments in the x and y directions are 1 and m.
3. If xa is greater than xb, the decrements −1 and −m are used.
Example: Consider the line from (0, 0) to (4, 6)
1. xa = 0, ya = 0 and xb = 4 yb = 6
2. dx = xb - xa = 4-0 = 4 and dy = yb - ya = 6-0 = 6
3. x = 0 and y = 0
4. 4 > 6 (false) so, steps = 6
5. Calculate xIncrement = dx/steps = 4 / 6 = 0.66 and
yIncrement = dy/steps = 6/6 = 1
6. Setpixel(x, y) = Setpixel(0, 0) (Starting Pixel Position)
7. Iterate the calculation for xIncrement and yIncrement for steps (6) number of times
8. Tabulation of each iteration is given below.

k    x       y    Plotted pixel
1    0.66    1    (1, 1)
2    1.32    2    (1, 2)
3    1.98    3    (2, 3)
4    2.64    4    (3, 4)
5    3.3     5    (3, 5)
6    3.96    6    (4, 6)
Fig 2.5 Pixel positions along the line path between endpoints (0, 0) and (4, 6) plotted with DDA line
algorithm
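The DDA steps above can be sketched in C. The text's setPixel writes into the frame buffer; so that this sketch runs standalone, each plotted pixel is instead appended to caller-supplied arrays (the array-based interface is an assumption for illustration):

```c
#include <stdlib.h>

/* Round to the nearest integer (avoids needing the math library). */
static int roundNearest(float v) {
    return (v >= 0.0f) ? (int)(v + 0.5f) : (int)(v - 0.5f);
}

/* DDA line sketch: sample along the axis of greater change and
   round each computed coordinate to the nearest pixel position. */
void lineDDA(int xa, int ya, int xb, int yb, int px[], int py[], int *count) {
    int dx = xb - xa, dy = yb - ya, steps, k;
    float xIncrement, yIncrement, x = (float)xa, y = (float)ya;

    /* Step 4: choose the number of steps from the larger interval. */
    if (abs(dx) > abs(dy))
        steps = abs(dx);
    else
        steps = abs(dy);

    xIncrement = dx / (float)steps;
    yIncrement = dy / (float)steps;

    px[0] = xa;                      /* plot the starting pixel */
    py[0] = ya;
    for (k = 1; k <= steps; k++) {
        x += xIncrement;
        y += yIncrement;
        px[k] = roundNearest(x);     /* round to the nearest pixel */
        py[k] = roundNearest(y);
    }
    *count = steps + 1;
}
```

For the worked example from (0, 0) to (4, 6), this produces the pixels (0, 0), (1, 1), (1, 2), (2, 3), (3, 4), (3, 5), (4, 6), matching the tabulation above.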
Bresenham’s Line Algorithm
An accurate and efficient raster line-generating algorithm, developed by Bresenham, uses only
incremental integer calculations.
In addition, Bresenham’s line algorithm can be adapted to display circles and other curves.
To illustrate Bresenham’s approach, we first consider the scan-conversion process for lines with
positive slope less than 1.0.
Pixel positions along a line path are determined by sampling at unit x intervals. Starting from the
left endpoint (x0, y0) of a given line, we step to each successive column (x position) and plot the
pixel whose scan-line y value is closest to the line path.
Assuming we have determined that the pixel at (xk , yk ) is to be displayed, next we need to decide
which pixel to plot in column xk+1. Our choices are the pixels at positions (xk + 1, yk ) and (xk + 1, yk + 1).
For example, as shown in Fig 2.6, from position (2, 3) we need to determine whether the next
sample position is (3, 3) or (3, 4). We choose the point that is closer to the original line.
Fig 2.6 a straight line segment is to be plotted, starting from the pixel at column 2 on scan line 3.
At sampling position xk + 1, we label vertical pixel separations from the mathematical line path as
dlower and dupper in Fig 2.5.
The y coordinate on the mathematical line at pixel column position xk + 1 is calculated as
y = m (xk + 1) + b -------------------- 10
Then
dlower = y − yk = m (xk + 1) + b − yk
and
dupper = (yk + 1) − y = yk + 1 − m (xk + 1) − b
Bresenham's line-drawing algorithm for |m| < 1:
1. Input the two line endpoints and store the left endpoint in (x0, y0).
2. Load (x0, y0) into the frame buffer; that is, plot the first point.
3. Calculate the constants ∆x, ∆y, 2∆y, and 2∆y − 2∆x, and obtain the starting value for the
decision parameter as p0 = 2∆y − ∆x.
4. At each xk along the line, starting at k = 0, perform the following test: if pk < 0, the next point to
plot is (xk + 1, yk) and
pk+1 = pk + 2∆y
5. Otherwise, the next point to plot is (xk + 1, yk + 1) and
pk+1 = pk + 2∆y − 2∆x
6. Perform step 4 ∆x − 1 times.
Implementation of Bresenham Line drawing Algorithm
void lineBres (int xa, int ya, int xb, int yb)
{
    int dx = abs(xa - xb), dy = abs(ya - yb);
    int p = 2 * dy - dx;
    int twoDy = 2 * dy, twoDyDx = 2 * (dy - dx);
    int x, y, xEnd;

    /* Determine which point to use as start, which as end */
    if (xa > xb) {
        x = xb;
        y = yb;
        xEnd = xa;
    }
    else {
        x = xa;
        y = ya;
        xEnd = xb;
    }
    setPixel(x, y);
    while (x < xEnd) {
        x++;
        if (p < 0)
            p += twoDy;
        else {
            y++;
            p += twoDyDx;
        }
        setPixel(x, y);
    }
}
Example:
Consider the line with endpoints (20, 10) and (30, 18).
The line has the slope m = (18 - 10)/ (30 - 20) = 8/10 = 0.8
Δx = 10
Δy = 8
The initial decision parameter has the value p0 = 2Δy - Δx = 6
and the increments for calculating successive decision parameters are
2Δy = 16
2Δy - 2 Δx = -4
We plot the initial point (x0, y0) = (20, 10) and determine successive pixel positions along the
line path from the decision parameter.
Tabulation:

k    pk    (xk+1, yk+1)
0     6    (21, 11)
1     2    (22, 12)
2    −2    (23, 12)
3    14    (24, 13)
4    10    (25, 14)
5     6    (26, 15)
6     2    (27, 16)
7    −2    (28, 16)
8    14    (29, 17)
9    10    (30, 18)
Fig 2.8 Pixel positions along the line path between endpoints (20, 10) and (30, 18) plotted with
Bresenham's line algorithm
Advantages
1. Algorithm is fast
2. Uses only integer calculations
Disadvantages
1. It is meant only for basic line drawing.
LOADING THE FRAME BUFFER
After scan-converting straight line segments and other objects in a raster system, frame-buffer
positions must be calculated.
This is done by a setPixel procedure that stores intensity values for the pixels at the
corresponding addresses within the frame-buffer array.
Scan conversion algorithms generate pixel positions at successive intervals.
To calculate frame-buffer addresses, incremental methods are used.
Figure 2.9 Pixel screen positions stored within the frame buffer
For example, in Fig 2.9, the frame-buffer array is addressed in row-major order. The pixel
positions vary from (0, 0) to (xmax, ymax).
The address of pixel position (x, y) is calculated as follows:
addr (x, y) = addr (0, 0) + y (xmax + 1) + x
We can calculate the frame-buffer address for the next pixel along the scan line by using the
incremental method as follows:
addr (x+1, y) = addr (x, y) + 1
The frame-buffer address for the pixel position (x+1, y+1) is calculated as:
addr (x+1, y+1) = addr (x, y) + xmax + 2
where the constant xmax + 2 is precomputed once for all line segments.
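The addressing formulas above can be checked with a small sketch. XMAX and the function name are illustrative, and addr(0, 0) is taken as offset 0:

```c
/* Sketch of row-major frame-buffer addressing. */
#define XMAX 639   /* assumed resolution: 640 pixels per scan line */

/* Offset of pixel (x, y) from addr(0, 0):
   addr(x, y) = addr(0, 0) + y * (XMAX + 1) + x */
long addrOffset(int x, int y) {
    return (long)y * (XMAX + 1) + x;
}
```

The incremental identities then hold: addrOffset(x+1, y) − addrOffset(x, y) equals 1, and addrOffset(x+1, y+1) − addrOffset(x, y) equals XMAX + 2.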
CIRCLE GENERATING ALGORITHMS
PROPERTIES OF CIRCLE
A circle is defined as the set of points that are all at a given distance r from a center position (xc,
yc).
Figure 2.10 Circle with Center Coordinate (xc, yc) and Radius r
The distance relationship is expressed by the Pythagorean theorem in Cartesian coordinates as
(x - xc)² + (y - yc)² = r²
We could use this equation to calculate points on the circle by stepping along the x axis in unit steps from xc - r to xc + r and calculating the corresponding y values at each position as
y = yc ± sqrt(r² - (x - xc)²)
This is not the best method for generating a circle.
Problems
(1) It involves considerable computation at each step.
(2) The spacing between each plotted pixel position is not uniform.
Solutions
(1) The spacing can be adjusted by interchanging x and y whenever the absolute value of the
slope of the circle is greater than 1.
It increases the computation and processing of the algorithm.
(2) Another way to adjust the unequal spacing is to calculate points along the circular boundary
using polar coordinates r and θ.
The circle equation in parametric polar form yields the following pair of equations:
x = xc+r cosθ
y = yc+r sinθ
Where θ is a fixed angular step size.
By using the above equation, a circle is plotted with equally spaced points along the
circumference.
Symmetry of a circle
•By considering the symmetry of a circle, computations can be reduced.
•The shape of the circle is similar in all the four quadrants.
Circle sections in adjacent octants within one quadrant are symmetric with respect to the 45° line dividing the two octants.
Advantage
We can generate all pixel positions around a circle by calculating only the points within the sector
from x=0 to x=y.
MIDPOINT CIRCLE ALGORITHM
In midpoint method, the circle function is defined as follows:
fcircle (x, y) =x2+y2–r2
Any point (x, y) on the boundary of the circle with radius r satisfies the equation
fcircle (x, y) = 0
If the point is in the interior of the circle, the circle function is negative; if the point is outside the circle, the circle function is positive. The relative position of any point (x, y) can therefore be determined by checking the sign of the circle function:
fcircle (x, y) < 0, if (x, y) is inside the circle boundary
fcircle (x, y) = 0, if (x, y) is on the circle boundary
fcircle (x, y) > 0, if (x, y) is outside the circle boundary
The circle function test is performed for the mid positions between pixels near the circle path at
each sampling step. So the circle function is the decision parameter in the midpoint algorithm.
At the start position (0, r), the terms 2x and 2y have the values 0 and 2r, respectively. Successive values are obtained by adding 2 to the previous value of 2x and subtracting 2 from the previous value of 2y.
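The midpoint method for one octant can be sketched in C as follows. This is a compact variant, assuming a circle centered on the origin, an integer radius, and the common initial decision parameter p0 = 1 - r; points are stored rather than drawn:

```c
/* Midpoint circle sketch: generates the octant from x = 0 to x = y.
   The remaining seven octants follow by symmetry. */
int midpointCircleOctant(int r, int xs[], int ys[])
{
    int x = 0, y = r;
    int p = 1 - r;                 /* initial decision parameter */
    int n = 0;

    while (x <= y) {
        xs[n] = x; ys[n] = y; n++;
        x++;
        if (p < 0)
            p += 2 * x + 1;        /* midpoint inside: keep y */
        else {
            y--;                   /* midpoint outside: step down */
            p += 2 * x + 1 - 2 * y;
        }
    }
    return n;
}
```

For r = 10 this produces the familiar octant sequence (0,10), (1,10), (2,10), (3,10), (4,9), (5,9), (6,8), (7,7).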
ATTRIBUTES OF OUTPUT PRIMITIVES
Any parameter that affects the way a primitive is to be displayed is referred to as an
attribute parameter. Examples of attribute parameters are color and size. Attributes determine the
fundamental characteristics of a primitive.
TYPES OF ATTRIBUTES
1. Line Attributes
2. Curve Attributes
3. Color and Grayscale Levels
4. Area Fill Attributes
5. Character Attributes
6. Bundled Attributes
Line Attributes
Basic attributes of a straight line segment are
1. Line Type
2. Line Width
3. Pen and Brush Options
4. Line Color
Line type
Line type attribute includes solid lines, dashed lines and dotted lines.
To set line type attributes in a PHIGS application program, a user invokes the function
setLinetype (lt)
where parameter lt is assigned a positive integer value of 1, 2, 3 or 4 to generate lines that are
solid, dashed, dotted, or dash-dotted, respectively. Other values for the line-type parameter lt could be used to display
variations in the dot-dash patterns.
Line width
Implementation of the line-width option depends on the capabilities of the output device.
Line-width attributes are set with the following command:
setLinewidthScaleFactor (lw)
The line-width parameter lw is assigned a positive number to indicate the relative width of the line to be
displayed. A value of 1 specifies a standard-width line; setting lw to 0.5 plots a line whose width is half
that of the standard line, and values greater than 1 produce lines thicker than the standard.
Line Cap
We can adjust the shape of the line ends to give them a better appearance by adding line caps (Fig. 2.8).
There are three types of line caps:
1. Butt cap
2. Round cap
3. Projecting square cap
Round join
It is produced by capping the connection between the two segments with a circular boundary
whose diameter is equal to the line width.
Bevel join
It is generated by displaying the line segments with butt caps and filling in the triangular gap where the
segments meet.
Pen and Brush Options
In some graphics packages, lines can also be displayed using selected pen or brush options.
Options in this category include shape, size, and pattern. Some possible pen or brush shapes are
given in figure 2.10.
The areas can be displayed using various brush styles, colors and transparency parameters.
Fill Styles
Areas are displayed with three basic fill styles, as shown in Fig. 2.11:
1. Hollow with a color border
2. Filled with a solid color
3. Filled with a specified pattern or design.
For the pattern fill style, table entries can be created on individual output devices with the following
function
setPatternRepresentation (ws,pi,nx,ny,cp)
Parameter pi sets the pattern index number for workstation code ws, and cp is a two dimensional
array of color codes with nx columns and ny rows. For example the following function could be used to
set the first entry in the pattern table for workstation 1.
cp [1,1 ] = 4; cp [2,2] = 4;
cp [1,2] = 0; cp[2,1] = 0;
setPatternRepresentation (1,1,2,2,cp);
When color array cp is to be applied to fill a region, we need to specify the size of an array with
the following function
setPatternSize( dx,dy)
Where parameters dx and dy give the coordinate width and height of the array mapping. Then a
reference position for starting a pattern fill is assigned with the following statement;
setPatternReferencePoint (position);
Where parameter position is a pointer to coordinates (xp,yp) that fix the lower left corner of the
rectangular pattern.
Tiling:
The process of filling an area with a rectangular pattern is called tiling, and such rectangular fill
patterns are referred to as tiling patterns.
Soft Fill:
Soft-fill or tint-fill algorithms are applied to repaint areas so that the fill color is combined with the
background colors. An example of this type of fill is the linear soft-fill algorithm, which repaints an area by merging a
foreground color F with a single background color B, where F ≠ B.
Character Attributes
The appearance of displayed characters is controlled by attributes such as font, size, color and
orientation.
Attributes can be set both for entire character strings (text) and for individual characters defined
as marker symbols.
Text Attributes
A font (or typeface) is a set of characters with a particular design style, such as Courier,
Helvetica, Times Roman, and various symbol groups.
The characters in a selected font can also be displayed with assorted underlining styles (solid, dotted,
double), in boldface, in italics, and in outline or shadow styles.
A particular font and associated style is selected in a PHIGS program by setting an integer code
for the text font parameter tf in the function
setTextFont (tf)
Control of text color (or intensity) is managed from an application program with
setTextColourIndex (tc)
where the text-color parameter tc specifies an allowable color code.
We can adjust text size by scaling the overall dimensions of characters or by scaling only the
character width.
Character size is specified by points, where 1 point is 0.013837 inch. Point measurements specify
the size of the body of a character.
The distance between the bottom line and the top line of the character body is the same for all
characters in a particular size and typeface, but the width of the body may vary.
Proportionally spaced fonts assign a smaller body width to narrow characters such as i, j, l and f
compared to broad characters such as W or M.
Character height is defined as the distance between the base line and cap line of characters.
Text size can be adjusted without changing the width to height ratio of characters with
setCharacterHeight (ch)
Parameter ch is assigned a real value greater than 0 to set the coordinate height of capital letters.
(Figure: text displayed with character-width settings of 0.5, 1.0, and 2.0)
Spacing between characters is controlled separately with
setCharacterSpacing (cs)
(Figure: the word "Spacing" displayed with increasing character-spacing values)
where the character-spacing parameter cs can be assigned any real value.
The orientation for a displayed character string is set according to the direction of the character up
vector
setCharacterUpVector (upvect)
Parameter upvect in this function is assigned two values that specify the x and y vector
components.
Text is displayed so that the orientation of characters from baseline to cap line is in the direction
of the up vector. For example, upvect = (1, 1) displays the text at a 45° angle, as shown in the following
figure.
To arrange character strings vertically or horizontally, we use
setTextPath (tp)
where tp can be assigned the value right, left, up, or down.
Another attribute for character strings is alignment. This attribute specifies how text is to be
positioned with respect to the start coordinates. Alignment attributes are set with
SetTextAlignment (h,v)
Where parameters h and v control horizontal and vertical alignment.
(Figure: text strings displayed with different alignment settings)
Marker Attributes
The size of displayed marker symbols is set with
setMarkerSizeScaleFactor (ms)
with the marker-size parameter ms assigned a positive number. This scaling parameter is applied to
the nominal size of the particular marker symbol. Values greater than 1 increase the marker size and
values less than 1 reduce it.
Marker color is specified with
SetPolymarkerColourIndex (mc)
The selected color-code parameter mc is stored in the current attribute list and used to display
subsequently specified marker primitives.
Bundled Attributes
An attribute specification that indicates exactly how a primitive is to be displayed with that attribute
setting is called an individual (or unbundled) attribute.
A particular set of attributes values for a primitive on each output device is chosen by specifying
appropriate table index. Attributes specified in this manner are called bundled attributes.
The table for each primitive that defines groups of attribute values to be used on particular output
devices is called a bundle table.
The choice between a bundled or an unbundled specification is made by setting a switch called
the aspect source flag for each of these attributes
setIndividualASF (attributeptr, flagptr)
where parameter attributeptr points to a list of attributes and parameter flagptr points to the
corresponding list of aspect source flags.
Each aspect source flag can be assigned a value of individual or bundled.
Bundled line Attributes
Entries in the bundle table for line attributes on a specified workstation are set with the function
setPolylineRepresentation (ws, li, lt, lw, lc)
where parameter ws identifies the workstation, line-index parameter li defines the bundle position, and
the line type, line width, and line color are assigned to parameters lt, lw, and lc.
For example, a polyline that is assigned a table index value of 3 could be displayed using dashed lines at half
thickness in a blue color on workstation 1, while on workstation 4 this same index generates solid,
standard-sized white lines.
Once the bundled tables have been set up, a group of bundled line attributes is chosen for each
workstation by specifying table index value;
setPolylineIndex (li);
Bundled Area-Fill Attributes
Table entries for bundled area-fill attributes are set with
setInteriorRepresentation (ws, fi, fs, pi, fc)
which defines the attribute list corresponding to fill index fi on workstation ws. Parameters fs, pi and fc
are assigned values for the fill style, pattern index and fill color, respectively.
A particular attribute bundle is selected from the table with the function
setInteriorIndex (fi);
Bundled Text Attributes
Table entries for bundled text attributes are set with
setTextRepresentation (ws, ti, tf, tp, te, ts, tc)
which bundles values for text font, precision, expansion factor, size, and color in a table position for
workstation ws that is specified by the value assigned to text-index parameter ti.
A particular text index value is chosen with the function
setTextIndex (ti);
Bundled Marker Attributes
Table entries for bundled marker attributes are set with
setPolymarkerRepresentation (ws, mi, mt, ms, mc)
which defines the marker type, marker scale factor, and marker color for index mi on workstation ws.
Bundle table selections are made with the function
setPolymarkerIndex (mi);
COLOUR AND GRAYSCALE LEVELS
Colour options are numerically coded with values ranging from 0 through the positive integers.
These color codes are converted to intensity level settings for the electron beams in CRT
monitors.
Color Tables
Color information can be stored in the frame buffer in two ways :
o The colour codes can be directly put in the frame buffer (or)
o Colour codes can be maintained in a separate table and pixel values can be used as an
index into this table.
3. TWO DIMENSIONAL GEOMETRIC TRANSFORMATIONS
Definition
Transformation is a basic concept in computer graphics. It means altering the
orientation, size, or shape of an object with a geometric transformation in the 2-D plane.
BASIC TRANSFORMATIONS
There are three basic transformations:
1. Translation
2. Rotation
3. Scaling
Translation
A translation is applied to an object by repositioning it along a straight-line path from one
coordinate location to another. It also refers to shifting a point, or moving an object from one place
to another.
In order to move an object in 2-D space, we need to add or subtract some value from its x and y
coordinates. That distance is known as 'translational distance'.
By adding translation distances tx and ty to the original coordinate position (x, y), we move the
point to a new position (x', y'):
x' = x + tx
y' = y + ty
With the column-vector representations
P = | x |     P' = | x' |     T = | tx |
    | y |          | y' |         | ty |
the two-dimensional translation equations can be written in the matrix form
P' = P + T
Sometimes matrix transformation equations are expressed in terms of coordinate row vectors
instead of column vectors, with P = [x y] and T = [tx ty].
Translation is a rigid body transformation that moves objects without deformation. That means
every point on the object is translated by the same amount.
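The translation equations can be written as a one-line C sketch (translatePoint is a hypothetical helper name); applying it to every vertex moves an object rigidly:

```c
/* Translation sketch: shift the point (x, y) by distances (tx, ty).
   x' = x + tx, y' = y + ty */
void translatePoint(double *x, double *y, double tx, double ty)
{
    *x += tx;
    *y += ty;
}
```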
Similar methods are used to translate curved objects. To change the position of a circle or ellipse,
we translate center coordinates and redraw the figure in new location.
Rotation
Rotation is used to rotate a point about an axis. The axis can be any of the coordinate axes or simply
any other specified line.
A two-dimensional rotation is applied to an object by repositioning it along a circular path in the
xy plane.
To generate a rotation, we specify a rotation angle θ and the position (xr, yr) of the rotation point (or
pivot point).
Positive values for the rotation angle define counterclockwise rotations about the pivot point, while a
negative value rotates objects in the clockwise direction.
The transformation can also be described as a rotation about a rotation axis that is perpendicular to the xy
plane and passes through the pivot point.
We first determine the transformation equations for rotation of a point position P when the pivot
point is at coordinate origin. The angular and coordinate relationships of the original and transformed
point positions are shown in the figure 3.3
where r is the constant distance of the point from the origin, angle Ф is the original angular
position of the point, and θ is the rotation angle.
A point is rotated from position (x, y) to position (x', y') through angle θ relative to the coordinate
origin. The transformed coordinates in terms of angles θ and Ф are
x'= rcos(θ+Ф) = rcosθ cosФ – rsinθsinФ
y'= rsin(θ+Ф) = rsinθ cosФ + rcosθsinФ
The original coordinates of the point in polar coordinates are
x = rcosФ, y = rsinФ
Substituting these expressions into the previous equations, we obtain the transformation equations for
rotating a point at position (x, y) through an angle θ about the origin:
x'= xcosθ – ysinθ
y'= xsinθ + ycosθ
We can write the rotation equations in the matrix form
P' = R . P
where the rotation matrix is
R = | cosθ   -sinθ |
    | sinθ    cosθ |
When coordinate positions are represented as row vectors instead of column vectors, the matrix
product in the rotation equation is transposed so that the transformed row-coordinate vector [x' y'] is calculated
as
P' T = (R. P) T
= PT. RT
Where PT= [x y], and the transpose RT of matrix R is obtained by interchanging rows and
columns. For a rotation matrix, the transpose is obtained by simply changing the sign of the sine terms.
The transformation equations for rotation of a point about any specified rotation position (xr, yr) are
x' = xr + (x - xr) cosθ - (y - yr) sinθ
y' = yr + (x - xr) sinθ + (y - yr) cosθ
Scaling
A scaling transformation alters the size of an object. This operation is carried out by multiplying the
coordinate values (x, y) of each vertex by scaling factors sx and sy to produce the transformed
coordinates (x', y'):
x' = x . sx
y' = y . sy
In matrix form,
| x' |   | sx   0 |   | x |
| y' | = |  0  sy | . | y |
(or)
P' = S. P
Where S is the 2 by 2 scaling matrix
Turning a square (a) Into a rectangle (b) with scaling factors sx = 2 and sy = 1.
Any positive numeric values are valid for the scaling factors sx and sy. Values less than 1 reduce the
size of objects and move them closer to the coordinate origin, while values greater than
1 produce an enlarged object and move coordinate positions farther from the origin.
There are two types of scaling:
1. Uniform scaling
2. Non-uniform scaling, or differential scaling
Uniform scaling is obtained by assigning the same value to sx and sy. Unequal values for sx
and sy result in non-uniform scaling.
We can control the location of a scaled object by choosing a position, called the fixed point, that is
to remain unchanged after the scaling transformation. The coordinates for the fixed point (xf, yf) can be
chosen as one of the vertices, the object centroid, or any other position. For a vertex with coordinates
(x, y), the scaled coordinates (x', y') are calculated as
x' = xf + (x – xf) sx
y' = yf + (y – yf) sy
We can separate the multiplicative and additive terms in this scaling transformation:
x' = x . sx + xf (1 - sx)
y' = y . sy + yf (1 - sy)
where the terms xf (1 - sx) and yf (1 - sy) are constant for all points in the object.
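The fixed-point scaling equations can be sketched in C (scaleAboutFixedPoint is a hypothetical helper name):

```c
/* Fixed-point scaling sketch: scale (x, y) by (sx, sy) about the
   fixed point (xf, yf), which itself remains unchanged.
   x' = xf + (x - xf) * sx  =  x*sx + xf*(1 - sx) */
void scaleAboutFixedPoint(double *x, double *y,
                          double sx, double sy,
                          double xf, double yf)
{
    *x = xf + (*x - xf) * sx;
    *y = yf + (*y - yf) * sy;
}
```

For example, with sx = 2, sy = 1 and fixed point (2, 3), the vertex (4, 3) moves to (6, 3) while the fixed point itself stays put.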
MATRIX REPRESENTATION AND HOMOGENEOUS COORDINATES
Many graphics applications involve sequences of geometric transformations. For example an
animation requires an object to be translated and rotated at each increment of the motion. In order to
combine sequence of transformations we have to eliminate the matrix addition.
The basic transformation can be expressed in the general matrix form
P' = M1 .P + M2
With coordinate positions P and P' represented as column vectors, matrix M1 is a 2 by 2 array
containing multiplicative factors, and M2 is a two-element column matrix containing translation terms.
To produce a sequence of transformations, first the coordinates are scaled then these scaled
coordinates are rotated, finally rotated coordinates are translated.
To combine the multiplicative and additive terms for two-dimensional transformations into a
single matrix representation, we expand the matrix representation to 3 x 3 instead of 2 x 2, introducing an
additional dummy coordinate h. Points are then specified by three numbers instead of two.
This coordinate system is called as Homogeneous coordinate system and it allows expressing
transformation equation as matrix multiplication.
A Cartesian coordinate (x, y) is represented as the homogeneous coordinate triple (xh, yh, h), where
x = xh / h
y = yh / h
A general homogeneous coordinate representation can also be written as (h.x,h.y,h). For two
dimensional geometric transformations, choose homogeneous parameter h to be any non -zero value. For
example, simply set h=1 so that the coordinates are represented by (x, y, 1).
For Translation
| x' |   | 1  0  tx |   | x |
| y' | = | 0  1  ty | . | y |
| 1  |   | 0  0   1 |   | 1 |
or as
P' = T (tx, ty). P
where T (tx, ty) is the 3 by 3 translation matrix. The inverse translation matrix is obtained by replacing
tx and ty with -tx and -ty.
For Rotation
| x' |   | cosθ  -sinθ  0 |   | x |
| y' | = | sinθ   cosθ  0 | . | y |
| 1  |   |   0      0   1 |   | 1 |
Or as
P' = R (θ). P
where the rotation transformation operator R (θ) is the 3 by 3 rotation matrix. The inverse of the
rotation matrix is obtained when θ is replaced with -θ.
For Scaling
| x' |   | sx   0  0 |   | x |
| y' | = |  0  sy  0 | . | y |
| 1  |   |  0   0  1 |   | 1 |
Or as
P ' = S (Sx, Sy). P
Where S (Sx,Sy) is the 3 by 3 scaling matrix. Replacing these parameters with their multiplicative
inverses (1/ sx and 1/ sy) yields the inverse scaling matrix.
COMPOSITE TRANSFORMATIONS
A composite transformation is a sequence of transformations, one followed by the other. Any
sequence of transformations can be represented as a composite transformation matrix by calculating the matrix
product of the individual transformations.
For example, If a transformation of the plane T1 is followed by a second plane transformation
T2, then the result represented by a single transformation T which is the composition of T1 and T2 is
written as T = T1∙T2.
Translation
If two successive translation vectors (tx1, ty1) and (tx2, ty2) are applied to a coordinate position P,
the final transformed location P ' is calculated as
| 1  0  tx2 |   | 1  0  tx1 |   | 1  0  tx1+tx2 |
| 0  1  ty2 | . | 0  1  ty1 | = | 0  1  ty1+ty2 |
| 0  0   1  |   | 0  0   1  |   | 0  0     1    |
(or)
T(tx2, ty2).T(tx1,ty1) = T(tx1+tx2, ty1+ty2)
which demonstrates that two successive translations are additive.
Rotations
Two successive rotations applied to point P produce the transformed position
P' = R(θ2) . {R(θ1) . P} = {R(θ2) . R(θ1)} . P
By multiplying the two rotation matrices, we can verify that two successive rotations are additive:
R (θ2).R (θ1) = R(θ1 + θ2)
so that the final rotated coordinates can be calculated with the composite rotation matrix as
P' = R(θ1 + θ2) . P
Scalings
Concatenating the transformation matrices for two successive scaling operations produces the
composite scaling matrix
S( Sx2, Sy2 ) .S( Sx1, Sy1 ) = S (Sx1. Sx2, Sy1 . Sy2)
The resulting matrix indicates that successive scaling operations are multiplicative.
General Pivot-Point Rotation
To generate a rotation about any selected pivot point (xr, yr), we perform the following sequence
of translate-rotate-translate operations:
1. Translate the object so that pivot-position is moved to the coordinate origin
2. Rotate the object about the coordinate origin
3. Translate the object so that the pivot point is returned to its original position
The composite transformation matrix for this sequence is obtained with the concatenation
T(xr, yr) . R(θ) . T(-xr, -yr) = R(xr, yr, θ)
where T(-xr, -yr) moves the pivot point to the origin and T(xr, yr) returns it to its original position.
OTHER TRANSFORMATIONS
Some additional transformations are
1. Reflection
2. Shear
Reflection
A reflection is a transformation that produces a mirror image of an object. The mirror image for a
two-dimensional reflection is generated relative to an axis of reflection by rotating the object 180° about
the reflection axis.
We can choose an axis of reflection in the xy plane or perpendicular to the xy plane or coordinate
origin.
Reflection of an object about the x axis is accomplished with the transformation matrix
| 1   0  0 |
| 0  -1  0 |
| 0   0  1 |
This transformation keeps the x values the same but "flips" the y values of coordinate positions. The
resulting orientation of an object after it has been reflected is shown in the figure.
We flip both the x and y coordinates of a point by reflecting relative to an axis that is perpendicular
to the xy plane and passes through the coordinate origin. The resulting transformation matrix is
| -1   0  0 |
|  0  -1  0 |
|  0   0  1 |
SHEAR
A transformation that distorts the shape of an object is called a shear transformation. Two
common shearing transformations are used: one shifts x-coordinate values and the other shifts y-coordinate
values. In either case only one coordinate (x or y) changes its values and the other preserves its values.
X – Shear
An x-direction shear preserves the y coordinates but changes the x values, which causes vertical
lines to tilt right or left, as shown in the figure.
Fig A unit square converted to a parallelogram using the x-direction shear matrix with shx = 2
The shear relative to the x axis (y = 0) is produced with the transformation matrix
| 1  shx  0 |
| 0   1   0 |
| 0   0   1 |
which transforms coordinate positions as
x' = x + shx . y
y' = y
Any real number can be assigned to the shear parameter shx. A coordinate position (x, y) is then shifted
horizontally by an amount proportional to its distance y from the x axis (y = 0).
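The x-direction shear can be sketched in one line of C (shearX is a hypothetical helper name):

```c
/* x-direction shear sketch relative to the x axis (yref = 0):
   x' = x + shx * y,  y' = y */
void shearX(double *x, double *y, double shx)
{
    *x += shx * (*y);
}
```

With shx = 2, the unit-square corner (1, 1) is carried to (3, 1), matching the parallelogram of the figure.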
Y - Shear
A y-direction shear preserves the x coordinates but changes the y values, which causes
horizontal lines to slope up or down. The transformation matrix for a y-shear relative to the y axis is
| 1    0  0 |
| shy  1  0 |
| 0    0  1 |
which transforms coordinate positions as
x' = x
y' = shy . x + y
X - Shear with y reference line
We can generate x-direction shears relative to other reference lines with the transformation
x' = x + shx (y - yref)
y' = y
Fig A unit square is transformed to a shifted parallelogram with shx = 1/2 and yref = -1
Y - Shear with x reference line
We can generate y-direction shears relative to other reference lines with the transformation matrix
which transforms the coordinates as
x' =x
y ' = shy (x - xref) + y
Example: shy = 1/2 and xref = -1
Fig A unit square is transformed to a shifted parallelogram with shy=1/2 and xref= -1
This transformation shifts a coordinate position vertically by an amount proportional to its
distance from the reference line x = xref.
WINDOW – TO- VIEWPORT COORDINATE TRANSFORMATION
In the above figure, a point at position (xw, yw) in the window is mapped into a position (xv, yv) in
the viewport.
To maintain the same relative placement in the viewport as in the window, we require
(xv - xvmin) / (xvmax - xvmin) = (xw - xwmin) / (xwmax - xwmin)
(yv - yvmin) / (yvmax - yvmin) = (yw - ywmin) / (ywmax - ywmin)
Solving these expressions for the viewport position (xv, yv):
xv = xvmin + (xw - xwmin) sx
yv = yvmin + (yw - ywmin) sy
where the scaling factors are
sx = (xvmax - xvmin) / (xwmax - xwmin)
sy = (yvmax - yvmin) / (ywmax - ywmin)
The equations for (xv, yv) can also be derived with a set of transformations that converts the
window area into the viewport area as follows:
Step 1: Perform a scaling transformation using a fixed-point position of (xwmin, ywmin) that scales the
window area to the size of the viewport.
Step 2: Translate the scaled window area to the viewport position.
If sx = sy, relative proportions are maintained; otherwise world objects will be
stretched or contracted in either the x or y direction when displayed on the output device.
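The window-to-viewport mapping can be sketched in C (windowToViewport is a hypothetical helper name; the scaling factors sx, sy are computed from the window and viewport extents as described above):

```c
/* Window-to-viewport mapping sketch:
   xv = xvmin + (xw - xwmin) * sx,  yv = yvmin + (yw - ywmin) * sy */
void windowToViewport(double xw, double yw,
                      double xwmin, double ywmin,
                      double xwmax, double ywmax,
                      double xvmin, double yvmin,
                      double xvmax, double yvmax,
                      double *xv, double *yv)
{
    double sx = (xvmax - xvmin) / (xwmax - xwmin);
    double sy = (yvmax - yvmin) / (ywmax - ywmin);
    *xv = xvmin + (xw - xwmin) * sx;
    *yv = yvmin + (yw - ywmin) * sy;
}
```

For example, mapping the window (0, 0)-(10, 10) onto the viewport (100, 100)-(200, 200) sends the window center (5, 5) to the viewport center (150, 150).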
Workstation Transformation
Workstation transformation is used to partition a view so that different parts of normalized space
can be displayed on different output devices.
CLIPPING
Any procedure that identifies those portions of a picture that are either inside or outside a
specified region of space is referred to as a clipping algorithm, or simply clipping. The region against which an
object is to be clipped is called a clip window.
Types of Clipping
1. Point Clipping
2. Line Clipping
3. Area Clipping (Polygons)
4. Curve Clipping
5. Text Clipping
POINT CLIPPING
Assuming that the clip window is a rectangle in standard position, we save a point P = (x, y) for
display if the following inequalities are satisfied:
xwmin ≤ x ≤ xwmax
ywmin ≤ y ≤ ywmax
where the edges of the clip window (xwmin, xwmax, ywmin, ywmax) can be either the world-coordinate
window boundaries or viewport boundaries. If any of these four inequalities is not satisfied,
the point is clipped.
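The four inequalities translate directly into a C sketch (clipPoint is a hypothetical helper name):

```c
/* Point clipping sketch: returns 1 if (x, y) lies inside the clip
   window, 0 if the point is clipped. */
int clipPoint(double x, double y,
              double xwmin, double ywmin,
              double xwmax, double ywmax)
{
    return x >= xwmin && x <= xwmax &&
           y >= ywmin && y <= ywmax;
}
```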
LINE CLIPPING
A line-clipping procedure involves several phases. First, we test a given line segment to
determine whether it lies completely inside the clipping window.
If it does not, we try to determine whether it lies completely outside the window.
If we cannot identify a line as completely inside or completely outside the window, we must
perform intersection calculations with one or more clipping boundaries. We process lines through
the "inside-outside" tests by checking the line endpoints.
Let us consider the figure 3.15. A line with both end points inside all clipping boundaries such as
the line from P1 to P2 is saved. A line with both end points outside any one of the clip boundaries
(line P3 P4) is outside the window. All other lines cross one or more clipping boundaries and
require calculation of multiple intersection points.
To minimize calculations, clipping algorithms are used that can efficiently identify outside lines
and reduce intersection calculations.
For a line segment with endpoints (x1, y1) and (x2, y2), and one or both endpoints outside the
clipping rectangle, the parametric representation
x = x1 + u (x2 - x1)
y = y1 + u (y2 - y1),  0 ≤ u ≤ 1
could be used to determine values of parameter u for intersections with the clipping boundary
coordinates. If the value of u for an intersection with a rectangle boundary edge is outside the range
0 to 1, the line does not enter the interior of the window at that boundary. If the value of u is within
the range from 0 to 1, the line segment does indeed cross into the clipping area.
COHEN SUTHERLAND LINE CLIPPING
This method speeds up the processing of line segments by performing initial tests that reduce the
number of intersections that must be calculated.
Every line end point in a picture is assigned a four digit binary code, called a region code that
identifies the location of the point. It is shown in figure.
Each bit position in the region code is used to indicate one of the 4 relative coordinate positions
of the point with respect to the clip window; to the left, right, top or bottom.
By numbering the bit positions in the region code as 1 through 4 from right to left, the coordinate
regions can be correlated with the bit positions as
bit 1: left
bit 2: right
bit 3: below
bit 4: above
The value 1 in any bit position indicates that the point is in that relative position; otherwise the bit
position is set to 0.
If a point is within the clipping rectangle, the region code is 0000. A point that is below and to
the left of the rectangle has a region code of 0101.
Bit values in the region code are determined by comparing endpoint coordinate values (x, y) to
the clip boundaries. Bit 1 is set to 1 if x < xwmin.
The other three bit values can be determined using similar comparisons.
Any lines that are completely contained within the window boundaries have a region code of
0000 for both endpoints and these lines can be trivially accepted.
Any lines that have a 1 in the same bit position in the region codes for each end points are
completely outside the clipping window, and these lines are trivially rejected.
For example, a line with a region code of 1001 for one endpoint and 0101 for the other is discarded:
both codes have a 1 in bit position 1, so both endpoints lie to the left of the window.
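The region-code computation can be sketched in C with the bit assignments given above (regionCode and the CS_* names are hypothetical):

```c
/* Cohen-Sutherland region-code sketch.  Bits numbered right to left:
   bit 1 = left, bit 2 = right, bit 3 = below, bit 4 = above. */
#define CS_LEFT   1
#define CS_RIGHT  2
#define CS_BELOW  4
#define CS_ABOVE  8

int regionCode(double x, double y,
               double xwmin, double ywmin,
               double xwmax, double ywmax)
{
    int code = 0;
    if (x < xwmin) code |= CS_LEFT;
    if (x > xwmax) code |= CS_RIGHT;
    if (y < ywmin) code |= CS_BELOW;
    if (y > ywmax) code |= CS_ABOVE;
    return code;
}
```

A point below and to the left of the window gets code 0101 (decimal 5), and a point inside gets 0000, matching the description above. Trivial accept is then `code1 == 0 && code2 == 0`, and trivial reject is `(code1 & code2) != 0`.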
The clipping of lines is explained in the figure. Starting with the bottom endpoint of the line from
P1 to P2, P1 is checked against the left, right and bottom boundaries and found to be below the
window. The intersection point P1' with the bottom boundary is then found, and the line section
from P1 to P1' is discarded.
Since P2 is outside the clip window, it is checked against the boundaries and found to be to the
left of the window. Intersection point P2' is calculated, but this point is above the window, so the
final intersection P2'' is calculated and the line from P1' to P2'' is saved.
For the next line, point P3 is to the left of the clipping window, so P3' is calculated and the line
section from P3 to P3' is eliminated. By checking the region codes for the section from P3' to P4,
we find that the line is below the clip window and can also be discarded.
LIANG-BARSKY LINE CLIPPING
In this approach the line is written in parametric form, and each of the four clipping conditions is
expressed as an inequality u . pk ≤ qk for k = 1, 2, 3, 4, where the parameters pk and qk are defined
from the endpoint coordinates and the window boundaries.
A line typically proceeds from the outside to the inside of two boundary lines (for example, bottom
and left), and then moves from the inside to the outside of the other two boundary lines (top and
right). If we use parameters u1 and u2 for the visible portion of the line, where u1 ≤ u2, then
u1 = maximum (0, ul, ub) and u2 = minimum (1, ut, ur), where ul, ub, ut and ur correspond
to the intersection points of the extended line with the window's left, bottom, top and right boundaries,
respectively.
If pk < 0, the extended line proceeds from the outside to the inside of the corresponding boundary
line.
If pk > 0, the extended line proceeds from the inside to the outside of the corresponding boundary
line.
When pk ≠ 0, the value of u that corresponds to the intersection point is qk/pk.
The Liang-Barsky algorithm for finding the visible portion of the line can be stated as a four-step
process.
1. If pk =0 and qk < 0 for any k, eliminate the line and stop, otherwise proceed to the
next step.
2. For all k such that pk < 0, calculate rk = qk/pk.
Let u1 be the maximum of the set containing 0 and the calculated r values.
3. For all k such that pk > 0, calculate rk = qk / pk.
Let u2 be the minimum of the set containing 1 and the calculated r values.
4. If u1 > u2, eliminate the line since it is completely outside the clipping window
otherwise, use u1 and u2 to calculate the endpoints of the clipped line.
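The four steps above can be sketched in C. This is a compact variant, not the textbook's exact listing; clipTest and liangBarsky are hypothetical helper names, and the four pk/qk pairs follow the usual left, right, bottom, top ordering:

```c
/* One boundary test: update u1/u2 for the pair (p, q).
   Returns 0 as soon as the line can be eliminated. */
int clipTest(double p, double q, double *u1, double *u2)
{
    double r;
    if (p < 0.0) {                 /* outside -> inside crossing */
        r = q / p;
        if (r > *u2) return 0;
        if (r > *u1) *u1 = r;
    } else if (p > 0.0) {          /* inside -> outside crossing */
        r = q / p;
        if (r < *u1) return 0;
        if (r < *u2) *u2 = r;
    } else if (q < 0.0) {
        return 0;                  /* parallel and outside */
    }
    return 1;
}

/* Clip the segment in place; returns 1 if any part is visible. */
int liangBarsky(double *x1, double *y1, double *x2, double *y2,
                double xwmin, double ywmin,
                double xwmax, double ywmax)
{
    double dx = *x2 - *x1, dy = *y2 - *y1;
    double u1 = 0.0, u2 = 1.0;

    if (clipTest(-dx, *x1 - xwmin, &u1, &u2) &&   /* left   */
        clipTest( dx, xwmax - *x1, &u1, &u2) &&   /* right  */
        clipTest(-dy, *y1 - ywmin, &u1, &u2) &&   /* bottom */
        clipTest( dy, ywmax - *y1, &u1, &u2)) {   /* top    */
        if (u2 < 1.0) { *x2 = *x1 + u2 * dx; *y2 = *y1 + u2 * dy; }
        if (u1 > 0.0) { *x1 += u1 * dx; *y1 += u1 * dy; }
        return 1;
    }
    return 0;
}
```

For the window (0, 0)-(10, 10), the horizontal segment (-5, 5)-(15, 5) is clipped to (0, 5)-(10, 5), while a segment lying entirely to the left of the window is eliminated without any intersection calculation.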
POLYGON CLIPPING
A polygon is a collection of connected line segments. Since a polygon bounds a closed area, it
should remain closed after clipping. To achieve this, we require an algorithm that generates
additional line segments to keep the clipped polygon closed. For example, in the following figure the
lines a–b, c–d, d–e, f–g and h–i are added to the polygon description to close it.
Going through the above four cases, we can see that there are two key processes in this algorithm.
(1) Determining the visibility of a point or vertex.
(2) Determining the intersection of the polygon edge and the clipping plane. Determining the
visibility of a point or vertex can be described as follows.
Consider that two points A and B define the window boundary and the point under consideration is
V; these three points define a plane. Two vectors that lie in that plane are AB and AV. If this plane
lies in the xy plane, then the vector cross product AV × AB has only a z component, given by
(xV–xA) (yB–yA) – (yV–yA) (xB–xA). The sign of the z component decides the position of point V with
respect to the window boundary:
If z is positive, the point is on the right side of the window boundary; if zero, the point is on the
window boundary; if negative, the point is on the left side of the window boundary.
Sutherland-Hodgman Polygon Clipping Algorithm:
Step 1: Read coordinates of all vertices of the polygon.
Step 2: Read coordinates of the clipping window.
Step 3: Consider the left edge of the window.
Step 4: Compare the vertices of each edge of the polygon, individually with the clipping plane.
Step 5: Save the resulting intersections and vertices in the new list of vertices according to four
possible relationships between the edge and the clipping boundary.
Step 6: Repeat steps 4 and 5 for the remaining edges of the clipping window. Each time, the
resultant list of vertices is passed on to be clipped against the next edge of the clipping window.
Step 7: Stop.
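The complete procedure can be sketched in Python (an illustrative version; the names are mine). Points on or to the left of each counter-clockwise window edge are treated as inside, which corresponds to the cross-product sign test described above with the opposite orientation convention:

```python
def inside(p, a, b):
    # z component of AB x AP; >= 0 keeps points on or to the left
    # of the directed edge a -> b (window traversed counter-clockwise).
    return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0

def intersect(p1, p2, a, b):
    """Intersection of segment p1-p2 with the infinite line through a, b."""
    x1, y1 = p1; x2, y2 = p2
    x3, y3 = a;  x4, y4 = b
    den = (x1-x2)*(y3-y4) - (y1-y2)*(x3-x4)
    t = ((x1-x3)*(y3-y4) - (y1-y3)*(x3-x4)) / den
    return (x1 + t*(x2-x1), y1 + t*(y2-y1))

def clip_polygon(subject, window):
    """Sutherland-Hodgman: clip `subject` against a convex `window`
    (both given as counter-clockwise vertex lists)."""
    output = list(subject)
    for i in range(len(window)):
        a, b = window[i], window[(i+1) % len(window)]
        input_list, output = output, []
        if not input_list:
            break
        s = input_list[-1]
        for v in input_list:
            if inside(v, a, b):
                if not inside(s, a, b):      # outside -> inside: add intersection
                    output.append(intersect(s, v, a, b))
                output.append(v)             # inside vertex is kept
            elif inside(s, a, b):            # inside -> outside: add intersection
                output.append(intersect(s, v, a, b))
            s = v
    return output
```

A polygon entirely inside the window is returned unchanged; one that straddles a boundary gains the intersection vertices that keep it closed.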
CURVE CLIPPING
The curve-clipping procedure involves nonlinear equations:
Step 1: The bounding rectangle of a circle or other curved object is tested for overlap with the
clip window.
Step 2: If the bounding rectangle of the object is completely inside the window, the object
is saved.
Step 3: Otherwise, the object is discarded. This is illustrated in Figure 3.22.
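The bounding-rectangle tests in steps 1 and 2 amount to simple interval comparisons (a sketch; the (xmin, ymin, xmax, ymax) rectangle representation is my assumption):

```python
def overlaps(r1, r2):
    """Axis-aligned rectangles given as (xmin, ymin, xmax, ymax)."""
    return not (r1[2] < r2[0] or r2[2] < r1[0] or
                r1[3] < r2[1] or r2[3] < r1[1])

def completely_inside(window, box):
    """True when `box` lies entirely inside `window`."""
    return (window[0] <= box[0] and window[1] <= box[1] and
            box[2] <= window[2] and box[3] <= window[3])
```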
TEXT CLIPPING
The simplest method for processing character strings relative to a window boundary is all-or-none
string clipping, shown in Figure 3.22: if the entire string is inside the clip window, it is saved;
otherwise, the whole string is discarded.
All-or-none-character Clipping
Here we discard only those characters that are not completely inside the window (Figure 3.24). In this
case, the boundary limits of individual characters are compared with the window; any character that
either overlaps or lies outside a window boundary is clipped.
Clipping Individual Character Components
Here, the characters are treated as lines. If an individual character overlaps a clip window boundary,
we clip off the parts of the character that are outside the window (Figure 3.25). Outline character fonts
formed with line segments can be processed using a line-clipping algorithm.
4. THREE DIMENSIONAL DISPLAY METHODS
Three Dimensional Concepts
Three Dimensional Display Methods
To obtain a display of a three dimensional scene that has been modeled in world coordinates, we
must set up a coordinate reference for the 'camera'.
This coordinate reference defines the position and orientation for the plane of the camera film
(Figure 4.1) which is the plane we want to use to display a view of the objects in the scene.
Object descriptions are then transferred to the camera reference coordinates and projected onto
the selected display plane.
The objects can be displayed in wire frame form, or we can apply lighting and surface rendering
techniques to shade the visible surfaces.
Parallel Projection
Parallel projection generates a view of a solid object by projecting points on the object surface
along parallel lines onto the display plane.
In parallel projection, parallel lines in the world-coordinate scene project into parallel lines on the
two-dimensional display plane.
This technique is used in engineering and architectural drawings to represent an object with a set
of views that maintain relative proportions of the object.
The appearance of the solid object can be reconstructed from the major views.
Perspective Projection
Perspective projection generates a view of a three dimensional scene by projecting points to the
display plane along converging paths.
This causes objects farther from the viewing position to be displayed smaller than objects of the
same size that are nearer to the viewing position.
In a perspective projection, parallel lines in a scene that are not parallel to the display plane are
projected into converging lines.
Scenes displayed using perspective projections appear more realistic, since this is the way that
our eyes and a camera lens form images.
Depth Cueing
Depth information is important for identifying the viewing direction, that is, which is the front and
which is the back of a displayed object.
Depth cueing is a method for indicating depth in wire-frame displays by varying the intensity of
objects according to their distance from the viewing position.
Depth cueing is applied by choosing maximum and minimum intensity (or color) values and a
range of distances over which the intensities are to vary.
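One plausible form of depth cueing is a linear blend of intensity between the chosen limits (a sketch; the linear interpolation and the parameter names are my assumptions, not a formula from the text):

```python
def depth_cue(i_surface, d, d_min, d_max, i_min=0.2, i_max=1.0):
    """Scale a surface intensity from i_max at distance d_min
    down to i_min at distance d_max (linear depth cueing)."""
    d = min(max(d, d_min), d_max)          # clamp to the chosen range
    f = (d_max - d) / (d_max - d_min)      # 1 at the near limit, 0 at the far limit
    return i_surface * (i_min + f * (i_max - i_min))
```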
Visible line and surface identification
The simplest way to identify visible lines is to highlight them or to display them in a different
color.
Another method is to display the nonvisible lines as dashed lines.
Surface Rendering
Surface rendering is used to generate a degree of realism in a displayed scene.
Realism is attained in displays by setting the surface intensity of objects according to the lighting
conditions in the scene and surface characteristics.
Lighting conditions include the intensity and positions of light sources and the background
illumination.
Surface characteristics include degree of transparency and how rough or smooth the surfaces are
to be.
Exploded and Cutaway Views
Exploded and cutaway views of objects can be used to show the internal structure and
relationships of an object's parts.
An alternative to exploding an object into its component parts is the cutaway view, which
removes part of the visible surfaces to show internal structure.
Three-dimensional and Stereoscopic Views
Three dimensional views can be obtained by reflecting a raster image from a vibrating flexible
mirror.
The vibrations of the mirror are synchronized with the display of the scene on the CRT.
As the mirror vibrates, the focal length varies so that each point in the scene is projected to a
position corresponding to its depth.
Stereoscopic devices present two views of a scene; one for the left eye and the other for the right
eye.
The two views are generated by selecting viewing positions that correspond to the two eye
positions of a single viewer.
These two views can be displayed on alternate refresh cycles of a raster monitor, and viewed
through glasses that alternately darken first one lens then the other in synchronization with the
monitor refresh cycles.
THREE DIMENSIONAL GRAPHICS PACKAGES
The 3D package must include methods for mapping scene descriptions onto a flat viewing
surface.
There should be some consideration of how surfaces of solid objects are to be modeled, how
visible surfaces can be identified, how transformations of objects are performed in space, and how to
describe additional spatial properties.
World coordinate descriptions are extended to 3D, and users are provided with output and input
routines accessed with specifications such as
Polyline3(n, WcPoints)
Fillarea3(n, WcPoints)
Text3(WcPoint, string)
Getlocator3(WcPoint)
Translate3(translate Vector, matrix Translate)
where points and vectors are specified with three components, and transformation matrices
have four rows and four columns.
INTERACTIVE INPUT METHODS AND GRAPHICAL USER INTERFACES
Graphical Input Data
Graphics software can use a variety of input data:
Coordinate positions
Character-string specifications
Geometric transformation values
Viewing conditions
Illumination parameters
Many graphics systems and various standards organizations (ISO, ANSI) provide input functions for such
data.
Input functions are often classified by the type of data they process.
Logical Classification of Input Devices
When input functions are classified by data type, any device that can supply the needed data is called
a logical input device for that data type.
Logical input-data classifications:
Locator – specify one coordinate position
Stroke – specify a set of coordinate positions
String – specify some text input
Valuator – specify a scalar value
Choice – select a menu option
Pick – select a component of a picture
Locator Devices
A common approach is to position the screen cursor at the desired coordinate location.
A mouse, touchpad, thumbwheel, dial, touch screen, etc. can be used for screen-cursor positioning.
Keyboards, particularly arrow keys (with modifiers) can be used to indicate screen cursor
movement.
Light pens can also be used, but they are less common than the other techniques.
Stroke Devices
Locator devices can also be used to provide multiple coordinate positions.
“Click and drag” actions with a mouse provide stroke input, specifically a start and end
coordinate (as for a line).
Buttons (e.g. on a keyboard) can be used to modify the actions. For example, pressing a shift key
may constrain coordinate positions to a horizontal or vertical line.
String Devices
The obvious choice for string input is a keyboard.
The keyboard need not necessarily be a “full” keyboard as found on a traditional computer; it is
often also one of these:
o A keyboard with a limited set of keys appropriate to a specific application
o A simulated keyboard displayed on a touch-sensitive screen.
Valuator Devices
Numerous devices can be used as valuators:
o Dials (e.g. potentiometers or rotary switches)
o Keyboards
o Joysticks, pressure-sensitive devices, etc.
o Various on-screen widgets like slider bars, buttons, rotating scales, etc.
In all cases it is appropriate to provide user feedback so verification (and correction) of the input
value is possible.
Choice Devices
Numerous possibilities exist:
One of many pushbuttons
Keyboards
On-screen menus and cursor-positioning devices
Multi-level menus for complicated choices
Voice input (best suited to a small number of choices)
Pick Devices
Can be used to select an entire object, a facet of a surface, a polygon edge, a vertex, etc.
Using the mouse, we translate a screen coordinate position to a world coordinate position using
inverse transformations.
Object selection first tries to determine if the coordinate position is uniquely associated with a
single object.
If that fails, then coordinate extents of individual object components (e.g. line segments) can be
tested.
Object Selection
If coordinate-extent tests don't resolve to a single object, then distances to individual line
segments can be computed.
Pick actions can also specify selection of multiple objects at once, especially if these multiple
objects are in a specified region, or are somehow grouped to represent a single object.
Keyboards can be used to allow selection of on-screen objects and sequencing through the
objects.
Named objects can also be selected by keyboard.
• These include functions that specify what actions are to take place when certain events (like input
actions) occur.
• Examples include
• “normal” and special keyboard entry
• mouse button presses and mouse motion
Interactive Picture-Construction Techniques
Basic Positioning Methods
• An obvious possibility is to use a mouse or a trackball.
• A specific object type could be specified (e.g. square, ellipse) and "filled in" given
parameters (e.g. from keyboard or mouse).
• Numeric position information could be displayed as text on the screen (or another device).
• A keyboard, dials, etc. could be used to make small interactive adjustments to coordinate values.
Dragging
• Common technique: select an object and drag it to a different location. Often performed
with a mouse: press a button to select the object, move the mouse to the desired location,
and release the button.
Constraints
• With additional input (e.g. from the keyboard), the parameters of an interactively drawn
object can be limited. For example, holding the shift key while drawing a line may
constrain the angle between the line and a coordinate axis.
5. THREE DIMENSIONAL TRANSFORMATIONS
BASIC TRANSFORMATION
Geometric transformations and object modeling in three dimensions are extended from
two-dimensional methods by including considerations for the z coordinate.
Translation
In a three dimensional homogeneous coordinate representation, a point or an object is translated from
position P = (x, y, z) to position P′ = (x′, y′, z′) with the matrix operation.
Parameters tx, ty and tz specifying translation distances for the coordinate directions x, y and z are
assigned any real values.
The matrix representation in equation (1) is equivalent to the three equations
x′ = x + tx, y′ = y + ty, z′ = z + tz.
The inverse of the translation matrix in equation (1) can be obtained by negating the translation
distances tx, ty and tz.
This produces a translation in the opposite direction and the product of a translation matrix and its
inverse produces the identity matrix.
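A minimal sketch of the translation matrix and its application to a point (the names are mine; points are treated as column vectors (x, y, z, 1)):

```python
def translate(tx, ty, tz):
    """4x4 homogeneous translation matrix T(tx, ty, tz)."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def transform_point(m, p):
    """Apply a 4x4 matrix to the homogeneous point (x, y, z, 1)."""
    v = (p[0], p[1], p[2], 1)
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(3))
```

Translating by (2, 3, 4) and then by the negated distances returns the original point, illustrating the inverse relationship.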
Rotation
To generate a rotation transformation for an object, an axis of rotation must be designated, and
the amount of angular rotation must also be specified.
Positive rotation angles produce counter clockwise rotations about a coordinate axis.
Co-ordinate Axes Rotations
The 2D z axis rotation equations are easily extended to 3D.
Parameter θ specifies the rotation angle. In homogeneous coordinate form, the 3D z-axis rotation
equations are expressed as
Transformation equations for rotation about the other two coordinate axes can be obtained with a
cyclic permutation of the coordinate parameters x, y and z in equation (2) i.e., we use the replacements.
Substituting permutations (5) in Equation (2), we get the equations for an x-axis rotation:
which can be written in the homogeneous coordinate form
Cyclically permuting the coordinates in equation (6) gives the transformation equations for a
y-axis rotation.
Negative values for rotation angles generate rotations in a clockwise direction, so the inverse of a
rotation matrix is obtained by negating the rotation angle; multiplying any rotation matrix by its
inverse produces the identity matrix.
Since only the sine function is affected by the change in sign of the rotation angle, the inverse
matrix can also be obtained by interchanging rows and columns, i.e., we can calculate the
inverse of any rotation matrix R by evaluating its transpose (R–1 = RT).
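The z-axis rotation matrix, and a check that multiplying it by its transpose yields the identity, can be sketched as follows (names are mine):

```python
import math

def rotate_z(theta):
    """4x4 homogeneous rotation about the z axis by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0],
            [s,  c, 0, 0],
            [0,  0, 1, 0],
            [0,  0, 0, 1]]

def matmul(a, b):
    """Product of two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(m):
    return [list(row) for row in zip(*m)]
```

Since R is orthogonal, matmul(rotate_z(theta), transpose(rotate_z(theta))) is the identity to within floating-point error, confirming R–1 = RT.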
General Three Dimensional Rotations
A rotation matrix for any axis that does not coincide with a coordinate axis can be set up as a
composite transformation involving combinations of translations and the coordinate axes
rotations.
We obtain the required composite matrix by:
(1) setting up the transformation sequence that moves the selected rotation axis onto one of the
coordinate axes,
(2) setting up the rotation matrix about that coordinate axis for the specified rotation angle, and
(3) obtaining the inverse transformation sequence that returns the rotation axis to its original
position.
In the special case where an object is to be rotated about an axis that is parallel to one of the
coordinate axes, we can attain the desired rotation with the following transformation sequence
(1) Translate the object so that the rotation axis coincides with the parallel coordinate axis.
(2) Perform the specified rotation about that axis.
(3) Translate the object so that the rotation axis is moved back to its original position.
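For an axis parallel to the z axis through a point (ax, ay), this translate–rotate–translate sequence can be composed like this (a sketch; the names are mine):

```python
import math

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rotate_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform_point(m, p):
    v = (p[0], p[1], p[2], 1)
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(3))

def rotate_about_z_parallel_axis(theta, ax, ay):
    """Rotate by theta about the vertical axis through (ax, ay, 0):
    translate the axis onto the z axis, rotate, translate back."""
    return matmul(translate(ax, ay, 0),
                  matmul(rotate_z(theta), translate(-ax, -ay, 0)))
```

Rotating the point (2, 1, 0) by 90° about the axis through (1, 1) carries it to (1, 2, 0), as expected.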
When an object is to be rotated about an axis that is not parallel to one of the coordinate axes, we
need to perform some additional transformations.
In such case, we need rotations to align the axis with a selected coordinate axis and to bring the
axis back to its original orientation.
Given the specifications for the rotation axis and the rotation angle, we can accomplish the
required rotation in five steps:
(1) Translate the object so that the rotation axis passes through the coordinate origin.
(2) Rotate the object so that the axis of rotation coincides with one of the coordinate axes.
(3) Perform the specified rotation about that coordinate axis.
(4) Apply inverse rotations to bring the rotation axis back to its original orientation.
(5) Apply the inverse translation to bring the rotation axis back to its original position.
Scaling
The matrix expression for the scaling transformation of a position P = (x, y, z) relative to the
coordinate origin can be written as
where scaling parameters sx, sy and sz are assigned any positive values.
Explicit expressions for the coordinate transformations for scaling relative to the origin are
Scaling an object changes the size of the object and repositions the object relative to the
coordinate origin.
If the transformation parameters are not equal, relative dimensions in the object are changed.
The original shape of the object is preserved with uniform scaling (sx = sy = sz). Figure 4.10
shows the result of scaling an object uniformly with each scaling parameter set to 2.
Scaling with respect to a selected fixed position (xf, yf, zf) can be represented with the following
transformation sequence:
(1) Translate the fixed point to the origin.
(2) Scale the object relative to the coordinate origin using equation (11).
(3) Translate the fixed point back to its original position.
This sequence of transformation is shown in Figure 4.11.
The matrix representation for arbitrary fixed-point scaling can be expressed as the
concatenation of these translate–scale–translate transformations.
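The three-step sequence can be composed as follows (a sketch; the names are mine):

```python
def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def scale(sx, sy, sz):
    return [[sx, 0, 0, 0], [0, sy, 0, 0], [0, 0, sz, 0], [0, 0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform_point(m, p):
    v = (p[0], p[1], p[2], 1)
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(3))

def scale_about_point(sx, sy, sz, fx, fy, fz):
    """T(fx, fy, fz) . S(sx, sy, sz) . T(-fx, -fy, -fz)"""
    return matmul(translate(fx, fy, fz),
                  matmul(scale(sx, sy, sz), translate(-fx, -fy, -fz)))
```

Note that the fixed point itself is left unchanged by the composite matrix.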
The inverse scaling matrix is formed by replacing the scaling parameters sx, sy and sz with their
reciprocals.
The inverse matrix generates an opposite scaling transformation, so the concatenation of any
scaling matrix and its inverse produces the identity matrix.
OTHER TRANSFORMATIONS
Reflections
A 3D reflection can be performed relative to a selected reflection axis or with respect to a
selected reflection plane.
Reflections relative to a given axis are equivalent to 180° rotations about that axis.
When the reflection plane is a coordinate plane (either xy, xz or yz), the transformation can
be viewed as a conversion between left-handed and right-handed systems.
An example of a reflection that converts coordinate specifications from a right handed system to a
left-handed system is shown in Figure 4.12.
This transformation changes the sign of the z coordinates, leaving the x and y coordinate values
unchanged.
The matrix representation for this reflection of points relative to the xy plane is
Reflections about other planes can be obtained as a combination of rotations and coordinate plane
reflections.
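The xy-plane reflection matrix can be written down directly; applying it twice returns the original point (a minimal sketch, names mine):

```python
def reflect_xy():
    """Reflection relative to the xy plane: z changes sign."""
    return [[1, 0,  0, 0],
            [0, 1,  0, 0],
            [0, 0, -1, 0],
            [0, 0,  0, 1]]

def transform_point(m, p):
    v = (p[0], p[1], p[2], 1)
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(3))
```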
Shears
Shearing transformations are used to modify object shapes.
They are also used in three dimensional viewing to obtain general projection
transformations.
The following transformation produces a z-axis shear.
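The z-axis shear matrix itself appears as a figure in the text; in the common form, it offsets x and y in proportion to z while leaving z unchanged. A sketch (the parameter names shzx and shzy are my labels):

```python
def shear_z(shzx, shzy):
    """z-axis shear: x' = x + shzx*z, y' = y + shzy*z, z' = z."""
    return [[1, 0, shzx, 0],
            [0, 1, shzy, 0],
            [0, 0, 1,    0],
            [0, 0, 0,    1]]

def transform_point(m, p):
    v = (p[0], p[1], p[2], 1)
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(3))
```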
BACK-FACE DETECTION
If V is a vector in the viewing direction from the eye (or "camera") position, then a polygon is a
back face if
V . N > 0
If object descriptions have been converted to projection coordinates and the viewing direction is
parallel to the viewing zv axis, then V = (0, 0, Vz) and
V . N = Vz C
So we need only consider the sign of C, the z component of the normal vector N, as shown in
Figure 5.14.
In a right-handed viewing system with the viewing direction along the negative zv axis (Figure 5.15),
the polygon is a back face if C < 0.
If C = 0, we cannot see the face, since the viewing direction is grazing that polygon.
Thus we can label any polygon as a back face if its normal vector has a z component value
C ≤ 0.
Back faces have normal vectors that point away from the viewing position and are identified by
C ≥ 0 when the viewing direction is along the positive zv axis.
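The back-face test reduces to the sign of a dot product (a sketch; the names are mine):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def is_backface(normal, view_dir):
    """True when the polygon's normal N points away from the viewer,
    i.e. V . N > 0 for the viewing direction V."""
    return dot(view_dir, normal) > 0
```

With V along the positive z axis, this is exactly the "sign of C" test described above.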
DEPTH BUFFER METHOD (OR) Z-BUFFER ALGORITHM
This method compares surface depths at each pixel position on the projection plane.
The surface depth is measured from the view plane along the Z axis of a viewing system.
When object descriptions are converted to projection coordinates (x, y, z), each pixel position on the
view plane is specified by its x and y coordinates, and the z value gives the depth information. Thus
object depths can be compared by comparing the z values.
The Z buffer algorithm is usually implemented in the normalized coordinates, so that z values
range from 0 at the back clipping plane to 1 at the front clipping plane.
The implementation requires another buffer memory called z-buffer along with the frame buffer
memory required for raster display devices.
A z-buffer is used to store depth values for each (x, y) position as surfaces are processed and the
frame buffer stores the intensity values for each position.
At the beginning, z-buffer is initialized to 0, representing the z-value at the back clipping plane
and the frame buffer is initialized to the background color.
Each surface listed in the display file is then processed one scan line at a time, calculating the
depth (z value) at each (x, y) pixel position.
The calculated depth value is compared to the value previously stored in the z-buffer at that
position.
If the calculated depth value is greater than the value stored in the z-buffer, the new depth value is
stored, and the surface intensity at that position is determined and placed in the same x,y location
in the frame buffer.
For example in the following Fig.5.16 among three surfaces, surface s has the smallest depth at
view position (x, y) and hence highest z value. So it is visible at that position.
Z-buffer Algorithm
Step-1: Initialize the z-buffer and frame buffer so that for all buffer positions
z-buffer(x, y) = 0 and frame-buffer(x, y) = Ibackground
Step-2: During scan conversion process, for each position on each polygon surface, compare
depth values to previously stored values in the depth buffer to determine visibility.
Calculate z-value for each (x,y) position on the polygon.
If z>Z-buffer(x,y) then set
z-buffer(x,y)= Z, frame-buffer(x,y) = Isurface(x,y)
Step-3: Stop
After processing all the surfaces, the z-buffer contains depth values for the visible surfaces and
the frame buffer contains the corresponding intensity values for those surfaces.
To calculate z-values, the plane equation
Ax + By + Cz + D =0
is used where (x, y, z) is any point on the plane, and the coefficient A, B, C and D are constants
describing the spatial properties of the plane.
Only one subtraction is needed to calculate z(x+1, y), given z(x, y) since the quotient A/C is
constant and Δx=1.
A similar incremental calculation can be performed to determine the first value of z on the next
scanline, decrementing by B/C for each Δy.
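The algorithm above can be sketched for constant-intensity surfaces, using the convention that z ranges from 0 at the back clipping plane to 1 at the front (the names and the per-surface depth-function interface are my assumptions):

```python
def zbuffer_render(surfaces, width, height):
    """surfaces: list of (depth_fn, intensity) pairs, where
    depth_fn(x, y) -> z in [0, 1] (0 = back plane, 1 = front plane)."""
    zbuf = [[0.0] * width for _ in range(height)]
    frame = [['bg'] * width for _ in range(height)]
    for depth_fn, intensity in surfaces:
        for y in range(height):
            for x in range(width):
                z = depth_fn(x, y)
                if z > zbuf[y][x]:       # nearer than what is stored
                    zbuf[y][x] = z
                    frame[y][x] = intensity
    return frame
```

With two constant-depth surfaces, the nearer one (larger z) wins at every pixel regardless of processing order.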
Advantages
(1) It is very easy to implement.
(2) It can be implemented in hardware to overcome the speed problem.
(3) Since the algorithm processes objects one at a time, the total number of polygons in a
picture can be arbitrarily large.
Disadvantages
(1) It requires an additional buffer and hence more memory.
(2) It is time consuming, as it requires a comparison for each pixel rather than once for the
entire polygon.
A-BUFFER METHOD
An extension of the ideas in the depth buffer method is the A-buffer method.
The name A-buffer stands for an antialiased, area-averaged, accumulation-buffer method.
It expands the depth buffer so that each position in the buffer can reference a linked list of
surfaces.
More than one surface intensity can be taken into consideration at each pixel position and object
edges can be antialiased.
Each position in the A-buffer has 2 fields.
(1) Depth field-stores a positive or negative real number.
(2) Intensity field-stores surface-intensity information or a pointer value.
If the depth field is positive, the number stored at that position is the depth of a single surface
overlapping the corresponding pixel area.
The intensity field stores the RGB components of the surface color at that point and the percent of
pixel coverage.
If the depth field is negative, it indicates multiple surface contributions to the pixel intensity.
The intensity field stores a pointer to a linked list of surface data.
The data for each surface in the linked list includes :
1. RGB intensity components.
2. Opacity parameter.
3. Depth.
4. Percent of area coverage.
5. Surface identifier.
6. Other surface rendering parameters.
7. Pointer to next surface.
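The per-surface record of the linked list might be modeled as follows (a sketch; the field names are mine, and the "other surface rendering parameters" of item 6 are omitted):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SurfaceData:
    rgb: Tuple[float, float, float]      # RGB intensity components
    opacity: float                       # opacity parameter
    depth: float                         # surface depth
    coverage: float                      # percent of pixel-area coverage
    surface_id: int                      # surface identifier
    nxt: Optional["SurfaceData"] = None  # pointer to the next surface record
```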
The A-buffer can be constructed using methods similar to those in the depth-buffer algorithm.
Scan lines are processed to determine surface overlaps of pixels across the individual scan lines.
Surfaces are subdivided into a polygon mesh and clipped against the pixel boundaries.
Using the opacity factors and percent of surface overlaps, the intensity of each pixel is calculated
as an average of the contributions from the overlapping surfaces.
SCANLINE METHOD
The scan-line method of hidden-surface removal is another image-space approach.
This method deals with more than one surface.
As each scanline is processed, it examines all polygon surfaces intersecting that line to determine
which are visible.
It performs the depth calculation and finds which polygon is nearest to the view plane.
Finally, it enters the intensity value of the nearest polygon at that position into the frame buffer.
The scan-line algorithm maintains an active edge list.
The active edge list contains only edges that cross the current scan line, sorted in order of
increasing x.
The scanline method of hidden surface removal also stores a flag for each surface that is set ON
or OFF to indicate whether a position along a scan line is inside or outside of the surface.
Positions along each scan line are processed from left to right.
At the leftmost boundary of the surface, the surface flag is turned ON, and at the rightmost
boundary, it is turned OFF.
Figure 5.17 illustrates the scan-line method for hidden-surface removal. Here, the active edge list
for scan line 1 contains the information for edges AD, BC, EH and FG.
For the positions along this scan line between edges AD and BC, only the flag for surface S1 is
ON. Therefore, no depth calculations are necessary, and the intensity information for surface S1 is
entered into the frame buffer. Similarly, between edges EH and FG, only the flag for surface S2 is
ON, and during that portion of the scan line the intensity information for surface S2 is entered into
the frame buffer.
For scan line 2, the active edge list contains edges AD, EH, BC and FG. Along the scan line 2
from edge AD to edge EH, only the flag for surface S1 is ON.
Between edges EH and BC, the flags for both surfaces are ON. In this portion of scan line 2,
depth calculations are necessary.
Here we have assumed that the depth of S1 is less than the depth of S2 and hence the intensities
of surface S1 are loaded into the frame buffer.
Then, for the portion of scan line 2 from edge BC to edge FG, the intensities of surface S2 are
entered into the frame buffer, because during that portion only the flag for S2 is ON.