
Paper XVIII - Computer Graphics - 2022


Computer Graphics and Multimedia

Introduction to Computer Graphics - Video display devices - Raster scan systems - Random
scan systems - Interactive input devices - Hard copy devices - Graphics software - Output
primitives - Line drawing algorithms - Initializing lines - Line function - Circle generating
algorithms.

INTRODUCTION TO COMPUTER GRAPHICS


Computer Graphics involves the creation, display, manipulation and storage of pictures and of
experimental data, models or images for proper visualization using a computer. Computers have
become a powerful tool for the rapid and economic production of pictures.
Computer graphics is used in diverse areas such as:
Science, Engineering, Medicine, Business, Industry, Government, Art, Entertainment,
Advertising, Education and Training.

APPLICATIONS OF COMPUTER GRAPHICS


1. Computer Aided Design (CAD) :
• It is used in the design of Buildings, Automobiles, Aircraft, Watercraft, Spacecraft,
Computers, Textiles and many more products
• For some design applications, objects are first displayed in a wireframe outline which shows
the overall shape and internal features of objects
• Circuits and networks for communications, water supply or other utilities are also designed with these packages

• The shapes used in a design represent the different network or circuit components
• Real time animations using wireframe displays on a video monitor are useful for testing
performance of a vehicle or system
• Realistic displays are generated for advertising of automobiles and other vehicles using
special lighting effects and background scenes
• Architects use interactive graphics methods to lay out floor plans, such as the positioning of
rooms, windows, stairs, shelves and other building features

Fig 1.1 Example for Wireframe drawing

2. Presentation Graphics
Presentation graphics is used to produce illustrations for reports, and slides or transparencies for use
with projectors. It is used to summarize financial, statistical, mathematical, scientific and
economic data. For example, bar charts, line graphs, surface graphs, pie charts and other displays
show relationships between multiple parameters.

Fig 1.2 Example for Presentation Graphics

3. Computer Art
Computer graphics is used in both fine art and commercial art applications. Artists use a variety of
computer methods, including special-purpose hardware, artist's paintbrush programs, specially
developed software, symbolic mathematical packages, CAD software, desktop publishing software
and animation packages that provide facilities for designing object shapes and specifying object
motions.
Eg) Logos, TV advertising, combining text and graphics, morphing, etc.

Fig 1.3 Example for Computer Art

4. Entertainment

Graphics objects are combined with actors and live scenes. Eg) Motion pictures, music videos,
television shows.

Fig 1.4 Example for animation


5. Education and Training


Computer-generated models of physical, financial and economic systems are often used as
educational aids. Eg) Simulators for practice sessions, training of ship captains, training of pilots,
air traffic control personnel, etc.

Fig 1.5 Example for aircraft simulator

6. Visualization
Scientists, engineers, medical personnel and business analysts need to analyze large amounts of
data. Numerical simulations carried out on supercomputers frequently produce data files
containing thousands or even millions of data values. Satellite cameras and other sources amass
large data files faster than they can be interpreted.
Visualization can be of two types:
1. Scientific visualization: graphical representation of scientific, engineering and medical datasets
2. Business visualization: datasets related to commerce, industry and other non-scientific areas

Fig 1.5 Example for Data Analysis

7. Image Processing
Image processing is applied to modify or interpret existing pictures, such as photographs and TV
scans.
Applications of image processing:
1. Improving picture quality
2. Machine perception of visual information, as used in robotics

In order to apply image-processing methods:

• Digitize a photograph or picture into an image file
• Apply techniques to enhance color separation or improve the quality of shading
e.g., analysis of satellite photos, photos of the earth, galaxies, etc.
Medical image processing uses ultrasonics (high-frequency sound waves) and medical scanners to
process data.

Fig 1.6 Example for Image Processing

8. Graphical User Interface


A graphical user interface uses a window manager which allows a user to display multiple window
areas. Each window area can be activated by clicking an icon.

Fig 1.6 Example for Graphical User Interface

VISUAL DISPLAY DEVICES
The primary output device is the video monitor. Its operation is based on the standard Cathode Ray
Tube (CRT).
A beam of electrons (cathode rays) emitted by an electron gun passes through focusing and
deflection systems that direct the beam towards specified positions on the phosphor-coated
screen. The phosphor emits a small spot of light at each position contacted by the electron beam.
Because the light emitted by the phosphor fades rapidly, some means is needed for maintaining the
picture. One way to keep the phosphor glowing is to redraw the picture repeatedly and quickly by
directing the electron beam back over the same points. This type of display is called a refresh CRT.

Fig 1.7 Structure of CRT

Main Components of CRT are:


1. Electron gun: The electron gun consists of a series of elements, primarily a heating filament
(heater) and a cathode. It creates a source of electrons which are focused into a
narrow beam directed at the face of the CRT.
2. Control electrode: It is used to turn the electron beam on and off.
3. Focusing system: It is used to create a clear picture by focusing the electrons into a narrow
beam.
4. Deflection yoke: It is used to control the direction of the electron beam. It creates an electric
or magnetic field which bends the electron beam as it passes through the area. In a
conventional CRT, the yoke is connected to a sweep or scan generator, which creates a
fluctuating electric or magnetic potential.
5. Phosphor-coated screen: The inside front surface of every CRT is coated with phosphors.
Phosphors glow when a high-energy electron beam hits them. Phosphorescence is the term used
to characterize the light given off by a phosphor after it has been exposed to an electron beam.

Functions of the Heated Metal Cathode and the Control Grid
Heat is supplied to the cathode by directing current through a coil of wire, called the filament, inside a
cylindrical cathode structure. This causes electrons to be "boiled off" the hot cathode surface. In the
vacuum inside the CRT envelope, the free negatively charged electrons are then accelerated towards
the phosphor coating by a high positive voltage applied to an accelerating anode or focusing anode.
The intensity of the electron beam is controlled by setting voltage levels on the control grid. A high
negative voltage on the control grid will shut off the beam by repelling the electrons and stopping
them from passing through the small hole at the end of the control grid, while a smaller negative
voltage simply decreases the number of electrons passing through.
The light emitted by the phosphor coating depends on the number of electrons striking the screen, so
we can control the brightness of a display by varying the voltage on the control grid. The focusing
system in a CRT is needed to force the electron beam to converge into a small spot as it strikes the
phosphor; otherwise the electrons would repel each other and the beam would spread out as it
approaches the screen.
Focusing is done with electric or magnetic fields. Electrostatic focusing is commonly used in
television and computer graphics monitors. Additional focusing hardware is used in high-precision
systems to keep the beam in focus at all screen positions. The beam is focused properly only at the
centre of the screen and becomes blurred towards the edges; this is compensated by adjusting the
focusing system.
Deflection can be controlled by electric or magnetic fields. CRTs are commonly constructed with
magnetic deflection coils mounted on the outside of the CRT envelope. Spots of light are produced
on the screen by the transfer of CRT beam energy to the phosphor. When the electrons collide with
the phosphor-coated screen, they are stopped and their kinetic energy is absorbed by the phosphor.
Part of the beam energy is converted by friction into heat energy, and the remainder causes electrons
in the phosphor atoms to move up to higher quantum energy levels. After a short time, the excited
phosphor electrons begin dropping back to their stable ground state.

High-resolution systems are often referred to as high-definition systems.

Aspect ratio: the ratio of vertical points to horizontal points necessary to produce equal-length
lines in both directions on the screen.
E.g., an aspect ratio of 3/4 means that 3 vertical points must be plotted for every 4 horizontal points
to produce lines of equal length.
Different kinds of phosphors are available for use in a CRT. A major factor is persistence: how long
they continue to emit light.
Persistence is defined as the time it takes the emitted light from the screen to decay to one-tenth of
its original intensity. Lower-persistence phosphors require higher refresh rates and are used for
animation. Higher-persistence phosphors are used for highly complex, static pictures. Persistence of
typical phosphors varies between 10 and 60 microseconds.


RASTER SCAN DISPLAYS


The most common type of CRT is the raster scan display, based on television technology. In a
raster-scan system, the electron beam is swept across the screen, one row at a time from top to
bottom. As the electron beam moves across each row, the beam intensity is turned on and off to
create a pattern of illuminated spots. Picture definition is stored in a memory area called the refresh
buffer or frame buffer. This memory area holds the set of intensity values for all screen points. Stored
intensity values are then retrieved from the refresh buffer and "painted" on the screen one row at a
time.

Fig 1.8 Raster Scan display


Each screen point is referred to as a pixel or pel (picture element). Home television sets and
printers are examples of systems using raster-scan methods. The intensity range for pixel positions
depends on the capability of the raster system. In a simple black-and-white system, each screen point
is either on or off, so only one bit per pixel is needed to control the intensity of screen positions.
High-quality systems use up to 24 bits per pixel, which requires about 3 MB of storage for the frame
buffer of a 1024 by 1024 screen. A frame buffer with one bit per pixel is called a bitmap, and a frame
buffer with multiple bits per pixel is called a pixmap. Refreshing on a raster scan display is carried
out at a rate of 60 to 80 frames per second.
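As a rough illustration of the storage arithmetic above, the following small C sketch (purely
illustrative; the function name and the 1024 x 1024 resolution are assumptions, not part of any
particular graphics system) computes the frame-buffer size for a given resolution and colour depth:

#include <stdio.h>

/* Frame-buffer size in bytes for a given resolution and colour depth. */
static unsigned long framebuffer_bytes(unsigned long width,
                                       unsigned long height,
                                       unsigned long bits_per_pixel)
{
    return (width * height * bits_per_pixel) / 8;
}

int main(void)
{
    /* Assumed 1024 x 1024 raster, matching the 3 MB figure quoted above. */
    printf("1 bit per pixel  : %lu bytes\n", framebuffer_bytes(1024, 1024, 1));
    printf("24 bits per pixel: %lu bytes\n", framebuffer_bytes(1024, 1024, 24));
    return 0;
}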
At the end of each scan line, the electron beam returns to the left side of the screen to begin the next
line (horizontal retrace). At the end of each frame, the electron beam returns to the top left corner of
the screen (vertical retrace) to begin the next frame. In contrast, a random scan CRT, described next,
directs the electron beam only to the parts of the screen where a picture is to be drawn.

Random Scan (Vector Scan)


In this technique, the electron beam is directed only to the part of the screen where the picture is to
be drawn rather than scanning from left to right and top to bottom as in raster scan. It is also called
vector display, stroke-writing display, or calligraphic display.


Picture definition is stored as a set of line-drawing commands in an area of memory referred to as
the refresh display file (also called the refresh buffer). To display a specified picture, the system
cycles through the set of commands in the display file, drawing each component line in turn. After
all the line-drawing commands are processed, the system cycles back to the first line command in
the list. Random-scan displays are designed to draw all the component lines of a picture 30 to 60
times each second. The refresh rate depends on the number of lines to be drawn.

Fig 1.9 Structure of Random Scan display

Suppose we want to display a square ABCD on the screen. The commands will be:
• Draw a line from A to B
• Draw a line from B to C
• Draw a line from C to D
• Draw a line from D to A

Fig 1.10 Example for construction of square with Random scan display

Drawbacks of random scan display

• It is limited to line-drawing applications
• It cannot display realistic shaded scenes
• Picture definition is not stored as a set of intensity values


COLOR CRT MONITOR


A CRT monitor displays color pictures by using a combination of phosphors that emit different
colored light. Techniques for color displays :
i) Beam penetration method
ii) Shadow mask method

i) Beam penetration method


Color pictures can be displayed with random scan monitors using this method. Two layers of
phosphor (red and green) are coated on the inside of the CRT screen. A beam of slow electrons
excites only the outer red layer. A beam of very fast electrons penetrates through the red layer and
excites the inner green layer.
At intermediate beam speeds, combinations of red and green light are emitted, which give two
additional colors, orange and yellow. Beam penetration is an inexpensive way to produce color in
random-scan monitors, but only a few colors are possible and the quality of the picture is not as
good as with other methods.

Fig 1.11 Working of CRT with Beam penetration method

ii) Shadow Mask method

Shadow mask methods are used in raster scan systems because they produce a much wider range of
colors than the beam-penetration method. The screen contains three phosphor color dots at each pixel
position: one phosphor dot emits red light, another emits green light and the third emits blue light.
This type of CRT has three electron guns, one for each color dot, and a shadow-mask grid just
behind the phosphor-coated screen. The three electron beams are deflected and focused as a
group onto the shadow mask, which contains a series of holes aligned with the phosphor-dot
patterns. When the three beams pass through a hole in the shadow mask, they activate a dot
triangle, which appears as a small color spot on the screen. The phosphor dots are arranged so that
each electron beam can activate only its corresponding color dot when it passes through the
shadow mask. We obtain color variations in a shadow-mask CRT by varying the intensity levels
of the three electron beams.
R + G + B Equal intensity  White
G + R Yellow
B + R Magenta
B + G Cyan
Color CRTs in graphics systems are designed as RGB monitors. They use the shadow-mask method
and take the intensity level for each electron gun (red, green and blue).
High-quality raster systems have 24 bits per pixel in the frame buffer. An RGB color system with
24 bits of storage per pixel is referred to as a full-color system or true-color system.

Fig 1.12 Working of CRT with Shadow mask method

Direct View Storage Tube


DVST terminals use random scan approach to generate the image on the CRT screen. The term
"storage tube" refers to the ability of the screen to retain the image once it is projected.

Function of guns: Two guns are used in DVST


1. Primary gun: It is used to store the picture pattern.
2. Flood gun or Secondary gun: It is used to maintain picture display.


Fig 1.13 Structure of CRT with Direct View Storage Tube

Advantage:
• No refreshing is needed.
• High Resolution
• Cost is very less

Disadvantage:
• It is not possible to erase the selected part of a picture.
• It is not suitable for dynamic graphics applications.
• They do not display colors and are available with single level of line intensity
• If a part of picture is to modify, then time is consumed.
• Erasing of screen produces unpleasant flash over the entire screen surface which
prevents its use of dynamic graphics applications.
• It has poor contrast as a result of the comparatively low accelerating potential applied
to the flood electrons.

The performance of DVST is inferior to Refresh CRT.


FLAT PANEL DISPLAYS
Flat panel displays refer to a class of video devices that have reduced volume, weight and power
requirements in comparison with a CRT.
E.g., TV monitors, calculators, laptops, pocket video games, armrest viewing of movies on airlines,
advertisement boards in elevators, etc.
Flat panel displays are divided into two categories:
1. Emissive display
2. Non-Emissive display


Fig 1.14 Classification of Flat Panel Display

1. EMISSIVE DISPLAYS (E.G., PLASMA PANEL AND LED)

These displays convert electric energy into light

i) Plasma Panel

Fig 1.15 Structure of Plasma Panel

They are constructed by filling the region between two glass plates with a mixture of gases that
usually includes neon. A series of vertical conducting ribbons is placed on one glass panel, and a
series of horizontal ribbons is built into the other glass panel.
Voltages applied to a pair of horizontal and vertical conductors cause the gas at the intersection
of the two conductors to break down into a glowing plasma of electrons and ions. Picture
definition is stored in the refresh buffer, and voltages are applied to refresh the pixel positions 60
times per second.

Advantages:
1. Large screen sizes are possible.
2. Less volume and less weight than a CRT.
3. Flicker-free display.
Disadvantages:
1. The resolution is comparatively poor.
2. Its addressing is also complex.

ii) LED (Light Emitting Diode):

In an LED, a matrix of diodes is organized to form the pixel positions in the display and picture
definition is stored in a refresh buffer. Data is read from the refresh buffer and converted to
voltage levels that are applied to the diodes to produce the light pattern in the display.

2. NON-EMISSIVE DISPLAY (E.G., LCD)


Liquid Crystal Displays (LCDs) are devices that produce a picture by passing polarized light through a
liquid-crystal material that either transmits or blocks the light. The liquid-crystal material is filled
between two glass plates. One glass plate contains rows of conductors arranged in the vertical
direction, and the other glass plate contains rows of conductors arranged in the horizontal direction,
so the two sets of conductors run at right angles to each other. A pixel position is determined by the
intersection of a vertical and a horizontal conductor; this position is an active part of the screen.
Advantage:

1. Low power consumption.


2. Small Size
3. Low Cost

Disadvantage:

1. LCDs are temperature-dependent (0-70°C)


2. The resolution is not as good as that of a CRT.

INPUT DEVICES
Input devices are the hardware used to transfer input to the computer. The data can be
in the form of text, graphics, sound and images. Output devices display data from the memory of the
computer; output can be text, numeric data, lines, polygons and other objects.

Fig 1.16 Processing of input to output

The following are some of the examples of Input Devices :


1. Keyboard
2. Mouse
3. Trackball
4. Spaceball
5. Joystick
6. Lightpen
7. Digitizer
8. Touch panels
9. Image Scanner

1. Keyboard :

The most commonly used input device is the keyboard. Data is entered by pressing a set of
labeled keys. A standard keyboard has 101 keys arranged in the QWERTY layout.
The keyboard has alphabetic as well as numeric keys. Some special keys are also available.
1. Numeric Keys: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
2. Alphabetic keys: a to z (lower case), A to Z (upper case)
3. Special Control keys: Ctrl, Shift, Alt
4. Special Symbol Keys: ; , " ? @ ~ ? :
5. Cursor Control Keys: ↑ → ← ↓
6. Function Keys: F1 F2 F3. .. F9.
7. Numeric Keyboard: It is on the right-hand side of the keyboard and used for fast entry of
numeric data.
Functions of Keyboard:
1. Alphanumeric Keyboards are used in CAD. (Computer Aided Drafting)
2. Keyboards are available with special features like screen coordinate entry, menu selection
or graphics functions, etc.
3. Special purpose keyboards are available having buttons, dials, and switches.

Advantage:
1. Suitable for entering numeric data.
2. Function keys are a fast and effective method of using commands, with fewer errors.
Disadvantage:
1. Keyboard is not suitable for graphics input.

2. Mouse

A mouse is a pointing device used to position the cursor on the screen. There are two or
three buttons on the top. Movement of the mouse along the x-axis produces horizontal movement of
the cursor, and movement along the y-axis produces vertical movement of the cursor on the screen.
The mouse cannot be used to enter text.
Advantage:
 Easy to use
 Not very expensive

Fig 1.17 Movement of Mouse

3. Trackball
It is a pointing device similar to a mouse, mainly used in notebook or laptop
computers instead of a mouse. It is a ball which is half inserted in a socket; by moving fingers
over the ball, the pointer can be moved.
Advantage:
1. Trackball is stationary, so it does not require much space to use it.
2. Compact Size

Fig 1.18 Trackball

4. Space ball
It is similar to a trackball, but it can move in six directions, whereas a trackball can move in only two.
The movement is recorded by strain gauges, which measure the pressure applied as the ball is
pushed and pulled in various directions. The ball has a diameter of around 7.5 cm and is mounted
in the base using rollers. One-third of the ball is inside the box; the rest is outside.
Applications:

 It is used for three-dimensional positioning of the object.


 It is used to select various functions in the field of virtual reality.
 It is applicable in CAD applications.
 Animation is also done using spaceball.
 It is used in the area of simulation and modeling.

Fig 1.19 Space ball

5. Joystick
A joystick is also a pointing device, which is used to change the cursor position on a monitor screen.
A joystick is a stick with a spherical ball at both its lower and upper ends, as shown in Fig. 1.20.
The lower spherical ball moves in a socket. The joystick can be moved in all four directions. The
function of a joystick is similar to that of the mouse. It is mainly used in Computer Aided Design
(CAD) and for playing computer games.

Fig 1.20 Joystick

6. Light Pen

A light pen (similar to a pen) is a pointing device used to select a displayed menu item or
draw pictures on the monitor screen. It consists of a photocell and an optical system placed in a small
tube. When its tip is moved over the monitor screen and the pen button is pressed, its photocell sensing
element detects the screen location and sends the corresponding signal to the CPU.
Uses:
1. Light pens can be used to input coordinate positions.
2. Depending on the background color or intensity, a light pen can be used as a locator.
3. It is used as a standard pick device with many graphics systems.
4. It can be used as a stroke input device.
5. It can be used as a valuator.

Fig 1.21 Light Pen

7. Digitizers

The digitizer is an operator input device which contains a large, smooth board (similar in appearance
to a mechanical drawing board) and an electronic tracking device.
The electronic tracking device contains a switch for the user to record the desired x and y
coordinate positions. The coordinates can be entered into the computer memory or stored on an
off-line storage medium such as magnetic tape.

Advantages:

 Drawing can easily be changed.


 It provides the capability of interactive graphics.
Disadvantages:
 Costly
 Suitable only for applications which require high-resolution graphics.

Fig 1.22 Digitizer Fig 1.23 Digital Camera

8. Touch Panels

A touch panel is a type of display screen that has a touch-sensitive transparent panel covering the
screen. A touch screen registers input when a finger or other object comes in contact with the
screen. When the wave signals are interrupted by some contact with the screen, that location is
recorded. Touch screens have long been used in military applications.

9. Voice Recognition
Voice Recognition is one of the newest, most complex input techniques used to interact with the
computer. The user inputs data by speaking into a microphone. The simplest form of voice
recognition is a one-word command spoken by one person. Each command is isolated with
pauses between the words. Voice Recognition is used in some graphics workstations as input
devices to accept voice commands. The voice-system input can be used to initiate graphics
operations or to enter data. These systems operate by matching an input against a predefined
dictionary of words and phrases.
Advantage:

1. More efficient device.


2. Easy to use
3. Unauthorized speakers can be identified

Disadvantages:

1. Very limited vocabulary


2. Voice of different operators can't be distinguished.

10. Image Scanner


It is an input device. The data or text is written on paper, and the paper is fed to the scanner. The
written information is converted into an electronic format, which is stored in the
computer. The input documents can contain text, handwritten material, pictures, etc.
By storing the document in a computer, it becomes safe for a longer period of time. The
document is stored permanently for the future, can be changed when needed, and
can be printed when needed. Scanning can be of black-and-white or colored
pictures. On a stored picture, 2D or 3D rotations, scaling and other operations can be applied.


Types of image Scanners:

1. Flat Bed Scanner:


It resembles a photocopy machine. It has a glass plate on its top, which is covered with a lid. The
document to be scanned is kept on the glass plate. Light is passed underneath the glass plate and
moved from left to right, so the document is scanned line by line. The process is repeated until the
complete page is scanned. A document of 4" x 6" can be scanned within 20-25 seconds.

Fig 1.24 Flatbed Scanner

2. Hand Held Scanner:


It has a number of LEDs (Light Emitting Diodes) arranged in a small case. It is
called a hand-held scanner because it is held in the hand while it performs the scanning. For scanning,
the scanner is moved over the document from top to bottom with its light on, and it must be
dragged very slowly over the document. If the dragging of the scanner over the
document is not steady, the conversion will not be correct.

Fig 1.25 Handheld Scanner


HARDCOPY DEVICES

Fig 1.26 Classification of Printers

Types of printers
1. Impact Printers: The printers that print the characters by striking against the ribbon and onto the
papers are known as Impact Printers.
These Printers are of two types:
1. Character Printers
2. Line Printers
2. Non-Impact Printers: The printers that print the characters without striking against the ribbon and
onto the papers are called Non-Impact Printers. These printers print a complete page at a time,
therefore, also known as Page Printers.

Page Printers are of two types:
1. Laser Printers
2. Inkjet Printers

Impact Printers
1. Dotmatrix printers

A dot matrix printer prints characters in the form of dots. The printer has a head which contains nine
pins, arranged one below the other. Each pin can be activated independently.
All or only some of the pins are activated at a time. When a pin is not activated, its tip stays in the
head; when a pin is activated, it comes out of the print head.
Characters are typically formed in a 5 x 7 dot matrix.

Fig 1.27 Dotmatrix printer

Advantages of Dotmatrix printers

1. Dot matrix printers print output as dots, so they can print any shape of character. This
allows the printer to print special characters, charts, graphs, etc.
2. Dot Matrix Printers come under the category of impact printers. The printing is done when
the hammer pin strikes the inked ribbon. The impressions are printed on paper. By placing
multiple copies of carbon, multiple copies of output can be produced.
3. It is suitable for printing of invoices of companies.

2. Daisy Wheel Printers

The print head lies on a wheel, and the arms carrying the characters are arranged like the petals of a
daisy; that is why it is called a daisy wheel printer.
Advantage:
1. More reliable than Dot Matrix Printers
2. Better Quality
Disadvantage:
Slower than Dot Matrix Printers

Fig 1.28 Daisywheel Printer

3. Drum Printers

These are line printers, which print one line at a time. A drum printer consists of a drum which is
solid and cylindrical in shape, with characters embossed on it in the form of vertical bands. The
characters are in circular form, and each band consists of some characters. Each line on the drum
consists of 132 characters; because there are 96 lines, the total number of characters is
132 x 96 = 12,672.

Chain Printers:

These are also line printers, used to print one line at a time. A chain basically consists of links, and
each link contains one character. The printer can use any character-set size, i.e., 48, 64 or 96
characters, and it also consists of a number of hammers.

Advantages:
1. The chain or band, if damaged, can be changed easily.
2. It allows printing in different forms.
3. Different scripts can be printed using this printer.

Disadvantages:
1. It cannot print charts and graphs.
2. It cannot print characters of any shape.
3. Chain printers are impact printers; the hammers strike the paper, so they are noisy.

Non-Impact Printers – Inkjet printers

These printers use a special ink called electrostatic ink. The printer head has special nozzles (up to
64) that drop ink onto the paper. The dropped ink is deflected by an electrostatic plate fixed outside
the nozzle, and the deflected ink settles on the paper.
Advantages:
1. These produce high quality of output as compared to the dot matrix.
2. A high-quality output can be produced using the 64 nozzles.
3. Inkjet can print characters in a variety of shapes.
4. Inkjet can print special characters.
5. The printer can print graphs and charts.
Disadvantages:
1. Inkjet Printers are slower than dot matrix printers.
2. The cost of inkjet is more than a dot matrix printer.

Fig 1.29 Inkjet Printer

Non-Impact Printers – Laser printers

These printers use laser light to produce the dots needed to form the characters to be printed on a
page, and hence the name laser printers.

The output is generated in the following steps:


Step1: The bits of data sent by processing unit act as triggers to turn the laser beam on & off.
Step2: The output device has a drum which is cleared & is given a positive electric charge.
To print a page, the laser beam scans back and forth across the surface of the drum.
Step3: The laser exposed parts of the drum attract an ink powder known as toner.
Step4: The attracted ink powder is transferred to paper.
Step5: The ink particles are permanently fixed to the paper by using either heat or pressure
technique.
Step6: The drum rotates back to the cleaner where a rubber blade cleans off the excess ink &
prepares the drum to print the next page.

PLOTTERS

Plotters are a special type of output device. They are suitable for applications such as:


1. Architectural plan of the building.
2. CAD applications like the design of mechanical components of aircraft.
3. Many engineering applications.

Advantage:
• It can produce high-quality output on large sheets.
• It is used to produce high-precision drawings.
• It can produce graphics of various sizes.
• The speed of producing output is high.

Fig 1.30 Plotter

GRAPHICS SOFTWARE
There are two types of Graphics Software.
1. General Purpose Packages: Basic functions in a general package include those for generating
picture components (straight lines, polygons, circles and other figures), setting color and intensity
values, selecting views, and applying transformations. Examples of general-purpose packages are GL
(Graphics Library), GKS, PHIGS, PHIGS+, etc.
2. Special Purpose Packages: These packages are designed for non-programmers, so that these
users can use the graphics packages without knowing the inner details.
Example of special purpose package is :
• Painting programs
• Package used for business purpose
• Package used for medical systems.
• CAD packages

LINE DRAWING ALGORITHMS

A line connects two points. It is a basic element in graphics. To draw a line, you need two end
points (x1, y1) and (x2, y2). The line segment is sampled at unit intervals in one coordinate, and the
corresponding integer values nearest to the line path are determined for the other coordinate.

Equation of a line is given by :


y = mx+b

where m is the slope of the line and b is the y-intercept.

b = y-mx

Considering a line with positive slope, if the slope is less than or equal to 1, we sample at unit x
intervals (dx = 1) and compute successive y values as:

Yk+1 = Yk + m

The subscript k takes integer values starting from 0 for the first point and increases along the x axis
in unit intervals until the endpoint is reached. Y is rounded off to the nearest integer to correspond to
a screen pixel.

Similarly, if the slope is greater than 1, we sample at unit y intervals (dy = 1) and compute
successive x values as:

Xk+1 = Xk + 1/m

Digital Differential Analyzer (DDA) line drawing algorithm

Digital Differential Analyzer (DDA) algorithm is a simple line generation algorithm, which is
explained step by step here.

 Get the input of the two end points (X1, Y1) and (X2, Y2).
 Calculate the difference between the two endpoints (dx and dy).
 Based on the calculated difference, identify the number of steps needed to put pixels: if
dx > dy, more steps are needed in the x coordinate; otherwise in the y coordinate.
 Calculate the increment in the x coordinate and the y coordinate.
 Put the pixels by successively incrementing the x and y coordinates accordingly and complete
the drawing of the line.

Algorithm for DDA

Procedure lineDDA (x1, y1, x2, y2 : integer)

{
var
dx, dy, steps, i : integer;
xinc, yinc, x, y : real;
dx = x2 - x1;
dy = y2 - y1;
// the larger of |dx| and |dy| gives the number of unit steps
if abs(dx) > abs(dy) then
steps = abs(dx)
else steps = abs(dy);
// per-step increments (at least one of them has magnitude 1)
xinc = dx / steps;
yinc = dy / steps;
x = x1;
y = y1;
Setpixel (round(x), round(y), 1);
for (i = 1; i <= steps; i++)
{
x = x + xinc;
y = y + yinc;
Setpixel (round(x), round(y), 1);
}
}


Eg 1) Draw a line between (5,5) – (10,10)


x1 = 5, y1 = 5, x2 = 10, y2=10

m = (y2 - y1) / (x2 - x1) = (10 - 5) / (10 - 5) = 1

if dx > dy then
steps = abs(dx)
else
steps = abs(dy)

In this example both dx and dy are equal (dx = dy = 5), so

steps = abs(dy) = 5
Xinc = dx / steps = 5/5 = 1
Yinc = dy / steps = 5/5 = 1

Increment both x and y using xinc and yinc for steps number of times and the points generated are
given in table below and it is also plotted in Fig. 1.31.

K Xk Yk
0 5 5
1 6 6
2 7 7
3 8 8
4 9 9
5 10 10


Fig 1.31 Line generated using DDA algorithm

Eg 2) Draw a line between (5,5) – (10,8)


x1 = 5, y1 = 5, x2 = 10, y2=8

m = (y2 - y1) / (x2 - x1) = (8 - 5) / (10 - 5) = 0.6

Here dx = 5 and dy = 3, so dx > dy and steps = abs(dx) = 5; hence xinc = 1 and yinc = 0.6. The points
generated are shown in the table below and plotted in Fig. 1.32.
K Xk Yk
0 5 5
1 6 5.6
2 7 6.2
3 8 6.8
4 9 7.4
5 10 8

Fig 1.32 Line generated using DDA algorithm

Advantages :
1. It is the simplest line generation algorithm and does not require special skills for implementation.
2. It is a faster method for calculating pixel positions than directly using the line equation y = mx + b.
Disadvantages :
1. It involves floating-point arithmetic and rounding, which is relatively slow and allows round-off
errors to accumulate, so the calculated positions can drift away from the true line path.

Bresenham line drawing algorithm


The big advantage of this algorithm is that it uses only integer calculations.
The main idea of the Bresenham’s line drawing algorithm: Move across the x-axis in unit intervals
and at each step choose between two different y coordinates. For example from position (2,3) we
have to choose between (3,3) and (3,4), we would like the point that is closer to the original line.

Deriving TheBresenham Line Algorithm

To illustrate Bresenham’s approach, we first consider the scan conversion process for lines with
positive slope less than 1 (m<1).
Pixel positions along a line path are then determined by sampling at unit x intervals. Starting from the
left endpoint (x0, y0) of a given line, we step to each successive column (x position) and plot the pixel
whose scan-line y value is closest to the line path. Consider the kth step in this process.
Assuming we have determined that the pixel at (xk, yk) is to be displayed, we next need to decide
which pixel to plot in column xk+1. Our choices are the pixels at positions (xk+1, yk) and (xk+1,
yk+1).
Bresenham line drawing algorithm

1. Input the two line endpoints, storing the left endpoint in (x0, y0).


2. Plot the point (x0, y0)
3. Calculate the constants dx, dy, 2dy, and (2dy – 2dx) and get the first value for the decision
parameter as:
P0 = 2dy – dx
4. At each xk along the line, starting at k=0, perform the following test:
If pk< 0, the next point to plot is
(xk+1, yk) and
pk+1= pk + 2dy
Otherwise, the next point to plot is
(xk+1, yk+1) and
pk+1 = pk + 2dy –2dx
5. Repeat step 4 dx times.
Detailed algorithm

Procedure lineBresenham (x1, y1, x2, y2 : integer)

var
dx, dy, x, y, xend, p : integer;
{
// assumes a line with positive slope less than 1, as in the derivation above
dx = abs(x1 - x2);
dy = abs(y1 - y2);
p = 2 * dy - dx;        // initial decision parameter
if (x1 > x2) then
{
// start from the left endpoint
x = x2;
y = y2;
xend = x1;
}
else
{
x = x1;
y = y1;
xend = x2;
}

putpixel(x, y, 4);
while (x < xend)
{
x = x + 1;
if (p < 0) then
p = p + 2 * dy;         // keep the same y
else
{
y = y + 1;              // step up to the next scan line
p = p + 2 * (dy - dx);
}
putpixel(x, y, 4);
}
}

Eg., Draw a line between (20,10) and (30, 18)


dx =10, dy = 8
The initial decision parameter has the value P0 = 2dy – dx = 6
The increments for calculating successive decision parameters are:
2dy = 16
2dy – 2dx = – 4
We plot the initial point (20, 10) and determine successive pixel positions along the line path from
the decision parameters as:

k     pk     (xk+1, yk+1)
0      6     (21, 11)
1      2     (22, 12)
2     -2     (23, 12)
3     14     (24, 13)
4     10     (25, 14)
5      6     (26, 15)
6      2     (27, 16)
7     -2     (28, 16)
8     14     (29, 17)
9     10     (30, 18)

Difference between the DDA Line Drawing Algorithm and Bresenham's Line Drawing Algorithm

Arithmetic: The DDA algorithm uses floating-point (real) arithmetic, whereas Bresenham's algorithm
uses fixed-point (integer) arithmetic.

Operations: The DDA algorithm uses multiplication and division in its operations, whereas
Bresenham's algorithm uses only addition and subtraction.

Speed: The DDA algorithm is slower than Bresenham's algorithm in line drawing because it uses real
(floating-point) arithmetic; Bresenham's algorithm performs only integer addition and subtraction, so
it runs significantly faster.

Accuracy and efficiency: The DDA algorithm is not as accurate and efficient as Bresenham's
algorithm; Bresenham's algorithm is more efficient and much more accurate.

Drawing: The DDA algorithm can draw circles and curves, but not as accurately as Bresenham's
algorithm; Bresenham's algorithm can draw circles and curves with much more accuracy.

Round off: The DDA algorithm rounds off the coordinates to the integer nearest to the line;
Bresenham's algorithm does not round off but takes the incremental value in its operations.

Expense: The DDA algorithm uses an enormous number of floating-point multiplications, so it is
expensive; Bresenham's algorithm is less expensive, as it uses only addition and subtraction.
Mid Point Circle Drawing Algorithm
The mid-point circle drawing algorithm is an algorithm used to determine the points needed for
generating a circle. We use the mid-point algorithm to calculate all the perimeter points of the
circle in the first octant and then plot them along with their mirror points in the other octants. This
works because a circle is symmetric about its centre.

Fig 1.33 8 way symmetry for drawing circle

The equation for a circle is:


x² + y² = r²
Where r is the radius of the circle. So, we can write a direct circle drawing algorithm by solving
the equation for y at unit x intervals using:

y = (r2 – x2)1/2
For a given radius r and screen center position (xc, yc), we can first set up our algorithm to
calculate pixel positions around a circle path centered at the coordinate (0, 0). Then each
calculated position (x, y) is moved to its proper screen position by adding xc to x and yc to y.
Assuming we have just plotted the pixel at (xk, yk), we next need to determine whether the pixel at
position (xk+1, yk) or the one at position (xk+1, yk–1) is closer to the circle path. Our decision
parameter is the circle function evaluated at the midpoint between these two pixels:
pk = fcircle(xk + 1, yk – 1/2) = (xk + 1)² + (yk – 1/2)² – r²

where the circle function is defined as fcircle(x, y) = x² + y² – r². The relative position of any point
(x, y) can be determined by checking the sign of the circle function:
fcircle(x, y) < 0   if (x, y) is inside the circle boundary
fcircle(x, y) = 0   if (x, y) is on the circle boundary
fcircle(x, y) > 0   if (x, y) is outside the circle boundary
The circle function tests are performed for the midpoint positions between pixels near the circle
path at each sampling step. Thus, the circle function is the decision parameter in the midpoint
algorithm, and we can set up incremental calculations for this function as we did in the line algorithm.

1. Input radius r and circle centre (xc, yc), and obtain the first point on the circumference of a
circle centred on the origin as:
(x0, y0) = (0, r)
2. Calculate the initial value of the decision parameter as:
P0 = 5/4 – r (or, rounded, P0 = 1 – r)
3. At each position xk, starting with k = 0, perform the following test:
If pk < 0, the next point along the circle centred on (0, 0) is
(xk+1, yk) and pk+1 = pk + 2xk+1 + 1
Otherwise, the next point along the circle is
(xk+1, yk–1) and pk+1 = pk + 2xk+1 + 1 – 2yk+1
where 2xk+1 = 2xk + 2 and 2yk+1 = 2yk – 2
4. Determine symmetry points in the other seven octants
5. Move each calculated pixel position (x, y) onto the circular path centred at (xc, yc) and plot
the coordinate values:
x= x+ xc and y = y+ yc
6. Repeat steps 3 to 5 until x >= y

E.g., given a circle with radius r = 10, we demonstrate the midpoint circle algorithm by determining
positions along the circle octant in the first quadrant from x = 0 to x = y. The initial value of the
decision parameter is P0 = 1 – r = –9.
For the circle centred on the origin, the initial point is (x0, y0) = (0, 10), and the initial increment
terms for calculating the decision parameters are
2x0 = 0 and 2y0 = 20
Successive decision parameter values and positions along the circle path are calculated using the
midpoint algorithm as:

k     pk     (xk+1, yk+1)     2xk+1     2yk+1
0     -9     (1, 10)            2         20
1     -6     (2, 10)            4         20
2     -1     (3, 10)            6         20
3      6     (4, 9)             8         18
4     -3     (5, 9)            10         18
5      8     (6, 8)            12         16
6      5     (7, 7)            14         14

Fig 1.34 A plot of the generated pixel positions in the first quadrant

DETAILED ALGORITHM
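The detailed algorithm chart is not reproduced here; the following C sketch is a minimal illustration
that follows steps 1 to 6 above (setPixel is assumed to stand for whatever pixel-plotting primitive the
system provides; it is not a standard library call):

#include <stdio.h>

/* Assumed pixel-plotting primitive; replace with the system's own routine. */
static void setPixel(int x, int y)
{
    printf("(%d, %d)\n", x, y);
}

/* Plot the eight symmetric points of (x, y) about the centre (xc, yc). */
static void plotCirclePoints(int xc, int yc, int x, int y)
{
    setPixel(xc + x, yc + y);  setPixel(xc - x, yc + y);
    setPixel(xc + x, yc - y);  setPixel(xc - x, yc - y);
    setPixel(xc + y, yc + x);  setPixel(xc - y, yc + x);
    setPixel(xc + y, yc - x);  setPixel(xc - y, yc - x);
}

/* Midpoint circle algorithm, following steps 1 to 6 in the text. */
void circleMidpoint(int xc, int yc, int r)
{
    int x = 0, y = r;
    int p = 1 - r;                       /* initial decision parameter P0 = 1 - r */

    plotCirclePoints(xc, yc, x, y);
    while (x < y) {
        x++;
        if (p < 0)
            p += 2 * x + 1;              /* midpoint inside: keep y               */
        else {
            y--;
            p += 2 * (x - y) + 1;        /* midpoint outside: also decrement y    */
        }
        plotCirclePoints(xc, yc, x, y);
    }
}

int main(void)
{
    circleMidpoint(0, 0, 10);            /* reproduces the example with r = 10    */
    return 0;
}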
The advantages of Mid Point Circle Drawing Algorithm are-
• It is a powerful and efficient algorithm.
• The entire algorithm is based on the simple equation of a circle, x² + y² = r².
• It is easy to implement from the programmer’s perspective.
• This algorithm is used to generate curves on raster displays.

The disadvantages of Mid Point Circle Drawing Algorithm are-


• Accuracy of the generating points is an issue in this algorithm.
• The circle generated by this algorithm is not smooth.
• This algorithm is time consuming.
2D Geometric Transformations: Basic Transformations – Matrix Representations –
Composite Transformations – Other Transformations. 2D Viewing: The Viewing Pipeline –
Viewing Co-ordinate Reference Frame – Window-to-Viewport Co-ordinate Transformation -
2D Viewing Functions – Clipping Operations- Point Clipping-Line Clipping-Polygon
Clipping-Curve Clipping-Text Clipping-Exterior Clipping.

2- Dimensional Transformations

The Basic Transformations:

1. Translation
2. Scaling
3. Rotation

Other Transformations:

1. Reflection
2. Shearing

Translations

Translation is the displacement of an object by a given distance and direction from its original position.

 It is a rigid-body transformation that moves the object without deformation.
 Initial position: point P (x, y)
 The new point is P' (x', y'), where
x' = x + tx , y' = y + ty , and tx and ty are the displacements in x and y respectively.


Fig 2.1 Translation from P to P’


The translation pair (tx, ty) is called a translation vector or shift vector

Problem:

 Assume you are given a point at (x, y) = (2, 1). Where will the point be if you move it 3
units to the right and 1 unit up? Ans: (x', y') = (5, 2). How was this obtained? (x', y') =
(x + 3, y + 1). That is, to move a point by some amount dx to the right and dy up, you must
add dx to the x-coordinate and add dy to the y-coordinate.
 What is the required transformation to move the green triangle to the red triangle? Here
the green triangle is represented by 3 points

triangle = { p1 = (1,0), p2 = (2,0), p3 = (1.5,2) }

Fig 2.2 Example for Translation


Matrix/Vector Representation of Translations

A translation can also be represented by a pair of numbers, t = (tx, ty), where tx is the change in the
x-coordinate and ty is the change in the y-coordinate. To translate the point p by t, we simply add t to
obtain the new (translated) point:

p' = p + t
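A minimal C sketch of translating a point (the Point2D type and function name are chosen only for
this illustration); it reproduces the (2, 1) to (5, 2) example above:

#include <stdio.h>

typedef struct { double x, y; } Point2D;

/* Translate point p by the translation vector (tx, ty): p' = p + t. */
static Point2D translate(Point2D p, double tx, double ty)
{
    Point2D q = { p.x + tx, p.y + ty };
    return q;
}

int main(void)
{
    Point2D p = { 2.0, 1.0 };              /* the point (2, 1) from the problem above */
    Point2D q = translate(p, 3.0, 1.0);    /* move 3 units right and 1 unit up        */
    printf("(%.1f, %.1f)\n", q.x, q.y);    /* prints (5.0, 2.0)                       */
    return 0;
}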

Rotation

Rotation is applied to an object by repositioning it along a circular path in the xy plane.

To generate a rotation, we specify
 a rotation angle θ
 a pivot point (xr, yr)

Fig 2.3 Rotation

Positive values of θ are used for counter-clockwise rotation; negative values of θ for clockwise rotation.

Matrix representation of rotation: P' = R · P
For rotation about the origin through an angle θ, the transformation equations are
x' = x cosθ – y sinθ
y' = x sinθ + y cosθ
so the rotation matrix is
R = | cosθ  –sinθ |
    | sinθ   cosθ |

Fig 2.4 Example for rotation

Scaling

 Scaling alters the size of an object.
 The operation is carried out by multiplying each of its components by a scalar.
 Uniform scaling means this scalar is the same for all components.
 Non-uniform scaling uses different scalars per component.

x' = x * sx
y' = y * sy

In matrix form:

| x' |   | sx  0  |   | x |
| y' | = | 0   sy | * | y |

e.g., when sx = 2 and sy = 2

Fig 2.5 Example for Scaling
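As a small illustration of the rotation and scaling equations above, the following C sketch (the
Point2D type and function names are assumptions for this example only) rotates a point about the
origin and scales another with sx = sy = 2:

#include <math.h>
#include <stdio.h>

typedef struct { double x, y; } Point2D;

/* Rotate p about the origin by angle theta (in radians):
   x' = x cos(theta) - y sin(theta), y' = x sin(theta) + y cos(theta). */
static Point2D rotate(Point2D p, double theta)
{
    Point2D q = { p.x * cos(theta) - p.y * sin(theta),
                  p.x * sin(theta) + p.y * cos(theta) };
    return q;
}

/* Scale p relative to the origin: x' = x * sx, y' = y * sy. */
static Point2D scale(Point2D p, double sx, double sy)
{
    Point2D q = { p.x * sx, p.y * sy };
    return q;
}

int main(void)
{
    const double pi = acos(-1.0);
    Point2D p = { 1.0, 0.0 };
    Point2D r = rotate(p, pi / 2.0);   /* 90-degree counter-clockwise rotation */
    Point2D s = scale(p, 2.0, 2.0);    /* uniform scaling with sx = sy = 2     */
    printf("rotated: (%.2f, %.2f)\n", r.x, r.y);   /* approximately (0.00, 1.00) */
    printf("scaled : (%.2f, %.2f)\n", s.x, s.y);   /* (2.00, 0.00)               */
    return 0;
}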

Reflection

A reflection is a transformation that produces a mirror image of an object. It is generated relative
to an axis of reflection. The common cases, illustrated in the figures below and in the short code
sketch after this list, are:

1. Reflection about the x-axis
2. Reflection about the y-axis
3. Reflection relative to an axis perpendicular to the xy plane and passing through the coordinate
origin
4. Reflection of an object relative to an axis perpendicular to the xy plane and passing through a
point P
5. Reflection of an object with respect to the line y = x
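A brief C sketch of cases 1, 2 and 5 from the list above (illustrative names only; the reflections are
written directly from the coordinate rules x' = x, y' = -y, and so on):

#include <stdio.h>

typedef struct { double x, y; } Point2D;

/* Reflection about the x-axis: x' = x, y' = -y. */
static Point2D reflectX(Point2D p)        { Point2D q = {  p.x, -p.y }; return q; }

/* Reflection about the y-axis: x' = -x, y' = y. */
static Point2D reflectY(Point2D p)        { Point2D q = { -p.x,  p.y }; return q; }

/* Reflection about the line y = x: x' = y, y' = x. */
static Point2D reflectYequalsX(Point2D p) { Point2D q = {  p.y,  p.x }; return q; }

int main(void)
{
    Point2D p = { 3.0, 2.0 };
    Point2D a = reflectX(p), b = reflectY(p), c = reflectYequalsX(p);
    printf("about x-axis: (%.0f, %.0f)\n", a.x, a.y);   /* ( 3, -2) */
    printf("about y-axis: (%.0f, %.0f)\n", b.x, b.y);   /* (-3,  2) */
    printf("about y = x : (%.0f, %.0f)\n", c.x, c.y);   /* ( 2,  3) */
    return 0;
}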

Reflection about X-Axis :

Fig 2.4 Example for reflection about X-axis

Reflection about y-axis:

Fig 2.5 Example for reflection about Y-axis

Reflection relative to an axis perpendicular to the xy plane and passing through the
coordinate origin:

Fig 2.6 Example for reflection about XY plane

Reflection of an object with respect to the line y=x

Fig 2.7 Example for reflection about Y=X


Shearing

A transformation that distorts the shape of an object such that the transformed object appears as if
the object were composed of internal layers that had been caused to slide over each other.

Shear relative to the x-axis: x' = x + shx · y, y' = y.   Shear relative to the y-axis: x' = x, y' = y + shy · x.

Fig 2.8 Example for X- Shearing


Fig 2.9 Example for Y- Shearing

2D VIEWING

The mapping of a 2D world coordinate system to device coordinates is called a two-dimensional


viewing transformation. The clipping window is the section of the 2D scene that is selected for
viewing. The display window is where the scene will be viewed. The viewport controls the
placement of the scene within the display window

A window-viewport transformation describes the mapping of a (rectangular) window in one


coordinate system into another (rectangular) window in another coordinate system. This
transformation is defined by the section of the original image that is transformed (clipping
window), the location of the resulting window (viewport), and how the window is translated,
scaled or rotated.

Fig 2.10 Steps involved with window to viewport mapping


Fig 2.11 window and viewport

Window to Viewport coordinate transformation

To maintain the same relative placement in the viewport as in the window, the conversion is performed
with the following sequence of transformations:

1. Perform a scaling transformation using a fixed-point position of (xwmin, ywmin) that scales the
window area to the size of the viewport.
2. Translate the scaled window area to the position of the viewport. Relative proportions of
objects are maintained if the scaling factors are the same (Sx = Sy).

Fig 2.12 Example for window and viewport

Formula for window to viewport transformation

World coordinates – the Cartesian coordinates of the window, defined by Xwmin, Xwmax, Ywmin, Ywmax.
Device coordinates – the screen coordinates where the object is to be displayed, defined by Xvmin,
Xvmax, Yvmin, Yvmax.
Window – the area in world coordinates selected for display.
Viewport – the area in device coordinates where the graphics is to be displayed.


Now the relative position of the object in the window and in the viewport must be the same:

(Xv – Xvmin) / (Xvmax – Xvmin) = (Xw – Xwmin) / (Xwmax – Xwmin)
(Yv – Yvmin) / (Yvmax – Yvmin) = (Yw – Ywmin) / (Ywmax – Ywmin)

From the above equations:

Xv = Xvmin + (Xw – Xwmin) * Sx
Yv = Yvmin + (Yw – Ywmin) * Sy

in which

Sx = (Xvmax – Xvmin) / (Xwmax – Xwmin)
Sy = (Yvmax – Yvmin) / (Ywmax – Ywmin)

Example :
Let Xwmin = 20, Xwmax = 80, Ywmin = 40, Ywmax = 80 and Xvmin = 30, Xvmax = 60, Yvmin = 40,
Yvmax = 60.
Given a point (Xw, Yw) = (30, 80) on the window, find its position on the viewport.

STEP 1 :
Calculate scaling factor of x coordinate Sx and scaling factor of y coordinate Sy

Sx = (60 – 30) / (80 -20) = 30 /60


Sy = (60-40)/(80 -40) = 20/40

STEP 2 : Find the point on the view port using the formula given below :

Xv = 30+(30 – 20) * 30/60 = 30 + 10*30/60 = 35


Yv = 40+(80 -40) * 20/40 = 40 + 40 *20/40 = 60

Thus the viewport coordinates are Xv = 35 and Yv = 60
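The same transformation can be written as a small C helper (the function name and argument order
are chosen only for this sketch); called with the numbers of the example above, it reproduces
Xv = 35 and Yv = 60:

#include <stdio.h>

/* Map a world-coordinate point (xw, yw) from the clipping window
   [xwmin, xwmax] x [ywmin, ywmax] to the viewport [xvmin, xvmax] x [yvmin, yvmax]. */
static void windowToViewport(double xw, double yw,
                             double xwmin, double xwmax, double ywmin, double ywmax,
                             double xvmin, double xvmax, double yvmin, double yvmax,
                             double *xv, double *yv)
{
    double sx = (xvmax - xvmin) / (xwmax - xwmin);   /* x scaling factor */
    double sy = (yvmax - yvmin) / (ywmax - ywmin);   /* y scaling factor */
    *xv = xvmin + (xw - xwmin) * sx;
    *yv = yvmin + (yw - ywmin) * sy;
}

int main(void)
{
    double xv, yv;
    /* Window (20,40)-(80,80), viewport (30,40)-(60,60), point (30,80). */
    windowToViewport(30, 80, 20, 80, 40, 80, 30, 60, 40, 60, &xv, &yv);
    printf("viewport point: (%.0f, %.0f)\n", xv, yv);   /* (35, 60) */
    return 0;
}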

Advantages

1. The position of the viewport can be changed, allowing objects to be viewed at different
positions on the interface window.
2. Multiple viewports can also be used to display different sections of a scene at different
screen positions. Also, by changing the dimensions of the viewport, the size and proportions
of the objects being displayed can be manipulated.
3. Thus, a zooming effect can be achieved by successively mapping different-sized
clipping windows onto a fixed-size viewport.
4. Note, however, that if the aspect ratio of the world window and the viewport are different, the
image may look distorted.


2D viewing functions


CLIPPING OPERATION
1. Point Clipping
2. Line Clipping
3. Area Clipping
4. Curve Clipping
5. Text Clipping

1. Point Clipping
In computer graphics, our screen acts as a 2-D coordinate system. It is not necessary that each and
every point can be viewed on our viewing pane (i.e. our computer screen). We can view only points
which lie in the range between (0, 0) and (Xmax, Ymax). Clipping is the procedure that identifies
those portions of a picture that are either inside or outside of our viewing pane.

In the case of point clipping, we only show or print points which lie within the range of our
viewing pane; points which are outside the range are discarded.

Fig 2.13 Input of point clipping

Fig 2.14 Output of point clipping

Algorithm :
1. Get the minimum and maximum coordinates of the viewing pane.
2. Get the coordinates of a point.
3. Check whether the given point lies between the minimum and maximum coordinates of the
viewing pane.
4. If yes, display the point, which lies inside the region; otherwise discard it.
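These four steps translate directly into a small C helper (the function name is illustrative):

/* Returns 1 if the point (x, y) lies inside the viewing pane bounded by
   (xmin, ymin) and (xmax, ymax); otherwise returns 0 and the point is discarded. */
static int clipPoint(double x, double y,
                     double xmin, double ymin, double xmax, double ymax)
{
    return (x >= xmin && x <= xmax && y >= ymin && y <= ymax);
}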

2. LINE CLIPPING

i) COHEN –SUTHERLAND LINE CLIPPING algorithm


Given a set of lines and a rectangular area of interest, the task is to remove lines which are outside
the area of interest and clip the lines which are partially inside the area.

Fig 2.15 Clipping window

Cohen-Sutherland algorithm divides a two-dimensional space into 9 regions and then efficiently
determines the lines and portions of lines that are inside the given rectangular area.


Fig 2.16 Nine regions of Clipping window

Set the first bit to 1 if the point lies to the left of the window (x < xmin)

Set the second bit to 1 if the point lies to the right of the window (x > xmax)
Set the third bit to 1 if the point lies below the window (y < ymin)
Set the fourth bit to 1 if the point lies above the window (y > ymax)
The bit order is TBRL (Top, Bottom, Right, Left).

There are three possible cases for any given line.


1. Completely inside the given rectangle : the bitwise OR of the region codes of the two end points of
the line is 0 (both points are inside the rectangle).
2. Completely outside the given rectangle : both endpoints share at least one outside region,
which implies that the line does not cross the visible region (bitwise AND of the endpoint codes
!= 0).
3. Partially inside the window : both endpoints are in different regions. In this case, the
algorithm finds one of the two points that is outside the rectangular region. The intersection of
the line from the outside point with the rectangular window becomes the new endpoint, and the
algorithm repeats.


Algorithm :

Step 1: Calculate the region codes of both endpoints of the line.

Step 2: Perform the OR operation on both of these region codes.
Step 3: If the OR operation gives 0000, then

the line is considered to be completely inside the window

else
perform the AND operation on both region codes:
If AND ≠ 0000
then the line is completely outside
else (AND = 0000)
the line is partially inside
Step 4: If the line is a partially-inside (clipping) case, find its intersection with the boundaries of the
window, using the slope m = (y2 - y1) / (x2 - x1):
(a) If bit 1 is "1", the line intersects the left boundary of the rectangular window:
y3 = y1 + m (Xwmin - x1)
where Xwmin is the minimum value of the X coordinate of the window.
(b) If bit 2 is "1", the line intersects the right boundary:
y3 = y1 + m (Xwmax - x1)
where Xwmax is the maximum value of the X coordinate of the window.
(c) If bit 3 is "1", the line intersects the bottom boundary:
x3 = x1 + (Ywmin - y1) / m
where Ywmin is the minimum value of the Y coordinate of the window.
(d) If bit 4 is "1", the line intersects the top boundary:
x3 = x1 + (Ywmax - y1) / m
where Ywmax is the maximum value of the Y coordinate of the window.
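A sketch of the region-code computation and the two trivial tests in C (the bit masks below use one
common convention, Left = 1, Right = 2, Bottom = 4, Top = 8, which is the same set of four flags as
the TBRL ordering described above; all names are illustrative):

enum { LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8 };

/* Compute the 4-bit Cohen-Sutherland region code of a point. */
static int outcode(double x, double y,
                   double xmin, double ymin, double xmax, double ymax)
{
    int code = 0;
    if (x < xmin) code |= LEFT;
    else if (x > xmax) code |= RIGHT;
    if (y < ymin) code |= BOTTOM;
    else if (y > ymax) code |= TOP;
    return code;
}

/* Trivial accept / reject tests on a line segment (xa,ya)-(xb,yb):
   returns 1 = completely inside, -1 = completely outside, 0 = needs clipping. */
static int trivialTest(double xa, double ya, double xb, double yb,
                       double xmin, double ymin, double xmax, double ymax)
{
    int c1 = outcode(xa, ya, xmin, ymin, xmax, ymax);
    int c2 = outcode(xb, yb, xmin, ymin, xmax, ymax);
    if ((c1 | c2) == 0) return 1;    /* bitwise OR is 0000: completely inside     */
    if ((c1 & c2) != 0) return -1;   /* bitwise AND non-zero: completely outside  */
    return 0;                        /* otherwise: clip against the boundaries    */
}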


Example 1:

Example 2 :


Example 3

Advantages of line clipping


1. It will extract part we desire.
2. For identifying the visible and invisible area in the 3D object.
3. For creating objects using solid modeling.
4. For drawing operations.
5. Operations related to the pointing of an object.
6. For deleting, copying, moving part of an object.

Limitation

The algorithm is applicable only to rectangular windows and not to other convex-shaped
windows.

ii) LIANG-BARSKY LINE CLIPPING

Faster line clippers have been developed based on analysis of the parametric equation of a line
segment, which can be written in the form:
x = x1 + u ∆x
y = y1 + u ∆y,  where 0 <= u <= 1, ∆x = (x2 - x1) and ∆y = (y2 - y1)

In the Liang-Barsky approach, we first write the point-clipping conditions in parametric form:
xwmin <= x1 + u ∆x <= xwmax
ywmin <= y1 + u ∆y <= ywmax
Each of these four inequalities can be expressed as u · pk <= qk, for k = 1, 2, 3, 4, where
p1 = -∆x, q1 = x1 - xwmin   (left boundary)
p2 = ∆x,  q2 = xwmax - x1   (right boundary)
p3 = -∆y, q3 = y1 - ywmin   (bottom boundary)
p4 = ∆y,  q4 = ywmax - y1   (top boundary)

CONDITION POSITION OF LINE

pk = 0 parallel to the clipping boundaries

pk = 0 and qk < 0 completely outside the boundary

pk = 0 and qk >= 0 inside the parallel clipping boundary

pk < 0 line proceeds from outside to inside

pk > 0 line proceeds from inside to outside

Cases of Selection :
1. pk = 0
• Line is parallel to boundaries
• If for the same k, qk < 0, reject
• Else, accept
2. pk < 0
• Line starts outside this boundary
• rk = qk / pk
• u1 = max(0, rk, u1)
3. pk > 0
• Line starts inside this boundary
• rk = qk / pk
• u2 = min(1, rk, u2)
4. If u1 > u2, the line is completely outside
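The case analysis above maps almost directly onto the following C sketch (a minimal illustration
assuming the standard pk, qk definitions given earlier; the function names are not from any
particular library):

/* Update u1/u2 for one clipping boundary; returns 0 if the line is rejected. */
static int clipTest(double p, double q, double *u1, double *u2)
{
    if (p == 0.0)                 /* line parallel to this boundary            */
        return q >= 0.0;          /* reject only if it is also outside         */
    double r = q / p;
    if (p < 0.0) {                /* line proceeds from outside to inside      */
        if (r > *u2) return 0;
        if (r > *u1) *u1 = r;
    } else {                      /* line proceeds from inside to outside      */
        if (r < *u1) return 0;
        if (r < *u2) *u2 = r;
    }
    return 1;
}

/* Liang-Barsky clipping of (xa,ya)-(xb,yb); returns 1 and the clipped
   endpoints if any part of the line lies inside the window. */
static int liangBarsky(double *xa, double *ya, double *xb, double *yb,
                       double xwmin, double ywmin, double xwmax, double ywmax)
{
    double dx = *xb - *xa, dy = *yb - *ya, u1 = 0.0, u2 = 1.0;

    if (clipTest(-dx, *xa - xwmin, &u1, &u2) &&   /* left   */
        clipTest( dx, xwmax - *xa, &u1, &u2) &&   /* right  */
        clipTest(-dy, *ya - ywmin, &u1, &u2) &&   /* bottom */
        clipTest( dy, ywmax - *ya, &u1, &u2)) {   /* top    */
        *xb = *xa + u2 * dx;  *yb = *ya + u2 * dy;
        *xa = *xa + u1 * dx;  *ya = *ya + u1 * dy;
        return 1;
    }
    return 0;
}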

Example :

A line AB with end points A(–1,7) and B(11,1) is to be clipped against a rectangular window
with xmin=1,xmax=9,ymin=2, ymax=8. Find the lower and upper bound of the clipped line


Here u1 = max(0, 1/6) = 1/6 and u2 = min(1, 5/6) = 5/6, so the clipped line has endpoints
(x = 1, y = 6) and (x = 9, y = 2).

Example 2
A(–5, 3) and B(15, 9); xmin = 0, xmax = 10, ymin = 0, ymax = 10. Find the lower and upper bounds of
the clipped line. Here u1 = 1/4 and u2 = 3/4, so:

X = -5 + ¼ * 20 = -5 + 5 = 0
Y= 3+1/4 * 6 = 3+ 1.5 = 4.5
X = -5 + ¾ * 20 = -5+15 = 10
Y = 3+3/4 * 6 = 3 + 4.5 = 7.5

Thus, the clipped endpoints are (0, 4.5) and (10, 7.5).


3. POLYGON CLIPPING

An algorithm that clips a polygon is rather complex. Each edge of the polygon must be tested
against each edge of the clipping window, usually a rectangle. As a result, new edges may be
added, and existing edges may be discarded, retained, or divided. Multiple polygons may result
from clipping a single polygon. We need an organized way to deal with all of these cases.

SUTHERLAND HODGEMAN POLYGON CLIPPING

The Sutherland-Hodgman polygon clipping algorithm is based on a divide-and-conquer
strategy that solves a series of simple and identical problems which, when combined, solve the
overall problem. The simple problem is to clip a polygon against a single infinite clipping edge.
This process outputs the series of vertices that define the clipped polygon. Four clipping edges,
each defining one boundary of the clipping window, are used successively to fully clip the polygon.

Fig 2.17 Area clipping

How to clip against an edge of clipping area?


1. Both vertices are inside : Only the second vertex is added to the output list
2. First vertex is outside while second one is inside : Both the point of intersection of the
edge with the clip boundary and the second vertex are added to the output list
3. First vertex is inside while second one is outside : Only the point of intersection of the
edge with the clip boundary is added to the output list

4. Both vertices are outside : No vertices are added to the output list

Fig 2.18 Cases of Polygon clipping

Assuming vertex A has already been processed,

Case 1 — vertex B is added to the output list

Case 2 — vertex B’ is added to the output (edge AB is clipped to AB’)
Case 3 — no vertex is added (segment AB is clipped out)

Case 4 — vertices A’ and B are added to the output
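
These four cases translate directly into a routine that clips a vertex list against one window boundary; the full algorithm simply applies the same routine for the left, right, bottom and top edges in turn. The C sketch below handles the left boundary only; the Point type and the inside test are illustrative assumptions.

    typedef struct { double x, y; } Point;

    /* Clip a polygon against the boundary x = xwmin (left edge).
       'in' holds n vertices; the clipped vertices are written to 'out'.
       Returns the number of output vertices.                           */
    int clipLeft(const Point *in, int n, Point *out, double xwmin)
    {
        int count = 0;
        for (int i = 0; i < n; i++) {
            Point A = in[i];                 /* current edge runs from A ... */
            Point B = in[(i + 1) % n];       /* ... to B                     */
            int Ain = (A.x >= xwmin);
            int Bin = (B.x >= xwmin);

            if (Ain && Bin) {                        /* case 1: both inside      */
                out[count++] = B;
            } else if (Ain && !Bin) {                /* case 2: A inside, B out  */
                double t = (xwmin - A.x) / (B.x - A.x);
                out[count].x = xwmin;
                out[count].y = A.y + t * (B.y - A.y);
                count++;
            } else if (!Ain && Bin) {                /* case 4: A out, B inside  */
                double t = (xwmin - A.x) / (B.x - A.x);
                out[count].x = xwmin;
                out[count].y = A.y + t * (B.y - A.y);
                count++;
                out[count++] = B;
            }                                        /* case 3: both outside -> nothing */
        }
        return count;
    }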

Sample Polygon                After Clipping

Fig 2.19 Polygon Clipping



Fig 2.20 Example for Polygon Clipping

Drawback of Sutherland-Hodgman Algorithm:

Clipping a concave polygon can produce two CONNECTED areas, joined by extra edges along the window boundary, where two separate areas are expected

Fig 2.21 Connected polygons


4. Curve Clipping
Curve clipping involves more complex procedures than line clipping, since it requires more
processing than clipping objects with linear boundaries. Consider a window which is rectangular in
shape and a circle to be clipped against it.
If the circle is completely inside the boundary of the window, it is considered visible, so the circle is
saved. If the circle is completely outside the window, it is discarded. If the circle cuts the boundary,
it is treated as a clipping case.
The figure below illustrates circle clipping against a rectangular window. On the first pass, we can
clip the bounding rectangle of the object against the bounding rectangle of the clip region. If the
two regions overlap, we will need to solve the simultaneous line-curve equations to obtain the
clipping intersection points

Fig 2.21 Curve Clipping

Exterior Clipping
It is the opposite of the clipping discussed so far. Here the part of the picture outside the window is
considered: the picture inside the rectangular window is discarded, and the part of the picture outside
the window is saved.
Uses of Exterior Clipping:
1. It is used for properly displaying pictures which overlap each other.
2. It is used in the concept of overlapping windows.
3. It is used for designing various patterns of pictures.
4. It is used for advertising purposes.
5. It is suitable for publishing.
6. It is also used for designing and displaying a number of maps and charts.


5. TEXT CLIPPING

Various techniques are used to provide text clipping in computer graphics. The choice depends on the
method used to generate characters and on the requirements of a particular application. The three
methods for text clipping are listed below −

 All or none string clipping


 All or none character clipping
 Text clipping
The following figure shows all or none string clipping −

Fig 2.22 Text Clipping Example -1

In the all-or-none string clipping method, we either keep the entire string or reject the entire string
based on the clipping window. As shown in the above figure, STRING2 is entirely inside the clipping
window, so we keep it; STRING1 is only partially inside the window, so we reject it.
The following figure shows all or none character clipping −


Fig 2.23 Text Clipping Example -2

This clipping method is based on characters rather than the entire string. In this method, if the string is
entirely inside the clipping window, we keep it. If it is partially outside the window, then −
 We reject only the portion of the string that is outside
 If a character is on the boundary of the clipping window, we discard that entire
character and keep the rest of the string.
The following figure shows text clipping −

Fig 2.24 Text Clipping Example -3

This clipping method is also based on characters rather than the entire string. In this method, if the
string is entirely inside the clipping window, we keep it. If it is partially outside the window, then
 We reject only the portion of the string that is outside.
 If a character is on the boundary of the clipping window, we discard only the portion
of the character that is outside of the clipping window.

Three Dimensional Concepts - 3D Geometric and Modeling Transformations – Three-Dimensional


Viewing - Visible-Surface Detection Methods: Back-Face Detection - Depth-Buffer Method -Scan
Line Method - A-Buffer Method-Depth Sorting Method-BSP Tree Method-Area Subdivision
Method.

THREE DIMENSIONAL TRANSFORMATIONS

Basic Transformations:
1.Translation
2.Rotation
3.Scaling

Other Transformations:
1.Reflection
2.Shearing

1. Translation

A translation in space is described by the translation distances tx, ty and tz along the three axes. The
translation matrix realizes the equations:

x2=x1+tx
y2=y1+ty
z2=z1+tz

Fig. 3.1 Example for Translation

Matrix for Translation

A point P(x,y,z) after translation will be P’(X’,Y’,Z’) :



Example :

A point has coordinates (5, 6, 7) in the x, y, z directions. The translation distances are 3 units in the
x-direction, 3 units in the y-direction and 2 units in the z-direction. Shift the point and find the
coordinates of the new position.

Multiply co-ordinates of point with translation matrix:
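A minimal C sketch of this multiplication, using the homogeneous 4 × 4 translation matrix (ones on the diagonal and tx, ty, tz in the last column), with the distances read as tx = 3, ty = 3, tz = 2:

    #include <stdio.h>

    int main(void)
    {
        double T[4][4] = {             /* homogeneous translation matrix */
            { 1, 0, 0, 3 },            /* tx = 3 */
            { 0, 1, 0, 3 },            /* ty = 3 */
            { 0, 0, 1, 2 },            /* tz = 2 */
            { 0, 0, 0, 1 }
        };
        double P[4]    = { 5, 6, 7, 1 };   /* point (5, 6, 7) in homogeneous form */
        double Pnew[4] = { 0, 0, 0, 0 };

        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                Pnew[i] += T[i][j] * P[j];

        printf("P' = (%g, %g, %g)\n", Pnew[0], Pnew[1], Pnew[2]);  /* prints (8, 9, 9) */
        return 0;
    }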


2. Rotation

3D rotation is not the same as 2D rotation. In 3D rotation, we have to specify the angle of rotation
along with the axis of rotation. We can perform 3D rotation about X, Y, and Z axes. They are
represented in the matrix form as below :

Fig. 3. 2 Rotation representation
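
As an illustration of one of the three rotation matrices referred to in the figure, the C sketch below rotates a point about the Z axis by an angle theta; the X- and Y-axis rotations follow the same pattern with the roles of the coordinates interchanged. The function name is illustrative.

    #include <math.h>
    #include <stdio.h>

    /* Rotate point (x, y, z) about the Z axis by angle theta (radians):
          x' = x cos(theta) - y sin(theta)
          y' = x sin(theta) + y cos(theta)
          z' = z                                                          */
    void rotateZ(double theta, double x, double y, double z,
                 double *xr, double *yr, double *zr)
    {
        double c = cos(theta), s = sin(theta);
        *xr = x * c - y * s;
        *yr = x * s + y * c;
        *zr = z;
    }

    int main(void)
    {
        const double PI = 3.14159265358979323846;
        double xr, yr, zr;
        rotateZ(PI / 2.0, 1.0, 0.0, 0.0, &xr, &yr, &zr);   /* rotate (1,0,0) by 90 degrees */
        printf("(%.2f, %.2f, %.2f)\n", xr, yr, zr);        /* prints (0.00, 1.00, 0.00)    */
        return 0;
    }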


Example 1 :

Example 2 :

Example 3 :

3. Scaling

We can change the size of an object using scaling transformation. In the scaling process, you
either expand or compress the dimensions of the object. Scaling can be achieved by multiplying
the original coordinates of the object with the scaling factor to get the desired result.

In 3D scaling operation, three coordinates are used. Let us assume that the original coordinates
are (X, Y, Z), scaling factors are (Sx,Sy,Sz) respectively, and the produced coordinates are (X’,
Y’, Z’). This can be mathematically represented as shown below:

Fig. 3. 3 Scaling
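
The scaling equations X' = X · Sx, Y' = Y · Sy, Z' = Z · Sz can be written as a tiny C sketch; the same routine can be used to check the worked problem later in this section (for example, scaling B(3, 3, 6) by factors (2, 3, 3) gives (6, 9, 18)). The function name is illustrative.

    /* Scale a point about the origin by factors (sx, sy, sz) */
    void scalePoint(double *x, double *y, double *z,
                    double sx, double sy, double sz)
    {
        *x *= sx;
        *y *= sy;
        *z *= sz;
    }

    /* Usage: double x = 3, y = 3, z = 6;
              scalePoint(&x, &y, &z, 2, 3, 3);   -> (6, 9, 18) */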

Reflection

A transformation that gives the mirror image of the object.

We can mirror about a plane by using a scaling factor of -1 along the axis that
is normal to that plane. Notice the matrix shown below. It mirrors
about the xy-plane, and changes the coordinates from a right-hand system to
a left-hand system.
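
A minimal sketch of this reflection about the xy-plane: scaling by -1 along the z axis negates the z coordinate and leaves x and y unchanged. The matrix in the comment is the standard homogeneous form this corresponds to; the function name is illustrative.

    /* Homogeneous matrix for reflection about the xy-plane (Sz = -1):
           | 1  0  0  0 |
           | 0  1  0  0 |
           | 0  0 -1  0 |
           | 0  0  0  1 |                                               */
    void reflectXY(double *x, double *y, double *z)
    {
        (void)x; (void)y;      /* x and y are unchanged */
        *z = -(*z);
    }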

Shear

A transformation that slants the shape of an object is called the shear transformation. Like in 2D
shear, we can shear an object along the X-axis, Y-axis, or Z-axis in 3D.

Fig. 3. 4 Shearing

As shown in the above figure, there is a coordinate P. You can shear it to get a new coordinate P',
which can be represented in 3D matrix form as below
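
A minimal sketch of a shear along the X axis, assuming shear parameters shy and shz that displace x in proportion to y and z while leaving y and z fixed (shears along the other axes are analogous). The parameter and function names are illustrative.

    /* Shear a point along the X axis.  Equivalent homogeneous matrix:
           | 1  shy shz 0 |
           | 0   1   0  0 |
           | 0   0   1  0 |
           | 0   0   0  1 |                                             */
    void shearX(double *x, double y, double z, double shy, double shz)
    {
        *x = *x + shy * y + shz * z;
    }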

Problems

i) Given a 3D object with coordinate points A(0, 3, 3), B(3, 3, 6), C(3, 0, 1), D(0, 0, 0). Apply the
scaling parameter 2 towards X axis, 3 towards Y axis and 3 towards Z axis and obtain the new
coordinates of the object.

Solution-

Given :
Old coordinates of the object = A (0, 3, 3), B(3, 3, 6), C(3, 0, 1), D(0, 0, 0)
Scaling factor along X axis = 2
Scaling factor along Y axis = 3
Scaling factor along Z axis = 3

Coordinates of the object = A (0, 3, 3), B(3, 3, 6), C(3, 0, 1), D(0, 0, 0)

For Coordinates A(0, 3, 3)


Let the new coordinates of A after scaling = (Xnew, Ynew, Znew).

Applying the scaling equations, we have-

Xnew = Xold x Sx = 0 x 2 = 0
Ynew = Yold x Sy = 3 x 3 = 9
Znew = Zold x Sz = 3 x 3 = 9

Thus, New coordinates of corner A after scaling = (0, 9, 9).


For Coordinates B(3, 3, 6)

Let the new coordinates of B after scaling = (Xnew, Ynew, Znew).

Applying the scaling equations, we have-


Xnew = Xold x Sx = 3 x 2 = 6
Ynew = Yold x Sy = 3 x 3 = 9
Znew = Zold x Sz = 6 x 3 = 18

Thus, New coordinates of corner B after scaling = (6, 9, 18).


For Coordinates C(3, 0, 1) :

Let the new coordinates of C after scaling = (Xnew, Ynew, Znew).

Applying the scaling equations, we have-


Xnew = Xold x Sx = 3 x 2 = 6
Ynew = Yold x Sy = 0 x 3 = 0
Znew = Zold x Sz = 1 x 3 = 3

Thus, New coordinates of corner C after scaling = (6, 0, 3).

For Coordinates D(0, 0, 0)

Let the new coordinates of D after scaling = (Xnew, Ynew, Znew).

Applying the scaling equations, we have-


Xnew = Xold x Sx = 0 x 2 = 0
Ynew = Yold x Sy = 0 x 3 = 0
Znew = Zold x Sz = 0 x 3 = 0

Thus, New coordinates of corner D after scaling = (0, 0, 0).

THREE DIMENSIONAL VIEWING


 Parallel Projection
 Perspective Projection
 Depth Cueing
 Visible Line and Surface Identification
 Surface Rendering
 Exploded and Cutaway Views
 Three-Dimensional and Stereoscopic Views
Parallel Projection
In this method a view plane is used and the z co-ordinate is discarded. The 3D view is constructed by
extending parallel lines from each vertex of the object until they intersect the view plane, and then
connecting the projected vertices by line segments which correspond to connections on the original object.

Fig. 3. 5 Parallel Projection

Perspective Projection
Here the lines of projection are not parallel. Instead, they all converge at a single point called
the 'center of projection' or 'projection reference point'. The object positions are transformed
to the view plane along these converging projection lines. In this method, objects farther from
the viewing position appear smaller (a small sketch of this perspective division follows the
comparison below).
• The center of projection in a perspective projection is a finite point, whereas in a parallel
projection it lies at infinity.
• Perspective projection forms a realistic picture of an object, whereas parallel projection does
not form a realistic view of the object.
• Perspective projection cannot preserve the relative proportions of an object, whereas
parallel projection can preserve the relative proportions of an object.
• Perspective projection represents the object in a three-dimensional way, whereas parallel
projection represents it more like a view through a telescope.
• The lines of perspective projection are not parallel, whereas the lines of parallel projection
are parallel.

• Perspective projection cannot give the accurate (true-size) view of an object, whereas parallel
projection can give the accurate view of the object
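
As a concrete illustration of the converging projection lines, the sketch below projects a point onto a view plane at z = d with the projection reference point at the origin, using the standard perspective division x_p = x·d/z, y_p = y·d/z. This particular formula is an assumption of the sketch, not taken from the text above; the function name is illustrative.

    /* Perspective projection onto the plane z = d, projection reference
       point at the origin: points with larger z (farther away) are
       scaled down more, so they appear smaller.                        */
    void perspectiveProject(double x, double y, double z, double d,
                            double *xp, double *yp)
    {
        *xp = x * d / z;
        *yp = y * d / z;
    }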

Fig. 3.6 Perspective Projection

Depth Cueing
Depth information is added. The depth of an object can be represented by the intensity of the
image. The parts of the objects closest to the viewing position are displayed with the highest
intensities. Objects farther away are displayed with decreasing intensities.

Fig. 3.7 Depth Cueing

Visible Line and Surface Identification


Visible lines are displayed in a different color. Invisible lines are either displayed as dashed lines or
not displayed at all. Removing invisible lines also removes information about the back side of the
object. Surface rendering can be applied to the visible surfaces so that hidden surfaces become
obscured.

Fig. 3.7 Visible surface identification

Surface Rendering
The surface intensity of objects is set according to the lighting conditions in the scene and
the assigned surface characteristics. This method is usually combined with the previous
method to attain a degree of realism.

Fig. 3.8 Surface Rendering

Exploded and Cutaway Views


To show the internal details, we can define the object in a hierarchical structure

Fig. 3.7 Exploded View Fig. 3.8 Cutaway View


VISIBLE SURFACE DETECTION METHODS

Introduction

 To generate realistic graphics displays.
 To determine what is visible within a scene from a chosen viewing position.
 For 3D worlds, this is known as visible surface detection or hidden surface elimination.
 Many methods and algorithms are available.
 Some require more memory space, some require more processing time.
 Which method suits which application depends on the objects to be displayed, the
available equipment, the complexity of the scene, etc.
Basic classification

There are two basic classifications, based on whether the object itself or its projected image is
processed :

 Object-space Methods :

This method compares objects and parts of objects to each other within the
scene definition to determine which surfaces are visible
 Image-space Methods :
In this method visibility is determined point by point at each pixel position
on the projection plane.
 Image-space methods are the most commonly used.

1. Back-face detection

This method adopts the object-space approach. The algorithm is used to find the back faces of a
polyhedron.
Consider a polygon surface with plane parameters A, B, C and D.
A point (x, y, z) is "inside" the polygon surface if
Ax + By + Cz + D < 0
When an inside point is along the line of sight to the surface, the polygon must be a back face.
A fast and simple object-space method for identifying the back faces of a polyhedron is based
on this "inside-outside" test.

Fig. 3.9 Inside-outside Test

If the viewing direction is along the negative z-axis, we can simply say that when the z component of
the polygon's normal is less than zero, the polygon is a back face.

Fig. 3.10 Back face Detection

Fig. 3.11 Back face Detection from a given view plane



If we take V as the vector in the viewing direction from the eye and N as the normal
vector on the polygon's surface, then we can state the condition as :

If V . N > 0, then that face is a back face.

 It eliminates about half of the polygon surfaces in a scene from further visibility tests.
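
Both tests above can be sketched in a few lines of C: the dot-product test V · N > 0 with N = (A, B, C), and the special case of viewing along the negative z axis, where only the sign of C matters. The struct and function names are illustrative.

    typedef struct { double x, y, z; } Vec3;

    /* Back-face test using the viewing direction V and the polygon normal N:
       the face is a back face when V . N > 0.                                */
    int isBackFace(Vec3 V, Vec3 N)
    {
        double dot = V.x * N.x + V.y * N.y + V.z * N.z;
        return dot > 0.0;
    }

    /* Special case: viewing along the negative z axis, the polygon with plane
       parameters A, B, C, D is a back face when C < 0 (a face with C == 0 is
       viewed edge-on).                                                        */
    int isBackFaceAlongZ(double C)
    {
        return C < 0.0;
    }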

2. Depth buffer method or z-buffer method

 It adopts the image-space approach.


 It compares surface depths at each pixel position throughout the scene on the projection
plane.
 It is usually applied to scenes containing polygons.
 It is a very fast method.

Fig. 3.11 Depth Buffer

The depth-buffer algorithm proceeds by starting at the top vertex of the polygon. Then we
recursively calculate the x-coordinate values down a left edge of the polygon. The x value for the
beginning position on each scan line can be calculated incrementally from the x value on the
previous scan line.


Depth buffer algorithm :

Step-1 : Set the buffer values


depthbuffer(x, y) = 0
framebuffer(x, y) = background color

Step-2 : Process each polygon, one at a time


For each projected (x, y) pixel position of a polygon, calculate the depth z.
If z > depthbuffer(x, y) :
    compute the surface color,
    set depthbuffer(x, y) = z,
    framebuffer(x, y) = surfacecolor(x, y)
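
A minimal C sketch of this two-step procedure for a single polygon. It assumes a depth(x, y) helper that returns the polygon's z value at a pixel (with larger z meaning closer to the viewer, matching the z > depthbuffer test above) and a surfaceColor(x, y) helper; the buffer sizes and helper names are illustrative.

    #define WIDTH  640
    #define HEIGHT 480

    double depthbuffer[HEIGHT][WIDTH];
    int    framebuffer[HEIGHT][WIDTH];

    /* Step 1 : initialise both buffers */
    void initBuffers(int background)
    {
        for (int y = 0; y < HEIGHT; y++)
            for (int x = 0; x < WIDTH; x++) {
                depthbuffer[y][x] = 0.0;          /* farthest possible depth */
                framebuffer[y][x] = background;
            }
    }

    /* Step 2 : process one polygon over its projected pixel area;
       depth() and surfaceColor() are assumed helpers.             */
    void processPolygon(int xmin, int xmax, int ymin, int ymax,
                        double (*depth)(int, int), int (*surfaceColor)(int, int))
    {
        for (int y = ymin; y <= ymax; y++)
            for (int x = xmin; x <= xmax; x++) {
                double z = depth(x, y);
                if (z > depthbuffer[y][x]) {      /* closer than what is stored */
                    depthbuffer[y][x] = z;
                    framebuffer[y][x] = surfaceColor(x, y);
                }
            }
    }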

Advantages:

• It is easy to implement.
• It reduces the speed problem if implemented in hardware.
• It processes one object at a time

Drawbacks
• It requires large memory.
• It is a time-consuming process.

3. A-buffer Algorithm

The A-buffer (antialiased, area-averaged, accumulation buffer) is an extension of the
depth-buffer method.

Drawback of the depth-buffer algorithm :


It can find only one visible surface at each pixel position.
It cannot accumulate intensity values for more than one surface.

Each buffer position can reference a linked list of surfaces. Each position has two fields :

Depth field – stores a positive or negative real number


Intensity field- stores surface intensity information or a pointer value.

Fig. 3.12 A –Buffer

If depth >= 0, the number stored at that position is the depth of a single surface overlapping
the corresponding pixel area. The intensity field then stores the RGB components of the
surface color at that point and the percent of pixel coverage.
If depth < 0, it indicates multiple-surface contributions to the pixel intensity. The intensity
field then stores a pointer to a linked list of surface data. The surface buffer in the A-buffer
includes :
• RGB intensity components
• Opacity Parameter
• Depth
• Percent of area coverage
• Surface identifier
The algorithm proceeds just like the depth buffer algorithm. The depth and opacity values
are used to determine the final color of a pixel.
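
A minimal C sketch of the two-field buffer position described above: a non-negative depth means a single surface, while a negative depth means the second field points to a linked list of surface records. The struct and field names are illustrative.

    /* One surface record in the A-buffer's linked list */
    typedef struct SurfaceData {
        float rgb[3];               /* RGB intensity components   */
        float opacity;              /* opacity parameter          */
        float depth;                /* depth of this surface      */
        float coverage;             /* percent of area coverage   */
        int   surfaceId;            /* surface identifier         */
        struct SurfaceData *next;   /* next contributing surface  */
    } SurfaceData;

    /* One A-buffer pixel position: depth >= 0 -> 'single' holds one surface's
       data; depth < 0 -> 'surfaces' points to a linked list of contributions. */
    typedef struct {
        float depth;
        union {
            SurfaceData  single;    /* single-surface case        */
            SurfaceData *surfaces;  /* multiple-surface case      */
        } field;
    } ABufferPixel;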

4. Scan-line method

It is an image-space method to identify visible surfaces. This method keeps depth information
for only a single scan line. Since only one scan line of depth values is kept at a time, we must
group and process all polygons intersecting a given scan line together before processing the
next scan line.
Two important tables, edge table and polygon table, are maintained for this.
• The Edge Table − It contains coordinate endpoints of each line in the scene, the
inverse slope of each line, and pointers into the polygon table to connect edges to
surfaces.
• The Polygon Table − It contains the plane coefficients, surface material properties,
other surface data, and possibly pointers back into the edge table.
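
A minimal C sketch of the two tables, keeping only the fields mentioned above plus the in/out flag described in the scanning step below; the struct and field names are illustrative.

    /* Edge table entry : one edge of a polygon in the scene */
    typedef struct {
        double x1, y1, x2, y2;      /* coordinate endpoints of the edge        */
        double inverseSlope;        /* 1/m, used for scan-line stepping        */
        int    polygonId;           /* pointer (index) into the polygon table  */
    } EdgeEntry;

    /* Polygon table entry : one polygon surface in the scene */
    typedef struct {
        double A, B, C, D;          /* plane coefficients                       */
        int    material;            /* surface material / other surface data    */
        int    firstEdge;           /* pointer (index) back into the edge table */
        int    flag;                /* inside/outside flag used while scanning  */
    } PolygonEntry;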

Fig. 3.13 Vertex Table, Edge Table and Polygon-Surface Table



Fig. 3.14 Scan Line conversion method

To search for surfaces crossing a given scan line, an active list of edges is formed. The active list
stores only those edges that cross the scan line, in order of increasing x. Also, a flag is set for
each surface to indicate whether a position along a scan line is inside or outside the
surface. Pixel positions across each scan line are processed from left to right. At the left
intersection with a surface, the surface flag is turned on, and at the right intersection it is turned off.
Depth calculations are needed only when multiple surfaces have their flags turned
on at a certain scan-line position.

5. Depth- sorting or painter’s algorithm

 Sort the objects by distance from the viewer.


 Draw objects in order from farthest to nearest.
 Nearer objects will “overwrite” farther ones.

 If two objects DO overlap :
We need to find a plane to split one polygon so that each new polygon is entirely in
front of or entirely behind the other
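
A minimal C sketch of the basic ordering step: sort the polygons by distance from the viewer and draw them from farthest to nearest, so nearer polygons overwrite farther ones (the overlap-splitting case from the last bullet is omitted). The Polygon type and the drawPolygon callback are illustrative.

    #include <stdlib.h>

    typedef struct {
        double distance;            /* distance of the polygon from the viewer */
        int    id;
    } Polygon;

    /* Sort so that the farthest polygon comes first */
    static int byDistanceDescending(const void *a, const void *b)
    {
        double da = ((const Polygon *)a)->distance;
        double db = ((const Polygon *)b)->distance;
        return (da < db) - (da > db);
    }

    void paintersAlgorithm(Polygon *polys, int n,
                           void (*drawPolygon)(const Polygon *))
    {
        qsort(polys, n, sizeof(Polygon), byDistanceDescending);
        for (int i = 0; i < n; i++)         /* farthest first, nearest last */
            drawPolygon(&polys[i]);
    }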

Fig. 3.15 Painter’s algorithm

Fig. 3.16 Addition of surfaces one by one and painting done using painter’s algorithm

6. BSP Tree Method

• A BSP (Binary Space-Partitioning) tree is formed by first choosing a triangle from


the set of all triangles in the scene.
• The plane that contains this triangle is called P. Classify all other triangles into two
groups: One group in front of P, and the other group behind P. All triangles that are
intersected by P are split into multiple smaller triangles, each of which is either in
front of P or behind P
• Within each group, recursively pick another triangle and partition all the triangles in
this group into two sub-groups.
Do this until there is only one triangle in each group.
The result is a tree
BSP Algorithm

Procedure DisplayBSP(tree : BSP_tree)
Begin
    If tree is not empty then
        If viewer is in front of the root then
        Begin
            DisplayBSP(tree.back_child)
            displayPolygon(tree.root)
            DisplayBSP(tree.front_child)
        End
        Else
        Begin
            DisplayBSP(tree.front_child)
            displayPolygon(tree.root)
            DisplayBSP(tree.back_child)
        End
End

This algorithm is suitable for a static group of 3D polygons that is to be viewed from a number of
viewpoints. It is based on the observation that hidden surface elimination of a polygon is
guaranteed if all polygons on the other side of it from the viewer are painted first, then the
polygon itself, and then all polygons on the same side of it as the viewer.

Fig. 3.17 BSP Tree representation



Fig. 3.18 Example for BSP Tree representation

• From the above figure, first take A as a root.


• Make a list of all the nodes shown in figure (a).
• Put all the nodes that are in front of root A to the left side of node A and put all those
nodes that are behind the root A to the right side
• Process all the front nodes first and then the nodes at the back.
• First process the node B.
• As there is nothing in front of the node B, we have put NIL.
However, we have node C at the back of node B, so node C will go to the right side of
node B. Repeat the same process for the node D
