Computer Graphics
Computer Graphics is the creation of pictures with the help of a computer. The end product of computer graphics is a picture; it may be a business graph, a drawing, or an engineering diagram.
In computer graphics, two- or three-dimensional pictures can be created that are used for research. Many hardware devices and algorithms have been developed over time to improve the speed of picture generation. Computer graphics includes the creation and storage of models and images of objects. These models are used in various fields such as engineering and mathematics.
Today computer graphics is entirely different from what it was earlier: it is interactive, and the user can control the structure of an object through various input devices.
Suppose a shoe manufacturing company wants to show its shoe sales over five years. A vast amount of information must be stored, so a lot of time and memory will be needed, and the raw numbers will be tough for a common person to understand. In this situation graphics is a better alternative. Graphics tools include charts and graphs. Using graphs, data can be represented in pictorial form, and a picture can be understood easily with a single look.
Interactive computer graphics works on the concept of two-way communication between the computer and the user. The computer receives signals from the input device and modifies the picture accordingly; the picture changes quickly when we apply a command.
Applications of Computer Graphics
1. Education and Training: Computer-generated models of physical, financial, and
economic systems are often used as educational aids. Models of physical systems,
physiological systems, population trends, or equipment can help trainees
understand the operation of the system.
For some training applications, special systems are designed, for example the Flight
Simulator.
Flight Simulator: It helps in giving training to the pilots of airplanes. These pilots spend
much of their training not in a real aircraft but on the ground at the controls of a Flight
Simulator.
Advantages:
1. Fuel Saving
2. Safety
3. Ability to familiarize trainees with a large number of the world's airports.
2. Use in Biology: Molecular biologists can display pictures of molecules and gain insight
into their structure with the help of computer graphics.
3. Presentation Graphics: Examples of presentation graphics are bar charts, line graphs,
pie charts, and other displays showing relationships between multiple parameters.
Presentation Graphics is commonly used to summarize
o Financial Reports
o Statistical Reports
o Mathematical Reports
o Scientific Reports
o Economic Data for research reports
o Managerial Reports
o Consumer Information Bulletins
o And other types of reports
4. Computer Art: Computer graphics is also used in the field of commercial art. It is
used to generate television and advertising commercials.
5. Entertainment: Computer graphics is now commonly used in making motion pictures,
music videos, and television shows.
6. Printing Technology: Computer graphics is used in printing technology and textile
design.
Non-interactive graphics involves only one-way communication between the computer and
the user: the user can see the produced image but cannot make any change to it.
Interactive computer graphics requires two-way communication between the computer and
the user. A user can see the image and make changes to it by sending commands with an
input device.
Advantages:
1. Higher Quality
2. More precise results or products
3. Greater Productivity
4. Lower analysis and design cost
5. Significantly enhances our ability to understand data and to perceive trends.
Working of Interactive Computer Graphics:
The modern graphics display is very simple in construction. It consists of three components:
Frame Buffer: A digital frame buffer is a large, contiguous piece of computer memory used
to hold or map the image displayed on the screen.
o At a minimum, there is 1 memory bit for each pixel in the raster. This amount of
memory is called a bit plane.
o A 1024 x 1024 raster requires 2^20 bits (2^10 = 1024; 2^20 = 1024 x 1024), i.e.,
1,048,576 memory bits in a single bit plane.
o The picture is built up in the frame buffer one bit at a time.
o Because a memory bit has only two states (binary 0 or 1), a single bit plane yields a black-
and-white (monochrome) display.
o The frame buffer is a digital device, while the raster CRT is an analog device.
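To make the bit-plane arithmetic concrete, here is a small C sketch; it is our own illustration, not tied to any particular hardware, and the names bitplane, set_pixel, and get_pixel are hypothetical. It models one 1024 x 1024 bit plane as ordinary memory, so exactly 2^20 bits = 131,072 bytes back the monochrome display:

#include <stdio.h>
#include <string.h>

#define WIDTH  1024
#define HEIGHT 1024

/* One bit plane: 1024 x 1024 pixels = 1,048,576 bits = 131,072 bytes. */
static unsigned char bitplane[WIDTH * HEIGHT / 8];

/* Set or clear the single bit that backs pixel (x, y). */
void set_pixel(int x, int y, int on)
{
    long bit = (long)y * WIDTH + x;   /* linear bit index into the plane */
    if (on)
        bitplane[bit / 8] |= (unsigned char)(1 << (bit % 8));
    else
        bitplane[bit / 8] &= (unsigned char)~(1 << (bit % 8));
}

int get_pixel(int x, int y)
{
    long bit = (long)y * WIDTH + x;
    return (bitplane[bit / 8] >> (bit % 8)) & 1;
}

int main(void)
{
    memset(bitplane, 0, sizeof bitplane);
    set_pixel(10, 20, 1);
    printf("pixel(10,20) = %d, plane size = %lu bytes\n",
           get_pixel(10, 20), (unsigned long)sizeof bitplane);
    return 0;
}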
Properties of Video Monitor:
1. Persistence: It is the duration for which the phosphor continues to emit light after the
electron beam is removed.
2. Resolution: It describes the number of pixels that can be used to display the image.
3. Aspect Ratio: It is the ratio of the width of an image to its height, measured in units of
length or number of pixels.
Aspect Ratio = Width / Height
Display Processor:
It is an interpreter or piece of hardware that converts display-processor code into pictures.
The display processor has four main parts: the display file memory, the display controller,
the display generator, and the display console.
Display Controller:
1. It handles interrupts
2. It maintains timing
3. It is used for the interpretation of instructions
Display Generator: It is used for the generation of characters and vectors.
Display Console: It contains CRT, Light Pen, and Keyboard and deflection system.
The raster-scan system is a combination of processing units. It consists of the central
processing unit (CPU) and a special processor called a display controller, which controls
the operation of the display device. It is also called a video controller.
Working: The video controller in the output circuitry generates the horizontal and vertical
drive signals so that the monitor can sweep its beam across the screen during raster scans.
As the figure shows, two registers (an X register and a Y register) are used to store the
coordinates of the screen pixels. Assume that the y values of adjacent scan lines increase
by 1 in the upward direction, from 0 at the bottom of the screen to ymax at the top, and
that along each scan line the screen pixel positions (x values) increase by 1 from 0 at the
leftmost position to xmax at the rightmost position.
The origin is at the lowest left corner of the screen as in a standard Cartesian coordinate
system.
At the start of a Refresh Cycle:
The X register is set to 0 and the Y register is set to ymax. This (x, y) address is translated
into a memory address of the frame buffer where the color value for this pixel position is stored.
The controller receives this color value (a binary number) from the frame buffer, breaks it up
into three parts and sends each part to a separate Digital-to-Analog Converter (DAC).
These voltages, in turn, control the intensities of the three electron beams that are focused
at the (x, y) screen position by the horizontal and vertical drive signals.
This process is repeated for each pixel along the top scan line, each time incrementing the X
register by 1.
As pixels on the first scan line are generated, the X register is incremented up through xmax.
Then x register is reset to 0, and y register is decremented by 1 to access the next scan
line.
Pixels along each scan line are then processed, and the procedure is repeated for each
successive scan line until pixels on the last scan line (y = 0) are generated.
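The refresh cycle just described can be summarized with a schematic C sketch. It is only an outline under the stated register conventions; frame_buffer_color() and emit_to_dacs() are hypothetical stand-ins for the frame-buffer fetch and the three DACs:

#include <stdio.h>

#define XMAX 639
#define YMAX 479

/* Placeholder fetch: a real controller reads the frame buffer here. */
unsigned frame_buffer_color(int x, int y) { (void)x; (void)y; return 0; }
/* Placeholder output: a real controller splits rgb into three DACs. */
void emit_to_dacs(unsigned rgb) { (void)rgb; }

void refresh_cycle(void)
{
    int x, y;
    for (y = YMAX; y >= 0; y--) {          /* Y register: ymax down to 0 */
        for (x = 0; x <= XMAX; x++) {      /* X register: 0 up to xmax  */
            unsigned rgb = frame_buffer_color(x, y);
            emit_to_dacs(rgb);             /* drive the electron beams  */
        }
    }
}

int main(void) { refresh_cycle(); return 0; }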
For a display system employing a color look-up table, the frame-buffer value is not directly
used to control the CRT beam intensity.
It is used as an index to find the three pixel-color values from the look-up table. This lookup
operation is done for each pixel on every display cycle.
As the time available to display or refresh a single pixel on the screen is very short, accessing
the frame buffer every time to read each pixel's intensity value would take more time than
is allowed:
Multiple adjacent pixel values are fetched from the frame buffer in a single access and stored
in a register.
After every allowable time gap, one pixel value is shifted out of the register to control
the beam intensity for that pixel.
The procedure is repeated with the next block of pixels, and so on; thus the whole group of
pixels is processed.
Display Devices:
The most commonly used display device is the video monitor. The operation of most video
monitors is based on the CRT (Cathode Ray Tube). The following display devices are used:
Once the electrons hit the phosphor coating, it lights up, and the lit spots are projected on the
screen. The color you see on the screen is produced by a blend of red, blue, and green light.
Components of CRT:
Main Components of CRT are:
1. Electron Gun: It is the source of the electron beam; a heated metal cathode emits the electrons.
2. Control Electrode: It is used to turn the electron beam on and off and to regulate its intensity.
3. Focusing System: It is used to concentrate the electrons into a narrow beam so that a clear
picture is produced.
4. Deflection Yoke: It is used to control the direction of the electron beam. It creates an
electric or magnetic field which bends the electron beam as it passes through the area.
In a conventional CRT, the yoke is linked to a sweep or scan generator; the deflection yoke
connected to the sweep generator creates a fluctuating electric or magnetic potential.
5. Phosphor-coated screen: The inside front surface of every CRT is coated with
phosphors. Phosphors glow when a high-energy electron beam hits them. Phosphorescence
is the term used to characterize the light given off by a phosphor after it has been exposed
to an electron beam.
Advantages:
1. A CRT has the electron beam directed only to the parts of the screen where an image is to
be drawn.
2. Produce smooth line drawings.
3. High Resolution
Disadvantages:
1. Random-scan monitors cannot display realistic shaded scenes.
1. Interlaced Scanning
2. Non-Interlaced Scanning
In non-interlaced (progressive) scanning, each horizontal line of the screen is traced in order
from top to bottom; at a refresh rate of 30 frames per second this produces visible flicker,
and fading of the displayed object may occur. This problem is reduced by interlaced
scanning: first the odd-numbered lines are traced by the electron beam, then in the next
cycle the even-numbered lines are traced, giving an effective rate of 60 fields per second.
Advantages:
1. Realistic image
2. Million Different colors to be generated
3. Shadow Scenes are possible.
Disadvantages:
1. Low Resolution
2. Expensive
5. In random scan, the refresh rate depends on the resolution; in raster scan, the refresh
rate does not depend on the picture.
7. In random scan, beam-penetration technology is used for color; in raster scan,
shadow-mask technology is used.
Advantages:
1. Inexpensive
Disadvantages:
1. Only four colors are possible
2. The quality of the pictures is not as good as with other methods.
4. Shadow-Mask Method:
o Shadow Mask Method is commonly used in Raster-Scan System because they
produce a much wider range of colors than the beam-penetration method.
o It is used in the majority of color TV sets and monitors.
Construction: A shadow mask CRT has 3 phosphor color dots at each pixel position.
This type of CRT has 3 electron guns, one for each color dot and a shadow mask grid just
behind the phosphor coated screen.
Shadow mask grid is pierced with small round holes in a triangular pattern.
Figure shows the delta-delta shadow mask method commonly used in color CRT system.
Working: Triad arrangement of red, green, and blue guns.
The deflection system of the CRT operates on all 3 electron beams simultaneously; the 3
electron beams are deflected and focused as a group onto the shadow mask, which contains
a sequence of holes aligned with the phosphor- dot patterns.
When the three beams pass through a hole in the shadow mask, they activate a dot
triangle, which appears as a small color spot on the screen.
The phosphor dots in the triangles are organized so that each electron beam can activate
only its corresponding color dot when it passes through the shadow mask.
In another configuration, known as the in-line arrangement, the three electron guns and the
corresponding red-green-blue color dots on the screen are aligned along one scan line rather
than in a triangular pattern.
This in-line arrangement of electron guns is easier to keep in alignment and is commonly
used in high-resolution color CRTs.
Advantage:
1. Realistic image
2. Million different colors to be generated
3. Shadow scenes are possible
Disadvantage:
1. Relatively expensive compared with the monochrome CRT.
2. Relatively poor resolution
3. Convergence Problem
Disadvantage:
1. It is not possible to erase the selected part of a picture.
2. It is not suitable for dynamic graphics applications.
3. If a part of the picture is to be modified, the whole picture must be redrawn, which consumes time.
Example: small TV monitors, calculators, pocket video games, laptop computers, and
advertisement boards in elevators.
1. Emissive Display: The emissive displays are devices that convert electrical energy into
light. Examples are Plasma Panel, thin film electroluminescent display and LED (Light
Emitting Diodes).
2. Non-Emissive Display: Non-emissive displays use optical effects to convert
sunlight or light from some other source into graphics patterns. An example is the LCD
(Liquid Crystal Display).
1. Cathode: It consists of fine wires. It delivers a negative voltage to the gas cells. The
voltage is supplied along the negative axis.
2. Anode: It also consists of fine wires. It delivers a positive voltage. The voltage is supplied
along the positive axis.
3. Fluorescent cells: These consist of small pockets of gas (neon); when voltage is applied,
the gas emits light.
4. Glass Plates: These plates act as capacitors. Once voltage is applied, a cell glows
continuously.
The gas glows when there is a significant voltage difference between the horizontal and
vertical wires. The voltage level is kept between 90 and 120 volts. The plasma panel does
not require refreshing. Erasing is done by reducing the voltage to 90 volts.
Each cell of the plasma panel has two states, so a cell is said to be stable. A displayable point
in the plasma panel is made by the crossing of a horizontal and a vertical grid wire. The
resolution of a plasma panel can be up to 512 x 512 pixels.
Disadvantage:
1. Poor Resolution
2. The wiring requirement between the anode and the cathode is complex.
3. Its addressing is also complex.
An LCD uses liquid-crystal material between two glass plates; the plates are at right angles
to each other, and the liquid fills the space between them. One glass plate contains rows of
conductors arranged in the vertical direction; the other contains rows of conductors
arranged in the horizontal direction. A pixel position is determined by the intersection of a
vertical and a horizontal conductor; this position is an active part of the screen.
Disadvantage:
1. LCDs are temperature-dependent (0-70°C)
2. LCDs do not emit light; as a result, the image has very little contrast.
3. LCDs have no color capability.
4. The resolution is not as good as that of a CRT.
Look-Up Table:
Image representation is essentially the description of pixel colors. There are three primary
colors: R (red), G (green), and B (blue). Each primary color can take on a range of intensity
levels, and mixing them produces a variety of colors. Using direct coding, we may allocate
3 bits for each pixel, one bit for each primary color. The 3-bit representation allows each
primary to vary independently between two intensity levels: 0 (off) or 1 (on). Hence each
pixel can take on one of eight colors:
R G B   Color
0 0 0   Black
0 0 1   Blue
0 1 0   Green
0 1 1   Cyan
1 0 0   Red
1 0 1   Magenta
1 1 0   Yellow
1 1 1   White
A widely accepted industry standard uses 3 bytes, or 24 bits, per pixel, with one byte for
each primary color. This way, we allow each primary color to have 256 different intensity
levels. Thus a pixel can take on a color from 256 x 256 x 256, or 16.7 million, possible
choices. The 24-bit format is commonly referred to as the true color representation.
The look-up table approach reduces the storage requirement. In this approach pixel values do
not code colors directly; instead, they are addresses or indices into a table of color
values. The color of a particular pixel is determined by the color value in the table entry that
the value of the pixel references. The figure shows a look-up table with 256 entries, with
addresses 0 through 255. Each entry contains a 24-bit RGB color value, and pixel values
are now 1 byte. The color of a pixel whose value is i, where 0 ≤ i ≤ 255, is determined by the
color value in the table entry whose address is i. This reduces the storage requirement of a
1000 x 1000 image to one million bytes, plus 768 bytes for the color values in the look-up
table.
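A minimal C sketch of this look-up-table idea is shown below (illustrative only; the names lut and image are our own). It contrasts the storage figures quoted above and shows how a 1-byte pixel value indexes a 24-bit color entry:

#include <stdio.h>

/* 256-entry color look-up table; each entry packs a 24-bit RGB value. */
static unsigned long lut[256];

int main(void)
{
    unsigned char image[4] = { 0, 255, 17, 17 };   /* 1-byte pixel values */
    int i;
    lut[17] = 0xFF00FFUL;                          /* entry 17 = magenta  */

    /* Direct 24-bit coding of a 1000 x 1000 image: 3,000,000 bytes.
       LUT coding: 1,000,000 bytes of indices + 256 * 3 = 768 table bytes. */
    printf("direct: %d bytes, LUT: %d + 768 bytes\n", 1000 * 1000 * 3, 1000 * 1000);

    for (i = 0; i < 4; i++)                        /* pixel value -> color */
        printf("pixel value %d -> color 0x%06lX\n", image[i], lut[image[i]]);
    return 0;
}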
The circuitry of the video display device of the computer is capable of converting binary
values (0, 1) into pixel-on and pixel-off information: 0 is represented by pixel off, and 1 is
represented by pixel on. Using this ability, a graphics computer represents pictures as
patterns of discrete dots.
Any graphics model can be reproduced with a dense matrix of dots or points. Most human
beings think of graphics objects as points, lines, circles, and ellipses. For generating such
graphical objects, many algorithms have been developed.
The closer the dots or pixels are, the better the quality of the picture: the closer the dots,
the crisper the picture. A picture will not appear jagged and unclear if its pixels are
closely spaced. So the quality of the picture is directly proportional to the density of pixels
on the screen.
Pixels are also defined as the smallest addressable unit or element of the screen. Each pixel
can be assigned an address as shown in fig:
Different graphics objects can be generated by setting the different intensity of pixels and
different colors of pixels. Each pixel has some co-ordinate value. The coordinate is
represented using row and column.
P (5, 5) is used to represent the pixel in the 5th row and the 5th column. Each pixel has an
intensity value which is represented in the memory of the computer, called the frame buffer.
The frame buffer is also called the refresh buffer. This memory is a storage area for storing
pixel values, using which pictures are displayed. It is also called digital memory. Inside the
buffer, the image is stored as a pattern of binary digits, 0 or 1, so there is an array of 0s and
1s used to represent the picture. In black-and-white monitors, black pixels are represented
using 1s and white pixels using 0s. In a system with one bit per pixel, the frame buffer is
called a bitmap; in systems with multiple bits per pixel, it is called a pixmap.
Ideally, lines would be generated parallel or at 45° to the x and y axes. Other lines cause a
problem: a line segment that starts and finishes at addressable points may pass through no
other addressable points in between.
6. Lines should terminate accurately: Unless lines are plotted accurately, they may
terminate at the wrong place.
7. Lines should have constant density: Line density is proportional to the no. of
dots displayed divided by the length of the line.
Example: A line with starting point as (0, 0) and ending point (6, 18) is given. Calculate
value of intermediate points and slope of line.
x1=0
y1=0
x2=6
y2=18
We know the equation of a line is
y = m x + b
The slope is m = (y2 - y1)/(x2 - x1) = (18 - 0)/(6 - 0) = 3, so
y = 3x + b ..............equation (1)
Put the value of x from the initial point (0, 0) in equation (1), i.e., x = 0, y = 0:
0 = 3 x 0 + b
0 = b ⟹ b = 0
So the equation of the line is y = 3x, and stepping x by 1 gives the intermediate points
(1, 3), (2, 6), (3, 9), (4, 12), and (5, 15).
Step6: Calculate m = (y2 - y1)/(x2 - x1)
Step7: Calculate b = y1 - m·x1
Step8: Set (x, y) equal to the starting point, i.e., the lowest point, and xend equal to the
largest value of x.
If dx < 0
then x = x2
y = y2
xend= x1
If dx > 0
then x = x1
y = y1
xend= x2
Step9: Check whether the complete line has been drawn; if x = xend, stop
Step10: Calculate y = m·x + b
Step11: Plot the point (x, Round(y))
Step12: Increment x = x + 1
Step13: Go to Step9.
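As a sketch of this direct method (our own illustration; the function name line_direct is hypothetical, and integer endpoints with x1 ≠ x2 are assumed), the following C function steps x from one endpoint to the other and computes y = mx + b at each step. Note that for the example line (0, 0) to (6, 18), whose slope is 3, stepping x leaves vertical gaps; this is exactly the weakness that the DDA and Bresenham algorithms below address by stepping along the faster-changing axis:

#include <stdio.h>
#include <math.h>

/* Direct use of y = mx + b, stepping x from the lower endpoint to xend
   and rounding y at each step. Works well only when |m| <= 1; steeper
   lines (like the example) leave gaps between successive rows. */
void line_direct(int x1, int y1, int x2, int y2)
{
    double m = (double)(y2 - y1) / (x2 - x1);  /* slope (x1 != x2 assumed) */
    double b = y1 - m * x1;                    /* intercept                */
    int x    = (x1 < x2) ? x1 : x2;
    int xend = (x1 < x2) ? x2 : x1;
    for (; x <= xend; x++)
        printf("plot (%d, %d)\n", x, (int)floor(m * x + b + 0.5));
}

int main(void) { line_direct(0, 0, 6, 18); return 0; }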
DDA Algorithm
DDA stands for Digital Differential Analyzer. It is an incremental method of scan
conversion of a line: the calculation at each step uses the results of the previous step.
The slope of the line is
m = Δy/Δx = (y2 - y1)/(x2 - x1)
yi+1 - yi = Δy .......................equation 3
xi+1 - xi = Δx ......................equation 4
Case 1: When |m| ≤ 1, we sample at unit x intervals, i.e., Δx = 1:
Δy = m·Δx = m
yi+1 = yi + m, x = x + 1
Until x = x2
Case 2: When |m| > 1, we sample at unit y intervals, i.e., Δy = 1:
Δx = Δy/m = 1/m
xi+1 = xi + 1/m, y = y + 1
Until y = y2
Advantage:
1. It is a faster method than direct use of the line equation y = mx + b.
2. It does not require any multiplication inside the loop.
3. It allows us to detect the change in the values of x and y, so plotting the same point twice is avoided.
Disadvantage:
1. It involves floating-point additions, and rounding off is done; accumulation of round-off error causes the plotted points to drift away from the true line over long segments.
2. Rounding-off and floating-point operations consume a lot of time.
3. It is suitable for generating lines in software, but it is less suited for hardware implementation.
DDA Algorithm:
Step1: Start Algorithm
Step2: Declare the variables x1, y1, x2, y2, dx, dy, step, xinc, yinc, x, y
Step3: Enter the values of x1, y1, x2, y2
Step4: Calculate dx = x2 - x1
Step5: Calculate dy = y2 - y1
Step6: If ABS(dx) > ABS(dy)
Then step = ABS(dx)
Else
step = ABS(dy)
Step7: xinc = dx/step
yinc = dy/step
assign x = x1
assign y = y1
Step8: Plot the pixel (x, y)
Step9: x = x + xinc
y = y + yinc
Plot the pixel (Round(x), Round(y))
Step10: Repeat step 9 until x = x2
Step11: End Algorithm
Example: If a line is drawn from (2, 3) to (6, 15) using DDA, how many points are needed to generate the line?
x1 = 2
y1 = 3
x2 = 6
y2 = 15
dx = 6 - 2 = 4
dy = 15 - 3 = 12
m = dy/dx = 12/4 = 3
Since |dy| > |dx|, step = 12, so 12 increments (13 points including both endpoints) are needed.
#include<graphics.h>
#include<conio.h>
#include<stdio.h>
#include<math.h>
void main()
{
    int gd = DETECT, gm, i;
    float x, y, dx, dy, steps;
    int x0, x1, y0, y1;
    initgraph(&gd, &gm, "C:\\TC\\BGI");
    setbkcolor(WHITE);
    x0 = 100, y0 = 200, x1 = 500, y1 = 300;
    dx = (float)(x1 - x0);
    dy = (float)(y1 - y0);
    /* The step count is the larger of |dx| and |dy|. */
    if (fabs(dx) >= fabs(dy))
    {
        steps = fabs(dx);
    }
    else
    {
        steps = fabs(dy);
    }
    /* Per-step increments along x and y. */
    dx = dx / steps;
    dy = dy / steps;
    x = x0;
    y = y0;
    i = 1;
    while (i <= steps)
    {
        putpixel((int)(x + 0.5), (int)(y + 0.5), RED);  /* round to nearest pixel */
        x += dx;
        y += dy;
        i = i + 1;
    }
    getch();
    closegraph();
}
Bresenham’s Line Algorithm
This algorithm is used for scan converting a line. It was developed by Bresenham. It is
an efficient method because it involves only integer addition, subtraction, and
multiplication by 2 (which can be accomplished by a simple shift). These operations can be
performed very rapidly, so lines can be generated quickly.
In this method, the next pixel selected is the one that has the least distance from the true line.
Assume a pixel P1'(x1', y1'); we then select subsequent pixels as we work our way to the
right, one pixel position at a time in the horizontal direction toward P2'(x2', y2').
The line is best approximated by those pixels that fall the least distance from the path
between P1’,P2’.
We choose the next pixel between the bottom pixel S and the top pixel T.
If S is chosen
We have xi+1=xi+1 and yi+1=yi
If T is chosen
We have xi+1=xi+1 and yi+1=yi+1
This difference is
s - t = (y - yi) - [(yi + 1) - y]
= 2y - 2yi - 1
Multiplying by Δx and substituting y = m(xi + 1) + b gives the decision variable
di = Δx(s - t) = Δx[2m(xi + 1) + 2b - 2yi - 1]
Substituting m = Δy/Δx and collecting the constant terms into c = 2Δy + Δx(2b - 1):
di = 2Δy·xi - 2Δx·yi + c
We can write the decision variable di+1 for the next step:
di+1 = 2Δy·xi+1 - 2Δx·yi+1 + c
di+1 - di = 2Δy·(xi+1 - xi) - 2Δx·(yi+1 - yi)
Since xi+1 = xi + 1 in every step, this simplifies to di+1 = di + 2Δy - 2Δx·(yi+1 - yi).
Special Cases:
If the chosen pixel is the top pixel T (i.e., di ≥ 0) ⟹ yi+1 = yi + 1
di+1 = di + 2Δy - 2Δx
If the chosen pixel is the bottom pixel S (i.e., di < 0) ⟹ yi+1 = yi
di+1 = di + 2Δy
Finally, we calculate d1:
d1 = Δx[2m(x1 + 1) + 2b - 2y1 - 1]
d1 = Δx[2(mx1 + b - y1) + 2m - 1]
Since (x1, y1) lies on the line, mx1 + b - y1 = 0, and therefore d1 = Δx(2m - 1) = 2Δy - Δx.
Advantage:
1. It involves only integer arithmetic, so it is simple and fast.
2. It avoids generating duplicate points.
3. It can be implemented in hardware because it does not use multiplication or
division.
Disadvantage:
1. This algorithm is meant for basic line drawing only; anti-aliasing is not part of
Bresenham's line algorithm. So, to draw smooth lines, you should look into a
different algorithm.
Bresenham's Line Algorithm:
Step1: Start Algorithm
Step2: Declare the variables x1, y1, x2, y2, d, i1, i2, dx, dy
Step3: Enter the values of x1, y1, x2, y2
Step4: Calculate dx = x2 - x1 and dy = y2 - y1; calculate i1 = 2·dy, i2 = 2·(dy - dx), and d = i1 - dx
Step5: Consider (x, y) as the starting point and xend as the maximum possible value of x.
If dx < 0
Then x = x2
y = y2
xend = x1
If dx > 0
Then x = x1
y = y1
xend = x2
Step6: Generate a point at the (x, y) coordinates
Step7: Check whether the complete line has been drawn; if x = xend, stop
Step8: Calculate the coordinates of the next pixel:
If d < 0
Then d = d + i1
Else d = d + i2 and increment y = y + 1
Step9: Increment x = x + 1
Step10: Draw a point at the latest (x, y) coordinates
Step11: Go to step 7
Step12: End of Algorithm
Example: The starting and ending positions of the line are (1, 1) and (8, 5). Find the
intermediate points.
Solution: x1=1
y1=1
x2=8
y2=5
dx= x2-x1=8-1=7
dy=y2-y1=5-1=4
I1=2* ∆y=2*4=8
I2=2*(∆y-∆x)=2*(4-7)=-6
d = I1-∆x=8-7=1
x    y    d = d + I1 or d = d + I2
1 1 d+I2=1+(-6)=-5
2 2 d+I1=-5+8=3
3 2 d+I2=3+(-6)=-3
4 3 d+I1=-3+8=5
5 3 d+I2=5+(-6)=-1
6 4 d+I1=-1+8=7
7 4 d+I2=7+(-6)=1
8 5
Program to implement Bresenham's Line Drawing Algorithm:

#include<stdio.h>
#include<graphics.h>
#include<conio.h>
/* Draws a line using integer-only updates (handles slopes 0 < m < 1). */
void drawline(int x0, int y0, int x1, int y1)
{
    int dx, dy, p, x, y;
    dx = x1 - x0;
    dy = y1 - y0;
    x = x0;
    y = y0;
    p = 2 * dy - dx;            /* initial decision parameter d1 = 2dy - dx */
    while (x < x1)
    {
        if (p >= 0)
        {
            putpixel(x, y, 7);
            y = y + 1;          /* top pixel T chosen */
            p = p + 2 * dy - 2 * dx;
        }
        else
        {
            putpixel(x, y, 7);  /* bottom pixel S chosen */
            p = p + 2 * dy;
        }
        x = x + 1;
    }
}
int main()
{
    int gdriver = DETECT, gmode, x0, y0, x1, y1;
    initgraph(&gdriver, &gmode, "c:\\turboc3\\bgi");
    printf("Enter co-ordinates of first point: ");
    scanf("%d%d", &x0, &y0);
    printf("Enter co-ordinates of second point: ");
    scanf("%d%d", &x1, &y1);
    drawline(x0, y0, x1, y1);
    getch();
    closegraph();
    return 0;
}
5. The DDA algorithm can draw circles and curves, but they are not as accurate as those of
Bresenham's line algorithm; Bresenham's line algorithm can draw circles and curves with
more accuracy than the DDA algorithm.
Defining a Circle:
A circle is an eight-way symmetric figure. The shape of a circle is the same in all quadrants, and in each
quadrant there are two octants. If a point in one octant has been calculated, the other seven points can be
calculated easily by using the concept of eight-way symmetry.
For drawing, consider the circle centered at the origin. If a point is P (x, y), then the other seven points are
(x, -y), (-x, y), (-x, -y), (y, x), (y, -x), (-y, x), and (-y, -x).
So we calculate only a 45° arc, from which the whole circle can be determined easily.
If we want to display circle on screen then the putpixel function is used for eight points as shown below:
putpixel (x, y, color)
putpixel (x, -y, color)
putpixel (-x, y, color)
putpixel (-x, -y, color)
putpixel (y, x, color)
putpixel (y, -x, color)
putpixel (-y, x, color)
putpixel (-y, -x, color)
Example: If we determine the point (2, 7) of the circle, then the other points will be (2, -7), (-2, -7), (-2, 7), (7, 2), (-7, 2), (-7, -2), and (7, -2).
These seven points are calculated by using the property of reflection. The reflection is accomplished in the following way:
There are two standard methods of mathematically defining a circle centered at the origin.
1. Defining a circle using Polynomial Method
2. Defining a circle using Polar Co-ordinates
y² = r² - x², so y = √(r² - x²)
Where x = the x coordinate
y = the y coordinate
r = the circle radius
With this method, each x coordinate in the sector from 90° to 45° is found by stepping x
from 0 to xend = r/√2, and each y coordinate is found by computing y = √(r² - x²).
Step1: Set the initial variables: r = circle radius, (h, k) = coordinates of the circle center,
x = 0, i = step size, xend = r/√2
Step2: Test to determine whether the entire circle has been scan-converted; if x > xend, stop.
Step3: Compute y = √(r² - x²)
Step4: Plot the eight points found by symmetry concerning the center (h, k) at the current
(x, y) coordinates.
Step5: Increment x = x + i
Step6: Go to step 2.
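A C sketch of the polynomial method following these steps is given below (our own illustration; plot8 simply prints the eight symmetric points instead of calling putpixel):

#include <stdio.h>
#include <math.h>

/* Print the eight points symmetric about the center (h, k). */
void plot8(int h, int k, int x, int y)
{
    printf("(%d,%d) (%d,%d) (%d,%d) (%d,%d) ",
           h + x, k + y, h + x, k - y, h - x, k + y, h - x, k - y);
    printf("(%d,%d) (%d,%d) (%d,%d) (%d,%d)\n",
           h + y, k + x, h + y, k - x, h - y, k + x, h - y, k - x);
}

/* Polynomial method: step x from 0 to r/sqrt(2), compute y = sqrt(r*r - x*x). */
void circle_polynomial(int h, int k, int r)
{
    double x, y, xend = r / sqrt(2.0);
    for (x = 0; x <= xend; x += 1.0) {
        y = sqrt((double)r * r - x * x);
        plot8(h, k, (int)(x + 0.5), (int)(y + 0.5));
    }
}

int main(void) { circle_polynomial(0, 0, 10); return 0; }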
Algorithm:
Step1: Set the initial variables:
r = circle radius
(h, k) = coordinates of the circle center
i = step size
θend = π/4 (i.e., 45°)
θ = 0
Step2: If θ > θend, stop.
Step3: Compute
x = r · cos θ
y = r · sin θ
Step4: Plot the eight points, found by symmetry concerning the center (h, k), at the current (x, y)
coordinates.
Step5: Increment θ = θ + i
Step6: Go to step 2.
Bresenham's Circle Algorithm:
Scan-converting a circle using Bresenham's algorithm works as follows: points are
generated from 90° to 45°, and moves are made only in the +x and -y directions, as shown
in fig:
The best approximation of the true circle will be described by those pixels in the raster
that falls the least distance from the true circle. We want to generate the points from
90° to 45°. Assume that the last scan-converted pixel is P1 as shown in fig. Each new
point closest to the true circle can be found by taking either of two actions.
Let D (Si) is the distance from the origin to the true circle squared minus the distance to
point P3 squared. D (Ti) is the distance from the origin to the true circle squared minus
the distance to point P2 squared. Therefore, the following expressions arise.
Therefore,
di = (xi-1 + 1)² + yi-1² - r² + (xi-1 + 1)² + (yi-1 - 1)² - r²
If it is assumed that the circle is centered at the origin, then at the first step x = 0 & y =
r.
Therefore,
d1 = (0 + 1)² + r² - r² + (0 + 1)² + (r - 1)² - r²
= 1 + 1 + r² - 2r + 1 - r²
= 3 - 2r
Bresenham's Circle Algorithm:
Step1: Start Algorithm
Step2: Declare p, q, x, y, r, d as variables, where (p, q) are the coordinates of the center and r is the radius
Step3: Enter the value of r
Step4: Calculate d = 3 - 2r
Step5: Initialize x = 0, y = r
Step6: Check whether the whole circle has been scan-converted; if x > y, stop
Step7: Plot eight points by using the concept of eight-way symmetry. The center is at (p,
q). The current active pixel is (x, y).
putpixel (x+p, y+q)
putpixel (y+p, x+q)
putpixel (-y+p, x+q)
putpixel (-x+p, y+q)
putpixel (-x+p, -y+q)
putpixel (-y+p, -x+q)
putpixel (y+p, -x+q)
putpixel (x+p, -y+q)
Step8: Find the location of the next pixel:
If d < 0
Then d = d + 4x + 6 and x = x + 1
Else
d = d + 4(x - y) + 10, x = x + 1, y = y - 1
Step9: Go to step 6
Step10: Stop Algorithm
Example: Plot 6 points of a circle using the Bresenham algorithm, where the radius of the
circle is 10 units and the circle has center (50, 50).
So P1 (0,10)⟹(50,60)
P2 (1,10)⟹(51,60)
P3 (2,10)⟹(52,60)
P4 (3,9)⟹(53,59)
P5 (4,9)⟹(54,59)
P6 (5,8)⟹(55,58)
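The following C sketch implements the algorithm exactly as listed above, starting from (0, r) with d = 3 - 2r (our own illustration; plot8 prints the eight symmetric points about the center (p, q) instead of drawing them):

#include <stdio.h>

/* Print the eight symmetric points about the center (p, q). */
static void plot8(int p, int q, int x, int y)
{
    printf("(%d,%d) (%d,%d) (%d,%d) (%d,%d) (%d,%d) (%d,%d) (%d,%d) (%d,%d)\n",
           x + p, y + q, y + p, x + q, -y + p, x + q, -x + p, y + q,
           -x + p, -y + q, -y + p, -x + q, y + p, -x + q, x + p, -y + q);
}

/* Bresenham's circle for the 90°-to-45° octant: d starts at 3 - 2r;
   d < 0 keeps y, otherwise y is decremented, matching Step8 above. */
void circle_bresenham(int p, int q, int r)
{
    int x = 0, y = r, d = 3 - 2 * r;
    while (x <= y) {
        plot8(p, q, x, y);
        if (d < 0)
            d = d + 4 * x + 6;
        else {
            d = d + 4 * (x - y) + 10;
            y = y - 1;
        }
        x = x + 1;
    }
}

int main(void) { circle_bresenham(50, 50, 10); return 0; }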
If Pi is negative ⟹ the midpoint is inside the circle and we choose the pixel for which
we have yi+1 = yi
If Pi is positive ⟹ the midpoint is outside the circle (or on the circle) and we choose pixel S, for which
we have yi+1 = yi - 1
The exact initial decision parameter is P1 = 5/4 - r; since r is an integer, we can put 5/4 ≅ 1.
So, P1 = 1 - r
Algorithm:
Step1: Put x =0, y =r in equation 2
We have p=1-r
Step2: Repeat steps while x ≤ y
Plot (x, y)
If (p<0)
Then set p = p + 2x + 3
Else
p = p + 2(x-y)+5
y = y - 1 (end if)
x =x+1 (end loop)
Step3: End
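A compact C rendering of this mid-point algorithm, using the same initial value p = 1 - r and the same update rules as Step2 above, is sketched below (it prints one octant; the other seven points follow by symmetry):

#include <stdio.h>

/* Mid-point circle, centered at the origin as in the derivation above. */
void circle_midpoint(int r)
{
    int x = 0, y = r, p = 1 - r;
    while (x <= y) {
        printf("plot (%d, %d)\n", x, y);  /* one octant; mirror for the rest */
        if (p < 0)
            p = p + 2 * x + 3;            /* midpoint inside: keep y        */
        else {
            p = p + 2 * (x - y) + 5;      /* midpoint outside: decrement y  */
            y = y - 1;
        }
        x = x + 1;
    }
}

int main(void) { circle_midpoint(10); return 0; }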
The four-connected approach is more suitable than the eight-connected approach.
1. Four connected approaches: In this approach, left, right, above, below pixels are
tested.
2. Eight connected approaches: In this approach, left, right, above, below and four
diagonals are selected.
The boundary is checked by examining pixels to the left and right first; then pixels are checked
from top to bottom. The algorithm takes time and memory because many
recursive calls are needed.
Algorithm:
Procedure fill (x, y, color, color1: integer)
int c;
c = getpixel (x, y);
if ((c != color) && (c != color1))
{
setpixel (x, y, color);
fill (x+1, y, color, color1);
fill (x-1, y, color, color1);
fill (x, y+1, color, color1);
fill (x, y-1, color, color1);
}
The flood fill algorithm has many characteristics similar to boundary fill, but this method is more suitable
for filling regions that are defined by their current interior color rather than by a boundary color; in such cases
we use this algorithm.
In the flood fill algorithm, we start from a specified interior point (x, y) and reassign all pixel values that are currently set
to a given interior color (old_color) to the desired fill color. We then step through pixel positions until all interior points have been repainted.
Disadvantage:
1. It is a very slow algorithm.
2. It may fail for large polygons (the deep recursion can exhaust the stack).
3. The initial pixel requires more knowledge about the surrounding pixels.
Algorithm:
Procedure floodfill (x, y, fill_color, old_color: integer)
If (getpixel (x, y) = old_color)
{
setpixel (x, y, fill_color);
floodfill (x+1, y, fill_color, old_color);
floodfill (x-1, y, fill_color, old_color);
floodfill (x, y+1, fill_color, old_color);
floodfill (x, y-1, fill_color, old_color);
}
Program2: To implement 8-connected flood fill algorithm:
#include<stdio.h>
#include<graphics.h>
#include<dos.h>
#include<conio.h>
/* Renamed to flood8 because graphics.h already declares a floodfill() function. */
void flood8(int x, int y, int old, int newcol)
{
    int current;
    current = getpixel(x, y);
    if (current == old)
    {
        delay(5);
        putpixel(x, y, newcol);
        /* Four sides plus four diagonals: the 8-connected neighbours. */
        flood8(x + 1, y, old, newcol);
        flood8(x - 1, y, old, newcol);
        flood8(x, y + 1, old, newcol);
        flood8(x, y - 1, old, newcol);
        flood8(x + 1, y + 1, old, newcol);
        flood8(x - 1, y + 1, old, newcol);
        flood8(x + 1, y - 1, old, newcol);
        flood8(x - 1, y - 1, old, newcol);
    }
}
void main()
{
    int gd = DETECT, gm;
    initgraph(&gd, &gm, "C:\\TURBOC3\\BGI");
    rectangle(50, 50, 150, 150);
    flood8(70, 70, 0, 15);
    getch();
    closegraph();
}
Introduction of Transformations
Computer graphics provides the facility of viewing an object from different angles. An architect
can study a building from different angles, i.e.:
1. Front elevation
2. Side elevation
3. Top plan
A cartographer can change the size of charts and topographical maps. So if graphics images
are coded as numbers, the numbers can be stored in memory. These numbers are modified
by mathematical operations called transformations.
The purpose of using computers for drawing is to give the user the facility to view an object
from different angles and to enlarge or reduce the scale or shape of the object; these
operations are called transformations.
There are two complementary points of view for describing object transformation.
1. Geometric Transformation: The object itself is transformed relative to the coordinate system
or background. The mathematical statement of this viewpoint is defined by geometric
transformations applied to each point of the object.
2. Coordinate Transformation: The object is held stationary while the coordinate system is
transformed relative to the object. This effect is attained through the application of
coordinate transformations.
Types of Transformations:
1. Translation
2. Scaling
3. Rotating
4. Reflection
5. Shearing
Translation
It is the straight line movement of an object from one position to another is called
Translation. Here the object is positioned from one coordinate location to another.
Translation of point:
To translate a point from coordinate position (x, y) to another position (x1, y1), we add the
translation distances Tx and Ty algebraically to the original coordinates:
x1=x+Tx
y1=y+Ty
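A minimal C sketch of translation (our own illustration; the Point type and the function name translate are hypothetical) applies x1 = x + Tx and y1 = y + Ty to a point:

#include <stdio.h>

typedef struct { double x, y; } Point;

/* Translation: add Tx and Ty to every coordinate of the object. */
Point translate(Point p, double tx, double ty)
{
    Point q = { p.x + tx, p.y + ty };   /* x1 = x + Tx, y1 = y + Ty */
    return q;
}

int main(void)
{
    Point p = { 2, 3 };
    Point q = translate(p, 4, -1);
    printf("(%g, %g) -> (%g, %g)\n", p.x, p.y, q.x, q.y);
    return 0;
}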
Scaling:
It is used to alter or change the size of objects. The change is done using scaling factors.
There are two scaling factors, i.e., Sx in the x-direction and Sy in the y-direction. If the original
position is (x, y) and the scaling factors are Sx and Sy, then the coordinates after scaling will be
(x1, y1).
If the picture is to be enlarged to twice its original size, then Sx = Sy = 2. If Sx and Sy are not
equal, then scaling still occurs, but it elongates or distorts the picture.
If scaling factors are less than one, then the size of the object will be reduced. If scaling
factors are higher than one, then the size of the object will be enlarged.
If Sx and Sy are equal, it is called uniform scaling; if they are not equal, it is called
differential scaling. Scaling factors with values less than one move the object closer
to the coordinate origin, while values greater than one move the coordinate position farther
from the origin.
Enlargement: If the scaling matrix is S = [[2, 0], [0, 2]] and (x1, y1) is the original position,
then (x2, y2), the coordinates after scaling, are (2·x1, 2·y1).
Example: Prove that 2D scaling transformations are commutative, i.e., S1·S2 = S2·S1.
Solution: S1·S2 = [[Sx1, 0], [0, Sy1]] · [[Sx2, 0], [0, Sy2]] = [[Sx1·Sx2, 0], [0, Sy1·Sy2]]
= [[Sx2·Sx1, 0], [0, Sy2·Sy1]] = S2·S1, because the products of the individual scale factors commute.
Types of Rotation:
1. Anticlockwise
2. Clockwise
A positive value of the rotation angle rotates an object in the counterclockwise
(anticlockwise) direction.
A negative value of the rotation angle rotates an object in the clockwise
direction.
When the object is rotated, then every point of the object is rotated by the same angle.
Straight Line: Straight Line is rotated by the endpoints with the same angle and
redrawing the line between new endpoints.
Polygon: Polygon is rotated by shifting every vertex using the same rotational angle.
Curved Lines: Curved Lines are rotated by repositioning of all points and drawing of the
curve at new positions.
Ellipse: Its rotation can be obtained by rotating major and minor axis of an ellipse by
the desired angle.
For a point (x, y) rotated about the origin by angle θ, the rotation equations are:
Anticlockwise: x' = x·cos θ - y·sin θ, y' = x·sin θ + y·cos θ
Clockwise: x' = x·cos θ + y·sin θ, y' = -x·sin θ + y·cos θ
(Figure: the object before and after rotation.)
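A small C sketch of these rotation equations is given below (our own illustration); a positive angle gives the anticlockwise rotation, and a negative angle gives the clockwise one:

#include <stdio.h>
#include <math.h>

/* Anticlockwise rotation of (x, y) about the origin by angle theta (radians). */
void rotate(double x, double y, double theta, double *xr, double *yr)
{
    *xr = x * cos(theta) - y * sin(theta);
    *yr = x * sin(theta) + y * cos(theta);
}

int main(void)
{
    const double PI = 3.14159265358979;
    double xr, yr;
    rotate(1.0, 0.0, PI / 2, &xr, &yr);   /* 90° anticlockwise */
    printf("(1, 0) -> (%.2f, %.2f)\n", xr, yr);
    return 0;
}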
Reflection:
It is a transformation which produces a mirror image of an object. The mirror image can be
either about the x-axis or the y-axis. The object is rotated by 180°.
Types of Reflection:
1. Reflection about the x-axis
2. Reflection about the y-axis
3. Reflection about an axis perpendicular to xy plane and passing through the origin
4. Reflection about line y=x
1. Reflection about the x-axis: The object can be reflected about the x-axis with the help of the
following matrix:
[[1, 0], [0, -1]]
In this transformation the value of x remains the same, whereas the value of y becomes
negative. The following figures show the reflection of the object about the x-axis; the object
will lie on the other side of the x-axis.
2. Reflection about the y-axis: The object can be reflected about the y-axis with the help of the
following transformation matrix:
[[-1, 0], [0, 1]]
Here the value of x is reversed, whereas the value of y remains the same. The
object will lie on the other side of the y-axis.
3. Reflection about an axis perpendicular to the xy plane and passing through the origin:
In this transformation, the values of both x and y are reversed; the matrix is [[-1, 0], [0, -1]].
This is also called a half revolution about the origin.
4. Reflection about the line y = x: The object may be reflected about the line y = x with the help
of the following transformation matrix:
[[0, 1], [1, 0]]
First of all, the object is rotated by 45° in the clockwise direction. Then reflection
is done with respect to the x-axis. The last step rotates the object back to its original
orientation, that is, counterclockwise by 45°.
Example: Consider the triangle with vertices
A (3, 4)
B (6, 4)
C (4, 8)
Solution:
The point coordinates after reflection follow from the matrices above; for instance,
reflecting about the x-axis gives A' (3, -4), B' (6, -4), and C' (4, -8).
Shearing:
It is a transformation which changes the shape of an object; the layers of the object slide over
one another. The shear can be in one direction or in two directions.
Shearing in the X-direction: In this horizontal shearing, sliding of layers occurs:
x' = x + Shx·y, y' = y, where Shx is the shear factor. The homogeneous matrix for
shearing in the x-direction places Shx in the x-y term and is the identity elsewhere.
Shearing in the Y-direction: Here shearing is done by sliding along the vertical or y-axis:
x' = x, y' = y + Shy·x.
Shearing in X-Y directions: Here layers slide in both the x and the y direction:
x' = x + Shx·y, y' = y + Shy·x. The sliding is in the horizontal as well as the vertical
direction, and the shape of the object is distorted.
Matrix Representation of 2D Transformation
Homogeneous Coordinates
The rotation of a point, straight line, or an entire image on the screen about a point other
than the origin is achieved by first moving the image until the point of rotation occupies the
origin, then performing the rotation, and finally moving the image back to its original position.
The moving of an image from one place to another in a straight line is called a translation. A
translation may be done by adding or subtracting to each point, the amount, by which
picture is required to be shifted.
Homogeneous coordinates are generally used in design and construction applications. Here
we perform translations, rotations, scaling to fit the picture into proper position.
Suppose we want to perform rotation about an arbitrary point, then we can perform it by the sequence of three
transformations
1. Translation
2. Rotation
3. Reverse Translation
The ordering sequence of these transformations must not be changed. If a matrix is represented in column
form, then the composite transformation is performed by multiplying the matrices in order from right to left; the output
obtained from the previous matrix is multiplied with the new incoming matrix.
Step2: The object is translated so that its center coincides with the origin as in fig (b)
Step3: Scaling of an object by keeping the object at origin is done in fig (c)
Step4: Again a translation is done. This second translation is called a reverse translation; it returns the object to its
original location.
Let the translations be represented using homogeneous matrices; P will be the final transformation matrix obtained after multiplication.
The above resultant matrix shows that two successive translations are additive.
Composition of two Scalings: The composition of two scalings is multiplicative. Let S1 and S2 be the matrices to be multiplied.
General Pivot Point Rotation or Rotation about a
Fixed Point:
For this, the rotate function is used together with translations. The sequence of steps for
rotating an object about a pivot point is given below:
1. Translate the object from its original position so that the pivot point moves to the origin, as shown in fig (b)
2. Rotate the object about the origin, as shown in fig (c).
3. Translate the object back to its original position from the origin. This is called reverse translation, as
shown in fig (d).
The matrix multiplication of the above 3 steps, applied from right to left, is sketched below.
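The right-to-left composition can be sketched in C with 3 x 3 homogeneous matrices acting on column vectors (our own illustration; the helper names identity and multiply are hypothetical). Rotating (2, 1) by 90° about the pivot (1, 1) should yield (1, 2):

#include <stdio.h>
#include <math.h>

typedef double Mat[3][3];

void identity(Mat m)
{
    int i, j;
    for (i = 0; i < 3; i++) for (j = 0; j < 3; j++) m[i][j] = (i == j);
}

void multiply(Mat a, Mat b, Mat out)       /* out = a * b */
{
    Mat t;
    int i, j, k;
    for (i = 0; i < 3; i++)
        for (j = 0; j < 3; j++) {
            t[i][j] = 0;
            for (k = 0; k < 3; k++) t[i][j] += a[i][k] * b[k][j];
        }
    for (i = 0; i < 3; i++) for (j = 0; j < 3; j++) out[i][j] = t[i][j];
}

int main(void)
{
    double xc = 1, yc = 1, th = 3.14159265358979 / 2;  /* 90° about (1,1) */
    double x = 2, y = 1;                               /* point to rotate */
    Mat T1, R, T2, M;
    identity(T1); T1[0][2] = -xc; T1[1][2] = -yc;  /* translate pivot to origin */
    identity(R);  R[0][0] = cos(th); R[0][1] = -sin(th);
                  R[1][0] = sin(th); R[1][1] =  cos(th);
    identity(T2); T2[0][2] =  xc; T2[1][2] =  yc;  /* reverse translation */
    multiply(R, T1, M);                            /* right-to-left order */
    multiply(T2, M, M);
    printf("(%g,%g) -> (%.2f, %.2f)\n", x, y,
           M[0][0] * x + M[0][1] * y + M[0][2],
           M[1][0] * x + M[1][1] * y + M[1][2]);
    return 0;
}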
Step2: The object is translated so that its center coincides with origin as shown in fig (b)
Step3: Scaling of object by keeping object at origin is done as shown in fig (c)
Basically, the window is an area in object space. It encloses the object. After the user
selects this, space is mapped on the whole area of the viewport. Almost all 2D and 3D
graphics packages provide means of defining viewport size on the screen. It is possible to
determine many viewports on different areas of display and view the same object in a
different angle in each viewport.
The window is defined starting from the (0, 0) coordinate at its bottom-left corner and extends
to the right until it encloses the desired area. Once the window is defined, data outside the
window is clipped before being converted to screen coordinates. This process reduces the
amount of data in the display signal.
The window size of the Tektronix 4014 tube at Imperial College contains 4096 points
horizontally and 3072 points vertically.
First, we construct the scene in world coordinate using the output primitives and attributes.
To obtain a particular orientation, we can set up a 2-D viewing coordinate system in the
window coordinate plane and define a window in viewing coordinates system.
Once the viewing frame is established, we then transform descriptions in world coordinates
to viewing coordinates.
Then, we define viewport in normalized coordinates (range from 0 to 1) and map the
viewing coordinates description of the scene to normalized coordinates.
At the final step, all parts of the picture that lie outside the viewport are clipped, and the
contents of the viewport are transferred to device coordinates.
By changing the position of the viewport: We can view objects at different locations on
the display area of an output device, as shown in fig:
By varying the size of viewports: We can change the size and proportions of displayed
objects. We can achieve zooming effects by successively mapping different-sized windows
on a fixed-size viewport.
As the windows are made smaller, we zoom in on some part of a scene to view details that
are not shown with larger windows.
Computer Graphics Window to Viewport Co-
ordinate Transformation
Once object descriptions have been transferred to the viewing reference frame, we choose the
window extents in viewing coordinates and select the viewport limits in normalized
coordinates.
We do this using a transformation that maintains the same relative placement of objects
in normalized space as they had in viewing coordinates.
In order to maintain the same relative placement of the point in the viewport as in the
window, we require:
(xv - xvmin)/(xvmax - xvmin) = (xw - xwmin)/(xwmax - xwmin)
(yv - yvmin)/(yvmax - yvmin) = (yw - ywmin)/(ywmax - ywmin) ...........equation 1
Solving these expressions for the viewport position (xv, yv), we have
xv = xvmin + (xw - xwmin)·sx
yv = yvmin + (yw - ywmin)·sy ...........equation 2
where the scaling factors are
sx = (xvmax - xvmin)/(xwmax - xwmin)
sy = (yvmax - yvmin)/(ywmax - ywmin)
1. Perform a scaling transformation using a fixed point position (xw min,ywmin) that scales the
window area to the size of the viewport.
2. Translate the scaled window area to the position of the viewport. Relative proportions of
objects are maintained if the scaling factors are the same (sx=sy).
From normalized coordinates, object descriptions are mapped to the various display devices.
Any number of output devices can be open in a particular application, and another window-to-
viewport transformation can be performed for each open output device.
Viewing Transformation = T · S · T1
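A C sketch of this window-to-viewport mapping, following equation 2 directly, is shown below (illustrative; the Rect type and the function name window_to_viewport are our own):

#include <stdio.h>

typedef struct { double xmin, ymin, xmax, ymax; } Rect;

/* xv = xvmin + (xw - xwmin)*sx, yv = yvmin + (yw - ywmin)*sy,
   with sx and sy as defined in the text. */
void window_to_viewport(Rect w, Rect v, double xw, double yw,
                        double *xv, double *yv)
{
    double sx = (v.xmax - v.xmin) / (w.xmax - w.xmin);
    double sy = (v.ymax - v.ymin) / (w.ymax - w.ymin);
    *xv = v.xmin + (xw - w.xmin) * sx;
    *yv = v.ymin + (yw - w.ymin) * sy;
}

int main(void)
{
    Rect win = { 0, 0, 100, 100 }, vp = { 0, 0, 1, 1 };  /* normalize to 0..1 */
    double xv, yv;
    window_to_viewport(win, vp, 25, 50, &xv, &yv);
    printf("(25, 50) -> (%.2f, %.2f)\n", xv, yv);
    return 0;
}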
Note:
Each pixel value is used four times: twice on each of two successive scan lines.
Such integration of pixels sometimes involves replication using a set of ordered patterns,
commonly known as Dithering.
These patterns are widely used, especially when the grey levels (shades of brightness) are
synthetically generated.
Advantage:
Effective increase in the zoom area in all four directions, even if the selected image portion
(for zooming) is close to the screen boundary.
Inking:
If we sample the position of a graphical input device at regular intervals and display a dot at
each sampled position, a trail will be displayed of the movement of the device. This
technique, which closely simulates the effect of drawing on paper, is called Inking.
For many years the primary use of inking has been in conjunction with online character-
recognition programs.
Scissoring:
In computer graphics, scissoring is the deletion of any parts of an image which fall outside of a
window that has been sized and laid over the original image. It is also called clipping.
Clipping:
When we have to display a large portion of the picture, not only are scaling and translation
necessary; the visible part of the picture must also be identified. This process is not easy: certain
parts of the image are inside the window, while others are only partially inside. The invisible
parts of partially visible lines or elements must be omitted.
For deciding the visible and invisible portion, a particular process called clipping is used.
Clipping determines each element into the visible and invisible portion. Visible portion is
selected. An invisible portion is discarded.
Types of Lines:
With respect to clipping, lines are of three types: visible, not visible, and partially visible
(the clipping case); these cases are described under line clipping below.
The window against which an object is clipped is called a clip window. It can be curved or
rectangular in shape.
Applications of clipping:
1. It will extract part we desire.
2. For identifying the visible and invisible area in the 3D object.
3. For creating objects using solid modeling.
4. For drawing operations.
5. Operations related to the pointing of an object.
6. For deleting, copying, moving part of an object.
Clipping can be applied in world coordinates, so that only the contents inside the window are
mapped to device coordinates. Alternatively, the complete world-coordinate picture can first be
mapped to device coordinates, and then clipping against the viewport boundaries is done.
Types of Clipping:
1. Point Clipping
2. Line Clipping
3. Area Clipping (Polygon)
4. Curve Clipping
5. Text Clipping
6. Exterior Clipping
Point Clipping:
Point clipping is used to determine whether a point is inside the window or not. For this,
the following conditions are checked:
1. x ≤ xmax
2. x ≥ xmin
3. y ≤ ymax
4. y ≥ ymin
Here (x, y) is the coordinate of the point. If any one of the above inequalities is false, the
point falls outside the window and is not considered visible.
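These four conditions translate directly into a one-line C test (our own sketch; the function name point_visible is hypothetical):

#include <stdio.h>

/* Point clipping: (x, y) is visible only if all four inequalities hold. */
int point_visible(double x, double y,
                  double xmin, double ymin, double xmax, double ymax)
{
    return x >= xmin && x <= xmax && y >= ymin && y <= ymax;
}

int main(void)
{
    printf("%d\n", point_visible(0, 0, -3, 1, 2, 6));  /* 0: below ymin */
    printf("%d\n", point_visible(1, 3, -3, 1, 2, 6));  /* 1: inside     */
    return 0;
}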
Line Clipping:
It is performed by using a line clipping algorithm. With respect to the clip window, a line
falls into one of three cases:
1. Visible
2. Not Visible
3. Clipping Case
1. Visible: If a line lies within the window, i.e., both endpoints of the line lies within the
window. A line is visible and will be displayed as it is.
2. Not Visible: If a line lies completely outside the window, it is invisible and rejected; such lines
are not displayed. If any one of the following inequalities is satisfied, the line is
considered invisible. Let A (x1, y1) and B (x2, y2) be the endpoints of the line.
3. Clipping Case: If the line is neither completely visible nor completely invisible, it is considered a
clipping case. First of all, the category of a line is found based on the nine regions given below.
All nine regions are assigned codes; each code is 4 bits. If both endpoints of the line have
region code 0000, then the line is completely visible.
The center area has the code 0000, i.e., region 5 is the rectangular window.
Step4: If a line is a clipping case, find its intersection with the boundaries of the window:
m = (y2 - y1)/(x2 - x1)
(a) If bit 1 is "1", the line intersects the left boundary of the rectangular window:
y3 = y1 + m(x - x1)
where x = xwmin
where xwmin is the minimum value of the x coordinate of the window
The region code for a point (x, y) is set according to the scheme (for the window (-3, 1) to (2, 6)):
Bit 1 = sign (y - ymax) = sign (y - 6)    Bit 3 = sign (x - xmax) = sign (x - 2)
Bit 2 = sign (ymin - y) = sign (1 - y)    Bit 4 = sign (xmin - x) = sign (-3 - x)
where a bit is 1 if the corresponding quantity is positive and 0 otherwise.
We place the line segments in their appropriate categories by testing the region codes found
in the problem.
Category1 (visible): EF since the region code for both endpoints is 0000.
Category2 (not visible): IJ since (1001) AND (1000) =1000 (which is not 0000).
Category 3 (candidate for clipping): AB since (0001) AND (1000) = 0000, CD since
(0000) AND (1010) =0000, and GH. since (0100) AND (0010) =0000.
In clipping AB, the code for A is 0001. To push the 1 to 0, we clip against the boundary line
xmin = -3. The resulting intersection point is I1 (-3, 3). We clip (do not display) AI1 and work on I1B.
The code for I1 is 0000. The clipping category for I1B is 3 since (0000) AND (1000) is (0000).
Now B is outside the window (i.e., its code is 1000), so we push the 1 to a 0 by clipping
against the line ymax=6. The resulting intersection is l2 (-1 ,6). Thus I2 B is clipped. The code
for I2 is 0000. The remaining segment I1 I2 is displayed since both endpoints lie in the
window (i.e., their codes are 0000).
For clipping CD, we start with D since it is outside the window. Its code is 1010. We push
the first 1 to a 0 by clipping against the line ymax=6. The resulting intersection I3 is ( ,6),and
its code is 0000. Thus I3 D is clipped and the remaining segment CI3 has both endpoints
coded 0000 and so it is displayed.
For clipping GH, we can start with either G or H since both are outside the window. The
code for G is 0100, and we push the 1 to a 0 by clipping against the line y min=1.The resulting
intersection point is I4 (2, 1) and its code is 0010. We clip GI4 and work on I4H. Segment I4H
is not displayed since (0010) AND (0010) = 0010.
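The 4-bit region code used in this example can be computed with a small C sketch (our own illustration; the bit numbering follows the scheme above, with bit 1 printed first). The two test points reproduce codes from the worked example:

#include <stdio.h>

unsigned outcode(double x, double y,
                 double xmin, double ymin, double xmax, double ymax)
{
    unsigned code = 0;
    if (y > ymax) code |= 8;   /* bit 1: above the window    */
    if (y < ymin) code |= 4;   /* bit 2: below the window    */
    if (x > xmax) code |= 2;   /* bit 3: right of the window */
    if (x < xmin) code |= 1;   /* bit 4: left of the window  */
    return code;
}

void print_code(unsigned c)
{
    int b;
    for (b = 3; b >= 0; b--) putchar(((c >> b) & 1) ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    /* Window (-3, 1) to (2, 6), as in the worked example. */
    print_code(outcode(-4, 2, -3, 1, 2, 6));  /* prints 0001: left of window */
    print_code(outcode(-1, 7, -3, 1, 2, 6));  /* prints 1000: above window   */
    return 0;
}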
Step5: Check each midpoint to see whether it is nearest to the boundary of the window or not.
Step6: If the line is not found to be totally visible or totally rejected, repeat steps 1 to 5.
Example: The window size is (-3, 1) to (2, 6). A line AB is given with coordinates A (-4, 2)
and B (-1, 7). Is this line visible? Find the visible portion of the line using midpoint
subdivision.
Solution:
A (-4, 2), B (-1, 7)
Liang-Barsky Line Clipping Algorithm:
Liang and Barsky have established an algorithm that uses floating-point arithmetic but finds the
appropriate endpoints with at most four computations. This algorithm uses the parametric
equations for a line and solves four inequalities to find the range of the parameter for which the
line is in the viewport.
Let P(x1, y1) and Q(x2, y2) be the endpoints of the line we want to clip. The parametric equations of
the line segment give x-values and y-values for every point in terms of a parameter t that ranges
from 0 to 1. The equations are
x = x1 + t·dx and y = y1 + t·dy, where dx = x2 - x1 and dy = y2 - y1.
We can see that when t = 0, the point computed is P(x1, y1); and when t = 1, the point computed
is Q(x2, y2).
1. Set tmin = 0 and tmax = 1.
2. Calculate the values of t at which the line crosses each of the four window boundaries, and use them to raise tmin (for entering intersections) and lower tmax (for leaving intersections).
3. If tmin < tmax, then draw a line from (x1 + dx·tmin, y1 + dy·tmin) to (x1 + dx·tmax, y1 + dy·tmax).
4. If the line crosses over the window, (x1 + dx·tmin, y1 + dy·tmin) and (x1 + dx·tmax, y1 + dy·tmax) are the intersections between the line and the window edges.
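A common way to organize Liang-Barsky in C is a helper that narrows [tmin, tmax] against one boundary at a time; the sketch below is our own illustration of that idea (the function name clip_t and the sample line and window are hypothetical):

#include <stdio.h>

/* Narrow [tmin, tmax] against one boundary: p is the directed component
   of the line, q the signed distance to the edge. Returns 0 if rejected. */
int clip_t(double p, double q, double *tmin, double *tmax)
{
    double t;
    if (p == 0) return q >= 0;          /* parallel: keep iff inside */
    t = q / p;
    if (p < 0) {                        /* entering edge: raise tmin */
        if (t > *tmax) return 0;
        if (t > *tmin) *tmin = t;
    } else {                            /* leaving edge: lower tmax  */
        if (t < *tmin) return 0;
        if (t < *tmax) *tmax = t;
    }
    return 1;
}

int main(void)
{
    double x1 = -4, y1 = 2, x2 = 6, y2 = 4;          /* sample line   */
    double xmin = -3, ymin = 1, xmax = 2, ymax = 6;  /* sample window */
    double dx = x2 - x1, dy = y2 - y1, tmin = 0, tmax = 1;

    if (clip_t(-dx, x1 - xmin, &tmin, &tmax) &&
        clip_t( dx, xmax - x1, &tmin, &tmax) &&
        clip_t(-dy, y1 - ymin, &tmin, &tmax) &&
        clip_t( dy, ymax - y1, &tmin, &tmax))
        printf("clipped: (%.2f, %.2f) to (%.2f, %.2f)\n",
               x1 + dx * tmin, y1 + dy * tmin,
               x1 + dx * tmax, y1 + dy * tmax);
    else
        printf("line rejected\n");
    return 0;
}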
Text Clipping:
Several methods are available for clipping text. The clipping method depends on the
method of character generation used. A simple method is the all-or-none method: if all
characters of the string are inside the window, we keep the string; if any part of the string
is outside, the whole string is discarded, as in fig (a).
Another method discards only those characters that are not completely inside the window: any
character that overlaps the boundary of the window is discarded, as in fig (b).
In fig (c), each character is treated individually: the portion of a character lying outside the
window is clipped away as if it were any other graphics primitive.
Curve Clipping:
Curve clipping involves more complex procedures than line clipping; curve clipping
requires more processing than clipping objects with linear boundaries. Consider a circle
against a rectangular window: if the circle is completely inside the boundary of the window,
it is considered visible, so we save the circle. If the circle is completely outside the window,
we discard it. If the circle cuts the boundary, it is considered a clipping case.
Exterior Clipping:
It is the opposite of ordinary clipping: here the picture which is outside the window is considered,
and the picture inside the rectangular window is discarded. So the part of the picture outside the
window is saved.
1. It is used for displaying properly the pictures which overlap each other.
2. It is used in the concept of overlapping windows.
3. It is used for designing various patterns of pictures.
4. It is used for advertising purposes.
5. It is suitable for publishing.
6. For designing and displaying of the number of maps and charts, it is also used.
Polygon Clipping:
Polygon clipping is applied to polygons. The term polygon is used here to define objects
having a solid outline. These objects should maintain the properties and shape of a polygon
after clipping.
Polygon:
A polygon is a representation of a surface. It is a primitive which is closed in nature. It is
formed using a collection of lines and is also called a many-sided figure. The lines combined
to form the polygon are called sides or edges; each line is obtained by joining two vertices.
Example of Polygon:
1. Triangle
2. Rectangle
3. Hexagon
4. Pentagon
Types of Polygons
1. Concave
2. Convex
A polygon is called convex if the line joining any two interior points of the polygon lies inside
the polygon. A non-convex polygon is said to be concave. A concave polygon has at least one
interior angle greater than 180°, and it can be split into convex sub-polygons for clipping.
A polygon can be positively or negatively oriented. If visiting the vertices in the given order
produces a counterclockwise circuit, the orientation is said to be positive.
Sutherland-Hodgeman Polygon Clipping:
It is performed by processing the boundary of the polygon against each window edge.
First the entire polygon is clipped against one edge; the resulting polygon is then clipped
against the second edge, and so on for all four edges.
1. If the first vertex is outside the window and the second vertex is inside the window, then
both the point of intersection of the polygon edge with the window boundary and the
second vertex are added to the output list.
2. If both vertices are inside the window boundary, then only the second vertex is added to the
output list.
3. If the first vertex is inside the window and the second is outside, only the point where the
edge intersects the window boundary is added to the output list.
4. If both vertices are outside the window, then nothing is added to the output list.
The following figures show the original polygon and the clipping of the polygon against the four window edges.
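The four cases above can be sketched in C for a single clipping edge; the full Sutherland-Hodgeman algorithm applies the same routine once per window edge, feeding each output list into the next pass. This is our own illustration (clipping only against a left boundary x = xmin; the names Pt and clip_left are hypothetical):

#include <stdio.h>

typedef struct { double x, y; } Pt;

/* Clip polygon in[0..n-1] against the boundary x = xmin; returns the
   size of the output list. The worst case produces 2n vertices. */
int clip_left(const Pt *in, int n, double xmin, Pt *out)
{
    int i, m = 0;
    for (i = 0; i < n; i++) {
        Pt s = in[i], e = in[(i + 1) % n];        /* polygon edge s -> e */
        int s_in = s.x >= xmin, e_in = e.x >= xmin;
        if (s_in != e_in) {                       /* crossing: add intersection */
            double t = (xmin - s.x) / (e.x - s.x);
            out[m].x = xmin;
            out[m].y = s.y + t * (e.y - s.y);
            m++;
        }
        if (e_in) out[m++] = e;                   /* second vertex inside */
    }
    return m;
}

int main(void)
{
    Pt poly[3] = { {-5, 2}, {4, 2}, {4, 6} }, out[6];
    int i, m = clip_left(poly, 3, 0.0, out);
    for (i = 0; i < m; i++) printf("(%g, %g)\n", out[i].x, out[i].y);
    return 0;
}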
Disadvantage of the Sutherland-Hodgeman Algorithm:
This method requires a considerable amount of memory. First of all, the polygon is stored
in its original form. Then clipping against the left edge is done and the output is stored. Then clipping
against the right edge is done, then the top edge, and finally the bottom edge. The results of all
these operations are stored in memory, so memory is wasted storing intermediate
polygons.
Weiler-Atherton Polygon Clipping:
When clipping produces two or more separate sections, as can happen with a concave
polygon, this algorithm handles the concave polygon correctly. The vertex-processing procedures
for window boundaries are modified so that concave polygons are displayed properly.
Let the clipping window be initially called clip polygon and the polygon to be clipped the
subject polygon. We start with an arbitrary vertex of the subject polygon and trace around
its border in the clockwise direction until an intersection with the clip polygon is
encountered:
1. If the edge enters the clip polygon, record the intersection point and continue to trace the
subject polygon.
2. If the edge leaves the clip polygon, record the intersection point and make a right turn to
follow the clip polygon in the same manner (i.e., treat the clip polygon as subject polygon
and the subject polygon as clip polygon and proceed as before).
Whenever our path of traversal forms a sub-polygon we output the sub-polygon as part of
the overall result. We then continue to trace the rest of the original subject polygon from a
recorded intersection point that marks the beginning of a not-yet traced edge or portion of
an edge. The algorithm terminates when the entire border of the original subject polygon
has been traced exactly once.
Introduction of Shading
Shading is referred to as the implementation of the illumination model at the pixel points or
polygon surfaces of the graphics objects.
Shading model is used to compute the intensities and colors to display the surface. The
shading model has two primary ingredients: properties of the surface and properties of the
illumination falling on it. The principal surface property is its reflectance, which determines
how much of the incident light is reflected. If a surface has different reflectance for the light
of different wavelengths, it will appear to be colored.
An object's illumination is also significant in computing intensity. The scene may have
illumination that is uniform from all directions, called diffuse illumination.
Shading models determine the shade of a point on the surface of an object in terms of a
number of attributes. The shading model can be decomposed into three parts: a contribution
from diffuse illumination, a contribution from one or more specific light sources, and a
transparency effect. Each of these effects contributes a shading term E, and the terms are summed to
find the total energy coming from a point on an object. This is the energy a display should
generate to present a realistic image of the object. The energy comes not from a single point on
the surface but from a small area around the point.
Epd = Rp·Id
where Epd is the energy coming from point P due to diffuse illumination, Id is the diffuse
illumination falling on the entire scene, and Rp is the reflectance coefficient at P, which
ranges from 0 to 1.
The shading contribution from specific light sources will cause the shade of a
surface to vary as its orientation with respect to the light sources changes, and will also
include specular reflection effects. In the figure, consider a point P on a surface, with light
arriving at an angle of incidence i, the angle between the surface normal Np and a ray to the
light source. If the energy Ips arriving from the light source is reflected uniformly in all
directions, called diffuse reflection, we have
Eps = Rp·Ips·cos i
This equation shows the reduction in the intensity of a surface as it is tipped obliquely to the
light source. If the angle of incidence i exceeds 90°, the surface is hidden from the light
source and we must set Eps to zero.
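The two diffuse terms defined above combine into a short C sketch (our own illustration; the function name diffuse_energy is hypothetical, and Rp, Id, Ips, and the cosine of the incidence angle are passed in directly):

#include <stdio.h>
#include <math.h>

/* Total diffuse energy at P: Epd = Rp*Id for uniform background light,
   plus Eps = Rp*Ips*cos(i) for a specific source, with Eps clamped to
   zero when the angle of incidence exceeds 90° (cos i <= 0). */
double diffuse_energy(double Rp, double Id, double Ips, double cos_i)
{
    double Eps = (cos_i > 0) ? Rp * Ips * cos_i : 0.0;
    return Rp * Id + Eps;
}

int main(void)
{
    printf("E = %.3f\n", diffuse_energy(0.8, 0.2, 1.0, cos(3.14159 / 3)));
    return 0;
}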
In general, flat shading of polygon facets provides an accurate rendering for an object if all
of the following assumptions are valid:
The object is a polyhedron and is not an approximation of an object with a curved surface.
All light sources illuminating the objects are sufficiently far from the surface so that N. L and
the attenuation function are constant over the surface (where N is the unit normal to a
surface and L is the unit direction vector to the point light source from a position on the
surface).
The viewing position is sufficiently far from the surface so that V. R is constant over the
surface (where V is the unit vector pointer to the viewer from the surface position and R
represent a unit vector in the direction of ideal specular reflection).
Gouraud shading
This intensity-interpolation scheme, developed by Gouraud and usually referred to as
Gouraud shading, renders a polygon surface by linearly interpolating intensity values across
the surface. Intensity values for each polygon are matched with the values of adjacent
polygons along the common edges, thus eliminating the intensity discontinuities that can
occur in flat shading.
Each polygon surface is rendered with Gouraud Shading by performing the following
calculations:
At each polygon vertex, we obtain a normal vector by averaging the surface normals of all
polygons sharing that vertex, as shown in fig:
Thus, for any vertex position V, we acquire the unit vertex normal with the calculation
NV = (Σk Nk) / |Σk Nk|
Once we have the vertex normals, we can determine the intensity at the vertices from a
lighting model.
Following figures demonstrate the next step: Interpolating intensities along the
polygon edges. For each scan line, the intensities at the intersection of the scan line with a
polygon edge are linearly interpolated from the intensities at the edge endpoints. For
example: In fig, the polygon edge with endpoint vertices at position 1 and 2 is intersected
by the scanline at point 4. A fast method for obtaining the intensities at point 4 is to
interpolate between intensities I1 and I2 using only the vertical displacement of the scan line.
Similarly, the intensity at the right intersection of this scan line (point 5) is interpolated
from the intensity values at vertices 2 and 3. Once these bounding intensities are
established for a scan line, an interior point (such as point P in the previous fig) is
interpolated from the bounding intensities at points 4 and 5 on the basis of its horizontal
distance from each:
Ip = [(x5 - xp)·I4 + (xp - x4)·I5] / (x5 - x4)
Incremental calculations are used to obtain successive edge intensity values between scan
lines and to obtain successive intensities along a scan line as shown in fig:
If the intensity at edge position (x, y) is interpolated as
I = [(y - y2)/(y1 - y2)]·I1 + [(y1 - y)/(y1 - y2)]·I2
then we can obtain the intensity along this edge for the next scan line, y - 1, incrementally as
I' = I + (I2 - I1)/(y1 - y2)
Similar calculations are used to obtain intensities at successive horizontal pixel positions
along each scan line.
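The edge and scan-line interpolations reduce to ordinary linear interpolation, as the following C sketch shows (our own illustration; the sample intensities and coordinates are hypothetical, and the point numbering 4, 5 follows the figure described above):

#include <stdio.h>

/* Linear interpolation between a and b with parameter t in [0, 1]. */
double lerp(double a, double b, double t) { return a + t * (b - a); }

int main(void)
{
    double I1 = 0.9, I2 = 0.3, y1 = 10, y2 = 2, y = 6;  /* left edge data */
    double I4 = lerp(I2, I1, (y - y2) / (y1 - y2));     /* edge intensity */
    double I5 = 0.5;                                    /* right edge, given */
    double x4 = 3, x5 = 11, xp = 7;                     /* span positions */
    double Ip = lerp(I4, I5, (xp - x4) / (x5 - x4));    /* interior point */
    printf("I4 = %.2f, Ip = %.2f\n", I4, Ip);
    return 0;
}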
When surfaces are to be rendered in color, the intensity of each color component is
calculated at the vertices. Gouraud shading can be combined with a hidden-surface
algorithm to fill in the visible polygons along each scan line. An example of an object
shaded with the Gouraud method appears in the following figure:
Gouraud shading removes the intensity discontinuities associated with the constant-shading
model, but it has some other deficiencies. Highlights on the surface are sometimes
displayed with anomalous shapes, and the linear intensity interpolation can cause bright or
dark intensity streaks, called Mach bands, to appear on the surface. These effects can be
reduced by dividing the surface into a greater number of polygon faces or by using other
methods, such as Phong shading, that require more calculations.
Phong Shading
A more accurate method for rendering a polygon surface is to interpolate the normal vector
and then apply the illumination model to each surface point. This method developed by
Phong Bui Tuong is called Phong shading or normal-vector interpolation shading. It displays
more realistic highlights on a surface and greatly reduces the Mach-band effect.
A polygon surface is rendered using Phong shading by carrying out the following steps:
1. Determine the average unit normal vector at each polygon vertex.
2. Linearly interpolate the vertex normals over the surface of the polygon.
3. Apply an illumination model along each scan line to calculate projected pixel intensities for the surface points.
Incremental methods are used to evaluate normals between scan lines and along each scan
line. At each pixel position along a scan line, the illumination model is applied to determine
the surface intensity at that point.
Intensity calculations using an approximated normal vector at each point along the scan line
produce more accurate results than the direct interpolation of intensities, as in Gouraud
Shading. The trade-off, however, is that Phong shading requires considerably more
calculations.