Notes On Computer Graphics
A Complete Manual Of
Computer Graphics
BE Computer / Electronics (Sixth Semester)
2008
Prepared By:
Er. Ganesh Ram Suwal
Lecturer
Khwopa Engineering College
Chapter 1
1.0 Introduction:
Computer graphics is one of the most exciting and rapidly growing computer fields. The field is concerned with the generation of graphics using computers, and includes the creation, storage, and manipulation of images of objects. These objects come from diverse fields such as physics, mathematics, engineering, architecture, abstract structures and natural phenomena. Computer graphics today is largely interactive: the user controls the contents, structure, and appearance of images of objects by using input devices such as a keyboard, mouse, or touch-sensitive panel on the screen.
Until the 1980s computer graphics was a small, specialized field, largely because the hardware was expensive and graphics-based application programs that were easy to use and cost-effective were few. Then personal computers (PCs) with built-in raster graphics displays, such as the Xerox Star, Apple Macintosh and IBM PC, popularized the use of bitmap graphics for user-computer interaction. A bitmap is a ones (1) and zeros (0) representation of the rectangular array of points on the screen. Each point is called a pixel or pel (shortened forms of picture element). Once bitmap graphics became affordable, an explosion of easy-to-use and inexpensive graphics-based applications soon followed. Graphics-based user interfaces allowed millions of new users to control simple, low-cost application programs, such as word processors, spreadsheets, and drawing programs.
The concept of a "desktop" then became a popular metaphor for organizing screen space. By means of a window manager, the user could create, position, and resize rectangular screen areas called windows. This allows the user to switch among multiple activities just by pointing
and clicking at the desired window, typically with a mouse. Besides windows, icons, which represent data files, application programs, file cabinets, mailboxes, printers, the recycle bin, and so on, made user-computer interaction more effective. By pointing and clicking on icons, users could activate the corresponding programs or objects, which replaced much of the typing of commands used in earlier operating systems and computer applications.
Today, almost all interactive application programs, even those for manipulating text (e.g. word processors) or numerical data (e.g. spreadsheet programs), use graphics extensively in the user interface and for visualizing and manipulating the application-specific objects. Even people who do not use computers encounter computer graphics in TV commercials and cinematic special effects. Thus computer graphics is an integral part of all computer user interfaces, and is indispensable for visualizing 2D and 3D objects in almost all areas such as education, science, engineering, medicine, commerce, the military, research, advertising and entertainment. The theme is that learning how to program and use computers now includes learning how to use simple 2D graphics.
The SAGE air-defense system, developed in the mid-1950s, was the first to use command-and-control CRT display consoles on which operators identified targets with light pens (hand-held pointing devices that sense light emitted by objects on the screen).
Later, the Sketchpad system by Ivan Sutherland came to light. That was the beginning of modern interactive graphics. In this system, a keyboard and light pen were used for pointing, making choices, and drawing.
At the same time, it was becoming clear to computer, automobile, and aerospace manufacturers that CAD (Computer-Aided Design) and CAM (Computer-Aided Manufacturing) activities had enormous potential for automating drafting and other drawing-intensive activities. The General Motors DAC system for automobile design and the Itek Digitek system for lens design were pioneering efforts that showed the utility of graphical interaction in the iterative design cycles common in engineering. By the mid-60s, a number of commercial products using these systems had appeared.
At that time only the most technology-intensive organizations could use interactive computer graphics, whereas others used punch cards, a non-interactive system.
1. User Interface
Most applications have user interfaces that rely on desktop window systems to manage multiple simultaneous activities, and on point-and-click facilities that allow the user to select menu items, icons, and objects on the screen. These activities fall under computer graphics. Typing is necessary only to input text to be stored and manipulated. Word processors, spreadsheets, and desktop-publishing programs are typical examples where user-interface techniques are implemented.
2. Plotting
Plotting 2D and 3D graphs of mathematical, physical and economic functions uses computer graphics extensively. Histograms, bar charts, pie charts and task-scheduling charts are the most commonly used plots. These are all used to present the trends and patterns of complex data meaningfully and concisely.
and patterns inherent in huge data sets; it would otherwise be almost impossible to analyze those data numerically.
6. Simulation
Simulation is the imitation of conditions like those encountered in real life. Simulation thus helps one learn or experience conditions one might have to face in the future, without danger at the beginning of the course. For example, astronauts can experience the feeling of weightlessness in a simulator; similarly, pilot training can be conducted in a flight simulator. Military tank simulators, naval simulators, driving simulators, air traffic control simulators, heavy-duty vehicle simulators, and so on are some of the most used simulators in practice. Simulators are also used to optimize a system, for example a vehicle, by observing the reactions of the driver during operation of the simulator.
7. Entertainment
Disney movies such as The Lion King and Beauty and the Beast, and other science-fiction movies like Star Trek, are the best examples of the application of computer graphics in the field of entertainment. Instead of drawing every frame with slightly changing scenes for the production of a cartoon film, only the key frames are needed; the in-between frames are interpolated by the graphics system, dramatically decreasing the cost of production while maintaining the quality. Computer and video games such as FIFA, Formula 1, Superbike, and Moto are a few to name where computer graphics is used extensively.
9. Cartography
Cartography is a subject which deals with making maps and charts. Computer graphics is used to produce both accurate and schematic representations of geographical and other natural phenomena from measurement data.
Image Processing
Computer graphics is used to create pictures, while image processing is used to modify or interpret existing pictures such as photographs and TV scans. Two principal uses of image processing are:
1. improving picture quality
2. machine perception of visual information, as used in robotics
In image processing, a photograph is first digitized into an image file; the picture parts can then be rearranged, colour separations enhanced, or the quality of shading improved.
In medical applications, image processing is used to enhance photographs, for example in tomography and simulated operations. Tomography is a technique of X-ray photography that allows cross-sectional views of physiological systems to be displayed.
printf("\n\t\t 4 circle");
printf("\n\t\t 5 Ellipse");
printf("\n\t\t 6 pieslice");
printf("\n\t\t 7 3 d bar");
scanf("%d",&x);
cleardevice();
switch(x) {
case 1:
printf("\n\tGive the value x-axis");
scanf("%d",&a);
cleardevice();
line(a,a,a,a);   /* degenerate line: plots the single point (a,a) */
break;
case 2:
printf("\n\tGive the corodinate of the line x1,y1,x2,y2\t");
scanf("%d %d %d %d",&a,&b,&c,&d);
cleardevice();
line(a,b,c,d);
break;
case 3:
printf("\n\tGive the left, top, right and bottom value\t");
scanf("%d %d %d %d",&a,&b,&c,&d);
cleardevice();
rectangle(a,b,c,d);
break;
case 4:
printf("\n\tGive the x-axis and Y-axis");
printf(" center valuse and radii\t");
scanf("%d%d%d%d",&a,&b,&c);
cleardevice();
circle(a,b,c);
break;
case 5:
printf("\n\tGive Center of ellipse X-axis");
printf(" and Y-axis starting anlge\n");
printf("end angle Horizontal axis X-axis and");
printf(" Vertical axis Y- axis \t");
scanf("%d%d%d%d%d%d",&a,&b,&c,&d,&e,&f);
cleardevice();
ellipse(a,b,c,d,e,f);
break;
case 6:
printf("\n\tGive Center of pieslice X-axis ");
printf("and Y-axis starting anlge");
printf("\nend angle radii \t");
scanf("%d%d%d%d%d%d",&a,&b,&c,&d,&e);
cleardevice();
pieslice(a,b, c,d,e);
break;
case 7:
printf("\n\tGive left, top, right, bottom, ");
printf("dept and topflag cordinates");
scanf("%d%d%d%d%d%d",&a,&b,&c,&d,&e,&f);
cleardevice();
bar3d(a,b,c,d,e,f);
break;
}
getch();
closegraph();
}
Chapter 2
2.0 Hardware Concepts:
Since a computer is an electronic machine, it does not do anything without input. An input device is an electromechanical device which accepts data from the outside world and translates them into a form the computer can interpret. Data input devices like keyboards are used to provide data to the computer, whereas pointing and selection devices like mice, light pens and touch panels are used to provide visual and positional input to the application.
A. Mechanical Mouse
When a roller in the base of this mechanical mouse is moved, a pair of orthogonally
arranged toothed wheels, each placed in between a LED and a photo detector, interrupts
the light path. An optical detector counts the pulses. These pulses generated in the
horizontal and vertical directions move the cursor on the screen. Hence, the numbers of
interrupts so generated are used to report the mouse movements to the computer.
Fig: a and b showing cross-sectional views of an optical mouse
Advantages of optical mouse:
1. The cursor moves smoothly.
2. There are no moving parts; so there is less chance of failure.
3. No mechanical problems due to dust, etc.
3. Light pen:
A light pen is a pencil-shaped device with a cord at the trailing end. It is a computer input device in the form of a light-sensitive wand used in conjunction with the computer's CRT monitor. It lets the user select screen positions by detecting the light coming from points on the CRT screen, and allows the user to point at displayed objects, or draw on the screen, in a similar way to a touch screen but with greater positional accuracy. Light pens are sensitive to the short bursts of light emitted from the phosphor coating at the instant the electron beam strikes a particular point. An activated light pen, pointed at a spot on the screen as the electron beam lights up that spot, generates an electrical pulse that causes the coordinate position of the electron beam to be recorded: when the light at the pointed spot goes from dim to bright, the pen sends a signal pulse to the video chip, which sets a latch that feeds two numbers, the X and Y locations, into a memory location. The system can therefore tell from these two numbers where the light pen is pointed on the screen. A light pen can work with any CRT-based monitor, but not with LCD screens, projectors and other display devices.
A light pen is fairly simple to implement. It works by sensing the sudden small change in brightness of a point on the screen when the electron gun refreshes that spot. By noting exactly where the scanning has reached at that moment, the X, Y position of the pen can be resolved. This is usually achieved by the light pen causing an interrupt, at which point the scan position can be read from a special register, or computed from a counter or timer. The pen position is updated on every refresh of the screen. Because the user was required to hold his or her arm in front of the screen for long periods of time, the light pen fell out of use as a general-purpose input device.
Drawbacks:
Prolonged use of the light pen can cause arm fatigue.
It sometimes gives false readings due to background lighting in a room.
It cannot report the coordinates of a point that is completely black; as a remedy, one can display a dark blue field in place of the regular image for a single frame time.
When a light pen is pointed at the required spot, part of the screen image is obscured by the hand and pen.
It requires special implementations for some applications because it cannot detect positions within black areas.
4. Touch Panel:
Touch panels allow the user to point directly at the screen with a touch of the finger to move the cursor or to select a menu item. Their main application is in the selection of options represented by graphical icons. The touch input can be recorded using optical, electrical or acoustical methods. There are three kinds of touch panels:
a) Optical Touch Panel
A tablet is a digitizer. In general, a digitizer is a device used to scan over an object and input a set of discrete coordinate positions. These positions can then be joined with straight line segments to approximate the shape of the original object. A tablet digitizes an object by detecting the position of a movable stylus (a pencil-shaped device) or puck (like a mouse, with cross hairs for sighting positions) held in the user's hand. It is a common device for drawing, painting or interactively selecting coordinate positions on an object. One type of digitizer is the graphics tablet, used to enter two-dimensional coordinates by activating a hand cursor or stylus at selected positions on a flat surface. A tablet is a flat surface, ranging from about 6 x 6 inches up to 48 x 72 inches, which can detect the position of the movable stylus or puck held in the user's hand. The accuracy of tablets is usually better than 0.2 mm.
There are three types of digitizer:
a) Electrical Tablet
b) Sonic Tablet
c) Resistive Tablet
Electrical Tablet:
A grid of wires on ¼- to ½-inch centers is embedded in the tablet surface. Electromagnetic signals generated by electric pulses applied in sequence to the wires in the grid induce electrical signals in a wire coil in the stylus (or puck). The strength of the signal induced by each pulse is used to determine the position of the stylus. The signal strength is also used to determine roughly how far the stylus is from the tablet: when the stylus is within ½ inch of the tablet, it is taken as "near"; otherwise it is either "far" or "touching". When the stylus is "near" or "touching", a cursor is usually shown on the display to provide visual feedback to the user. A signal is sent to the computer when the tip of the stylus is pressed against the tablet, or when any button on the puck is pressed. The information provided by the tablet is repeated 30 to 60 times per second.
Sonic Tablet
The sonic tablet uses sound waves to couple the stylus to microphones positioned on the periphery of the digitizing area. An electric spark at the tip of the stylus creates sound bursts. The position of the stylus, i.e. its coordinate values, is calculated using the delay between when the spark occurs and when its sound arrives at each microphone. The main advantage of the sonic tablet is that it does not require a dedicated working area: the microphones can be placed around any surface to form the "tablet" work area. This facilitates digitizing drawings in thick books, which is not convenient with an electrical tablet because the stylus cannot get close enough to the tablet surface.
Resistive Tablet
The resistive tablet consists of a piece of glass coated with a thin layer of conducting material. When a battery-powered stylus is activated at a certain position, it emits high-frequency radio signals, which induce a radio signal in the conducting layer. The strength of the signal received at the edges of the tablet is used to calculate the position of the stylus.
Several types of tablet are transparent, and thus can be backlit for digitizing X-ray films and photographic negatives. The mechanisms of sonic and electrical tablets can also be used to digitize 3D objects, while the resistive tablet can be used to digitize objects on a CRT because it can be curved to the shape of the CRT.
In some raster-scan display systems, each frame is displayed in two passes using an interlaced refresh procedure. Interlacing is primarily used with lower refresh rates, so that the picture is seen without flicker while preserving the phosphor from burning out.
A display of 640 pixels by 480 lines is an example of a medium-resolution raster display; 1600 by 1200 is a high-resolution one. A pixel in a frame buffer may be represented by one bit, as in a monochrome system where each pixel on the CRT screen is either ON '1' or OFF '0', or it may be represented by eight bits, giving 2^8 = 256 gray levels for continuous shades of gray on the screen. In a colour system, each of the three colours (red, green and blue) is represented by eight bits, producing 2^24, roughly 16.7 million, colours. A medium-resolution colour display of 640 x 480 pixels will thus require (640 x 480 x 24)/8 = 921,600 bytes, i.e. about 900 KB of frame-buffer RAM.
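As a quick check of this arithmetic, the short sketch below computes the frame-buffer size for a given resolution and colour depth (an illustrative addition, not part of the original notes):

#include <stdio.h>

/* Frame-buffer size in bytes = (width * height * bits per pixel) / 8 */
long framebuffer_bytes(long width, long height, long bpp)
{
    return (width * height * bpp) / 8;
}

int main(void)
{
    /* the medium-resolution 24-bit colour display from the example above */
    long bytes = framebuffer_bytes(640, 480, 24);
    printf("Frame buffer: %ld bytes (about %ld KB)\n", bytes, bytes / 1024);
    return 0;
}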
Advantages
It has the ability to fill areas with solid colours or patterns.
The time required for refreshing is independent of the complexity of the image.
Low cost.
Disadvantages
For real-time dynamics, not only the end points have to be moved: all the pixels between the moved end points have to be scan-converted with appropriate algorithms, which might slow down the dynamic process.
Special algorithms are required to move all the pixels.
Due to scan conversion, "jaggies" or "stair-casing" are unavoidable.
Video Controller:
It is a special-purpose processor that accesses the frame buffer to refresh the screen; it is given direct access to the frame-buffer memory. Some transformations, such as enlargement, reduction, or movement from one location to another, can also be accomplished by the video controller. Some systems are designed to allow the video controller to mix the frame-buffer image with an input from a television camera or other input device.
System memory holds data and the programs that execute on the CPU: the application program, graphics package and OS. The display-processor memory holds data plus the programs that perform scan conversion and raster operations. The frame buffer
stores the displayable image created by scan conversion and raster operations. The organization is shown in the figure below:
[Figure: the system bus connects the CPU, system memory, display processor, display-processor memory, frame buffer and video controller]
Advantages:
1. It can produce smooth output primitives, with higher resolution, unlike the raster display technology.
2. It is better than a raster display for real-time dynamics such as animation.
3. For transformations, only the endpoints have to be moved to their new positions in a vector display; in a raster display it is necessary to move the endpoints and, at the same time, all the pixels between the endpoints must be scan-converted using an appropriate algorithm, since no prior information on pixels can be reused.
Disadvantage:
1. A vector display cannot fill areas with patterns or manipulate bits.
2. The time required for refreshing an image depends upon its complexity (the more lines, the longer the time); flicker may therefore appear as the complexity of the image increases. The fastest vector displays can draw about 100,000 short vectors in a refresh cycle without flickering.
Display device
1. Fluorescence / Phosphorescence
When the electron beam strikes the phosphor-coated screen, each individual electron is moving with kinetic energy proportional to the acceleration voltage. Some of this energy is dissipated as heat, and the rest is transferred to the electrons of the phosphor atoms, making them jump to higher quantum energy levels. In returning to their previous quantum levels, the excited electrons give up their extra energy in the form of light, as predicted by quantum theory. Any given phosphor has several quantum levels to which its electrons can be excited; light emitted while the phosphor is being struck by the beam is called fluorescence, while the light given off as the excited electrons return to their stable state after the beam is removed is called phosphorescence.
2. Persistence
A phosphor’s persistence is defined as the time from the removal of excitation to the
moment when phosphorescence has decayed to 10% of the initial light output. The range of
persistence of different phosphors can reach many seconds. The phosphors used for
graphics display devices usually have persistence of 10 to 60 microseconds. A phosphor with
low persistence is useful for animation; a high persistence phosphor is useful for highly
complex, static pictures.
3. Refresh Rate
The refresh rate is the number of times per second the image is redrawn to give a feeling of an un-flickering picture; it is usually 50 per second. As the refresh rate decreases, flicker develops because the eye can no longer integrate the individual light impulses coming from a pixel. The refresh rate above which a picture stops flickering and fuses into a steady image is called the critical fusion frequency (CFF).
The factors affecting the CFF are:
Persistence: the longer the persistence, the lower the CFF; the relation between CFF and persistence is nonlinear.
Image intensity: increasing the image intensity increases the CFF, with a nonlinear relationship.
Ambient room light: decreasing the ambient room light increases the CFF, with a nonlinear relationship.
Wavelengths of emitted light
Observer
5. Resolution
Resolution is defined as the maximum number of points that can be displayed horizontally and vertically without overlap on a display device. The factors affecting resolution are as follows.
Spot profile: the spot intensity has a Gaussian distribution, as depicted in fig a. Two adjacent spots on the display device appear distinct as long as their separation (D2) is greater than the diameter of the spot (D1) measured where the intensity has fallen to about 60% of that at the center of the spot, as shown in fig b.
Fig. b
Intensity: as the intensity of the electron beam increases, the spot size on the display tends to increase because energy spreads beyond the point of bombardment. This phenomenon is known as blooming; consequently the resolution decreases.
Thus resolution is not necessarily a constant, and it is not necessarily equal to the resolution of the pixmap allocated in buffer memory.
The electron gun emits a stream of electrons that is accelerated towards the phosphor-coated screen by a high positive voltage applied to the inner side of the tube near the screen. The electrons are forced into a narrow beam by the focusing mechanism and directed towards a particular point on the screen by the deflection mechanism; either mechanism may be electrostatic or magnetic. When the electrons hit the screen, the phosphor emits visible light, and the phosphor's light output then decays exponentially with time. The entire picture must therefore be refreshed (redrawn) many times per second so that the viewer sees an un-flickering picture.
The stream of electrons from the heated cathode is accelerated towards the phosphor-coated screen by a high voltage, typically 15,000 to 20,000 volts. The control-grid voltage determines how many electrons are actually in the electron beam: the more negative the control-grid voltage, the fewer electrons pass through the grid. Since the light output of the phosphor depends upon the number of electrons in the beam, the brightness of the screen is controlled by varying the grid voltage.
Electrons in the beam repel each other, so the beam tends to diverge. The focusing mechanism employs an electron lens (electrostatic or magnetic) to concentrate the electrons into a thin beam that converges to a small point when it hits the phosphor coating. The cross-sectional electron density of the beam is Gaussian (normal), and the intensity of the spot on the phosphor has the same distribution, as shown in the figure. The typical spot size of a high-resolution monochrome CRT is 0.005 inches.
mm (0.61 mm for home TV tubes). The diameter of each electron beam is set at 1.75 times the pitch.
For example:
A colour CRT of 15.5 x 11.6 inches has a pitch of 0.01 inches.
Then beam diameter = 0.01 x 1.75 = 0.018 inches (approximately)
Resolution per inch = 1/0.018 = 55 lines (approximately)
Hence the resolution achievable for the given CRT is 15.5 x 55 = 850 lines by 11.6 x 55 = 638 lines (approximately).
Therefore the resolution of a CRT can be increased by decreasing the pitch. But a small-pitch CRT is difficult to manufacture, because it is difficult to set small triads and the shadow mask is more fragile owing to the many holes in it. Besides, the shadow mask is then more likely to warp from heating by the electrons.
The shadow mask of a colour CRT also decreases the brightness, because only about 20% of the electrons in the beam hit the phosphor; the rest hit the shadow mask.
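The pitch arithmetic above can be captured in a few lines of C; this sketch simply mirrors the worked example, including its rounding at each step (an illustration, not part of the original notes):

#include <stdio.h>

int main(void)
{
    double width = 15.5, height = 11.6;  /* screen size in inches */
    double pitch = 0.01;                 /* triad pitch in inches */
    double beam  = 0.018;                /* 1.75 * pitch, rounded as above */
    double lpi   = 55;                   /* 1 / beam, rounded as above */

    printf("Achievable resolution: about %.0f by %.0f lines\n",
           width * lpi, height * lpi);
    return 0;
}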
Flat Panel Display:
"Flat panel" means a reduction in volume, weight and power requirements compared to a CRT. Current applications include calculators, laptop computers, etc. Flat-panel displays are classified into two types:
a) Emissive displays: devices that convert electrical energy into light, e.g. the plasma panel and the electroluminescent display.
b) Non-emissive displays: these use optical effects to convert sunlight or light from other sources into graphical patterns, e.g. the LCD.
The cells in a glass envelope become luminous when they are electrified through electrodes. With a sufficiently high voltage, some of the atoms in the gas of a cell lose electrons and become ionized, creating an electrically conducting plasma of atoms, free electrons, and ions. This dissociated gas is called a plasma. Collisions of the flowing electrons in the plasma with the inert gas atoms lead to light emission; such light-emitting plasmas are known as glow discharges. When the electrons recombine, energy is released in the form of photons and the gas glows with an orange-red colour.
To turn on a cell, the system adjusts the voltages on the corresponding horizontal and vertical lines. Once a glow discharge starts, it can be maintained by applying a lower sustaining voltage between the electrodes, even after the ionizing voltage is removed. To turn off a cell, the system momentarily decreases the voltage on the appropriate lines below the sustaining voltage.
absorbed by a manganese atom, which then releases the energy as a spot of light, similar to the glowing plasma effect in the plasma panel.
Voice System
The voice system consists of a speech recognizer which analyzes the sound of each speaker. Voice input can be used to initiate graphics operations or to enter data. These systems operate by matching an input against a predefined dictionary of words and phrases: the system must hold a dictionary of frequency patterns for the words, and spoken words are converted into frequency patterns for matching. The microphone is designed to minimize input of background sounds.
Scanners
A scanner converts any printed image of an object into electronic form by shining light onto the image and sensing the intensity of the light's reflection at each point.
Colour scanners use filters to separate the components of colour into the primary additive colours (red, green, blue) at each point.
Red, green and blue are the primary additive colours because they can be combined to create any other colour.
Image scanners translate printed images into an electronic format that can be stored in computer memory.
Software is then used to manipulate the scanned electronic image.
Images are enhanced or manipulated by graphics programs such as Adobe Photoshop.
[Figure: a light source moves across the printed page, and the reflected light falls onto light-sensitive diodes (300 to 600 per inch) which convert the light to electricity]
Data Structure
Data may be a single value or a set of values. Whether it is a single value or a group of values, the data to be processed must be organized in a particular fashion. This organization leads to the structuring of data.
The most recently pushed element can be checked prior to performing a pop operation. A stack is based on the last-in, first-out (LIFO) principle, in which insertion and deletion operations are performed at one end of the stack. The most accessible information in a stack is at the top of the stack, and the least accessible information is at the bottom. If someone tries to delete an item from an empty stack, it causes underflow; if someone tries to add an element to a stack which is already full, it is a case of overflow.
Implementation of Stack:
Stack can be implemented in two ways:
1. Pointer (Linked list)
2. Array
Push operation:
Step 1: Check for overflow
If TOS >= size
Output: "stack overflow" and exit.
Step 2: Increment the pointer value by one
TOS = TOS + 1
Step 3: Perform the insertion
S[TOS] = value
Step 4: Exit
POP operation:
Step 1: Check for underflow
If TOS = 0
Output: "stack underflow" and exit
Step 2: Fetch the value and decrement the pointer by one
Value = S[TOS]
TOS = TOS - 1
Step 3: Return the former top of the stack
Return [value]
Step 4: Exit.
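A minimal array-based sketch of these push and pop steps in C. The array size is an illustrative assumption, and the empty stack is marked here by TOS = -1 with 0-based indexing, rather than TOS = 0 as in the 1-based algorithm above:

#include <stdio.h>
#define SIZE 100

int S[SIZE];
int TOS = -1;   /* empty stack: top index is below the first slot */

/* Push: check for overflow, increment TOS, then store the value. */
void push(int value)
{
    if (TOS >= SIZE - 1) { printf("stack overflow\n"); return; }
    S[++TOS] = value;
}

/* Pop: check for underflow, fetch the value, then decrement TOS. */
int pop(void)
{
    if (TOS < 0) { printf("stack underflow\n"); return -1; }
    return S[TOS--];
}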
Queue:
This is a non-primitive linear data structure used to represent a linear list; it permits deletion at one end of the list, called the front, and insertion at the other end, called the rear. The information in such a list is processed in the same order as it was received, i.e. on a first-in, first-out (FIFO) basis. A queue has two pointers, front and rear, pointing to the front and rear elements of the queue respectively. Consider a queue consisting of n elements and an element value which we have to insert into the queue. The value NULL (0) of the front pointer implies an empty queue.
Circular queue:
Suppose we have an array Q that contains n elements, in which Q1 comes after Qn in the array. When this technique is used to construct a queue, the queue is called a circular queue. In other words, we can say that a queue is called circular when the last room comes just before the first room. The figure given below shows the circular queue.
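A compact sketch of a circular queue in C; the capacity and the convention that front = -1 marks an empty queue are illustrative assumptions:

#include <stdio.h>
#define N 8

int Q[N];
int front = -1, rear = -1;   /* front = -1 marks an empty queue */

/* Insert at the rear; the index wraps around using modulo N. */
void enqueue(int value)
{
    if ((rear + 1) % N == front) { printf("queue overflow\n"); return; }
    if (front == -1) front = 0;           /* first element */
    rear = (rear + 1) % N;
    Q[rear] = value;
}

/* Delete from the front; the index wraps around the same way. */
int dequeue(void)
{
    int value;
    if (front == -1) { printf("queue underflow\n"); return -1; }
    value = Q[front];
    if (front == rear) front = rear = -1; /* queue became empty */
    else front = (front + 1) % N;
    return value;
}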
Linked List
A linked list consists of a series of nodes; the information part of a node may consist of one or more fields. In other words, a linked list consists of a series of structures, each containing one or more contiguous information fields and a pointer to the structure containing its successor. The pointer of the last node of the list contains NULL ('\0'). Each pointer contains the address of the location where the next information is stored. A linked list also contains a list pointer variable called start, which holds the address of the first node in the list; hence an arrow is drawn from start to the first node in diagrams of linked lists. If there is no node in the list, the list is called a NULL list or EMPTY list. The different operations performed on a linked list are:
(1) Insertion:
a. Insert a node at the beginning of the list.
b. Insert a node at the end of the list.
c. Insert a node between two nodes.
(2) Deletion
a. Delete a node at the beginning of the list.
b. Delete a node at the end of the list.
c. Delete a node between two nodes.
(3) Traversing
Travelling from one node to another node in the list.
The links are used to denote the predecessor and successor of a node. The link denoting the predecessor of a node is called the left link, and that denoting its successor the right link. A list with this type of arrangement is called a doubly linked list, as shown in the figure above. It is thus possible to define a doubly linked list as a collection of nodes, each node having three fields:
- a pointer to the previous node (pointer to the predecessor)
- an information field
- a pointer to the next node (pointer to the successor)
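Node declarations for both kinds of list, with a sketch of insertion at the beginning for the singly linked case (the field names are illustrative):

#include <stdio.h>
#include <stdlib.h>

/* Singly linked node: an information field plus a pointer to the successor. */
struct node {
    int info;
    struct node *next;
};

/* Doubly linked node: the three fields listed above. */
struct dnode {
    struct dnode *prev;   /* pointer to the predecessor */
    int info;             /* information field */
    struct dnode *next;   /* pointer to the successor */
};

struct node *start = NULL;   /* NULL start means an empty list */

/* (1)a. Insert a node at the beginning of the list. */
void insert_begin(int value)
{
    struct node *p = (struct node *)malloc(sizeof(struct node));
    p->info = value;
    p->next = start;   /* the old first node becomes the successor */
    start = p;         /* start now points to the new node */
}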
Graphs:
A graph G is defined as a two-tuple G = (V, E),
where
V represents the set of vertices of G and
E represents the set of edges of G.
There exists a mapping from the set of edges to a set of pairs of elements of V. The figure given below shows the different types of graphs.
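As a concrete representation of the edge set E, here is a small adjacency-matrix sketch in C (the graph used is an arbitrary example):

#include <stdio.h>
#define V 4   /* number of vertices */

int main(void)
{
    int adj[V][V] = {0};   /* adj[i][j] = 1 when the edge (i, j) is in E */
    int i, j;

    adj[0][1] = adj[1][0] = 1;   /* undirected edge 0-1 */
    adj[1][2] = adj[2][1] = 1;   /* undirected edge 1-2 */
    adj[2][3] = adj[3][2] = 1;   /* undirected edge 2-3 */

    for (i = 0; i < V; i++) {
        printf("vertex %d is adjacent to:", i);
        for (j = 0; j < V; j++)
            if (adj[i][j]) printf(" %d", j);
        printf("\n");
    }
    return 0;
}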
Chapter 3:
3.1 Line Drawing
1. DDA (Digital Differential Analyzer) Algorithm
The basis of the DDA (Digital Differential Analyzer) method is to take unit steps along one coordinate, say the x-coordinate, and compute the corresponding values along the other coordinate, say the y-coordinate. The unit steps are always taken along the coordinate of greatest change; e.g. if we have dx = 11 and dy = 6, then we take unit steps along x and compute steps of size m along y.
Slope = m = (dy/dx)
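The body of the DDALine function was lost to a page break in the original notes; the following is a standard DDA reconstruction consistent with the call made in main below:

#include<stdio.h>
#include<conio.h>
#include<graphics.h>
#include<stdlib.h>

void DDALine(int x1, int y1, int x2, int y2)
{
    int dx = x2 - x1, dy = y2 - y1, steps, k;
    float xinc, yinc, x = x1, y = y1;

    /* step along the coordinate of greatest change */
    if (abs(dx) > abs(dy))
        steps = abs(dx);
    else
        steps = abs(dy);

    xinc = (float)dx / steps;
    yinc = (float)dy / steps;

    putpixel((int)(x + 0.5), (int)(y + 0.5), WHITE);
    for (k = 0; k < steps; k++) {
        x += xinc;
        y += yinc;
        putpixel((int)(x + 0.5), (int)(y + 0.5), WHITE);
    }
}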
void main(){
int x1,y1,x2,y2;
int gd=DETECT,gm;
initgraph(&gd,&gm,"c:\\tc\\bgi");
printf("Enter the first point of the line: ");
scanf("%d%d",&x1,&y1);
printf("Enter the end point of the line: ");
scanf("%d%d",&x2,&y2);
DDALine(x1, y1, x2, y2);
getch();
closegraph();
}
2. Bresenham's Line Drawing Algorithm
For example, from position (2, 3) we have to choose between (3, 3) and (3, 4); we would like the point that is closer to the original line.
At sample position xk+1 the vertical separations from the mathematical line are labelled
dupper and dlower
The y coordinate on the mathematical line at xk+1 is:
y = m(xk + 1) + b
so:
dlower = y - yk = m(xk + 1) + b - yk
and:
dupper = (yk + 1) - y = yk + 1 - m(xk + 1) - b
We can use these to make a simple decision about which pixel is closer to the mathematical
line
This simple decision is based on the difference between the two pixel positions:
Let's substitute m with Δy/Δx, where Δx and Δy are the differences between the end-points:
Δx(dlower - dupper) = Δx(2m(xk + 1) - 2yk + 2b - 1)
= 2Δy·xk - 2Δx·yk + 2Δy + Δx(2b - 1)
= 2Δy·xk - 2Δx·yk + c
So, a decision parameter pk for the kth step along the line is given by:
pk = Δx(dlower - dupper) = 2Δy·xk - 2Δx·yk + c
The sign of the decision parameter pk is the same as that of dlower - dupper.
If pk is negative (pk < 0), then we choose the lower pixel; otherwise we choose the upper pixel.
Remember coordinate changes occur along the x axis in unit steps so we can do everything
with integer calculations
At step k+1 the decision parameter is given as:
pk+1 = 2Δy·xk+1 - 2Δx·yk+1 + c
Subtracting pk from this:
pk+1 = pk + 2Δy(xk+1 - xk) - 2Δx(yk+1 - yk)
Since xk+1 = xk + 1:
pk+1 = pk + 2Δy - 2Δx(yk+1 - yk)
The first decision parameter is:
p0 = 2Δy - Δx
Note: the algorithm and derivation above assume slopes of less than 1; for other slopes we need to adjust the algorithm slightly.
Comparing this to the DDA algorithm, DDA has the following problems:
– Accumulation of round-off errors can make the pixelated line drift away from what
was intended
– The rounding operations and floating point arithmetic involved are time consuming
/* Bresenham line drawing (the opening lines of this listing were lost to a
   page break; the includes and declarations below are a minimal
   reconstruction consistent with the rest of the code) */
#include<iostream.h>
#include<conio.h>
#include<graphics.h>
void main(){
int x1,y1,x2,y2,dx,dy,dy2,p,k,x,y;
int gd=DETECT,gm;
initgraph(&gd,&gm,"..\\bgi");
cout<<"\n Enter the value for X1 and Y1\t";
cin>>x1>>y1;
cout<<"\n Enter the value for X2 and Y2\t";
cin>>x2>>y2;
cleardevice();
x=x1;               /* start from the first endpoint */
y=y1;
dy=y2-y1;
dx=x2-x1;
dy2=2*dy;           /* precomputed 2*dy */
p=dy2-dx;           /* initial decision parameter p0 = 2*dy - dx */
putpixel(x,y,RED);
for(k=0;k<dx;k++){
if(p<0) {           /* choose the lower pixel: only x advances */
x++;
p=p+dy2;
}
else {              /* choose the upper pixel: x and y advance */
x++;
y++;
p=p+dy2-2*dx;
}
putpixel(x,y,RED);
}
getch();
closegraph();
}
Circle
A circle is the set of points that are all at a given distance r from a centre position. If the centre of the circle is at the origin (0, 0), then the equation of the circle is:
x^2 + y^2 = r^2
where r is the radius of the circle.
So, we can write a simple circle-drawing algorithm by solving the equation for y at unit x intervals, using:
y = ±sqrt(r^2 - x^2)
We could use this equation to calculate the positions of points on the circle circumference by stepping along the x axis in unit steps from xc - r to xc + r and calculating the corresponding y values at each position.
One problem with this approach is that it involves considerable computation at each step; moreover, the spacing between plotted pixel positions is not uniform. We could adjust the spacing by interchanging x and y (stepping along y whenever the slope is greater than 1), but this simply increases the computation and processing required by the algorithm. The calculations are also not very efficient:
– the square (multiply) operations
– the square-root operation – try really hard to avoid these!
We need a more efficient, more accurate solution.
Another way to eliminate the unequal spacing shown in the figure is to calculate points along the circular boundary using polar coordinates r and θ. Expressing the circle equation in parametric polar form yields the pair of equations:
x = xc + r·cos θ
y = yc + r·sin θ
When a display is generated with these equations using a fixed angular step size, the circle is plotted with equally spaced points along the circumference. To reduce calculations, we can use a large angular separation between points along the circumference and connect the points with straight-line segments to approximate the circular path.
For a more continuous boundary on a raster display, we can set the angular step size at 1/r. This plots pixel positions that are approximately one unit apart. Although polar coordinates give equal point spacing, the trigonometric calculations are still time-consuming.
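A minimal sketch of this polar plotting approach under Turbo C's BGI; the step size 1/r comes from the discussion above, while the centre and radius values are illustrative:

#include <graphics.h>
#include <conio.h>
#include <math.h>

void polar_circle(int xc, int yc, int r)
{
    float theta;
    float step = 1.0 / r;                /* about one pixel per step */
    for (theta = 0; theta < 2 * 3.14159265; theta += step)
        putpixel(xc + (int)(r * cos(theta) + 0.5),
                 yc + (int)(r * sin(theta) + 0.5), WHITE);
}

void main()
{
    int gd = DETECT, gm;
    initgraph(&gd, &gm, "..\\bgi");
    polar_circle(320, 240, 100);         /* illustrative centre and radius */
    getch();
    closegraph();
}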
As with lines, there is an incremental algorithm for drawing circles: the midpoint circle algorithm. In the midpoint circle algorithm we use eight-way symmetry, so we only ever calculate the points for the top-right eighth of a circle, and then use symmetry to obtain the rest of the points.
The circle function is:
fcirc(x, y) = x^2 + y^2 - r^2
and the decision parameter is the circle function evaluated at the midpoint between the two candidate pixels:
pk = fcirc(xk + 1, yk - 1/2)
   = (xk + 1)^2 + (yk - 1/2)^2 - r^2
If pk < 0, the midpoint is inside the circle and the pixel at yk is closer to the circle; otherwise the midpoint is outside and yk - 1 is closer.
To ensure things are as efficient as possible we can do all of our calculations incrementally
First consider:
pk+1 = fcirc(xk+1 + 1, yk+1 - 1/2)
     = [(xk + 1) + 1]^2 + (yk+1 - 1/2)^2 - r^2
or:
pk+1 = pk + 2(xk + 1) + (yk+1^2 - yk^2) - (yk+1 - yk) + 1
with the initial decision parameter p0 = 5/4 - r.
This method is more easily applied to other conics; and for an integer circle radius, the
midpoint approach generates the same pixel positions as the Bresenham circle algorithm.
/* Program for plotting a circle using the midpoint circle drawing algorithm */
#include<iostream.h>
#include<conio.h>
#include<graphics.h>
#include<stdio.h>
void main(){
float xc,yc,pk,xk,yk,xk1,yk1,r;
int driver,mode;
printf("Enter the centre of the circle: ");
scanf("%f%f",&xc,&yc);
printf("Enter the radius of the circle: ");
scanf("%f",&r);
driver=DETECT;
initgraph(&driver,&mode,"..\\bgi");
pk=5.0/4-r;          /* initial decision parameter p0 = 5/4 - r */
xk=0;
yk=r;
while(xk<=yk) {
if(pk<0) {           /* midpoint inside the circle: keep yk */
xk1=xk+1;
yk1=yk;
pk=pk+2*xk1+1;
}
else {               /* midpoint outside: step yk down */
xk1=xk+1;
yk1=yk-1;
pk=pk+2*xk1+1-2*yk1;
}
/* plot the computed point in all eight octants */
putpixel(xc+xk,yc+yk,1);
putpixel(xc-xk,yc+yk,1);
putpixel(xc+xk,yc-yk,1);
putpixel(xc-xk,yc-yk,1);
putpixel(xc+yk,yc+xk,1);
putpixel(xc-yk,yc+xk,1);
putpixel(xc+yk,yc-xk,1);
putpixel(xc-yk,yc-xk,1);
xk=xk1;
yk=yk1;
}
getch();
closegraph();
}
Ellipse
// program to draw the ellipse by mid point
#include<stdio.h>
#include<conio.h>
#include<graphics.h>
#include<math.h>
void main(void){
float x,y,xk,yk,p;
int xc,yc,rx,ry;
int m,d=DETECT;
initgraph(&d,&m,"..\\bgi");
printf("\n Enter the center coordinate (x,y) for ellipse:");
scanf("%d%d",&xc,&yc);   /* read the centre into xc,yc */
printf("\n Enter the radius to x and y axis:");
scanf("%d%d",&rx,&ry);
cleardevice();
xk=0;
yk=ry;
x=rx*rx;   /* x holds rx squared */
y=ry*ry;   /* y holds ry squared */
p=(y-x*ry+(0.25)*x);   /* initial decision parameter for region 1 */
while(2*y*xk<2*x*yk) {   /* region 1: slope magnitude less than 1 */
xk=xk+1;
if(p<0) {
yk=yk;
p=(p+2*y*xk+y);
}
else {
yk=yk-1;
p=(p+2*y*xk-2*x*yk+y);
}
putpixel(xc+xk,yc+yk,1);
putpixel(xc-xk,yc+yk,1);
putpixel(xc+xk,yc-yk,1);
putpixel(xc-xk,yc-yk,1);
}
p=(y*(xk+0.5)*(xk+0.5)+x*(yk-1)*(yk-1)-x*y);   /* initial decision parameter for region 2 */
while(yk>0) {   /* region 2: continue until the curve meets the x axis */
yk=yk-1;
if(p>0) {
xk=xk;
p=p+x-2*x*yk;
}
else {
xk=xk+1;
p=p+2*y*xk-2*x*yk+x;
}
putpixel(xc+xk,yc+yk,1);
putpixel(xc-xk,yc+yk,1);
putpixel(xc+xk,yc-yk,1);
putpixel(xc-xk,yc-yk,1);
}
getch();
closegraph();
}
Clipping
Clipping refers to the removal of part of a scene. Internal clipping removes the parts of a picture outside a given region; external clipping removes the parts inside a region. We'll explore internal clipping, but external clipping can almost always be accomplished as a by-product.
A line-clipping algorithm takes as input the two endpoints of a line segment and returns one (or more) line segments. A polygon clipper takes as input the vertices of a polygon and returns one (or more) polygons. There are several clipping algorithms; we'll study the Cohen-Sutherland line-clipping algorithm to learn some basic concepts.
Cohen-Sutherland Line Clipping
The Cohen-Sutherland algorithm clips a line to an upright rectangular window. It is an application of triage: make the simple cases fast. The algorithm extends the window boundaries to define 9 regions: top-left, top-center, top-right, center-left, center, center-right, bottom-left, bottom-center, and bottom-right.
See figure 1 below. These 9 regions can be uniquely identified using a 4-bit code, often called an outcode or region code. We'll use the order left, right, bottom, top (LRBT):
The Left (first) bit is set to 1 when p lies to the left of the window (i.e. x < xwmin)
The Right (second) bit is set to 1 when p lies to the right of the window (i.e. x > xwmax)
The Bottom (third) bit is set to 1 when p lies below the window (i.e. y < ywmin)
The Top (fourth) bit is set to 1 when p lies above the window (i.e. y > ywmax)
The LRBT (Left, Right, Bottom, Top) order is somewhat arbitrary, but once an order is chosen we must stick with it. Note that points on the clipping-window edge are considered inside (their bits are left at 0).
Figure 1: The nine regions defined by an up-right window and their Region codes.
Given a line segment with end points p0 = (x0, y0) and p1 = (x1, y1), we first compute the region codes of both endpoints.
Suppose, for example, the right edge of the window is x = 1 and we need to find y: the y value of the intersection is found by substituting x = 1 into the equation of the line from p0 to p1.
The triage tests use the two region codes:
o If the bitwise OR of the two codes is 0000, both endpoints lie inside the window: the line is trivially accepted.
o If the bitwise AND of the two codes is not 0000, both endpoints lie outside the same window edge: the line is trivially rejected.
o Otherwise the line may cross the window: it is clipped against each window edge whose bit is set, and the test is repeated on the shortened segment.
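A sketch of the region-code computation and the trivial accept/reject tests in C; the window bounds and the function names are illustrative, and the bit positions follow the LRBT convention above:

#define LEFT   1   /* first bit */
#define RIGHT  2   /* second bit */
#define BOTTOM 4   /* third bit */
#define TOP    8   /* fourth bit */

float xwmin = 0, xwmax = 100, ywmin = 0, ywmax = 100;  /* assumed window */

/* Build the 4-bit region code for the point (x, y). */
int outcode(float x, float y)
{
    int code = 0;
    if (x < xwmin) code |= LEFT;
    else if (x > xwmax) code |= RIGHT;
    if (y < ywmin) code |= BOTTOM;
    else if (y > ywmax) code |= TOP;
    return code;
}

/* Triage for the segment p0-p1:
   returns 1 = trivially accept, -1 = trivially reject, 0 = must clip. */
int triage(float x0, float y0, float x1, float y1)
{
    int c0 = outcode(x0, y0), c1 = outcode(x1, y1);
    if ((c0 | c1) == 0) return 1;    /* both endpoints inside */
    if ((c0 & c1) != 0) return -1;   /* both outside a common edge */
    return 0;                        /* needs clipping */
}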
Clipping Polygons
An algorithm that clips a polygon must deal with many different cases. One case is particularly noteworthy: a concave polygon may be clipped into two separate polygons. All in all, the task of clipping seems rather complex. Each edge of the polygon must be tested against each edge of the clip rectangle; new edges must be added, and existing edges must be discarded, retained, or divided. Multiple polygons may result from clipping a single polygon. We need an organized way to deal with all these cases. The following example illustrates a simple case of polygon clipping.
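The figure for this example was lost to a page break. As one organized approach (not named in these notes), the Sutherland-Hodgman method clips the polygon against one window edge at a time; below is a sketch of the single-edge step against the right edge x = xmax, with the test polygon chosen arbitrarily:

#include <stdio.h>

typedef struct { float x, y; } Point;

/* Clip the polygon in[0..n-1] against the edge x <= xmax, writing the
   result to out[]; returns the new vertex count. A full clipper runs
   this step once for each window edge in turn. */
int clip_right(Point in[], int n, Point out[], float xmax)
{
    int i, m = 0;
    for (i = 0; i < n; i++) {
        Point s = in[i], p = in[(i + 1) % n];   /* polygon edge s-p */
        int s_in = (s.x <= xmax), p_in = (p.x <= xmax);
        if (s_in && p_in)            /* both inside: keep p */
            out[m++] = p;
        else if (s_in || p_in) {     /* the edge crosses the clip line */
            Point q;
            q.x = xmax;
            q.y = s.y + (p.y - s.y) * (xmax - s.x) / (p.x - s.x);
            out[m++] = q;            /* keep the intersection point */
            if (p_in) out[m++] = p;  /* entering edge: also keep p */
        }
        /* both outside: keep nothing */
    }
    return m;
}

int main(void)
{
    Point tri[3] = { {0, 0}, {8, 0}, {4, 6} };
    Point out[6];
    int i, m = clip_right(tri, 3, out, 5.0f);
    for (i = 0; i < m; i++)
        printf("(%.1f, %.1f)\n", out[i].x, out[i].y);
    return 0;
}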
2D Transformations
Transformations are a fundamental part of computer graphics. Transformations are used to
position objects, to shape objects, to change viewing positions, and even to change how
something is viewed (e.g. the type of perspective that is used).
These basic transformations can also be combined to obtain more complex transformations.
In order to make the representation of these complex transformations easier to understand and
more efficient, we introduce the idea of homogeneous coordinates.
Representation of Points/Objects
A point p in 2D is represented as a pair of numbers: p = (x, y), where x is the x-coordinate of the point p and y is the y-coordinate of p. 2D objects are often represented as a set of points (vertices), {p1, p2, ..., pn}, and an associated set of edges {e1, e2, ..., em}. An edge is defined as a pair of points e = {pi, pj}. What are the points and edges of the triangle below?
Page 60 of
Compiled by: Er. Ganesh Ram Suwal
Translations
Translation is repositioning an object along a straight-line path. Assume you are given a point at (x, y) = (2, 1). Where will the point be if you move it 3 units to the right and 1 unit up? Answer: (x', y') = (5, 2). How was this obtained? (x', y') = (x + 3, y + 1). That is, to move a point by some amount dx to the right and dy up, you must add dx to the x-coordinate and add dy to the y-coordinate.
What was the required transformation to move the green triangle to the red triangle? Here the green triangle is represented by 3 points:
triangle = { p1 = (1, 0), p2 = (2, 0), p3 = (1.5, 2) }
What are the points and edges in this picture of a house? What transformation is required to move this house so that the peak of the roof is at the origin? What is required to move the house as shown in the animation?
Page 61 of
Compiled by: Er. Ganesh Ram Suwal
q = p + t = (x, y) + (dx, dy) = (x + dx, y + dy)
Scaling
Suppose we want to double the size of a 2-D object. What do we mean by double? Double in
size, width only, height only, along some line only? When we talk about scaling we usually
mean some amount of scaling along each dimension. That is, we must specify how much to
change the size along each dimension. Below we see a triangle and a house that have been
doubled in both width and height (note, the area is more than doubled).
The scaling for the x dimension does not have to be the same as the y dimension. If these are
different, then the object is distorted. What is the scaling in each dimension of the pictures
below?
And if we double the size, where is the resulting object? In the pictures above, the scaled object is always shifted to the right. This is because it is scaled with respect to the origin: the point at the origin is left fixed. Thus scaling by more than 1 moves the object away from the origin, and scaling by less than 1 moves the object toward the origin. This can be seen in the animation below.
This is a consequence of how basic scaling is done: the objects above have been scaled simply by multiplying each of their points by the appropriate scaling factor. For example, the point p = (1.5, 2) scaled by 2 along x and 0.5 along y gives the new point
q = (2*1.5, 0.5*2) = (3, 1).
Matrix/Vector Representation of Scaling
Scaling transformations are represented by matrices. For example, the above scaling of 2 and 0.5 is represented as the matrix:
s = | 2   0  |
    | 0  0.5 |
Rotation
Below, we see objects that have been rotated by 25 degrees.
Again, we see that basic rotations are with respect to the origin:
Shear
Combining Transformations
We saw that the basic scaling and rotation transformations are always performed with respect to the origin. To scale or rotate about a particular point (the fixed point), we must first translate the object so that the fixed point is at the origin. We then perform the scaling or rotation, and then
apply the inverse of the original translation to move the fixed point back to its original position. For example, if we want to scale the triangle by 2 in each direction about the point fp = (1.5, 1), we first translate all the points of the triangle by T = (-1.5, -1), scale by 2 (S), and then translate back by -T = (1.5, 1). Mathematically this looks like
q = S(p + T) + (-T) = S(p - fp) + fp
Order Matters!
Notice the order in which these transformations are performed. The first (rightmost)
transformation is T and the last (leftmost) is -T. If you apply these transformations in a
different order then you will get very different results. For example, what happens when you
first apply T followed by -T followed by S? Here T and -T cancel each other out and you are
simply left with S
Sometimes (but be careful) order does not matter. For example, if you apply multiple 2D rotations, order makes no difference:
R1 R2 = R2 R1
But this will not necessarily be true in 3D!!
Homogeneous Coordinates
In general, when you want to perform a complex transformation, you usually build it by combining a number of basic transformations. The above equation for q, however, is awkward to read, because scaling is done by matrix multiplication while translation is done by vector addition. In order to represent all transformations in the same form, computer scientists have devised what are called homogeneous coordinates. Do not try to apply any exotic interpretation to them: they are simply a mathematical trick to make the representation more consistent and easier to use.
Homogeneous coordinates (HC) add an extra virtual dimension. Thus 2D HC are actually 3D
and 3D HC are 4D. Consider a 2D point p = (x,y). In HC, we represent p as p = (x,y,1). An
extra coordinate is added whose value is always 1. This may seem odd but it allows us to now
represent translations as matrix multiplication instead of as vector addition. A translation (dx,
dy) which would normally be performed as q = (x,y) + (dx, dy) now is written as
q = T p = | 1 0 dx | | x |   | x + dx |
          | 0 1 dy | | y | = | y + dy |
          | 0 0 1  | | 1 |   |   1    |
Now, we can write the scaling about a fixed point as simply a matrix multiplication:
q = (-T) S T p = A p,
where A = (-T) S T
The matrix A can be calculated once and then applied to all the points in the object. This is
much more efficient than our previous representation. It is also easier to identify the
transformations and their order when everything is in the form of matrix multiplication.
The matrix for scaling in HC is
S = | sx  0  0 |
    |  0  sy 0 |
    |  0  0  1 |
and the matrix for rotation by an angle θ about the origin is
R = | cos θ  -sin θ  0 |
    | sin θ   cos θ  0 |
    |   0       0    1 |
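The top of the following listing (headers, global declarations and the matrixmult helper) was lost to a page break. The lines below are a minimal reconstruction consistent with how the rest of the program uses these names; the triangle coordinates and the transformation parameters tx, ty, xr, yr, sx, sy, sh and angle are assumptions chosen to match the menu text in main.

#include<stdio.h>
#include<conio.h>
#include<graphics.h>
#include<math.h>
#include<stdlib.h>
float a[3][3], b[3][3], c[3][3];
int x1=50, y1=50, x2=150, y2=50, x3=100, y3=150;   /* assumed triangle */
float tx=100, ty=50;                 /* translation, as in the menu text */
float xr=0, yr=0;                    /* fixed point for rotation and scaling */
float sx=2, sy=2, sh=0.5;            /* scale factors and shear, as in the menu */
float angle=90*3.1416/180;           /* 90 degrees, converted to radians */
/* Multiply the 3x3 matrices p and q, leaving the product in the global c. */
void matrixmult(float p[3][3], float q[3][3]) {
    int i,j,k;
    for(i=0;i<3;i++)
        for(j=0;j<3;j++) {
            c[i][j]=0;
            for(k=0;k<3;k++)
                c[i][j] += p[i][k]*q[k][j];
        }
}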
int i,j;
void ident(float a[3][3]) {
for(i=0;i<3;i++)
for(j=0;j<3;j++) {
if(i==j)
a[i][j] = 1;
else
a[i][j] = 0;
}
}
void matb() {
b[0][0] =x1; b[0][1] =x2; b[0][2] =x3;
b[1][0] =y1; b[1][1] =y2; b[1][2] =y3;
b[2][0] =1; b[2][1] =1; b[2][2] =1;
}
void show() {
setcolor(WHITE);
line(320+c[0][0],240-c[1][0],320+c[0][1],240-c[1][1]);
line(320+c[0][1],240-c[1][1],320+c[0][2],240-c[1][2]);
line(320+c[0][2],240-c[1][2],320+c[0][0],240-c[1][0]);
}
void trans() {
ident(a);
a[0][2] = tx;
a[1][2] = ty;
matb();
matrixmult(a,b);
show();
}
void rotate(){
ident(a);
a[0][2] = xr;
a[1][2] = yr;
ident(b);
b[0][0] = cos(angle);
b[1][0] = sin(angle);
b[0][1] = -sin(angle);
b[1][1] = cos(angle);
matrixmult(a,b);
for(i =0;i<3;i++)
for(j=0;j<3;j++)
a[i][j] = c[i][j];
ident(b);
b[0][2] = -xr;
b[1][2] = -yr;
matrixmult(a,b);
for(i =0;i<3;i++)
for(j=0;j<3;j++)
a[i][j] = c[i][j];
matb();
matrixmult(a,b);
show();
}
void triangle() {
b[0][0] = 0; b[0][1] = 100; b[0][2] = 0;
b[1][0] = 0; b[1][1] = 100; b[1][2] = 100;
b[2][0] = 1; b[2][1] = 1; b[2][2] = 1;
}
void reflecty() {
ident(a);
a[0][0] = -1;
matb();
matrixmult(a,b);
show();
}
void scale() {
ident(a);
a[0][0] = sx;
a[0][2] = xr*(1-sx);
a[1][1] = sy;
a[1][2] = yr*(1-sy);
matb();
matrixmult(a,b);
show();
}
void sheary() {
ident(a);
a[1][0] = sh;
triangle();
setcolor(RED);
line(320+b[0][0],240-b[1][0],320+b[0][1],240-b[1][1]);
line(320+b[0][1],240-b[1][1],320+b[0][2],240-b[1][2]);
line(320+b[0][2],240-b[1][2],320+b[0][0],240-b[1][0]);
matrixmult(a,b);
show();
}
void first() {
line(0,240,640,240);
line(320,0,320,480);
line(320+x1,240-y1,320+x2,240-y2);
line(320+x2,240-y2,320+x3,240-y3);
line(320+x3,240-y3,320+x1,240-y1);
}
void main() {
int gd=DETECT,gm;
int choice;
while(1) {
clrscr();
printf("\n 1. Translation \n 2. Rotation \n 3. Reflection along Y\n 4. Scaling \n 5.
Shearing along Y \n 6. Exit ");
printf("\n\n Enter your choice :");
scanf("%d",&choice);
initgraph(&gd,&gm,"d:\\tc\\bgi");
setcolor(BLUE);
switch(choice) {
case 1:
first();
trans();
outtextxy(330,300,"Translation : tx = 100 , ty = 50");
break;
case 2:
first();
rotate();
outtextxy(330,300,"Rotation : angle = 90");
break;
case 3:
first();
reflecty();
outtextxy(330,300,"Reflection along Y");
break;
case 4:
first();
scale();
outtextxy(330,300,"Scaling : sx = 2 , sy = 2");
break;
case 5:
line(0,240,640,240);
line(320,0,320,480);
sheary();
outtextxy(330,300,"Shearing along Y : sh = 0.5");
break;
case 6:
exit(0);
default:
printf("Wrong choice");
}
getch();
closegraph();
}
}
Chapter 4
Graphical Software
Graphics software is used to create images as well as to process images to make them realistic. There are two types of graphics software:
1. General programming packages
2. Special-purpose application packages
Special-purpose application packages are designed for non-programmers, so that the user can generate displays without worrying about how the graphics operations work. Examples: Paintbrush, CAD (Computer-Aided Design) packages.
DirectX
DirectX is a set of development libraries for high-performance games under Windows.
It consists of:
a. DirectDraw: 2D graphics programming
b. DirectSound: allows mixing of sounds
c. DirectPlay: allows multiplayer games to connect via modems, LANs, etc.
d. DirectInput: handles input from joysticks and various other peripherals
e. Direct3D: 3D graphics programming
4. Image data format: it specifies the order in which pixel values are stored in the image data section, for example left to right, top to bottom. The values of the image data may be compressed using a compression algorithm, e.g. run-length encoding (RLE).
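Run-length encoding replaces each run of identical values with a (count, value) pair. A minimal sketch in C; the byte-pair output format is an illustrative choice, not a specific file format:

#include <stdio.h>

/* Encode src[0..n-1] as (count, value) byte pairs in dst;
   returns the encoded length. Runs are capped at 255. */
int rle_encode(const unsigned char *src, int n, unsigned char *dst)
{
    int i = 0, out = 0;
    while (i < n) {
        unsigned char value = src[i];
        int count = 1;
        while (i + count < n && src[i + count] == value && count < 255)
            count++;
        dst[out++] = (unsigned char)count;
        dst[out++] = value;
        i += count;
    }
    return out;
}

int main(void)
{
    unsigned char row[] = {7, 7, 7, 7, 0, 0, 9, 9, 9};   /* one scan line */
    unsigned char enc[32];
    int i, len = rle_encode(row, 9, enc);
    for (i = 0; i < len; i += 2)
        printf("(%d x %d) ", enc[i], enc[i + 1]);   /* (4 x 7) (2 x 0) (3 x 9) */
    printf("\n");
    return 0;
}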
JPEG-compressed images are often stored in a file format called JFIF (JPEG File Interchange Format), which most people refer to simply as JPEG.
JPEG images can have millions of colours. JPEG uses lossy compression to make images smaller in size, so there is some loss of image quality.
Graphics software:
1. Paint program: works with bitmap images.
2. Photo-manipulation program: works with bitmap images and is widely used to edit digitized photographs.
5. Animation:
Computers are used to create animation for use in various fields, including games and movie compositing, allowing game makers and film makers to add characters and objects to scenes that did not originally contain them.
Error Diagnostics:
Programming errors often remain undetected until an attempt is made to compile the program, once the compile command has been issued. At that point the presence of certain errors becomes readily apparent, since these errors prevent the program from compiling successfully. Some particularly common errors of this type are improperly declared constants and variables, references to undeclared variables, and incorrect punctuation. Such errors are referred to as syntactical (or grammatical) errors. Most versions of a C compiler will generate a diagnostic message when a syntactical error has been detected (the compiler usually comes to an interrupt when this happens). These diagnostic messages are not always completely straightforward in their meaning, but they are nevertheless helpful in identifying the nature and location of the error.
Logical Debugging:
Syntactical errors and certain types of logical errors will cause diagnostic messages to be
generated when compiling or executing a program. Errors of these types are easy to find
and correct. Some types of logical errors can be much more difficult to detect, however,
since the output resulting from a logically incorrect program may appear to be error free.
Moreover, logical errors are often hard to find even when they are known to exist (as, for
example, when the computed output is obviously incorrect).
Chapter 5 & 7
Project Development
Project
A project is a set of activities with a definite start time and end time. It has a
life cycle. All projects operate within constraints of time, cost and quality performance.
#Definition
According to Harold Kerzner:
“A project is any series of activities and tasks that
have specific objectives to be completed within certain specifications, and
have defined start and end dates.”
Every project has specific objectives. It is the project people who make or break a project. It
works within constraints of time, cost and quality performance in a dynamic environment.
3. Implementation Phase
4. Termination Phase
Project Management
Project management is the discipline of planning, organizing, securing and managing
resources to bring about the successful completion of specific project goals and objectives.
Project Planning
Project planning is a decision-making task, as it involves choosing among alternatives. One of
the objectives of project planning is to completely define all work required, so that it will
be readily identifiable to each project participant.
Definition:
Project planning is defined as developing the plan in the required level of details with
accompanying milestones and the use of available tools for preparing and monitoring the
plan.
It is a rational determination of how to initiate, sustain and terminate a project.
Thus we can define project planning as a decision based on futurity.
Figure: Project planning hierarchy, from the Organization Mission down through Project
Objectives, Project Goals, Project Strategy, Project Team Work & Organization Structure,
Style, and Project Resources.
2. Establishment of objectives
On the basis of environmental conditions, the goals of the project are laid down. They are
formulated in all key areas of project operations, and should be laid down in precise and
specific terms so that they can be comprehended by everyone concerned with the project.
7. Assignment of responsibility
Each activity of the project should be allocated to a responsibility center: an individual, a
unit or a department.
1. Network Analysis
It provides the framework for defining the activities to be done in the project, integrating
the activities in a logical time sequence and dynamic control over the progress of the project
plan. The best techniques for network analysis are
1) PERT (Program Evaluation and Review Technique)
2) CPM (Critical Path Method)
PERT
The various activities of the project are identified; a work breakdown structure is used for
this purpose.
The order of precedence is determined: the activities that must be completed before others
can start are identified. This is shown in a network diagram.
Time estimates are made for each activity.
Figure: PERT Network (the numbered circles 1 to 6 represent events; the arrows connecting
them represent activities)
CPM
This technique is used for project planning, sequencing and control where the emphasis is
on optimizing resource allocation and minimizing overall cost for a given execution time.
Money
Information
3. Financial Analysis
It studies the financial sustainability of the project
Capital requirements
Source of funds
Projected cash flow
Account reporting system
Project profitability
1. Project Manager
The project manager (PM) integrates people from various functional areas to achieve
specific project goals.
The PM provides motivation to the project members and employees.
The PM performs the project compilation.
The PM can provide necessary training to employees.
The PM should have leadership capability.
The PM is an expert: he/she divides the whole project into modules and assigns each
module to a group of employees.
At the end of the project he/she integrates all the modules to form a complete package.
2. Project Team
The project team is a group of people from different fields, led by the project manager.
Project members are the knowledgeable people who actually do the project work.
[For more detail, please go through Project Management and Organization and Management
by Govinda Ram Agrawal.]
Chapter 6:
Three Dimensional
Visible Surface Detection or Hidden surface
We must determine what is visible within a scene from a chosen viewing position. For 3D
worlds this is known as visible surface detection or hidden surface elimination.
Visible surface detection algorithms are broadly classified as:
– Object Space Methods: Compares objects and parts of objects to each other within
the scene definition to determine which surfaces are visible
– Image Space Methods: Visibility is decided point-by-point at each pixel position on the
projection plane
Image space methods are by far the more common.
Back-Face Detection
The simplest thing we can do is find the faces on the backs of polyhedra and discard them.
We know from before that a point (x, y, z) is behind a polygon surface if
Ax + By + Cz + D < 0, where A, B, C and D are the plane parameters for the surface. This
can actually be made even easier if we organise things to suit ourselves.
Ensure we have a right-handed system with the viewing direction along the negative z-axis.
Now we can simply say that if the z component of the polygon’s normal is less than zero, the
surface cannot be seen.
In general, back-face detection can be expected to eliminate about half of the polygon
surfaces in a scene from further visibility tests. More complicated surfaces, though, scupper
us! We need better techniques to handle these kinds of situations.
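As a minimal C sketch of this test (assuming a right-handed system viewed along the
negative z-axis; the Vec3 type and function names are illustrative, not from the notes):

#include <stdio.h>

typedef struct { double x, y, z; } Vec3;

/* Cross product of two edge vectors gives the polygon normal (A, B, C). */
Vec3 cross(Vec3 a, Vec3 b)
{
    Vec3 n;
    n.x = a.y * b.z - a.z * b.y;
    n.y = a.z * b.x - a.x * b.z;
    n.z = a.x * b.y - a.y * b.x;
    return n;
}

/* Returns 1 when the face cannot be seen: the z component of the
   polygon normal (the C plane parameter) is less than zero. */
int is_back_face(Vec3 v0, Vec3 v1, Vec3 v2)
{
    Vec3 e1 = { v1.x - v0.x, v1.y - v0.y, v1.z - v0.z };
    Vec3 e2 = { v2.x - v0.x, v2.y - v0.y, v2.z - v0.z };
    return cross(e1, e2).z < 0.0;
}

int main(void)
{
    Vec3 a = {0,0,0}, b = {1,0,0}, c = {0,1,0};
    printf("back face? %d\n", is_back_face(a, b, c));
    return 0;
}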
Depth-Buffer Method
Compares surface depth values throughout a scene for each pixel position on the projection
plane. It is usually applied to scenes containing only polygons. As depth values can be
computed easily, it tends to be very fast. It is also often called the z-buffer method.
1. Initialise the depth buffer and frame buffer so that for all buffer positions (x, y)
depthBuff(x, y) = 1.0
frameBuff(x, y) = bgColour
2. Process each polygon in a scene, one at a time
– For each projected (x, y) pixel position of a polygon, calculate the depth z (if
not already known)
– If z < depthBuff(x, y), compute the surface colour at that position and set
depthBuff(x, y) = z
frameBuff(x, y) = surfColour(x, y)
After all surfaces have been processed, depthBuff and frameBuff will store the correct values.
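A minimal C sketch of the two steps above, assuming depths are normalised to the range
0..1 (the buffer sizes and function names are illustrative):

#define W 640
#define H 480

static double depthBuff[H][W];
static int    frameBuff[H][W];

/* Step 1: every position starts at the far plane (depth 1.0) and the
   background colour. */
void init_buffers(int bgColour)
{
    int x, y;
    for (y = 0; y < H; y++)
        for (x = 0; x < W; x++) {
            depthBuff[y][x] = 1.0;
            frameBuff[y][x] = bgColour;
        }
}

/* Step 2: called for each projected pixel of every polygon; the pixel
   is kept only if it is nearer than what is already stored there. */
void plot_if_nearer(int x, int y, double z, int surfColour)
{
    if (z < depthBuff[y][x]) {
        depthBuff[y][x] = z;
        frameBuff[y][x] = surfColour;
    }
}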
At any surface position the depth is calculated from the plane equation as:
z = -(Ax + By + D) / C
For any scan line, adjacent x positions differ by ±1, as do adjacent y positions. The depth at
(x + 1, y) can therefore be obtained from the depth z at (x, y) as:
z' = -(A(x + 1) + By + D) / C = z - A/C
The depth-buffer algorithm proceeds by starting at the top vertex of the polygon. Then we
recursively calculate the x-coordinate values down a left edge of the polygon. The x value
for the beginning position on each scan line can be calculated from that on the previous
one as:
x' = x - 1/m
where m is the slope of the edge.
Depth values down the edge being considered are then calculated using:
z' = z + (A/m + B) / C
A-Buffer Method
The A-buffer method is an extension of the depth-buffer method. It is a visibility detection
method developed at Lucasfilm Studios for the rendering system REYES (Renders Everything
You Ever Saw).
The A-buffer expands on the depth-buffer method to allow transparencies. The key data
structure in the A-buffer is the accumulation buffer.
If depth >= 0, the surface data field stores the depth of that pixel position, as before. If
depth < 0, the data field instead stores a pointer to a linked list of surface data.
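A C sketch of the accumulation-buffer cell this describes (the field names and layout are
illustrative assumptions, not a fixed format):

/* One surface fragment in the linked list used when a pixel is
   covered by more than one (possibly transparent) surface. */
typedef struct SurfData {
    double depth;           /* depth of this fragment            */
    double opacity;         /* 0.0 transparent .. 1.0 opaque     */
    int    colour;
    struct SurfData *next;  /* next surface behind this one      */
} SurfData;

/* A-buffer cell: depth >= 0 means a single surface is stored, as in
   the ordinary depth buffer; depth < 0 means the data field holds a
   pointer to a linked list of surface data instead. */
typedef struct {
    double depth;
    union {
        int       colour;    /* used when depth >= 0 */
        SurfData *surfList;  /* used when depth <  0 */
    } data;
} ABufferCell;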
Scan-Line Method
An image space method for identifying visible surfaces. It computes and compares depth
values along the various scan lines for a scene.
Two important tables are maintained:
– The edge table
– The surface facet table
The edge table contains:
– Coordinate endpoints of each line in the scene
To facilitate the search for surfaces crossing a given scan line, an active list of edges is
formed for each scan line as it is processed. The active list stores only those edges that
cross the scan line, in order of increasing x.
A flag is also set for each surface to indicate whether a position along a scan line is inside
or outside the surface.
Pixel positions across each scan line are processed from left to right. At the left intersection
with a surface, the surface's flag is turned on; at the right intersection point, it is turned
off. We only need to perform depth calculations when more than one surface has its flag
turned on at a given scan-line position.
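A minimal C sketch of this flag bookkeeping (the array size and names are illustrative):

#define MAXSURF 32

static int surfFlag[MAXSURF];  /* 1 while the scan line is inside surface s */
static int activeCount = 0;    /* number of flags currently turned on       */

/* Called at each left or right edge intersection as the scan line is
   processed from left to right. */
void toggle_surface(int s)
{
    surfFlag[s] = !surfFlag[s];
    activeCount += surfFlag[s] ? 1 : -1;
}

/* Depth comparisons are only needed while more than one surface has
   its flag turned on. */
int needs_depth_test(void)
{
    return activeCount > 1;
}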
We need to make sure that we only draw visible surfaces when rendering scenes.
There are a number of techniques for doing this, such as:
– Back-face detection
– Depth-buffer method
– A-buffer method
– Scan-line method
Next time we will look at some more techniques and think about which of them are
suitable for which situations.
Illumination models
Given the parameters:
• the optical properties of surfaces (opaque/transparent, shiny/dull, surface-texture);
• the relative positions of the surfaces in a scene;
• the color and positions of the light sources;
• the position and orientation of the viewing plane.
Illumination models calculate the intensity projected from a particular surface point in a
specified viewing direction.
Light Sources
• When we view an opaque nonluminous object, we see reflected light from the
surfaces of the object.
• The total reflected light is the sum of the contributions from light sources and other
reflecting surfaces in the scene.
• Light sources = light-emitting sources.
• Reflecting surfaces = light-reflecting sources.
Fig. 1: Light viewed from an opaque surface is in general a combination of light reflected
from a light source and light reflected from other surfaces.
• The rays emitted from a point light radially diverge from the source.
• Point sources are abstraction of real-world sources of light such as light bulbs,
candles, or the sun.
• The light originates at a particular place; it comes from a particular direction over a
particular distance.
• Surfaces facing towards and positioned near the light source will receive more light
than those facing away from, or far removed from, the source, since the emitted light
follows radially diverging paths, as shown in fig. 2.
Distributed Light Source
• It is as if the light source was infinitely far away from the surface that it is
illuminating.
• Sunlight is an example of an infinite light source.
Materials
• When light is incident on an opaque surface, part of it is reflected and part is
absorbed.
• Shiny materials reflect more of the incident light, and dull surfaces absorb more of
the incident light.
For an illuminated transparent surface, some of the incident light will be reflected and some
will be transmitted through the material.
Diffuse reflection
• Grainy surfaces scatter the reflected light in all directions. This scattered light is
called diffuse reflection.
• The surface appears equally bright from all viewing directions.
What we call the color of an object is the color of the diffuse reflection of the incident light.
Specular reflection
Light sources create highlights, or bright spots, called specular reflection. These are more
pronounced on shiny surfaces than on dull ones.
• The amount of ambient light incident on each object is a constant for all surfaces and
over all directions.
• The amount of ambient light that is reflected by an object is constant for all surfaces
and over all directions, but the intensity of the reflected light for each surface depends
only on the optical properties of the surface.
Ambient Light
• The level of ambient light in a scene is a parameter Ia, and each surface is illuminated
with this constant value.
• The illumination equation for ambient light is
I = ka Ia
where
I is the resulting intensity,
Ia is the incident ambient light intensity, and
ka is the object’s basic intensity, called the ambient-reflection coefficient or
ambient reflectivity.
Diffuse Reflection
• Diffuse reflections are constant over each surface in a scene, independent of the
viewing direction.
• The amount of the incident light that is diffusely reflected can be set for each surface
with parameter kd, the diffuse-reflection coefficient, or diffuse reflectivity.
0 ≤ kd ≤ 1;
kd near 1 – a highly reflective surface;
kd near 0 – a surface that absorbs most of the incident light;
kd is a function of surface color.
Even though there is equal light scattering in all directions from a surface, the brightness of
the surface still depends on the orientation of the surface relative to the light source:
As the angle between the surface normal and the incoming light direction increases, less of
the incident light falls on the surface.
We denote the angle of incidence between the incoming light direction and the surface
normal as θ. Thus, the amount of illumination depends on cos θ. If the incoming light from
the source is perpendicular to the surface at a particular point, that point is fully illuminated.
If Il is the intensity of the point light source, then the diffuse reflection equation for a point
on the surface can be written as
Il,diff = kd Il cos θ
or
Il,diff = kd Il (N.L)
where N is the unit normal vector to a surface and L is the unit direction vector to the point
light source from a position on the surface.
Figure 10 illustrates the illumination with diffuse reflection, using various values of the
parameter kd between 0 and 1.
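As a small C sketch of the ambient plus diffuse terms (the Vec3 type and function names
are illustrative assumptions):

#include <math.h>

typedef struct { double x, y, z; } Vec3;

double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* I = ka*Ia + kd*Il*(N.L).  N and L must be unit vectors; a point
   facing away from the light (N.L < 0) gets only the ambient term. */
double ambient_diffuse(double ka, double Ia,
                       double kd, double Il, Vec3 N, Vec3 L)
{
    double nl = dot(N, L);
    if (nl < 0.0) nl = 0.0;
    return ka * Ia + kd * Il * nl;
}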
Specular Reflection
Figure 13 shows the specular reflection direction at a point on the illuminated surface. In
this figure,
• R represents the unit vector in the direction of specular reflection;
• L – unit vector directed toward the point light source;
• V – unit vector pointing to the viewer from the surface position;
Angle φ is the viewing angle relative to the specular-reflection direction R.
• Very shiny surface is modeled with a large value for ns (say, 100 or more);
• Small values are used for duller surfaces.
• For perfect reflector (perfect mirror), ns is infinite;
Since N bisects the angle between L and R, the reflection vector can be obtained from:
R + L = (2 N.L) N
R = (2 N.L) N - L
A simpler computation uses the halfway vector H between L and V:
H = (L + V) / |L + V|
When all the vectors lie in the same plane, the angle between N and H is half the viewing
angle φ. The specular-reflection term then becomes:
Ispec = ks Il (N.H)^ns
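Continuing the same C sketch (reusing the Vec3 type, dot helper and math.h from the
diffuse example above), the halfway-vector specular term might look like:

/* Ispec = ks * Il * (N.H)^ns with H = (L + V)/|L + V|.
   All of N, L and V are unit vectors. */
double specular(double ks, double Il, double ns, Vec3 N, Vec3 L, Vec3 V)
{
    Vec3 h = { L.x + V.x, L.y + V.y, L.z + V.z };
    double len = sqrt(dot(h, h));
    double nh;
    if (len == 0.0) return 0.0;    /* L and V exactly opposite */
    h.x /= len; h.y /= len; h.z /= len;
    nh = dot(N, h);
    return (nh > 0.0) ? ks * Il * pow(nh, ns) : 0.0;
}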
Intensity Attenuation
As radiant energy from a point light source travels through space, its amplitude is
attenuated by the factor 1/d2, where d is the distance that the light has traveled.
A surface close to the light source (small d) receives higher incident intensity from the
source than a distant surface (large d).
• Problem in using the factor 1/d2 to attenuate intensities:
the factor 1/d2 produces too much intensity variation when d is small, and very
little variation when d is large.
• We can compensate for these problems by using inverse linear or quadratic
functions of d to attenuate intensities.
• A general inverse quadratic attenuation function:
f(d) = 1 / (a0 + a1 d + a2 d2)
• The value of the constant term a0 can be adjusted to prevent f(d) from becoming too
large when d is very small.
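A one-function C sketch of this attenuation (the clamp to 1.0 is a common safeguard,
added here as an assumption rather than taken from the text):

/* f(d) = 1 / (a0 + a1*d + a2*d*d), clamped so that surfaces very
   close to the light are not over-brightened. */
double attenuate(double d, double a0, double a1, double a2)
{
    double f = 1.0 / (a0 + a1 * d + a2 * d * d);
    return (f < 1.0) ? f : 1.0;
}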
Surface rendering
• Surface rendering can be performed by applying the illumination model to every
visible surface point, or the rendering can be accomplished by interpolating
intensities across the surface from a small set of illumination-model calculations.
• Scan-line algorithms use interpolation schemes.
• Surface-rendering procedures are termed surface-shading methods.
Shadows
• Hidden-surface methods can be used to locate areas where light sources produce
shadows.
– Apply a hidden-surface method with a light source at the view position.
– Shadow patterns generated by a hidden-surface method are valid for any
selected viewing position, as long as the light-source positions are not
changed.
• In polygon-based systems, we can add surface-detail polygons that correspond to
shadow areas of surface polygons.
• We can display shadow areas with ambient light intensity only, or we can combine
the ambient light with specified surface texture.
• Flat shading provides an accurate rendering for an object if all of the following
assumptions are valid:
– The object is a polyhedron and is not an approximation of an object with a
curved surface;
– All light sources illuminating the object are far from the surface so that N.L
and the attenuation function are constant over the surface;
– The viewing position is also far from the surface so that V.R is constant over
the surface;
Gouraud Shading
o Intensity-interpolation scheme, referred to as Gouraud shading, renders a polygon
surface by linearly interpolating intensity values across the surface.
o Intensity values for each polygon are matched with the values of adjacent polygons
along the common edges, thus eliminating the intensity discontinuities that can
occur in flat shading.
Each polygon surface is rendered with Gouraud
shading by performing the following calculations:
o Determine the average unit normal vector at each polygon vertex;
o Apply an illumination model at each vertex to calculate the vertex intensity;
o Linearly interpolate the vertex intensities over the surface of the polygon.
For each scan line, the intensity at the intersection of the scan line with a polygon edge is
linearly interpolated from the intensities at the edge endpoints.
A fast method for obtaining this intensity is to interpolate between intensities of endpoints
by using only the vertical displacement of the scan line:
Ia = I1 (ys - y2)/(y1 - y2) + I2 (y1 - ys)/(y1 - y2)
and
Ib = I1 (ys - y3)/(y1 - y3) + I3 (y1 - ys)/(y1 - y3)
Once these bounding intensities are established for a scan line, an interior point (such as p)
is interpolated from the bounding intensities at points a and b as
Ip = Ia (xb - xp)/(xb - xa) + Ib (xp - xa)/(xb - xa)
Incremental calculations:
starting from the intensity I at position y on the edge, we can obtain the intensity along
this edge for the next scan line, y - 1, as
I' = I + (I2 - I1)/(y1 - y2)
Similar calculations are used to obtain intensities at horizontal pixel positions along each
scan line.
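A minimal C sketch of this horizontal interpolation (names are illustrative); it is the same
formula as above, written in a form that is easy to make incremental:

/* Intensity at pixel xp on a scan line, interpolated between the
   bounding edge intensities Ia at xa and Ib at xb (xa < xb). */
double gouraud_span(double Ia, double Ib, int xa, int xb, int xp)
{
    double t = (double)(xp - xa) / (double)(xb - xa);
    return Ia + t * (Ib - Ia);
}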
• Highlights on the surface are sometimes displayed with anomalous shapes.
• Can cause bright or dark intensity streaks to appear on the surface (Mach-band
effect).
Dividing the surface into a greater number of
polygon faces can reduce these effects.
Phong Shading
• A more accurate method for rendering a polygon surface.
• Interpolates normal vectors, and then applies the illumination model to each surface
point.
• Method developed by Phong Bui Tuong.
• Called Phong shading, or normal-vector interpolation shading.
• More realistic highlights.
• Greatly reduces the Mach-band effect.
The normal vector N for the scan line intersection point along the edge between vertices 1
and 2 can be obtained by vertically interpolating between edge endpoint normals:
N = N1 (y - y2)/(y1 - y2) + N2 (y1 - y)/(y1 - y2)
Incremental methods are used to evaluate normals between scan lines and along each
individual scan line.
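A small C sketch of this interpolation (reusing the Vec3 type, dot helper and math.h from
the earlier illumination sketches; renormalisation is needed because a linear blend of unit
vectors is generally not unit length):

/* Interpolated, renormalised normal between the edge endpoint normals
   N1 and N2; t runs from 0 at vertex 1 to 1 at vertex 2.  The full
   illumination model is then applied with this normal at each pixel. */
Vec3 lerp_normal(Vec3 N1, Vec3 N2, double t)
{
    Vec3 n = { N1.x + t * (N2.x - N1.x),
               N1.y + t * (N2.y - N1.y),
               N1.z + t * (N2.z - N1.z) };
    double len = sqrt(dot(n, n));
    n.x /= len; n.y /= len; n.z /= len;
    return n;
}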
~ The End ~