
18CS62 Computer Graphics NOTES


Module 1 Computer Graphics and OpenGL

1. Overview: Computer Graphics and OpenGL


1.1 Basics of computer graphics
1.2 Applications of Computer Graphics,
1.3 Video Display Devices
1.3.1 Random Scan and Raster Scan displays,
1.3.2 Color CRT monitors,
1.3.3 Flat panel displays.
1.4 Raster-scan systems:
1.4.1 Video controller,
1.4.2 Raster scan Display processor,
1.4.3 Graphics workstations and viewing systems,
1.5 Input devices,
1.6 Graphics networks,
1.7 Graphics on the internet,
1.8 Graphics software.
OpenGL:
1.9 Introduction to OpenGL ,
1.10 Coordinate reference frames,
1.11 Specifying two-dimensional world coordinate reference frames in OpenGL,
1.12 OpenGL point functions,
1.13 OpenGL line functions, point attributes,
1.14 Line attributes,
1.15 Curve attributes,
1.16 OpenGL point attribute functions,
1.17 OpenGL line attribute functions,
1.18 Line drawing algorithms(DDA, Bresenham’s),
1.19 Circle generation algorithms (Bresenham’s).

1.1 Basics of Computer Graphics


Computer graphics is the art of drawing pictures, lines, charts, etc. on computers with the help of programming. A computer-graphics image is made up of a number of pixels. A pixel is the smallest addressable graphical unit represented on the computer screen.


1.2 Applications of Computer Graphics


a. Graphs and Charts

✓ An early application of computer graphics was the display of simple data graphs, usually plotted on a character printer. Data plotting is still one of the most common graphics applications.
✓ Graphs & charts are commonly used to summarize functional, statistical, mathematical,
engineering and economic data for research reports, managerial summaries and other
types of publications.
✓ Typical examples of data plots are line graphs, bar charts, pie charts, surface graphs, contour plots and other displays showing relationships between multiple parameters in two dimensions, three dimensions, or higher-dimensional spaces.

b. Computer-Aided Design

✓ A major use of computer graphics is in design processes, particularly for engineering and architectural systems.


✓ CAD (computer-aided design) or CADD (computer-aided drafting and design) methods are now routinely used in the design of automobiles, aircraft, spacecraft, computers, home appliances, and many other products.
✓ Circuits and networks for communications, water supply, or other utilities are constructed with repeated placement of a few graphical shapes.
✓ Animations are often used in CAD applications. Real-time, computer animations using
wire-frame shapes are useful for quickly testing the performance of a vehicle or system.

c. Virtual-Reality Environments

✓ Animations in virtual-reality environments are often used to train heavy-equipment operators or to analyze the effectiveness of various cabin configurations and control placements.
✓ With virtual-reality systems, designers and others can move about and interact with objects in various ways. Architectural designs can be examined by taking a simulated “walk” through the rooms or around the outside of buildings to better appreciate the overall effect of a particular design.
✓ With a special glove, we can even “grasp” objects in a scene and turn them over or move
them from one place to another.
d. Data Visualizations
✓ Producing graphical representations for scientific, engineering, and medical data sets and processes is another fairly new application of computer graphics, generally referred to as scientific visualization. The term business visualization is used in connection with data sets related to commerce, industry, and other nonscientific areas.


✓ There are many different kinds of data sets and effective visualization schemes depend on
the characteristics of the data. A collection of data can contain scalar values, vectors or
higher-order tensors.

e. Education and Training

✓ Computer-generated models of physical, financial, political, social, economic, and other systems are often used as educational aids.
✓ Models of physical processes, physiological functions, or equipment, such as the color-coded diagram shown in the figure, can help trainees to understand the operation of a system.
✓ For some training applications, special hardware systems are designed. Examples of such specialized systems are the simulators for practice sessions of aircraft pilots and air traffic-control personnel.
✓ Some simulators have no video screens; for example, a flight simulator with only a control panel for instrument flying.


f. Computer Art

✓ The picture is usually painted electronically on a graphics tablet using a stylus, which can
simulate different brush strokes, brush widths and colors.
✓ Fine artists use a variety of other computer technologies to produce images. To create
pictures the artist uses a combination of 3D modeling packages, texture mapping,
drawing programs and CAD software etc.
✓ Commercial art also uses these “painting” techniques for generating logos and other designs, page layouts combining text and graphics, TV advertising spots, and other applications.
✓ A common graphics method employed in many television commercials is morphing,
where one object is transformed into another.

g. Entertainment

✓ Television production, motion pictures, and music videos routinely use computer-graphics methods.
✓ Sometimes graphics images are combined with live actors and scenes, and sometimes the films are completely generated using computer rendering and animation techniques.


✓ Some television programs also use animation techniques to combine computer generated
figures of people, animals, or cartoon characters with the actor in a scene or to transform
an actor’s face into another shape.

h. Image Processing

✓ The modification or interpretation of existing pictures, such as photographs and TV scans, is called image processing.
✓ Although methods used in computer graphics and image processing overlap, the two areas are concerned with fundamentally different operations.
✓ Image processing methods are used to improve picture quality, analyze images, or
recognize visual patterns for robotics applications.
✓ Image processing methods are often used in computer graphics, and computer graphics
methods are frequently applied in image processing.
✓ Medical applications also make extensive use of image-processing techniques for picture enhancements in tomography and in simulations of surgical operations.
✓ It is also used in computed X-ray tomography (CT), positron emission tomography (PET), and computed axial tomography (CAT).

i. Graphical User Interfaces


✓ It is common now for applications software to provide a graphical user interface (GUI).
✓ A major component of graphical interface is a window manager that allows a user to
display multiple, rectangular screen areas called display windows.


✓ Each screen display area can contain a different process, showing graphical or non-
graphical information, and various methods can be used to activate a display window.
✓ Using an interactive pointing device, such as a mouse, we can activate a display window on some systems by positioning the screen cursor within the window display area and pressing the left mouse button.

1.3 Video Display Devices


✓ The primary output device in a graphics system is a video monitor.
✓ Historically, the operation of most video monitors was based on the standard cathode-ray tube (CRT) design, but several other technologies exist.
✓ In recent years, flat-panel displays have become significantly more popular due to their
reduced power consumption and thinner designs.

Refresh Cathode-Ray Tubes


✓ A beam of electrons, emitted by an electron gun, passes through focusing and deflection
systems that direct the beam toward specified positions on the phosphor-coated screen.
✓ The phosphor then emits a small spot of light at each position contacted by the electron
beam and the light emitted by the phosphor fades very rapidly.
✓ One way to maintain the screen picture is to store the picture information as a charge
distribution within the CRT in order to keep the phosphors activated.
✓ The most common method now employed for maintaining phosphor glow is to redraw
the picture repeatedly by quickly directing the electron beam back over the same screen
points. This type of display is called a refresh CRT.
✓ The frequency at which a picture is redrawn on the screen is referred to as the refresh
rate.

Operation of an electron gun with an accelerating anode

✓ The primary components of an electron gun in a CRT are the heated metal cathode and a
control grid.
✓ The heat is supplied to the cathode by directing a current through a coil of wire, called the
filament, inside the cylindrical cathode structure.
✓ This causes electrons to be “boiled off” the hot cathode surface.
✓ Inside the CRT envelope, the free, negatively charged electrons are then accelerated
toward the phosphor coating by a high positive voltage.


✓ Intensity of the electron beam is controlled by the voltage at the control grid.
✓ Since the amount of light emitted by the phosphor coating depends on the number of
electrons striking the screen, the brightness of a display point is controlled by varying the
voltage on the control grid.
✓ The focusing system in a CRT forces the electron beam to converge to a small cross
section as it strikes the phosphor and it is accomplished with either electric or magnetic
fields.
✓ With electrostatic focusing, the electron beam is passed through a positively charged metal cylinder so that electrons along the center line of the cylinder are in an equilibrium position.
✓ Deflection of the electron beam can be controlled with either electric or magnetic fields.
✓ Cathode-ray tubes are commonly constructed with two pairs of magnetic-deflection coils
✓ One pair is mounted on the top and bottom of the CRT neck, and the other pair is
mounted on opposite sides of the neck.
✓ The magnetic field produced by each pair of coils results in a transverse deflection force that is perpendicular to both the direction of the magnetic field and the direction of travel of the electron beam.
✓ Horizontal and vertical deflections are accomplished with these pairs of coils.

Electrostatic deflection of the electron beam in a CRT


✓ When electrostatic deflection is used, two pairs of parallel plates are mounted inside the
CRT envelope where, one pair of plates is mounted horizontally to control vertical
deflection, and the other pair is mounted vertically to control horizontal deflection.
✓ Spots of light are produced on the screen by the transfer of the CRT beam energy to the
phosphor.
✓ When the electrons in the beam collide with the phosphor coating, they are stopped and
their kinetic energy is absorbed by the phosphor.
✓ Part of the beam energy is converted by friction into heat energy, and the remainder causes electrons in the phosphor atoms to move up to higher quantum-energy levels.


✓ After a short time, the “excited” phosphor electrons begin dropping back to their stable ground state, giving up their extra energy as small quanta of light energy called photons.

✓ What we see on the screen is the combined effect of all the electron light emissions: a glowing spot that quickly fades after all the excited phosphor electrons have returned to their ground energy level.
✓ The frequency of the light emitted by the phosphor is proportional to the energy
difference between the excited quantum state and the ground state.
✓ Lower-persistence phosphors require higher refresh rates to maintain a picture on the screen without flicker.
✓ The maximum number of points that can be displayed without overlap on a CRT is referred to as the resolution.
✓ Resolution of a CRT is dependent on the type of phosphor, the intensity to be displayed,
and the focusing and deflection systems.
✓ High-resolution systems are often referred to as high-definition systems.


1.3.1 Raster-Scan Displays and Random Scan Displays


i) Raster-Scan Displays
❖ The electron beam is swept across the screen one row at a time from top to bottom.
❖ As it moves across each row, the beam intensity is turned on and off to create a pattern of
illuminated spots.
❖ This scanning process is called refreshing. Each complete scanning of a screen is
normally called a frame.
❖ The refreshing rate, called the frame rate, is normally 60 to 80 frames per second, or
described as 60 Hz to 80 Hz.
❖ Picture definition is stored in a memory area called the frame buffer.
❖ This frame buffer stores the intensity values for all the screen points. Each screen point is
called a pixel (picture element).
❖ Another property of raster-scan systems is the aspect ratio, which is defined as the number of pixel columns divided by the number of scan lines that can be displayed by the system.
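❖ For example (illustrative numbers, not from the notes): a system that displays 640 pixel columns by 480 scan lines has an aspect ratio of 640/480 = 4/3.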

Case 1: In case of black and white systems


✓ On black and white systems, the frame buffer storing the values of the pixels is called a
bitmap.
✓ Each entry in the bitmap is 1 bit of data that determines whether the intensity of the pixel is on (1) or off (0).


Case 2: In case of color systems


❖ On color systems, the frame buffer storing the values of the pixels is called a pixmap (though nowadays many graphics libraries call it a bitmap too).
❖ Each entry in the pixmap occupies a number of bits to represent the color of the pixel. For a true-color display, the number of bits for each entry is 24 (8 bits per red/green/blue channel, giving each channel 2^8 = 256 levels of intensity, i.e., 256 voltage settings for each of the red/green/blue electron guns).
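❖ As an illustrative calculation (numbers assumed, not from the notes): a 640 × 480 true-color frame buffer requires 640 × 480 × 24 bits = 7,372,800 bits = 921,600 bytes, i.e., 900 KB.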

ii). Random-Scan Displays


✓ When operated as a random-scan display unit, a CRT has the electron beam directed only
to those parts of the screen where a picture is to be displayed.
✓ Pictures are generated as line drawings, with the electron beam tracing out the component
lines one after the other.
✓ For this reason, random-scan monitors are also referred to as vector displays (or stroke-writing displays or calligraphic displays).
✓ The component lines of a picture can be drawn and refreshed by a random-scan system in
any specified order

✓ A pen plotter operates in a similar way and is an example of a random-scan, hard-copy device.


✓ Refresh rate on a random-scan system depends on the number of lines to be displayed on that system.
✓ Picture definition is now stored as a set of line-drawing commands in an area of memory referred to as the display list, refresh display file, vector file, or display program.
✓ To display a specified picture, the system cycles through the set of commands in the
display file, drawing each component line in turn.
✓ After all line-drawing commands have been processed, the system cycles back to the first
line command in the list.
✓ Random-scan displays are designed to draw all the component lines of a picture 30 to 60
times each second, with up to 100,000 “short” lines in the display list.
✓ When a small set of lines is to be displayed, each refresh cycle is delayed to avoid very
high refresh rates, which could burn out the phosphor.

Difference between Raster-Scan and Random-Scan Systems

Electron Beam:
- Raster scan: The electron beam is swept across the screen, one row at a time, from top to bottom.
- Random scan: The electron beam is directed only to the parts of the screen where a picture is to be drawn.

Resolution:
- Raster scan: Resolution is poor, because the raster system produces zigzag lines that are plotted as discrete point sets.
- Random scan: Resolution is good, because this system produces smooth line drawings; the CRT beam directly follows the line path.

Picture Definition:
- Raster scan: Picture definition is stored as a set of intensity values for all screen points, called pixels, in a refresh buffer area.
- Random scan: Picture definition is stored as a set of line-drawing instructions in a display file.

Realistic Display:
- Raster scan: The capability of this system to store intensity values for pixels makes it well suited for the realistic display of scenes containing shadow and color patterns.
- Random scan: These systems are designed for line drawing and cannot display realistic shaded scenes.

Drawing an Image:
- Raster scan: Screen points (pixels) are used to draw an image.
- Random scan: Mathematical functions are used to draw an image.

1.3.2 Color CRT Monitors


❖ A CRT monitor displays color pictures by using a combination of phosphors that emit
different-colored light.
❖ It produces range of colors by combining the light emitted by different phosphors.
❖ There are two basic techniques for color display:
1. Beam-penetration technique
2. Shadow-mask technique
1) Beam-penetration technique:
✓ This technique is used with random scan monitors.
✓ In this technique the inside of the CRT is coated with two phosphor layers, usually red and green.
✓ The outer layer is of red phosphor and the inner layer of green phosphor.
✓ The color depends on how far the electron beam penetrates into the phosphor layers.
✓ A beam of fast electrons penetrates further and excites the inner green layer, while slow electrons excite only the outer red layer.
✓ At intermediate beam speed we can produce combination of red and green lights which
emit additional two colors orange and yellow.
✓ The beam acceleration voltage controls the speed of the electrons and hence color of
pixel.
Disadvantages:
➢ Although it is a low-cost technique for producing color on random-scan monitors, it can display only four colors.
➢ Quality of pictures is not good compared to other techniques.

2) Shadow-mask technique
✓ It produces a wide range of colors as compared to the beam-penetration technique.
✓ This technique is generally used in raster-scan displays, including color TV sets.


✓ In this technique CRT has three phosphor color dots at each pixel position.
✓ One dot for red, one for green and one for blue light. This is commonly known as Dot
triangle.
✓ The CRT has three electron guns, one for each color dot, and a shadow-mask grid just behind the phosphor-coated screen.
✓ The shadow mask grid consists of series of holes aligned with the phosphor dot pattern.
✓ Three electron beams are deflected and focused as a group onto the shadow mask and
when they pass through a hole they excite a dot triangle.
✓ In dot triangle three phosphor dots are arranged so that each electron beam can activate
only its corresponding color dot when it passes through the shadow mask.
✓ A dot triangle when activated appears as a small dot on the screen which has color of
combination of three small dots in the dot triangle.
✓ By changing the intensity levels of the three electron beams we can obtain different colors (for example, activating the red and green dots together yields yellow) in the shadow-mask CRT.

1.3.3 Flat-Panel Displays


➔ The term flat-panel display refers to a class of video devices that have reduced volume, weight, and power requirements compared to a CRT.
➔ As flat-panel displays are thinner than CRTs, we can hang them on walls or wear them on our wrists.


➔ Since we can even write on some flat panel displays they will soon be available as pocket
notepads.
➔ We can separate flat panel display in two categories:
1. Emissive displays: - the emissive display or emitters are devices that convert
electrical energy into light. For Ex. Plasma panel, thin film electroluminescent
displays and light emitting diodes.
2. Non emissive displays: - non emissive display or non emitters use optical
effects to convert sunlight or light from some other source into graphics patterns.
For Ex. LCD (Liquid Crystal Display).

a) Plasma Panels displays


 These are also called gas-discharge displays.
 It is constructed by filling the region between two glass plates with a mixture of gases
that usually includes neon.
 A series of vertical conducting ribbons is placed on one glass panel and a set of horizontal ribbons is built into the other glass panel.

 Firing voltages applied to a pair of horizontal and vertical conductors cause the gas at the intersection of the two conductors to break down into a glowing plasma of electrons and ions.
 Picture definition is stored in a refresh buffer and the firing voltages are applied to refresh
the pixel positions, 60 times per second.


 Alternating-current methods are used to provide faster application of firing voltages and thus brighter displays.
 Separation between pixels is provided by the electric field of the conductors.
 One disadvantage of plasma panels is that they were strictly monochromatic devices, meaning they show only one color other than black (like black and white).

b) Thin Film Electroluminescent Displays


 It is similar to a plasma-panel display, but the region between the glass plates is filled with a phosphor, such as zinc sulfide doped with manganese, instead of gas.
 When a sufficient voltage is applied, the phosphor becomes a conductor in the area of intersection of the two electrodes.
 Electrical energy is then absorbed by the manganese atoms, which then release the energy as a spot of light, similar to the glowing plasma effect in a plasma panel.
 It requires more power than a plasma panel.
 Good color and gray scale are difficult to achieve with this technology.

c. Light Emitting Diode (LED)


 In this display a matrix of multi-color light-emitting diodes is arranged to form the pixel positions in the display, and the picture definition is stored in a refresh buffer.
 Similar to the scan-line refreshing of a CRT, information is read from the refresh buffer and converted to voltage levels that are applied to the diodes to produce the light pattern on the display.


d) Liquid Crystal Display (LCD)


 This non-emissive device produces a picture by passing polarized light from the surroundings or from an internal light source through a liquid-crystal material that can be aligned to either block or transmit the light.
 The term liquid crystal refers to the fact that these compounds have a crystalline arrangement of molecules, yet they flow like a liquid.
 It consists of two glass plates, each with a light polarizer at right angles to the other, that sandwich the liquid-crystal material between them.
 Rows of horizontal transparent conductors are built into one glass plate, and columns of vertical conductors are put into the other plate.
 The intersection of two conductors defines a pixel position.
 In the ON state, polarized light passing through the material is twisted so that it will pass through the opposite polarizer.
 In the OFF state, it will be reflected back toward the source.

Three- Dimensional Viewing Devices


 Graphics monitors for the display of three-dimensional scenes have been devised using a technique that reflects a CRT image from a vibrating, flexible mirror. As the varifocal mirror vibrates, it changes focal length.


 These vibrations are synchronized with the display of an object on a CRT so that each
point on the object is reflected from the mirror into a spatial position corresponding to the
distance of that point from a specified viewing location.
 This allows us to walk around an object or scene and view it from different sides.

1.4 Raster-Scan Systems


➔ Interactive raster-graphics systems typically employ several processing units.
➔ In addition to the central processing unit (CPU), a special-purpose processor, called the
video controller or display controller, is used to control the operation of the display
device.
➔ Organization of a simple raster system is shown in below Figure.

➔ Here, the frame buffer can be anywhere in the system memory, and the video controller
accesses the frame buffer to refresh the screen.


➔ In addition to the video controller, raster systems employ other processors as coprocessors and accelerators to implement various graphics operations.

1.4.1 Video controller:


✓ The figure below shows a commonly used organization for raster systems.
✓ A fixed area of the system memory is reserved for the frame buffer, and the video
controller is given direct access to the frame-buffer memory.
✓ Frame-buffer locations, and the corresponding screen positions, are referenced in the
Cartesian coordinates.

Cartesian reference frame:


✓ Frame-buffer locations and the corresponding screen positions, are referenced in
Cartesian coordinates.
✓ In an application (user) program, we use the commands within a graphics software package to set coordinate positions for displayed objects relative to the origin of the coordinate reference frame.
✓ The coordinate origin is referenced at the lower-left corner of a screen display area by the software commands, although we can typically set the origin at any convenient location for a particular application.


Working:
✓ Figure shows a two-dimensional Cartesian reference frame with the origin at the lower-left screen corner.

✓ The screen surface is then represented as the first quadrant of a two-dimensional system
with positive x and y values increasing from left to right and bottom of the screen to the
top respectively.
✓ Pixel positions are then assigned integer x values that range from 0 to xmax across the
screen, left to right, and integer y values that vary from 0 to ymax, bottom to top.

Basic Video Controller Refresh Operations


✓ The basic refresh operations of the video controller are diagrammed in the figure.

✓ Two registers are used to store the coordinate values for the screen pixels.


✓ Initially, the x register is set to 0 and the y register is set to the value for the top scan line.
✓ The contents of the frame buffer at this pixel position are then retrieved and used to set
the intensity of the CRT beam.
✓ Then the x register is incremented by 1, and the process is repeated for the next pixel on
the top scan line.
✓ This procedure continues for each pixel along the top scan line.
✓ After the last pixel on the top scan line has been processed, the x register is reset to 0 and
the y register is set to the value for the next scan line down from the top of the screen.
✓ The procedure is repeated for each successive scan line.
✓ After cycling through all pixels along the bottom scan line, the video controller resets the registers to the first pixel position on the top scan line and the refresh process starts over.
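A minimal sketch of this refresh loop in C (an illustration of the description above, not code from the notes; drivePixel is a hypothetical stand-in for setting the CRT beam intensity at a screen position):

#define XMAX 639 // last pixel column
#define YMAX 479 // top scan line (y = 0 at the bottom of the screen)

static unsigned char frameBuffer[YMAX + 1][XMAX + 1]; // one intensity value per pixel

static void drivePixel (int x, int y, unsigned char intensity)
{
    // Hypothetical: set the beam intensity for the pixel at (x, y).
    (void) x; (void) y; (void) intensity;
}

static void refreshFrame (void)
{
    int x, y;
    for (y = YMAX; y >= 0; y--)      // y register: top scan line downward
        for (x = 0; x <= XMAX; x++)  // x register: left to right along the line
            drivePixel (x, y, frameBuffer[y][x]);
    // After the bottom scan line, the registers are reset to the first
    // pixel of the top scan line and the refresh cycle starts over.
}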
a) Speeding up pixel-position processing of the video controller:
✓ Since the screen must be refreshed at a rate of at least 60 frames per second, the simple procedure illustrated above may not be accommodated by RAM chips if the cycle time is too slow.
✓ To speed up pixel processing, video controllers can retrieve multiple pixel values from
the refresh buffer on each pass.
✓ When a group of pixels has been processed, the next block of pixel values is retrieved from the frame buffer.
Advantages of video controller:
✓ A video controller can be designed to perform a number of other operations.
✓ For various applications, the video controller can retrieve pixel values from different
memory areas on different refresh cycles.
✓ This provides a fast mechanism for generating real-time animations.
✓ Another video-controller task is the transformation of blocks of pixels, so that screen
areas can be enlarged, reduced, or moved from one location to another during the refresh
cycles.
✓ In addition, the video controller often contains a lookup table, so that pixel values in the
frame buffer are used to access the lookup table. This provides a fast method for
changing screen intensity values.
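A minimal C sketch of lookup-table indexing (assumed names, not from the notes): the value stored in the frame buffer is an index into a table of full RGB colors, so rewriting one table entry instantly changes every pixel that references it.

static unsigned long lookupTable[256];      // one full RGB value per entry
static unsigned char indexBuffer[480][640]; // 8-bit table index per pixel

static unsigned long displayedColor (int x, int y)
{
    return lookupTable[indexBuffer[y][x]];  // fetch the index, then its color
}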


✓ Finally, some systems are designed to allow the video controller to mix the frame-buffer image with an input image from a television camera or other input device.

1.4.2 Raster-Scan Display Processor


✓ Figure shows one way to organize the components of a raster system that contains a
separate display processor, sometimes referred to as a graphics controller or a display
coprocessor.

✓ The purpose of the display processor is to free the CPU from the graphics chores.
✓ In addition to the system memory, a separate display-processor memory area can be
provided.
Scan conversion:
✓ A major task of the display processor is digitizing a picture definition given in an
application program into a set of pixel values for storage in the frame buffer.
✓ This digitization process is called scan conversion.
Example 1: displaying a line
➔ Graphics commands specifying straight lines and other geometric objects are scan
converted into a set of discrete points, corresponding to screen pixel positions.
➔ Scan converting a straight-line segment.


Example 2: displaying a character


➔ Characters can be defined with rectangular pixel grids
➔ The array size for character grids can vary from about 5 by 7 to 9 by 12 or more for
higher-quality displays.
➔ A character grid is displayed by superimposing the rectangular grid pattern into the frame
buffer at a specified coordinate position.

Using outline:
➔ For characters that are defined as outlines, the shapes are scan-converted into the frame
buffer by locating the pixel positions closest to the outline.

Additional operations of Display processors:


➔ Display processors are also designed to perform a number of additional operations.
➔ These functions include generating various line styles (dashed, dotted, or solid),
displaying color areas, and applying transformations to the objects in a scene.
➔ Display processors are typically designed to interface with interactive input devices, such
as a mouse.

Methods to reduce memory requirements in display processor:


➔ In an effort to reduce memory requirements in raster systems, methods have been devised
for organizing the frame buffer as a linked list and encoding the color information.
➔ One organization scheme is to store each scan line as a set of number pairs.


➔ Encoding methods can be useful in the digital storage and transmission of picture
information
i) Run-length encoding:
 The first number in each pair can be a reference to a color value, and the second number can specify the number of adjacent pixels on the scan line that are to be displayed in that color.
 This technique, called run-length encoding, can result in a considerable saving in storage
space if a picture is to be constructed mostly with long runs of a single color each.
 A similar approach can be taken when pixel colors change linearly.
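A short C sketch of this scheme (illustrative, with assumed names): each scan line is encoded as a sequence of (color, count) pairs.

typedef struct {
    unsigned char color;  // reference to a color value
    int count;            // number of adjacent pixels in that color
} Run;

static int encodeScanLine (const unsigned char *line, int width, Run *runs)
{
    int nRuns = 0, x = 0;
    while (x < width) {
        runs[nRuns].color = line[x];
        runs[nRuns].count = 0;
        while (x < width && line[x] == runs[nRuns].color) {
            runs[nRuns].count++;  // extend the current run
            x++;
        }
        nRuns++;
    }
    return nRuns;  // long single-color runs compress well; many short runs
                   // can exceed the raw storage, as noted below
}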
ii) Cell encoding:
 Another approach is to encode the raster as a set of rectangular areas (cell encoding).

Disadvantages of encoding:
❖ The disadvantages of encoding runs are that color changes are difficult to record and
storage requirements increase as the lengths of the runs decrease.
❖ In addition, it is difficult for the display controller to process the raster when many short
runs are involved.
❖ Moreover, the size of the frame buffer is no longer a major concern, because of sharp declines in memory costs.

1.4.3 Graphics workstations and viewing systems


✓ Most graphics monitors today operate as raster-scan displays, and both CRT and flat
panel systems are in common use.
✓ Graphics workstations range from small general-purpose computer systems to multi-monitor facilities, often with ultra-large viewing screens.
✓ High-definition graphics systems, with resolutions up to 2560 by 2048, are commonly
used in medical imaging, air-traffic control, simulation, and CAD.
✓ Many high-end graphics workstations also include large viewing screens, often with
specialized features.


✓ Multi-panel display screens are used in a variety of applications that require “wall-sized”
viewing areas. These systems are designed for presenting graphics displays at meetings,
conferences, conventions, trade shows, retail stores etc.
✓ A multi-panel display can be used to show a large view of a single scene or several
individual images. Each panel in the system displays one section of the overall picture
✓ A large, curved-screen system can be useful for viewing by a group of people studying a
particular graphics application.
✓ A 360-degree paneled viewing system is used in the NASA control-tower simulator for training and for testing ways to solve air-traffic and runway problems at airports.

1.5 Input Devices


➢ Graphics workstations make use of various devices for data input. Most systems have a keyboard and a mouse, while other systems may have trackballs, spaceballs, joysticks, button boxes, touch panels, image scanners, and voice systems.
Keyboard:
➢ The keyboard on a graphics system is used for entering text strings, issuing certain commands, and selecting menu options.
➢ Keyboards can also be provided with features for entry of screen coordinates, menu selections, or graphics functions.
➢ A general-purpose keyboard uses function keys and cursor-control keys.
➢ Function keys allow the user to select frequently accessed operations with a single keystroke. Cursor-control keys are used for selecting a displayed object or a location by positioning the screen cursor.

Button Boxes and Dials:


➢ Buttons are often used to input predefined functions. Dials are common devices for entering scalar values.
➢ Numerical values within some defined range are selected for input with dial rotations.


Mouse Devices:
➢ A mouse is a hand-held device, usually moved around on a flat surface to position the screen cursor. Wheels or rollers on the bottom of the mouse are used to record the amount and direction of movement.
➢ Some mice use optical sensors, which detect movement across horizontal and vertical grid lines.
➢ Since a mouse can be picked up and put down, it is used for making relative changes in the position of the screen cursor.
➢ Most general purpose graphics systems now include a mouse and a keyboard as the
primary input devices.

Trackballs and Spaceballs:


➢ A trackball is a ball device that can be rotated with the fingers or palm of the hand to
produce screen cursor movement.
➢ Laptop keyboards are sometimes equipped with a trackball to eliminate the extra space required by a mouse.
➢ A spaceball is an extension of the two-dimensional trackball concept.
➢ Spaceballs are used for three-dimensional positioning and selection operations in virtual-reality systems, modeling, animation, CAD, and other applications.

Joysticks:
➢ A joystick is used as a positioning device; it uses a small vertical lever (stick) mounted on a base. It is used to steer the screen cursor around and select screen positions with the stick movement.
➢ A push or pull on the stick is measured with strain gauges and converted to movement of the screen cursor in the direction of the applied pressure.

Data Gloves:
➢ A data glove can be used to grasp a virtual object. The glove is constructed with a series of sensors that detect hand and finger motions.
➢ Input from the glove is used to position or manipulate objects in a virtual scene.


Digitizers:
➢ A digitizer is a common device for drawing, painting, or selecting positions.
➢ A graphics tablet is one type of digitizer, which is used to input two-dimensional coordinates by activating a hand cursor or stylus at selected positions on a flat surface.
➢ A hand cursor contains cross hairs for sighting positions, and a stylus is a pencil-shaped device that is pointed at positions on the tablet.

Image Scanners:
➢ Drawings, graphs, photographs, or text can be stored for computer processing with an image scanner by passing an optical scanning mechanism over the information to be stored.
➢ Once we have the internal representation of a picture, we can apply various image-processing methods to modify the representation, and various editing operations can be performed on the stored documents.

Touch Panels:
➢ Touch panels allow displayed objects or screen positions to be selected with the touch of
a finger.
➢ Touch panel is used for the selection of processing options that are represented as a menu
of graphical icons.
➢ An optical touch panel uses LEDs along one vertical edge and one horizontal edge of the frame.
➢ An acoustical touch panel generates high-frequency sound waves in horizontal and vertical directions across a glass plate.

Light Pens:
➢ Light pens are pencil-shaped devices used to select positions by detecting the light
coming from points on the CRT screen.
➢ To select positions in any screen area with a light pen, we must have some nonzero light intensity emitted from each pixel within that area.
➢ Light pens sometimes give false readings due to background lighting in a room.


Voice Systems:
➢ Speech recognizers are used with some graphics workstations as input devices for voice commands. The voice-system input can be used to initiate operations or to enter data.
➢ A dictionary is set up by speaking the command words several times; the system then analyzes each word and stores its pattern, against which later spoken commands are matched.

1.6 Graphics Networks


➔ So far, we have mainly considered graphics applications on an isolated system with a
single user.
➔ Multiuser environments & computer networks are now common elements in many
graphics applications.
➔ Various resources, such as processors, printers, plotters and data files can be distributed
on a network & shared by multiple users.
➔ A graphics monitor on a network is generally referred to as a graphics server.
➔ The computer on a network that is executing a graphics application is called the client.
➔ A workstation that includes processors, as well as a monitor and input devices can
function as both a server and a client.

1.7 Graphics on Internet


✓ A great deal of graphics development is now done on the Internet.
✓ Computers on the Internet communicate using TCP/IP.
✓ Resources such as graphics files are identified by URL (Uniform resource locator).
✓ The World Wide Web provides a hypertext system that allows users to locate and view documents, audio, and graphics.
✓ A URL is sometimes also called a universal resource locator.
✓ The URL contains two parts: the protocol for transferring the document, and the server that contains the document.

1.8 Graphics Software


✓ There are two broad classifications for computer-graphics software


1. Special-purpose packages: Special-purpose packages are designed for nonprogrammers.
Example: users can generate pictures, graphs, charts, paintings, or CAD designs in some application area without worrying about the underlying graphics procedures.
2. General programming packages: general programming package provides a library of
graphics functions that can be used in a programming language such as C, C++, Java,
or FORTRAN.
Example: GL (Graphics Library), OpenGL, VRML (Virtual-Reality Modeling
Language), Java 2D And Java 3D

NOTE: A set of graphics functions is often called a computer-graphics application programming interface (CG API).

1.10 Coordinate Representations


✓ To generate a picture using a programming package, we first need to give the geometric descriptions of the objects that are to be displayed, expressed in terms of coordinates.
✓ If coordinate values for a picture are given in some other reference frame (spherical, hyperbolic, etc.), they must be converted to Cartesian coordinates.
✓ Several different Cartesian reference frames are used in the process of constructing and displaying a scene.
✓ First we define the shapes of individual objects, such as trees or furniture; these reference frames are called modeling coordinates or local coordinates.
✓ Then we place the objects into appropriate locations within a scene reference frame
called world coordinates.
✓ After all parts of a scene have been specified, it is processed through various output-
device reference frames for display. This process is called the viewing pipeline.
✓ The scene is then stored in normalized coordinates, which range from −1 to 1 or from 0 to 1. Normalized coordinates are also referred to as normalized device coordinates.
✓ The coordinate systems for display devices are generally called device coordinates, or
screen coordinates.
NOTE: Geometric descriptions in modeling coordinates and world coordinates can be given in floating-point or integer values.


✓ Example: The figure briefly illustrates the sequence of coordinate transformations from modeling coordinates to device coordinates for a display.

1.11 Graphics Functions


➔ It provides users with a variety of functions for creating and manipulating pictures
➔ The basic building blocks for pictures are referred to as graphics output primitives
➔ Attributes are properties of the output primitives
➔ We can change the size, position, or orientation of an object using geometric
transformations
➔ Modeling transformations, which are used to construct a scene.
➔ Viewing transformations are used to select a view of the scene, the type of projection to
be used and the location where the view is to be displayed.
➔ Input functions are used to control and process the data flow from interactive devices (mouse, tablet, and joystick).
➔ A graphics package contains functions for a number of housekeeping tasks; we can lump the functions for carrying out these tasks under the heading control operations.

Software Standards
✓ The primary goal of standardized graphics software is portability.


✓ In 1984, Graphical Kernel System (GKS) was adopted as the first graphics software
standard by the International Standards Organization (ISO)
✓ The second software standard to be developed and approved by the standards
organizations was Programmer’s Hierarchical Interactive Graphics System (PHIGS).
✓ Extension of PHIGS, called PHIGS+, was developed to provide 3-D surface rendering
capabilities not available in PHIGS.
✓ The graphics workstations from Silicon Graphics, Inc. (SGI), came with a set of routines
called GL (Graphics Library)

Other Graphics Packages


✓ Many other computer-graphics programming libraries have been developed:
1. Some provide general graphics routines.
2. Some are aimed at specific applications (animation, virtual reality, etc.).
Examples: Open Inventor, Virtual-Reality Modeling Language (VRML).
We can create 2-D scenes within Java applets (Java 2D, Java 3D).

1.12 Introduction to OpenGL


✓ OpenGL basic (core) library: a basic library of functions is provided in OpenGL for specifying graphics primitives, attributes, geometric transformations, viewing transformations, and many other operations.

Basic OpenGL Syntax


➔ Function names in the OpenGL basic library (also called the OpenGL core library) are prefixed with gl, and the first letter of each component word is capitalized.
➔ For example: glBegin, glClear, glCopyPixels, glPolygonMode.
➔ Symbolic constants that are used with certain functions as parameters are all in capital letters, preceded by GL, with component words separated by underscores.
➔ For example: GL_2D, GL_RGB, GL_CCW, GL_POLYGON, GL_AMBIENT_AND_DIFFUSE.


➔ The OpenGL functions also expect specific data types. For example, an OpenGL function
parameter might expect a value that is specified as a 32-bit integer. But the size of an
integer specification can be different on different machines.
➔ To indicate a specific data type, OpenGL uses special built-in, data-type names, such as
GLbyte, GLshort, GLint, GLfloat, GLdouble, Glboolean

Related Libraries
➔ In addition to the OpenGL basic (core) library (prefixed with gl), there are a number of associated libraries for handling special operations:
1) OpenGL Utility(GLU):- Prefixed with “glu”. It provides routines for setting up
viewing and projection matrices, describing complex objects with line and polygon
approximations, displaying quadrics and B-splines using linear approximations,
processing the surface-rendering operations, and other complex tasks.
Every OpenGL implementation includes the GLU library.
2) Open Inventor:- provides routines and predefined object shapes for interactive three-
dimensional applications which are written in C++.
3) Window-system libraries: To create graphics we need a display window. We cannot create the display window directly with the basic OpenGL functions, since the core library contains only device-independent graphics functions and window-management operations are device-dependent. However, there are several window-system libraries that support OpenGL functions for a variety of machines.
Eg:- Apple GL(AGL), Windows-to-OpenGL(WGL), Presentation Manager to
OpenGL(PGL), GLX.
4) OpenGL Utility Toolkit (GLUT): provides a library of functions that act as an interface for interacting with any device-specific screen-windowing system, thus making our programs device-independent. The GLUT library functions are prefixed with “glut”.

Header Files
✓ In all graphics programs, we will need to include the header file for the OpenGL core
library.


✓ In Windows, to include the OpenGL core library and GLU, we can use the following header files:
#include <windows.h> // precedes other header files for including the Microsoft Windows version of the OpenGL libraries
#include<GL/gl.h>
#include <GL/glu.h>
✓ The above lines can be replaced by including the GLUT header file, which ensures that gl.h and glu.h are included correctly:
✓ #include <GL/glut.h> //GL in windows
✓ In Apple OS X systems, the header file inclusion statement will be,
✓ #include <GLUT/glut.h>

Display-Window Management Using GLUT


✓ We can consider a simplified example with the minimal number of operations for displaying a picture.
Step 1: initialization of GLUT
 We are using the OpenGL Utility Toolkit, our first step is to initialize GLUT.
 This initialization function could also process any command-line arguments, but we will not need to use these parameters for our first example programs.
 We perform the GLUT initialization with the statement
glutInit (&argc, argv);
Step 2: title
 We can state that a display window is to be created on the screen with a given caption for the title bar. This is accomplished with the function
glutCreateWindow ("An Example OpenGL Program");
 where the single argument for this function can be any character string that we want to use for the display-window title.
Step 3: Specification of the display window
 Then we need to specify what the display window is to contain.
 For this, we create a picture using OpenGL functions and pass the picture definition to the GLUT routine glutDisplayFunc, which assigns our picture to the display window.


 Example: suppose we have the OpenGL code for describing a line segment in a
procedure called lineSegment.
 Then the following function call passes the line-segment description to the display
window:
glutDisplayFunc (lineSegment);
Step 4: one more GLUT function
 But the display window is not yet on the screen.
 We need one more GLUT function to complete the window-processing operations.
 After execution of the following statement, all display windows that we have created,
including their graphic content, are now activated:
glutMainLoop ( );
 This function must be the last one in our program. It displays the initial graphics and puts the program into an infinite loop that checks for input from devices such as a mouse or keyboard.
Step 5: setting display-window parameters using additional GLUT functions
 Although the display window that we created will be in some default location and size,
we can set these parameters using additional GLUT functions.
GLUT Function 1:
➔ We use the glutInitWindowPosition function to give an initial location for the upper-left corner of the display window.
➔ This position is specified in integer screen coordinates, whose origin is at the upper-left
corner of the screen.


GLUT Function 2:
After the display window is on the screen, we can reposition and resize it.
GLUT Function 3:
➔ We can also set a number of other options for the display window, such as buffering and
a choice of color modes, with the glutInitDisplayMode function.
➔ Arguments for this routine are assigned symbolic GLUT constants.
➔ Example: the following command specifies that a single refresh buffer is to be used for
the display window and that we want to use the color mode which uses red, green, and
blue (RGB) components to select color values:
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
➔ The values of the constants passed to this function are combined using a logical or
operation.
➔ Actually, single buffering and RGB color mode are the default options.
➔ But we will use the function now as a reminder that these are the options that are set for
our display.
➔ Later, we discuss color modes in more detail, as well as other display options, such as double buffering for animation applications and selecting parameters for viewing three-dimensional scenes.

A Complete OpenGL Program


➔ There are still a few more tasks to perform before we have all the parts that we need for a
complete program.
Step 1: to set background color
➔ For the display window, we can choose a background color.
➔ Using RGB color values, we set the background color for the display window to be
white, with the OpenGL function:
glClearColor (1.0, 1.0, 1.0, 0.0);
➔ The first three arguments in this function set the red, green, and blue component colors to
the value 1.0, giving us a white background color for the display window.
➔ If, instead of 1.0, we set each of the component colors to 0.0, we would get a black
background.


➔ The fourth parameter in the glClearColor function is called the alpha value for the
specified color. One use for the alpha value is as a “blending” parameter
➔ When we activate the OpenGL blending operations, alpha values can be used to
determine the resulting color for two overlapping objects.
➔ An alpha value of 0.0 indicates a totally transparent object, and an alpha value of 1.0
indicates an opaque object.
➔ For now, we will simply set alpha to 0.0.
➔ Although the glClearColor command assigns a color to the display window, it does not
put the display window on the screen.

Step 2: to set window color


➔ To get the assigned window color displayed, we need to invoke the following OpenGL
function:
glClear (GL_COLOR_BUFFER_BIT);
➔ The argument GL_COLOR_BUFFER_BIT is an OpenGL symbolic constant specifying that it is the bit values in the color buffer (refresh buffer) that are to be set to the values indicated in the glClearColor function. (OpenGL has several different kinds of buffers that can be manipulated.)

Step 3: to set color to object


➔ In addition to setting the background color for the display window, we can choose a
variety of color schemes for the objects we want to display in a scene.
➔ For our initial programming example, we will simply set the object color to be a dark green:
glColor3f (0.0, 0.4, 0.2);
➔ The suffix 3f on the glColor function indicates that we are specifying the three RGB
color components using floating-point (f) values.
➔ This function requires that the values be in the range from 0.0 to 1.0, and we have set red
= 0.0, green = 0.4, and blue = 0.2.


Example program
➔ For our first program, we simply display a two-dimensional line segment.
➔ To do this, we need to tell OpenGL how we want to “project” our picture onto the display
window because generating a two-dimensional picture is treated by OpenGL as a special
case of three-dimensional viewing.
➔ So, although we only want to produce a very simple two-dimensional line, OpenGL
processes our picture through the full three-dimensional viewing operations.
➔ We can set the projection type (mode) and other viewing parameters that we need with
the following two functions:
glMatrixMode (GL_PROJECTION);
gluOrtho2D (0.0, 200.0, 0.0, 150.0);
➔ This specifies that an orthogonal projection is to be used to map the contents of a two-dimensional rectangular area of world coordinates to the screen, and that the x-coordinate values within this rectangle range from 0.0 to 200.0, with y-coordinate values ranging from 0.0 to 150.0.
➔ Whatever objects we define within this world-coordinate rectangle will be shown within
the display window.
➔ Anything outside this coordinate range will not be displayed.
➔ Therefore, the GLU function gluOrtho2D defines the coordinate reference frame within
the display window to be (0.0, 0.0) at the lower-left corner of the display window and
(200.0, 150.0) at the upper-right window corner.
➔ For now, we will use a world-coordinate rectangle with the same aspect ratio as the
display window, so that there is no distortion of our picture.
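➔ As a quick illustrative check (using the numbers from this example): the world rectangle is 200.0 × 150.0 units, an aspect ratio of 4:3, and the 400 × 300 display window created in the program below has the same 4:3 ratio, so each world unit maps to two pixels in both x and y and the picture is not stretched.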
➔ Finally, we need to call the appropriate OpenGL routines to create our line segment.
➔ The following code defines a two-dimensional, straight-line segment with integer Cartesian endpoint coordinates (180, 15) and (10, 145).
glBegin (GL_LINES);
glVertex2i (180, 15);
glVertex2i (10, 145);
glEnd ( );
➔ Now we are ready to put all the pieces together:


The following OpenGL program is organized into three functions.


➔ init: We place all initializations and related one-time parameter settings in function init.
➔ lineSegment: Our geometric description of the “picture” that we want to display is in
function lineSegment, which is the function that will be referenced by the GLUT function
glutDisplayFunc.
➔ main: The main function contains the GLUT functions for setting up the display window and getting our line segment onto the screen.
➔ glFlush: This is simply a routine to force execution of our OpenGL functions, which are stored by computer systems in buffers in different locations, depending on how OpenGL is implemented.
➔ The procedure lineSegment that we set up to describe our picture is referred to as a
display callback function.
➔ And this procedure is described as being “registered” by glutDisplayFunc as the routine
to invoke whenever the display window might need to be redisplayed.
Example: if the display window is moved.
Following program to display window and line segment generated by this program:
#include <GL/glut.h> // (or others, depending on the system in use)
void init (void)
{
glClearColor (1.0, 1.0, 1.0, 0.0); // Set display-window color to white.
glMatrixMode (GL_PROJECTION); // Set projection parameters.
gluOrtho2D (0.0, 200.0, 0.0, 150.0);
}
void lineSegment (void)
{
glClear (GL_COLOR_BUFFER_BIT); // Clear display window.
glColor3f (0.0, 0.4, 0.2); // Set line segment color to green.
glBegin (GL_LINES);
glVertex2i (180, 15); // Specify line-segment geometry.
glVertex2i (10, 145);
glEnd ( );

glFlush ( ); // Process all OpenGL routines as quickly as possible.
}
int main (int argc, char** argv)
{
glutInit (&argc, argv); // Initialize GLUT.
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB); // Set display mode.
glutInitWindowPosition (50, 100); // Set top-left display-window position.
glutInitWindowSize (400, 300); // Set display-window width and height.
glutCreateWindow ("An Example OpenGL Program"); // Create display window.
init ( ); // Execute initialization procedure.
glutDisplayFunc (lineSegment); // Send graphics to display window.
glutMainLoop ( ); // Display everything and wait.
}
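On a typical Linux system with the freeglut development package installed, a program like this can usually be compiled with a command along these lines (an assumption; library names and flags vary by platform):

gcc example.c -o example -lglut -lGLU -lGL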

1.13 Coordinate Reference Frames


To describe a picture, we first decide upon
 A convenient Cartesian coordinate system, called the world-coordinate reference frame,
which could be either 2D or 3D.
 We then describe the objects in our picture by giving their geometric specifications in
terms of positions in world coordinates.
 Example: We define a straight-line segment with two endpoint positions, and a polygon
is specified with a set of positions for its vertices.
 These coordinate positions are stored in the scene description along with other info about
the objects, such as their color and their coordinate extents
 Co-ordinate extents: Co-ordinate extents are the minimum and maximum x, y, and z values for each object.
 A set of coordinate extents is also described as a bounding box for an object.
 Ex:For a 2D figure, the coordinate extents are sometimes called its bounding rectangle.
 Objects are then displayed by passing the scene description to the viewing routines which
identify visible surfaces and map the objects to the frame buffer positions and then on the
video monitor.
 The scan-conversion algorithm stores info about the scene, such as color values, at the appropriate locations in the frame buffer, and then the scene is displayed on the output device.

Screen co-ordinates:
✓ Locations on a video monitor are referenced in integer screen coordinates, which
correspond to the integer pixel positions in the frame buffer.
✓ Scan-line algorithms for the graphics primitives use the coordinate descriptions to
determine the locations of pixels
✓ Example: given the endpoint coordinates for a line segment, a display algorithm must
calculate the positions for those pixels that lie along the line path between the endpoints.
✓ Since a pixel position occupies a finite area of the screen, the finite size of a pixel must
be taken into account by the implementation algorithms.
✓ For the present, we assume that each integer screen position references the centre of a
pixel area.
✓ Once pixel positions have been identified the color values must be stored in the frame
buffer

Assume we have available a low-level procedure of the form


i) setPixel (x, y);
• Stores the current color setting into the frame buffer at integer position (x, y), relative to
the position of the screen-coordinate origin
ii) getPixel (x, y, color);
• Retrieves the current frame-buffer setting for a pixel location;
• Parameter color receives an integer value corresponding to the combined RGB bit codes
stored for the specified pixel at position (x,y).
• Additional screen-coordinate information is needed for 3D scenes.
• For a two-dimensional scene, all depth values are 0.
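• OpenGL itself has no setPixel routine; as a minimal sketch (assuming a 2D coordinate frame and a current color have already been set), the low-level procedure assumed above could be emulated by plotting a single point:
void setPixel (GLint x, GLint y)
{
/* Sketch only: emulate the assumed low-level routine by
plotting one point at integer screen position (x, y). */
glBegin (GL_POINTS);
glVertex2i (x, y);
glEnd ( );
}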
Absolute and Relative Coordinate Specifications


Absolute coordinate:
➢ So far, the coordinate references that we have discussed are stated as absolute coordinate
values.
➢ This means that the values specified are the actual positions within the coordinate system
in use.
Relative coordinates:
➢ However, some graphics packages also allow positions to be specified using relative
coordinates.
➢ This method is useful for various graphics applications, such as producing drawings with
pen plotters, artist’s drawing and painting systems, and graphics packages for publishing
and printing applications.
➢ Taking this approach, we can specify a coordinate position as an offset from the last
position that was referenced (called the current position).
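➢ As an illustrative C sketch of this idea (lineRel and the current-position variables below are assumptions for the sketch, not OpenGL routines), each relative specification is added to a stored current position:
/* Sketch: draw a line segment specified by a relative offset
(dx, dy) from the current position, then update that position. */
static GLint currentX = 0, currentY = 0;
void lineRel (GLint dx, GLint dy)
{
glBegin (GL_LINES);
glVertex2i (currentX, currentY); /* start at the current position */
glVertex2i (currentX + dx, currentY + dy); /* endpoint as an offset */
glEnd ( );
currentX += dx; /* the endpoint becomes the new current position */
currentY += dy;
}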

Specifying a Two-Dimensional World-Coordinate Reference Frame in OpenGL


➢ The gluOrtho2D command is a function we can use to set up any two-dimensional Cartesian reference frame.
➢ The arguments for this function are the four values defining the x and y coordinate limits
for the picture we want to display.
➢ Since the gluOrtho2D function specifies an orthogonal projection, we need also to be sure
that the coordinate values are placed in the OpenGL projection matrix.
➢ In addition, we could assign the identity matrix as the projection matrix before defining
the world-coordinate range.
➢ This would ensure that the coordinate values were not accumulated with any values we
may have previously set for the projection matrix.
➢ Thus, for our initial two-dimensional examples, we can define the coordinate frame for
the screen display window with the following statements
glMatrixMode (GL_PROJECTION);
glLoadIdentity ( );
gluOrtho2D (xmin, xmax, ymin, ymax);
➢ The display window will then be referenced by coordinates (xmin, ymin) at the lower-left
corner and by coordinates (xmax, ymax) at the upper-right corner, as shown in Figure
below

➢ We can then designate one or more graphics primitives for display using the coordinate
reference specified in the gluOrtho2D statement.
➢ If the coordinate extents of a primitive are within the coordinate range of the display
window, all of the primitive will be displayed.
➢ Otherwise, only those parts of the primitive within the display-window coordinate limits
will be shown.
➢ Also, when we set up the geometry describing a picture, all positions for the OpenGL
primitives must be given in absolute coordinates, with respect to the reference frame
defined in the gluOrtho2D function.

1.14 OpenGL Functions


Geometric Primitives:
➢ These include points, line segments, polygons, and so on.
➢ These primitives pass through a geometric pipeline, which decides whether a primitive is visible and how it should appear on the screen.
➢ Geometric transformations such as rotation and scaling can be applied to the primitives that are displayed on the screen. The programmer can create geometric primitives as shown below:
glBegin (type);
glVertex2i (x1, y1);
glVertex2i (x2, y2);
. . .
glVertex2i (xn, yn);
glEnd ( );
where:
glBegin indicates the beginning of the object that has to be displayed
glEnd indicates the end of primitive

1.15 OpenGL Point Functions


➢ The type within glBegin() specifies the type of the object and its value can be as follows:
GL_POINTS
➢ Each vertex is displayed as a point.
➢ The size of the point would be of at least one pixel.
➢ Then this coordinate position, along with other geometric descriptions we may have in
our scene, is passed to the viewing routines.
➢ Unless we specify other attribute values, OpenGL primitives are displayed with a default
size and color.
➢ The default color for primitives is white, and the default point size is equal to the size of a
single screen pixel
Syntax:
Case 1:
glBegin (GL_POINTS);
glVertex2i (50, 100);
glVertex2i (75, 150);
glVertex2i (100, 200);
glEnd ( );
Case 2:
➢ we could specify the coordinate values for the preceding points in arrays such as
int point1 [ ] = {50, 100};
int point2 [ ] = {75, 150};
int point3 [ ] = {100, 200};
and call the OpenGL functions for plotting the three points as
glBegin (GL_POINTS);
glVertex2iv (point1);
glVertex2iv (point2);
glVertex2iv (point3);
glEnd ( );
Case 3:
➢ specifying two point positions in a three dimensional world reference frame. In this case,
we give the coordinates as explicit floating-point values:
glBegin (GL_POINTS);
glVertex3f (-78.05, 909.72, 14.60);
glVertex3f (261.91, -5200.67, 188.33);
glEnd ( );

1.16 OpenGL Line Functions


➢ Primitive type is GL_LINES
➢ Successive pairs of vertices are considered as endpoints and they are connected to form
an individual line segments.
➢ Note that successive segments usually are disconnected because the vertices are
processed on a pair-wise basis.
➢ we obtain one line segment between the first and second coordinate positions and another
line segment between the third and fourth positions.
➢ If the number of specified endpoints is odd, the last coordinate position is ignored.
Case 1: Lines
glBegin (GL_LINES);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );

Case 2: GL_LINE_STRIP:
Successive vertices are connected using line segments. However, the final vertex is not
connected to the initial vertex.
glBegin (GL_LINE_STRIP);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );

Case 3: GL_LINE_LOOP:
Successive vertices are connected using line segments to form a closed path or loop i.e., final
vertex is connected to the initial vertex.
glBegin (GL_LINE_LOOP);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );
Point Attributes


➔ Basically, we can set two attributes for points: color and size.
➔ In a state system: The displayed color and size of a point is determined by the current
values stored in the attribute list.
➔ Color components are set with RGB values or an index into a color table.
➔ For a raster system: Point size is an integer multiple of the pixel size, so that a large point
is displayed as a square block of pixels

OpenGL Point-Attribute Functions


Color:
➔ The displayed color of a designated point position is controlled by the current color
values in the state list.
➔ Also, a color is specified with either the glColor function or the glIndex function.
Size:
➔ We set the size for an OpenGL point with
glPointSize (size);
and the point is then displayed as a square block of pixels.
➔ Parameter size is assigned a positive floating-point value, which is rounded to an integer
(unless the point is to be antialiased).
➔ The number of horizontal and vertical pixels in the display of the point is determined by
parameter size.
➔ Thus, a point size of 1.0 displays a single pixel, and a point size of 2.0 displays a 2×2
pixel array.
➔ If we activate the antialiasing features of OpenGL, the size of a displayed block of pixels
will be modified to smooth the edges.
➔ The default value for point size is 1.0.

Example program:
➔ Attribute functions may be listed inside or outside of a glBegin/glEnd pair.
➔ Example: the following code segment plots three points in varying colors and sizes.
➔ The first is a standard-size red point, the second is a double-size green point, and the third
is a triple-size blue point:

Ex:
glColor3f (1.0, 0.0, 0.0);
glBegin (GL_POINTS);
glVertex2i (50, 100);
glPointSize (2.0);
glColor3f (0.0, 1.0, 0.0);
glVertex2i (75, 150);
glPointSize (3.0);
glColor3f (0.0, 0.0, 1.0);
glVertex2i (100, 200);
glEnd ( );

1.17 OpenGL Line-Attribute Functions


➔ In OpenGL, a straight-line segment can be displayed with three attribute settings: line color, line width, and line style.
➔ OpenGL provides a function for setting the width of a line and another function for
specifying a line style, such as a dashed or dotted line.

OpenGL Line-Width Function


➔ Line width is set in OpenGL with the function
Syntax: glLineWidth (width);
➔ We assign a floating-point value to parameter width, and this value is rounded to the
nearest nonnegative integer.
➔ If the assigned value rounds to 0, the line is displayed with a standard width of 1.0, which is the default width.
➔ Some implementations of the line-width function might support only a limited number of
widths, and some might not support widths other than 1.0.
➔ That is, the magnitudes of the horizontal and vertical separations of the line endpoints, Δx and Δy, are compared to determine whether to generate a thick line using vertical pixel spans or horizontal pixel spans.

OpenGL Line-Style Function


➔ By default, a straight-line segment is displayed as a solid line.
➔ But we can also display dashed lines, dotted lines, or a line with a combination of dashes
and dots.
➔ We can vary the length of the dashes and the spacing between dashes or dots.
➔ We set a current display style for lines with the OpenGL function:
Syntax: glLineStipple (repeatFactor, pattern);

Pattern:
➔ Parameter pattern is used to reference a 16-bit integer that describes how the line should
be displayed.
➔ 1 bit in the pattern denotes an “on” pixel position, and a 0 bit indicates an “off” pixel
position.
➔ The pattern is applied to the pixels along the line path starting with the low-order bits in
the pattern.
➔ The default pattern is 0xFFFF (each bit position has a value of 1),which produces a solid
line.

repeatFactor
➔ Integer parameter repeatFactor specifies how many times each bit in the pattern is to be
repeated before the next bit in the pattern is applied.
➔ The default repeat value is 1.

Polyline:
➔ With a polyline, a specified line-style pattern is not restarted at the beginning of each
segment.
➔ It is applied continuously across all the segments, starting at the first endpoint of the
polyline and ending at the final endpoint for the last segment in the series.
Example:
➔ For line style, suppose parameter pattern is assigned the hexadecimal representation
0x00FF and the repeat factor is 1.
➔ This would display a dashed line with eight pixels in each dash and eight pixel positions
that are “off” (an eight-pixel space) between two dashes.
➔ Also, since low order bits are applied first, a line begins with an eight-pixel dash starting
at the first endpoint.
➔ This dash is followed by an eight-pixel space, then another eight-pixel dash, and so forth,
until the second endpoint position is reached.

Activating line style:


➢ Before a line can be displayed in the current line-style pattern, we must activate the line-
style feature of OpenGL.
glEnable (GL_LINE_STIPPLE);
➢ If we forget to include this enable function, solid lines are displayed; that is, the default
pattern 0xFFFF is used to display line segments.
➢ At any time, we can turn off the line-pattern feature with
glDisable (GL_LINE_STIPPLE);
➢ This replaces the current line-style pattern with the default pattern (solid lines).

Example Code:
typedef struct { float x, y; } wcPt2D;
wcPt2D dataPts [5];
void linePlot (wcPt2D dataPts [5])
{
int k;
glBegin (GL_LINE_STRIP);
for (k = 0; k < 5; k++)
glVertex2f (dataPts [k].x, dataPts [k].y);
glEnd ( );
glFlush ( );
}
/* Invoke a procedure here to draw coordinate axes. */
glEnable (GL_LINE_STIPPLE); /* Input first set of (x, y) data values. */
glLineStipple (1, 0x1C47); // Plot a dash-dot, standard-width polyline.
linePlot (dataPts);
/* Input second set of (x, y) data values. */
glLineStipple (1, 0x00FF); // Plot a dashed, double-width polyline.
glLineWidth (2.0);
linePlot (dataPts);
/* Input third set of (x, y) data values. */
glLineStipple (1, 0x0101); // Plot a dotted, triple-width polyline.
glLineWidth (3.0);
linePlot (dataPts);
glDisable (GL_LINE_STIPPLE);

1.18 Curve Attributes


➔ Parameters for curve attributes are the same as those for straight-line segments.
➔ We can display curves with varying colors, widths, dot-dash patterns, and available pen
or brush options.
➔ Methods for adapting curve-drawing algorithms to accommodate attribute selections are
similar to those for line drawing.
➔ Raster curves of various widths can be displayed using the method of horizontal or
vertical pixel spans.
Case 1: Where the magnitude of the curve slope |m| <= 1.0, we plot vertical spans;
Case 2: when the slope magnitude |m| > 1.0, we plot horizontal spans.

Different methods to draw a curve:


Method 1: Using the circle symmetry property, we generate the circle path with vertical spans in the octant from x = 0 to x = y, and then reflect pixel positions about the line y = x to obtain the rest of the quadrant down to y = 0.
Method 2: Another method for displaying thick curves is to fill in the area between two parallel
curve paths, whose separation distance is equal to the desired width. We could do this using the
specified curve path as one boundary and setting up the second boundary either inside or outside
the original curve path. This approach, however, shifts the original curve path either inward or
outward, depending on which direction we choose for the second boundary.

Method 3:The pixel masks discussed for implementing line-style options could also be used in
raster curve algorithms to generate dashed or dotted patterns

Method 4: Pen (or brush) displays of curves are generated using the same techniques discussed
for straight-line segments.

Method 5: Painting and drawing programs allow pictures to be constructed interactively by using a pointing device, such as a stylus and a graphics tablet, to sketch various curve shapes.

1.19 Line Drawing Algorithm


✓ A straight-line segment in a scene is defined by coordinate positions for the endpoints of
the segment.
✓ To display the line on a raster monitor, the graphics system must first project the
endpoints to integer screen coordinates and determine the nearest pixel positions along
the line path between the two endpoints then the line color is loaded into the frame buffer
at the corresponding pixel coordinates
✓ The Cartesian slope-intercept equation for a straight line is
y=m * x +b ------------>(1)
with m as the slope of the line and b as the y intercept.
✓ Given that the two endpoints of a line segment are specified at positions (x0,y0) and
(xend, yend) ,as shown in fig.
✓ We determine values for the slope m and y intercept b with the following equations:
m=(yend - y0)/(xend - x0) ---------------- >(2)
b=y0 - m.x0 ------------- >(3)
✓ Algorithms for displaying straight line are based on the line equation (1) and calculations
given in eq(2) and (3).
✓ For given x interval δx along a line, we can compute the corresponding y interval δy from
eq.(2) as
δy=m. δx ---------------- >(4)
✓ Similarly, we can obtain the x interval δx corresponding to a specified δy as
δx=δy/m ----------------- >(5)
✓ These equations form the basis for determining deflection voltages in analog displays,
such as vector-scan system, where arbitrarily small changes in deflection voltage are
possible.
✓ For lines with slope magnitudes
➔ |m|<1, δx can be set proportional to a small horizontal deflection voltage with the
corresponding vertical deflection voltage set proportional to δy from eq.(4)
➔ |m|>1, δy can be set proportional to a small vertical deflection voltage with the
corresponding horizontal deflection voltage set proportional to δx from eq.(5)
➔ |m|=1, δx=δy and the horizontal and vertical deflections voltages are equal

DDA Algorithm (DIGITAL DIFFERENTIAL ANALYZER)


➔ The DDA is a scan-conversion line algorithm based on calculating either δy or δx.
➔ A line is sampled at unit intervals in one coordinate and the corresponding integer values
nearest the line path are determined for the other coordinate
➔ DDA Algorithm has three cases so from equation i.e.., m=(yk+1 - yk)/(xk+1 - xk)

Case1:
if m<1, x increments in unit intervals,
i.e., xk+1 = xk + 1
then, m = (yk+1 - yk)/(xk+1 - xk)
m = yk+1 - yk
yk+1 = yk + m ----------- >(1)
➔ where k takes integer values starting from 0, for the first point, and increases by 1 until the final endpoint is reached. Since m can be any real number between 0.0 and 1.0, each calculated y value must be rounded to the nearest integer pixel position.

Case2:
if m>1, y increments in unit intervals,
i.e., yk+1 = yk + 1
then, m = (yk+1 - yk)/(xk+1 - xk)
m (xk+1 - xk) = 1
xk+1 =(1/m)+ xk ---------------- (2)

Case3:
if m=1, both x and y increment in unit intervals,
i.e., xk+1 = xk + 1 and yk+1 = yk + 1

Equations (1) and (2) are based on the assumption that lines are to be processed from the left
endpoint to the right endpoint. If this processing is reversed, so that the starting endpoint is at the
right, then either we have δx=-1 and
yk+1 = yk - m (3)
or(when the slope is greater than 1)we have δy=-1 with
xk+1 = xk - (1/m) --------------- (4)
➔ Similar calculations are carried out using equations (1) through (4) to determine the pixel positions along a line with negative slope. Thus, if the absolute value of the slope is less than 1 and the starting endpoint is at the left, we set δx = 1 and calculate y values with eq. (1).
➔ When the starting endpoint is at the right (for the same slope), we set δx = -1 and obtain y positions using eq. (3).
➔ This algorithm is summarized in the following procedure, which accepts as input two
integer screen positions for the endpoints of a line segment.
➔ if m<1, where x is incrementing by 1:
yk+1 = yk + m
➔ So initially k = 0; assuming (x0, y0) as the initial point, we assign x = x0, y = y0 as the starting point.
o Illuminate pixel (x, round(y))
o x1 = x + 1, y1 = y + m
o Illuminate pixel (x1, round(y1))
o x2 = x1 + 1, y2 = y1 + m
o Illuminate pixel (x2, round(y2))
o Continue in this way until the final endpoint is reached.
➔ if m>1, where y is incrementing by 1:
xk+1 = (1/m) + xk
➔ So initially k = 0; assuming (x0, y0) as the initial point, we assign x = x0, y = y0 as the starting point.
o Illuminate pixel (round(x), y)
o x1 = x + (1/m), y1 = y + 1
o Illuminate pixel (round(x1), y1)
o x2 = x1 + (1/m), y2 = y1 + 1
o Illuminate pixel (round(x2), y2)
o Continue in this way until the final endpoint is reached.

➔ The DDA algorithm is a faster method for calculating pixel positions than one that directly implements the line equation y = m · x + b.
➔ It eliminates the multiplication by making use of raster characteristics, so that appropriate


increments are applied in the x or y directions to step from one pixel position to another
along the line path.
➔ The accumulation of round-off error in successive additions of the floating-point increment, however, can cause the calculated pixel positions to drift away from the true line path for long line segments. Furthermore, the rounding operations and floating-point arithmetic in this procedure are still time-consuming.
➔ We can improve the performance of the DDA algorithm by separating the increments m and 1/m into integer and fractional parts, so that all calculations are reduced to integer operations.
#include <stdlib.h>
#include <math.h>
inline int round (const float a)
{
return int (a + 0.5);
}
void lineDDA (int x0, int y0, int xEnd, int yEnd)
{
int dx = xEnd - x0, dy = yEnd - y0, steps, k;
float xIncrement, yIncrement, x = x0, y = y0;
if (fabs (dx) > fabs (dy))
steps = fabs (dx);
else
steps = fabs (dy);
xIncrement = float (dx) / float (steps);
yIncrement = float (dy) / float (steps);
setPixel (round (x), round (y));
for (k = 0; k < steps; k++) {
x += xIncrement;
y += yIncrement;
setPixel (round (x), round (y));
}
}
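➔ As a quick worked example of this procedure, consider the endpoints (0, 0) and (4, 6). Here |dy| > |dx|, so steps = 6, xIncrement = 4/6 ≈ 0.67, and yIncrement = 1; rounding x at each step gives the pixel sequence (0, 0), (1, 1), (1, 2), (2, 3), (3, 4), (3, 5), (4, 6).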
Bresenham’s Algorithm:
➔ It is an efficient raster line-generating algorithm that uses only incremental integer calculations.
➔ To illustrate Bresenham’s approach, we first consider the scan-conversion process for
lines with positive slope less than 1.0.
➔ Pixel positions along a line path are then determined by sampling at unit x intervals.
Starting from the left endpoint (x0, y0) of a given line, we step to each successive column
(x position) and plot the pixel whose scan-line y value is closest to the line path.

➔ Consider the equation of a straight line y=mx+c where m=dy/dx

Bresenham’s Line-Drawing Algorithm for |m| < 1.0


1. Input the two line endpoints and store the left endpoint in (x0, y0).
2. Set the color for frame-buffer position (x0, y0); i.e., plot the first point.
3. Calculate the constants ∆x, ∆y, 2∆y, and 2∆y − 2∆x, and obtain the starting value for
the decision parameter as
p0 = 2∆y −∆x
4. At each xk along the line, starting at k = 0, perform the following test:
If pk < 0, the next point to plot is (xk + 1, yk ) and
pk+1 = pk + 2∆y
Otherwise, the next point to plot is (xk + 1, yk + 1) and
pk+1 = pk + 2∆y − 2∆x
5. Repeat step 4 ∆x − 1 more times.
Note:
If |m|>1.0
Then
p0 = 2∆x −∆y
and
If pk < 0, the next point to plot is (xk , yk + 1) and
pk+1 = pk + 2∆x
Otherwise, the next point to plot is (xk + 1, yk + 1) and
pk+1 = pk + 2∆x − 2∆y
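As a worked example of these update rules, consider the line endpoints (20, 10) and (30, 18). Then ∆x = 10, ∆y = 8, p0 = 2∆y − ∆x = 6, and the increment constants are 2∆y = 16 and 2∆y − 2∆x = −4. Stepping from (20, 10), the successive decision parameters and plotted pixels are:
k : 0 1 2 3 4 5 6 7 8 9
pk : 6 2 −2 14 10 6 2 −2 14 10
pixel : (21,11) (22,12) (23,12) (24,13) (25,14) (26,15) (27,16) (28,16) (29,17) (30,18)
Whenever pk ≥ 0, both x and y are stepped; whenever pk < 0, only x is stepped.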

Code:
#include <stdlib.h>
#include <math.h>
/* Bresenham line-drawing procedure for |m| < 1.0. */
void lineBres (int x0, int y0, int xEnd, int yEnd)
{
int dx = fabs (xEnd - x0), dy = fabs(yEnd - y0);
int p = 2 * dy - dx;
int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
int x, y;
/* Determine which endpoint to use as start position. */
if (x0 > xEnd) {
x = xEnd;
y = yEnd;
xEnd = x0;
}
else {
x = x0;
y = y0;
}
setPixel (x, y);
while (x < xEnd) {
x++;
if (p < 0)
p += twoDy;
else {
y++;
p += twoDyMinusDx;
}
setPixel (x, y);
}
}

Properties of Circles
➔ A circle is defined as the set of points that are all at a given distance r from a center
position (xc , yc ).
➔ For any circle point (x, y), this distance relationship is expressed by the Pythagorean
theorem in Cartesian coordinates as
(x − xc)^2 + (y − yc)^2 = r^2
➔ We could use this equation to calculate the position of points on a circle circumference
by stepping along the x axis in unit steps from xc −r to xc +r and calculating the
corresponding y values at each position as
y = yc ± sqrt(r^2 − (xc − x)^2)
➔ One problem with this approach is that it involves considerable computation at each step.
Moreover, the spacing between plotted pixel positions is not uniform.
➔ We could adjust the spacing by interchanging x and y (stepping through y values and
calculating x values) whenever the absolute value of the slope of the circle is greater than
1; but this simply increases the computation and processing required by the algorithm.
➔ Another way to eliminate the unequal spacing is to calculate points along the circular
boundary using polar coordinates r and θ
➔ Expressing the circle equation in parametric polar form yields the pair of equations
x = xc + r cos θ
y = yc + r sin θ
Midpoint Circle Algorithm


➔ The midpoint circle algorithm generates all points on a circle centered at the origin by incrementing all the way around the circle path.
➔ The strategy is to select which of 2 pixels is closer to the circle by evaluating a function
at the midpoint between the 2 pixels
➔ To apply the midpoint method, we define a circle function as
fcirc(x, y) = x^2 + y^2 − r^2
➔ To summarize, the relative position of any point (x, y) can be determined by checking the
sign of the circle function as follows:
fcirc(x, y) < 0, if (x, y) is inside the circle boundary
fcirc(x, y) = 0, if (x, y) is on the circle boundary
fcirc(x, y) > 0, if (x, y) is outside the circle boundary
Eight way symmetry


➔ The shape of the circle is similar in each quadrant.
➔ Therefore, if we determine the curve positions in the first quadrant, we can generate the circle positions in the second quadrant of the xy plane, since the two sections are symmetric about the y axis.
➔ The circle sections in the third and fourth quadrants can be obtained from sections in the first and second quadrants by considering symmetry about the x axis.
➔ Consider the circle centered at the origin; if the point (x, y) is on the circle, then we can compute 7 other points on the circle, as shown in the above figure.
➔ Our decision parameter is the circle function evaluated at the midpoint between these
two pixels:
pk = fcirc(xk + 1, yk − 1/2) = (xk + 1)^2 + (yk − 1/2)^2 − r^2
➔ Successive decision parameters are obtained using incremental calculations.


➔ We obtain a recursive expression for the next decision parameter by evaluating the circle
function at sampling position xk+1 + 1 = xk + 2:
pk+1 = fcirc(xk+1 + 1, yk+1 − 1/2)
     = pk + 2(xk + 1) + (yk+1^2 − yk^2) − (yk+1 − yk) + 1
where yk+1 is either yk or yk − 1, depending on the sign of pk.
➔ The initial decision parameter is obtained by evaluating the circle function at the start
position (x0, y0) = (0, r ):
p0 = fcirc(1, r − 1/2) = 1 + (r − 1/2)^2 − r^2 = 5/4 − r
➔ If the radius r is specified as an integer, we can simply round p0 to
p0 = 1 − r (for r an integer)
because all increments are integers.

Midpoint Circle Algorithm


1. Input radius r and circle center (xc , yc ), then set the coordinates for the first point on the
circumference of a circle centered on the origin as
(x0, y0) = (0, r )
2. Calculate the initial value of the decision parameter as
p0 = 1-r
3. At each xk position, starting at k = 0, perform the following test:
If pk < 0, the next point along the circle centered on (0, 0) is (xk + 1, yk ) and
pk+1 = pk + 2xk+1 + 1
Otherwise, the next point along the circle is (xk + 1, yk − 1) and
pk+1 = pk + 2xk+1 + 1 − 2yk+1
where 2xk+1 = 2xk + 2 and 2yk+1 = 2yk − 2.
4. Determine symmetry points in the other seven octants.
5. Move each calculated pixel position (x, y) onto the circular path centered at (xc , yc ) and plot
the coordinate values as follows:
x = x + xc , y = y + yc
6. Repeat steps 3 through 5 until x ≥ y.
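As a worked example, consider a circle of radius r = 10 centered on the origin. Then p0 = 1 − r = −9, and the algorithm generates one octant as follows:
k : 0 1 2 3 4 5 6
pk : −9 −6 −1 6 −3 8 5
(xk+1, yk+1) : (1, 10) (2, 10) (3, 10) (4, 9) (5, 9) (6, 8) (7, 7)
The loop then terminates because x ≥ y, and the remaining seven octants are filled in by symmetry.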
Code:
void draw_pixel(GLint cx, GLint cy)
{
glColor3f(0.5,0.5,0.0);
glBegin(GL_POINTS);
glVertex2i(cx, cy);
glEnd();
}

void plotpixels(GLint h, GLint k, GLint x, GLint y)
{
draw_pixel(x+h, y+k);
draw_pixel(-x+h, y+k);
draw_pixel(x+h, -y+k);
draw_pixel(-x+h, -y+k);
draw_pixel(y+h, x+k);
draw_pixel(-y+h, x+k);
draw_pixel(y+h, -x+k);
draw_pixel(-y+h, -x+k);
}

void circle_draw(GLint xc, GLint yc, GLint r)
{
GLint d=1-r, x=0,y=r;
while(y>x)
{
plotpixels(xc, yc, x, y);
if(d<0) d+=2*x+3;
else
{
d+=2*(x-y)+5;
--y;
}
++x;
}
plotpixels(xc, yc, x, y);
}
Module 2 Fill Area Primitives

2.1 Fill area Primitives:


2.1.1 Introduction
2.1.2 Polygon fill-areas,
2.1.3 OpenGL polygon Fill Area Functions,
2.1.4 Fill area attributes,
2.1.5 General scan line polygon fill algorithm,
2.1.6 OpenGL fill-area Attribute functions.

2.1.1 Introduction
• A useful construct for describing components of a picture is an area that is filled with
some solid color or pattern.
• A picture component of this type is typically referred to as a fill area or a filled area.
• Although any fill-area shape is possible, graphics libraries generally do not support specifications for arbitrary fill shapes.
• Figure below illustrates a few possible fill-area shapes.

• Graphics routines can more efficiently process polygons than other kinds of fill shapes
because polygon boundaries are described with linear equations.
• When lighting effects and surface-shading procedures are applied, an approximated
curved surface can be displayed quite realistically.
• Approximating a curved surface with polygon facets is sometimes referred to as surface
tessellation, or fitting the surface with a polygon mesh.
• Below figure shows the side and top surfaces of a metal cylinder approximated in an
outline form as a polygon mesh.

• Displays of such figures can be generated quickly as wire-frame views, showing only the
polygon edges to give a general indication of the surface structure
• Objects described with a set of polygon surface patches are usually referred to as standard
graphics objects, or just graphics objects.

2.1.2 Polygon Fill Areas


✓ A polygon is a plane figure specified by a set of three or more coordinate positions,
called vertices, that are connected in sequence by straight-line segments, called the edges
or sides of the polygon.
✓ It is required that the polygon edges have no common point other than their endpoints.
✓ Thus, by definition, a polygon must have all its vertices within a single plane and there
can be no edge crossings
✓ Examples of polygons include triangles, rectangles, octagons, and decagons
✓ Any plane figure with a closed-polyline boundary is referred to as a polygon, and one with no crossing edges is referred to as a standard polygon or a simple polygon.
Problem:
➢ For a computer-graphics application, it is possible that a designated set of polygon
vertices do not all lie exactly in one plane
➢ This is due to roundoff error in the calculation of numerical values, to errors in selecting
coordinate positions for the vertices, or, more typically, to approximating a curved
surface with a set of polygonal patches
Solution:
➢ To divide the specified surface mesh into triangles
Polygon Classifications
✓ Polygons are classified into two types
1. Convex Polygon and
2. Concave Polygon
Convex Polygon:
✓ The polygon is convex if all interior angles of a polygon are less than or equal to 180◦,
where an interior angle of a polygon is an angle inside the polygon boundary that is
formed by two adjacent edges
✓ An equivalent definition of a convex polygon is that its interior lies completely on one
side of the infinite extension line of any one of its edges.
✓ Also, if we select any two points in the interior of a convex polygon, the line segment
joining the two points is also in the interior.
Concave Polygon:
✓ A polygon that is not convex is called a concave polygon. The figure below shows a convex and a concave polygon.

✓ The term degenerate polygon is often used to describe a set of vertices that are collinear
or that have repeated coordinate positions.

Problems in concave polygon:


➔ Implementations of fill algorithms and other graphics routines are more complicated
Solution:
➔ It is generally more efficient to split a concave polygon into a set of convex polygons
before processing
Identifying Concave Polygons


Characteristics:
❖ A concave polygon has at least one interior angle greater than 180◦.
❖ The extension of some edges of a concave polygon will intersect other edges, and
❖ Some pair of interior points will produce a line segment that intersects the polygon
boundary

Identification algorithm 1
❖ Identifying a concave polygon by calculating cross-products of successive pairs of edge
vectors.
❖ If we set up a vector for each polygon edge, then we can use the cross-product of adjacent
edges to test for concavity. All such vector products will be of the same sign (positive or
negative) for a convex polygon.
❖ Therefore, if some cross-products yield a positive value and some a negative value, we
have a concave polygon

Identification algorithm 2
❖ Look at the polygon vertex positions relative to the extension line of any edge.
❖ If some vertices are on one side of the extension line and some vertices are on the other
side, the polygon is concave.
Splitting Concave Polygons


✓ Split concave polygon it into a set of convex polygons using edge vectors and edge cross-
products; or, we can use vertex positions relative to an edge extension line to determine
which vertices are on one side of this line and which are on the other.

Vector method
➔ First need to form the edge vectors.
➔ Given two consecutive vertex positions, Vk and Vk+1, we define the edge vector between
them as
Ek = Vk+1 – Vk
➔ Calculate the cross-products of successive edge vectors in order around the polygon
perimeter.
➔ If the z component of some cross-products is positive while other cross-products have a
negative z component, the polygon is concave.
➔ We can apply the vector method by processing edge vectors in counterclockwise order. If any cross-product has a negative z component (as in the figure below), the polygon is concave and we can split it along the line of the first edge vector in the cross-product pair.

E1 = (1, 0, 0) E2 = (1, 1, 0)
E3 = (1, −1, 0) E4 = (0, 2, 0)
E5 = (−3, 0, 0) E6 = (0, −2, 0)

➔ Where the z component is 0, since all edges are in the xy plane.


➔ The cross-product Ej × Ek for two successive edge vectors is a vector perpendicular to the xy plane with z component equal to Ejx Eky − Ekx Ejy.
➔ The values for the above figure are as follows:
E1 × E2 = (0, 0, 1) E2 × E3 = (0, 0, −2)
E3 × E4 = (0, 0, 2) E4 × E5 = (0, 0, 6)
E5 × E6 = (0, 0, 6) E6 × E1 = (0, 0, 2)

➔ Since the cross-product E2 × E3 has a negative z component, we split the polygon along
the line of vector E2.
➔ The line equation for this edge has a slope of 1 and a y intercept of −1. No other edge cross-products are negative, so the two new polygons are both convex.

Rotational method
➔ Proceeding counterclockwise around the polygon edges,
we shift the position of the polygon so that each vertex Vk
in turn is at the coordinate origin.
➔ We rotate the polygon about the origin in a clockwise
direction so that the next vertex Vk+1 is on the x axis.
➔ If the following vertex, Vk+2, is below the x axis,
the polygon is concave.
➔ We then split the polygon along the x axis to form two
new polygons, and we repeat the concave test for
each of the two new polygons

Splitting a Convex Polygon into a Set of Triangles


➢ Once we have a vertex list for a convex polygon, we could transform it into a set of
triangles.
➢ First define any sequence of three consecutive vertices to be a new polygon (a triangle).
➢ The middle triangle vertex is then deleted from the original vertex list.
➢ The same procedure is applied to this modified vertex list to strip off another triangle.
➢ We continue forming triangles in this manner until the original polygon is reduced to just
three vertices, which define the last triangle in the set.
➢ Concave polygon can also be divided into a set of triangles using this approach, although
care must be taken that the new diagonal edge formed by joining the first and third
selected vertices does not cross the concave portion of the polygon, and that the three
selected vertices at each step form an interior angle that is less than 180◦
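➢ A minimal C sketch of this stripping procedure is given below (the function name and storage scheme are illustrative assumptions, not part of a graphics library):
/* Sketch: split a convex polygon with n >= 3 vertices, listed in
order, into n - 2 triangles by repeatedly removing the middle of
the first three consecutive vertices. */
typedef struct { float x, y; } wcPt2D;
int triangulateConvex (wcPt2D v[], int n, wcPt2D tri[][3])
{
int count = 0, k;
while (n >= 3) {
/* The first three consecutive vertices form the next triangle. */
tri[count][0] = v[0];
tri[count][1] = v[1];
tri[count][2] = v[2];
count++;
/* Delete the middle vertex (v[1]) from the vertex list. */
for (k = 1; k < n - 1; k++)
v[k] = v[k + 1];
n--;
}
return count; /* n - 2 triangles for the original n vertices */
}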
Identifying interior and exterior region of polygon


➢ We may want to specify a complex fill region with intersecting edges.
➢ For such shapes, it is not always clear which regions of the xy plane we should call “interior” and which regions we should designate as “exterior” to the object boundaries.
➢ Two commonly used algorithms
1. Odd-Even rule and
2. The nonzero winding-number rule.

Inside-Outside Tests
✓ Also called the odd-parity rule or the even-odd rule.
✓ Draw a line from any position P to a distant point outside the coordinate extents of the
closed polyline.
✓ Then we count the number of line-segment crossings along this line.
✓ If the number of segments crossed by this line is odd, then P is considered to be an interior point. Otherwise, P is an exterior point.
✓ We can use this procedure, for example,to fill the interior region between two concentric
circles or two concentric polygons with a specified color.
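✓ As a compact C sketch of the odd-even rule (the array layout and function name are assumptions for this sketch), we can cast a horizontal ray to the right of P and count edge crossings:
/* Sketch: odd-even (odd-parity) inside test for point (px, py);
vx[], vy[] hold the n polygon vertices in order. */
int insideOddEven (float px, float py, const float vx[], const float vy[], int n)
{
int k, j, crossings = 0;
for (k = 0; k < n; k++) {
j = (k + 1) % n; /* next vertex, wrapping around */
if ((vy[k] > py) != (vy[j] > py)) { /* edge spans the ray's y value */
/* x coordinate where the edge crosses the line y = py */
float xCross = vx[k] + (py - vy[k]) * (vx[j] - vx[k]) / (vy[j] - vy[k]);
if (xCross > px)
crossings++; /* crossing lies to the right of P */
}
}
return crossings % 2; /* odd -> interior point */
}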

Nonzero Winding-Number rule


✓ This rule counts the number of times that the boundary of an object “winds” around a particular point in the counterclockwise direction; this count is termed the winding number.
✓ Initialize the winding number to 0 and again imagining a line drawn from any position P
to a distant point beyond the coordinate extents of the object.
✓ The line we choose must not pass through any endpoint coordinates.
✓ As we move along the line from position P to the distant point, we count the number of
object line segments that cross the reference line in each direction
✓ We add 1 to the winding number every time we intersect a segment that crosses the line
in the direction from right to left, and we subtract 1 very time we intersect a segment that
crosses from left to right
✓ If the winding number is nonzero, P is considered to be an interior point. Otherwise, P is


taken to be an exterior point
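✓ A matching C sketch of the winding-number count (same assumed vertex arrays as the odd-even sketch above; again a horizontal ray is cast to the right of P):
/* Sketch: nonzero winding-number inside test for point (px, py). */
int insideWinding (float px, float py, const float vx[], const float vy[], int n)
{
int k, j, winding = 0;
for (k = 0; k < n; k++) {
j = (k + 1) % n;
if (vy[k] <= py && vy[j] > py) { /* edge crosses the ray going upward */
float xCross = vx[k] + (py - vy[k]) * (vx[j] - vx[k]) / (vy[j] - vy[k]);
if (xCross > px)
winding++; /* right-to-left crossing: add 1 */
}
else if (vy[k] > py && vy[j] <= py) { /* edge crosses going downward */
float xCross = vx[k] + (py - vy[k]) * (vx[j] - vx[k]) / (vy[j] - vy[k]);
if (xCross > px)
winding--; /* left-to-right crossing: subtract 1 */
}
}
return winding != 0; /* nonzero -> interior point */
}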

✓ The nonzero winding-number rule tends to classify as interior some areas that the odd-
even rule deems to be exterior.
✓ Variations of the nonzero winding-number rule can be used to define interior regions in other ways: define a point to be interior if its winding number is positive or if it is negative, or use any other rule on the winding number to generate a variety of fill shapes.
✓ Boolean operations are used to specify a fill area as a combination of two regions
✓ One way to implement Boolean operations is to use a variation of the basic winding-number rule: if we consider the direction for each boundary to be counterclockwise, the union of two regions consists of those points whose winding number is positive.

✓ The intersection of two regions with counterclockwise boundaries would contain those points whose winding number is greater than 1.
▪ To set up a fill area that is the difference of two regions (say, A − B), we can enclose region A with a counterclockwise border and B with a clockwise border.

Polygon Tables
✓ The objects in a scene are described as sets of polygon surface facets
✓ The description for each object includes coordinate information specifying the geometry
for the polygon facets and other surface parameters such as color, transparency, and light-
reflection properties.
✓ The data of the polygons are placed into tables that are to be used in the subsequent
processing, display, and manipulation of the objects in the scene
✓ These polygon data tables can be organized into two groups:
1. Geometric tables and
2. Attribute tables
✓ Geometric data tables contain vertex coordinates and parameters to identify the spatial
orientation of the polygon surfaces.
✓ Attribute information for an object includes parameters specifying the degree of
transparency of the object and its surface reflectivity and texture characteristics
✓ Geometric data for the objects in a scene are arranged conveniently in three lists: a vertex
table, an edge table, and a surface-facet table.
✓ Coordinate values for each vertex in the object are stored in the vertex table.
✓ The edge table contains pointers back into the vertex table to identify the vertices for
each polygon edge.
✓ And the surface-facet table contains pointers back into the edge table to identify the edges
for each polygon
✓ The object can be displayed efficiently by using data from the edge table to identify
polygon boundaries.
✓ An alternative arrangement is to use just two tables: a vertex table and a surface-facet
table this scheme is less convenient, and some edges could get drawn twice in a wire-
frame display.
✓ Another possibility is to use only a surface-facet table, but this duplicates coordinate
information, since explicit coordinate values are listed for each vertex in each polygon
facet. Also the relationship between edges and facets would have to be reconstructed
from the vertex listings in the surface-facet table.
✓ We could expand the edge table to include forward pointers into the surface-facet table so that a common edge between polygons could be identified more rapidly. Similarly, the vertex table could be expanded to reference corresponding edges, for faster information retrieval.

✓ Because the geometric data tables may contain extensive listings of vertices and edges for
complex objects and scenes, it is important that the data be checked for consistency and
completeness.
✓ Some of the tests that could be performed by a graphics package are
(1) that every vertex is listed as an endpoint for at least two edges,
(2) that every edge is part of at least one polygon,
(3) that every polygon is closed,
(4) that each polygon has at least one shared edge, and
(5) that if the edge table contains pointers to polygons, every edge referenced by a
polygon pointer has a reciprocal pointer back to the polygon.

Plane Equations
➢ Each polygon in a scene is contained within a plane of infinite extent.
➢ The general equation of a plane is
Ax + B y + C z + D = 0
Where,
➔ (x, y, z) is any point on the plane, and
➔ The coefficients A, B, C, and D (called plane parameters) are
constants describing the spatial properties of the plane.
➢ We can obtain the values of A, B, C, and D by solving a set of three plane equations using the coordinate values of three noncollinear points in the plane. For this purpose, we can select three successive convex-polygon vertices, (x1, y1, z1), (x2, y2, z2), and (x3, y3, z3), taken in a counterclockwise order, and solve the following set of simultaneous linear plane equations for the ratios A/D, B/D, and C/D:
(A/D)xk + (B/D)yk + (C/D)zk = −1, k = 1, 2, 3
➢ The solution to this set of equations can be obtained in determinant form, using Cramer’s
rule, as
A = | 1 y1 z1; 1 y2 z2; 1 y3 z3 |,  B = | x1 1 z1; x2 1 z2; x3 1 z3 |,
C = | x1 y1 1; x2 y2 1; x3 y3 1 |,  D = − | x1 y1 z1; x2 y2 z2; x3 y3 z3 |
(each | . . . | is a 3 × 3 determinant, with rows separated by semicolons)
➢ Expanding the determinants, we can write the calculations for the plane coefficients in
the form
A = y1 (z2 − z3) + y2 (z3 − z1) + y3 (z1 − z2)
B = z1 (x2 − x3) + z2 (x3 − x1) + z3 (x1 − x2)
C = x1 (y2 − y3) + x2 (y3 − y1) + x3 (y1 − y2)
D = −x1 (y2 z3 − y3 z2) − x2 (y3 z1 − y1 z3) − x3 (y1 z2 − y2 z1)
➢ It is possible that the coordinates defining a polygon facet may not be contained within a
single plane.
➢ We can solve this problem by dividing the facet into a set of triangles; or we could find
an approximating plane for the vertex list.
➢ One method for obtaining an approximating plane is to divide the vertex list into subsets,
where each subset contains three vertices, and calculate plane parameters A, B, C, Dfor
each subset.

Front and Back Polygon Faces


➢ The side of a polygon that faces into the object interior is called the back face, and the visible, or outward, side is the front face.
➢ Every polygon is contained within an infinite plane that partitions space into two regions.
➢ Any point that is not on the plane and that is visible to the front face of a polygon surface
section is said to be in front of (or outside) the plane, and, thus, outside the object.
➢ And any point that is visible to the back face of the polygon is behind (or inside) the
plane.
➢ Plane equations can be used to identify the position of spatial points relative to the
polygon facets of an object.
➢ For any point (x, y, z) not on a plane with parameters A, B, C, D, we have
Ax + B y + C z + D != 0
➢ Thus, we can identify the point as either behind or in front of a polygon surface contained
within that plane according to the sign (negative or positive) of
Ax + By + Cz + D:
if Ax + B y + C z + D < 0, the point (x, y, z) is behind the plane
if Ax + B y + C z + D > 0, the point (x, y, z) is in front of the plane
➢ Orientation of a polygon surface in space can be described with the normal vector for the
plane containing that polygon
➢ The normal vector points in a direction from inside the plane to the outside; that is, from
the back face of the polygon to the front face.
➢ For example, consider the plane x = 1 that describes one face of a unit cube: the normal vector for this plane is N = (1, 0, 0), which is in the direction of the positive x axis.
➢ That is, the normal vector is pointing from inside the cube to the outside and is perpendicular to the plane x = 1.
➢ The elements of a normal vector can also be obtained using a vector crossproduct
Calculation.
➢ We have a convex-polygon surface facet and a right-handed Cartesian system, we again
select any three vertex positions,V1,V2, and V3, taken in counterclockwise order when
viewing from outside the object toward the inside.
➢ Forming two vectors, one from V1 to V2 and the second from V1 to V3, we calculate N
as the vector cross-product:
N = (V2 − V1) × (V3 − V1)
➢ This generates values for the plane parameters A, B, and C. We can then obtain the value for parameter D by substituting these values and the coordinates of any polygon vertex into
Ax + B y + C z + D = 0
➢ The plane equation can be expressed in vector form using the normal N and the position
P of any point in the plane as
N·P = −D
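➢ A short C sketch of this construction (vertex order assumed counterclockwise when viewed from outside; names are illustrative):
/* Sketch: compute plane parameters A, B, C, D from three
noncollinear vertices taken in counterclockwise order. */
typedef struct { float x, y, z; } wcPt3D;
void planeParams (wcPt3D v1, wcPt3D v2, wcPt3D v3,
                  float *A, float *B, float *C, float *D)
{
/* Edge vectors from v1: (v2 - v1) and (v3 - v1). */
float ax = v2.x - v1.x, ay = v2.y - v1.y, az = v2.z - v1.z;
float bx = v3.x - v1.x, by = v3.y - v1.y, bz = v3.z - v1.z;
/* N = (v2 - v1) x (v3 - v1) gives (A, B, C). */
*A = ay * bz - az * by;
*B = az * bx - ax * bz;
*C = ax * by - ay * bx;
/* Any plane point satisfies Ax + By + Cz + D = 0, so D = -(N . v1). */
*D = -(*A * v1.x + *B * v1.y + *C * v1.z);
}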
2.1.3 OpenGL Polygon Fill-Area Functions


✓ A glVertex function is used to input the coordinates for a single polygon vertex, and a
complete polygon is described with a list of vertices placed between a glBegin/glEnd
pair.
✓ By default, a polygon interior is displayed in a solid color, determined by the current color settings. We can also fill a polygon with a pattern, and we can display polygon edges as line borders around the interior fill.
✓ There are six different symbolic constants that we can use as the argument in the glBegin
function to describe polygon fill areas
✓ In some implementations of OpenGL, the following routine can be more efficient than
generating a fill rectangle using glVertex specifications:
glRect* (x1, y1, x2, y2);
✓ One corner of this rectangle is at coordinate position (x1, y1), and the opposite corner of
the rectangle is at position (x2, y2).
✓ Suffix codes for glRect specify the coordinate data type and whether coordinates are to be
expressed as array elements.
✓ These codes are i (for integer), s (for short), f (for float), d (for double), and v (for
vector).
✓ Example
glRecti (200, 100, 50, 250);
If we put the coordinate values for this rectangle into arrays, we can generate the
same square with the following code:
int vertex1 [ ] = {200, 100};
int vertex2 [ ] = {50, 250};
glRectiv (vertex1, vertex2);
Polygon
❖ With the OpenGL primitive constant GL_POLYGON, we can display a single polygon fill area.
❖ Each of the points is represented as an array of (x, y) coordinate values:
glBegin (GL_POLYGON);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glVertex2iv (p6);
glEnd ( );
❖ A polygon vertex list must contain at least three vertices. Otherwise, nothing is displayed.

(a) A single convex polygon fill area generated with the primitive constant GL_POLYGON.
(b) Two unconnected triangles generated with GL_TRIANGLES.
(c) Four connected triangles generated with GL_TRIANGLE_STRIP.
(d) Four connected triangles generated with GL_TRIANGLE_FAN.

Triangles
❖ Displays triangles.
❖ OpenGL provides three triangle primitives: GL_TRIANGLES, GL_TRIANGLE_STRIP, and GL_TRIANGLE_FAN.
glBegin (GL_TRIANGLES);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p6);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );

❖ In this case, the first three coordinate points define the vertices for one triangle, the next
three points define the next triangle, and so forth.
❖ For each triangle fill area, we specify the vertex positions in a counterclockwise order.

Triangle Strip
glBegin (GL_TRIANGLE_STRIP);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p6);
glVertex2iv (p3);
glVertex2iv (p5);
glVertex2iv (p4);
glEnd ( );
❖ Assuming that no coordinate positions are repeated in a list of N vertices, we obtain N − 2
triangles in the strip. Clearly, we must have N ≥ 3 or nothing is displayed.
❖ Each successive triangle shares an edge with the previously defined triangle, so the
ordering of the vertex list must be set up to ensure a consistent display.
❖ Example, our first triangle (n = 1) would be listed as having vertices (p1, p2, p6). The
second triangle (n = 2) would have the vertex ordering (p6, p2, p3). Vertex ordering for
the third triangle (n = 3) would be (p6, p3, p5). And the fourth triangle (n = 4) would be
listed in the polygon tables with vertex ordering (p5, p3, p4).
Triangle Fan
❖ Another way to generate a set of connected triangles is to use the “fan” Approach
glBegin (GL_TRIANGLE_FAN);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glVertex2iv (p6);
glEnd ( );
❖ For N vertices, we again obtain N − 2 triangles, provided no vertex positions are repeated, and at least three vertices must be listed. In addition, the vertices must be specified in the proper order to define front and back faces for each triangle correctly.
❖ Therefore, triangle 1 is defined with the vertex list (p1, p2, p3); triangle 2 has the vertex
ordering (p1, p3, p4); triangle 3 has its vertices specified in the order (p1, p4, p5); and
triangle 4 is listed with vertices (p1, p5, p6).

Quadrilaterals
✓ OpenGL provides for the specifications of two types of quadrilaterals.
✓ With the GL_QUADS primitive constant and the following list of eight vertices, specified as two-dimensional coordinate arrays, we can generate the display shown in Figure (a):
glBegin (GL_QUADS);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glVertex2iv (p6);
glVertex2iv (p7);
glVertex2iv (p8);
glEnd ( );
✓ Rearranging the vertex list in the previous quadrilateral code example and changing the primitive constant to GL_QUAD_STRIP, we can obtain the set of connected quadrilaterals shown in Figure (b):
glBegin (GL_QUAD_STRIP);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p4);
glVertex2iv (p3);
glVertex2iv (p5);
glVertex2iv (p6);
glVertex2iv (p8);
glVertex2iv (p7);
glEnd ( );

✓ For a list of N vertices, we obtain N/2 − 1 quadrilaterals, provided that N ≥ 4. Thus, our first quadrilateral (n = 1) is listed as having a vertex ordering of (p1, p2, p3, p4). The second quadrilateral (n = 2) has the vertex ordering (p4, p3, p6, p5), and the vertex ordering for the third quadrilateral (n = 3) is (p5, p6, p7, p8).
2.1.4 Fill-Area Attributes


➔ We can fill any specified regions, including circles, ellipses, and other objects with
curved boundaries
Fill Styles
➔ A basic fill-area attribute provided by a general graphics library is the display style of the
interior.
➔ We can display a region with a single color, a specified fill pattern, or in a “hollow” style
by showing only the boundary of the region

➔ We can also fill selected regions of a scene using various brush styles, color-blending
combinations, or textures.
➔ For polygons, we could show the edges in different colors, widths, and styles; and we can
select different display attributes for the front and back faces of a region.
➔ Fill patterns can be defined in rectangular color arrays that list different colors for
different positions in the array.
➔ An array specifying a fill pattern is a mask that is to be applied to the display area.
➔ The mask is replicated in the horizontal and vertical directions until the display area is
filled with nonoverlapping copies of the pattern.
➔ This process of filling an area with a rectangular pattern is called tiling, and a rectangular fill pattern is sometimes referred to as a tiling pattern. Predefined fill patterns may be available in a system, such as the hatch fill patterns.
➔ Hatch fill could be applied to regions by drawing sets of line segments to display either single hatching or crosshatching.

Color-Blended Fill Regions


➢ Color-blended regions can be implemented using either transparency factors to control the blending of background and object colors, or simple logical or replace operations, as shown in the figure.

➢ The linear soft-fill algorithm repaints an area that was originally painted by merging a
foreground color F with a single background color B, where F != B.
➢ The current color P of each pixel within the area to be refilled is some linear combination
of F and B:
P = tF + (1 − t)B
➢ Where the transparency factor t has a value between 0 and 1 for each pixel.
➢ For values of t less than 0.5, the background color contributes more to the interior color
of the region than does the fill color.
➢ If our color values are represented using separate red, green, and blue (RGB) components, the blending is applied to each component of the colors, with
P = (PR, PG, PB), F = (FR, FG, FB), B = (BR, BG, BB)
➢ We can thus calculate the value of parameter t using one of the RGB color components as
follows:
t = (Pk − Bk) / (Fk − Bk)
Where k = R, G, or B; and Fk != Bk .
➢ When two background colors B1 and B2 are mixed with foreground color F, the resulting
pixel color P is
P = t0F + t1B1 + (1 − t0 − t1)B2
➢ Where the sum of the color-term coefficients t0, t1, and (1 − t0 − t1) must equal 1.
➢ With three background colors and one foreground color, or with two background and two
foreground colors, we need all three RGB equations to obtain the relative amounts of the
four colors.
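➢ As a small C sketch of the single-background case (component values assumed in the range 0.0 to 1.0), t can be recovered from any one RGB component and reused to repaint the pixel over a new background:
/* Sketch: soft-fill repaint of one color component. The pixel was
painted as P = t*F + (1 - t)*B; recover t, then blend the
foreground with a new background component Bnew. */
float softFillComponent (float P, float F, float B, float Bnew)
{
float t = (P - B) / (F - B); /* requires F != B for this component */
return t * F + (1.0f - t) * Bnew;
}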

2.1.5 General Scan-Line Polygon-Fill Algorithm


➔ A scan-line fill of a region is performed by first determining the intersection positions of
the boundaries of the fill region with the screen scan lines.
➔ Then the fill colors are applied to each section of a scan line that lies within the interior of
the fill region.
➔ The simplest area to fill is a polygon because each scanline intersection point with a
polygon boundary is obtained by solving a pair of simultaneous linear equations, where
the equation for the scan line is simply y = constant.
➔ The figure above illustrates the basic scan-line procedure for a solid-color fill of a polygon.
➔ For each scan line that crosses the polygon, the edge intersections are sorted from left to right, and then the pixel positions between, and including, each intersection pair are set to the specified fill color. In the figure, the fill color is applied to the five pixels from x = 10 to x = 14 and to the seven pixels from x = 18 to x = 24.
➔ Whenever a scan line passes through a vertex, it intersects two polygon edges at that
point.
➔ In some cases, this can result in an odd number of boundary intersections for a scan line.

➔ Scan line y’ intersects an even number of edges, and the two pairs of intersection points
along this scan line correctly identify the interior pixel spans.
➔ But scan line y intersects five polygon edges.
➔ Thus, as we process scan lines, we need to distinguish between these cases.
➔ For scan line y, the two edges sharing an intersection vertex are on opposite sides of the
scan line.
➔ But for scan line y’, the two intersecting edges are both above the scan line.


➔ Thus, a vertex that has adjoining edges on opposite sides of an intersecting scan line
should be counted as just one boundary intersection point.
➔ If the three endpoint y values of two consecutive edges monotonically increase or
decrease, we need to count the shared (middle) vertex as a single intersection point for
the scan line passing through that vertex.
➔ Otherwise, the shared vertex represents a local extremum (minimum or maximum) on the
polygon boundary, and the two edge intersections with the scan line passing through that
vertex can be added to the intersection list.
➔ One method for implementing the adjustment to the vertex-intersection count is to
shorten some polygon edges to split those vertices that should be counted as one
intersection.
➔ We can process nonhorizontal edges around the polygon boundary in the order specified,
either clockwise or counterclockwise.
➔ Adjusting endpoint y values for a polygon, as we process edges in order around the
polygon perimeter. The edge currently being processed is indicated as a solid line

In (a), the y coordinate of the upper endpoint of the current edge is decreased by 1. In
(b), the y coordinate of the upper endpoint of the next edge is decreased by 1.

➔ Coherence properties can be used in computer-graphics algorithms to reduce processing.


➔ Coherence methods often involve incremental calculations applied along a single scan
line or between successive scan lines


➔ The slope of this edge can be expressed in terms of the scan-line intersection coordinates:
m = (yk+1 − yk) / (xk+1 − xk)
➔ Because the change in y coordinates between the two scan lines is simply
yk+1 − yk = 1
➔ the x-intersection value xk+1 on the upper scan line can be determined from the x-
intersection value xk on the preceding scan line as
xk+1 = xk + 1/m
➔ Each successive x intercept can thus be calculated by adding the inverse of the slope and
rounding to the nearest integer.
➔ Along an edge with slope m, the intersection xk value for scan line k above the initial scan
line can be calculated as
xk = x0 +k/m
Where m is the ratio of two integers:
m = Δy / Δx
➔ Where Δx and Δy are the differences between the edge endpoint x and y coordinate
values.
➔ Thus, incremental calculations of x intercepts along an edge for successive scan lines can
be expressed as
xk+1 = xk + Δx/Δy
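
To avoid a floating-point division per scan line, the fraction Δx/Δy can be accumulated
with integer arithmetic. The following sketch (variable and function names are
illustrative, not from the text) steps the x intercept of one nonhorizontal edge from its
lower endpoint to its upper endpoint:

/* Sketch: integer, incremental x-intercept updates along one edge.
   Assumes endpoints (xLower, yLower) and (xUpper, yUpper) with
   yUpper > yLower; dx may be negative for left-leaning edges. */
void traceEdge (int xLower, int yLower, int xUpper, int yUpper)
{
   int dx = xUpper - xLower, dy = yUpper - yLower;
   int x = xLower, counter = 0;
   for (int y = yLower; y < yUpper; y++) {
      /* (x, y) is this edge's intersection with scan line y */
      counter += dx;                          /* accumulate dx/dy   */
      while (counter >=  dy) { x++; counter -= dy; }
      while (counter <= -dy) { x--; counter += dy; }
   }
}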


➔ To perform a polygon fill efficiently, we can first store the polygon boundary in a sorted
edge table that contains all the information necessary to process the scan lines efficiently.
➔ Proceeding around the edges in either a clockwise or a counterclockwise order, we can
use a bucket sort to store the edges, sorted on the smallest y value of each edge, in the
correct scan-line positions.
➔ Only nonhorizontal edges are entered into the sorted edge table.
➔ Each entry in the table for a particular scan line contains the maximum y value for that
edge, the x-intercept value (at the lower vertex) for the edge, and the inverse slope of the
edge. For each scan line, the edges are in sorted order from left to right.

➔ We process the scan lines from the bottom of the polygon to its top, producing an active
edge list for each scan line crossing the polygon boundaries.
➔ The active edge list for a scan line contains all edges crossed by that scan line, with
iterative coherence calculations used to obtain the edge intersections
➔ Implementation of edge-intersection calculations can be facilitated by storing Δx and Δy
values in the sorted edge list
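
One possible (illustrative) C layout for the sorted edge table and active edge list is
sketched below; WINDOW_HEIGHT and the field names are assumptions, not fixed by the text:

/* Each nonhorizontal edge is bucketed at its smallest y value. */
typedef struct Edge {
   int    yUpper;       /* maximum y value of the edge               */
   float  xIntersect;   /* x intercept, starting at the lower vertex */
   float  dxPerScan;    /* inverse slope, added once per scan line   */
   struct Edge *next;   /* next edge in this bucket / active list    */
} Edge;

/* edgeTable[y] lists edges whose lower endpoint lies on scan line y.
   As y advances, the fill routine merges edgeTable[y] into the active
   edge list, drops edges with yUpper == y, adds dxPerScan to each
   xIntersect, and re-sorts the list on xIntersect. */
Edge *edgeTable[WINDOW_HEIGHT];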

2.1.6 OpenGL Fill-Area Attribute Functions


➔ We generate displays of filled convex polygons in four steps:
1. Define a fill pattern.
2. Invoke the polygon-fill routine.


3. Activate the polygon-fill feature of OpenGL.


4. Describe the polygons to be filled.
➔ A polygon fill pattern is displayed up to and including the polygon edges. Thus, there are
no boundary lines around the fill region unless we specifically add them to the display

OpenGL Fill-Pattern Function


➢ To fill the polygon with a pattern in OpenGL, we use a 32 × 32 bit mask.
➢ A value of 1 in the mask indicates that the corresponding pixel is to be set to the current
color, and a 0 leaves the value of that frame-buffer position unchanged.
➢ The fill pattern is specified in unsigned bytes using the OpenGL data type GLubyte:
GLubyte fillPattern [ ] = { 0xff, 0x00, 0xff, 0x00, ... };
➢ The bits must be specified starting with the bottom row of the pattern, and continuing up
to the topmost row (32) of the pattern.
➢ This pattern is replicated across the entire area of the display window, starting at the
lower-left window corner, and specified polygons are filled where the pattern overlaps
those polygons

➢ Once we have set a mask, we can establish it as the current fill pattern with the function
glPolygonStipple (fillPattern);
➢ We need to enable the fill routines before we specify the vertices for the polygons that are
to be filled with the current pattern
glEnable (GL_POLYGON_STIPPLE);
➢ Similarly, we turn off pattern filling with
glDisable (GL_POLYGON_STIPPLE);
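
A short usage sketch, assuming an open display window and a current color; the mask
bytes below build a simple 32 × 32 checkerboard (the pattern values are illustrative):

void drawStippledSquare (void)
{
   GLubyte fillPattern [128];             /* 32 rows x 4 bytes per row */
   GLint k;
   for (k = 0; k < 128; k++)              /* alternate bit patterns    */
      fillPattern [k] = ((k / 4) % 2) ? 0xAA : 0x55;

   glPolygonStipple (fillPattern);
   glEnable (GL_POLYGON_STIPPLE);
   glRecti (50, 50, 150, 150);            /* pattern-filled square     */
   glDisable (GL_POLYGON_STIPPLE);
}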


OpenGL Texture and Interpolation Patterns


➢ Another method for filling polygons is to use texture patterns.
➢ This can produce fill patterns that simulate the surface appearance of wood, brick,
brushed steel, or some other material.
➢ We assign different colors to polygon vertices.
➢ Interpolation fill of a polygon interior is used to produce realistic displays of shaded
surfaces under various lighting conditions.
➢ The polygon fill is then a linear interpolation of the colors at the vertices:
glShadeModel (GL_SMOOTH);
glBegin (GL_TRIANGLES);
glColor3f (0.0, 0.0, 1.0);
glVertex2i (50, 50);
glColor3f (1.0, 0.0, 0.0);
glVertex2i (150, 50);
glColor3f (0.0, 1.0, 0.0);
glVertex2i (75, 150);
glEnd ( );

OpenGL Wire-Frame Methods


➔ We can also choose to show only polygon edges. This produces a wire-frame or hollow
display of the polygon; or we could display a polygon by plotting a set of points only at
the vertex positions.
➔ These options are selected with the function
glPolygonMode (face, displayMode);
➔ We use parameter face to designate which face of the polygon that we want to show as
edges only or vertices only.
➔ This parameter is then assigned either
GL_FRONT, GL_BACK, or GL_FRONT_AND_BACK.
➔ If we want only the polygon edges displayed for our selection, we assign the constant
GL_LINE to parameter displayMode.


➔ To plot only the polygon vertex points, we assign the constant GL_POINT to parameter
displayMode.
➔ Another option is to display a polygon with both an interior fill and a different color or
pattern for its edges.
➔ The following code section fills a polygon interior with a green color, and then the edges
are assigned a red color:
glColor3f (0.0, 1.0, 0.0);
/* Invoke polygon-generating routine. */
glColor3f (1.0, 0.0, 0.0);
glPolygonMode (GL_FRONT, GL_LINE);
/* Invoke polygon-generating routine again. */
➔ For a three-dimensional polygon (one that does not have all vertices in the xy plane), this
method for displaying the edges of a filled polygon may produce gaps along the edges.
➔ This effect is sometimes referred to as stitching.
➔ One way to eliminate the gaps along displayed edges of a three-dimensional polygon is to
shift the depth values calculated by the fill routine so that they do not overlap with the
edge depth values for that polygon.
➔ We do this with the following two OpenGL functions:
glEnable (GL_POLYGON_OFFSET_FILL);
glPolygonOffset (factor1, factor2);
➔ The first function activates the offset routine for scan-line filling, and the second function
is used to set a couple of floating-point values factor1 and factor2 that are used to
calculate the amount of depth offset.
➔ The calculation for this depth offset is
depthOffset = factor1 · maxSlope + factor2 · const
Where,
maxSlope is the maximum slope of the polygon and
const is an implementation constant
➔ As an example of assigning values to offset factors, we can modify the previous code
segment as follows:
glColor3f (0.0, 1.0, 0.0);


glEnable (GL_POLYGON_OFFSET_FILL);
glPolygonOffset (1.0, 1.0);
/* Invoke polygon-generating routine. */
glDisable (GL_POLYGON_OFFSET_FILL);
glColor3f (1.0, 0.0, 0.0);
glPolygonMode (GL_FRONT, GL_LINE);
/* Invoke polygon-generating routine again. */
➔ Another method for eliminating the stitching effect along polygon edges is to use the
OpenGL stencil buffer to limit the polygon interior filling so that it does not overlap the
edges.
➔ To display a concave polygon using OpenGL routines, we must first split it into a set of
convex polygons.
➔ We typically divide a concave polygon into a set of triangles. Then we could display the
triangles.

Dividing a concave polygon (a) into a set of triangles (b) produces triangle edges (dashed) that
are interior to the original polygon.
➔ Fortunately, OpenGL provides a mechanism that allows us to eliminate selected edges
from a wire-frame display.
➔ Each polygon vertex carries a one-bit edge flag; if we set that flag to “off,” the edge
following that vertex will not be displayed.
➔ We set this flag for an edge with the following function:
glEdgeFlag (flag)
➔ To indicate that a vertex does not precede a boundary edge, we assign the OpenGL
constant GL_FALSE to parameter flag.


➔ This applies to all subsequently specified vertices until the next call to glEdgeFlag is
made.
➔ The OpenGL constant GL_TRUE turns the edge flag on again, which is the default.
➔ As an illustration of the use of an edge flag, the following code displays only two edges
of the defined triangle
glPolygonMode (GL_FRONT_AND_BACK, GL_LINE);
glBegin (GL_POLYGON);
glVertex3fv (v1);
glEdgeFlag (GL_FALSE);
glVertex3fv (v2);
glEdgeFlag (GL_TRUE);
glVertex3fv (v3);
glEnd ( );
➔ Polygon edge flags can also be specified in an array that could be combined or associated
with a vertex array.
➔ The statements for creating an array of edge flags are
glEnableClientState (GL_EDGE_FLAG_ARRAY);
glEdgeFlagPointer (offset, edgeFlagArray);
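
As a sketch (coordinates and flag values chosen for illustration), the following pairs an
edge-flag array with a vertex array so that the edge following the second vertex is
suppressed in a wire-frame display:

void drawTwoEdgeTriangle (void)
{
   static GLint vertexArray [6] = { 10, 10,  90, 10,  50, 80 };
   static GLboolean edgeFlagArray [3] = { GL_TRUE, GL_FALSE, GL_TRUE };

   glEnableClientState (GL_VERTEX_ARRAY);
   glEnableClientState (GL_EDGE_FLAG_ARRAY);
   glVertexPointer (2, GL_INT, 0, vertexArray);
   glEdgeFlagPointer (0, edgeFlagArray);

   glPolygonMode (GL_FRONT_AND_BACK, GL_LINE);
   glDrawArrays (GL_TRIANGLES, 0, 3);   /* edge v2->v3 not displayed */
}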

OpenGL Front-Face Function


➢ We can label selected surfaces in a scene independently as front or back with the function
glFrontFace (vertexOrder);
➢ If we set parameter vertexOrder to the OpenGL constant GL_CW, then a subsequently
defined polygon with a clockwise ordering of its vertices is considered to be front-facing.
➢ The constant GL_CCW labels a counterclockwise ordering of polygon vertices as front-
facing, which is the default ordering.

Module 2 2D Viewing

2.2 2D Geometric Transformations:


2.2.1 Basic 2D Geometric Transformations,
2.2.2 Matrix representations and homogeneous coordinates.
2.2.3 Inverse transformations,
2.2.4 2D Composite transformations,
2.2.5 Other 2D transformations,
2.2.6 Raster methods for geometric transformations,
2.2.7 OpenGL raster transformations
2.2.8 OpenGL geometric transformation functions,

Two-Dimensional Geometric Transformations


Operations that are applied to the geometric description of an object to change its
position, orientation, or size are called geometric transformations.

2.2.1 Basic Two-Dimensional Geometric Transformations


The geometric-transformation functions that are available in all graphics packages are
those for translation, rotation, and scaling.

Two-Dimensional Translation
➢ We perform a translation on a single coordinate point by adding offsets to its
coordinates so as to generate a new coordinate position.
➢ We are moving the original point position along a straight-line path to its new location.
➢ To translate a two-dimensional position, we add translation distances tx and ty to the
original coordinates (x, y) to obtain the new coordinate position (x’, y’) as shown in
Figure


➢ The translation values of x’ and y’ are calculated as
x’ = x + tx ,  y’ = y + ty
➢ The translation distance pair (tx, ty) is called a translation vector or shift vector.
Column-vector representation is given as
P = | x | ,  P’ = | x’ | ,  T = | tx |
    | y |        | y’ |        | ty |
➢ This allows us to write the two-dimensional translation equations in the matrix form
P’ = P + T
➢ Translation is a rigid-body transformation that moves objects without deformation.


Code:
class wcPt2D {
public:
GLfloat x, y;
};
void translatePolygon (wcPt2D * verts, GLint nVerts, GLfloat tx, GLfloat ty)
{
GLint k;
for (k = 0; k < nVerts; k++) {
verts [k].x = verts [k].x + tx;
verts [k].y = verts [k].y + ty;
}
glBegin (GL_POLYGON);
for (k = 0; k < nVerts; k++)
glVertex2f (verts [k].x, verts [k].y);
glEnd ( );
}

Two-Dimensional Rotation
✓ We generate a rotation transformation of an object by specifying a rotation axis and a
rotation angle.


✓ A two-dimensional rotation of an object is obtained by repositioning the object along a
circular path in the xy plane.
✓ In this case, we are rotating the object about a rotation axis that is perpendicular to the xy
plane (parallel to the coordinate z axis).
✓ Parameters for the two-dimensional rotation are the rotation angle θ and a position
(xr, yr ), called the rotation point (or pivot point), about which the object is to be rotated

✓ A positive value for the angle θ defines a counterclockwise rotation about the pivot point,
as in above Figure , and a negative value rotates objects in the clockwise direction.
✓ The angular and coordinate relationships of the original and transformed point positions
are shown in Figure

✓ In this figure, r is the constant distance of the point from the origin, angle φ is the original
angular position of the point from the horizontal, and θ is the rotation angle.
✓ we can express the transformed coordinates in terms of angles θ and φ as
x’ = r cos(φ + θ) = r cos φ cos θ − r sin φ sin θ
y’ = r sin(φ + θ) = r cos φ sin θ + r sin φ cos θ
✓ The original coordinates of the point in polar coordinates are
x = r cos φ ,  y = r sin φ

✓ Substituting these expressions for x and y into the equations for x’ and y’, we get
x’ = x cos θ − y sin θ
y’ = x sin θ + y cos θ
✓ We can write the rotation equations in the matrix form
P’ = R · P
Where the rotation matrix is
R = | cos θ  −sin θ |
    | sin θ   cos θ |
✓ Rotation of a point about an arbitrary pivot position is illustrated in Figure

✓ The transformation equations for rotation of a point about any specified rotation position
(xr, yr) are
x’ = xr + (x − xr) cos θ − (y − yr) sin θ
y’ = yr + (x − xr) sin θ + (y − yr) cos θ
Code:
#include <cmath>   // for cos and sin
class wcPt2D {
public:
GLfloat x, y;
};
void rotatePolygon (wcPt2D * verts, GLint nVerts, wcPt2D pivPt, GLdouble theta)
{
// theta is the rotation angle in radians
wcPt2D * vertsRot = new wcPt2D [nVerts];   // storage for the rotated vertices
GLint k;
for (k = 0; k < nVerts; k++) {
vertsRot [k].x = pivPt.x + (verts [k].x - pivPt.x) * cos (theta) - (verts [k].y -
pivPt.y) * sin (theta);
vertsRot [k].y = pivPt.y + (verts [k].x - pivPt.x) * sin (theta) + (verts [k].y -
pivPt.y) * cos (theta);
}
glBegin (GL_POLYGON);
for (k = 0; k < nVerts; k++)
glVertex2f (vertsRot [k].x, vertsRot [k].y);
glEnd ( );
delete [] vertsRot;   // release the temporary storage
}

Two-Dimensional Scaling
✓ To alter the size of an object, we apply a scaling transformation.
✓ A simple two-dimensional scaling operation is performed by multiplying object positions
(x, y) by scaling factors sx and sy to produce the transformed coordinates (x’, y’):
x’ = x · sx ,  y’ = y · sy
✓ The basic two-dimensional scaling equations can also be written in the following matrix
form:
P’ = S · P
Where S is the 2 × 2 scaling matrix
S = | sx  0  |
    | 0   sy |


✓ Any positive values can be assigned to the scaling factors sx and sy.
✓ Values less than 1 reduce the size of objects
✓ Values greater than 1 produce enlargements.
✓ Specifying a value of 1 for both sx and sy leaves the size of objects unchanged.
✓ When sx and sy are assigned the same value, a uniform scaling is produced, which
maintains relative object proportions.


✓ Unequal values for sx and sy result in a differential scaling that is often used in design
applications.
✓ In some systems, negative values can also be specified for the scaling parameters. This
not only resizes an object, it reflects it about one or more of the coordinate axes.
✓ Figure below illustrates scaling of a line by assigning the value 0.5 to both sx and sy

✓ We can control the location of a scaled object by choosing a position, called the fixed
point, that is to remain unchanged after the scaling transformation.
✓ Coordinates for the fixed point, (xf, yf), are often chosen at some object position, such
as its centroid, but any other spatial position can be selected.
✓ For a coordinate position (x, y), the scaled coordinates (x’, y’) are then calculated from
the following relationships:
x’ − xf = (x − xf) sx ,  y’ − yf = (y − yf) sy
✓ We can rewrite these equations to separate the multiplicative and additive terms as
x’ = x · sx + xf (1 − sx)
y’ = y · sy + yf (1 − sy)
✓ Where the additive terms xf (1 − sx) and yf (1 − sy) are constants for all points in the
object.
Code:
class wcPt2D {
public:
GLfloat x, y;
};
void scalePolygon (wcPt2D * verts, GLint nVerts, wcPt2D fixedPt, GLfloat sx, GLfloat sy)
{
wcPt2D * vertsNew = new wcPt2D [nVerts];   // storage for the scaled vertices
GLint k;
for (k = 0; k < nVerts; k++) {
vertsNew [k].x = verts [k].x * sx + fixedPt.x * (1 - sx);
vertsNew [k].y = verts [k].y * sy + fixedPt.y * (1 - sy);
}
glBegin (GL_POLYGON);
for (k = 0; k < nVerts; k++)
glVertex2f (vertsNew [k].x, vertsNew [k].y);
glEnd ( );
delete [] vertsNew;   // release the temporary storage
}

2.2.2 Matrix Representations and Homogeneous Coordinates


✓ Each of the three basic two-dimensional transformations (translation, rotation, and
scaling) can be expressed in the general matrix form
P’ = M1 · P + M2
✓ With coordinate positions P and P’ represented as column vectors.


✓ Matrix M1 is a 2 × 2 array containing multiplicative factors, and M2 is a two-element
column matrix containing translational terms.
✓ For translation, M1 is the identity matrix.
✓ For rotation or scaling, M2 contains the translational terms associated with the pivot
point or scaling fixed point.

Homogeneous Coordinates
➢ Multiplicative and translational terms for a two-dimensional geometric transformation
can be combined into a single matrix if we expand the representations to 3 × 3 matrices
➢ We can use the third column of a transformation matrix for the translation terms, and all
transformation equations can be expressed as matrix multiplications.
➢ We also need to expand the matrix representation for a two-dimensional coordinate
position to a three-element column matrix


➢ A standard technique for accomplishing this is to expand each two-dimensional
coordinate-position representation (x, y) to a three-element representation (xh, yh, h),
called homogeneous coordinates, where the homogeneous parameter h is a nonzero
value such that
x = xh / h ,  y = yh / h
➢ A general two-dimensional homogeneous-coordinate representation could also be written
as (h·x, h·y, h).
➢ A convenient choice is simply to set h = 1. Each two-dimensional position is then
represented with homogeneous coordinates (x, y, 1).
➢ The term homogeneous coordinates is used in mathematics to refer to the effect of this
representation on Cartesian equations.

Two-Dimensional Translation Matrix


✓ The homogeneous-coordinate form for translation is given by
| x’ |   | 1  0  tx |   | x |
| y’ | = | 0  1  ty | · | y |
| 1  |   | 0  0  1  |   | 1 |
✓ This translation operation can be written in the abbreviated form
P’ = T(tx, ty) · P
with T(tx, ty) as the 3 × 3 translation matrix.

Two-Dimensional Rotation Matrix


✓ Two-dimensional rotation transformation equations about the coordinate origin can be
expressed in the matrix form
P’ = R(θ) · P
✓ The rotation transformation operator R(θ) is the 3 × 3 matrix with rotation parameter θ:
       | cos θ  −sin θ  0 |
R(θ) = | sin θ   cos θ  0 |
       | 0       0      1 |


Two-Dimensional Scaling Matrix


✓ A scaling transformation relative to the coordinate origin can now be expressed as the
matrix multiplication
P’ = S(sx, sy) · P
✓ The scaling operator S(sx, sy) is the 3 × 3 matrix with parameters sx and sy:
            | sx  0   0 |
S(sx, sy) = | 0   sy  0 |
            | 0   0   1 |

2.2.3 Inverse Transformations


❖ For translation, we obtain the inverse matrix by negating the translation distances. Thus,
if we have two-dimensional translation distances tx and ty, the inverse translation matrix is
       | 1  0  −tx |
T−1 =  | 0  1  −ty |
       | 0  0   1  |
❖ An inverse rotation is accomplished by replacing the rotation angle by its negative.
❖ A two-dimensional rotation through an angle θ about the coordinate origin has the
inverse transformation matrix
       | cos θ   sin θ  0 |
R−1 =  | −sin θ  cos θ  0 |
       | 0       0      1 |
❖ We form the inverse matrix for any scaling transformation by replacing the scaling
parameters with their reciprocals. For scaling parameters sx and sy, the inverse
transformation matrix is
       | 1/sx  0     0 |
S−1 =  | 0     1/sy  0 |
       | 0     0     1 |

2.2.4 Two-Dimensional Composite Transformations


✓ Forming products of transformation matrices is often referred to as a concatenation, or
composition, of matrices. If we want to apply two transformations to point position P, the
transformed location would be calculated as
P’ = M2 · M1 · P = M · P
✓ The coordinate position is transformed using the composite matrix M, rather than
applying the individual transformations M1 and then M2.

Composite Two-Dimensional Translations


✓ If two successive translation vectors (t1x, t1y) and (t2x, t2y) are applied to a
two-dimensional coordinate position P, the final transformed location P’ is calculated as
P’ = T(t2x, t2y) · {T(t1x, t1y) · P} = {T(t2x, t2y) · T(t1x, t1y)} · P
where P and P’ are represented as three-element, homogeneous-coordinate
column vectors.
✓ Also, the composite transformation matrix for this sequence of translations is
T(t2x, t2y) · T(t1x, t1y) = T(t1x + t2x, t1y + t2y)
Composite Two-Dimensional Rotations


✓ Two successive rotations applied to a point P produce the transformed position
P’ = R(θ2) · {R(θ1) · P} = {R(θ2) · R(θ1)} · P
✓ By multiplying the two rotation matrices, we can verify that two successive rotations are
additive:
R(θ2) · R(θ1) = R(θ1 + θ2)


✓ So that the final rotated coordinates of a point can be calculated with the composite
rotation matrix as
P’ = R(θ1 + θ2) · P

Composite Two-Dimensional Scalings


✓ Concatenating transformation matrices for two successive scaling operations in two
dimensions produces the following composite scaling matrix:
S(s2x, s2y) · S(s1x, s1y) = S(s1x · s2x, s1y · s2y)
General Two-Dimensional Pivot-Point Rotation

✓ We can generate a two-dimensional rotation about any other pivot point (xr , yr ) by
performing the following sequence of translate-rotate-translate operations:
1. Translate the object so that the pivot-point position is moved to the coordinate origin.
2. Rotate the object about the coordinate origin.
3. Translate the object so that the pivot point is returned to its original position.
✓ The composite transformation matrix for this sequence is obtained with the concatenation
T(xr, yr) · R(θ) · T(−xr, −yr) = R(xr, yr, θ)
which can be expressed in the form
P’ = R(xr, yr, θ) · P
where T(−xr, −yr) = T−1(xr, yr).

General Two-Dimensional Fixed-Point Scaling

✓ To produce a two-dimensional scaling with respect to a selected fixed position (xf, yf)
when we have a function that can scale relative to the coordinate origin only, we use the
following sequence:
1. Translate the object so that the fixed point coincides with the coordinate origin.
2. Scale the object with respect to the coordinate origin.
3. Use the inverse of the translation in step (1) to return the object to its original position.
✓ Concatenating the matrices for these three operations produces the required scaling
matrix:
T(xf, yf) · S(sx, sy) · T(−xf, −yf) = S(xf, yf, sx, sy)

General Two-Dimensional Scaling Directions


✓ Parameters sx and sy scale objects along the x and y directions.
✓ We can scale an object in other directions by rotating the object to align the desired
scaling directions with the coordinate axes before applying the scaling transformation.
✓ Suppose we want to apply scaling factors with values specified by parameters s1 and s2
in the directions shown in Figure

✓ The composite matrix resulting from the product of these three transformations is
R−1(θ) · S(s1, s2) · R(θ)
Matrix Concatenation Properties


Property 1:
✓ Multiplication of matrices is associative.
✓ For any three matrices M1, M2, and M3, the matrix product M3 · M2 · M1 can be
performed by first multiplying M3 and M2 or by first multiplying M2 and M1:
M3 · M2 · M1 = (M3 · M2) · M1 = M3 · (M2 · M1)
✓ We can construct a composite matrix either by multiplying from left to right
(premultiplying) or by multiplying from right to left (postmultiplying)

Property 2:
✓ Transformation products, on the other hand, may not be commutative. The matrix
product M2 · M1 is not equal to M1 · M2, in general.


✓ This means that if we want to translate and rotate an object, we must be careful about the
order in which the composite matrix is evaluated

✓ Reversing the order in which a sequence of transformations is performed may affect the
transformed position of an object. In (a), an object is first translated in the x direction,
then rotated counterclockwise through an angle of 45◦. In (b), the object is first rotated
45◦ counterclockwise, then translated in the x direction.

General Two-Dimensional Composite Transformations and Computational Efficiency


✓ A two-dimensional transformation, representing any combination of translations,
rotations, and scalings, can be expressed as
     | rsxx  rsxy  trsx |
P’ = | rsyx  rsyy  trsy | · P
     | 0     0     1    |
✓ The four elements rsjk are the multiplicative rotation-scaling terms in the transformation,
which involve only rotation angles and scaling factors. If an object is to be scaled and
rotated about its centroid coordinates (xc, yc) and then translated, the elements of the
composite transformation matrix are determined by θ, sx, sy, the centroid coordinates,
and the translation distances.
✓ Although the above matrix requires nine multiplications and six additions, the explicit
calculations for the transformed coordinates are
x’ = x · rsxx + y · rsxy + trsx ,  y’ = x · rsyx + y · rsyy + trsy

✓ We need actually perform only four multiplications and four additions to transform
coordinate positions.
✓ Because rotation calculations require trigonometric evaluations and several
multiplications for each transformed point, computational efficiency can become an
important consideration in rotation transformations
✓ If we are rotating in small angular steps about the origin, for instance, we can set cos θ to
1.0 and reduce transformation calculations at each step to two multiplications and two
additions for each set of coordinates to be rotated.
✓ These rotation calculations are
x’= x − y sin θ, y’ = x sin θ + y
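
A small sketch of this incremental rotation step (function and parameter names are
illustrative):

/* One small-angle rotation step about the origin, using cos(theta) = 1.
   Accumulated error grows, so the exact rotation should be re-applied
   periodically to keep the object from drifting. */
void rotateStep (float *x, float *y, float sinTheta)
{
   float xOld = *x;
   *x = xOld - (*y) * sinTheta;    /* x' = x - y sin(theta)  */
   *y = (*y) + xOld * sinTheta;    /* y' = x sin(theta) + y  */
}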

Two-Dimensional Rigid-Body Transformation


➔ If a transformation matrix includes only translation and rotation parameters, it is a rigid-
body transformation matrix.
➔ The general form for a two-dimensional rigid-body transformation matrix is
| rxx  rxy  trx |
| ryx  ryy  try |
| 0    0    1   |
where the four elements rjk are the multiplicative rotation terms, and the elements trx
and try are the translational terms.
➔ A rigid-body change in coordinate position is also sometimes referred to as a rigid-
motion transformation.
➔ In addition, the above matrix has the property that its upper-left 2 × 2 submatrix is an
orthogonal matrix.
➔ If we consider each row (or each column) of the submatrix as a vector, then the two row
vectors (rxx, rxy) and (ryx, ryy) (or the two column vectors) form an orthogonal set of
unit vectors.
➔ Such a set of vectors is also referred to as an orthonormal vector set. Each vector has unit
length,
rxx² + rxy² = ryx² + ryy² = 1
and the vectors are perpendicular (their dot product is 0):
rxx ryx + rxy ryy = 0


➔ Therefore, if these unit vectors are transformed by the rotation submatrix, then the vector
(rxx, rxy) is converted to a unit vector along the x axis and the vector (ryx, ryy) is
transformed into a unit vector along the y axis of the coordinate system

➔ For example, the following rigid-body transformation first rotates an object through an
angle θ about a pivot point (xr, yr) and then translates the object:
T(tx, ty) · R(xr, yr, θ)
➔ Here, orthogonal unit vectors in the upper-left 2×2 submatrix are (cos θ, −sin θ) and (sin
θ, cos θ).

Constructing Two-Dimensional Rotation Matrices


✓ The orthogonal property of rotation matrices is useful for constructing the matrix when
we know the final orientation of an object, rather than the amount of angular rotation
necessary to put the object into that position.
✓ We might want to rotate an object to align its axis of symmetry with the viewing
(camera) direction, or we might want to rotate one object so that it is above another
object.
✓ Figure shows an object that is to be aligned with the unit direction vectors u’ and v’


The rotation matrix for revolving an object from position (a) to position (b) can be constructed
with the values of the unit orientation vectors u’ and v’ relative to the original orientation.

2.2.5 Other Two-Dimensional Transformations


Two such transformations
1. Reflection and
2. Shear.

Reflection
✓ A transformation that produces a mirror image of an object is called a reflection.
✓ For a two-dimensional reflection, this image is generated relative to an axis of reflection
by rotating the object 180◦ about the reflection axis.
✓ Reflection about the line y = 0 (the x axis) is accomplished with the transformation
matrix
| 1  0   0 |
| 0  −1  0 |
| 0  0   1 |
✓ This transformation retains x values, but “flips” the y values of coordinate positions.
✓ The resulting orientation of an object after it has been reflected about the x axis is shown
in Figure


✓ A reflection about the line x = 0 (the y axis) flips x coordinates while keeping y
coordinates the same. The matrix for this transformation is
| −1  0  0 |
| 0   1  0 |
| 0   0  1 |
✓ Figure below illustrates the change in position of an object that has been reflected about
the line x = 0.

✓ We flip both the x and y coordinates of a point by reflecting relative to an axis that is
perpendicular to the xy plane and that passes through the coordinate origin. The matrix
representation for this reflection is
| −1  0   0 |
| 0   −1  0 |
| 0   0   1 |
✓ An example of reflection about the origin is shown in Figure


✓ If we choose the reflection axis as the diagonal line y = x (Figure below), the reflection
matrix is
| 0  1  0 |
| 1  0  0 |
| 0  0  1 |
✓ To obtain a transformation matrix for reflection about the diagonal y = −x, we could
concatenate matrices for the transformation sequence:
(1) clockwise rotation by 45◦,
(2) reflection about the y axis, and
(3) counterclockwise rotation by 45◦.
The resulting transformation matrix is
| 0   −1  0 |
| −1  0   0 |
| 0   0   1 |

Shear
✓ A transformation that distorts the shape of an object such that the transformed shape
appears as if the object were composed of internal layers that had been caused to slide
over each other is called a shear.
✓ Two common shearing transformations are those that shift coordinate x values and those
that shift y values. An x-direction shear relative to the x axis is produced with the
transformation matrix
| 1  shx  0 |
| 0  1    0 |
| 0  0    1 |
which transforms coordinate positions as
x’ = x + shx · y ,  y’ = y
✓ Any real number can be assigned to the shear parameter shx Setting parameter shx to the
value 2, for example, changes the square into a parallelogram is shown below. Negative
values for shx shift coordinate positions to the left.

A unit square (a) is converted to a parallelogram (b) using the x -direction shear with shx = 2.

✓ We can generate x-direction shears relative to other reference lines with
| 1  shx  −shx · yref |
| 0  1    0           |
| 0  0    1           |
Now, coordinate positions are transformed as
x’ = x + shx (y − yref) ,  y’ = y

✓ A y-direction shear relative to the line x = xref is generated with the transformation
matrix
| 1    0  0           |
| shy  1  −shy · xref |
| 0    0  1           |
which generates the transformed coordinate values
x’ = x ,  y’ = y + shy (x − xref)
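
The shear equations translate directly into code. This sketch applies an x-direction shear
relative to the line y = yref to an array of points, reusing the wcPt2D class from the
earlier examples:

void shearPolygonX (wcPt2D * verts, GLint nVerts, GLfloat shx, GLfloat yref)
{
   GLint k;
   for (k = 0; k < nVerts; k++)
      verts [k].x = verts [k].x + shx * (verts [k].y - yref);
   /* y coordinates are unchanged by an x-direction shear */
}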

2.2.6 Raster Methods for Geometric Transformations


✓ Raster systems store picture information as color patterns in the frame buffer.
✓ Therefore, some simple object transformations can be carried out rapidly by manipulating
an array of pixel values
✓ Few arithmetic operations are needed, so the pixel transformations are particularly
efficient.
✓ Functions that manipulate rectangular pixel arrays are called raster operations and
moving a block of pixel values from one position to another is termed a block transfer, a
bitblt, or a pixblt.
✓ Figure below illustrates a two-dimensional translation implemented as a block transfer of
a refresh-buffer area

Translating an object from screen position (a) to the destination position shown in (b) by moving
a rectangular block of pixel values. Coordinate positions Pmin and Pmax specify the limits of the
rectangular block to be moved, and P0 is the destination reference position.


✓ Rotations in 90-degree increments are accomplished easily by rearranging the elements
of a pixel array.
✓ We can rotate a two-dimensional object or pattern 90◦ counterclockwise by reversing the
pixel values in each row of the array, then interchanging rows and columns.
✓ A 180◦ rotation is obtained by reversing the order of the elements in each row of the
array, then reversing the order of the rows.
✓ Figure below demonstrates the array manipulations that can be used to rotate a pixel
block by 90◦ and by 180◦.

✓ For array rotations that are not multiples of 90◦, we need to do some extra processing.
✓ The general procedure is illustrated in Figure below.

✓ Each destination pixel area is mapped onto the rotated array and the amount of overlap
with the rotated pixel areas is calculated.
✓ A color for a destination pixel can then be computed by averaging the colors of the
overlapped source pixels, weighted by their percentage of area overlap.
✓ Pixel areas in the original block are scaled, using specified values for sx and sy, and then
mapped onto a set of destination pixels.
✓ The color of each destination pixel is then assigned according to its area of overlap with
the scaled pixel areas


2.2.7 OpenGL Raster Transformations


❖ A translation of a rectangular array of pixel-color values from one buffer area to another
can be accomplished in OpenGL as the following copy operation:
glCopyPixels (xmin, ymin, width, height, GL_COLOR);
❖ The first four parameters in this function give the location and dimensions of the pixel
block, and the OpenGL symbolic constant GL_COLOR specifies that color values are to
be copied.

❖ A block of RGB color values in a buffer can be saved in an array with the function
glReadPixels (xmin, ymin, width, height, GL_RGB, GL_UNSIGNED_BYTE, colorArray);
❖ If color-table indices are stored at the pixel positions, we replace the constant GL_RGB
with GL_COLOR_INDEX.

❖ To rotate the color values, we rearrange the rows and columns of the color array, as
described in the previous section. Then we put the rotated array back in the buffer with
glDrawPixels (width, height, GL_RGB, GL_UNSIGNED_BYTE, colorArray);

❖ A two-dimensional scaling transformation can be performed as a raster operation in
OpenGL by specifying scaling factors and then invoking either glCopyPixels or
glDrawPixels.
❖ For the raster operations, we set the scaling factors with
glPixelZoom (sx, sy);


❖ We can also combine raster transformations with logical operations to produce various
effects, such as with the exclusive or operator.
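
For example (a sketch, with xmin, ymin, width, and height as in the glCopyPixels call
above), enabling the exclusive-or operator makes a copy reversible, since copying the
same block twice restores the original pixels:

glEnable (GL_COLOR_LOGIC_OP);
glLogicOp (GL_XOR);
glCopyPixels (xmin, ymin, width, height, GL_COLOR);
glDisable (GL_COLOR_LOGIC_OP);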

2.2.8 OpenGL Functions for Two-Dimensional Geometric Transformations


✓ To perform a translation, we invoke the translation routine and set the components for the
three-dimensional translation vector.
✓ In the rotation function, we specify the angle and the orientation for a rotation axis that
intersects the coordinate origin.
✓ In addition, a scaling function is used to set the three coordinate scaling factors relative to
the coordinate origin. In each case, the transformation routine sets up a 4 × 4 matrix that
is applied to the coordinates of objects that are referenced after the transformation call

Basic OpenGL Geometric Transformations


➔ A 4× 4 translation matrix is constructed with the following routine:
glTranslate* (tx, ty, tz);
✓ Translation parameters tx, ty, and tz can be assigned any real-number
values, and the single suffix code to be affixed to this function is either f
(float) or d (double).
✓ For two-dimensional applications, we set tz = 0.0; and a two-dimensional
position is represented as a four-element column matrix with the z
component equal to 0.0.
✓ example: glTranslatef (25.0, -10.0, 0.0);
➔ Similarly, a 4 × 4 rotation matrix is generated with
glRotate* (theta, vx, vy, vz);
✓ where the vector v = (vx, vy, vz) can have any floating-point values for its
components.
✓ This vector defines the orientation for a rotation axis that passes through
the coordinate origin.
✓ If v is not specified as a unit vector, then it is normalized automatically
before the elements of the rotation matrix are computed.


✓ The suffix code can be either f or d, and parameter theta is to be assigned a rotation
angle in degrees.
✓ For example, the statement: glRotatef (90.0, 0.0, 0.0, 1.0);
➔ We obtain a 4 × 4 scaling matrix with respect to the coordinate origin with the following
routine:
glScale* (sx, sy, sz);
✓ The suffix code is again either f or d, and the scaling parameters can be assigned
any real-number values.
✓ Scaling in a two-dimensional system involves changes in the x and y dimensions,
so a typical two-dimensional scaling operation has a z scaling factor of 1.0
✓ Example: glScalef (2.0, -3.0, 1.0);
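
These three routines can be combined to build the composite transformations of Section
2.2.4. Below is a sketch of a pivot-point rotation, where xr, yr, and theta are assumed
application variables; because OpenGL postmultiplies each new matrix onto the current
matrix, the calls appear in the reverse of the translate-rotate-translate order:

glMatrixMode (GL_MODELVIEW);
glLoadIdentity ( );
glTranslatef (xr, yr, 0.0);          /* step 3: move pivot back       */
glRotatef (theta, 0.0, 0.0, 1.0);    /* step 2: rotate about origin   */
glTranslatef (-xr, -yr, 0.0);        /* step 1: move pivot to origin  */
/* ... specify the object's vertices here ... */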

OpenGL Matrix Operations


✓ The glMatrixMode routine is used to set the matrix mode; for example, the projection
mode designates the matrix that is to be used for the projection transformation.
✓ We specify the modelview mode with the statement
glMatrixMode (GL_MODELVIEW);
• which designates the 4×4 modelview matrix as the current matrix
• Two other modes that we can set with the glMatrixMode function are the texture
mode and the color mode.
• The texture matrix is used for mapping texture patterns to surfaces, and the color
matrix is used to convert from one color model to another.
• The default argument for the glMatrixMode function is GL_MODELVIEW.
✓ With the following function, we assign the identity matrix to the current matrix:
glLoadIdentity ( );
✓ Alternatively, we can assign other values to the elements of the current matrix using
glLoadMatrix* (elements16);
✓ A single-subscripted, 16-element array of floating-point values is specified with
parameter elements16, and a suffix code of either f or d is used to designate the data type
✓ The elements in this array must be specified in column-major order
✓ To illustrate this ordering, we initialize the modelview matrix with the following code:


glMatrixMode (GL_MODELVIEW);
GLfloat elems [16];
GLint k;
for (k = 0; k < 16; k++)
elems [k] = float (k);
glLoadMatrixf (elems);
Which produces the matrix
| 0  4   8  12 |
| 1  5   9  13 |
| 2  6  10  14 |
| 3  7  11  15 |
✓ We can also concatenate a specified matrix with the current matrix as follows:
glMultMatrix* (otherElements16);
✓ Again, the suffix code is either f or d, and parameter otherElements16 is a 16-element,
single-subscripted array that lists the elements of some other matrix in column-major
order.
✓ Thus, assuming that the current matrix is the modelview matrix, which we designate as
M, and the specified matrix is M’, then the updated modelview matrix is computed as
M = M · M’
✓ The glMultMatrix function can also be used to set up any transformation sequence with
individually defined matrices.
✓ For example,
glMatrixMode (GL_MODELVIEW);
glLoadIdentity ( ); // Set current matrix to the identity.
glMultMatrixf (elemsM2); // Postmultiply identity with matrix M2.
glMultMatrixf (elemsM1); // Postmultiply M2 with matrix M1.
produces the following current modelview matrix:
M = M2 · M1


2.3 Two Dimensional Viewing


2.3.1 2D viewing pipeline
2.3.2 OpenGL 2D viewing functions.

2.3.1 The Two-Dimensional Viewing Pipeline


➢ A section of a two-dimensional scene that is selected for display is called a clipping
Window.
➢ Sometimes the clipping window is referred to as the world window or the viewing window.
➢ Graphics packages allow us also to control the placement within the display window
using another “window” called the viewport.
➢ The clipping window selects what we want to see; the viewport indicates where it is to be
viewed on the output device.
➢ By changing the position of a viewport, we can view objects at different positions on the
display area of an output device
➢ Usually, clipping windows and viewports are rectangles in standard position, with the
rectangle edges parallel to the coordinate axes.
➢ We first consider only rectangular viewports and clipping windows, as illustrated in
Figure


Viewing Pipeline
➢ The mapping of a two-dimensional, world-coordinate scene description to device
coordinates is called a two-dimensional viewing transformation.
➢ This transformation is simply referred to as the window-to-viewport transformation or the
windowing transformation
➢ We can describe the steps for two-dimensional viewing as indicated in Figure

➢ Once a world-coordinate scene has been constructed, we could set up a separate
two-dimensional viewing-coordinate reference frame for specifying the clipping window.
➢ To make the viewing process independent of the requirements of any output device,
graphics systems convert object descriptions to normalized coordinates and apply the
clipping routines.
➢ Some systems use normalized coordinates in the range from 0 to 1, and others use a
normalized range from −1 to 1.
➢ At the final step of the viewing transformation, the contents of the viewport are
transferred to positions within the display window.
➢ Clipping is usually performed in normalized coordinates.
➢ This allows us to reduce computations by first concatenating the various transformation
matrices

2.3.2 OpenGL Two-Dimensional Viewing Functions


• The GLU library provides a function for specifying a two-dimensional clipping window,
and we have GLUT library functions for handling display windows.

OpenGL Projection Mode


✓ Before we select a clipping window and a viewport in OpenGL, we need to establish the
appropriate mode for constructing the matrix to transform from world coordinates to
screen coordinates.


✓ We must set the parameters for the clipping window as part of the projection
transformation.
✓ Function:
glMatrixMode (GL_PROJECTION);
✓ We can also set the initialization as
glLoadIdentity ( );
This ensures that each time we enter the projection mode, the matrix will be reset
to the identity matrix so that the new viewing parameters are not combined with the
previous ones

GLU Clipping-Window Function


✓ To define a two-dimensional clipping window, we can use the GLU function:
gluOrtho2D (xwmin, xwmax, ywmin, ywmax);
✓ This function specifies an orthogonal projection for mapping the scene to the screen the
orthogonal projection has no effect on our two-dimensional scene other than to convert
object positions to normalized coordinates.
✓ Normalized coordinates in the range from −1 to 1 are used in the OpenGL clipping
routines.
✓ Objects outside the normalized square (and outside the clipping window) are eliminated
from the scene to be displayed.
✓ If we do not specify a clipping window in an application program, the default coordinates
are (xwmin, ywmin) = (−1.0, −1.0) and (xwmax, ywmax) = (1.0, 1.0).
✓ Thus the default clipping window is the normalized square centered on the coordinate
origin with a side length of 2.0.
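
A typical projection-mode setup for a world-coordinate clipping window from (0, 0) to
(200.0, 150.0), for example, is:

glMatrixMode (GL_PROJECTION);
glLoadIdentity ( );
gluOrtho2D (0.0, 200.0, 0.0, 150.0);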

OpenGL Viewport Function


✓ We specify the viewport parameters with the OpenGL function
glViewport (xvmin, yvmin, vpWidth, vpHeight);
Where,
➔ xvmin and yvmin specify the position of the lowerleft corner of the viewport relative
to the lower-left corner of the display window,


➔ vpWidth and vpHeight are pixel width and height of the viewport
✓ Coordinates for the upper-right corner of the viewport are calculated for this
transformation matrix in terms of the viewport width and height:
xvmax = xvmin + vpWidth ,  yvmax = yvmin + vpHeight
✓ Multiple viewports can be created in OpenGL for a variety of applications.


✓ We can obtain the parameters for the currently active viewport using the query function
glGetIntegerv (GL_VIEWPORT, vpArray);
where,
➔ vpArray is a single-subscript, four-element array.

Creating a GLUT Display Window


✓ The GLUT library interfaces with any window-management system, we use the GLUT
routines for creating and manipulating display windows so that our example programs
will be independent of any specific machine.
✓ We first need to initialize GLUT with the following function:
glutInit (&argc, argv);
✓ We have three functions in GLUT for defining a display window and choosing its
dimensions and position:
1. glutInitWindowPosition (xTopLeft, yTopLeft);
➔ gives the integer, screen-coordinate position for the top-left corner of the display
window, relative to the top-left corner of the screen

2. glutInitWindowSize (dwWidth, dwHeight);


➔ we choose a width and height for the display window in positive integer pixel
dimensions.
➔ If we do not use these two functions to specify a size and position, the default size is
300 by 300 and the default position is (−1, −1), which leaves the positioning of the
display window to the window-management system


3. glutCreateWindow ("Title of Display Window");


➔ creates the display window, with the specified size and position, and assigns a title,
although the use of the title also depends on the windowing system
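
For example, the three functions are typically used together after glutInit has been
called (the values here are illustrative):

glutInitWindowPosition (50, 100);    /* top-left screen position  */
glutInitWindowSize (400, 300);       /* width x height, in pixels */
glutCreateWindow ("An Example OpenGL Program");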

Setting the GLUT Display-Window Mode and Color


✓ Various display-window parameters are selected with the GLUT function
1. glutInitDisplayMode (mode);
➔ We use this function to choose a color mode (RGB or index) and different buffer
combinations, and the selected parameters are combined with the logical or
operation.

2. glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);


➔ The color mode specification GLUT_RGB is equivalent to GLUT_RGBA.

3. glClearColor (red, green, blue, alpha);


➔ A background color for the display window is chosen in RGB mode with the OpenGL
routine

4. glClearIndex (index);
➔ This function sets the display window color using color-index mode,
➔ Where parameter index is assigned an integer value corresponding to a position
within the color table.

GLUT Display-Window Identifier


✓ Multiple display windows can be created for an application, and each is assigned a
positive-integer display-window identifier, starting with the value 1 for the first window
that is created.
✓ Function:
windowID = glutCreateWindow ("A Display Window");


Deleting a GLUT Display Window


✓ If we know the display window’s identifier, we can eliminate it with the statement
glutDestroyWindow (windowID);

Current GLUT Display Window


✓ When we specify any display-window operation, it is applied to the current display
window, which is either the last display window that we created or the one we select
with the following command:
glutSetWindow (windowID);
✓ We can query the system to determine which window is the current display window:
currentWindowID = glutGetWindow ( );
➔ A value of 0 is returned by this function if there are no display windows or if the
current display window was destroyed

Relocating and Resizing a GLUT Display Window


✓ We can reset the screen location for the current display window with the function
glutPositionWindow (xNewTopLeft, yNewTopLeft);
✓ Similarly, the following function resets the size of the current display window:
glutReshapeWindow (dwNewWidth, dwNewHeight);
✓ With the following command, we can expand the current display window to fill the
screen:
glutFullScreen ( );
✓ Whenever the size of a display window is changed, its aspect ratio may change and
objects may be distorted from their original shapes. We can adjust for a change in
display-window dimensions using the statement
glutReshapeFunc (winReshapeFcn);

Managing Multiple GLUT Display Windows


✓ The GLUT library also has a number of routines for manipulating a display window in
various ways.


✓ We use the following routine to convert the current display window to an icon in the form
of a small picture or symbol representing the window:
glutIconifyWindow ( );
✓ The label on this icon will be the same name that we assigned to the window, but we can
change this with the following command:
glutSetIconTitle ("Icon Name");
✓ We also can change the name of the display window with a similar command:
glutSetWindowTitle ("New Window Name");
✓ We can choose any display window to be in front of all other windows by first
designating it as the current window, and then issuing the “pop-window” command:
glutSetWindow (windowID);
glutPopWindow ( );
✓ In a similar way, we can “push” the current display window to the back so that it is
behind all other display windows. This sequence of operations is
glutSetWindow (windowID);
glutPushWindow ( );
✓ We can also take the current window off the screen with
glutHideWindow ( );
✓ In addition, we can return a “hidden” display window, or one that has been converted to
an icon, by designating it as the current display window and then invoking the function
glutShowWindow ( );

GLUT Subwindows
✓ Within a selected display window, we can set up any number of second-level display
windows, which are called subwindows.
✓ We create a subwindow with the following function:
glutCreateSubWindow (windowID, xBottomLeft, yBottomLeft, width, height);
✓ Parameter windowID identifies the display window in which we want to set up the
subwindow.


✓ Subwindows are assigned a positive integer identifier in the same way that first-level
display windows are numbered, and we can place a subwindow inside another
subwindow.
✓ Each subwindow can be assigned an individual display mode and other parameters. We
can even reshape, reposition, push, pop, hide, and show subwindows

Selecting a Display-Window Screen-Cursor Shape


✓ We can use the following GLUT routine to request a shape for the screen cursor that is to
be used with the current window:
glutSetCursor (shape);
where, shape can be
➔ GLUT_CURSOR_UP_DOWN : an up-down arrow.
➔ GLUT_CURSOR_CYCLE: A rotating arrow is chosen
➔ GLUT_CURSOR_WAIT: a wristwatch shape.
➔ GLUT_CURSOR_DESTROY: a skull and crossbones

Viewing Graphics Objects in a GLUT Display Window


✓ After we have created a display window and selected its position, size, color, and other
characteristics, we indicate what is to be shown in that window
✓ Then we invoke the following function to assign something to that window:
glutDisplayFunc (pictureDescrip);
✓ This routine, called pictureDescrip for this example, is referred to as a callback function
because it is the routine that is to be executed whenever GLUT determines that the
display-window contents should be renewed.
✓ We may need to call glutDisplayFunc after the glutPopWindow command if the display
window has been damaged during the process of redisplaying the windows.
✓ In this case, the following function is used to indicate that the contents of the current
display window should be renewed:
glutPostRedisplay ( );


Executing the Application Program


✓ When the program setup is complete and the display windows have been created and
initialized, we need to issue the final GLUT command that signals execution of the
program:
glutMainLoop ( );
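
A minimal complete sketch tying these pieces together: one display window, a clipping
window, and a display callback that draws a single red line segment (all values are
illustrative):

#include <GL/glut.h>

void displayFcn (void)
{
   glClear (GL_COLOR_BUFFER_BIT);
   glColor3f (1.0, 0.0, 0.0);
   glBegin (GL_LINES);
      glVertex2i (20, 20);
      glVertex2i (180, 130);
   glEnd ( );
   glFlush ( );
}

int main (int argc, char ** argv)
{
   glutInit (&argc, argv);
   glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
   glutInitWindowPosition (50, 100);
   glutInitWindowSize (400, 300);
   glutCreateWindow ("Minimal GLUT Example");

   glClearColor (1.0, 1.0, 1.0, 0.0);     /* white display window */
   glMatrixMode (GL_PROJECTION);
   glLoadIdentity ( );
   gluOrtho2D (0.0, 200.0, 0.0, 150.0);   /* clipping window      */

   glutDisplayFunc (displayFcn);
   glutMainLoop ( );
   return 0;
}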

Other GLUT Functions


✓ Sometimes it is convenient to designate a function that is to be executed when there are
no other events for the system to process. We can do that with
glutIdleFunc (function);
✓ Finally, we can use the following function to query the system about some of the current
state parameters:
glutGet (stateParam);
✓ This function returns an integer value corresponding to the symbolic constant we select
for its argument.
✓ For example, for the stateParam we can have the values
➔ GLUT_WINDOW_X: obtains the x-coordinate position for the top-left corner of the
current display window
➔ GLUT_WINDOW_WIDTH or GLUT_SCREEN_WIDTH : retrieve the current
display-window width or the screen width with.
