
Computer Graphics ( EG3101CT )

Unit 1. Introduction [3 Hrs.]


1.1. History of Computer Graphics
1.2. Application of Computer Graphics
1.3. CAD and CAM
Introduction to Computer Graphics

Graphics are defined as any sketch, drawing, or special network that
pictorially represents some meaningful information. Computer Graphics is
used where a set of images needs to be manipulated, or where an image is
created in the form of pixels and drawn on the computer. Computer
Graphics is used in digital photography, film, entertainment,
electronic gadgets, and many other core technologies. It is
a vast subject and area in the field of computer science. Computer
Graphics is used in UI design, rendering, geometric modeling,
animation, and much more. Computer Graphics is commonly abbreviated
as CG. There are several tools used for the implementation of
Computer Graphics: the most basic is the <graphics.h> header file in Turbo-C;
Unity is used for advanced work, and OpenGL can also be used for its
implementation.
The term ‘Computer Graphics’ was coined by Verne Hudson and
William Fetter of Boeing, who were pioneers in the field.
Computer Graphics refers to several things:
 The manipulation and representation of images or data in a
graphical manner.
 The various technologies required for their creation and manipulation.
 Digital synthesis and its manipulation.

Types of Computer Graphics


 Raster Graphics: In raster graphics, pixels are used to draw an image.
A raster image is also known as a bitmap image: a grid in which the
image is divided into small pixels. Basically, a bitmap is a large
number of pixels taken together.
 Vector Graphics: In vector graphics, mathematical formulae are used
to draw different types of shapes, lines, objects, and so on.
1.1 History of Computer Graphics:-

We need to take a brief look at the historical development of computer
graphics to place today’s systems in context. Crude plotting on hardcopy
devices such as teletypes and line printers dates from the early days of
computing.
 1951 - The Whirlwind computer, developed at MIT, had computer-driven
CRT displays for output.
 Mid 1950s - The SAGE air defense system used command-and-control
CRTs on which operators selected targets with light pens. Light pens
sense light emitted by objects on the screen.
 1962 - Sketchpad, by Ivan Sutherland.
 1964 - CAD and CAM demonstrate graphical interaction and graphical
activities.
 1968 - Commercial companies build flight simulators, at a high cost
in graphics hardware.
 1971 - Gouraud shading, a rendering method that is faster than
Phong shading.
 1974-1977 - Phong shading.
 1982 - Ray tracing (an illumination-based rendering method).
 1993 - OpenGL (Open Graphics Library).
 1995 - Microsoft’s game-playing API (DirectX).

1.2. Application of Computer Graphics

 Computer-aided design for engineering and architectural systems -
used in the design of automobiles and of electrical, electro-mechanical,
mechanical, and electronic devices; for example, gears and bolts.
 Computer Art - e.g., MS Paint.
 Presentation Graphics - used to summarize financial, statistical,
scientific, or economic data; for example, bar charts and line charts.
 Entertainment - used in motion pictures, music videos, and television
gaming.
 Education and training - used to understand the operation of
complex systems; also used in specialized training systems for
ship captains, pilots, and so on.
 Visualization - used to study trends and patterns; for example,
analyzing satellite photos of the Earth.
1.3. CAD and CAM

The development of CAD/CAM technology took many individuals decades of
work in the name of production automation. Innovators, inventors,
mathematicians, and machinists have used the technology to shape the
future and increase output. Prototypes, final goods, and
production runs of products are designed and produced using CAD/CAM
software.

1) What is CAD?

Computer Aided Design (CAD) is the use of computers for designing models
of physical products: computers are used to aid in creating the
design, modifying the design, and analyzing the design activities. Computer
Aided Design is also known as Computer Aided Drafting. The purpose of CAD
is to make 2D technical drawings and 3D models. Put simply, CAD
represents your part geometry to the computer. Computer Aided Design
(CAD) software is mostly used by engineers.
Examples of CAD software include AutoCAD, Autodesk Inventor, CATIA,
SolidWorks, etc.
Computer + Design Software = CAD

Features of CAD Software

 2D and 3D Modeling: CAD software allows designers to create both 2D and 3D models
of their designs.
 Visualization: CAD software allows designers to view and analyze their designs from
different angles and perspectives.
 Simulation: CAD software allows designers to simulate how their designs will perform
in the real world and make changes accordingly.
 Collaboration: CAD software allows multiple users to work on the same design
simultaneously and share their progress with each other.
2) What is CAM?

Computer Aided Manufacturing (CAM) is the use of computer software to control
machine tools in the manufacturing of components. CAM transforms engineering
designs into end products. CAM differs from conventional manufacturing in that
it implements automation in the manufacturing process. Computer Aided
Manufacturing is also known as Computer-Aided Modeling or Machining. The
purpose of CAM is to use 3D models to design machining processes. Put simply,
CAM converts the geometry to the machine tool. So, without Computer-Aided
Design (CAD), Computer-Aided Manufacturing (CAM) has no meaning. Computer
Aided Manufacturing (CAM) software is mostly used by trained machinists.
Examples of CAM software include WorkNC, Siemens NX, PowerMILL, SolidCAM, etc.
Manufacturing Tools + Computer = CAM
Features of CAM Software

 Toolpath Generation: CAM software generates toolpaths that machines can follow to
create the desired product.
 Machine Control: CAM software controls the machines used in the manufacturing
process, ensuring that they operate correctly and safely.
 Optimization: CAM software optimizes the manufacturing process, reducing waste and
improving efficiency.
 Integration: CAM software can be integrated with other software systems, such as CAD
software, to streamline the manufacturing process.

Difference between CAD and CAM

1. CAD refers to Computer Aided Design; CAM refers to Computer Aided
Manufacturing.
2. CAD is the use of computers to aid in creating, modifying, and analyzing
a design; CAM is the use of computer software to control machine tools in
manufacturing, transforming engineering designs into end products.
3. Computer Aided Design is also known as Computer Aided Drafting; Computer
Aided Manufacturing is also known as Computer Aided Modeling.
4. The purpose of CAD is to make 2D technical drawings and 3D models; the
purpose of CAM is to use 3D models to design machining processes.
5. CAD makes drafting much easier, more accurate, and faster, and makes
possible 3D models that could not be made without computers; CAM achieves
automation in the machining process.
6. Put simply, CAD represents your part geometry to the computer; CAM
converts the geometry to the machine tool.
7. The CAD process requires only a computer and CAD software for a
technician to create a design; the CAM process requires a computer, a CAM
software package, and a CAM machine for manufacturing.
8. CAD software is mostly used by engineers; CAM software is mostly used by
trained machinists.
9. Examples of CAD software include AutoCAD, Autodesk Inventor, CATIA, and
SolidWorks; examples of CAM software include WorkNC, Siemens NX, PowerMILL,
and SolidCAM.

Some of the Applications of CAD (Computer Aided Design):
 Solid Modelling
 Drafting Detailing
 Surface Modelling
 Creating Animations
 Assembly and more.

Some of the Applications of CAM (Computer Aided Manufacturing):
 Laser cutting
 Metal working
 3D Milling
 Metal Spinning
 Wood Turning
 Glass working and more.
Unit - 2
Graphics Hardware
2.1. Input Hardware
2.1.1. Keyboard, Mouse (mechanical & optical), Light pen, Touch panel (Optical,
Sonic, and Electrical), Digitizers (Electrical, Sonic, Resistive), Scanner, Joystick
2.2. Output Hardware
2.2.1. Monitors
2.2.2. Monochromatic CRT Monitors
2.2.3.Color CRT Monitors
2.2.4. Flat Panel Display Monitors
2.3. Hardcopy Devices
2.3.1. Plotters
2.3.2. Printers
2.4. Raster and Vector Display Architectures, Principles and Characteristics

In computer graphics, input hardware refers to devices used to enter
data or commands into a computer system for creating, manipulating,
and interacting with graphical content. Some common input hardware
devices used in computer graphics include:

i) KEYBOARD
The most commonly used input device is a keyboard.
Data is entered by pressing the keys, all of which are labeled. The
layout of the keyboard is like that of a traditional typewriter, although
some additional keys are provided for performing additional functions.

ii) Mouse (Mechanical and Optical)

A mouse is a pointing device used to position the pointer on the
screen. It is a small palm-size box with two or three buttons on
top. The mouse cannot be used to enter text; therefore, it is used
in conjunction with a keyboard.

Mechanical Mouse

A mechanical mouse is a computer mouse that contains a metal or
rubber ball on its underside. When the ball is rolled in any
direction, sensors inside the mouse detect this motion and move the
on-screen mouse pointer in the same direction.

Optical mouse

An optical mouse is a computer pointing device that uses a
light-emitting diode (LED), an optoelectronic sensor, and a digital
signal processor (DSP) to detect changes in reflected light from
image to image.

LIGHT PEN
 It is a pencil-shaped device used to determine the coordinates of a
point on the screen where it is activated, such as by pressing its
button.
 It works by sensing the sudden small change in brightness of a point
on the screen when the electron gun refreshes that spot.
 Light pens have the advantage of 'drawing' directly onto the screen,
but this can become uncomfortable, and they are not as accurate as
digitizing tablets.
TOUCH PANEL

 The touch panel allows the user to point at the screen directly with
a finger to move the cursor around the screen or to select icons.
When a user touches the surface, the system records the change in the
electrical current that flows through the display.

OPTICAL TOUCH PANEL


 It uses a series of infrared light-emitting diodes (LEDs) along one
vertical edge and along one horizontal edge of the panel.
 The opposite vertical and horizontal edges contain photodetectors,
forming a grid of invisible infrared light beams over the display
area.
 Touching the screen breaks one or two vertical and horizontal light
beams, thereby indicating the finger's position.
 The cursor is then moved to this position, or the icon at this
position is selected.
 This is a low-resolution panel which offers 10 to 50 positions in
each direction.

SONIC TOUCH PANEL


 Bursts of high-frequency sound waves traveling alternately
horizontally and vertically are generated at the edge of the panel.
 Touching the screen causes part of each wave to be reflected back
to its source.
 The screen position at the point of contact is then calculated using
the time elapsed between when the wave is emitted and when it
arrives back at the source.
 This is a high-resolution touch panel having about 500 positions in
each direction.

ELECTRICAL TOUCH PANEL


 It consists of two slightly separated transparent panels, one coated
with a thin layer of conducting material and the other with resistive
material.
 When the panel is touched with a finger, the two plates are forced to
touch at the point of contact, thereby creating a voltage drop
across the resistive plate which is then used to calculate the
coordinates of the touched position.
 The resolution of this touch panel is similar to that of the sonic
touch panel.

Digitizers

Digitizers are devices that convert analog information
(like physical position or pressure) into digital data a
computer can understand. In the context of computer
graphics, they are often used for capturing user input
related to drawing, writing, or manipulating objects on a
digital canvas. Here's a breakdown of three common types:
1. Electrical Digitizers:
 Electrical digitizers, also known as electromagnetic digitizers
or electromagnetic induction digitizers, use electromagnetic
fields to detect the position of a stylus or pen.
 These digitizers consist of a grid of wires or coils embedded
beneath the surface of a tablet or screen, and a stylus
containing a coil that generates an electromagnetic signal.
 When the stylus is brought near the digitizer surface, the coils
in the stylus induce currents in the grid of wires, allowing the
digitizer to determine the position of the stylus.
 Electrical digitizers offer high precision and pressure
sensitivity, making them suitable for professional graphics
applications such as digital art and design.
2. Sonic Digitizers:
 Sonic digitizers, also known as acoustic digitizers or ultrasonic
digitizers, use sound waves to determine the position of a
stylus or pen.
 These digitizers consist of transducers placed around the edges
of a tablet or screen, which emit ultrasonic signals.
 The stylus contains a microphone that detects the ultrasonic
signals emitted by the transducers. By measuring the time it
takes for the signals to reach the microphone, the digitizer can
calculate the position of the stylus.
 Sonic digitizers are less common than electrical digitizers but
offer advantages such as compatibility with non-metallic
styluses and resistance to electromagnetic interference.
3. Resistive Digitizers:
 Resistive digitizers, also known as resistive touchscreens,
consist of multiple layers of flexible material coated with
a conductive material such as indium tin oxide (ITO).
 When pressure is applied to the screen, the top and bottom
layers come into contact, creating a voltage drop at the
point of contact.
 The digitizer detects the voltage drop and calculates the
position of the touch based on the changes in resistance
across the screen.
 Resistive digitizers are commonly used in touchscreen
devices such as smartphones, GPS systems, and industrial
control panels due to their low cost and durability.
However, they typically offer lower precision and
sensitivity compared to other digitizer types.

Scanners

 Scanners convert any printed image of an object into electronic form
by shining light onto the image and sensing the intensity of the
light's reflection at each point.
 Color scanners use filters to separate the components of color into
the primary additive colors (red, green, blue) at each point.
 R, G, and B are the primary additive colors because they can be
combined to create any other color.
 Image scanners translate printed images into an electronic format
that can be stored in a computer's memory.
 Software is then used to manipulate the scanned electronic image.
 Images are enhanced or manipulated by graphics programs like
Adobe Photoshop.

Joystick

A joystick is a hand-held input device that controls the movement of
a digital object on a computer screen. It consists of a stick that
pivots on a base and reports its angle or direction to the device it
is controlling. Joysticks are often used in gaming to control the
movement of characters or vehicles within a game. They can also be
used for non-gaming purposes, such as controlling robotic arms,
operating machinery, or navigating three-dimensional (3D) modeling
software.

2.2 Output Hardware

 Graphics output hardware devices are those hardware components that
generate computer graphics and allow them to be shown on a display.

Monitors

 Monitors, commonly called Visual Display Units (VDU), are the main
output device of a computer.
 A monitor forms images from tiny dots, called pixels, that are
arranged in a rectangular form.
 The sharpness of the image depends upon the number of pixels.
 The monitor allows you to see the images produced by the computer.
 The quality of the graphics that you see depends on the size and
the resolution of the monitor.

Monochrome CRT Monitors

Monochrome CRT monitors are a type of CRT monitor that can only
display graphics in shades of a single color, such as black and
white. They were common from the 1960s through the 1980s, before
color monitors became widely available. Monochrome monitors have
only one color of phosphor, while color monitors use alternating
red, green, and blue phosphors. Monochrome monitors produce sharper
text and images than color CRT monitors, but they are susceptible to
screen burn and ghosting.

Monochrome monitors are also called "mono" and refer to display
screens that use one foreground and one background color; for
example, black on white, white on black, or green on black. The
first terminals connected to mainframes and minicomputers were
monochrome, and monochrome screens were widely used on early
personal computers.
Monochrome monitors are still used in some applications, such as
computerized cash register systems.
Monochrome CRT monitors, while not widely used today, were
the pioneers of displaying visuals on computers.
What are they?
 CRT stands for Cathode Ray Tube, the core technology used
in these monitors.
 Monochrome refers to the single color they display, unlike
color CRTs with red, green, and blue phosphors.
How they work:
 Inside the CRT, an electron beam is fired towards a phosphor-
coated screen.
 This phosphor glows when struck by the electrons, creating
the image.
 In monochrome monitors, the phosphor is just one color,
typically green, amber, or white.
Advantages:
Sharper Text: Due to the single, continuous phosphor layer,
monochrome monitors offered superior sharpness for text and
detailed graphics compared to early color CRTs.
Readability: The chosen phosphor colors (green, amber) were
often easier on the eyes for extended reading sessions.
Cost-effective: Simpler design with one phosphor layer made them
cheaper to produce.
Disadvantages:
Limited Color: The obvious drawback is the lack of color
representation, making them unsuitable for applications requiring
color graphics or images.
Bulkier Design: CRT technology itself inherently leads to bulky
monitors compared to modern flat-panel displays.
Applications:
Monochrome CRT monitors dominated the early computing era
(1960s-1980s) for personal computers and terminals.
They were prevalent in word processing, text-based games, early
graphical user interfaces (GUIs), and applications where color
wasn't crucial.
Color CRT Monitors (Displays)
The color CRT display is a device used for presenting information to a user. The CRT
produces images by projecting an electron beam onto a phosphor-coated screen, using
phosphors of different colors to produce colored light.
Color television, of which the color CRT was long the core technology, was
demonstrated experimentally as early as the late 1920s, but commercially successful
consumer sets did not appear until decades later. Although CRTs have since given way
to newer digital display technologies, the same basic tube was used for both
television sets and computer monitors for many years.
CRT stands for "cathode ray tube". The CRT is a display device that uses a focused
beam of electrons to produce images, and the same images can be shown on various
other output devices, such as video projectors or computer monitors. The display is
built up of colored phosphor dots; each dot group forms a "picture element", or
"pel" for short (now usually called a pixel).

Working:

A color CRT emits three electron beams, one per primary color, which leave their
cathodes and are swept across the screen by the deflection system. When a beam
strikes a phosphor dot, the dot glows in its own color (red, green, or blue); by
controlling the intensity of each beam at every dot position, the tube mixes the
three primaries to produce a wide range of colors.
In the analog video signals that drive such displays, the brightness information of
the image is carried by the "luma" component, while the color information is carried
by the "chroma" component. A TV tuner card's analog-to-digital converter can
digitize such a signal, with minimal loss of resolution, before the image is
displayed on computers or mobile devices.
Color CRT Monitors:
A color CRT monitor displays images by using a combination of
phosphors of different colors. There are two popular approaches for
producing color displays with a CRT:

1. Beam Penetration Method

2. Shadow-Mask Method

1. Beam Penetration Method:


The beam-penetration method has been used with random-scan
monitors. In this method, the CRT screen is coated with two layers of
phosphor, red and green, and the displayed color depends on how far
the electron beam penetrates into the phosphor layers. This method
produces only four colors: red, green, orange, and yellow. A beam of
slow electrons excites only the outer red layer, so the screen shows
red; a beam of high-speed electrons penetrates to the inner green
layer, so the screen shows green; intermediate beam speeds produce
combinations seen as orange and yellow.

Advantages:
1. Inexpensive

Disadvantages:
1. Only four colors are possible.
2. Quality of pictures is not as good as with the shadow-mask method.

2. Shadow-Mask Method:
o Shadow Mask Method is commonly used in Raster-Scan System
because they produce a much wider range of colors than the
beam-penetration method.
o It is used in the majority of color TV sets and monitors.

Construction: A shadow mask CRT has 3 phosphor color dots at each


pixel position.

o One phosphor dot emits: red light


o Another emits: green light
o Third emits: blue light
This type of CRT has 3 electron guns, one for each color dot and a shadow mask grid just behind the phosphor coated screen.

Shadow mask grid is pierced with small round holes in a triangular pattern.

Figure shows the delta-delta shadow mask method commonly used in color CRT system.
Working: Triad arrangement of red, green, and blue guns.

The deflection system of the CRT operates on all 3 electron beams simultaneously; the 3 electron beams are deflected and focused
as a group onto the shadow mask, which contains a sequence of holes aligned with the phosphor-dot patterns.

When the three beams pass through a hole in the shadow mask, they activate a dot triangle, which appears as a small color spot
on the screen.

The phosphor dots in the triangles are organized so that each electron beam can activate only its corresponding color dot when it
passes through the shadow mask.

Inline arrangement: Another configuration for the 3 electron guns is an inline arrangement, in which the 3
electron guns, and the corresponding red-green-blue color dots on the screen, are aligned along one scan line rather than in a
triangular pattern.

This inline arrangement of electron guns is easier to keep in alignment and is commonly used in high-resolution color CRTs.

Advantages:
1. Realistic images.
2. Millions of different colors can be generated.
3. Shadow scenes are possible.

Disadvantages:
1. Relatively expensive compared with the monochrome CRT.
2. Relatively poor resolution.
3. Convergence problems.

Flat Panel Display Monitors


Flat-panel devices are display devices that have less volume,
weight, and power consumption compared to the Cathode Ray
Tube (CRT). Because of these advantages, the use of CRTs
decreased. As flat-panel devices are light in weight, they can be
hung on walls or even worn on the wrist as a watch. Flat Panel
Displays (FPD) allow users to view data, graphics, text, and images.


Types of Flat Panel Display:
1. Emissive Display:
The Emissive Display or Emitters are the devices that convert
electrical energy into light energy.
Examples: Plasma Panel, LED (Light Emitting Diode), Flat CRT.
2. Non-Emissive Display:
Non-Emissive Displays or Non-Emitters are devices that use
optical effects to convert sunlight or light from some other
source into graphic patterns.
Examples: LCD (Liquid Crystal Display)
Advantages of Flat Panel Devices:
 Flat-panel devices like LCDs produce high-quality digital
images.
 Flat-panel monitors are stylish and have a very space-saving
design.
 Flat-panel devices consume less power and give the
maximum image size in the minimum space.
 Flat-panel devices use their full color display capability.
 Full-motion video can be viewed on flat-panel devices
without artifacts or contrast loss.
Disadvantages of Flat Panel Devices:
 They are more expensive than CRTs.
 They can have lower refresh rates than CRTs.
 Slow response times.
 Narrower viewing angles than CRTs.

2.3 Hardcopy Devices

A hard copy device is a piece of hardware that creates a physical copy


of information from a computer. This physical copy is also known as a
paper copy or hardcopy. Hard copy devices can produce printed copies
on any type of paper, or on flat surfaces that fit into the printer's feed.

Plotters
 A plotter is a special output device used to produce hard copies
of large graphs and designs on paper.
 They are used to draw construction maps, engineering
drawings, architectural plans, and business charts.
 They are mostly used by engineers and designers who need to
draw complicated diagrams.
 They are also used by marketing agents for printing huge
advertisements and posters.
 Types of plotters
o Flat-bed plotter
o Ink-jet plotter
o Drum plotter

Printers
 A printer is an external output device that takes data from a computer
and generates output in the form of graphics / text on a paper.
 The printed output is a permanent form of output.
 Printers are categorized according to whether or not the image produced
is formed by physical contact of the print mechanism with the paper.
 Types of printers
o Impact printers
o Non-Impact printers

Raster Display Technology :-

This technology, based on television technology, was developed in the
early 70s. It consists of a central processing unit, a video controller,
a monitor, system memory, and peripheral devices such as a mouse and keyboard.
 When a particular command is called by the application program,
the graphics subroutine package sets the appropriate pixels in the
frame buffer.
 The video controller then cycles through the frame buffer, one scan
line at a time, typically 50 times per second.
 It reads the value of each pixel contained in the buffer and uses it
to control the intensity of the CRT electron beam, so there exists a
one-to-one relationship between a pixel in the frame buffer and a
pixel on the CRT screen.
 A pixel in a frame buffer may be represented by one bit, as in a
monochromatic system where each pixel on the CRT screen is either
on ‘1’ or off ‘0’.
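
As a concrete illustration of this one-bit frame buffer, here is a minimal C sketch
(the resolution constants and function name are illustrative, not from the notes):

#include <stdint.h>

#define WIDTH  640                 /* illustrative resolution */
#define HEIGHT 480

static uint8_t framebuf[WIDTH * HEIGHT / 8];   /* one bit per pixel */

/* Set or clear the single bit that controls one screen pixel. */
void set_pixel(int x, int y, int on)
{
    int idx = y * WIDTH + x;                              /* linear pixel index   */
    if (on)
        framebuf[idx / 8] |=  (uint8_t)(1 << (idx % 8));  /* turn the pixel on    */
    else
        framebuf[idx / 8] &= (uint8_t)~(1 << (idx % 8));  /* turn the pixel off   */
}

The video controller would read this buffer one scan line at a time and drive the
beam intensity from each bit.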
Principles:
 Raster displays generate images by dividing the
screen into a grid of pixels (picture elements).
 Each pixel is individually addressed and assigned
a color value.
 The entire image is created by scanning the screen
line by line, starting from the top-left corner and
moving horizontally across each row.
 As each pixel is addressed, the corresponding
color information is sent to the display, resulting
in the formation of the complete image.

Characteristics:

 Pixel-based: Images are composed of individual


pixels arranged in a grid.
 Resolution-dependent: Image quality is directly
related to the number of pixels on the screen.
 Common in modern displays: LCD, LED, and
OLED displays commonly use raster architectures.
 Well-suited for complex graphics and multimedia
content.
 Can accurately represent photographic images and
realistic scenes.
 Typically used in gaming, video playback, graphic
design, and other multimedia applications.
Vector Display Technology
 Vector display technology was developed in the 60s and was used as a
common display device until the 80s. It is also called a random-scan,
stroke, line-drawing, or calligraphic display.
 The application program and graphics subroutine package
both reside in system memory and execute on the CPU. The
graphics subroutine package creates a display list and stores it
in system memory.

 A display list contains point- and line-plotting commands with
endpoint coordinates, as well as character-plotting commands.
 The DPU (Display Processing Unit) interprets the commands in the
display list and plots the respective output primitives such as
points, lines, and characters.
 In fact, the DPU sends digital point coordinates to a vector
generator that converts the digital coordinate values to analog
voltages for the beam-deflecting circuits.
Unit - 3
( Two Dimensional Algorithms and Transformations. )
3.1. Mathematical Line Drawing Concept
3.2. Line Drawing Algorithms
3.2.1. Digital Differential Analyzer (DDA)
3.2.2. Bresenham’s Line Drawing Algorithm
3.3. Mid-point Circle Drawing
3.4. Mid-point Ellipse Drawing Algorithm
3.5. Review of Matrix Operations – Addition and Multiplication
3.6. Two-dimensional Transformations
3.6.1. Translation
3.6.2. Scaling
3.6.3. Rotation
3.6.4. Reflection
3.6.5. Shearing
3.7. Two-Dimensional Viewing Pipeline

Mathematical Line Drawing Concept

In computer graphics, representing lines perfectly on a screen presents a challenge.


Screens are made of pixels, which are tiny squares that light up to form an image. A true
mathematical line is infinitely thin, but a pixel has a definite size. So, algorithms are
needed to approximate lines on a pixel grid.

In computer graphics, the mathematical line drawing concept refers to the


algorithms and techniques used to render straight lines on a digital display.
These algorithms are fundamental to rendering graphics, as lines serve as the
building blocks for more complex shapes and images. One of the most basic
algorithms for line drawing is the Digital Differential Analyzer (DDA)
algorithm, which calculates the coordinates of pixels along a line.
Line Drawing Algorithms
Line drawing algorithms are mathematical procedures used to render straight lines on a
digital display. Two commonly used algorithms for this purpose are the Digital
Differential Analyzer (DDA) algorithm and Bresenham's line drawing algorithm.
Digital Differential Analyzer (DDA)
Basic Principle:
 The DDA algorithm calculates the coordinates of pixels along a line by
incrementally stepping from one endpoint to the other.
 It determines the pixel positions using the line equation and calculates the
increments for x and y coordinates for each step.
DDA (Digital Differential Analyzer) is a line drawing algorithm used in computer graphics
to generate a line segment between two specified endpoints. It is a simple and efficient
algorithm that works by using the incremental difference between the x-coordinates and y-
coordinates of the two endpoints to plot the line.
The steps involved in DDA line generation algorithm are:
1. Input the two endpoints of the line segment, (x1,y1) and (x2,y2).
2. Calculate the difference between the x-coordinates and y-coordinates of the endpoints as
dx and dy respectively.
3. Calculate the slope of the line as m = dy/dx.
4. Set the initial point of the line as (x1,y1).
5. Loop through the x-coordinates of the line, incrementing by one each time, and calculate
the corresponding y-coordinate using the equation y = y1 + m(x – x1).
6. Plot the pixel at the calculated (x,y) coordinate.
7. Repeat steps 5 and 6 until the endpoint (x2,y2) is reached.

DDA algorithm is relatively easy to implement and is computationally efficient, making it


suitable for real-time applications. However, it has some limitations, such as the inability to
handle vertical lines and the need for floating-point arithmetic, which can be slow on some
systems. Nonetheless, it remains a popular choice for generating lines in computer
graphics.
In any 2-dimensional plane, if we connect two points (x0, y0) and (x1, y1), we get a line
segment. But in computer graphics we cannot directly join any two coordinate
points; instead, we must calculate the intermediate points' coordinates and put a pixel
of the desired color at each intermediate point, with the help of functions like
putpixel(x, y, K) in C, where (x, y) is the coordinate and K denotes some color.
DDA Algorithm:
Consider one endpoint of the line as (X0, Y0) and the other as (X1, Y1).

#include <graphics.h>
#include <stdlib.h>
#include <math.h>

void lineDDA(int X0, int Y0, int X1, int Y1)
{
    // calculate dx, dy
    int dx = X1 - X0;
    int dy = Y1 - Y0;

    // depending upon the absolute values of dx and dy,
    // choose the number of steps at which to put a pixel
    int steps = abs(dx) > abs(dy) ? abs(dx) : abs(dy);

    // calculate the increment in x and y for each step
    float Xinc = dx / (float) steps;
    float Yinc = dy / (float) steps;

    // put a pixel for each step
    float X = X0;
    float Y = Y0;
    for (int i = 0; i <= steps; i++)
    {
        putpixel((int) round(X), (int) round(Y), WHITE);
        X += Xinc;
        Y += Yinc;
    }
}
Advantages of DDA Algorithm:
 It is a simple and easy-to-implement algorithm.
 It avoids expensive operations (such as multiplications) in its inner loop.
 It is faster than the direct use of the line equation because it
calculates the points on the line incrementally, without any
floating-point multiplication.
Disadvantages of DDA Algorithm:
 It involves rounding-off operations and floating-point arithmetic,
which are time-consuming.
 As it is orientation-dependent, it has poor endpoint accuracy.
 Due to the limited precision of the floating-point representation, it
produces a cumulative error.

Bresenham's Line Drawing Algorithm:


 Bresenham's algorithm is an efficient method for
rendering lines using only integer arithmetic
operations.
 It utilizes the idea of error accumulation to determine
the best pixel positions to plot along the line.
 This algorithm is used for scan converting a line. It was
developed by Bresenham.
Steps:
 Calculate the decision parameter (e) based on the slope of the
line.
 Incrementally step along the line from one endpoint to the other,
adjusting the decision parameter at each step.
 Based on the decision parameter, determine whether to increment
the x coordinate or the y coordinate.
 Plot the pixel at the nearest integer coordinates determined by the
algorithm (a C sketch of these steps follows below).
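
The following is a minimal sketch of these steps for the common case
0 ≤ slope ≤ 1 with x0 < x1, reusing the Turbo-C putpixel call from the DDA
listing above; other slopes are handled by symmetric cases:

#include <graphics.h>

void lineBresenham(int x0, int y0, int x1, int y1)
{
    int dx = x1 - x0;
    int dy = y1 - y0;
    int p = 2 * dy - dx;            // initial decision parameter
    int x = x0, y = y0;
    while (x <= x1)
    {
        putpixel(x, y, WHITE);
        x++;
        if (p < 0)
            p += 2 * dy;            // stay on the same row (move east)
        else
        {
            y++;                    // step up one row (move north-east)
            p += 2 * (dy - dx);
        }
    }
}

Note that the loop uses only integer addition, subtraction, and comparison, which is
exactly why Bresenham's algorithm is faster than the DDA.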
Advantages:
 More efficient and accurate than the DDA algorithm.
 Handles special cases like vertical and horizontal lines with ease.
 Suitable for implementation on systems with limited resources.

Limitations:
 Requires more complex logic compared to the DDA algorithm.
 May be slightly less intuitive to understand and implement.

Note:- Prefer PDF 2 for better examples on both algorithms, from page no. 41.

DDA Algorithm vs. Bresenham's Line Algorithm

1. The DDA algorithm uses floating point, i.e., real arithmetic;
Bresenham's line algorithm uses fixed point, i.e., integer arithmetic.
2. The DDA algorithm uses multiplication and division in its operation;
Bresenham's line algorithm uses only subtraction and addition.
3. The DDA algorithm is slower than Bresenham's line algorithm at line
drawing because it uses real (floating-point) arithmetic; Bresenham's
algorithm is faster because it involves only integer addition and
subtraction in its calculations.
4. The DDA algorithm is not as accurate and efficient as Bresenham's
line algorithm.
5. The DDA algorithm can draw circles and curves, but not as accurately
as Bresenham's algorithm, which can draw circles and curves with
greater accuracy.
Mid - Point Circle Drawing

The mid-point circle drawing algorithm is an algorithm used to determine the


points needed for rasterizing a circle.

We use the mid-point algorithm to calculate all the perimeter points of the
circle in the first octant and then print them along with their mirror points in the
other octants. This will work because a circle is symmetric about its centre.

The algorithm is very similar to the Mid-Point Line Generation Algorithm. Here,
only the boundary condition is different.

For any given pixel (x, y), the next pixel to be plotted is either (x, y+1) or (x-1,
y+1). This can be decided by following the steps below.

1. Find the mid-point p of the two possible pixels, i.e., (x − 0.5, y + 1).
2. If p lies inside or on the circle perimeter, we plot the pixel (x, y+1);
otherwise, if it lies outside, we plot the pixel (x−1, y+1).
Boundary Condition: Whether the mid-point lies inside or outside the circle
can be decided by using the following formula.

Given a circle centered at (0, 0) with radius r, and a point p(x, y):

F(p) = x^2 + y^2 − r^2

If F(p) < 0, the point is inside the circle;
if F(p) = 0, the point is on the perimeter;
if F(p) > 0, the point is outside the circle.

In our program, we denote F(p) by P. The value of P is calculated at the
mid-point of the two contending pixels, i.e., (x − 0.5, y + 1). Each pixel is
indexed with a subscript k.

P_k = (x_k − 0.5)^2 + (y_k + 1)^2 − r^2

Now,
x_{k+1} = x_k or x_k − 1, and y_{k+1} = y_k + 1

∴ P_{k+1} = (x_{k+1} − 0.5)^2 + (y_{k+1} + 1)^2 − r^2
          = (x_{k+1} − 0.5)^2 + [(y_k + 1) + 1]^2 − r^2
          = (x_{k+1} − 0.5)^2 + (y_k + 1)^2 + 2(y_k + 1) + 1 − r^2
          = (x_{k+1} − 0.5)^2 + [−(x_k − 0.5)^2 + (x_k − 0.5)^2] + (y_k + 1)^2 − r^2 + 2(y_k + 1) + 1
          = P_k + (x_{k+1} − 0.5)^2 − (x_k − 0.5)^2 + 2(y_k + 1) + 1
          = P_k + (x_{k+1}^2 − x_k^2) − (x_{k+1} − x_k) + 2(y_k + 1) + 1

So:
P_{k+1} = P_k + 2(y_k + 1) + 1,                 when P_k ≤ 0, i.e., the midpoint is inside the circle (x_{k+1} = x_k)
P_{k+1} = P_k + 2(y_k + 1) − 2(x_k − 1) + 1,    when P_k > 0, i.e., the midpoint is outside the circle (x_{k+1} = x_k − 1)

The first point to be plotted is (r, 0) on the x-axis. The initial value of P is
calculated as follows:

P_1 = (r − 0.5)^2 + (0 + 1)^2 − r^2
    = 1.25 − r
    ≈ 1 − r (when rounded off)
Examples:

Input : Centre -> (0, 0), Radius -> 3

Output : (3, 0) (-3, 0) (0, 3) (0, -3)
(3, 1) (-3, 1) (3, -1) (-3, -1)
(1, 3) (-1, 3) (1, -3) (-1, -3)
(2, 2) (-2, 2) (2, -2) (-2, -2)

Input : Centre -> (4, 4), Radius -> 2

Output : (6, 4) (2, 4) (4, 6) (4, 2)
(6, 5) (2, 5) (6, 3) (2, 3)
(5, 6) (3, 6) (5, 2) (3, 2)
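
A minimal C sketch consistent with the update rules derived above (putpixel as in
the earlier listings; the eight symmetric plots cover all octants from the
first-octant computation):

#include <graphics.h>

void circleMidpoint(int xc, int yc, int r)
{
    int x = r, y = 0;
    int p = 1 - r;                        // initial decision parameter P1
    while (x >= y)
    {
        // plot the point and its seven symmetric mirror points
        putpixel(xc + x, yc + y, WHITE);  putpixel(xc - x, yc + y, WHITE);
        putpixel(xc + x, yc - y, WHITE);  putpixel(xc - x, yc - y, WHITE);
        putpixel(xc + y, yc + x, WHITE);  putpixel(xc - y, yc + x, WHITE);
        putpixel(xc + y, yc - x, WHITE);  putpixel(xc - y, yc - x, WHITE);
        y++;
        if (p <= 0)
            p += 2 * y + 1;               // midpoint inside: keep x
        else
        {
            x--;                          // midpoint outside: step x inwards
            p += 2 * y - 2 * x + 1;
        }
    }
}

For example, circleMidpoint(0, 0, 3) visits (3, 0), (3, 1), and (2, 2) in the first
octant, matching the first worked example above.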

From Another Note :-


Mid-Point Ellipse Drawing Algorithm
The mid-point ellipse algorithm is used to draw an ellipse in computer graphics.
Also refer: Midpoint line algorithm, Midpoint circle algorithm.
The midpoint ellipse algorithm plots (finds) points of an ellipse in the first quadrant by
dividing the quadrant into two regions.
Each point (x, y) is then projected into the other three quadrants (-x, y), (x, -y), (-x, -y),
i.e., it uses 4-way symmetry.
Function of ellipse:
f_ellipse(x, y) = r_y^2 x^2 + r_x^2 y^2 − r_x^2 r_y^2
If f_ellipse(x, y) < 0 then (x, y) is inside the ellipse.
If f_ellipse(x, y) > 0 then (x, y) is outside the ellipse.
If f_ellipse(x, y) = 0 then (x, y) is on the ellipse.

Decision parameter:
Initially, we have two decision parameters: p1_0 in region 1 and p2_0 in region 2.
p1_0 in region 1 is given as:
p1_0 = r_y^2 − r_x^2 r_y + (1/4) r_x^2

Mid-Point Ellipse Algorithm:
1. Take as input the radius along the x-axis and the y-axis, and obtain the center of the ellipse.
2. Initially, we assume the ellipse to be centered at the origin, with the first point
(x_0, y_0) = (0, r_y).
3. Obtain the initial decision parameter for region 1 as: p1_0 = r_y^2 − r_x^2 r_y + (1/4) r_x^2
4. For every x_k position in region 1:
   If p1_k < 0, the next point along the ellipse is (x_{k+1}, y_k) and p1_{k+1} = p1_k + 2 r_y^2 x_{k+1} + r_y^2
   Else, the next point is (x_{k+1}, y_k − 1) and p1_{k+1} = p1_k + 2 r_y^2 x_{k+1} − 2 r_x^2 y_{k+1} + r_y^2
5. Obtain the initial value in region 2 using the last point (x_0, y_0) of region 1 as:
   p2_0 = r_y^2 (x_0 + 1/2)^2 + r_x^2 (y_0 − 1)^2 − r_x^2 r_y^2
6. At each y_k position in region 2, starting at k = 0:
   If p2_k > 0, the next point is (x_k, y_k − 1) and p2_{k+1} = p2_k − 2 r_x^2 y_{k+1} + r_x^2
7. Else, the next point is (x_{k+1}, y_k − 1) and p2_{k+1} = p2_k + 2 r_y^2 x_{k+1} − 2 r_x^2 y_{k+1} + r_x^2
8. Now obtain the symmetric points in the other three quadrants and plot the coordinate values as:
   x = x + x_c, y = y + y_c
9. Repeat the steps for region 1 until 2 r_y^2 x ≥ 2 r_x^2 y.
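
A minimal C sketch of the two-region procedure above (4-way symmetry via a small
helper; the decision parameters are kept as doubles because p1_0 contains the
fractional term (1/4) r_x^2):

#include <graphics.h>

static void plotEllipsePoints(int xc, int yc, int x, int y)
{
    // 4-way symmetry: (x, y), (-x, y), (x, -y), (-x, -y)
    putpixel(xc + x, yc + y, WHITE);  putpixel(xc - x, yc + y, WHITE);
    putpixel(xc + x, yc - y, WHITE);  putpixel(xc - x, yc - y, WHITE);
}

void ellipseMidpoint(int xc, int yc, int rx, int ry)
{
    double rx2 = (double) rx * rx, ry2 = (double) ry * ry;
    int x = 0, y = ry;

    // region 1: the curve's slope is greater than -1
    double p1 = ry2 - rx2 * ry + 0.25 * rx2;
    while (2 * ry2 * x < 2 * rx2 * y)
    {
        plotEllipsePoints(xc, yc, x, y);
        x++;
        if (p1 < 0)
            p1 += 2 * ry2 * x + ry2;
        else
        {
            y--;
            p1 += 2 * ry2 * x - 2 * rx2 * y + ry2;
        }
    }

    // region 2: the curve's slope is less than -1
    double p2 = ry2 * (x + 0.5) * (x + 0.5) + rx2 * (y - 1.0) * (y - 1.0) - rx2 * ry2;
    while (y >= 0)
    {
        plotEllipsePoints(xc, yc, x, y);
        y--;
        if (p2 > 0)
            p2 += rx2 - 2 * rx2 * y;
        else
        {
            x++;
            p2 += 2 * ry2 * x - 2 * rx2 * y + rx2;
        }
    }
}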
Review of Matrix Operations - Addition and Multiplication
Note:- This topic focuses on reviewing the concepts of matrix addition and
multiplication, so give more weight to practice than to theory. The lesson covers
only addition and multiplication, but practicing subtraction is recommended too.

Matrix operations are the operations performed on matrices, such as the
addition of matrices, subtraction of matrices, multiplication of matrices,
and others. These operations are very useful for solving various problems
with matrices and help us to find the transpose, inverse, rank, and other
properties of a matrix. They also let us combine two or more matrices.
In this section, we will learn about matrix operations and examples of
them in detail.
What are Matrix Operations?
Matrix operations are the operations that are used to combine various matrices
to form a single matrix. The operations such as addition, subtraction, and
multiplication are easily performed on the matrix. These matrix operations are
very useful to solve matrix problems and to find the transpose and the inverse of
the matrix.
Various matrix operations that are used to solve matrix problems are,
 Addition of Matrix
 Subtraction of Matrix
 Scalar Multiplication of Matrix
 Multiplication of Matrix

Addition of Matrices
Just as we add two numbers, we can easily add two matrices. The only thing to
note is that the two matrices to be added must be of the same order. That is, to
add two matrices we make sure they have the same order, and then each element of
the first matrix is added to the corresponding element of the second matrix,
producing a single matrix; this completes the addition operation.
Properties of Matrix Addition
There are various properties associated with matrix addition that are, for matrices A, B,
and C of the same order, then
 Commutative Law: A + B = B + A
 Associative Law: (A + B) + C = A + (B + C)
 Identity of Matrix: A + O = O + A = A, where O is a zero matrix which is the
Additive Identity of Matrix
 Additive Inverse: A + (-A) = O = (-A) + A, where (-A) is obtained by changing the
sign of every element of A, which is the additive inverse of the matrix.

Subtraction of Matrices
 Just as we add two matrices, we can also easily subtract two matrices. The
only thing to note is that the two matrices to be subtracted must be of the
same order. That is, to subtract two matrices we make sure they have the same
order, and then each element of the second matrix is subtracted from the
corresponding element of the first, producing a single matrix; this completes
the subtraction operation.
Multiplication of Matrix
Matrix multiplication is the operation that lets us multiply two
matrices. It is different from elementwise algebraic multiplication, and
not all matrices can be multiplied: two matrices can be multiplied only
when the number of columns in the first equals the number of rows in the
second, i.e., for a matrix A of order m×n and a matrix B of order n×p.
For any other pair of matrices, where the number of columns of the first
is not equal to the number of rows of the second, multiplication is not
possible.
Also, multiplication of matrices is not commutative: if matrix A and
matrix B are taken, then in general A×B ≠ B×A.
Transpose Operation of a Matrix
The transpose operation of a matrix is used to find the transpose of any
matrix. The transpose of a matrix is a matrix in which the rows of the
original matrix become the columns and the columns become the rows.
Suppose we have a matrix A of order m×n such that A = [a_ij]_{m×n}; then
the transpose of matrix A is denoted A^T and its value is:
A^T = [a_ji]_{n×m}
Inverse Operation of a Matrix
For any matrix A, the inverse exists only when A is a square matrix and
its determinant is non-zero, i.e., A = [a_ij]_{n×n} and |A| ≠ 0.
The inverse of a matrix A is the matrix that, on multiplying with matrix
A, yields the identity matrix. It is denoted A^{-1}, and the inverse
operation of the matrix is the operation that helps us find it. For any
invertible square matrix A we have A × A^{-1} = I, where "I" is the
identity matrix of the same order as A.

Examples of Addition and Multiplication:-
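
The worked figures from the original are missing here, so the following is a small
substitute example with values chosen for illustration.

Let A = |1 2| and B = |5 6|
        |3 4|         |7 8|

Addition (element by element; both matrices must have the same order):

A + B = |1+5 2+6| = | 6  8|
        |3+7 4+8|   |10 12|

Multiplication (row of A times column of B; the number of columns of A must equal
the number of rows of B):

A × B = |1·5+2·7 1·6+2·8| = |19 22|
        |3·5+4·7 3·6+4·8|   |43 50|

Computing B × A instead gives |23 34| over |31 46|, confirming that matrix
multiplication is not commutative.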


Two Dimensional Transformations
Transformation means changing some graphics into something else by applying
rules. We can have various types of transformations such as translation, scaling
up or down, rotation, shearing, etc. When a transformation takes place on a 2D
plane, it is called 2D transformation.
Transformations play an important role in computer graphics to reposition the
graphics on the screen and change their size or orientation.

Translation
 Translation moves an object by adding a fixed displacement (T_x, T_y) to every
point: x' = x + T_x, y' = y + T_y.
2D - Rotation
 Rotation turns an object about the origin through an angle θ:
x' = x·cosθ − y·sinθ, y' = x·sinθ + y·cosθ.
2D - Reflection
 Reflection produces a mirror image of an object about an axis; for example,
reflection about the x-axis maps (x, y) to (x, −y).

2D - Shearing
 Shearing involves skewing an object along one or both axes, resulting in a
transformation that stretches or compresses the object in one direction.
 The shearing operation is represented by a shearing matrix, which specifies
the amount of shear for each axis.
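
As a concrete sketch, the transformations above reduce to simple coordinate
arithmetic; a minimal C version follows (the function names are illustrative, not
from the notes):

#include <math.h>

/* Move a point by the translation vector (tx, ty). */
void translate2D(float *x, float *y, float tx, float ty)
{
    *x += tx;
    *y += ty;
}

/* Scale a point relative to the origin by factors (sx, sy). */
void scale2D(float *x, float *y, float sx, float sy)
{
    *x *= sx;
    *y *= sy;
}

/* Rotate a point about the origin through angle theta (radians). */
void rotate2D(float *x, float *y, float theta)
{
    float nx = *x * cosf(theta) - *y * sinf(theta);
    float ny = *x * sinf(theta) + *y * cosf(theta);
    *x = nx;
    *y = ny;
}

/* Shear a point along the x-axis: x is displaced in proportion to y. */
void shearX2D(float *x, float *y, float shx)
{
    *x += shx * (*y);
}

Reflection about the x-axis is just scale2D(&x, &y, 1, -1), which shows how
reflection fits the same pattern.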
Two - Dimensional Viewing Pipeline :-
The two-dimensional (2D) viewing pipeline is a series of steps or stages involved in the
process of rendering two-dimensional graphical objects onto a display screen. It
encompasses various transformations and operations that convert objects from their model
space coordinates to their final positions on the screen.

We know that a picture is stored in computer memory using any convenient
Cartesian co-ordinate system, referred to as the World Co-ordinate System
(WCS). However, when the picture is displayed on a display device it is measured
in the Physical Device Co-ordinate System (PDCS) corresponding to that display
device. Therefore, displaying an image of a picture involves mapping the
co-ordinates of the points and lines that form the picture into the appropriate
physical device co-ordinates where the image is to be displayed. This mapping of
co-ordinates is achieved with a co-ordinate transformation known
as the viewing transformation.
The viewing transformation which maps picture co-ordinates in the WCS to
display co-ordinates in PDCS is performed by the following transformations.
• Converting world co-ordinates to viewing co-ordinates.
• Normalizing viewing co-ordinates.
• Converting normalized viewing co-ordinates to device co-ordinates.

The steps involved in viewing transformation:-


1. Construct the scene in world co-ordinate using the output primitives and
attributes.
2. Obtain a particular orientation for the window by setting a two-dimensional
viewing co-ordinate system in the world co-ordinate plane and define a
window in the viewing co-ordinate system.
3. Use viewing co-ordinates reference frame to provide a method for setting
up arbitrary orientations for rectangular windows.
4. Once the viewing reference frame is established, transform descriptions in
world co-ordinates to viewing co-ordinates.
5. Define a view port in normalized co-ordinates and map the viewing co-
ordinates description of the scene to normalized co-ordinates.
6. Clip all the parts of the picture which lie outside the viewport.
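
The window-to-viewport part of this pipeline (steps 5 and 6) reduces to a simple
proportional mapping. As a sketch, writing (xw_min, yw_min)-(xw_max, yw_max) for the
window bounds and (xv_min, yv_min)-(xv_max, yv_max) for the viewport bounds (names
chosen here for illustration), a world point (xw, yw) maps to:

xv = xv_min + (xw − xw_min) · (xv_max − xv_min) / (xw_max − xw_min)
yv = yv_min + (yw − yw_min) · (yv_max − yv_min) / (yw_max − yw_min)

The mapping simply preserves each point's relative position inside the window.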
Unit - 4 ( Three Dimensional Graphics )
4.1. Three-dimensions transformations
4.1.1. Translation
4.1.2. Scaling
4.1.3. Rotation
4.1.4. Reflection
4.1.5. Shearing
4.2. Three-dimensional Viewing Pipeline
4.3. Three-dimensions Projections
4.3.1. Concept of Projection
4.3.2. Projection of 3D Objects onto 2D Display Devices
4.3.3. Three-dimensional Projection Methods
4.3.3.1. Parallel Projection Method
4.3.3.2. Perspective Projection Method
4.4. Three-dimensional Object Representations
4.4.1. Polygon Surfaces
4.4.2. Polygon Tables
4.5. Introduction to Hidden Line and Hidden Surface Removal Techniques
4.5.1. Object Space Method
4.5.2. Image Space Method
4.6. Introduction to Illumination/ Lighting Models
4.6.1. Ambient Model
4.6.2. Diffuse Model
4.6.3. Specular Model
4.7. Introduction to Shading/ Surface Rendering Models
4.7.1. Constant Shading Model
4.7.2. Gouraud Shading Model
4.7.3. Phong Shading Model

Three-dimensional (3D) graphics refer to the creation, rendering, and


manipulation of visual content that simulates three-dimensional objects and
environments in a digital space. Unlike traditional two-dimensional (2D)
graphics, which are flat and lack depth, 3D graphics aim to represent objects with
height, width, and depth, providing a more realistic and immersive visual
experience.
Three - dimensional transformations
The three-dimensional transformations are extensions of two-dimensional
transformation. In 2D two coordinates are used, i.e., x and y whereas in 3D three
co-ordinates x, y, and z are used.

For three-dimensional images and objects, three-dimensional transformations are
needed. These are translation, scaling, and rotation; they are also called the
basic transformations and are represented using matrices. More complex
transformations are also handled using matrices in 3D.

2D graphics can show two-dimensional objects such as bar charts, pie charts, and
graphs, but more natural objects can be represented using 3D. Using 3D, we can
see different shapes of an object from different angles and in different sections.

In 3D, translation needs three factors; rotation, likewise, is a composition of
three rotations, each of which can be performed about any of the three Cartesian
axes. In 3D, too, we can represent a sequence of transformations as a single
matrix.

Computer Graphics is used in CAD. CAD allows manipulation of machine
components, which are 3-dimensional, and also supports the study of automobile
bodies and aircraft parts. All these activities require realism, and for realism
3D is required. Producing a realistic 3D scene from 2D input is difficult; it
requires the third dimension, i.e., depth.
The geometric transformations play a vital role in generating images of three-dimensional
objects. With the help of these transformations, the location of objects relative to others
can be easily expressed. Sometimes the viewpoint changes rapidly, or objects move in
relation to each other; for this, a number of transformations may be carried out repeatedly.

Translation
It is the movement of an object from one position to another. Translation is done using
translation vectors; in 3D there are three vector components instead of two, in the x, y,
and z directions. Translation in the x-direction is represented by Tx, translation in the
y-direction by Ty, and translation in the z-direction by Tz.

If a point P having co-ordinates (x, y, z) is translated, then its coordinates after
translation will be (x1, y1, z1), where Tx, Ty, Tz are the translation vector components
in the x, y, and z directions respectively:

x1=x+Tx
y1=y+Ty
z1=z+ Tz
Three-dimensional transformations are performed by transforming each vertex of the object.
If an object has five corners, then the translation will be accomplished by translating all
five points to new locations. Figure 1 shows the translation of a point; figure 2 shows the
translation of a cube.

Matrix for translation

Matrix representation of point translation:
The point shown in the figure is (x, y, z). It becomes (x1, y1, z1) after translation;
Tx, Ty, Tz are the translation vector components.
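
The original matrix figure is missing; in homogeneous coordinates the standard
translation matrix takes the form:

[x1]   [1 0 0 Tx] [x]
[y1] = [0 1 0 Ty] [y]
[z1]   [0 0 1 Tz] [z]
[1 ]   [0 0 0 1 ] [1]

Multiplying out the product reproduces x1 = x + Tx, y1 = y + Ty, z1 = z + Tz.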

Scaling
Scaling is used to change the size of an object. The size can be increased or
decreased. Scaling requires three factors: Sx, Sy, and Sz.

Sx = scaling factor in the x-direction
Sy = scaling factor in the y-direction
Sz = scaling factor in the z-direction

Matrix for Scaling
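
The original matrix figure is missing; the standard homogeneous scaling matrix is:

[Sx 0  0  0]
[0  Sy 0  0]
[0  0  Sz 0]
[0  0  0  1]

so that x1 = x·Sx, y1 = y·Sy, z1 = z·Sz.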


Scaling of the object relative to a fixed point
The following steps are performed when scaling an object about a fixed point
(a, b, c). It can be represented as below:

1. Translate fixed point to the origin


2. Scale the object relative to the origin
3. Translate object back to its original position.

Note: If all scaling factors are equal, Sx = Sy = Sz, the scaling is called uniform.
If scaling is done with different scaling factors, it is called differential scaling.

In figure (a) the point (a, b, c) is shown, along with the object whose scaling is
to be done; the steps are shown in figs. (b), (c), and (d).
Rotation
Rotation is the movement of an object through an angle, either clockwise or
anticlockwise. 3D rotation is more complex than 2D rotation: for 2D we describe only
the angle of rotation, but for 3D both the angle of rotation and the axis of rotation
are required. The axis can be x, y, or z.

The following figures show rotation about the x-, y-, and z-axes.

The following figure shows rotation of the object about the Y axis.

The following figure shows rotation of the object about the Z axis.
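
The original matrix figures are missing; the standard homogeneous rotation matrices
(anticlockwise through angle θ, right-handed axes) are:

About the x-axis, R_x(θ):
[1   0      0     0]
[0  cosθ  −sinθ   0]
[0  sinθ   cosθ   0]
[0   0      0     1]

About the y-axis, R_y(θ):
[ cosθ  0  sinθ  0]
[  0    1   0    0]
[−sinθ  0  cosθ  0]
[  0    0   0    1]

About the z-axis, R_z(θ):
[cosθ  −sinθ  0  0]
[sinθ   cosθ  0  0]
[ 0      0    1  0]
[ 0      0    0  1]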


Reflection
Reflection produces a mirror image of an object. For reflection, an axis or plane of
reflection is selected. Three-dimensional reflections are similar to two-dimensional
ones; reflection is a rotation of 180° about the given axis. For reflection in 3D, a
plane is selected (xy, xz, or yz). The following matrices show reflection with
respect to these three planes.

Reflection relative to the XY plane

Reflection relative to the YZ plane

Reflection relative to the ZX plane
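
The original matrix figures are missing; each standard reflection matrix simply
negates the coordinate perpendicular to the chosen plane:

Relative to the XY plane (z -> −z):
[1 0  0 0]
[0 1  0 0]
[0 0 −1 0]
[0 0  0 1]

Relative to the YZ plane (x -> −x): the diagonal matrix (−1, 1, 1, 1).
Relative to the ZX plane (y -> −y): the diagonal matrix (1, −1, 1, 1).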


Shearing
Shearing is a change in the shape of an object; it is also called deformation. In 2D
the change can be in the x-direction, the y-direction, or both; if shear occurs in
both directions the object becomes distorted. In 3D, shear can occur in three
directions.

Matrix for shear
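
The original matrix figure is missing; one common form, a z-axis shear with
parameters sh_x and sh_y (illustrative names), displaces x and y in proportion to z:

[1 0 sh_x 0]
[0 1 sh_y 0]
[0 0  1   0]
[0 0  0   1]

so that x1 = x + sh_x·z, y1 = y + sh_y·z, z1 = z.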


Three - dimensional viewing pipeline
The three-dimensional (3D) viewing pipeline in computer graphics is a series of
stages or transformations involved in the process of rendering three-dimensional
scenes onto a two-dimensional display screen. It encompasses various operations
that convert 3D objects from their model space coordinates to their final
positions on the screen, taking into account aspects such as perspective
projection, visibility determination, and lighting effects.

The 3D viewing pipeline in computer graphics builds upon the concepts of the 2D pipeline
but adds an extra crucial step: projection, which transforms the 3D scene into a 2D view for
the final image. Here's a breakdown of the stages:

1. Modeling (Same as 2D):


o 3D objects are defined using points, lines, and polygons with coordinates in
world space.
2. Viewing Transformation (Similar to 2D with an extra step):
o Similar to 2D, this stage positions and orients the scene based on a virtual camera
viewpoint.
 World to View Transformation: Repositions the scene based on camera
position and direction.
 Viewing Clip Definition: Defines the viewing frustum (pyramid) to specify the
visible portion and culling objects outside it.
o Additional Step: Viewing Reference Point Definition: A specific point within
the scene (often the camera's target point) is chosen as the origin of the viewing
coordinate system.
3. Projection:
o This key stage takes the 3D scene data from viewing coordinates and transforms
it into a 2D representation onto a projection plane. It's like flattening the 3D
scene onto a canvas.
 There are two main projection types:
 Perspective Projection: Creates a more realistic view by simulating depth using
vanishing points. Objects farther away appear smaller.
 Orthographic Projection: Creates a parallel projection, maintaining object sizes
regardless of distance. Useful for technical drawings or architectural plans.
4. Windowing and Normalization (Same as 2D):
o A specific window within the projected image is chosen, and the coordinates are
normalized to the device's aspect ratio.
5. Clipping (Same as 2D):
o Objects outside the window are culled for efficiency.
6. Device Coordinates (Same as 2D):
o Transformed coordinates are converted to the specific format of the output device
(monitor).
7. Rasterization (Same as 2D):
o Transformed objects are converted into scanlines with pixel information for final
display.
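
As an illustrative sketch only (not the API of any particular package), the heart of this pipeline can be written as a chain of 4x4 matrix multiplications on homogeneous points, followed by the perspective divide and the viewport (device) mapping. The function and matrix names below are assumptions for illustration; constructing the model, view and projection matrices themselves is omitted:

    typedef struct { double m[4][4]; } Mat4;
    typedef struct { double x, y, z, w; } Vec4;

    /* Multiply a homogeneous point by a 4x4 matrix (column-vector convention). */
    Vec4 transform(Mat4 M, Vec4 p) {
        double in[4] = { p.x, p.y, p.z, p.w }, out[4];
        for (int r = 0; r < 4; r++) {
            out[r] = 0.0;
            for (int c = 0; c < 4; c++)
                out[r] += M.m[r][c] * in[c];
        }
        Vec4 q = { out[0], out[1], out[2], out[3] };
        return q;
    }

    /* Stages 1-3 and 6: model -> view -> projection, then perspective divide
       and mapping of normalized coordinates to device (pixel) coordinates.  */
    void project_point(Mat4 model, Mat4 view, Mat4 proj, Vec4 p,
                       int width, int height, double *sx, double *sy) {
        Vec4 clip = transform(proj, transform(view, transform(model, p)));
        double ndc_x = clip.x / clip.w;        /* perspective divide      */
        double ndc_y = clip.y / clip.w;
        *sx = (ndc_x + 1.0) * 0.5 * width;     /* viewport mapping        */
        *sy = (1.0 - ndc_y) * 0.5 * height;    /* screen y grows downward */
    }

Clipping and rasterization (stages 5 and 7) would then operate on the clip-space and device-space results respectively.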

Significance of the 3D Pipeline:

The 3D viewing pipeline enables us to visualize and interact with complex 3D
scenes on 2D screens. It ensures proper positioning, scaling, projection, and
culling of objects, resulting in a realistic or technical representation depending on
the chosen projection type. This pipeline is the foundation for rendering 3D
graphics in various applications like movies, games, simulations, and design
software.

Three-dimensions Projections
Concept of Projections
 Projection is defined as a transformation that changes a point in an n-dimensional
coordinate system into a point in a coordinate system whose dimension is
less than n.
 3D objects are transformed onto a 2D plane using projections.
 The plane is called the projection plane and the lines are called
projectors.
 There are two types of projection:
o Parallel Projection
o Perspective Projection

Projection of 3D objects onto 2D Display Devices


Representing an n-dimensional object in n-1 dimensions is known as
projection. It is the process of converting a 3D object into a 2D representation: we
represent a 3D object on a 2D plane {(x, y, z) -> (x, y)}. It is also defined as mapping or
transforming the object onto the projection plane or view plane. When geometric
objects are formed by the intersection of lines with a plane, the plane is called
the projection plane and the lines are called projectors.

Parallel Projection
 In parallel projection, coordinate positions are transformed to the
view plane along parallel lines.
 It preserves relative proportion of object.
 Accurate views of various sides of an object are obtained.
 Doesn’t give realistic representation of the appearance of the 3-D
object.

Types of Parallel Projection


 Orthographic Projection:
when the projection is perpendicular to the view plane. In short,
The direction of projection = normal to the projection plane.
The projection is perpendicular to the view plane.
 Oblique Projection:
when the projection is not perpendicular to the view plane. In short,
the direction of projection ≠ the normal to the projection plane;
the projectors are not perpendicular to the view plane.
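
In equations, taking the view plane as the z = 0 plane: an orthographic projection simply drops the z coordinate, while an oblique projection shifts x and y in proportion to z:

    Orthographic:  xp = x,  yp = y
    Oblique:       xp = x + z · L1 · cos φ,  yp = y + z · L1 · sin φ

where φ is the angle the projection line makes in the view plane and L1 = 1/tan α, with α the angle between the oblique projectors and the view plane. The common special cases are α = 45° (cavalier projection) and tan α = 2 (cabinet projection).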
Perspective Projection

 In perspective projection, object positions are transformed to the view plane
along lines that converge to a point called the projection reference point
(center of projection).
 The visual effect is similar to that of the human visual system.
 Equal-sized objects appear in different sizes according to their distance from
the view plane.
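
In equations, with the projection reference point at the origin and the view plane at z = d, a point (x, y, z) projects to

    xp = x · d / z,  yp = y · d / z

so, for example, doubling an object's distance from the projection reference point halves its projected size.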

1. Perspective Projection:

 This type aims to create a more realistic view by simulating how we perceive
depth in the real world. It achieves this using the concept of vanishing points.
 Key Concepts:
o Vanishing Point: As parallel lines in a 3D scene recede into the distance,
they appear to converge at a single point on the projection plane, called the
vanishing point. This creates the illusion of depth.
o Field of View: The angle that defines the portion of the 3D scene captured in
the projection. A wider field of view encompasses more of the scene but may
appear less focused, while a narrower field of view offers a more zoomed-in
perspective.
 Applications:
o Widely used in movies, video games, and other applications where a realistic
depiction of space and depth is desired.
 Advantages:
o Creates a natural and intuitive representation of depth, similar to human vision.
o Provides a sense of realism and immersion in the 3D scene.
 Disadvantages:
o Objects farther away appear smaller, which may not be ideal for technical
drawings or architectural plans where accurate size representation is crucial.
o Requires more complex calculations compared to orthographic projection.

2. Orthographic Projection:

 This type uses parallel lines for projection, resulting in a non-perspective view
where objects retain their relative sizes regardless of their distance from the
viewpoint.
 Key Concepts:
o Parallel projection: Lines that are parallel in 3D space also remain parallel in
the projected image. This creates a more technical and distortion-free view.
o Multiple viewpoints: Orthographic projections can be generated from various
viewpoints (front, top, side, etc.) to provide different perspectives of the 3D
scene.
 Applications:
o Often used in engineering drawings, architectural plans, and other scenarios
where accurate size representation and dimensional relationships are
essential.
o Also used in some design software to provide a clear view of objects from
different angles.
 Advantages:
o Maintains accurate size and proportion of objects, making it ideal for technical
drawings.
o Simpler calculations compared to perspective projection.
 Disadvantages:
o Can appear less natural and realistic compared to perspective projection.
o May not provide a strong sense of depth, especially for complex 3D scenes.
Three - dimensional Object Representations

Methods of 3D object representation


 Graphics scenes can contain many different kinds of objects: trees, flowers,
clouds, rocks, water, etc.
 These cannot all be described with a single method. Representing them requires
techniques such as polygon surfaces, quadric surfaces, surface and volume
rendering, visualization techniques, and procedural methods for representing
engineering structures with curved surfaces. Representation schemes for solid
objects are often divided into two broad categories.
Boundary Representation :
 It is used to describe a 3D object as a set of surface that separate the object
interior from the environment. E.g. polygon surface, curved surface.
Space partitioning representation:
 It is used to describe interior properties, by partitioning the spatial region
containing an object into a set of small, non-overlapping, contiguous solids, e.g. octree representation.

 Polygon surface :
The most commonly used boundary representation for a 3D object is a set of surface
polygons that enclose the object interior. The representation is simple and fast
because all surfaces are described with linear equations and the object is formed
from a set of connected polygons. Such representations are common in design and
modeling applications, to give a general indication of the surface structure.
Objects are represented in polygon tables. The equation of a plane is Ax + By + Cz + D = 0.
 Polygon Table:
To specify a polygon surface, a set of vertex coordinates and associated attribute
parameters are placed into tables that are used in the subsequent processing,
display, error checking and manipulation of the objects in a scene. Polygon tables
can be organized into two groups: geometric tables and attribute tables.

Geometric Tables:
 It contains vertex coordinates and parameters to specify the spatial
orientation of the polygon surface. It is organized into three lists.
 Vertex table: coordinate values for each vertex in the object are stored.
 Edge table: contains pointers back into the vertex table to identify the vertices
for each polygon edge.
 Polygon surface table: contains pointers back into the edge table to identify
the edges for each polygon.
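
A minimal C sketch of these three linked tables, for a single triangular surface; the fixed array sizes and names are assumptions for illustration:

    typedef struct { double x, y, z; } Vertex;           /* vertex table entry                   */
    typedef struct { int v1, v2; } Edge;                 /* pointers (indices) into vertex table */
    typedef struct { int edge[8]; int nedges; } Surface; /* pointers (indices) into edge table   */

    /* Triangle with vertices V1(0,0,0), V2(1,0,0), V3(0,1,0) */
    Vertex  vertex_table[]  = { {0,0,0}, {1,0,0}, {0,1,0} };
    Edge    edge_table[]    = { {0,1}, {1,2}, {2,0} };   /* E1=V1V2, E2=V2V3, E3=V3V1 */
    Surface surface_table[] = { { {0, 1, 2}, 3 } };      /* S1 uses edges E1, E2, E3  */

Because each table points into the one below it, shared vertices and edges are stored only once, and the checks listed below can be performed by following the pointers.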
Guidelines to Generate an Error-Free Table
1. Every vertex is listed as an endpoint for at least two edges.
2. Every edge is part of at least one polygon.
3. Every polygon is closed.
4. Every polygon has at least one shared edge.
5. If the edge table contains pointers to polygons, every edge referenced by a
polygon pointer has a reciprocal pointer back to the polygon.

Attribute Tables:
Attribute information from an object includes parameter specifying the degree of
transparency of an object and its surface reflectivity and texture characteristics.
Polygon Mesh :
A polygon mesh is a collection of edges, vertices and polygons connected such
that each edge is shared by at most two polygons.
Some graphics packages provide several polygon functions for modeling objects.
Introduction to Hidden Line and Hidden Surface
Removal Techniques:

Hidden line and hidden surface removal techniques are fundamental processes in
computer graphics used to determine which lines or surfaces are visible and
which are obscured or hidden by other objects in a 3D scene. Let's define each
term:

Hidden Line Removal:


 Hidden line removal, also known as line occlusion or line clipping, is the
process of identifying and removing lines or edges that are hidden or
obscured by other objects in a 3D scene.
 This technique is used to improve the clarity and readability of wireframe
or line drawings by eliminating lines that are not visible from a particular
viewpoint.
 Hidden line removal is essential for visualizing complex 3D models,
architectural plans, and engineering designs, where it helps highlight the
visible edges of objects and convey spatial relationships.

Hidden Surface Removal:


 Hidden surface removal, also known as surface occlusion or back-face
culling, is the process of identifying and removing surfaces or polygons that
are obscured or hidden by other surfaces in a 3D scene.
 This technique is used to determine which surfaces are visible from a
particular viewpoint and should be rendered, while obscured surfaces are
discarded or not rendered.
 Hidden surface removal is crucial for rendering realistic 3D scenes, as it
ensures that only the visible surfaces contribute to the final image,
improving rendering performance and reducing visual clutter.
 Common methods for hidden surface removal include depth buffering, z-
buffering, painter's algorithm, and binary space partitioning (BSP) trees.
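
Of these, the z-buffer (depth-buffer) method is the easiest to sketch. A minimal version in C, assuming a fixed resolution and the convention that smaller z means closer to the viewer:

    #define W 640
    #define H 480

    double       depth[H][W];  /* nearest depth seen so far at each pixel */
    unsigned int frame[H][W];  /* color of the nearest surface so far     */

    void zbuffer_clear(double far_z, unsigned int background) {
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++) {
                depth[y][x] = far_z;        /* start "infinitely" far away */
                frame[y][x] = background;
            }
    }

    /* Called once for every pixel covered by every polygon, in any order. */
    void zbuffer_plot(int x, int y, double z, unsigned int color) {
        if (z < depth[y][x]) {      /* nearer than anything drawn so far? */
            depth[y][x] = z;
            frame[y][x] = color;    /* the nearest surface wins the pixel */
        }
    }

The appeal of the method is that polygons can be processed in any order; the per-pixel depth test alone resolves visibility.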
Types of hidden surface detection algorithms
1. Object space methods
2. Image space methods

Object space methods: In this method, various parts of objects are compared;
after comparison, the visible, invisible or hardly visible surfaces are determined.
These methods generally decide the visible surface. In the wireframe model, they
are used to determine the visible lines, so these algorithms are line based rather
than surface based. The method proceeds by determining the parts of an object
whose view is obstructed by other objects and draws these parts in the same color.

Image space methods: Here the positions of various pixels are determined. These
methods are used to locate the visible surface instead of a visible line. Each
point is tested for its visibility: if a point is visible, the pixel is turned on,
otherwise off. The object closest to the viewer that is pierced by the projector
through a pixel is determined, and that pixel is drawn in the appropriate color.

These methods are also called Visible Surface Determination methods. Their
implementation on a computer requires a lot of processing time and processing
power.

The image space method requires more computation. Each object is defined clearly,
and the visibility of each object surface is also determined.

Differentiate between Object space and Image space methods

1. Object space: It is object based and concentrates on the geometrical relations
   among the objects in the scene.
   Image space: It is a pixel-based method, concerned with the final image, i.e.
   what is visible within each raster pixel.
2. Object space: Here surface visibility is determined.
   Image space: Here line visibility or point visibility is determined.
3. Object space: It is performed at the precision with which each object is
   defined; no resolution is considered.
   Image space: It is performed using the resolution of the display device.
4. Object space: Calculations are not based on the resolution of the display, so
   a change of object can be easily adjusted.
   Image space: Calculations are resolution based, so changes are difficult to
   adjust.
5. Object space: These were developed for vector graphics systems.
   Image space: These are developed for raster devices.
6. Object space: Object-based algorithms operate on continuous object data.
   Image space: These operate on discrete pixel data.
7. Object space: Vector displays used for the object method have a large address
   space.
   Image space: Raster systems used for image space methods have a limited
   address space.
8. Object space: Object precision is used for applications where speed is
   required.
   Image space: These are suitable for applications where accuracy is required.
9. Object space: It requires a lot of calculations if the image is to be
   enlarged.
   Image space: The image can be enlarged without losing accuracy.
10. Object space: If the number of objects in the scene increases, computation
    time also increases.
    Image space: In this method, complexity increases with the complexity of the
    visible parts.
Introduction to Illumination / Lighting Models

Illumination or lighting models in computer graphics simulate how light interacts
with surfaces in a 3D scene. These models play a crucial role in rendering
realistic images by determining how light sources illuminate objects and how light
is reflected or absorbed by their surfaces. Three key components of illumination
models are ambient, diffuse, and specular lighting.

1. Ambient Lighting:

 Ambient lighting represents the overall background or environmental
illumination in a scene.
 It provides a uniform level of light that is present regardless of the direction or
intensity of light sources.
 Ambient lighting contributes to the overall brightness and visibility of objects
in a scene, even in areas not directly illuminated by light sources.
 This component helps to simulate indirect lighting and fill in shadows,
enhancing the realism of rendered images.
 Ambient lighting is typically represented as a constant color or intensity
applied uniformly across all surfaces in the scene.

2. Diffuse Lighting:

 Diffuse lighting simulates the even distribution of light on the surface of an
object.
 It models how light is scattered or diffused across a surface due to its
roughness or texture.
 Diffuse lighting depends on the angle between the surface normal and the
direction of the incoming light.
 Surfaces that face the light source directly receive more light, while surfaces
angled away from the light receive less.
 The intensity of diffuse lighting decreases with increasing angle between the
surface normal and the light direction, following Lambert's cosine law.
 Diffuse lighting contributes to the perceived brightness and color of objects
and helps define their overall appearance and shape.
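
In equation form (Lambert's cosine law), with N the unit surface normal, L the unit vector toward the light source, IL the incoming light intensity and kd the diffuse reflection coefficient:

    Idiff = kd · IL · max(0, N · L)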
3. Specular Lighting:

 Specular lighting represents the reflection of light off shiny or glossy surfaces.
 It models the phenomenon of specular reflection, where light is reflected in a
concentrated and mirror-like manner.
 Specular highlights are bright spots or reflections that appear on surfaces when
light sources are reflected directly into the viewer's eye.
 The position, size, and intensity of specular highlights depend on the viewing
angle, surface orientation, and properties of the material (e.g., shininess or
specular reflectivity).
 Specular lighting enhances the visual appeal of objects by adding highlights
and conveying surface smoothness or reflectivity.
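
Putting the three components together gives the classic single-light illumination equation, I = ka·Ia + IL·(kd·(N·L) + ks·(R·V)^n). A minimal sketch in C, assuming all vectors are unit length; the parameter names follow the equation and are otherwise arbitrary:

    #include <math.h>

    typedef struct { double x, y, z; } Vec3;

    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* ka, kd, ks: ambient/diffuse/specular coefficients; n: shininess exponent.
       N: surface normal, L: toward light, R: reflection of L, V: toward viewer. */
    double illuminate(double ka, double Ia, double kd, double ks, double n,
                      double IL, Vec3 N, Vec3 L, Vec3 R, Vec3 V) {
        double diff = dot(N, L); if (diff < 0) diff = 0;  /* surface faces away */
        double spec = dot(R, V); if (spec < 0) spec = 0;  /* highlight not seen */
        return ka * Ia + IL * (kd * diff + ks * pow(spec, n));
    }

For multiple lights, the diffuse and specular terms are summed over all light sources.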

Introduction to Shading / Surface Rendering Models

Shading or surface rendering models in computer graphics determine the color and
appearance of surfaces in a 3D scene by simulating how light interacts with
the surfaces. These models aim to achieve realistic rendering by considering
factors such as lighting conditions, surface orientation, and material properties.
Three common shading models are the constant shading model, Gouraud shading
model, and Phong shading model.

Constant Shading Model:

 The constant shading model, also known as flat shading, assigns a single color
to each polygonal surface in a 3D scene.
 This model calculates the color of the surface based on the lighting conditions
at a single point on the surface, typically at the surface's vertex or centroid.
 The same color is then applied uniformly across the entire surface, regardless
of variations in surface orientation or lighting.
 Constant shading is computationally efficient but can result in visually
unrealistic renderings, especially for surfaces with varying orientations or
smooth transitions between adjacent polygons.

Gouraud Shading Model:


 The Gouraud shading model, named after computer scientist Henri
Gouraud, interpolates colors across polygonal surfaces by calculating
shading at each vertex and interpolating the colors across the surface.
 This model computes the vertex normals and colors and then interpolates these
values across the surface using linear interpolation techniques such as
barycentric interpolation (see the formula after this list).
 Gouraud shading produces smoother and more realistic renderings than
constant shading by considering variations in surface orientation and
lighting across the surface.
 However, Gouraud shading may suffer from artifacts such as color banding
or discontinuities along polygon edges, especially for surfaces with
complex geometry.
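
The interpolation step itself is simple: if I1 and I2 are the intensities computed at the two endpoints of an edge, the intensity at a point dividing the edge in the ratio t : (1 - t) is

    I = (1 - t) · I1 + t · I2,  0 ≤ t ≤ 1

and the same linear interpolation is then applied between the two edge intersections along each scanline.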

Phong Shading Model:

 The Phong shading model, developed by computer scientist Bui Tuong Phong,
calculates shading at every pixel on the surface by interpolating normals and
colors across the surface.
 This model computes the normal vectors at each vertex and interpolates these
normals across the surface to determine the shading at each pixel using the
Phong reflection model.
 Phong shading produces high-quality and smooth renderings with accurate
specular highlights and reflections by considering shading at a finer level of
detail than Gouraud shading.
 Phong shading is computationally more expensive than Gouraud shading due
to the need to calculate shading at every pixel, but it produces more visually
appealing results, especially for surfaces with complex geometry and specular
highlights.
Unit - 5
( Web Graphics Designs and Graphics Design Packages )
5.1. Introduction to graphics file formats
5.2. Principles of web graphics design – browser safe colors, size, resolution, background,
anti-aliasing
5.3. Type, purposes and features of graphics packages
5.4. Examples of graphics packages and libraries

Introduction to Graphics File formats


Graphics file formats are standardized methods for storing and encoding digital
images, graphics, and visual data. These formats define the structure and
organization of data within a file, including image resolution, color depth,
compression techniques, and metadata. Different file formats are optimized for
specific use cases, such as web graphics, print publishing, or lossless archival.
Here are explanations of some popular graphics file formats:
1. JPEG (Joint Photographic Experts Group):

 JPEG is one of the most widely used lossy compression formats for digital
images.
 It is well-suited for photographs and natural images with smooth color
gradients.
 JPEG achieves high compression ratios by discarding some image data,
leading to loss of quality (though adjustable via compression settings).
 It supports both RGB and CMYK color spaces and is commonly used for web
graphics and digital photography.

2. PNG (Portable Network Graphics):

 PNG is a lossless compression format designed as an alternative to GIF.


 It supports 24-bit RGB color and 8-bit indexed color images, as well as
transparency (alpha channel).
 PNG is suitable for images with sharp edges, text, or areas of solid color, as it
preserves image quality without introducing artifacts.
 It is widely used for web graphics, digital art, and images that require
transparent backgrounds.

3. GIF (Graphics Interchange Format):


 GIF is a lossless compression format commonly used for animated images and
graphics with limited color palettes.
 It supports 8-bit indexed color images with a maximum of 256 colors and
includes support for animation and transparency (using a single color as
transparent).
 GIF animations consist of multiple frames displayed in sequence, making it
suitable for short animations, icons, and simple graphics.

4. TIFF (Tagged Image File Format):

 TIFF is a versatile, high-quality format commonly used in professional
printing and graphic design.
 It supports various color spaces, including RGB, CMYK, grayscale, and
indexed color.
 TIFF files can be uncompressed or use lossless compression methods such as
LZW or ZIP.
 TIFF is suitable for storing high-quality images, scanned documents, and
images intended for printing or archival purposes.
5. BMP (Bitmap Image File):

 BMP is a simple, uncompressed raster image format developed by Microsoft
for Windows.
 It supports 1-bit monochrome, 4-bit, 8-bit, 16-bit, 24-bit, and 32-bit (with
alpha channel) color depths.
 BMP files are typically large in size due to lack of compression, making them
less suitable for web use but suitable for archival or high-quality printing.

6. SVG (Scalable Vector Graphics):

 SVG is an XML-based vector graphics format used for scalable and
resolution-independent images.
 It supports shapes, paths, text, gradients, and other vector graphics elements.
 SVG files can be scaled to any size without loss of quality and are ideal for
web graphics, icons, logos, and interactive graphics.

7. EPS (Encapsulated PostScript)

It is used for vector-based images. It can contain text as well as graphics. EPS
files are a common format for transferring image data between different operating
systems.
 PSD - Photoshop Document
 PDF - Portable Document Format
 AI - Adobe Illustrator Document
 INDD - Adobe In Design Document

Principles of Web graphics design

Browser Safe Colors:


 Browser-safe colors are expressed as amounts
of RED, GREEN and BLUE in hexadecimal code (HEX) and always in this
order.
 The color management system currently used by web browser software is based
on 8 bits and is limited to displaying 256 colors; the system software and
browsers reserve up to 40 colors for their own use, leaving 216 browser-safe
colors (256 - 40 = 216).
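
Concretely, the 216 browser-safe colors are exactly the combinations in which each of the R, G and B components takes one of the six hexadecimal values 00, 33, 66, 99, CC or FF (6 × 6 × 6 = 216). For example, #FF6600 is browser-safe, while #FA6712 is not.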
Font Size
 It is how large the characters displayed on a screen or printed on a page are.
 A font is often measured in points (pt); points determine the height of the
lettering.
 Font sizes commonly range from 6 pt to 84 pt. Font size can also be measured
in pixels.

Resolution

 The graphics display resolution is the width and height dimension of an
electronic visual display device, such as a computer monitor, in pixels.
Certain combinations of width and height are standardized.
 It measures the number of pixels in a digital display or image. It is defined
as width by height (W x H).

Background

 One of the major changes you will notice is the background. Today, backgrounds
are one of the core features that determine how visually interesting a
website is.
 The background holds the theme of the web graphics.

Anti- aliasing
 Anti-aliasing is the smoothing of jagged edges in digital images by
averaging the colors of the pixels at a boundary.
 It makes edges appear less jagged and helps blend colors in a natural-looking
way.
 Anti-aliasing smooths edges by estimating the colors along each edge.
o For example, a black diagonal line against a white background might be
rendered as shades of light and dark grey instead of pure black and white.
 The goal of an anti-aliasing algorithm is to make a digital image look natural
when viewed from a typical viewing distance.
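
A minimal sketch of one common approach, supersampling, in C; coverage(x, y) is an assumed helper that reports whether a sub-pixel sample position falls inside the shape being drawn:

    /* Assumed helper: returns 1 if the sample point lies inside the shape. */
    extern int coverage(double x, double y);

    /* Average a 4x4 grid of sub-pixel samples into one smoothed intensity. */
    double pixel_intensity(int px, int py) {
        int hits = 0;
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                hits += coverage(px + (i + 0.5) / 4.0, py + (j + 0.5) / 4.0);
        return hits / 16.0;   /* 0.0 = background, 1.0 = fully covered */
    }

A pixel that is half covered by a black diagonal edge thus comes out mid-grey, which is exactly the effect described above.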

Types, Purposes and Features of Graphics Packages

Graphics Packages
A graphics package is an application that can be used to create and
manipulate images on a computer.
Graphics presentations are a popular method of presenting large quantities of
data. They act as a great visual guide to the information being delivered and
offer convenience to the audience: it is easier to read a chart than to listen
to a set of numbers and try to make sense of them.
There are two main types of graphics package:
 painting packages
 drawing packages

1. Painting Packages
 A painting package produces images by changing the color of pixels on the
screen.
 These are coded as a pattern of bits to create a bitmapped graphics file.
 Bitmapped graphics are used for images such as scanned photographs or
pictures taken with a digital camera.
Advantages:
 The main advantage offered by this type of graphic is that individual pixels can be
changed which makes very detailed editing possible.

Disadvantages of painting packages


 Individual parts of an image cannot be resized;
 only the whole picture can be increased or decreased in size.
 Information has to be stored about every pixel in an image which produces
files that use large amounts of backing storage space.
Examples of graphics packages that produce bitmapped images include:- MS Paint, PC
Paintbrush, Adobe Photoshop and JASC’s Paint Shop Pro.

2. Drawing Packages

 A drawing package produces images that are made up from colored lines and shapes
such as circles, squares and rectangles.
 When an image is saved it is stored in a vector graphics file as a series of
instructions, which can be used to recreate it.
Advantages:
 They use less storage space than bitmap graphics;
 Each part of an image is treated as a separate object, which means that individual
parts can be easily modified.

Disadvantages of drawing packages


 They don’t look as realistic as bitmap graphics

Examples of drawing graphics packages include CorelDraw, Micrographix Designer and


computer aided design (CAD) packages such as AutoCAD.

Features of Graphics Packages


 Drawing straight lines and freehand lines.
 Drawing regular pre-defined shapes like squares, rectangles and circles.
 Entering text and changing the style and size of the font.
 Rotating objects either clockwise or anticlockwise by specifying the direction
and angle of rotation.
 Stretching objects either horizontally or vertically.
 A paint palette from which different colors and patterns can be chosen.
 Zoom or magnify: a feature that allows an area of the screen to be seen close
up for detailed work.
Examples of Graphics Packages and Libraries

Graphics packages and libraries are software tools used for creating, editing, and
manipulating digital images and visual elements. They cater to a wide range of
users, from casual hobbyists to professional designers and artists. Here's a
breakdown of the two categories and some popular examples:

1. Graphics Packages:

 Standalone applications with user-friendly interfaces designed for creating
and editing images.
 They offer a variety of tools and features for tasks like:
o Image editing (cropping, resizing, color correction, adding effects)
o Drawing and painting (using brushes, pens, shapes, and textures)
o Creating logos and illustrations
o Photo manipulation and editing

Examples of Graphics Packages:

 Adobe Photoshop: Industry-standard professional software offering a vast
array of features and tools for high-end image editing, photo manipulation,
and graphic design.
 GIMP: A free and open-source alternative to Photoshop with powerful editing
capabilities and a customizable interface.
 Krita: A free and open-source program specifically designed for digital
painting and illustration, offering brush engines and texture creation tools.
 PaintShop Pro: A feature-rich commercial application with a user-friendly
interface ideal for photo editing, creating graphics, and digital painting.
 Microsoft Paint: A basic built-in program on Windows operating systems for
simple image editing and creation tasks.

2. Graphics Libraries:

 Collections of pre-written code that provide functionalities for creating and
manipulating graphics within other software applications.
 Programmers can integrate these libraries into their programs to add
graphics capabilities without having to write the code from scratch.
 Common tasks supported by graphics libraries include:
o Drawing shapes and lines
o Applying transformations and rotations
o Working with textures and colors
o Creating user interfaces with visual elements

Examples of Graphics Libraries:

 OpenGL: A cross-platform graphics library widely used for creating
high-performance 2D and 3D graphics applications, games, and simulations.
 Direct3D: A graphics library developed by Microsoft specifically for creating
3D graphics applications for Windows operating systems.
 SDL (Simple DirectMedia Layer): A cross-platform library offering
functionalities for multimedia development, including graphics, audio, and
input handling.
 Cairo: An open-source 2D graphics library that allows for creating high-
quality vector graphics and text rendering.
 Skia: An open-source graphics library created by Google, used for rendering
user interfaces, vector graphics, and text across various platforms.
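
For a small taste of the library style, here is a classic fixed-function OpenGL fragment that draws a color-interpolated triangle (legacy immediate-mode API; creating the window and GL context, e.g. with GLUT, is omitted):

    #include <GL/gl.h>

    void draw_triangle(void) {
        glClear(GL_COLOR_BUFFER_BIT);
        glBegin(GL_TRIANGLES);                 /* legacy immediate mode */
            glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
            glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
            glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
        glEnd();
    }

Modern OpenGL replaces this style with buffer objects and shaders, but the fragment shows how a library exposes drawing as function calls rather than as a user interface.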
Choosing between a graphics package and a graphics library depends on
your needs:
Graphics packages: Great for users who want a user-friendly interface with
comprehensive tools for creating and editing images without needing to write
code.
Graphics libraries: Ideal for programmers who want to integrate graphics
capabilities into their own software applications and have more control over the
functionality.
Unit - 6
( Virtual Reality )
6.1. Introduction
6.2. Types of Virtual Reality
6.2.1. Non-immersive Virtual Reality
6.2.2. Semi-immersive Virtual Reality
6.2.3. Fully-immersive Virtual Reality
6.2.4. Augmented Virtual Reality
6.2.5. Collaborative Virtual Reality
6.3. Applications of Virtual Reality

What is virtual reality?


Virtual reality is a simulated 3D environment that enables users to explore and interact with
a virtual surrounding in a way that approximates reality, as it is perceived through the users'
senses. The environment is created with computer hardware and software, although users
might also need to wear devices such as helmets or goggles to interact with the environment.
The more deeply users can immerse themselves in a VR environment -- and block out their
physical surroundings -- the more they are able to suspend disbelief and accept it as
real, even if it is fantastical in nature.

VR systems can vary significantly from one to the next, depending on their
purpose and the technology used, although they generally fall into one of the
following three categories:
 Non-immersive. This type of VR typically refers to a 3D simulated
environment that's accessed through a computer screen. The environment
might also generate sound, depending on the program. The user has some
control over the virtual environment using a keyboard, mouse or other device,
but the environment does not directly interact with the user. A video game is a
good example of non-immersive VR, as is a website that enables a user to
design a room's decor.

 Semi-immersive. This type of VR offers a partial virtual experience that's
accessed through a computer screen or some type of glasses or headset. It
focuses primarily on the visual 3D aspect of virtual reality and does not
incorporate physical movement in the way that full immersion does. A
common example of semi-immersive VR is the flight simulator, which is used
by airlines and militaries to train their pilots.

 Fully immersive. This type of VR delivers the greatest level of virtual reality,
completely immersing the user in the simulated 3D world. It incorporates
sight, sound and, in some cases, touch. There have even been some
experiments with the addition of smell. Users wear special equipment such as
helmets, goggles or gloves and are able to fully interact with the environment.
The environment might also incorporate such equipment as treadmills or
stationary bicycles to provide users with the experience of moving through the
3D space. Fully immersive VR technology is a field still in its infancy, but it
has made important inroads into the gaming industry and to some extent the
healthcare industry, and it's generating a great deal of interest in others.

 Augmented reality also is sometimes referred to as a type of virtual reality,
although many would argue that it is a separate but related field. With
augmented reality, virtual simulations are overlaid onto real-world
environments in order to enhance or augment those environments. For
example, a furniture retailer might provide an app that enables users to point
their phones at a room and visualize what a new chair or table might look like
in that setting.

 Collaborative VR is sometimes cited as a type of virtual reality. In this model,
people from different locations come together in a virtual environment to
interact with one another, with each individual represented by a projected 3D
character. The users typically communicate through microphones and
headsets.
Applications of Virtual Reality

Virtual Reality (VR) has a wide range of applications across various industries
and fields, offering immersive and interactive experiences that can enhance
training, education, entertainment, communication, and more. Here is a list of
applications of Virtual Reality:

1. Gaming:

 VR gaming offers immersive experiences where players can interact with
virtual environments and characters in three-dimensional space.
 VR games provide a sense of presence and realism, allowing players to feel
as if they are part of the game world.

2. Training and Simulation:

 VR is used for training simulations in fields such as aviation, healthcare,
military, and emergency response.
 VR simulations allow trainees to practice skills and procedures in realistic
environments without real-world risks or consequences.
3. Education:

 VR is utilized in education to create immersive learning experiences that
engage students and enhance understanding of complex subjects.
 VR educational applications include virtual field trips, anatomy lessons,
historical recreations, and interactive science experiments.

4. Healthcare:

 VR is used in healthcare for medical training, surgical simulation, patient
rehabilitation, pain management, and mental health therapy.
 VR simulations enable medical professionals to practice procedures and
surgeries in a safe and controlled environment.

5. Architecture and Design:

 VR is employed in architecture and design to visualize and explore building
designs and prototypes in virtual space.
 VR allows architects, designers, and clients to walk through virtual
buildings, evaluate designs, and make informed decisions before
construction begins.

6. Manufacturing and Engineering:


 VR is used in manufacturing and engineering for product design,
prototyping, assembly simulation, and maintenance training.
 VR simulations help optimize manufacturing processes, identify design
flaws, and improve efficiency and safety in industrial settings.

7. Entertainment and Media:


 VR is used in entertainment and media for immersive storytelling, virtual
tours, live events, and interactive experiences.
 VR entertainment applications include virtual concerts, theater
performances, museum exhibitions, and cinematic experiences.
8. Tourism and Hospitality:
 VR is utilized in tourism and hospitality to provide virtual tours of
destinations, hotels, and attractions.
 VR allows travelers to explore destinations and accommodations before
booking, enhancing their travel planning experience.
9. Military and Defense:
 VR is used in military and defense for training simulations, battlefield
visualization, mission planning, and vehicle simulations.
 VR training programs help military personnel develop tactical skills,
decision-making abilities, and situational awareness in realistic scenarios.
10. Retail and E-commerce:
 VR is employed in retail and e-commerce for virtual shopping experiences,
product visualization, and showrooming.
 VR allows customers to browse and interact with products in virtual stores,
enhancing their shopping experience and increasing engagement.
