
CGR Notes


INTRODUCTION

Image

• An image is a representation of a real-world object on a computer. It can be an actual picture on a display, a page stored in video memory, or output generated by a program.
• Mathematically, an image is a two-dimensional array of data with an intensity or colour value at each element of the array. In simple terms, an image is defined by its pixels.

Object

• In computer graphics and rendering (CGR), an object refers to a 3D model or shape that is rendered on the screen. Objects can be simple or complex shapes, such as a cube, a sphere, or a human figure.

Pixel

• Pixel stands for picture element. A pixel is the smallest piece of information in an image.
• In computer graphics, objects are represented as collections of pixels, where a pixel is the smallest addressable point that can be displayed on the screen.
• A pixel is displayed on the screen by setting its intensity and colour.

Resolution

• It is expressed in terms of the number of pixels on the horizontal axis and the number on the vertical axis.
• In pixel resolution, the term resolution refers to the total number of pixels in a digital image.
• For example, if an image has M rows and N columns, its resolution can be defined as M × N.

Frame Buffer:

• In computer graphics, a frame buffer is a memory buffer that stores the color
values for each pixel on the screen.
• The frame buffer is used to hold the final image that will be displayed on the screen,
and it is typically implemented as a 2D array of color values, with one element for
each pixel on the screen.
Text Mode

• In text mode, the display screen is divided into rows and columns of boxes, each of which can contain one character.
• All video standards support a text mode that divides the screen into 25 rows and 80 columns.
• Text mode is also known as character mode or alphanumeric mode.

Text mode graphics function:

1. window ( ): This function specifies a text window on the screen.
Syntax: window (left, top, right, bottom);
2. putch ( ): It displays a single character at the cursor position.
Syntax: putch (char);
3. clrscr ( ): It clears the entire screen and moves the cursor to the top-left corner of the screen.
Syntax: clrscr ( );
4. gotoxy ( ): It positions the cursor at the specified location on the screen.
Syntax: gotoxy (x, y);
5. puts ( ): It displays a string at the cursor position.
Syntax: puts (s1);
6. textcolor ( ): It sets the colour for the text.
Syntax: textcolor (color);

Graphics Mode:

• Graphic mode is a computer display mode that displays images using pixels.
• In graphic mode, the display screen is treated as an array of pixels.
• Programs that run in graphics mode can display an unlimited variety of shapes and
fonts.

Graphics Mode Functions:

1. initgraph ( ) – It is used to initialize graphics mode.
Syntax: initgraph (int *graphdriver, int *graphmode, char *pathtodriver);
2. closegraph ( ) – It is used to close graphics mode.
Syntax: closegraph ( );
BASIC GRAPHICS PIPELINE

The graphics pipeline, also known as the rendering pipeline, is the series of steps that a
computer graphics system uses to convert a 3D scene into a 2D image that can be
displayed on a screen. The basic graphics pipeline consists of the following stages:

1. Modeling: In this stage, the 3D scene is created and defined by specifying the
positions and properties of the objects in the scene. This includes the geometric
shapes of the objects, their location in 3D space, and their surface properties, such
as color, texture, and lighting.
2. Viewing: In this stage, the 3D scene is transformed into a 2D image by projecting it
onto an image plane. This includes defining the camera position and orientation in
the scene, and the field of view of the camera.
3. Clipping: In this stage, the parts of the 3D scene that are outside the viewable area
are discarded. This is done by comparing the positions of the objects in the scene to
the boundaries of the viewable area, and discarding any that are outside the
boundaries.
4. Rasterization: In this stage, the 2D image is created by converting the 3D scene into
a set of pixels. This includes determining the color and depth values of each pixel,
and using these values to create the final image.
5. Display: In this stage, the final 2D image is displayed on the screen. This includes
transferring the image from the frame buffer to the screen and adjusting the
properties of the image, such as brightness and contrast.

The graphics pipeline is a complex process that involves many calculations and transformations, and it can be optimized in many ways to achieve better performance and image quality. Modern graphics pipelines can include additional stages such as lighting, shading, and post-processing, which are used to make the images look more realistic and add more detail.
BITMAP AND VECTOR GRAPHICS

Bitmap Graphics (Raster Graphics) *2

• Bitmap graphics, also known as raster graphics, is a type of digital image representation that uses a grid of pixels to store the color value for each point in the image.
• In a bitmap image, each pixel is assigned a specific color value, and the image is formed by the combination of these pixels.
• The resolution of a bitmap image is determined by the number of pixels in the image, and the size of the image can be changed by adding or removing pixels.

Vector-Based Graphics:

• Images in vector graphics are mathematically based. Unlike bitmaps, vector images are not based on pixel patterns; instead they use mathematical formulas to draw the lines and curves that combine to form an image.
• Vector-based images have smooth edges and are therefore used to create curves and shapes. Vector images are edited by manipulating the lines and curves that make up the image using a program.

DISPLAY DEVICES

1. CRT

• A cathode ray tube (CRT) is a type of display technology that was commonly used in
older televisions and computer monitors.
• It works by emitting a beam of electrons from a cathode, which is then focused and
directed by an electron gun onto a phosphor-coated screen.
• The phosphors on the screen emit light when struck by the electron beam, creating
the image that is displayed on the screen.

A CRT display consists of three main components:

1. The electron gun, which generates the electron beam and focuses it onto the
screen.
2. The phosphor-coated screen, which emits light when struck by the electron
beam, creating the image that is displayed on the screen.
3. The deflection system, which is responsible for steering the electron beam
across the screen, allowing it to create the entire image.
• CRT displays have some advantages over newer display technologies such as LCD and OLED: good colour depth, wide viewing angles and fast response times.
• However, they also have disadvantages: they are large and heavy, and consume more power than newer technologies. They also suffer from issues such as screen flicker and geometric distortion.
• CRT displays have been mostly phased out by newer technologies and are not commonly used anymore.

2. Raster Scan Display *1

• A raster scan display is a type of display technology that is used in most modern
televisions and computer monitors.
• It works by emitting a beam of electrons or other particles, which is then scanned
horizontally across the screen, creating a pattern of lines called a raster.
• As the beam moves across the screen, it illuminates phosphors or other types of
light-emitting elements, creating the image that is displayed on the screen.

A raster scan display consists of several main components:

1. The electron gun or other source of the particle beam, which generates the
beam and focuses it onto the screen.
2. The phosphor-coated screen or other light-emitting elements, which emit light
when struck by the beam, creating the image that is displayed on the screen.
3. The deflection system, which is responsible for steering the beam across the
screen, allowing it to create the entire image.
4. The synchronization circuit, which is used to synchronize the beam with the
scanning process and the image data.
• Raster scan displays are widely used because they are relatively inexpensive to
produce and can display high-resolution images.
• They can be used in a variety of applications, such as televisions, computer
monitors, and other types of displays.
• However, they also have some limitations, such as a limited viewing angle and a fixed resolution, as well as issues like screen flicker and distortion.
3. Random Scan Display *2

• A random scan display, also known as vector display, is a type of display technology
that is used in some older computer graphics systems and arcade games.
• Unlike raster scan displays, which scan the entire screen in a regular pattern,
random scan displays only draw the parts of the image that are changing.
• This allows them to use less power and produce less heat, but it also means that
they can only display simple graphics and animations.

A random scan display consists of several main components:

1. The electron gun or other source of the particle beam, which generates the
beam and focuses it onto the screen.
2. The phosphor-coated screen or other light-emitting elements, which emit light
when struck by the beam, creating the image that is displayed on the screen.
3. The deflection system, which is responsible for steering the beam across the
screen, allowing it to create the image.
4. The refresh memory, which stores the image data and the beam positioning
information.
• In random scan displays, the image is drawn by specifying the endpoints of the lines
that make up the image and the display system then draws the lines by moving the
electron beam from one endpoint to the other.
• This way, the display only updates the parts of the image that are changing, making
it more efficient.
• Random scan displays have been mostly phased out by newer technologies and are
not commonly used anymore, but they were popular for applications where high-
resolution images were not needed and power consumption was an important
factor.
Difference between raster and random scan display *3

Electron beam
• Raster scan: The electron beam is swept across the screen, one row at a time, from top to bottom.
• Random scan: The electron beam is directed only to the parts of the screen where a picture is to be drawn.

Resolution
• Raster scan: Resolution is poorer (lines are plotted as discrete point sets, producing zigzag edges).
• Random scan: Resolution is good (the CRT beam directly follows the line path).

Picture definition
• Raster scan: Picture definition is stored as a set of intensity values for all screen points (pixels) in a refresh buffer area.
• Random scan: Picture definition is stored as a set of line-drawing instructions in a display file.

Realistic display
• Raster scan: The ability to store an intensity value for each pixel makes it well suited for the realistic display of scenes containing shadows and colour patterns.
• Random scan: These systems are designed for line drawing and cannot display realistic shaded scenes.

Drawing an image
• Raster scan: Pixels are used to draw the image.
• Random scan: Mathematical functions are used to draw the image.

4. Flat Panel Display

• The term flat panel display refers to a class of video devices that have reduced weight, volume and power requirements as compared to CRTs.
• One such display type uses thread-like liquid crystal compounds that tend to keep the long axes of their rod-shaped molecules aligned.

4.1 Plasma Panels

• Plasma panels are a type of display technology that uses small cells filled with a
mixture of gases, such as neon and xenon, to produce images on a screen.
• Each cell, or pixel, is made up of three sub-pixels, one red, one blue, and one green.
These sub-pixels are excited by an electrical current to produce light and create the
image.

A plasma display panel (PDP) consists of several main components:

1. The front glass panel, which is the screen that the image is displayed on.
2. The discharge cells, which are the small cells filled with gases that produce the
light for the image.
3. The electrodes, which are used to excite the gases in the cells and create the
light for the image.
4. The address electrodes, which are used to control the electrical current that is
applied to the cells and create the image.
5. The driving circuit, which is used to control the electrical current and create the
image.
• Plasma panels have some advantages over other display technologies, such as
deep blacks and a wide viewing angle.
• They are also capable of producing high-quality images with rich colors and fast
response times.
• However, they also have some disadvantages, such as a relatively high power
consumption, image burn-in, and a shorter lifespan than LCD and LED displays.
• Plasma panels have been mostly phased out by newer technologies and are not
commonly used anymore.

4.2 LED

• An LED is a semiconductor that illuminates when an electrical charge passes through it.
• In this display, a matrix of multi-colour light-emitting diodes is arranged to form the pixel positions, and the picture definition is stored in a refresh buffer.
• A light-emitting diode is made up of a semiconductor chip surrounded by a transparent plastic case, which allows the light to pass through it.
• The emission of different colours, including ultraviolet and infrared light, depends on the semiconductor material used in the diode.

4.3 LCD

• LCD is a flat panel display, electronic visual display, or video display that uses the
light modulating properties of liquid crystals.
• This display uses nematic (thread like) liquid crystal compounds that tend to keep
the long axes of rod-shaped molecules aligned. These nematic compounds have
crystalline arrangement of molecules, yet they flow like a liquid and hence termed
as Liquid Crystal Display.
5. Touch screen

• It is an input technology. A touch screen is a computer display screen that is sensitive to human touch; it allows the user to interact with a computer by using their finger.
• Touch screens are used on a variety of devices such as computer and laptop monitors, smartphones, tablets, etc.

OUTPUT PRIMITIVES

Graphics primitives are the functions that we use to draw the actual lines and characters that make up a picture. These functions provide a convenient method for the application programmer to describe pictures.

1. Point: Plots a single pixel on the screen.
2. Line: Draws a straight line.
3. Text: Draws a string of characters.
4. Polygon: A set of line segments joined end to end.
5. Marker: Draws a specified symbol at a given coordinate position, in different colours and sizes.

AUGMENTED REALITY (MIXED REALITY) *1

• Augmented reality (AR) is the integration of digital information with the user's
environment in real time.
• Unlike virtual reality (VR), which creates a totally artificial environment, AR users
experience a real-world environment with generated perceptual information
overlaid on top of it.
• Augmented reality is used to either visually change natural environments in some
way or to provide additional information to users.
• The primary benefit of AR is that it manages to blend digital and three-dimensional
(3D) components with an individual's perception of the real world.
• AR has a variety of uses, from helping in decision-making to entertainment.
APPLICATIONS OF CGR *1

1. Education and Training: Computer-generated models of physical, financial and economic systems are often used as educational aids. Models of physical systems, physiological systems, population trends or equipment can help trainees understand the operation of the system.
2. Use in Biology: Molecular biologists can display pictures of molecules and gain insight into their structure with the help of computer graphics.
3. Computer-Generated Maps: Town planners and transportation engineers can use computer-generated maps which display data useful to them in their planning work.
4. Architecture: Architects can explore alternative solutions to design problems at an interactive graphics terminal. In this way, they can test many more solutions than would be possible without the computer.
5. Presentation Graphics: Example of presentation Graphics are bar charts, line
graphs, pie charts and other displays showing relationships between multiple
parameters. Presentation Graphics is commonly used to summarize
1. Financial Reports
2. Statistical Reports
3. Mathematical Reports
4. Scientific Reports
5. Economic Data for research reports
6. Managerial Reports
7. Consumer Information Bulletins
8. And other types of reports
6. Computer Art: Computer graphics is also used in the field of commercial art. It is used to generate television and advertising commercials.
7. Entertainment: Computer graphics is now commonly used in making motion pictures, music videos and television shows.
8. Visualization: It is used by scientists, engineers, medical personnel and business analysts for the study of large amounts of information.
9. Educational Software: Computer graphics is used in the development of educational software for computer-aided instruction.
10. Printing Technology: Computer graphics is used in printing technology and textile design.
GRAPHIC STANDARDS

1. CORE (Core of a Graphics System)
2. GKS (Graphics Kernel System)
3. IGES (Initial Graphics Exchange Specification)
4. PHIGS (Programmer's Hierarchical Interactive Graphics System)
5. CGM (Computer Graphics Metafile)
6. CGI (Computer Graphics Interface)

VIRTUAL REALITY *1

• In computer graphics and rendering (CGR), virtual reality (VR) refers to the use of
computer technology to create a simulated, three-dimensional environment that
can be interacted with in a seemingly real or physical way by a person using a
device.
• It can be used to immerse the user in a computer-generated environment, allowing
them to interact with and explore it in a realistic way.

Components of Virtual Reality System:

1. Virtual world: A virtual world is a three-dimensional environment that is often, but not necessarily, realized through a medium in which one can interact with others. Two common types of immersion are:
a. Mental immersion: A deep mental state of engagement, with suspension of disbelief that one is in a virtual environment.
b. Physical immersion: Bodily engagement with a virtual environment, with suspension of disbelief that one is in a virtual environment.
2. Virtual Reality Engine: The main component of any VR system; it consists of a simulation engine and a graphics engine.
3. Interactivity: The virtual environment responds to interaction, including the way a participant moves around.
4. Sensory feedback: Virtual reality engages the senses, including vision, hearing, touch and more.
DIGITAL DIFFERENTIAL ALGORITHM (DDA) *1

The Digital Differential Analyzer helps us to interpolate the variables on an interval from
one point to another point. We can use the digital Differential Analyzer algorithm to
perform rasterization on polygons, lines, and triangles.

The Digital Differential Analyzer algorithm is also known as an incremental method of scan conversion. In this algorithm, the calculation is performed in a step-by-step manner, using the result of the previous step in the next step.

Algorithm:

Step 1: Read the end points of the line.
Step 2: Δx = abs (x2 – x1) and Δy = abs (y2 – y1)
Step 3: If Δx > Δy
            length = Δx
        Else
            length = Δy
Step 4: Δx = (x2 – x1) / length and Δy = (y2 – y1) / length
Step 5: x = x1 + 0.5 * sign(Δx)
        y = y1 + 0.5 * sign(Δy)
Step 6: i = 1
        while (i <= length) {
            plot (integer(x), integer(y))
            x = x + Δx
            y = y + Δy
            i = i + 1
        }
Step 7: End
Program: *1

#include <stdio.h>
#include <graphics.h>
#include <math.h>

int main() {
    float x, y, x1, y1, x2, y2, dx, dy, length;
    int gd, gm, i;
    printf("Enter coordinates of x1 and y1: \n");
    scanf("%f%f", &x1, &y1);
    printf("Enter coordinates of x2 and y2: \n");
    scanf("%f%f", &x2, &y2);

    detectgraph(&gd, &gm);
    initgraph(&gd, &gm, "");

    dx = fabs(x2 - x1);              /* fabs, not abs, for floats */
    dy = fabs(y2 - y1);
    if (dx >= dy)
        length = dx;
    else
        length = dy;

    dx = (x2 - x1) / length;         /* per-step increments */
    dy = (y2 - y1) / length;
    x = x1 + 0.5;                    /* 0.5 offset so truncation rounds */
    y = y1 + 0.5;
    i = 1;
    while (i <= length) {
        putpixel((int)x, (int)y, WHITE);   /* truncate to pixel position */
        x = x + dx;
        y = y + dy;
        i++;
    }

    getch();
    closegraph();
    return 0;
}
Advantages of DDA Algorithm: *1

• It is a simple algorithm to implement.
• It is faster than directly evaluating the line equation.
• It avoids multiplication: each new point is obtained by incremental addition.
• The Digital Differential Analyzer algorithm indicates overflow when the point changes its location.

Disadvantages of DDA Algorithm:

• The floating-point arithmetic used in the Digital Differential Analyzer implementation is time-consuming.
• The rounding-off operations are also time-consuming.
• Sometimes the plotted point positions are not accurate.

BRESENHAM’S LINE DRAWING ALGORITHM *1

This algorithm was introduced by Jack Elton Bresenham in 1962. It helps us perform scan conversion of a line, and it is a powerful, useful, and accurate method. We use incremental integer calculations to draw the line; the only operations required are integer addition and subtraction.

In Bresenham’s Line Drawing algorithm, we have to calculate the slope (m) between the
starting point and the ending point.

Algorithm:

Step 1: Read the line end points.
Step 2: Δx = |x2 – x1| and Δy = |y2 – y1|
Step 3: Initialize the starting point of the line: x = x1, y = y1
Step 4: Plot (x, y), i.e., plot the first point.
Step 5: Obtain the initial value of the decision parameter Pk as
        Pk = 2Δy – Δx
Step 6: If Pk < 0 (initially Pk = P0) {
            x = x + 1
            y = y
            Pk = Pk + 2Δy
        }
        if Pk >= 0 {
            x = x + 1
            y = y + 1
            Pk = Pk + 2Δy – 2Δx
        }
        Plot (x, y)
Step 7: Repeat Step 6 Δx times.
Step 8: Stop

Advantages:

• It is simple to implement because it only contains integers.


• It is quick and incremental
• It is fast to apply but not faster than the Digital Differential Analyzer (DDA)
algorithm.
• The pointing accuracy is higher than the DDA algorithm.

Disadvantages:

• Bresenham’s line drawing algorithm only helps to draw basic lines.
• The resulting line is not smooth.

BRESENHAM’S CIRCLE GENERATING ALGORITHM

Bresenham’s algorithm is also used for circle drawing. It is known as Bresenham’s circle
drawing algorithm. It helps us to draw a circle. The circle generation is more complicated
than drawing a line.

In this algorithm, we select the closest pixel position to complete the arc, since a continuous arc cannot be represented on a raster display system.

The distinctive feature of this algorithm is that it uses only integer arithmetic, so the calculations can be performed faster than in algorithms that use floating-point arithmetic.
Algorithm:

Step 1: Read the radius (r) of the circle.
Step 2: Calculate the initial decision variable: Pi = 3 - 2r
Step 3: x = 0 and y = r
Step 4: If (Pi < 0) {
            x = x + 1
            Pi = Pi + 4x + 6
        } else {
            x = x + 1
            y = y - 1
            Pi = Pi + 4(x - y) + 10
        }
Step 5: Plot pixels in all octants:
        Plot (x, y)
        Plot (y, x)
        Plot (-y, x)
        Plot (-x, y)
        Plot (-x, -y)
        Plot (-y, -x)
        Plot (y, -x)
        Plot (x, -y)
Step 6: Repeat Steps 4 and 5 while x <= y.
Step 7: Stop

Advantages of Bresenham’s Circle Drawing Algorithm:

• It is simple and easy to implement.
• The algorithm is based on the simple circle equation x² + y² = r².

Disadvantages of Bresenham’s Circle Drawing Algorithm:

• The plotted points are less accurate than those of the midpoint circle drawing algorithm.
• It is not well suited for complex, high-graphics images.
POLYGONS

• A closed plane figure made up of several line segments that are joined together is
called a polygon.
• In polygons, the sides do not cross each other. Exactly two sides meet at every
vertex.

Features of polygons:

• Ordered set of vertices.
• Usually counter-clockwise.
• Two consecutive vertices define an edge.
• The left side of the edge is inside.
• The right side of the edge is outside.
• The last vertex implicitly connects to the first.

Types of Polygons: *1

1. Convex Polygon: A polygon in which the line segment joining any two points within the polygon lies completely inside the polygon is called a convex polygon.
2. Concave Polygon: A polygon in which the line segment joining two points within the polygon may or may not lie completely inside the polygon is called a concave polygon.

INSIDE-OUTSIDE TEST *1

• In a standard polygon, such as a triangle or rectangle, the component edges are joined only at the vertices; the edges share no other common point in the plane and are non-intersecting.
• But some graphics applications produce polygon shapes with intersecting edges.
• For such a shape, finding out whether a particular point is inside or outside the polygon is difficult and needs rules to follow.
• Basically, there are two tests to find out if a point is inside or outside of polygon:
1. Odd Even Rule: This rule determines the point on the canvas by drawing a ray
from that point to infinity in any direction and counting the number of path
segments from the given shape that the ray crosses. If this number is odd,
the point is inside; if even, the point is outside.
2. Non-zero Winding Number Rule: Conceptually, to check a point P, construct a line segment from P to a point on the boundary. Treat the line segment as an elastic band pinned at P. Stretch the other end of the elastic around the polygon for one complete cycle. Count how many times the elastic has been wound around point P. If the count is non-zero, the point lies inside the polygon; otherwise, it lies outside.

SCAN LINE ALGORITHM *1

• The scan line algorithm is a method for rendering images in computer graphics, specifically for filling polygons.
• The scan-line algorithm works by scanning the image from top to bottom, one row of pixels at a time, and determining which pixels are inside the polygon and which are outside.

The basic steps of scan line algorithm are:

1. Sort the polygon's edges by their y-coordinates.
2. Initialize an empty list of active edges, which will store the edges that are currently being scanned.
3. For each row of pixels, starting from the top of the image:
• Remove any edges that are no longer active from the list of active edges.
• Add any new edges that start at this row to the list of active edges.
• Sort the active edges by their x-coordinates.
• Fill in the pixels between the x-coordinates of successive pairs of active edges.
4. Repeat step 3 for each row of pixels until the entire image has been scanned.
• The scan line algorithm is an efficient method for filling polygons, as it only needs to consider the edges of the polygon that intersect the current scan line.
• The scan line algorithm is widely used in computer graphics, in applications such as polygon filling, hidden surface removal and texture mapping, because of its efficiency and simplicity.
BOUNDARY FILL ALGORITHM *1

• The basic idea of polygon filling: start from any arbitrary point inside the polygon and set it to the fill colour.
• Examine the neighbouring pixels of the seed pixel to check whether a boundary pixel has been reached.
• If a boundary pixel has not been reached, set the fill colour on the pixel and continue the process until the boundary pixels are reached.

There are two subtypes of boundary fill algorithm:

1. Four-connected method: Here, four neighbouring points of the current test point or seed point are tested. The pixel positions are to the right, left, above and below the current pixel. This process continues until we find a boundary with a different colour.
2. Eight-connected method: Here, eight neighbouring points of the current test point or seed point are tested. The pixel positions are to the right, left, above, below, and the four diagonal pixels of the current pixel. This process continues until we find a boundary with a different colour.

Limitations of Boundary Fill Algorithm:

• It may not fill regions correctly if some interior pixels are already displayed in the fill colour.
• In the 4-connected approach there is a problem: sometimes it does not fill a corner pixel, as it checks only the adjacent positions of the given pixel.

Program:

#include <stdio.h>
#include <graphics.h>

void boundaryFill4(int x, int y, int fill_color, int boundary_color) {
    if (getpixel(x, y) != boundary_color && getpixel(x, y) != fill_color) {
        putpixel(x, y, fill_color);
        boundaryFill4(x + 1, y, fill_color, boundary_color);
        boundaryFill4(x, y + 1, fill_color, boundary_color);
        boundaryFill4(x - 1, y, fill_color, boundary_color);
        boundaryFill4(x, y - 1, fill_color, boundary_color);
    }
}

int main() {
    int gd = DETECT, gm;
    initgraph(&gd, &gm, "");
    int x = 250, y = 200, radius = 50;
    circle(x, y, radius);
    boundaryFill4(x, y, 6, 15);
    delay(10000);
    getch();
    closegraph();
    return 0;
}

FLOOD FILL ALGORITHM *1

• This is a modified form of the boundary fill algorithm. The basic concept is the same as boundary fill: select any seed point and start colouring towards the borders.
• But here, where the boundary of a polygon is defined by several different colour regions, we can paint such areas by replacing a specified interior colour with the fill colour, instead of searching for a boundary colour.

Algorithm:

Step 1: Start.
Step 2: Read any seed pixel position (x, y).
Step 3: Check whether this pixel (x, y) has the old interior colour. If so, set it to the new fill colour.
Step 4: Repeat Step 3 for the 4 neighbouring pixels of (x, y).
Step 5: In Steps 3 and 4, if pixel (x, y) does not have the old interior colour, jump to Step 6.
Step 6: Stop.
STROKE METHOD

• In this method, we use small line segments to generate a character. The small
series of line segments are drawn like a stroke of a pen to form a character.
• We can build our own stroke method by calling a line drawing algorithm. Here, it is
necessary to decide which line segments are needed for each character and then
draw these segments using line drawing algorithms.
• This method also supports scaling of the character.

STARBUST METHOD

• In this method, a fixed pattern of line segments is used to generate characters.
• As shown in the figure, there are 24 line segments. Out of these 24, the segments required to display a particular character are highlighted.
BITMAP METHOD

This is also known as dot matrix because in this method characters are represented by an
array of dots in the matrix form. It’s a two-dimensional array having columns and rows.
2D TRANSFORMATIONS

Translation:

• It is the process of repositioning an object along a straight-line path from one coordinate location to a new coordinate location.
• To translate a 2D point, add the translation distances tx and ty along the x-axis and y-axis respectively to the original coordinate position:
x’ = x + tx
y’ = y + ty   where,
(x, y) = Original coordinates of the point
(x’, y’) = New coordinates of the point
(tx, ty) = Translation distances in the X and Y directions
[x’]   [1  0  tx] [x]
[y’] = [0  1  ty] [y]
[1 ]   [0  0  1 ] [1]

Scaling:

• Scaling changes the size of an object along the x-axis, the y-axis, or both.
• The new transformed coordinates of a point (x, y) are calculated by multiplying it by the scaling factors (sx, sy) in the x and y directions respectively:
x’ = x * sx
y’ = y * sy   where,
(x, y) = Original coordinates of the point
(x’, y’) = New coordinates of the point
(sx, sy) = Scaling factors in the X and Y directions

Case 1: If sx = sy = 1, scaling does not change the size of the object.
Case 2: If sx = sy > 1, scaling stretches the object, increasing its size.
Case 3: If sx = sy < 1, scaling shrinks the object, decreasing its size.
Case 4: If sx = sy, the scaling is uniform: the object's size changes but its shape is preserved.
Case 5: If sx ≠ sy, scaling changes the object's size as well as its shape.

[x’]   [sx  0  0] [x]
[y’] = [0  sy  0] [y]
[1 ]   [0   0  1] [1]

Rotation:

• Rotation of an object is repositioning it along a circular path in the xy-plane.
• We need to specify the rotation angle θ and the rotation point about which the object is to be rotated. For rotation about the origin:
x’ = x.cosθ - y.sinθ
y’ = x.sinθ + y.cosθ
Positive angle = counter-clockwise rotation
Negative angle = clockwise rotation

In matrix form (homogeneous coordinates):

[x’]   [cosθ -sinθ 0] [x]
[y’] = [sinθ  cosθ 0] [y]
[1 ]   [0     0    1] [1]
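The rotation equations can be sketched directly. Note that they rotate about the origin; rotating about a pivot (xr, yr) would translate to the origin first, rotate, and translate back. The function name and sample point below are illustrative:

```python
import math

def rotate(x, y, theta):
    """Rotate (x, y) about the origin by theta radians (positive = CCW)."""
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

# Rotating (1, 0) by 90 degrees counter-clockwise lands (up to rounding) on (0, 1).
xp, yp = rotate(1, 0, math.pi / 2)
```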

Reflection:

• It is a transformation that produces a mirror image of an object with respect to an
axis of reflection. We can choose an axis of reflection in the xy plane or
perpendicular to the xy plane.

For example, reflection about the x-axis (x’ = x, y’ = -y) uses the matrix:

[1  0 0]
[0 -1 0]
[0  0 1]
Shearing:

• A transformation that slants the shape of an object is called a shear transformation.


Types of shear transformation are:
1. X-shear: It shifts x-coordinates and preserves y-coordinates. Therefore, the
object gets tilted towards either right or left.
2. Y-shear: It shifts y-coordinates and preserves x-coordinates. Therefore, the
object gets tilted towards either up or down.

In matrix form, X-shear (x’ = x + shx·y) and Y-shear (y’ = y + shy·x) are:

X-shear:          Y-shear:
[1 shx 0]         [1   0 0]
[0  1  0]         [shy 1 0]
[0  0  1]         [0   0 1]
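Both shears can be sketched as one-line functions (the parameter names shx and shy follow the notes; the sample points are arbitrary):

```python
def x_shear(x, y, shx):
    """X-shear: x' = x + shx*y, y' = y (tilts the object left or right)."""
    return (x + shx * y, y)

def y_shear(x, y, shy):
    """Y-shear: x' = x, y' = y + shy*x (tilts the object up or down)."""
    return (x, y + shy * x)

print(x_shear(2, 3, 1))  # (5, 3)
print(y_shear(2, 3, 2))  # (2, 7)
```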

3D TRANSFORMATIONS

Translation:

• It is the process of repositioning an object from one coordinate location to a new
coordinate location.
• To translate a 3D point, one needs to add translation distance tx, ty, tz along the x-
axis, y-axis and z-axis respectively to the original coordinate position as,
x’ = x + tx
y’ = y + ty
z’ = z + tz where,
(x, y, z) = Original coordinate of a point
(x’, y’, z’) = New coordinate of a point
(tx, ty, tz) = Translation distance in X, Y and Z direction

Scaling:

• Scaling changes the size of an object along the x, y and z axes, individually or together.


• New transformed coordinates of a point (x, y, z) can be calculated by multiplying it
with scaling factors (sx, sy, sz) in x, y and z direction respectively.
x’ = x * sx
y’ = y * sy
z’ = z * sz where,
(x, y, z) = Original coordinate of a point
(x’, y’, z’) = New coordinate of a point
(sx, sy, sz) = Scaling factor in X, Y and Z direction

Rotation:

• In 3D, rotation is performed about an axis rather than a point. The equations
below rotate a point about the z-axis; analogous equations exist for rotation
about the x-axis and y-axis.
• We need to specify the rotation angle θ and the axis about which the object is
to be rotated.
x’ = x.cosθ - y.sinθ
y’ = x.sinθ + y.cosθ
z’ = z
Positive angle = Counter-clockwise
Negative angle = Clockwise

PROJECTION

Types of Projection:

• In simple words, transforming 3D points into 2D points is called projection.


• It can also be said that projection transforms a 3D object onto a 2D plane like
screen, canvas, paper etc.
• Projection is a process of representing a 3D object onto a screen. It is basically a
mapping of any point P(x, y, z) to its image P’(x’, y’, z’) onto a plane called a
projection plane.

There are two types of projections:

1. Parallel Projection
2. Perspective Projection

Parallel Projection:

• In this projection, the lines of projection are parallel, both in reality and in the
projection plane.
• Parallel projection discards z-coordinates and parallel lines from each vertex on
the object are extended until they intersect the view plane.

There are two types of parallel projection:

1. Orthographic Projection: The projection utilizes perpendicular projectors from the
object to the plane of projection to generate a system of drawing views. These
projections are used to describe the design and the features of an object. It is
widely used in engineering and architectural drawing. There are three types of
orthographic projection:
a. Top View
b. Front View
c. Side View
2. Oblique Projection: In this projection, the direction of projection is not perpendicular
to the projection plane. In oblique projection, we can view the object better
than in orthographic projection.

Perspective Projection: In this projection, the distance from the centre of projection to the
projected plane is finite and the size of the object varies inversely with distance which
looks more realistic.

Difference between parallel and perspective projection:

1. In parallel projection, the centre of projection is at infinity; in perspective
projection, the centre of projection is at a finite distance.
2. In parallel projection, all projectors are parallel to each other; in perspective
projection, the projectors are not parallel.
3. Parallel projection gives a less realistic view; perspective projection is more
realistic.
4. Parallel projection can be used for applications where exact measurement is
required; perspective projection resembles our photographic systems and the
human eye.
5. Parallel projections are linear transforms implemented with a matrix;
perspective projections are non-linear transforms.
6. Example of parallel projection: drawing schematic diagrams. Example of
perspective projection: architectural rendering of realistic views.
WINDOWING AND CLIPPING

Windowing:

• The process of selecting and viewing the picture with a different view is called
windowing.

Clipping:

• The process which divides each element of the picture into its visible and invisible
portions and allows the invisible portion to be discarded is called clipping.

Viewing Transformation:

• In general, the process of mapping pictures from the World Coordinate System to
Physical Device Coordinate System is called viewing transformation. Viewing
transformation consists of two parts:
1. Normalization Transformation: It maps the world coordinate system to the
normalized device coordinate system.
2. Workstation Transformation: It maps this normalized device coordinate
system to the physical device coordinate system.

WINDOW TO VIEWPORT TRANSFORMATION

In general, the process of mapping a picture from the world coordinate system to the
physical device coordinate system is called window to viewport transformation.

1. Window: A world coordinate area selected for display is called a window.


2. Viewport: An area on a display device to which a window is mapped is called a
viewport.

Generally, windows and viewports are rectangles in standard position, with the
rectangle edges parallel to the coordinate axes.
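The window-to-viewport mapping scales and offsets each coordinate so that the window rectangle lands on the viewport rectangle. A minimal sketch, with rectangles passed as (xmin, ymin, xmax, ymax) tuples (a convention of mine, not from the notes):

```python
def window_to_viewport(xw, yw, window, viewport):
    """Map a world point (xw, yw) from the window to the viewport.
    Both rectangles are given as (xmin, ymin, xmax, ymax)."""
    wx0, wy0, wx1, wy1 = window
    vx0, vy0, vx1, vy1 = viewport
    sx = (vx1 - vx0) / (wx1 - wx0)   # horizontal scale factor
    sy = (vy1 - vy0) / (wy1 - wy0)   # vertical scale factor
    return (vx0 + (xw - wx0) * sx, vy0 + (yw - wy0) * sy)

# Centre of a 10x10 window maps to the centre of a 200x100 viewport.
print(window_to_viewport(5, 5, (0, 0, 10, 10), (0, 0, 200, 100)))  # (100.0, 50.0)
```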
POINT CLIPPING

• In point clipping, if a specified point is inside the clipping window, then it is accepted
and displayed on the screen.
• If the specified point is outside the clipping window, then it is rejected and not
displayed on the screen.
• Therefore, we need to check if the point (x, y) is inside or outside the clipping
window using the following condition.
Xmin <= X <= Xmax
Ymin <= Y <= Ymax
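The condition above translates directly into code. A one-function sketch:

```python
def clip_point(x, y, xmin, ymin, xmax, ymax):
    """Accept the point only if it lies inside the clipping window."""
    return xmin <= x <= xmax and ymin <= y <= ymax

print(clip_point(5, 5, 0, 0, 10, 10))   # True
print(clip_point(11, 5, 0, 0, 10, 10))  # False
```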

LINE CLIPPING

Lines intersecting a rectangular clip region are always clipped to a single line segment.
Line segments inside the clipping window are displayed and line segment outside clipping
window are discarded.

There are three cases:

1. Visible (Completely inside the window): Here, both the endpoints of line lie within
the window.
2. Not visible (Completely outside the window): Here, both the endpoints of line lie
outside the window.
3. Clipping Candidate (Intersects the boundaries of a window): Here, the line
intersects clipping window boundaries.
COHEN-SUTHERLAND LINE CLIPPING ALGORITHM

The Cohen-Sutherland line clipping algorithm is a method for clipping a line segment to a
rectangular window in a two-dimensional space.

The algorithm divides the window into 9 regions, and by testing a line's endpoint against
these regions, it can quickly determine whether a line is inside, outside, or partially inside
the window.

This algorithm was developed by Danny Cohen and Ivan Sutherland.

The algorithm can be broken down into the following steps:

1. Divide the plane into 9 regions using the left, right, top, and bottom edges of the
window. Each region is identified by a 4-bit code called an "outcode".
2. For each endpoint of the line segment, calculate its outcode by testing its position
relative to the edges of the window.
3. If both outcodes are 0 (both endpoints inside the window), the line segment is
entirely inside the window and does not need to be clipped (trivial accept).
4. If the bitwise AND of the two outcodes is non-zero, both endpoints lie on the same
outside side of the window, so the line segment is entirely outside and is
discarded (trivial reject).
5. Otherwise, the line segment may be partially inside the window and needs to be
clipped.
6. Pick an endpoint that lies outside the window and calculate the intersection point
of the line with a window edge it violates. This intersection becomes the new
endpoint.
7. Repeat steps 2-6 with the updated endpoints until the segment is trivially
accepted or trivially rejected.
8. Return the final endpoint(s) of the line segment, clipped to the rectangular window.
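The steps above can be sketched in Python. The outcode bit values and the window convention (xmin, ymin, xmax, ymax) are my own choices for illustration:

```python
# Outcode bits: one per window edge.
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    code = INSIDE
    if x < xmin:   code |= LEFT
    elif x > xmax: code |= RIGHT
    if y < ymin:   code |= BOTTOM
    elif y > ymax: code |= TOP
    return code

def cohen_sutherland(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    """Clip the segment (x1,y1)-(x2,y2) to the window; None if invisible."""
    c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
    c2 = outcode(x2, y2, xmin, ymin, xmax, ymax)
    while True:
        if not (c1 | c2):        # trivial accept: both endpoints inside
            return (x1, y1, x2, y2)
        if c1 & c2:              # trivial reject: same outside region
            return None
        c = c1 or c2             # pick an endpoint that is outside
        if c & TOP:
            x = x1 + (x2 - x1) * (ymax - y1) / (y2 - y1); y = ymax
        elif c & BOTTOM:
            x = x1 + (x2 - x1) * (ymin - y1) / (y2 - y1); y = ymin
        elif c & RIGHT:
            y = y1 + (y2 - y1) * (xmax - x1) / (x2 - x1); x = xmax
        else:  # LEFT
            y = y1 + (y2 - y1) * (xmin - x1) / (x2 - x1); x = xmin
        if c == c1:              # replace the outside endpoint and retest
            x1, y1 = x, y
            c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
        else:
            x2, y2 = x, y
            c2 = outcode(x2, y2, xmin, ymin, xmax, ymax)
```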
CYRUS-BECK LINE CLIPPING ALGORITHM

The Cyrus-Beck line clipping algorithm is a method for clipping a line segment to a convex
polygon.

It works by defining a "half-space" for each edge of the polygon, and determining whether
the line segment lies within all of the half-spaces.

If it does, the line segment is entirely inside the polygon and does not need to be clipped. If
it does not, the line segment is partially outside the polygon and must be clipped to the
intersection of the line segment and the polygon.

The algorithm is an efficient method for line clipping and is commonly used in computer
graphics and geographic information systems (GIS).

The Cyrus-Beck line clipping algorithm involves the following steps:

1. Write the line parametrically as P(t) = P0 + t(P1 - P0) with 0 <= t <= 1, and start
with t_enter = 0 and t_exit = 1.
2. For each edge of the polygon, take its outward normal N and a point A on the
edge, and compute the dot products N · (P1 - P0) and N · (P0 - A). The
parameter value where the line crosses that edge's half-space boundary is
t = -N · (P0 - A) / N · (P1 - P0).
3. Use the sign of N · (P1 - P0) to classify the crossing: negative means the line is
potentially entering the polygon at that edge, positive means it is potentially
leaving.
4. Keep the largest entering value in t_enter and the smallest leaving value in t_exit.
5. If at any point t_enter > t_exit, the segment lies entirely outside the polygon and
is rejected. If the segment is never clipped, it lies entirely within the polygon
and no clipping is necessary.
6. Return the final endpoint(s) P(t_enter) and P(t_exit) of the line segment, clipped
to the convex polygon.
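A sketch of the parametric clipping loop, assuming the polygon's vertices are given in counter-clockwise order so that each edge's outward normal can be derived from the edge vector (these conventions are mine, not from the notes):

```python
def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def cyrus_beck(p0, p1, polygon):
    """Clip segment p0->p1 against a convex polygon (CCW vertex list).
    Returns the clipped endpoints, or None if the segment is outside."""
    d = (p1[0] - p0[0], p1[1] - p0[1])        # line direction vector
    t_enter, t_exit = 0.0, 1.0
    n = len(polygon)
    for i in range(n):
        a, b = polygon[i], polygon[(i + 1) % n]
        edge = (b[0] - a[0], b[1] - a[1])
        normal = (edge[1], -edge[0])           # outward normal for CCW order
        w = (p0[0] - a[0], p0[1] - a[1])
        num = -dot(normal, w)
        den = dot(normal, d)
        if den == 0:                           # parallel to this edge
            if num < 0:                        # and outside its half-space
                return None
            continue
        t = num / den
        if den < 0:                            # potentially entering
            t_enter = max(t_enter, t)
        else:                                  # potentially leaving
            t_exit = min(t_exit, t)
        if t_enter > t_exit:
            return None
    return ((p0[0] + t_enter * d[0], p0[1] + t_enter * d[1]),
            (p0[0] + t_exit * d[0], p0[1] + t_exit * d[1]))
```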
LIANG-BARSKY LINE CLIPPING ALGORITHM

The Liang-Barsky algorithm is a method for line clipping in computer graphics. It is an
efficient algorithm that clips a line segment to a rectangular window by determining the
intersection points of the line segment with the edges of the window.

The algorithm works by testing the line segment against each edge of the window
separately, and determining the intersection points if any.

It does this by first calculating the values of the line parameter for each edge, and if the
value is between zero and one, it means that the edge is intersecting the line segment.

The algorithm can then calculate the intersection point by using the parametric form of the
line segment.

The algorithm can be broken down into the following steps:

1. Write the line parametrically as x = x1 + t·dx, y = y1 + t·dy with 0 <= t <= 1, and
compute, for the left, right, bottom, and top edges: p1 = -dx, q1 = x1 - xmin;
p2 = dx, q2 = xmax - x1; p3 = -dy, q3 = y1 - ymin; p4 = dy, q4 = ymax - y1.
2. If pk = 0, the line is parallel to that edge; if the corresponding qk < 0, the
segment lies entirely outside that edge and is rejected.
3. For each pk ≠ 0, compute tk = qk / pk. Edges with pk < 0 are entering edges:
set t0 to the maximum of 0 and their tk values. Edges with pk > 0 are leaving
edges: set t1 to the minimum of 1 and their tk values.
4. If t0 > t1, the segment lies entirely outside the window and is rejected.
5. Otherwise, the intersection points are obtained from the parametric form of the
line at t0 and t1.
6. Return the final endpoint(s) of the line segment, clipped to the rectangular window.
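The parametric test can be sketched compactly. Variable names (p, q, t0, t1) follow the usual textbook notation; the window convention is mine:

```python
def liang_barsky(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    """Clip the segment (x1,y1)-(x2,y2) to the window; None if invisible."""
    dx, dy = x2 - x1, y2 - y1
    p = [-dx, dx, -dy, dy]                       # left, right, bottom, top
    q = [x1 - xmin, xmax - x1, y1 - ymin, ymax - y1]
    t0, t1 = 0.0, 1.0
    for pk, qk in zip(p, q):
        if pk == 0:
            if qk < 0:        # parallel to this edge and outside it
                return None
        else:
            t = qk / pk
            if pk < 0:        # entering edge: raise the lower bound
                t0 = max(t0, t)
            else:             # leaving edge: lower the upper bound
                t1 = min(t1, t)
    if t0 > t1:
        return None
    return (x1 + t0 * dx, y1 + t0 * dy, x1 + t1 * dx, y1 + t1 * dy)
```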

The Liang-Barsky algorithm is generally considered more efficient than the
Cohen-Sutherland algorithm because it computes at most four parameter values, one
per window edge, and finds the clipped endpoints in a single pass, whereas
Cohen-Sutherland may recompute outcodes and intersection points several times for
the same line.
MIDPOINT SUBDIVISION LINE CLIPPING ALGORITHM

The Midpoint Subdivision line clipping algorithm is a method for line clipping that is based
on the principle of dividing a line segment into two equal parts, and checking if each half is
inside or outside the clipping window.

The algorithm is an efficient and simple way to clip a line segment to a rectangular
window, and it can be broken down into the following steps:

1. Test the whole line segment against the window: if both endpoints lie inside, the
segment is completely visible (trivial accept); if both endpoint outcodes share
an outside region, it is completely invisible (trivial reject).
2. If neither test succeeds, calculate the midpoint of the line segment.
3. Divide the segment at its midpoint into two halves and apply the same tests to
each half.
4. Recursively subdivide every half that can be neither trivially accepted nor
trivially rejected, until each piece is accepted, rejected, or shorter than the
display resolution.
5. The subdivision points converge to the intersections of the line with the window
boundary, so the accepted pieces together form the visible portion of the
segment.
6. Return the final endpoint(s) of the line segment, clipped to the rectangular window.
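A recursive sketch of the idea, reusing an outcode test for the trivial accept/reject cases. The tolerance `eps` stands in for the display resolution; all names are my own:

```python
def outcode(x, y, xmin, ymin, xmax, ymax):
    """4-bit region code: left=1, right=2, bottom=4, top=8."""
    code = 0
    if x < xmin:   code |= 1
    elif x > xmax: code |= 2
    if y < ymin:   code |= 4
    elif y > ymax: code |= 8
    return code

def midpoint_clip(p1, p2, win, eps=1e-6, out=None):
    """Collect the visible pieces of segment p1-p2 inside window win."""
    if out is None:
        out = []
    xmin, ymin, xmax, ymax = win
    c1 = outcode(*p1, xmin, ymin, xmax, ymax)
    c2 = outcode(*p2, xmin, ymin, xmax, ymax)
    if c1 == 0 and c2 == 0:
        out.append((p1, p2))             # trivially visible
    elif c1 & c2:
        pass                             # trivially invisible
    elif abs(p2[0] - p1[0]) < eps and abs(p2[1] - p1[1]) < eps:
        pass                             # shrunk to a boundary point: stop
    else:
        mid = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
        midpoint_clip(p1, mid, win, eps, out)
        midpoint_clip(mid, p2, win, eps, out)
    return out
```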

The Midpoint Subdivision algorithm is attractive because it replaces direct intersection
calculation with simple midpoint computation and trivial accept/reject tests. It is a
recursive algorithm, and the number of subdivisions depends on the length of the
segment: the shorter the segment, the fewer subdivisions are needed.
POLYGON CLIPPING

Polygon clipping is the process of clipping a polygon (a closed shape with multiple sides)
to a defined clipping window.

The main purpose of polygon clipping is to remove the parts of the polygon that are
outside of the clipping window, and to keep the parts of the polygon that are inside the
window. There are several algorithms that can be used to clip a polygon, including the
Sutherland-Hodgman algorithm.

SUTHERLAND-HODGMAN POLYGON CLIPPING ALGORITHM

The Sutherland-Hodgman algorithm is a polygon clipping algorithm that clips a polygon
against a convex clipping window. It works by defining a "clipping edge" for each side of
the window and then successively clipping the polygon against each edge.

The algorithm can be broken down into the following steps:

1. Define the clipping window, which is typically a rectangle or another convex
polygon.
2. Define a "clipping edge" for each side of the window, and clip the whole polygon
against one edge at a time.
3. For each polygon edge from vertex S to vertex E, tested against a clipping edge,
output vertices by case: if both S and E are inside, output E; if S is inside and E
is outside, output only the intersection point; if both are outside, output
nothing; if S is outside and E is inside, output the intersection point and then E.
4. The output vertex list from one clipping edge becomes the input polygon for the
next clipping edge.
5. Repeat steps 3 and 4 for each side of the window, until the entire polygon has
been clipped.
6. Return the final polygon, clipped to the window.
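A sketch of the edge-by-edge pipeline for a rectangular window. The single-letter edge labels and the tuple conventions are illustrative choices of mine:

```python
def clip_polygon(polygon, xmin, ymin, xmax, ymax):
    """Sutherland-Hodgman: clip `polygon` (list of (x, y)) to a rectangle."""
    def inside(p, edge):
        x, y = p
        return {"L": x >= xmin, "R": x <= xmax,
                "B": y >= ymin, "T": y <= ymax}[edge]

    def intersect(p, q, edge):
        (x1, y1), (x2, y2) = p, q
        if edge in ("L", "R"):
            x = xmin if edge == "L" else xmax
            y = y1 + (y2 - y1) * (x - x1) / (x2 - x1)
        else:
            y = ymin if edge == "B" else ymax
            x = x1 + (x2 - x1) * (y - y1) / (y2 - y1)
        return (x, y)

    output = list(polygon)
    for edge in "LRBT":                  # clip against one edge at a time
        input_list, output = output, []
        if not input_list:
            break                        # polygon fully clipped away
        s = input_list[-1]
        for e in input_list:
            if inside(e, edge):
                if not inside(s, edge):
                    output.append(intersect(s, e, edge))
                output.append(e)
            elif inside(s, edge):
                output.append(intersect(s, e, edge))
            s = e
    return output
```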
TEXT CLIPPING

• There are several techniques that can be used to provide text clipping in a graphics
package.
• The clipping technique depends on the methods used to generate characters and
the requirements of a particular application.

There are three methods:

1. All or None String Clipping:


• In this clipping method, either we keep the entire string or reject the entire
string based on the clipping window.
2. All or None Character Clipping:
• In this method, the clipping is based on characters rather than the entire
string. If the string is entirely inside, we keep it and if it is partially outside
the window then,
a. We reject only the portion of the string which is outside.
b. If the character is on the boundary of the clipping window, we discard
the entire character and display the remaining string.
3. Text Clipping:
• In this method, if the string is entirely inside, we keep it and if it is partially
outside the window then,
a. We reject only the portion of the string which is outside.
b. If the character is on the boundary of the clipping window, we discard
only the portion of the character outside the window.
INTRODUCTION TO CURVES

• Objects in the real world are not always made up of regular geometric shapes. It
may involve curves.
• Many curves cannot be represented by simple mathematical functions or equations.
• Natural objects are neither perfectly flat nor smoothly curved but often have rough,
jagged contours.
• Drawing curves involves complex mathematical analysis in the form of various
interpolation techniques by maintaining the continuity and other properties.

Geometrically, curves are of two classifications:

1. Namable Curves
2. Unnamable Curves

Namable Curves:

• These are the parts of geometry that can be analyzed mathematically by equations.
• These include planes, spheres, parabolas, circles, straight lines and the surface of
revolution about the axis.

DDA ARC GENERATING ALGORITHM

The DDA (Digital Differential Analyzer) algorithm is a method for generating an arc by
approximating it with a series of straight line segments. It is a simple and efficient
algorithm that can be used for both 2D and 3D graphics applications.

The algorithm can be broken down into the following steps:

1. Define the center point, radius, and starting and ending angles of the arc.
2. Calculate the starting and ending points of the arc on the x-y plane using the center
point, radius, and angles.
3. Determine the number of line segments needed to approximate the arc. This is
typically done by dividing the angle of the arc by a fixed increment, such as 1 degree
or 0.1 degree.
4. For each line segment, calculate the incremental x and y values for the line
segment using the slope of the line segment (rise/run) and the angle of the arc.
5. Starting from the starting point of the arc, generate line segments by adding the
incremental x and y values to the current x and y values.
6. Repeat step 5 for each line segment, until the ending point of the arc is reached.
7. The final result is a series of line segments that approximate the arc.
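The steps above can be sketched by sampling the arc at a fixed angular increment; joining consecutive points gives the approximating line segments. (A true incremental DDA would avoid recomputing sin/cos at every step by rotating the previous point; this direct version matches the description, with my own parameter names.)

```python
import math

def arc_points(cx, cy, r, start_deg, end_deg, step_deg=1.0):
    """Endpoints of the short line segments approximating an arc,
    sampled every step_deg degrees from start_deg to end_deg."""
    n = int(round((end_deg - start_deg) / step_deg))
    pts = []
    for i in range(n + 1):
        a = math.radians(start_deg + i * step_deg)
        pts.append((cx + r * math.cos(a), cy + r * math.sin(a)))
    return pts

# Quarter circle of radius 10: 91 points from (10, 0) round to (0, 10).
quarter = arc_points(0, 0, 10, 0, 90)
```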

INTERPOLATION

• A curve having no simple mathematical definition can be drawn using
approximation.
• If we have an array of sample points, we can fill in the portions of the unknown
curve with pieces of a known curve that passes through the nearby samples.
• Since the known curve and the unknown curve share these sample points, we
assume that the two curves are much alike in the region between them.
• We then fill in the gap between sample points by finding the coordinates of points
along the known approximating curve and connecting these points with line
segments.

Lagrange interpolation: Lagrange interpolation is a method of constructing a
polynomial that passes through a set of given points. The idea behind the method is to
create a polynomial that passes through all the given points and has the least degree
among all such polynomials.
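A short sketch of evaluating the Lagrange polynomial at a point, using the standard basis-polynomial product form (function name and sample data are illustrative):

```python
def lagrange(points, x):
    """Evaluate the Lagrange interpolating polynomial through `points`
    (a list of (xi, yi) with distinct xi) at position x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)   # basis polynomial L_i(x)
        total += term
    return total

# Three points on y = x^2 reproduce the parabola exactly.
print(lagrange([(0, 0), (1, 1), (2, 4)], 3))  # 9.0
```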
TYPES OF CURVES

Spline Curve Representation:

• Spline curve representation is popularly used in computer graphics. It frequently
refers to a parametric curve.
• Spline produces a smooth curve through a designated set of points using
parametric function whose first and second derivatives are continuous across the
curve sections.

They are further categorized as:

1. Bezier Curve
2. B-spline Curve

Fractals Representation:

• A fractal is a type of geometric shape or pattern that is self-similar, meaning that it
looks the same at different scales. Fractals can be generated by applying a specific
set of rules or algorithms repeatedly, leading to the creation of highly complex
shapes that exhibit a high degree of symmetry and repetition.
• Fractals are widely used in computer graphics, computer-aided design, and other
fields, due to their unique properties, such as self-similarity, and their ability to
generate complex shapes with relatively simple rules. They can be used in a variety
of applications, including image compression, data visualization, animation, and
computer art.
• Fractals are also used in many other fields such as physics, weather forecasting,
and finance. They can be used to model natural phenomena such as coastlines,
mountains, and leaves. They can also be used to model complex systems such as
the stock market, fluid dynamics and more.

They are further categorized as:

1. Hilbert Curve
2. Koch Curve
BEZIER CURVE

• The Cubic Bezier Curve is adequate for most of the graphic applications. This curve
requires four control points. These four points completely specify the curve.
• We cannot extend the Bezier Curve, but we can take four more points and we can
construct a second Bezier Curve that can be attached to the first Bezier Curve.
• As shown in the figure, the curve begins at the first control point and ends on the
fourth control point.

• To connect two Bezier Curves, just make the first control point of the second Bezier
Curve match the last control point of the first Bezier Curve.
• At the start of the curve, it is tangent to the line connecting the first and the second
control points.
• Even at the end of the curve, it is tangent to the line connecting the third and fourth
control points.

Properties of Bezier Curve:

• They generally follow the shape of the control polygon which consists of the
segments joining the control points.
• They always pass through the first and last control points.
• A Bezier Curve always lies within the convex hull of its control points.
• No straight line intersects a Bezier Curve more times than it intersects its control
polygon.
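A cubic Bezier curve can be evaluated directly from its four control points with the Bernstein weights. A minimal sketch (the control points are arbitrary example values):

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Point on the cubic Bezier curve defined by four control points,
    at parameter t in [0, 1]."""
    u = 1 - t
    b0, b1, b2, b3 = u**3, 3 * u**2 * t, 3 * u * t**2, t**3   # Bernstein weights
    return (b0 * p0[0] + b1 * p1[0] + b2 * p2[0] + b3 * p3[0],
            b0 * p0[1] + b1 * p1[1] + b2 * p2[1] + b3 * p3[1])

# t = 0 gives the first control point, t = 1 the fourth.
print(cubic_bezier((0, 0), (1, 2), (3, 2), (4, 0), 0.5))  # (2.0, 1.5)
```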
B-SPLINE CURVE

A B-spline curve is a type of mathematical curve that is commonly used in computer


graphics and computer-aided design (CAD) to create smooth, complex shapes. B-spline
curves are defined by a set of control points and a set of basis functions, called B-spline
basis functions.

B-spline curves are widely used because they can represent a wide range of shapes,
they're easy to manipulate and they're very smooth. They're used in many applications like
Computer-Aided Design, Computer Animation, and others.
HILBERT CURVE

The Hilbert curve is a type of space-filling curve, meaning that it visits every point in a
two-dimensional space without leaving any empty spaces. It is named after the German
mathematician David Hilbert, who described such a curve in 1891.

The Hilbert curve can be generated recursively through the following steps:

1. Start with a single line segment, representing the first-order Hilbert curve.
2. Replace each line segment with a scaled copy of the basic U-shaped pattern,
rotating and reflecting each copy so that consecutive copies join into one
connected path; this gives the next-order curve.
3. Repeat step 2 for each order of the curve, increasing the number of line segments
and the level of recursion.
4. The final Hilbert curve is a continuous, self-similar curve that visits every point in
the two-dimensional space.
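The recursion above can be sketched with a standard construction: each call handles one sub-square, described by its corner and two side vectors, and the four recursive calls rotate or reflect those vectors so the sub-curves join up. This is an illustrative implementation, not the only way to generate the curve:

```python
def hilbert_points(order):
    """Centers of the cells visited by the order-n Hilbert curve,
    traversing a 2**order x 2**order grid inside the unit square."""
    pts = []

    def step(x0, y0, xi, xj, yi, yj, n):
        # (x0, y0): corner of a sub-square; (xi, xj) and (yi, yj): its sides.
        if n == 0:
            pts.append((x0 + (xi + yi) / 2, y0 + (xj + yj) / 2))
        else:
            step(x0, y0, yi / 2, yj / 2, xi / 2, xj / 2, n - 1)
            step(x0 + xi / 2, y0 + xj / 2,
                 xi / 2, xj / 2, yi / 2, yj / 2, n - 1)
            step(x0 + xi / 2 + yi / 2, y0 + xj / 2 + yj / 2,
                 xi / 2, xj / 2, yi / 2, yj / 2, n - 1)
            step(x0 + xi / 2 + yi, y0 + xj / 2 + yj,
                 -yi / 2, -yj / 2, -xi / 2, -xj / 2, n - 1)

    step(0.0, 0.0, 1.0, 0.0, 0.0, 1.0, order)
    return pts
```

Each step of the resulting path moves exactly one cell, which is what makes the curve useful for spatial ordering.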

The Hilbert curve is used in a variety of applications, including image compression, data
visualization, and spatial data indexing. Because the curve visits all points in a space, it
can be used to order data in a space-filling way, and it can be used to map a two-
dimensional space into a one-dimensional space while preserving the relative spatial
relationships of the points.

It's also used in data compression and other applications where data has a natural spatial
relationship, like in geographic information systems and others.
KOCH CURVE

The Koch curve is a type of fractal curve that is generated by repeatedly replacing the
straight line segments of an initial shape with a specific pattern. It was first described by
the Swedish mathematician Helge von Koch in a 1904 paper.

The most well-known version of the Koch curve is the Koch snowflake, which is created
by repeatedly applying the Koch curve to the three sides of an equilateral triangle.

The Koch curve can be generated recursively through the following steps:

1. Start with a single line segment, representing the first-order Koch curve.
2. Replace the middle third of each line segment with the two upper sides of an
equilateral triangle built on that middle third. Each segment thus becomes four
segments, each one-third of the original length; this is the next-order curve.
3. Repeat step 2 for each order of the curve, increasing the number of line segments
and the level of recursion.
4. The final Koch curve is a continuous, self-similar curve whose length grows by a
factor of 4/3 at every step, so in the limit it has infinite length while staying
within a bounded region (the snowflake form encloses a finite area).
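The replacement rule can be sketched recursively: split the segment in thirds, raise an equilateral bump over the middle third, and recurse on the four pieces. Names and sample values are illustrative:

```python
import math

def koch(p1, p2, order):
    """Polyline points of the order-n Koch curve over segment p1-p2,
    including both endpoints."""
    if order == 0:
        return [p1, p2]
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = (x2 - x1) / 3, (y2 - y1) / 3
    a = (x1 + dx, y1 + dy)            # one third of the way
    b = (x1 + 2 * dx, y1 + 2 * dy)    # two thirds of the way
    # Peak of the equilateral bump: (dx, dy) rotated +60 degrees, from a.
    peak = (a[0] + dx * math.cos(math.pi / 3) - dy * math.sin(math.pi / 3),
            a[1] + dx * math.sin(math.pi / 3) + dy * math.cos(math.pi / 3))
    pts = []
    for s, e in ((p1, a), (a, peak), (peak, b), (b, p2)):
        pts.extend(koch(s, e, order - 1)[:-1])   # drop duplicate joints
    pts.append(p2)
    return pts
```

Each order multiplies the number of segments by 4 and the total length by 4/3, matching the construction described above.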

The Koch curve is used in a variety of applications, including fractal image compression,
data visualization, and computer graphics. The Koch curve has a fractal property,
meaning that it shows the same pattern at different scales.
FRACTAL LINES AND SURFACES

A fractal is a type of geometric shape or pattern that is self-similar, meaning that it looks
the same at different scales. Fractals can be generated by applying a specific set of rules
or algorithms repeatedly, leading to the creation of highly complex shapes that exhibit a
high degree of symmetry and repetition.

Fractal lines are one-dimensional fractals that can be created by applying a specific set of
rules to a simple line segment, such as the Koch curve, the Hilbert curve, and the Dragon
curve. In the limit, these fractal lines have infinite length, yet they fit inside a bounded
region of the plane.

Fractal surfaces are two-dimensional fractals that can be created by applying a specific
set of rules to a simple surface; well-known planar fractal sets include the Mandelbrot set
and the Julia sets. Such sets have finite area but an infinitely long, infinitely detailed
boundary.
