CGR Notes
Image
Object
Pixel
• Pixel stands for picture element. A pixel is the smallest piece of information in an
image.
• In computer graphics, objects are represented as a collection of pixels, where a pixel is
the smallest addressable point that can be displayed on the screen.
• A pixel is displayed on the screen by setting its intensity and color.
Resolution
• It is expressed in terms of the number of pixels on the horizontal axis and the
number on the vertical axis.
• In pixel resolution, the term resolution refers to the total count of pixels in a
digital image.
• For example, if an image has M rows and N columns, then its resolution can be defined as
M X N.
Frame Buffer:
• In computer graphics, a frame buffer is a memory buffer that stores the color
values for each pixel on the screen.
• The frame buffer is used to hold the final image that will be displayed on the screen,
and it is typically implemented as a 2D array of color values, with one element for
each pixel on the screen.
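• For example, a 1024 × 768 display storing 24 bits (3 bytes) of color per pixel needs a
frame buffer of 1024 × 768 × 3 = 2,359,296 bytes, or roughly 2.25 MB.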
Text Mode
• In text mode, a display screen is divided into rows and columns of boxes. Each box can
contain one character.
• All video standards support a text mode that divides the screen into 25 rows and 80
columns.
• Text mode is also known as Character mode or Alphanumeric mode.
Graphics Mode:
• Graphic mode is a computer display mode that displays images using pixels.
• In graphic mode, the display screen is treated as an array of pixels.
• Programs that run in graphics mode can display an unlimited variety of shapes and
fonts.
Graphics Pipeline:
The graphics pipeline, also known as the rendering pipeline, is the series of steps that a
computer graphics system uses to convert a 3D scene into a 2D image that can be
displayed on a screen. The basic graphics pipeline consists of the following stages:
1. Modeling: In this stage, the 3D scene is created and defined by specifying the
positions and properties of the objects in the scene. This includes the geometric
shapes of the objects, their location in 3D space, and their surface properties, such
as color, texture, and lighting.
2. Viewing: In this stage, the 3D scene is transformed into a 2D image by projecting it
onto an image plane. This includes defining the camera position and orientation in
the scene, and the field of view of the camera.
3. Clipping: In this stage, the parts of the 3D scene that are outside the viewable area
are discarded. This is done by comparing the positions of the objects in the scene to
the boundaries of the viewable area, and discarding any that are outside the
boundaries.
4. Rasterization: In this stage, the 2D image is created by converting the 3D scene into
a set of pixels. This includes determining the color and depth values of each pixel,
and using these values to create the final image.
5. Display: In this stage, the final 2D image is displayed on the screen. This includes
transferring the image from the frame buffer to the screen and adjusting the
properties of the image, such as brightness and contrast.
Vector-Based Graphics:
• The images in vector graphics are mathematically based. Unlike bitmaps, vector images
are not based on pixel patterns; instead, they use mathematical formulas to draw lines
and curves that can be combined to create an image.
• Vector-based images have smooth edges and are therefore used to create curves and
shapes. Vector images are edited by manipulating the lines and curves that make up the
image using a vector editing program.
DISPLAY DEVICES
1. CRT
• A cathode ray tube (CRT) is a type of display technology that was commonly used in
older televisions and computer monitors.
• It works by emitting a beam of electrons from a cathode, which is then focused and
directed by an electron gun onto a phosphor-coated screen.
• The phosphors on the screen emit light when struck by the electron beam, creating
the image that is displayed on the screen.
The main components of a CRT are:
1. The electron gun, which generates the electron beam and focuses it onto the
screen.
2. The phosphor-coated screen, which emits light when struck by the electron
beam, creating the image that is displayed on the screen.
3. The deflection system, which is responsible for steering the electron beam
across the screen, allowing it to create the entire image.
• CRT displays have some advantages over newer display technologies such as LCD
and OLED: good color depth, wide viewing angles, and fast response times.
• However, they also have disadvantages: they are large, heavy, and consume more
power than newer technologies, and they suffer from problems such as screen flicker
and geometric distortion.
• CRT displays have been mostly phased out by newer technologies and are not
commonly used anymore.
2. Raster Scan Display
• A raster scan display is a type of display technology that is used in most modern
televisions and computer monitors.
• It works by emitting a beam of electrons or other particles, which is then scanned
horizontally across the screen, creating a pattern of lines called a raster.
• As the beam moves across the screen, it illuminates phosphors or other types of
light-emitting elements, creating the image that is displayed on the screen.
The main components of a raster scan display are:
1. The electron gun or other source of the particle beam, which generates the
beam and focuses it onto the screen.
2. The phosphor-coated screen or other light-emitting elements, which emit light
when struck by the beam, creating the image that is displayed on the screen.
3. The deflection system, which is responsible for steering the beam across the
screen, allowing it to create the entire image.
4. The synchronization circuit, which is used to synchronize the beam with the
scanning process and the image data.
• Raster scan displays are widely used because they are relatively inexpensive to
produce and can display high-resolution images.
• They can be used in a variety of applications, such as televisions, computer
monitors, and other types of displays.
• However, they also have some limitations, such as a limited viewing angle and a
fixed resolution, as well as issues like screen flicker and distortion.
3. Random Scan Display *2
• A random scan display, also known as vector display, is a type of display technology
that is used in some older computer graphics systems and arcade games.
• Unlike raster scan displays, which scan the entire screen in a regular pattern,
random scan displays only draw the parts of the image that are changing.
• This allows them to use less power and produce less heat, but it also means that
they can only display simple graphics and animations.
The main components of a random scan display are:
1. The electron gun or other source of the particle beam, which generates the
beam and focuses it onto the screen.
2. The phosphor-coated screen or other light-emitting elements, which emit light
when struck by the beam, creating the image that is displayed on the screen.
3. The deflection system, which is responsible for steering the beam across the
screen, allowing it to create the image.
4. The refresh memory, which stores the image data and the beam positioning
information.
• In random scan displays, the image is drawn by specifying the endpoints of the lines
that make up the image and the display system then draws the lines by moving the
electron beam from one endpoint to the other.
• This way, the display only updates the parts of the image that are changing, making
it more efficient.
• Random scan displays have been mostly phased out by newer technologies and are
not commonly used anymore, but they were popular for applications where high-
resolution images were not needed and power consumption was an important
factor.
Difference between raster and random scan display *3
Electron beam:
• Raster scan: The electron beam is swept across the screen, one row at a time, from top to bottom.
• Random scan: The electron beam is directed only to the parts of the screen where a picture is to be drawn.
Resolution:
• Raster scan: Its resolution is poor (produces zigzag lines that are plotted as discrete point sets).
• Random scan: Its resolution is good (the CRT beam directly follows the line path).
Realistic display:
• Raster scan: The capability of the system to store intensity values for each pixel makes it well suited for the realistic display of scenes containing shadow and colour patterns.
• Random scan: These systems are designed for line drawing and cannot display realistic shaded scenes.
Drawing an image:
• Raster scan: Pixels are used to draw an image.
• Random scan: Mathematical functions are used to draw an image.
4. Flat Panel Display
• The term flat panel display refers to a class of video devices that have reduced
weight, volume and power requirement as compared to CRT.
4.1 Plasma Panel
• Plasma panels are a type of display technology that uses small cells filled with a
mixture of gases, such as neon and xenon, to produce images on a screen.
• Each cell, or pixel, is made up of three sub-pixels, one red, one blue, and one green.
These sub-pixels are excited by an electrical current to produce light and create the
image.
The main components of a plasma panel are:
1. The front glass panel, which is the screen that the image is displayed on.
2. The discharge cells, which are the small cells filled with gases that produce the
light for the image.
3. The electrodes, which are used to excite the gases in the cells and create the
light for the image.
4. The address electrodes, which are used to control the electrical current that is
applied to the cells and create the image.
5. The driving circuit, which is used to control the electrical current and create the
image.
• Plasma panels have some advantages over other display technologies, such as
deep blacks and a wide viewing angle.
• They are also capable of producing high-quality images with rich colors and fast
response times.
• However, they also have some disadvantages, such as a relatively high power
consumption, image burn-in, and a shorter lifespan than LCD and LED displays.
• Plasma panels have been mostly phased out by newer technologies and are not
commonly used anymore.
4.2 LED
4.3 LCD
• LCD is a flat panel display, electronic visual display, or video display that uses the
light modulating properties of liquid crystals.
• This display uses nematic (thread like) liquid crystal compounds that tend to keep
the long axes of rod-shaped molecules aligned. These nematic compounds have
crystalline arrangement of molecules, yet they flow like a liquid and hence termed
as Liquid Crystal Display.
5. Touch screen
OUTPUT PRIMITIVES
Graphics primitives are the functions that we use to draw the actual lines and characters
that make up a picture. These functions provide a convenient way for the application
programmer to describe pictures.
AUGMENTED REALITY
• Augmented reality (AR) is the integration of digital information with the user's
environment in real time.
• Unlike virtual reality (VR), which creates a totally artificial environment, AR users
experience a real-world environment with generated perceptual information
overlaid on top of it.
• Augmented reality is used to either visually change natural environments in some
way or to provide additional information to users.
• The primary benefit of AR is that it manages to blend digital and three-dimensional
(3D) components with an individual's perception of the real world.
• AR has a variety of uses, from helping in decision-making to entertainment.
APPLICATIONS OF CGR *1
VIRTUAL REALITY *1
• In computer graphics and rendering (CGR), virtual reality (VR) refers to the use of
computer technology to create a simulated, three-dimensional environment that
can be interacted with in a seemingly real or physical way by a person using a
device.
• It can be used to immerse the user in a computer-generated environment, allowing
them to interact with and explore it in a realistic way.
DDA LINE DRAWING ALGORITHM
The Digital Differential Analyzer (DDA) helps us to interpolate variables over an interval from
one point to another. The DDA algorithm can be used to perform rasterization of lines,
triangles, and polygons.
Program:
#include <stdio.h>
#include <conio.h>
#include <graphics.h>
#include <math.h>

int main() {
    float x, y, x1, y1, x2, y2, dx, dy, length;
    int gd, gm, i;

    printf("Enter coordinates of x1 and y1: \n");
    scanf("%f%f", &x1, &y1);
    printf("Enter coordinates of x2 and y2: \n");
    scanf("%f%f", &x2, &y2);

    detectgraph(&gd, &gm);
    initgraph(&gd, &gm, "");

    dx = fabs(x2 - x1);
    dy = fabs(y2 - y1);

    /* The larger of the two spans decides the number of steps. */
    if (dx >= dy)
        length = dx;
    else
        length = dy;

    /* Increment per step along each axis. */
    dx = (x2 - x1) / length;
    dy = (y2 - y1) / length;

    /* Start at the first endpoint; 0.5 is added for rounding. */
    x = x1 + 0.5;
    y = y1 + 0.5;

    i = 1;
    while (i <= length) {
        putpixel((int)x, (int)y, WHITE);
        x = x + dx;
        y = y + dy;
        i++;
    }

    getch();
    closegraph();
    return 0;
}
Advantages of DDA Algorithm: *1
BRESENHAM'S LINE DRAWING ALGORITHM
This algorithm was introduced by Jack Elton Bresenham in 1962. This algorithm helps us
to perform scan conversion of a line. It is a powerful, useful, and accurate method. We use
incremental integer calculations to draw a line; the integer calculations involve only addition
and subtraction.
In Bresenham’s Line Drawing algorithm, we have to calculate the slope (m) between the
starting point and the ending point.
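For the common case of a line with slope 0 < m < 1 (with Δx = x2 − x1 and Δy = y2 − y1), the
standard decision-parameter form of the algorithm is:
\[
p_0 = 2\Delta y - \Delta x,
\qquad
p_{k+1} =
\begin{cases}
p_k + 2\Delta y, & \text{if } p_k < 0 \ \ (\text{plot } (x_k + 1,\, y_k)) \\
p_k + 2\Delta y - 2\Delta x, & \text{if } p_k \ge 0 \ \ (\text{plot } (x_k + 1,\, y_k + 1))
\end{cases}
\]
Each step advances one unit in x and uses only integer addition and subtraction to decide
whether y stays the same or increases by one.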
Algorithm:
Advantages:
Disadvantages:
• Bresenham's line drawing algorithm only helps to draw basic straight lines.
• The resulting line can appear jagged rather than smooth.
BRESENHAM'S CIRCLE DRAWING ALGORITHM
Bresenham's approach is also used for circle drawing, where it is known as Bresenham's
circle drawing algorithm. Circle generation is more complicated than drawing a line.
In this algorithm, we select the closest pixel position at each step to complete the arc,
since a continuous arc cannot be represented exactly on a raster display system.
What distinguishes this algorithm is that it uses only integer arithmetic, so the calculations
can be performed faster than in other algorithms.
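Starting from (0, r) and working over one octant, the commonly used integer decision
parameter is:
\[
d_0 = 3 - 2r,
\qquad
d_{k+1} =
\begin{cases}
d_k + 4x_k + 6, & \text{if } d_k < 0 \ \ (\text{next pixel } (x_k + 1,\, y_k)) \\
d_k + 4(x_k - y_k) + 10, & \text{if } d_k \ge 0 \ \ (\text{next pixel } (x_k + 1,\, y_k - 1))
\end{cases}
\]
The pixels of the remaining seven octants are obtained by symmetry about the circle's
centre.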
Algorithm:
Disadvantages:
• The plotted points are less accurate than those of the midpoint circle drawing algorithm.
• It is not well suited for complex or high-quality graphics images.
POLYGONS
• A closed plane figure made up of several line segments that are joined together is
called a polygon.
• In polygons, the sides do not cross each other. Exactly two sides meet at every
vertex.
Features of polygons:
Types of Polygons: *1
1. Convex Polygon: A polygon in which the line segment joining any two points within
the polygon lies completely inside the polygon is called a convex polygon.
2. Concave Polygon: A polygon in which the line segment joining any two points within
the polygon may or may not lie completely inside the polygon is called a concave
polygon.
INSIDE-OUTSIDE TEST *1
• In a standard polygon such as a triangle or rectangle, the component edges are
joined only at the vertices; the edges do not share any other common point in the
plane and are non-intersecting.
• However, some graphics applications produce polygon shapes with intersecting
edges.
• For such a shape, finding out whether a particular point is inside or outside the
polygon is difficult and requires specific rules.
• Basically, there are two tests to find out if a point is inside or outside of polygon:
1. Odd Even Rule: This rule determines the point on the canvas by drawing a ray
from that point to infinity in any direction and counting the number of path
segments from the given shape that the ray crosses. If this number is odd,
the point is inside; if even, the point is outside.
2. Non-zero Winding Number Rule: Conceptually, to check a point P, construct a
line segment from P to a point on the boundary and treat it as an elastic band
pinned at P. Stretch the other end of the elastic around the polygon for one
complete cycle and count how many times the elastic has wound around P.
If the count (the winding number) is non-zero, the point lies inside the polygon;
otherwise, it lies outside.
BOUNDARY FILL ALGORITHM
• The basic idea of polygon filling is to start from any arbitrary point (the seed) inside the
polygon and set it to the fill colour.
• Examine the neighbouring pixels of the seed pixel to check whether a boundary pixel
has been reached.
• If a boundary pixel has not been reached, set the fill colour on those pixels and continue
the process until boundary pixels are reached.
1. Four connected method: Here, four neighbouring points of a current test point or
seed point are tested. The pixel positions are right, left, above and below of the
current pixel. This process will continue until we find a boundary with a different
colour.
2. Eight connected method: Here, eight neighbouring points of a current test point or
seed point are tested. The pixel positions are right, left, above, below and the four
diagonal pixels of the current pixel. This process will continue until we find a
boundary with a different colour.
Disadvantages:
• It may not fill regions correctly if some interior pixels are already displayed in the fill
colour.
• In the 4-connected approach there is a problem: sometimes it does not fill corner
pixels, as it checks only the four adjacent positions of the given pixel.
Program:
#include <stdio.h>
#include <conio.h>
#include <dos.h>
#include <graphics.h>

/* Recursively fill outward from (x, y) until the boundary colour is reached. */
void boundaryFill4(int x, int y, int fill_color, int boundary_color) {
    if (getpixel(x, y) != boundary_color && getpixel(x, y) != fill_color) {
        putpixel(x, y, fill_color);
        boundaryFill4(x + 1, y, fill_color, boundary_color);
        boundaryFill4(x, y + 1, fill_color, boundary_color);
        boundaryFill4(x - 1, y, fill_color, boundary_color);
        boundaryFill4(x, y - 1, fill_color, boundary_color);
    }
}

int main()
{
    int gd = DETECT, gm;
    initgraph(&gd, &gm, "");

    int x = 250, y = 200, radius = 50;
    circle(x, y, radius);          /* boundary drawn in the default colour (15 = white) */
    boundaryFill4(x, y, 6, 15);    /* fill colour 6, boundary colour 15 */

    delay(10000);
    getch();
    closegraph();
    return 0;
}
FLOOD FILL ALGORITHM
• This is a modified form of the boundary fill algorithm. The basic concept is the same:
select any seed point and start colouring towards the borders.
• But when the boundary of a polygon is defined by several different colour regions, we
can paint such areas by replacing a specified interior colour with the fill colour, instead
of searching for a boundary colour.
Algorithm:
Step 1: Start
Step 2: Read any seed pixels position (x, y).
Step 3: Check to see if this pixel (x, y) has old interior colour.
If old color, then set it to new fill colour.
Step 4: Repeat step 3 for 4 neighbouring pixels of (x, y).
Step 5: In steps 3 and 4, if the pixel (x, y) does not have the old interior colour, then jump to step 6.
Step 6: Stop.
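A minimal 4-connected sketch of these steps, written in the same recursive style as the
boundary-fill program above (floodFill4 is an illustrative name, not part of any library, and
graphics.h is assumed):
void floodFill4(int x, int y, int fill_color, int old_color) {
    /* Recursively replace every pixel of old_color with fill_color,
       starting from the seed pixel (x, y). */
    if (getpixel(x, y) == old_color) {
        putpixel(x, y, fill_color);
        floodFill4(x + 1, y, fill_color, old_color);   /* right */
        floodFill4(x - 1, y, fill_color, old_color);   /* left  */
        floodFill4(x, y + 1, fill_color, old_color);   /* below */
        floodFill4(x, y - 1, fill_color, old_color);   /* above */
    }
}
It would typically be called as floodFill4(seed_x, seed_y, FILL, getpixel(seed_x, seed_y));
an 8-connected version simply adds four more calls for the diagonal neighbours.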
STROKE METHOD
• In this method, we use small line segments to generate a character. The small
series of line segments are drawn like a stroke of a pen to form a character.
• We can build our own stroke method by calling a line drawing algorithm. Here, it is
necessary to decide which line segments are needed for each character and then
draw these segments using line drawing algorithms.
• This method also supports scaling of the character.
STARBURST METHOD
This is also known as dot matrix because in this method characters are represented by an
array of dots in the matrix form. It’s a two-dimensional array having columns and rows.
2D TRANSFORMATIONS
Translation:
\[
\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix}
=
\begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\]
Scaling:
Case 1: If sx = sy = 1, scaling does not change the size of the object.
Case 2: If sx = sy > 1, scaling stretches the object and increases its size.
Case 3: If sx = sy < 1, scaling decreases the size of the object.
Case 4: If sx = sy, scaling either increases or decreases the size of the object uniformly.
Case 5: If sx ≠ sy, scaling changes the size as well as the shape of the object.
\[
\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix}
=
\begin{bmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\]
Rotation:
\[
\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix}
=
\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\]
Reflection:
Reflection about the x-axis:
\[
\begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\]
Shearing:
X-shear:
\[
\begin{bmatrix} 1 & sh_x & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\]
Y-shear:
\[
\begin{bmatrix} 1 & 0 & 0 \\ sh_y & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\]
3D TRANSFORMATIONS
Translation:
Scaling:
Rotation:
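In 4 × 4 homogeneous form, the three 3D transformations named above can be written as
follows (rotation is shown about the z-axis; rotations about the x- and y-axes follow the
same pattern):
\[
T =
\begin{bmatrix}
1 & 0 & 0 & t_x \\
0 & 1 & 0 & t_y \\
0 & 0 & 1 & t_z \\
0 & 0 & 0 & 1
\end{bmatrix},
\qquad
S =
\begin{bmatrix}
s_x & 0 & 0 & 0 \\
0 & s_y & 0 & 0 \\
0 & 0 & s_z & 0 \\
0 & 0 & 0 & 1
\end{bmatrix},
\qquad
R_z(\theta) =
\begin{bmatrix}
\cos\theta & -\sin\theta & 0 & 0 \\
\sin\theta & \cos\theta & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]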
PROJECTION
Types of Projection:
1. Parallel Projection
2. Perspective Projection
Parallel Projection:
• In this projection, the lines of projection are parallel, both in reality and in the
projection plane.
• Parallel projection discards z-coordinates and parallel lines from each vertex on
the object are extended until they intersect the view plane.
Perspective Projection: In this projection, the distance from the centre of projection to the
projected plane is finite and the size of the object varies inversely with distance which
looks more realistic.
Windowing:
• The process of selecting and viewing the picture with a different view is called
windowing.
Clipping:
• The process which divides each element of the picture into its visible and invisible
portions and allows the invisible portion to be discarded is called clipping.
Viewing Transformation:
• In general, the process of mapping pictures from the World Coordinate System to
Physical Device Coordinate System is called viewing transformation. Viewing
transformation consists of two parts:
1. Normalization Transformation: It maps the world coordinate system to the
normalized device coordinate system.
2. Workstation Transformation: It maps this normalized device coordinate
system to the physical device coordinate system.
In general, the process of mapping a picture from the world coordinate system to the
physical device coordinate system is called window to viewport transformation.
POINT CLIPPING
• In point clipping, if a specified point is inside the clipping window, then it is accepted
and displayed on the screen.
• If the specified point is outside the clipping window, then it is rejected and not
displayed on the screen.
• Therefore, we need to check if the point (x, y) is inside or outside the clipping
window using the following condition.
Xmin <= X <= Xmax
Ymin <= Y <= Ymax
LINE CLIPPING
Lines intersecting a rectangular clip region are always clipped to a single line segment.
Line segments inside the clipping window are displayed, and line segments outside the
clipping window are discarded.
1. Visible (Completely inside the window): Here, both the endpoints of line lie within
the window.
2. Not visible (Completely outside the window): Here, both the endpoints of line lie
outside the window.
3. Clipping Candidate (Intersects the boundaries of a window): Here, the line
intersects clipping window boundaries.
COHEN-SUTHERLAND LINE CLIPPING ALGORITHM
The Cohen-Sutherland line clipping algorithm is a method for clipping a line segment to a
rectangular window in a two-dimensional space.
The algorithm divides the window into 9 regions, and by testing a line's endpoint against
these regions, it can quickly determine whether a line is inside, outside, or partially inside
the window.
1. The left, right, top, and bottom edges of the window divide the plane into 9 regions;
each endpoint of a line is assigned a 4-bit region code, called an "outcode",
according to the region in which it lies.
2. For each endpoint of the line segment, calculate the outcode by testing its position
relative to the edges of the window.
3. If both outcodes are 0000 (both endpoints are inside the window), the line segment is
entirely inside the window and does not need to be clipped (trivial accept).
4. If the bitwise AND of the two outcodes is non-zero (both endpoints lie outside the same
window edge), the line segment is entirely outside the window and is discarded
(trivial reject).
5. Otherwise, the line segment may be partially inside the window and needs to be
clipped (a sketch of the outcode computation follows this list).
6. For each endpoint of the line segment, calculate the intersection point with the edge
of the window that it is outside of. This will be a new endpoint for the line segment.
7. Repeat steps 2-6 for each edge of the window. The final endpoint of the line
segment will be the intersection of the line segment and the window.
8. Return the final endpoint(s) of the line segment, clipped to the rectangular window.
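A minimal sketch of the outcode test used in the steps above, assuming a clip window
given by Xmin, Ymin, Xmax, Ymax (the names and bit assignments here are illustrative):
/* One bit per window edge; a bit is set when the point lies outside that edge. */
#define LEFT   1   /* x < Xmin */
#define RIGHT  2   /* x > Xmax */
#define BOTTOM 4   /* y < Ymin */
#define TOP    8   /* y > Ymax */

int computeOutcode(float x, float y,
                   float Xmin, float Ymin, float Xmax, float Ymax) {
    int code = 0;
    if (x < Xmin)      code |= LEFT;
    else if (x > Xmax) code |= RIGHT;
    if (y < Ymin)      code |= BOTTOM;
    else if (y > Ymax) code |= TOP;
    return code;
}
With code1 and code2 computed for the two endpoints: (code1 | code2) == 0 is a trivial
accept, (code1 & code2) != 0 is a trivial reject, and anything else is a clipping candidate.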
CYRUS-BECK LINE CLIPPING ALGORITHM
The Cyrus-Beck line clipping algorithm is a method for clipping a line segment to a convex
polygon.
It works by defining a "half-space" for each edge of the polygon, and determining whether
the line segment lies within all of the half-spaces.
If it does, the line segment is entirely inside the polygon and does not need to be clipped. If
it does not, the line segment is partially outside the polygon and must be clipped to the
intersection of the line segment and the polygon.
The algorithm is an efficient method for line clipping and is commonly used in computer
graphics and geographic information systems (GIS).
1. For each edge of the polygon, calculate the dot product of the line segment and the
normal vector of the edge. This will give the signed distance of the line segment
from the edge.
2. Compare the signed distance of the line segment from each edge to determine if it
is inside or outside the clipping window. If the line segment is outside of the
window, it must be clipped.
3. If the line segment needs to be clipped, calculate the intersection point between the
line segment and the boundary of window. This will be a new endpoint for the line
segment.
4. Repeat steps 1-3 for each edge of the polygon. The final endpoint of the line
segment will be the intersection of the line segment and the polygon.
5. If the line segment is inside all half-spaces, the line segment is entirely within the
polygon and no clipping is necessary.
6. Return the final endpoint(s) of the line segment, clipped to the convex polygon.
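In parametric form the segment is P(t) = P0 + t(P1 − P0) with 0 ≤ t ≤ 1. For a polygon
edge with outward normal N_i passing through a point P_Ei, the intersection parameter
used in step 3 is:
\[
t_i = \frac{N_i \cdot (P_0 - P_{E_i})}{-\,N_i \cdot (P_1 - P_0)}
\]
Edges for which N_i · (P1 − P0) < 0 give "potentially entering" values and the others give
"potentially leaving" values; the clipped segment runs from the largest entering t (or 0) to
the smallest leaving t (or 1), and is rejected if that range is empty.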
LIANG-BARSKY LINE CLIPPING ALGORITHM
The algorithm works by testing the line segment against each edge of the window
separately, and determining the intersection points if any.
It does this by first calculating the values of the line parameter for each edge, and if the
value is between zero and one, it means that the edge is intersecting the line segment.
The algorithm can then calculate the intersection point by using the parametric form of the
line segment.
1. For each of the four window edges (left, right, bottom, top), compute the line
parameters p_k and q_k (k = 1, 2, 3, 4) from the endpoints of the segment (the usual
definitions are sketched after this list).
2. For each edge of the window, calculate the value of the line parameter at which the
line crosses that edge. If this parameter lies between 0 and 1, the edge intersects the
line segment.
3. If the parameter lies outside the range 0 to 1, the intersection with that edge falls
beyond the segment's endpoints, so that edge does not clip the segment.
4. If the line parameter is between 0 and 1, calculate the intersection point of the line
segment and the edge using the parametric form of the line.
5. The final endpoint of the line segment will be the intersection point(s) with the
window edges.
6. Return the final endpoint(s) of the line segment, clipped to the rectangular window.
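With dx = x2 − x1 and dy = y2 − y1, the four edge parameters referred to in step 1 are
usually written as:
\[
\begin{aligned}
p_1 &= -dx, & q_1 &= x_1 - X_{min} && \text{(left)} \\
p_2 &= dx,  & q_2 &= X_{max} - x_1 && \text{(right)} \\
p_3 &= -dy, & q_3 &= y_1 - Y_{min} && \text{(bottom)} \\
p_4 &= dy,  & q_4 &= Y_{max} - y_1 && \text{(top)}
\end{aligned}
\]
If p_k = 0 and q_k < 0, the line is parallel to edge k and lies completely outside it.
Otherwise r_k = q_k / p_k; the visible portion runs from t1 = max(0, r_k for p_k < 0) to
t2 = min(1, r_k for p_k > 0), and the segment is rejected when t1 > t2.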
MIDPOINT SUBDIVISION LINE CLIPPING ALGORITHM
The Midpoint Subdivision line clipping algorithm is based on the principle of repeatedly
dividing a line segment into two equal halves and checking whether each half is inside or
outside the clipping window.
It is an efficient and simple way to clip a line segment to a rectangular window.
The algorithm only needs to test the midpoint against the window at each step, rather than
computing exact intersections for the endpoints. It is recursive, and the depth of recursion
depends on the length of the segment: the shorter the segment, the fewer subdivisions are
needed.
POLYGON CLIPPING
Polygon clipping is the process of clipping a polygon (a closed shape with multiple sides)
to a defined clipping window.
The main purpose of polygon clipping is to remove the parts of the polygon that are
outside of the clipping window, and to keep the parts of the polygon that are inside the
window. There are several algorithms that can be used to clip a polygon, including the
Sutherland-Hodgman algorithm.
TEXT CLIPPING
• There are several techniques that can be used to provide text clipping in a graphics
package.
• The clipping technique depends on the methods used to generate characters and
the requirements of a particular application.
CURVES
• Objects in the real world are not always made up of regular geometric shapes; they may
involve curves.
• Such curves cannot always be represented by exact mathematical functions or equations.
• Natural objects are neither perfectly flat nor smoothly curved, but often have rough,
jagged contours.
• Drawing curves involves complex mathematical analysis in the form of various
interpolation techniques, while maintaining continuity and other properties.
1. Namable Curves
2. Unnamable Curves
Namable Curves:
• These are the parts of geometry that can be analyzed mathematically by equations.
• These include planes, spheres, parabolas, circles, straight lines and the surface of
revolution about the axis.
ARC GENERATION USING DDA
The DDA (Digital Differential Analyzer) algorithm can also be used to generate an arc by
approximating it with a series of straight line segments. It is a simple and efficient
algorithm that can be used for both 2D and 3D graphics applications.
1. Define the center point, radius, and starting and ending angles of the arc.
2. Calculate the starting and ending points of the arc on the x-y plane using the center
point, radius, and angles.
3. Determine the number of line segments needed to approximate the arc. This is
typically done by dividing the angle of the arc by a fixed increment, such as 1 degree
or 0.1 degree.
4. For each line segment, calculate the incremental x and y values for the line
segment using the slope of the line segment (rise/run) and the angle of the arc.
5. Starting from the starting point of the arc, generate line segments by adding the
incremental x and y values to the current x and y values.
6. Repeat step 5 for each line segment, until the ending point of the arc is reached.
7. The final result is a series of line segments that approximate the arc.
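A short sketch of these steps in the same graphics.h style as the earlier programs
(drawArcDDA and its parameter names are illustrative; angles are taken in degrees). For
simplicity, it computes each point directly from the angle rather than using an incremental
rotation form, but it follows the same segment-by-segment idea:
#include <graphics.h>
#include <math.h>

#define DEG2RAD 0.017453293   /* pi / 180 */

/* Approximate an arc of radius r centred at (xc, yc), from startDeg to
   endDeg, by joining successive points with short line segments. */
void drawArcDDA(int xc, int yc, float r, float startDeg, float endDeg) {
    float step = 1.0;                  /* 1-degree angular increment */
    float a, deg, xNew, yNew, xPrev, yPrev;

    a = startDeg * DEG2RAD;
    xPrev = xc + r * cos(a);           /* starting point of the arc */
    yPrev = yc + r * sin(a);

    for (deg = startDeg + step; deg <= endDeg; deg += step) {
        a = deg * DEG2RAD;
        xNew = xc + r * cos(a);        /* next point on the arc */
        yNew = yc + r * sin(a);
        line((int)xPrev, (int)yPrev, (int)xNew, (int)yNew);
        xPrev = xNew;
        yPrev = yNew;
    }
}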
INTERPOLATION
1. Bezier Curve
2. B-spline Curve
Fractals Representation:
1. Hilbert Curve
2. Koch Curve
BEZIER CURVE
• The Cubic Bezier Curve is adequate for most of the graphic applications. This curve
requires four control points. These four points completely specify the curve.
• We cannot extend the Bezier Curve, but we can take four more points and we can
construct a second Bezier Curve that can be attached to the first Bezier Curve.
• As shown in the figure, the curve begins at the first control point and ends on the
fourth control point.
• To connect two Bezier Curves, just make the first control point of the second Bezier
Curve match the last control point of the first Bezier Curve.
• At the start of the curve, it is tangent to the line connecting the first and the second
control points.
• Even at the end of the curve, it is tangent to the line connecting the third and fourth
control points.
• They generally follow the shape of the control polygon which consists of the
segments joining the control points.
• They always pass through the first and last control points.
• A Bezier Curve generally follows the shape of the polygon.
• No straight line intersects a Bezier Curve more times than it intersects its control
polygon.
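With control points P0, P1, P2, P3, the cubic Bezier curve described above can be written
in Bernstein form:
\[
P(t) = (1-t)^3 P_0 + 3t(1-t)^2 P_1 + 3t^2(1-t) P_2 + t^3 P_3, \qquad 0 \le t \le 1
\]
Setting t = 0 and t = 1 gives P(0) = P0 and P(1) = P3, which is why the curve passes
through the first and last control points, and the derivative at the ends points along
P1 − P0 and P3 − P2, which gives the tangent properties listed above.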
B-SPLINE CURVE
B-spline curves are widely used because they can represent a wide range of shapes,
they're easy to manipulate and they're very smooth. They're used in many applications like
Computer-Aided Design, Computer Animation, and others.
HILBERT CURVE
The Hilbert curve is a type of space-filling curve, meaning that it visits every point in a
two-dimensional space without leaving any empty spaces. It is named after David Hilbert,
who was one of the first to study such curves in the early 20th century.
The Hilbert curve can be generated recursively through the following steps:
1. Start with a simple open curve of three line segments (a "U" shape), representing the
first-order Hilbert curve.
2. To obtain the next-order curve, place four suitably rotated and reflected copies of the
current curve in the four quadrants of the square and join them with three short
connecting segments.
3. Repeat step 2 for each order of the curve, increasing the number of line segments
and the level of recursion.
4. In the limit, the Hilbert curve is a continuous, self-similar curve that visits every point in
the two-dimensional space.
The Hilbert curve is used in a variety of applications, including image compression, data
visualization, and spatial data indexing. Because the curve visits all points in a space, it
can be used to order data in a space-filling way, and it can be used to map a two-
dimensional space into a one-dimensional space while preserving the relative spatial
relationships of the points.
It's also used in data compression and other applications where data has a natural spatial
relationship, like in geographic information systems and others.
KOCH CURVE
The Koch curve is a type of fractal curve that is generated by repeatedly replacing the
straight line segments of an initial shape with a specific pattern. It was first described by
the Swedish mathematician Helge von Koch in a 1904 paper.
The most well-known version of the Koch curve is the Koch snowflake, which is created
by repeatedly applying the Koch curve to the three sides of an equilateral triangle.
The Koch curve can be generated recursively through the following steps:
1. Start with a single line segment, representing the first-order Koch curve.
2. Replace each line segment with four segments, each one-third the length of the
original: divide the segment into three equal parts and replace the middle part with
the two sides of an equilateral triangle erected on it. The result is the next-order
curve.
3. Repeat step 2 for each order of the curve, increasing the number of line segments
and the level of recursion.
4. The final Koch curve is a continuous, self-similar curve that has infinite length but
encloses a finite area.
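The infinite length in step 4 follows from a short calculation: each iteration replaces every
segment with 4 segments of 1/3 the length, so after n iterations
\[
L_n = \left(\tfrac{4}{3}\right)^n L_0 \longrightarrow \infty \quad \text{as } n \to \infty,
\]
while the extra area added at each step shrinks fast enough that the Koch snowflake's
total area stays finite (it converges to 8/5 of the area of the starting triangle).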
The Koch curve is used in a variety of applications, including fractal image compression,
data visualization, and computer graphics. The Koch curve has a fractal property, meaning
that it has the same pattern at different scales (it is self-similar).
FRACTAL LINES AND SURFACES
A fractal is a type of geometric shape or pattern that is self-similar, meaning that it looks
the same at different scales. Fractals can be generated by applying a specific set of rules
or algorithms repeatedly, leading to the creation of highly complex shapes that exhibit a
high degree of symmetry and repetition.
Fractal lines are one-dimensional fractals that can be created by applying a specific set of
rules to a simple line segment; examples include the Koch curve, the Hilbert curve, and the
Dragon curve. These fractal lines are infinite in length, yet they fit within a bounded region
of the plane.
Fractal surfaces and regions are two-dimensional fractals that can be created by repeatedly
applying a specific set of rules; well-known examples are the Mandelbrot set and the Julia
sets. These fractal regions have a finite area, but their boundaries are infinitely detailed and
infinitely long.
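The degree of self-similarity is often measured by the similarity (fractal) dimension: if a
shape is made of N copies of itself, each scaled down by a factor s, then
\[
D = \frac{\log N}{\log s}.
\]
For the Koch curve, N = 4 and s = 3, so D = log 4 / log 3 ≈ 1.26, meaning the curve is
"more than a line but less than a surface".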