
N-Bit Colour Frame


Q1: What is a frame buffer? How is it different from the display buffer? How is a frame buffer used for putting colour and controlling the intensity of any display device?

Ans. A frame buffer is a large, contiguous piece of computer memory. At a minimum there is one memory bit for each pixel in the raster; this amount of memory is called a bit plane. The picture is built up in the frame buffer one bit at a time.
Since a memory bit has only two states, a single bit plane yields a black-and-white display.
A frame buffer is a digital device, whereas the CRT is an analog device. Therefore, a conversion from a digital representation to an analog signal must take place when information is read from the frame buffer and displayed on the raster CRT graphics device. For this you can use a digital-to-analog converter (DAC). Each pixel in the frame buffer must be accessed and converted before it is visible on the raster CRT.
N-bit Colour Frame Buffer
Color or gray scales are incorporated into a frame buffer raster graphics device by using additional bit planes. The intensity of each pixel on the CRT is controlled by a corresponding pixel location in each of the N bit planes. The binary value from each of the N bit planes is loaded into corresponding positions in a register. The resulting binary number is interpreted as an intensity level between 0 (dark) and 2^N − 1 (full intensity). This is converted into an analog voltage between 0 and the maximum voltage of the electron gun by the DAC. A total of 2^N intensity levels are possible. The figure below illustrates a system with 3 bit planes for a total of 8 (2^3) intensity levels. Each bit plane requires the full complement of memory for a given raster resolution; e.g., a 3-bit-plane frame buffer for a 1024 × 1024 raster requires 3,145,728 (3 × 1024 × 1024) memory bits.

An increase in the number of available intensity levels is achieved for a modest increase in required memory by using a lookup table. Upon reading the bit planes in the frame buffer, the resulting number is used as an index into the lookup table. The lookup table must contain 2^N entries. Each entry in the lookup table is W bits wide. W may be greater than N. When this occurs, 2^W intensities are available, but only 2^N different intensities are available at one time. To get additional intensities, the lookup table must be changed.
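As a rough sketch of this indirection (Python; N, W, the table contents, and the tiny raster are illustrative assumptions, not taken from any particular device):

    # Sketch of an N-bit frame buffer read through a W-bit lookup table.
    N = 3                                   # bits per pixel in the frame buffer
    W = 8                                   # bits per lookup-table entry (W > N)

    # 2^N entries; spread the 2^N selectable levels over the 2^W possible ones.
    lookup_table = [i * ((2 ** W - 1) // (2 ** N - 1)) for i in range(2 ** N)]

    frame_buffer = [[5, 0],                 # a 2x2 raster of N-bit pixel values
                    [7, 2]]

    for row in frame_buffer:
        for pixel in row:
            intensity = lookup_table[pixel]  # table lookup, then off to the DAC
            print(pixel, "->", intensity)

Changing the entries of lookup_table swaps in a different set of 2^N intensities without touching the bit planes themselves.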

Because there are three primary colours, a simple color frame buffer is implemented with three bit
planes, one for each primary color. Each bit plane drives an individual color gun for each of the
three primary colors used in color video. These three primaries (red, green, and blue) are combined
at the CRT to yield eight colors.
Q2: What is animation? How is it different from graphics? Explain how acceleration is simulated in animation. Discuss all the cases, i.e. zero acceleration, positive acceleration, negative acceleration, and a combination of positive and negative acceleration.
Ans.
Animation
Many Web pages use animation, which is the appearance of motion created by displaying a series of still images in sequence. Animation can make Web pages more visually interesting or draw attention to important information or links. You can create animations using any of a variety of software packages. A simple animation can be an animated GIF file, while a complex animation can be the face of a human or an alien in a movie or game.
Graphics
A graphic, or graphical image, is a digital representation of non-text information such as a drawing, chart, or photo. Many Web pages use colorful graphical designs and images to convey messages. Of the graphics formats that exist on the Web, the two most common are the JPEG and GIF formats. JPEG (pronounced JAY-peg) is a format that compresses graphics to reduce their file size, which means the file takes up less storage space.
The goal with JPEG graphics is to reach a balance between image quality and file size. Digital photos often use the JPEG format. GIF (pronounced jiff) graphics also use compression techniques to reduce file sizes. The GIF format works best for images that have only a few distinct colors, such as company logos. Some Web sites use thumbnails on their pages because large graphics can be time-consuming to display. A thumbnail is a small version of a larger graphic. You usually can click a thumbnail to display the larger image.
Acceleration is the change in velocity per unit time. An object is in a state of acceleration if it shows any of these three changes: first, if it changes its speed, which changes the magnitude of the velocity; second, if it changes its direction; and third, if it changes both. Acceleration can be positive or negative. Acceleration is positive when it acts in the direction of motion of the object. Negative acceleration arises in two ways: when an object in motion slows down, the acceleration acts opposite to the direction of motion; and when an object moving in the negative direction speeds up, the acceleration acts in the (negative) direction of the velocity. Let's discuss positive and negative acceleration with some examples.
Positive and Negative Acceleration
Positive Acceleration:
If the velocity of an object increases, the object is said to be moving with positive acceleration.
Example:
1. A ball rolling down an inclined plane.
2. When you are driving and find the road clear, you increase the speed of your car to save time. This is positive acceleration.
In other words, positive acceleration means increasing speed within a time interval, usually a very short interval of time.
Negative Acceleration :
If the velocity of an object decreases, then the object is said to be moving with negative
acceleration. Negative acceleration is also known as retardation or deceleration.

Example:
1. A ball moving up an inclined plane.
2. A ball thrown vertically upwards is moving with a negative acceleration as the velocity decreases
with time.
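These cases can be simulated frame by frame: each frame, the velocity is advanced by the acceleration and the position by the velocity, so zero acceleration produces equally spaced positions, positive acceleration a widening spacing, and negative acceleration a narrowing spacing. A minimal sketch (Python; the frame time and numeric values are illustrative assumptions):

    # Simulating acceleration in animation: one position sample per frame.
    def positions(v0, a, frames, dt=1.0):
        x, v, out = 0.0, v0, []
        for _ in range(frames):
            out.append(round(x, 2))
            v += a * dt            # acceleration changes velocity each frame
            x += v * dt            # velocity changes position each frame
        return out

    print("zero acceleration:    ", positions(v0=2, a=0, frames=6))   # equal spacing
    print("positive acceleration:", positions(v0=2, a=1, frames=6))   # spacing widens
    print("negative acceleration:", positions(v0=6, a=-1, frames=6))  # spacing narrows
    # A combination is simulated by flipping the sign of 'a' partway through.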

Q3: Explain the scan line polygon filling algorithm with the help of a suitable diagram.
Ans. A polygon is an ordered list of vertices, as shown in the following figure. To fill a polygon with a particular color, you need to determine the pixels falling on the border of the polygon and those which fall inside the polygon. Here we will see how we can fill polygons using the scan line technique.

Scan Line Algorithm


This algorithm works by intersecting the scan line with the polygon edges and filling the polygon between pairs of intersections. The following steps depict how this algorithm works.
Step 1 − Find out the Ymin and Ymax of the given polygon.

Step 2 − Intersect the scan line with each edge of the polygon from Ymin to Ymax. Name each intersection point of the polygon. As per the figure shown above, they are named p0, p1, p2, p3.
Step 3 − Sort the intersection points in increasing order of the X coordinate, i.e. (p0, p1), (p1, p2), and (p2, p3).
Step 4 − Fill all those pairs of coordinates that are inside the polygon and ignore the alternate pairs.

Q4: Write the Z-buffer algorithm for hidden surface detection. Explain how this algorithm is applied to determine hidden surfaces.
Ans.
When viewing a picture containing non-transparent objects and surfaces, it is not possible to see objects that are behind other objects closer to the eye. To get a realistic screen image, removal of these hidden surfaces is a must. The identification and removal of these surfaces is called the hidden-surface problem.
The Z-buffer, also known as the depth-buffer method, is one of the commonly used methods for hidden surface detection. It is an image space method. Image space methods are based on the pixels to be drawn in 2D. For these methods, the running time complexity is the number of pixels times the number of objects, and the space complexity is twice the number of pixels, because two arrays of pixels are required: one for the frame buffer and the other for the depth buffer.
The Z-buffer method compares surface depths at each pixel position on the projection plane. Normally the z-axis represents depth. The algorithm for the Z-buffer method is given below:
Algorithm:
First of all, initialize the depth of each pixel:
    d(i, j) = infinite (maximum depth)
Initialize the color value of each pixel:
    c(i, j) = background color
For each polygon, do the following steps:
    for (each pixel in polygon's projection)
    {
        find the depth z of the polygon at (x, y) corresponding to pixel (i, j)
        if (z < d(i, j))
        {
            d(i, j) = z;
            c(i, j) = color;
        }
    }
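A small runnable sketch of this method (Python; the scene here, with polygons given as pixel lists at constant depth, is an illustrative assumption):

    # Depth-buffer sketch: two buffers, depth test per pixel.
    WIDTH, HEIGHT = 4, 4
    INF = float("inf")
    depth = [[INF] * WIDTH for _ in range(HEIGHT)]    # d(i, j) = infinite
    color = [["bg"] * WIDTH for _ in range(HEIGHT)]   # c(i, j) = background color

    # Each polygon: (pixels it covers, constant depth z, color).
    polygons = [
        ([(0, 0), (0, 1), (1, 0), (1, 1)], 3.0, "red"),
        ([(1, 1), (1, 2), (2, 1), (2, 2)], 2.0, "blue"),  # closer where they overlap
    ]

    for pixels, z, c in polygons:
        for (i, j) in pixels:
            if z < depth[i][j]:          # keep the surface closest to the viewer
                depth[i][j] = z
                color[i][j] = c

    for row in color:
        print(row)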
Let’s consider an example to understand the algorithm in a better way. Assume the polygon
given is as below :

To start, assume that the depth of each pixel is infinite.

As the z value, i.e. the depth value, at every point of the given polygon is 3, applying the algorithm gives the result:

Now suppose the z values generated at the pixels differ, as shown below:
Therefore, in the Z-buffer method, each surface is processed separately, one position at a time across the surface. The depth values, i.e. the z values, for a pixel are compared and the closest (smallest z) surface determines the color to be displayed in the frame buffer. The z values, i.e. the depth values, are usually normalized to the range [0, 1]. When z = 0, it is known as the back clipping plane, and when z = 1, it is called the front clipping plane.
In this method, 2 buffers are used :
1. Frame buffer
2. Depth buffer
Calculation of depth:
As we know, the equation of the plane is:
ax + by + cz + d = 0, which implies
z = −(ax + by + d)/c, c ≠ 0
Calculating each depth from scratch could be very expensive, but the computation can be reduced to a single addition per pixel by using an incremental method, as shown in the figure below:

Let’s denote the depth at point A as Z and at point B as Z’. Therefore :


AX + BY + CZ + D = 0 implies
Z = (−AX − BY − D)/C ……(1)
Similarly, Z′ = (−A(X + 1) − BY − D)/C ……(2)
Hence from (1) and (2), we conclude:
Z′ = Z − A/C ……(3)
Hence, calculation of depth can be done by recording the plane equation of each polygon in the (normalized)
viewing coordinate system and then using the incremental method to find the depth Z. So, to summarize, it
can be said that this approach compares surface depths at each pixel position on the projection plane. Object
depth is usually measured from the view plane along the z-axis of a viewing system.
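For example, with assumed plane coefficients, the depths across one scan line can be produced with one subtraction per pixel after a single full evaluation (Python; the values are illustrative):

    # Sketch: incremental depth along a scan line, Z' = Z - A/C per step in x.
    A, B, C, D = 1.0, 2.0, 4.0, -20.0       # assumed plane ax + by + cz + d = 0
    x0, y = 0, 1
    z = (-A * x0 - B * y - D) / C           # full evaluation once per scan line
    for x in range(x0, x0 + 5):
        print(x, y, round(z, 3))
        z -= A / C                          # one subtraction per subsequent pixel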
Example:
Let S1, S2, S3 be three surfaces. The surface closest to the projection plane is called the visible surface. The computer would start (arbitrarily) with surface S1 and put its depth value into the buffer. It would do the same for the next surface. It would then check each overlapping pixel, see which one is closer to the viewer, and display the appropriate color. Since at view-plane position (x, y) surface S1 has the smallest depth from the view plane, it is visible at that position.

Q5: Write the midpoint circle generation algorithm. Compute the coordinate points of a circle drawn with centre (0, 0) and radius 5, using the midpoint circle algorithm.
Ans. Drawing a circle on the screen is a little more complex than drawing a line. There are two popular algorithms for generating a circle: Bresenham's Algorithm and the Midpoint Circle Algorithm. Both are based on the idea of determining the subsequent points required to draw the circle. Let us discuss the algorithms in detail. The equation of a circle is x² + y² = r², where r is the radius.

Bresenham’s Algorithm
We cannot display a continuous arc on the raster display. Instead, we have to choose the nearest pixel
position to complete the arc.
From the following illustration, you can see that we have put the pixel at (X, Y) location and now
need to decide where to put the next pixel − at N (X+1, Y) or at S (X+1, Y-1).

This can be decided by the decision parameter d.



If d <= 0, then N(X+1, Y) is to be chosen as next pixel.

If d > 0, then S(X+1, Y-1) is to be chosen as the next pixel.
Algorithm
Step 1 − Get the coordinates of the center of the circle and the radius, and store them in X, Y, and R respectively. Set P = 0 and Q = R.
Step 2 − Set decision parameter D = 3 − 2R.
Step 3 − Repeat through Step 8 while P ≤ Q.
Step 4 − Call Draw Circle (X, Y, P, Q).
Step 5 − Increment the value of P.
Step 6 − If D < 0 then D = D + 4P + 6.
Step 7 − Else set Q = Q − 1, D = D + 4(P − Q) + 10.
Step 8 − Call Draw Circle (X, Y, P, Q).
Midpoint Circle Algorithm
Find the midpoint p of the two possible next pixels, i.e. (x − 0.5, y + 1). If p lies inside or on the circle perimeter, we plot the pixel (x, y + 1); otherwise, if it lies outside, we plot the pixel (x − 1, y + 1).
Boundary condition: whether the midpoint lies inside or outside the circle can be decided by using the following formula.
Given a circle centered at (0, 0) with radius r and a point p(x, y):
F(p) = x² + y² − r²
If F(p) < 0, the point is inside the circle;
if F(p) = 0, the point is on the perimeter;
if F(p) > 0, the point is outside the circle.

Example
In our program we denote F(p) by P. The value of P is calculated at the midpoint of the two contending pixels, i.e. (x − 0.5, y + 1). Each step is indexed with a subscript k.
Pk = (xk − 0.5)² + (yk + 1)² − r²
Now,
xk+1 = xk or xk − 1, and yk+1 = yk + 1
∴ Pk+1 = (xk+1 − 0.5)² + (yk+1 + 1)² − r²
= (xk+1 − 0.5)² + [(yk + 1) + 1]² − r²
= (xk+1 − 0.5)² + (yk + 1)² + 2(yk + 1) + 1 − r²
= Pk + [(xk+1 − 0.5)² − (xk − 0.5)²] + 2(yk + 1) + 1
This gives:
Pk+1 = Pk + 2(yk + 1) + 1, when Pk ≤ 0, i.e. the midpoint is inside the circle (xk+1 = xk)
Pk+1 = Pk + 2(yk + 1) − 2(xk − 1) + 1, when Pk > 0, i.e. the midpoint is outside the circle (xk+1 = xk − 1)
The first point to be plotted is (r, 0) on the x-axis. The initial value of P is calculated as follows:
P1 = (r − 0.5)² + (0 + 1)² − r²
= 1.25 − r
≈ 1 − r (when rounded off)
Examples:
Input: Centre → (0, 0), Radius → 5

Output:
(5, 0) (-5, 0) (0, 5) (0, -5)
(5, 1) (-5, 1) (5, -1) (-5, -1)
(1, 5) (-1, 5) (1, -5) (-1, -5)
(5, 2) (-5, 2) (5, -2) (-5, -2)
(2, 5) (-2, 5) (2, -5) (-2, -5)
(4, 3) (-4, 3) (4, -3) (-4, -3)
(3, 4) (-3, 4) (3, -4) (-3, -4)
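A runnable sketch of the midpoint circle algorithm that reproduces this output (Python; the printing order of the symmetric points follows the listing above):

    # Midpoint circle algorithm: prints the 8-way symmetric points of each step.
    def midpoint_circle(xc, yc, r):
        x, y = r, 0
        print((xc + x, yc + y), (xc - x, yc + y), (xc + y, yc + x), (xc + y, yc - x))
        P = 1 - r                          # initial decision parameter, 1.25 - r rounded
        while x > y:
            y += 1
            if P <= 0:                     # midpoint inside or on the circle
                P = P + 2 * y + 1
            else:                          # midpoint outside the circle
                x -= 1
                P = P + 2 * y - 2 * x + 1
            if x < y:
                break
            print((xc + x, yc + y), (xc - x, yc + y), (xc + x, yc - y), (xc - x, yc - y))
            if x != y:                     # avoid duplicates on the 45-degree diagonal
                print((xc + y, yc + x), (xc - y, yc + x), (xc + y, yc - x), (xc - y, yc - x))

    midpoint_circle(0, 0, 5)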

Q6: Discuss the shear transformation with a suitable example. Write the shear transformation matrix for shear along the X-axis, shear along the Y-axis, and generalized shear. Show that the simultaneous shearing Shxy(a, b) is not the same as a shearing in the x-direction Shx(a) followed by a shearing in the y-direction Shy(b).

Ans. Shear transformations produce a shape distortion.
(The old coordinates are (x, y) and the new coordinates are (x′, y′).)
X-direction shear is given by the following matrix:

                      ( 1   0  0)
(x′ y′ 1) = (x y 1) · (SHx  1  0)
                      ( 0   0  1)

which produces a shearing along x that is proportional to y:
x′ = x + SHx · y
y′ = y
1 = 1
Y-direction shear is given by the following matrix:

                      (1  SHy  0)
(x′ y′ 1) = (x y 1) · (0   1   0)
                      (0   0   1)

which produces a shearing along y that is proportional to x:
x′ = x
y′ = x · SHy + y
1 = 1

xy-shear about the origin

Let an object point P(x, y) be moved to P′(x′, y′) as a result of a shear transformation in both the x- and y-directions, with shearing factors a and b respectively. The points P(x, y) and P′(x′, y′) have the following relationship:

x′ = x + ay
y′ = y + bx        …(1)

where ay and bx are the shear terms in the x and y directions, respectively. The xy-shear is also called simultaneous shearing, or shearing for short; we denote it Shxy(a, b).
In matrix form, we have:

(x′ y′) = (x y) · (1  b)
                  (a  1)        …(2)

In terms of homogeneous coordinates, we have:

(x′ y′ 1) = (x y 1) · (1  b  0)
                      (a  1  0)
                      (0  0  1)

That is, P′h = Ph · Shxy(a, b) …(3)
where Ph and P′h represent the object point, before and after the required transformation, in homogeneous coordinates, and Shxy(a, b) is the homogeneous transformation matrix for xy-shear in both the x- and y-directions with shearing factors a and b, respectively.
Special case: when we put b = 0 in equation (3) we have shearing in the x-direction, and when a = 0 we have shearing in the y-direction.
Simultaneous shearing is not the same as shearing in x followed by shearing in y. Multiplying the two single-shear matrices gives:

Shx(a) · Shy(b) = (1  0  0)   (1  b  0)   (1    b     0)
                  (a  1  0) · (0  1  0) = (a  1 + ab  0)
                  (0  0  1)   (0  0  1)   (0    0     1)

The (2, 2) entry of the product is 1 + ab rather than 1, so Shx(a) followed by Shy(b) equals Shxy(a, b) only in the trivial case ab = 0.
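A quick numerical check of this difference (a numpy sketch, using the row-vector convention above; the values a = 2 and b = 3 are arbitrary):

    import numpy as np

    a, b = 2.0, 3.0
    Shx  = np.array([[1, 0, 0], [a, 1, 0], [0, 0, 1]])   # shear in x by a
    Shy  = np.array([[1, b, 0], [0, 1, 0], [0, 0, 1]])   # shear in y by b
    Shxy = np.array([[1, b, 0], [a, 1, 0], [0, 0, 1]])   # simultaneous shear

    composite = Shx @ Shy        # row vectors: apply Shx first, then Shy
    print(composite)             # [[1, b, 0], [a, a*b + 1, 0], [0, 0, 1]]
    print(np.array_equal(composite, Shxy))   # False: entry (2, 2) is 1 + ab, not 1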
Q7: What is the role of light in computer graphics? Discuss Lambert's cosine law. Explain ambient, diffuse, and specular reflection. Give the general mathematical expression of each, and also give the mathematical expression to determine the intensity when all three types of reflection are available.

Ans. Lighting in Computer Graphics refers to the placement of lights in a scene to achieve some desired effect.
Image synthesis and animation packages all contain different types of lights that can be placed in different
locations and modified by changing the parameters. Too often, people who are creating images or animations
ignore or place little emphasis on lighting. This is unfortunate since lighting is a very important part of image
synthesis. The proper use of lights in a scene is one of the things that differentiates the talented CG people
from the untalented. This is not a new topic as a large amount of work has been done on lighting issues in
photography, film, and video. Since Image Synthesis is trying to emulate reality, we can learn much from this
previous work.
Lighting can be used to create more of a 3D effect by separating the foreground from the background, or it
can merge the two to create a flat 2D effect. It can be used to set an emotional mood and to influence the
viewer.
In optics ( Physics ), Lambert's cosine law states that the radiant intensity or luminous intensity observed from
an ideal diffusely reflecting surface is directly proportional to the cosine of the angle θ formed between the
direction of the incident light and the surface normal.

It states that when light falls obliquely on a surface, the illumination of the surface is directly proportional to the cosine of the angle θ between the direction of the incident light and the surface normal. The law is also known as the cosine emission law or Lambert's emission law. It is used to find the illumination of a surface when light falls on it along an oblique direction.
When light strikes a surface, some of it will be reflected. Exactly how it reflects depends in a
complicated way on the nature of the surface, what I am calling the material properties of the
surface. In OpenGL (and in many other computer graphics systems), the complexity is approximated
by two general types of reflection, specular reflection and diffuse reflection.

In perfect specular ("mirror-like") reflection, an incoming ray of light is reflected from the surface
intact. The reflected ray makes the same angle with the surface as the incoming ray. A viewer can
see the reflected ray only if the viewer is in exactly the right position, somewhere along the path of
the reflected ray. Even if the entire surface is illuminated by the light source, the viewer will only
see the reflection of the light source at those points on the surface where the geometry is right.
Such reflections are referred to as specular highlights. In practice, we think of a ray of light as being
reflected not as a single perfect ray, but as a cone of light, which can be more or less narrow.
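Putting the three reflection types into formulas (standard notation, stated here as a summary: Ia is the ambient light intensity, Il the intensity of the light source, ka, kd, ks the ambient, diffuse, and specular reflection coefficients, N the unit surface normal, L the unit vector toward the light, R the reflection direction, V the view direction, and n the specular exponent):

Ambient reflection: I = ka · Ia
Diffuse reflection (Lambert's cosine law): I = kd · Il · (N · L) = kd · Il · cos θ
Specular reflection (Phong model): I = ks · Il · (R · V)^n = ks · Il · (cos α)^n

When all three types of reflection are available, the intensity at a point is their sum:

I = ka · Ia + Il · [kd · (N · L) + ks · (R · V)^n]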

Q8: Discuss the taxonomy of projection with a suitable diagram. How does perspective projection differ from parallel projection? Derive a transformation matrix for the perspective projection of a point P(x, y, z) onto the z = 4 plane, as viewed from E(0, 0, −d).
Ans. In the 2D system, we use only two coordinates X and Y but in 3D, an extra coordinate Z is added. 3D
graphics techniques and their application are fundamental to the entertainment, games, and computer-aided
design industries. It is a continuing area of research in scientific visualization.
Furthermore, 3D graphics components are now a part of almost every personal computer and, although
traditionally intended for graphics-intensive software such as games, they are increasingly being used by other
applications.

Drawing is a visual art that has been used by man for self-expression throughout history. It uses
pencils, pens, colored pencils, charcoal, pastels, markers, and ink brushes to mark different types of
medium such as canvas, wood, plastic, and paper.
It involves the portrayal of objects on a flat surface such as the case in drawing on a piece of paper or
a canvas and involves several methods and materials. It is the most common and easiest way of
recreating objects and scenes on a two-dimensional medium.
Perspective projection is seeing things larger when they’re up close and smaller at a distance. It is a
three-dimensional projection of objects on a two-dimensional medium such as paper. It allows an
artist to produce a visual reproduction of an object which resembles the real one.

Parallel projection, on the other hand, resembles seeing objects which are located far from the viewer through a telescope. It works by making the projecting rays parallel, thus doing away with the effect of depth in the drawing. Objects produced using parallel projection do not appear larger when they are near or smaller when they are far. It is very useful in architecture, and when accurate measurements are involved, parallel projection is best.

Solution: Plane of projection: z = 4 (given)


Let P(x, y, z) be any point in space.
We know the parametric equation of a line starting from A and passing through B is:
P(t) = A + t·(B − A), 0 < t < ∞
So the parametric equation of the line starting from E(0, 0, −d) and passing through P(x, y, z) is:
E + t·(P − E), 0 < t < ∞
= (0, 0, −d) + t·[(x, y, z) − (0, 0, −d)]
= (t·x, t·y, t·(z + d) − d)
Assume point P′ is obtained when t = t*:
∴ P′ = (x′, y′, z′) = (t*·x, t*·y, t*·(z + d) − d)
Since P′ lies on the z = 4 plane,
t*·(z + d) − d = 4 must hold, so
t* = (4 + d)/(z + d)
P′ = (x′, y′, z′) = ((4 + d)·x/(z + d), (4 + d)·y/(z + d), 4)

In the homogeneous coordinate system:

P′ = ((4 + d)·x, (4 + d)·y, 4·(z + d), z + d) ……(1)
In matrix form: (x′, y′, z′, w′) = (x, y, z, 1) · M ……(2)

      ( 4+d   0    0   0 )
M =   (  0   4+d   0   0 )
      (  0    0    4   1 )
      (  0    0   4d   d )

Thus, equation (2) gives the required transformation matrix for the perspective view from E(0, 0, −d).
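A quick numeric check of this matrix (a numpy sketch; the test point and the value d = 2 are arbitrary choices):

    import numpy as np

    d = 2.0
    M = np.array([[4 + d, 0,     0,     0],
                  [0,     4 + d, 0,     0],
                  [0,     0,     4,     1],
                  [0,     0,     4 * d, d]])

    P = np.array([3.0, 6.0, 8.0, 1.0])   # the point (x, y, z) = (3, 6, 8)
    Ph = P @ M                           # row vector times matrix
    print(Ph / Ph[3])                    # [1.8, 3.6, 4.0, 1.0] -- z' is 4, as required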

Q9: Write the Bresenham line drawing algorithm and the DDA algorithm. Compare both algorithms and identify which one is better and why. Draw a line segment joining (4, 8) and (8, 10) using both algorithms, i.e. the Bresenham line drawing algorithm and the DDA algorithm.

Ans. Comparison of the DDA and Bresenham line drawing algorithms:

Arithmetic: The DDA algorithm uses floating-point (real) arithmetic. Bresenham's algorithm uses fixed-point (integer) arithmetic.
Operations: The DDA algorithm uses multiplication and division in its operations. Bresenham's algorithm uses only subtraction and addition.
Speed: The DDA algorithm is slower than Bresenham's algorithm at line drawing because it uses real (floating-point) arithmetic. Bresenham's algorithm performs only integer addition and subtraction, so it runs significantly faster.
Accuracy & efficiency: The DDA algorithm is not as accurate and efficient as Bresenham's algorithm; Bresenham's algorithm is more efficient and more accurate.
Drawing: The DDA algorithm can draw circles and curves, but not as accurately as Bresenham's algorithm, which can draw circles and curves with much more accuracy.
Round-off: The DDA algorithm rounds off the coordinates to the nearest integer. Bresenham's algorithm does not round off but takes the incremental value in its operation.
Expense: The DDA algorithm uses an enormous number of floating-point multiplications, so it is expensive. Bresenham's algorithm is less expensive, as it uses only addition and subtraction.

Comparison
• DDA uses floating-point arithmetic whereas the Bresenham algorithm uses fixed-point arithmetic.
• DDA rounds off the coordinates to the nearest integer but the Bresenham algorithm does not.
• The Bresenham algorithm is more accurate and efficient than DDA.
• The Bresenham algorithm can draw circles and curves with much more accuracy than DDA.
• DDA uses multiplication and division in its equations, but the Bresenham algorithm uses subtraction and addition only.
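As a sketch, here are both algorithms applied to the segment from (4, 8) to (8, 10) asked for in the question (Python; written for the gentle-slope case where x is the driving axis, and the DDA rounding uses int(v + 0.5)):

    # DDA: step with the exact real increments, rounding each point to a pixel.
    def dda(x1, y1, x2, y2):
        steps = max(abs(x2 - x1), abs(y2 - y1))
        xinc, yinc = (x2 - x1) / steps, (y2 - y1) / steps
        x, y, pts = float(x1), float(y1), []
        for _ in range(steps + 1):
            pts.append((int(x + 0.5), int(y + 0.5)))   # round to nearest pixel
            x, y = x + xinc, y + yinc
        return pts

    # Bresenham (integer-only), written for the case 0 <= slope <= 1, x1 < x2.
    def bresenham(x1, y1, x2, y2):
        dx, dy = x2 - x1, y2 - y1
        p, y, pts = 2 * dy - dx, y1, []
        for x in range(x1, x2 + 1):
            pts.append((x, y))
            if p > 0:
                y += 1
                p += 2 * dy - 2 * dx
            else:
                p += 2 * dy
        return pts

    print("DDA:      ", dda(4, 8, 8, 10))        # (4,8) (5,9) (6,9) (7,10) (8,10)
    print("Bresenham:", bresenham(4, 8, 8, 10))  # (4,8) (5,8) (6,9) (7,9) (8,10)

The two rasterizations differ at x = 5 and x = 7, where the true line passes exactly midway between two pixel centers and the two rounding conventions break the tie differently; both are valid approximations of the segment.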
Q10: What is a Bezier curve? Discuss the role of the Bernstein polynomial in the Bezier curve. How do Bezier curves contribute to Bezier surfaces?
Prove the following properties of the Bezier curve:
(i) P(u = 1) = Pn
(ii) P(u = 0) = P0
Given four control points P0(2, 2), P1(3, 4), P2(5, 4) and P3(4, 2) as vertices of a Bezier curve, determine four points on the Bezier curve with the given control points.

Ans. Bezier Curves

The Bezier curve is named after the French engineer Pierre Bézier, who popularized it. These curves are generated under the control of other points: approximate tangents given by the control points are used to generate the curve. The Bezier curve can be represented mathematically as

P(t) = Σ (i = 0 to n) Pi · Bn,i(t)

where Pi is the set of control points and Bn,i(t) represents the Bernstein polynomials, which are given by

Bn,i(t) = C(n, i) · (1 − t)^(n − i) · t^i

where n is the polynomial degree, i is the index, C(n, i) is the binomial coefficient, and t is the parameter. The simplest Bézier curve is the straight line from the point P0 to P1. A quadratic Bezier curve is determined by three control points, and a cubic Bezier curve by four control points. Bezier curves contribute to Bezier surfaces through a tensor product: a Bezier surface P(u, v) = Σi Σj Pi,j · Bn,i(u) · Bm,j(v) is defined by a rectangular grid of control points, so that each parameter direction of the surface behaves like a Bezier curve.
Properties of Bezier Curves
Bezier curves have the following properties −
• They generally follow the shape of the control polygon, which consists of the segments joining the control
points.
• They always pass through the first and last control points.
• They are contained in the convex hull of their defining control points.
• The degree of the polynomial defining the curve segment is one less than the number of defining polygon points. Therefore, for 4 control points, the degree of the polynomial is 3, i.e. a cubic polynomial.
• A Bezier curve generally follows the shape of the defining polygon.
• The direction of the tangent vector at the end points is same as that of the vector determined by first and
last segments.
• The convex hull property for a Bezier curve ensures that the polynomial smoothly follows the control points.
• No straight line intersects a Bezier curve more times than it intersects its control polygon.
• They are invariant under an affine transformation.

Proof of the endpoint properties:
(i) P(u = 1) = Pn: at u = 1, every Bernstein polynomial Bn,i(1) = C(n, i) · (1 − 1)^(n − i) · 1^i vanishes except the one with i = n, for which Bn,n(1) = 1. Hence P(1) = Pn.
(ii) P(u = 0) = P0: at u = 0, every Bernstein polynomial Bn,i(0) = C(n, i) · 1^(n − i) · 0^i vanishes except the one with i = 0, for which Bn,0(0) = 1. Hence P(0) = P0.
Thus the curve always starts at the first control point and ends at the last.
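A short sketch computing four points on the cubic curve for the given control points, at t = 0, 1/3, 2/3, 1 (Python; the choice of these four parameter values is an assumption, since any four values of t in [0, 1] would do):

    # Evaluate the cubic Bezier curve B(t) for control points P0..P3.
    from math import comb

    P = [(2, 2), (3, 4), (5, 4), (4, 2)]   # P0, P1, P2, P3 from the question
    n = 3

    def bezier(t):
        x = sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * P[i][0] for i in range(n + 1))
        y = sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * P[i][1] for i in range(n + 1))
        return round(x, 2), round(y, 2)

    for t in (0, 1/3, 2/3, 1):
        print(t, bezier(t))
    # Roughly: (2, 2), (3.19, 3.33), (4.15, 3.33), (4, 2)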
Q11: What is the advantage of using homogenous co-ordinate system over Euclidean coordinate system?
Consider the square ABCD with vertices A(0, 0), B(0, 2), C(2, 0), D(2, 2).
Perform the following transformations:
(i) Scale up the polygon to twice its size.
(ii) Rotate the polygon by 45° in the anticlockwise direction.
(iii) Translate the centroid of the polygon to the point (3, 5).

Ans. Advantages of homogeneous coordinates:

Representing all transformations as matrix multiplications
Two-dimensional coordinates are represented using three-element vectors, and a transformation operation is represented by a 3 × 3 matrix, which can be written in the abbreviated form P′ = P · M (row-vector convention) or P′ = M · P (column-vector convention).

Capturing composite transformations conveniently


On the basis of the matrix product of the individual transformations, we can set up a matrix for any sequence of transformations, known as the composite transformation matrix. For the row-matrix representation we form composite transformations by multiplying matrices in order from left to right, whereas in the column-matrix representation we form composite transformations by multiplying matrices in order from right to left.

Non-linear transformations (3D perspective transformations)

STANDARD PERSPECTIVE PROJECTION

A perspective transformation is a transformation from one three-space into another three-space. In contrast to the parallel transformation, in perspective transformations parallel lines converge, object size is reduced with increasing distance from the center of projection, and non-uniform foreshortening of lines in the object occurs as a function of the orientation and distance of the object from the center of projection. All of these effects aid the depth perception of the human visual system, but the shape of the object is not preserved. Perspective drawings are characterized by perspective foreshortening and vanishing points. Perspective foreshortening is the illusion that objects and lengths appear smaller as their distance from the center of projection increases. The illusion that certain sets of parallel lines appear to meet at a point is another feature of perspective drawings. These points are called vanishing points. Principal vanishing points are formed by the apparent intersection of lines parallel to one of the three x, y, or z axes. The number of principal vanishing points is determined by the number of principal axes intersected by the view plane.

Perspective Anomalies
1. Perspective foreshortening − The farther an object is from the center of projection, the smaller it appears.
2. Vanishing points − Projections of lines that are not parallel to the view plane (i.e. lines that are not perpendicular to the view-plane normal) appear to meet at some point on the view plane. This point is called the vanishing point. A vanishing point corresponds to every set of parallel lines. Vanishing points corresponding to the three principal directions are referred to as principal vanishing points (PVPs). We can thus have at most three PVPs. If one or more of these are at infinity (that is, parallel lines in that direction continue to appear parallel on the projection plane), we get a 1- or 2-PVP perspective projection.

Transformation Matrix for Standard Perspective Projection
[Figure: standard perspective projection matrix, with the view plane at a distance d from the center of projection]

(i) Scale up the polygon to twice its size, using the homogeneous scaling matrix with Sx = Sy = 2:

                      (2  0  0)
(x′ y′ 1) = (x y 1) · (0  2  0)
                      (0  0  1)

Applying this to the four vertices gives:
A(0, 0) → (0, 0)
B(0, 2) → (0, 4)
C(2, 0) → (4, 0)
D(2, 2) → (4, 4)
The new coordinates are A(0, 0), B(0, 4), C(4, 0), D(4, 4).
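All three parts can be checked numerically (a numpy sketch, row-vector convention; it assumes the rotation is about the origin and that each transformation is applied to the original square, whose centroid is (1, 1)):

    import numpy as np

    square = np.array([[0, 0, 1], [0, 2, 1], [2, 0, 1], [2, 2, 1]])  # A, B, C, D

    # (i) Scale to twice the size.
    S = np.array([[2, 0, 0], [0, 2, 0], [0, 0, 1]])
    print(square @ S)                  # A(0,0), B(0,4), C(4,0), D(4,4)

    # (ii) Rotate by 45 degrees anticlockwise about the origin.
    c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
    R = np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])
    print(np.round(square @ R, 3))

    # (iii) Translate the centroid (1, 1) to (3, 5), i.e. shift by (2, 4).
    T = np.array([[1, 0, 0], [0, 1, 0], [2, 4, 1]])
    print(square @ T)                  # A(2,4), B(2,8), C(4,4), D(4,8)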
Q12: Derive the 2D transformation matrix for reflection about the line y = x. Use this transformation matrix to reflect the triangle A(0, 0), B(1, 1), C(2, 0) about the line = 2.
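As a brief sketch of the first part: reflection about the line y = x swaps the x and y coordinates, so in the row-vector homogeneous convention used elsewhere in this document the matrix has rows (0, 1, 0), (1, 0, 0), (0, 0, 1). Applied to the given triangle:

    import numpy as np

    # Reflection about y = x: x' = y, y' = x (sketch; row-vector convention).
    M = np.array([[0, 1, 0],
                  [1, 0, 0],
                  [0, 0, 1]])
    triangle = np.array([[0, 0, 1], [1, 1, 1], [2, 0, 1]])   # A, B, C
    print(triangle @ M)   # A'(0, 0), B'(1, 1), C'(0, 2)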

Q13: Why is shading required in computer graphics? Briefly discuss the role of interpolation techniques in shading. Compare intensity interpolation and normal interpolation. Which interpolation technique contributes to which type of shading? Which shading technique is better, Phong shading or Gouraud shading? Give reasons.

Ans. Shading is used in drawing to depict levels of darkness on paper by applying media more densely, or with a darker shade, for darker areas, and less densely, or with a lighter shade, for lighter areas. There are various techniques of shading, including crosshatching, where perpendicular lines of varying closeness are drawn in a grid pattern to shade an area. The closer the lines are together, the darker the area appears; likewise, the farther apart the lines are, the lighter the area appears.
Light patterns, such as objects having light and shaded areas, help when creating the illusion of
depth on paper.
Powder shading is a sketching shading method. In this style, the stumping powder and paper
stumps are used to draw a picture. This can be in color. The stumping powder is smooth and
doesn't have any shiny particles. The poster created with powder shading looks more beautiful
than the original. The paper to be used should have small grains on it so that the powder
remains on the paper.
Interpolation techniques
When calculating the brightness of a surface during rendering, our illumination model requires that we
know the surface normal. However, a 3D model is usually described by a polygon mesh, which may only
store the surface normal at a limited number of points, usually either in the vertices, in the polygon
faces, or in both. To get around this problem, one of a number of interpolation techniques can be used.
Flat shading
Here, a color is calculated for one point on each polygon (usually for the first vertex in the polygon, but
sometimes for the centroid for triangle meshes), based on the polygon's surface normal and on the
assumption that all polygons are flat. The color everywhere else is then interpolated by coloring all
points on a polygon the same as the point for which the color was calculated, giving each polygon a
uniform color (similar to nearest-neighbor interpolation). It is usually used for high-speed rendering where more advanced shading techniques are too computationally expensive. As a result of flat shading, the whole polygon receives a single color, which makes adjacent polygons easy to tell apart. Specular highlights are rendered poorly with flat shading: if there happens to be a large specular component at the representative vertex, that brightness is drawn uniformly over the entire face, and if a specular highlight does not fall on the representative point, it is missed entirely. Consequently, the specular reflection component is usually not included in the flat shading computation.

Smooth shading
In contrast to flat shading where the colors change discontinuously at polygon borders, with smooth
shading the color changes from pixel to pixel, resulting in a smooth color transition between two
adjacent polygons. Usually, values are first calculated in the vertices and bilinear interpolation is
then used to calculate the values of pixels between the vertices of the polygons.
Types of smooth shading include:
• Gouraud shading
• Phong shading

Gouraud shading
• Determine the normal at each polygon vertex.
• Apply an illumination model to each vertex to calculate the light intensity from the vertex normal.
• Interpolate the vertex intensities using bilinear interpolation over the surface polygon.
Data structures
• Sometimes vertex normals can be computed directly (e.g. height field with uniform mesh)
• More generally, need data structure for mesh
• Key: which polygons meet at each vertex.
Advantages
• Polygons, more complex than triangles, can also have different colors specified for each vertex.
In these instances, the underlying logic for shading can become more intricate.
Problems
• Even the smoothness introduced by Gouraud shading may not prevent the appearance of shading differences between adjacent polygons.
• Gouraud shading is more CPU intensive and can become a problem when rendering real time
environments with many polygons.
• T-Junctions with adjoining polygons can sometimes result in visual anomalies. In general, T
Junctions should be avoided.
Phong shading
Phong shading is similar to Gouraud shading, except that instead of interpolating the light intensities,
the normals are interpolated between the vertices. Thus, the specular highlights are computed much
more precisely than in the Gouraud shading model:
1. Compute a normal N for each vertex of the polygon.
2. Using bilinear interpolation, compute a normal Ni for each pixel. (This must be renormalized each time.)
3. Apply an illumination model to each pixel to calculate the light intensity from Ni.
Other approaches
Both Gouraud shading and Phong shading can be implemented using bilinear interpolation. Bishop and
Weimer proposed to use a Taylor series expansion of the resulting expression from applying
an illumination model and bilinear interpolation of the normals. Hence, second degree polynomial
interpolation was used. This type of biquadratic interpolation was further elaborated by Barrera et
al., where one second order polynomial was used to interpolate the diffuse light of the Phong reflection
model and another second order polynomial was used for the specular light.
Spherical linear interpolation (Slerp) was used by Kuijk and Blake for computing both the normal over the polygon and the vector in the direction of the light source. A similar approach was proposed by Hast, which uses quaternion interpolation of the normals, with the advantage that the normal will always have unit length and the computationally heavy normalization is avoided.

Intensity Interpolation vs. Normal Interpolation: Phong Shading and Gouraud Shading
Phong Shading: Phong shading overcomes some of the disadvantages of Gouraud shading, and specular reflection can be successfully incorporated into the scheme. The first stage in the process is the same as for Gouraud shading: for any polygon we evaluate the vertex normals. For each scan line in the polygon we evaluate, by linear interpolation, the normal vectors at the ends of the line. These two vectors Na and Nb are then used to interpolate Ns. We thus derive a normal vector for each point or pixel on the polygon that is an approximation to the real normal on the curved surface approximated by the polygon. Ns, the interpolated normal vector, is then used in the intensity calculation. The vector interpolation tends to restore the curvature of the original surface that has been approximated by a polygon mesh.

These are vector equations that would each be implemented as a set of three equations, one for each of the components of the vectors in world space. This makes the Phong shading interpolation phase three times as expensive as Gouraud shading. In addition, there is an application of the Phong model intensity equation at every pixel. The incremental computation can likewise be used for the interpolation.
The implementation of Phong Shading is as follows:
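(A minimal sketch of that per-pixel loop, in Python; the names, the simple ambient-plus-diffuse model, and the sample values are illustrative assumptions rather than the original listing.)

    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    # Phong shading across one scan line: interpolate the NORMAL per pixel,
    # renormalize it, then apply the illumination model at every pixel.
    def phong_scanline(Na, Nb, xa, xb, light_dir, ka=0.1, kd=0.8):
        L = normalize(np.array(light_dir))
        for x in range(xa, xb + 1):
            t = (x - xa) / (xb - xa)
            Ns = normalize((1 - t) * Na + t * Nb)   # interpolated, renormalized normal
            intensity = ka + kd * max(0.0, Ns @ L)  # illumination model per pixel
            print(x, round(intensity, 3))

    phong_scanline(np.array([0.0, 0.0, 1.0]), np.array([0.7, 0.0, 0.7]),
                   xa=0, xb=4, light_dir=[0.0, 0.0, 1.0])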

So in Phong shading the attributes interpolated are the vertex normals, rather than the vertex intensities. Interpolation of normals allows highlights smaller than a polygon.

Gouraud Shading: In Gouraud shading, the intensity at each vertex of the polygon is first calculated by applying the illumination equation. The normal N used in this equation is the vertex normal, which is calculated as the average of the normals of the polygons that share the vertex. This is an important feature of Gouraud shading, and the vertex normal is an approximation to the true normal of the surface at that point. The intensities at the ends of each scan line are calculated from the vertex intensities, and the intensities along a scan line from these. For computational efficiency these interpolation equations are often implemented as incremental calculations, so the intensity of one pixel can be calculated from the previous pixel by adding a constant intensity increment, where CScene.ZBuf is the data structure that stores the depth of the pixel for hidden-surface removal (discussed later) and CScene.frameBuf is the buffer that stores the pixel value; this is implemented for one active scan line. In Gouraud shading, anomalies can appear in animated sequences because the intensity interpolation is carried out in screen coordinates from vertex normals calculated in world coordinates. No highlight is smaller than a polygon.

Q14: What is the windowing transformation? Discuss a real-life example where you can apply the windowing transformation. Explain the concept of window-to-viewport transformation with the help of a suitable diagram and calculations.

Ans. Window:
1. A world-coordinate area selected for display is called a window.
2. In computer graphics, a window is a graphical control element.
3. It consists of a visual area containing some of the graphical user interface of the program it
belongs to and is framed by a window decoration.
4. A window defines a rectangular area in world coordinates. You define a window with a
GWINDOW statement. You can define the window to be larger than, the same size as, or smaller
than the actual range of data values, depending on whether you want to show all of the data or only
part of the data.
Viewport:
1. An area on a display device to which a window is mapped is called a viewport.
2. A viewport is a polygon viewing region in computer graphics. The viewport is an area expressed
in rendering-device-specific coordinates, e.g. pixels for screen coordinates, in which the objects of
interest are going to be rendered.
3. A viewport defines in normalized coordinates a rectangular area on the display device where the
image of the data appears. You define a viewport with the GPORT command.
You can have your graph take up the entire display device or show it in only a portion, say the upper
right part.
Window to viewport transformation:
1. Window-to-Viewport transformation is the process of transforming a two-dimensional, world
coordinate scene to device coordinates.
2. In particular, objects inside the world or clipping window are mapped to the viewport. The
viewport is displayed in the interface window on the screen.
3. In other words, the clipping window is used to select the part of the scene that is to be displayed.
The viewport then positions the scene on the output device.
Example:

1. This transformation involves developing formulas that start with a point in the world window, say
(xw, yw).
2. The formula is used to produce a corresponding point in viewport coordinates, say (xv, yv). We
would like for this mapping to be "proportional" in the sense that if xw is 30% of the way from the
left edge of the world window, then xv is 30% of the way from the left edge of the viewport.
3. Similarly, if yw is 30% of the way from the bottom edge of the world window, then yv is 30% of
the way from the bottom edge of the viewport. The picture below shows this proportionality.
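Written out, this proportionality gives the usual mapping (a Python sketch; the window and viewport bounds in the example are illustrative):

    # Window-to-viewport mapping: xv is the same fraction across the viewport
    # as xw is across the window (and likewise for y).
    def window_to_viewport(xw, yw, win, vp):
        xwmin, ywmin, xwmax, ywmax = win
        xvmin, yvmin, xvmax, yvmax = vp
        sx = (xvmax - xvmin) / (xwmax - xwmin)
        sy = (yvmax - yvmin) / (ywmax - ywmin)
        xv = xvmin + (xw - xwmin) * sx
        yv = yvmin + (yw - ywmin) * sy
        return xv, yv

    # Example: a 100x100 world window mapped onto a 50x50 viewport at (10, 10).
    print(window_to_viewport(30, 30, win=(0, 0, 100, 100), vp=(10, 10, 60, 60)))

Here (30, 30) lies 30% of the way across a 100 × 100 window, and it maps to (25, 25), which is 30% of the way across the 50 × 50 viewport.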

1. The position of the viewport can be changed allowing objects to be viewed at different positions
on the Interface Window.
2. Multiple viewports can also be used to display different sections of a scene at different screen
positions. Also, by changing the dimensions of the viewport, the size and proportions of the objects
being displayed can be manipulated.
3. Thus, a zooming effect can be achieved by successively mapping different-dimensioned clipping windows onto a fixed-size viewport.
4. If the aspect ratio of the world window and the viewport are different, then the image may look
distorted.

Q15: Write and explain the pseudocode for the Sutherland-Hodgman polygon clipping algorithm. Using this algorithm, clip the following polygon LMNO against the rectangular window ABCD as given below.

Pseudo code
Given a list of edges in a clip polygon, and a list of vertices in a subject polygon, the following
procedure clips the subject polygon against the clip polygon.
List outputList = subjectPolygon;
for (Edge clipEdge in clipPolygon) do
List inputList = outputList;
outputList.clear();
Point S = inputList.last;
for (Point E in inputList) do
if (E inside clipEdge) then
if (S not inside clipEdge) then
outputList.add(ComputeIntersection(S,E,clipEdge));
end if
outputList.add(E);
else if (S inside clipEdge) then
outputList.add(ComputeIntersection(S,E,clipEdge));
end if
S = E;
done
done
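A runnable version of this pseudocode (Python; it assumes a convex clip polygon with vertices listed anticlockwise, and uses the standard inside test and line-intersection routine; the subject triangle and square window stand in for the figure's LMNO and ABCD):

    # Sutherland-Hodgman polygon clipping: clip the subject polygon against
    # each edge of a convex clip polygon in turn.
    def clip(subject, clipper):
        output = list(subject)
        n = len(clipper)
        for i in range(n):
            A, B = clipper[i], clipper[(i + 1) % n]        # one clip edge A->B
            inputs, output = output, []
            if not inputs:
                break                                      # polygon fully clipped away
            S = inputs[-1]
            for E in inputs:
                if inside(E, A, B):
                    if not inside(S, A, B):
                        output.append(intersect(S, E, A, B))
                    output.append(E)
                elif inside(S, A, B):
                    output.append(intersect(S, E, A, B))
                S = E
        return output

    def inside(P, A, B):        # left of directed edge A->B (anticlockwise clipper)
        return (B[0] - A[0]) * (P[1] - A[1]) - (B[1] - A[1]) * (P[0] - A[0]) >= 0

    def intersect(S, E, A, B):  # intersection of line SE with the clip edge's line AB
        x1, y1, x2, y2 = S[0], S[1], E[0], E[1]
        x3, y3, x4, y4 = A[0], A[1], B[0], B[1]
        den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        tx = x1 * y2 - y1 * x2
        ty = x3 * y4 - y3 * x4
        return ((tx * (x3 - x4) - (x1 - x2) * ty) / den,
                (tx * (y3 - y4) - (y1 - y2) * ty) / den)

    window   = [(0, 0), (10, 0), (10, 10), (0, 10)]   # clip window, anticlockwise
    triangle = [(5, 5), (15, 5), (5, 12)]             # subject polygon
    print(clip(triangle, window))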
Sutherland - Hodgman Polygon Clipping
The Sutherland-Hodgman algorithm performs clipping of a polygon against each window edge in turn. It accepts an ordered sequence of vertices v1, v2, v3, ..., vn and outputs a set of vertices defining the clipped polygon.

This figure represents a polygon (the large, solid, upward pointing arrow) before clipping has
occurred.
The following figures show how this algorithm works at each edge, clipping the polygon.
b. Clipping against the top side of the clip window.
c. Clipping against the right side of the clip window.
d. Clipping against the bottom side of the clip window.
Four Types of Edges
As the algorithm goes around the edges of the window, clipping the polygon, it encounters four types
of edges. All four edge types are illustrated by the polygon in the following figure. For each edge
type, zero, one, or two vertices are added to the output list of vertices that define the clipped polygon.

The four types of edges are:


1. Edges that are totally inside the clip window. - add the second inside vertex point
2. Edges that are leaving the clip window. - add the intersection point as a vertex
3. Edges that are entirely outside the clip window. - add nothing to the vertex output list
4. Edges that are entering the clip window. - save the intersection and inside points as vertices
How To Calculate Intersections
Assume that we're clipping a polygon's edge with vertices at (x1, y1) and (x2, y2) against a clip window with corners at (xmin, ymin) and (xmax, ymax).
The location (IX, IY) of the intersection of the edge with the left side of the window is:
i. IX = xmin
ii. IY = slope*(xmin-x1) + y1, where the slope = (y2-y1)/(x2-x1)
The location of the intersection of the edge with the right side of the window is:
i. IX = xmax
ii. IY = slope*(xmax-x1) + y1, where the slope = (y2-y1)/(x2-x1)

The intersection of the polygon's edge with the top side of the window is:
i. IX = x1 + (ymax - y1) / slope
ii. IY = ymax
Finally, the intersection of the edge with the bottom side of the window is:
i. IX = x1 + (ymin - y1) / slope
ii. IY = ymin

Q16: Explain any five of the following terms with the help of a suitable diagram/example, if needed.
(a) Ray Tracing (b) Ray Casting (c) Audio file formats
(d) Video file formats (e) Authoring tools

Ans.
(a) Ray Tracing:
Ray Tracing is a global illumination based rendering method. It traces rays of light from the eye back
through the image plane into the scene. Then the rays are tested against all objects in the scene to
determine if they intersect any objects. If the ray misses all objects, then that pixel is shaded the
background color. Ray tracing handles shadows, multiple specular reflections, and texture mapping
in a very easy straight-forward manner.
Note that ray tracing, like scan-line graphics, is a point sampling algorithm.

(b)Ray Casting.
Ray casting is a rendering technique used in computer graphics and computational geometry. It is
capable of creating a three-dimensional perspective in a two-dimensional map. Developed by
scientists at the Mathematical Applications Group in the 1960s, it is considered one of the most basic
graphics-rendering algorithms. Ray casting makes use of the same geometric algorithm as ray
tracing.

(c) Audio file formats


An audio file format is a file format for storing digital audio data on a
computer system. The bit layout of the audio data (excluding metadata) is called the audio coding
format and can be uncompressed, or compressed to reduce the file size, often using lossy
compression. The data can be a raw bitstream in an audio coding format, but it is usually embedded
in a container format or an audio data format with defined storage layer.

(d) Video file formats


A video file format is a type of file format for storing digital video data
on a computer system. Video is almost always stored in compressed form to reduce the file size.
A video file normally consists of a container (e.g. in the Matroska format) containing video data in a video coding format (e.g. VP9) alongside audio data in an audio coding format (e.g. Opus). The container can also contain synchronization information, subtitles, and metadata such as the title. A standardized (or in some cases de facto standard) video file type such as .webm is a profile specified by a restriction on which container format and which video and audio compression formats are allowed.

(e) Authoring tools


A content-authoring tool is a software application used to create multimedia content, typically for delivery on the World Wide Web. Content-authoring tools may also create content in other file formats so the training can be delivered on a CD (compact disc) or in other formats for various uses. The category of content-authoring tools includes HTML, Flash, and various types of e-learning authoring tools.
