CSE VI Computer Graphics and Visualization (10CS65) Solutions
UNIT -1
INTRODUCTION
2. Explain the concept of pinhole camera of an imaging system. Also derive the expression
for angle of view. (June 2012) 6M
Ans: A pinhole camera is a box with a small hole in the center of one side and the film placed on
the opposite side, at a distance d from the hole. Orienting the camera along the z axis with the
pinhole at the origin, a point (x, y, z) projects onto the film plane z = −d at
x_p = −x/(z/d), y_p = −y/(z/d), z_p = −d
The field of the camera is limited by the size of the film. If the film is a square of side h, a
projected point must satisfy |y_p| ≤ h/2, so the half-angle of the cone of visible rays satisfies
tan(θ/2) = (h/2)/d, giving the angle (field) of view
θ = 2·tan⁻¹(h/(2d))
3. Discuss the graphics pipeline architecture, with the help of a functional schematic
diagram. (June 2012) 10M
Ans : Graphics Pipeline :
Process objects one at a time in the order they are generated by the application
All steps can be implemented in hardware on the graphics card
Vertex Processor
Much of the work in the pipeline is in converting object representations from one
coordinate system to another
– Object coordinates
– Camera (eye) coordinates
– Screen coordinates
Every change of coordinates is equivalent to a matrix transformation
Vertex processor also computes vertex colors
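Each such change of coordinates is one 4×4 matrix applied to a homogeneous point. A small illustrative sketch in plain C (the names are illustrative, not OpenGL API; OpenGL itself stores matrices in column-major order):
#include <stdio.h>

/* Apply a 4x4 row-major matrix m to the homogeneous point in, giving out. */
void transform_point(const float m[16], const float in[4], float out[4])
{
    int r;
    for (r = 0; r < 4; r++)
        out[r] = m[4*r+0]*in[0] + m[4*r+1]*in[1]
               + m[4*r+2]*in[2] + m[4*r+3]*in[3];
}

int main(void)
{
    float T[16] = {1,0,0,2,  0,1,0,3,  0,0,1,0,  0,0,0,1}; /* translate by (2,3,0) */
    float p[4]  = {1.0, 1.0, 0.0, 1.0};                    /* a point (w = 1) */
    float q[4];
    transform_point(T, p, q);
    printf("(%g, %g, %g)\n", q[0], q[1], q[2]);            /* prints (3, 4, 0) */
    return 0;
}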
Primitive Assembly
Vertices must be collected into geometric objects before clipping and rasterization can take
place
– Line segments
– Polygons
– Curves and surfaces
Clipping
Just as a real camera cannot “see” the whole world, the virtual camera can only see part of the
world or object space
– Objects that are not within this volume are said to be clipped out of the scene
Rasterization :
If an object is not clipped out, the appropriate pixels in the frame buffer must be assigned
colors
Rasterizer produces a set of fragments for each object
Fragments are “potential pixels”
– Have a location in the frame buffer
– Color and depth attributes
Vertex attributes are interpolated over objects by the rasterizer
Fragment Processor:
Fragments are processed to determine the color of the corresponding pixel in the frame
buffer
Colors can be determined by texture mapping or interpolation of vertex colors
Fragments may be blocked by other fragments closer to the camera
4. With a neat diagram, explain the components of a graphics system. (Dec 2011) 6M
Ans : A Graphics system has 5 main elements :
Input Devices
Processor
Memory
Frame Buffer
Output Devices
A Frame buffer is implemented either with special types of memory chips or it can be a part of
system memory.
In simple systems the CPU does both normal and graphical processing.
Graphics processing - Take specifications of graphical primitives from the application program and
assign values to the pixels in the frame buffer. This is also known as rasterization or scan
conversion.
5. With a neat diagram, explain the human visual system. (Dec 2011) 6M
Ans:
Electron Gun – emits an electron beam which strikes the phosphor coating, causing it to emit light.
Deflection Plates – control the direction of the beam. The output of the computer is
converted by digital-to-analog converters to voltages across the x and y deflection plates.
Refresh Rate – In order to view a flicker free image, the image on the screen has to be
retraced by the beam at a high rate (modern systems operate at 85Hz)
2 types of refresh:
Noninterlaced display: Pixels are displayed row by row at the refresh rate.
Interlaced display: Odd rows and even rows are refreshed alternately.
UNIT -2
THE OPENGL
The OpenGL Utility Library (GLU) contains several routines that use lower-level OpenGL
commands to perform such tasks as setting up matrices for specific viewing orientations and
projections, performing polygon tessellation, and rendering surfaces. This library is provided
as part of every OpenGL implementation.
For every window system, there is a library that extends the functionality of that window
system to support OpenGL rendering. For machines that use the X Window System, the
OpenGL Extension to the X Window System (GLX) is provided as an adjunct to OpenGL.
GLX routines use the prefix glX. For Microsoft Windows, the WGL routines provide the
Windows to OpenGL interface.
The OpenGL Utility Toolkit (GLUT) is a window system-independent toolkit, written by
Mark Kilgard, to hide the complexities of differing window system APIs.
2. Write explanatory notes on: i) RGB color model; ii) indexed color model. (Jun2012) 6M
Ans: Colors are indices into tables of RGB values
Requires less memory
– indices usually 8 bits
– not as important now
Memory inexpensive
In indexed mode, colors are stored as indices into a color-lookup table. If the index is m bits
wide, 2^m colors can be displayed at once; if each table entry holds k bits per primary, those
entries are chosen from a palette of 2^(3k) colors obtained by combining red, green and blue.
This yields a huge color palette compared to what the index alone could address.
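A minimal sketch of how an indexed-color window is requested and used with GLUT (assuming the window system can supply a color-index visual):
glutInitDisplayMode(GLUT_SINGLE | GLUT_INDEX);  /* request a color-index frame buffer */
glutCreateWindow("indexed color");
glutSetColor(1, 1.0, 0.0, 0.0);   /* program lookup-table entry 1 as red */
glutSetColor(2, 0.0, 1.0, 0.0);   /* program entry 2 as green */
glIndexi(1);                      /* subsequent primitives are drawn with entry 1 */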
3. Write an open GL recursive program for 2D sierpinski gasket with relevant comments.
(Jun2012) 10M
Ans:
#include <stdio.h>
#include <GL/glut.h>

typedef GLfloat point2[2];

point2 v[3] = {{-1.0, -0.58}, {1.0, -0.58}, {0.0, 1.15}}; /* initial triangle */
int n;                                                     /* number of subdivisions */

void triangle(point2 a, point2 b, point2 c)
{
    glBegin(GL_TRIANGLES);
        glVertex2fv(a);
        glVertex2fv(b);
        glVertex2fv(c);
    glEnd();
}

void divide_triangle(point2 a, point2 b, point2 c, int m)
{
    point2 ab, ac, bc;
    int j;
    if (m > 0) {
        for (j = 0; j < 2; j++) {      /* midpoints of the three sides */
            ab[j] = (a[j] + b[j]) / 2;
            ac[j] = (a[j] + c[j]) / 2;
            bc[j] = (b[j] + c[j]) / 2;
        }
        divide_triangle(a, ab, ac, m - 1); /* subdivide all but the inner triangle */
        divide_triangle(c, ac, bc, m - 1);
        divide_triangle(b, bc, ab, m - 1);
    }
    else
        triangle(a, b, c);             /* draw triangle at end of recursion */
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0, 0.0, 0.0);
    divide_triangle(v[0], v[1], v[2], n);
    glFlush();
}

int main(int argc, char **argv)
{
    printf("Enter the number of subdivisions: ");
    scanf("%d", &n);
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(500, 500);
    glutCreateWindow("2D Sierpinski Gasket");
    glClearColor(1.0, 1.0, 1.0, 1.0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(-2.0, 2.0, -2.0, 2.0);
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
4. With a neat diagram, discuss the color formation. Explain the additive and subtractive
colors, indexed color and color solid concept. (Dec2011) 12M
Ans:
A visible color can be characterized by the function C(λ)
Tristimulus values – responses of the 3 types of cones to the colors.
3 color theory – “If 2 colors produce the same tristimulus values, then they are visually
indistinguishable.”
Additive color model – Adding together the primary colors to get the perceived colors.
E.g. CRT.
Subtractive color model – Colored pigments remove color components from the light that is
striking the surface. Here the primaries are the complementary colors: cyan, magenta
and yellow.
RGB color
Each color component is stored separately in the frame buffer
Usually 8 bits per component in buffer
Note in glColor3f the color values range from 0.0 (none) to 1.0 (all), whereas in
glColor3ub the values range from 0 to 255
The color as set by glColor becomes part of the state and will be used until changed
– Colors and other attributes are not part of the object but are assigned when the
object is rendered
We can create conceptual vertex colors by code such as:
glColor3f(1.0, 0.0, 0.0);   /* this vertex is red */
glVertex2f(-1.0, -1.0);
glColor3f(0.0, 1.0, 0.0);   /* this vertex is green */
glVertex2f(1.0, -1.0);
RGBA color system :
This has 4 arguments – RGB and alpha
alpha – Opacity.
glClearColor(1.0,1.0,1.0,1.0)
This would render the window white since all components are equal to 1.0, and is opaque
as alpha is also set to 1.0
Indexed color
Colors are indices into tables of RGB values
Requires less memory
o indices usually 8 bits
o not as important now
Memory inexpensive
Need more colors for shading
The position of the window is with reference to the origin. The origin (0, 0) is the top left
corner of the screen.
glutInit()
initializes the GLUT library and starts a session with the window system; it also allows the
application to get the command-line arguments. It takes pointers to those arguments,
glutInit(&argc, argv), and should be the first GLUT function called within the main
program.
glutInitDisplayMode() requests properties for the window (the rendering context)
RGB color- specified by the argument GLUT_RGB. It specifies that a 3 color mode
needs to be used.
Single buffering – GLUT_SINGLE: specifies that the images are static and only a
single frame buffer is required to store the pixels.
GLUT_DOUBLE: specifies that the images are animated and two frame
buffers, front and back, are required for rendering a smooth image.
Properties logically ORed together
glutInitWindowSize – the window size in pixels
glutInitWindowPosition – the position from the top-left corner of the display
glutCreateWindow – create a window with a particular title
These calls appear, for example, in the main() of the 3D gasket program (divide_triangle
and display are analogous to the 2D gasket program above, extended to a tetrahedron):
int main(int argc, char **argv)
{
    printf("Enter the number of recursion steps: ");
    scanf("%d", &n);
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(640, 480);
    glutCreateWindow("3D Gasket");
    glutReshapeFunc(myReshape);
    glutDisplayFunc(display);
    glEnable(GL_DEPTH_TEST);
    glClearColor(1.0, 1.0, 1.0, 1.0);
    glutMainLoop();
    return 0;
}
7. Classify the major groups of API functions in open GL. Explain any four of them.
(July2011) 6M
Ans:
Primitive functions: Defines low level objects such as points, line segments, polygons
etc.
Attribute functions : Attributes determine the appearance of objects
o Color (points, lines, polygons)
o Size and width (points, lines)
o Polygon mode
Display as filled
Display edges
Display vertices
Viewing functions: Allows us to specify various views by describing the camera’s
position and orientation.
Transformation functions: Allow the user to carry out transformations of objects, such as
rotation, scaling and translation.
Input functions : Allows us to deal with a diverse set of input devices like keyboard,
mouse etc
Control functions: Enables us to initialize our programs, helps in dealing with any errors
during execution of the program.
Query functions: Helps query information about the properties of the particular
implementation.
8. What is an attribute with respect to graphics system? List attributes for lines and
polygons. (July2011) 4M
Ans: Attribute functions : Attributes determine the appearance of objects
◦ Polygon mode
Display as filled
Display edges
Display vertices
Polygons : Objects that have a border that can be described by a line loop and also have a well-
defined interior
9. List out different open GL primitives, giving examples for each. (Jan2010) 10M
Ans: OpenGL primitives fall into two classes:
Geometric primitives (points, line segments, polygons, curves and surfaces) – pass through
the geometric pipeline.
Raster primitives (arrays of pixels) – pass through a separate pipeline to the frame
buffer.
Points: GL_POINTS – each vertex is displayed as a point of at least one pixel.
Line segments:
GL_LINES – successive pairs of vertices are drawn as separate segments
GL_LINE_STRIP – successive vertices form a connected polyline
GL_LINE_LOOP – a line strip whose last vertex is connected back to the first
Polygons:
GL_POLYGON, GL_TRIANGLES, GL_QUADS – filled areas with well-defined interiors
GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, GL_QUAD_STRIP – share vertices between
successive triangles or quadrilaterals
10. Briefly explain the orthographic viewing with OpenGL functions for 2d and 3d viewing.
Indicate the significance of projection plane and viewing point in this. (Jan2010) 10M
Ans: In the default orthographic view, points are projected forward along the z axis onto the
plane z=0
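A sketch of the corresponding calls (the clipping-volume values here are arbitrary):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
/* 2D: an orthographic view of the region [0,500] x [0,500] in the z = 0 plane */
gluOrtho2D(0.0, 500.0, 0.0, 500.0);
/* 3D: glOrtho adds near and far clipping distances measured from the viewer */
/* glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0); */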
UNIT-3
1. What are the various classes of logical input devices that are supported by open
GL? Explain the functionality of each of these classes. (Jun2012) 8M
Graphical input is more varied than input to standard programs which is usually numbers,
characters, or bits
Two older APIs (GKS, PHIGS) defined six types of logical input
– Locator: return a position. Placing the mouse pointer at any location on the
screen would return the corresponding x and y coordinates of the location. Mouse
acts as a locator device.
– Pick: return the ID of an object. When there are several graphical objects on the
screen, pointing at one of them and obtaining its identifier is a pick
operation.
Again, the mouse can act as a pick device.
– Stroke: return array of positions. Basically used in paint applications where when
a paint brush is moved across the editor a stroke is generated. All the locations
involved in the stroke are returned as an array. Mouse can act as a stroke device.
– Choice: return one of n items. When there are several items on the screen,
selecting one of them is the purpose of this device. It could be selected by a
mouse click which returns the id associated with a particular item.
– Valuator: return a real number, e.g. from a dial, slider or knob.
– String: return a character string, e.g. text typed at the keyboard.
2. List the various features that a good interactive program should include. (Jun2012)
4M
Ans: Some of the good features of an interactive graphics program are:
User friendly GUI
Having help menus
Easily understandable
Providing smooth transitions of images.
Smooth 3d animations by using z buffer.
sub_menu = glutCreateMenu(size_menu);
glutAddMenuEntry("Increase square size", 2);
glutAddMenuEntry("Decrease square size", 3);
glutCreateMenu(top_menu);
glutAddMenuEntry("Quit", 1);
glutAddSubMenu("Resize", sub_menu);
glutAttachMenu(GLUT_RIGHT_BUTTON);
4. Which are the six classes of logical input devices? Explain. (Dec2011) 6M
Ans:
– Locator: return a position. Placing the mouse pointer at any location on the
screen would return the corresponding x and y coordinates of the location. Mouse
acts as a locator device.
– Pick: return the ID of an object. When there are several graphical objects on the
screen, pointing at one of them and obtaining its identifier is a pick
operation.
Again, the mouse can act as a pick device.
– Stroke: return array of positions. Basically used in paint applications where when
a paint brush is moved across the editor a stroke is generated. All the locations
involved in the stroke are returned as an array. Mouse can act as a stroke device.
– Choice: return one of n items. When there are several items on the screen,
selecting one of them is the purpose of this device. It could be selected by a
mouse click which returns the id associated with a particular item.
– Valuator: return a real number, e.g. from a dial, slider or knob.
– String: return a character string, e.g. text typed at the keyboard.
5. Discuss the request mode, sample mode and event modes with the figures wherever
required. (Dec2011) 8M
Ans: Request Mode:
The measure of the device is not returned to the program until the device is triggered.
Sample Mode:
As soon as the function call in the user program is encountered, the measure is returned.
No trigger is needed
The user must have positioned the pointing device before the function call, because the
measure is extracted immediately from the buffer.
Both request and sample modes are useful in applications where the program guides the user.
Event Mode:
Most systems have more than one input device, each of which can be triggered at an
arbitrary time by a user
Each trigger generates an event whose measure is put in an event queue which can be
examined by the user program
The point (x, y) = (cos θ, sin θ) always lies on the unit circle regardless of the value of θ.
In order to increase θ by a fixed amount whenever nothing is happening, we use the idle
function
void idle()
{
    theta += 2.0;
    if (theta >= 360.0) theta -= 360.0;
    glutPostRedisplay();
}
In order to turn the rotation feature on and off, we can include a mouse function as
follows :
void mouse(int button, int state, int x, int y)
{
if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
glutIdleFunc(idle);
if (button == GLUT_MIDDLE_BUTTON && state == GLUT_DOWN)
glutIdleFunc(NULL);
}
7. Suppose that the openGL window is 500 X 50 pixels and the clipping window is a
unit square with the origin at the lower left corner. Use simple XOR mode to draw
erasable lines. (Jul2011) 10M
Ans: The erasable (rubber-band) line is drawn in XOR mode: drawing the same line twice
restores the original pixels, so the old line can be erased before the new one is drawn.
The drawing mode is set up once with glEnable(GL_COLOR_LOGIC_OP) and glLogicOp(GL_XOR).
The motion callback (winh is the window height; X, Y hold the fixed endpoint and
Xn, Yn the last-drawn endpoint):
void motion(int x, int y)
{
    if (FLAG == 0) {           /* first point: fix one endpoint of the line */
        X = x;
        Y = winh - y;          /* flip y: GLUT reports y from the top */
        Xn = x;
        Yn = winh - y;
        FLAG = 1;
    }
    glBegin(GL_LINES);         /* redraw the previous line: XOR erases it */
        glVertex2i(X, Y);
        glVertex2i(Xn, Yn);
    glEnd();
    glFlush();                 /* old line erased */
    glBegin(GL_LINES);         /* draw the new line to the current position */
        glVertex2i(X, Y);
        glVertex2i(x, winh - y);
    glEnd();
    glFlush();
    Xn = x;                    /* remember the new endpoint for the next erase */
    Yn = winh - y;
}
Ans: Display lists help avoid redundant code by storing compiled OpenGL commands in a buffer
on the server, from which they can be re-executed again and again.
For example, the most efficient way of defining text is to define the font once, using a display
list for each character, and then store the font on the server using these display lists.
A function to draw the ASCII character 'O' (the loop body is reconstructed here following the
usual display-list font example; cos and sin require <math.h>):
void OurFont(char c)
{
    int i;
    GLfloat angle;
    switch (c)
    {
    case 'O':
        glTranslatef(0.5, 0.5, 0.0);       /* move to the center */
        glBegin(GL_QUAD_STRIP);
        for (i = 0; i <= 12; i++)          /* 12 quads around the ring */
        {
            angle = 3.14159 / 6.0 * i;     /* 30 degrees in radians */
            glVertex2f(0.4 * cos(angle), 0.4 * sin(angle));  /* inner radius */
            glVertex2f(0.5 * cos(angle), 0.5 * sin(angle));  /* outer radius */
        }
        glEnd();
        glTranslatef(0.5, -0.5, 0.0);      /* move to lower right of the box */
        break;
    }
}
The remaining fragment is from a picking example that renders the scene in selection mode and
then processes the resulting hit records (the tail of the mouse callback):
    draw_objects(GL_SELECT);
    glMatrixMode(GL_PROJECTION);
    hits = glRenderMode(GL_RENDER);   /* return to normal rendering; get hit count */
    processHits(hits, nameBuff);
    glutPostRedisplay();              /* normal render */
  }
}
void draw_objects(GLenum mode)
{
    if (mode == GL_SELECT)
        glLoadName(1);                /* name the first rectangle */
    glColor3f(1.0, 0.0, 0.0);
    glRectf(-0.5, -0.5, 1.0, 1.0);
    if (mode == GL_SELECT)
        glLoadName(2);                /* name the second rectangle */
    glColor3f(0.0, 0.0, 1.0);
    glRectf(-1.0, -1.0, 0.5, 0.5);
}
void processHits(GLint hits, GLuint buffer[])
{
    unsigned int i, j;
UNIT – 4
1. Explain the complete procedure of converting a world object frame into camera or eye
frame, using the model view matrix. (Jun2012) 10M
Object space for a particular object gives it no spatial relationship with respect to other objects.
The purpose of world space is to provide some absolute reference for all the objects in your
scene. How a world-space coordinate system is established is arbitrary. For example, you may
decide that the origin of world space is the center of your room. Objects in the room are then
positioned relative to the center of the room and some notion of scale (Is a unit of distance a foot
or a meter?) and some notion of orientation (Does the positive y-axis point "up"? Is north in the
direction of the positive x-axis?).
The way an object, specified in object space, is positioned within world space is by means of a
modeling transform. For example, you may need to rotate, translate, and scale the 3D model of a
chair so that the chair is placed properly within your room's world-space coordinate system. Two
chairs in the same room may use the same 3D chair model but have different modeling
transforms, so that each chair exists at a distinct location in the room.
You can mathematically represent all the transforms in this chapter as a 4x4 matrix. Using the
properties of matrices, you can combine several translations, rotations, scales, and projections
into a single 4x4 matrix by multiplying them together. When you concatenate matrices in this
way, the combined matrix also represents the combination of the respective transforms. This
turns out to be very powerful, as you will see.
If you multiply the 4x4 matrix representing the modeling transform by the object-space position
in homogeneous form (assuming a 1 for the w component if there is no explicit w component),
the result is the same position transformed into world space. This same matrix math principle
applies to all subsequent transforms discussed in this chapter.
Eye Space
Ultimately, you want to look at your scene from a particular viewpoint (the "eye"). In the
coordinate system known as eye space (or view space), the eye is located at the origin of the
coordinate system. Following the standard convention, you orient the scene so the eye is looking
down one direction of the z-axis. The "up" direction is typically the positive y direction.
Eye space, which is particularly useful for lighting, will be discussed in Chapter 5.
The transform that converts world-space positions to eye-space positions is the view transform.
Once again, you express the view transform with a 4x4 matrix.
The typical view transform combines a translation that moves the eye position in world space to
the origin of eye space and then rotates the eye appropriately. By doing this, the view transform
defines the position and orientation of the viewpoint.
Most lighting and other shading computations involve quantities such as positions and surface
normals. In general, these computations tend to be more efficient when performed in either eye
space or object space. World space is useful in your application for establishing the overall
spatial relationships between objects in a scene, but it is not particularly efficient for lighting and
other shading computations.
For this reason, we typically combine the two matrices that represent the modeling and view
transforms into a single matrix known as the modelview matrix. You can combine the two
matrices by simply multiplying the view matrix by the modeling matrix.
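In fixed-function OpenGL both transforms are concatenated on the same matrix stack; a sketch (the eye position and the modeling-transform values here are arbitrary):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
/* view transform: world space -> eye space */
gluLookAt(0.0, 1.5, 5.0,    /* eye position in world space */
          0.0, 0.0, 0.0,    /* point the eye looks at */
          0.0, 1.0, 0.0);   /* up direction */
/* modeling transform: object space -> world space */
glTranslatef(1.0, 0.0, -2.0);
glRotatef(45.0, 0.0, 1.0, 0.0);
glScalef(0.5, 0.5, 0.5);
/* vertices drawn now are multiplied by the combined modelview matrix */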
Ans :
i) Vertex arrays provide a method for encapsulating the information in a data structure such that
we can draw polyhedral objects with only a few function calls.
There are three steps in using vertex arrays:
(i) Enable the functionality of vertex arrays.
(ii) Tell OpenGL the location and format of the arrays.
(iii) Render the object.
The first two steps form the initialization part, and the third step is done in the display callback.
OpenGL allows many different types of arrays; here we use two of them, a color array and a
vertex array. The arrays are enabled as follows:
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
The arrays themselves are the same as before. Next, we tell OpenGL where the arrays are:
glVertexPointer(3, GL_FLOAT, 0, vertices);
glColorPointer(3, GL_FLOAT, 0, colors);
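Step (iii), rendering, can then be done with a single call. A sketch, assuming a cube whose faces are listed in a hypothetical index array cubeIndices (the vertex and color arrays are as in the next answer):
/* 6 faces x 4 indices into the vertex and color arrays */
GLubyte cubeIndices[24] = {0,3,2,1,  2,3,7,6,  0,4,7,3,
                           1,2,6,5,  4,5,6,7,  0,1,5,4};

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glDrawElements(GL_QUADS, 24, GL_UNSIGNED_BYTE, cubeIndices); /* all 6 faces */
    glutSwapBuffers();
}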
Ans: We can use the vertex list to define a color cube. We define a function quad to draw
quadrilateral polygons specified by pointers into the vertex list. The color cube specifies the six
faces, taking care to make them all outward facing, as follows:
GLfloat vertices[8][3] = {{-1.0,-1.0,-1.0}, {1.0,-1.0,-1.0}, {1.0,1.0,-1.0}, {-1.0,1.0,-1.0},
                          {-1.0,-1.0, 1.0}, {1.0,-1.0, 1.0}, {1.0,1.0, 1.0}, {-1.0,1.0, 1.0}};
GLfloat colors[8][3] = {{0.0,0.0,0.0}, {1.0,0.0,0.0}, {1.0,1.0,0.0}, {0.0,1.0,0.0},
                        {0.0,0.0,1.0}, {1.0,0.0,1.0}, {1.0,1.0,1.0}, {0.0,1.0,1.0}};
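A sketch of the quad helper and the resulting color-cube function (face orderings chosen so that all six faces are outward facing):
void quad(int a, int b, int c, int d)   /* draw one face from vertex-list indices */
{
    glBegin(GL_QUADS);
        glColor3fv(colors[a]);  glVertex3fv(vertices[a]);
        glColor3fv(colors[b]);  glVertex3fv(vertices[b]);
        glColor3fv(colors[c]);  glVertex3fv(vertices[c]);
        glColor3fv(colors[d]);  glVertex3fv(vertices[d]);
    glEnd();
}

void colorcube(void)                    /* the six outward-facing faces */
{
    quad(0,3,2,1);  quad(2,3,7,6);  quad(0,4,7,3);
    quad(1,2,6,5);  quad(4,5,6,7);  quad(0,1,5,4);
}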
Ans: An affine transformation maps variables (e.g. pixel intensity values located at position
(x1, y1) in an input image) into new variables (e.g. (x2, y2) in an output image) by applying a
linear combination of translation, rotation, scaling and/or shearing (i.e. non-uniform scaling
in some directions) operations.
The general affine transformation can be written as x2 = A·x1 + B, where A is a 2×2 matrix
and B a 2×1 translation vector.
By defining only the B matrix (with A the identity), this transformation can carry out pure
translation:
x2 = x1 + b1, y2 = y1 + b2
Pure rotation uses the A matrix alone and is defined as (for positive angles being clockwise
rotations):
A = |  cos θ   sin θ |
    | −sin θ   cos θ |
Here, we are working in image coordinates, so the y axis goes downward; the usual
counter-clockwise rotation formula applies when the y axis goes upward.
(Note that several different affine transformations are often combined to produce a resultant
transformation. The order in which the transformations occur is significant since a translation
followed by a rotation is not necessarily equivalent to the converse.)
Since the general affine transformation is defined by 6 constants, it is possible to define this
transformation by specifying the new output image locations of any three input image
coordinate pairs. (In practice, many more points are measured and a least squares method
is used to find the best fitting transform.)
5. In a homogeneous coordinate system, given two frames (v1, v2, v3, P0) and (u1, u2, u3, Q0),
let a and b be two vectors defined in the two frames respectively. Derive the expression
that represents vector b in terms of a. (July2011) 10M
To plot a point
Begin at origin
Travel along the x basis vector [direction] scaled by x coord, then along
the y basis vector scaled by the y coord, then finally along the z basis
vector scaled by the z coord.
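Building on this, a sketch of the standard change-of-frames derivation: express the basis vectors and origin of the second frame in terms of the first,
u1 = γ11·v1 + γ12·v2 + γ13·v3
u2 = γ21·v1 + γ22·v2 + γ23·v3
u3 = γ31·v1 + γ32·v2 + γ33·v3
Q0 = γ41·v1 + γ42·v2 + γ43·v3 + P0
The coefficients γij define a 4×4 matrix M (in homogeneous form its last column is 0, 0, 0, 1). If a and b are the homogeneous-coordinate column representations of the same vector in frames (v1, v2, v3, P0) and (u1, u2, u3, Q0) respectively, substituting these expressions shows that a = M^T·b, and therefore b = (M^T)^(-1)·a.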
UNIT – 5
Ans: Translation
Multiplies the current matrix by a matrix that moves (translates) an object by the given x,
y, and z values
Rotation
Multiplies the current matrix by a matrix that rotates an object about the given axis by the
given angle
Scaling
Multiplies the current matrix by a matrix that stretches, shrinks, or reflects an object along
the given axes
Equations :
Translation: Pf = T + P
xf = xo + dx
yf = yo + dy
Rotation: Pf = R · P
xf = xo·cos θ − yo·sin θ
yf = xo·sin θ + yo·cos θ
Scale: Pf = S · P
xf = sx * xo
yf = sy * yo
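In OpenGL these transformations are applied to the current matrix (a sketch; the amounts are arbitrary):
glMatrixMode(GL_MODELVIEW);
glTranslatef(10.0, 5.0, 0.0);      /* translation: Pf = T + P */
glRotatef(45.0, 0.0, 0.0, 1.0);    /* rotation by 45 degrees about the z axis */
glScalef(2.0, 0.5, 1.0);           /* scaling: Pf = S . P */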
4. Explain the basic transformations in 3D and represent them in matrix form. (Jun2010)
10M
Ans: Translation by (dx, dy, dz):
T = | 1 0 0 dx |
    | 0 1 0 dy |
    | 0 0 1 dz |
    | 0 0 0 1  |
Scaling by (sx, sy, sz):
S = | sx 0  0  0 |
    | 0  sy 0  0 |
    | 0  0  sz 0 |
    | 0  0  0  1 |
Rotation about the z axis by angle θ (rotations about x and y are analogous):
Rz = | cos θ  −sin θ  0  0 |
     | sin θ   cos θ  0  0 |
     | 0       0      1  0 |
     | 0       0      0  1 |
UNIT -6
VIEWING
1. With neat sketches, explain the various types of views that are employed in computer
graphics systems. (Jun2012) 10M
Ans:
Parallel projection has parallel projectors. Here the viewer is assumed to be at
infinity, so instead of a center of projection (COP) there is a direction of
projection (DOP).
Orthographic Projections :
2. Briefly discuss the following along with the functions used for the purpose in open GL
i) Perspective projections
ii) Orthogonal projections (Jun2012) 10M
Ans: i) Perspective projection has a COP where all the projector lines converge.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(fFOV, fAspect , fNearPlane, fFarPlane);
Orthogonal projections are set up with glOrtho(left, right, bottom, top, nearVal, farVal).
Parameters:
left, right
Specify the coordinates for the left and right vertical clipping planes.
bottom, top
Specify the coordinates for the bottom and top horizontal clipping planes.
nearVal, farVal
Specify the distances to the nearer and farther depth clipping planes.
These values are negative if the plane is to be behind the viewer.
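For example (values arbitrary):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-1.0, 1.0, -1.0, 1.0, 2.0, 20.0);  /* left, right, bottom, top, near, far */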
3. Explain the various types of axonometric projections. (Dec2011) 7M
Ans: Projectors are orthogonal to the projection plane, but the projection plane can move
relative to the object.
Classification by how many angles of a corner of a projected cube are the same
none: trimetric
two: dimetric
three: isometric
4. What is canonical view volume? Explain the mapping of a given view volume to the
canonical form. (Dec2011) 7M
Ans: First, the view volume specified by the glOrtho function is mapped to the canonical form.
Canonical form: the default view volume, centered at the origin and having sides of length 2.
The normalization matrix N that performs this mapping is:
N = | 2/(right−left)   0                0               −(right+left)/(right−left) |
    | 0                2/(top−bottom)   0               −(top+bottom)/(top−bottom) |
    | 0                0               −2/(far−near)    −(far+near)/(far−near)     |
    | 0                0                0                1                          |
UNIT – 7
1. Explain phong lighting model. Indicate the advantages and disadvantages. (Jun2012)
10M
Ans: We need 9 coefficients to characterize a light source with ambient, diffuse and specular
components. The illumination array for the i-th light source is given by the matrix:
     | Lira  Liga  Liba |
Li = | Lird  Ligd  Libd |
     | Lirs  Ligs  Libs |
The intensity for each color channel is computed by adding the ambient, diffuse and specular
contributions. E.g. the red intensity that we see from source i:
Iir = Rira·Lira + Rird·Lird + Rirs·Lirs = Ira + Ird + Irs
Since the necessary computations are same for each light source,
I = Ia+Id+Is
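In OpenGL the nine coefficients of a source are supplied per component with glLightfv (a sketch; the values are arbitrary):
GLfloat ambient[]  = {0.2, 0.2, 0.2, 1.0};
GLfloat diffuse[]  = {1.0, 1.0, 1.0, 1.0};
GLfloat specular[] = {1.0, 1.0, 1.0, 1.0};
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_AMBIENT,  ambient);
glLightfv(GL_LIGHT0, GL_DIFFUSE,  diffuse);
glLightfv(GL_LIGHT0, GL_SPECULAR, specular);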
2. What are the different methods available for shading a polygon? Briefly discuss any 2
of them. (Jun2012) 10M
Flat shading
With flat shading, each polygon is filled with a single color, so distinct boundaries are visible
between adjacent polygons.
The 3 vectors needed for shading are l, n and v. The OpenGL call to enable flat shading is:
glShadeModel(GL_FLAT);
For a flat polygon, n is constant, since the normal is the same at all points on the polygon. If we
assume a distant viewer, the vector v is constant, and if we consider a distant light source, then
the vector l is also constant. With all three vectors constant, the shading calculation needs to be
done only once for an entire polygon, and each point on the polygon is assigned the same shade.
This technique is known as flat shading.
Disadvantage : But if we consider light sources and the viewer near the polygon, then flat
shading will show differences in shading and the human eye is very sensitive to slightest of such
differences due to the principle of “Lateral Inhibition”
Gouraud shading, also called intensity interpolation, provides a way to display smooth-shaded
polygons by defining the RGB color components of each polygon vertex. It operates by first
interpolating the RGB values between the vertical vertices along each edge. This gives us the
RGB components for the left and right edges of each scan line (pixel row). We then display each
row of pixels by horizontally interpolating the RGB values between that row's left and right
edges. This produces a remarkably smooth-shaded polygon. Fastgraph supports Gouraud shading
for direct color virtual buffers, but not for 256-color virtual buffers.
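In OpenGL, Gouraud interpolation is the default and is selected with glShadeModel; a sketch with a different color at each vertex:
glShadeModel(GL_SMOOTH);                               /* interpolate vertex colors */
glBegin(GL_TRIANGLES);
    glColor3f(1.0, 0.0, 0.0);  glVertex2f(-1.0, -1.0); /* red vertex */
    glColor3f(0.0, 1.0, 0.0);  glVertex2f( 1.0, -1.0); /* green vertex */
    glColor3f(0.0, 0.0, 1.0);  glVertex2f( 0.0,  1.0); /* blue vertex */
glEnd();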
A rectangular mesh program (the cell-drawing loop in display() is reconstructed so that the
grid is actually drawn):
#include <stdio.h>
#include <GL/glut.h>

#define maxx 20
#define maxy 25
#define dx 15
#define dy 10

GLfloat x[maxx] = {0.0}, y[maxy] = {0.0};
GLfloat x0 = 50, y0 = 50;                  /* initial values for x, y */
GLint i, j;

void init()
{
    glClearColor(1.0, 1.0, 1.0, 1.0);
    glColor3f(1.0, 0.0, 0.0);
    glPointSize(5.0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, 499.0, 0.0, 499.0);
    glutPostRedisplay();                   /* request redisplay */
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);          /* clear window */
    glColor3f(0.0, 0.0, 1.0);              /* set color to blue */
    for (i = 0; i < maxx; i++)             /* compute the mesh vertices */
        x[i] = x0 + i * dx;
    for (j = 0; j < maxy; j++)
        y[j] = y0 + j * dy;
    for (i = 0; i < maxx - 1; i++)         /* draw each cell as a line loop */
        for (j = 0; j < maxy - 1; j++) {
            glBegin(GL_LINE_LOOP);
                glVertex2f(x[i],   y[j]);
                glVertex2f(x[i+1], y[j]);
                glVertex2f(x[i+1], y[j+1]);
                glVertex2f(x[i],   y[j+1]);
            glEnd();
        }
    glFlush();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(500, 400);          /* create a 500x400 window */
    glutInitWindowPosition(0, 0);          /* ...in the upper left */
    glutCreateWindow("Rectangular Mesh");  /* create the window */
    glutDisplayFunc(display);              /* setup callbacks */
    init();
    glutMainLoop();                        /* start it running */
    return 0;
}
4. Describe any two types of light sources. (Jul2011) 4M
Ans: General light sources are difficult to work with because we must integrate light coming
from all points on the source.
Point source
Spotlight
Ambient light
#include <stdlib.h>
#include <math.h>
#include <iostream>
#include <GL/glut.h>
/* initial tetrahedron */
int n;
int mode;   /* 0 = wire frame, 1 = flat normals, 2 = interpolated normals */

typedef GLdouble point3[3];

void triangle(point3 a, point3 b, point3 c)   /* signature reconstructed */
{
    if (mode == 0) glBegin(GL_LINE_LOOP);
    else glBegin(GL_POLYGON);
    if (mode == 1 || mode == 2) glNormal3dv(a);
    glVertex3dv(a);
    if (mode == 2) glNormal3dv(b);
    glVertex3dv(b);
    if (mode == 2) glNormal3dv(c);
    glVertex3dv(c);
    glEnd();
}
UNIT -8
IMPLEMENTATION
Ans: Consider drawing a line on a raster grid where we restrict the allowable slopes of the line
to the range 0 ≤ m ≤ 1.
If we further restrict the line-drawing routine so that it always increments x as it plots, it becomes
clear that, having plotted a point at (x, y), the routine has a severely limited range of options as to
where it may put the next point on the line: it must choose between (x+1, y) and (x+1, y+1).
So, working in the first positive octant of the plane, line drawing becomes a matter of deciding
between two possibilities at each step.
We can draw a diagram of the situation which the plotting program finds itself in having plotted
(x,y).
In plotting (x, y) the line drawing routine will, in general, be making a compromise between what
it would like to draw and what the resolution of the screen actually allows it to draw. Usually the
plotted point (x, y) will be in error: the actual, mathematical point on the line will not be
addressable on the pixel grid. So we associate an error, ε, with each y ordinate; the real value of
y should be y + ε. This error will range from −0.5 to just under +0.5.
In moving from x to x+1 we increase the value of the true (mathematical) y-ordinate by an
amount equal to the slope of the line, m. We will choose to plot (x+1, y) if the difference between
this new value and y is less than 0.5, i.e. if y + ε + m < y + 0.5.
Otherwise we will plot (x+1,y+1). It should be clear that by so doing we minimise the total error
between the mathematical line segment and what actually gets drawn on the display.
The error resulting from this new point can now be written back into ε; this will allow us to
repeat the whole process for the next point along the line, at x+2.
The new value of error can adopt one of two possible values, depending on what new point is
plotted. If (x+1, y) is chosen, the new value of error is given by:
ε_new = ε + m
Otherwise it is:
ε_new = ε + m − 1
This gives an algorithm for a DDA which avoids rounding operations, instead using the error
variable ε to control plotting:
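A sketch in C (integer endpoints with x1 < x2 and slope 0 ≤ m ≤ 1; plot() is a hypothetical pixel-writing helper):
float m = (float)(y2 - y1) / (x2 - x1);  /* slope of the line */
float e = 0.0f;                          /* the error term, epsilon */
int x, y = y1;
for (x = x1; x <= x2; x++) {
    plot(x, y);
    if (e + m < 0.5f)
        e = e + m;                       /* stay on the same row */
    else {
        y = y + 1;                       /* step up one row */
        e = e + m - 1.0f;
    }
}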
This still employs floating-point values. Consider, however, what happens if we multiply across
both sides of the plotting test by Δx and then by 2: the test ε + m < 0.5 becomes
2·(ε·Δx + Δy) < Δx, in which every quantity is an integer if we write ε' = ε·Δx.
The update rules for the error on each step may also be cast into ε' form. Consider the floating-
point versions of the update rules:
ε_new = ε + m and ε_new = ε + m − 1
Multiplying through by Δx gives:
ε'_new = ε' + Δy and ε'_new = ε' + Δy − Δx
which is in integer form.
Using this new "error" value, ε', with the new test and update equations gives Bresenham's
integer-only line drawing algorithm:
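A sketch of the resulting integer-only routine (same assumptions as above; plot() is again a hypothetical pixel-writing helper):
void bresenham(int x1, int y1, int x2, int y2)
{
    int dx = x2 - x1, dy = y2 - y1;   /* dx > 0, 0 <= dy <= dx */
    int e = 0;                        /* scaled error: e = epsilon * dx */
    int x, y = y1;
    for (x = x1; x <= x2; x++) {
        plot(x, y);
        if (2 * (e + dy) < dx)        /* the integer form of e + m < 0.5 */
            e = e + dy;
        else {
            y = y + 1;
            e = e + dy - dx;
        }
    }
}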
3. Derive the mathematical formula for bresenham’s mid point line algorithm. (Dec2011)
10M
Ans: The derivation is the same as the Bresenham derivation given in the previous answer: the
midpoint test (is the midpoint between the two candidate pixels above or below the true line?)
leads to the same integer decision variable, plotting test and update rules as derived above.
Ans: You might have noticed in some of your OpenGL pictures that lines, especially nearly
horizontal or nearly vertical ones, appear jagged. These jaggies appear because the ideal line is
approximated by a series of pixels that must lie on the pixel grid. The jaggedness is called
aliasing, and this section describes antialiasing techniques to reduce it. Figure 6-2 shows two
intersecting lines, both aliased and antialiased. The pictures have been magnified to show the
effect.
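OpenGL's built-in line antialiasing is enabled with blending; a minimal sketch:
glEnable(GL_LINE_SMOOTH);                           /* antialias lines */
glEnable(GL_BLEND);                                 /* coverage is blended via alpha */
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);             /* prefer quality filtering */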