CSE VI Computer Graphics and Visualization 10CS65 Notes
SYLLABUS
PART - A
UNIT - 1
INTRODUCTION: Applications of computer graphics; A graphics system; Images:
Physical and synthetic; Imaging systems; The synthetic camera model; The programmer’s
interface; Graphics architectures; Programmable pipelines; Performance characteristics.
Graphics Programming: The Sierpinski gasket; Programming two-dimensional applications.
7 Hours
UNIT - 2
THE OPENGL: The OpenGL API; Primitives and attributes; Color; Viewing; Control
functions; The Gasket program; Polygons and recursion; The three-dimensional gasket;
Plotting implicit functions.
6 Hours
UNIT - 3
INPUT AND INTERACTION: Interaction; Input devices; Clients and servers; Display lists;
Display lists and modeling; Programming event-driven input; Menus; Picking; A simple
CAD program; Building interactive models; Animating interactive programs; Design of
interactive programs; Logic operations.
7 Hours
UNIT - 4
GEOMETRIC OBJECTS AND TRANSFORMATIONS – 1: Scalars, points, and vectors;
Three-dimensional primitives; Coordinate systems and frames; Modeling a colored cube;
Affine transformations; Rotation, translation and scaling.
6 Hours
PART - B
UNIT - 5
GEOMETRIC OBJECTS AND TRANSFORMATIONS – 2: Transformations in
homogeneous coordinates; Concatenation of transformations; OpenGL transformation
matrices; Quaternions.
5 Hours
TEXT BOOK:
Interactive Computer Graphics – A Top-Down Approach with OpenGL – Edward Angel,
5th Edition, Addison-Wesley, 2008.
REFERENCE BOOKS:
1. Computer Graphics Using OpenGL – F.S. Hill, Jr., 2nd Edition, Pearson
Education, 2001.
2. Computer Graphics – James D. Foley, Andries van Dam, Steven K. Feiner, John F.
Hughes, Addison-Wesley, 1997.
3. Computer Graphics – OpenGL Version – Donald Hearn and Pauline Baker, 2nd
Edition, Pearson Education, 2003.
TABLE OF CONTENTS
UNIT - 1 INTRODUCTION
1.2 A graphics system
1.4 Imaging systems
1.7 Graphics architectures
1.8 Programmable pipelines; Performance characteristics

UNIT - 3 INPUT AND INTERACTION
3.5 Display lists and modelling
3.7 Menus; Picking
3.10 Animating interactive programs
3.11 Design of interactive programs
3.12 Logic operations

UNIT - 4 GEOMETRIC OBJECTS AND TRANSFORMATIONS – I
4.1 Scalars

UNIT - 5 GEOMETRIC OBJECTS AND TRANSFORMATIONS – 2
Quaternions

UNIT - 6 VIEWING
6.1 Classical and computer viewing
6.4 Simple projections
6.5 Projections in OpenGL
6.6 Hidden-surface removal

UNIT - 7 LIGHTING AND SHADING
7.9 Global illumination

UNIT - 8 IMPLEMENTATION
8.1 Basic implementation strategies
8.8 Antialiasing
PART - A
UNIT - 1 7 Hours
INTRODUCTION
A graphics system
Images: Physical and synthetic
Imaging systems
Graphics architectures
Programmable pipelines
Performance characteristics
Graphics Programming:
UNIT - 1
INTRODUCTION
1.1 Applications of computer graphics
Simulation & Animation
User Interfaces
1.2 Graphics systems
A Graphics system has 5 main elements:
Input Devices
Processor
Memory
Frame Buffer
Output Devices
A Frame buffer is implemented either with special types of memory chips or it can be a part
of system memory.
In simple systems the CPU does both normal and graphical processing.
Graphics processing – Takes specifications of graphical primitives from the application
program and assigns values to the pixels in the frame buffer. This is also known as
rasterization or scan conversion.
Output Devices
The most predominant type of display has been the Cathode Ray Tube (CRT).
Various parts of a CRT :
Electron Gun – emits an electron beam which strikes the phosphor coating to emit light.
Deflection Plates – control the direction of the beam. The output of the computer is
converted by digital-to-analog converters to voltages across the x and y deflection plates.
Noninterlaced display: Pixels are displayed row by row at the refresh rate.
Interlaced display: Odd rows and even rows are refreshed alternately.
Image formation models
Ray tracing :
One way to form an image is to follow rays of light from a point source, finding which
rays enter the lens of the camera. However, each ray of light may have multiple interactions
with objects before being absorbed or going to infinity.
Cones – color sensitive
Rods – used for monochromatic, night vision
The synthetic camera model: the paradigm which looks at creating a computer-generated
image as being similar to forming an image using an optical system.
In case of image formation using optical systems, the image is flipped relative to the
object.
In the synthetic camera model this is avoided by introducing a plane in front of the lens,
which is called the image plane (also referred to as the projection plane).
The angle of view of the camera poses a restriction on the part of the object which can be
viewed.
The application programmer uses the API functions and is shielded from the details of
their implementation.
The device driver is responsible for interpreting the output of the API and converting it
into a form understood by the particular hardware.
The simplest model writes pixels directly to the frame buffer.
E.g.: write_pixel(x, y, color)
In order to obtain images of objects close to the real world, we need a 3-D object model.
3-D APIs (OpenGL - basics)
To follow the synthetic camera model discussed earlier, the API should support:
Objects, viewers, light sources, material properties.
OpenGL defines primitives through a list of vertices.
Primitives: simple geometric objects having a simple relation between a list of vertices
A simple program to draw a triangular polygon:

glBegin(GL_POLYGON);
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(0.0, 1.0, 0.0);
glVertex3f(0.0, 0.0, 1.0);
glEnd();
Specifying the viewer or camera:
– Position, Orientation, Focal length, Film plane
Material properties
– Absorption: color properties
– Scattering
Modeling-Rendering Paradigm:
Viewing image formation as a two-step process:
Modeling → Rendering
E.g. producing a single frame in an animation:
1st step: Designing and positioning the objects
2nd step: Adding effects, light sources and other details
The interface can be a file with the model and additional info for final rendering.
Here the host system runs the application and generates vertices of the image.
Display processor assembles instructions to generate image once & stores it in the
Display List. This is executed repeatedly to avoid flicker.
The whole process is independent of the host system.
Terminologies :
Latency : time taken from the first stage till the end result is produced.
Throughput : Number of outputs per given time.
Graphics Pipeline :
Process objects one at a time in the order they are generated by the application
All steps can be implemented in hardware on the graphics card
Vertex Processor
Much of the work in the pipeline is in converting object representations from one
coordinate system to another
– Object coordinates
– Camera (eye) coordinates
– Screen coordinates
Every change of coordinates is equivalent to a matrix transformation
Vertex processor also computes vertex colors
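As an illustration (a minimal sketch, not OpenGL's internal code; the function name is hypothetical), each change of coordinates is simply a 4x4 matrix applied to a homogeneous vertex:

/* Multiply a 4x4 matrix (row-major) by a homogeneous vertex: a sketch of
   the change-of-coordinates step performed by the vertex processor. */
void transform_vertex(const float m[4][4], const float in[4], float out[4])
{
    int i, j;
    for (i = 0; i < 4; i++) {
        out[i] = 0.0f;
        for (j = 0; j < 4; j++)
            out[i] += m[i][j] * in[j]; /* row i of M dotted with the vertex */
    }
}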
Primitive Assembly
Vertices must be collected into geometric objects before clipping and rasterization can take
place
– Line segments
– Polygons
– Curves and surfaces
Clipping
Just as a real camera cannot “see” the whole world, the virtual camera can only see part of the
world or object space
– Objects that are not within this volume are said to be clipped out of the scene
Rasterization :
If an object is not clipped out, the appropriate pixels in the frame buffer must be
assigned colors
Rasterizer produces a set of fragments for each object
Fragments are "potential pixels"
– Have a location in the frame buffer
– Color and depth attributes
Vertex attributes are interpolated over objects by the rasterizer
Fragment Processor :
Fragments are processed to determine the color of the corresponding pixel in the
frame buffer
Colors can be determined by texture mapping or interpolation of vertex colors
Fragments may be blocked by other fragments closer to the camera
– Hidden-surface removal
Definition of basic OpenGL types:
E.g. glVertex2i(GLint xi, GLint yi)
or
#define GLfloat float
GLfloat vertex[3];
glVertex3fv(vertex);
E.g. program:
glBegin(GL_LINES);
glVertex3f(x1, y1, z1);
glVertex3f(x2, y2, z2);
glEnd();
The Sierpinski gasket display() function:
void display()
{
GLfloat vertices[3][3] = {{0.0,0.0,0.0},{25.0,50.0,0.0},{50.0,0.0,0.0}};
static GLfloat p[3] = {7.5, 5.0, 0.0}; /* an arbitrary initial point */
int j, k;
glBegin(GL_POINTS);
for (k = 0; k < 5000; k++) {
j = rand() % 3; /* pick one of the three vertices at random */
p[0] = (p[0] + vertices[j][0]) / 2; /* compute new location */
p[1] = (p[1] + vertices[j][1]) / 2;
glVertex3fv(p); /* display new point */
}
glEnd();
glFlush();
}
Coordinate Systems:
One of the major advances in graphics systems is that they allow users to work in any
coordinate system they desire.
The user's coordinate system is known as the "world coordinate system".
The actual coordinate system on the output device is known as the screen coordinates.
The graphics system is responsible for mapping the user's coordinates to the screen
coordinates.
UNIT - 2 6 Hours
THE OPENGL
The OpenGL API
Primitives and attributes
Color
Viewing
Control functions

UNIT - 2
THE OPENGL
2.1 The OpenGL API
OpenGL is a software interface to graphics hardware.
This interface consists of about 150 distinct commands that you use to specify the objects
and operations needed to produce interactive three-dimensional applications.
OpenGL is designed as a streamlined, hardware-independent interface to be implemented
on many different hardware platforms.
To achieve these qualities, no commands for performing windowing tasks or obtaining
user input are included in OpenGL; instead, you must work through whatever windowing
system controls the particular hardware you’re using.
The following list briefly describes the major graphics operations which OpenGL performs to
render an image on the screen.
1. Construct shapes from geometric primitives, thereby creating mathematical descriptions of
objects.
(OpenGL considers points, lines, polygons, images, and bitmaps to be primitives.)
2. Arrange the objects in three-dimensional space and select the desired vantage point for
viewing the composed scene.
3. Calculate the color of all the objects. The color might be explicitly assigned by the
application, determined from specified lighting conditions, obtained by pasting a texture onto
the objects, or some combination of these three actions.
4. Convert the mathematical description of objects and their associated color information to
pixels on the screen. This process is called rasterization.
Primitive functions: Define low-level objects such as points, line segments, polygons etc.
Attribute functions: Attributes determine the appearance of objects
– Color (points, lines, polygons)
Viewing functions : Allows us to specify various views by describing the camera’s
position and orientation.
Input functions: Allow us to deal with a diverse set of input devices like keyboard,
mouse etc.
Control functions : Enables us to initialize our programs, helps in dealing with any
errors during execution of the program.
Query functions : Helps query information about the properties of the particular
implementation.
The entire graphics system can be considered as a state machine getting inputs from the
application program.
2.3 Primitives and attributes
OpenGL supports 2 types of primitives:
– Geometric primitives (vertices, line segments, ...) – they pass through the geometric
pipeline
– Raster primitives (arrays of pixels) – they pass through a separate pipeline to the frame
buffer
Line segments:
GL_LINES
GL_LINE_STRIP
GL_LINE_LOOP
Polygons:
A polygon is an object that has a border that can be described by a line loop and also has a
well-defined interior.
Properties of a polygon for it to be rendered correctly:
Properties of polygon for it to be rendered correctly :
Simple – No 2 edges of a polygon cross each other
Convex – All points on the line segment between any 2 points inside the object, or on
its boundary, are inside the object.
Flat – All the vertices forming the polygon lie in the same plane . E.g. a triangle.
Polygon Issues
Fans and strips allow us to approximate curved surfaces in a simple way.
E.g. a unit sphere can be described by the following set of equations:
x(θ, φ) = cos θ sin φ
y(θ, φ) = sin θ sin φ
z(θ, φ) = cos φ
The sphere shown is constructed using quad strips; a circle could be approximated using a
quad strip as well.
The poles of the sphere are constructed using triangle fans, as can be seen in the diagram.
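A minimal sketch of this construction (the 20-degree band size and the function name are illustrative choices, not the textbook's exact code):

#include <math.h>
#include <GL/glut.h>

#define DEG_TO_RAD (3.14159265f / 180.0f)

/* Approximate a unit sphere with quad strips (latitude bands). */
void sphere_bands(void)
{
    float phi, theta;
    for (phi = -80.0f; phi < 80.0f; phi += 20.0f) {   /* one strip per band */
        float p  = phi * DEG_TO_RAD;
        float p2 = (phi + 20.0f) * DEG_TO_RAD;
        glBegin(GL_QUAD_STRIP);
        for (theta = -180.0f; theta <= 180.0f; theta += 20.0f) {
            float t = theta * DEG_TO_RAD;
            /* two rings of points on the unit sphere */
            glVertex3f(cosf(t) * cosf(p),  sinf(t) * cosf(p),  sinf(p));
            glVertex3f(cosf(t) * cosf(p2), sinf(t) * cosf(p2), sinf(p2));
        }
        glEnd();
    }
    /* the polar caps (|phi| > 80 degrees) would be drawn with GL_TRIANGLE_FAN */
}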
Graphics Text :
A graphics application should also be able to provide textual display.
There are 2 forms of text :
– Stroke text – Like any other geometric object, vertices are used to define line
segments & curves that form the outline of each character.
– Raster text – Characters are defined as rectangles of bits called bit blocks.
bit-block-transfer : the entire block of bits can be moved to the frame buffer using a single
function call.
2.5 Color
Three-color (tristimulus) theory – "If 2 colors produce the same tristimulus values, then
they are visually indistinguishable."
Additive color model – Adding together the primary colors to get the perceived colors.
E.g. CRT.
Subtractive color model – Colored pigments remove color components from light that is
striking the surface. Here the primaries are the complementary colors: cyan, magenta and
yellow.
RGB color
Each color component is stored separately in the frame buffer
Usually 8 bits per component in buffer
Note in glColor3f the color values range from 0.0 (none) to 1.0 (all), whereas in
glColor3ub the values range from 0 to 255
The color as set by glColor becomes part of the state and will be used until changed
– Colors and other attributes are not part of the object but are assigned when the
object is rendered
We can create conceptual vertex colors by code such as:
glColor(...);
glVertex(...);
glColor(...);
glVertex(...);
RGBA color system :
This has 4 arguments – RGB and alpha
alpha – opacity
glClearColor(1.0, 1.0, 1.0, 1.0);
This would render the window white since all components are equal to 1.0, and it is opaque
as alpha is also set to 1.0.
Indexed color
– Colors are indices into tables of RGB values
– Requires less memory
o indices usually 8 bits
– Not as important now
o memory is inexpensive
o we need more colors for shading
2.6 Viewing
The default viewing conditions in computer image formation are similar to the settings on a
basic camera with a fixed lens
The Orthographic view
Direction of Projection : When image plane is fixed and the camera is moved far from
the plane, the projectors become parallel and the COP becomes “direction of
projection”
OpenGL Camera
OpenGL places a camera at the origin in object space pointing in the negative z
direction
The default viewing volume is a box centered at the origin with a side of length 2
Orthographic view
In the default orthographic view, points are projected forward along the z axis onto the
plane z = 0.
glMatrixMode(GL_PROJECTION);
Transformation functions are incremental, so we start with an identity matrix and alter it
with a projection matrix that gives the view volume:
glLoadIdentity();
glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);
2.7 Control Functions (interaction with windows)
Window – A rectangular area of our display.
Modern systems allow many windows to be displayed on the screen (multiwindow
environment).
glutInit allows the application to get command line arguments and initializes the system
glutInitDisplayMode requests properties for the window (the rendering context)
o RGB color
o Single buffering
o Properties logically ORed together
glutInitWindowSize in pixels
glutInitWindowPosition from top-left corner of display
We may obtain undesirable output if the aspect ratio of the viewing rectangle
(specified by glOrtho) is not the same as the aspect ratio of the window (specified by
glutInitWindowSize).
Viewport – A rectangular area of the display window, whose height and width can be
adjusted to match that of the clipping window, to avoid distortion of the images.
void glViewport(GLint x, GLint y, GLsizei w, GLsizei h);
The main, display and myinit functions
In our application, once the primitive is rendered onto the display and the application
program ends, the window may disappear from the display.
Event processing loop :
void glutMainLoop();
Graphics is sent to the screen through a function called display callback.
void glutDisplayFunc(function name)
The function myinit() is used to set the OpenGL state variables dealing with viewing and
attributes.
Control Functions
glutInit(int *argc, char **argv) initializes GLUT and processes any command line
arguments (for X, this would be options like -display and -geometry). glutInit() should be
called before any other GLUT routine.
glutInitWindowSize(int width, int height) specifies the size, in pixels, of your window.
int glutCreateWindow(char *string) creates a window with an OpenGL context. It
returns a unique identifier for the new window. Be warned: until glutMainLoop() is
called, the window is not actually displayed.
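Putting these control functions together, a minimal GLUT skeleton looks like this (a sketch; the window title and sizes are arbitrary choices):

#include <GL/glut.h>

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    /* ... render primitives here ... */
    glFlush();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);                       /* process command-line options */
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB); /* single buffer, RGB color */
    glutInitWindowSize(500, 500);                /* size in pixels */
    glutInitWindowPosition(0, 0);                /* from top-left of display */
    glutCreateWindow("simple");                  /* shown once glutMainLoop runs */
    glutDisplayFunc(display);                    /* register display callback */
    glutMainLoop();                              /* enter the event loop */
    return 0;
}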
UNIT - 3 7 Hours
Interaction
Input devices
Clients and servers
Display lists
Display lists and modeling
Menus; Picking
Logic operations.
UNIT - 3
INPUT AND INTERACTION
3.1 Interaction
Ivan Sutherland (MIT 1963) established the basic interactive paradigm that characterizes
interactive computer graphics:
– User sees an object on the display
– User points to (picks) the object with an input device (light pen, mouse,
trackball)
– Object changes (moves, rotates, morphs)
– Repeat
3.2 Input devices
Devices can be described either by
o Physical properties
Mouse
Keyboard
Trackball
o Logical properties – what is returned to the program via the API
A position
An object identifier
Modes – how and when input is obtained
Request or event
Logical Devices
Consider the C and C++ code
o C++: cin >> x;
o C: scanf (“%d”, &x);
What is the input device?
o Can’t tell from the code
o Could be keyboard, file, output from another program
The code provides logical input
o A number (an int) is returned to the program regardless of the physical device
Input Modes
Input devices contain a trigger which can be used to send a signal to the operating
system
o Button on mouse
o Pressing or releasing a key
When triggered, input devices return information (their measure) to the system
o Mouse returns position information
o Keyboard returns ASCII code
Request Mode
Input provided to program only when user triggers the device
Typical of keyboard input
– Can erase (backspace), edit, correct until enter (return) key (the trigger) is
depressed
Event Mode
Most systems have more than one input device, each of which can be triggered at an
arbitrary time by a user
Each trigger generates an event whose measure is put in an event queue which can be
examined by the user program
Event Types
Idle: nonevent
o Define what should be done if no other event is in queue
3.4 Display Lists
The Display Processor in modern graphics systems could be considered as a graphics server.
Retained mode - The host compiles the graphics program and this compiled set is
maintained in the server within the display list.
The redisplay happens by a simple function call issued from the client to the server
It avoids network clogging
Avoids executing the commands time and again by the client
#define PNT 1
glNewList(PNT, GL_COMPILE);
glBegin(GL_POINTS);
glVertex2f(1.0, 1.0);
glEnd();
glEndList();
GL_COMPILE – Tells the system to send the list to the server but not to display the
contents
GL_COMPILE_AND_EXECUTE – Immediate display of the contents while the list
is being constructed.
Each time the point is to be displayed on the server, the function is executed.
glCallList(PNT);
glCallLists function executes multiple lists with a single function call
A font can be built by defining a display list
for each char, and then storing the font on the server using these display lists.
A function to draw ASCII characters:
void OurFont(char c)
{
int i;
float angle;
switch (c)
{
case 'O':
glTranslatef(0.5, 0.5, 0.0); /* move to the center */
glBegin(GL_QUAD_STRIP);
for (i = 0; i <= 12; i++) /* 12 quads; the last edge closes the ring */
{
angle = 3.14159 / 6.0 * i; /* 30 degrees in radians */
glVertex2f(0.4 * cos(angle) + 0.5, 0.4 * sin(angle) + 0.5);
glVertex2f(0.5 * cos(angle) + 0.5, 0.5 * sin(angle) + 0.5);
}
glEnd();
break;
}
}
Fonts in GLUT
GLUT provides a few raster and stroke fonts
Building hierarchical models involves incorporating relationships between various
parts of a model
co
#define EYE 1
glNewList(EYE, GL_COMPILE);
/* code to draw eye */
glEndList();

#define FACE 2
glNewList(FACE, GL_COMPILE);
/* Draw outline */
glTranslatef(.....);
glCallList(EYE);
glTranslatef(.....);
glCallList(EYE);
glEndList();
Pointing Devices :
A mouse event occurs when one of the buttons of the mouse is pressed or released
Window Events
Most windows system allows user to resize window.
This is a window event and it poses several problems like
– Do we redraw all the images
– The aspect ratio
– Do we change the size or attributes of the primitives to suit the new window
void myReshape(GLsizei w, GLsizei h)
{
/* first adjust clipping box */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0.0, (GLdouble)w, 0.0, (GLdouble)h);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
/* adjust viewport */
glViewport(0, 0, w, h);
}
Keyboard Events
When a keyboard event occurs, the ASCII code for the key that generated the event and the
mouse location are returned.
E.g. glutKeyboardFunc(myKeyboard);
glutPostRedisplay() sets a flag so that, at the end of the event loop, the
display function will be executed.
The function ensures that the display will be drawn only once each time the program
goes through the event loop.
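A sketch of such a callback (the quit-on-'q' behavior is an illustrative choice):

#include <stdlib.h>

/* key is the ASCII code; x, y is the mouse position in window coordinates */
void keyboard(unsigned char key, int x, int y)
{
    if (key == 'q' || key == 'Q')
        exit(0);             /* quit the program */
    glutPostRedisplay();     /* otherwise just request a redraw */
}

/* registered with: glutKeyboardFunc(keyboard); */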
Idle Callback is invoked when there are no other events to be performed.
Its typical use is to continuously generate graphical primitives when nothing else is
happening.
Idle callback : glutIdleFunc(function name)
Window Management
GLUT supports creation of multiple windows
id = glutCreateWindow("second window");
To set a particular window as the current window where the image has to be rendered:
glutSetWindow(id);
3.7 Menus
Three steps
o Define entries for the menu
o Define action for each menu item
Action carried out if entry selected
– Attach menu to a mouse button
menu_id = glutCreateMenu(mymenu);
glutAddMenuEntry("Clear Screen", 1);
glutAddMenuEntry("Exit", 2);
glutAttachMenu(GLUT_RIGHT_BUTTON);
Menu callback
void mymenu(int id)
{
if(id == 1) glClear();
if(id == 2) exit(0);
}
Add submenus by glutAddSubMenu("Submenu name", submenu_id);
Note each menu has an id that is returned when it is created.
Rendering Modes
OpenGL can render in one of three modes selected by glRenderMode(mode)
– GL_RENDER: normal rendering to the frame buffer (default)
– GL_FEEDBACK: provides list of primitives rendered but no output to the
frame buffer
– GL_SELECT: selection mode; each primitive within the view volume generates a hit
record that is placed in a name buffer
glPushName(GLuint name): push id on name buffer
glPopName(): pop top of name buffer
glLoadName(GLuint name): replace top name on buffer
co
id is set by application program to identify objects
The mouse callback for picking:

void mouse(int button, int state, int x, int y)
{
GLuint nameBuffer[SIZE];
GLint hits;
GLint viewport[4];
if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
{
/* initialize the name stack */
glInitNames();
glPushName(0);
glSelectBuffer(SIZE, nameBuffer);
/* set up viewing for selection mode */
glGetIntegerv(GL_VIEWPORT, viewport); /* gets the current viewport */
glMatrixMode(GL_PROJECTION);
/* save original viewing matrix */
glPushMatrix();
glLoadIdentity();
/* N x N pick area around cursor; N and the ortho limits are application-defined */
gluPickMatrix((GLdouble)x, (GLdouble)(viewport[3] - y), N, N, viewport);
gluOrtho2D(xmin, xmax, ymin, ymax);
glRenderMode(GL_SELECT);
draw_objects(GL_SELECT);
/* restore viewing matrix */
glMatrixMode(GL_PROJECTION);
glPopMatrix();
/* return to normal render mode; this call returns the number of hits */
hits = glRenderMode(GL_RENDER);
processHits(hits, nameBuffer);
/* normal render */
glutPostRedisplay();
}
}
void draw_objects(GLenum mode)
{
if (mode == GL_SELECT)
glLoadName(1);
glColor3f(1.0, 0.0, 0.0);
glRectf(-0.5, -0.5, 1.0, 1.0);
if (mode == GL_SELECT)
glLoadName(2);
glColor3f(0.0, 0.0, 1.0);
glRectf(-1.0, -1.0, 0.5, 0.5);
}

To build interactive models we keep a record of each object displayed, for example:

typedef struct object {
int type;
float x, y;
float color[3];
} object;
Define array of 100 objects & index to last object in the list.
object table[100];
int last_object;
Entering info into the object:
table[last_object].type = SQUARE;
table[last_object].x = x0;
table[last_object].y = y0;
table[last_object].color[0] = red;
…..
last_object ++;
To display all the objects, the code looks like this:
for (i = 0; i < last_object; i++)
{
switch (table[i].type)
{
case 0: break;
case 1:
{
glColor3fv(table[i].color);
triangle(table[i].x, table[i].y);
break;
}
.....
}
}
In order to add code for deleting an object, we include some extra information in the
object structure:
float bb[2][2];
bb[0][0] = x0-1.0;
bb[0][1] = y0-1.0;….
The point x = cos θ, y = sin θ always lies on a unit circle regardless of the value of θ.
In order to increase θ by a fixed amount whenever nothing is happening, we use the
idle function:

void idle()
{
theta += 2.0;
if (theta >= 360.0) theta -= 360.0;
glutPostRedisplay();
}
In order to turn the rotation feature on and off, we can include a mouse function as
follows:

void mouse(int button, int state, int x, int y)
{
if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
glutIdleFunc(idle);
if (button == GLUT_MIDDLE_BUTTON && state == GLUT_DOWN)
glutIdleFunc(NULL); /* turn the rotation off */
}
Double Buffering
We have 2 color buffers for our disposal called the Front and the Back buffers.
Front buffer is the one which is always displayed.
Back buffer is the one on which we draw
Function call to swap buffers :
glutSwapBuffers();
By default OpenGL writes on to the back buffer.
But this can be controlled using:
glDrawBuffer(GL_BACK);
glDrawBuffer(GL_FRONT_AND_BACK);
Writing Modes
XOR write
Usual (default) mode: source replaces destination (d' = s)
o Cannot write temporary lines this way because we cannot recover what was
"under" the line in a fast, simple way
Exclusive OR mode (XOR): d' = d ⊕ s
o Since x ⊕ y ⊕ x = y, if we use XOR mode to write a line, we can draw it a
second time and the line is erased!
Rubberbanding
Switch to XOR write mode
Draw object
ww
– For line can use first mouse click to fix one endpoint and then use motion
callback to continuously update the second endpoint
– Each time the mouse is moved, redraw the line (which erases it) and then draw the line
from the fixed first position to the new second position
– At end, switch back to normal drawing mode and draw line
– Works for other objects: rectangles, circles
XOR in OpenGL
There are 16 possible logical operations between two bits. All are supported by OpenGL:
logic operations are enabled with glEnable(GL_COLOR_LOGIC_OP), and XOR is
selected with glLogicOp(GL_XOR).
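A sketch of a rubberband motion callback built on these calls (xFirst, yFirst, xLast, yLast are assumed globals set by the mouse callback; the window-to-world mapping of x and y is omitted):

void motion(int x, int y)
{
    glEnable(GL_COLOR_LOGIC_OP);
    glLogicOp(GL_XOR);
    glBegin(GL_LINES);            /* redraw the old line: XOR erases it */
        glVertex2i(xFirst, yFirst);
        glVertex2i(xLast, yLast);
    glEnd();
    glBegin(GL_LINES);            /* draw the line to the new position */
        glVertex2i(xFirst, yFirst);
        glVertex2i(x, y);
    glEnd();
    xLast = x;
    yLast = y;
    glLogicOp(GL_COPY);           /* restore normal replacement mode */
    glDisable(GL_COLOR_LOGIC_OP);
    glFlush();
}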
UNIT - 4 6 Hrs
Scalars
Three-dimensional primitives
Modelling a colored cube
Affine transformations
UNIT - 4 6 Hrs
The basic geometric objects and the relationships among them can be described using three
fundamental types called scalars, points and vectors.
Geometric Objects.
Points:
One of the fundamental geometric objects is a point. A point has a location property;
mathematically, a point has neither a size nor a shape.
Points are useful in specifying objects but not sufficient.
Scalars:
Scalars are objects that obey a set of rules that are abstraction of the operations
of ordinary arithmetic.
vtu
Thus, addition and multiplication are defined and obey the usual rules such as
commutativity and associativity and also every scalar has multiplicative and
additive inverses.
Vector:
Another basic object which has both direction and magnitude; however, a vector
does not have a fixed location in space.
A directed line segment, shown in the figure below, connects two points; it has both
direction (orientation) and magnitude (length), so it is called a vector.
Because vectors do not have a fixed position, the directed line segments shown in the
figure below are identical because they have the same direction and magnitude.
Vector lengths can be altered by scalar multiplication, so the line segment A shown in the
figure below is twice the length of line segment B:
A = 2B
We can also combine directed line segments, as shown in the figure below, by using the
head-to-tail rule:
D = A + B
We obtain the new vector D from two vectors A and B by connecting the head of A to the
tail of B. The magnitude and direction of D are determined from the tail of A to the head of
B; we call D the sum of A and B and write D = A + B.
Consider the two directed line segments A and E shown in the figure below with the same
length but opposite direction. We can define the vector E in terms of A as E = -A, so the
vector E is called the inverse vector of A. The sum of vectors A and E is called the zero
vector, denoted 0, which has zero magnitude and undefined orientation.
The purpose of the graphics pipeline is to create images and display them on your
screen. The graphics pipeline takes geometric data representing an object or scene
(typically in three dimensions) and creates a two-dimensional image from it.
Your application supplies the geometric data as a collection of vertices that form
polygons, lines, and points.
The resulting image typically represents what an observer or camera would see from a
particular vantage point.
As the geometric data flows through the pipeline, the GPU's vertex processor
transforms the constituent vertices into one or more different coordinate systems, each
of which serves a particular purpose. Cg vertex programs provide a way for you to
program these transformations yourself.
Figure 4-1 illustrates the conventional arrangement of transforms used to process vertex
positions. The diagram annotates the transitions between each transform with the coordinate
space used for vertex positions as the positions pass from one transform to the next.
We can model the cube by assuming that the vertices of the cube are available through an
array of vertices, i.e.,

GLfloat vertices[8][3] = {{-1.0,-1.0,-1.0}, {1.0,-1.0,-1.0}, {1.0,1.0,-1.0}, {-1.0,1.0,-1.0},
{-1.0,-1.0,1.0}, {1.0,-1.0,1.0}, {1.0,1.0,1.0}, {-1.0,1.0,1.0}};

We can also use an object-oriented form with a 3D point type:

typedef GLfloat point3[3];

The vertices of the cube can then be defined as follows:

point3 vertices[8] = {{-1.0,-1.0,-1.0}, {1.0,-1.0,-1.0}, {1.0,1.0,-1.0}, {-1.0,1.0,-1.0},
{-1.0,-1.0,1.0}, {1.0,-1.0,1.0}, {1.0,1.0,1.0}, {-1.0,1.0,1.0}};
We can use the list of points to specify the faces of the cube. For example, one face is:

glBegin(GL_POLYGON);
glVertex3fv(vertices[0]);
glVertex3fv(vertices[3]);
glVertex3fv(vertices[2]);
glVertex3fv(vertices[1]);
glEnd();
When we are defining the 3D polygon, we have to be careful about the order in which we
specify the vertices, because each polygon has two sides. Graphics system can display either
or both of them. From the camera’s perspective, we need to distinguish between the two faces
of a polygon. The order in which the vertices are specified provides this information. In the
above example we used the order 0,3,2,1 for the first face. The order 1,0,3,2 would be the
same because the final vertex in a polygon definition is always linked back to the first, but
the order 0,1,2,3 is different.
We call face outward facing, if the vertices are traversed in a counter clockwise order, when
the face is viewed from the outside.
In our example, the order 0,3,2,1 specifies outward face of the cube. Whereas the order
0,1,2,3 specifies the back face of the same polygon.
(Figure: a square face with its corners labeled 0, 1, 2, 3.)
By specifying front and back carefully, we can eliminate faces that are not visible.
OpenGL can treat inward and outward facing polygons differently.
The cube can be represented through its vertices and edges. We use a structure, the vertex
list (shown in Fig. below), that is both simple and useful.
The data specifying the location of the vertices contain the geometry and can be stored as a
simple list or array, such as the vertex list. The top-level entity is a cube and is
composed of six faces. Each face consists of four ordered vertices. Each vertex can be
specified indirectly through its index, as in the figure shown above.
We can use the vertex list to define a color cube. We define a function quad to draw
quadrilateral polygons specified by pointers into the vertex list. The color cube specifies
the six faces, taking care to make them all outward facing, as follows:

GLfloat vertices[8][3] = {{-1.0,-1.0,-1.0}, {1.0,-1.0,-1.0}, {1.0,1.0,-1.0}, {-1.0,1.0,-1.0},
{-1.0,-1.0,1.0}, {1.0,-1.0,1.0}, {1.0,1.0,1.0}, {-1.0,1.0,1.0}};
GLfloat colors[8][3] = {{0.0,0.0,0.0}, {1.0,0.0,0.0}, {1.0,1.0,0.0}, {0.0,1.0,0.0},
{0.0,0.0,1.0}, {1.0,0.0,1.0}, {1.0,1.0,1.0}, {0.0,1.0,1.0}};
void quad(int a, int b, int c, int d)
{
glBegin(GL_QUADS);
glColor3fv(colors[a]);
glVertex3fv(vertices[a]);
glColor3fv(colors[b]);
glVertex3fv(vertices[b]);
glColor3fv(colors[c]);
glVertex3fv(vertices[c]);
glColor3fv(colors[d]);
glVertex3fv(vertices[d]);
glEnd();
}

void colorcube()
{
quad(0,3,2,1);
quad(2,3,7,6);
quad(0,4,7,3);
quad(1,2,6,5);
quad(4,5,6,7);
quad(0,1,5,4);
}
Vertex arrays
Although we used vertex lists to model the cube, this requires many OpenGL function
calls. For example, the above function makes 60 OpenGL calls: six faces, each of which
needs a glBegin, a glEnd, four calls to glColor, and four calls to glVertex. Each call
involves overhead and data transfer. This problem can be solved by using vertex arrays.
Vertex arrays provide a method for encapsulating the information in a data structure such
that we can draw polyhedral objects with only a few function calls. There are three steps:
enabling the arrays, telling OpenGL where (and in what format) the arrays are, and
rendering through the arrays.
The first two steps are called the initialization part and the third step is called the display
callback.
OpenGL allows many different types of arrays; here we are using two such arrays called
color and vertex arrays. The arrays can be enabled as follows:

glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);

The arrays are the same as before. Next, we identify where the arrays are as follows:

glVertexPointer(3, GL_FLOAT, 0, vertices);
glColorPointer(3, GL_FLOAT, 0, colors);
The first three arguments state that the elements are 3D colors and vertices stored as floats,
and that the elements are contiguous in the arrays. The fourth argument is a pointer to the
array holding the data. Next, we provide information about the relationship between the
vertices and the faces of the cube by specifying an array that holds the 24 ordered vertex
indices for the six faces:

GLubyte cubeIndices[24] = {0,3,2,1, 2,3,7,6, 0,4,7,3, 1,2,6,5, 4,5,6,7, 0,1,5,4};
Each successive group of four indices describes a face of the cube. We draw the array
through glDrawElements, which replaces all the glVertex and glColor calls in the display:

glDrawElements(GLenum mode, GLsizei count, GLenum type, void *indices)

1) Draw the cube as six four-vertex polygons:
for (i = 0; i < 6; i++)
glDrawElements(GL_POLYGON, 4, GL_UNSIGNED_BYTE, &cubeIndices[4*i]);
2) Or draw it with a single function call, as 24 vertices forming six quads:
glDrawElements(GL_QUADS, 24, GL_UNSIGNED_BYTE, cubeIndices);
Affine transformations
An affine transformation maps variables (e.g. pixel intensity values located at position
(x, y) in an input image) into new variables (e.g. (x', y') in an output image) by applying a
linear combination of translation, rotation, scaling and/or shearing (i.e. non-uniform
scaling in some directions) operations.
The general affine transformation of a point is P' = A·P + B, where A is a matrix and B a
translation vector. By defining only the B matrix, this transformation can carry out pure
translation:
x' = x + b1, y' = y + b2
Pure rotation uses the A matrix and is defined as (for positive angles being clockwise
rotations):
x' = x cos θ + y sin θ
y' = -x sin θ + y cos θ
Here, we are working in image coordinates, so the y axis goes downward. The rotation
formula can be redefined for when the y axis goes upward.
Similarly, pure scaling is:
x' = s1·x, y' = s2·y
(Note that several different affine transformations are often combined to produce a resultant
transformation. The order in which the transformations occur is significant since a translation
followed by a rotation is not necessarily equivalent to the converse.)
Since the general affine transformation is defined by 6 constants, it is possible to define this
transformation by specifying the new output image locations of any three input image
coordinate pairs. (In practice, many more points are measured and a least squares
method is used to find the best fitting transform.)
Translation
void glTranslate{fd} (TYPE x, TYPE y, TYPE z);
Multiplies the current matrix by a matrix that moves (translates) an object by the given x, y,
and z values
Rotation
void glRotate{fd} (TYPE angle, TYPE x, TYPE y, TYPE z);
Multiplies the current matrix by a matrix that rotates an object counterclockwise about the
ray from the origin through the point (x, y, z), by the given angle in degrees.
Scaling
void glScale{fd} (TYPE x, TYPE y, TYPE z);
Multiplies the current matrix by a matrix that stretches, shrinks, or reflects an
object along the axes.
Equations:
Translation: Pf = T + P
xf = xo + dx
yf = yo + dy
Rotation: Pf = R · P
xf = xo cos θ - yo sin θ
yf = xo sin θ + yo cos θ
Scale: Pf = S · P
xf = sx · xo
yf = sy · yo
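These equations translate directly into code; a minimal sketch (the type and function names are illustrative):

#include <math.h>

typedef struct { float x, y; } point2;

point2 translate2(point2 p, float dx, float dy)
{
    point2 q = { p.x + dx, p.y + dy };      /* xf = xo + dx, yf = yo + dy */
    return q;
}

point2 rotate2(point2 p, float theta)       /* theta in radians, about the origin */
{
    point2 q = { p.x * cosf(theta) - p.y * sinf(theta),
                 p.x * sinf(theta) + p.y * cosf(theta) };
    return q;
}

point2 scale2(point2 p, float sx, float sy)
{
    point2 q = { sx * p.x, sy * p.y };      /* xf = sx * xo, yf = sy * yo */
    return q;
}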
PART - B
UNIT - 5 5 Hrs
GEOMETRIC OBJECTS AND TRANSFORMATIONS – 2
Transformations in homogeneous coordinates
Concatenation of transformations
OpenGL transformation matrices
Quaternions
UNIT - 5 5 Hrs
GEOMETRIC OBJECTS AND TRANSFORMATIONS – 2
A position expressed with four components <x, y, z, w> is called
a homogeneous position. When we express a vector position as an <x, y, z> quantity, we
assume that there is an implicit 1 for its w component.
Mathematically, the w value is the value by which you would divide the x, y,
and z components to obtain the conventional 3D (nonhomogeneous) position, as shown in
Equation 4-1.
Equation 4-1 Converting Between Nonhomogeneous and Homogeneous Positions:
<x, y, z, w> corresponds to <x/w, y/w, z/w>
Expressing positions in this homogeneous form has many advantages.
For one, multiple transformations, including projective transformations required for
perspective 3D views, can be combined efficiently into a single 4x4 matrix.
Also, using homogeneous positions makes it unnecessary to perform expensive
intermediate divisions and to create special cases involving perspective views.
Homogeneous positions are also handy for representing directions and curved
surfaces described by rational polynomials.
Concatenation of transformations
Rotate a house about the origin.
Rotate the house about one of its corners:
– translate so that the corner of the house is at the origin
– rotate the house about the origin
– translate so that the corner returns to its original position
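In OpenGL this concatenation can be sketched as follows (x0, y0 and theta are assumed application variables; since OpenGL post-multiplies the current matrix, the calls appear in the reverse of the order in which the transformations are applied to the object):

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(x0, y0, 0.0);         /* 3. move the corner back */
glRotatef(theta, 0.0, 0.0, 1.0);   /* 2. rotate about the origin (z axis) */
glTranslatef(-x0, -y0, 0.0);       /* 1. move the corner to the origin */
/* ... draw the house ... */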
World Space
Object space for a particular object gives it no spatial relationship with respect to other
objects. The purpose of world space is to provide some absolute reference for all the objects
in your scene. How a world-space coordinate system is established is arbitrary. For example,
you may decide that the origin of world space is the center of your room. Objects in the
room are then positioned relative to the center of the room and some notion of scale (Is a
unit of distance a foot or a meter?) and some notion of orientation (Does the positive
y-axis point up?).
The way an object, specified in object space, is positioned within world space is by means of
a modeling transform. For example, you may need to rotate, translate, and scale the 3D
model of a chair so that the chair is placed properly within your room's world-space
coordinate system. Two chairs in the same room may use the same 3D chair model but have
different modeling transforms, so that each chair exists at a distinct location in the room.
You can mathematically represent all the transforms in this chapter as a 4x4 matrix. Using
the properties of matrices, you can combine several translations, rotations, scales, and
projections into a single 4x4 matrix by multiplying them together. When you concatenate
matrices in this way, the combined matrix also represents the combination of the respective
transforms. This turns out to be very powerful, as you will see.
If you multiply the 4x4 matrix representing the modeling transform by the object-space
position in homogeneous form (assuming a 1 for the w component if there is no
explicit w component), the result is the same position transformed into world space. This
same matrix math principle applies to all subsequent transforms discussed in this chapter.
Figure 4-2 illustrates the effect of several different modeling transformations. The left side
of the figure shows a robot modeled in a basic pose with no modeling transformations
applied. The right side shows what happens to the robot after you apply a series of modeling
transformations to its various body parts. For example, you must rotate and translate the
right arm to position it as shown. Further transformations may be required to translate and
rotate the newly posed robot into the proper position and orientation in world space.
Figure 4-2 The Effect of Modeling Transformations
Eye Space
Ultimately, you want to look at your scene from a particular viewpoint (the "eye"). In the
coordinate system known as eye space(or view space), the eye is located at the origin of the
coordinate system. Following the standard convention, you orient the scene so the eye is
looking down one direction of the z-axis. The "up" direction is typically the
positive y direction.
The transform that converts world-space positions to eye-space positions is the view
transform. Once again, you express the view transform with a 4x4 matrix.
The typical view transform combines a translation that moves the eye position in world
space to the origin of eye space and then rotates the eye appropriately. By doing this, the
view transform defines the position and orientation of the viewpoint.
Figure 4-3 illustrates the view transform. The left side of the figure shows the robot from
Figure 4-2 along with the eye, which is positioned at <0, 0, 5> in the world-space coordinate
system. The right side shows them in eye space. Observe that eye space positions the origin
at the eye. In this example, the view transform translates the robot in order to move it to the
correct position in eye space. After the translation, the robot ends up at <0, 0, -5> in eye
space, while the eye is at the origin. In this example, eye space and world space share the
positive y-axis as their "up" direction and the translation is purely in the z direction.
Otherwise, a rotation might be required as well as a translation.
Figure 4-3 The Effect of the Viewing Transformation
The Modelview Matrix
Most lighting and other shading computations involve quantities such as positions and
surface normals. In general, these computations tend to be more efficient when performed in
either eye space or object space. World space is useful in your application for establishing
the overall spatial relationships between objects in a scene, but it is not particularly efficient
for lighting and other shading computations.
For this reason, we typically combine the two matrices that represent the modeling and view
transforms into a single matrix known as the modelview matrix. You can combine the two
matrices by simply multiplying the view matrix by the modeling matrix.
Clip Space
Once positions are in eye space, the next step is to determine what positions are actually
viewable in the image you eventually intend to render. The coordinate system subsequent to
eye space is known as clip space, and coordinates in this space are called clip coordinates.
The vertex position that a Cg vertex program outputs is in clip space. Every vertex program
optionally outputs parameters such as texture coordinates and colors, but a vertex
program always outputs a clip-space position. As you have seen in earlier examples,
the POSITION semantic is used to indicate that a particular vertex program output is the
clip-space position.
The projection transform defines a view frustum that represents the region of eye space
where objects are viewable. Only polygons, lines, and points that are within the view
frustum are potentially viewable when rasterized into an image. OpenGL and Direct3D have
slightly different rules for clip space. In OpenGL, everything that is viewable must be
within an axis-aligned cube such that the x, y, and z components of its clip-space position
satisfy -w <= x <= w, -w <= y <= w, and -w <= z <= w. Direct3D has the same clipping
requirement for x and y, but the z requirement is 0 <= z <= w. These clipping rules assume
that the clip-space position is in homogeneous form, because they rely on w.
The projection transform provides the mapping to this clip-space axis-aligned cube
containing the viewable region of clip space from the viewable region of eye space (the
view frustum).
The 4x4 matrix that corresponds to the projection transform is known as the projection
matrix.
Figure 4-4 illustrates how the projection matrix transforms the robot in eye space from
Figure 4-3 into clip space. The entire robot fits into clip space, so the resulting image should
picture the robot without any portion of the robot being clipped.
In the final image, the difference between the
two clip-space definitions is not apparent. Typically, the application is responsible for
providing the appropriate projection matrix to Cg programs.
UNIT - 6 7 hrs
VIEWING
Classical and computer viewing
Positioning of the camera
Simple projections
Projections in OpenGL
Hidden-surface removal
Parallel-projection matrices
Perspective-projection matrices
Projections and shadows.
UNIT - 6 7 hrs
VIEWING
6.1 Classical Viewing
3 basic elements for viewing :
– One or more objects
– A viewer with a projection surface
– Projectors that go from the object(s) to the projection surface
Classical views are based on the relationship among these elements
Perspective projection has a COP (center of projection) where all the projector lines
converge.
Parallel projection has parallel projectors. Here the viewer is assumed to be present at
infinity. So here we have a "direction of projection" (DOP) instead of a center of
projection (COP).
Types Of Planar Geometric Projections :
Orthographic Projections :
A viewer needs more than 2 views to visualize what an object looks like from its
multiview orthographic projection.
Cannot see what object really looks like because many surfaces hidden from view
– Often we add the isometric
co
Axonometric Projections
Projectors are orthogonal to the projection plane , but projection plane can move
relative to object.
Classification by how many angles of a corner of a projected cube are the same
none: trimetric
two: dimetric
three: isometric
Perspective Viewing
Characterized by diminution of size i.e. when the objects move farther from the viewer it
appears smaller.
Major use is in architecture and animation.
(imp) Viewing with a Computer
There are three aspects of the viewing process, all of which are implemented in the
pipeline:
– Positioning the camera: setting the model-view matrix
– Selecting a lens: setting the projection matrix
– Clipping: setting the view volume
In OpenGL, initially the object and camera frames are the same
–
Default model-view matrix is an identity
The camera is located at origin and points in the negative z direction
OpenGL also specifies a default view volume that is a cube with sides of length 2
centered at the origin
– Default projection matrix is an identity
We can move the camera to any desired position by a sequence of rotations and
translations
Example: to position the camera to obtain a side view :
– Rotate the camera
– Move it away from origin
– Model-view matrix C = TR
E.g. 1: Code to view an object present at the origin from the positive x axis.
First the camera should be moved away from the object:

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0, 0.0, -d);
glRotatef(-90.0, 0.0, 1.0, 0.0);
Consider that we would like to get an isometric view of a cube centered at origin.
Consider the camera is placed somewhere along the positive z axis.
R = Rx Ry

Rx is of the form:
1     0        0       0
0   cos θ   -sin θ     0
0   sin θ    cos θ     0
0     0        0       1

[NOTE: The default matrix for a homogeneous-coordinate, right-handed 3D system is the
identity matrix:
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
Refer chap 4 (transformations)]

Ry is of the form:
 cos θ   0   sin θ   0
   0     1     0     0
-sin θ   0   cos θ   0
   0     0     0     1
The LookAt function: gluLookAt(eyex, eyey, eyez, atx, aty, atz, upx, upy, upz)
OpenGL Perspective
In case of orthographic projections, the default view volume is a parallelepiped. This is
because the projector lines are parallel and they do not converge at any point (theoretically
they converge at infinity).
In case of perspective projections, the projector lines converge at the COP. Hence the view
volume would be a frustum rather than a parallelepiped.
The frustum (part of a pyramid) as shown in the diagram has a near and a far plane. The
objects within this would be within the view volume and visible to the viewer.
glFrustum(left,right,bottom,top,near,far)
gluPerspective(fovy, aspect, near, far)
where aspect = w/h and
fovy = field of view angle (the area that the lens would
cover is determined by the angle shown in the diagram)
6.6 Hidden-surface removal
A graphics system passes all the faces of a 3D object down the graphics pipeline to
generate the image. But the viewer might not be able to see all of these faces; for example,
all 6 faces of a cube might not be visible to a viewer. Thus the graphics system must be
careful as to which surfaces it has to display.
Hidden-surface removal algorithms are those that remove the surfaces of the image that
should not be visible to the viewer.
2 types:
Object Space Algorithm : Orders the surfaces of the objects in such a way that
rendering them would provide the correct image.
Image Space Algorithm : Keeps track of the distance of the point rasterized from the
projection plane.
– The nearest point from the projection plane is what gets rendered.
– E.g z buffer algorithm.
Culling : For convex objects like sphere, the parts of the object which are away from
the viewer can be eliminated or culled before the rasterizer.
glEnable(GL_CULL_FACE);
A mesh is a set of polygons that share vertices and edges. Meshes are used, for example, to
model topographical elevations.
Suppose that the heights are given as a function of x and z:
y = f(x, z)
Then, by taking samples of x and z, y can be calculated as follows:
y_ij = f(x_i, z_j)
y_i+1,j = f(x_i+1, z_j)
y_i,j+1 = f(x_i, z_j+1)
y_i+1,j+1 = f(x_i+1, z_j+1)
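A sketch of rendering such a mesh as wireframe quads (f, N and the grid spacings dx, dz are assumptions for illustration):

extern float f(float x, float z);   /* the height function being sampled */

void draw_mesh(int N, float dx, float dz)
{
    int i, j;
    for (i = 0; i < N - 1; i++) {
        for (j = 0; j < N - 1; j++) {
            float x = i * dx, z = j * dz;
            glBegin(GL_LINE_LOOP);  /* one quad of the mesh */
                glVertex3f(x,      f(x,      z),      z);
                glVertex3f(x + dx, f(x + dx, z),      z);
                glVertex3f(x + dx, f(x + dx, z + dz), z + dz);
                glVertex3f(x,      f(x,      z + dz), z + dz);
            glEnd();
        }
    }
}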
Normalization
Rather than derive a different projection matrix for each type of projection, we can
ww
convert all projections to orthogonal projections with the default view volume
This strategy allows us to use standard transformations in the pipeline and makes for
efficient clipping
We stay in four-dimensional homogeneous coordinates through both the modelview
and projection transformations
o Both these transformations are nonsingular
o Default to identity matrices (orthogonal view)
Normalization lets us clip against simple cube regardless of type of projection
Orthogonal Normalization
m
glOrtho(left,right,bottom,top,near,far)
co
First the view volume specified by the glOrtho function is mapped to the canonical form.
Canonical form: the default view volume centered at the origin and having sides of length 2.
The normalization matrix is:

P = | 2/(right-left)       0                 0           -(right+left)/(right-left)   |
    | 0                2/(top-bottom)        0           -(top+bottom)/(top-bottom)   |
    | 0                    0            2/(near-far)     -(far+near)/(far-near)       |
    | 0                    0                 0            1                           |
UNIT - 7 6 Hrs
LIGHTING AND SHADING
Light sources
The Phong lighting model
Computation of vectors
Polygonal shading
UNIT - 7 6 Hrs
LIGHTING AND SHADING
The OpenGL API provides a set of functions to implement lighting, shading and material
properties in programs.
We need lighting because:
Light-material interactions cause each point to have a different color or shade
All these following properties affect the way an object looks
– Light sources
– Material properties
– Location of viewer
– Surface orientation
Types of Materials
Specular surfaces – These surfaces exhibit high reflectivity. In these surfaces, the
angle of incidence is almost equal to the angle of reflection.
Diffuse surfaces – These are the surfaces which have a matt finish. These types of
surfaces scatter light
Translucent surfaces – These surfaces allow the light falling on them to partially pass
through them.
The smoother a surface, the more reflected light is concentrated in the direction a perfect
mirror would reflect light. A very rough surface scatters light in all directions.
Rendering Equation
The infinite scattering and absorption of light can be described by the rendering
equation. Rendering equation is global and includes
– Shadows
– Multiple scattering from object to object
Bidirectional Reflection Distribution Function (BRDF)
The reflection, transmission and absorption of light are described by a single function, the
BRDF. It is described by the following:
– the frequency of light
– the directions of the incoming and the outgoing light
When light strikes a surface, some of it is reflected and the rest is absorbed.
The reflected light is scattered in a manner that depends on the smoothness and
orientation of the surface.
Simple Light Sources
Spotlight: This source can be considered as restricted light from an ideal point source. A
spotlight originates at a particular point and covers only a specific cone-shaped area.
Ambient light
– Same amount of light everywhere in scene
– Can model contribution of many sources and reflecting surfaces
Any kind of light source will have 3 component colors namely R,G and B
Point source
Emits light equally in all directions.
A point source located at p0 can be characterized by 3 component color function:
L(p0) = (Lr(p0),Lg(p0),Lb(p0))
Ambient light
The ambient illumination is given by: Ambient illumination = Ia
And the RGB components are represented by
Lar,Lag,Lab
Where La – scalar representing each component
Spotlights
A spotlight can be characterized by :
– A cone whose apex is at Ps
– Pointing in the direction Is
– Width determined by an angle θ
• Cosines are convenient functions for lighting calculations
The Phong Lighting Model
Phong developed a simple model that can be computed rapidly
It considers three components
o Diffuse
o Specular
o Ambient
And uses four vectors:
– To source, represented by the vector l
– To viewer, represented by the vector v
– Normal, represented by the vector n
– Perfect reflector, represented by the vector r
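Putting the terms together, a standard statement of the Phong model for a single light
source is:

I = ka·La + kd·Ld·max(l · n, 0) + ks·Ls·max(r · v, 0)^α

where ka, kd and ks are the ambient, diffuse and specular reflection coefficients of the
material, La, Ld and Ls are the corresponding source intensities, and α is the shininess
coefficient.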
We need 9 coefficients to characterize the light source with ambient, diffuse and specular
components.The Illumination array for the ith light source is given by the matrix:
Lira Liga Liba
Li = Lird Ligd Libd
Lirs Ligs Libs
The intensity for each color source can be computed by adding the ambient,specular and
diffuse components.
E.g. the red intensity that we see from source i is the sum of its ambient, diffuse and
specular contributions: I_ir = I_ira + I_ird + I_irs.
Since the ambient reflection coefficient is some positive factor,
0 <= ka <= 1
Therefore Ia = ka·La
Diffuse Reflection
A Lambertian surface has:
– Perfectly diffuse reflection
– Light scattered equally in all directions
– Reflected light proportional to cos θi
Here the light reflected is proportional to the vertical component of the incoming light.
Specular reflection is modeled as Ir = ks·I·cos^α φ.
Here Ir is the reflected intensity,
ks = absorption coefficient,
I = incoming intensity, φ = angle between the viewer and the direction of a perfect
reflector, and α = shininess coefficient.
The Shininess Coefficient
Metals are lustrous by nature, so they have a higher shininess coefficient. The figure below
shows shininess coefficients for different materials.
Computation of Vectors
Normal vectors:
Given 3 non-collinear points (p0, p1, p2) on a plane, the normal can be calculated by
n = (p2 - p0) × (p1 - p0)
If a surface is described implicitly by the function f(p) = f(x, y, z) = 0, and if p and p0 are
2 points close to each other on a smooth surface, the normal is given by the gradient.
Normal to Sphere
Implicit function: f(x, y, z) = 0
Normal given by gradient
Sphere: f(p) = p · p - 1
Parametric Form
For sphere
x=x(u,v)=cos u sin v
y=y(u,v)=cos u cos v
z= z(u,v)=sin u
Flat shading
In flat shading the three vectors l, n and v are assumed constant over each polygon, so each
polygon is shaded with a single color; distinct boundaries appear between adjacent
polygons after color assignment.
Enabling lights in OpenGL:
o glEnable(GL_LIGHTi), i = 0, 1, ...
For each light source, we can set an RGB for the diffuse, specular, and ambient parts,
and the position:

GLfloat diffuse0[] = {1.0, 0.0, 0.0, 1.0};
GLfloat ambient0[] = {1.0, 0.0, 0.0, 1.0};
GLfloat specular0[] = {1.0, 0.0, 0.0, 1.0};
GLfloat light0_pos[] = {1.0, 2.0, 3.0, 1.0};
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_POSITION, light0_pos);
glLightfv(GL_LIGHT0, GL_AMBIENT, ambient0);
glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuse0);
glLightfv(GL_LIGHT0, GL_SPECULAR, specular0);

Lighting and shading need to be applied properly to back-facing surfaces as well. In order
to enable two-sided shading, we use:
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);
Material Properties
All material properties are specified by:
glMaterialfv(GLenum face, GLenum type, GLfloat *pointer_to_array);
We have seen that each material has different ambient, diffuse and specular properties:
GLfloat ambient[] = {1.0, 0.0, 0.0, 1.0};
GLfloat diffuse[] = {1.0, 0.8, 0.0, 1.0};
GLfloat specular[] = {1.0, 1.0, 1.0, 1.0};
Defining shininess and emissive properties:
glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, 100.0);
GLfloat emission[] = {0.0, 0.3, 0.3, 1.0};
glMaterialfv(GL_FRONT_AND_BACK, GL_EMISSION, emission);
Defining Material Structures
typedef struct materialStruct
{
GLfloat ambient[4];
GLfloat diffuse[4];
GLfloat specular[4];
GLfloat shininess;
} materialStruct;
Global Illumination
Ray tracer
- Considers the ray tracing model to find out the intensity at any point on the
object
UNIT - 8 8 Hrs
IMPLEMENTATION
Clipping
Line-segment clipping
Polygon clipping
Rasterization
Bresenham’s algorithm
Polygon rasterization
Hidden-surface removal
Antialiasing
Display considerations.
UNIT - 8 8 Hrs
IMPLEMENTATION
8.1 Clipping
Clipping is a process of removing the parts of the image that fall outside the view volume
since it would not be a part of the final image seen on the screen.
Cohen-Sutherland 2D Clipping Algorithm
Basic idea:
– Encode the line endpoints
– Successively divide the line segments so that they are completely contained in the
window or lie completely outside the window
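A sketch of the endpoint encoding (the 4-bit outcode layout is a convention chosen here for illustration):

#define CODE_TOP    8   /* y > ymax */
#define CODE_BOTTOM 4   /* y < ymin */
#define CODE_RIGHT  2   /* x > xmax */
#define CODE_LEFT   1   /* x < xmin */

int outcode(float x, float y,
            float xmin, float xmax, float ymin, float ymax)
{
    int code = 0;
    if (y > ymax)      code |= CODE_TOP;
    else if (y < ymin) code |= CODE_BOTTOM;
    if (x > xmax)      code |= CODE_RIGHT;
    else if (x < xmin) code |= CODE_LEFT;
    return code;
}

/* If (code1 | code2) == 0 the segment is trivially accepted;
   if (code1 & code2) != 0 it is trivially rejected;
   otherwise the segment is subdivided at a window boundary. */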
Bresenham's midpoint line algorithm
Consider drawing a line on a raster grid where we restrict the allowable slopes of the line
to the range 0 <= m <= 1.
If we further restrict the line-drawing routine so that it always increments x as it plots, it
becomes clear that, having plotted a point at (x, y), the routine has a severely limited range
of options as to where it may put the next point on the line:
– It may plot the point (x+1, y), or
– It may plot the point (x+1, y+1).
So, working in the first positive octant of the plane, line drawing becomes a matter of
deciding between two possibilities at each step.
We can draw a diagram of the situation which the plotting program finds itself in having
plotted (x, y).
In plotting (x, y) the line drawing routine will, in general, be making a compromise
between what it would like to draw and what the resolution of the screen actually allows it
to draw.
Usually the plotted point (x, y) will be in error: the actual, mathematical point on the line
will not be addressable on the pixel grid. So we associate an error, e, with each y ordinate:
the real value of y should be y + e. This error will range from -0.5 to just under +0.5.
In moving from x to x+1 we increase the value of the true (mathematical) y-ordinate by an
amount equal to the slope of the line, m. We will choose to plot (x+1, y) if the difference
between this new value and y is less than 0.5, i.e. if e + m < 0.5.
Otherwise we will plot (x+1, y+1). It should be clear that by so doing we minimise the
total error between the mathematical line segment and what actually gets drawn on the
display.
The error resulting from this new point can now be written back into e; this will allow us
to repeat the whole process for the next point along the line, at x+2.
The new value of error can adopt one of two possible values, depending on what new point
is plotted. If (x+1, y) is chosen, the new value of error is given by:
e' = e + m
Otherwise it is:
e' = e + m - 1
This still employs floating point values. Consider, however, what happens if we multiply
across by 2Δx. With m = Δy/Δx, the test e + m < 0.5 becomes 2(e·Δx + Δy) < Δx.
The update rules for the error on each step may also be cast into this form. Consider the
floating-point versions of the update rules:
e' = e + m
e' = e + m - 1
Multiplying through by Δx (and writing ē = e·Δx) yields:
ē' = ē + Δy
ē' = ē + Δy - Δx
which is in integer form, since Δx and Δy are integers.
This gives an algorithm for a DDA which avoids rounding operations, instead using the
error variable to control plotting. Using this new "error" value, with the new test and
update equations, we get Bresenham's integer-only line drawing algorithm:
– Integer only, hence efficient (fast).
– Multiplication by 2 can be implemented by a left-shift.
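A sketch of the resulting algorithm in C for the first octant (write_pixel is the frame-buffer write used earlier in these notes; the color value 1 is arbitrary):

void bresenham(int x0, int y0, int x1, int y1)
{
    int dx = x1 - x0, dy = y1 - y0;   /* assumes 0 <= dy <= dx */
    int err = 2 * dy - dx;            /* scaled decision variable: integers only */
    int x, y = y0;
    for (x = x0; x <= x1; x++) {
        write_pixel(x, y, 1);
        if (err > 0) {                /* true line is closer to y + 1 */
            y++;
            err += 2 * (dy - dx);
        } else {
            err += 2 * dy;            /* stay at the same y */
        }
    }
}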
Hidden-surface removal: drawing the objects that are closer to the viewing position and
eliminating objects which are obscured by other "nearer" objects.
- Object Space: compares objects and parts of objects to each other to determine which
surfaces and lines should be labeled as invisible. Generally used for hidden line removal.
- Image Space: visibility is determined point by point at each pixel position on the
projection plane. Generally used for hidden surface removal. Back face culling is also a
form of hidden surface removal.
Painter's Algorithm
Surfaces are sorted by their distance from the viewer and painted from back to front, so
that nearer surfaces overwrite farther ones.
♦ Problems:
- Objects must be drawn in a particular order based upon their distance from the view point
- If the viewing position is changed, the drawing order must be changed
co
Z Buffering
Commonly used image-space algorithm which uses a depth or Z buffer to keep track of the
distance from the projection plane to each point on the object
- For each pixel position, the surface with the smallest z coordinate is visible
- Depth or Z values are usually normalized to values between zero and one
Z Buffer Algorithm
1. Clear the color buffer to the background color
2. Initialize all xy coordinates in the Z buffer to one
3. For each fragment of each surface, compare depth values to those already stored in the
Z buffer
- Calculate the distance from the projection plane for each xy position on the surface
- If the distance is less than the value currently stored in the Z buffer:
Set the corresponding position in the color buffer to the color of the fragment
Set the value in the Z buffer to the distance to that object
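A sketch of the per-fragment test (WIDTH, HEIGHT, zbuf and framebuf are illustrative globals; depths are normalized so that smaller means nearer):

#define WIDTH  512
#define HEIGHT 512

float    zbuf[HEIGHT][WIDTH];        /* initialized to 1.0 (the far plane) */
unsigned framebuf[HEIGHT][WIDTH];    /* initialized to the background color */

void plot_fragment(int x, int y, float z, unsigned color)
{
    if (z < zbuf[y][x]) {            /* fragment is nearer than what is stored */
        zbuf[y][x]     = z;          /* record the new nearest depth */
        framebuf[y][x] = color;      /* write the fragment's color */
    }
}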
♦ Comments
- Z-buffer testing can increase application performance
- Software buffers are much slower than specialized hardware depth buffers
- The number of bitplanes associated with the Z buffer determine its precision or resolution