Computer Graphics OpenGL Lecture Notes
www.opengl.org

Slide OpenGL: What can it do?: Imaging part: works on pixels and bitmaps. Geometry part: works on vertices and polygons. OpenGL uses a rendering pipeline that starts from data and ends with a display device.
Slide OpenGL: More info: Application Program Interface based on C-style function calls. Industry standard: one of several (Java3D and DirectX are others). Stable, reliable and portable.
Slide OpenGL Functions: glVertex2i() — gl is the prefix of all OpenGL function names, Vertex is the function name, 2i describes the arguments: two integers.
Slide OpenGL Datatypes: GLenum, GLboolean, GLbitfield: unsigned datatypes. GLvoid: pseudo datatype for pointers and return values. GLbyte, GLshort, GLint: 1-, 2-, 4-byte signed. GLubyte, GLushort, GLuint: 1-, 2-, 4-byte unsigned. GLsizei: 4-byte signed size datatype.
Slide OpenGL Datatypes: GLfloat: single precision float. GLclampf: single precision float in [0,1]. GLdouble: double precision float. GLclampd: double precision float in [0,1].
glBegin(GL_LINES);
    glVertex2i(100, 50);
    glVertex2i(100, 130);
glEnd();
Slide What are those numbers?: There is no predefined way of interpreting the coordinates; OpenGL can work with different coordinate systems.
Chapter 2
CG Basics
Slide Lecture 4: Coordinate Systems, Viewports, World Windows; Clipping; Relative Drawing; Parameterized Curves; Double Buffering for Animation
Note that these terms can be used both for 2D and for 3D.
The part of this space that we want to display is called the world window.
Slide A simple example: The window-to-viewport mapping is

    sx = A·x + C,   sy = B·y + D

with

    A = (V.r − V.l) / (W.r − W.l),   C = V.l − A·W.l
    B = (V.t − V.b) / (W.t − W.b),   D = V.b − B·W.b
Slide In OpenGL:
void setWindow(float left, float right, float bottom, float top)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(left, right, bottom, top);
}

void setViewport(int left, int right, int bottom, int top)
{
    glViewport(left, bottom, right - left, top - bottom);
}
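A possible use of these two helpers, as a sketch only (the 640x480 window size, the world window bounds and the callback name myDisplay are assumptions, not part of the lecture code):

/* map the world window [0,10]x[0,10] to the whole screen window before drawing */
void myDisplay(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    setWindow(0.0, 10.0, 0.0, 10.0);   /* world window (world coordinates)  */
    setViewport(0, 640, 0, 480);       /* viewport (screen coordinates)     */
    glBegin(GL_LINES);
        glVertex2i(2, 2);
        glVertex2i(8, 8);
    glEnd();
    glFlush();
}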
Slide Clipping:
What happens to parts of the world that are outside of the world window? Answer: they are not drawn. Clipping lines means identifying the segment of a line that is to be drawn. Input: the endpoints of a line and a world window. Output: the new endpoints of the line (if anything is to be drawn at all).
Slide Clipping:
First step: testing for trivial accept or reject (Cohen-Sutherland clipping algorithm). For each endpoint P, do four tests and compute a 4-bit word: 1. Is P to the left of the world window? 2. Is P above the top of the world window? 3. Is P to the right of the world window? 4. Is P below the bottom of the world window?
If the segment is neither trivially accepted nor rejected: find the point where the line crosses a world window border, move the outer endpoint to that border point, and repeat until trivial accept or reject.
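As a sketch (not the lecture's own code), the outcode computation for the trivial accept/reject test could look like this in C; the bit layout and parameter names are illustrative assumptions:

#define LEFT   8   /* P is to the left of the window   */
#define ABOVE  4   /* P is above the top of the window */
#define RIGHT  2   /* P is to the right of the window  */
#define BELOW  1   /* P is below the bottom            */

unsigned char outcode(float x, float y,
                      float W_l, float W_r, float W_b, float W_t)
{
    unsigned char code = 0;
    if (x < W_l) code |= LEFT;
    if (y > W_t) code |= ABOVE;
    if (x > W_r) code |= RIGHT;
    if (y < W_b) code |= BELOW;
    return code;
}

/* Trivial accept: both endpoint codes are 0.
   Trivial reject: code1 & code2 != 0, i.e. both endpoints lie outside
   the same window edge. */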
Relative drawing: use two functions MOVEREL and LINEREL to move/draw relative to the current position CP. The implementation is straightforward (it can also be found in the book on page 105).
Turtle graphics uses the commands TURNTO (absolute angle), TURN (relative angle) and FORWARD (distance, isVisible).
The implementation is straightforward: maintain an additional current direction (CD) in a static global variable and use simple trigonometry (sin, cos) for FORWARD.
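A minimal sketch of this idea, assuming OpenGL immediate-mode drawing and global variables CP (current position) and CD (current direction); all names are illustrative, not the book's actual interface:

#include <math.h>
#include <GL/glut.h>

static float CPx = 0.0f, CPy = 0.0f;  /* current position CP           */
static float CD  = 0.0f;              /* current direction in degrees  */

static void moveRel(float dx, float dy) { CPx += dx; CPy += dy; }

static void lineRel(float dx, float dy)
{
    glBegin(GL_LINES);
        glVertex2f(CPx, CPy);
        glVertex2f(CPx + dx, CPy + dy);
    glEnd();
    CPx += dx; CPy += dy;
}

void turn(float angle)   { CD += angle; }   /* TURN: relative angle   */
void turnTo(float angle) { CD = angle;  }   /* TURNTO: absolute angle */

void forward(float dist, int isVisible)     /* FORWARD(distance, isVisible) */
{
    const float radPerDeg = 3.14159265f / 180.0f;
    float dx = dist * (float)cos(radPerDeg * CD);
    float dy = dist * (float)sin(radPerDeg * CD);
    if (isVisible) lineRel(dx, dy);         /* draw while moving */
    else           moveRel(dx, dy);         /* just move the pen */
}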
Arcs are partially drawn circles; instead of dividing the whole circle into segments, divide only the arc.
Implicitly: Give a function F so that F (x, y ) = 0 for all points of the curve
Some curves are single valued in x: F(x, y) = y − g(x), or in y: F(x, y) = x − h(y). Some curves are neither, e.g. the circle needs two functions: y = sqrt(R² − x²) and y = −sqrt(R² − x²).
For some cases, we can use the implicit form to define an inside and an outside of a curve: F(x, y) < 0 inside, F(x, y) > 0 outside.
Example: x(t) = W cos(t), y(t) = H sin(t), t ∈ [0, 2π]. In order to find an implicit form from a parametric form, we can use the two equations x(t) and y(t) to eliminate t and find a relationship that holds for all t. For the ellipse: (x/W)² + (y/H)² = 1.
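As an illustration (not from the lecture), polygonalizing the parametric ellipse into a line strip could look like this; W, H and the number of segments are parameters of the sketch:

#include <math.h>
#include <GL/glut.h>

void drawEllipse(float W, float H, int nSegments)
{
    const float twoPi = 6.2831853f;
    int i;
    glBegin(GL_LINE_STRIP);
    for (i = 0; i <= nSegments; i++) {
        float t = twoPi * (float)i / (float)nSegments;      /* t in [0, 2pi] */
        glVertex2f(W * (float)cos(t), H * (float)sin(t));   /* point on the ellipse */
    }
    glEnd();
}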
Slide Superellipses:
A superellipse is defined by the implicit form (x/W)^n + (y/H)^n = 1. A parametric form is x(t) = W cos(t) |cos(t)|^(2/n − 1), y(t) = H sin(t) |sin(t)|^(2/n − 1). A supercircle is a superellipse with W = H.
Helix: x(t) = cos(t), y(t) = sin(t), z(t) = bt. Toroidal spiral: x(t) = (a sin(ct) + b) cos(t), y(t) = (a sin(ct) + b) sin(t), z(t) = a cos(ct).
Slide Lecture 5:
Vectors; lines and planes in 3D space; affine representation; the dot product and the cross product; homogeneous representations; intersection and clipping
Slide Vectors:
We all remember what vectors are, right? The difference of two points is a vector. The sum of a point and a vector is a point. A linear combination av + bw is a vector. Let's write w = a1·v1 + a2·v2 + ... + an·vn. If a1 + a2 + ... + an = 1, this is called an affine combination; if additionally ai ≥ 0 for i = 1...n, it is a convex combination. To find the length of a vector, we can use Pythagoras: |w| = sqrt(w1² + w2² + ... + wn²).
Slide Vectors:
When we know the length, we can normalize the vector, i.e. bring it to unit length: â = a/|a|. We can call such a unit vector a direction. The dot product of two vectors is a·b = Σ_{i=1..n} ai·bi. It has the well-known properties a·b = b·a (symmetry), (a + c)·b = a·b + c·b (linearity), (sa)·b = s(a·b) (homogeneity), |b|² = b·b. We can play the usual algebraic games with vectors (simplification of equations).
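A small sketch of these vector operations in C for 3D vectors; the Vector3 struct and function names are assumptions for illustration:

#include <math.h>

typedef struct { float x, y, z; } Vector3;

float dot(Vector3 a, Vector3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

float length(Vector3 a)         { return (float)sqrt(dot(a, a)); }  /* Pythagoras */

Vector3 normalize(Vector3 a)    /* unit vector a/|a| ("direction") */
{
    float len = length(a);
    Vector3 u = { a.x / len, a.y / len, a.z / len };
    return u;
}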
The distance from a point C to the line through A in direction v is |v × (C − A)| / |v|. Projections are used to simulate reflections.
This leads to a distinction between points and vectors by using a fourth coefficient in the so-called homogeneous representation of points and vectors.
It can also be represented in point normal form with a point in the plane and a normal vector: for any point R in the plane, n · (R − B) = 0. A part of the plane restricted by the length of two vectors is called a planar patch.
Slide Intersections:
Every line segment has a parent line. We can first find the intersection of the parent lines and then check whether the intersection point lies in both line segments. In order to intersect a plane with a line, we describe the line parametrically and the plane in point normal form. Solving the resulting equation gives us a hit time t that can be substituted into the parametric representation of the line to identify the hit point.
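A sketch of this intersection computation, reusing the Vector3/dot helpers sketched above; the function name and the parallel-line test are assumptions:

#include <math.h>

/* Line: P(t) = A + t*c.  Plane in point normal form: n . (R - B) = 0.
   Returns 1 and the hit time t if the line hits the plane. */
int linePlaneIntersect(Vector3 A, Vector3 c,    /* line: start point, direction */
                       Vector3 B, Vector3 n,    /* plane: point, normal         */
                       float *tHit)
{
    float denom = dot(n, c);
    if (fabs(denom) < 1e-6f) return 0;          /* line is parallel to the plane */
    /* n . (B - A - t*c) = 0  =>  t = n . (B - A) / (n . c) */
    Vector3 BA = { B.x - A.x, B.y - A.y, B.z - A.z };
    *tHit = dot(n, BA) / denom;                 /* hit point is A + (*tHit)*c    */
    return 1;
}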
Slide Lecture 6:
Transformations: in 2D, in 3D, in OpenGL
Slide Transformations:
Transformations are an easy way to reuse shapes A transformation can also be used to present different views of the same object Transformations are used in animations.
There are two principal ways to see transformations: an object transformation is applied to the coordinates of each point of an object, and the coordinate system is unchanged; a coordinate transformation defines a new coordinate system in terms of the old coordinate system and represents all points of the object in the new coordinate system. A transformation is a function that maps a point P to a point Q; Q is called the image of P.
In homogeneous coordinates, a 2D affine (object) transformation is written as a matrix multiplication:

    ( Qx )   ( m11 m12 m13 ) ( Px )
    ( Qy ) = ( m21 m22 m23 ) ( Py )
    ( 1  )   (  0   0   1  ) ( 1  )

or we can also transform vectors with the same matrix:

    ( Wx )   ( m11 m12 m13 ) ( Vx )
    ( Wy ) = ( m21 m22 m23 ) ( Vy )
    ( 0  )   (  0   0   1  ) ( 0  )

The elementary affine transformations are:

    Scaling:      ( Sx  0   0 )      Rotation by θ:  ( cos θ  -sin θ  0 )
                  ( 0   Sy  0 )                      ( sin θ   cos θ  0 )
                  ( 0   0   1 )                      (   0       0    1 )

    Shearing:     ( 1   h   0 )      Translation:    ( 1  0  m13 )
                  ( 0   1   0 )                      ( 0  1  m23 )
                  ( 0   0   1 )                      ( 0  0   1  )

Their inverses are again elementary transformations: rotation by −θ, scaling by 1/Sx and 1/Sy, shearing by −h, translation by −m13 and −m23.
As affine transformations are simple matrix multiplications, we can combine several operations into a single matrix. In a matrix product of transformations, the sequence of transformations is read from right to left. We can also take such a combined matrix and decompose it into the four basic operations: M = (translation)(shear)(scaling)(rotation) (this holds for 2D only).
Slide Translation...:
As expected:

    ( Qx )   ( 1  0  0  m14 ) ( Px )
    ( Qy ) = ( 0  1  0  m24 ) ( Py )
    ( Qz )   ( 0  0  1  m34 ) ( Pz )
    ( 1  )   ( 0  0  0   1  ) ( 1  )
Slide Shearing...:
in one direction:

    ( Qx )   ( 1  0  0  0 ) ( Px )
    ( Qy ) = ( f  1  0  0 ) ( Py )
    ( Qz )   ( 0  0  1  0 ) ( Pz )
    ( 1  )   ( 0  0  0  1 ) ( 1  )
To apply the sequence of transformations M1 , M2 , M3 to a point P , calculate Q = M3 M2 M1 P . An additional transformation must be premultiplied. To apply the sequence of transformations M1 , M2 , M3 to a coordinate system, calculate M = M1 M2 M3 . A point P in the transformed coordinate system has the coordinates M P in the original coordinate system. An additional transformation must be postmultiplied.
glMatrixMode(GL_MODELVIEW);
glTranslated(dx, dy, 0);
Rotation in 2D:
glMatrixMode(GL_MODELVIEW);
glRotated(angle, 0.0, 0.0, 1.0);
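A classic application of this composition, sketched here as an illustration: rotating about an arbitrary point (cx, cy) by translating the pivot to the origin, rotating, and translating back. Because OpenGL postmultiplies the current matrix, the calls appear in the reverse order of their effect on a vertex.

void rotateAbout(double angle, double cx, double cy)
{
    glMatrixMode(GL_MODELVIEW);
    glTranslated(cx, cy, 0.0);        /* 3. move the pivot back          */
    glRotated(angle, 0.0, 0.0, 1.0);  /* 2. rotate about the origin      */
    glTranslated(-cx, -cy, 0.0);      /* 1. move the pivot to the origin */
}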
Slide 3D Pipeline:
The 3D pipeline uses three matrix transformations to display objects: the modelview matrix, the projection matrix, and the viewport matrix. The modelview matrix can be seen as a composition of two matrices: a model matrix and a view matrix.
Slide in OpenGL:
Set up the projection matrix and the viewing volume:
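A minimal sketch of this step, assuming a parallel (orthographic) viewing volume; the numeric bounds are placeholders:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-2.0, 2.0, -2.0, 2.0, 0.1, 100.0);  /* left, right, bottom, top, near, far */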
Aiming the camera. Put it at eye, look at look and upwards is up.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(eye_x, eye_y, eye_z,
          look_x, look_y, look_z,
          up_x, up_y, up_z);
glutWireTeapot(GLdouble size);
Slide Lecture 7: Wrap-up of the lab session. How was it again with those coordinates? Representing hierarchical object structures. Perspective.
Everything in the view volume is parallel-projected onto the window and displayed in the viewport; everything else is clipped off. We continue to use the parallel projection, but make use of the z component to display 3D objects.
Aiming the camera. Put it at eye, look at look and upwards is up.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(eye_x, eye_y, eye_z,
          look_x, look_y, look_z,
          up_x, up_y, up_z);
The Teapot
glutWireTeapot(GLdouble size);
Slide Perspective:
Our current parallel projection is quite poor at giving us a realistic view of things. That is because it ignores the z component, which leads to ambiguities.
Slide Perspective:
from http://www.leinroden.de/
Aiming the camera. Put it at eye, look at look and upwards is up. (no change here)
Slide Perspective:
The point perspective in OpenGL resolves some ambiguities, but it cannot resolve all of them.
Slide Perspective:
from http://www.worldofescher.com
my = Σ (i = 0 .. N−1) ...,   mz = Σ (i = 0 .. N−1) ...
Slide Lecture 9: Shading: toy physics and shading models, diffuse reflection, specular reflection, and everything in OpenGL.
Slide Shading:
Displaying wireframe models is easy from a computational viewpoint, but it creates lots of ambiguities that even perspective projection cannot remove. If we model objects as solids, we would like them to look normal. One way to produce such a normal view is to simulate the physical processes that influence their appearance (ray tracing). This is computationally very expensive.
We need a cheaper way that gives us some realism but is easy to compute. This is shading.
    Id = Is · ρd · (s · m) / (|s| |m|)
Here Is is the intensity of the light source and ρd is the diffuse reflection coefficient. We do not want negative intensities, so we set negative values of the cosine term to zero.
Again, Is is the intensity of the light source and ρsp is the specular reflection coefficient (the Phong specular term is Isp = Is · ρsp · (r · v / (|r| |v|))^f, where r is the mirror reflection direction and v points to the eye). The exponent f is determined experimentally and lies between 1 and 200. Finding r is computationally expensive.
The falloff of the cosine term is now a different one (when the halfway vector h = s + v is used instead of r), but this can be compensated by choosing a different f. Of course all these models are not very realistic, but they are easy to compute.
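A small sketch of the diffuse and specular terms for a single light, reusing the Vector3/dot/length helpers from the vector section; the function name and the use of the halfway vector are illustration choices, not the lecture's own code:

#include <math.h>

float shadeIntensity(Vector3 s, Vector3 m, Vector3 v,
                     float Is, float rho_d, float rho_sp, float f)
{
    /* diffuse: Id = Is * rho_d * max(0, s.m / (|s||m|)) */
    float lambert = dot(s, m) / (length(s) * length(m));
    if (lambert < 0.0f) lambert = 0.0f;
    float Id = Is * rho_d * lambert;

    /* specular with the halfway vector h = s + v (avoids computing r) */
    Vector3 h = { s.x + v.x, s.y + v.y, s.z + v.z };
    float phong = dot(h, m) / (length(h) * length(m));
    if (phong < 0.0f) phong = 0.0f;
    float Isp = Is * rho_sp * (float)pow(phong, f);

    return Id + Isp;
}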
Slide In OpenGL:
Creating a light source:
OpenGL handles up to eight light sources, GL_LIGHT0 to GL_LIGHT7. Giving a vector instead of a position creates a light source at infinite distance. This type of light source is called directional instead of positional.
Colors are specified in the RGBA model. A stands for alpha. For the moment, we set alpha to 1.0.
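A sketch of such a setup with GL_LIGHT0; the position and color values are placeholders:

GLfloat lightPos[]     = { 2.0f, 4.0f, 3.0f, 1.0f };  /* w = 1: positional light */
GLfloat lightDiffuse[] = { 1.0f, 1.0f, 1.0f, 1.0f };  /* RGBA, alpha = 1.0       */

glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
glLightfv(GL_LIGHT0, GL_DIFFUSE,  lightDiffuse);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);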
Ambient Light:
Slide Lecture 10: Smooth Objects, Representation, Generic Shapes, Flat vs. Smooth Shading, Perspective and (pseudo) Depth
Slide Mesh approximations: Smooth objects can be approximated with fine meshes. For shading, we want to preserve the information that these objects are actually smooth, so that we can shade them as if they were round. The basic approach: use a parametric representation of the object and polygonalize it (also called tessellation).
F is also called the inside-outside function: F < 0 inside, F = 0 on the surface, F > 0 outside.
n(u0, v0) is the normal vector at the surface point p(u0, v0):

    n(u0, v0) = ∂p/∂u × ∂p/∂v, evaluated at u = u0, v = v0
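As an illustration only (not the lecture's method), this normal can be approximated numerically by central differences of p(u, v), reusing the Vector3 helpers from the vector section; the Surface function-pointer type and the step size are assumptions:

typedef Vector3 (*Surface)(float u, float v);

Vector3 cross(Vector3 a, Vector3 b)
{
    Vector3 c = { a.y*b.z - a.z*b.y,
                  a.z*b.x - a.x*b.z,
                  a.x*b.y - a.y*b.x };
    return c;
}

Vector3 surfaceNormal(Surface p, float u0, float v0)
{
    const float eps = 1e-3f;
    Vector3 pu1 = p(u0 + eps, v0), pu0 = p(u0 - eps, v0);
    Vector3 pv1 = p(u0, v0 + eps), pv0 = p(u0, v0 - eps);
    Vector3 du = { pu1.x - pu0.x, pu1.y - pu0.y, pu1.z - pu0.z };  /* ~ dp/du */
    Vector3 dv = { pv1.x - pv0.x, pv1.y - pv0.y, pv1.z - pv0.z };  /* ~ dp/dv */
    return cross(du, dv);   /* not normalized; apply normalize() if needed */
}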
Slide Shading:
Flat shading: compute the color for each face and fill the entire face with that color.
Smooth (Gouraud) shading: for each scanline at ys, compute color_left by linear interpolation between the colors at the top and bottom of the left edge. Compute color_right the same way. Then fill the scanline by linear interpolation between color_left and color_right. In OpenGL: glShadeModel(GL_SMOOTH);
Only draw the pixel if its pseudodepth is lower, and update the pseudodepth if the pixel is drawn. Again, compute the correct pseudodepth for the endpoints of the scanline and use interpolation in between.
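A sketch of this depth test for a single pixel, with an assumed screen size and a hypothetical setPixel routine (not a real OpenGL call):

void setPixel(int x, int y, unsigned int color);   /* hypothetical framebuffer write */

#define WIDTH  640
#define HEIGHT 480
static float depthBuffer[WIDTH * HEIGHT];          /* initialized to the far value   */

void drawPixelIfCloser(int x, int y, float pseudodepth, unsigned int color)
{
    if (pseudodepth < depthBuffer[y * WIDTH + x]) {
        depthBuffer[y * WIDTH + x] = pseudodepth;  /* update the stored depth */
        setPixel(x, y, color);                     /* draw the pixel          */
    }
}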
Slide Lecture 11: Smooth objects demo Flat vs. Smooth Shading demo Perspective and (pseudo) Depth
Only draw the pixel if its pseudodepth is lower, and update the pseudodepth if the pixel is drawn. Again, compute the correct pseudodepth for the endpoints of the scanline and use interpolation in between.
The parallel projection is the simplest one: it removes the z component. A better (perspective) projection is the following:

    (x*, y*) = (N · Px / (−Pz), N · Py / (−Pz))

where N is the distance from the eye to the near plane. But it is more convenient to map the pseudodepth to a fixed interval, i.e. [−1, 1], and it is convenient to use the same denominator −Pz.
This projection matrix computes the pseudodepth and the perspective projection at the same time:

    P = ( N  0   0  0 )
        ( 0  N   0  0 )
        ( 0  0   a  b )
        ( 0  0  -1  0 )
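In OpenGL such a perspective matrix is usually set up on the projection stack; a minimal sketch with placeholder values for the view volume:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0,           /* vertical field of view in degrees */
               640.0 / 480.0,  /* aspect ratio of the viewport      */
               0.1,            /* N: distance to the near plane     */
               100.0);         /* F: distance to the far plane      */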
Slide Pixmaps:
From Lecture 2: A pixel is a point sample, and a pixmap (or pixel map or bitmap) is created by sampling an original at discrete points. In order to restore an image from pixels, we have to apply a reconstruction filter. Reconstruction filters are e.g. box, linear, cubic, Gaussian... OpenGL is another method to create these point samples: for every pixel in the viewport window, OpenGL determines its color value.
Slide Pixmaps:
Internally, OpenGL stores these pixmaps in buffers. The call to glutInitDisplayMode() allocates the basic draw buffer(s).
Slide Colors:
Visible light is a continuum, so there is no natural way to represent color. Inspired by human perception: three spectral components: red, green, blue. Binary representation of the component values, different standards. Example: 16-bit RGB (565): one short, 5 bits for red and blue, 6 bits for green.
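A sketch of packing three 8-bit components into one 16-bit RGB(565) value:

unsigned short packRGB565(unsigned char r, unsigned char g, unsigned char b)
{
    return (unsigned short)(((r >> 3) << 11) |   /* top 5 bits of red   */
                            ((g >> 2) << 5)  |   /* top 6 bits of green */
                             (b >> 3));          /* top 5 bits of blue  */
}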
Slide Colors:
Y/Cr/Cb, based on the CIE chromaticity diagram, is used for TV applications: compatible with old B/W TV standards. Y: greyscale component, Cr: red-green component, Cb: blue-green component. It offers the possibility to reduce the bandwidth of the color signal.
Slide Colors:
HSI model: hue: color (i.e. dominant wavelength), saturation: ratio between white and color, intensity: ratio between black and color. Good for computer vision applications.
Slide Colors:
CMY(K) model: subtractive color model: white light is filtered, spectral components are removed. C: cyan (removes red), Y: yellow (removes blue), M: magenta (removes green), K: key (i.e. black) removes everything. Often used in print production.
Slide Colors:
Conversion between different color models (and output devices) often leads to different colors. In order to get the right color, the devices have to be color-corrected. This is the task of a color management system.
Display devices react nonlinearly: an intensity value of 128 is less than half as bright as 255.
α is a measure of opacity, (1 − α) is transparency. α = 1: the pixel is fully opaque. α = 0: the pixel is fully transparent. 0 < α < 1: the pixel is semi-transparent.
The A stands for α, i.e. alpha, and indicates the transparent regions of a pixmap.
Slide Compositing:
The alpha values of a pixmap are called the alpha matte of the pixmap
Given two pixels F (foreground) and B (background) and the α of the foreground pixel:

    Bnew = (1 − α) · Bold + α · F
The process of merging two images with alpha mattes is called compositing or alpha blending.
But storing the pixels already premultiplied with their opacity removes the effect. This is called associated color or opacity-weighted color.
and computing the new alpha: αnew = (1 − α) · αold + α, where αold is the α of the background pixel.
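A sketch of these two blending equations for a single color channel; the names are illustrative:

float compositeColor(float F, float alphaF, float B)
{
    return alphaF * F + (1.0f - alphaF) * B;       /* Bnew = aF + (1 - a)Bold */
}

float compositeAlpha(float alphaF, float alphaB)
{
    return alphaF + (1.0f - alphaF) * alphaB;      /* anew = a + (1 - a)aold  */
}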
Slide Textures:
Textures are pixmaps that are applied to faces.
They can affect all the different surface coefficients of the object, i.e. intensity or reflection coefficients. Texture pixmaps can either be stored beforehand or created by the program (procedural textures).
Slide Textures:
OpenGL needs to know which part of the texture belongs to which part of the face. Therefore, the vertices of the object are specified both in 3D world space and in texture coordinates. When rendering, OpenGL uses interpolated texture coordinates to find the right part of the texture.
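A sketch of specifying texture coordinates together with vertices (assuming a texture is already created and bound; the coordinates are placeholders):

glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
glEnd();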
www.debevec.org