CG Module 6 - Visible Surface Detection Algorithm & Animation

Q1) Classification of visible surface detection algorithm

For a realistic graphics display, we have to identify those parts of a scene that
are visible from a chosen viewing position. The algorithms used for this are
referred to as visible-surface detection methods or hidden-surface elimination
methods.

Types of Visible Surface Detection Methods:


o Object-space methods and
o Image-space methods
Visible-surface detection algorithms are broadly classified according to
whether they deal with object definitions directly or with their projected
images. These two approaches are called object-space methods and image-
space methods, respectively.
An object-space method compares objects and parts of objects to each other
within the scene definition to determine which surfaces, as a whole, we should
label as visible. In an image-space algorithm, visibility is decided point by
point at each pixel position on the projection plane. Most visible-surface
algorithms use image-space methods, although object space methods can be
used effectively to locate visible surfaces in some cases. Line display
algorithms, on the other hand, generally use object-space methods to identify
visible lines in wire frame displays, but many image-space visible-surface
algorithms can be adapted
easily to visible-line detection.
Visible Surface Detection Methods:
We will look at four methods for detecting visible surfaces. They are:
1. Back Face Detection Method
2. Depth Buffer Method
3. Scan line Method
4. Depth Sorting Method

Q2) Explain Back-Face Detection Method


When we project 3-D objects onto a 2-D screen, we need to detect the faces
that are hidden in the 2-D view.
Back-Face detection, also known as Plane Equation method, is an
object space method in which objects and parts of objects are compared
to find out the visible surfaces. Consider a surface whose visibility needs to
be decided. The idea is to check whether it faces away from the viewer; if it
does, it is discarded for the current frame. Each surface has a normal
vector. If this normal vector is pointing in the direction of the center of
projection, then it is a front face and can be seen by the viewer. If this
normal vector is pointing away from the center of projection, then it is a
back face and can not be seen by the viewer.

Algorithm for left-handed system :

1) Compute N for every face of object.


2) If C (the Z component of N) > 0
   then it is a back face: don't draw it
   else
   it is a front face: draw it

The back-face detection method is very simple. For the left-handed system, if
the Z component of the normal vector is positive, then it is a back face; if
the Z component is negative, then it is a front face.
Algorithm for right-handed system :

1) Compute N for every face of object.


2) If C (the Z component of N) < 0
   then it is a back face: don't draw it
   else
   it is a front face: draw it

Thus, for the right-handed system, if the Z component of the normal vector is
negative, then it is a back face; if the Z component is positive, then it is a
front face.

Back-face detection can identify all the hidden surfaces in a scene that
contains only non-overlapping convex polyhedra.

Recalling the polygon surface equation, a point (x, y, z) is behind the
surface when:

Ax + By + Cz + D < 0
While determining whether a surface is back-face or front face, also consider
the viewing direction. The normal of the surface is given by :

N = (A, B, C)
A polygon is a back face if Vview.N > 0. But it should be kept in mind that after
application of the viewing transformation, viewer is looking down the negative
Z-axis. Therefore, a polygon is back face if :

(0, 0, -1).N > 0


or if C < 0
The viewer is also unable to see a surface with C = 0; therefore, we identify a
polygon surface as a back face if C <= 0.

Considering (a),

V.N = |V||N|Cos(angle)
if 0 <= angle < 90, then
cos(angle) > 0 and V.N > 0
Hence, Back-face.

Considering (b),

V.N = |V||N|Cos(angle)
if 90 < angle <= 180, then
cos(angle) < 0 and V.N < 0
Hence, Front-face.

Limitations :
1) This method works fine for convex polyhedra, but not necessarily for
concave polyhedra.
2) This method can only be used on solid objects modeled as a polygon mesh.
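The test above can be sketched in Python. The function names and the counter-clockwise vertex convention are illustrative assumptions, not part of the notes; the normal (A, B, C) is computed as a cross product of two edge vectors, and the right-handed rule C <= 0 identifies a back face.

```python
# Hypothetical sketch of back-face detection (right-handed system).
# Vertices are assumed to be listed counter-clockwise when the face
# is viewed from the front.

def surface_normal(p0, p1, p2):
    """Normal (A, B, C) of a triangle from three vertices."""
    ux, uy, uz = (p1[i] - p0[i] for i in range(3))
    vx, vy, vz = (p2[i] - p0[i] for i in range(3))
    # Cross product u x v gives the plane coefficients (A, B, C).
    return (uy * vz - uz * vy,
            uz * vx - ux * vz,
            ux * vy - uy * vx)

def is_back_face(normal):
    """Viewer looks down the negative Z-axis, so C < 0 means the face
    points away; C == 0 (edge-on) is also treated as not visible."""
    return normal[2] <= 0
```

A triangle in the z = 0 plane with counter-clockwise vertices has normal (0, 0, 1) and is a front face; reversing the vertex order flips the normal and marks it as a back face.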

Q3) Explain Area Subdivision Algorithm

It was invented by John Warnock and is also called the Warnock algorithm. It is
based on a divide-and-conquer strategy and exploits area coherence. It is used
to resolve visibility by classifying polygons relative to a window as trivial
or non-trivial cases.

Trivial cases are handled directly. Non-trivial cases are divided into four
equal subwindows, which are recursively subdivided until every polygon can be
classified as a trivial case.
Classification Scheme
It classifies polygons into four categories:

1. Inside surface
2. Outside surface
3. Overlapping surface
4. Surrounding surface

1. Inside surface: A surface which is completely inside the surrounding
window or specified boundary, as shown in fig (c).

2. Outside surface: A polygon surface completely outside the surrounding
window, as shown in fig (a).

3. Surrounding surface: A polygon surface which completely encloses the
window, as shown in fig (b).

4. Overlapping surface: A surface partially inside and partially outside the
window area, as shown in fig (c).
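The recursive subdivision described above can be sketched as follows. This is a minimal illustration under stated assumptions: the caller supplies the polygon classification and rendering routines, the trivial-case test is simplified, and all names are hypothetical.

```python
# Illustrative sketch of Warnock-style area subdivision.
# classify(polygon, window) is assumed to return one of
# "inside", "outside", "overlapping", "surrounding".

MIN_SIZE = 1  # stop subdividing at pixel-sized windows

def warnock(window, polygons, classify, render):
    x, y, w, h = window
    # Discard polygons completely outside this window.
    relevant = [p for p in polygons if classify(p, window) != "outside"]

    # Trivial cases: nothing to draw, a single polygon, or a window
    # already reduced to pixel size (resolve by nearest depth there).
    if not relevant or len(relevant) == 1 or w <= MIN_SIZE:
        render(window, relevant)
        return

    # Non-trivial case: split into four equal subwindows and recurse.
    hw, hh = w / 2, h / 2
    for sx, sy in ((x, y), (x + hw, y), (x, y + hh), (x + hw, y + hh)):
        warnock((sx, sy, hw, hh), polygons, classify, render)
```

A 4x4 window containing two polygons that are never trivially resolvable is subdivided twice, ending in sixteen pixel-sized windows.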
Q4) Z-Buffer or Depth-Buffer method
When viewing a picture containing non-transparent objects and surfaces, it is
not possible to see objects that lie behind other objects closer to the eye.
To get a realistic screen image, these hidden surfaces must be removed. Their
identification and removal is called the hidden-surface problem.

The Z-buffer, also known as the depth-buffer method, is one of the most
commonly used methods for hidden-surface detection. It is an image-space
method: visibility is decided per pixel drawn on the 2D projection. The
running time is proportional to the number of pixels times the number of
objects, and the space requirement is two arrays of pixel size: one for the
frame buffer and one for the depth buffer.

The Z-buffer method compares surface depths at each pixel position on


the projection plane. Normally z-axis is represented as the depth. The
algorithm for the Z-buffer method is given below :

Algorithm :

First of all, initialize the depth of each pixel:
    d(i, j) = infinity (maximum depth)
Initialize the color value for each pixel:
    c(i, j) = background color
Then, for each polygon, do the following steps:

for (each pixel in polygon's projection)
{
    find the depth z of the polygon
    at (x, y) corresponding to pixel (i, j)

    if (z < d(i, j))
    {
        d(i, j) = z;
        c(i, j) = color;
    }
}
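The pseudocode above can be turned into a short runnable sketch. The grid size and the flat, axis-aligned rectangles standing in for projected polygons are simplifying assumptions made only for illustration.

```python
# Minimal z-buffer sketch over a small pixel grid.
import math

WIDTH, HEIGHT = 4, 4
BACKGROUND = 0

depth = [[math.inf] * WIDTH for _ in range(HEIGHT)]    # d(i, j)
frame = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]  # c(i, j)

def draw(x0, y0, x1, y1, z, color):
    """Rasterize a constant-depth rectangle: a pixel is overwritten
    only if this surface is closer (smaller z) than the stored depth."""
    for j in range(y0, y1):
        for i in range(x0, x1):
            if z < depth[j][i]:
                depth[j][i] = z
                frame[j][i] = color

draw(0, 0, 3, 3, z=3, color=1)  # farther surface
draw(1, 1, 4, 4, z=2, color=2)  # nearer surface wins where they overlap
```

Where the two rectangles overlap, the second call (smaller z) replaces the first; pixels covered by neither keep the background color.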
Let’s consider an example to understand the algorithm in a better way.
Assume the polygon given is as below :

To start, assume that the depth of each pixel is infinite.

Since the z value, i.e. the depth value, at every position in the given
polygon is 3, applying the algorithm writes 3 into the buffer at those pixels.

Now, let's change the z values so that they go from 0 to 3. Again, the depth
of each pixel starts at infinite, and the z values recorded at each pixel will
now differ across the polygon.
Therefore, in the Z buffer method, each surface is processed separately
one position at a time across the surface. After that the depth values i.e,
the z values for a pixel are compared and the closest i.e, (smallest z)
surface determines the color to be displayed in frame buffer. The z
values, i.e, the depth values are usually normalized to the range [0, 1].
When z = 0, it is known as the back clipping plane, and when z = 1, it is
called the front clipping plane.

In this method, 2 buffers are used :

• Frame buffer
• Depth buffer

Calculation of depth :
As we know that the equation of the plane is :

ax + by + cz + d = 0, this implies

z = -(ax + by + d)/c, c!=0


Calculation of each depth could be very expensive, but the computation
can be reduced to a single add per pixel by using an increment method
as shown in figure below :

Let’s denote the depth at point A as Z and at point B as Z’. Therefore :

AX + BY + CZ + D = 0 implies
Z = (-AX - BY - D)/C ------------(1)

Similarly, Z' = (-A(X + 1) - BY -D)/C ----------(2)

Hence from (1) and (2), we conclude :

Z' = Z - A/C ------------(3)


Hence, calculation of depth can be done by recording the plane
equation of each polygon in the (normalized) viewing coordinate system
and then using the incremental method to find the depth Z.
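The incremental update in equation (3) can be checked with a short sketch. The plane coefficients here are arbitrary example values, not from the notes.

```python
# Incremental depth along a scanline: moving one pixel in x changes z
# by the constant -A/C, so one subtraction replaces a full evaluation.
A, B, C, D = 1.0, 2.0, 4.0, -8.0  # example plane, C != 0

def depth_at(x, y):
    # Direct evaluation: z = (-A*x - B*y - D) / C
    return (-A * x - B * y - D) / C

y = 1.0
z = depth_at(0.0, y)          # evaluate once at the start of the scanline
incremental = []
for x in range(5):
    incremental.append(z)
    z -= A / C                # Z' = Z - A/C, equation (3)

direct = [depth_at(float(x), y) for x in range(5)]
```

Both lists agree (up to floating-point rounding), confirming that one subtraction per pixel reproduces the plane equation along the scanline.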
So, to summarize, it can be said that this approach compares surface
depths at each pixel position on the projection plane. Object depth is
usually measured from the view plane along the z-axis of a viewing
system.
Example :

Let S1, S2, S3 be the surfaces. The surface closest to the projection plane is
the visible surface. The computer would start (arbitrarily) with surface S1
and put its depth values into the buffer. It would do the same for the next
surface, then check each overlapping pixel to see which surface is closer to
the viewer and display the appropriate color. Since at view-plane position
(x, y) surface S1 has the smallest depth from the view plane, it is visible at
that position.
Points to remember :
1) Z buffer method does not require pre-sorting of polygons.
2) This method can be executed quickly even with many polygons.
3) This can be implemented in hardware to overcome the speed
problem.
4) No object to object comparison is required.
5) This method can be applied to non-polygonal objects.
6) Hardware implementations of this algorithm are available in some
graphics workstations.
7) The method is simple to use and does not require additional data
structure.
8) The z-value of a polygon can be calculated incrementally.
9) It cannot be applied to transparent surfaces, i.e. it only deals with
opaque surfaces.
10) If only a few objects in the scene are to be rendered, this method is
less attractive because of the additional buffer and the overhead of updating
it.
11) Time may be wasted because hidden objects are still drawn.
Q5) Explain Traditional Animation Techniques

Traditional animation is a technique where each frame of an animation is drawn
by hand. This process is very time-consuming and labor-intensive, but it can
create beautiful and expressive animation.

• Storyboarding: The first step is to create a storyboard, a series of
sketches that map out the plot of the animation.
• Character design: The animators then design the characters, giving them a
unique look and personality.
• Keyframing: The key animator draws the key frames of the animation, which
are the most important poses in a scene.
• In-betweening: In-betweeners then draw the frames in between the key
frames, creating a smooth flow of movement.
• Cels and painting: The drawings are transferred to transparent sheets of
celluloid (cels) and painted.
• Backgrounds: Background artists paint the backgrounds for the animation.
• Filming: Finally, the cels are photographed one frame at a time, creating
the illusion of movement.

Traditional animation is a classic art form that has been used to create some
of the most beloved animated films of all time. Although computer
animation has become more popular in recent years, traditional animation
is still used today in some films and television shows.
Q7) Describe Various Principles Of Traditional Animation
Animation is defined as a series of images changing rapidly to create an
illusion of movement: each image is replaced by a new one that is shifted
slightly. The animation industry is a huge market nowadays. To make an
effective animation, there are some principles to be followed.

Principle of Animation:

There are 12 major principles for an effective and easy-to-communicate
animation.

1) Squash and Stretch:
This principle works on the physical properties that are expected to change
during an action. Ensuring proper squash and stretch makes an animation more
convincing. For example, when we drop a ball from a height, its shape changes:
when the ball touches the surface, it squashes slightly, which should be
depicted properly in the animation.

2) Anticipation:
Anticipation works on action. Animation is broadly divided into 3 phases:

1. Preparation phase
2. Movement phase
3. Finish

In anticipation, we prepare the audience for the action. It helps make the
animation look more realistic. For example, before hitting the ball with the
bat, the actions of the batsman come under anticipation: these are the actions
in which the batsman prepares to hit the ball.

3) Arcs:
In reality, humans and animals move in arcs, so introducing arcs increases
realism. This principle also helps us implement realism through projectile
motion. For example, the movement of a bowler's hand while bowling follows an
arc.

4) Slow in - Slow out:
While animating, one should keep in mind that in reality objects take time to
accelerate and to slow down. To make an animation look realistic, we should
focus on its slow-in and slow-out proportions. For example, a vehicle takes
time to accelerate when it starts, and similarly takes time to stop.

5) Appeal:
Animation should be appealing to the audience and easy to understand. The
syntax or font style used should be easily understood and appealing. Lack of
symmetry and overly complicated character designs should be avoided.
6) Timing:
The velocity with which an object moves affects the animation a lot, so speed
should be handled with care. For example, a fast-moving object can show an
energetic person while a slow-moving object can symbolize a lethargic person.
A slowly moving action uses more closely spaced frames than a fast-moving
one.

7) 3D Effect:
Giving 3D effects makes an animation more convincing and effective. The object
is drawn in a 3-dimensional (X-Y-Z) space, which improves its realism. For
example, a square gives a 2D effect, but a cube gives a 3D effect that appears
more realistic.

8) Exaggeration:
Exaggeration deals with physical features and emotions. In animation, we
represent emotions and feelings in exaggerated form to make them more vivid.
If there is more than one element in a scene, it is necessary to balance the
exaggerated elements to avoid conflicts.
9) Staging:
Staging is the presentation of the primary idea, mood, or action. It should
always be presented in a clear and easy-to-understand manner. The purpose of
this principle is to avoid unnecessary detail and focus on the important
features only; the primary idea should always be clear and unambiguous.

10) Secondary Action:
Secondary actions support the primary or main action and enrich the animation
as a whole. For example, when a person drinks hot tea, his facial expressions,
the movement of his hands, etc. come under secondary actions.

11) Follow Through:
It refers to parts that continue to move even after the main action is
completed. This helps generate more realistic animation. For example, even
after throwing a ball, the movement of the hands continues.

12) Overlap:
It deals with the way a second action starts before the first action ends.
For example, consider drinking tea with the right hand while holding a
sandwich in the left. While drinking the tea, the left hand starts moving
toward the mouth, which shows the second action interfering before the end of
the first.
