A New Narrow Phase Collision Detection Algorithm Using Height Projection
ABSTRACT

This paper presents a rapid and novel method for many-body dynamic collision detection at interactive rates. Most of the work in this algorithm is prepared before system initiation, saving as much run time as possible. The work is divided into two phases. The first is the broad phase, where we check for bounding volume (BV) collisions between objects; it is implemented in parallel using the graphics processing unit (GPU). The second is the narrow phase, where we use computer graphics techniques and the GPU to reduce the time needed for data reconstruction, which our method performs before system initiation. The three important advantages of this method are its high accuracy, its generality for all shapes, and the ease with which any algorithm can use it to enhance its performance.

1. INTRODUCTION

Most interactive virtual reality, augmented reality, animation, computer-aided design, robotics, and physics-based simulation systems that contain dynamic bodies must estimate the collisions between those bodies to make the systems realistic. Given complex, highly detailed objects and the flexibility these systems require, many collision detection algorithms have tried to reduce the time of collision calculation and satisfy these requirements in an efficient manner. Our technique is implemented to enhance these methods by fully exploiting the parallel processing capability of the GPU and by speeding up the triangle-to-triangle intersection tests, testing only the possibly collided polygons as shown in figure 1. Only the polygons on the front of the capsules, which form the face of collision, are tested, as in figure 1.b.

a                                  b
Figure 1 - possible intersected polygons in two objects

This is done by performing the preparation separately before system initiation, and testing the intersection of these polygons in parallel after the system is initiated.

The rest of the paper is organized as follows: section 2 provides an overview of collision detection techniques, section 3 demonstrates the parallel implementation of the broad phase, section 4 demonstrates the narrow phase algorithm, the result and analysis are given in section 5, and finally the paper is concluded in section 6.

2 COLLISION DETECTION PHASES

In collision detection, objects pass through multiple filters. The goal of these filters is to determine which pairs of potentially moving objects are of interest, and to reject all pairs of objects that do not intersect. In the case of rigid bodies, it helps to consider these filters in two phases, the broad phase and the narrow phase [1], so collision detection and response can be represented as a pipeline [2]. The task of the broad phase is to reject the objects unlikely to collide before performing the heavy calculations; this is done by fitting each object into a simple bounding sphere or box and performing a simple test, so that collision calculation time is spent only when the spheres or boxes intersect. Hashing schemes can be employed in some cases to reduce the number of pairs of spheres or boxes that have to be tested for overlap [3]. The basic idea of detecting collisions is checking the geometry of the target objects for interpenetration through a static interference test [4]. The bounding volume hierarchy (BVH) is a method which builds a tree structure over a set of elements. Each element is stored in a leaf of the tree and contains the data of the BV which encloses the object. The BVH method simplifies preliminary tests and reduces the number of expensive intersection tests between pairs of objects or primitives; in essence, BV overlap tests determine the separating axes [5]. BVH methods perform better on rigid objects, because their shape and dimensions do not require updating. A spatial subdivision method is similar to the bounding volume method except that it subdivides the space, not the objects [4]. It subdivides the space into equal grid cells, each cell at least as large as the largest object's BV. Each cell contains a list of every object that falls within that cell. The collision test between two objects is performed if they exist in the same cell or in two adjacent cells. Cameron [6] discusses a space-time method that extrudes objects into space-time as swept volumes and then performs intersection tests in four dimensions (4D) to predict the time and place of collision. In sort and sweep, the bounding volume of each object is projected onto the x, y, or z axis, defining a one-dimensional collision interval. If the collision intervals do not overlap, the two objects cannot possibly collide.
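The sort and sweep test just described can be sketched in a few lines; this is a minimal illustration written by us (the paper gives no code), and all names are our own:

```python
# Sort and sweep along one axis: each bounding volume is reduced to an
# interval (lo, hi) on that axis; after sorting by lo, a sweep reports
# only the pairs whose intervals overlap as collision candidates.
def sort_and_sweep(intervals):
    """intervals: list of (lo, hi) pairs; returns candidate index pairs."""
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    candidates = []
    active = []  # indices whose interval may still overlap upcoming ones
    for i in order:
        lo, hi = intervals[i]
        # drop intervals that ended before this one begins
        active = [j for j in active if intervals[j][1] >= lo]
        for j in active:
            candidates.append((j, i))
        active.append(i)
    return candidates
```

Any pair rejected here cannot collide, so only the surviving candidates proceed to further testing.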
Authorized licensed use limited to: SOUTHWEST JIAOTONG UNIVERSITY. Downloaded on January 18,2024 at 13:18:01 UTC from IEEE Xplore. Restrictions apply.
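The bounding-sphere overlap check that section 2 names as the simplest broad-phase filter amounts to a single squared-distance comparison; a minimal sketch, with an illustrative (cx, cy, cz, r) layout of our own choosing:

```python
# Broad-phase bounding-sphere test: two objects can only collide if the
# distance between their sphere centers is at most the sum of the radii.
# Comparing squared distances avoids the square root.
def spheres_overlap(a, b):
    ax, ay, az, ar = a
    bx, by, bz, br = b
    d2 = (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
    return d2 <= (ar + br) ** 2
```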
In the narrow phase, the calculation is more accurate and consumes more time, so there are many techniques to save this time. A time-critical detection algorithm checks for collisions between successively tighter approximations to the objects' real surfaces. After each step of this progressive refinement, the application can stop the algorithm if it exceeds its time budget [7]. Each collided pair taken from the broad phase enters the narrow phase for checking the collision between the BV partners. Because of the complexity of this phase, it is preferable to use convex shapes in the context of real-time physics.

[Figure 2: system flowchart - velocity integration in the main loop, broad phase, then narrow phase collision detection.]

As shown in figure 2, we first determine the current position of each object by integrating its normal and collision response velocities, if any; then we run the broad phase of the system using spherical BVs, pass each pair of collided BVs from this phase into the narrow phase, and perform the triangle-to-triangle intersection tests. If an intersection occurs, the collision response velocity is computed. Finally we return to the first step.

Spatial subdivision is the best method for parallel processing because every partition of the space can be handled as a unit and processed separately by the GPU. Due to parallelism, a problem may arise in the spatial subdivision: a single object that overlaps multiple cells can be involved in more than one collision test at the same time, so some mechanism must exist to solve this problem. Therefore each cell is designed to be at least as large as the bounding volume of the largest object, and the BV of each object in the space is tested sequentially against the objects of its own and neighboring cells. This operation is applied to all objects in parallel, taking into account that the test between each pair of object BVs is done only once. The indices of all objects that will enter further tests in the narrow phase are extracted from this phase, structured in collided pairs.

Stage 1; offline processing algorithm:
1-For each triangle in the object draw its xy-plane projection in an image according to its x and y vertices.
2-From the height value of each triangle head point, interpolate the height value at each pixel on its surface, and replace each pixel's red (R), green (G), and blue (B) data by its interpolated height value.
3-Collect all images in a final one.
4-In the final image, if two triangles intersect in an area, draw the pixels with the highest value for each triangle.
5-Check the final faces appearing in the image:
 -If it is the first projection angle, or the current faces are different from the previous projection faces, construct a counter initialized to 0 and store it in a file followed by the current face indices.
 -Else increase the previous projection counter by 1.
6-Rotate the object by one degree about the x and/or y axis to get all face projections of the object.
7-Go to step 1 until reaching the last angle combination.

In the offline stage, the idea depends on the fact that each of the object's triangles has a normal vector direction, and an image is generated for each triangle which contains its x, y, and z data.
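Steps 1-4 of the offline stage behave like a software depth buffer that keeps the highest height per pixel. The following is a minimal sketch of that idea, not the paper's implementation: the grid size and data layout are our assumptions, and barycentric interpolation stands in for the paper's scan-line interpolation.

```python
# Project triangles onto the xy plane and rasterize them into a height
# image, keeping the highest interpolated z per pixel together with the
# face index, so the finally visible faces can be read back (step 5).
def project_heights(triangles, width, height):
    zbuf = [[None] * width for _ in range(height)]  # height per pixel
    fbuf = [[None] * width for _ in range(height)]  # face index per pixel
    for fi, tri in enumerate(triangles):
        (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = tri
        det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
        if det == 0:
            continue  # degenerate projection
        xmin = max(0, int(min(x1, x2, x3)))
        xmax = min(width - 1, int(max(x1, x2, x3)))
        ymin = max(0, int(min(y1, y2, y3)))
        ymax = min(height - 1, int(max(y1, y2, y3)))
        for py in range(ymin, ymax + 1):
            for px in range(xmin, xmax + 1):
                a = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
                b = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
                c = 1.0 - a - b
                if a < 0 or b < 0 or c < 0:
                    continue  # pixel outside the triangle
                z = a * z1 + b * z2 + c * z3  # interpolated height (step 2)
                if zbuf[py][px] is None or z > zbuf[py][px]:
                    zbuf[py][px] = z   # keep the highest value (step 4)
                    fbuf[py][px] = fi  # face index, as in the alpha channel
    visible = sorted({f for row in fbuf for f in row if f is not None})
    return zbuf, fbuf, visible
```

A higher triangle overwrites a lower one only where they overlap, so a partially covered face still appears in the visible set, matching the behavior described in step 4.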
All triangle images are then collected, as we will explain below, into a final image according to the triangles' x, y, and z appearance, to obtain the finally appearing triangles. To reduce the storage size, a subset of samples (face indices) that finally appear in the image is taken and saved in a file at a specific location on the hard disk (HD). These steps are repeated after rotating the object one degree about the x and/or y axis, and the output data are grouped in a file, where each group represents all the face indices of the object (the projection data) that will appear in front of the collision face for a specified projection angle, as illustrated in figure 1. For further optimization of the storage size, if the current face group has the same faces as the previous projection group it is not written; instead, at the beginning of each group the number of occurrences of that group's faces is written.

The image Cartesian coordinates are used to store the x and y positions of the projection, while the z values are stored in the pixel's R, G, and B, as shown in figure 4. The height data are calculated with the lowest point in the object as a reference. In the pixel's alpha value, the face index of the triangle that contains the pixel is stored, to be used in the data subsampling. If the number of triangles is higher than 256, the face indices are saved in another array keyed by pixel location. In step 2 of the offline stage, the height values at the triangle surface points are interpolated from the height dimension of its 3 head-point vertices; this data is used in step 4.

[Figure 3 - interpolation method.]

As shown in figure 3, the polygon edge with end-point vertices at points 1 and 2 is intersected by the scan line at point 4. A fast method for obtaining the height at point 4 is to interpolate between the values at vertices 1 and 2 using only the horizontal displacement of the scan line [12]. We first find Z4 from the following equation:

Z4 = ((x4 - x2)/(x1 - x2)) Z1 + ((x1 - x4)/(x1 - x2)) Z2    (1)

where Z4 represents the height at point 4. Similarly Z5 can be calculated as:

Z5 = ((x5 - x3)/(x2 - x3)) Z2 + ((x2 - x5)/(x2 - x3)) Z3    (2)

From these two boundary heights, a linear interpolation is used to obtain heights for positions across the scan line. The height for point p in figure 3 is calculated from the heights at points 4 and 5 as:

Zp = ((y4 - yp)/(y4 - y5)) Z5 + ((yp - y5)/(y4 - y5)) Z4    (3)

Thus the height value for each point p in the surface region of triangles 1, 2, and 3 is found, and the pixel data for each point on the triangle surface are then replaced by its height value, as shown in figure 4. The figure shows the resulting output for the knot model, where the colors represent the height value at the corresponding pixel location. After storing the projection data in the final image we begin to scan the image to get all the final face indices that appear in it. These data are sufficient to retain the possibly collided faces of the projection angle.

Figure 4 - the height projection of a knot.

Then the object is rotated about the x and/or y axis by one degree and we return to the first step until the last angle combination; rotation about the x and y axes is enough to obtain all of the object's projection faces.

Then the online stage begins, as follows. First the face_groups and angle_faces arrays are constructed; the second contains, for each angle, the first index of that angle's projection data in the first array.

Stage 2; online processing algorithm:
1-Load all stored file data into the face_groups array.
2-In the angle_faces array store, for each angle, the first index of the projection data in the face_groups array.
3-Apply the broad phase collision detection test.
4-For each broad phase collided pair find each object's rotation matrix about the x, y, and z axes.
5-For each object in the collided pair calculate the angles between the new local z axis after rotation and the vector joining the two objects' centroids, on the local xz and yz projection planes.
6-Apply steps 4 and 5 for each collided pair in parallel.
7-For each object in the collided pair fetch the face data from the face_groups array through the angle_faces array, and perform all triangle-to-triangle intersection test combinations in parallel; if an intersection occurs, calculate each object's force effect on the other.
8-In parallel, sort the collided objects and their forces by object ID, where the object ID is the attribute of the object used to access additional object properties.
9-Collect the forces affecting each object in parallel to find the final velocity for each object.

The online stage concentrates on the real-time collision calculations, so it is important to save calculation time by parallelizing the triangle-to-triangle intersection operations on the possibly collided triangles only, as follows. When the system is initialized, the subsample sets of all projections are loaded into the face_groups array, and the angle_faces array is constructed of 360*360 elements that point to the beginning of each angle's projection data in the face_groups array.
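Equations (1)-(3) above are plain linear interpolations and can be written out directly; a short sketch, with variable names mirroring the point labels of figure 3:

```python
# Height on the edge a-b at horizontal position x, as in Eqs. (1)/(2):
# with (xa, za) = vertex 1 and (xb, zb) = vertex 2 this gives Z4, and
# with vertices 2 and 3 it gives Z5.
def edge_height(x, xa, za, xb, zb):
    return (x - xb) / (xa - xb) * za + (xa - x) / (xa - xb) * zb

# Eq. (3): height at point p between points 4 and 5 along the scan line.
def scanline_height(yp, y4, z4, y5, z5):
    return (y4 - yp) / (y4 - y5) * z5 + (yp - y5) / (y4 - y5) * z4
```

At the endpoints each formula returns the corresponding vertex height exactly, which is a quick sanity check on the reconstruction.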
In this way the time needed for loading the data is spent once, at system initiation. After that, the broad phase technique discussed in section 3 is applied. Then all pairs of collided objects are caught, and the angle of the collision projection is calculated for each object as shown in figure 5.

The system then updates the object attributes (position and velocity) to move the objects through space. We use Euler integration for simplicity: the velocity is updated based on the applied forces and gravity, and then the position is updated based on the velocity. Then we return once more to the broad phase.
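The Euler update described above can be sketched as follows; the state layout and function names are illustrative assumptions, not the paper's code:

```python
# Semi-implicit Euler step: the velocity is updated first from the
# applied force and gravity, then the position from the new velocity.
GRAVITY = (0.0, -9.81, 0.0)

def euler_step(pos, vel, force, mass, dt):
    ax = force[0] / mass + GRAVITY[0]
    ay = force[1] / mass + GRAVITY[1]
    az = force[2] / mass + GRAVITY[2]
    vel = (vel[0] + ax * dt, vel[1] + ay * dt, vel[2] + az * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt, pos[2] + vel[2] * dt)
    return pos, vel
```

Updating velocity before position (rather than the reverse) keeps the simple integrator noticeably more stable for physics loops like this one.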
5. RESULT AND ANALYSIS
No. of tested    No. of collided    Collision estimation time in ms
object pairs     pairs              knot        tour
100              70                 53.16       31.56
200              160                113.6       64.92
300              254                194.1       100.6
400              338                234.4       138.76
500              412                280.91      156.81
600              496                343.2       188.44
700              570                395.43      220.65
800              704                435.76      257.74
900              800                475.99      293.76

Only the triangles that appear in the face of collision are tested. The data is stored in an image, as the GPU can deal with images easily.

REFERENCES

[1] P.M. Hubbard, "Collision Detection for Interactive Graphics Applications," IEEE Trans. Visualization and Computer Graphics, 1(3), 218-230, 1995.
[6] S. Cameron, "Collision Detection by Four-Dimensional Intersection Testing," IEEE Trans. Robotics and Automation, 6(3), 291-302, 1990.
[7] P.M. Hubbard, "Approximating Polyhedra with Spheres for Time-Critical Collision Detection," ACM Trans. Graphics, 15(3), 179-210, 1996.