Chapter 2 Collision Detection Design Issues

Designing an efficient collision detection system is a bit like putting a puzzle together:
a lot of pieces must be connected before the big picture starts to appear. In a similar
fashion, the majority of this book is concerned with examining the individual pieces
that go into different approaches to collision detection. The big picture will become
clear over the course of the book. This chapter provides a quick overview of a number
of issues that must be considered in selecting among approaches, and how the components of these approaches relate. This chapter also introduces a number of terms, defined and explained further in following chapters. More in-depth coverage of the items touched upon here is provided throughout the remaining chapters of the book.

2.1 Collision Algorithm Design Factors

There are several factors affecting the choices made in designing a collision detection system. These factors will be broken down into the following categories:

1. Application domain representation. The geometrical representations used for the
scene and its objects have a direct bearing on the algorithms used. With
fewer restrictions put on these representations, more general collision detection
solutions have to be used, with possible performance repercussions.

2. Different types of queries. Generally, the more detailed query types and results are, the more computational effort required to obtain them. Additional data structures may be required to support certain queries. Not all object representations support all query types.

3. Environment simulation parameters. The simulation itself contains several parameters having a direct impact on a collision detection system. These include how many objects there are, their relative sizes and positions, if and how they move, if they are allowed to interpenetrate, and whether they are rigid or flexible.

4. Performance. Real-time collision detection systems operate under strict time and
size restrictions. With time and space always being a trade-off, several features are usually balanced to meet stated performance requirements.

5. Robustness. Not all applications require the same level of physical simulation. For
example, stacking of bricks on top of each other requires much more sophistication
from a collision detection system than does having a basketball bouncing on a
basketball court. The ball bouncing slightly too early or at a somewhat larger
angle will go unnoticed, but even the slightest errors in computing contact points
of stacked bricks are likely to result in their slowly starting to interpenetrate or slide
off each other.

6. Ease of implementation and use. Most projects are on a time frame. The scheduled features of a collision detection system mean nothing if the system cannot
be completed and put in use on time. Decisions regarding implementational
simplicity therefore play a large role in what approach is taken.

These issues are covered in further detail in the remainder of the chapter.

2.2 Application Domain Representation

To select appropriate collision detection algorithms, it is important to consider the
types of geometrical representations that will be used for the scene and its objects. This
section talks briefly about various object representations, how simplified geometry can be used instead of modeling geometry, and how application-specific knowledge can allow specialized solutions to be used over more generic solutions.

2.2.1 Object Representations

Most current hardware uses triangles as the fundamental rendering primitive.
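To make the representational distinction concrete, the difference between an unordered collection of triangles and a connectivity-bearing indexed mesh (both discussed in this section) can be sketched as follows. This is an illustrative sketch, not code from this book; the struct names are hypothetical.

```cpp
#include <cassert>
#include <vector>

struct Vector3 { float x, y, z; };

// Polygon soup: an unordered list of independent triangles.
// No connectivity information; shared corners are simply duplicated.
struct TriangleSoup {
    std::vector<Vector3> triangleVertices; // 3 entries per triangle
};

// Indexed triangle mesh: vertices are shared through indices,
// which makes adjacency queries and closed-solid tests possible.
struct TriangleMesh {
    std::vector<Vector3> vertices;
    std::vector<int> indices; // 3 indices per triangle, into vertices
};
```

Two triangles sharing an edge need six vertex entries in the soup but only four shared vertices plus six indices in the mesh, and the shared edge is recoverable from the indices alone.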
Consequently, a polygonal representation is a natural choice for scenes and scene
objects, as well as for their corresponding collision geometry. The most generic polygonal representation is the polygon soup: an unordered collection of polygons with no connectivity information specifying how one polygon relates to another. With no
inherent constraints, the polygon soup is an attractive representation for artists and
level designers. Algorithms operating on polygon soups apply to any collection of
polygons but tend to be less efficient and less robust than those relying on additional
information. For example, a polygon soup contains no information regarding the
"inside" of an object, so there is no easy way of finding out if an object has somehow erroneously ended up inside another object. The additional information mentioned could include which edges connect to what vertices and what faces connect to a given face, whether the object forms a closed solid, and whether the object is convex or concave.

Polygons may be connected to one another at their edges to form a larger polygonal surface called a polygon mesh. Building objects from a collection of polygon meshes is one of the most common methods for authoring geometrical models (Figure 2.1).

Figure 2.1 Geometrical models, like the one pictured, are commonly built from a collection of polygon meshes.

Polygonal objects are defined in terms of their vertices, edges, and faces. When
constructed in this way, objects are said to have an explicit representation. Implicit
objects refer to spheres, cones, cylinders, ellipsoids, tori, and other geometric primitives that are not explicitly defined in such a manner but implicitly through a
mathematical expression. Implicit objects are often described as a function mapping
from 3D space to real numbers, f : R³ → R, where the points given by f(x, y, z) < 0 constitute the interior, f(x, y, z) = 0 the boundary, and f(x, y, z) > 0 the exterior of the
object (Figure 2.2). An object boundary defined by an implicit function is called an
implicit surface. Implicit objects can be used as rough approximations of scene objects
for quick rejection culling. The implicit form may allow for fast intersection tests,
especially with lines and rays — a fact utilized in ray tracing applications. Several
examples of implicit tests are provided in Chapter 5.

Figure 2.2 An implicitly defined sphere, x² + y² + z² ≤ r² (where the sphere is defined as the boundary plus the interior).

Convex polygonal objects can also be described as the intersection of a number of halfspaces. For example, a cube can be expressed as the intersection of six halfspaces, each halfspace "trimming away" the portion of space that lies outside a face of the cube. Halfspaces and halfspace intersection volumes are described in more detail in Chapter 3.

Figure 2.3 (a) A cube with a cylindrical hole through it. (b) The CSG construction tree for the left-hand object, where a cylinder is subtracted from the cube.

Geometric primitives such as spheres, boxes, and cylinders are also the building blocks of objects constructed via the constructive solid geometry (CSG) framework.
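As a concrete sketch of the last two descriptions, the implicit sphere of Figure 2.2 and a point-vs-cube test built from six halfspaces might look as follows. This is an illustrative sketch, not code from this book; the function names are hypothetical.

```cpp
#include <cassert>

// Implicit sphere of radius r centered at the origin:
// f(x,y,z) = x^2 + y^2 + z^2 - r^2.
// f < 0 is the interior, f == 0 the boundary, f > 0 the exterior.
float SphereImplicit(float x, float y, float z, float r)
{
    return x * x + y * y + z * z - r * r;
}

// A unit cube centered at the origin is the intersection of six halfspaces,
// one per face. A point is inside (or on) the cube exactly when it satisfies
// all six halfspace inequalities.
bool PointInUnitCube(float x, float y, float z)
{
    return x >= -0.5f && x <= 0.5f &&  // halfspaces for the two x faces
           y >= -0.5f && y <= 0.5f &&  // the two y faces
           z >= -0.5f && z <= 0.5f;    // the two z faces
}
```

Note how the halfspace test rejects a point as soon as any single inequality fails, which is what makes the implicit and halfspace forms attractive for quick rejection culling.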
CSG objects are recursively formed through applying set-theoretic operations (such as union, intersection, or difference) on basic geometric shapes or other CSG objects, allowing arbitrarily complex objects to be constructed. Thus, a CSG object is represented as a (binary) tree, with set-theoretic operations given in the internal nodes
and geometry primitives in the leaves (Figure 2.3). CSG objects are implicit in that
vertices, edges, and faces are not directly available.

A strength of CSG modeling is that the resulting objects are always valid, without
cracks and other problems that plague polygonal representations. CSG is also a
volume representation, making it easy to determine if, for example, a query point lies
inside the CSG object. CSG on polyhedral objects can be implemented through the
processes described in, for example, [Laidlaw86] and [Thibault87]. However, it can be difficult to achieve robust implementations due to numerical imprecision in the
calculations involved.

2.2.2 Collision Versus Rendering Geometry

Although it is possible to pass rendering geometry directly into a collision system,
there are several reasons it is better to have separate geometry with which collision detection is performed.

1. Graphics platforms have advanced to the point where rendering geometry is
becoming too complex to be used to perform collision detection or physics. In addition, there is usually a limit as to how accurate collisions must be. Thus, rather than using the same geometry used for rendering, a simplified proxy geometry can be substituted in its place for collision detection. For games, for example, it is common to rely on simple geometric shapes such as spheres and boxes to represent the game object, regardless of object complexity. If the proxy objects collide, the actual objects are assumed to collide as well. These simple geometric shapes, or bounding volumes, are frequently used to accelerate collision queries regardless of what geometry representation is used. Bounding volumes are typically made to encapsulate the geometry fully. Bounding volumes are discussed in detail in Chapter 4.

2. For modern hardware, geometry tends to be given in very specific formats (such as
triangle strips and indexed vertex buffers), which lend themselves to fast rendering but not to collision detection. Rather than decoding these structures on the fly (even though the decoded data can be cached for reuse), it is usually more efficient to provide special collision geometry. In addition, graphics hardware often enforces triangle-only formats. For collision geometry, efficiency sometimes can be had by supporting other, nontriangle, primitives.

3. The required data and data organization of rendering geometry and collision geometry are likely to vary drastically. Whereas static rendering data might be sorted by material, collision data are generally organized spatially. Rendering geometry requires embedded data such as material information, vertex colors, and texture coordinates, whereas collision geometry needs associated surface properties. Separating the two and keeping all collision-relevant information together makes the collision data smaller. Smaller data, in turn, leads to efficiency improvements due to better data cache coherency.
4. Sometimes the collision geometry differs from the rendered geometry by design. For example, the knee-deep powder snow in a snowboarding game can be modeled by a collision surface two feet below the rendered representation of the snow surface. Walking in ankle-deep swaying grass or wading in waist-deep murky water can be handled similarly. Even if rendering geometry is used as collision geometry, there must be provisions for excluding some rendering geometry from (and for including additional nonrendering geometry in) the collision geometry data set.
5. For simulation purposes, collision data must be kept around even when rendering data can be thrown out as not visible. With the collision geometry being smaller than the corresponding rendering geometry, the permanent memory footprint is therefore reduced.

6. The original geometry might be given as a polygon soup or mesh, whereas the simulation requires a solid-object representation. In this case, it is much easier to compute solid proxy geometry than to attempt to somehow solidify the original geometrical representation.

However, there are some potential drawbacks to using separate collision geometry.

1. Data duplication (primarily of vertices) causes additional memory to be used. This
problem may be alleviated by creating some or all of the collision geometry from
the rendering geometry on the fly through linearization caching (as described in
Section 13.5 and onward).

2. Extra work may be required to produce and maintain two sets of similar geometry.
Building the proxy geometry by hand will impair the schedule of the designer
creating it. If it is built by a tool, that tool must be written before the collision
system becomes usable. In addition, if there is a need to manually modify the tool
output, the changes must somehow be communicated back into the tool and the
original data set.

3. If built and maintained separately, the rendering and collision geometries may
mismatch in places. When the collision geometry does not fill the same volume
as the render geometry, objects may partially disappear into or float above the
surface of other objects.

4. Versioning and other logistics problems can show up for the two geometries. Was
the collision geometry really rebuilt when the rendering geometry changed? If
created manually, which comes first: collision geometry or rendering geometry?
And how do you update one when the other changes?

For games, using proxy geometry that is close to (but may not exactly match) actual visuals works quite well. Perceptually, humans are not very good at detecting
whether exact collisions are taking place. The more objects involved and the faster
they move, the less likely the player is to spot any discrepancies. Humans are also bad
at predicting what the outcome of a collision should be, which allows liberties to be
taken with the collision response as well. In games, collision detection and response
can effectively be governed by "if it looks right, it is right." Other applications have stricter accuracy requirements.

2.2.3 Collision Algorithm Specialization

Rather than having one all-encompassing collision detection system, it is often wise to provide specialized collision systems for specific scenarios. An example of where specialization is relevant is particle collisions. Rather than sending particles one by one through the normal collision system, they are better handled and submitted for collision as groups of particles, where the groups may form and reform based on context. Particles may even be excluded from collision, in cases where the lack of collision is not noticeable.

Another example is the use of separate algorithms for detecting collision between
an object and other objects and between the object and the scene. Object-object collisions might even be further specialized so that a player character and fast-moving projectiles are handled differently from other objects. For example, a case where all objects always collide against the player character is better handled as a hard-coded test rather than inserting the player character into the general collision system.

Consider also the simulation of large worlds. For small worlds, collision data can be held in memory at all times. For the large, seamless world, however, collision data must be loaded and unloaded as the world is traversed. In the latter case, having objects separate from the world structure is again an attractive choice, so the objects are not affected by changes to the world structure. A possible drawback of having separate structures for holding, say, objects and world, is that querying now entails traversing two data structures as opposed to just one.

2.3 Types of Queries

The most straightforward collision query is the interference detection or intersection
testing problem: answering the Boolean question of whether two (static) objects, A and B, are overlapping at their given positions and orientations. Boolean intersection queries are both fast and easy to implement and are therefore commonly used. However, sometimes a Boolean result is not enough and the parts intersecting must be found. The problem of intersection finding is a more difficult one, involving finding one or more points of contact.

For some applications, finding any one point in common between the objects might be sufficient. In others, such as in rigid-body simulations, the set of contacting points (the contact manifold) may need to be determined. Robustly computing the contact manifold is a difficult problem. Overall, approximate queries, where the answers are only required to be accurate up to a given tolerance, are much easier to deal with than exact queries. Approximate queries are commonplace in games. Additionally, in games, collision queries are generally required to report specific collision properties assigned to the objects and their boundaries. For example, such properties may include slipperiness of a road surface or climbability of a wall surface.

If objects penetrate, some applications require finding the penetration depth. The
penetration depth is usually defined in terms of the minimum translational distance: the
length of the shortest movement vector that would separate the objects. Computing this movement vector is a difficult problem, in general. The separation distance between two disjoint objects A and B is defined as the minimum of the distances between points
a distance measure between two objects is useful in that it allows for prediction of
the next time of collision. A more general problem is that of finding the closest points
of A and B: a point in A and a point in B giving the separation distance between
the objects. Note that the closest points are not necessarily unique; there may be
an infinite number of closest points. For dynamic objects, computing the next time
of collision is known as the estimated time of arrival (ETA) or time of impact (TOI)
computation. The ETA value can be used to, for instance, control the time step in a
rigid-body simulation. Type of motion is one of the simulation parameters discussed further in the next section.

2.4 Environment Simulation Parameters

As mentioned earlier in the chapter, several parameters of a simulation directly affect
what are appropriate choices for a collision detection system. To illustrate some of the
issues they may cause, the following sections look specifically at how the number of
objects and how the objects move relate to collision processing.

2.4.1 Number of Objects

Because any one object can potentially collide with any other object, a simulation with n objects requires (n − 1) + (n − 2) + ... + 1 = n(n − 1)/2 = O(n²) pairwise tests,
worst case. Due to the quadratic time complexity, naively testing every object pair
for collision quickly becomes too expensive even for moderate values of n. Reducing
the cost associated with the pairwise test will only linearly affect runtime. To really
speed up the process, the number of pairs tested must be reduced. This reduction is
performed by separating the collision handling of multiple objects into two phases:
the broad phase and the narrow phase. The broad phase identifies smaller groups of objects that may be colliding and
quickly excludes those that definitely are not. The narrow phase constitutes the pairwise tests within subgroups. It is responsible for determining the exact collisions, if
any. The broad and narrow phases are sometimes called n-body processing and pair processing, respectively.

Figure 2.4 illustrates how broad-phase processing reduces the workload through a
divide-and-conquer strategy. For the 11 objects (illustrated by boxes), an all-pairs test would require 55 individual pair tests. After broad-phase processing has produced 5
disjoint subgroups (indicated by the shaded areas), only 10 individual pair tests would
have to be performed in the narrow phase.
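The arithmetic behind the Figure 2.4 example can be checked directly. The sketch below is illustrative, not code from this book; it assumes, purely for the sake of the numbers, that the 11 objects split into disjoint groups of sizes 4, 3, 2, 1, and 1 (the figure's actual grouping may differ while giving the same totals).

```cpp
#include <cassert>
#include <vector>

// Brute-force broad phase: test every object against every later object,
// giving (n-1) + (n-2) + ... + 1 = n(n-1)/2 tests.
int CountAllPairsTests(int n)
{
    int tests = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            tests++; // a real system would invoke its pairwise test here
    return tests;
}

// Narrow-phase tests remaining after a broad phase has split the objects
// into disjoint groups: only pairs within the same group are tested.
int NarrowPhaseTests(const std::vector<int>& groupSizes)
{
    int tests = 0;
    for (int k : groupSizes)
        tests += k * (k - 1) / 2; // all-pairs count within one group
    return tests;
}
```

For 11 objects, CountAllPairsTests(11) yields 55, while NarrowPhaseTests({4, 3, 2, 1, 1}) yields only 6 + 3 + 1 = 10, matching the reduction described above.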