Real-Time Continuous Collision Detection Based on Swept Volume and Depth Texture

Author(s):  
Ji Wang ◽  
Zhengjun Zhai ◽  
Xiaobin Cai
2013 ◽  
Vol 756-759 ◽  
pp. 3189-3193
Author(s):  
Xiao Dong Shao ◽  
Wei Gao ◽  
Huan Ling Liu

This paper presents a novel algorithm that continuously checks the collision point of rigid objects and effectively solves the penetration and crossing problems in collision detection. At each simulation step, adaptive test lines (ATLs) are first constructed from the velocity vector of the moving object, and the intersection between the ATLs and the environment is then computed. A collision occurs when this intersection is non-empty, and the collision point is obtained through cross-frame processing. By testing interference between a body and the ATLs instead of between bodies, the detection efficiency is greatly improved, and collisions are not missed for objects of arbitrary shape or in any motion state. Simulation results show that the algorithm runs faster than general continuous collision detection algorithms while achieving detection quality similar to the swept-volume algorithm.
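The core idea above — casting test lines along the velocity vector and intersecting them with the environment — can be illustrated with a minimal sketch. This is not the authors' implementation: the function names (`segment_triangle_hit`, `first_contact`) are illustrative, the test lines here are simple fixed segments rather than the paper's adaptive ATLs, and the environment is reduced to a triangle list.

```python
import numpy as np

def segment_triangle_hit(p, q, tri, eps=1e-9):
    """Möller–Trumbore segment/triangle intersection.
    Returns the hit point as a NumPy array, or None on a miss."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    a, b, c = (np.asarray(v, float) for v in tri)
    d = q - p
    e1, e2 = b - a, c - a
    h = np.cross(d, e2)
    det = np.dot(e1, h)
    if abs(det) < eps:            # segment parallel to the triangle plane
        return None
    f = 1.0 / det
    s = p - a
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    qv = np.cross(s, e1)
    v = f * np.dot(d, qv)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(e2, qv)
    if 0.0 <= t <= 1.0:           # hit lies within the segment itself
        return p + t * d
    return None

def first_contact(points, velocity, dt, triangles):
    """Build one test line per sample point along the velocity vector
    for this frame; return the earliest hit as (t, hit_point), or None."""
    velocity = np.asarray(velocity, float)
    best = None
    for p in points:
        p = np.asarray(p, float)
        q = p + velocity * dt     # test line covering the frame's motion
        for tri in triangles:
            hit = segment_triangle_hit(p, q, tri)
            if hit is not None:
                t = np.linalg.norm(hit - p) / max(np.linalg.norm(q - p), 1e-12)
                if best is None or t < best[0]:
                    best = (t, hit)
    return best
```

Because the test line spans the whole per-frame displacement, a fast object cannot tunnel through a thin triangle between frames — which is exactly the penetration/crossing problem the abstract describes.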


2005 ◽  
Vol 5 (2) ◽  
pp. 126-137 ◽  
Author(s):  
Stephane Redon ◽  
Ming C. Lin ◽  
Dinesh Manocha ◽  
Young J. Kim

We present a novel algorithm to perform continuous collision detection for articulated models. Given two discrete configurations of the links of an articulated model, we use an “arbitrary in-between motion” to interpolate its motion between two successive time steps and check the resulting trajectory for collisions. Our approach uses a three-stage pipeline: (1) dynamic bounding-volume-hierarchy (D-BVH) culling based on interval arithmetic; (2) culling refinement using the swept volumes of line swept spheres (LSSs) and graphics-hardware-accelerated queries; (3) exact contact computation using OBB trees and continuous collision detection between triangular primitives. The overall algorithm computes the time of collision and the contact locations, and prevents any interpenetration between the articulated model and the environment. We have implemented the algorithm and tested its performance on a 2.4 GHz Pentium PC with 1 GB of RAM and an NVIDIA GeForce FX 5800 graphics card. In practice, our algorithm performs accurate continuous collision detection between articulated models and moderately complex environments at nearly interactive rates.
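Stage (1) of the pipeline relies on interval arithmetic to bound where a link can be over a whole time step, so that distant pairs are culled conservatively. The sketch below shows only that idea, in a deliberately simplified form: a point translating linearly over t ∈ [0, 1] rather than a rotating articulated link, with hypothetical helper names (`Interval`, `swept_aabb`, `overlaps`) not taken from the paper.

```python
class Interval:
    """Closed interval [lo, hi] with just the arithmetic needed below."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        o = other if isinstance(other, Interval) else Interval(other, other)
        return Interval(self.lo + o.lo, self.hi + o.hi)
    __radd__ = __add__

    def __mul__(self, other):
        o = other if isinstance(other, Interval) else Interval(other, other)
        # Interval product must cover all endpoint combinations.
        prods = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(prods), max(prods))
    __rmul__ = __mul__

def swept_aabb(p0, v):
    """Bound p(t) = p0 + t*v for t in [0, 1]: one interval per axis.
    The result encloses the entire trajectory over the time step."""
    t = Interval(0.0, 1.0)
    return [Interval(c, c) + t * w for c, w in zip(p0, v)]

def overlaps(box_a, box_b):
    """Conservative AABB test: False means the pair can be safely culled."""
    return all(a.lo <= b.hi and b.lo <= a.hi for a, b in zip(box_a, box_b))
```

Because the interval bound encloses every position the point can occupy during the step, a `False` here can never cull away a real collision — only the later, exact stages need to run on the pairs that survive.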


2007 ◽  
Vol 16 (2) ◽  
pp. 206-223 ◽  
Author(s):  
Young J Kim ◽  
Stephane Redon ◽  
Ming C Lin ◽  
Dinesh Manocha ◽  
Jim Templeman

We present an interactive algorithm for continuous collision detection between a moving avatar and its surrounding virtual environment. Our algorithm computes the first time of contact between the avatar and the environment interactively, and also guarantees, within a user-provided error threshold, that no collision occurs before the first contact. We model the avatar as an articulated body using line skeletons with constant offsets, and the virtual environment as a collection of polygonized objects. Given the position and orientation of the avatar at discrete time steps, we use an arbitrary in-between motion to interpolate the path of each link between discrete instances. We bound the swept space of each link using interval arithmetic and dynamically compute a bounding volume hierarchy (BVH) to cull links that are not in close proximity to the objects in the virtual environment. The swept volumes (SVs) of the remaining links are used to check for possible interference and to estimate the time of collision between the surface of the SV and the rest of the objects. Furthermore, we use graphics hardware to accelerate collision queries on the dynamically generated swept surfaces. Our approach requires no precomputation and is applicable to general articulated bodies that do not contain a loop. We have implemented the algorithm on a 2.8 GHz Pentium IV PC with an NVIDIA GeForce 6800 Ultra graphics card and applied it to an avatar with 16 links, moving in a virtual environment composed of hundreds of thousands of polygons. Our prototype system detects all contacts between the moving avatar and the environment in 10–30 ms.
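Two ingredients of the abstract — the arbitrary in-between motion and the first-time-of-contact guarantee within a user-provided error threshold — can be sketched in a reduced setting. The reduction here is an assumption, not the paper's method: the link skeleton with constant offset is collapsed to a single offset point (a sphere), the environment is given by a signed-distance function, the in-between motion is a plain linear interpolation, and the clearance is assumed to decrease monotonically so bisection suffices.

```python
import numpy as np

def in_between(p0, p1, t):
    """One simple choice of in-between motion: linear interpolation
    of the configuration between two discrete time steps."""
    return (1.0 - t) * np.asarray(p0, float) + t * np.asarray(p1, float)

def first_contact_time(p0, p1, radius, signed_dist, tol=1e-6):
    """First t in [0, 1] at which the offset sphere touches the environment,
    located by bisection to within the user-provided tolerance `tol`.

    `signed_dist(p)` returns the signed distance from p to the environment.
    Assumes the sphere starts collision-free and its clearance along the
    in-between motion decreases monotonically (a simplifying assumption)."""
    clearance = lambda t: signed_dist(in_between(p0, p1, t)) - radius
    if clearance(0.0) <= 0.0:
        return 0.0        # already in contact at the start of the step
    if clearance(1.0) > 0.0:
        return None       # no contact anywhere during this step
    lo, hi = 0.0, 1.0
    while hi - lo > tol:  # shrink the bracket until within tolerance
        mid = 0.5 * (lo + hi)
        if clearance(mid) > 0.0:
            lo = mid      # still collision-free at mid
        else:
            hi = mid      # contact at or before mid
    return hi
```

Returning the upper end of the final bracket errs on the early side, mirroring the guarantee that no collision is missed before the reported first contact, up to the tolerance.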


2006 ◽  
Vol 22 (2) ◽  
pp. 213-224 ◽  
Author(s):  
Y.-K. Choi ◽  
W. Wang ◽  
Y. Liu ◽  
M.-S. Kim
