Physics Engines Evaluation Based on Model Representation Analysis

Author(s):  
Germánico González Badillo ◽  
Hugo I. Medellín Castillo ◽  
Theodore Lim ◽  
Víctor E. Espinoza López

Virtual environments (VEs) are becoming a popular way to interact with virtual objects in applications such as design, training, and planning. Physics simulation engines (PSEs) developed for games can increase the realism of VEs by endowing virtual objects with dynamic behavior and collision detection. Several PSEs are available for integration with VEs, and each uses a different model representation method to create the collision shape and compute the dynamic behavior of virtual objects. The performance of physics-based VEs is therefore directly related to a PSE's capabilities and its object representation method. This paper analyzes different freely available PSEs — Bullet and the two latest versions of PhysX (v2.8 and 3.1) — based on their model representation algorithms, and evaluates them by performing various assembly tasks of differing geometric complexity. The evaluation is based on collision detection performance and its influence on the haptic virtual assembly process. The results allow the strengths and weaknesses of each PSE to be identified according to its representation method.
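As a rough, engine-agnostic sketch of the collision pipeline these PSEs share: a cheap broad phase first culls candidate pairs with axis-aligned bounding boxes before the expensive narrow-phase test against the model representation. The names below are illustrative, not taken from Bullet or PhysX:

```python
from dataclasses import dataclass

@dataclass
class AABB:
    # axis-aligned bounding box: min and max corners as (x, y, z) tuples
    lo: tuple
    hi: tuple

def aabb_overlap(a: AABB, b: AABB) -> bool:
    """Broad-phase test: two boxes overlap iff they overlap on every axis."""
    return all(a.lo[i] <= b.hi[i] and b.lo[i] <= a.hi[i] for i in range(3))

def broad_phase(boxes):
    """Return candidate pairs for the narrow phase. This O(n^2) sweep is a
    sketch; real engines use sweep-and-prune or a bounding-volume hierarchy."""
    pairs = []
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if aabb_overlap(boxes[i], boxes[j]):
                pairs.append((i, j))
    return pairs
```

Only the pairs that survive this cull are handed to the narrow phase, which is where the per-engine model representation (convex hull, triangle mesh, decomposition) dominates the cost.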

Author(s):  
Germanico Gonzalez ◽  
Hugo I. Medellin ◽  
Theodore Lim ◽  
James M. Ritchie ◽  
Raymond C. W. Sung

Physics-based modelling (PBM) uses physics simulation engines (PSEs) to provide the dynamic behaviour and collision detection of virtual objects in virtual environments that emulate the real world. A variety of PSEs exist, each with pros and cons depending on the application in which it is employed, and each using its own proprietary collision detection algorithm. Collision detection is a key aspect of assembly tasks, and its performance depends on how virtual objects are represented. In general, objects can be divided into two groups, convex and concave, the latter being the more common and the more challenging for collision detection algorithms. This study reports on three different methods to represent concave objects: GIMPACT, Hierarchical Approximate Convex Decomposition (HACD) and Approximate Convex Decomposition (ACD), which are evaluated and compared on the basis of their collision detection performance. An exact convex decomposition algorithm, named ConvexFT, is also proposed and analysed in this paper. Finally, the performance of the three existing methods and the proposed ConvexFT approach are compared in order to assess which model representation algorithm is best suited for haptic virtual assembly tasks.
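The test at the heart of all of these decomposition schemes is whether a piece of geometry is already convex (and thus cheap for collision detection). A minimal 2D illustration of that check, via a convex hull, is sketched below; this is only a didactic reduction, not the algorithm used by GIMPACT, HACD, ACD, or the proposed ConvexFT:

```python
def convex_hull(points):
    """Andrew's monotone chain for 2D points; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); <= 0 means a clockwise or straight turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def is_convex(points):
    """A vertex set is convex iff every vertex lies on its own convex hull;
    concave shapes (the hard case for collision detection) fail this test."""
    return set(points) <= set(convex_hull(points))
```

A concave part such as an L-shaped bracket fails the check, which is exactly what forces a PSE either to wrap it in a tree of triangles (GIMPACT) or to split it into approximately convex pieces (HACD/ACD).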


Author(s):  
Germánico González Badillo ◽  
Hugo I. Medellín-Castillo ◽  
Víctor E. Espinoza López

Virtual assembly systems have become popular in recent years due to their ability to simulate natural interaction between parts and their ease of manipulation by the user. One of the most relevant technologies used in virtual assembly systems is the haptic device, which provides force feedback and allows real-world conditions such as weight, inertia, texture and collisions to be simulated. Physics simulation engines (PSEs) are another important tool used to simulate realistic behavior in virtual assembly systems by enabling gravity and collision response for the virtual objects, resulting in real-world behavior. However, the use of haptic systems together with physics simulation engines is costly in terms of computing resources. This cost is mainly associated with collision detection between virtual objects, and it increases as the shapes represented within the PSE become more complex, degrading the performance of the virtual assembly system and making it very difficult to simulate the assembly of complex parts or to use several parts in the assembly. The present work presents a new algorithm to simulate complex objects by using different representations of the same object according to its dynamic state during the assembly process. The results show that the use of mixed model representation reduces the computing time when assembling objects, thus improving the performance of the virtual assembly system and ultimately the comfort and performance of the user during the assembly process. The HAMS (Haptic Assembly and Manufacturing System) platform was used for the experimental validation, and four assembly tasks representing real assembly objects were simulated.
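The idea of switching representations by dynamic state can be sketched as a small state machine: a cheap convex proxy while the part moves freely, the exact concave mesh only near the mating features. The class, threshold, and representation names below are hypothetical illustrations, not the actual switching criteria used in HAMS:

```python
class MixedRepresentationPart:
    """Sketch of mixed model representation: swap the collision shape used by
    the physics engine depending on the part's dynamic state. The 5 cm
    threshold is an assumed value for illustration only."""

    def __init__(self, near_threshold=0.05):
        self.near_threshold = near_threshold  # metres to the assembly target
        self.representation = "convex_hull"   # cheap proxy while moving freely

    def update(self, distance_to_target: float) -> str:
        # Far from the mating features, a convex hull suffices for gravity and
        # contact response and keeps collision detection cheap. Close to
        # insertion, switch to the exact (concave) mesh so the fine geometry
        # of the assembly features is respected.
        if distance_to_target < self.near_threshold:
            self.representation = "triangle_mesh"
        else:
            self.representation = "convex_hull"
        return self.representation
```

Because the expensive mesh is only active for the one part currently being mated, the total collision cost stays close to that of an all-convex scene.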


2019 ◽  
Vol 9 (9) ◽  
pp. 1797
Author(s):  
Chen ◽  
Lin

Augmented reality (AR) is an emerging technology that allows users to interact with simulated environments, including those emulating scenes in the real world. Most current AR technologies involve the placement of virtual objects within these scenes. However, difficulties in modeling real-world objects greatly limit the scope of the simulation, and thus the depth of the user experience. In this study, we developed a process by which to realize virtual environments that are based entirely on scenes in the real world. In modeling the real world, the proposed scheme divides scenes into discrete objects, which are then replaced with virtual objects. This enables users to interact in and with virtual environments without limitations. An RGB-D camera is used in conjunction with simultaneous localization and mapping (SLAM) to obtain the movement trajectory of the user and derive information related to the real environment. In modeling the environment, graph-based segmentation is used to segment point clouds and perform object segmentation to enable the subsequent replacement of objects with equivalent virtual entities. Superquadrics are used to derive shape parameters and location information from the segmentation results in order to ensure that the scale of the virtual objects matches that of the original objects in the real world. Only after the objects have been replaced with their virtual counterparts is the real environment converted into a virtual scene. Experiments involving the emulation of real-world locations demonstrated the feasibility of the proposed rendering scheme. A rock-climbing application scenario is finally presented to illustrate the potential use of the proposed system in AR applications.
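Superquadric fitting of the kind used above rests on the standard superquadric inside-outside function: a point is inside the shape when the function is below 1, on the surface at 1, and outside above 1. The sketch below evaluates that function with the conventional size parameters a1, a2, a3 and shape exponents e1, e2 (the fitting itself, which minimizes this function over segmented points, is not reproduced here):

```python
def superquadric_F(x, y, z, a1, a2, a3, e1, e2):
    """Superquadric inside-outside function:
    F < 1 inside, F == 1 on the surface, F > 1 outside.
    e1 = e2 = 1 with equal radii recovers an ordinary ellipsoid/sphere."""
    t = (abs(x / a1) ** (2.0 / e2) + abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
    return t + abs(z / a3) ** (2.0 / e1)
```

Fitting a superquadric to one segmented point cloud then amounts to choosing a1..a3, e1, e2 (plus a pose) so that F is as close to 1 as possible for all points, which yields the scale and shape parameters used to size the replacement virtual object.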


1999 ◽  
Vol 4 (1) ◽  
pp. 8-17 ◽  
Author(s):  
G Jansson ◽  
H Petrie ◽  
C Colwell ◽  
D. Kornbrot ◽  
J. Fänger ◽  
...  

This paper is a fusion of two independent studies investigating related problems concerning the use of haptic virtual environments for blind people: a study in Sweden using a PHANToM 1.5 A and one in the U.K. using an Impulse Engine 3000. In general, the use of such devices is a most interesting option for providing blind people with information about representations of the 3D world, but the restriction at each moment to only one point of contact between observer and virtual object might decrease their effectiveness. The studies investigated the perception of virtual textures, the identification of virtual objects, and the perception of their size and angles. Both sighted (blindfolded in one study) and blind people served as participants. It was found (1) that the PHANToM can effectively render textures in the form of sandpapers and simple 3D geometric forms, and (2) that the Impulse Engine can effectively render textures consisting of grooved surfaces, as well as 3D objects, properties of which were, however, judged with some over- or underestimation. When the performance of blind and sighted participants was compared, differences were found that deserve further attention. In general, the haptic devices studied have demonstrated the great potential of force feedback devices in rendering relatively simple environments, in spite of the restricted ways they allow for exploring the virtual world. The results highly motivate further studies of their effectiveness, especially in more complex contexts.


2002 ◽  
Vol 11 (6) ◽  
pp. 591-609 ◽  
Author(s):  
Roy A. Ruddle ◽  
Justin C. D. Savage ◽  
Dylan M. Jones

A set of rules is presented for the design of interfaces that allow virtual objects to be manipulated in 3D virtual environments (VEs). The rules differ from other interaction techniques because they focus on the problems of manipulating objects in cluttered spaces rather than open spaces. Two experiments are described that were used to evaluate the effect of different interaction rules on participants' performance in a task known as "the piano mover's problem." This task required participants to move a virtual human through parts of a virtual building while simultaneously manipulating a large virtual object held in the virtual human's hands, resembling the simulation of manual materials handling in a VE for ergonomic design. Throughout, participants viewed the VE on a large monitor, using an "over-the-shoulder" perspective. In the most cluttered VEs, the time that participants took to complete the task varied by up to 76% with different combinations of rules, thus indicating the need for flexible forms of interaction in such environments.


Author(s):  
Abdeldjallil Naceri ◽  
Thierry Hoinville ◽  
Ryad Chellali ◽  
Jesus Ortiz ◽  
Shannon Hennig

The main objective of this paper is to investigate whether observers are able to perceive the depth of virtual objects within virtual environments during reaching tasks. In other words, we address the question of observer immersion in a displayed virtual environment. For this purpose, eight observers were asked to reach for virtual objects displayed within their peripersonal space under two conditions: in the first, a small virtual sphere was displayed beyond the subject's index finger as an extension of the hand; in the second, no visual feedback was provided. In addition, audio feedback was provided in both conditions when contact with the virtual object was made. Although observers slightly overestimated depth within the peripersonal space, the kinematic analysis shows that they aimed accurately for the virtual objects. Furthermore, no significant difference in movement was found between conditions for any observer. Observers targeted the virtual point accurately in both time and space, suggesting that the virtual environment sufficiently reproduced the information normally available to the central nervous system.


2009 ◽  
Vol 8 (2) ◽  
pp. 1-6 ◽  
Author(s):  
Peng Song ◽  
Hang Yu ◽  
Stefan Winkler

Mixed reality applications can provide users with enhanced interaction experiences by integrating virtual and real-world objects in a mixed environment. Through the mixed reality interface, a more realistic and immersive control style is achieved than with traditional keyboard and mouse input devices. The interface proposed in this paper consists of a stereo camera, which tracks the user's hands and fingers robustly and accurately in 3D space. To enable a physically realistic interaction experience, a physics engine is adopted to simulate the physics of virtual object manipulation. Objects can be picked up and tossed with physical characteristics, such as gravity and collisions, as in the real world. Detection and interaction in our system are fully computer-vision based, without any markers or additional sensors. We demonstrate this gesture-based interface using two mixed reality game implementations: finger fishing, in which a player fishes for virtual objects with his/her fingers as in a real environment, and Jenga, a simulation of the well-known tower-building game. A user study is conducted and reported to demonstrate the accuracy, effectiveness and comfort of this interactive interface.
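The "pick up and toss with gravity" behavior a physics engine supplies can be sketched as one semi-implicit Euler integration step with a simple floor bounce. This is a generic illustration of what such an engine computes each frame, not the particular engine or parameters used in this paper:

```python
GRAVITY = -9.81  # m/s^2, acting on the y axis

def step(pos, vel, dt, floor=0.0, restitution=0.5):
    """One semi-implicit Euler step for a tossed object in the x-y plane.
    Velocity is integrated before position (the usual game-engine order),
    and a hit against the floor reflects the vertical velocity, scaled by
    an assumed restitution coefficient."""
    vy = vel[1] + GRAVITY * dt      # integrate velocity first
    x = pos[0] + vel[0] * dt
    y = pos[1] + vy * dt
    if y < floor:                   # simple collision response against the floor
        y = floor
        vy = -vy * restitution
    return (x, y), (vel[0], vy)
```

Run once per frame, this is enough for a tossed object to arc, land, and bounce with decreasing energy; a full engine adds object-object collisions and rotation on top of the same integration loop.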

