Analysis of Influence Factors of Virtual Human Real-Time Driven Accuracy and its Optimization in Virtual Reality Environment

Author(s):  
Shiguang Qiu ◽  
Xu Jing ◽  
Xiumin Fan ◽  
Qichang He ◽  
Dianliang Wu

When a real operator drives a virtual human in real time through motion capture to perform assembly and disassembly simulation of complex products, very high driven accuracy is needed to meet the quality requirements of interactivity and of the simulation results. To improve the driven accuracy in a virtual reality environment, a method is put forward that analyzes the factors influencing virtual human real-time driven accuracy and optimizes them. A systematic analysis of the factors affecting accuracy is given; they can be sorted into hardware factors and software factors. We find that the software factors are the main ones affecting accuracy, and that it is very hard to analyze their influence separately. Therefore, we treat the virtual human kinematic system as a fuzzy system and improve the real-time driven accuracy using an optimization method. Firstly, a real-time driven model is built on dynamic constraints and body joint rotation information, supporting personalized human driving. Secondly, a function is established to describe the driven error during interactive operations in the virtual environment. Then, based on the principle of minimum cumulative error, we establish an optimization model with a specified optimization zone and constraints set according to standard Chinese adult dimensions. Next, the model is solved with a genetic algorithm to obtain the virtual human segment dimensions that best match the real operator. Lastly, the method is verified with an example of auto engine virtual assembly. The results show that the method improves the driven accuracy effectively.
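The genetic-algorithm step described in the abstract — searching bounded segment dimensions for the minimum cumulative error — might be sketched roughly as follows. The segment names, bounds, target dimensions, and error function below are illustrative stand-ins, not values or formulas from the paper:

```python
import random

# Hypothetical target segment lengths (metres) standing in for the real
# operator's true dimensions; bounds roughly mimic an anthropometric range.
TARGET = [0.45, 0.42, 0.30, 0.28]          # e.g. thigh, shank, upper arm, forearm
BOUNDS = [(0.35, 0.55), (0.32, 0.52), (0.22, 0.38), (0.20, 0.36)]

def cumulative_error(segments):
    """Stand-in for the paper's driven-error function: here simply the
    summed squared deviation from the target dimensions."""
    return sum((s - t) ** 2 for s, t in zip(segments, TARGET))

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(ind, rate=0.2):
    # Gaussian perturbation, clamped back into the optimization zone.
    return [min(hi, max(lo, g + random.gauss(0, 0.01))) if random.random() < rate else g
            for g, (lo, hi) in zip(ind, BOUNDS)]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def optimize(pop_size=60, generations=200):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cumulative_error)
        elite = pop[: pop_size // 4]           # keep the best quarter (elitism)
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            children.append(mutate(crossover(a, b)))
        pop = elite + children
    return min(pop, key=cumulative_error)

if __name__ == "__main__":
    best = optimize()
    print(best, cumulative_error(best))
```

The elitist selection guarantees the best candidate never regresses, so the cumulative error is monotonically non-increasing across generations.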

Author(s):  
Kevin Lesniak ◽  
Conrad S. Tucker ◽  
Sven Bilen ◽  
Janis Terpenny ◽  
Chimay Anumba

Immersive virtual reality systems have the potential to transform the manner in which designers create prototypes and collaborate in teams. Using technologies such as the Oculus Rift or the HTC Vive, a designer can attain a sense of “presence” and “immersion” typically not experienced with traditional CAD-based platforms. However, one of the fundamental challenges of creating a high-quality immersive virtual reality experience is creating the immersive virtual reality environment itself. Typically, designers spend a considerable amount of time manually designing virtual models that replicate physical, real-world artifacts. While it is possible to import standard 3D models into these immersive virtual reality environments, such models are typically generic in nature and do not represent the designer’s intent. To mitigate these challenges, the authors of this work propose the real-time translation of physical objects into an immersive virtual reality environment using readily available RGB-D sensing systems and standard networking connections. The emergence of commercial, off-the-shelf RGB-D sensing systems such as the Microsoft Kinect has enabled the rapid 3D reconstruction of physical environments. The authors present a methodology that employs 3D mesh reconstruction algorithms and real-time rendering techniques to capture physical objects in the real world and represent their 3D reconstruction in an immersive virtual reality environment with which the user can then interact. A case study involving a commodity RGB-D sensor and multiple computers connected through standard TCP internet connections is presented to demonstrate the viability of the proposed methodology.
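The networking side of such a pipeline — shipping reconstructed geometry between machines over a standard TCP connection — can be sketched with simple length-prefixed framing. The frame layout and vertex format here are assumptions for illustration, not the authors' actual protocol:

```python
import socket
import struct
import threading

# Frame layout (assumed for illustration):
#   [uint32 payload byte count][float32 x, y, z per vertex, little-endian]
# The vertex list is a stand-in for the output of an RGB-D reconstruction step.

def send_mesh(conn, vertices):
    payload = b"".join(struct.pack("<3f", *v) for v in vertices)
    conn.sendall(struct.pack("<I", len(payload)) + payload)

def recv_exact(conn, n):
    """Read exactly n bytes; TCP recv may return short reads."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf += chunk
    return buf

def recv_mesh(conn):
    (nbytes,) = struct.unpack("<I", recv_exact(conn, 4))
    data = recv_exact(conn, nbytes)
    return [struct.unpack_from("<3f", data, i) for i in range(0, nbytes, 12)]

if __name__ == "__main__":
    server = socket.create_server(("127.0.0.1", 0))
    host, port = server.getsockname()
    mesh = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # one triangle

    def serve():
        conn, _ = server.accept()
        with conn:
            send_mesh(conn, mesh)

    t = threading.Thread(target=serve)
    t.start()
    with socket.create_connection((host, port)) as c:
        print(recv_mesh(c))
    t.join()
    server.close()
```

The explicit length prefix lets the receiver reassemble each mesh frame from the TCP byte stream without relying on message boundaries, which TCP does not preserve.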


Author(s):  
Kevin Lesniak ◽  
Janis Terpenny ◽  
Conrad S. Tucker ◽  
Chimay Anumba ◽  
Sven G. Bilén

With design teams becoming more distributed, the sharing and interpreting of complex data about design concepts/prototypes and environments have become increasingly challenging. The size and quality of data that can be captured and shared directly affects the ability of receivers of that data to collaborate and provide meaningful feedback. To mitigate these challenges, the authors of this work propose the real-time translation of physical objects into an immersive virtual reality environment using readily available red, green, blue, and depth (RGB-D) sensing systems and standard networking connections. The emergence of commercial, off-the-shelf RGB-D sensing systems, such as the Microsoft Kinect, has enabled the rapid three-dimensional (3D) reconstruction of physical environments. The authors present a method that employs 3D mesh reconstruction algorithms and real-time rendering techniques to capture physical objects in the real world and represent their 3D reconstruction in an immersive virtual reality environment with which the user can then interact. Providing these features allows distributed design teams to share and interpret complex 3D data in a natural manner. The method reduces the processing requirements of the data capture system while enabling it to be portable. The method also provides an immersive environment in which designers can view and interpret the data remotely. A case study involving a commodity RGB-D sensor and multiple computers connected through standard TCP internet connections is presented to demonstrate the viability of the proposed method.
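The capture side of such a method begins by back-projecting each depth pixel into a camera-space 3D point via the pinhole model. The intrinsic parameters below are illustrative defaults, not actual Kinect calibration values:

```python
# Back-projection of depth pixels to 3D points with pinhole intrinsics.
# fx, fy (focal lengths) and cx, cy (principal point) are assumed,
# illustrative values, not a real device calibration.

def deproject(u, v, depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Map a pixel (u, v) with metric depth to a camera-space 3D point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

def depth_image_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Convert a row-major depth image (nested lists, metres) to a point
    cloud, skipping invalid zero-depth pixels."""
    points = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d > 0:
                points.append(deproject(u, v, d, fx, fy, cx, cy))
    return points

if __name__ == "__main__":
    tiny = [[0.0, 1.0], [2.0, 0.0]]   # 2x2 depth image, metres
    print(depth_image_to_points(tiny))
```

A mesh-reconstruction stage would then connect these points into triangles; the point-cloud step above is the part that reduces per-frame processing to simple per-pixel arithmetic, which helps keep the capture system lightweight and portable.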


Author(s):  
Nobuyoshi Terashima

On the Internet, a cyberspace is created in which people communicate, usually by exchanging textual messages; they therefore cannot see each other in cyberspace. Whenever they communicate, it is desirable for them to see each other as if they were gathered at the same place. To achieve this, various concepts have been proposed, such as collaborative environments, Tele-Immersion, and tele-presence (Sherman & Craig, 2003). In this article, HyperReality (HR) is introduced. HR is a communication paradigm between the real and the virtual (Terashima, 1995, 2002; Terashima & Tiffin, 2002; Terashima, Tiffin, & Ashworth, in press). The real means a real inhabitant, such as a real human or a real animal; the virtual means a virtual inhabitant, such as a virtual human or a virtual animal. HR provides a communication environment in which inhabitants, real or virtual, located at different places can meet and do cooperative work together as if they were gathered at the same place. HR can be developed based on virtual reality (VR) and telecommunications technologies.


2019 ◽  
Vol 71 ◽  
pp. 05010
Author(s):  
V. Dobrova ◽  
P. Labzina ◽  
N. Ageenko ◽  
S. Menshenina

Globalization and innovation have recently resulted in the extensive use of the latest technological products practically everywhere, and especially in education. Various technologies are now employed in different spheres of education. Virtual Reality (VR) is a global innovative technology with great potential and enormous pedagogical possibilities that offers new methods and techniques for education. Its main features are visibility, security, involvement, presence, and focus. It makes it possible to combine computer-generated virtual information with the real environment in real time. The presented VR language program is based on the concept of 3D modeling and the semantic frame method.


2012 ◽  
Vol 588-589 ◽  
pp. 1320-1323
Author(s):  
Li Xia Wang

This paper takes virtual reality technology as its core and establishes a housing virtual reality roaming display system. After a detailed analysis of the system architecture, we focus on how to build the terrain database and the three-dimensional scenery database using MultiGen Creator, and on calling OpenGVS through MSVC to carry out real-time scene control and realize complex special effects.


2010 ◽  
Vol 40-41 ◽  
pp. 388-391 ◽  
Author(s):  
Shou Xiang Zhang

An unmanned mining technology for fully mechanized longwall face automation production is proposed and studied. This essential technology visualizes longwall face production through the union of Virtual Reality (VR) and Augmented Reality (AR). Based on the visual theoretical model of the longwall face, the combination of the virtual and the real, real-time interaction, and the 3D registration function were realized. Keying technology and the alpha channel are used to combine the real longwall face with the virtual user.
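The alpha-channel compositing mentioned in the abstract follows the standard "over" operator, out = alpha * virtual + (1 - alpha) * real. The grayscale frames below are toy stand-ins for a captured longwall face frame and a rendered virtual layer:

```python
# Per-pixel alpha compositing of a rendered virtual layer over a real
# captured frame. Frames are nested lists of grayscale values standing
# in for camera / render buffers; alpha holds per-pixel coverage of the
# virtual layer in [0, 1] (1 = fully virtual, 0 = fully real).

def composite(real, virtual, alpha):
    """Blend two same-sized frames with the 'over' operator."""
    return [
        [a * v + (1.0 - a) * r
         for r, v, a in zip(rrow, vrow, arow)]
        for rrow, vrow, arow in zip(real, virtual, alpha)
    ]

if __name__ == "__main__":
    real = [[100.0, 100.0], [100.0, 100.0]]     # captured frame
    virt = [[200.0, 200.0], [200.0, 200.0]]     # rendered virtual overlay
    alpha = [[1.0, 0.5], [0.0, 0.25]]           # per-pixel coverage
    print(composite(real, virt, alpha))
```

In a real AR pipeline the alpha mask would come from the renderer's coverage of the virtual user, and the blend would run per color channel on the video frame.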

