Immersive Distributed Design Through Real-Time Capture, Translation, and Rendering of Three-Dimensional Mesh Data

Author(s):  
Kevin Lesniak ◽  
Janis Terpenny ◽  
Conrad S. Tucker ◽  
Chimay Anumba ◽  
Sven G. Bilén

With design teams becoming more distributed, the sharing and interpreting of complex data about design concepts/prototypes and environments have become increasingly challenging. The size and quality of data that can be captured and shared directly affect the ability of receivers of that data to collaborate and provide meaningful feedback. To mitigate these challenges, the authors of this work propose the real-time translation of physical objects into an immersive virtual reality environment using readily available red, green, blue, and depth (RGB-D) sensing systems and standard networking connections. The emergence of commercial, off-the-shelf RGB-D sensing systems, such as the Microsoft Kinect, has enabled the rapid three-dimensional (3D) reconstruction of physical environments. The authors present a method that employs 3D mesh reconstruction algorithms and real-time rendering techniques to capture physical objects in the real world and represent their 3D reconstruction in an immersive virtual reality environment with which the user can then interact. Providing these features allows distributed design teams to share and interpret complex 3D data in a natural manner. The method reduces the processing requirements of the data capture system while enabling it to be portable. The method also provides an immersive environment in which designers can view and interpret the data remotely. A case study involving a commodity RGB-D sensor and multiple computers connected through standard TCP internet connections is presented to demonstrate the viability of the proposed method.
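The abstract above describes streaming reconstructed mesh data to remote collaborators over standard TCP connections. Below is a minimal sketch, in Python, of one way a frame of vertices and triangles could be length-prefixed and sent over a socket; the framing and function names are assumptions for illustration, not the authors' actual wire format.

```python
# Minimal sketch of streaming one reconstructed mesh frame over TCP.
# The length-prefixed framing below is an assumption, not the authors' protocol.
import socket
import struct

def send_mesh(sock, vertices, triangles):
    """Send one mesh frame: vertices as (x, y, z) floats, triangles as index triples."""
    vert_bytes = b"".join(struct.pack("<3f", *v) for v in vertices)
    tri_bytes = b"".join(struct.pack("<3I", *t) for t in triangles)
    # Length-prefixed header so the receiver knows how much to read.
    sock.sendall(struct.pack("<II", len(vert_bytes), len(tri_bytes)))
    sock.sendall(vert_bytes)
    sock.sendall(tri_bytes)

def recv_exact(sock, n):
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf += chunk
    return buf

def recv_mesh(sock):
    """Receive one mesh frame and return (vertices, triangles) lists."""
    vert_len, tri_len = struct.unpack("<II", recv_exact(sock, 8))
    vert_bytes = recv_exact(sock, vert_len)
    tri_bytes = recv_exact(sock, tri_len)
    vertices = [struct.unpack_from("<3f", vert_bytes, i) for i in range(0, vert_len, 12)]
    triangles = [struct.unpack_from("<3I", tri_bytes, i) for i in range(0, tri_len, 12)]
    return vertices, triangles
```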

Author(s):  
Kevin Lesniak ◽  
Conrad S. Tucker ◽  
Sven Bilen ◽  
Janis Terpenny ◽  
Chimay Anumba

Immersive virtual reality systems have the potential to transform the manner in which designers create prototypes and collaborate in teams. Using technologies such as the Oculus Rift or the HTC Vive, a designer can attain a sense of “presence” and “immersion” typically not experienced with traditional CAD-based platforms. However, one of the fundamental challenges of creating a high-quality immersive virtual reality experience is actually creating the immersive virtual reality environment itself. Typically, designers spend a considerable amount of time manually designing virtual models that replicate physical, real-world artifacts. While it is possible to import standard 3D models into these immersive virtual reality environments, such models are typically generic in nature and do not represent the designer’s intent. To mitigate these challenges, the authors of this work propose the real-time translation of physical objects into an immersive virtual reality environment using readily available RGB-D sensing systems and standard networking connections. The emergence of commercial, off-the-shelf RGB-D sensing systems, such as the Microsoft Kinect, has enabled the rapid 3D reconstruction of physical environments. The authors present a methodology that employs 3D mesh reconstruction algorithms and real-time rendering techniques to capture physical objects in the real world and represent their 3D reconstruction in an immersive virtual reality environment with which the user can then interact. A case study involving a commodity RGB-D sensor and multiple computers connected through standard TCP internet connections is presented to demonstrate the viability of the proposed methodology.
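As a companion to the RGB-D capture step described above, the following sketch shows the usual first stage of such a pipeline: back-projecting a depth image into a 3D point cloud with the pinhole camera model. The intrinsic parameters below are placeholders rather than calibrated Kinect values, and this is not the authors' implementation.

```python
# Minimal sketch: back-project a depth image into a 3D point cloud
# using the pinhole camera model. Intrinsics are placeholder values.
import numpy as np

def depth_to_points(depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """depth_m: (H, W) array of depth in meters; returns (N, 3) points, invalid pixels dropped."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep only pixels with a valid depth reading

# Example with synthetic data: a flat surface 1 m in front of the camera.
depth = np.ones((480, 640), dtype=np.float32)
cloud = depth_to_points(depth)
print(cloud.shape)  # (307200, 3)
```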


2012 ◽  
Vol 588-589 ◽  
pp. 1320-1323
Author(s):  
Li Xia Wang

This paper takes virtual reality technology as its core and establishes a housing virtual reality roaming display system. Following a detailed analysis of the system architecture, we focus on how to build the terrain database and the three-dimensional scenery database using MultiGen Creator, and on how to call OpenGVS through MSVC to carry out real-time scene control and realize complex special effects.


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5909
Author(s):  
Qingyu Jia ◽  
Liang Chang ◽  
Baohua Qiang ◽  
Shihao Zhang ◽  
Wu Xie ◽  
...  

Real-time 3D reconstruction is one of the popular research directions in computer vision and has become a core technology in virtual reality, industrial automation, and mobile robot path planning. Currently, there are three main problems in the real-time 3D reconstruction field. First, it is expensive: it requires multiple, varied sensors, which makes it less convenient. Second, reconstruction is slow, so the 3D model cannot be established accurately in real time. Third, the reconstruction error is large and cannot meet the accuracy requirements of the scene. For these reasons, we propose a real-time 3D reconstruction method based on monocular vision in this paper. First, a single RGB-D camera is used to collect visual information in real time, and the YOLACT++ network is used to identify and segment this information so as to extract the important parts. Second, we combine the three stages of depth recovery, depth optimization, and depth fusion to propose a three-dimensional position estimation method based on deep learning for joint coding of visual information; it reduces the depth error introduced during depth measurement, and accurate 3D point values of the segmented image can be obtained directly. Finally, we propose a method based on limited outlier adjustment of the cluster-center distance to optimize the three-dimensional point values obtained above; it improves the real-time reconstruction accuracy and yields the three-dimensional model of the object in real time. Experimental results show that this method needs only a single RGB-D camera, which is not only low cost and convenient to use, but also significantly improves the speed and accuracy of 3D reconstruction.
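The final optimization step above rejects outlying 3D points by their distance to the cluster center. The sketch below illustrates the general idea with a simple k-standard-deviation cutoff; the paper's "limited outlier adjustment" rule is more elaborate, so the threshold used here is an assumption.

```python
# Illustrative sketch of outlier rejection based on distance to the cluster center.
# The k-standard-deviation cutoff is an assumption for illustration only.
import numpy as np

def reject_outliers(points, k=2.0):
    """points: (N, 3) segmented 3D points; drop points far from the cluster center."""
    center = points.mean(axis=0)
    dist = np.linalg.norm(points - center, axis=1)
    keep = dist < dist.mean() + k * dist.std()
    return points[keep]

# Synthetic example: a tight object surface plus a few spurious depth readings.
pts = np.vstack([np.random.normal(0.0, 0.01, (500, 3)),
                 np.random.uniform(-1.0, 1.0, (10, 3))])
print(reject_outliers(pts).shape)
```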


Author(s):  
Shiguang Qiu ◽  
Xu Jing ◽  
Xiumin Fan ◽  
Qichang He ◽  
Dianliang Wu

When a real operator drives a virtual human in real time using motion capture to perform assembly and disassembly simulations of complex products, very high driving accuracy is needed to meet the quality requirements of interactivity and of the simulation results. In order to improve driving accuracy in a virtual reality environment, a method is put forward that analyzes the factors influencing the real-time driving accuracy of the virtual human and optimizes them. A systematic analysis of the factors affecting accuracy is given; they can be sorted into hardware factors and software factors. We find that the software factors are the main ones affecting accuracy, and that it is very hard to analyze their influence separately. Therefore, we treat the virtual human kinematic system as a fuzzy system and improve the real-time driving accuracy using an optimization method. Firstly, a real-time driving model is built on dynamic constraints and body-joint rotation information, and supports personalized human driving. Secondly, a function is established to describe the driving error during interactive operations in the virtual environment. Then, based on the principle of minimum cumulative error, we establish an optimization model with a specified optimization zone and constraints set according to standard Chinese adult dimensions. Next, the model is solved using a genetic algorithm to obtain the virtual human segment dimensions that best match the real operator. Lastly, the method is verified with an example of automotive engine virtual assembly. The result shows that the method can improve driving accuracy effectively.
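To make the optimization step above concrete, the sketch below fits virtual-human segment lengths by minimizing a cumulative error function under anthropometric bounds. SciPy's differential evolution stands in for the paper's genetic algorithm, and the error function is a placeholder rather than the paper's driving-error model.

```python
# Sketch: fit virtual-human segment lengths to an operator by minimizing a
# cumulative error under anthropometric bounds. differential_evolution is a
# stand-in for the paper's genetic algorithm; the error model is a placeholder.
import numpy as np
from scipy.optimize import differential_evolution

# Bounds (meters) loosely representing upper-arm, forearm, and hand lengths;
# in the paper these come from standard Chinese adult dimensions.
bounds = [(0.25, 0.40), (0.20, 0.35), (0.15, 0.22)]

captured_targets = np.random.rand(20, 3)  # stand-in for captured marker positions

def cumulative_error(segment_lengths):
    """Placeholder: sum of position errors between driven and captured poses."""
    reach = segment_lengths.sum()
    return np.sum(np.abs(np.linalg.norm(captured_targets, axis=1) - reach))

result = differential_evolution(cumulative_error, bounds, seed=0)
print("best segment lengths:", result.x)
```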


2018 ◽  
Vol 40 (15) ◽  
pp. 4091-4104 ◽  
Author(s):  
Dejing Ni ◽  
AYC Nee ◽  
SK Ong ◽  
Huijun Li ◽  
Chengcheng Zhu ◽  
...  

Remote manipulation of a robot without assistance in an unstructured environment is a challenging task for operators. In this paper, a novel methodology for haptic constraints in a point-cloud-augmented virtual reality environment is proposed to address this human operation limitation. The proposed method generates haptic constraints in real time for an unstructured environment, including regional constraints and guidance constraints. A modified implicit surface method is applied to generate regional constraints for the entire point cloud. Additionally, the isosurface derived from the implicit surface is used for real-time three-dimensional artificial force field estimation. For guidance constraint generation, a new incremental prediction and local artificial force field generation method based on a modified sigmoid model is proposed for an unstructured point cloud virtual reality environment. With the generated haptic constraints, the operator can control the robot to avoid obstacles and easily reach the target tasks. A system evaluation is conducted, and the results demonstrate the effectiveness of the proposed method. In addition, a 10-participant user study, in which users controlled the robot to reach three specific targets, shows that the system can enhance human operation efficiency and reduce time costs by at least 59% compared with operations without haptic constraints. The designed questionnaire also demonstrates that the proposed methodology can reduce the workload during human operations.
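The regional constraints described above amount to a repulsive force whose magnitude depends on the distance between the haptic tool and the captured point cloud. The sketch below illustrates such a force with a sigmoid falloff over the nearest-neighbor distance; the parameters and force scaling are assumptions, not the paper's modified sigmoid model.

```python
# Sketch of a regional haptic constraint: a repulsive force whose magnitude
# falls off with a sigmoid of the distance to the nearest point-cloud point.
# Sigmoid parameters and force scaling are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

cloud = np.random.rand(5000, 3)          # stand-in for the captured point cloud
tree = cKDTree(cloud)

def repulsive_force(tool_pos, f_max=5.0, d0=0.05, steepness=80.0):
    """Return a force vector pushing the haptic tool away from the nearest obstacle point."""
    dist, idx = tree.query(tool_pos)
    direction = tool_pos - cloud[idx]
    norm = np.linalg.norm(direction)
    if norm < 1e-9:
        return np.zeros(3)
    magnitude = f_max / (1.0 + np.exp(steepness * (dist - d0)))  # sigmoid falloff
    return magnitude * direction / norm

print(repulsive_force(np.array([0.5, 0.5, 0.5])))
```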


Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 1069
Author(s):  
Deyby Huamanchahua ◽  
Adriana Vargas-Martinez ◽  
Ricardo Ramirez-Mendoza

Exoskeletons are external structural mechanisms with joints and links that work in tandem with the user to increase, reinforce, or restore human performance. Virtual reality can be used to produce environments in which the intensity of practice and feedback on performance can be manipulated to provide tailored motor training. Will it be possible to combine both technologies and have them synchronized to reach better performance? This paper presents the kinematic analysis for the position and orientation synchronization between an n-DoF upper-limb exoskeleton pose and a projected object in an immersive virtual reality environment using a VR headset. To achieve this goal, the exoskeletal mechanism is analyzed using Euler angles and the Pieper technique to obtain the equations that lead to its orientation, forward, and inverse kinematic models. This paper extends the authors' previous work by using an early-stage upper-limb exoskeleton prototype for the synchronization process.
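To make the kinematic synchronization concrete, the sketch below computes the forward kinematics of a simplified two-joint upper-limb chain with homogeneous transforms. The joint axes and link lengths are illustrative assumptions; the paper derives the full n-DoF model with Euler angles and the Pieper technique.

```python
# Illustrative forward kinematics for a simplified upper-limb chain
# (shoulder and elbow rotations about z, fixed link lengths). Joint axes and
# link lengths are assumptions, not the exoskeleton's actual parameters.
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def trans_x(d):
    T = np.eye(4)
    T[0, 3] = d
    return T

def forward_kinematics(q_shoulder, q_elbow, l_upper=0.30, l_fore=0.25):
    """Return the 4x4 pose of the wrist frame expressed in the shoulder frame."""
    return rot_z(q_shoulder) @ trans_x(l_upper) @ rot_z(q_elbow) @ trans_x(l_fore)

pose = forward_kinematics(np.deg2rad(30), np.deg2rad(45))
print("wrist position:", pose[:3, 3])  # the quantity synchronized with the projected VR object
```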


2020 ◽  
Vol 22 (Supplement_3) ◽  
pp. iii461-iii461
Author(s):  
Andrea Carai ◽  
Angela Mastronuzzi ◽  
Giovanna Stefania Colafati ◽  
Paul Voicu ◽  
Nicola Onorini ◽  
...  

Tridimensional (3D) rendering of volumetric neuroimaging is increasingly being used to assist the surgical management of brain tumors. New technologies allowing immersive virtual reality (VR) visualization of the obtained models offer the opportunity to appreciate neuroanatomical details and the spatial relationship between the tumor and normal neuroanatomical structures to a level never seen before. We present our preliminary experience with the Surgical Theatre, a commercially available 3D VR system, in 60 consecutive neurosurgical oncology cases. 3D models were developed from volumetric CT scans and standard and advanced MR sequences. The system allows the loading of six different layers at the same time, with the possibility to modulate opacity and threshold in real time. The 3D VR system was used during preoperative planning, allowing a better definition of the surgical strategy. A tailored craniotomy and brain dissection can be simulated in advance and precisely performed in the OR by connecting the system to intraoperative neuronavigation. Smaller blood vessels are generally not included in the 3D rendering; however, real-time intraoperative threshold modulation of the 3D model assisted in their identification, improving surgical confidence and safety during the procedure. VR was also used offline, both before and after surgery, for case discussion within the neurosurgical team and during MDT discussion. Finally, 3D VR was used during informed consent, improving communication with families and young patients. 3D VR allows surgical strategies to be tailored to the individual patient, contributing to procedural safety and efficacy and to the overall improvement of neurosurgical oncology care.
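As an illustration of the threshold modulation described above, the sketch below regenerates an isosurface mesh from a volumetric scan at different intensity thresholds using marching cubes. A commercial system such as the one described would do this with GPU volume rendering in real time, so this is only a CPU-side approximation on synthetic data.

```python
# Rough sketch of threshold-driven surface regeneration from a volumetric scan.
# Marching cubes on synthetic data stands in for the GPU rendering a commercial
# VR system would actually use.
import numpy as np
from skimage import measure

volume = np.random.rand(64, 64, 64).astype(np.float32)  # stand-in for a CT/MR volume

def rebuild_surface(volume, threshold):
    """Extract the isosurface mesh at the chosen intensity threshold."""
    verts, faces, normals, values = measure.marching_cubes(volume, level=threshold)
    return verts, faces

# Lowering the threshold brings smaller, fainter structures into the mesh.
for level in (0.7, 0.5, 0.3):
    verts, faces = rebuild_surface(volume, level)
    print(level, len(verts), "vertices")
```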


2018 ◽  
Vol 10 (6) ◽  
pp. 168781401878363 ◽  
Author(s):  
Nien-Tsu Hu ◽  
Pu-Sheng Tsai ◽  
Ter-Feng Wu ◽  
Jen-Yang Chen ◽  
Lin Lee

This article explores the construction of a geometric virtual reality platform for environmental navigation. Non-panoramic photos and wearable electronics with Bluetooth wireless transmission are used to couple the user’s actions with the virtual reality environment in a first-person virtual reality platform. The 3ds Max animation software is used to create three-dimensional models of real buildings, which are combined with landscape models in Unity3d to create a virtual campus scene that matches the real landscape. The wearable device includes an ATMega168 microcontroller connected to a three-axis accelerometer, a gyroscope, and a Bluetooth transmitter to detect and transmit the user’s movements. Although the development of the mechatronics, software, and engineering involved in the three-dimensional animation is the main objective, we believe that the methods and techniques can be modified for various purposes. After the system architecture was created and the operation of the platform was verified, it is concluded that the wearable devices and the virtual reality scenes can be used together seamlessly.
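To illustrate how the wearable's accelerometer and gyroscope readings might be fused into an orientation that drives the virtual camera, the sketch below applies a basic complementary filter to a pitch angle. The sample values, update rate, and filter gain are assumptions, not details of the system described above.

```python
# Sketch: fuse gyroscope and accelerometer samples into a pitch angle with a
# complementary filter. Packet format, rate, and gain are illustrative assumptions.
import math

ALPHA = 0.98  # weight on the gyro-integrated angle

def update_pitch(pitch, gyro_rate_dps, accel_x_g, accel_z_g, dt):
    """Blend integrated gyro rate with the accelerometer's gravity-based pitch estimate."""
    accel_pitch = math.degrees(math.atan2(accel_x_g, accel_z_g))
    return ALPHA * (pitch + gyro_rate_dps * dt) + (1.0 - ALPHA) * accel_pitch

# Example update loop over fake samples arriving at 100 Hz.
pitch = 0.0
for gyro, ax, az in [(10.0, 0.05, 0.99), (12.0, 0.07, 0.99), (9.0, 0.06, 0.99)]:
    pitch = update_pitch(pitch, gyro, ax, az, dt=0.01)
print("estimated pitch (deg):", pitch)
```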

