A Semi-Automatic Modeling System for Quick Generation of Large Virtual Reality Models

Author(s):  
Gabriele Guidi ◽  
Laura Loredana Micoli

In the last few years, virtual reality applications have started to be introduced in the wide retail field, with immersive 3D models used as a tool for orienting strategic, logistic, and marketing choices. However, in these applications the digitalization of the entire Point Of Sale (POS) has not yet become a standard process, because of the complexity of generating thousands of texturized 3D models of individual products. This work presents an original integrated system for the semi-automatic 3D modeling of simple packages according to a predefined classification of shapes, and for their management in a database. This approach dramatically reduces the modeling time needed for each product and, therefore, for the whole shop, making the reverse modeling of commercial environments economically sustainable. A key advantage of the implemented process is that it can be used by operators who are not expert in 3D modeling and can be reapplied in several different fields.
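The core idea of shape-classified semi-automatic modeling can be sketched as follows: an operator picks a package class from the predefined taxonomy and supplies measured dimensions, and the system generates the geometry and a database-ready record. This is a hypothetical minimal sketch (the class name `BoxPackage`, the SKU field, and the record layout are illustrative, not the paper's actual schema):

```python
from dataclasses import dataclass

# Hypothetical shape taxonomy: each package class maps to a parametric
# mesh generator, so an operator only supplies measured dimensions.
@dataclass
class BoxPackage:
    width: float
    depth: float
    height: float

    def vertices(self):
        """Return the 8 corner vertices of the box, origin at one corner."""
        w, d, h = self.width, self.depth, self.height
        return [(x, y, z) for x in (0.0, w) for y in (0.0, d) for z in (0.0, h)]

def model_record(sku, shape):
    """Build a database-ready record pairing a product SKU with its mesh."""
    return {"sku": sku, "class": type(shape).__name__, "vertices": shape.vertices()}

record = model_record("EAN-4006381333931", BoxPackage(0.10, 0.05, 0.20))
print(record["class"], len(record["vertices"]))  # BoxPackage 8
```

In a full system each class would also carry texture mapping, but even this skeleton shows why per-product modeling time collapses: geometry is derived, not hand-built.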

2013 ◽  
Vol 455 ◽  
pp. 580-584
Author(s):  
Yan Fu Zhang ◽  
A Chun Wang

To improve design efficiency and reduce technical mistakes in enterprises whose products are customized by users, a method is introduced for developing an integrated CAD/CAPP system. The functions of a CAD/CAPP system developed with this method include automatic assembly of 3D models, automatic generation of 2D engineering drawings, automatic filling of process planning sheets, and checking and optimization of key part parameters. The method is based on adding special features to the 3D models. As an example, an integrated CAD/CAPP system for Metal Bellows Expansion Joints (MBEJ) was developed in Visual Basic 6.0 on the SolidWorks platform. The system designs MBEJ part parameters according to the GB/T 12777 standard, and these parameters are then used to generate MBEJ 3D models, engineering drawings, and process planning sheets automatically. All functions are integrated within the SolidWorks software environment, and the drawings and process planning sheets can be modified manually to satisfy special demands.
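The "parameters checked" step described above can be sketched as a validation of user-entered dimensions against tabulated limits. The real system checks against GB/T 12777; the parameter names and limit values below are purely illustrative placeholders, not taken from the standard:

```python
# Hypothetical limits table for bellows parameters (illustrative values only;
# the actual system validates against the GB/T 12777 standard tables).
LIMITS = {
    "nominal_diameter_mm": (25, 3000),
    "wall_thickness_mm": (0.5, 4.0),
    "convolution_count": (3, 20),
}

def check_parameters(params):
    """Return a list of (name, value, allowed_range) violations."""
    violations = []
    for name, value in params.items():
        lo, hi = LIMITS[name]
        if not (lo <= value <= hi):
            violations.append((name, value, (lo, hi)))
    return violations

# A wall thickness of 5.0 mm exceeds the illustrative 4.0 mm ceiling.
bad = check_parameters({"nominal_diameter_mm": 200,
                        "wall_thickness_mm": 5.0,
                        "convolution_count": 8})
print(bad)  # [('wall_thickness_mm', 5.0, (0.5, 4.0))]
```

Catching such violations before geometry generation is what prevents the "technique mistakes" the abstract mentions from propagating into drawings and planning sheets.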


2021 ◽  
Vol 11 (24) ◽  
pp. 11613
Author(s):  
Agapi Chrysanthakopoulou ◽  
Konstantinos Kalatzis ◽  
Konstantinos Moustakas

Virtual reality (VR) and 3D modeling technologies have become increasingly powerful tools in multiple fields, such as education, architecture, and cultural heritage. Museums are no longer merely places for housing and exhibiting collections and artworks; they use such technologies to offer new ways of communicating art and history to their visitors. In this paper, we present the initial results of a proposed workflow towards highlighting and interpreting a historic event through an immersive and interactive VR experience that engages multiple senses of the user. Using a treadmill for navigation and haptic gloves for interacting with the environment, combined with detailed 3D models, deepens the sense of immersion. The results of our study show that engaging multiple senses and visual manipulation in an immersive 3D environment can effectively enhance the perception of visual realism and evoke a stronger sense of presence, amplifying the educational and informative experience in a museum.


2011 ◽  
Vol 88-89 ◽  
pp. 559-563
Author(s):  
Yuan Luo ◽  
Ai Zhu Ren

OpenFlight is one of the standard formats in virtual reality applications, but it provides only crude 3D modeling tools. Most users therefore prefer to build 3D models in 3ds Max, yet models created in 3ds Max cannot be used directly in a virtual reality environment. This paper proposes a solution for format conversion between 3ds Max and OpenFlight, focusing in particular on combining triangle meshes in the 3DS format into whole faces. A program implementing the method demonstrated its practicality, and its application in a transportation safety education system showed good results.
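The triangle-combining step can be illustrated with its core geometric test: triangles that lie in the same plane (same unit normal and offset, within a tolerance) are candidates for merging into a single face. This is a minimal sketch of that grouping idea, not the paper's actual converter, which works on real 3DS geometry and adjacency:

```python
# Group triangles by a quantized plane signature so coplanar triangles
# can later be merged into single faces before export to OpenFlight.
def plane_key(tri, tol=1e-6):
    """Return a hashable plane signature (unit normal, offset) for a triangle."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
    ux, uy, uz = bx - ax, by - ay, bz - az          # edge A->B
    vx, vy, vz = cx - ax, cy - ay, cz - az          # edge A->C
    nx = uy * vz - uz * vy                          # cross product = normal
    ny = uz * vx - ux * vz
    nz = ux * vy - uy * vx
    norm = (nx * nx + ny * ny + nz * nz) ** 0.5
    nx, ny, nz = nx / norm, ny / norm, nz / norm
    d = nx * ax + ny * ay + nz * az                 # plane offset
    q = lambda t: round(t / tol) * tol              # quantize for hashing
    return (q(nx), q(ny), q(nz), q(d))

def group_coplanar(triangles):
    groups = {}
    for tri in triangles:
        groups.setdefault(plane_key(tri), []).append(tri)
    return list(groups.values())

# Two triangles forming a unit square in the z=0 plane fall into one group.
quad = [((0, 0, 0), (1, 0, 0), (1, 1, 0)), ((0, 0, 0), (1, 1, 0), (0, 1, 0))]
print(len(group_coplanar(quad)))  # 1
```

A real converter must additionally check edge adjacency and winding before merging, since coplanar but disconnected triangles should remain separate faces.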


Author(s):  
Alex J. Deakyne ◽  
Paul A. Iaizzo

Abstract 3D modeling of anatomical features and medical devices is being used at increasing rates within the medical field. These 3D models allow for a wide variety of uses, such as education or medical device optimization. The next steps in such utilization are to perform computational, simulated device deployments within unique human anatomies. Such computational deployments have yielded, and will continue to yield, new perspectives on how given devices fit within the varied anatomies of varied patient populations. While these simulated device deployments offer many benefits, they are often time consuming to both develop and perform. Here, we present new functionalities for performing computational cardiac device deployments within virtual reality (VR). This functionality offers increased control over where the device is deployed, reducing the time required to perform computational device deployments in unique anatomical models.


2020 ◽  
Vol 22 (Supplement_3) ◽  
pp. iii461-iii461
Author(s):  
Andrea Carai ◽  
Angela Mastronuzzi ◽  
Giovanna Stefania Colafati ◽  
Paul Voicu ◽  
Nicola Onorini ◽  
...  

Abstract Three-dimensional (3D) rendering of volumetric neuroimaging is increasingly being used to assist the surgical management of brain tumors. New technologies allowing immersive virtual reality (VR) visualization of the obtained models offer the opportunity to appreciate neuroanatomical details and the spatial relationship between the tumor and normal neuroanatomical structures to a level never seen before. We present our preliminary experience with the Surgical Theater, a commercially available 3D VR system, in 60 consecutive neurosurgical oncology cases. 3D models were developed from volumetric CT scans and standard and advanced MR sequences. The system allows loading of 6 different layers at the same time, with the possibility to modulate opacity and threshold in real time. The 3D VR system was used during preoperative planning, allowing a better definition of the surgical strategy. A tailored craniotomy and brain dissection can be simulated in advance and precisely performed in the OR by connecting the system to intraoperative neuronavigation. Smaller blood vessels are generally not included in the 3D rendering; however, real-time intraoperative threshold modulation of the 3D model assisted in their identification, improving surgical confidence and safety during the procedure. VR was also used offline, both before and after surgery, for case discussion within the neurosurgical team and during multidisciplinary team (MDT) discussions. Finally, 3D VR was used during informed consent, improving communication with families and young patients. 3D VR makes it possible to tailor surgical strategies to the single patient, contributing to procedural safety and efficacy and to the global improvement of neurosurgical oncology care.
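The threshold-modulation mechanism described above has a simple underlying principle: lowering an intensity threshold on a volumetric layer exposes voxels (such as smaller vessels) that a higher threshold hides. The sketch below demonstrates this on synthetic data with NumPy; it is an illustration of the principle only, not of the Surgical Theater system's rendering pipeline:

```python
import numpy as np

# Synthetic intensity volume standing in for a contrast-enhanced scan.
rng = np.random.default_rng(0)
volume = rng.uniform(0, 100, size=(64, 64, 64))

def visible_voxels(volume, threshold):
    """Count voxels that survive a simple intensity threshold."""
    return int((volume >= threshold).sum())

high = visible_voxels(volume, 80.0)   # strict threshold: fewer structures shown
low = visible_voxels(volume, 40.0)    # relaxed threshold: more structures shown
print(low > high)  # True
```

In the clinical workflow this same trade-off is adjusted interactively: the surgeon relaxes the threshold just enough to reveal small vessels without flooding the view with noise.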


i-com ◽  
2020 ◽  
Vol 19 (2) ◽  
pp. 67-85
Author(s):  
Matthias Weise ◽  
Raphael Zender ◽  
Ulrike Lucke

Abstract The selection and manipulation of objects in virtual reality presents application developers with a substantial challenge, as they need to ensure seamless interaction in three-dimensional space. Assessing the advantages and disadvantages of selection and manipulation techniques in specific scenarios, with regard to usability and user experience, is a mandatory task for finding suitable forms of interaction. In this article, we examine the most common issues arising in the interaction with objects in VR. We present a taxonomy allowing the classification of techniques along multiple dimensions, and we associate the issues with these dimensions. Furthermore, we analyze the results of a study comparing multiple selection techniques and present a tool that allows developers of VR applications to search for appropriate selection and manipulation techniques and to get scenario-dependent suggestions based on the data of the study.
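The lookup such a tool performs can be sketched as filtering techniques tagged along taxonomy dimensions. The technique names and dimension values below are illustrative placeholders (classic VR interaction techniques), not the article's actual taxonomy or dataset:

```python
# Hypothetical taxonomy entries: each technique is tagged along dimensions
# such as interaction range, achievable precision, and degrees of freedom.
TECHNIQUES = [
    {"name": "ray-casting", "range": "far", "precision": "low", "dof": 3},
    {"name": "virtual-hand", "range": "near", "precision": "high", "dof": 6},
    {"name": "go-go", "range": "far", "precision": "medium", "dof": 6},
]

def suggest(**criteria):
    """Return technique names whose taxonomy entries match all criteria."""
    return [t["name"] for t in TECHNIQUES
            if all(t.get(k) == v for k, v in criteria.items())]

print(suggest(range="far"))          # ['ray-casting', 'go-go']
print(suggest(range="near", dof=6))  # ['virtual-hand']
```

A production tool would additionally rank the matches using the usability and user-experience measurements from the study, rather than returning an unordered filter result.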


Fast track article for IS&T International Symposium on Electronic Imaging 2021: Imaging and Multimedia Analytics in a Web and Mobile World 2021 proceedings.


2021 ◽  
Author(s):  
Haowen Jiang ◽  
Sunitha Vimalesvaran ◽  
Jeremy King Wang ◽  
Kee Boon Lim ◽  
Sreenivasulu Reddy Mogali ◽  
...  

BACKGROUND: Virtual reality (VR) is a digital education modality that produces a virtual manifestation of the real world, and it has been increasingly used in medical education. As VR encompasses different modalities, tools, and applications, there is a need to explore how VR has been employed in medical education.

OBJECTIVE: The objective of this scoping review is to map existing research on the use of VR in undergraduate medical education and to identify areas of future research.

METHODS: We performed a search of 4 bibliographic databases in December 2020, with data extracted using a standardized data extraction form. The data were narratively synthesized and reported in line with the PRISMA-ScR guidelines.

RESULTS: Of 114 included studies, 69 (61%) reported the use of commercially available surgical VR simulators. Other VR modalities included 3D models (15 [14%]) and virtual worlds (20 [18%]), mainly used for anatomy education. Most of the VR modalities were semi-immersive (68 [60%]) and of high interactivity (79 [70%]). There is limited evidence on the use of more novel VR modalities such as mobile VR and virtual dissection tables (8 [7%]), on the use of VR for training non-surgical and non-psychomotor skills (20 [18%]), and on its use in group settings (16 [14%]). Only 3 studies reported the use of conceptual frameworks or theories in the design of VR.

CONCLUSIONS: Despite the extensive research available on VR in medical education, important gaps remain in the evidence. Future studies should explore the use of VR for the development of non-psychomotor skills and in areas other than surgery and anatomy.


Author(s):  
Shengjun Tang ◽  
Qing Zhu ◽  
Wu Chen ◽  
Walid Darwish ◽  
Bo Wu ◽  
...  

RGB-D sensors are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping of indoor environments. First, they only allow measurements within a limited distance (e.g., 3 m) and a limited field of view. Second, the error of the depth measurement increases with increasing distance from the sensor. In this paper, we propose an enhanced RGB-D mapping method for detailed 3D modeling of large indoor environments by combining RGB image-based modeling and depth-based modeling. The scale ambiguity problem during pose estimation with RGB image sequences is resolved by integrating the depth and visual information provided by the proposed system. A robust rigid-transformation recovery method is developed to register the RGB image-based and depth-based 3D models together. The proposed method is examined with two datasets collected in indoor environments; the experimental results demonstrate the feasibility and robustness of the method.
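The rigid-transformation recovery step can be sketched with the classic closed-form Kabsch/Umeyama solution: given matched 3D points from the two models, recover the rotation R and translation t that align them. This is a sketch of the standard algorithm, not the paper's robust variant (which must also handle outlier correspondences):

```python
import numpy as np

def rigid_transform(src, dst):
    """Find R, t minimizing ||R @ src_i + t - dst_i|| over matched point sets."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against returning a reflection instead of a rotation.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic check: rotate points 90 degrees about z, shift, then recover the motion.
rng = np.random.default_rng(1)
pts = rng.normal(size=(20, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
R, t = rigid_transform(pts, pts @ R_true.T + t_true)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

On noise-free correspondences the recovery is exact up to floating-point error; a robust pipeline would wrap this estimator in RANSAC-style outlier rejection before trusting the alignment.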


Author(s):  
P. Clini ◽  
L. Ruggeri ◽  
R. Angeloni ◽  
M. Sasso

Thanks to their playful and educational approach, Virtual Museum systems are very effective for the communication of cultural heritage. Among the latest technologies, immersive virtual reality is probably the most appealing and potentially effective for this purpose; nevertheless, due to poor user-system interaction, caused by the incomplete maturity of the technology for museum applications, immersive installations are still quite uncommon in museums.

This paper explores the possibilities offered by this technology and presents a workflow that, starting from digital documentation, makes it possible to interact with archaeological finds, or any other cultural heritage, inside different kinds of immersive virtual reality spaces.

Two case studies are presented: the National Archaeological Museum of Marche in Ancona and the 3D reconstruction of the Roman Forum of Fanum Fortunae. The two approaches differ not only conceptually but also in content: while the Archaeological Museum is represented in the application simply using spherical panoramas to give the perception of the third dimension, the Roman Forum is a 3D model that allows visitors to move in the virtual space as in the real one.

In both cases, the acquisition phase of the artefacts is central: artefacts are digitized with the photogrammetric Structure from Motion technique and then integrated into the immersive virtual space using a PC with an HTC Vive system, which allows the user to interact with the 3D models, turning the manipulation of objects into a fun and exciting experience.

The challenge, taking advantage of the latest opportunities made available by photogrammetry and ICT, is to enrich visitors' experience in the real museum by making possible the interaction with perishable, damaged, or lost objects and public access to inaccessible or no longer existing places, thereby promoting the preservation of fragile sites.

