Augmented Reality System Calibration for Assembly Support With the Microsoft HoloLens

Author(s):  
Rafael Radkowski ◽  
Sravya Kanunganti

The Microsoft HoloLens is the latest augmented reality (AR)-capable head-mounted display (HMD) with the potential to leverage AR applications in manufacturing and design. Its optical system and embedded tracking capability are superior to many precursor HMDs and mitigate several known obstacles such as size, weight, visual quality, and tracking latency. The last of these, imperceptible tracking latency, is an especially convincing factor for people outside the AR community. The onboard tracking allows the HoloLens to populate the physical world with virtual objects and to maintain their positions while the user moves. Although these capabilities are already convincing, the majority of applications in assembly and design require a precise alignment of virtual objects with physical parts. In particular, when a user moves components during an application session, the virtual information needs to move along with the physical part to remain semantically correct. Object tracking and automatic registration are required to establish this functionality. This paper introduces an AR system that integrates an external range-camera-based tracking system with the HoloLens. It incorporates two calibration procedures, which are required to register virtual 3D objects with physical components. This AR system can be used for different visualization tasks along the product life-cycle, ranging from training to decision making, although our major application area is currently manual assembly.
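The abstract does not detail the two calibration procedures. As an illustrative sketch only (not the authors' implementation), the core step of registering an external tracker's coordinate frame with the HMD frame can be framed as point-based rigid alignment over paired calibration points, solved in closed form with the Kabsch/Umeyama method; the function and variable names below are hypothetical.

```python
# Hedged sketch: closed-form rigid calibration between an external tracker's
# frame and an HMD frame from N paired 3D calibration points (Kabsch method).
import numpy as np

def rigid_calibration(tracker_pts, hmd_pts):
    """Estimate R, t such that hmd ~= R @ tracker + t (both arrays are Nx3)."""
    ct = tracker_pts.mean(axis=0)                 # centroid, tracker frame
    ch = hmd_pts.mean(axis=0)                     # centroid, HMD frame
    H = (tracker_pts - ct).T @ (hmd_pts - ch)     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = ch - R @ ct
    return R, t
```

In a noiseless case this recovers the exact transform; with measured points it gives the least-squares optimal rigid fit, which is the standard building block for this kind of tracker-to-display registration.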

2019 ◽  
Vol 9 (14) ◽  
pp. 2933 ◽  
Author(s):  
Ju Young Oh ◽  
Ji Hyung Park ◽  
Jung-Min Park

This paper proposes an interaction method to conveniently manipulate a virtual object by combining touch interaction and head movements for a head-mounted display (HMD) that provides mobile augmented reality (AR). A user can conveniently manipulate a virtual object with touch interaction, recognized from an inertial measurement unit (IMU) attached to the index finger's nail, and head movements, tracked by the IMU embedded in the HMD. We design two interactions that combine touch and head movements to manipulate a virtual object on a mobile HMD. Each designed interaction method manipulates virtual objects by controlling ray casting and adjusting widgets. To evaluate the usability of the designed interaction methods, a user evaluation is performed in comparison with the hand interaction of the HoloLens. As a result, the designed interaction methods receive positive feedback, indicating that virtual objects can be manipulated easily in a mobile AR environment.
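The abstract describes controlling ray casting through head movements. A minimal, hypothetical sketch of head-directed ray selection, assuming a simple yaw/pitch head model and sphere-proxied virtual objects (not the authors' actual system):

```python
# Hedged sketch: head-directed ray casting for selecting a virtual object.
# Yaw/pitch come from the HMD's IMU; objects are approximated by spheres.
import numpy as np

def head_ray(yaw, pitch):
    """Unit forward vector from head yaw (about y) and pitch (about x), radians."""
    return np.array([np.cos(pitch) * np.sin(yaw),
                     np.sin(pitch),
                     np.cos(pitch) * np.cos(yaw)])

def hits_sphere(origin, direction, center, radius):
    """Ray-sphere test via the closest approach of the ray to the sphere center."""
    oc = center - origin
    t = max(oc @ direction, 0.0)           # ray parameter of the closest point
    closest = origin + t * direction
    return np.linalg.norm(center - closest) <= radius
```

A touch event recognized from the finger-worn IMU would then confirm selection of the first object the head ray hits.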


2015 ◽  
Vol 75 (4) ◽  
Author(s):  
Ajune Wanis Ismail ◽  
Mark Billinghurst ◽  
Mohd Shahrizal Sunar

In this paper, we describe a new tracking approach for object handling in Augmented Reality (AR). Our approach improves the standard vision-based tracking system during the marker extraction and detection stages. It transforms a unique tracking pattern into a set of vertices that can support interactions such as translate, rotate, and copy. This is based on a robust real-time computer vision algorithm that tracks a paddle that a person uses for input. A paddle pose pattern is constructed in a one-time calibration process, and through vertex-based calculation of the camera pose relative to the paddle we can show 3D graphics on top of it. This allows the user to look at virtual objects from different viewing angles in the AR interface and perform 3D object manipulation. This approach was implemented using marker-based tracking to improve the tracking in terms of accuracy and robustness when manipulating 3D objects in real time. We demonstrate our improved tracking system with a sample Tangible AR application, and describe how the system could be improved in the future.
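Once the camera pose relative to the paddle is known, virtual geometry can be overlaid on it by transforming marker-frame points into the camera frame and projecting them with the pinhole model. A hedged sketch of that final overlay step (the function and parameter names are illustrative, not from the paper):

```python
# Hedged sketch: project 3D points defined in a marker/paddle frame into
# pixel coordinates, given the marker-to-camera pose (R, t) and intrinsics K.
import numpy as np

def project_marker_points(points_marker, R, t, K):
    """points_marker: Nx3 in marker frame; returns Nx2 pixel coordinates."""
    cam = points_marker @ R.T + t          # marker frame -> camera frame
    pix = cam @ K.T                        # apply pinhole intrinsics
    return pix[:, :2] / pix[:, 2:3]        # perspective divide
```

The pose (R, t) itself would come from the paper's vertex-based pose calculation (or, in common toolkits, a PnP solver over detected marker corners).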


2012 ◽  
Vol 24 (05) ◽  
pp. 435-445
Author(s):  
Ren-Guey Lee ◽  
Sheng-Chung Tien ◽  
Chun-Chang Chen ◽  
Yu-Ying Chen

In this paper, rehabilitation tools are proposed and implemented to assist patients with stroke and body dysfunction through auxiliary physical activity. By integrating the entertainment of games with the needs of rehabilitation, and using the Motor Assessment Scale (MAS) as the building blocks, we propose a game system developed for the assessment of stroke rehabilitation using augmented reality (AR) technology. By applying AR markers and drawing on motion parameters from Wii Remotes, various assessment games have been implemented, and vivid scenes can be presented to users via a head-mounted display through a seamless combination of the real environment and virtual objects. This game system takes various assessment scales into consideration, and each scale is specifically designed and individually integrated to support the assessment of the corresponding motor function. According to the experimental results, the accuracy rate of users successfully following the game steps is 91.2%, and the accuracy rate of the system in assessing the MAS categories is as high as 94.6%, which confirms the feasibility of our proposed and implemented rehabilitation game system.


2018 ◽  
Vol 62 (2) ◽  
pp. 25-37
Author(s):  
Márton Szemenyei ◽  
Ferenc Vajda

Object recognition in 3D scenes is one of the fundamental tasks in computer vision. It is used frequently in robotics and augmented reality applications [1]. In our work we intend to apply 3D shape recognition to create a Tangible Augmented Reality system that is able to pair virtual and real objects in natural indoor scenes. In this paper we present a method for arranging virtual objects in a real-world scene based on primitive shape graphs. For our scheme, we propose a graph node embedding algorithm for graphs with vectorial nodes and edges, and genetic operators designed to improve the quality of the global setup of virtual objects. We show that our methods improve the quality of the arrangement significantly.
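The genetic operators are not specified in the abstract. As a purely hypothetical illustration, the global setup can be encoded as an assignment of virtual objects to candidate anchor shapes in the scene, over which standard crossover and mutation operators act:

```python
# Hedged sketch: an arrangement is a list assigning each virtual object an
# anchor index; classic one-point crossover and uniform mutation operate on it.
import random

def crossover(parent_a, parent_b):
    """One-point crossover over two object-to-anchor assignments."""
    cut = random.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:]

def mutate(assignment, n_anchors, rate=0.1):
    """Reassign each object to a random anchor with probability `rate`."""
    return [random.randrange(n_anchors) if random.random() < rate else g
            for g in assignment]
```

In the paper's setting, a fitness function over such assignments would score how well each virtual object's embedded shape descriptor matches its assigned real anchor; the sketch above only shows the operator shapes.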


2021 ◽  
Vol 5 (11) ◽  
pp. 66
Author(s):  
Michael Chan ◽  
Alvaro Uribe-Quevedo ◽  
Bill Kapralos ◽  
Michael Jenkin ◽  
Norman Jaimes ◽  
...  

Direct ophthalmoscopy (DO) is a medical procedure whereby a health professional, using a direct ophthalmoscope, examines the eye fundus. DO skills are in decline due to the use of interactive diagnostic equipment and insufficient practice with the direct ophthalmoscope. To address the loss of DO skills, physical and computer-based simulators have been developed to offer additional training. Among the computer-based simulations, virtual and augmented reality (VR and AR, respectively) enable immersive and interactive simulated scenarios with eye fundus conditions that are difficult to replicate in the classroom. VR and AR require 3D user interfaces (3DUIs) to perform the virtual eye examination. Using a combination of a between-subjects and within-subjects paradigm with two groups of five participants, this paper builds upon a previous preliminary usability study that compared the use of the HTC Vive controller, the Valve Index controller, and the Microsoft HoloLens 1 hand gesticulation interaction methods when performing a virtual direct ophthalmoscopy eye examination. The work described in this paper extends our prior work by considering interactions with the Oculus Quest controller and the Oculus Quest hand-tracking system to perform a virtual direct ophthalmoscopy eye examination, allowing us to compare these methods with our prior interaction techniques. Ultimately, this helps us develop a greater understanding of usability effects for virtual DO examinations and virtual reality in general.
Although the number of participants was limited given the COVID-19 restrictions (n = 5 for Stage 1, which included the HTC Vive controller, the Valve Index controller, and the Microsoft HoloLens hand gesticulations, and n = 13 for Stage 2, which included the Oculus Quest controller and Oculus Quest hand tracking), our initial results comparing VR and AR 3D user interactions for direct ophthalmoscopy are consistent with our previous preliminary study: the physical controllers resulted in higher usability scores, while the Oculus Quest's more accurate hand motion capture resulted in higher usability compared to the Microsoft HoloLens hand gesticulation.


2017 ◽  
Vol 22 (1) ◽  
pp. 39-53 ◽  
Author(s):  
Fabrizio Cutolo ◽  
Antonio Meola ◽  
Marina Carbone ◽  
Sara Sinceri ◽  
Federico Cagnazzo ◽  
...  

2015 ◽  
Vol 75 (2) ◽  
Author(s):  
Hasan Alhajhamad ◽  
Mohd Shahrizal Sunar

This paper describes the main problems in creating realistic Augmented Reality scenes. In order to view the real world with additional computer-generated information in a seamless and realistic integration, several research challenges can be identified, some related to camera tracking, system design, user interaction, and rendering. Each of these aspects has been thoroughly explored through several methods and techniques. This study is an exploration of an Augmented Reality rendering technique that focuses on increasing the realism of the AR scene. Thus, in order to render the AR scene more realistically, there are several main issues: light source detection, well-designed virtual objects that truly reflect the real environment, and the integration of real-time, accurate soft shadows.


Author(s):  
Minghui Sun ◽  
Xinyu Wu ◽  
Zhihua Fan ◽  
Liyan Dong

Human-computer interaction (HCI) has developed rapidly in recent years, and more and more researchers are interested in applying HCI techniques to education. Compared with traditional approaches in the real world, gesture recognition is considered a reasonable alternative since it is vivid and flexible. However, most educational equipment nowadays provides augmented reality functionality without any interaction. This paper implements a prototype that is not only based on an augmented reality system but also specifically considers interactive design. Accessibility is achieved through mobile devices and the dynamic switching of gesture recognition. With this interactive method, children are able to interact with virtual objects easily and naturally. Consequently, children can gain a profound and deep understanding of what they learn, and the quality of education will be improved.


2005 ◽  
Vol 14 (5) ◽  
pp. 528-549 ◽  
Author(s):  
Jannick P. Rolland ◽  
Frank Biocca ◽  
Felix Hamza-Lup ◽  
Yanggang Ha ◽  
Ricardo Martins

Distributed systems technologies supporting 3D visualization and social collaboration will increase in frequency and type over time. An emerging type of head-mounted display, referred to as the head-mounted projection display (HMPD), was recently developed; it requires only ultralight optics (i.e., less than 8 g per eye) and enables immersive multiuser, mobile augmented reality 3D visualization, as well as remote 3D collaboration. In this paper a review of the development of lightweight HMPD technology is provided, together with insight into what makes this technology timely and so unique. Two novel emerging HMPD-based technologies are then described: a teleportal HMPD (T-HMPD) enabling face-to-face communication and visualization of shared 3D virtual objects, and a mobile HMPD (M-HMPD) designed for outdoor wearable visualization and communication. Finally, the use of HMPD in medical visualization and training, as well as in infospaces, two applications developed in the ODA and MIND labs respectively, is discussed.

