3D optical see-through head-mounted display based augmented reality system and its application

Author(s):  
Zhenliang Zhang ◽  
Dongdong Weng ◽  
Yue Liu ◽  
Li Xiang
2017 ◽  
Vol 22 (1) ◽  
pp. 39-53 ◽  
Author(s):  
Fabrizio Cutolo ◽  
Antonio Meola ◽  
Marina Carbone ◽  
Sara Sinceri ◽  
Federico Cagnazzo ◽  
...  

Author(s):  
Rafael Radkowski ◽  
Sravya Kanunganti

The Microsoft HoloLens is the latest augmented reality (AR)-capable head-mounted display (HMD), with the potential to leverage AR applications in manufacturing and design. Its optical system and embedded tracking capability are superior to those of many precursor HMDs and mitigate several known obstacles, such as bulk, weight, poor visual quality, and tracking latency. The last of these, imperceptible tracking latency, is an especially convincing factor for people outside the AR community. The onboard tracking allows the HoloLens to populate the physical world with virtual objects and to maintain their positions while the user moves. Although these capabilities are already convincing, the majority of applications in assembly and design require precise alignment of virtual objects with physical parts. In particular, when a user moves components during a task, the virtual information must move along with the physical parts to remain semantically correct. Object tracking and automatic registration are required to establish this functionality. This paper introduces an AR system that integrates an external range-camera-based tracking system with the HoloLens. It incorporates two calibration procedures, which are required to register virtual 3D objects with physical components. This AR system can be used for different visualization tasks along the product life-cycle, spanning the range from training to decision making, although our major application area is currently manual assembly.
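The abstract does not detail the two calibration procedures, but registering an external range camera's coordinate frame to the HoloLens frame is commonly done by collecting paired 3D points in both frames and solving for the rigid transform between them (the Kabsch/Horn method). A minimal sketch under that assumption; the function name and paired-point input are illustrative, not the authors' actual procedure:

```python
import numpy as np

def register_point_sets(tracker_pts, hololens_pts):
    """Estimate the rigid transform (R, t) that maps points measured in the
    external tracker's frame into the HoloLens frame, given corresponding
    point pairs, via the SVD-based Kabsch/Horn least-squares solution."""
    P = np.asarray(tracker_pts, dtype=float)
    Q = np.asarray(hololens_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)    # centroids of each point set
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp                            # translation after rotation
    return R, t
```

With such a transform in hand, object poses reported by the range camera can be re-expressed in the HoloLens frame so that virtual overlays follow the tracked physical parts.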


Author(s):  
Eugene Hayden ◽  
Kang Wang ◽  
Chengjie Wu ◽  
Shi Cao

This study explores the design, implementation, and evaluation of an augmented reality (AR) prototype that assists novice operators in performing procedural tasks in simulator environments. The prototype uses an optical see-through head-mounted display (OST HMD) in conjunction with a simulator display to add sequences of interactive visual and attention-guiding cues to the operator's field of view. We used a 2x2 within-subject design to test two conditions, with and without AR cues; each condition included a voice assistant and two procedural tasks (preflight and landing). The experiment involved twenty-six novice operators. The results demonstrated that augmented reality improved situation awareness and accuracy; however, it yielded longer task completion times, creating a speed-accuracy trade-off in favour of accuracy. No significant effect on mental workload was found. The results suggest that augmented reality systems have the potential to be used by a wider audience of operators.


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3061
Author(s):  
Alice Lo Valvo ◽  
Daniele Croce ◽  
Domenico Garlisi ◽  
Fabrizio Giuliano ◽  
Laura Giarré ◽  
...  

In recent years, we have witnessed impressive advances in augmented reality systems and computer vision algorithms based on image processing and artificial intelligence. Thanks to these technologies, mainstream smartphones can estimate their own motion in 3D space with high accuracy. In this paper, we exploit such technologies to support the autonomous mobility of people with visual disabilities, identifying pre-defined virtual paths and providing context information, thereby reducing the distance between the digital and real worlds. In particular, we present ARIANNA+, an extension of ARIANNA, a system explicitly designed for the indoor and outdoor localization and navigation of visually impaired people. While ARIANNA assumes that landmarks, such as QR codes, and physical paths (composed of colored tapes, painted lines, or tactile pavings) are deployed in the environment and recognized by the camera of a common smartphone, ARIANNA+ eliminates the need for any physical support thanks to the ARKit library, which we exploit to build a completely virtual path. Moreover, ARIANNA+ adds the possibility for users to have enhanced interactions with the surrounding environment through convolutional neural networks (CNNs) trained to recognize objects or buildings, enabling access to the contents associated with them. By using a common smartphone as a mediation instrument with the environment, ARIANNA+ leverages augmented reality and machine learning to enhance physical accessibility. The proposed system allows visually impaired people to navigate easily in indoor and outdoor scenarios simply by loading a previously recorded virtual path; it then provides automatic guidance along the route through haptic, speech, and sound feedback.
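The guidance step described above, following a previously recorded virtual path from the device pose reported by the AR tracking layer, can be sketched in simplified 2D form. The function, the look-ahead parameter, and the flat path representation are illustrative assumptions, not the ARIANNA+ implementation:

```python
import math

def guidance(position, heading, path, lookahead=1.0):
    """Given the phone's 2D position (x, y) and heading (radians), both
    assumed to come from the AR tracking layer, find the closest vertex of
    the recorded virtual path and return the signed angle (radians, wrapped
    to (-pi, pi]) to rotate the heading toward a look-ahead target point.
    This angle could then drive haptic, speech, or sound feedback."""
    # Index of the path vertex closest to the user.
    i = min(range(len(path)),
            key=lambda k: (path[k][0] - position[0]) ** 2
                        + (path[k][1] - position[1]) ** 2)
    # Walk roughly `lookahead` metres further along the path to pick a target.
    j, dist = i, 0.0
    while j + 1 < len(path) and dist < lookahead:
        dist += math.dist(path[j], path[j + 1])
        j += 1
    tx, ty = path[j]
    bearing = math.atan2(ty - position[1], tx - position[0])
    # Wrap the turn angle into (-pi, pi].
    return (bearing - heading + math.pi) % (2 * math.pi) - math.pi
```

A turn angle near zero would map to "keep going" feedback, while larger magnitudes would trigger progressively stronger corrective cues.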


2021 ◽  
Author(s):  
Nina Rohrbach ◽  
Joachim Hermsdörfer ◽  
Lisa-Marie Huber ◽  
Annika Thierfelder ◽  
Gavin Buckingham

Augmented reality, whereby computer-generated images are overlaid onto the physical environment, is becoming a significant part of the world of education and training. Little is known, however, about how these external images are treated by the sensorimotor system of the user: are they fully integrated into the external environmental cues, or largely ignored by low-level perceptual and motor processes? Here, we examined this question in the context of the size-weight illusion (SWI). Thirty-two participants repeatedly lifted, in alternation, two cubes of unequal volume but equal mass and reported their heaviness. Half of the participants saw semi-transparent, equally sized holographic cubes superimposed onto the physical cubes through a head-mounted display. Fingertip force rates were measured prior to lift-off to determine how the holograms influenced sensorimotor prediction, while verbal reports of heaviness after each lift indicated how the holographic size cues influenced the SWI. As expected, participants who lifted without augmented visual cues lifted the large object with a higher rate of force than the small object on early lifts and experienced a robust SWI across all trials. In contrast, participants who lifted the (apparently equal-sized) augmented cubes used similar force rates for each object. Furthermore, they experienced no SWI during their first lifts of the objects, with an SWI developing over repeated trials. These results indicate that holographic cues initially dominate physical cues and cognitive knowledge, but are dismissed when they conflict with cues from other senses.


2013 ◽  
Vol 60 (9) ◽  
pp. 2636-2644 ◽  
Author(s):  
Hussam Al-Deen Ashab ◽  
Victoria A. Lessoway ◽  
Siavash Khallaghi ◽  
Alexis Cheng ◽  
Robert Rohling ◽  
...  

2009 ◽  
Vol 5 (4) ◽  
pp. 415-422 ◽  
Author(s):  
Ramesh Thoranaghatte ◽  
Jaime Garcia ◽  
Marco Caversaccio ◽  
Daniel Widmer ◽  
Miguel A. Gonzalez Ballester ◽  
...  
