Table Top Augmented Reality System for Conceptual Design and Prototyping

Author(s):  
Michael VanWaardhuizen ◽  
James Oliver ◽  
Jesus Gimeno

The AugmenTable is a desktop augmented reality workstation intended for conceptual design and prototyping. It combines a thin-form-factor display, inexpensive web cameras, and a PC into a unique system that enables natural interaction with virtual and physical parts. This initial implementation of the AugmenTable builds on the popular open-source augmented reality software platform ARToolkit to enable manual interaction with physical parts, as well as interaction with virtual parts via a physically marked pointer or a color-marked fingertip. This paper describes related previous work, the methods used to create the AugmenTable, the novel interactions it affords users, and a number of avenues for advancing the system in the future.
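The color-marked fingertip interaction rests on a standard color-segmentation pipeline. A minimal sketch of that idea using OpenCV (not the authors' published code; the HSV range for the marker color is an assumption):

```python
# Minimal sketch (not the AugmenTable source): locating a color-marked
# fingertip in a webcam frame via HSV thresholding with OpenCV.
import cv2
import numpy as np

# Assumed HSV range for the fingertip marker (e.g., a green sticker);
# real values depend on the marker color and the lighting.
LOWER = np.array([45, 80, 80])
UPPER = np.array([75, 255, 255])

def find_fingertip(frame_bgr):
    """Return the (x, y) pixel of the largest marker blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    # Remove speckle noise before looking for the marker blob.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(find_fingertip(frame))
cap.release()
```

The returned pixel coordinate would then be mapped into the workstation's 3D scene to drive interaction with virtual parts.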

10.29007/lv22 ◽  
2018 ◽  
Author(s):  
Dzmitry Tsetserukou ◽  
Mikhail Matrosov ◽  
Olga Volkova ◽  
Evgeny Tsykunov

We propose a new paradigm of human-drone interaction based on images projected on the ground and foot gestures. The proposed technology enables a new type of tangible interaction with a drone, e.g., the DroneBall game for augmented sport and FlyMap for telling a drone where to fly. We developed the LightAir system, which enables information sharing, GPS navigation, and controlling and playing with drones in a tangible way. In contrast to the hand gestures common on smartphones, we propose foot gestures combined with a projected image for intelligent human-drone interaction. Such gestures make communication with the drone intuitive, natural, and safe. To our knowledge, this is the world's first system that provides bilateral tangible human-drone interaction. The paper also introduces the novel concept of Drone Haptics.
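Relating a foot position detected in the drone camera's view to a location on the projected ground image is a classic planar-homography problem. A minimal sketch with OpenCV, assuming four known correspondences between camera pixels and the corners of the projection (the point values below are illustrative, not from the paper):

```python
# Minimal sketch (assumed approach, not the LightAir source): map a
# foot detection from camera pixels onto the projected ground image
# via a planar homography.
import cv2
import numpy as np

# Four corners of the projected image as seen by the camera (pixels)...
img_pts = np.float32([[210, 120], [430, 115], [455, 330], [190, 340]])
# ...and the same corners in ground-image coordinates (e.g., meters).
ground_pts = np.float32([[0, 0], [2, 0], [2, 2], [0, 2]])

H = cv2.getPerspectiveTransform(img_pts, ground_pts)

def foot_to_ground(foot_px):
    """Project an (x, y) foot detection from camera pixels to ground coords."""
    p = np.float32([[foot_px]])        # shape (1, 1, 2) as OpenCV expects
    return cv2.perspectiveTransform(p, H)[0, 0]

print(foot_to_ground((320, 230)))      # e.g., where a foot tap landed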


Author(s):  
Jonathan Shapey ◽  
Thomas Dowrick ◽  
Rémi Delaunay ◽  
Eleanor C. Mackle ◽  
Stephen Thompson ◽  
...  

Abstract
Purpose: Image-guided surgery (IGS) is an integral part of modern neuro-oncology surgery. Navigated ultrasound provides the surgeon with reconstructed views of ultrasound data, but no commercial system presently permits its integration with other essential non-imaging-based intraoperative monitoring modalities such as intraoperative neuromonitoring. Such a system would be particularly useful in skull base neurosurgery.
Methods: We established functional and technical requirements of an integrated multi-modality IGS system tailored for skull base surgery with the ability to incorporate: (1) preoperative MRI data and associated 3D volume reconstructions, (2) real-time intraoperative neurophysiological data and (3) live reconstructed 3D ultrasound. We created an open-source software platform to integrate with readily available commercial hardware. We tested the accuracy of the system's ultrasound navigation and reconstruction using a polyvinyl alcohol phantom model and simulated the use of the complete navigation system in a clinical operating room using a patient-specific phantom model.
Results: Experimental validation of the system's navigated ultrasound component demonstrated accuracy of <4.5 mm and a frame rate of 25 frames per second. Clinical simulation confirmed that system assembly was straightforward, could be achieved in a clinically acceptable time of <15 min and performed with a clinically acceptable level of accuracy.
Conclusion: We present an integrated open-source research platform for multi-modality IGS. The present prototype system was tailored for neurosurgery and met all minimum design requirements focused on skull base surgery. Future work aims to optimise the system further by addressing the remaining target requirements.
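Phantom-based accuracy figures like the <4.5 mm above are typically reported as a mean target registration error (TRE) over a set of fiducials. A minimal sketch of that computation (illustrative numbers, not the study's measurements):

```python
# Minimal sketch (illustrative data, not the study's measurements):
# mean target registration error between navigated and ground-truth
# fiducial positions on a phantom, in millimetres.
import numpy as np

# Ground-truth fiducial positions (mm) and where navigation placed them.
truth = np.array([[10.0, 22.0, 5.0], [40.0, 18.0, 7.5], [25.0, 45.0, 6.0]])
navigated = np.array([[11.2, 21.4, 5.8], [39.1, 19.0, 6.9], [26.3, 44.2, 6.4]])

errors = np.linalg.norm(navigated - truth, axis=1)   # per-fiducial error
print(f"per-fiducial TRE (mm): {np.round(errors, 2)}")
print(f"mean TRE: {errors.mean():.2f} mm, max: {errors.max():.2f} mm")
```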


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3061
Author(s):  
Alice Lo Valvo ◽  
Daniele Croce ◽  
Domenico Garlisi ◽  
Fabrizio Giuliano ◽  
Laura Giarré ◽  
...  

In recent years, we have witnessed impressive advances in augmented reality systems and computer vision algorithms based on image processing and artificial intelligence. Thanks to these technologies, mainstream smartphones are able to estimate their own motion in 3D space with high accuracy. In this paper, we exploit such technologies to support the autonomous mobility of people with visual disabilities, identifying pre-defined virtual paths and providing context information, thereby reducing the distance between the digital and real worlds. In particular, we present ARIANNA+, an extension of ARIANNA, a system explicitly designed for indoor and outdoor localization and navigation for visually impaired people. While ARIANNA assumes that landmarks, such as QR codes, and physical paths (composed of colored tape, painted lines, or tactile paving) are deployed in the environment and recognized by the camera of a common smartphone, ARIANNA+ eliminates the need for any physical support thanks to the ARKit library, which we exploit to build a completely virtual path. Moreover, ARIANNA+ lets users interact more richly with the surrounding environment through convolutional neural networks (CNNs) trained to recognize objects or buildings, enabling access to content associated with them. By using a common smartphone as a mediation instrument with the environment, ARIANNA+ leverages augmented reality and machine learning to enhance physical accessibility. The proposed system allows visually impaired people to easily navigate indoor and outdoor scenarios simply by loading a previously recorded virtual path; it provides automatic guidance along the route through haptic, speech, and sound feedback.
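Guidance along a recorded virtual path reduces to measuring how far the device's tracked position has drifted from the path polyline and converting that deviation into feedback. A minimal sketch of the geometric core (a hypothetical helper, not the ARIANNA+ code; the 0.5 m threshold is an assumption):

```python
# Minimal sketch (hypothetical, not the ARIANNA+ implementation):
# distance from the tracked device position to a recorded path
# polyline, used to decide when to issue a corrective cue.
import numpy as np

def deviation_from_path(pos, path):
    """Smallest distance (m) from 2D point `pos` to polyline `path`."""
    pos, path = np.asarray(pos, float), np.asarray(path, float)
    best = np.inf
    for a, b in zip(path[:-1], path[1:]):
        ab = b - a
        # Clamp the projection so we measure against the segment itself.
        t = np.clip(np.dot(pos - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        best = min(best, np.linalg.norm(pos - (a + t * ab)))
    return best

path = [(0, 0), (5, 0), (5, 5)]          # previously recorded waypoints (m)
dev = deviation_from_path((4.2, 0.8), path)
if dev > 0.5:                            # assumed comfort threshold
    print(f"off path by {dev:.2f} m -> trigger haptic/speech cue")
else:
    print(f"on path ({dev:.2f} m deviation)")
```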


Author(s):  
Damien Constantine Rompapas ◽  
Daniel Flores Quiros ◽  
Charlton Rodda ◽  
Bryan Christopher Brown ◽  
Noah Benjamin Zerkin ◽  
...  

2013 ◽  
Vol 60 (9) ◽  
pp. 2636-2644 ◽  
Author(s):  
Hussam Al-Deen Ashab ◽  
Victoria A. Lessoway ◽  
Siavash Khallaghi ◽  
Alexis Cheng ◽  
Robert Rohling ◽  
...  

2017 ◽  
Vol 26 (1) ◽  
pp. 1-15 ◽  
Author(s):  
Vito Modesto Manghisi ◽  
Michele Gattullo ◽  
Michele Fiorentino ◽  
Antonio Emmanuele Uva ◽  
Francescomaria Marino ◽  
...  

Text legibility in augmented reality with optical see-through displays can be challenging due to interference from the background texture. The literature presents several approaches to predicting the legibility of text superimposed over a specific image, but their validation with an AR display and with images taken from the industrial domain is scarce. In this work, we propose novel indices extracted from background images displayed on an LCD screen and compare them, through a purpose-designed user test, with indices proposed in the literature. We collected legibility ratings from 19 subjects viewing white text over 13 industrial background images through an optical see-through head-worn display. We found that most of the proposed indices have a significant correlation with user ratings. The main result of this work is that some of the novel indices correlate better with legibility than those previously used in the literature. Our results show that industrial AR developers can effectively predict text legibility simply by running image analysis on the background image.
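As an illustration of this kind of image-analysis index (not one of the paper's actual indices), a background's edge density can serve as a crude complexity measure and be correlated against user legibility ratings; the file names and ratings below are hypothetical:

```python
# Minimal sketch (illustrative index, not one from the paper): edge
# density of a background image as a legibility predictor, correlated
# with user ratings via Spearman's rank coefficient.
import cv2
import numpy as np
from scipy.stats import spearmanr

def edge_density(path):
    """Fraction of pixels marked as edges by a Canny detector."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 100, 200)
    return np.count_nonzero(edges) / edges.size

# Hypothetical file names and mean legibility ratings (1 = unreadable,
# 5 = perfectly legible) for each of the 13 industrial backgrounds.
images = [f"background_{i:02d}.png" for i in range(13)]
ratings = [4.1, 2.3, 3.8, 1.9, 4.6, 3.2, 2.8, 4.0, 1.5, 3.5, 2.1, 4.4, 3.0]

indices = [edge_density(p) for p in images]
rho, pval = spearmanr(indices, ratings)
print(f"Spearman rho = {rho:.2f} (p = {pval:.3f})")  # busier image -> lower rating
```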

