A Virtual Harp With Physical String Vibrations in an Augmented Reality Environment

Author(s): Tanasha Taylor, Shana Smith, David Suh

This research presents a prototype of a computer-generated harp that uses physical string vibrations with haptic feedback in an augmented reality environment. Individuals, immersed in the augmented reality environment through a head-mounted display, play the virtual harp with a Phantom Omni haptic device and receive realistic interactions from the strings of the harp. Most previous research on musical instruments provides feedback only in the form of visual and audio cues, not haptic cues. The proposed project is designed to provide individuals with all three forms of cues when interacting with the computer-generated harp. The harp is modeled on a realistic harp and includes string-vibration physics to give individuals an interaction similar to that of a traditional instrument. The prototype will be applied to interactive musical experiences and to skill development during music therapy for individuals with disabilities.
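The abstract does not specify which string-vibration model the harp uses; the following is a minimal sketch of one common choice, Karplus–Strong plucked-string synthesis, which approximates a damped vibrating string with a filtered delay line (all names and parameter values here are illustrative, not the authors' implementation):

```python
import numpy as np

def pluck_string(frequency_hz, duration_s=2.0, sample_rate=44100, damping=0.996):
    """Karplus-Strong synthesis: a noise burst circulates through a delay
    line with a lowpass filter, decaying like a plucked string."""
    period = int(sample_rate / frequency_hz)      # delay-line length sets pitch
    buf = np.random.uniform(-1.0, 1.0, period)    # the "pluck": a burst of noise
    out = np.empty(int(duration_s * sample_rate))
    for i in range(out.size):
        out[i] = buf[i % period]
        # averaging adjacent samples acts as a lowpass; damping sets decay time
        buf[i % period] = damping * 0.5 * (buf[i % period] + buf[(i + 1) % period])
    return out

tone = pluck_string(440.0)   # A4, ready to write to a sound device or file
```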

Author(s): Tanasha Taylor, Shana Smith, Karljohan L. Palmerius

The goal of this research study was to develop a music therapy tool using a computer-generated harp that could provide users with visual, audio, and haptic feedback during interaction with the virtual instrument. Realistic 3D visual and haptic feedback was provided through immersion in a portable augmented reality-based system composed of a video see-through head-mounted display (HMD) and a Sensable Phantom Omni haptic device. Users play the virtual harp by using the Phantom Omni haptic device to pluck or strum the strings of the harp. Users can also freely move the harp in the augmented reality environment, providing a more realistic experience, similar to that of playing a traditional musical instrument. The system will be used to provide interactive musical experiences and to develop motor skills among individuals with disabilities through music therapy. A virtual therapist feature was developed that can be used by a therapist without musical knowledge to observe a user during therapy exercises, or by a user to engage in self-motivated therapy exercises outside the therapy room. With the virtual therapist feature, users can follow a simple predetermined sequence of notes using color-coded strings. User testing was completed to study usability, therapeutic effectiveness, and patients' therapy motivation.
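The paper does not publish its implementation; the sketch below is a hypothetical outline of the virtual-therapist exercise loop described above. The note names, `highlight`, and `read_plucked_string` are illustrative stand-ins for the system's string-coloring and haptic-input calls:

```python
EXERCISE = ["C4", "E4", "G4", "C5"]   # a simple predetermined note sequence

def highlight(note):
    # Stand-in: the real system would recolor the corresponding string's mesh.
    print(f"highlighting string {note}")

def run_exercise(sequence, read_plucked_string):
    """Step through the sequence, color-coding the next target string and
    waiting until the user plucks it with the haptic device."""
    for step, target in enumerate(sequence, start=1):
        highlight(target)
        while read_plucked_string() != target:
            pass                       # wrong string; keep waiting
        print(f"step {step}/{len(sequence)} complete")

# Example with a canned input stream standing in for the haptic device:
plucks = iter(["E4", "C4", "E4", "G4", "C5"])
run_exercise(EXERCISE, lambda: next(plucks))
```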


Author(s): Terek Arce, Henry Fuchs, Kyla McMullen

Currently available augmented reality systems have a narrow field of view, giving users only a small window through which to find holograms in the environment. The challenge for developers is to direct users' attention to holograms outside this window. To alleviate this field-of-view constraint, most research has focused on hardware improvements to the head-mounted display. However, incorporating 3D audio cues into programs could also aid users in this localization task. This paper investigates the effectiveness of 3D audio for hologram localization. A comparison of 3D audio, visual, and mixed-mode stimuli shows that users are able to localize holograms significantly faster under conditions that include 3D audio. To our knowledge, this is the first study to explore the use of 3D audio in localization tasks with augmented reality systems. The results provide a basis for incorporating 3D audio into augmented reality applications.
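The paper does not describe its audio pipeline; as a minimal sketch, here are two standard spatialization cues a 3D audio engine can derive from a hologram's direction: Woodworth's spherical-head interaural time difference, plus a crude sine-law level difference. The constants and the ILD law are assumptions for illustration, not the study's implementation:

```python
import math

HEAD_RADIUS_M = 0.0875     # average adult head radius
SPEED_OF_SOUND = 343.0     # m/s in air

def interaural_cues(azimuth_deg):
    """Interaural time difference (Woodworth model) and a rough level
    difference for a source at `azimuth_deg` (0 = ahead, +90 = right)."""
    az = math.radians(azimuth_deg)
    itd_s = (HEAD_RADIUS_M / SPEED_OF_SOUND) * (az + math.sin(az))
    ild_db = 10.0 * math.sin(az)   # simplistic placeholder, not an HRTF
    return itd_s, ild_db

print(interaural_cues(45.0))   # cues for a hologram 45 degrees to the right
```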


2021, Vol. 11 (15), pp. 6932
Author(s): Yongseok Lee, Somang Lee, Dongjun Lee

We propose a novel wearable haptic device that can provide kinesthetic haptic feedback for stiffness rendering of virtual objects in augmented reality (AR). Rendering the stiffness of objects through haptic feedback is crucial for realistic finger-based object manipulation, yet challenging, particularly in AR, due to the co-presence of a real hand, the haptic device, and rendered AR objects in the scene. By adopting passive actuation with a tendon-based transmission mechanism, the proposed haptic device can generate kinesthetic feedback strong enough for immersive manipulation and for preventing inter-penetration, in a small form factor, while maximizing wearability and minimizing occlusion in AR usage. A selective locking module in the device allows the elasticity of objects to be rendered. We perform an experimental study of two-finger grasping to verify the efficacy of the proposed haptic device for finger-based manipulation in AR. We also, for the first time, quantitatively compare and articulate the effects of different types of feedback across the haptic and visual senses (i.e., kinesthetic haptic feedback, vibrotactile haptic feedback, and visuo-haptic feedback) for stiffness rendering of virtual objects in AR.
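The abstract does not give the device's rendering law; a minimal sketch of the textbook penetration-based spring model commonly used for stiffness rendering follows (the clamp value is an assumption standing in for the device's force limit):

```python
def stiffness_force(penetration_m, stiffness_n_per_m, max_force_n=8.0):
    """Spring-law stiffness rendering: the restoring force grows with how
    far the fingertip has pressed into the virtual surface (F = k * d)."""
    if penetration_m <= 0.0:
        return 0.0                     # no contact, no force
    return min(stiffness_n_per_m * penetration_m, max_force_n)

# A softer object (lower k) lets the finger sink deeper for the same force:
print(stiffness_force(0.005, 2000.0))  # stiff: 10 N requested, clamped to 8 N
print(stiffness_force(0.005, 400.0))   # soft:  2 N at the same 5 mm penetration
```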


Author(s): Eugene Hayden, Kang Wang, Chengjie Wu, Shi Cao

This study explores the design, implementation, and evaluation of an Augmented Reality (AR) prototype that assists novice operators in performing procedural tasks in simulator environments. The prototype uses an optical see-through head-mounted display (OST HMD) in conjunction with a simulator display to add sequences of interactive visual and attention-guiding cues to the operator's field of view. We used a 2 × 2 within-subject design to test two conditions, with and without AR cues; each condition included a voice assistant and two procedural tasks (preflight and landing). Twenty-six novice operators took part in the experiment. The results demonstrated that augmented reality improved situation awareness and accuracy; however, it yielded longer task completion times, creating a speed-accuracy trade-off in favour of accuracy. No significant effect on mental workload was found. The results suggest that augmented reality systems have the potential to be used by a wider audience of operators.


2021
Author(s): Nina Rohrbach, Joachim Hermsdörfer, Lisa-Marie Huber, Annika Thierfelder, Gavin Buckingham

Augmented reality, whereby computer-generated images are overlaid onto the physical environment, is becoming a significant part of the world of education and training. Little is known, however, about how these external images are treated by the sensorimotor system of the user: are they fully integrated with cues from the external environment, or largely ignored by low-level perceptual and motor processes? Here, we examined this question in the context of the size–weight illusion (SWI). Thirty-two participants repeatedly lifted, in alternation, two cubes of unequal volume but equal mass and reported their heaviness. Half of the participants saw semi-transparent, equally sized holographic cubes superimposed onto the physical cubes through a head-mounted display. Fingertip force rates were measured prior to lift-off to determine how the holograms influenced sensorimotor prediction, while verbal reports of heaviness after each lift indicated how the holographic size cues influenced the SWI. As expected, participants who lifted without augmented visual cues lifted the large object at a higher rate of force than the small object on early lifts and experienced a robust SWI across all trials. In contrast, participants who lifted the (apparently equal-sized) augmented cubes used similar force rates for each object. Furthermore, they experienced no SWI during the first lifts of the objects, with an SWI developing over repeated trials. These results indicate that holographic cues initially dominate physical cues and cognitive knowledge, but are dismissed when they conflict with cues from other senses.
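The paper's analysis pipeline is not given in the abstract; below is a minimal sketch of the standard measure it alludes to, the peak load-force rate before lift-off (the sampling rate and lift-off detection are assumptions of the sketch):

```python
import numpy as np

def peak_load_force_rate(load_force_n, sample_rate_hz, liftoff_index):
    """Peak first derivative of the vertical load force prior to lift-off,
    the usual index of how heavy the lifter *expected* the object to be."""
    dt = 1.0 / sample_rate_hz
    rates = np.gradient(load_force_n[:liftoff_index], dt)   # N/s
    return rates.max()

# An object predicted to be heavy is loaded with a higher peak force rate,
# which is why equal-looking holographic cubes yielded similar rates.
```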


2020, Vol. 4 (4), pp. 78
Author(s): Andoni Rivera Pinto, Johan Kildal, Elena Lazkano

In the context of industrial production, a worker who wants to program a robot using the hand-guidance technique needs the robot to be available for programming and not in operation, which means that production with that robot stops during that time. A way around this constraint is to perform the same manual guidance steps on a holographic representation of the robot's digital twin, using augmented reality technologies. However, this approach suffers from the lack of tangibility of the visual holograms that the user tries to grab. We present an interface in which some of that tangibility is provided through ultrasound-based mid-air haptic actuation. We report a user study evaluating the impact of such haptic feedback on a pick-and-place task with the wrist of a holographic robot arm; we found the feedback to be beneficial.


2011, Vol. 69 (suppl_1), pp. ons14-ons19
Author(s): Cristian J. Luciano, P. Pat Banerjee, Brad Bellotte, G. Michael Oh, Michael Lemole, ...

BACKGROUND: We evaluated the use of a part-task simulator with 3D and haptic feedback as a training tool for a common neurosurgical procedure: placement of thoracic pedicle screws.

OBJECTIVE: To evaluate learning retention of thoracic pedicle screw placement on a high-performance augmented reality and haptic technology workstation.

METHODS: Fifty-one fellows and residents performed thoracic pedicle screw placement on the simulator. The virtual screws were drilled into a virtual patient's thoracic spine derived from a computed tomography data set of a real patient.

RESULTS: With a 12.5% failure rate, a 2-proportion z test yielded P = .08. For performance accuracy, an aggregate Euclidean distance deviation from the entry landmark on the pedicle, together with a similar deviation from the target landmark in the vertebral body, yielded P = .04 from a 2-sample t test whose null hypothesis (rejected) assumes no improvement in performance accuracy from the practice to the test sessions, and whose alternative hypothesis assumes an improvement.

CONCLUSION: The performance accuracy on the simulator was comparable to the accuracy reported in the literature in a recent retrospective evaluation of such placements. The failure rates showed a minor drop from practice to test sessions and indicated a trend (P = .08) toward learning retention, that is, improvement from practice to test sessions. Performance accuracy showed a 15% mean score improvement and more than a 50% reduction in standard deviation from practice to test, giving evidence (P = .04) of accuracy improvement from the practice to the test session.
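The abstract reports the test statistics but not the raw counts; as a sketch of how such a 2-proportion z test is computed, the snippet below uses placeholder counts that are purely illustrative and are not the study's data:

```python
# Placeholder counts: NOT the study's data, just the shape of the test.
from statsmodels.stats.proportion import proportions_ztest

failures = [12, 10]      # hypothetical failed placements (practice, test)
attempts = [96, 96]      # hypothetical screws attempted per session
z_stat, p_value = proportions_ztest(failures, attempts, alternative='larger')
print(f"z = {z_stat:.2f}, one-sided p = {p_value:.3f}")
```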


2017, Vol. 26 (1), pp. 16-41
Author(s): Jonny Collins, Holger Regenbrecht, Tobias Langlotz

Virtual and augmented reality, and other forms of mixed reality (MR), have become a focus of attention for companies and researchers. Before they can become successful in the market and in society, MR systems must be able to deliver a convincing, novel experience for their users. By definition, the experience of mixed reality relies on the perceptually successful blending of reality and virtuality. Any MR system has to provide a set of stimuli that is sensorily, and in particular visually, coherent. Therefore, issues with visual coherence, that is, discontinuities in the experienced MR environment, must be avoided. While it is very easy for a user to detect issues with visual coherence, it is very difficult to design and implement a system for coherence. This article presents a framework and an exemplary implementation for a systematic enquiry into issues with visual coherence and possible solutions to address those issues. The focus is on head-mounted display-based systems, although the framework also applies to other types of MR systems. Our framework, together with a systematic discussion of tangible issues and solutions for visual coherence, aims to guide developers of mixed reality systems toward better and more effective user experiences.

