Review of the Augmented Reality Systems for Shoulder Rehabilitation

Information ◽  
2019 ◽  
Vol 10 (5) ◽  
pp. 154 ◽  
Author(s):  
Rosanna Maria Viglialoro ◽  
Sara Condino ◽  
Giuseppe Turini ◽  
Marina Carbone ◽  
Vincenzo Ferrari ◽  
...  

Literature shows an increasing interest in the development of augmented reality (AR) applications in several fields, including rehabilitation. Current studies show the need for new rehabilitation tools for the upper extremity, since traditional interventions are less effective than in other body regions. This review aims at: Studying to what extent AR applications are used in shoulder rehabilitation, examining wearable/non-wearable technologies employed, and investigating the evidence supporting AR effectiveness. Nine AR systems were identified and analyzed in terms of: Tracking methods, visualization technologies, integrated feedback, rehabilitation setting, and clinical evaluation. Our findings show that all these systems utilize vision-based registration, mainly with wearable marker-based tracking, and spatial displays. No system uses head-mounted displays, and only one system (11%) integrates a wearable interface (for tactile feedback). Three systems (33%) provide only visual feedback, while the remaining 66% combine it with other modalities: 33% visual-audio feedback, 22% visual-audio with biofeedback, and 11% visual-audio with haptic feedback. Moreover, several systems (44%) are designed primarily for home settings. Three systems (33%) have been successfully evaluated in clinical trials with more than 10 patients, showing advantages over traditional rehabilitation methods. Further clinical studies are needed to generalize the obtained findings, supporting the effectiveness of the AR applications.
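All nine reviewed systems rely on vision-based registration with marker tracking. The core geometric step, recovering the rigid transform that aligns known marker geometry with its tracked positions in the camera frame, can be sketched with the Kabsch algorithm (a generic illustration, not the pipeline of any reviewed system):

```python
import numpy as np

def rigid_transform(model_pts, observed_pts):
    """Kabsch algorithm: best-fit rotation R and translation t mapping
    model_pts onto observed_pts (both Nx3), in the least-squares sense."""
    cm = model_pts.mean(axis=0)
    co = observed_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (model_pts - cm).T @ (observed_pts - co)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections (det = -1 solutions)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ cm
    return R, t
```

With noiseless correspondences the transform is recovered exactly; real marker trackers apply this (or an equivalent PnP solve) per frame to noisy detections.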

2019 ◽  
Vol 9 (23) ◽  
pp. 5123 ◽  
Author(s):  
Diego Vaquero-Melchor ◽  
Ana M. Bernardos

Nowadays, Augmented-Reality (AR) head-mounted displays (HMD) deliver a more immersive visualization of virtual contents, but the available means of interaction, mainly based on gesture and/or voice, are yet limited and obviously lack realism and expressivity when compared to traditional physical means. In this sense, the integration of haptics within AR may help to deliver an enriched experience, while facilitating the performance of specific actions, such as repositioning or resizing tasks, that are still dependent on the user’s skills. In this direction, this paper gathers the description of a flexible architecture designed to deploy haptically enabled AR applications both for mobile and wearable visualization devices. The haptic feedback may be generated through a variety of devices (e.g., wearable, graspable, or mid-air ones), and the architecture facilitates handling the specificity of each. For this reason, within the paper, it is discussed how to generate a haptic representation of a 3D digital object depending on the application and the target device. Additionally, the paper includes an analysis of practical, relevant issues that arise when setting up a system to work with specific devices like HMD (e.g., HoloLens) and mid-air haptic devices (e.g., Ultrahaptics), such as the alignment between the real world and the virtual one. The architecture applicability is demonstrated through the implementation of two applications: (a) Form Inspector and (b) Simon Game, built for HoloLens and iOS mobile phones for visualization and for UHK for mid-air haptics delivery. These applications have been used to explore with nine users the efficiency, meaningfulness, and usefulness of mid-air haptics for form perception, object resizing, and push interaction tasks. 
Results show that, although mobile interaction is preferred when this option is available, haptics turn out to be more meaningful in identifying shapes when compared to what users initially expect and in contributing to the execution of resizing tasks. Moreover, this preliminary user study reveals some design issues when working with haptic AR. For example, users may be expecting a tailored interface metaphor, not necessarily inspired in natural interaction. This has been the case of our proposal of virtual pressable buttons, built mimicking real buttons by using haptics, but differently interpreted by the study participants.
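The device-abstraction idea described above, one application-level haptic event rendered differently by each attached device class, can be illustrated with a minimal sketch; all class and method names here are hypothetical, not the authors' API:

```python
# Hypothetical sketch of a haptic device-abstraction layer in the spirit
# of the architecture described above; names are illustrative only.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class HapticEvent:
    position: tuple    # 3D contact point on the virtual object (world coords)
    intensity: float   # normalized 0..1

class HapticRenderer(ABC):
    """Common interface; each device class handles its own specifics."""
    @abstractmethod
    def render(self, event: HapticEvent) -> str: ...

class MidAirRenderer(HapticRenderer):
    def render(self, event):
        # An ultrasound array (e.g., Ultrahaptics) would focus acoustic
        # pressure at the event position; here we just report the mapping.
        return f"focal point at {event.position}, gain {event.intensity:.2f}"

class VibrotactileRenderer(HapticRenderer):
    def render(self, event):
        # A wearable would instead map intensity to motor amplitude.
        return f"vibration amplitude {event.intensity:.2f}"

def dispatch(renderers, event):
    """The application emits one event; every attached device renders it."""
    return [r.render(event) for r in renderers]
```

The design choice mirrors the paper's point: the application describes *what* to convey, and each renderer decides *how*, given its actuation principle.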


2014 ◽  
Vol 26 (5) ◽  
pp. 580-591 ◽  
Author(s):  
Robert M. Philbrick ◽  
Mark B. Colton ◽  

[Figure: Haptic and audio 3D feedback] Unmanned aerial vehicles (UAVs) have many potential applications in indoor environments. However, limited visual feedback makes it difficult to pilot UAVs in cluttered and enclosed spaces. Haptic feedback combined with visual feedback has been shown to reduce the number of collisions of UAVs in indoor environments, but has generally resulted in an increase in the mental workload of the operator. This paper investigates the potential of combining novel haptic and 3D audio feedback to provide additional information to operators of UAVs to improve performance and reduce workload. Two haptic feedback and two 3D audio feedback algorithms are presented and tested in a simulation-based human subject experiment. Operator workload is quantified using standard measures and a novel application of behavioral entropy. Experimental results indicate that 3D haptic feedback improved UAV pilot performance. Pilot workload was also improved for one of the haptic algorithms in one of the control directions (lateral). The 3D audio feedback algorithms investigated in this study neither improved nor degraded pilot performance.
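Behavioral entropy, used above as a workload measure, can be sketched as the Shannon entropy of the prediction errors of the operator's control signal, in the spirit of Boer-style steering entropy (the paper's exact formulation may differ):

```python
import numpy as np

def behavioral_entropy(u, n_bins=9):
    """Shannon entropy (bits) of the prediction errors of a control
    signal u: each sample is predicted by quadratic extrapolation from
    the three preceding samples, so smooth, anticipatory control yields
    small errors and low entropy, while erratic control yields high entropy."""
    u = np.asarray(u, dtype=float)
    pred = 3 * u[2:-1] - 3 * u[1:-2] + u[:-3]   # quadratic extrapolation
    err = u[3:] - pred
    # Bin errors symmetrically; the 90th-percentile magnitude sets the range.
    alpha = np.percentile(np.abs(err), 90) + 1e-12
    edges = np.linspace(-alpha, alpha, n_bins + 1)
    counts, _ = np.histogram(np.clip(err, -alpha, alpha), bins=edges)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

The result is bounded by log2(n_bins); higher values indicate more unpredictable, and by hypothesis more effortful, control activity.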


2014 ◽  
Vol 112 (12) ◽  
pp. 3189-3196 ◽  
Author(s):  
Chiara Bozzacchi ◽  
Robert Volcic ◽  
Fulvio Domini

Perceptual estimates of three-dimensional (3D) properties, such as the distance and depth of an object, are often inaccurate. Given the accuracy and ease with which we pick up objects, it may be expected that perceptual distortions do not affect how the brain processes 3D information for reach-to-grasp movements. Nonetheless, empirical results show that grasping accuracy is reduced when visual feedback of the hand is removed. Here we studied whether specific types of training could correct grasping behavior to perform adequately even when any form of feedback is absent. Using a block design paradigm, we recorded the movement kinematics of subjects grasping virtual objects located at different distances in the absence of visual feedback of the hand and haptic feedback of the object, before and after different training blocks with different feedback combinations (vision of the thumb and vision of thumb and index finger, with and without tactile feedback of the object). In the Pretraining block, we found systematic biases of the terminal hand position, the final grip aperture, and the maximum grip aperture like those reported in perceptual tasks. Importantly, the distance at which the object was presented modulated all these biases. In the Posttraining blocks only the hand position was partially adjusted, but final and maximum grip apertures remained unchanged. These findings show that when visual and haptic feedback are absent systematic distortions of 3D estimates affect reach-to-grasp movements in the same way as they affect perceptual estimates. Most importantly, accuracy cannot be learned, even after extensive training with feedback.
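The kinematic measures analyzed above (maximum grip aperture, final grip aperture, terminal hand position) can be computed directly from fingertip marker trajectories; a minimal sketch:

```python
import numpy as np

def grip_aperture_measures(thumb, index):
    """Given Nx3 thumb and index fingertip trajectories, return
    (max_grip_aperture, final_grip_aperture, terminal_hand_position).
    Aperture is the Euclidean thumb-index distance at each sample."""
    thumb = np.asarray(thumb, dtype=float)
    index = np.asarray(index, dtype=float)
    aperture = np.linalg.norm(index - thumb, axis=1)
    terminal = (thumb[-1] + index[-1]) / 2.0   # midpoint at movement end
    return aperture.max(), aperture[-1], terminal
```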


2015 ◽  
Vol 10 (1) ◽  
pp. 57-61 ◽  
Author(s):  
Giuseppe Meccariello ◽  
Federico Faedi ◽  
Saleh AlGhamdi ◽  
Filippo Montevecchi ◽  
Elisabetta Firinu ◽  
...  

Author(s):  
Wakana Ishihara ◽  
Karen Moxon ◽  
Sheryl Ehrman ◽  
Mark Yarborough ◽  
Tina L. Panontin ◽  
...  

This systematic review addresses the plausibility of using novel feedback modalities for brain–computer interface (BCI) and attempts to identify the best feedback modality on the basis of the effectiveness or learning rate. Out of the chosen studies, it was found that 100% of studies tested visual feedback, 31.6% tested auditory feedback, 57.9% tested tactile feedback, and 21.1% tested proprioceptive feedback. Visual feedback was included in every study design because it was intrinsic to the response of the task (e.g. seeing a cursor move). However, when used alone, it was not very effective at improving accuracy or learning. Proprioceptive feedback was most successful at increasing the effectiveness of motor imagery BCI tasks involving neuroprosthetics. The use of auditory and tactile feedback resulted in mixed results. The limitations of this current study and further study recommendations are discussed.
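The modality percentages quoted above are mutually consistent with a pool of 19 included studies; the count of 19 is inferred here from the rounding, not stated in this excerpt:

```python
# Consistency check: with 19 studies, the quoted shares round to the
# abstract's figures (19 is an inference from the rounding, not a stated count).
n_studies = 19
counts = {"visual": 19, "auditory": 6, "tactile": 11, "proprioceptive": 4}
shares = {m: round(100 * c / n_studies, 1) for m, c in counts.items()}
print(shares)  # 100.0, 31.6, 57.9, and 21.1 percent respectively
```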


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2234
Author(s):  
Sebastian Kapp ◽  
Michael Barz ◽  
Sergey Mukhametov ◽  
Daniel Sonntag ◽  
Jochen Kuhn

Currently, an increasing number of head-mounted displays (HMD) for virtual and augmented reality (VR/AR) are equipped with integrated eye trackers. Use cases of these integrated eye trackers include rendering optimization and gaze-based user interaction. In addition, visual attention in VR and AR is of interest for applied eye-tracking research in the cognitive or educational sciences, for example. While some research toolkits for VR already exist, only a few target AR scenarios. In this work, we present an open-source eye tracking toolkit for reliable gaze data acquisition in AR based on Unity 3D and the Microsoft HoloLens 2, as well as an R package for seamless data analysis. Furthermore, we evaluate the spatial accuracy and precision of the integrated eye tracker for fixation targets at different distances and angles to the user (n = 21). On average, we found that gaze estimates are reported with an angular accuracy of 0.83 degrees and a precision of 0.27 degrees while the user is resting, which is on par with state-of-the-art mobile eye trackers.
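The two reported metrics can be sketched as follows: accuracy as the mean angular offset between gaze rays and the target direction, and precision here as the RMS of successive inter-sample angular differences (a common definition, which may differ from the paper's exact one):

```python
import numpy as np

def angle_deg(a, b):
    """Angle in degrees between two 3D direction vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

def accuracy_precision(gaze_dirs, target_dir):
    """gaze_dirs: Nx3 gaze direction samples; target_dir: 3-vector to the
    fixation target. Returns (accuracy_deg, precision_deg)."""
    offsets = np.array([angle_deg(g, target_dir) for g in gaze_dirs])
    accuracy = offsets.mean()
    steps = np.array([angle_deg(gaze_dirs[i], gaze_dirs[i + 1])
                      for i in range(len(gaze_dirs) - 1)])
    precision = np.sqrt((steps ** 2).mean())   # RMS sample-to-sample jitter
    return accuracy, precision
```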


2020 ◽  
Vol 4 (4) ◽  
pp. 78
Author(s):  
Andoni Rivera Pinto ◽  
Johan Kildal ◽  
Elena Lazkano

In the context of industrial production, a worker who wants to program a robot using the hand-guidance technique needs the robot to be available for programming and not in operation. This means that production with that robot is stopped during that time. A way around this constraint is to perform the same manual guidance steps on a holographic representation of the robot's digital twin, using augmented reality technologies. However, this approach has the limitation that the visual holograms the user tries to grab lack tangibility. We present an interface in which some of that tangibility is provided through ultrasound-based mid-air haptic actuation. We report a user study evaluating the impact of such haptic feedback on a pick-and-place task involving the wrist of a holographic robot arm, and we found it to be beneficial.


2015 ◽  
Vol 1 (1) ◽  
pp. 534-537 ◽  
Author(s):  
T. Mentler ◽  
C. Wolters ◽  
M. Herczeg

In the healthcare domain, head-mounted displays (HMDs) with augmented reality (AR) modalities have been reconsidered for application as a result of commercially available products and the need to use computers in mobile contexts. Within a user-centered design approach, interviews were conducted with physicians, nursing staff, and members of emergency medical services. Additionally, practitioners were involved in evaluating two different head-mounted displays. Based on these measures, use cases and usability considerations concerning interaction design and information visualization were derived and are described in this contribution.

