haptic cues
Recently Published Documents

TOTAL DOCUMENTS: 92 (FIVE YEARS: 7)
H-INDEX: 13 (FIVE YEARS: 0)

2021 ◽  
Vol 2 ◽  
Author(s):  
Sungchul Jung ◽  
Robert W. Lindeman

The concepts of “immersion” and “presence” have been considered staple metrics for evaluating the quality of virtual reality experiences for more than five decades, even as the concepts themselves have evolved in terms of both technical and psychological aspects. To enhance the user’s experience, studies have investigated the impact of different visual, auditory, and haptic stimuli in various contexts, mainly to explore the concepts of “plausibility illusion” and “place illusion”. Previous research has sometimes shown a positive correlation between increased realism and an increase in presence, but not always; thus, very little of the work on presence reports an unequivocal correlation. Indeed, one might classify the overall findings within the field around presence as “messy”. Better (or more) visual, auditory, or haptic cues, or increased agency, may lead to increased realism, but not necessarily increased presence, and the effect may well depend on the application context. Rich visual and audio cues in concert contribute significantly to both realism and presence, but the addition of tactile cues, gesture input support, or a combination of these might improve realism without necessarily improving presence. In this paper, we review previous research and suggest a possible theory to better define the relationship between increases in sensory-based realism and presence, and thus help VR researchers create more effective experiences.


Micromachines ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 640
Author(s):  
Linshuai Zhang ◽  
Shuoxin Gu ◽  
Shuxiang Guo ◽  
Takashi Tamiya

A teleoperated robotic catheter operating system is a solution to avoid the occupational hazards caused by the surgeon’s repeated exposure to X-ray radiation during endovascular procedures. However, inadequate force feedback and collision detection while teleoperating surgical tools elevate the risk of endovascular procedures. Moreover, surgeons cannot control the force of the catheter/guidewire within a proper range, so the risk of blood vessel damage increases. In this paper, a magnetorheological fluid (MR)-based robot-assisted catheter/guidewire surgery system has been developed, which leverages the surgeon’s natural manipulation skills acquired through experience and uses haptic cues for collision detection to ensure surgical safety. We present tests for the performance evaluation regarding the teleoperation, the force measurement, and the collision detection with haptic cues. Results show that the system can track the desired position of the surgical tool and detect the relevant force event at the catheter. In addition, this method can more readily enable surgeons to distinguish whether the proximal force exceeds or meets the safety threshold of blood vessels.
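The abstract's safety logic amounts to classifying a measured proximal force against a vessel-safety threshold and mapping the result to a haptic cue. A minimal sketch of that idea follows; the function name, threshold value, and warning margin are illustrative assumptions, not the authors' implementation.

```python
SAFETY_THRESHOLD_N = 0.12   # assumed vessel-safety force limit (newtons)
WARNING_MARGIN = 0.8        # warn when force reaches 80% of the limit

def classify_proximal_force(force_n: float) -> str:
    """Return a haptic-cue level for the measured proximal catheter force."""
    if force_n >= SAFETY_THRESHOLD_N:
        return "exceeds"   # strong haptic cue: stop advancing
    if force_n >= WARNING_MARGIN * SAFETY_THRESHOLD_N:
        return "meets"     # mild haptic cue: approaching the limit
    return "safe"          # no cue

for f in (0.05, 0.10, 0.15):
    print(f, classify_proximal_force(f))
```

A real system would feed the classification into the haptic interface's force-rendering loop rather than printing it; the point is only the three-way distinction ("safe" / "meets" / "exceeds") that the abstract describes.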


2021 ◽  
Vol 5 (EICS) ◽  
pp. 1-26
Author(s):  
Carlos Bermejo ◽  
Lik Hang Lee ◽  
Paul Chojecki ◽  
David Przewozny ◽  
Pan Hui

Continued advances in user interfaces have brought about the era of virtual reality, which requires a better understanding of how users will interact with 3D buttons in mid-air. Although virtual reality offers high levels of expressiveness and can simulate everyday objects from the physical environment, the most fundamental issue of designing virtual buttons has been surprisingly neglected. To this end, this paper presents four variants of virtual buttons, considering two design dimensions: key representation and multi-modal cues (audio, visual, haptic). We conduct two multi-metric assessments to evaluate the four virtual variants against physical baselines. Our results indicate that 3D-lookalike buttons help users perform more refined and subtle mid-air interactions (i.e., smaller press depth) when haptic cues are available, while users with 2D-lookalike buttons unintuitively achieve better keystroke performance than with the 3D counterparts. We summarize the findings and, accordingly, suggest design choices for virtual reality buttons along the two proposed design dimensions.
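The press-depth finding suggests a simple activation model: a mid-air press fires the button's available cues once the finger crosses an activation depth along the button axis. The sketch below illustrates that model; the 8 mm depth and the class layout are invented for illustration and are not the study's code.

```python
from dataclasses import dataclass

@dataclass
class VirtualButton:
    activation_depth_mm: float = 8.0   # assumed activation depth
    cues: tuple = ("visual",)          # subset of {"visual", "audio", "haptic"}

    def update(self, finger_depth_mm: float) -> list:
        """Return the cues to fire if the press crosses the activation depth."""
        if finger_depth_mm >= self.activation_depth_mm:
            return list(self.cues)
        return []

button = VirtualButton(cues=("visual", "audio", "haptic"))
print(button.update(3.0))   # shallow touch: below the activation depth
print(button.update(9.5))   # full press: all available cues fire
```

The study's two design dimensions map onto the sketch directly: the key representation (2D- vs 3D-lookalike) would change the rendering, while the `cues` tuple captures which multi-modal feedback channels a variant provides.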


2021 ◽  
Vol 8 ◽  
Author(s):  
Marco Costanzo ◽  
Giuseppe De Maria ◽  
Ciro Natale

Modern scenarios in robotics involve human-robot collaboration or robot-robot cooperation in unstructured environments. In human-robot collaboration, the objective is to relieve humans from repetitive and wearing tasks. This is the case of a retail store, where the robot could help a clerk refill a shelf or an elderly customer pick an item from an uncomfortable location. In robot-robot cooperation, automated logistics scenarios, such as warehouses, distribution centers and supermarkets, often require repetitive and sequential pick and place tasks that can be executed more efficiently by exchanging objects between robots, provided that they are endowed with object handover ability. The use of a robot for passing objects is justified only if the handover operation is sufficiently intuitive for the humans involved, fluid, and natural, with a speed comparable to that of a typical human-human object exchange. The approach proposed in this paper strongly relies on visual and haptic perception combined with suitable algorithms for controlling both robot motion, to allow the robot to adapt to human behavior, and grip force, to ensure a safe handover. The control strategy combines model-based reactive control methods with an event-driven state machine encoding a human-inspired behavior during a handover task, which involves both linear and torsional loads, without requiring explicit learning from human demonstration. Experiments in a supermarket-like environment with humans and robots communicating only through haptic cues demonstrate the relevance of force/tactile feedback in accomplishing handover operations in a collaborative task.


2021 ◽  
Vol 11 (4) ◽  
pp. 1367
Author(s):  
Jorge C. S. Cardoso ◽  
Jorge M. Ribeiro

Tangible User Interfaces (TUIs) hold great potential for Virtual Reality (VR) because tangibles can naturally provide the rich haptic cues that are often missing in VR experiences built around standard controllers. We are particularly interested in implementing TUIs for smartphone-based VR, given the lower usage barrier and easy deployment. In order to keep the overall system simple and accessible, we have explored object detection through visual markers, using the smartphone’s camera. To help VR experience designers, in this work we present a design space for marker-based TUIs for VR. We have mapped this design space by developing several marker-based tangible interaction prototypes and through a formative study with professionals from different backgrounds. We then instantiated the design space in a Tangible VR Book, which we evaluated with remote user studies inspired by the vignette methodology.
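At its core, a marker-based TUI needs a mapping from detected marker IDs to virtual objects, so that camera detections drive the VR scene. A minimal sketch of that resolution step follows; the marker IDs and object names are invented, and a real system would obtain the (id, pose) detections from a marker-detection library such as an ArUco detector rather than a hard-coded list.

```python
# Hypothetical registry: which fiducial marker stands for which virtual object.
MARKER_REGISTRY = {
    7: "book_cover",
    12: "page_turn_tab",
}

def resolve_detections(detections):
    """Map (marker_id, pose) detections to (virtual_object, pose) pairs,
    dropping markers that are not registered."""
    return [(MARKER_REGISTRY[mid], pose)
            for mid, pose in detections
            if mid in MARKER_REGISTRY]

frame = [(7, (0.1, 0.2, 0.5)), (99, (0.0, 0.0, 1.0))]  # one known, one unknown marker
print(resolve_detections(frame))
```

Silently dropping unregistered IDs keeps the VR scene stable when the camera picks up stray markers or false detections.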


2020 ◽  
Vol 96 (4) ◽  
pp. 590-605 ◽  
Author(s):  
Subhash Jha ◽  
M.S. Balaji ◽  
Joann Peck ◽  
Jared Oakley ◽  
George D. Deitz

Author(s):  
Richard L. Greatbatch ◽  
Hyungil Kim ◽  
Zachary R. Doerzaph ◽  
Robert Llaneras

New automated driving systems are constantly being developed and integrated into vehicles. At the current state of technology, these features still require drivers to monitor performance and resume control when required by the systems. To cue drivers to take control, a takeover request (TOR) is presented with auditory, visual, and haptic cues. To characterize current TOR practices, a literature review was conducted covering types of human-machine interfaces (HMIs) and their associated message presentation. Twenty-six articles were identified after searching keywords across journal articles and conference proceedings. HMIs and message types were identified and classified. Results indicated that TORs are more commonly used as general alerts to draw driver attention to the driving task, rather than to request a specific action from drivers or to explain the context of the TOR. The literature suggests that future systems may focus not only on alerting drivers but also on providing additional context for those alerts.
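The review's distinction between a general alert and a context-carrying request can be sketched as two message shapes sharing the same multimodal dispatch. The field names and message structure below are assumptions for illustration, not a standardized TOR format.

```python
def build_tor(modalities, action=None, context=None):
    """Compose a TOR message: a general alert by default, or a contextual
    request when a specific action and/or reason is supplied."""
    msg = {"type": "general_alert", "modalities": sorted(modalities)}
    if action or context:
        msg["type"] = "contextual_request"
        msg["action"] = action
        msg["context"] = context
    return msg

# Most systems in the review issue plain alerts:
print(build_tor({"auditory", "visual", "haptic"}))
# Future systems, per the review, would add action and context:
print(build_tor({"visual", "haptic"}, action="take_steering",
                context="lane markings lost"))
```

The `modalities` set mirrors the auditory/visual/haptic cue channels the abstract lists; which subset a vehicle uses is an HMI design choice.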


2020 ◽  
Vol 11 (1) ◽  
Author(s):  
Changjin Wan ◽  
Pingqiang Cai ◽  
Xintong Guo ◽  
Ming Wang ◽  
Naoji Matsuhisa ◽  
...  

Human behaviors are extremely sophisticated, relying on an adaptive, plastic, and event-driven network of sensory neurons. Such a neuronal system analyzes multiple sensory cues efficiently to establish an accurate depiction of the environment. Here, we develop a bimodal artificial sensory neuron to implement sensory fusion processes. The bimodal artificial sensory neuron collects optic and pressure information from a photodetector and pressure sensors, respectively, transmits the bimodal information through an ionic cable, and integrates it into post-synaptic currents via a synaptic transistor. The sensory neuron can be excited to multiple levels by synchronizing the two sensory cues, which enables the manipulation of skeletal myotubes and a robotic hand. Furthermore, the enhanced recognition capability achieved with fused visual/haptic cues is confirmed by simulation of a multi-transparency pattern recognition task. Our biomimetic design has the potential to advance technologies in cyborg and neuromorphic systems by endowing them with supramodal perceptual capabilities.
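The fusion principle described above can be illustrated with a toy model: each modality contributes to the post-synaptic current, and synchronized bimodal input excites the neuron beyond the sum of either cue alone. The weights and the synchrony gain below are invented for illustration; the actual device integrates currents in hardware via a synaptic transistor.

```python
def postsynaptic_current(optic: float, pressure: float,
                         sync_gain: float = 0.5) -> float:
    """Sum the two sensory contributions, with a superadditive boost
    when both cues are active at the same time (both > 0)."""
    current = optic + pressure
    if optic > 0 and pressure > 0:
        current += sync_gain * min(optic, pressure)
    return current

print(postsynaptic_current(1.0, 0.0))   # unimodal: 1.0
print(postsynaptic_current(1.0, 1.0))   # synchronized bimodal: 2.5
```

The superadditive branch is what gives the neuron its multiple excitation levels: a downstream actuator (myotube or robotic hand) can threshold on the higher level to respond only to coincident visual/haptic input.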

