“Blurry Touch Finger”: Touch-Based Interaction for Mobile Virtual Reality with Clip-on Lenses

2020 ◽  
Vol 10 (21) ◽  
pp. 7920
Author(s):  
Youngwon Ryan Kim ◽  
Suhan Park ◽  
Gerard J. Kim

In this paper, we propose and explore a touchscreen-based interaction technique, called the “Blurry Touch Finger,” for EasyVR, a mobile VR platform with non-isolating clip-on glasses that leave the fingers free to reach the screen. We demonstrate that, with the proposed technique, the user can accurately select virtual objects seen under the lenses directly with the fingers, even though the fingers are blurred and physically occlude the target object. This is possible owing to binocular rivalry, which renders the fingertips semi-transparent. We carried out a first-stage basic evaluation assessing the object selection performance and general usability of Blurry Touch Finger. The study revealed that, for objects with screen-space sizes greater than about 0.5 cm, the selection performance and usability of Blurry Touch Finger, as applied in the EasyVR configuration, were comparable to or higher than those of both the conventional head-directed and hand/controller-based ray-casting selection methods. However, for smaller objects, well below the size of the fingertip, touch-based selection performed worse and was less usable, owing to the usual fat-finger problem and the difficulty of stereoscopic focusing.

2014 ◽  
Vol 644-650 ◽  
pp. 3988-3993
Author(s):  
Zhong Lan Luo ◽  
Kai Lu ◽  
Xiao Ping Wang

3D object picking, or 3D object selection, is an important human-computer interaction technique: by touching an object on the 2D screen, users can change properties of the corresponding object in the 3D virtual world. 3D object picking is widely studied and applied in three situations: 1) the target object is significantly occluded; 2) the target object is small; and 3) the target object is in a dynamically changing location. Most previous 3D object picking techniques were developed and tested in static or simple contexts, while many 3D applications now incorporate small and dynamic objects as part of their typical interaction, which existing techniques do not handle well on today’s touchscreens. This paper therefore proposes a new color-based picking technique for 3D scenes to solve this problem. We provide two solutions for different application scenes: for a target object whose surface has a fixed color, we take that fixed color as the picking tag; for an object whose surface has a customized texture, we regard an invisible color tag that we embed as the picking tag. The evaluation shows significant increases in picking accuracy across many scenes compared with an improved ray-casting picking technique.
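The color-tag idea above follows the classic color-ID picking pattern: each selectable object is rendered in a unique flat color to an offscreen buffer, and the pixel under the touch point is read back and decoded to an object ID. A minimal sketch of that pattern (the function names and the black-background convention are illustrative assumptions, not code from the paper):

```python
def encode_id(obj_id: int) -> tuple:
    """Pack a 24-bit object ID into an (R, G, B) picking-tag color."""
    return ((obj_id >> 16) & 0xFF, (obj_id >> 8) & 0xFF, obj_id & 0xFF)

def decode_id(rgb: tuple) -> int:
    """Recover the object ID from the color read back under the touch."""
    r, g, b = rgb
    return (r << 16) | (g << 8) | b

def pick(offscreen_buffer, x: int, y: int):
    """Return the ID of the object whose tag color covers pixel (x, y),
    or None for the background (encoded here as pure black)."""
    rgb = offscreen_buffer[y][x]
    return None if rgb == (0, 0, 0) else decode_id(rgb)
```

In a real renderer the offscreen buffer would come from something like `glReadPixels` on a framebuffer object; the decoding step is the same.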


2019 ◽  
Vol 27 (1) ◽  
pp. 68-79 ◽  
Author(s):  
Veronica Weser ◽  
Dennis R. Proffitt

We developed a novel interaction technique that allows virtual reality (VR) users to experience “weight” when hefting virtual, weightless objects. With this technique the perception of weight is evoked via constraints on the speed with which objects can be lifted. When hefted, heavier virtual objects move slower than lighter virtual objects. If lifters move faster than the lifted object, the object will fall. This constraint causes lifters to move slowly when lifting heavy objects. In two studies we showed that the size-weight illusion (SWI) is evoked when this technique is employed. The SWI occurs when two items of identical weight and different size are lifted and the smaller item is perceived as heavier than the larger item. The persistence of this illusion in VR indicates that participants bring their real-world knowledge of the relationship between size and weight to their virtual experience, and suggests that our interaction technique succeeds in making the visible tangible.
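The speed-constraint mechanism described above can be sketched as a per-frame update in which the lifted object's maximum follow speed scales inversely with its mass, and the grip breaks when the hand outruns the object by more than a threshold. The constants and update rule here are illustrative assumptions, not the authors' implementation:

```python
def update_lift(obj_y, hand_y, mass, dt, max_speed_base=2.0, grip_radius=0.15):
    """Advance the lifted object's height for one frame.

    Heavier objects have a lower speed ceiling; if the hand gets more than
    grip_radius ahead of the object, the object is dropped.
    Returns (new_obj_y, still_held).
    """
    max_speed = max_speed_base / mass          # heavier => slower ceiling
    target_step = hand_y - obj_y               # where the hand wants the object
    max_step = max_speed * dt
    step = max(-max_step, min(max_step, target_step))  # clamp to speed limit
    new_obj_y = obj_y + step
    still_held = abs(hand_y - new_obj_y) <= grip_radius
    return new_obj_y, still_held
```

Moving the hand slowly keeps even a heavy object held; a fast upward jerk leaves the heavy object behind and drops it, which is what induces slower lifting for "heavy" objects.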


2012 ◽  
Vol 11 (3) ◽  
pp. 25-31 ◽  
Author(s):  
Joon Hao Chuah ◽  
Benjamin Lok

Input devices such as Nintendo Wiimotes are often used to select and manipulate virtual objects. While simple to use and easily available, these devices have some limitations. When used with common large displays such as televisions, they support only indirect manipulation. These devices also require the user to learn and remember which buttons map to which functions. We propose overcoming these limitations by using a smartphone as an interaction device. Smartphones, like Wiimotes, are readily available and easy to operate. Unlike the Wiimote, the smartphone has a touchscreen that can display the selected object, allowing the user to directly manipulate the object. Further, the touchscreen can customize the interface and provide buttons with clearly labeled functions specific to the object. We report on the lessons learned in integrating and using a smartphone as the interaction device for two applications. The first is a mixed reality game focused on general object selection and pose manipulation. We used this game in a pilot study evaluating usability. The second is an adaptation of an existing virtual reality application. This application demonstrated the ease of adaptation as well as improvements from using a smartphone.


2021 ◽  
Vol 5 (ISS) ◽  
pp. 1-23
Author(s):  
Marco Moran-Ledesma ◽  
Oliver Schneider ◽  
Mark Hancock

When interacting with virtual reality (VR) applications like CAD and open-world games, people may want to use gestures as a means of leveraging their knowledge from the physical world. However, people may prefer physical props over handheld controllers to input gestures in VR. We present an elicitation study where 21 participants chose from 95 props to perform manipulative gestures for 20 CAD-like and open-world game-like referents. When analyzing this data, we found existing methods for elicitation studies were insufficient to describe gestures with props, or to measure agreement with prop selection (i.e., agreement between sets of items). We proceeded by describing gestures as context-free grammars, capturing how different props were used in similar roles in a given gesture. We present gesture and prop agreement scores using a generalized agreement score that we developed to compare multiple selections rather than a single selection. We found that props were selected based on their resemblance to virtual objects and the actions they afforded; that gesture and prop agreement depended on the referent, with some referents leading to similar gesture choices, while others led to similar prop choices; and that a small set of carefully chosen props can support multiple gestures.
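The generalized agreement score mentioned above compares sets of selected items rather than single selections. One plausible way to measure agreement between sets (an assumption for illustration; the paper's exact formula may differ) is the mean pairwise Jaccard similarity across participants' selections:

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Similarity of two selections as |intersection| / |union|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def set_agreement(selections):
    """Mean pairwise Jaccard similarity across participants' selections
    for one referent (1.0 = everyone chose exactly the same set)."""
    pairs = list(combinations(selections, 2))
    if not pairs:
        return 1.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```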


Author(s):  
Marisa Pascarelli Agrello ◽  
Marianina Impagliazzo ◽  
Joaquim José Escola

In this article we present an experience with the use of augmented reality (AR) and virtual reality (VR) software in scenarios for science teaching, aiming to serve the Education 4.0 era through the manipulation of virtual objects. Though their applications differ, the two technologies are complementary and serve as additional tools for teachers, with the goal of raising the quality of lessons and generating meaningful learning, forming a bridge between education and technology. As virtual learning objects, they should be used in the classroom to enrich practical experiences through the virtual representation of themes and contexts, making the process of apprehending the world more active, contextualized, and effective. Keywords: virtual reality, augmented reality, science teaching, educational technologies.


PLoS ONE ◽  
2021 ◽  
Vol 16 (10) ◽  
pp. e0258103
Author(s):  
Andreas Bueckle ◽  
Kilian Buehling ◽  
Patrick C. Shih ◽  
Katy Börner

Working with organs and extracted tissue blocks is an essential task in many medical surgery and anatomy environments. In order to prepare specimens from human donors for further analysis, wet-bench workers must properly dissect human tissue and collect metadata for downstream analysis, including information about the spatial origin of tissue. The Registration User Interface (RUI) was developed to allow stakeholders in the Human Biomolecular Atlas Program (HuBMAP) to register tissue blocks—i.e., to record the size, position, and orientation of human tissue data with regard to reference organs. The RUI has been used by tissue mapping centers across the HuBMAP consortium to register a total of 45 kidney, spleen, and colon tissue blocks, with planned support for 17 organs in the near future. In this paper, we compare three setups for registering one 3D tissue block object to another 3D reference organ (target) object. The first setup is a 2D Desktop implementation featuring a traditional screen, mouse, and keyboard interface. The remaining setups are both virtual reality (VR) versions of the RUI: VR Tabletop, where users sit at a physical desk which is replicated in virtual space; VR Standup, where users stand upright while performing their tasks. All three setups were implemented using the Unity game engine. We then ran a user study for these three setups involving 42 human subjects completing 14 increasingly difficult and then 30 identical tasks in sequence and reporting position accuracy, rotation accuracy, completion time, and satisfaction. All study materials were made available in support of future study replication, alongside videos documenting our setups. 
We found that while VR Tabletop and VR Standup users are about three times as fast and about a third more accurate in terms of rotation than 2D Desktop users (for the sequence of 30 identical tasks), there are no significant differences between the three setups for position accuracy when normalized by the height of the virtual kidney across setups. When extrapolating from the 2D Desktop setup with a 113-mm-tall kidney, the absolute performance values for the 2D Desktop version (22.6 seconds per task, 5.88 degrees rotation, and 1.32 mm position accuracy after 8.3 tasks in the series of 30 identical tasks) confirm that the 2D Desktop interface is well-suited for allowing users in HuBMAP to register tissue blocks at a speed and accuracy that meets the needs of experts performing tissue dissection. In addition, the 2D Desktop setup is cheaper, easier to learn, and more practical for wet-bench environments than the VR setups.
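Position and rotation accuracy of the kind reported above are commonly computed as the Euclidean distance between true and placed block centers, and the smallest angle between the two orientations. A minimal sketch under those assumptions (the quaternion convention and units are illustrative, not taken from the study's code):

```python
import math

def position_error(p_true, p_placed):
    """Euclidean distance between true and placed block centers (e.g., mm)."""
    return math.dist(p_true, p_placed)

def rotation_error_deg(q_true, q_placed):
    """Smallest angle in degrees between two unit quaternions (w, x, y, z).

    The absolute value handles the double cover (q and -q are the same
    rotation); the clamp guards against floating-point overshoot.
    """
    dot = abs(sum(a * b for a, b in zip(q_true, q_placed)))
    dot = min(1.0, dot)
    return math.degrees(2.0 * math.acos(dot))
```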


Disputatio ◽  
2017 ◽  
Vol 9 (46) ◽  
pp. 309-352 ◽  
Author(s):  
David J. Chalmers

Abstract I argue that virtual reality is a sort of genuine reality. In particular, I argue for virtual digitalism, on which virtual objects are real digital objects, and against virtual fictionalism, on which virtual objects are fictional objects. I also argue that perception in virtual reality need not be illusory, and that life in virtual worlds can have roughly the same sort of value as life in non-virtual worlds.


Author(s):  
Gabriel Zachmann

Collision detection is one of the enabling technologies in many areas, such as virtual assembly simulation, physically-based simulation, serious games, and virtual-reality based medical training. This chapter will provide a number of techniques and algorithms that provide efficient, real-time collision detection for virtual objects. They are applicable to various kinds of objects and are easy to implement.
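As a concrete example of the kind of building block such techniques rest on, here is the standard axis-aligned bounding box (AABB) overlap test used in the broad phase of most real-time collision detection pipelines (a generic sketch, not code from the chapter). Two boxes overlap if and only if their intervals overlap on every axis:

```python
def aabb_overlap(min_a, max_a, min_b, max_b) -> bool:
    """Each argument is an (x, y, z) corner; a box spans [min, max] per axis.
    Returns True when the two boxes intersect (touching counts)."""
    return all(min_a[i] <= max_b[i] and min_b[i] <= max_a[i] for i in range(3))
```

Broad-phase tests like this cheaply discard most object pairs so that the expensive narrow-phase tests the chapter covers run only on likely collisions.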

