A Non-Touchscreen Tactile Wearable Interface as an Alternative to Touchscreen-Based Wearable Devices

Sensors, 2020, Vol 20 (5), pp. 1275
Author(s): Hyoseok Yoon, Se-Ho Park

Current consumer wearable devices such as smartwatches mostly rely on touchscreen-based user interfaces. Even though touch-based user interfaces help smartphone users quickly adapt to wearable devices with touchscreens, they have several limitations. In this paper, we propose a non-touchscreen tactile wearable interface as an alternative to touchscreens on wearable devices. We designed and implemented a joystick-integrated smartwatch prototype to demonstrate our non-touchscreen tactile wearable interface. We iteratively refined the prototype to polish both the interaction ideas and the prototype integration. To show the feasibility of our approach, we compared the form factor of our prototype against nine recent commercial smartwatches in terms of their dimensions. We also report the response time and accuracy of our wearable interface to support our rationale for an alternative, usable wearable UI. With the proposed tactile wearable user interface, we believe our approach may serve as a cohesive, single interaction device enabling various cross-device interaction scenarios and applications.
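The abstract does not detail the event mapping, so the sketch below is only a minimal, hypothetical illustration of how a four-way joystick with a press action could stand in for common touch gestures on a watch UI. All names and the mapping itself are assumptions, not the authors' implementation.

```typescript
// Hypothetical sketch: mapping a four-way joystick (plus press) to watch UI
// actions as an alternative to swipe/tap gestures on a touchscreen.
type JoystickEvent = "up" | "down" | "left" | "right" | "press";

interface WatchScreen {
  scroll(delta: number): void;        // move the focused list up or down
  navigate(direction: -1 | 1): void;  // switch between watch faces/apps
  select(): void;                     // confirm the focused item
}

function handleJoystick(event: JoystickEvent, screen: WatchScreen): void {
  switch (event) {
    case "up":    screen.scroll(-1);   break; // replaces swipe down
    case "down":  screen.scroll(+1);   break; // replaces swipe up
    case "left":  screen.navigate(-1); break; // replaces swipe right
    case "right": screen.navigate(+1); break; // replaces swipe left
    case "press": screen.select();     break; // replaces tap
  }
}
```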


2011, Vol 3 (3), pp. 28-35
Author(s): Claas Ahlrichs, Michael Lawo, Hendrik Iben

In the future, mobile and wearable devices will increasingly be used for interaction with surrounding technologies. When developing applications for those devices, one usually has to implement the same application for each individual device, so a unified framework could drastically reduce development effort. This paper presents a framework that facilitates the development of context-aware user interfaces (UIs) with reusable components for those devices. It is based on an abstract description of an envisioned UI, which is used to generate a context- and device-specific representation at run-time. Rendition in various modalities and adaptation of the generated representation are also supported.
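As a rough illustration of the core idea, an abstract UI description separated from its rendition might look like the following sketch, where a graphical or auditory representation is chosen at run-time from the sensed context. The types and selection logic are assumptions for illustration, not the framework's actual API.

```typescript
// Minimal sketch of an abstract UI description turned into a context- and
// device-specific representation at run-time (illustrative assumptions only).
type AbstractElement =
  | { kind: "output"; label: string; value: string }
  | { kind: "trigger"; label: string; action: () => void };

interface RenderContext {
  device: "smartphone" | "head-mounted-display" | "wrist-worn";
  handsFree: boolean; // e.g. derived from the sensed user context
}

function render(ui: AbstractElement[], ctx: RenderContext): string[] {
  return ui.map((el) => {
    if (ctx.handsFree) {
      // auditory rendition for hands-free contexts
      return el.kind === "output"
        ? `speak: ${el.label} is ${el.value}`
        : `listen for: "${el.label}"`;
    }
    // graphical rendition otherwise, condensed on small displays
    const compact = ctx.device !== "smartphone";
    return el.kind === "output"
      ? (compact ? el.value : `${el.label}: ${el.value}`)
      : `[button] ${el.label}`;
  });
}
```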


Author(s): Joran Deschamps, Jonas Ries

Advanced light microscopy methods are becoming increasingly popular in biological research. Their ease of use depends, besides experimental aspects, on intuitive user interfaces. The open-source software Micro-Manager offers a universal interface for microscope control but requires implementing plugins to further tailor it to specific systems. Since even similar devices can have different Micro-Manager properties (such as power percentage versus absolute power), transferring user interfaces to other systems is usually very restricted. We developed Easier Micro-Manager User interface (EMU), a Micro-Manager plugin, to simplify building flexible and reconfigurable user interfaces. EMU offers a choice of interfaces that are quickly ready to use thanks to an intuitive configuration menu. In particular, the configuration menu allows mapping device properties to the various functions of the interface in a few clicks. Exchanging or adding new devices to the microscope no longer requires rewriting code. The EMU framework also simplifies implementing a new interface by providing the configuration and device-interaction mechanisms. The user interface can be built by using a drag-and-drop tool in one's favorite Java development environment and writing a few lines of code for compatibility with EMU. Micro-Manager users now have a powerful tool to improve the user experience on their instruments. EMU interfaces can be easily transferred to new microscopes and shared with other research groups. In the future, newly developed interfaces will be added to EMU to benefit the whole community.
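The following sketch illustrates only the general concept the abstract describes, namely mapping stable UI functions to configurable device properties (including a unit conversion such as percentage versus absolute power). It is written in TypeScript for consistency with the other sketches here and is emphatically not EMU's actual Java API.

```typescript
// Conceptual sketch (not EMU's API): the UI refers to stable function names,
// and a configuration maps each name to a concrete device property, so
// exchanging a device only means editing the mapping, not the interface code.
interface PropertyMapping {
  device: string;                     // e.g. "Laser-561" (illustrative)
  property: string;                   // e.g. "Power (%)" or "Power (mW)"
  toDevice(uiValue: number): number;  // convert the UI value to the device unit
}

class DeviceFacade {
  constructor(
    private mappings: Map<string, PropertyMapping>,
    private setProperty: (device: string, prop: string, value: number) => void,
  ) {}

  // The UI only uses a stable function name such as "laserPower".
  set(uiFunction: string, uiValue: number): void {
    const m = this.mappings.get(uiFunction);
    if (!m) throw new Error(`No mapping configured for ${uiFunction}`);
    this.setProperty(m.device, m.property, m.toDevice(uiValue));
  }
}
```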


2020, Vol 23 (4), pp. 25-29
Author(s): Sangeun Oh, Ahyeon Kim, Sunjae Lee, Kilho Lee, Dae R. Jeong, ...

Information, 2021, Vol 12 (4), pp. 162
Author(s): Soyeon Kim, René van Egmond, Riender Happee

In automated driving, the user interface plays an essential role in guiding transitions between automated and manual driving. This literature review identified 25 studies that explicitly examined the effectiveness of user interfaces in automated driving. Our main selection criterion was how the user interface (UI) affected take-over performance at higher automation levels that allow drivers to take their eyes off the road (SAE Level 3 and 4). We categorized UI factors from an automated vehicle-related information perspective. Short take-over times are consistently associated with take-over requests (TORs) initiated through the auditory modality with high urgency levels. On the other hand, take-over requests displayed directly on non-driving-related task devices and augmented reality do not affect take-over time. Additional explanations of the take-over situation, surrounding and vehicle information presented while driving, and take-over guidance information were found to improve situational awareness. Hence, we conclude that advanced user interfaces can enhance the safety and acceptance of automated driving. Most studies showed positive effects of advanced UIs, but a number of studies showed no significant benefits, and a few showed negative effects, which may be associated with information overload. The occurrence of positive and negative results for similar UI concepts in different studies highlights the need for systematic UI testing across driving conditions and driver characteristics. Our findings suggest that future UI studies of automated vehicles should focus on trust calibration and on enhancing situational awareness in various scenarios.


Author(s): Randall Spain, Jason Saville, Barry Lui, Donia Slack, Edward Hill, ...

Because advances in broadband capabilities will soon allow first responders to access and use many forms of data when responding to emergencies, it is becoming critically important to design heads-up displays that present first responders with information in a manner that does not induce extraneous mental workload or cause undue interaction errors. Virtual reality offers a unique medium for envisioning and testing user interface concepts in a realistic and controlled environment. In this paper, we describe a virtual reality-based emergency response scenario that was designed to support user experience research for evaluating the efficacy of intelligent user interfaces for firefighters. We describe the results of a usability test that captured firefighters' feedback and reactions to the VR scenario and to the prototype intelligent user interface that presented them with task-critical information through the VR headset. The paper concludes with lessons learned from our development process and a discussion of plans for future research.


2021, Vol 5 (EICS), pp. 1-29
Author(s): Arthur Sluÿters, Jean Vanderdonckt, Radu-Daniel Vatavu

Intra-platform plasticity typically assumes that the display of a computing platform remains fixed and rigid during interactions with the platform, in contrast to reconfigurable displays, which can change form depending on the context of use. In this paper, we present a model-based approach for designing and deploying graphical user interfaces that support intra-platform plasticity for reconfigurable displays. We instantiate the model for E3Screen, a new device that expands a conventional laptop with two slidable, rotatable, and foldable lateral displays, enabling slidable user interfaces. Based on a UML class diagram as a domain model and a SCRUD list as a task model, we define an abstract user interface as interaction units with a corresponding master-detail design pattern. We then map the abstract user interface to a concrete user interface by applying rules for reconfiguration, concrete interaction, unit allocation, and widget selection, and implement it in JavaScript. In a first experiment, we determine the display configurations most preferred by users, which we organize in the form of a state-transition diagram. In a second experiment, we address reconfiguration rules and widget selection rules. A third experiment provides insights into the impact of the lateral displays on a visual search task.
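A simplified, hypothetical widget-selection rule of the kind the abstract describes might look as follows; the types, display configurations, and rules are illustrative assumptions rather than the paper's exact rule set.

```typescript
// Sketch of a widget-selection rule mapping an abstract interaction unit
// (master-detail pattern) to a concrete widget, depending on how many
// lateral displays are currently deployed. Illustrative assumptions only.
type DisplayConfig = "laptop-only" | "one-lateral" | "two-lateral";

interface InteractionUnit {
  role: "master" | "detail";
  attribute: { name: string; type: "string" | "enum" | "date" };
}

function selectWidget(unit: InteractionUnit, config: DisplayConfig): string {
  if (unit.role === "master") {
    // with lateral displays available, the master list gets more vertical space
    return config === "laptop-only" ? "compact-list" : "full-height-list";
  }
  switch (unit.attribute.type) {
    case "enum": return config === "two-lateral" ? "radio-group" : "dropdown";
    case "date": return "date-picker";
    default:     return "text-field";
  }
}
```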


Author(s): Henry Larkin

Purpose – The purpose of this paper is to investigate the feasibility of creating a declarative user interface language suitable for rapid prototyping of mobile and Web apps. Moreover, this paper presents a new framework for creating responsive user interfaces using JavaScript. Design/methodology/approach – Very little existing research has been done on JavaScript-specific declarative user interface (UI) languages for mobile Web apps. This paper introduces a new framework, along with several case studies that create modern responsive designs programmatically. Findings – The fully implemented prototype verifies the feasibility of a JavaScript-based declarative user interface library. This paper demonstrates that existing solutions are unwieldy and cumbersome when dynamically creating and adjusting nodes within a visual syntax of program code. Originality/value – This paper presents the Guix.js platform, a declarative UI library for rapid development of Web-based mobile interfaces in JavaScript.
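For illustration only, a declarative UI of the kind the paper advocates might be expressed as a plain object tree that a small renderer turns into DOM nodes. This sketch is an assumption about the general style, not Guix.js's actual API.

```typescript
// Illustrative declarative UI tree plus a naive renderer (not Guix.js's API).
interface ViewNode {
  type: "stack" | "label" | "button";
  text?: string;
  onTap?: () => void;
  children?: ViewNode[];
}

const screen: ViewNode = {
  type: "stack",
  children: [
    { type: "label", text: "Welcome" },
    { type: "button", text: "Continue", onTap: () => console.log("tapped") },
  ],
};

// Turn the declarative description into DOM nodes programmatically.
function renderNode(node: ViewNode): HTMLElement {
  const el = document.createElement(node.type === "button" ? "button" : "div");
  if (node.text) el.textContent = node.text;
  if (node.onTap) el.addEventListener("click", node.onTap);
  (node.children ?? []).forEach((child) => el.appendChild(renderNode(child)));
  return el;
}

document.body.appendChild(renderNode(screen));
```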


Robotica, 2007, Vol 25 (5), pp. 521-527
Author(s): Harsha Medicherla, Ali Sekmen

SUMMARY: An understanding of how humans and robots can successfully interact to accomplish specific tasks is crucial in creating more sophisticated robots that may eventually become an integral part of human societies. A social robot needs to be able to learn the preferences and capabilities of the people with whom it interacts so that it can adapt its behaviors for more efficient and friendly interaction. Advances in human–computer interaction technologies have been widely used in improving human–robot interaction (HRI). It is now possible to interact with robots via natural communication means such as speech. In this paper, an innovative approach for HRI via voice-controllable intelligent user interfaces is described, along with the design and implementation of such interfaces. The traditional approaches to human–robot user interface design are explained and the advantages of the proposed approach are presented. The designed intelligent user interface, which learns user preferences and capabilities over time, can be controlled with voice. The system was successfully implemented and tested on a Pioneer 3-AT mobile robot. Twenty participants, who were assessed on spatial reasoning ability, directed the robot in spatial navigation tasks to evaluate the effectiveness of voice control in HRI. Time to complete the task, number of steps, and errors were collected. Results indicated that spatial reasoning ability and voice control were reliable predictors of the efficiency of robot teleoperation. Seventy-five percent of the subjects with high spatial reasoning ability preferred voice control over manual control. The effect of spatial reasoning ability on teleoperation was lower with voice control than with manual control.
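A minimal sketch of the voice-control idea, assuming a simple grammar of spoken navigation commands, is shown below. The command set, phrasing, and parsing are illustrative assumptions and not the authors' Pioneer 3-AT implementation.

```typescript
// Hypothetical sketch: recognized utterances are mapped to discrete
// navigation commands for a mobile robot (illustrative grammar only).
type NavCommand =
  | { action: "move"; meters: number }
  | { action: "turn"; degrees: number }  // positive = left, negative = right
  | { action: "stop" };

function parseUtterance(utterance: string): NavCommand | null {
  const text = utterance.toLowerCase().trim();

  const forward = text.match(/^go forward (\d+) meters?$/);
  if (forward) return { action: "move", meters: Number(forward[1]) };

  const turn = text.match(/^turn (left|right) (\d+) degrees?$/);
  if (turn) {
    const sign = turn[1] === "left" ? 1 : -1;
    return { action: "turn", degrees: sign * Number(turn[2]) };
  }

  if (text === "stop") return { action: "stop" };
  return null; // unrecognized: prompt the user to repeat
}

// Example: parseUtterance("turn left 90 degrees") -> { action: "turn", degrees: 90 }
```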

