AugIR Meets GestureCards: A Digital Sketching Environment for Gesture-Based Applications

Author(s):  
Marc Hesenius ◽  
Markus Kleffmann ◽  
Volker Gruhn

Abstract To gain a common understanding of an application’s layouts, dialogs, and interaction flows, development teams often sketch user interfaces (UIs). Nowadays, they must also define multi-touch gestures, but tools for sketching UIs often lack support for custom gestures and typically integrate only a basic predefined gesture set, which might not suffice to tailor the interaction to the desired use cases. Furthermore, sketching can be enhanced with digital means, but it remains unclear whether digital sketching is actually beneficial when designing gesture-based applications. We extended the AugIR, a digital sketching environment, with GestureCards, a hybrid gesture notation, to allow software engineers to define custom gestures when sketching UIs. We evaluated our approach in a user study contrasting digital and analog sketching of gesture-based UIs.

2015 ◽  
Vol 78 (2-2) ◽  
Author(s):  
Nuraini Hidayah Sulaiman ◽  
Masitah Ghazali

Guidelines for designing and developing a learning prototype compatible with the limited capabilities of children with Cerebral Palsy (CP) are established in the form of a model, known as the Learning Software User Interface Design Model (LSUIDM), to ensure children with CP are able to grasp the concepts of a learning software application prototype. In this paper, the LSUIDM is applied in developing a learning software application for children with CP. We present a user study evaluating a children's education game for children with CP at Pemulihan dalam Komuniti in Johor Bahru. The findings from the user study show that the game, which was built based on the LSUIDM, can be applied in the learning process for children with CP and, most notably, that the children are engaged and excited while using the software. This paper highlights the lessons learned from the user study, which should be significant especially for improving the application. The results of the study show that the application proved interactive, useful, and efficient in use.


2006 ◽  
Vol 3 (1) ◽  
pp. 33-52 ◽  
Author(s):  
Zeljko Obrenovic ◽  
Dusan Starcevic

In this paper we describe how existing software development processes, such as the Rational Unified Process, can be adapted to allow disciplined and more efficient development of user interfaces. The main objective of this paper is to demonstrate that standard modeling environments based on the UML can be adapted and efficiently used for user interface development. We have integrated HCI knowledge into development processes by semantically enriching the models created in each of the process's activities. By using UML, we make HCI knowledge easier to apply for ordinary software engineers, who are usually not familiar with the results of HCI research, so these results can have broader and more practical effects. By providing a standard means for representing human-computer interaction, we can seamlessly transfer UML models of multimodal interfaces between design and specialized analysis tools. Standardization provides a significant driving force for further progress because it codifies best practices, enables and encourages reuse, and facilitates interworking between complementary tools. The proposed solutions can be valuable for software developers, who can improve the quality of user interfaces and their communication with user interface designers, as well as for human-computer interaction researchers, who can use standard methods to incorporate their results into software development processes.


PLoS ONE ◽  
2021 ◽  
Vol 16 (10) ◽  
pp. e0258103 ◽  
Author(s):  
Andreas Bueckle ◽  
Kilian Buehling ◽  
Patrick C. Shih ◽  
Katy Börner

Working with organs and extracted tissue blocks is an essential task in many medical surgery and anatomy environments. In order to prepare specimens from human donors for further analysis, wet-bench workers must properly dissect human tissue and collect metadata for downstream analysis, including information about the spatial origin of tissue. The Registration User Interface (RUI) was developed to allow stakeholders in the Human Biomolecular Atlas Program (HuBMAP) to register tissue blocks—i.e., to record the size, position, and orientation of human tissue data with regard to reference organs. The RUI has been used by tissue mapping centers across the HuBMAP consortium to register a total of 45 kidney, spleen, and colon tissue blocks, with planned support for 17 organs in the near future. In this paper, we compare three setups for registering one 3D tissue block object to another 3D reference organ (target) object. The first setup is a 2D Desktop implementation featuring a traditional screen, mouse, and keyboard interface. The remaining setups are both virtual reality (VR) versions of the RUI: VR Tabletop, where users sit at a physical desk which is replicated in virtual space; VR Standup, where users stand upright while performing their tasks. All three setups were implemented using the Unity game engine. We then ran a user study for these three setups involving 42 human subjects completing 14 increasingly difficult and then 30 identical tasks in sequence and reporting position accuracy, rotation accuracy, completion time, and satisfaction. All study materials were made available in support of future study replication, alongside videos documenting our setups. 
We found that while VR Tabletop and VR Standup users are about three times as fast and about a third more accurate in terms of rotation than 2D Desktop users (for the sequence of 30 identical tasks), there are no significant differences between the three setups for position accuracy when normalized by the height of the virtual kidney across setups. When extrapolating from the 2D Desktop setup with a 113-mm-tall kidney, the absolute performance values for the 2D Desktop version (22.6 seconds per task, 5.88 degrees rotation, and 1.32 mm position accuracy after 8.3 tasks in the series of 30 identical tasks) confirm that the 2D Desktop interface is well-suited for allowing users in HuBMAP to register tissue blocks at a speed and accuracy that meets the needs of experts performing tissue dissection. In addition, the 2D Desktop setup is cheaper, easier to learn, and more practical for wet-bench environments than the VR setups.


Author(s):  
Margaret F. Rox ◽  
Richard J. Hendrick ◽  
S. Duke Herrell ◽  
Robert J. Webster

There is a trend towards miniaturization in surgical robotics with the objective of making surgeries less invasive [1]. There has also been increasing recent interest in hand-held robots because of their ability to maintain the current surgical workflow [2, 3]. We have previously presented a system that integrates small-diameter concentric tube robots [4, 5] into a hand-held robotic device [3], as shown in Figure 1. This robot was designed for transurethral laser surgery in the prostate. It provides the surgeon with two dexterous manipulators through a 5mm port in a traditional transurethral endoscope. This system enables the surgeon to retract tissue and aim a fiber optic laser simultaneously to resect prostate tissue. This robot provides the surgeon with a total of ten degrees of freedom (DOF) that must be simultaneously coordinated, including endoscope orientation (3 DOF), endoscope insertion (1 DOF), as well as the tip position of each concentric tube manipulator (3 DOF per manipulator). In [3], a simple user interface was employed that involved thumb joysticks (which also had pushbutton capability) and a unidirectional index finger trigger, as shown in Figure 2 (Left). The thumb joysticks were mapped to manipulator tip motion in the plane of the endoscope image, and the trigger was used for motion perpendicular to the plane. Whether the finger trigger extended or retracted the tip of the concentric tube manipulator was toggled via the pushbutton capability of the thumb joystick. While surgeons could learn this mapping with some effort, and were able to use it to accomplish a cadaver study, the experiments made clear that further work was needed in creating an intuitive user interface — particularly with respect to how motion perpendicular to the image plane is controlled. 
This paper describes a first step toward improving the user interface; we integrate a bidirectional dial input in place of the unidirectional index finger trigger, so that extension and retraction perpendicular to the image plane can be controlled without the need for a pushbutton toggle. In this paper we describe the design of this dial input and present the results of a user study comparing it to the interface in [3].
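The mapping described above can be sketched in a few lines: thumb-joystick deflection drives tip motion in the endoscope image plane, while the new bidirectional dial drives extension and retraction perpendicular to it, removing the pushbutton mode toggle. This is a minimal illustrative sketch; the function name, gains, and signatures are hypothetical and not taken from the actual system in [3].

```python
# Hypothetical sketch of the input-to-motion mapping: joystick axes command
# in-plane tip velocity; a signed dial value commands depth velocity, so no
# toggle between "extend" and "retract" modes is needed. All names and gains
# are illustrative assumptions.

def tip_velocity(joy_x, joy_y, dial, gain_plane=1.0, gain_depth=1.0):
    """Map normalized inputs in [-1, 1] to a tip velocity (vx, vy, vz).

    joy_x, joy_y: thumb-joystick deflection -> motion in the image plane.
    dial: signed dial rotation -> extension (+) or retraction (-),
          replacing the unidirectional trigger plus pushbutton toggle.
    """
    vx = gain_plane * joy_x
    vy = gain_plane * joy_y
    vz = gain_depth * dial  # perpendicular to the image plane
    return (vx, vy, vz)

# Centered inputs command no motion; a negative dial value retracts the tip.
print(tip_velocity(0.0, 0.0, 0.0))    # (0.0, 0.0, 0.0)
print(tip_velocity(0.5, -0.2, -1.0))  # (0.5, -0.2, -1.0)
```

With a bidirectional dial, the sign of a single input disambiguates extension from retraction, which is the intuitiveness gain the paper evaluates.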


Author(s):  
W. DAVID HURLEY

A long-term goal for software engineers is integrating the separate processes of user interface development and modern software development. With emergent CASE technology, software engineers can begin to explore ways to achieve this integration. Exploration involves investigating candidate methodologies that let developers apply different development strategies to different parts of an interactive system. Disciplined long-term investigation requires that the fundamental principles governing each process be fixed and that evolving development methods comprising each process be accommodated. This paper proposes a computer-based process model that fixes the principles and accommodates evolving methods. Model features include a collection of software engineering and knowledge engineering techniques that supports a development organization of human and computer-based agents, a coordination activity that supports opportunistic behavior of developers, a unifying representation that leads to mutually consistent results from developers, and an extendable topology that enhances collaboration among developers while reducing their communications burden.


Author(s):  
Federico Maria Cau ◽  
Angelo Mereu ◽  
Lucio Davide Spano

In this paper, we present an intelligent tool that supports End-User Developers (EUDevs) in creating plot lines for Point and Click games on the web. We introduce a story generator and the associated user interface, which help the EUDev define the game plot starting from the images providing the game setting. In particular, we detail a pipeline for creating such game plots starting from 360-degree images. We identify salient objects in equirectangular images, and we compose the output with two other neural networks for the generation: one generating captions for 2D images and one generating the plot text. The provided suggestions can be further developed by the EUDev, who can modify the generated text and save the result. The interface supports controlling different parameters of the story generator using a user-friendly vocabulary. The results of a user study show good effectiveness and usability of the proposed interface.
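The three-stage pipeline in the abstract (salient-object detection on the equirectangular image, captioning of the detected regions, and plot-text generation seeded by the captions) can be outlined as plumbing between three model calls. The stage functions below are stand-ins for the actual neural networks, which the abstract does not specify; only the composition is shown.

```python
# Hypothetical outline of the plot-generation pipeline: each stage function
# is a placeholder for a neural model; the real system would load trained
# detectors/generators instead of these canned stand-ins.

def detect_salient_objects(equirect_image):
    # Stand-in for the salient-object detector on equirectangular input.
    return ["door", "desk", "painting"]

def caption(region):
    # Stand-in for the 2D image-captioning network applied to one region.
    return f"a {region} in the room"

def generate_plot(captions):
    # Stand-in for the plot-text generator; a real model would condition on
    # these captions (and user-set generator parameters) to draft the story.
    return "The player notices " + ", ".join(captions) + "."

def story_pipeline(equirect_image):
    regions = detect_salient_objects(equirect_image)
    return generate_plot([caption(r) for r in regions])

# The EUDev can then edit the draft text in the UI and save the result.
print(story_pipeline(None))
```

The point of the composition is that the EUDev never invokes the models directly: the interface runs the pipeline and exposes the resulting draft for editing.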

