PaperCAD: A System for Interrogating CAD Drawings Using Small Mobile Computing Devices Combined with Interactive Paper

2014 ◽  
Vol 2014 ◽  
pp. 1-13
Author(s):  
WeeSan Lee ◽  
Thomas F. Stahovich

Smartphones have become indispensable computational tools. However, some tasks can be difficult to perform on a smartphone because these devices have small displays. Here, we explore methods for augmenting the display of a smartphone, or other PDA, using interactive paper. Specifically, we present a prototype interface that enables a user to interactively interrogate technical drawings using an Anoto-based smartpen and a PDA. Our software system, called PaperCAD, enables users to query geometric information from CAD drawings printed on Anoto dot-patterned paper. For example, the user can measure a distance by drawing a dimension arrow. The system provides output to the user via a smartpen’s audio speaker and the dynamic video display of a PDA. The user can select either verbose or concise audio feedback, and the PDA displays a video image of the portion of the drawing near the pen tip. The project entails advances in the interpretation of pen input, such as a method that uses contextual information to interpret ambiguous dimensions and a technique that uses a hidden Markov model to correct interpretation errors in handwritten equations. Results of a user study suggest that our user interface design and interpretation techniques are effective and that users are highly satisfied with the system.
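The abstract does not spell out the paper's HMM formulation. As a rough illustration of how a hidden Markov model can correct recognition errors in handwritten equations, the sketch below runs Viterbi decoding over per-stroke symbol-class confidences, with a transition table standing in for equation syntax; all states, probabilities, and the example strokes are invented for illustration.

```python
import math

# Toy illustration: hidden states are coarse symbol classes; "emissions" are
# the handwriting recognizer's per-stroke confidences, and the transition
# table encodes which symbol sequences are syntactically plausible in an
# equation (all probabilities here are invented).
states = ["digit", "operator", "variable"]
trans = {
    "digit":    {"digit": 0.3, "operator": 0.5,  "variable": 0.2},
    "operator": {"digit": 0.5, "operator": 0.05, "variable": 0.45},
    "variable": {"digit": 0.1, "operator": 0.7,  "variable": 0.2},
}
start = {"digit": 0.4, "operator": 0.1, "variable": 0.5}

def viterbi(emissions):
    """emissions: list of dicts mapping state -> recognizer confidence."""
    # work in log space to avoid numerical underflow on long equations
    score = {s: math.log(start[s]) + math.log(emissions[0][s]) for s in states}
    back = []
    for e in emissions[1:]:
        prev, score, ptr = score, {}, {}
        for s in states:
            best = max(states, key=lambda p: prev[p] + math.log(trans[p][s]))
            score[s] = prev[best] + math.log(trans[best][s]) + math.log(e[s])
            ptr[s] = best
        back.append(ptr)
    # backtrack from the best-scoring final state
    path = [max(states, key=score.get)]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# The middle stroke is ambiguous between "1" and "+"; the transition model
# favors reading it as an operator between two digits.
obs = [{"digit": 0.6, "operator": 0.2, "variable": 0.2},
       {"digit": 0.3, "operator": 0.4, "variable": 0.3},
       {"digit": 0.7, "operator": 0.1, "variable": 0.2}]
decoded = viterbi(obs)  # ['digit', 'operator', 'digit']
```

Because adjacent operators get a very low transition probability, syntactic context can overrule a locally ambiguous stroke, which is the general idea behind using an HMM for this kind of post-correction.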

2015 ◽  
Vol 78 (2-2) ◽  
Author(s):  
Nuraini Hidayah Sulaiman ◽  
Masitah Ghazali

Guidelines for designing and developing a learning prototype compatible with the limited capabilities of children with Cerebral Palsy (CP) have been established in the form of a model, known as the Learning Software User Interface Design Model (LSUIDM), to ensure that children with CP are able to grasp the concepts of a learning software application prototype. In this paper, the LSUIDM is applied in developing a learning software application for children with CP. We present a user study evaluating a children's education game for CP children at Pemulihan dalam Komuniti in Johor Bahru. The findings from the user study show that the game, which was built based on the LSUIDM, can be applied in the learning process for children with CP and, most notably, that the children were engaged and excited while using the software. This paper highlights the lessons learned from the user study, which should be significant especially for improving the application. The results of the study show that the application proved interactive, useful, and efficient in use.


Author(s):  
Meghan Chandarana ◽  
Erica L. Meszaros ◽  
Anna Trujillo ◽  
B. Danette Allen

As the number of viable applications for unmanned aerial vehicle (UAV) systems increases at an exponential rate, interfaces that reduce the reliance on highly skilled engineers and pilots must be developed. Recent work aims to make use of common human communication modalities such as speech and gesture. This paper explores a multimodal natural language interface that uses a combination of speech and gesture input modalities to build complex UAV flight paths by defining trajectory segment primitives. Gesture inputs are used to define the general shape of a segment while speech inputs provide additional geometric information needed to fully characterize a trajectory segment. A user study is conducted in order to evaluate the efficacy of the multimodal interface.
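The abstract describes pairing a gesture-defined segment shape with speech-supplied geometric parameters to form trajectory segment primitives. A minimal sketch of that fusion step follows; the class, field names, and example values are assumptions for illustration, not the paper's actual API.

```python
from dataclasses import dataclass

# Hypothetical data model: each trajectory segment primitive combines a
# gesture-derived shape with speech-derived geometric parameters.
@dataclass
class Segment:
    shape: str    # from the gesture recognizer, e.g. "line" or "arc"
    params: dict  # from the speech parser, e.g. {"length_m": 10}

def fuse(gesture_events, speech_events):
    """Pair each recognized gesture with the spoken parameters that follow it."""
    return [Segment(shape=g, params=s)
            for g, s in zip(gesture_events, speech_events)]

# "line" + "ten meters", then "arc" + "radius five meters, ninety degrees"
flight_path = fuse(["line", "arc"],
                   [{"length_m": 10}, {"radius_m": 5, "sweep_deg": 90}])
```

Each modality contributes what it expresses best: gesture the qualitative shape, speech the quantitative geometry needed to fully characterize the segment.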


2021 ◽  
Vol 8 ◽  
Author(s):  
Daniel Butters ◽  
Emil T. Jonasson ◽  
Vijay M. Pawar

Supervising and controlling remote robot systems currently requires many specialised operators with knowledge of the internal state of the system in addition to the environment. For applications such as remote maintenance of future nuclear fusion reactors, the number of robots (and hence supervisors) required to maintain or decommission a facility is too large to be financially feasible. To address this issue, this work explores the idea of intelligently filtering information so that a single user can supervise multiple robots safely. We gathered feedback from participants using five methods for teleoperating a semi-autonomous multi-robot system via Virtual Reality (VR). We present a novel 3D interaction method that filters the displayed information, allowing the user to read information from the environment without being overwhelmed. The novelty of the interface design is the link between semantic and spatial filtering and the hierarchical information contained within the multi-robot system. We conducted a user study, including a cohort of expert robot teleoperators, comparing these methods, highlighting the significant effects of 3D interface design on the performance and perceived workload of a user teleoperating many robot agents in complex environments. The results from this experiment and subjective user feedback will inform future investigations that build upon this initial work.
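As a toy illustration of combining spatial and semantic filtering of robot status information, the sketch below keeps only messages that are both near the user's viewpoint and above a severity threshold. The field names, severity scale, and thresholds are assumptions, not the paper's interface.

```python
import math

# Illustrative only: filter robot status messages by spatial proximity to the
# user's VR viewpoint and by a semantic severity tag (field names assumed).
def filter_info(messages, viewpoint, radius, min_severity):
    return [m for m in messages
            if math.dist(m["pos"], viewpoint) <= radius  # spatial filter
            and m["severity"] >= min_severity]           # semantic filter

msgs = [
    {"robot": "r1", "pos": (1.0, 0.0, 0.0),  "severity": 2, "text": "arm moving"},
    {"robot": "r2", "pos": (30.0, 0.0, 0.0), "severity": 3, "text": "e-stop"},
    {"robot": "r3", "pos": (2.0, 1.0, 0.0),  "severity": 1, "text": "idle"},
]
shown = filter_info(msgs, viewpoint=(0.0, 0.0, 0.0), radius=10.0, min_severity=2)
# only r1 passes both the spatial and the semantic filter
```

Filtering on both axes at once is what keeps the displayed information proportional to its relevance, rather than to the number of robots.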


Author(s):  
Cynthia Kuo ◽  
Adrian Perrig ◽  
Jesse Walker

End users often find that security configuration interfaces are difficult to use. In this chapter, we explore how application designers can improve the design and evaluation of security configuration interfaces. We use IEEE 802.11 network configuration as a case study. First, we design and implement a configuration interface that guides users through secure network configuration. The key insight is that users have a difficult time translating their security goals into specific feature configurations. Our interface automates the translation from users’ high-level goals to low-level feature configurations. Second, we develop and conduct a user study to compare our interface design with commercially available products. We adapt existing user research methods to sidestep common difficulties in evaluating security applications. Using our configuration interface, non-expert users are able to secure their networks as well as expert users. In general, our research addresses prevalent issues in the design and evaluation of consumer-configured security applications.
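The chapter's key insight, automatically translating users' high-level security goals into low-level feature configurations, can be sketched as a lookup-driven expansion. The goal names and 802.11 settings below are illustrative assumptions, not the authors' actual mapping.

```python
# Hypothetical goal-to-feature table: each high-level goal a user can state
# expands into the concrete router settings they would otherwise pick by hand.
GOAL_MAP = {
    "prevent_unauthorized_access": {"encryption": "WPA2-PSK",
                                    "passphrase_required": True},
    "hide_network": {"ssid_broadcast": False},
}

def configure(goals):
    """Translate a list of high-level goals into a low-level feature config."""
    config = {"ssid_broadcast": True}  # router-style default, illustrative
    for goal in goals:
        config.update(GOAL_MAP[goal])
    return config

cfg = configure(["prevent_unauthorized_access", "hide_network"])
```

The point of the design is that the user only ever chooses among goals; the interface owns the goal-to-feature translation that non-experts find difficult.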


2017 ◽  
Vol 27 (09n10) ◽  
pp. 1439-1453 ◽  
Author(s):  
Sebastian Weigelt ◽  
Tobias Hey ◽  
Walter F. Tichy

Current systems with spoken language interfaces do not leverage contextual information. Therefore, they struggle with understanding speakers' intentions. We propose a system that creates a context model from user utterances to overcome this lack of information. It comprises eight types of contextual information organized in three layers: individual, conceptual, and hierarchical. We have implemented our approach as a part of the project PARSE, which aims at enabling laypersons to construct simple programs by dialog. Our implementation incrementally generates context including occurring entities and actions as well as their conceptualizations, state transitions, and other types of contextual information. Its analyses are knowledge- or rule-based (depending on the context type), but we make use of many well-known probabilistic NLP techniques. In a user study we have shown the feasibility of our approach, achieving F1 scores from 72% up to 98% depending on the type of contextual information. The context model enables us to resolve complex identity relations. However, quantifying this effect is subject to future work. Likewise, we plan to investigate whether our context model is useful for other language understanding tasks, e.g. anaphora resolution, topic analysis, or correction of automatic speech recognition errors.
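As a reminder of how per-type scores like these are computed, here is a minimal F1 sketch over gold versus predicted context annotations; the example annotations are invented, not from the study.

```python
# F1 is the harmonic mean of precision and recall over a set of annotations.
def f1(gold, predicted):
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)          # true positives
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)     # how many predictions were right
    recall = tp / len(gold)             # how many gold items were found
    return 2 * precision * recall / (precision + recall)

# Invented (utterance token, context type) annotations:
gold = {("armar", "entity"), ("go", "action"), ("fridge", "entity")}
pred = {("armar", "entity"), ("go", "action"), ("open", "action")}
score = f1(gold, pred)  # precision 2/3, recall 2/3 -> F1 = 2/3
```

Reporting F1 per context type, as the abstract does, shows which of the eight types the rule- and knowledge-based analyses handle well and which remain hard.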


2021 ◽  
Vol 5 (Supplement_1) ◽  
pp. 114-114
Author(s):  
Kara Cohen ◽  
Patricia Griffiths ◽  
Tracy Mitzner

Abstract Individuals with Mild Cognitive Impairment (MCI) face many challenges, including cognitive declines and reduced independence, which are associated with poor health outcomes. Although there is no cure for MCI, mind-body exercise classes may improve cognitive function and reduce the risk of falls (Wayne, Yeh, & Mehta, 2018). However, such classes are often not accessible for individuals with MCI due to lack of transportation, fear of being stigmatized, or inability to find instructors who have experience working with individuals with MCI (Hobson & Middleton, 2008; Rimmer, 2005). Tele-technology, such as video-conferencing software, has the potential to remove barriers to participation by allowing individuals to attend classes from home. The goal of this study was to assess the feasibility of using tele-technology to deliver mind-body classes to individuals with MCI. We evaluated technology acceptance and usability for OneClick.chat, a web-based video-conferencing platform designed for older adults. Stakeholders (4 subject matter experts, 2 individuals with MCI, and 2 care partners) participated in a user study that included questionnaires and a short interview. The technology acceptance data indicate that OneClick.chat was perceived as easy to use. Some individuals expressed privacy and security concerns, which could be addressed with additional education and support. These findings have implications for interface design, education, and training for deployment of tele-technology-delivered mind-body classes for those with MCI.


Author(s):  
Mark Colley ◽  
Pascal Jansen ◽  
Enrico Rukzio ◽  
Jan Gugenheimer

Autonomous vehicles provide new input modalities to improve interaction with in-vehicle information systems. However, due to road and driving conditions, the user's input can be perturbed, reducing interaction quality. One challenge is assessing the effects of vehicle motion on the interaction without an expensive high-fidelity simulator or a real vehicle. This work presents SwiVR-Car-Seat, a low-cost swivel seat that simulates vehicle motion using rotation. In an exploratory user study (N=18), participants sat in a virtual autonomous vehicle and performed interaction tasks using the input modalities of touch, gesture, gaze, or speech. Results show that the simulation increased the perceived realism of vehicle motion in virtual reality and the feeling of presence. Task performance was not influenced uniformly across modalities: gesture and gaze were negatively affected, while there was little impact on touch and speech. The findings can inform automotive user interface design to mitigate the adverse effects of vehicle motion on interaction.


2021 ◽  
Author(s):  
Bridget Johnson

This thesis documents three years of extensive research into the field of sonic spatial expression and is the culmination of years of fascination with all of the ways music is made. In particular, it focuses on the way sounds move through space. This research stems from artistic practice and a desire to deeply explore spatial aesthetics in sound art. A potential for further development of tools designed for aesthetic engagement with spatial attributes of music is identified. It is proposed that with new tools designed for the manipulation of spatial attributes, new spatial aesthetics might emerge. In exploring this proposition, a number of contributions to the field of spatial sound art are presented. The main approach taken is to apply new technologies to the design of spatialisation performance interfaces. It is hoped that designing novel interfaces that specifically engage with spatial parameters will afford composers and performers new ways of aesthetically engaging with space. The tools designed all aim to exhibit a high level of intuitiveness in their control systems, allowing non-expert users access to these spatially expressive tools. Additionally, the new tools aim to provide high levels of expressivity so that advanced composers looking for new ways to use space expressively may also use them. This thesis focuses on the design, development, implementation, analysis, and artistic use of new spatial interfaces. The design methodology implemented for all of the interfaces includes both testing and analysis phases that involve the composition and performance of new musical works. The development of the interfaces is closely coupled with the development of the new musical works, with each design phase applied to a new work and each new work or spatial idea exploring the new aesthetics afforded by the tools.
The assessment of these new tools takes various forms: they are assessed by critical evaluation of the new works created, by user study evaluations from other composers who utilise the tools, and, where appropriate, by quantifiable methods of evaluation adopted to assess specific spatialisation tools. The new musical interfaces presented, described, and evaluated in this document were conceived of as musical instruments, each affording new approaches to spatial expression. This document also details an extensive collection of new musical works that feature the interfaces. It concludes by suggesting future directions for this body of research and for the field of spatialisation interface design.


2016 ◽  
Vol 68 (5) ◽  
pp. 545-565 ◽  
Author(s):  
Kyoungsik Na ◽  
Jisu Lee

Purpose
The purpose of this paper is to explore the differences between collaborative and individual search techniques in a scenario-based task, focussing on query behavior, cognitive load, search time, and task type.

Design/methodology/approach
To help understand the influences on searching for relevant information in pairs or individually, the authors conducted an exploratory user study with 30 participants, using two search tasks completed in a controlled laboratory setting.

Findings
On the basis of the analysis, the authors found that collaborative search teams produced more queries, more diverse query terms, and more varied query results than those working individually. The cognitive load imposed on the participants did not differ between collaborative and individual search, except for the performance component of the NASA Task Load Index. Total search time differed significantly on average between the two conditions (individual vs. collaborative information search) for the second task, and mean total search time differed significantly between the two tasks for both conditions. The authors also found no significant relationship between query behavior and total cognitive load.

Originality/value
The findings from this study have implications for a better understanding of collaborative search interface design, searchers' cognitive load, query behavior, and collaborative information search in general.
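The study's cognitive-load comparison rests on the NASA Task Load Index; the common unweighted variant ("raw TLX") is simply the mean of the six subscale ratings on a 0-100 scale. A minimal sketch with invented ratings:

```python
# The six NASA-TLX subscales; raw TLX averages their 0-100 ratings.
SUBSCALES = ["mental", "physical", "temporal",
             "performance", "effort", "frustration"]

def raw_tlx(ratings):
    """Unweighted NASA-TLX workload score: mean of the six subscale ratings."""
    assert set(ratings) == set(SUBSCALES)
    return sum(ratings.values()) / len(SUBSCALES)

# Invented ratings for one participant in one condition:
workload = raw_tlx({"mental": 55, "physical": 20, "temporal": 40,
                    "performance": 30, "effort": 50, "frustration": 25})
# (55 + 20 + 40 + 30 + 50 + 25) / 6 = 220 / 6
```

Because the subscales are also analysed individually, a study can find (as here) an overall null difference while a single component such as performance still differs between conditions.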

