Kinesthetic Metaphors for Precise Spatial Manipulation: A Study of Object Rotation

Author(s): Ronak R. Mohanty, Vinayak R. Krishnamurthy

Abstract: In this article, we report on our investigation of kinesthetic feedback as a means to provide precision, accuracy, and mitigation of arm fatigue in spatial manipulation tasks. Most works on spatial manipulation discuss the use of haptics (kinesthetic/force and tactile) primarily as a means to offer physical realism in spatial user interfaces (SUIs). Our work offers a new perspective in terms of how force-feedback can promote precise manipulations in spatial interactions to aid manual labor, controllability, and precision. To demonstrate this, we develop, implement, and evaluate three new haptics-enabled interaction techniques (kinesthetic metaphors) for precise rotation of 3D objects. The quantitative and qualitative analyses of experiments reveal that the addition of force-feedback improves precision for each of the rotation techniques. Self-reported user feedback further exposes a novel aspect of kinesthetic manipulation in its ability to mitigate arm fatigue for close-range spatial manipulation tasks.
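The rotation techniques in this study ultimately apply ordinary 3D rotations to the manipulated object. As background, rotating a point about an arbitrary unit axis can be sketched with Rodrigues' formula (a standard result, not one of the paper's kinesthetic metaphors; the function name is illustrative):

```python
import math

def rotate(point, axis, angle_rad):
    """Rotate a 3D point about a unit-length axis by angle_rad
    using Rodrigues' rotation formula."""
    ax, ay, az = axis
    px, py, pz = point
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    dot = ax * px + ay * py + az * pz          # axis . point
    cross = (ay * pz - az * py,                # axis x point
             az * px - ax * pz,
             ax * py - ay * px)
    # p' = p cos(t) + (axis x p) sin(t) + axis (axis . p)(1 - cos(t))
    return tuple(c * p + s * cr + (1 - c) * dot * a
                 for p, cr, a in zip(point, cross, axis))
```

For example, rotating the point (1, 0, 0) by 90 degrees about the z-axis yields (0, 1, 0) up to floating-point error.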

2020, Vol. 142 (12)
Author(s): Meera C S, Pinisetti Swami Sairam, Vineeth Veeramalla, Adarsh Kumar, Mukul Kumar Gupta

Abstract: The design perspective of interfaces has strong implications for operator intuition and safety. Haptics-enabled user interfaces can enhance operator skills and improve interactivity. In this paper, an innovative method of haptic feedback in joysticks is presented for excavator control. Haptic illusion in the device is generated with a variable-stiffness actuation mechanism. The force feedback (FFB) is rendered through "haptic links," based on the effect of the digging force at each joint. The stiffness in the device varies dynamically with the load and restricts the operator's motion with a resistive torque in the range of 0–0.9 Nm. The haptic joystick aims to render high-fidelity kinesthetic feedback that can help mitigate operator error in loading operations. User evaluation with the joystick showed a 40% improvement in the volume of material removed and a significant drop in the error rate related to force patterns and collisions.
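The load-dependent resistance described above can be sketched as a simple variable-stiffness law: stiffness grows with the digging load, and the resulting resistive torque saturates at the reported 0.9 Nm ceiling. The function name and the linear load-to-stiffness gain are illustrative assumptions, not the authors' implementation:

```python
def resistive_torque(joint_load_n, deflection_rad,
                     stiffness_per_newton=0.002, torque_max_nm=0.9):
    """Variable-stiffness haptic law for a joystick axis.

    Stiffness (Nm/rad) scales linearly with the digging load at the joint,
    the torque resists the current stick deflection like a spring, and the
    result is clamped to the 0-0.9 Nm range reported in the abstract."""
    stiffness = stiffness_per_newton * joint_load_n   # load-dependent spring rate
    torque = stiffness * deflection_rad               # spring-like resistance
    return max(0.0, min(torque, torque_max_nm))       # saturate at device limit
```

With this shape, a heavier load stiffens the stick, so the operator feels the digging force rise well before reaching the device's torque limit.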


Author(s): Xiaojun Bi, Andrew Howes, Per Ola Kristensson, Antti Oulasvirta, John Williamson

This chapter introduces the field of computational interaction and explains its long tradition of research on human interaction with technology, drawing on human factors engineering, cognitive modelling, artificial intelligence and machine learning, design optimization, formal methods, and control theory. It discusses how the book as a whole is part of an argument that, embedded in an iterative design process, computational interaction design has the potential to complement human strengths and provide a means to generate inspiring and elegant designs without refuting the part played by the complicated and uncertain behaviour of humans. The chapters in this book manifest intellectual progress in the study of computational principles of interaction, demonstrated in diverse and challenging application areas such as input methods, interaction techniques, graphical user interfaces, information retrieval, information visualization, and graphic design.


Author(s): Ronak R. Mohanty, Umema H. Bohari, Vinayak, Eric Ragan

We present haptics-enabled mid-air interactions for sketching collections of three-dimensional planar curves, or "3D curve-soups," as a means for 3D design conceptualization. Haptics-based mid-air interactions have been extensively studied for the modeling of surfaces and solids. The same is not true for modeling curves; there is little work that explores spatiality, tangibility, and kinesthetics for curve modeling from the perspective of 3D sketching for conceptualization. We study pen-based mid-air interactions for free-form curve input from the perspective of manual labor, controllability, and kinesthetic feedback. For this, we implemented a simple haptics-enabled workflow for users to draw and compose collections of planar curves on a force-enabled virtual canvas. We introduce a novel force-feedback metaphor for curve drawing, and we investigate three novel rotation techniques within our workflow for both controlled and free-form sketching tasks.


Author(s): Derek Brock, Deborah Hix, Lynn Dievendorf, J. Gregory Trafton

Software user interfaces that give users more than one device for performing tasks interactively, such as a mouse and a keyboard, are now commonplace. Concerns about how to represent individual differences in patterns of use and skill acquisition in such interfaces led the authors to develop modifications to the standard format of the User Action Notation (UAN) that substantially augment the notation's expressive power. These extensions allow the reader of an interface specification to make meaningful comparisons between functionally equivalent interaction techniques and task-performance strategies in interfaces supporting multiple input devices. Furthermore, they offer researchers a new methodology for analyzing the behavioral aspects of user interfaces. These modifications are documented and their benefits discussed.


2018, Vol. 55 (1), pp. 3-26
Author(s): Floris Mosselman, Don Weenink, Marie Rosenkrantz Lindegaard

Objective: A small-scale exploration of the use of video analysis to study robberies. We analyze the use of weapons as part of the body posturing of robbers as they attempt to attain dominance. Methods: Qualitative analyses of video footage of 23 shop robberies. We used Observer XT software (version 12) for fine-grained multimodal coding, capturing the diverse bodily behavior of multiple actors simultaneously. We also constructed story lines to understand each robbery as a hermeneutic whole. Results: Robbers attain dominance by using weapons that afford aggrandizing posturing and forward movements. Guns, rather than knives, seemed to fit more easily with such posturing, and victims were more likely to show minimizing postures when confronted with guns. Thus guns, as part of aggrandizing posturing, lend more support to robbers' claims to dominance, in addition to their greater lethal power. In the cases where resistance occurred, robbers either displayed insecure body movements, minimizing postures, and correspondingly tentative weapon usage, or they failed to impose a robbery frame because the victims did not initially seem to comprehend the situation. Conclusions: Video analysis opens up a new perspective on how violent crime unfolds as sequences of bodily movements. We provide methodological recommendations and suggest a larger-scale comparative project.


2020, Vol. 2 (1), pp. 49-59
Author(s): Trisni Wahyu Ningtias, Koko Joni, Riza Alfita

Technology is developing rapidly, and one impactful application is scanning objects with a computer. Object scanning combines hardware that observes objects with software that processes the data the hardware has captured. The traditional manufacturing process without 3D scanning, involving the design, analysis, and testing of prototypes, takes a very long time and is expensive, whereas 3D scanning is considered more efficient and practical. This research is conducted to facilitate the 3D scanning process by utilizing the sensors of a Kinect 360 camera. The object is scanned with the Kinect 360 camera to obtain data on each side, and the results are processed by the system into 3D objects. Object data are collected using Eclipse software, while object rotation is performed by a stepper motor controlled by an Arduino. Based on the test results, the infrared sensor on the Kinect camera is less than optimal at reflecting light back from objects with uneven surfaces, but it works well on objects with flat surfaces.
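The turntable side of such a scanner steps the object through a full revolution and captures a depth frame at each pose. A minimal sketch of the pose schedule is below; the step count per revolution and the number of views are illustrative assumptions, and in the described system each step count would be sent to the Arduino-driven stepper motor before the Kinect captures a frame:

```python
STEPS_PER_REV = 200   # typical 1.8-degree stepper motor (assumption)
VIEWS = 8             # depth captures per full rotation (assumption)

def scan_poses(views=VIEWS, steps_per_rev=STEPS_PER_REV):
    """Yield (motor_steps, angle_deg) pairs covering one full 360-degree
    rotation, one pair per capture pose."""
    step_chunk = steps_per_rev // views
    for i in range(views):
        # Absolute step position and the corresponding turntable angle.
        yield i * step_chunk, i * (360.0 / views)
```

With the defaults, this yields eight poses 45 degrees apart, so the depth frames together cover every side of the object.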


2017
Author(s): Rachel Opitz, Tyler Johnson

This paper discusses the authors' approach to designing an interface for the Gabii Project's digital volumes that attempts to fuse elements of traditional synthetic publications and site reports with rich digital datasets. Archaeology, and classical archaeology in particular, has long engaged with questions of the formation and lived experience of towns and cities. Such studies might draw on evidence of local topography, the arrangement of the built environment, and the placement of architectural details, monuments, and inscriptions (e.g. Johnson and Millett 2012). Fundamental to the continued development of these studies is the growing body of evidence emerging from new excavations. Digital techniques for recording evidence "on the ground," notably SfM (structure from motion, also known as close-range photogrammetry) for creating detailed 3D models, and techniques for scene-level 3D modeling, have advanced rapidly in recent years. These parallel developments have opened the door for approaches to the study of the creation and experience of urban space driven by a combination of scene-level reconstruction models (van Roode et al. 2012, Paliou et al. 2011, Paliou 2013) explicitly combined with detailed SfM- or scanning-based 3D models representing stratigraphic evidence. It is essential to understand the subtle but crucial impact of the design of the user interface on the interpretation of these models. In this paper we focus on the impact of design choices for the user interface, and make connections between design choices and the broader discourse in archaeological theory surrounding the practice of the creation and consumption of archaeological knowledge. As a case in point, we take the prototype interface being developed within the Gabii Project for the publication of the Tincu House.
In discussing our own evolving practices in engagement with the archaeological record created at Gabii, we highlight some of the challenges of undertaking theoretically-situated user interface design, and their implications for the publication and study of archaeological materials.


1999, Vol. 4 (1), pp. 8-17
Author(s): G. Jansson, H. Petrie, C. Colwell, D. Kornbrot, J. Fänger, ...

This paper is a fusion of two independent studies investigating related problems concerning the use of haptic virtual environments by blind people: a study in Sweden using a PHANToM 1.5 A and one in the U.K. using an Impulse Engine 3000. In general, such devices are a most interesting option for providing blind people with information about representations of the 3D world, but the restriction at each moment to only one point of contact between observer and virtual object might decrease their effectiveness. The studies investigated the perception of virtual textures, the identification of virtual objects, and the perception of their size and angles. Both sighted (blindfolded in one study) and blind people served as participants. It was found (1) that the PHANToM can effectively render textures in the form of sandpapers and simple 3D geometric forms and (2) that the Impulse Engine can effectively render textures consisting of grooved surfaces, as well as 3D objects, although their properties were judged with some over- or underestimation. When blind and sighted participants' performance was compared, differences were found that deserve further attention. In general, the haptic devices studied demonstrate the great potential of force-feedback devices for rendering relatively simple environments, despite the restricted means they offer for exploring the virtual world. The results strongly motivate further studies of their effectiveness, especially in more complex contexts.
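Single-contact-point devices such as the PHANToM and the Impulse Engine typically render a virtual surface with a penalty force proportional to how far the stylus tip has penetrated it. A minimal sketch of this standard scheme for a flat horizontal surface is below; the stiffness value and function name are illustrative assumptions, not taken from either study:

```python
def contact_force(tip_z, surface_z=0.0, stiffness=800.0):
    """Penalty-based haptic rendering for a horizontal plane at surface_z.

    Returns the upward (+z) force in newtons pushing the single contact
    point back out of the surface, proportional to penetration depth;
    zero force when the tip is above the surface (free space)."""
    penetration = surface_z - tip_z
    return stiffness * penetration if penetration > 0 else 0.0
```

Grooved textures of the kind rendered in these studies can be built on the same idea by letting surface_z vary periodically with the lateral stylus position.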

