Stroke-Hover Intent Recognition for Mid-Air Curve Drawing Using Multi-Point Skeletal Trajectories

Author(s):  
Umema H. Bohari ◽  
Ryan Alli ◽  
Alejandra Garcia ◽  
Vinayak R. Krishnamurthy

Abstract. Drawing curves is a fundamental task in mid-air interactive applications such as 3D sketching, geometric modeling, handwriting recognition, and authentication. Existing research in mid-air drawing has focused solely on determining what the user drew, assuming that the intended curve has already been segmented from the continuous user-generated trajectory. In this work, our aim is to address the complementary problem: to determine when the user actually intended to draw, without the use of any prescribed gestures or hand-held controllers (e.g., Wii remote, HTC Vive). In our previously published work, we demonstrated that in mid-air drawing tasks, not only is it possible to statistically learn drawing intent from hand motion, but it is also perceived to be more natural by users. Our idea was to simply classify each instance of a hand trajectory as either a stroke or a hover. Our current work investigates new representations of the users’ motion, moving beyond a single tracked point (such as the palm) to richer multi-point trajectories obtained from other skeletal joints such as the wrist and elbow. We trained several binary classifiers on five such trajectory representations obtained from 3D drawing data collected from 25 users with a hand-tracking device. We compare these representations and the corresponding classifiers for predicting user intent in mid-air drawing. Our extended approach resulted in improved prediction accuracy (mean: 80.17%, min: 79.92%, max: 91.30%) compared with our earlier work (mean: 76.75%, min: 74.23%, max: 84.01%).
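As a toy illustration of the stroke-versus-hover framing described in this abstract, each window of multi-joint samples (palm, wrist, elbow) can be reduced to a feature vector and labeled by a binary classifier. The features and the nearest-centroid stand-in below are illustrative assumptions, not the authors' actual pipeline:

```python
import math

def window_features(window):
    """window: list of samples, each a dict mapping joint name -> (x, y, z)."""
    feats = []
    for joint in ("palm", "wrist", "elbow"):
        pts = [s[joint] for s in window]
        # per-frame displacements (speed proxy)
        steps = [math.dist(a, b) for a, b in zip(pts, pts[1:])]
        mean_step = sum(steps) / len(steps)
        # net displacement over the window
        net = math.dist(pts[0], pts[-1])
        # straightness: net over path length (1.0 = perfectly straight)
        straightness = net / (sum(steps) + 1e-9)
        feats += [mean_step, net, straightness]
    return feats

class NearestCentroid:
    """Minimal stand-in for the binary classifiers compared in the paper."""
    def fit(self, X, y):
        self.centroids = {}
        for label in set(y):
            rows = [x for x, l in zip(X, y) if l == label]
            self.centroids[label] = [sum(c) / len(rows) for c in zip(*rows)]
        return self
    def predict(self, x):
        return min(self.centroids,
                   key=lambda label: math.dist(x, self.centroids[label]))
```

A fast, sustained window would then classify as "stroke" and a slow, meandering one as "hover", given suitably labeled training windows.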

2020 ◽  
pp. 155335062094720
Author(s):  
Yuanyuan Feng ◽  
Uchenna A. Uchidiuno ◽  
Hamid R. Zahiri ◽  
Ivan George ◽  
Adrian E. Park ◽  
...  

Background. Touchless interaction devices have increasingly garnered attention for intraoperative imaging interaction, but there are limited recommendations on which touchless interaction mechanisms should be implemented in the operating room. The objective of this study was to evaluate the efficiency, accuracy, and satisfaction of two current touchless interaction mechanisms, hand motion and body motion, for intraoperative image interaction. Methods. We used the TedCas plugin for the ClearCanvas DICOM viewer to display and manipulate CT images. Ten surgeons performed five image interaction tasks (step-through, pan, zoom, circle measure, and line measure) on three input devices: the Microsoft Kinect, the Leap Motion, and a mouse. Results. The Kinect shared similar accuracy with the Leap Motion for most of the tasks but had an increased error rate in the step-through task. The Leap Motion led to shorter task completion times than the Kinect and was preferred by the surgeons, especially for the measure tasks. Discussion. Our study suggests that hand-tracking devices, such as the Leap Motion, should be used for intraoperative image manipulation tasks that require high precision.


2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Jun Wan ◽  
Qiuqi Ruan ◽  
Gaoyun An ◽  
Wei Li ◽  
Yanyan Liang ◽  
...  

Segmenting the human hand is important in computer vision applications such as sign language interpretation, human-computer interaction, and gesture recognition. However, serious bottlenecks still exist in hand localization systems, including fast hand motion, hand-over-face occlusion, and hand-on-hand occlusions, which we focus on in this paper. We present a novel method for hand tracking and segmentation based on augmented graph cuts and a dynamic model. First, an effective dynamic model for state estimation is generated, which correctly predicts the location of hands even under fast motion or shape deformation. Second, new energy terms are brought into the energy function to develop augmented graph cuts based on several cues, namely, spatial information, hand motion, and chamfer distance. The proposed method successfully achieves hand segmentation even when the hand passes over other skin-colored objects. Challenging videos are evaluated covering hand over face, hand occlusions, dynamic backgrounds, and fast motion. Experimental results demonstrate that the proposed method is much more accurate than other graph-cuts-based methods for hand tracking and segmentation.
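The graph-cut formulation this abstract builds on can be illustrated on a toy 1-D strip of pixels: unary (t-link) capacities stand in for the per-pixel cues (color, motion, chamfer distance), pairwise (n-link) capacities encode smoothness, and an s-t min-cut yields the binary hand/background labeling. This is a generic Boykov-Jolly-style sketch in pure Python, not the paper's augmented energy:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp max-flow; `capacity` is mutated into the residual graph."""
    flow = 0
    while True:
        parent = {source: None}          # BFS for a shortest augmenting path
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v, cap in capacity[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return flow
        path, v = [], sink               # recover the path and its bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(capacity[u][v] for u, v in path)
        for u, v in path:
            capacity[u][v] -= bottleneck
            capacity[v][u] = capacity[v].get(u, 0) + bottleneck
        flow += bottleneck

def segment(cost_hand, cost_bg, smooth_w):
    """Label each pixel 1 (hand) or 0 (background) by an s-t min-cut.
    cost_hand[i] / cost_bg[i]: unary cost of giving pixel i that label;
    smooth_w: penalty when neighboring pixels take different labels."""
    n = len(cost_hand)
    cap = {"s": {}, "t": {}, **{i: {} for i in range(n)}}
    for i in range(n):
        cap["s"][i] = cost_bg[i]    # cut (paid) when i ends up background
        cap[i]["t"] = cost_hand[i]  # cut (paid) when i ends up hand
        if i + 1 < n:               # smoothness n-links between neighbors
            cap[i][i + 1] = smooth_w
            cap[i + 1][i] = smooth_w
    max_flow(cap, "s", "t")
    # pixels still reachable from the source in the residual graph are "hand"
    reach, q = {"s"}, deque(["s"])
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in reach:
                reach.add(v)
                q.append(v)
    return [1 if i in reach else 0 for i in range(n)]
```

For example, `segment([0, 0, 9, 9], [9, 9, 0, 0], 1)` labels the first two pixels as hand, while a large `smooth_w` makes the cut absorb isolated outlier pixels.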


Author(s):  
Kumar Sambhav ◽  
Puneet Tandon ◽  
Sanjay G. Dhande

The presented work models the geometry of single-point cutting tools (SPCTs) with a generic profile. Presently, only a few standard SPCT shapes, defined in terms of projective geometry, are employed, while there is a need to design free-form tools to machine free-form surfaces efficiently, with few passes and a chosen range of cutting angles. To produce an SPCT face and flanks with generic shapes through grinding, a comprehensive geometric model of the tool in terms of the varying grinding angles and ground depths is required, which helps design a tool with arbitrarily chosen tool angles. The surface modeling begins with the creation of a tool blank model, followed by transformations of unbounded planes to obtain the cutting tool surfaces. The intersection of these surfaces with the blank gives the complete model of the tool. Having created the geometric model through two generations of generalization, the paper presents the methodology for obtaining the conventional tool angles from the generic model. An illustration of the model is provided, showing the variation of tool angles along the cutting edge with changing grinding parameters. When the geometric model need not be related to the grinding parameters, the SPCT can instead be modeled as a composite NURBS surface, which is presented toward the end of the work.
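The idea of deriving a tool angle from transformed planes can be sketched in miniature: rotate a plane's unit normal by two grinding-style rotation angles and measure the resulting inclination against the blank's axis. The angle names and conventions here are illustrative assumptions, not the paper's definitions:

```python
import math

def rot_x(a):
    """Rotation matrix about the x-axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    """Rotation matrix about the y-axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def mat_vec(m, v):
    """Apply a 3x3 matrix to a 3-vector."""
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def inclination(alpha, beta):
    """Angle between the transformed plane normal and the blank's z-axis
    after successive grinding-style rotations about x and y."""
    n = mat_vec(rot_y(beta), mat_vec(rot_x(alpha), (0.0, 0.0, 1.0)))
    return math.acos(max(-1.0, min(1.0, n[2])))
```

With one rotation zeroed, the inclination reduces to the other rotation angle; combining both gives acos(cos(alpha) * cos(beta)), showing how a conventional angle falls out of the composed plane transformations.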


Sensors ◽  
2019 ◽  
Vol 19 (21) ◽  
pp. 4680 ◽  
Author(s):  
Linjun Jiang ◽  
Hailun Xia ◽  
Caili Guo

Tracking detailed hand motion is a fundamental research topic in the area of human-computer interaction (HCI) and has been widely studied for decades. Existing solutions with single-modal inputs either require tedious calibration, are expensive, or lack sufficient robustness and accuracy due to occlusions. In this study, we present a real-time system that reconstructs exact hand motion by iteratively fitting a triangular mesh model to absolute measurements of the hand from a depth camera, under the robust restriction of a simple data glove. We redefine and simplify the function of the data glove to mitigate its limitations (tedious calibration, cumbersome equipment, and hampered movement) and to keep our system lightweight. For accurate hand tracking, we introduce a new set of degrees of freedom (DoFs), a shape adjustment term for personalizing the triangular mesh model, and an adaptive collision term to prevent self-intersection. For efficiency, we extract a strong pose-space prior from the data glove to narrow the pose search space. We also present a simplified approach for computing tracking correspondences without loss of accuracy, reducing computation cost. Quantitative experiments show comparable or better accuracy than the state of the art, with about a 40% improvement in robustness. Moreover, our system runs independently of the Graphics Processing Unit (GPU) and reaches 40 frames per second (FPS) at about 25% Central Processing Unit (CPU) usage.
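The iterative model-fitting loop at the heart of such systems can be caricatured one dimension lower: fit the two joint angles of a planar two-link "finger" to an observed fingertip position by gradient descent on the squared error. This is purely illustrative (unit link lengths, finite-difference gradients, a fixed initial guess); the paper fits a full triangular hand mesh with many more DoFs and extra shape and collision terms:

```python
import math

def fingertip(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar two-link finger."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def fit_pose(target, iters=5000, lr=0.05, eps=1e-5):
    """Fit joint angles to an observed fingertip position by gradient
    descent on the squared error, using finite-difference gradients."""
    th = [0.1, 0.1]  # initial guess for (theta1, theta2)
    def loss(t):
        x, y = fingertip(t[0], t[1])
        return (x - target[0]) ** 2 + (y - target[1]) ** 2
    for _ in range(iters):
        grad = []
        for i in range(2):
            bumped = list(th)
            bumped[i] += eps
            grad.append((loss(bumped) - loss(th)) / eps)
        th = [t - lr * g for t, g in zip(th, grad)]
    return th
```

A pose-space prior, in this caricature, would amount to restricting the initial guess and the search to angle combinations the glove says are plausible, which is what shrinks the full-DoF search in the real system.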


2016 ◽  
Vol 82 (10) ◽  
pp. 872-875 ◽  
Author(s):  
Bradley Genovese ◽  
Steven Yin ◽  
Sohail Sareh ◽  
Michael Devirgilio ◽  
Laith Mukdad ◽  
...  

With changes in work-hour limitations, there is an increasing need for objective determination of technical proficiency. Electromagnetic hand-motion analysis has previously shown only time to completion and number of movements to correlate with expertise. The present study was undertaken to evaluate the efficacy of hand-motion-tracking analysis in determining surgical skill proficiency. A nine-degree-of-freedom sensor was mounted on the superior aspect of a needle driver. Four Novices, four Trainees, and three Experts performed a large-vessel patch anastomosis on phantom tissue. Path length, total number of movements, absolute velocity, and total time were analyzed between groups, using a one-way analysis of variance and Welch's t test to evaluate significance. Compared with the Novices, Expert subjects exhibited a significantly decreased total number of movements, decreased instrument path length, and decreased total time to complete the tasks. No significant differences were found in absolute velocity between groups. In this pilot study, we have identified significant differences in patterns of motion between Novice and Expert subjects. These data warrant further analysis of their predictive value in larger cohorts at different levels of training and may form a useful tool in competence-based training paradigms in the future.
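The kinematic metrics compared in this study are straightforward to compute from a time-stamped position trace. A minimal sketch follows; the velocity threshold used to count discrete movements is an assumed illustrative parameter, not the study's definition:

```python
import math

def motion_metrics(samples, move_threshold=0.05):
    """samples: list of (t, x, y, z) sensor readings in chronological order.
    Returns path length, movement count, mean absolute velocity, total time.
    A 'movement' is counted each time speed rises above move_threshold."""
    path = 0.0
    movements = 0
    moving = False
    speeds = []
    for (t0, *p0), (t1, *p1) in zip(samples, samples[1:]):
        d = math.dist(p0, p1)          # displacement between readings
        path += d
        v = d / (t1 - t0)              # instantaneous speed estimate
        speeds.append(v)
        if v > move_threshold and not moving:
            movements += 1             # rising edge: a new discrete movement
        moving = v > move_threshold
    return {
        "path_length": path,
        "movements": movements,
        "mean_velocity": sum(speeds) / len(speeds),
        "total_time": samples[-1][0] - samples[0][0],
    }
```

Group comparisons (ANOVA, Welch's t test) would then operate on these per-trial metrics across Novices, Trainees, and Experts.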


2012 ◽  
Vol 235 ◽  
pp. 68-73
Author(s):  
Hai Bo Pang ◽  
You Dong Ding

Hand gestures provide an attractive alternative to cumbersome interface devices for human-computer interaction. Many hand gesture recognition methods using visual analysis have been proposed. In our research, we exploit multiple cues, including divergence features, vorticity features, and the hand motion direction vector. Divergence and vorticity are derived from the optical flow for hand gesture recognition in videos. These features are then processed by the principal component analysis method. The hand-tracking algorithm finds the hand centroid for every frame and computes the hand motion direction vector. Finally, we introduce the dynamic time warping method to verify the robustness of our features. The experimental results demonstrate that the proposed approach yields a satisfactory recognition rate.
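Dynamic time warping, used above to match feature sequences of different lengths, can be written compactly. This generic pure-Python version with an absolute-difference local cost is a sketch, not the authors' exact implementation:

```python
def dtw_distance(a, b, cost=lambda x, y: abs(x - y)):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    inf = float("inf")
    # prev[j] holds the best alignment cost of a-so-far against b[:j]
    prev = [0.0] + [inf] * len(b)
    for x in a:
        cur = [inf] * (len(b) + 1)
        for j, y in enumerate(b, start=1):
            # extend the cheapest of: insertion, deletion, or match
            cur[j] = cost(x, y) + min(prev[j], cur[j - 1], prev[j - 1])
        prev = cur
    return prev[-1]
```

Because the warping path may repeat elements, a sequence aligns at zero cost to a time-stretched copy of itself, which is exactly the invariance that makes DTW useful for gesture feature sequences.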


2021 ◽  
Vol 11 (7) ◽  
pp. 2943
Author(s):  
Francisco Gomez-Donoso ◽  
Felix Escalona ◽  
Nadia Nasri ◽  
Miguel Cazorla

In this work, we introduce HaReS, a hand rehabilitation system. Our proposal integrates a series of exercises, jointly developed with a foundation for those with motor and cognitive injuries, that are aimed at improving patients' skills and their adherence to the rehabilitation plan. Our system takes advantage of a low-cost hand-tracking device to provide a quantitative analysis of the patient's performance. It also integrates a low-cost surface electromyography (sEMG) sensor to provide insight into which muscles are activated while completing the exercises. The system is modular and can be deployed on a social robot. We tested our proposal in two different rehabilitation facilities with great success. Therapists and patients felt more motivated while using HaReS, which improved adherence to the rehabilitation plan. In addition, the therapists were able to serve more patients than with their traditional methodology.

