Virtual Fixture Generation for Task Planning with Complex Geometries

Author(s):  
Andrew Sharp ◽  
Mitch W. Pryor

Many robotic processes require the system to maintain a tool's orientation and distance from a surface. To do so, researchers often use Virtual Fixtures (VFs) to either guide the robot along a path or forbid it from leaving the workspace. Previous efforts relied on volumetric primitives (planes, cylinders, etc.) or raw sensor data to define VFs. However, those approaches only work for a small subset of real-world objects. Extending this approach is complicated not only by VF generation but also by generalizing user traversal of the VF to remotely command a robot trajectory. In this work, we present the concept of Task VFs, which convert layers of point-cloud-based Guidance VFs into a bidirectional graph structure and pair them with a Forbidden Region VF. These VFs are hardware-agnostic and can be generated from virtually any source data, including parametric objects (superellipsoids, supertoroids, etc.), meshes (including from CAD), and real-time sensor data for open-world scenarios. We address surface convexity and concavity, since these and the distance to the task surface determine the size and resolution of VF layers. This paper then presents the Manipulator-to-Task Transform Tool for visualizing Task VFs and limiting human-robot interaction ambiguities. Testing confirmed generation success, and users performed spatially discrete experiments to evaluate Task VF usability on complex geometries, which demonstrated their interpretability. The Manipulator-to-Task Transform Tool applies to many robotic applications, including collision avoidance, process design, training, and task definition, for virtually any geometry.
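
The abstract describes, but does not include, the layered guidance-graph construction. As a hedged illustration only, the Python sketch below (all function names, offsets, and radii are hypothetical assumptions, not the authors' code) offsets a point-cloud surface along its normals to form guidance-VF layers, links the points within and across layers into a bidirectional graph, and pairs this with a simple forbidden-region clearance check:

```python
import numpy as np

def build_task_vf(surface_pts, surface_normals,
                  offsets=(0.05, 0.10, 0.15), neighbor_radius=0.06):
    """Hypothetical sketch: stack guidance-VF layers offset from a point-cloud
    surface along its normals, then link the layer points into a bidirectional
    graph (here a plain adjacency dict) that a user can traverse."""
    layers = [surface_pts + d * surface_normals for d in offsets]
    n = len(surface_pts)
    nodes = np.vstack(layers)                # (num_layers * n, 3) graph nodes
    adj = {i: set() for i in range(len(nodes))}

    # In-layer edges: connect points closer than neighbor_radius.
    # np.where yields symmetric (i, j) pairs, so both directions are added.
    for li, layer in enumerate(layers):
        base = li * n
        dist = np.linalg.norm(layer[:, None] - layer[None, :], axis=-1)
        for i, j in zip(*np.where((dist > 0.0) & (dist < neighbor_radius))):
            adj[base + i].add(base + j)

    # Cross-layer edges: each point links to its counterpart one layer out,
    # letting the user approach or retreat from the task surface.
    for li in range(len(layers) - 1):
        for i in range(n):
            a, b = li * n + i, (li + 1) * n + i
            adj[a].add(b)
            adj[b].add(a)
    return nodes, adj

def violates_forbidden_region(tool_pos, surface_pts, min_clearance=0.02):
    """Forbidden Region VF check: reject commands that come closer to the
    task surface than the minimum clearance."""
    return np.min(np.linalg.norm(surface_pts - tool_pos, axis=1)) < min_clearance
```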

2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Saad Albawi ◽  
Oguz Bayat ◽  
Saad Al-Azawi ◽  
Osman N. Ucan

Recently, social touch gesture recognition has been considered an important topic for the touch modality, which can enable highly efficient and realistic human-robot interaction. In this paper, a deep convolutional neural network is selected to implement a social touch recognition system operating on raw input samples (sensor data) only. The touch gesture recognition is performed using a previously collected dataset in which numerous subjects perform varying social gestures. This dataset is dubbed the Corpus of Social Touch, where touch was performed on a mannequin arm. A leave-one-subject-out cross-validation method is used to evaluate system performance. The proposed method can recognize gestures in nearly real time after acquiring a minimum number of frames (on average, between 0.2% and 4.19% of the original frame lengths), with a classification accuracy of 63.7%. The achieved classification accuracy is competitive with existing algorithms. Furthermore, the proposed system outperforms other classification algorithms in terms of classification rate and touch recognition time, without data preprocessing, on the same dataset.
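
The evaluation protocol named here, leave-one-subject-out cross-validation, is standard and can be sketched generically. The snippet below uses scikit-learn's LeaveOneGroupOut with a placeholder classifier standing in for the paper's CNN; the data shapes and classifier are assumptions for illustration only:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.linear_model import LogisticRegression

def loso_accuracy(X, y, subject_ids):
    """Leave-one-subject-out CV: every fold holds out all samples of one
    subject, so the classifier is always tested on an unseen person."""
    logo = LeaveOneGroupOut()
    scores = []
    for train_idx, test_idx in logo.split(X, y, groups=subject_ids):
        clf = LogisticRegression(max_iter=1000)  # stand-in for the paper's CNN
        clf.fit(X[train_idx], y[train_idx])
        scores.append(clf.score(X[test_idx], y[test_idx]))
    return np.mean(scores)

# Toy usage: 90 flattened pressure frames, 3 subjects, 5 gesture classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(90, 64))            # e.g. an 8x8 pressure grid per frame
y = rng.integers(0, 5, size=90)
subjects = np.repeat([0, 1, 2], 30)
print(f"LOSO accuracy: {loso_accuracy(X, y, subjects):.3f}")
```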


Entropy ◽  
2020 ◽  
Vol 22 (9) ◽  
pp. 993 ◽  
Author(s):  
Bin Yang ◽  
Dingyi Gan ◽  
Yongchuan Tang ◽  
Yan Lei

Quantifying uncertainty is a hot topic for uncertain information processing in the framework of evidence theory, but there is limited research on belief entropy under the open world assumption. In this paper, an uncertainty measurement method based on Deng entropy, named Open Deng Entropy (ODE), is proposed. Under the open world assumption, the frame of discernment (FOD) may be incomplete, and ODE can reasonably and effectively quantify uncertain, incomplete information. On the basis of Deng entropy, ODE adopts the mass value of the empty set, the cardinality of the FOD, and the natural constant e to construct a new uncertainty factor for modeling the uncertainty in the FOD. A numerical example shows that, under the closed world assumption, ODE degenerates to Deng entropy. An ODE-based information fusion method is then proposed for sensor data fusion in uncertain environments. By applying it to a sensor data fusion experiment, the rationality and effectiveness of ODE and its application in uncertain information fusion are verified.
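
Deng entropy itself has a well-known closed form: Ed(m) = -Σ m(A) · log2(m(A) / (2^|A| - 1)), summed over non-empty focal elements A. The exact ODE correction term is defined in the paper and is not reproduced here; the sketch below computes Deng entropy and adds a clearly labeled placeholder open-world term built from m(∅), the FOD cardinality, and e, chosen only so that it vanishes in the closed world:

```python
import math

def deng_entropy(m):
    """Deng entropy: -sum over non-empty focal elements A of
    m(A) * log2(m(A) / (2**|A| - 1)). `m` maps frozensets to mass values."""
    e = 0.0
    for A, mass in m.items():
        if len(A) == 0 or mass == 0.0:
            continue
        e -= mass * math.log2(mass / (2 ** len(A) - 1))
    return e

def open_deng_entropy(m, fod_cardinality):
    """Placeholder ODE sketch: Deng entropy plus an extra open-world term
    built from m(empty set), the FOD cardinality, and e, as the abstract
    describes. The term below is illustrative only (NOT the paper's exact
    formula); it is chosen so that ODE reduces to Deng entropy when
    m(empty set) = 0, matching the closed-world behaviour stated above."""
    m_empty = m.get(frozenset(), 0.0)
    if m_empty == 0.0:
        return deng_entropy(m)               # closed world: plain Deng entropy
    open_term = -m_empty * math.log2(m_empty / math.e ** fod_cardinality)
    return deng_entropy(m) + open_term

# Closed-world example over FOD {a, b, c}: ODE equals Deng entropy.
m = {frozenset({'a'}): 0.6, frozenset({'a', 'b'}): 0.4}
print(deng_entropy(m), open_deng_entropy(m, 3))
```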


2020 ◽  
Vol 20 (14) ◽  
pp. 7918-7928 ◽
Author(s):  
Yingzhong Tian ◽  
Guopeng Wang ◽  
Long Li ◽  
Tao Jin ◽  
Fengfeng Xi ◽  
...  

2021 ◽  
Author(s):  
Callum Robinson

MARVIN (Mobile Autonomous Robotic Vehicle for Indoor Navigation) was once the flagship of Victoria University's mobile robotic fleet. However, over the years MARVIN has become obsolete. This thesis continues the redevelopment of MARVIN, transforming it into a fully autonomous research platform for human-robot interaction (HRI). MARVIN utilises a Segway RMP, a self-balancing mobility platform. This provides agile locomotion, but increases sensor processing complexity due to its dynamic pitch. MARVIN's existing sensing systems (including a laser rangefinder and ultrasonic sensors) are augmented with tactile sensors and a Microsoft Kinect v2 RGB-D camera for 3D sensing. This allows the detection of the obstacles often found in MARVIN's unmodified, office-like operating environment. These sensors are processed using novel techniques to account for the Segway's dynamic pitch. A newly developed navigation stack takes the processed sensor data to facilitate localisation, obstacle detection and motion planning. MARVIN's inherited humanoid robotic torso is augmented with a touch screen and voice interface, enabling HRI. MARVIN's HRI capabilities are demonstrated by implementing it as a robotic guide. This implementation is evaluated through a usability study and found to be successful. Through evaluations of MARVIN's locomotion, sensing, localisation and motion planning systems, in addition to the usability study, MARVIN is found to be capable of both autonomous navigation and engaging HRI. These developed features open a diverse range of research directions and HRI tasks that MARVIN can be used to explore.
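
The abstract does not detail the pitch-compensation technique; as a minimal sketch of the general idea (hypothetical function name, simplified to a single rotation), range-sensor points can be rotated about the lateral axis by the Segway's measured pitch before entering the navigation stack:

```python
import numpy as np

def compensate_pitch(points_sensor, pitch_rad):
    """Rotate sensor-frame points about the robot's lateral (y) axis by the
    Segway's measured pitch, so obstacles are expressed in a level base frame.
    Hypothetical sketch; the thesis's actual transform chain is richer."""
    c, s = np.cos(pitch_rad), np.sin(pitch_rad)
    R_y = np.array([[  c, 0.0,   s],
                    [0.0, 1.0, 0.0],
                    [ -s, 0.0,   c]])
    return points_sensor @ R_y.T

# A point 2 m straight ahead, seen while the platform pitches 5 degrees forward.
pt = np.array([[2.0, 0.0, 0.0]])
print(compensate_pitch(pt, np.deg2rad(5.0)))
```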


2011 ◽  
Vol 20 (3) ◽  
pp. 191-206 ◽  
Author(s):  
Behzad Khademian ◽  
Jacob Apkarian ◽  
Keyvan Hashtrudi-Zaad

This paper investigates the effect of environmental factors on user performance in a dual-user haptic guidance system. The system under study allows for interaction between both users, the trainee and the trainer, to collaboratively perform a common task in a shared virtual environment. User studies are carried out to experimentally evaluate the users' performance while following square and circular trajectories with two viewpoints of the environment (top view and front view), while the virtual manipulator tool moves in free motion or against forbidden-region virtual fixtures. Performance is measured and statistically evaluated in terms of task completion time, tracking accuracy, and user energy exchange. The studies revealed that changing the environment geometry from a square to a circle results in reduced task completion time and tracking error. Changing the environment viewpoint from top to front decreases the task completion time in both geometries. Forbidden-region virtual fixtures increase energy exchange by both users and decrease task completion time while compromising the tracking performance in the square-following task. However, when visual feedback is removed in the presence of the fixtures, the square tracking performance improves. The results also indicate a strong relationship between user dominance and tracking error only when the experiment is time-limited.
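
For readers unfamiliar with forbidden-region virtual fixtures, a common penalty-based formulation (a textbook sketch, not necessarily the controller used in this study) applies a restoring spring force whenever the haptic tool penetrates the forbidden half-space:

```python
import numpy as np

def forbidden_region_force(tool_pos, plane_point, plane_normal, k=800.0):
    """Penalty-style forbidden-region virtual fixture: if the tool crosses
    the plane into the forbidden half-space, apply a spring force pushing it
    back out along the plane normal. Stiffness k is an assumed value."""
    n = plane_normal / np.linalg.norm(plane_normal)
    penetration = np.dot(plane_point - tool_pos, n)   # > 0 once inside region
    if penetration <= 0.0:
        return np.zeros(3)                            # free motion: no force
    return k * penetration * n                        # restoring force [N]

# Tool 1 cm below a horizontal boundary plane at the origin: pushed back up.
print(forbidden_region_force(np.array([0.0, 0.0, -0.01]),
                             np.zeros(3), np.array([0.0, 0.0, 1.0])))
```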


2014 ◽  
Vol 28 (22) ◽  
pp. 1507-1518 ◽  
Author(s):  
Sina Nia Kosari ◽  
Fredrik Rydén ◽  
Thomas S. Lendvay ◽  
Blake Hannaford ◽  
Howard Jay Chizeck

2022 ◽  
Vol 11 (1) ◽  
pp. 1-50 ◽
Author(s):  
Bahar Irfan ◽  
Michael Garcia Ortiz ◽  
Natalia Lyubova ◽  
Tony Belpaeme

User identification is an essential step in creating a personalised long-term interaction with robots. This requires learning the users continuously and incrementally, possibly starting from a state without any known user. In this article, we describe a multi-modal incremental Bayesian network with online learning, which is the first method that can be applied in such scenarios. Face recognition is used as the primary biometric, and it is combined with ancillary information, such as gender, age, height, and time of interaction, to improve the recognition. The Multi-modal Long-term User Recognition Dataset is generated to simulate various human-robot interaction (HRI) scenarios and evaluate our approach in comparison to face recognition, soft biometrics, and a state-of-the-art open world recognition method (Extreme Value Machine). The results show that the proposed methods significantly outperform the baselines, with an increase in the identification rate of up to 47.9% in open-set and closed-set scenarios, and a significant decrease in long-term recognition performance loss. The proposed models generalise well to new users, provide stability, improve over time, and decrease the bias of face recognition. The models were applied in HRI studies for user recognition, personalised rehabilitation, and customer-oriented service, which showed that they are suitable for long-term HRI in the real world.
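
The paper's multi-modal incremental Bayesian network is more elaborate than can be shown here; as a simplified, hypothetical stand-in, a naive-Bayes fusion of face-recognition likelihoods with soft-biometric likelihoods, plus an explicit open-set "unknown user" hypothesis, captures the basic idea:

```python
import numpy as np

def fuse_identity(face_like, soft_likes, priors, unknown_prior=0.1):
    """Simplified naive-Bayes stand-in for the paper's multi-modal Bayesian
    network: multiply per-user likelihoods from face recognition and soft
    biometrics (gender, age, height, time of interaction), keep an explicit
    'unknown' hypothesis for the open-set case, and normalise to a posterior.
    All priors and likelihood values here are illustrative assumptions."""
    like = np.asarray(face_like, dtype=float)
    for l in soft_likes:                  # one likelihood vector per modality
        like = like * np.asarray(l, dtype=float)
    post = np.append(like * np.asarray(priors, dtype=float), unknown_prior)
    return post / post.sum()              # last entry = P(new user)

# Two enrolled users plus the open-set unknown hypothesis.
posterior = fuse_identity(face_like=[0.7, 0.1],
                          soft_likes=[[0.6, 0.4], [0.8, 0.5]],
                          priors=[0.5, 0.5])
print(posterior)   # argmax over users; last entry flags a likely new user
```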


2017 ◽  
Vol 36 (13-14) ◽  
pp. 1579-1594 ◽  
Author(s):  
Guilherme Maeda ◽  
Marco Ewerton ◽  
Gerhard Neumann ◽  
Rudolf Lioutikov ◽  
Jan Peters

This paper proposes a method to achieve fast and fluid human–robot interaction by estimating the progress of the movement of the human. The method allows the progress, also referred to as the phase of the movement, to be estimated even when observations of the human are partial and occluded; a problem typically found when using motion capture systems in cluttered environments. By leveraging the framework of Interaction Probabilistic Movement Primitives, phase estimation makes it possible to classify the human action, and to generate a corresponding robot trajectory before the human finishes his/her movement. The method is therefore suited for semi-autonomous robots acting as assistants and coworkers. Since observations may be sparse, our method is based on computing the probability of different phase candidates to find the phase that best aligns the Interaction Probabilistic Movement Primitives with the current observations. The method is fundamentally different from approaches based on Dynamic Time Warping that must rely on a consistent stream of measurements at runtime. The resulting framework can achieve phase estimation, action recognition and robot trajectory coordination using a single probabilistic representation. We evaluated the method using a seven-degree-of-freedom lightweight robot arm equipped with a five-finger hand in single- and multi-task collaborative experiments. We compare the accuracy achieved by phase estimation with our previous method based on dynamic time warping.
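
The full Interaction ProMP posterior is beyond the scope of an abstract, but the phase-candidate search it describes can be illustrated. The sketch below (a deliberate simplification with assumed names and a scalar trajectory) scores candidate movement durations by the Gaussian likelihood of sparse observations against a learned mean trajectory and keeps the best:

```python
import numpy as np

def estimate_phase(obs_t, obs_y, mean_traj, duration_candidates, sigma=0.05):
    """Brute-force phase estimation in the spirit of the paper: for each
    candidate movement duration, rescale the observation timestamps to a
    phase z in [0, 1], compare against the learned mean trajectory, and keep
    the candidate with the highest Gaussian log-likelihood. A simplification
    of the full Interaction ProMP posterior over phase and weights."""
    grid = np.linspace(0.0, 1.0, len(mean_traj))
    best_T, best_ll = None, -np.inf
    for T in duration_candidates:
        z = np.clip(obs_t / T, 0.0, 1.0)              # phase of each sample
        pred = np.interp(z, grid, mean_traj)          # expected observation
        ll = -0.5 * np.sum((obs_y - pred) ** 2) / sigma ** 2
        if ll > best_ll:
            best_T, best_ll = T, ll
    return best_T

# Toy usage: true duration 2 s, sparse partial observations of a sine reach.
t_obs = np.array([0.1, 0.4, 0.7])
y_obs = np.sin(np.pi * t_obs / 2.0)
mean = np.sin(np.pi * np.linspace(0.0, 1.0, 100))
print(estimate_phase(t_obs, y_obs, mean, duration_candidates=[1.0, 2.0, 3.0]))
```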

