A Quasi-Static Model for Studying Physical Interaction Between a Soft Robotic Digit and a Human Finger

Authors: Mahdi Haghshenas-Jaryani, Muthu B. J. Wijesundara

This paper presents a quasi-static framework for modeling and analyzing the physical human-robot interaction in soft robotic hand exoskeletons used for rehabilitation and human performance augmentation. The framework provides both forward and inverse quasi-static formulations of the interaction between a soft robotic digit and a human finger, which can be used to calculate angular motions, interaction forces, actuation torques, and stiffness at the human joints. This is achieved by decoupling the dynamics of the soft robotic digit and the human finger, with the same interaction forces acting on both sides. The theoretical models were validated by a series of numerical simulations based on a finite element model that replicates the human-robot interaction. The comparison of the results obtained for angular motion, interaction forces, and estimated joint stiffness indicates the accuracy and effectiveness of the quasi-static models for predicting the human-robot interaction.
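
The paper's full multi-segment formulation is not reproduced in the abstract, but for a single joint the core idea reduces to a static torque balance between actuation, joint stiffness, and the interaction force. The following is a minimal Python sketch of the forward and inverse problems under that assumption; the joint stiffness, rest angle, and moment arm values are illustrative, not the authors' parameters.

```python
# Minimal single-joint sketch of the forward/inverse quasi-static idea.
# All numerical values are illustrative assumptions.

def forward_quasi_static(tau_act, k_joint, theta_rest, f_int, r_moment):
    """Forward problem: given actuation torque and interaction force,
    return the equilibrium joint angle from the static torque balance
    k*(theta - theta_rest) = tau_act - r*f_int."""
    return theta_rest + (tau_act - r_moment * f_int) / k_joint

def inverse_quasi_static(theta, tau_act, theta_rest, f_int, r_moment):
    """Inverse problem: given a measured equilibrium angle, estimate the
    joint stiffness consistent with the same torque balance."""
    return (tau_act - r_moment * f_int) / (theta - theta_rest)

# Example: 0.5 N interaction force at a 0.03 m moment arm
theta = forward_quasi_static(tau_act=0.1, k_joint=0.8,
                             theta_rest=0.0, f_int=0.5, r_moment=0.03)
k_est = inverse_quasi_static(theta, tau_act=0.1, theta_rest=0.0,
                             f_int=0.5, r_moment=0.03)
print(round(theta, 4), round(k_est, 4))  # k_est recovers k_joint = 0.8
```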

2021, Vol 11 (1)
Authors: Fazlur Rashid, Devin Burns, Yun Seong Song

Abstract: Understanding the human motor control strategy during physical interaction tasks is crucial for developing future robots for physical human–robot interaction (pHRI). In physical human–human interaction (pHHI), small interaction forces are known to convey intent between partners for effective motor communication. The aim of this work is to investigate what affects a human's sensitivity to externally applied interaction forces. The hypothesis is that one way the small interaction forces are sensed is through the movement of the arm and the resulting proprioceptive signals. A pHRI setup was used to apply small interaction forces to the hand of seated participants in one of four directions, and the blindfolded participants were asked to identify the direction of the push. The results show that participants' ability to correctly report the direction of the interaction force decreased with lower interaction forces as well as with higher muscle contraction. Sensitivity to the interaction force direction increased with the radial displacement of the participant's hand from its initial position: the further they moved, the more often their responses were correct. It was also observed that the estimated stiffness of the arm varies with the level of muscle contraction and the robot's interaction force.
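
The abstract reports an estimated arm stiffness that varies with muscle contraction and interaction force but does not give the estimation procedure. As a hedged illustration, the sketch below fits a 2x2 endpoint stiffness matrix K in the linear model F = K·Δx by least squares over synthetic force/displacement samples; the linear model and all values are assumptions.

```python
import numpy as np

# Hedged sketch: estimate an endpoint arm stiffness matrix from paired
# force/displacement samples. The study's actual procedure is not
# specified here; the linear model F = K @ dx is an assumption.

rng = np.random.default_rng(0)
dx = rng.normal(scale=0.01, size=(200, 2))         # hand displacements (m)
K_true = np.array([[300.0, 40.0], [40.0, 500.0]])  # N/m, illustrative
F = dx @ K_true.T + rng.normal(scale=0.2, size=(200, 2))  # forces (N)

# Least-squares fit of the stiffness matrix: solve dx @ K.T ~ F
K_hat = np.linalg.lstsq(dx, F, rcond=None)[0].T
print(np.round(K_hat, 1))  # should recover K_true up to noise
```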


Sensors, 2020, Vol 20 (1), pp. 296
Authors: Caroline P. C. Chanel, Raphaëlle N. Roy, Frédéric Dehais, Nicolas Drougard

The design of human–robot interactions is a key challenge for optimizing operational performance. A promising approach is to consider mixed-initiative interactions, in which the tasks and authority of the human and artificial agents are dynamically defined according to their current abilities. An important issue in implementing mixed-initiative systems is monitoring human performance in order to dynamically drive task allocation between the human and artificial agents (i.e., robots). We therefore designed an experimental scenario involving missions in which participants had to cooperate with a robot to fight fires while facing hazards. Two levels of robot automation (manual vs. autonomous) were randomly manipulated to assess their impact on participants' performance across missions. Cardiac activity, eye-tracking data, and participants' actions on the user interface were collected. Participants performed differently enough that we could identify high- and low-score mission groups, which also exhibited different behavioral, cardiac, and ocular patterns. More specifically, our findings indicated that the higher level of automation could be beneficial to low-scoring participants but detrimental to high-scoring ones, and vice versa. In addition, inter-subject single-trial classification results showed that the studied behavioral and physiological features were relevant for predicting mission performance. The highest average balanced accuracy (74%) was reached using the features extracted from all input devices. These results suggest that an adaptive HRI system aiming to maximize performance could analyze such physiological and behavioral markers online to change the level of automation when relevant to the mission purpose.
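
As a hedged sketch of the kind of inter-subject single-trial classification reported above, the code below runs leave-one-subject-out cross-validation with a balanced-accuracy score on synthetic features. The feature set, classifier, and trial counts are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Hedged sketch of inter-subject single-trial classification with a
# balanced-accuracy score. Feature names and the classifier choice are
# illustrative assumptions.

rng = np.random.default_rng(1)
n_trials, n_features = 240, 12           # e.g., cardiac + ocular + behavior
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)    # high- vs low-score mission label
subjects = np.repeat(np.arange(12), 20)  # 12 participants, 20 trials each

# Leave-one-subject-out cross-validation keeps each participant's trials
# entirely in train or test, which is what "inter-subject" implies.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         groups=subjects, cv=LeaveOneGroupOut(),
                         scoring="balanced_accuracy")
print(f"mean balanced accuracy: {scores.mean():.2f}")
```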


Authors: Shan G. Lakhmani, Julia L. Wright, Michael R. Schwartz, Daniel Barber

Human-robot interaction requires communication; however, what form this communication should take to facilitate effective team performance remains undetermined. One notion is that effective human-agent communication can be achieved by combining transparent information-sharing techniques with specific communication patterns. This study examines how transparency and a robot's communication pattern interact to affect human performance in a human-robot teaming task. Participants' performance in a target identification task was affected by the robot's communication pattern: participants missed more targets when working with a bidirectionally communicating robot than with a unidirectionally communicating one. Furthermore, working with a bidirectionally communicating robot led to fewer correct identifications than working with a unidirectionally communicating robot, but only when the robot provided less transparency information. The implications of these findings for future robot interface designs are discussed.


2018, Vol 9 (1), pp. 221-234
Authors: João Avelino, Tiago Paulino, Carlos Cardoso, Ricardo Nunes, Plinio Moreno, ...

Abstract: Handshaking is a fundamental part of human physical interaction that is transversal to various cultural backgrounds. It is also a very challenging task in the field of Physical Human-Robot Interaction (pHRI), requiring compliant force control to plan the arm's motion and to achieve a confident yet pleasant grasp of the human user's hand. In this paper, we focus on the study of hand grip strength for comfortable handshakes and perform three sets of physical interaction experiments, with twenty, thirty-five, and thirty-eight human subjects, respectively. Tests are performed with a social robot whose hands are instrumented with tactile sensors that provide skin-like sensation. From these experiments, we: (i) learn the preferred grip closure for each user group; (ii) analyze the tactile feedback provided by the sensors for each closure; (iii) develop and evaluate a hand grip controller based on these data. In addition to the robot-human interactions, we also study handshakes the robot executes with inanimate objects, in order to detect whether it is shaking hands with a human or with an inanimate object. This work adds physical human-robot interaction to the repertoire of social skills of our robot, fulfilling a demand previously identified by many of its users.
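
The abstract does not detail the grip controller, but a minimal tactile-feedback closure loop illustrates the idea: tighten the hand until the measured tactile pressure reaches the grip setpoint preferred by a user group. All readings, setpoints, and gains below are illustrative assumptions, not the authors' controller.

```python
# Minimal sketch of a tactile-feedback grip-closure loop of the kind the
# abstract describes. Sensor model, setpoint, and gain are illustrative.

def grip_step(closure, pressure, pressure_ref, gain=0.02, closure_max=1.0):
    """One control step: tighten while measured tactile pressure is below
    the preferred setpoint, back off when it is above."""
    closure += gain * (pressure_ref - pressure)
    return min(max(closure, 0.0), closure_max)

# Toy plant: pressure grows with closure once contact is made at 0.3
def read_pressure(closure):
    return max(0.0, 10.0 * (closure - 0.3))

closure = 0.0
for _ in range(100):
    closure = grip_step(closure, read_pressure(closure), pressure_ref=2.0)
print(round(closure, 3))  # settles near the closure giving ~2.0 pressure
```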


2021, Vol 3
Authors: Alberto Martinetti, Peter K. Chemweno, Kostas Nizamis, Eduard Fosch-Villaronga

Policymakers need to consider the impacts that robots and artificial intelligence (AI) technologies have on humans beyond physical safety. Traditionally, the definition of safety has been interpreted to apply exclusively to risks with a physical impact on persons' safety, such as, among others, mechanical or chemical risks. However, the integration of AI into cyber-physical systems such as robots, which increases their interconnectivity with devices and cloud services and intensifies human-robot interaction, challenges this narrow conceptualisation of safety. Addressing safety comprehensively therefore demands a broader understanding that extends beyond physical interaction to cover aspects such as cybersecurity and mental health. Moreover, the expanding use of machine learning techniques will increasingly demand evolving safety mechanisms to account for the substantial modifications robots undergo over time as they embed more AI features. In this sense, our contribution brings forward the different dimensions of the concept of safety, including interaction (physical and social), psychosocial, cybersecurity, temporal, and societal dimensions. These dimensions aim to help policy and standard makers redefine the concept of safety in light of robots' and AI's increasing capabilities, including human-robot interactions, cybersecurity, and machine learning.


Robotics, 2019, Vol 8 (1), pp. 18
Authors: Younsse Ayoubi, Med Laribi, Said Zeghloul, Marc Arsicault

Unlike "classical" industrial robots, collaborative robots, known as cobots, implement compliant behavior. Cobots ensure safe force control in physical interaction scenarios within unknown environments. In this paper, we propose to make serial robots intrinsically compliant, to guarantee safe physical human–robot interaction (pHRI), via our newly designed device called V2SOM, which stands for Variable Stiffness Safety-Oriented Mechanism. As its name indicates, V2SOM aims at making physical human–robot interaction safe, thanks to its two basic operating modes: a high-stiffness mode and a low-stiffness mode. The first mode is employed for normal operational routines, while the low-stiffness mode enables the safe absorption of any potential blunt shock with a human. The transition between the two modes is continuous, to maintain good control of the V2SOM-based cobot in the event of a fast collision. V2SOM provides a high inertia-decoupling capacity, a necessary condition for safe pHRI, without compromising the robot's dynamic performance. Two pHRI safety criteria were considered for performance evaluation: the impact force (ImpF) criterion and the head injury criterion (HIC), which assess external and internal damage, respectively, during blunt shocks.
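
Of the two criteria, the head injury criterion has a standard closed form: HIC = max over t1 < t2 of (t2 - t1) · [(1/(t2 - t1)) ∫ a dt]^2.5, with acceleration in g and the window length usually capped (15 ms below; HIC36 uses 36 ms). The following is a brute-force sketch on a synthetic impact pulse; the pulse itself is illustrative, not the paper's data.

```python
import numpy as np

# Hedged sketch of the head injury criterion (HIC):
#   HIC = max over t1 < t2 of (t2 - t1) * [ (1/(t2-t1)) * integral(a) ]^2.5
# with a in g. The 15 ms window cap and the pulse are assumptions.

def hic(accel_g, dt, max_window=0.015):
    """Brute-force HIC over all windows up to max_window seconds."""
    n = len(accel_g)
    cum = np.concatenate(([0.0], np.cumsum(accel_g) * dt))  # running integral
    best = 0.0
    max_len = int(max_window / dt)
    for i in range(n):
        for j in range(i + 1, min(i + max_len, n) + 1):
            T = (j - i) * dt
            avg = (cum[j] - cum[i]) / T
            best = max(best, T * avg ** 2.5)
    return best

dt = 1e-4                                          # 10 kHz sampling
t = np.arange(0, 0.05, dt)
pulse = 60.0 * np.exp(-((t - 0.01) / 0.004) ** 2)  # ~60 g Gaussian pulse
print(round(hic(pulse, dt), 1))
```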


2021, pp. 683-690
Authors: Irene Pippo, Jacopo Zenzeri, Giovanni Berselli, Diego Torazza

2019, Vol 112, pp. 323-331
Authors: Arnaldo G. Leal-Junior, Camilo R. Díaz, Maria José Pontes, Carlos Marques, Anselmo Frizera

2018, Vol 2018, pp. 1-11
Authors: Rui Wu, He Zhang, Tao Peng, Le Fu, Jie Zhao

In this research, the properties of a variable admittance controller and a variable impedance controller were first simulated in MATLAB, demonstrating the good performance of both controllers in trajectory tracking and physical interaction. A new learning-from-demonstration (LfD) mode that conforms to human intuition and offers good interaction performance was then developed by combining electromyogram (EMG) signals with the variable impedance (admittance) controller during dragging demonstrations. In this mode, demonstrators can not only interact with the manipulator intuitively but also transmit end-effector trajectories and impedance gain schedules to the manipulator for learning. A dragging demonstration experiment in 2D space was carried out with this learning mode. Experimental results show that the designed human-robot interaction and demonstration mode allows demonstrators to directly control the interaction performance of the manipulator, improving the accuracy and time efficiency of the demonstration task. Moreover, the trajectory and impedance gain schedule can be retained for subsequent learning in the manipulator's autonomous compliant operations.
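
The paper's controller gains are not given in the abstract, but the variable admittance idea can be sketched in a few lines: the measured interaction force drives a virtual mass-damper whose damping is scheduled online from a normalized EMG level, so a relaxed demonstrator gets a compliant robot and a stiffened arm gets a steadier one. The EMG-to-damping mapping and all gains below are illustrative assumptions.

```python
# Minimal sketch of a variable admittance controller: the interaction
# force drives a virtual mass-damper M*dv/dt + D*v = f_ext, with the
# damping D scheduled from a normalized EMG level. Values are illustrative.

def admittance_step(v, f_ext, emg_level, dt=0.002, mass=2.0,
                    d_min=5.0, d_max=60.0):
    """One Euler step of the admittance dynamics; higher muscle activity
    maps to higher virtual damping (a steadier, less compliant response)."""
    damping = d_min + (d_max - d_min) * emg_level
    dv = (f_ext - damping * v) / mass
    return v + dv * dt

# Drag phase: a 10 N push with relaxed muscles yields a compliant response
v = 0.0
for _ in range(500):
    v = admittance_step(v, f_ext=10.0, emg_level=0.1)
print(round(v, 3))  # approaches the steady state 10 / (5 + 0.1*55) m/s
```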

