Modulation of Musical Sound Clips for Robot’s Dynamic Emotional Expression

2011 ◽  
Vol 23 (3) ◽  
pp. 451-457
Author(s):  
Eun-Sook Jee ◽  
Chong Hui Kim ◽  
Hisato Kobayashi ◽  
...  

Sound is an important medium for human-robot interaction. A single sound or music clip is not enough to express delicate emotions; in particular, it is almost impossible to represent emotional changes. This paper attempts to express different emotional levels of sounds and their transitions. Happiness, sadness, anger, and surprise are considered as a basic set of robot emotions. Using previously proposed nominal sound clips for the four emotions, this paper proposes a method to reproduce different emotional levels of sound by modulating the musical parameters 'tempo,' 'pitch,' and 'volume.' Basic experiments were carried out to test whether human subjects can discern three different intensity levels of the four emotions. A comparison of recognition rates shows that the proposed modulation works fairly well and at least demonstrates the possibility of letting humans identify three intensity levels of emotion. Since the modulation is achieved by dynamically changing the three musical parameters of a sound clip, our method can be extended to dynamic changes of emotional sounds.
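The abstract names the three modulated parameters but not the actual scaling values; the sketch below illustrates the general idea with made-up step sizes and per-emotion directions, which are assumptions, not the paper's reported settings.

```python
# Hypothetical sketch of intensity-level modulation: scale a nominal
# clip's tempo, pitch, and volume per emotion and intensity level.
# All numeric values below are illustrative assumptions.

NOMINAL = {"tempo_bpm": 120.0, "pitch_semitones": 0.0, "volume_db": 0.0}

# Assumed direction of change for higher intensity, per emotion.
EMOTION_DIRECTION = {
    "happiness": {"tempo_bpm": +1, "pitch_semitones": +1, "volume_db": +1},
    "sadness":   {"tempo_bpm": -1, "pitch_semitones": -1, "volume_db": -1},
    "anger":     {"tempo_bpm": +1, "pitch_semitones": -1, "volume_db": +1},
    "surprise":  {"tempo_bpm": +1, "pitch_semitones": +1, "volume_db": +1},
}

# Assumed step size per intensity level (level 0 = nominal clip).
STEP = {"tempo_bpm": 12.0, "pitch_semitones": 1.0, "volume_db": 3.0}

def modulate(emotion: str, level: int) -> dict:
    """Return modulated musical parameters for intensity level 0..2."""
    direction = EMOTION_DIRECTION[emotion]
    return {
        key: NOMINAL[key] + direction[key] * STEP[key] * level
        for key in NOMINAL
    }

print(modulate("sadness", 2))  # slower, lower-pitched, quieter than nominal
```

Because each level is a pure function of the nominal clip's parameters, transitions between emotional states can be produced by interpolating the three parameters over time rather than switching clips.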

2008 ◽  
Vol 5 (4) ◽  
pp. 213-223 ◽  
Author(s):  
Shuhei Ikemoto ◽  
Takashi Minato ◽  
Hiroshi Ishiguro

In this paper, we investigate physical human–robot interaction (PHRI) as an important extension of traditional HRI research. The aim of this research is to develop a motor learning system that uses physical help from a human helper. We first propose a new control system that takes advantage of inherent joint flexibility. This control system is applied to a new humanoid robot called CB2. In order to clarify the difference between successful and unsuccessful interaction, we conduct an experiment in which a human subject has to help the CB2 robot in its rising-up motion. We then develop a new measure that captures the difference between smooth and non-smooth physical interactions. An analysis of the experimental data, based on the introduced measure, shows significant differences between experts and beginners in human–robot interaction.
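The abstract does not define the authors' smoothness measure; as an illustration of the kind of quantity such a measure captures, the sketch below computes a standard integrated-squared-jerk proxy (a swapped-in, generic metric, not the paper's), where lower values indicate smoother motion. The trajectories and sampling interval are assumed.

```python
import numpy as np

def integrated_squared_jerk(positions: np.ndarray, dt: float) -> float:
    """Generic smoothness proxy: integral of squared third derivative.

    Smaller values indicate smoother motion. Not the paper's measure,
    just a common stand-in for comparing smooth vs. jerky trajectories.
    """
    vel = np.gradient(positions, dt)
    acc = np.gradient(vel, dt)
    jerk = np.gradient(acc, dt)
    return float(np.sum(jerk ** 2) * dt)

# Compare a smooth trajectory with a noisier (less smooth) version of it.
dt = 0.01
t = np.arange(0.0, 1.0, dt)
smooth = np.sin(2 * np.pi * t)
rough = smooth + 0.05 * np.random.default_rng(0).standard_normal(t.size)

print(integrated_squared_jerk(smooth, dt), integrated_squared_jerk(rough, dt))
```

Differentiating three times strongly amplifies high-frequency content, which is why even small jitter in a physical interaction shows up clearly in a jerk-based score.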


Robotica ◽  
2014 ◽  
Vol 33 (1) ◽  
pp. 1-18 ◽  
Author(s):  
Alberto Poncela ◽  
Leticia Gallardo-Estrella

SUMMARY Verbal communication is the most natural form of human–robot interaction. Such interaction is usually achieved by means of a human–robot interface (HRI). In this paper, an HRI is presented to teleoperate a robotic platform via the user's voice. Hence, a speech recognition system is necessary. In this work, a user-dependent acoustic model for Spanish speakers has been developed to teleoperate a robot with a set of commands. Experimental results have been successful, both in terms of a high recognition rate and the navigation of the robot under the control of the user's voice.
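The command set of the authors' interface is not given in the abstract; the sketch below shows a minimal command-to-motion dispatch of the kind such a teleoperation HRI needs downstream of the recognizer. The Spanish commands and velocity values are illustrative assumptions.

```python
# Minimal sketch (not the authors' system): map recognized Spanish voice
# commands to motion primitives. In a real system, `interpret` would be
# fed the output of the speech recognizer's acoustic/language model.

COMMANDS = {
    "avanza":         ("linear", +0.2),   # move forward at 0.2 m/s (assumed)
    "retrocede":      ("linear", -0.2),   # move backward
    "gira izquierda": ("angular", +0.5),  # turn left at 0.5 rad/s (assumed)
    "gira derecha":   ("angular", -0.5),  # turn right
    "para":           ("stop", 0.0),      # halt the platform
}

def interpret(transcript: str):
    """Return the motion primitive for a recognized command, or None."""
    return COMMANDS.get(transcript.strip().lower())

print(interpret("Avanza"))  # ('linear', 0.2)
```

Returning `None` for unrecognized transcripts lets the platform default to a safe state instead of acting on misrecognitions, which matters when the recognition rate, however high, is below 100%.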


2021 ◽  
Author(s):  
Xiangyun Li ◽  
Qi Lu ◽  
Jiali Chen ◽  
Kang Li

In this work, an uncertainty and disturbance estimator (UDE)-based robust region tracking controller for a robot manipulator is developed to achieve moving target region trajectory tracking and compliant human-robot interaction simultaneously. Utilizing the back-stepping control approach, the UDE is seamlessly fused into the region tracking control framework to estimate and compensate for model uncertainty and external disturbances, such as an unknown payload, unmodeled joint coupling effects, and friction. The regional feedback error is derived from a potential function to drive the robot manipulator's end-effector to converge into the target region, where the manipulator can be passively manipulated according to the needs of the human to achieve compliant physical human-robot interaction. Extensive experimental studies are carried out with a Universal Robots UR10 manipulator to validate the effectiveness of the proposed method for moving region trajectory tracking, handling an unknown payload, and compliant physical human-robot interaction. The superior robustness of the proposed approach is demonstrated by comparison with an existing controller under the adverse effect of an unknown payload. The human-robot interaction is achieved in a shared-autonomy manner, with the manipulator and the human subject cooperating to accomplish a temperature measurement task in which variation in human-subject height and the complexity of aiming the thermometer are successfully accommodated.
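The abstract states that the regional feedback error is derived from a potential function but does not give its form; the sketch below assumes a spherical target region and a quadratic potential, so the error vanishes inside the region (the compliant zone) and grows with distance outside it. Both the region shape and the potential are assumptions, not the paper's exact controller.

```python
import numpy as np

def regional_error(x: np.ndarray, target: np.ndarray, radius: float) -> np.ndarray:
    """Regional feedback error sketch (assumed form, not the paper's).

    Inside a sphere of `radius` around the moving `target`, the error is
    zero, so the end-effector can be pushed around freely by the human;
    outside, the gradient of 0.5 * (dist - radius)**2 pulls it back
    toward the region boundary.
    """
    d = x - target
    dist = np.linalg.norm(d)
    if dist <= radius:
        return np.zeros_like(x)  # compliant zone: no corrective action
    return (dist - radius) * d / dist  # gradient of the quadratic potential

print(regional_error(np.array([0.3, 0.0, 0.0]), np.zeros(3), 0.1))
```

In the full controller this error, rather than a point-tracking error, enters the back-stepping design, and the UDE term compensates the lumped uncertainty and disturbance on top of it.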


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Pingao Huang ◽  
Hui Wang ◽  
Yuan Wang ◽  
Zhiyuan Liu ◽  
Oluwarotimi Williams Samuel ◽  
...  

Towards providing efficient human-robot interaction, surface electromyogram (EMG) signals have been widely adopted for the identification of different limb movement intentions. Since the available EMG sensors are highly susceptible to external interference such as electromagnetic artifacts and muscle fatigue, the quality of EMG recordings can be severely corrupted, which may degrade the performance of EMG-based control systems. Given that muscle shape changes (MSC) differ across limb movements, the MSC signal is insensitive to electromagnetic artifacts and muscle fatigue and may be promising for movement intention recognition. In this study, a novel nanogold flexible and stretchable sensor was developed for the acquisition of MSC signals used to decode multiple classes of limb movement intents. More precisely, four sensors were used to measure the MSC signals from the right forearm of each subject while they performed seven classes of movements. Six different features were extracted from the measured MSC signals, and a linear discriminant analysis (LDA)-based classifier was built for the movement classification task. The experimental results showed that MSC signals could achieve an average recognition rate of about 96.06 ± 1.84% when the four flexible and stretchable sensors were properly placed on the forearm. Additionally, increasing the MSC sampling rate beyond 100 Hz or the analysis window length beyond 20 ms improved the movement recognition accuracy only slightly. These pilot results suggest that the MSC-based method is feasible for movement identification in human-robot interaction and, at the same time, provide a systematic reference for the use of flexible and stretchable sensors in human-robot interaction systems.
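The abstract does not list the six extracted features; the sketch below computes six time-domain features commonly used in myocontrol pipelines (an assumption about the feature set, not the paper's specification). Features from all four sensors would then be concatenated and passed to an LDA classifier such as scikit-learn's `LinearDiscriminantAnalysis`.

```python
import numpy as np

def window_features(sig: np.ndarray) -> np.ndarray:
    """Six common time-domain features for one analysis window (assumed
    feature set): mean absolute value, RMS, variance, waveform length,
    zero crossings, and slope-sign changes."""
    diff = np.diff(sig)
    return np.array([
        np.mean(np.abs(sig)),             # mean absolute value (MAV)
        np.sqrt(np.mean(sig ** 2)),       # root mean square (RMS)
        np.var(sig),                      # variance
        np.sum(np.abs(diff)),             # waveform length
        np.sum(sig[:-1] * sig[1:] < 0),   # zero crossings
        np.sum(diff[:-1] * diff[1:] < 0), # slope-sign changes
    ], dtype=float)

# Demo on a synthetic window sampled at 100 Hz, the rate beyond which
# the study found only slight accuracy gains.
fs = 100
demo = np.sin(2 * np.pi * 5 * np.arange(0, 0.2, 1 / fs))
print(window_features(demo))
```

With four sensors this yields a 24-dimensional feature vector per window, a size LDA handles comfortably even with the modest training sets typical of per-subject calibration.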


2012 ◽  
Vol 605-607 ◽  
pp. 1656-1660
Author(s):  
Temsiri Sapsaman ◽  
Teerawat Benjawilaikul

To enhance human-robot interaction, social robots have been developed with a focus on facial expression and verbal language. However, little to no work has been done on emotional expression through a robot's body language. This work uses a parameterization method, together with human emotion theory and experiments, to find robot parameters for expressing emotions through body language. Mapping is done in 2- and 3-dimensional emotion spaces, and the obtained coefficients can be used to determine the degree to which motion parameters influence the emotion dimensions.
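The abstract describes a linear mapping whose coefficients quantify how motion parameters influence emotion dimensions; the sketch below illustrates the 2-dimensional case with made-up coefficients and parameter names (speed, amplitude, openness), all of which are assumptions rather than the paper's fitted values.

```python
import numpy as np

# Illustrative coefficient matrix (made-up values, not the paper's):
# rows are emotion dimensions (valence, arousal), columns are motion
# parameters (speed, amplitude, posture openness). Fitting such a map
# to human ratings yields coefficients whose magnitudes indicate each
# parameter's influence on each emotion dimension.
COEFFS = np.array([
    [0.1, 0.4, 0.6],  # valence
    [0.8, 0.5, 0.1],  # arousal
])

def emotion_point(speed: float, amplitude: float, openness: float) -> np.ndarray:
    """Map motion parameters (each in 0..1) to a 2-D emotion-space point."""
    return COEFFS @ np.array([speed, amplitude, openness])

valence, arousal = emotion_point(0.9, 0.7, 0.2)  # fast, large, closed posture
print(valence, arousal)
```

Reading the matrix row-wise shows the influence structure directly: here speed dominates arousal (0.8) while openness dominates valence (0.6), which is the kind of conclusion the obtained coefficients support.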


2021 ◽  
pp. 229-238
Author(s):  
Aurelie Clodic ◽  
Rachid Alami

Abstract Joint action in the sphere of human–human interrelations may be a model for human–robot interactions. Human–human interrelations are only possible when several prerequisites are met, inter alia: (1) each agent has a representation within itself of its distinction from the other, so that their respective tasks can be coordinated; (2) each agent attends to the same object, is aware of that fact, and the two sets of “attentions” are causally connected; and (3) each agent understands the other’s action as intentional. The authors explain how human–robot interaction can benefit from the same threefold pattern. In this context, two key problems emerge. First, how can a robot be programmed to recognize its distinction from a human subject in the same space, to detect when a human agent is attending to something, to produce signals that exhibit its internal state, and to make decisions about the goal-directedness of the other’s actions so that appropriate predictions can be made? Second, what must humans learn about robots so that they are able to interact reliably with them in view of a shared goal? This dual process is here examined by reference to the laboratory case of a human and a robot that team up in building a stack with four blocks.


2000 ◽  
Vol 29 (544) ◽  
Author(s):  
Dolores Cañamero ◽  
Jakob Fredslund

We report work on a LEGO robot capable of displaying several emotional expressions in response to physical contact. Our motivation has been to explore believable emotional exchanges to achieve plausible interaction with a simple robot. We have worked toward this goal in two ways.

First, acknowledging the importance of physical manipulation in children's interactions, interaction with the robot is through tactile stimulation; the various kinds of stimulation that can elicit the robot's emotions are grounded in a model of emotion activation based on different stimulation patterns.

Second, emotional states need to be clearly conveyed. We have drawn inspiration from theories of human basic emotions with associated universal facial expressions, which we have implemented in a caricaturized face. We have conducted experiments with both children and adults to assess the recognizability of these expressions.
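The abstract grounds emotion activation in stimulation patterns without specifying them; the sketch below shows one plausible shape for such a classifier, where the pattern categories, thresholds, and emotion associations are all assumptions, not the authors' model.

```python
# Hypothetical sketch of stimulation-pattern classification (all
# thresholds and category names are assumptions, not the paper's model):
# a window of tactile press intensities is classified by rate and
# intensity, and each pattern would activate an associated emotion.

def classify_stimulation(presses: list[float], window_s: float) -> str:
    """presses: press intensities (0..1) observed within the window."""
    if not presses:
        return "none"
    rate = len(presses) / window_s            # presses per second
    mean_intensity = sum(presses) / len(presses)
    if mean_intensity > 0.8:
        return "hit"      # hard contact, e.g. mapped to anger
    if rate > 3.0:
        return "tickle"   # rapid light contact, e.g. mapped to happiness
    return "stroke"       # slow gentle contact, e.g. mapped to calm

print(classify_stimulation([0.3, 0.2, 0.4, 0.3], 1.0))  # 'tickle'
```

Keeping the pattern classifier separate from the emotion model means the same tactile front end can drive different emotion-activation models, which suits the exploratory experiments the abstract describes.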

