Sustainable Human–Robot Collaboration Based on Human Intention Classification

2021 ◽  
Vol 13 (11) ◽  
pp. 5990
Author(s):  
Chiuhsiang Joe Lin ◽  
Rio Prasetyo Lukodono

Sustainable manufacturing plays a role in ensuring products’ economic characteristics and reducing energy and resource consumption by improving the well-being of human workers and communities and maintaining safety. Using robots is one way for manufacturers to strengthen their sustainable manufacturing practices. Nevertheless, work characteristics and practical conditions limit the direct replacement of humans with robots. Collaboration between robots and humans should accommodate human capabilities while reducing loads and ineffective human motions to prevent fatigue and maximize overall performance. Moreover, human–robot collaboration requires early and fast communication between humans and machines, so that the robot knows the human’s status during the activity and can make immediate adjustments for maximum performance. This study used a deep learning algorithm to classify the muscular signals of human motions with an accuracy of 88%. This indicates that the signal could serve as information for the robot to determine the intention of a human motion during the initial stage of the entire motion. This approach not only improves the communication and efficiency of human–robot collaboration but also reduces human fatigue through the early detection of human motion patterns. To enhance human well-being, it is suggested that human–robot collaboration assembly lines adopt similar technologies for a sustainable human–robot collaboration workplace.
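The abstract does not detail the signal-processing pipeline, but motion-intention classifiers on muscular (sEMG) signals typically start from sliding-window features such as root mean square and mean absolute value. A minimal sketch of that preprocessing step, assuming a raw single-channel signal and hypothetical window sizes:

```python
import numpy as np

def emg_features(signal, window=200, step=100):
    """Extract RMS and mean-absolute-value features from sliding windows
    of a raw sEMG signal (hypothetical preprocessing; the paper's exact
    pipeline is not specified in the abstract)."""
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        w = signal[start:start + window]
        rms = np.sqrt(np.mean(w ** 2))   # window energy
        mav = np.mean(np.abs(w))         # mean absolute value
        feats.append((rms, mav))
    return np.array(feats)

# Synthetic "burst" signal standing in for a muscle-activation onset.
rng = np.random.default_rng(0)
rest = rng.normal(0, 0.05, 1000)
burst = rng.normal(0, 0.5, 1000)
feats = emg_features(np.concatenate([rest, burst]))
# Windows in the burst half show markedly higher RMS, the kind of early
# cue a classifier could use to infer motion intention.
```

A deep network, such as the one trained in the study, would then consume such windows (or the raw signal directly) to label the intended motion early in its execution.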

Author(s):  
Julius Yong Wu Jien ◽  
Aslina Baharum ◽  
Shaliza Hayati A. Wahab ◽  
Nordin Saad ◽  
Muhammad Omar ◽  
...  

Face recognition is a biometric technology that can identify or verify a person by detecting and analyzing patterns in the shape of the individual’s face. It is used largely for security purposes, although interest in other areas of application is growing. Overall, face recognition technologies are worth considering because they have potential for broad law-enforcement and commercial applications, and they are already widely deployed in many domains. The technology works by processing facial geometry: the distance between the eyes and the distance from the forehead to the chin are among the key variables. The software identifies the facial landmarks that matter for distinguishing one face from another and converts them into a facial signature. This study therefore gives an overview of age detection using different combinations of machine learning and image processing methods on an image dataset.
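The geometric cues mentioned above can be illustrated with a toy calculation. The landmark names and pixel coordinates below are hypothetical; a real system would obtain them from a facial-landmark detector:

```python
import math

def facial_geometry(landmarks):
    """Compute two geometry cues named in the text: the distance between
    the eyes and the distance from forehead to chin. `landmarks` maps
    names to (x, y) pixel coordinates (hypothetical input format)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    eye_gap = dist(landmarks["left_eye"], landmarks["right_eye"])
    face_len = dist(landmarks["forehead"], landmarks["chin"])
    # The ratio is scale-invariant, so it survives changes in image size.
    return {"eye_gap": eye_gap, "face_length": face_len,
            "ratio": eye_gap / face_len}

pts = {"left_eye": (30, 40), "right_eye": (70, 40),
       "forehead": (50, 10), "chin": (50, 90)}
sig = facial_geometry(pts)
```

A facial signature in practice combines many such measurements; this sketch only shows why ratios of landmark distances make a usable, scale-invariant feature.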


2009 ◽  
Vol 06 (03) ◽  
pp. 537-560 ◽  
Author(s):  
GUTEMBERG GUERRA-FILHO

In this paper, we present the steps required for the construction of a praxicon, a structured lexicon of human actions, through the learning of grammar systems for human actions. The discovery of a Human Activity Language involves learning the syntax of human motion, which requires the construction of this praxicon. The morphology inference process assumes that a non-arbitrary symbolic representation of the human movement is given. Thus, to analyze the morphology of a particular action, we are given a symbolic representation for the motion of each actuator associated with several repeated performances of that action. As a formal model, we propose a new Parallel Synchronous Grammar System in which each component grammar corresponds to an actuator. We present a novel parallel learning algorithm to induce this grammar system. Our representation explicitly contains the set of joints (degrees of freedom) actually responsible for achieving the goal of the activity, the motion performed by each participating actuator, and the synchronization rules modeling coordination among these actuators. We evaluated our inference approach on synthetic data and real human motion data. The algorithm induces the correct grammar system even when the input contains noise. Therefore, our approach is successful in both its representational and learning aspects, and it may serve as a tool to parse movement, learn patterns, and generate actions.
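As a rough illustration of inducing per-actuator structure from repeated performances, the toy sketch below recovers the shortest repeating symbol unit for each actuator’s sequence. This is only a stand-in for one component grammar; the paper induces a full Parallel Synchronous Grammar System with explicit synchronization rules, which this sketch does not reproduce:

```python
def smallest_period(seq):
    """Return the shortest repeating unit of a symbol sequence, a toy
    proxy for inducing one component grammar per actuator."""
    n = len(seq)
    for p in range(1, n + 1):
        if n % p == 0 and seq == seq[:p] * (n // p):
            return seq[:p]
    return seq

# One symbol string per actuator over repeated performances of an action
# (hypothetical alphabet and actuator names).
actuators = {
    "hip":  "ABABAB",
    "knee": "CDCDCD",
}
rules = {name: smallest_period(s) for name, s in actuators.items()}
# Equal unit lengths suggest the actuators repeat in lockstep, the kind
# of regularity the synchronization rules of the grammar system capture.
sync = len(set(len(u) for u in rules.values())) == 1
```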


2021 ◽  
Vol 13 (9) ◽  
pp. 1779
Author(s):  
Xiaoyan Yin ◽  
Zhiqun Hu ◽  
Jiafeng Zheng ◽  
Boyong Li ◽  
Yuanyuan Zuo

Radar beam blockage is an important error source that affects the quality of weather radar data. An echo-filling network (EFnet) based on a deep learning algorithm is proposed to correct the echo intensity in the occluded area of the Nanjing S-band new-generation weather radar (CINRAD/SA). The training dataset is constructed from labels, which are the echo intensities at the 0.5° elevation in the unblocked area, and from input features, which are the intensities in a cube spanning multiple elevations and gates corresponding to the location of each label. Two loss functions are applied to train the network: one is the common mean square error (MSE), and the other is a self-defined loss function that increases the weight of strong echoes. Considering that the radar beam broadens with distance and height, the 0.5° elevation scan is divided into six range bands of 25 km each to train separate models. The models are evaluated by three indicators: explained variance (EVar), mean absolute error (MAE), and correlation coefficient (CC). Two cases are presented to compare the effect of the echo-filling model under the different loss functions. The results suggest that EFnet can effectively correct the echo reflectivity and improve data quality in the occluded area, with better results for strong echoes when the self-defined loss function is used.
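The self-defined loss can be sketched as a weighted mean square error in which gates above a reflectivity threshold receive a larger weight. The threshold and weight values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def weighted_mse(pred, target, threshold=35.0, strong_weight=4.0):
    """Weighted MSE in the spirit the abstract describes: gates at or
    above `threshold` dBZ are up-weighted so strong echoes dominate the
    training signal. Threshold and weight are illustrative values."""
    w = np.where(target >= threshold, strong_weight, 1.0)
    return np.mean(w * (pred - target) ** 2)

target = np.array([10.0, 20.0, 40.0, 50.0])   # reflectivity in dBZ
pred = target + 1.0                           # uniform 1 dBZ error
plain = np.mean((pred - target) ** 2)         # ordinary MSE
weighted = weighted_mse(pred, target)         # strong gates count 4x
```

Under a uniform error, the weighted loss exceeds plain MSE exactly because the two strong-echo gates are up-weighted, which is how such a loss steers the network toward fitting strong echoes.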

