Sensor Fusion for Industrial Applications Using Transducer Markup Language

Author(s):  
Valerie J. Yoder ◽  
Steven W. Havens ◽  
Arthur J. Na ◽  
Rachel E. Weingrad

Manufacturing processes would greatly benefit from fusing data from many disparate sensors, but today's systems do not fully exploit the available sensor data. Disparate sensors may include Coordinate Measurement Machines (CMMs), laser surface scanners, micro sensors, cameras, acoustic devices, thermocouples, and other devices that provide measurement or visual data. Sensor data often requires separate, customized software for each type of sensor system, rather than common tools usable across a wide array of sensor systems. This stove-piping requires proprietary software for the analysis and display of each sensor type and inhibits interoperability. Several challenges to sensor fusion need to be addressed. First, many of the sensors providing data are heterogeneous in the phenomena they detect and in how they operate, measuring different attributes of the target; this makes the measurements very difficult to fuse directly. Second, these disparate sensors are asynchronous in time: collection, integration, buffering, and transmission can each affect how time is calculated and stored by the sensor. Transducer Markup Language (TML), developed by IRIS Corporation, addresses these challenges. This paper describes TML and presents examples of industrial applications of TML-enabled transducer networks.
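To make the idea of a common markup concrete, the sketch below parses a hypothetical TML-style message with Python's standard XML tooling. The element names (`transducer`, `data`) and attributes (`id`, `timestamp`, `units`) are illustrative assumptions, not the actual TML schema; the point is only that one generic parser can serve many sensor types once they share a markup.

```python
# Parse a hypothetical TML-style sensor message into (id, readings).
# NOTE: element/attribute names below are assumed for illustration;
# they are not taken from the TML specification.
import xml.etree.ElementTree as ET

message = """
<transducer id="thermocouple-7">
  <data timestamp="2021-07-01T12:00:00Z" units="celsius">21.5</data>
  <data timestamp="2021-07-01T12:00:01Z" units="celsius">21.7</data>
</transducer>
"""

def parse_readings(xml_text):
    """Return (sensor_id, [(timestamp, value), ...]) from one message."""
    root = ET.fromstring(xml_text)
    readings = [(d.get("timestamp"), float(d.text))
                for d in root.findall("data")]
    return root.get("id"), readings

sensor_id, readings = parse_readings(message)
```

The same `parse_readings` function would work unchanged for a laser scanner or an acoustic device emitting the same schema, which is exactly the interoperability the abstract argues stove-piped tooling lacks.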

Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1685
Author(s):  
Sakorn Mekruksavanich ◽  
Anuchit Jitpattanakul

Sensor-based human activity recognition (S-HAR) has become an important and high-impact topic of research within human-centered computing. In the last decade, successful applications of S-HAR have been presented through fruitful academic research and industrial applications, including healthcare monitoring, smart home control, and daily sport tracking. However, the growing requirements of many current applications for recognizing complex human activities (CHA), as compared with simple human activities (SHA), have begun to attract the attention of the HAR research field. Research on S-HAR has shown that deep learning (DL), a type of machine learning based on complex artificial neural networks, achieves a significant degree of recognition efficiency. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are two types of DL method that have been successfully applied to the S-HAR challenge in recent years. In this paper, we focus on four RNN-based DL models (LSTMs, BiLSTMs, GRUs, and BiGRUs) performing complex activity recognition tasks. The efficiency of four hybrid DL models that combine convolutional layers with these RNN-based models was also studied. Experimental studies on the UTwente dataset demonstrated that the suggested hybrid RNN-based models achieve a high level of recognition performance across a variety of performance indicators, including accuracy, F1-score, and the confusion matrix. The experimental results show that the hybrid DL model called CNN-BiGRU outperformed the other DL models, with a high accuracy of 98.89% when using only complex activity data. Moreover, the CNN-BiGRU model also achieved the highest recognition performance in the other scenarios (99.44% using only simple activity data and 98.78% with a combination of simple and complex activities).
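The data flow of a hybrid CNN-BiGRU can be illustrated with a toy sketch: a convolutional stage extracts local features from a 1-D sensor signal, and a bidirectional recurrent stage summarizes those features from both directions. The recurrence below is a plain exponential average standing in for real GRU gates, so this shows the architecture's shape, not the paper's trained model.

```python
# Toy CNN-BiGRU data flow on a 1-D sensor signal (illustrative only).

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation): the feature extractor."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def recur(features, alpha=0.5):
    """Simplified recurrent summary: h_t = alpha*x_t + (1-alpha)*h_{t-1}.
    A real GRU replaces this fixed blend with learned update/reset gates."""
    h = 0.0
    for x in features:
        h = alpha * x + (1 - alpha) * h
    return h

def bigru_like(signal, kernel=(0.5, 0.5)):
    feats = conv1d(signal, kernel)
    # Bidirectional: read features forward AND backward, concatenate states.
    return (recur(feats), recur(list(reversed(feats))))

fwd, bwd = bigru_like([1.0, 1.0, 3.0, 3.0])
```

The backward pass weighting recent-from-the-end samples is what lets a BiGRU condition each prediction on context from both sides of a window, which plain LSTMs/GRUs cannot do.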


Author(s):  
L. Orazi ◽  
A. Rota ◽  
B. Reggiani

Laser surface hardening is rapidly growing in industrial applications due to its high flexibility, accuracy, cleanness and energy efficiency. However, experimental process optimization can be a tricky task due to the number of parameters involved, which suggests alternative approaches such as reliable numerical simulations. Conventional laser hardening models compute the achieved hardness on the basis of microstructure predictions driven by carbon diffusion during the thermal cycle of the process. Nevertheless, this approach is very time consuming and does not allow real, complex products to be simulated during laser treatments. To overcome this limitation, a novel simplified approach to laser surface hardening modelling is presented and discussed. The basic assumption is to neglect austenite homogenization, owing to the short process time and the insufficient carbon diffusion during the heating phase. In the present work, this assumption is verified experimentally through nano-hardness measurements, by means of the atomic force microscopy (AFM) technique, on C45 carbon steel samples treated both by laser and in an oven.


2021 ◽  
Vol 11 (9) ◽  
pp. 3921
Author(s):  
Paloma Carrasco ◽  
Francisco Cuesta ◽  
Rafael Caballero ◽  
Francisco J. Perez-Grau ◽  
Antidio Viguria

The use of unmanned aerial robots has increased exponentially in recent years, and the relevance of industrial applications in environments with degraded satellite signals is rising. This article presents a solution for the 3D localization of aerial robots in such environments. In order to truly use these versatile platforms for added-value cases in these scenarios, a high level of reliability is required. Hence, the proposed solution is based on a probabilistic approach that makes use of a 3D laser scanner, radio sensors, a previously built map of the environment and input odometry, to obtain pose estimations that are computed onboard the aerial platform. Experimental results show the feasibility of the approach in terms of accuracy, robustness and computational efficiency.
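A common way to realize such a probabilistic, map-based localization is a particle filter (Monte Carlo localization). The minimal 1-D sketch below shows only the predict/weight/resample cycle such methods share; the paper's onboard 3-D approach with laser, radio and odometry is far richer, and the landmark, ranges and noise figure here are illustrative values.

```python
import math
import random

def mcl_step(particles, odom, measured_range, landmark, noise=0.5):
    """One predict/weight/resample cycle of 1-D Monte Carlo localization."""
    # Predict: apply the odometry increment to every particle.
    moved = [p + odom for p in particles]
    # Weight: Gaussian likelihood of the measured range to a known landmark.
    weights = [math.exp(-((abs(landmark - p) - measured_range) ** 2)
                        / (2 * noise ** 2)) for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample particles proportionally to their weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
for _ in range(5):
    # Stationary robot in this toy run, so the odometry increment is zero.
    particles = mcl_step(particles, odom=0.0, measured_range=4.0, landmark=9.0)
estimate = sum(particles) / len(particles)   # converges near position 5.0
```

Because the weighting uses a likelihood rather than a hard match, the filter degrades gracefully with noisy ranges, which is one reason probabilistic approaches are favored where satellite signals are degraded.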


2021 ◽  
Vol 4 (1) ◽  
pp. 3
Author(s):  
Parag Narkhede ◽  
Rahee Walambe ◽  
Shruti Mandaokar ◽  
Pulkit Chandel ◽  
Ketan Kotecha ◽  
...  

With rapid industrialization and technological advancement, innovative engineering technologies that are cost-effective, faster and easier to implement are essential. One such area of concern is the rising number of accidents caused by gas leaks at coal mines, in chemical industries, from home appliances, etc. In this paper, we propose a novel approach to detect and identify gaseous emissions using multimodal AI fusion techniques. Most gases and their fumes are colorless, odorless, and tasteless, thereby challenging our normal human senses. Sensing based on a single sensor may not be accurate, and sensor fusion is essential for robust and reliable detection in several real-world applications. We manually collected 6400 gas samples (1600 samples per class for four classes) using two specific sensors: a seven-semiconductor gas sensor array and a thermal camera. The early fusion method of multimodal AI is applied. The network architecture consists of a feature-extraction module for each modality; the extracted features are then fused using a merge layer followed by a dense layer, which provides a single output identifying the gas. We obtained a testing accuracy of 96% for the fused model, as opposed to individual model accuracies of 82% (based on gas sensor data using an LSTM) and 93% (based on thermal image data using a CNN model). The results demonstrate that the fusion of multiple sensors and modalities outperforms a single sensor.
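The early-fusion pipeline described above can be sketched in a few lines: extract a feature vector per modality, merge by concatenation, then score with one dense layer. The feature extractors and weights here are toy stand-ins, not the paper's trained LSTM and CNN branches.

```python
# Early fusion sketch: per-modality features -> merge (concatenate) -> dense.

def gas_features(readings):
    """Summarize a gas-sensor array: per-channel mean (toy extractor)."""
    return [sum(ch) / len(ch) for ch in readings]

def thermal_features(image):
    """Summarize a thermal frame: overall mean and max pixel (toy extractor)."""
    pixels = [p for row in image for p in row]
    return [sum(pixels) / len(pixels), max(pixels)]

def fuse_and_score(gas, image, weights, bias=0.0):
    merged = gas_features(gas) + thermal_features(image)       # merge layer
    return bias + sum(w * x for w, x in zip(weights, merged))  # dense layer

gas = [[0.2, 0.4], [0.6, 0.8]]    # two gas-sensor channels
image = [[0.1, 0.3], [0.5, 0.7]]  # 2x2 thermal frame
score = fuse_and_score(gas, image, weights=[1.0, 1.0, 1.0, 1.0])
```

The key design choice of early fusion is that the dense layer sees both modalities at once, so it can learn cross-modal interactions a late (decision-level) fusion would miss.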


2014 ◽  
Vol 607 ◽  
pp. 791-794 ◽  
Author(s):  
Wei Kang Tey ◽  
Che Fai Yeong ◽  
Yip Loon Seow ◽  
Eileen Lee Ming Su ◽  
Swee Ho Tang

Omnidirectional mobile robots have gained popularity among researchers. However, they are rarely applied in industry, especially in factories, which are relatively more dynamic than normal research settings. Hence, it is very important to have a stable yet reliable feedback system to allow a more efficient, better-performing controller on the robot. To ensure the reliability of the robot, many researchers use high-cost solutions for robot feedback; for example, some use a global camera for feedback. Such solutions raise the setup cost of the robot to a relatively high amount, and the resulting system is hard to modify and lacks flexibility. In this paper, a novel sensor fusion technique is proposed and the results are discussed.


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 4029 ◽  
Author(s):  
Jiaxuan Wu ◽  
Yunfei Feng ◽  
Peng Sun

Activity of daily living (ADL) is a significant predictor of the independence and functional capabilities of an individual. Measurements of ADLs help to indicate one's health status and capacity for quality living. Currently, the most common ways to capture ADL data are far from automated: costly 24/7 observation by a designated caregiver, laborious self-reporting by the user, or filling out a written ADL survey. Fortunately, in the Internet of Things (IoT) era, ubiquitous sensors exist in our surroundings and on electronic devices. We propose the ADL Recognition System, which utilizes sensor data from a single point of contact, such as a smartphone, and conducts time-series sensor fusion processing. Raw data are collected by the ADL Recorder App running constantly on a user's smartphone with multiple embedded sensors, including the microphone, Wi-Fi scan module, heading orientation of the device, light proximity, step detector, accelerometer, gyroscope, magnetometer, etc. Key technologies in this research cover audio processing, Wi-Fi indoor positioning, proximity-sensing localization, and time-series sensor data fusion. By merging the information of multiple sensors with a time-series error-correction technique, the ADL Recognition System is able to accurately profile a person's ADLs and discover his or her life patterns. This paper is particularly concerned with care for older adults who live independently.
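One basic building block of such time-series sensor fusion is aligning independently timestamped streams. The sketch below pairs each event from one stream with the nearest-in-time reading from another; the stream contents and field names are illustrative, not the paper's actual sensor payloads.

```python
# Time-series alignment sketch: attach the nearest Wi-Fi location estimate
# to each audio-derived event. Timestamps are in seconds for simplicity.

def nearest(stream, t):
    """Return the (timestamp, value) pair in stream closest in time to t."""
    return min(stream, key=lambda r: abs(r[0] - t))

def fuse_streams(audio_events, wifi_scans):
    """Fuse by nearest-timestamp join: (time, audio_label, location)."""
    return [(t, label, nearest(wifi_scans, t)[1])
            for t, label in audio_events]

audio = [(10, "water running"), (95, "door slam")]
wifi = [(0, "kitchen"), (60, "hallway"), (100, "bedroom")]
fused = fuse_streams(audio, wifi)
```

A real system would add the error correction mentioned above (e.g. rejecting pairs whose time gap exceeds a bound), since sensors report at very different and irregular rates.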


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Nilamadhab Mishra ◽  
Hsien-Tsung Chang ◽  
Chung-Chih Lin

In an indoor safety-critical application, sensors and actuators are clustered together to accomplish critical actions within a limited time constraint. The cluster may be controlled by a dedicated programmed autonomous microcontroller device powered with electricity to perform in-network time critical functions, such as data collection, data processing, and knowledge production. In a data-centric sensor network, approximately 3–60% of the sensor data are faulty, and the data collected from the sensor environment are highly unstructured and ambiguous. Therefore, for safety-critical sensor applications, actuators must function intelligently within a hard time frame and have proper knowledge to perform their logical actions. This paper proposes a knowledge discovery strategy and an exploration algorithm for indoor safety-critical industrial applications. The application evidence and discussion validate that the proposed strategy and algorithm can be implemented for knowledge discovery within the operational framework.


Sensors ◽  
2019 ◽  
Vol 19 (4) ◽  
pp. 823 ◽  
Author(s):  
Mingyang Geng ◽  
Shuqi Liu ◽  
Zhaoxia Wu

Autonomously following a man-made trail in the wild is a challenging problem for robotic systems. Recently, deep learning-based approaches have cast the trail-following problem as an image classification task and have achieved great success on the vision-based trail-following problem. However, the existing research focuses only on the trail-following task with a single-robot system. In contrast, many real-world robotic tasks, such as search and rescue, are conducted by a group of robots. While these robots move through the wild as a group, they can cooperate to achieve more robust performance and perform the trail-following task in a better manner. Concretely, each robot can periodically exchange vision data with other robots and make decisions based on both its local view and the information from others. This paper proposes a sensor fusion-based cooperative trail-following method, which enables a group of robots to implement the trail-following task by fusing the sensor data of each robot. Our method allows each robot to face the same direction from a different altitude, fuse the vision data features on the collective level, and then take action respectively. Besides, considering the quality-of-service requirements of the robotic software, our method uses a "threshold" mechanism to limit the conditions under which the sensor data fusion process is carried out. Qualitative and quantitative experiments on a real-world dataset show that our method significantly improves recognition accuracy and leads to more robust performance compared with a single-robot system.
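The "threshold" mechanism can be sketched as follows: a robot acts on its local classification alone when confident, and only triggers the costlier fusion with peers' data when its confidence falls below a bound. The direction labels, probability values and simple averaging fusion are illustrative assumptions, not the paper's actual classifier outputs or fusion rule.

```python
# Threshold-gated fusion sketch for cooperative trail following.

def decide(local_probs, peer_probs_list, threshold=0.8):
    """local_probs / peer dicts map a direction label -> probability.
    Returns (chosen_direction, fusion_was_used)."""
    best = max(local_probs, key=local_probs.get)
    if local_probs[best] >= threshold:
        return best, False                 # confident: skip fusion, save QoS
    # Fuse on the collective level: average probabilities across robots.
    fused = {}
    for d in local_probs:
        probs = [local_probs[d]] + [p[d] for p in peer_probs_list]
        fused[d] = sum(probs) / len(probs)
    return max(fused, key=fused.get), True  # fused decision

local = {"left": 0.45, "straight": 0.40, "right": 0.15}
peers = [{"left": 0.2, "straight": 0.7, "right": 0.1}]
direction, used_fusion = decide(local, peers)   # uncertain -> fuses
```

Gating fusion this way bounds the communication and latency cost, which is the quality-of-service concern the method addresses.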


2020 ◽  
Vol 835 ◽  
pp. 306-316
Author(s):  
Haitham Elgazzar ◽  
Shimaa El-Hadad ◽  
Hassan Abdel-Sabour

316L stainless steel is used in various industrial applications, including the chemical, biomedical and mechanical industries, due to its good mechanical properties and corrosion resistance. Recycling 316L stainless steel scrap without significantly reducing its value has recently received great attention because of environmental regulations. In the current work, 316L stainless steel scrap was recycled via casting using a skull induction melting technique. The cast products were subsequently subjected to a laser surface melting process to improve their surface properties for use in harsh environments. The results showed defect-free surfaces with homogeneous microstructures. Nano-sized grains were also obtained due to the rapid solidification process; such grains are preferred for extending the usage of 316L stainless steel to new applications.
Corresponding author e-mail: [email protected]


Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3786 ◽  
Author(s):  
Huang ◽  
Hsieh ◽  
Liu ◽  
Cheng ◽  
Hsu ◽  
...  

The interior space of large-scale buildings, such as hospitals with a variety of departments, is so complicated that people may easily lose their way while visiting. Difficulties in wayfinding can cause stress, anxiety, frustration and safety issues for patients and families. An indoor navigation system, including route planning and localization, is utilized to guide people from one place to another. The localization of moving subjects is a critical functional component of an indoor navigation system. Pedestrian dead reckoning (PDR) is a technology widely employed for localization due to the advantage of being independent of infrastructure. To improve the accuracy of the localization system, combining different technologies is one solution. In this study, a multi-sensor fusion approach is proposed to improve the accuracy of the PDR system by utilizing a light sensor, Bluetooth and map information. These simple mechanisms are applied to deal with the issue of accumulative error by identifying edge and sub-edge information from both Bluetooth and the light sensor. Overall, the accumulative error of the proposed multi-sensor fusion approach is below 65 cm in different cases of light arrangement. Compared to an inertial sensor-based PDR system, the proposed multi-sensor fusion approach can improve localization accuracy by 90% in an environment with an appropriate density of ceiling-mounted lamps. The results demonstrate that the proposed approach can improve localization accuracy by utilizing multi-sensor data and fulfill the feasibility requirements of localization in an indoor navigation system.
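The core PDR update plus a landmark correction can be sketched in a few lines: each detected step advances the pose along the current heading, and recognizing a known ceiling lamp via the light sensor snaps the position to an absolute fix, bounding the accumulated error. The step length and lamp position are illustrative values, not the study's calibrated parameters.

```python
import math

def pdr_step(pos, heading_deg, step_len=0.7):
    """Advance the dead-reckoned position by one detected step."""
    x, y = pos
    h = math.radians(heading_deg)
    return (x + step_len * math.cos(h), y + step_len * math.sin(h))

def correct_at_lamp(pos, lamp_pos):
    """Light-sensor fix: a recognized lamp provides an absolute position,
    discarding whatever error dead reckoning has accumulated."""
    return lamp_pos

pos = (0.0, 0.0)
for _ in range(10):                  # ten steps at heading 0 degrees
    pos = pdr_step(pos, heading_deg=0.0)
drifted = pos                        # (7.0, 0.0) in this noise-free toy run
pos = correct_at_lamp(pos, lamp_pos=(6.5, 0.2))
```

Because PDR error grows with every step while lamp fixes are absolute, the spacing of ceiling-mounted lamps directly caps the worst-case accumulative error, matching the abstract's observation about lamp density.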

