Fusion of Multiple Ultrasonic Sensor Data and Image Data for Measuring an Object’s Motion

2005 ◽  
Vol 17 (1) ◽  
pp. 36-43
Author(s):  
Kazunori Umeda ◽  
Jun Ota ◽  
Hisayuki Kimura ◽  
...  

Robot sensing requires two types of observation: intensive and wide-angle. To measure a moving object's motion, we selected multiple ultrasonic sensors for intensive observation and an image sensor for wide-angle observation, and applied two kinds of fusion: one fusing the data of the multiple ultrasonic sensors, and the other fusing the two types of sensor data. The fusion of multiple ultrasonic sensor data exploits the object's movement from the measurement range of one ultrasonic sensor into another sensor's range. Both fusions are formulated in a Kalman filter framework. Simulations and experiments demonstrate the methods' effectiveness and their applicability to an actual robot system.
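The two-stage fusion described above is a natural fit for a predict/update loop. Below is a minimal sketch, assuming a 1-D constant-velocity model and position-only measurements whose noise variance changes per sample, so readings from different ultrasonic sensors or from the image sensor can be fed into one filter; it is not the authors' exact formulation:

```python
import numpy as np

def kalman_fuse(z_list, R_list, dt=0.1, q=0.01):
    """Minimal 1-D constant-velocity Kalman filter over position
    measurements from heterogeneous sensors (illustrative sketch).
    z_list[k] is the position measurement at step k; R_list[k] is
    that measurement's noise variance."""
    F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])          # process noise
    H = np.array([1.0, 0.0])                     # we observe position only
    x = np.array([z_list[0], 0.0])               # state: [position, velocity]
    P = np.eye(2)                                # state covariance
    for z, R in zip(z_list[1:], R_list[1:]):
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with this sensor's noise variance R
        S = float(H @ P @ H) + R                 # innovation variance
        K = (P @ H) / S                          # Kalman gain
        x = x + K * (z - float(H @ x))
        P = (np.eye(2) - np.outer(K, H)) @ P
    return x
```

Because each measurement carries its own variance, switching the object from one sensor's range to another's amounts to changing `R` between steps.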

2020 ◽  
Vol 71 (06) ◽  
pp. 530-537
Author(s):  
HAKAN YÜKSEL ◽  
MELIHA OKTAV BULUT

Sensors can capture and scan many objects in real time for military, security, health and industrial applications. Rapid advances in technology allow sensors to be made smaller, cheaper and more energy efficient, and low-cost sensors have become attractive alternatives to high-cost laser scanners in recent years. The Kinect sensor can measure depth data at low cost and high resolution by scanning the environment. In this study, this sensor collected data on users standing in front of a scanner, and the resulting depth data were tested. The process was repeated with four different body positions, and the results were analysed. The sensor data proved reliable against real measurements: when the depth data taken by the sensor were compared with the real measures, the agreement was found to be significant. The difference between the depth image data (across different users, positions and body measures) and the real data ranges from 0.35 to 1.15 cm, showing that the sensor's results are close to the real data. When the accuracy of the sensor against real measurements is examined, the values fall between 98.46% and 99.6%. Thus, this depth image sensor is reliable and can be used as a cheaper alternative for body measurements.
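The 98.46%–99.6% figures can be read as a percent-agreement score between the sensor reading and ground truth. A minimal sketch of one plausible definition (the paper does not state its exact formula):

```python
def depth_accuracy_pct(sensor_cm, real_cm):
    """Percent agreement between a sensor depth/body reading and a
    ground-truth measurement, both in cm (one plausible definition;
    an assumption, not the paper's stated formula)."""
    return 100.0 * (1.0 - abs(sensor_cm - real_cm) / real_cm)
```

Under this definition, a 1 cm error on a 100 cm measure scores 99%, in the same range as the reported values.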



Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2730 ◽  
Author(s):  
Varuna De Silva ◽  
Jamie Roche ◽  
Ahmet Kondoz

Autonomous robots that assist humans in day-to-day living tasks are becoming increasingly popular. Autonomous mobile robots operate by sensing and perceiving their surrounding environment to make accurate driving decisions. A combination of several different sensors, such as LiDAR, radar, ultrasound sensors and cameras, is utilized to sense the surrounding environment of autonomous vehicles. These heterogeneous sensors simultaneously capture various physical attributes of the environment. Such multimodality and redundancy of sensing need to be positively utilized for reliable and consistent perception of the environment through sensor data fusion. However, these multimodal sensor data streams differ from each other in many ways, such as temporal and spatial resolution, data format, and geometric alignment. For the subsequent perception algorithms to utilize the diversity offered by multimodal sensing, the data streams need to be spatially, geometrically and temporally aligned with each other. In this paper, we address the problem of fusing the outputs of a Light Detection and Ranging (LiDAR) scanner and a wide-angle monocular image sensor for free space detection. The outputs of the LiDAR scanner and the image sensor have different spatial resolutions and need to be aligned with each other. A geometrical model is used to spatially align the two sensor outputs, followed by a Gaussian Process (GP) regression-based resolution matching algorithm that interpolates the missing data with quantifiable uncertainty. The results indicate that the proposed sensor data fusion framework significantly aids the subsequent perception steps, as illustrated by the performance improvement of an uncertainty-aware free space detection algorithm.
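The resolution-matching step can be illustrated with plain GP regression: sparse range samples are treated as noisy observations, and a dense query grid receives both an interpolated mean and a predictive variance as the uncertainty measure. A 1-D NumPy sketch, with an RBF kernel and hyperparameters chosen for illustration rather than taken from the paper:

```python
import numpy as np

def gp_interpolate(x_obs, y_obs, x_query, ell=1.0, sf=1.0, noise=1e-3):
    """GP regression with an RBF kernel: interpolate sparse range
    samples (x_obs, y_obs) onto query positions x_query, returning
    the posterior mean and variance (illustrative sketch)."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sf**2 * np.exp(-0.5 * (d / ell)**2)
    K = k(x_obs, x_obs) + noise * np.eye(len(x_obs))   # observation kernel
    Ks = k(x_query, x_obs)                             # cross-kernel
    Kss = k(x_query, x_query)                          # query kernel
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mean = Ks @ alpha                                  # posterior mean
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - np.sum(v**2, axis=0)          # posterior variance
    return mean, var
```

Queries near the LiDAR samples come back with low variance, while queries far from any sample revert to the prior with high variance, which is exactly the "quantifiable uncertainty" a downstream uncertainty-aware detector can exploit.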


2020 ◽  
Vol 2020 (1) ◽  
pp. 91-95
Author(s):  
Philipp Backes ◽  
Jan Fröhlich

Non-regular sampling is a well-known method to avoid aliasing in digital images. However, the vast majority of single-sensor cameras use regularly organized color filter arrays (CFAs), which require an optical low-pass filter (OLPF) and sophisticated demosaicing algorithms to suppress sampling errors. In this paper a variety of non-regular sampling patterns are evaluated, and a new universal demosaicing algorithm based on frequency selective reconstruction is presented. By simulating such sensors it is shown that images acquired with non-regular CFAs and no OLPF can reach an image quality similar to that of their filtered, regularly sampled counterparts. The MATLAB source code and results are available at: http://github.com/PhilippBackes/dFSR
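A non-regular CFA can be as simple as shuffling the usual channel densities over the sensor plane. The toy generator below keeps the Bayer proportions (25% red, 50% green, 25% blue) but randomizes their positions; it is only a sketch, not one of the specific patterns evaluated in the paper (whose MATLAB code is at the linked repository):

```python
import numpy as np

def random_cfa(h, w, seed=0):
    """Build a non-regular color filter array with Bayer-like channel
    densities (25% R, 50% G, 25% B) but randomized positions.
    Returns an (h, w) array of channel labels: 0=R, 1=G, 2=B."""
    rng = np.random.default_rng(seed)
    n = h * w
    labels = np.array([0] * (n // 4) +            # red sites
                      [1] * (n // 2) +            # green sites
                      [2] * (n - n // 4 - n // 2))  # blue sites
    rng.shuffle(labels)                           # destroy the regular grid
    return labels.reshape(h, w)
```

Because the sampling lattice has no fixed period, aliasing energy is spread into broadband noise instead of structured moiré, which is what lets such sensors drop the OLPF.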


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Honggang Wang ◽  
Ruixue Yu ◽  
Ruoyu Pan ◽  
Mengyuan Liu ◽  
Qiongdan Huang ◽  
...  

Purpose: In manufacturing environments, mobile radio frequency identification (RFID) robots need to quickly identify and collect various types of passive tag and active tag sensor data. The purpose of this paper is to design a robot system compatible with ultra-high-frequency (UHF) band passive and active RFID applications, and to propose a new anti-collision protocol that improves identification efficiency for active tag data collection.

Design/methodology/approach: A new UHF RFID robot system based on a cloud platform is designed and verified. For the active RFID system, a grouping reservation-based anti-collision algorithm is proposed, in which an inventory round is divided into a reservation period and a polling period. The reservation period is divided into multiple sub-slots, and grouped tags contend for sub-slots by randomly transmitting a short reservation frame. In the polling period, the reader then accesses each tag by polling. When tag replies collide, the reader re-queries the collided tags once, and tags preparing to reply avoid collisions through random back-off and channel activity detection.

Findings: The proposed algorithm achieves a maximum theoretical system throughput of about 0.94 with very low tag data frame transmission overhead. The capture effect and channel activity detection in the physical layer effectively improve system throughput and reduce tag data transmission.

Originality/value: In this paper, the authors design and verify a UHF band passive and active hybrid RFID robot architecture based on cloud collaboration, and the proposed anti-collision algorithm improves active tag data collection speed and reduces tag transmission overhead in complex manufacturing environments.
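The reservation period can be simulated to see why random contention alone falls well short of the protocol's 0.94 throughput. In the simplified model below, each tag picks a uniformly random sub-slot and a sub-slot reserves a tag only when exactly one reservation frame lands in it; the capture effect and the re-query step of the actual protocol are omitted:

```python
import random

def reservation_round(n_tags, n_subslots, rng):
    """One reservation period: each tag transmits its short reservation
    frame in a uniformly random sub-slot. A tag is reserved when it is
    alone in its sub-slot (simplified model; capture effect ignored)."""
    slots = [0] * n_subslots
    for _ in range(n_tags):
        slots[rng.randrange(n_subslots)] += 1
    return sum(1 for c in slots if c == 1)   # count of reserved tags

def mean_reserved(n_tags, n_subslots, rounds=2000, seed=1):
    """Average number of reserved tags per round over many rounds."""
    rng = random.Random(seed)
    return sum(reservation_round(n_tags, n_subslots, rng)
               for _ in range(rounds)) / rounds
```

With as many sub-slots as tags, only about a fraction 1/e of the tags reserve successfully per round, which is why the protocol layers a collision-free polling period, a re-query of collided tags, and physical-layer capture on top of the random reservation to push throughput toward 0.94.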


Sensors ◽  
2019 ◽  
Vol 19 (4) ◽  
pp. 823 ◽  
Author(s):  
Mingyang Geng ◽  
Shuqi Liu ◽  
Zhaoxia Wu

Autonomously following a man-made trail in the wild is a challenging problem for robotic systems. Recently, deep learning-based approaches have cast trail following as an image classification task and achieved great success on the vision-based trail-following problem. However, existing research focuses only on the trail-following task with a single-robot system. In contrast, many real-world robotic tasks, such as search and rescue, are conducted by a group of robots. While these robots move through the wild as a group, they can cooperate to achieve more robust performance and carry out the trail-following task in a better manner. Concretely, each robot can periodically exchange vision data with the other robots and make decisions based on both its local view and the information from others. This paper proposes a sensor fusion-based cooperative trail-following method, which enables a group of robots to implement the trail-following task by fusing the sensor data of each robot. Our method has each robot face the same direction from a different altitude, fuses the vision data features at the collective level, and then lets each robot act individually. In addition, considering the quality-of-service requirements of the robotic software, our method gates the sensor data fusion process with a "threshold" mechanism. Qualitative and quantitative experiments on a real-world dataset show that our method significantly improves recognition accuracy and leads to more robust performance compared with a single-robot system.
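The "threshold" gating can be sketched as a quality check applied before peer features are averaged in. In the toy version below, a peer's feature vector is fused only if its reported link latency is under a threshold; the latency field, the threshold value, and the plain averaging rule are assumptions for illustration (the paper fuses learned vision features under its own QoS condition):

```python
def fuse_features(local, peers, max_latency_ms=50.0):
    """Fuse a robot's local feature vector with peer feature vectors,
    admitting only peers whose link latency meets the QoS threshold.
    `peers` is a list of (feature_vector, latency_ms) pairs
    (hypothetical interface, for illustration only)."""
    usable = [f for f, lat in peers if lat <= max_latency_ms]
    vecs = [local] + usable
    n = len(vecs)
    # element-wise mean over the admitted feature vectors
    return [sum(v[i] for v in vecs) / n for i in range(len(local))]
```

If no peer passes the threshold, the robot simply falls back to its local view, so the gating never blocks a decision.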


2010 ◽  
Vol 39 ◽  
pp. 523-528
Author(s):  
Xin Hua Yang ◽  
Yuan Yuan Shang ◽  
Da Wei Xu ◽  
Hui Zhuo Niu

This paper introduces the design of a high-speed image acquisition system based on the Avalon bus and built with SOPC technology. Several peripherals on the Avalon bus, such as the imaging unit, decoding unit and storage unit, were customized for this system, improving the speed of the whole imaging pipeline. The decoding unit compresses the data to three-fourths of its original size. A custom DMA controller moves the image data into two caches in SDRAM, discarding the FIFO that traditional data acquisition systems require and thereby relieving the CPU of data-moving work. At the same time, image acquisition and data transmission can proceed in parallel. Finally, the design was deployed on a high-speed image acquisition system built around a 2K*2K CMOS image sensor, improving acquisition speed in three ways: data encoding, the custom DMA controller and parallel processing.
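The two-cache DMA scheme is classic double buffering: while the DMA fills one buffer, the CPU consumes the other, so acquisition and transfer overlap. A behavioral sketch of the idea (the real design is SOPC hardware, not software):

```python
def ping_pong(frames, process):
    """Model of double buffering: the 'DMA' writes frame i into one of
    two buffers while 'process' (the CPU side) handles the frame that
    landed in the other buffer on the previous step."""
    buffers = [None, None]
    out = []
    for i, frame in enumerate(frames):
        write_idx = i % 2                 # buffer the DMA is filling now
        buffers[write_idx] = frame
        if i > 0:
            # CPU works on the other buffer, concurrently in hardware
            out.append(process(buffers[1 - write_idx]))
    if frames:
        out.append(process(buffers[(len(frames) - 1) % 2]))  # drain last frame
    return out
```

Every frame is processed exactly once, but processing of frame i overlaps the arrival of frame i+1, which is the parallelism the custom DMA controller provides without a FIFO.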

