3D Distance Measurement from a Camera to a Mobile Vehicle, Using Monocular Vision

2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Saúl Martínez-Díaz

Estimation of distance to objects in real-world scenes is an important topic in several applications, such as navigation of autonomous robots, simultaneous localization and mapping (SLAM), and augmented reality (AR). Although technology exists for this purpose, it has disadvantages in some cases. For example, GPS systems are susceptible to interference, especially in places surrounded by buildings, under bridges, or indoors; alternatively, RGB-D sensors can be used, but they are expensive and their operational range is limited. Monocular vision is a suitable low-cost alternative that can be used both indoors and outdoors. However, monocular odometry is challenging because the object location can be known only up to a scale factor. Moreover, when objects are moving, the location must be estimated from consecutive images, accumulating error. This paper introduces a new method to compute the distance from a single image of the desired object, with known dimensions, captured with a calibrated monocular vision system. The method is less restrictive than other proposals in the state-of-the-art literature. For the detection of interest points, a region-based convolutional neural network combined with a corner detector was used. The proposed method was tested on a standard dataset and on images acquired by a low-cost, low-resolution webcam under non-controlled conditions. The system was also compared with a calibrated stereo vision system. Results showed similar performance for both systems, but the monocular system accomplished the task in less time.
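The geometric relation underlying single-image distance estimation with a known object size can be sketched as follows. This is a minimal pinhole-camera illustration, not the paper's full method; the focal length, object width, and bounding-box width are hypothetical values:

```python
# Minimal sketch: distance to an object of known physical width seen by a
# calibrated pinhole camera. By similar triangles, Z = f * W / w, where
# f is the focal length in pixels, W the real width, w the pixel width.

def distance_from_known_width(focal_px: float,
                              real_width_m: float,
                              pixel_width: float) -> float:
    """Return the distance Z (in meters) implied by the pinhole model."""
    if pixel_width <= 0:
        raise ValueError("pixel width must be positive")
    return focal_px * real_width_m / pixel_width

# Hypothetical example: a 1.8 m wide vehicle spanning 120 px with f = 800 px
print(distance_from_known_width(800.0, 1.8, 120.0))  # -> 12.0
```

In practice the pixel width would come from the detected interest points (e.g. the corners of a known part of the vehicle), and the focal length from the camera calibration.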

Sensors ◽  
2019 ◽  
Vol 19 (12) ◽  
pp. 2795
Author(s):  
Lahemer ◽  
Rad

In this paper, the problem of Simultaneous Localization And Mapping (SLAM) is addressed via a novel augmented landmark vision-based ellipsoidal SLAM. The algorithm is implemented on a NAO humanoid robot and is tested in an indoor environment. The main feature of the system is the implementation of SLAM with a monocular vision system. Distinguished landmarks, referred to as NAOmarks, are employed to localize the robot via its monocular vision system. We henceforth introduce the notion of robotic augmented reality (RAR) and present a monocular Extended Kalman Filter (EKF)/ellipsoidal SLAM in order to improve the performance and alleviate the computational effort, to provide landmark identification, and to simplify the data association problem. The proposed SLAM algorithm is implemented in real time to further calibrate the ellipsoidal SLAM parameters and noise bounding and to improve its overall accuracy. The augmented EKF/ellipsoidal SLAM algorithms are compared with the regular EKF/ellipsoidal SLAM methods, and the merits of each algorithm are also discussed in the paper. The real-time experimental and simulation studies suggest that the adaptive augmented ellipsoidal SLAM is more accurate than the conventional EKF/ellipsoidal SLAMs.
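The predict/update cycle at the heart of any EKF-based SLAM can be shown in its simplest scalar form. This is only a one-dimensional Kalman sketch with made-up noise values; the paper's filter tracks full robot and landmark states with NAOmark observations:

```python
# Hedged sketch of the scalar Kalman predict/update cycle that EKF-SLAM
# generalizes to full state vectors. Q and R are hypothetical process and
# measurement noise variances.

def kf_step(x: float, P: float, u: float, z: float,
            Q: float = 0.01, R: float = 0.1) -> tuple:
    # Predict: propagate the state with control u and inflate covariance.
    x_pred = x + u
    P_pred = P + Q
    # Update: blend the prediction with measurement z via the Kalman gain.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

# One step from x=0 with a unit move and a measurement of 1.2:
x, P = kf_step(0.0, 1.0, 1.0, 1.2)
print(round(x, 3), round(P, 3))  # posterior mean ~1.182, variance shrinks
```

The update always pulls the estimate toward the measurement and reduces the covariance, which is why landmark re-observations tighten the map over time.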


2014 ◽  
Vol 51 (1) ◽  
pp. 66-75 ◽  
Author(s):  
Yun-Hua Wu ◽  
Yang Gao ◽  
Jia-Wei Lin ◽  
Robin Raus ◽  
Shi-Jie Zhang ◽  
...  

Biomimetics ◽  
2018 ◽  
Vol 3 (4) ◽  
pp. 34 ◽  
Author(s):  
Susan Frost ◽  
Leslie Yates ◽  
Hiroyuki Kumagai

Identifying appropriate sites for landing a spacecraft or building permanent structures is critical for extraterrestrial exploration. By tracking the movement of land masses and structures on a planetary surface, scientists can better predict issues that could affect the integrity of the site or structures. A lightweight, low-cost, low-power bioinspired optical sensor is being developed at the National Aeronautics and Space Administration (NASA) Ames Research Center to remotely measure small displacements of land masses on either side of a fault. This paper describes the sensor, which is inspired by the compound eye vision system found in many insects, and the algorithms developed to estimate displacement. Results are presented for indoor and outdoor tests using the sensor to measure the displacement of a specially designed target that is located 0.35, 6, and 30 m from the sensor and is moved 10 mm to the left and right of a centered position, simulating the displacement of land masses on either side of a fault. Measurement uncertainty estimates were a few tenths of a millimeter when the target was located 0.35 and 6 m from the sensor. At the 30 m distance, corrections were required to obtain accuracies on the order of 1 mm.
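The principle of recovering a small displacement from two sensor readings can be illustrated with a brute-force correlation search over one-dimensional intensity profiles. This is only an illustrative sketch with a synthetic Gaussian target, not NASA's algorithm:

```python
# Illustrative sketch: estimate the shift between two 1-D intensity
# profiles by maximizing their correlation over all integer lags.
import math

def estimate_shift(ref, moved):
    """Return the integer shift s that best aligns `moved` to `ref`."""
    n = len(ref)
    best_shift, best_score = 0, float("-inf")
    for s in range(-n + 1, n):
        # Correlate ref[i] with moved[i + s] over the overlapping range.
        score = sum(ref[i] * moved[i + s]
                    for i in range(n) if 0 <= i + s < n)
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# Synthetic Gaussian target profile, then a copy displaced by 7 samples.
profile = [math.exp(-((i - 50) / 10) ** 2) for i in range(101)]
shifted = [0.0] * 7 + profile[:-7]
print(estimate_shift(profile, shifted))  # -> 7
```

Sub-pixel (here, sub-millimeter) accuracy would additionally require interpolating the correlation peak, which is where the distance-dependent corrections mentioned above come into play.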


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Supakorn Harnsoongnoen ◽  
Nuananong Jaroensuk

Water displacement and flotation are two of the most accurate and rapid methods for grading and assessing the freshness of agricultural products based on density determination. However, these techniques are unsuitable for inspecting products such as eggs, which absorb water; the procedure can therefore be considered intrusive or destructive and can affect the measurement result. Here we present a novel method for non-destructive, non-invasive, low-cost, simple, and real-time monitoring of the grading and freshness assessment of eggs based on density detection using machine vision and a weighing sensor. This is the first proposal that divides egg freshness into intervals through density measurements. The machine vision system was developed to measure the external physical characteristics (length and breadth) of eggs in order to evaluate their volume, and the weighing system was developed to measure egg weight. Egg weight and volume were used to calculate density for grading and freshness assessment. The proposed system measured weight, volume, and density with accuracies of 99.88%, 98.26%, and 99.02%, respectively. The results showed that the weight and freshness of eggs stored at room temperature decreased with storage time. The relationship between density and percentage of freshness was linear for all egg sizes, with coefficients of determination (R2) of 0.9982, 0.9999, 0.9996, 0.9996, and 0.9994 for egg size classes 0, 1, 2, 3, and 4, respectively. This study shows that egg freshness can be determined through density without using water for displacement or flotation tests, which has future potential as an important measuring system for the poultry industry.
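The density computation described above can be sketched in a few lines. The volume model here is a prolate-spheroid approximation from length and breadth; the paper's exact volume formula may differ, and the example masses and dimensions are hypothetical:

```python
# Hedged sketch: egg density from weight (weighing sensor) and volume
# (machine vision length/breadth), using a prolate-spheroid approximation
# V = (pi/6) * L * B^2. The paper's actual volume model may differ.
import math

def egg_volume_cm3(length_cm: float, breadth_cm: float) -> float:
    """Prolate-spheroid volume approximation from length and breadth."""
    return math.pi / 6.0 * length_cm * breadth_cm ** 2

def egg_density(mass_g: float, length_cm: float, breadth_cm: float) -> float:
    """Density in g/cm^3 = measured mass / estimated volume."""
    return mass_g / egg_volume_cm3(length_cm, breadth_cm)

# Hypothetical egg: 60 g, 5.7 cm long, 4.4 cm broad.
print(round(egg_density(60.0, 5.7, 4.4), 3))  # -> 1.038
```

Fresh eggs are denser (close to 1.08 g/cm^3) and lose density as the air cell grows during storage, which is why density maps onto freshness intervals.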


2021 ◽  
Vol 101 (3) ◽  
Author(s):  
Korbinian Nottensteiner ◽  
Arne Sachtler ◽  
Alin Albu-Schäffer

Robotic assembly tasks are typically implemented in static settings in which parts are kept at fixed locations by making use of part holders. Very few works deal with the problem of moving parts in industrial assembly applications. However, having autonomous robots that are able to execute assembly tasks in dynamic environments could lead to more flexible facilities with reduced implementation efforts for individual products. In this paper, we present a general approach towards autonomous robotic assembly that combines visual and intrinsic tactile sensing to continuously track parts within a single Bayesian framework. Based on this, it is possible to implement object-centric assembly skills that are guided by the estimated poses of the parts, including cases where occlusions block the vision system. In particular, we investigate the application of this approach for peg-in-hole assembly. A tilt-and-align strategy is implemented using a Cartesian impedance controller, and combined with an adaptive path executor. Experimental results with multiple part combinations are provided and analyzed in detail.
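The simplest Bayesian way to combine two noisy estimates of the same quantity, such as a visual and a tactile pose measurement, is inverse-variance weighting. This is only a one-dimensional sketch with hypothetical numbers; the paper's framework fuses full 6-DoF poses recursively:

```python
# Hedged sketch: inverse-variance fusion of two noisy scalar estimates,
# e.g. a visual and a tactile measurement of one pose coordinate.
# All values below are hypothetical.

def fuse(x_vis: float, var_vis: float,
         x_tac: float, var_tac: float) -> tuple:
    """Return the fused estimate and its (always smaller) variance."""
    w = var_tac / (var_vis + var_tac)   # weight on the visual estimate
    x = w * x_vis + (1 - w) * x_tac
    var = var_vis * var_tac / (var_vis + var_tac)
    return x, var

# Vision says 0.10 m (noisy); touch says 0.14 m (precise).
x, var = fuse(0.10, 0.004, 0.14, 0.001)
print(round(x, 3), var)  # fused estimate sits closer to the tactile one
```

Because the fused variance is always below either input variance, tactile contact events can keep the tracker confident even while occlusions degrade the vision channel.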


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 343
Author(s):  
Kim Bjerge ◽  
Jakob Bonde Nielsen ◽  
Martin Videbæk Sepstrup ◽  
Flemming Helsing-Nielsen ◽  
Toke Thomas Høye

Insect monitoring methods are typically very time-consuming and involve substantial investment in species identification following manual trapping in the field. Insect traps are often only serviced weekly, resulting in low temporal resolution of the monitoring data, which hampers the ecological interpretation. This paper presents a portable computer vision system capable of attracting and detecting live insects. More specifically, the paper proposes detection and classification of species by recording images of live individuals attracted to a light trap. An Automated Moth Trap (AMT) with multiple light sources and a camera was designed to attract and monitor live insects during twilight and night hours. A computer vision algorithm referred to as Moth Classification and Counting (MCC), based on deep learning analysis of the captured images, tracked and counted the number of insects and identified moth species. Observations over 48 nights resulted in the capture of more than 250,000 images with an average of 5675 images per night. A customized convolutional neural network was trained on 2000 labeled images of live moths represented by eight different classes, achieving a high validation F1-score of 0.93. The algorithm measured an average classification and tracking F1-score of 0.71 and a tracking detection rate of 0.79. Overall, the proposed computer vision system and algorithm showed promising results as a low-cost solution for non-destructive and automatic monitoring of moths.
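The F1-score used to evaluate the MCC classifier is the harmonic mean of precision and recall. A minimal sketch, with hypothetical true-positive, false-positive, and false-negative counts:

```python
# Minimal sketch of the F1-score used above: the harmonic mean of
# precision and recall. The counts below are hypothetical.

def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# 93 correct detections, 7 spurious, 7 missed -> F1 of 0.93,
# matching the validation score reported for the moth classifier.
print(round(f1_score(93, 7, 7), 4))  # -> 0.93
```

The lower tracking F1 of 0.71 reflects that tracking errors (identity switches, broken tracks) compound classification errors across frames.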


2021 ◽  
Vol 13 (12) ◽  
pp. 2351
Author(s):  
Alessandro Torresani ◽  
Fabio Menna ◽  
Roberto Battisti ◽  
Fabio Remondino

Mobile and handheld mapping systems are becoming widely used nowadays as fast and cost-effective data acquisition systems for 3D reconstruction purposes. While most research and commercial systems are based on active sensors, solutions employing only cameras and photogrammetry are attracting more and more interest due to their significantly lower cost, size, and power consumption. In this work, we propose an ARM-based, low-cost, and lightweight stereo vision mobile mapping system based on a Visual Simultaneous Localization And Mapping (V-SLAM) algorithm. The prototype system, named GuPho (Guided Photogrammetric System), also integrates an in-house guidance system which enables optimized image acquisition, robust management of the cameras, and feedback on positioning and acquisition speed. The presented results show the effectiveness of the developed prototype in mapping large scenarios, enabling motion blur prevention and robust camera exposure control and achieving accurate 3D results.
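The depth relation that a calibrated stereo rig (like the one above) relies on is Z = f·b/d: depth from focal length, baseline, and disparity. A minimal sketch with hypothetical rig parameters:

```python
# Hedged sketch of stereo triangulation: Z = f * b / d for focal length
# f (pixels), baseline b (meters) and disparity d (pixels). The rig
# parameters below are hypothetical, not GuPho's.

def depth_from_disparity(focal_px: float,
                         baseline_m: float,
                         disparity_px: float) -> float:
    """Return the depth Z (meters) of a point matched across both cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# f = 700 px, 12.5 cm baseline, 14 px disparity -> 6.25 m depth.
print(depth_from_disparity(700.0, 0.125, 14.0))  # -> 6.25
```

Because depth error grows quadratically with distance for a fixed disparity error, guidance on acquisition distance and speed (as GuPho provides) directly affects reconstruction accuracy.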

