Safe Visual Navigation via Deep Learning and Novelty Detection

Author(s):  
Charles Richter ◽  
Nicholas Roy


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Lars Banko ◽  
Phillip M. Maffettone ◽  
Dennis Naujoks ◽  
Daniel Olds ◽  
Alfred Ludwig

Abstract We apply variational autoencoders (VAE) to X-ray diffraction (XRD) data analysis on both simulated and experimental thin-film data. We show that crystal structure representations learned by a VAE reveal latent information, such as the structural similarity of textured diffraction patterns. While other artificial intelligence (AI) agents are effective at classifying XRD data into known phases, a similarly conditioned VAE is uniquely effective at knowing what it doesn’t know: it can rapidly identify data outside the distribution it was trained on, such as novel phases and mixtures. These capabilities demonstrate that a VAE is a valuable AI agent for aiding materials discovery and understanding XRD measurements both ‘on-the-fly’ and during post hoc analysis.
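The out-of-distribution test described in the abstract can be sketched with reconstruction error. Since a full VAE needs a deep-learning stack, this minimal stand-in substitutes a linear autoencoder (PCA via SVD) on synthetic data; the 99th-percentile threshold is an assumption, not the authors' choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "diffraction patterns": points near a 2-D plane in 20-D space,
# standing in for the manifold of known phases the model was trained on.
basis = rng.normal(size=(2, 20))
train = rng.normal(size=(200, 2)) @ basis + 0.01 * rng.normal(size=(200, 20))

# Linear autoencoder via SVD (PCA) as a stand-in for the VAE encoder/decoder.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
decoder = vt[:2]                              # top-2 principal directions

def novelty_score(x):
    """Reconstruction error: large when x lies off the learned manifold."""
    z = (x - mean) @ decoder.T                # encode to 2 latent dims
    x_hat = z @ decoder + mean                # decode back to 20-D
    return float(np.linalg.norm(x - x_hat))

# Flag anything beyond the 99th percentile of training-set scores as novel.
threshold = np.percentile([novelty_score(x) for x in train], 99)
novel = rng.normal(size=20) * 5.0             # a pattern unlike the training set
print(novelty_score(novel) > threshold)       # True: out-of-distribution
```

The same score can run 'on-the-fly': each incoming pattern is encoded, decoded, and compared against the threshold before any phase classification is attempted.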


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4719
Author(s):  
Malik Haris ◽  
Jin Hou

Autonomous driving is an active research area, especially since the emergence of deep learning in machine vision. In such a visual navigation system, the controller captures images and extracts the information the autonomous vehicle needs to navigate safely. In this paper, we first introduce small and medium-sized obstacles that are intentionally or unintentionally left on the road and that pose hazards for both autonomous and human driving. We then discuss a Markov random field (MRF) model that fuses three potentials (gradient potential, curvature prior potential, and depth variance potential) to segment obstacles from non-obstacles in hazardous environments. Because the obstacle segmentation is performed by the MRF model, a deep neural network (DNN) model can use its output to navigate the autonomous vehicle safely away from hazards on the roadway. We found that the proposed method segments obstacles accurately from the blended road background and improves the navigation skill of the autonomous vehicle.
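The three-potential fusion can be sketched on a toy depth map. This is only an illustration of the unary energy: the potential definitions, equal weights, and threshold are assumptions, and a real MRF would add a pairwise smoothness term optimized jointly (e.g., by graph cuts or ICM):

```python
import numpy as np

# Toy depth map: a smoothly sloping road with a small block sticking out of it.
H, W = 20, 20
depth = np.tile(np.linspace(5.0, 10.0, H)[:, None], (1, W))
depth[8:12, 8:12] -= 1.5                      # obstacle closer than the road plane

def gradient_potential(d):
    gy, gx = np.gradient(d)
    return np.hypot(gx, gy)

def curvature_potential(d):
    gy, gx = np.gradient(d)
    gyy, _ = np.gradient(gy)
    _, gxx = np.gradient(gx)
    return np.abs(gxx + gyy)                  # Laplacian: zero on the planar road

def depth_variance_potential(d, k=2):
    out = np.zeros_like(d)
    for i in range(d.shape[0]):
        for j in range(d.shape[1]):
            win = d[max(0, i - k):i + k + 1, max(0, j - k):j + k + 1]
            out[i, j] = win.var()
    return out

# Fused unary energy; equal weights and the 0.8 threshold are assumptions.
energy = (gradient_potential(depth) + curvature_potential(depth)
          + depth_variance_potential(depth))
labels = energy > 0.8                         # True = obstacle pixel

print(bool(labels[8, 10]), bool(labels[2, 2]))   # True False (edge vs. plain road)
```

The smoothly sloping road keeps all three potentials low, while the obstacle's depth discontinuity drives all of them up at once, which is why fusing them is more robust than thresholding any single cue.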


2021 ◽  
Author(s):  
Jing Li ◽  
Jialin Yin ◽  
Lin Deng

Abstract In modern agriculture, the intelligent use of mechanical equipment is one of the main hallmarks of agricultural modernization. Navigation technology is the key technology that lets agricultural machinery operate autonomously in its working environment, and it is a hotspot in intelligent agricultural machinery research. To meet the accuracy requirements of autonomous navigation for intelligent agricultural robots, this paper proposes a visual navigation algorithm for agricultural robots based on deep learning image understanding. The method first processes the images collected by the vision system with a cascaded deep convolutional network and a hybrid dilated convolution fusion method. It then extracts the navigation route from the processed images with an improved Hough transform algorithm, and the posture of the agricultural robot is adjusted accordingly to realize autonomous navigation. Finally, the proposed method is verified in both interference-free and noisy experimental scenes. Experimental results show that the method can navigate autonomously in complex, noisy environments and has good practicability and applicability.
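The route-extraction step can be illustrated with a plain Hough transform on a synthetic binary mask (the paper's improved variant and the CNN preprocessing are omitted; the mask and accumulator resolution are assumptions):

```python
import numpy as np

# Synthetic binary "crop row" mask: plant pixels along the line x = y + 20.
H, W = 40, 64
mask = np.zeros((H, W), dtype=bool)
for y in range(H):
    mask[y, y + 20] = True

# Hough transform: every pixel votes for all (rho, theta) lines through it,
# parameterized as rho = x*cos(theta) + y*sin(theta).
thetas = np.deg2rad(np.arange(-90, 90))
diag = int(np.ceil(np.hypot(H, W)))
acc = np.zeros((2 * diag, len(thetas)), dtype=int)
ys, xs = np.nonzero(mask)
for t, theta in enumerate(thetas):
    rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
    np.add.at(acc, (rhos, t), 1)              # accumulate votes for this theta

# The accumulator peak is the dominant line, i.e., the navigation route.
r_peak, t_peak = np.unravel_index(acc.argmax(), acc.shape)
print(int(np.rint(np.degrees(thetas[t_peak]))), int(acc.max()))   # -45 40
```

The recovered (rho, theta) of the dominant crop row is what the robot's posture controller would steer against.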


2017 ◽  
Author(s):  
Christoph Sommer ◽  
Rudolf Hoefler ◽  
Matthias Samwer ◽  
Daniel W. Gerlich

Abstract Supervised machine learning is a powerful and widely used method to analyze high-content screening data. Despite its accuracy, efficiency, and versatility, supervised machine learning has drawbacks, most notably its dependence on a priori knowledge of expected phenotypes and time-consuming classifier training. We provide a solution to these limitations with CellCognition Explorer, a generic novelty detection and deep learning framework. Application to several large-scale screening data sets on nuclear and mitotic cell morphologies demonstrates that CellCognition Explorer enables discovery of rare phenotypes without user training, which has broad implications for improved assay development in high-content screening.
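Novelty detection without classifier training can be sketched by scoring each cell's feature vector against the dominant (negative-control) phenotype distribution. Mahalanobis distance is used here as a generic stand-in, not CellCognition Explorer's actual model; the feature dimensionality and threshold are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Per-cell feature vectors for the dominant (negative-control) phenotype.
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 5))

mu = normal.mean(0)
cov_inv = np.linalg.inv(np.cov(normal.T))

def mahalanobis(x):
    """Distance of a feature vector from the control distribution."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Threshold: 99th percentile of control-cell scores (an assumed cutoff).
threshold = np.percentile([mahalanobis(x) for x in normal], 99)

rare = np.full(5, 6.0)                 # a cell far from the control phenotype
print(mahalanobis(rare) > threshold)   # True: flagged as a rare phenotype
```

Cells scoring above the threshold are surfaced for inspection, so rare phenotypes are discovered without any a priori class definitions.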


Mathematics ◽  
2020 ◽  
Vol 8 (12) ◽  
pp. 2140
Author(s):  
Oleg Kupervasser ◽  
Hennadii Kutomanov ◽  
Ori Levi ◽  
Vladislav Pukshansky ◽  
Roman Yavich

In this paper, visual navigation of a drone is considered. The drone navigation problem consists of two parts: first, finding the real position and orientation of the drone; second, finding the difference between the desired and real position and orientation and creating the corresponding control signal to decrease that difference. For the first part, the paper presents a method for determining the coordinates of the drone camera with respect to known three-dimensional (3D) ground objects using deep learning. The algorithm has two stages, which makes the task easy for an artificial neural network (ANN) to interpret and consequently increases its accuracy. In the first stage, one ANN finds the image coordinates of the projection of the object origin; in the second stage, another ANN finds the drone camera's position and orientation. The algorithm has high accuracy (errors were measured on a validation set of images as the differences between positions and orientations produced by the pretrained ANN and the known ground truth), and it is not sensitive to interference from changes in lighting, the appearance of external moving objects, and other phenomena under which other visual navigation methods are not effective. For the second part, the paper presents a method for stabilizing drone flight controlled by an autopilot with time delay. Image processing for navigation demands considerable time and therefore introduces a time delay; the proposed method nevertheless achieves stable control in its presence.
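A minimal sketch of the geometry behind the two-stage pipeline (the ANNs themselves are omitted): stage one predicts the image coordinates of a known ground object's origin, which a pinhole model relates to camera pose; stage two inverts that relation. The intrinsics, the axis-aligned camera, and the known-altitude simplification are all illustrative assumptions, since the general inverse requires full pose estimation:

```python
import numpy as np

f, cx, cy = 500.0, 320.0, 240.0          # assumed camera intrinsics

def project(landmark, cam_pos):
    """Pinhole projection with camera axes aligned to world axes and the
    optical axis along +z (a deliberately simplified pose model)."""
    X, Y, Z = landmark - cam_pos
    return np.array([f * X / Z + cx, f * Y / Z + cy])

def recover_cam_xy(landmark, pixel, cam_z):
    """Stage-two inverse for the simplified model: with orientation and
    altitude known, camera x, y follow from one landmark projection."""
    Z = landmark[2] - cam_z
    X = (pixel[0] - cx) * Z / f
    Y = (pixel[1] - cy) * Z / f
    return np.array([landmark[0] - X, landmark[1] - Y])

landmark = np.array([10.0, 4.0, 0.0])    # known 3D ground object origin
cam_pos = np.array([7.0, 2.0, -30.0])    # drone 30 units from the ground plane

px = project(landmark, cam_pos)          # what stage one learns to predict
print(recover_cam_xy(landmark, px, cam_pos[2]))   # close to [7. 2.]
```

Splitting the problem this way is what the abstract means by interpretability: each ANN regresses a quantity with a direct geometric meaning rather than an end-to-end pose.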


2021 ◽  
Vol 6 (55) ◽  
pp. eabf3320
Author(s):  
Anthony T. Fragoso ◽  
Connor T. Lee ◽  
Austin S. McCoy ◽  
Soon-Jo Chung

Visual terrain-relative navigation (VTRN) is a localization method based on registering a source image taken from a robotic vehicle against a georeferenced target image. With high-resolution imagery databases of Earth and other planets now available, VTRN offers accurate, drift-free navigation for air and space robots even in the absence of external positioning signals. Despite its potential for high accuracy, however, VTRN remains extremely fragile to common and predictable seasonal effects, such as lighting, vegetation changes, and snow cover. Engineered registration algorithms are mature and have provable geometric advantages but cannot accommodate the content changes caused by seasonal effects and have poor matching skill. Approaches based on deep learning can accommodate image content changes but produce opaque position estimates that either lack an interpretable uncertainty or require tedious human annotation. In this work, we address these issues with targeted use of deep learning within an image transform architecture, which converts seasonal imagery to a stable, invariant domain that can be used by conventional algorithms without modification. Our transform preserves the geometric structure and uncertainty estimates of legacy approaches and demonstrates superior performance under extreme seasonal changes while also being easy to train and highly generalizable. We show that classical registration methods perform exceptionally well for robotic visual navigation when stabilized with the proposed architecture and are able to consistently anticipate reliable imagery. Gross mismatches were nearly eliminated in challenging and realistic visual navigation tasks that also included topographic and perspective effects.
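The transform-then-register idea can be sketched with synthetic imagery. Gradient magnitude stands in for the learned invariance transform (the paper's transform is a trained network), and registration is an exhaustive correlation search over candidate shifts; image size and search radius are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
target = rng.normal(size=(64, 64))                 # georeferenced map patch
source = np.roll(target, (5, -3), axis=(0, 1))     # vehicle image, displaced

def stabilize(img):
    """Stand-in for the learned invariance transform: gradient magnitude.
    (The paper uses a trained network; this is only a placeholder.)"""
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

def register(src, tgt, max_shift=8):
    """Classical registration: pick the shift maximizing correlation
    between the stabilized source and the stabilized, shifted target."""
    s, t = stabilize(src), stabilize(tgt)
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = np.sum(np.roll(t, (dy, dx), axis=(0, 1)) * s)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

print(register(source, target))   # recovers the (5, -3) displacement
```

Because both images pass through the same transform before matching, the classical correlator never sees the seasonal content changes, which is the division of labor the architecture exploits.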


Mathematics ◽  
2020 ◽  
Vol 8 (12) ◽  
pp. 2175
Author(s):  
Oleg Kupervasser ◽  
Hennadii Kutomanov ◽  
Michael Mushaelov ◽  
Roman Yavich

This paper presents a visual navigation method for determining the position and orientation of a ground robot using a diffusion map of robot images (obtained from a camera in an upper position, e.g., a tower or a drone) and for investigating robot stability with respect to desired paths under control with time delay. The time delay arises from the image processing required for visual navigation. We consider the diffusion map as a possible alternative to the currently popular deep learning, comparing the capabilities of the two methods for visual navigation of ground robots. The diffusion map projects an image (a point in a high-dimensional space) onto a low-dimensional manifold while preserving the mutual relationships between the data. We obtain the ground robot's position and orientation as a function of the coordinates of the robot image on the low-dimensional manifold produced by the diffusion map, and we compare these coordinates with those obtained from deep learning. The algorithm has higher accuracy and is not sensitive to changes in lighting, the appearance of external moving objects, and other phenomena. However, the diffusion map requires more computation time than deep learning; we consider possible future steps for reducing this computation time.
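A minimal diffusion-map sketch on synthetic data: images of a robot moving along a trajectory are mimicked by points on a 1-D curve embedded in 10-D, and the spectral embedding recovers that intrinsic coordinate. The data, the bandwidth choice, and reading pose off the first diffusion coordinate are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for robot images: a 1-D trajectory embedded in 10-D, mimicking
# images that vary smoothly with robot pose but live in a high-dim space.
direction = rng.normal(size=10)
direction /= np.linalg.norm(direction)
data = np.outer(np.linspace(0.0, 1.0, 30), direction)

# Diffusion map: Gaussian affinities -> normalized kernel -> spectral embedding.
d2 = ((data[:, None, :] - data[None, :, :]) ** 2).sum(-1)
eps = 2 * np.median(d2)                       # bandwidth choice is an assumption
K = np.exp(-d2 / eps)
deg = K.sum(1)
M = K / np.sqrt(np.outer(deg, deg))           # symmetric conjugate of P = D^-1 K
vals, vecs = np.linalg.eigh(M)

# The largest eigenvalue corresponds to the trivial constant mode; the next
# eigenvector, mapped back, gives the first diffusion coordinate.
psi1 = vecs[:, -2] / np.sqrt(deg)

# Along a 1-D trajectory this coordinate orders the data, so robot pose can be
# read off as a function of it (here we just check the ordering property).
monotone = bool(np.all(np.diff(psi1) > 0) or np.all(np.diff(psi1) < 0))
print(monotone)
```

In the paper's setting, the robot's position and orientation are then expressed as a function of these manifold coordinates; the eigendecomposition over all pairwise affinities is also where the method's extra computation time comes from.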


2021 ◽  
Vol 54 (2) ◽  
pp. 1-38
Author(s):  
Guansong Pang ◽  
Chunhua Shen ◽  
Longbing Cao ◽  
Anton Van Den Hengel

Anomaly detection, a.k.a. outlier detection or novelty detection, has been a lasting yet active research area in various research communities for several decades. There are still some unique problem complexities and challenges that require advanced approaches. In recent years, deep learning enabled anomaly detection, i.e., deep anomaly detection, has emerged as a critical direction. This article surveys the research of deep anomaly detection with a comprehensive taxonomy, covering advancements in 3 high-level categories and 11 fine-grained categories of methods. We review their key intuitions, objective functions, underlying assumptions, advantages, and disadvantages, and discuss how they address the aforementioned challenges. We further discuss a set of possible future opportunities and new perspectives on addressing the challenges.

