Using Omnidirectional Vision to Create a Model of the Environment: A Comparative Evaluation of Global-Appearance Descriptors

2016 ◽  
Vol 2016 ◽  
pp. 1-21 ◽  
Author(s):  
L. Payá ◽  
O. Reinoso ◽  
Y. Berenguer ◽  
D. Úbeda

Nowadays, the design of fully autonomous mobile robots is a key discipline. Building a robust model of the unknown environment is an important ability the robot must develop. Using this model, the robot must be able to estimate its current position and to navigate to the target points. Omnidirectional vision sensors are commonly used to solve these tasks. When using this source of information, the robot must extract relevant information from the scenes both to build the model and to estimate its position. The possible frameworks include the classical approach of extracting and describing local features or working with the global appearance of the scenes, which has emerged as a conceptually simple and robust solution. While feature-based techniques have been extensively studied in the literature, appearance-based ones require a full comparative evaluation to reveal the performance of the existing methods and to tune their parameters correctly. This work carries out a comparative evaluation of four global-appearance techniques in map building tasks, using omnidirectional visual information as the only source of data from the environment.
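The four descriptors compared in the paper are not reproduced here, but the appearance-based retrieval idea they share can be sketched briefly. The following is a minimal Python sketch, assuming a HOG descriptor from scikit-image as a stand-in for any global-appearance technique and Euclidean distance as the similarity measure; the descriptor choice, image size, and parameters are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of appearance-based image retrieval for map building.
# A HOG descriptor stands in for the four techniques compared in the paper.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def global_descriptor(panorama, size=(64, 256)):
    """Describe a panoramic (unwrapped omnidirectional) grayscale image globally."""
    img = resize(panorama, size, anti_aliasing=True)
    return hog(img, orientations=8, pixels_per_cell=(16, 16),
               cells_per_block=(1, 1), feature_vector=True)

def localize(query, map_images):
    """Return the index of the most similar stored scene (nearest neighbour)."""
    q = global_descriptor(query)
    dists = [np.linalg.norm(q - global_descriptor(m)) for m in map_images]
    return int(np.argmin(dists)), float(min(dists))
```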

2017 ◽  
Vol 2017 ◽  
pp. 1-20 ◽  
Author(s):  
L. Payá ◽  
A. Gil ◽  
O. Reinoso

Nowadays, the field of mobile robotics is experiencing a quick evolution, and a variety of autonomous vehicles is available to solve different tasks. The advances in computer vision have led to a substantial increase in the use of cameras as the main sensors in mobile robots. They can be used as the only source of information or in combination with other sensors such as odometry or laser. Among vision systems, omnidirectional sensors stand out due to the richness of the information they provide the robot with, and an increasing number of works about them have been published over the last few years, leading to a wide variety of frameworks. In this review, some of the most important works are analysed. One of the key problems the scientific community is currently addressing is the improvement of the autonomy of mobile robots. To this end, building robust models of the environment, localization, and navigation are three important abilities that any mobile robot must have. Taking this into account, the review concentrates on these problems: how researchers have addressed them by means of omnidirectional vision, the main frameworks they have proposed, and how these have evolved in recent years.


Author(s):  
Aleksey V. Kutuzov

The article substantiates the need to use Internet monitoring as a priority source of information in countering extremism. Various approaches to understanding the definitions of «operational-search» and «law-enforcement» monitoring of the Internet are analysed, and the theoretical development of this category in the science of operational search is investigated. The goals and subjects of law-enforcement monitoring are identified. The main attention is paid to the legal basis for the use of Internet monitoring in the detection and investigation of extremist crimes. In the course of the study, hermeneutic, formal-logical, logical-legal, and comparative-legal methods were employed, both individually and in combination, in the analysis of legal norms, the achievements of science and practice, and the development of proposals to refine the conduct of operational-search measures on the Internet when solving extremist crimes. The author's definition of «operational-search monitoring» of the Internet is provided. Proposals are made to improve the activities of police units when monitoring the Internet in the search for information relevant to the disclosure and investigation of crimes of this category.


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Katrina R. Quinn ◽  
Lenka Seillier ◽  
Daniel A. Butts ◽  
Hendrikje Nienborg

Feedback in the brain is thought to convey contextual information that underlies our flexibility to perform different tasks. Empirical and computational work on the visual system suggests this is achieved by targeting task-relevant neuronal subpopulations. We combine two tasks, each resulting in selective modulation by feedback, to test whether the feedback reflected the combination of both selectivities. We used visual feature-discrimination specified at one of two possible locations and uncoupled the decision formation from motor plans to report it, while recording in macaque mid-level visual areas. Here we show that although the behavior is spatially selective, using only task-relevant information, modulation by decision-related feedback is spatially unselective. Population responses reveal similar stimulus-choice alignments irrespective of stimulus relevance. The results suggest a common mechanism across tasks, independent of the spatial selectivity these tasks demand. This may reflect biological constraints and facilitate generalization across tasks. Our findings also support a previously hypothesized link between feature-based attention and decision-related activity.


Perception ◽  
2017 ◽  
Vol 46 (12) ◽  
pp. 1412-1426 ◽  
Author(s):  
Elmeri Syrjänen ◽  
Marco Tullio Liuzza ◽  
Håkan Fischer ◽  
Jonas K. Olofsson

Disgust is a core emotion evolved to detect and avoid the ingestion of poisonous food as well as contact with pathogens and other harmful agents. Previous research has shown that multisensory presentation of olfactory and visual information may strengthen the processing of disgust-relevant information. However, it is not known whether these findings extend to dynamic facial stimuli that change from neutral to emotionally expressive, or whether individual differences in trait body odor disgust may influence the processing of disgust-related information. In this preregistered study, we tested whether the classification of dynamic facial expressions as happy or disgusted, and the emotional evaluation of these expressions, would be affected by individual differences in body odor disgust sensitivity and by exposure to a sweat-like, negatively valenced odor (valeric acid), as compared with a soap-like, positively valenced odor (lilac essence) or a no-odor control. Using Bayesian hypothesis testing, we found evidence that odors do not affect the recognition of emotion in dynamic faces, even when body odor disgust sensitivity was used as a moderator. However, an exploratory analysis suggested that an unpleasant odor context may cause faster response times for faces, independent of their emotional expression. Our results further our understanding of the scope and limits of odor effects on the perception of facial affect and suggest that further studies should focus on reproducibility, specifying the experimental circumstances under which odor effects on facial expressions may be present versus absent.


2019 ◽  
Author(s):  
Roy S Hessels

Gaze – where one looks, how long, and when – plays an essential part in human social behaviour. While many aspects of social gaze have been reviewed, there is no comprehensive review or theoretical framework that describes how gaze to faces supports face-to-face interaction. In this review, I address the following questions: (1) When does gaze need to be allocated to a particular region of a face in order to provide the relevant information for successful interaction; (2) How do humans look at other people, and faces in particular, regardless of whether gaze needs to be directed at a particular region to acquire the relevant visual information; (3) How does gaze support the regulation of interaction? The work reviewed spans psychophysical research, observational research and eye-tracking research in both lab-based and interactive contexts. Based on the literature overview, I sketch a framework for future research based on dynamic systems theory. The framework holds that gaze should be investigated in relation to sub-states of the interaction, encompassing sub-states of the interactors, the content of the interaction as well as the interactive context. The relevant sub-states for understanding gaze in interaction vary over different timescales from microgenesis to ontogenesis and phylogenesis. The framework has important implications for vision science, psychopathology, developmental science and social robotics.


2014 ◽  
Vol 2014 ◽  
pp. 1-23 ◽  
Author(s):  
Francisco Amorós ◽  
Luis Payá ◽  
Oscar Reinoso ◽  
Walterio Mayol-Cuevas ◽  
Andrew Calway

In this work we present a topological map building and localization system for mobile robots based on the global appearance of visual information. We include a comparison and analysis of global-appearance techniques applied to wide-angle scenes in retrieval tasks. Next, we define a multiscale analysis, which makes it possible to improve the association between images and to extract topological distances. Then, a topological map-building algorithm is proposed. At first, the algorithm has information only of some isolated positions of the navigation area, in the form of nodes. Each node is composed of a collection of images that covers the complete field of view from a certain position. The algorithm retrieves the nodes and estimates their spatial arrangement. To this end, it uses the visual information captured along some routes that cover the navigation area. As a result, the algorithm builds a graph that reflects the distribution and adjacency relations between nodes (map). After the map building, we also propose a route path estimation system. This algorithm takes advantage of the multiscale analysis. The pose estimation is not restricted to the node locations but also covers intermediate positions between them. The algorithms have been tested using two different databases captured in real indoor environments under dynamic conditions.
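As a rough illustration of the map-building stage described above, the following Python sketch matches each image captured along a route to its closest node by global-appearance distance and adds adjacency edges between consecutively matched nodes. The placeholder descriptor, the distance measure, and the use of networkx are assumptions for illustration; the multiscale analysis and the route path estimation of the paper are not reproduced.

```python
# Schematic sketch of the graph-building stage: route images are matched to
# known nodes and adjacency edges are added between consecutive matches.
import numpy as np
import networkx as nx

def describe(image):
    # Placeholder global-appearance descriptor (e.g. HOG, Fourier signature).
    return np.asarray(image, dtype=float).ravel()

def closest_node(descriptor, nodes):
    """nodes: dict node_id -> list of descriptors captured at that position."""
    best, best_d = None, np.inf
    for node_id, descs in nodes.items():
        d = min(np.linalg.norm(descriptor - nd) for nd in descs)
        if d < best_d:
            best, best_d = node_id, d
    return best

def build_topological_map(route_images, nodes):
    graph = nx.Graph()
    graph.add_nodes_from(nodes)
    prev = None
    for img in route_images:
        current = closest_node(describe(img), nodes)
        if prev is not None and current != prev:
            graph.add_edge(prev, current)   # adjacency revealed by the route
        prev = current
    return graph
```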


2020 ◽  
Vol 10 (18) ◽  
pp. 6480
Author(s):  
Vicente Román ◽  
Luis Payá ◽  
Sergio Cebollada ◽  
Óscar Reinoso

In this work, an incremental clustering approach to obtain compact hierarchical models of an environment is developed and evaluated. This process is performed using an omnidirectional vision sensor as the only source of information. The method is structured in two loop closure levels. First, the Node Level Loop Closure process selects the candidate nodes with which the new image can close the loop. Second, the Image Level Loop Closure process detects the most similar image and the node with which the current image closes the loop. The algorithm is based on an incremental clustering framework and leads to a topological model where the images of each zone tend to be clustered in different nodes. In addition, the method evaluates whether two nodes are similar enough to be merged into a single node, or whether a group of connected images is different enough from the others to constitute a new node. To perform the process, omnidirectional images are described with global-appearance techniques in order to obtain robust descriptors. The use of such techniques in mapping and localization algorithms is less widespread than that of local feature descriptors, so this work also evaluates their efficiency in clustering and mapping tasks. The proposed framework is tested with three different public datasets, captured by an omnidirectional vision system mounted on a robot while it traversed three different buildings. The framework is able to build the model incrementally while the robot explores an unknown environment. Some relevant parameters of the algorithm adapt their values as the robot captures new visual information, to fully exploit the feature space, and the model is updated and/or modified as a consequence. The experimental section shows the robustness and efficiency of the method, comparing it with a batch spectral clustering algorithm.
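A minimal sketch of the two loop-closure levels may help fix the idea: a new image descriptor is first compared against node representatives (Node Level) and then against the individual images of the candidate nodes (Image Level). The fixed threshold, the centroid-based representatives, and the omission of the node merging/splitting and parameter-adaptation logic are simplifying assumptions, not the authors' exact algorithm.

```python
# Minimal sketch of the two-level loop-closure idea on an incremental model.
import numpy as np

class IncrementalTopologicalModel:
    def __init__(self, node_threshold=0.5):
        self.nodes = []            # each node: list of image descriptors
        self.node_threshold = node_threshold

    def _centroid(self, node):
        return np.mean(node, axis=0)

    def add_image(self, descriptor):
        # Node Level Loop Closure: candidate nodes by centroid distance.
        candidates = [i for i, node in enumerate(self.nodes)
                      if np.linalg.norm(descriptor - self._centroid(node))
                      < self.node_threshold]
        if not candidates:
            self.nodes.append([descriptor])      # new zone -> new node
            return len(self.nodes) - 1
        # Image Level Loop Closure: node holding the most similar image.
        best = min(candidates,
                   key=lambda i: min(np.linalg.norm(descriptor - d)
                                     for d in self.nodes[i]))
        self.nodes[best].append(descriptor)
        return best
```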


2013 ◽  
Vol 312 ◽  
pp. 685-689 ◽  
Author(s):  
Jing Chen ◽  
Jing Li Niu ◽  
Dong Hai Chen

With the development of computer image processing technology, vision sensors have received increasing attention in mobile robot navigation and obstacle recognition. In this paper, the AdaBoost algorithm is used to identify obstacles for an intelligent wheelchair on the Visual C++ 6.0 platform. A strong classifier is trained with the AdaBoost algorithm and then used to detect the target obstacles. A fuzzy neural network is used to fuse the sonar information and the visual information of the wheelchair, making its obstacle avoidance path more intelligent and better optimized.
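The AdaBoost stage can be sketched as follows, assuming patches of the camera image have already been turned into feature vectors with binary obstacle labels. The scikit-learn classifier, the feature layout, and the patch-wise formulation are assumptions for illustration; the fuzzy neural network fusion of sonar and visual data is not covered here.

```python
# Hedged sketch: train a boosted classifier on labelled image patches
# (obstacle vs. free space) and apply it to new patches.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def train_obstacle_classifier(patch_features, labels):
    """patch_features: (n_samples, n_features); labels: 1 = obstacle, 0 = free."""
    clf = AdaBoostClassifier(n_estimators=100)
    clf.fit(patch_features, labels)
    return clf

def detect_obstacles(clf, patch_features):
    """Return indices of patches classified as obstacles."""
    return np.flatnonzero(clf.predict(patch_features) == 1)
```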


Information ◽  
2020 ◽  
Vol 11 (8) ◽  
pp. 376 ◽  
Author(s):  
Cornelia Ferner ◽  
Clemens Havas ◽  
Elisabeth Birnbacher ◽  
Stefan Wegenkittl ◽  
Bernd Resch

In the event of a natural disaster, geo-tagged Tweets are an immediate source of information for locating casualties and damage, and for supporting disaster management. Topic modeling can help in detecting disaster-related Tweets in the noisy Twitter stream in an unsupervised manner. However, the results of topic models are difficult to interpret and require manual identification of one or more “disaster topics”. Immediate disaster response would benefit from a fully automated process for interpreting the modeled topics and extracting disaster-relevant information. Initializing the topic model with a set of seed words allows the corresponding disaster topic to be identified directly. In order to enable an automated end-to-end process, we automatically generate seed words using older Tweets from the same geographic area. The results for two past events (the Napa Valley earthquake 2014 and Hurricane Harvey 2017) show that the geospatial distribution of Tweets identified as disaster related conforms with the officially released disaster footprints. The suggested approach is applicable when there is a single topic of interest and comparative data are available.
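As an illustration of how seed words can steer the topic model, the sketch below uses one possible heuristic (terms over-represented in a current Tweet collection relative to an older reference collection) and then boosts the word prior of one LDA topic in gensim so that it can be read off directly as the disaster topic. The heuristic, the prior values, and the gensim-based implementation are assumptions and do not reproduce the paper's seed-generation procedure.

```python
# Illustrative sketch of seed-word generation and a seeded LDA topic prior.
import numpy as np
from gensim import corpora, models

def seed_words(current_tokens, older_tokens, k=20):
    """current_tokens/older_tokens: lists of token lists (one per Tweet)."""
    def counts(docs):
        c = {}
        for doc in docs:
            for tok in doc:
                c[tok] = c.get(tok, 0) + 1
        return c
    cur, old = counts(current_tokens), counts(older_tokens)
    # Score terms by how much more frequent they are now than in older Tweets.
    score = {t: cur[t] / (old.get(t, 0) + 1) for t in cur}
    return sorted(score, key=score.get, reverse=True)[:k]

def seeded_lda(token_docs, seeds, num_topics=10, boost=100.0):
    dictionary = corpora.Dictionary(token_docs)
    corpus = [dictionary.doc2bow(doc) for doc in token_docs]
    # Asymmetric topic-word prior: topic 0 is biased towards the seed words.
    eta = np.full((num_topics, len(dictionary)), 0.01)
    for word in seeds:
        if word in dictionary.token2id:
            eta[0, dictionary.token2id[word]] = boost
    return models.LdaModel(corpus, num_topics=num_topics,
                           id2word=dictionary, eta=eta)
```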

