Towards a biologically-inspired vision system for the control of locomotion in complex environments

2013 ◽  
Vol 13 (9) ◽  
pp. 753-753
Author(s):  
S. Bonneaud ◽  
W. H. Warren ◽  
K. Olfers ◽  
G. Irwin ◽  
T. Serre
10.5772/7543 ◽  
2009 ◽  
Author(s):  
Fernando Lopez-Garcia ◽  
Xose Ramon ◽  
Xose Manuel ◽  
Raquel Dosil

2018 ◽  
pp. 458-493
Author(s):  
Li-Minn Ang ◽  
Kah Phooi Seng ◽  
Christopher Wing Hong Ngau

Biological vision components like visual attention (VA) algorithms aim to mimic the mechanisms of the human vision system. VA algorithms are often complex and impose high computational and memory demands. In biologically inspired vision and embedded systems, computational capacity and memory resources are a primary concern. This paper discusses the implementation of VA algorithms in embedded vision systems under resource constraints. The authors survey various types of VA algorithms and identify techniques suitable for implementation in embedded vision systems. They then propose a low-complexity, low-memory VA model based on a well-established mainstream VA model. The proposed model addresses critical factors, namely algorithm complexity, memory requirements, computational speed, and salience prediction performance, to ensure the reliability of the VA in a resource-constrained environment. Finally, a custom softcore-microprocessor-based hardware implementation on a Field-Programmable Gate Array (FPGA) verifies the implementation feasibility of the presented model.
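The kind of low-complexity saliency computation the abstract describes can be illustrated with a center-surround sketch. This is a minimal, generic illustration, not the paper's model: it assumes a grayscale input and uses separable box filters as a cheap stand-in for the Gaussian pyramids of mainstream VA models; the scale values and the `box_blur`/`saliency_map` names are hypothetical.

```python
import numpy as np

def box_blur(img, k):
    """Separable box filter of odd width k: a low-cost stand-in for the
    Gaussian filtering used in pyramid-based VA models."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    # horizontal pass, then vertical pass
    h = np.stack([padded[:, i:i + img.shape[1]] for i in range(k)]).mean(axis=0)
    return np.stack([h[i:i + img.shape[0], :] for i in range(k)]).mean(axis=0)

def saliency_map(gray, scales=(3, 7, 15)):
    """Center-surround saliency: sum |center - surround| responses over
    several (hypothetical) surround scales, then normalize to [0, 1]."""
    gray = gray.astype(np.float64)
    center = box_blur(gray, 3)
    sal = np.zeros_like(gray)
    for k in scales:
        sal += np.abs(center - box_blur(gray, k))
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else sal
```

Because the sketch uses only additions and averages, it keeps the memory footprint to a few image-sized buffers, which is the kind of trade-off a resource-constrained FPGA implementation would target.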


2009 ◽  
Vol 09 (04) ◽  
pp. 495-510 ◽  
Author(s):  
WEIREN SHI ◽  
ZUOJIN LI ◽  
XIN SHI ◽  
ZHI ZHONG

The human vision system is a highly sophisticated image processing and object recognition mechanism. However, simulating the human or animal vision system to automate visual functions in machines is a challenge, because it is difficult to account for the view-invariant perception of universals such as environmental objects or processes, and for the explicit perception of featural parts and wholes in visual scenes. In this paper, we first introduce the importance of biologically inspired computer vision and review general and key vision functions from a neuroscience perspective. Most significantly, we summarize and discuss specific applications of biologically inspired modeling, including biologically inspired image pre-processing, image perception, and object recognition. Finally, we identify important and challenging topics in computer vision for future work.


Author(s):  
Amirhossein Jamalian ◽  
Fred H. Hamker

A rich stream of visual data enters the cameras of a typical artificial vision system (e.g., a robot), and since processing this volume of data in real time is almost impossible, a clever mechanism is required to reduce the amount of trivial visual data. Visual attention might be the solution. The idea is to control the information flow and thus improve vision by focusing resources on a few selected aspects of the scene rather than the whole visual field. However, does attention only speed up processing, or can the understanding of human visual attention provide additional guidance for robot vision research? In this chapter, some basic concepts of the primate visual system and visual attention are first introduced. Afterward, a new taxonomy of biologically inspired models of attention is given, particularly those used in robotics applications (e.g., in object detection and recognition), and finally, future research trends in the modelling of visual attention and its applications are highlighted.


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-18
Author(s):  
Heng Zhang ◽  
Yingbai Hu ◽  
Jianghua Duan ◽  
Qing Gao ◽  
Langcheng Huo ◽  
...  

Mobile manipulators are widely used for transfer and grasping tasks in fields such as medical assistive devices, industrial production, and hotel services. Improving navigation accuracy and grasping success rates in complex environments remains challenging. In this paper, we develop a multisensor-based mobile grasping system configured with a vision system and a novel gripper mounted on a UR5 manipulator. Additionally, an error term is added to a cost function based on the DWA (dynamic window approach) to improve the navigation performance of the mobile platform through visual guidance. During mobile grasping, the size and position of the object are identified by a visual recognition algorithm, and the finger spacing and chassis position are then adjusted automatically; thus, the object can be grasped by the UR5 manipulator and gripper. To demonstrate the proposed methods, comparison experiments are conducted with the developed mobile grasping system. Analysis of the experimental results shows that the motion accuracy of the mobile chassis is improved significantly, satisfying the navigation requirements and grasping success rates, and achieving high performance over a wide grasping size range from 1.7 mm to 200 mm.
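The idea of augmenting a DWA cost function with a visual-guidance term can be sketched as follows. The abstract does not give the paper's actual error term, so the `visual_err` term, the other cost terms, and all weights (`alpha` through `delta`) are assumptions chosen to mirror the classic DWA objective (heading, clearance, velocity) with one extra penalty.

```python
def dwa_cost(cand, target_v=0.5, alpha=0.8, beta=0.2, gamma=0.1, delta=0.5):
    """Score one candidate command from the dynamic window. Lower is better.
    cand = (v, w, heading_err, clearance, visual_err); the last entry is the
    hypothetical visual-guidance error (e.g., offset from the detected
    object center) standing in for the paper's added error term."""
    v, w, heading_err, clearance, visual_err = cand
    heading_cost = alpha * abs(heading_err)       # align with goal direction
    clearance_cost = beta / max(clearance, 1e-6)  # penalize closeness to obstacles
    velocity_cost = gamma * abs(target_v - v)     # prefer the target speed
    visual_cost = delta * visual_err              # added visual-guidance term
    return heading_cost + clearance_cost + velocity_cost + visual_cost

def best_command(candidates, **kw):
    """Pick the (v, w, ...) candidate minimizing the augmented cost."""
    return min(candidates, key=lambda c: dwa_cost(c, **kw))
```

In this formulation the chassis is steered not only toward the navigation goal but also toward the pose from which the visually identified object is easiest to grasp, which is how visual guidance can tighten navigation accuracy.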


2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Yunchao Tang ◽  
Mingyou Chen ◽  
Yunfan Lin ◽  
Xueyu Huang ◽  
Kuangyu Huang ◽  
...  

A four-ocular vision system is proposed for the three-dimensional (3D) reconstruction of large-scale concrete-filled steel tubes (CFSTs) under complex testing conditions. These measurements are vitally important for evaluating the seismic performance and 3D deformation of large-scale specimens. A four-ocular vision system is constructed to sample the large-scale CFST; point cloud acquisition, filtering, and stitching algorithms are then applied to obtain a 3D point cloud of the specimen surface. A point cloud correction algorithm based on geometric features and a deep learning algorithm are utilized, respectively, to correct the coordinates of the stitched point cloud. This enhances the vision measurement accuracy in complex environments and therefore yields a higher-accuracy 3D model for real-time complex surface monitoring. The performance indicators of the two algorithms are evaluated on actual tasks. The cross-sectional diameters at specific heights in the reconstructed models are calculated and compared against laser rangefinder data to test the performance of the proposed algorithms. A visual tracking test on a CFST under cyclic loading shows that the corrected reconstruction reflects the complex 3D surface well and meets the requirements for dynamic monitoring. The proposed methodology is applicable to complex environments featuring dynamic movement, mechanical vibration, and continuously changing features.
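At the heart of any point cloud stitching step is the estimation of a rigid transform between two cameras' clouds. The following is a minimal Kabsch/Procrustes sketch of that generic step, assuming known point correspondences; it is not the paper's geometric-feature or deep learning correction, and the function name is hypothetical.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid registration (Kabsch): find rotation R and
    translation t such that R @ src_i + t ~= dst_i for corresponding
    3D points, as used when stitching one camera's cloud onto another's."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    # cross-covariance of the centered point sets
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    # correct a possible reflection so R is a proper rotation (det = +1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

In practice the correspondences would come from calibration targets or matched geometric features, and a subsequent correction stage (such as the paper's deep-learning-based one) would refine the residual coordinate errors that a single rigid transform cannot remove.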

