A compact omnidirectional stereo camera for autonomous driving

Author(s): Ryota Kawamata, Keiichi Betsui, Takeshi Shimano, Kazuyoshi Yamazaki
2016 · Vol 6 · pp. 28
Author(s): M. Steininger, C. Stephan, C. Böhm, F. Sauer, R. Zink

Motivated by the hype around driverless cars and the challenges of sensor integration and data processing, this paper presents a model for using an Xbox One Microsoft Kinect stereo camera as the sole sensor for mapping a vehicle's surroundings. Today, a car's environment is typically perceived by a mix of sensors such as LiDAR, RADAR, and cameras. For the outdoor delivery challenge Robotour 2016, run with model cars at 1:5 scale, our goal is to solve the task with a single camera. To this end, a three-stage approach was developed. The test results show that our approach can detect and locate objects at ranges of up to eight meters, so they can be incorporated as obstacles in the navigation process.
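As a rough illustration (not the paper's actual three-stage pipeline), a depth map from a stereo sensor such as the Kinect can be thresholded at the eight-meter range and scanned column-wise to group in-range pixels into obstacle regions; the function name, parameters, and column-scan grouping below are all illustrative assumptions:

```python
import numpy as np

def detect_obstacles(depth_m, max_range=8.0, min_pixels=50):
    """Group depth pixels closer than max_range into coarse obstacle
    regions by scanning image columns (a simplified occupancy scan).

    Returns a list of (start_col, end_col, nearest_depth_m) tuples.
    """
    # Valid, in-range pixels (zero depth means "no measurement")
    mask = (depth_m > 0) & (depth_m < max_range)
    obstacles = []
    in_region, start = False, 0
    for col in range(depth_m.shape[1]):
        count = int(mask[:, col].sum())
        if count >= min_pixels and not in_region:
            start, in_region = col, True          # obstacle region begins
        elif count < min_pixels and in_region:
            region = depth_m[:, start:col][mask[:, start:col]]
            obstacles.append((start, col, float(region.min())))
            in_region = False                     # obstacle region ends
    if in_region:                                 # region touches right edge
        region = depth_m[:, start:][mask[:, start:]]
        obstacles.append((start, depth_m.shape[1], float(region.min())))
    return obstacles
```

The nearest depth per region is what a planner would use to place a barrier in the map.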


Author(s): W. Omar, I. Lee, G. Lee, K. M. Park

Abstract. This paper focuses on traffic light distance measurement using a stereo camera, an important and challenging task in the image processing domain with uses in systems such as Driving Safety Support Systems (DSSS), autonomous driving, and traffic mobility. We propose an integrated traffic light distance measurement system for self-driving based on stereo image processing. Because a detection alone is not useful for navigation, an algorithm is also needed to spatially locate each detected traffic light. The proposed method therefore integrates traffic light detection, colour classification, and spatial localization. Detection and colour classification are performed simultaneously via YOLOv3 on RGB images. 3D traffic light localization is achieved by estimating the distance from the vehicle to the traffic light using the detector's 2D bounding boxes and the disparity map generated by the stereo camera. Moreover, the Gaussian YOLOv3 weights based on the KITTI and Berkeley datasets were replaced with weights based on the COCO dataset. Since autonomous driving applications require a detection algorithm that can cope with mislocalizations, the proposed method improves detection accuracy and traffic light colour classification while supporting real-time operation by modelling the bounding box (bbox) of YOLOv3. The obtained results are fair within 20 meters of the sensor, while misdetections and misclassifications appear at greater distances.
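The distance estimate described above follows the standard stereo relation Z = f·B/d. A minimal sketch, assuming hypothetical calibration values (`FOCAL_PX` and `BASELINE_M` are placeholders, not the paper's parameters) and using the median disparity inside a detector bounding box for robustness against outliers:

```python
import numpy as np

# Assumed stereo calibration (placeholders, not from the paper):
FOCAL_PX = 700.0    # focal length in pixels
BASELINE_M = 0.54   # stereo baseline in meters (KITTI-like rig)

def traffic_light_distance(disparity, bbox):
    """Estimate the distance (m) to a detected traffic light from its
    2D bounding box and the stereo disparity map, via Z = f * B / d,
    where d is the median disparity inside the box."""
    x1, y1, x2, y2 = bbox
    patch = disparity[y1:y2, x1:x2]
    d = np.median(patch[patch > 0])   # ignore invalid (zero) disparities
    return FOCAL_PX * BASELINE_M / d
```

Taking the median rather than the disparity at a single pixel makes the estimate less sensitive to stereo-matching errors along the traffic light's edges.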


CICTP 2020 · 2020
Author(s): Kun Jiang, Yunlong Wang, Shengjie Kou, Diange Yang

2013 · Vol 133 (9) · pp. 595-598
Author(s): Kenji SUZUKI, Hisaaki ISHIDA, Hirofumi INOSE, Rui KOBAYASHI

2020 · Vol 2020 (14) · pp. 306-1-306-6
Author(s): Florian Schiffers, Lionel Fiske, Pablo Ruiz, Aggelos K. Katsaggelos, Oliver Cossairt

Imaging through scattering media finds applications in diverse fields, from biomedicine to autonomous driving. However, interpreting the resulting images is difficult because of the blur caused by photon scattering within the medium. Transient information, captured with fast temporal sensors, can significantly improve the quality of images acquired under scattering conditions. Photon scattering within a highly scattering medium is well modeled by the diffusion approximation of the Radiative Transport Equation (RTE), whose solution is easily derived and can be interpreted as a Spatio-Temporal Point Spread Function (ST-PSF). In this paper, we first discuss the properties of the ST-PSF and subsequently use this knowledge to simulate transient imaging through highly scattering media. We then propose a framework that inverts this forward model, assuming Poisson noise, to recover a noise-free, unblurred image by solving an optimization problem.
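The paper's recovery step is framed as an optimization problem under a Poisson noise model; a standard baseline for that likelihood is Richardson-Lucy deconvolution, which iteratively maximizes the Poisson log-likelihood. A minimal sketch (the PSF here is a generic blur kernel, not the ST-PSF derived in the paper, and this is not the authors' exact solver):

```python
import numpy as np
from scipy.signal import convolve

def richardson_lucy(blurred, psf, n_iter=30):
    """Richardson-Lucy deconvolution: iterative maximum-likelihood
    image recovery under a Poisson noise forward model
    blurred ~ Poisson(psf * image)."""
    psf = psf / psf.sum()               # normalize so flux is preserved
    psf_flip = psf[::-1, ::-1]          # adjoint of convolution
    est = np.full_like(blurred, blurred.mean())  # flat positive init
    for _ in range(n_iter):
        reblurred = convolve(est, psf, mode='same')
        ratio = blurred / np.maximum(reblurred, 1e-12)
        est = est * convolve(ratio, psf_flip, mode='same')
    return est
```

The multiplicative update keeps the estimate non-negative, which matches the physical constraint that photon counts cannot be negative.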

