A Novel Robot Vision Applicable to Real-time Target Tracking

2003 ◽  
Vol 15 (2) ◽  
pp. 185-191 ◽  
Author(s):  
Kazuhiro Shimonomura ◽  
Keisuke Inoue ◽  
Seiji Kameda ◽  
Tetsuya Yagi ◽  
...  

We designed a vision system with a novel architecture composed of a silicon retina, an analog CMOS VLSI intelligent sensor, and an FPGA. Two basic pre-processing steps are performed on the silicon retina: Laplacian-of-Gaussian (∇²G)-like spatial filtering and subtraction of consecutive frames. The analog outputs of the silicon retina were binarized and transferred to the FPGA, where digital image processing was executed. The system was applied to real-time target tracking under indoor illumination: the center of a target object was found as the median of the binarized image, and the object could be tracked at video frame rate. The system has compact hardware and low power consumption and is therefore suitable for robot vision.
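The median-based centering step described in the abstract can be sketched as follows (a minimal illustration under assumed array conventions, not the authors' FPGA implementation):

```python
import numpy as np

def target_center(binary_img):
    """Estimate the target center as the per-axis median of 'on' pixels
    in a binarized frame, as the abstract describes."""
    ys, xs = np.nonzero(binary_img)
    if xs.size == 0:
        return None  # no target pixels in this frame
    return (int(np.median(xs)), int(np.median(ys)))

# Toy binarized frame: a 3x3 blob of target pixels
frame = np.zeros((8, 8), dtype=np.uint8)
frame[2:5, 3:6] = 1
print(target_center(frame))  # -> (4, 3)
```

The median (rather than the mean) makes the estimate robust to scattered noise pixels surviving binarization.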

2001 ◽  
Vol 13 (6) ◽  
pp. 614-620 ◽  
Author(s):  
Kazuhiro Shimonomura ◽  
Seiji Kameda ◽  
Kazuo Ishii ◽  
Tetsuya Yagi ◽  
...  

A robot vision system was designed using a silicon retina, which has been developed to mimic the parallel circuit structure of the vertebrate retina. The silicon retina used here is an analog CMOS very large-scale integrated circuit that executes Laplacian-of-Gaussian-like filtering on the image in real time. The processing is robust to changes in illumination. Analog circuit modules were designed to detect contours in the output image of the silicon retina and to binarize the output image. The images processed by the silicon retina and by the analog circuit modules are received as an NTSC signal by a DOS/V-compatible motherboard, enabling higher-level processing with digital image processing techniques. This novel robot vision system achieves real-time, robust processing under natural illumination with compact hardware and low power consumption.
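The Laplacian-of-Gaussian filtering that the silicon retina performs in analog hardware can be sketched digitally as below (a software approximation for illustration; kernel size and sigma are assumed, not taken from the paper). The zero-sum kernel explains the claimed illumination robustness: a uniform brightness offset produces zero response.

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Discrete Laplacian-of-Gaussian (center-surround) kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = ((r2 - 2 * sigma**2) / sigma**4) * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()  # zero-sum: flat regions map to zero response

def log_filter(img, size=9, sigma=1.4):
    """Naive 2-D 'valid' convolution with the LoG kernel (for small images)."""
    k = log_kernel(size, sigma)
    h, w = img.shape
    out = np.zeros((h - size + 1, w - size + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y+size, x:x+size] * k)
    return out

# A uniform image yields an all-zero response, regardless of brightness level.
uniform = np.full((20, 20), 7.0)
print(np.abs(log_filter(uniform)).max())  # -> 0.0 (up to float rounding)
```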


2005 ◽  
Vol 17 (2) ◽  
pp. 121-129 ◽  
Author(s):  
Yoshihiro Watanabe ◽  
Takashi Komuro ◽  
Shingo Kagami ◽  
Masatoshi Ishikawa

Real-time image processing at high frame rates could play an important role in various visual measurement applications. Such processing can be realized with a high-speed vision system that images at high frame rates and runs appropriate algorithms at high speed. We introduce a vision chip for high-speed vision and propose a multi-target tracking algorithm that exploits its unique features. We describe two visual measurement applications: target counting and rotation measurement. Both achieve excellent measurement precision and high flexibility owing to the achievable high-frame-rate visual observation. Experimental results show the advantages of vision chips over conventional vision systems.
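One reason high frame rates simplify multi-target tracking is that inter-frame motion becomes tiny, so a simple nearest-neighbor association between frames is usually unambiguous. The sketch below illustrates that idea only; it is an assumed greedy matcher, not the paper's vision-chip algorithm.

```python
import math

def associate(prev, curr, max_dist=5.0):
    """Greedy nearest-neighbor association of target centroids between
    consecutive frames. At kHz frame rates, displacements are small,
    so each previous target matches at most one nearby current target."""
    matches, used = {}, set()
    for i, p in enumerate(prev):
        best, best_d = None, max_dist
        for j, c in enumerate(curr):
            if j in used:
                continue
            d = math.dist(p, c)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches[i] = best
            used.add(best)
    return matches

prev = [(10.0, 10.0), (50.0, 20.0)]
curr = [(50.5, 20.2), (10.3, 9.8)]
print(associate(prev, curr))  # -> {0: 1, 1: 0}
```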


2015 ◽  
Vol 734 ◽  
pp. 168-171
Author(s):  
Xing Ze Li ◽  
Ling Zhu ◽  
Yi Hua

To address the real-time requirements of industrial robot vision systems, we designed an embedded robot vision system based on a DSP microprocessor. The system uses a CCD camera and an ultrasonic sensor to collect information about the target environment, and the DSP processes the images and recognizes the target. The results are then sent wirelessly through the communication module to the host computer, providing target object information to the robot control layer. The design covers the hardware and software, image collection and processing, and robot control, and meets the real-time requirements of a machine vision system.
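The wireless link from the DSP board to the host computer implies a compact result message. A hypothetical packet layout (the field set and format are assumptions for illustration; the paper does not specify its protocol) might look like:

```python
import struct

# Hypothetical result packet from the DSP board to the host computer:
# target id, centroid (x, y) in pixels, ultrasonic range in millimetres.
FMT = "<BhhH"  # little-endian: uint8, int16, int16, uint16 (7 bytes)

def pack_result(target_id, x, y, range_mm):
    return struct.pack(FMT, target_id, x, y, range_mm)

def unpack_result(payload):
    return struct.unpack(FMT, payload)

msg = pack_result(3, 120, -15, 850)
print(len(msg), unpack_result(msg))  # -> 7 (3, 120, -15, 850)
```

Fixed-size binary packing keeps per-frame transmission overhead low, which matters for real-time wireless reporting.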


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5368
Author(s):  
Atul Sharma ◽  
Sushil Raut ◽  
Kohei Shimasaki ◽  
Taku Senoo ◽  
Idaku Ishii

This study develops a projector–camera-based visible light communication (VLC) system for real-time broadband video streaming, in which a high frame rate (HFR) projector encodes and projects a color input video sequence as binary image patterns modulated at thousands of frames per second, and an HFR vision system captures and decodes these binary patterns back into the input color video sequence with real-time video processing. For maximum utilization of the high-throughput transmission ability of the HFR projector, we introduce a projector–camera VLC protocol in which a multi-level color video sequence is binary-modulated with a Gray code for encoding and decoding, instead of pure-binary-code modulation. Gray code encoding is introduced to resolve the ambiguity caused by mismatched pixel alignment along gradients between the projector and the vision system. Our proposed VLC system consists of an HFR projector, which can project 590 × 1060 binary images at 1041 fps via HDMI streaming, and a monochrome HFR camera system, which can capture and process 12-bit 512 × 512 images in real time at 3125 fps; it can simultaneously decode and reconstruct 24-bit RGB video sequences at 31 fps, including an error correction process. The effectiveness of the proposed VLC system was verified via several experiments by streaming offline and live video sequences.
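The Gray-code property that the protocol relies on is that adjacent intensity values differ in exactly one bit, so a one-pixel misalignment between projector and camera corrupts at most one bit plane. A minimal sketch of binary-reflected Gray encoding and decoding (the standard construction, not the paper's full modulation pipeline):

```python
def gray_encode(n):
    """Binary-reflected Gray code: adjacent values differ in one bit."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert the Gray code by folding the bits back down."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Each 8-bit intensity value round-trips exactly.
assert all(gray_decode(gray_encode(v)) == v for v in range(256))

# Adjacent intensities 127 and 128 differ in all 8 bits in pure binary,
# but in only one bit after Gray encoding.
print(gray_encode(127), gray_encode(128))  # -> 64 192
```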


Author(s):  
Satoshi Hoshino ◽  
Kyohei Niimura

Mobile robots equipped with camera sensors are required to perceive humans and their actions for safe autonomous navigation. For simultaneous human detection and action recognition, the real-time performance of the robot vision is an important issue. In this paper, we propose a robot vision system in which original images captured by a camera sensor are described by optical flow. These images are then used as inputs for the human and action classifications. For the image inputs, two classifiers based on convolutional neural networks are developed. Moreover, we describe a novel detector (a local search window) for clipping partial images around the target human from the original image. Since the camera sensor moves together with the robot, camera movement affects the computed optical flow; we address this by compensating the optical flow for the changes caused by the camera movement. In experiments, we show that the robot vision system can detect humans and recognize their actions in real time. Furthermore, we show that a moving robot can achieve human detection and action recognition by modifying the optical flow.
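The simplest form of the ego-motion compensation described above is subtracting the camera-induced flow field from the measured flow, leaving only the motion of independently moving objects. This sketch assumes a purely translational camera-induced flow for illustration; the paper's actual compensation may be more elaborate.

```python
import numpy as np

def compensate_flow(flow, ego_flow):
    """Subtract the camera-induced (ego-motion) flow from the measured
    optical flow so only object motion relative to the scene remains.
    Both arguments are (H, W, 2) arrays of per-pixel (dx, dy) vectors."""
    return flow - ego_flow

# Toy example: the whole frame shifts by (2, 0) due to robot motion,
# while one pixel region also moves independently by (1, 3).
H = W = 4
flow = np.tile(np.array([2.0, 0.0]), (H, W, 1))
flow[1, 1] += np.array([1.0, 3.0])  # an independently moving target
ego = np.tile(np.array([2.0, 0.0]), (H, W, 1))
residual = compensate_flow(flow, ego)
print(residual[1, 1])  # -> [1. 3.]
print(residual[0, 0])  # -> [0. 0.]
```

After compensation, the background flow vanishes and only the moving human's flow remains, which is what makes the flow images usable as classifier input on a moving robot.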

