Device and System Development of General Purpose Digital Vision Chip

2000 ◽  
Vol 12 (5) ◽  
pp. 515-520 ◽  
Author(s):  
Takashi Komuro ◽  
Shingo Kagami ◽  
Idaku Ishii ◽  
Masatoshi Ishikawa

We have been developing a VLSI device called a vision chip, in which photodetectors are integrated with parallel processing elements, that realizes high-speed robot control using visual feedback. Using a 0.35 μm CMOS process, we have developed a 16 × 16 prototype chip and demonstrated image acquisition and processing experiments. A vision system incorporating the vision chip has also been constructed.
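As a rough illustration of the per-pixel parallelism such a chip provides, the sketch below simulates one lockstep step of a 16 × 16 processing-element array in software; the edge-flagging operation and the threshold are illustrative, not taken from the paper:

```python
import numpy as np

def pe_array_step(frame, threshold=32):
    """Simulate one SIMD step of a per-pixel PE array: each 'PE' compares
    its photodetector value against its four neighbours and flags edges.
    All pixels update in lockstep, as the hardware would."""
    padded = np.pad(frame.astype(np.int16), 1, mode="edge")
    center = padded[1:-1, 1:-1]
    # Neighbour differences, computed for every pixel simultaneously
    diffs = [
        np.abs(center - padded[:-2, 1:-1]),  # up
        np.abs(center - padded[2:, 1:-1]),   # down
        np.abs(center - padded[1:-1, :-2]),  # left
        np.abs(center - padded[1:-1, 2:]),   # right
    ]
    return (np.maximum.reduce(diffs) > threshold).astype(np.uint8)

# A 16x16 frame with a bright square, matching the prototype's resolution
frame = np.zeros((16, 16), dtype=np.uint8)
frame[4:12, 4:12] = 200
edges = pe_array_step(frame)
```

In hardware, every PE would execute this comparison in the same clock cycle, which is what makes kHz-rate visual feedback feasible.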

Electronics ◽  
2021 ◽  
Vol 10 (23) ◽  
pp. 2989
Author(s):  
Peng Liu ◽  
Yan Song

Vision processing chips have been widely used in image processing and recognition tasks. They are conventionally designed around image signal processing (ISP) units directly connected to the sensors. In recent years, convolutional neural networks (CNNs) have become the dominant tools for many state-of-the-art vision processing tasks. However, CNNs cannot be processed at high speed by a conventional vision processing unit (VPU). On the other hand, CNN processing units cannot process RAW images from the sensors directly, so an ISP unit is required. This makes a vision system inefficient, with substantial data transmission and redundant hardware resources. Additionally, many CNN processing units offer low flexibility across CNN operations. To solve this problem, this paper proposes an efficient vision processing unit based on a hybrid processing-element array for both CNN acceleration and ISP. Resources are highly shared in this VPU, and a pipelined workflow is introduced to accelerate vision tasks. We implement the proposed VPU on a Field-Programmable Gate Array (FPGA) platform and test various vision tasks on it. The results show that this VPU achieves high efficiency for both CNN processing and ISP, and a significant reduction in energy consumption for vision tasks combining CNNs and ISP. For various CNN tasks, it maintains an average multiply-accumulator utilization of over 94% and achieves a performance of 163.2 GOPS at a frequency of 200 MHz.
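The reported throughput figures can be cross-checked with a simple peak-throughput calculation. The sketch below assumes a 432-MAC array (a size consistent with the reported numbers but not stated in the abstract) and counts two operations, a multiply and an add, per MAC per cycle:

```python
def peak_gops(num_macs, freq_hz, ops_per_mac=2):
    """Peak throughput: each MAC performs a multiply and an add per cycle."""
    return num_macs * ops_per_mac * freq_hz / 1e9

def implied_utilization(measured_gops, num_macs, freq_hz):
    """Fraction of peak throughput actually sustained."""
    return measured_gops / peak_gops(num_macs, freq_hz)

# 432 MACs is an assumption chosen to be consistent with the reported
# 163.2 GOPS at 200 MHz and >94% utilization; the abstract does not
# state the array dimensions.
peak = peak_gops(432, 200e6)                          # 172.8 GOPS peak
util = implied_utilization(163.2, 432, 200e6)         # about 0.944
```

With those assumptions, 163.2 GOPS corresponds to roughly 94.4% of peak, matching the stated over-94% average MAC utilization.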


2012 ◽  
Vol 106 (8-9) ◽  
pp. 453-463 ◽  
Author(s):  
Haiyan Wu ◽  
Ke Zou ◽  
Tianguang Zhang ◽  
Alexander Borst ◽  
Kolja Kühnlenz

Robotica ◽  
1990 ◽  
Vol 8 (1) ◽  
pp. 47-60 ◽  
Author(s):  
David Vernon

SUMMARY
A prototype robot system for automated handling of flexible electrical wires of variable length is described. The handling process involves selecting a single wire from a tray of many, grasping the wire close to its end with a robot manipulator, and either placing the end in a crimping press or, for tinning applications, dipping the end in a bath of molten solder. The system relies exclusively on vision to identify the position and orientation of the wires before they are grasped by the robot end-effector. Two distinct vision algorithms are presented. The first utilises binary imaging techniques and involves object segmentation by thresholding, followed by thinning and image analysis. An alternative general-purpose approach, based on more robust grey-scale processing techniques, is also described; it relies on the analysis of object boundaries generated by a dynamic contour-following algorithm. A simple Robot Control Language (RCL) is described which facilitates robot control in a Cartesian frame of reference and object description using frames (homogeneous transformations). The integration of this language with the robot vision system is detailed and, in particular, a camera model is presented which compensates for both photometric distortion and manipulator inaccuracies. The system has been implemented using conventional computer architectures; average sensing cycle times of two and six seconds have been achieved for the grey-scale and binary vision algorithms, respectively.
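The thresholding and analysis stages of the binary approach can be sketched in software. The example below thresholds a synthetic image and estimates a wire's position and orientation from image moments; it is a simplified stand-in for the described pipeline (thinning and end-point detection are omitted), and all names and values are illustrative:

```python
import numpy as np

def wire_pose_binary(gray, threshold=128):
    """Threshold the image, then estimate the wire's position (centroid)
    and orientation (principal axis of the second-order central moments)."""
    mask = gray > threshold                 # segmentation by thresholding
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    # Central second-order moments give the dominant orientation
    mu11 = np.mean((xs - cx) * (ys - cy))
    mu20 = np.mean((xs - cx) ** 2)
    mu02 = np.mean((ys - cy) ** 2)
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return (cx, cy), angle

# A synthetic horizontal "wire" on a dark background
img = np.zeros((32, 32), dtype=np.uint8)
img[15, 4:28] = 255
(cx, cy), angle = wire_pose_binary(img)   # centroid near (15.5, 15), angle ~0
```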


2016 ◽  
Vol 25 (4) ◽  
pp. 299-321 ◽  
Author(s):  
Tomohiro Sueishi ◽  
Hiromasa Oku ◽  
Masatoshi Ishikawa

Dynamic projection mapping (DPM) is a type of projection-based augmented reality that aligns projected content with a moving physical object. To adjust the projection to fast motions of moving objects, DPM requires high-speed visual feedback. One option for reducing the temporal delay of the projection to imperceptible levels is to use mirror-based high-speed optical axis controllers. However, using such controllers for visual feedback requires sufficient illumination of the moving object, which leads to a trade-off between tracking stability and the quality of the projected content. In this article, we propose a system that combines mirror-based high-speed tracking with a retroreflective background. The proposed tracking technique observes the silhouette of the target object under episcopic illumination and is robust against illumination changes. It also maintains high-speed tracking while avoiding tracking failures by performing background subtraction in an active vision system and employing an adaptive window technique. This allows us to create DPM with an imperceptible temporal delay, high tracking stability and high visual quality. We analyze the proposed system with regard to the visual quality of the retroreflective background, the tracking stability under illumination and disturbance conditions, and the visual consistency relative to delay in pose estimation. In addition, we demonstrate application scenarios for the proposed DPM system.
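Silhouette-based background subtraction with an adaptive search window can be sketched as follows. This is a minimal software analogue, assuming a bright retroreflective background and an illustrative window-update rule, not the paper's actual implementation:

```python
import numpy as np

def track_silhouette(frame, window, margin=4, bg_level=200):
    """Inside the current search window, pixels darker than the bright
    (retroreflective) background are treated as the target silhouette;
    the window then re-centres on the silhouette's centroid and resizes
    to its extent plus a margin. Thresholds and margins are illustrative."""
    x0, y0, x1, y1 = window
    roi = frame[y0:y1, x0:x1]
    ys, xs = np.nonzero(roi < bg_level)     # background subtraction
    if len(xs) == 0:
        return window, None                 # target lost: keep the window
    cx, cy = x0 + xs.mean(), y0 + ys.mean()
    half_w = (xs.max() - xs.min()) // 2 + margin
    half_h = (ys.max() - ys.min()) // 2 + margin
    new_window = (int(cx - half_w), int(cy - half_h),
                  int(cx + half_w), int(cy + half_h))
    return new_window, (cx, cy)

frame = np.full((64, 64), 255, dtype=np.uint8)  # bright background
frame[20:28, 30:38] = 10                        # dark target silhouette
window, centroid = track_silhouette(frame, (0, 0, 64, 64))
```

Processing only the adaptive window keeps the per-frame cost low enough for the kHz feedback rates the system relies on.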


1990 ◽  
Vol 2 (6) ◽  
pp. 417-417
Author(s):  
Michitaka Kameyama
In the realization of intelligent robots, highly intelligent manipulation and movement techniques are required, such as intelligent man-machine interfaces, intelligent information processing for path planning and problem solving, practical robot vision, and high-speed sensor signal processing. Very high-speed processing to cope with vast amounts of data, as well as the development of various algorithms, has therefore become an important subject. To fulfill such requirements, the development of high-performance computer architectures using advanced microelectronics technology is required. For these purposes, approaches to implementing computer systems for robots can be classified as follows:

(a) Use of general-purpose computers. As the performance of workstations and personal computers increases year by year, software development is the major task, with no hardware development required except for interfaces with peripheral equipment. Since current high-level languages and software can be applied, this approach is excellent for system development, but the processing performance is limited.

(b) Use of commercially available (V)LSI chips. This approach designs a computer system from a combination of commercially available LSIs. Since the development of both hardware and software is involved, the development period tends to be longer than in (a). These chips include general-purpose microprocessors, memory chips, digital signal processors (DSPs) and multiply-adder LSIs. Though the kinds of available chips are limited to some degree, the approach can meet considerably high performance specifications because a number of chips can be used flexibly.

(c) Design, development and system configuration of VLSI chips. This approach develops new special-purpose VLSI chips using ASIC (Application Specific Integrated Circuit) technology, that is, semicustom or full-custom technology. If such chips attain practical use and are marketed, they will be widely used as high-performance VLSI chips at the level of (b). Since a very high performance specification must be satisfied, the study of very high performance VLSI computer architecture becomes very important. However, this approach requires a very long design-development period, from the determination of processor specifications to the system configuration using the fabricated chips.

Among these three approaches, the order from the viewpoint of ease of development is (a), (b), (c), while that from the viewpoint of performance is (c), (b), (a). The approaches are not mutually exclusive but complementary: for example, new chips developed under (c) can also serve as components in (a) and (b). A point common to all of them is that performance improvement through highly parallel architecture becomes important. This special edition introduces, from the above standpoint, the latest information on the present state and future prospects of computer techniques for robots in Japan. We hope that this edition will contribute to the development of this field.


2005 ◽  
Vol 17 (2) ◽  
pp. 121-129 ◽  
Author(s):  
Yoshihiro Watanabe ◽  
Takashi Komuro ◽  
Shingo Kagami ◽  
Masatoshi Ishikawa

Real-time image processing at high frame rates can play an important role in various visual measurements. Such processing can be realized with a high-speed vision system that images at high frame rates and runs appropriate algorithms at high speed. We introduce a vision chip for high-speed vision and propose a multi-target tracking algorithm that exploits the chip's unique features. We describe two visual measurement applications, target counting and rotation measurement. Both achieve excellent measurement precision and high flexibility because of the achievable high-frame-rate visual observation. Experimental results show the advantages of vision chips over conventional vision systems.
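A key benefit of kHz frame rates for multi-target tracking is that inter-frame motion is tiny, so data association becomes nearly trivial. The sketch below, with illustrative names and an assumed distance gate, matches each previous target to its nearest detection; this is a generic illustration of the principle, not the paper's algorithm:

```python
import numpy as np

def associate(prev_targets, detections, max_dist=2.0):
    """Match each previous target to its nearest detection. At kHz frame
    rates a target moves a fraction of a pixel per frame, so greedy
    nearest-neighbour association with a small gate (max_dist, an
    illustrative value) is usually sufficient."""
    matches = {}
    dets = np.asarray(detections, dtype=float)
    for i, p in enumerate(prev_targets):
        d = np.linalg.norm(dets - np.asarray(p), axis=1)
        j = int(d.argmin())
        if d[j] <= max_dist:
            matches[i] = j          # target i continues as detection j
    return matches

prev = [(10.0, 10.0), (30.0, 12.0)]     # targets in the previous frame
dets = [(30.5, 12.2), (10.2, 9.9)]      # detections in the current frame
m = associate(prev, dets)
```

Target counting then reduces to counting tracks, and rotation measurement to accumulating the small per-frame angular displacement of each matched target.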


1974 ◽  
Vol 18 (5) ◽  
pp. 498-506
Author(s):  
H. Alsberg ◽  
R. Nathan

The role of vision in teleoperation has been recognized as an important element in the man-machine control loop. In most applications of remote manipulation, direct vision cannot be used. To overcome this handicap, the human operator's control capabilities are augmented by a “tele”-vision system. This medium provides a practical and useful link between the workspace and the control station from which the operator performs his tasks. The function of the video system is to reproduce the original scenes in pictorial form. Systematic errors in photometry, resolution, geometry and perhaps color can be removed by decalibration procedures. Human performance deteriorates when the images are degraded by instrumental and transmission limitations. Recovering images from various degradation effects is commonly referred to as restoration. Image enhancement is used to bring out selected qualities in a picture to increase the perception of the observer. At the Image Processing Laboratory (IPL) of JPL, we employ a general-purpose digital computer (IBM 360/44) running an extensive special-purpose software system (VICAR) to perform an almost unlimited repertoire of processing operations. This approach has proven to be most flexible, versatile and suitable for experimental work. Guided by the experience of the IPL and recent advances in LSI technology, we report on special hardwired algorithms which have sped up the processing by several orders of magnitude. Although quantum-limited imaging was made possible by noise removal and contrast enhancement as part of a development in electron microscopy, these methods and experiences are transferable to other teleoperator applications. The processing and enhancement of images are controlled by the operator/scientist, who adjusts the instrument to match his perceptual needs.
Central to the near real time image processing is a high speed digital solid state mass memory operating at input/output speeds compatible with standard TV rates. Thus, the operator, as the most important link in the loop, is provided with a real time interactive display which enables him to perceive the remote workspace as required to execute remote manipulation tasks.
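A minimal example of the kind of contrast enhancement described, sketched as a percentile-based linear stretch; the percentile choices and function name are illustrative and not taken from the paper:

```python
import numpy as np

def contrast_stretch(img, lo_pct=2, hi_pct=98):
    """Stretch the intensity range between two percentiles to full
    8-bit scale, clipping outliers. A simple, common enhancement that
    increases the visibility of low-contrast detail."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    out = (img.astype(np.float64) - lo) / max(hi - lo, 1e-9)
    return (np.clip(out, 0, 1) * 255).astype(np.uint8)

# A low-contrast synthetic image occupying only part of the dynamic range
img = np.linspace(100, 140, 256).reshape(16, 16).astype(np.uint8)
enhanced = contrast_stretch(img)   # now spans the full 0-255 range
```

Implemented in hardwired logic and fed from a frame memory at video rates, this kind of pointwise operation is exactly what allows interactive, near real-time enhancement in the loop.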

