An Intelligent High-Frame-Rate Video Logging System for Abnormal Behavior Analysis

2011 ◽  
Vol 23 (1) ◽  
pp. 53-65 ◽  
Author(s):  
Yao-Dong Wang ◽  
◽  
Idaku Ishii ◽  
Takeshi Takaki ◽  
Kenji Tajima ◽  
...  

This paper introduces a high-speed vision system called IDP Express, which can execute real-time image processing and high-frame-rate (HFR) video recording simultaneously. In IDP Express, 512×512-pixel images from two camera heads, together with the results processed on a dedicated FPGA (Field Programmable Gate Array) board, are transferred to standard PC memory at a rate of 1000 fps or more. Owing to this simultaneous HFR video processing and recording, IDP Express can be used as an intelligent video logging system for long-term analysis of high-speed phenomena. In this paper, a real-time abnormal behavior detection algorithm was implemented on IDP Express to capture HFR videos of the crucial moments of unpredictable abnormal behaviors in high-speed periodic motions. Several experiments were performed on a high-speed slider machine operating repetitively at a frequency of 15 Hz, and videos of the abnormal behaviors were automatically recorded, verifying the effectiveness of our intelligent HFR video logging system.
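One common way to flag abnormal events in a strictly periodic motion is to learn a per-phase template of a scalar image feature over many normal cycles and flag frames that deviate from it; the sketch below illustrates that idea under assumed numbers (a 15 Hz motion observed at roughly 1000 fps, so about 67 frames per cycle). The paper's actual detection algorithm and features are not reproduced here.

```python
import numpy as np

def build_phase_template(feature, period):
    """Average the per-frame feature over whole cycles to get the normal
    value (and spread) at each phase of the periodic motion."""
    n_cycles = len(feature) // period
    cycles = np.asarray(feature[:n_cycles * period]).reshape(n_cycles, period)
    return cycles.mean(axis=0), cycles.std(axis=0) + 1e-6

def detect_abnormal(feature, period, template, sigma, k=4.0):
    """Flag frames deviating more than k-sigma from the template at the
    same phase; such frames would trigger HFR video logging."""
    return [abs(v - template[i % period]) > k * sigma[i % period]
            for i, v in enumerate(feature)]

# 15 Hz periodic motion observed at ~1000 fps -> ~67 frames per cycle.
period = 67
t = np.arange(period * 20)
feature = np.sin(2 * np.pi * t / period)   # stand-in per-frame feature
feature[700] += 5.0                        # injected abnormal event
tmpl, sig = build_phase_template(feature[:period * 10], period)
flags = detect_abnormal(feature, period, tmpl, sig)
```

Only the frame carrying the injected deviation is flagged; everything else matches its phase template.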

Author(s):  
Idaku Ishii ◽  
Tetsuro Tatebe ◽  
Qingyi Gu ◽  
Yuta Moriue ◽  
Takeshi Takaki ◽  
...  

2015 ◽  
Vol 27 (1) ◽  
pp. 12-23 ◽  
Author(s):  
Qingyi Gu ◽  
◽  
Sushil Raut ◽  
Ken-ichi Okumura ◽  
Tadayoshi Aoyama ◽  
...  

[Figure: Synthesized panoramic images] In this paper, we propose a real-time image mosaicing system that uses a high-frame-rate video sequence. Our proposed system can mosaic 512 × 512 color images captured at 500 fps into a single synthesized panoramic image in real time by stitching the images based on their estimated frame-to-frame changes in displacement and orientation. In the system, feature point extraction is accelerated by implementing a parallel processing circuit module for Harris corner detection, and hundreds of selected feature points in the current frame can be simultaneously matched with those in their neighboring ranges in the previous frame, under the assumption that frame-to-frame image displacement becomes small in high-speed vision. The efficacy of our system for improved feature-based real-time image mosaicing at 500 fps was verified by implementing it on a field-programmable gate array (FPGA)-based high-speed vision platform and conducting several experiments: (1) capturing an indoor scene using a camera mounted on a fast-moving two-degrees-of-freedom active vision system, and (2) capturing an outdoor scene using a hand-held camera that was rapidly moved in a periodic fashion by hand.
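The Harris corner response that the paper accelerates in an FPGA circuit module can be sketched in a few lines of software; the parameters below (gradient kernel, 3×3 box window, k = 0.04) are illustrative, not the paper's hardware configuration.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response for a grayscale float image - a software
    sketch of the computation the paper parallelizes in hardware."""
    Ix = np.zeros_like(img)
    Iy = np.zeros_like(img)
    Ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0   # central differences
    Iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0

    def box3(a):                                      # 3x3 box filter
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2  # det - k*trace^2

# A white square on black: the response peaks at its four corners.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
corners = R > 0.5 * R.max()
```

In the paper's setting, the corners extracted this way in the current frame are then matched only within small neighborhoods of the previous frame's corners, exploiting the small inter-frame displacement at 500 fps.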


2021 ◽  
Author(s):  
Jamin Islam

For autonomous satellite grasping, a high-speed, low-cost stereo vision system with high accuracy is required. Such a system must be able to detect an object and estimate its range. Hardware solutions are often chosen over software solutions, which tend to be too slow for high-frame-rate applications. Designs utilizing field-programmable gate arrays (FPGAs) provide flexibility and are cost-effective compared with solutions of similar performance (i.e., application-specific integrated circuits). This thesis presents the architecture and implementation of a high-frame-rate stereo vision system based on an FPGA platform. The system acquires stereo images, performs stereo rectification, and generates disparity estimates at frame rates close to 100 fps; on a large enough FPGA, it can process 200 fps. The implementation presents novelties in performance and in the choice of the algorithm implemented. It achieves superior performance to existing systems that estimate scene depth. Furthermore, it demonstrates equivalent accuracy to software implementations of the dynamic programming maximum likelihood stereo correspondence algorithm.
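Dynamic-programming stereo correspondence can be illustrated with a per-scanline Viterbi over disparities: the data cost is the absolute intensity difference between a left pixel and its candidate right match, and a transition cost penalizes disparity jumps. This is a simplified relative of the maximum-likelihood DP matcher the thesis implements, with illustrative costs, not the thesis's algorithm itself.

```python
import numpy as np

def scanline_disparity(left_row, right_row, max_disp, smooth=0.1):
    """Disparity along one scanline by dynamic programming (1-D Viterbi):
    data cost = |left[x] - right[x - d]|, plus a small cost for changing
    disparity between neighboring pixels."""
    n, D = len(left_row), max_disp + 1
    data = np.full((n, D), np.inf)
    for d in range(D):                       # left x matches right x - d
        data[d:, d] = np.abs(left_row[d:] - right_row[:n - d])
    cost = np.empty((n, D))
    back = np.zeros((n, D), dtype=int)
    cost[0] = data[0]
    for x in range(1, n):
        for d in range(D):
            trans = cost[x - 1] + smooth * np.abs(np.arange(D) - d)
            back[x, d] = int(np.argmin(trans))
            cost[x, d] = data[x, d] + trans[back[x, d]]
    disp = np.zeros(n, dtype=int)            # backtrack the best path
    disp[-1] = int(np.argmin(cost[-1]))
    for x in range(n - 2, -1, -1):
        disp[x] = back[x + 1, disp[x + 1]]
    return disp

# Right scanline, and a copy shifted right by 2 pixels as the left scanline.
right = np.array([3., 1., 4., 1., 5., 9., 2., 6., 5., 3., 5., 8., 9., 7., 9., 3.])
left = np.concatenate([[0., 0.], right[:-2]])
disp = scanline_disparity(left, right, max_disp=4)
```

The recovered disparity settles at the true shift of 2 once past the left border, where no valid match exists.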


2020 ◽  
Author(s):  
Idaku Ishii ◽  
Deepak Kumar ◽  
Sushil Raut ◽  
Kohei Shimasaki ◽  
Taku Senoo

Abstract An informative object pointing method using spatiotemporal-modulated pattern projection is proposed to recognize and localize pointed objects using a distantly located high-frame-rate (HFR) vision system. We developed a prototype for projection-mapping-based object pointing that consists of an AI-camera-enabled projection (AiCP) system used as a transmitter for informative projection mapping and an HFR vision system operated as a receiver. The AiCP system detects multiple objects in real time at 30 fps with a CNN-based object detector, and simultaneously encodes and projects the detector's recognition results as 480-Hz-modulated light patterns onto the objects to be pointed at. The multiple 480-fps cameras can directly recognize and track the objects pointed at by the AiCP system, without camera calibration or complex recognition methods, by decoding the brightness signals of pixels in the images. To demonstrate the effectiveness of our proposed method, several desktop experiments using miniature objects and scenes were conducted under various conditions.
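The receiver side's "decode the brightness signal of a pixel" step can be sketched as reading a framed bit pattern from per-frame brightness samples. The framing below (a fixed [1, 1, 0] sync header followed by ID bits) is a hypothetical stand-in; the paper does not specify its encoding here.

```python
def encode_id(obj_id, n_bits=4):
    """Frame an object ID as a sync header plus ID bits, to be emitted
    as on/off projector light over consecutive 480 Hz time slots
    (hypothetical framing, not the paper's actual code)."""
    header = [1, 1, 0]
    return header + [(obj_id >> i) & 1 for i in range(n_bits - 1, -1, -1)]

def decode_id(brightness, n_bits=4, thresh=0.5):
    """Receiver side: threshold one pixel's per-frame brightness,
    locate the sync header, and read the ID bits back."""
    bits = [1 if b > thresh else 0 for b in brightness]
    for i in range(len(bits) - 3 - n_bits + 1):
        if bits[i:i + 3] == [1, 1, 0]:       # sync header found
            value = 0
            for b in bits[i + 3:i + 3 + n_bits]:
                value = (value << 1) | b
            return value
    return None

# Simulate a pixel on an object pointed with ID 5, with two dark
# frames before the projected pattern starts.
signal = [0.1, 0.1] + [0.9 if b else 0.1 for b in encode_id(5)]
```

Because every camera pixel decodes independently, no calibration between projector and cameras is needed, which is the property the paper exploits.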


2013 ◽  
Vol 25 (4) ◽  
pp. 586-595 ◽  
Author(s):  
Motofumi Kobatake ◽  
◽  
Tadayoshi Aoyama ◽  
Takeshi Takaki ◽  
Idaku Ishii

In this paper, we propose a novel concept of real-time microscopic particle image velocimetry (PIV) for apparent high-speed microchannel flows in a lab-on-a-chip (LOC). We introduce a frame-straddling dual-camera high-speed vision system that synchronizes two different camera inputs for the same camera view with a submicrosecond time delay. To improve the upper and lower limits of measurable velocity in microchannel flow observation, we designed an improved gradient-based optical flow algorithm that adaptively selects a pair of images with the optimal frame-straddling time between the two camera inputs based on the amplitude of the estimated optical flow. This avoids the large image displacement between frames that often generates serious errors in optical flow estimation. Our method is implemented in software on a frame-straddling dual-camera high-speed vision platform that captures real-time video, processes 512 × 512 pixel images at 2000 fps for the two camera heads, and controls the frame-straddling time delay between them from 0 to 0.25 ms in 9.9 ns steps. Our microscopic PIV system with frame-straddling dual-camera high-speed vision simultaneously estimates the velocity distribution of high-speed microchannel flow at 1 × 10⁸ pixels/s or more. Results of experiments using real microscopic flows in microchannels thousands of µm wide on LOCs verify the performance of the real-time microscopic PIV system we developed.
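The adaptive selection of the frame-straddling time can be sketched as choosing the inter-camera delay that keeps the image displacement over the straddling interval near a value the optical flow estimator handles well. The 0-0.25 ms range and 9.9 ns step come from the paper; the 2-pixel target displacement is an illustrative assumption.

```python
def select_straddling_delay(flow_px_per_ms, target_disp_px=2.0,
                            step_ns=9.9, max_delay_ns=250_000.0):
    """Pick the delay between the two camera heads so that the image
    displacement over the straddling interval stays near the target.
    Fast flow -> short delay; slow flow -> delay clamped at the maximum."""
    flow_px_per_ns = flow_px_per_ms / 1e6
    ideal_ns = target_disp_px / max(flow_px_per_ns, 1e-12)
    # Quantize to the hardware step, clamped to the controllable range.
    n_steps = max(1, min(int(ideal_ns / step_ns), int(max_delay_ns / step_ns)))
    return n_steps * step_ns

fast = select_straddling_delay(100.0)   # fast flow: ~0.02 ms delay
slow = select_straddling_delay(0.001)   # slow flow: clamped near 0.25 ms
```

Shrinking the delay for fast flow avoids the large inter-frame displacement that breaks gradient-based optical flow, while stretching it for slow flow keeps the displacement above measurement noise.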


2009 ◽  
Vol 2009 (0) ◽  
pp. _1A1-C12_1-_1A1-C12_4
Author(s):  
Tetsuro Tatebe ◽  
Yuta Moriue ◽  
Takeshi Takaki ◽  
Idaku Ishii ◽  
Kenji Tajima

2018 ◽  
Vol 30 (1) ◽  
pp. 117-127
Author(s):  
Xianwu Jiang ◽  
Qingyi Gu ◽  
Tadayoshi Aoyama ◽  
Takeshi Takaki ◽  
Idaku Ishii ◽  
...  

In this study, we develop a real-time high-frame-rate vision system with frame-by-frame automatic exposure (AE) control that can simultaneously synthesize multiple images with different exposure times into a high-dynamic-range (HDR) image for scenarios with dynamically changing illumination. By accelerating the video capture and processing for time-division multithread AE control at the millisecond level, the proposed system can virtually function as multiple AE cameras with different exposure times. The system can capture color HDR images of 512 × 512 pixels in real time at 500 fps by synthesizing four 8-bit color images with different exposure times from consecutive frames, captured at an interval of 2 ms, with pixel-level parallel processing accelerated by a GPU (Graphics Processing Unit) board. Several experimental results for scenarios with large changes in illumination are presented to confirm the performance of the proposed system for real-time HDR imaging.
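Merging four differently exposed 8-bit frames into one radiance map is commonly done with a weighted average in the linear domain, trusting mid-tone pixels and discounting clipped ones. The sketch below is a textbook-style version of multi-exposure HDR synthesis, not the paper's GPU pipeline, and it assumes a linear camera response.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge differently exposed 8-bit frames into a relative radiance
    map with a mid-tone-weighted average (assumes linear response)."""
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(images, exposure_times):
        z = img.astype(np.float64) / 255.0
        w = 1.0 - np.abs(2.0 * z - 1.0)   # hat weight: distrust clipped pixels
        num += w * z / t
        den += w
    return num / np.maximum(den, 1e-8)

# Four exposures of a constant-radiance patch; the two longest clip to white.
radiance = 0.3
times = [1.0, 2.0, 4.0, 8.0]
frames = [np.full((2, 2), int(round(min(radiance * t, 1.0) * 255)), dtype=np.uint8)
          for t in times]
hdr = merge_hdr(frames, times)
```

The clipped frames get zero weight, so the recovered radiance comes from the well-exposed shots, which is what lets a set of consecutive-frame exposures cover a large illumination range.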


2005 ◽  
Vol 17 (2) ◽  
pp. 121-129 ◽  
Author(s):  
Yoshihiro Watanabe ◽  
◽  
Takashi Komuro ◽  
Shingo Kagami ◽  
Masatoshi Ishikawa

Real-time image processing at high frame rates can play an important role in various visual measurements. Such processing can be realized by a high-speed vision system that images at high frame rates, combined with appropriate algorithms executed at high speed. We introduce a vision chip for high-speed vision and propose a multi-target tracking algorithm that exploits the chip's unique features. We describe two visual measurement applications: target counting and rotation measurement. Both measurements achieve excellent precision and high flexibility because of the achievable high-frame-rate visual observation. Experimental results show the advantages of vision chips over conventional visual systems.
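At kilohertz frame rates each target moves only a few pixels per frame, so multi-target tracking can reduce to matching each detection to the nearest previous centroid. The greedy association below is a sketch of that small-displacement assumption, not the vision chip's actual algorithm.

```python
def track_targets(prev, curr, max_move=5.0):
    """Greedy nearest-neighbor association of detections to tracks.
    prev: dict track_id -> (x, y); curr: list of detected centroids.
    Detections with no track within max_move pixels start new tracks."""
    assigned = {}
    next_id = max(prev, default=-1) + 1
    used = set()
    for c in curr:
        best_id, best_d = None, max_move
        for tid, p in prev.items():
            if tid in used:
                continue
            d = ((c[0] - p[0]) ** 2 + (c[1] - p[1]) ** 2) ** 0.5
            if d <= best_d:
                best_id, best_d = tid, d
        if best_id is None:            # no track nearby: start a new one
            best_id, next_id = next_id, next_id + 1
        used.add(best_id)
        assigned[best_id] = c
    return assigned

# Two known targets move by one pixel; a third target appears.
tracks = track_targets({0: (10, 10), 1: (40, 40)}, [(41, 40), (11, 10), (80, 5)])
```

Target counting then falls out as the number of live track IDs, and rotation measurement from the angular change of a tracked centroid between consecutive frames.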


2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Chengfei Wu ◽  
Zixuan Cheng

Public safety issues have always been a focus of widespread concern among people from all walks of life. With the development of video detection technology, the detection of abnormal human behavior in videos has become key to preventing public safety issues. Particularly in student groups, the detection of abnormal human behavior is very important. Most existing abnormal human behavior detection algorithms are aimed at outdoor activity detection, and their indoor detection results are not ideal. Students spend most of their time indoors, and modern classrooms are mostly equipped with monitoring equipment. This study focuses on the detection of abnormal indoor human behavior and uses a new detection framework to realize it. First, a background modeling method based on a Gaussian mixture model is used to segment the background image of each frame in the video. Second, block processing is performed on the background-segmented image to obtain the space-time blocks of each frame, which serve as the basic representation of the detection object. Third, the foreground image features of each space-time block are extracted. Fourth, fuzzy C-means (FCM) clustering is used to detect outliers in the data samples. The contributions of this paper are as follows: (1) an abnormal human behavior detection framework that is effective indoors; compared with existing abnormal human behavior detection methods, the framework differs little in outdoor detection performance; (2) compared with other detection methods, the framework achieves a better detection effect for abnormal indoor human behavior, with greatly improved detection performance; and (3) the framework is easy to implement and has low time complexity. Experimental results obtained on public and manually created data sets demonstrate that the performance of the proposed framework is similar to that of the compared methods in outdoor detection scenarios, while it has a strong advantage in indoor detection. In summary, the proposed detection framework has good practical application value.
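The FCM-based outlier step can be sketched as follows: cluster the per-block feature vectors with fuzzy C-means and flag samples whose top membership is low, i.e., samples that belong strongly to no cluster of normal behavior. This is a generic textbook FCM with illustrative parameters; the paper's actual features, cluster count, and thresholds are not reproduced.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100):
    """Minimal fuzzy C-means returning the membership matrix U (n x c).
    Centers are initialized at spread-out data points for determinism."""
    n = len(X)
    centers = X[np.arange(c) * (n // c)].astype(float)
    U = np.zeros((n, c))
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)       # membership update
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # center update
    return U

# Two tight clusters of "normal" feature vectors plus one far outlier:
# the outlier's top membership stays low, so it is flagged as abnormal.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
               rng.normal(5.0, 0.1, (10, 2)),
               [[20.0, 20.0]]])
U = fuzzy_cmeans(X)
flags = U.max(axis=1) < 0.8
```

Soft memberships are what make FCM convenient here: a normal block sits close to one cluster center and gets a membership near 1, while an outlying block splits its membership across clusters.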

