Video Stream Analysis in Clouds: An Object Detection and Classification Framework for High Performance Video Analytics

2019 ◽  
Vol 7 (4) ◽  
pp. 1152-1167 ◽  
Author(s):  
Ashiq Anjum ◽  
Tariq Abdullah ◽  
M. Fahim Tariq ◽  
Yusuf Baltaci ◽  
Nick Antonopoulos
2017 ◽  
Vol 2 (1) ◽  
pp. 80-87
Author(s):  
Puyda V. ◽  
Stoian A.

Detecting objects in a video stream is a typical problem in modern computer vision systems, which are used in many areas. Object detection can be performed both on static images and on the frames of a video stream. Essentially, object detection means finding color and intensity non-uniformities that can be treated as physical objects. Besides that, the coordinates, size and other characteristics of these non-uniformities can be computed and used to solve related computer vision problems such as object identification. In this paper, we study three algorithms that detect objects of different natures and are based on different approaches: detection of color non-uniformities, frame difference and feature detection. As input data, we use a video stream obtained from a video camera or from an mp4 video file. Simulation and testing of the algorithms were done on a universal computer based on open-source hardware, built around the Broadcom BCM2711, a quad-core Cortex-A72 (ARM v8) 64-bit SoC with a clock frequency of 1.5 GHz. The software was created in Visual Studio 2019 using OpenCV 4 on Windows 10, and on a universal computer running Linux (Raspbian Buster OS) for the open-source hardware. The paper compares the methods under consideration. The results can be used in the research and development of modern computer vision systems for various purposes. Keywords: object detection, feature points, keypoints, ORB detector, computer vision, motion detection, HSV color model
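The frame-difference approach compared in the abstract can be illustrated with a minimal, library-free sketch (the authors used OpenCV 4; the function names and the threshold value below are illustrative assumptions, not taken from the paper). Frames are modeled as lists of grayscale rows; a pixel counts as "changed" when its intensity differs by more than the threshold between consecutive frames:

```python
# Minimal frame-difference motion detector: a sketch of one of the three
# algorithms compared in the paper. Names and thresholds are illustrative.

def frame_difference(prev, curr, threshold=30):
    """Return a binary mask marking pixels whose grayscale intensity
    changed by more than `threshold` between two frames (lists of rows)."""
    mask = []
    for row_prev, row_curr in zip(prev, curr):
        mask.append([1 if abs(a - b) > threshold else 0
                     for a, b in zip(row_prev, row_curr)])
    return mask

def motion_detected(mask, min_changed=1):
    """Treat the frame pair as containing motion if at least
    `min_changed` pixels differ."""
    return sum(map(sum, mask)) >= min_changed

frame_a = [[10, 10, 10],
           [10, 10, 10],
           [10, 10, 10]]
frame_b = [[10, 10, 10],
           [10, 200, 10],   # one pixel changed sharply
           [10, 10, 10]]

mask = frame_difference(frame_a, frame_b)  # one pixel flagged -> motion
```

In a real pipeline the same logic runs per frame on camera input (e.g. via OpenCV's `absdiff` and `threshold`), usually after blurring to suppress sensor noise.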


2020 ◽  
Vol 219 (10) ◽  
Author(s):  
Dominic Waithe ◽  
Jill M. Brown ◽  
Katharina Reglinski ◽  
Isabel Diez-Sevilla ◽  
David Roberts ◽  
...  

Object detection networks are high-performance algorithms widely applied to the task of identifying and localizing objects in photographic images. We demonstrate their application to the classification and localization of cells in fluorescence microscopy by benchmarking four leading object detection algorithms across multiple challenging 2D microscopy datasets. We also develop and demonstrate an algorithm that can localize and image cells in 3D, in close to real time, at the microscope, using widely available and inexpensive hardware. Furthermore, we exploit the fast processing of these networks to develop a simple and effective augmented reality (AR) system for fluorescence microscopes, using a display screen and back-projection onto the eyepiece. We show that it is possible to achieve very high classification accuracy using datasets with as few as 26 images. With our approach, relatively unskilled users can automate the detection of cell classes with a variety of appearances, enabling new avenues for automating fluorescence microscopy acquisition pipelines.
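Benchmarking detection networks, as described above, typically scores predicted bounding boxes against ground truth by intersection-over-union (IoU). The following is a generic sketch of that standard metric, not code from the paper; boxes are assumed to be `(x_min, y_min, x_max, y_max)` tuples:

```python
# Intersection-over-union for axis-aligned bounding boxes, the standard
# localization score used when benchmarking object detectors.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A prediction is usually counted as a true positive when its IoU with a ground-truth box of the same class exceeds a threshold such as 0.5.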


2015 ◽  
Vol 35 (6) ◽  
pp. 795-802 ◽  
Author(s):  
Feng-Sheng Lin ◽  
Chia-Ping Shen ◽  
Chia-Hung Liu ◽  
Han Lin ◽  
Chi-Ying F. Huang ◽  
...  

2021 ◽  
Vol 273 ◽  
pp. 12116
Author(s):  
I.O. Egorochkina ◽  
E.S. Tsygankova ◽  
E.A. Shlyakhova ◽  
I.A. Serebryanaya ◽  
D.S. Serebryanaya

The project “SportTrainer” is presented: an information system for video analytics and personal recommendations aimed at preventing physical inactivity and increasing the effectiveness of personal training for students who want to achieve a particular sporting result and improve their physique. The developed “SportTrainer” system provides consulting on training programs and nutrition, and monitors adherence to the selected programs and the achievement of goals. The digital trainer can assess how effective the selected program is for a particular individual, select the necessary exercises, adjust the training program and diet, and model and visualize progress. The research used modern technologies of video analytics, computer vision, machine learning and video streaming, which allow real-time processing of a video stream, analysis and pattern recognition, and full automation of personal training at home. The measures developed in the “SportTrainer” project were implemented in three control groups consisting of bachelor's students, master's students and teachers. Statistical processing of the basic indicators and of the results achieved while testing the “SportTrainer” system has been carried out. The presented statistics confirm the effectiveness of the program within the control training groups. The developed system is applicable to a wide range of the population and is effective under the social distancing imposed by the pandemic, since it improves conditions for training at home, including professional sports. In the future, a sharp increase in the share of such systems is expected in comparison with traditional training techniques.


1998 ◽  
Vol 5 (45) ◽  
Author(s):  
Morten Vadskær Jensen ◽  
Brian Nielsen

We present the design and implementation of a high-performance layered video codec designed for deployment in bandwidth-heterogeneous networks. The codec combines wavelet-based subband decomposition and discrete cosine transforms to facilitate layered spatial and SNR (signal-to-noise ratio) coding for bit-rate adaptation to a wide range of receiver capabilities. We show how a test video stream can be partitioned into several distinct layers of increasing visual quality and bandwidth requirements, with a 47:1 ratio between the highest and lowest requirement. Through the use of the Visual Instruction Set (VIS) on Sun's UltraSPARC platform, we demonstrate how SIMD parallel image processing enables real-time layered encoding and decoding in software. Our 384 × 320 × 24-bit test video stream is partitioned into 21 layers at a speed of 39 frames per second and reconstructed at 28 frames per second. Our VIS-accelerated encoder stages are about 3-4 times as fast as an optimized C version. We find that this speed-up is well worth the extra implementation effort.
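The layering idea behind subband decomposition can be sketched with one level of a 1-D Haar split: the low-pass half carries a coarse base-layer approximation, and the high-pass half holds the detail needed to refine it. This is only an illustration of the principle, under the assumption of an even-length signal; the codec itself combines 2-D wavelets with DCTs and VIS SIMD code:

```python
# One level of a Haar subband split: averages form the coarse (base) layer,
# pairwise differences form the enhancement (detail) layer.

def haar_split(signal):
    """Split an even-length signal into (averages, differences)."""
    low = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return low, high

def haar_merge(low, high):
    """Exact inverse: rebuild the original signal from both subbands."""
    out = []
    for a, d in zip(low, high):
        out.extend([a + d, a - d])
    return out
```

A receiver on a slow link can decode the `low` band alone for a half-resolution preview, while better-provisioned receivers also fetch `high` and reconstruct losslessly, which is the essence of layered bit-rate adaptation.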

