Real-Time Analysis of Athletes’ Physical Condition in Training Based on Video Monitoring Technology of Optical Flow Equation

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Cuijuan Wang

This article investigates video motion segmentation algorithms based on optical flow equations. First, mainstream segmentation algorithms are reviewed, and on this basis a spectral-clustering segmentation algorithm for analyzing athletes’ physical condition in training is proposed. Unlike algorithms that process only a single frame of a video, the proposed method analyzes multiple consecutive frames and tracks sampled points across them with the Lucas-Kanade optical flow method. Densely sampled feature points capture as much of the video’s motion information as possible; this motion information is expressed through trajectory descriptions, and moving targets are finally segmented by clustering the motion trajectories. The basic concepts of image segmentation and video moving-target segmentation are also described, and the classification criteria of different video motion segmentation algorithms, along with their respective advantages and disadvantages, are analyzed. In the experiments, the initial template is determined by comparing the gray-scale variance of the image; the characteristic optical flow is used to estimate the initial template’s search area in the next frame, reducing matching time; template similarity is judged by the Hausdorff distance; and an adaptive weighted template update is applied to templates with large deviations. The simulation results show that the algorithm achieves long-term, stable tracking of moving targets in the mine, as well as continuous tracking of partially occluded moving targets.
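As a rough illustration of the track-and-cluster pipeline described above (not the author's implementation), the sketch below densely samples grid points, tracks them across frames with OpenCV's pyramidal Lucas-Kanade routine, and segments motion by spectral clustering of the resulting trajectories; the video path, grid step, window size, and two-cluster setting are all illustrative assumptions.

```python
import cv2
import numpy as np
from sklearn.cluster import SpectralClustering

# Hypothetical input video; 8 px grid step and LK window size are assumptions.
cap = cv2.VideoCapture("training.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Dense sampling of feature points on a regular grid.
h, w = prev_gray.shape
ys, xs = np.mgrid[8:h:8, 8:w:8]
pts = np.dstack([xs, ys]).reshape(-1, 1, 2).astype(np.float32)
trajectories = [[p] for p in pts.reshape(-1, 2)]

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade estimates where each sampled point moved.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, pts, None, winSize=(15, 15), maxLevel=2)
    for traj, p, s in zip(trajectories, nxt.reshape(-1, 2), status.ravel()):
        if s:  # keep only successfully tracked points
            traj.append(p)
    pts, prev_gray = nxt, gray

# Describe each trajectory by its mean frame-to-frame displacement and
# segment moving targets by spectral clustering of these descriptors.
feats = np.array([np.mean(np.diff(t, axis=0), axis=0)
                  for t in trajectories if len(t) > 1])
labels = SpectralClustering(
    n_clusters=2, affinity="nearest_neighbors").fit_predict(feats)
```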

Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3722
Author(s):  
Byeongkeun Kang ◽  
Yeejin Lee

Motion in videos refers to the pattern of apparent movement of objects, surfaces, and edges across image sequences caused by the relative movement between a camera and a scene. Motion, like scene appearance, is an essential feature for estimating a driver’s visual attention allocation in computer vision. However, although driver-attention prediction models focusing on scene appearance have been well studied, the fact that motion can be a crucial factor in estimating a driver’s attention has not been thoroughly examined in the literature. Therefore, in this work, we investigate the usefulness of motion information in estimating a driver’s visual attention. To analyze its effectiveness, we develop a deep neural network framework that predicts attention locations and attention levels from optical flow maps, which represent the movement of content in videos. We validate the proposed motion-based prediction model by comparing its performance to that of current state-of-the-art prediction models using RGB frames. The experimental results on a real-world dataset confirm our hypothesis that motion contributes to prediction accuracy, and that there is room for further accuracy improvement by using motion features.
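A minimal sketch of motion-only attention prediction, assuming a tiny fully convolutional network; the authors' actual architecture, the frame file names, and the Farneback parameters below are illustrative, not taken from the paper.

```python
import cv2
import torch
import torch.nn as nn

# Hypothetical network: maps a 2-channel optical flow field to a
# single-channel map of attention levels. Sizes are assumptions.
class FlowAttentionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # per-pixel attention logit
        )

    def forward(self, flow):
        # Sigmoid turns logits into attention levels in [0, 1].
        return torch.sigmoid(self.net(flow))

# Dense optical flow between two consecutive frames (Farneback is one
# example estimator; any dense flow method would serve the same role).
prev = cv2.cvtColor(cv2.imread("frame0.png"), cv2.COLOR_BGR2GRAY)
curr = cv2.cvtColor(cv2.imread("frame1.png"), cv2.COLOR_BGR2GRAY)
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# (H, W, 2) flow -> (1, 2, H, W) tensor -> (1, 1, H, W) attention map.
flow_t = torch.from_numpy(flow).permute(2, 0, 1).unsqueeze(0).float()
attention = FlowAttentionNet()(flow_t)
```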


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Jie Shen ◽  
Mengxi Xu ◽  
Xinyu Du ◽  
Yunbo Xiong

Video surveillance is an important data source for urban computing and intelligence, but the low resolution of many existing surveillance devices limits its usefulness; improving the resolution of surveillance video is therefore one of the important tasks of urban computing and intelligence. In this paper, video resolution is improved by learning-based superresolution reconstruction. Unlike superresolution reconstruction of static images, superresolution reconstruction of video is characterized by the use of motion information, yet there have been few studies in this area so far. To fully exploit motion information for video superresolution, this paper proposes a reconstruction method based on an efficient subpixel convolutional neural network in which optical flow is introduced into the deep learning network. Fusing optical flow features between successive frames compensates for missing information across frames and generates high-quality superresolution results. In addition, a subpixel convolution layer is added after the deep convolutional network to further improve the superresolution. Finally, experimental evaluations demonstrate the satisfactory performance of our method compared with previous methods and other deep learning networks; our method is also more efficient.
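A minimal sketch of the efficient subpixel convolution idea the paper builds on: a few convolutional layers emit scale² times as many channels as the output needs, and a PixelShuffle layer rearranges them into a higher-resolution image. The channel counts and the x4 scale are assumptions, and the paper's full model additionally fuses optical flow features between successive frames.

```python
import torch
import torch.nn as nn

# ESPCN-style subpixel upscaling: all convolutions run at low resolution,
# which is what makes the approach efficient.
class SubPixelSR(nn.Module):
    def __init__(self, scale=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 5, padding=2), nn.Tanh(),
            nn.Conv2d(64, 32, 3, padding=1), nn.Tanh(),
        )
        # Emit scale^2 * 3 channels in low resolution, then let
        # PixelShuffle rearrange them into a high-resolution image.
        self.expand = nn.Conv2d(32, 3 * scale ** 2, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, lr):
        return self.shuffle(self.expand(self.features(lr)))

lr = torch.randn(1, 3, 64, 64)  # low-resolution input batch
sr = SubPixelSR()(lr)           # 1 x 3 x 256 x 256 superresolved output
```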


Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1085
Author(s):  
Kaifeng Zhang ◽  
Dan Li ◽  
Jiayun Huang ◽  
Yifei Chen

The detection of pig behavior helps identify abnormal conditions such as diseases and dangerous movements in a timely and effective manner, which plays an important role in ensuring the health and well-being of pigs. Monitoring pig behavior by staff is time-consuming, subjective, and impractical, so there is an urgent need for methods that identify pig behavior automatically. In recent years, deep learning has gradually been applied to the study of pig behavior recognition. Existing studies judge pig behavior based only on the pig’s posture in a still image frame, without considering the motion information of the behavior, which optical flow reflects well. This study therefore took image frames and optical flow from videos as two-stream inputs to fully extract the temporal and spatial characteristics of behavior. Two-stream convolutional network models based on deep learning were proposed for pig behavior recognition, including the inflated 3D ConvNet (I3D) and temporal segment networks (TSN) whose feature extraction network is a Residual Network (ResNet) or an Inception architecture (e.g., Inception with Batch Normalization (BN-Inception), InceptionV3, InceptionV4, or InceptionResNetV2). A standard pig video behavior dataset of 1000 videos covering five different behaviors of pigs under natural conditions (feeding, lying, walking, scratching, and mounting) was created, and it was used to train and test the proposed models in a series of comparative experiments. The experimental results showed that the TSN model whose feature extraction network was ResNet101 recognized pig feeding, lying, walking, scratching, and mounting behaviors with a higher average accuracy of 98.99%, with an average recognition time of 0.3163 s per video. The TSN model (ResNet101) is superior to the other models for the task of pig behavior recognition.
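To make the two-stream idea concrete, here is a hedged late-fusion sketch, not the paper's exact TSN configuration: one ResNet-101 scores RGB frames, a second ResNet-101 with a widened first convolution scores stacked optical flow maps, and the class probabilities are averaged. The five classes match the paper's behaviors; the 10-channel flow stack (5 frame pairs x 2 flow components) and the equal-weight fusion are common two-stream choices, not values from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # feeding, lying, walking, scratching, mounting

rgb_net = models.resnet101(num_classes=NUM_CLASSES)   # spatial stream
flow_net = models.resnet101(num_classes=NUM_CLASSES)  # temporal stream
# Widen the first convolution so it accepts stacked flow maps.
flow_net.conv1 = nn.Conv2d(10, 64, kernel_size=7, stride=2,
                           padding=3, bias=False)

rgb = torch.randn(1, 3, 224, 224)    # one RGB frame
flow = torch.randn(1, 10, 224, 224)  # stacked optical flow maps
# Late fusion: average the per-class probabilities of the two streams.
scores = (rgb_net(rgb).softmax(-1) + flow_net(flow).softmax(-1)) / 2
behavior = scores.argmax(-1)  # index of the predicted behavior class
```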


1983 ◽  
Vol 27 (12) ◽  
pp. 996-1000
Author(s):  
Dean H. Owen ◽  
Lawrence J. Hettinger ◽  
Shirley B. Tobias ◽  
Lawrence Wolpert ◽  
Rik Warren

Several methods are presented for breaking linkages among global optical flow and texture variables in order to assess their usefulness in experiments requiring observers to distinguish changes in the speed or heading of simulated self-motion from events representing constant speed or level flight. Results are presented from a series of studies testing sensitivity to flow acceleration or deceleration, flow-pattern expansion variables, and the distribution of optical texture density. Theoretical implications for determining the metrics of visual self-motion information, along with practical relevance for pilot and flight-simulator evaluation and for low-level, high-speed flight, are discussed.
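For orientation, the global optical flow rate manipulated in this line of research is conventionally expressed in eye heights per second; a minimal formulation, assuming level flight at altitude (eye height) z with forward speed v, is:

```latex
% Global optical flow rate in eye heights per second; flow acceleration
% is its time derivative, nonzero when speed changes at constant altitude.
\[
  \dot{\theta}_{\mathrm{global}} = \frac{v}{z}, \qquad
  \frac{d}{dt}\!\left(\frac{v}{z}\right) = \frac{\dot{v}}{z}
  \quad (z \text{ constant})
\]
```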


1989 ◽  
Author(s):  
A. Verri ◽  
S. Uras ◽  
E. D. Micheli

2019 ◽  
Vol 19 (7) ◽  
pp. 2 ◽  
Author(s):  
Alexander Goettker ◽  
Doris I. Braun ◽  
Karl R. Gegenfurtner
