Experimental research on motion detection accuracy of joint transform correlator
2012
Author(s): Guang Lin, Qi Li, Huajun Feng, Zhihai Xu

2019, Vol 16 (04), pp. 1950017
Author(s): Sheng Liu, Yangqing Wang, Fengji Dai, Jingxiang Yu

Motion detection and object tracking play important roles in unsupervised human–machine interaction systems. Nevertheless, human–machine interaction becomes invalid when the system fails to detect scene objects correctly due to occlusion or a limited field of view, so robust long-term tracking of scene objects is vital. In this paper, we present a 3D motion detection and long-term tracking system with simultaneous 3D reconstruction of dynamic objects. To achieve high-precision motion detection, the proposed method provides an optimization framework with a novel motion pose estimation energy function, by which the 3D motion pose of each object can be estimated independently. We also develop an accurate object-tracking method that combines 2D visual information and depth, and we incorporate a novel boundary-optimization segmentation based on both cues to significantly improve the robustness of tracking. In addition, we introduce a new fusion and updating strategy in the 3D reconstruction process, which brings higher robustness to 3D motion detection. Experimental results show that, on synthetic sequences, the root-mean-square error (RMSE) of our system is much smaller than that of Co-Fusion (CF), and our system performs extremely well in 3D motion detection accuracy. On real scene data with occlusion or out-of-view motion, CF suffers tracking loss or object-label changes; by contrast, our system maintains robust tracking and keeps the correct label for each dynamic object. Our system is therefore robust to occlusion and out-of-view application scenarios.
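The RMSE metric used above to compare estimated object trajectories against ground truth can be sketched as follows; `trajectory_rmse` is a hypothetical helper name, and the (N, 3) trajectory layout is an assumption, not the paper's actual evaluation code.

```python
import numpy as np

def trajectory_rmse(estimated, ground_truth):
    """Root-mean-square error between an estimated and a ground-truth
    3D trajectory, each given as an (N, 3) array of positions."""
    estimated = np.asarray(estimated, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    # Per-frame Euclidean error, then root of the mean squared error.
    per_frame = np.linalg.norm(estimated - ground_truth, axis=1)
    return float(np.sqrt(np.mean(per_frame ** 2)))
```

A lower value indicates that the estimated 3D motion poses track the ground truth more closely over the whole sequence.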


2018, Vol 15 (4), pp. 172988141878363
Author(s): Han Wei, Qiao Peng

This article proposes a motion detection method for real-time video analysis. Its fundamental principle is that the parts of moving objects and the local changes in images captured by static cameras are strongly correlated. The peak signal-to-noise ratio (PSNR) computed over a block characterizes the significance of the changes in that area, so moving objects can be detected by thresholding the PSNR of corresponding blocks in two adjacent frames. The block-wise scheme used in this frame-difference method exploits the local correlation of movement in both the spatial and temporal domains. The approach is robust when analyzing video with noise and high variance caused by environmental changes, such as illumination changes. Compared with other methods, the proposed method achieves relatively high detection accuracy with less computation time, making real-time motion detection feasible. Experimental results show that the proposed method costs on average 50% of the running time of ViBe, with a 3.5% increase in F-score on detection accuracy.
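The block-wise PSNR thresholding described above can be sketched as follows; this is a minimal illustration, not the authors' implementation, and the block size, 8-bit peak value, and PSNR threshold are assumptions.

```python
import numpy as np

def block_psnr_motion(prev, curr, block=16, psnr_thresh=30.0):
    """Flag blocks whose PSNR between two adjacent grayscale frames
    falls below a threshold (low PSNR means a significant change).
    Frame dimensions are assumed to be multiples of `block`."""
    prev = prev.astype(np.float64)
    curr = curr.astype(np.float64)
    h, w = prev.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            diff = prev[by:by + block, bx:bx + block] - curr[by:by + block, bx:bx + block]
            mse = np.mean(diff ** 2)
            if mse == 0:
                continue  # identical block: infinite PSNR, no motion
            psnr = 10.0 * np.log10(255.0 ** 2 / mse)  # 8-bit peak value
            if psnr < psnr_thresh:
                mask[by // block, bx // block] = True
    return mask
```

Blocks left unflagged are treated as static background, which is how the method keeps the per-frame cost low enough for real-time use.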


2021, Vol 2021, pp. 1-7
Author(s): Dingchao Zheng, Yangzhi Zhang, Zhijian Xiao

To enhance the effect of motion detection, a Gaussian modeling algorithm is proposed to fix the holes and breaks caused by the conventional frame difference method. The proposed algorithm uses an improved three-frame difference method: a three-frame image sequence with a one-frame interval is selected for pairwise difference calculation, and a logical OR operation is used to achieve fast motion detection while reducing voids and fractures. The Gaussian algorithm establishes an adaptive learning model that makes the size and contour of the detected motion more accurate. The motion extracted by the improved three-frame difference method and the Gaussian model is logically combined to obtain the final motion foreground. Moreover, a moving-target detection method based on the U-Net deep learning network is proposed to reduce the dependency of deep learning on large training datasets, allowing models to be trained on small datasets. It then calculates the ratio of positive to negative samples in the dataset and uses the reciprocal of this ratio as the sample weight to handle the imbalance of positive and negative samples. Finally, a threshold is set on the predicted results to obtain the moving-object detection accuracy. Experimental results show that the algorithm suppresses the generation and rupture of holes, reduces noise, and detects movement quickly and accurately enough to meet the design requirements.
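The improved three-frame difference step described above can be sketched as follows; this is a simplified illustration under assumed parameters (the difference threshold is arbitrary), not the paper's code, and it omits the Gaussian background model that the full pipeline combines with this mask.

```python
import numpy as np

def three_frame_diff_or(f1, f2, f3, thresh=25):
    """Improved three-frame difference: threshold the two pairwise
    absolute differences of a three-frame sequence and combine them
    with a logical OR, which fills the holes and breaks that a single
    frame difference leaves behind."""
    # Cast to a signed type so uint8 subtraction cannot wrap around.
    f1, f2, f3 = (f.astype(np.int16) for f in (f1, f2, f3))
    d12 = np.abs(f2 - f1) > thresh
    d23 = np.abs(f3 - f2) > thresh
    return d12 | d23
```

Using OR rather than the conventional AND keeps a pixel in the foreground if it changed in either adjacent pair, which is what reduces the voids and fractures mentioned above.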


Complexity, 2021, Vol 2021, pp. 1-11
Author(s): Qichang Xu

Traditional moving-target detection methods in complex scenes suffer from low detection accuracy and high complexity and do not consider the overall structural information of the video frame; to address these shortcomings, this paper proposes a moving-target detection method based on a sensor network. First, a low-power motion-detection wireless sensor network node is designed to obtain motion detection information in real time. Second, the background of the video scene is quickly extracted by the time-domain averaging method, and the video sequence and the background image are channel-merged to construct a deep fully convolutional network model. Finally, the network model learns the deep features of the video scene and outputs pixel-level classification results to achieve moving-target detection. This method not only adapts to complex video scenes of different sizes but also uses a simple background extraction method, which effectively improves the detection speed.
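The time-domain averaging and channel-merging steps can be sketched as follows; `temporal_average_background` is a hypothetical helper name, and the channel layout fed to the network is an assumption, since the abstract does not specify it.

```python
import numpy as np

def temporal_average_background(frames):
    """Quick background extraction by time-domain averaging: the
    per-pixel mean over a stack of frames suppresses transient
    foreground motion and keeps the static scene."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0).astype(np.uint8)

def merge_with_background(frame, background):
    """Channel-merge a grayscale frame with the extracted background,
    producing the 2-channel input assumed for the convolutional model."""
    return np.stack([frame, background], axis=-1)
```

Because the background is a simple running mean, it can be recomputed cheaply as new frames arrive, which is consistent with the speed advantage claimed above.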


2009, Vol 29 (7), pp. 1796-1800
Author(s): 葛鹏 Ge Peng, 陈跃庭 Chen Yueting, 李奇 Li Qi, 冯华君 Feng Huajun, 徐之海 Xu Zhihai, ...

Author(s): Bernhard Guggenberger, Andreas J. Jocham, Birgit Jocham, Alexander Nischelwitzer, Helmut Ritschl

Demographic changes associated with an expanding and aging population will lead to an increasing number of orthopedic surgeries, such as joint replacements. To support patients' home exercise programs after total hip replacement and completion of subsequent inpatient rehabilitation, a low-cost, smartphone-based augmented reality training game (TG) was developed. To evaluate its motion detection accuracy, data from 30 healthy participants were recorded while using the TG, with a 3D motion analysis system serving as the reference. The TG showed differences of 18.03 mm to 24.98 mm along the anatomical axes. Along the main movement direction of the implemented exercises (squats, step-ups, side-steps), differences between 10.13 mm and 24.59 mm were measured. In summary, the accuracy of the TG's motion detection is sufficient for use in exergames and for quantifying progress in patients' performance. Considering the findings of this study, the presented exergame approach has potential as a low-cost, easily accessible support for patients in their home exercise programs.
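The per-axis accuracy figures reported above amount to comparing time-synchronised 3D trajectories from the smartphone against the reference system; a minimal sketch, assuming both are given as (N, 3) position arrays in millimetres (the helper name and the mean-absolute-difference metric are assumptions, not the study's exact protocol):

```python
import numpy as np

def per_axis_difference(tg, ref):
    """Mean absolute difference per anatomical axis between the
    training game's tracked positions and the reference 3D motion
    analysis system, both shaped (N, 3) and time-synchronised."""
    tg = np.asarray(tg, dtype=float)
    ref = np.asarray(ref, dtype=float)
    return np.mean(np.abs(tg - ref), axis=0)
```

Reporting one value per axis, rather than a single pooled error, makes it possible to check the main movement direction of each exercise separately, as the study does.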


2006, Vol 27 (4), pp. 218-228
Author(s): Paul Rodway, Karen Gillies, Astrid Schepman

This study examined whether individual differences in the vividness of visual imagery influenced performance on a novel long-term change detection task. Participants were presented with a sequence of pictures, with each picture and its title displayed for 17 s, and then presented with changed or unchanged versions of those pictures and asked to detect whether the picture had been changed. Cuing the retrieval of the picture's image, by presenting the picture's title before the arrival of the changed picture, facilitated change detection accuracy. This suggests that the retrieval of the picture's representation immunizes it against overwriting by the arrival of the changed picture. The high and low vividness participants did not differ in overall levels of change detection accuracy. However, in replication of Gur and Hilgard (1975), high vividness participants were significantly more accurate at detecting salient changes to pictures compared to low vividness participants. The results suggest that vivid images are not characterised by a high level of detail and that vivid imagery enhances memory for the salient aspects of a scene but not all of the details of a scene. Possible causes of this difference, and how they may lead to an understanding of individual differences in change detection, are considered.

