Recognizing human facial expressions from long image sequences using optical flow

1996 ◽  
Vol 18 (6) ◽  
pp. 636-642 ◽  
Author(s):  
Y. Yacoob ◽  
L.S. Davis
Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3722
Author(s):  
Byeongkeun Kang ◽  
Yeejin Lee

Motion in videos refers to the pattern of apparent movement of objects, surfaces, and edges across image sequences, caused by the relative motion between a camera and a scene. Motion, like scene appearance, is an essential feature for estimating a driver’s visual attention allocation in computer vision. However, while driver attention prediction models based on scene appearance have been well studied, the role of motion as a crucial factor in attention estimation has not been thoroughly explored in the literature. Therefore, in this work, we investigate the usefulness of motion information for estimating a driver’s visual attention. To analyze its effectiveness, we develop a deep neural network framework that predicts attention locations and attention levels from optical flow maps, which represent the movement of content in videos. We validate the proposed motion-based prediction model by comparing it against current state-of-the-art prediction models that use RGB frames. Experimental results on a real-world dataset confirm our hypothesis that motion contributes to prediction accuracy, and that there is room for further accuracy improvement using motion features.
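The optical flow maps the abstract relies on can be estimated in many ways; the paper does not specify one here, so the following is a minimal sketch of the classic Lucas-Kanade least-squares estimator in plain NumPy. The window size and eigenvalue floor are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def lucas_kanade_flow(prev, curr, win=5, min_eig=1e-2):
    """Dense Lucas-Kanade optical flow between two grayscale frames.

    Illustrative sketch only: `win` and `min_eig` are assumed
    parameters. Returns an (H, W, 2) array of per-pixel (u, v)
    displacements, where u is horizontal and v is vertical motion.
    """
    prev = prev.astype(np.float64)
    curr = curr.astype(np.float64)
    # Spatial gradients of the first frame and the temporal gradient.
    Iy, Ix = np.gradient(prev)
    It = curr - prev
    half = win // 2
    h, w = prev.shape
    flow = np.zeros((h, w, 2))
    for r in range(half, h - half):
        for c in range(half, w - half):
            ix = Ix[r - half:r + half + 1, c - half:c + half + 1].ravel()
            iy = Iy[r - half:r + half + 1, c - half:c + half + 1].ravel()
            it = It[r - half:r + half + 1, c - half:c + half + 1].ravel()
            A = np.stack([ix, iy], axis=1)  # (win*win, 2) gradient matrix
            ATA = A.T @ A                   # 2x2 structure tensor
            # Solve only where the window is well textured (avoids the
            # aperture problem): smallest eigenvalue above a floor.
            if np.linalg.eigvalsh(ATA)[0] > min_eig:
                flow[r, c] = -np.linalg.solve(ATA, A.T @ it)
    return flow
```

The resulting (u, v) maps could then be stacked as input channels to an attention network in place of RGB frames.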


IJARCCE ◽  
2015 ◽  
pp. 468-473
Author(s):  
Kazi Md. Shahiduzzaman ◽  
Khan Mamun Reza ◽  
Nusrat Tazin

2013 ◽  
Vol 1 (1) ◽  
pp. 14-25 ◽  
Author(s):  
Tsuyoshi Miyazaki ◽  
Toyoshiro Nakashima ◽  
Naohiro Ishii

The authors describe an improved method for detecting distinctive mouth shapes in Japanese utterance image sequences. Their previous method uses template matching. Two types of mouth shapes are formed when a Japanese phone is pronounced: one at the beginning of the utterance (the beginning mouth shape, BeMS) and the other at the end (the ending mouth shape, EMS). The authors’ previous method could detect mouth shapes, but it misdetected some shapes because the time period in which the BeMS was formed was short. Therefore, they predicted that a high-speed camera would be able to capture the BeMS with higher accuracy. Experiments showed that the BeMS could be captured; however, the authors faced another problem. Deformed mouth shapes that appeared in the transition from one shape to another were detected as the BeMS. This study describes the use of optical flow to prevent the detection of such mouth shapes. The time period in which the mouth shape is deformed is detected using optical flow, and the mouth shape during this time is ignored. The authors propose an improved method of detecting the BeMS and EMS in Japanese utterance image sequences by using template matching and optical flow.
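The core idea above — ignore template-matching results while the mouth is deforming, as judged by optical flow — can be sketched as a simple per-frame filter. The function names, the shape of the inputs, and the threshold below are illustrative assumptions, not the authors' implementation.

```python
def filter_transitions(matches, flow_mags, thresh=1.0):
    """Discard mouth-shape detections made while the mouth is deforming.

    `matches` holds the per-frame template-matching result (a mouth-shape
    label, or None when no template matched); `flow_mags` holds the mean
    optical-flow magnitude over the mouth region for each frame. Frames
    whose magnitude exceeds `thresh` are treated as transitions between
    shapes, so their detections are dropped. All names and the threshold
    are hypothetical, chosen only to illustrate the filtering step.
    """
    return [m if mag <= thresh else None
            for m, mag in zip(matches, flow_mags)]
```

A transition frame between, say, the BeMS and EMS would then yield None rather than a spurious deformed-shape detection.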


Author(s):  
Kazuhiko Kawamoto ◽  
Naoya Ohnishi ◽  
Atsushi Imiya ◽  
Reinhard Klette ◽  
...  

A matching algorithm that evaluates the difference between model and computed optical flows for obstacle detection in video sequences is presented. A stabilization method based on median filtering is also presented to overcome instability in the computation of optical flow. Since optical flow is a scene-independent measurement, the proposed algorithm can be applied to various situations, whereas most existing color- and texture-based algorithms depend on specific scenes, such as roadway or indoor scenes. An experiment is conducted on three real image sequences, in which a static box or a moving toy car appears, to evaluate accuracy under varying thresholds using receiver operating characteristic (ROC) curves. For the three image sequences, the ROC curves show, in the best case, false positive/true positive fractions of 19.0%/79.6%, 11.4%/84.5%, and 19.0%/85.4%, respectively. The processing time per frame is 19.38 ms on a 2.0 GHz Pentium 4, which is faster than the video frame rate.
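The model-versus-computed-flow comparison with median-filter stabilization can be sketched as follows. The residual metric (per-pixel Euclidean distance), kernel size, and threshold are illustrative assumptions; the paper's exact matching criterion is not reproduced here.

```python
import numpy as np

def median_filter_2d(img, k=3):
    """Plain NumPy k-by-k median filter with edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty(img.shape, dtype=np.float64)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.median(padded[r:r + k, c:c + k])
    return out

def obstacle_mask(observed_flow, model_flow, k=3, thresh=1.0):
    """Flag pixels whose observed flow deviates from the model flow
    (e.g. the flow an unobstructed ground plane would produce).

    Median filtering stabilizes the noisy per-pixel residual before
    thresholding, in the spirit of the stabilization step described
    above. Sketch only: metric, kernel size, and threshold are
    hypothetical. Both flow arrays are (H, W, 2).
    """
    resid = np.linalg.norm(observed_flow - model_flow, axis=-1)
    return median_filter_2d(resid, k) > thresh
```

Sweeping `thresh` over a range would trace out the kind of ROC curve used in the evaluation above.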


2014 ◽  
Vol 538 ◽  
pp. 375-378 ◽  
Author(s):  
Xi Yuan Chen ◽  
Jing Peng Gao ◽  
Yuan Xu ◽  
Qing Hua Li

This paper proposes a new algorithm for optical flow-based monocular vision (MV)/inertial navigation system (INS) integrated navigation. In this scheme, a downward-looking camera captures image sequences, from which the velocity of the mobile robot is estimated using an optical flow algorithm, while the INS supplies the yaw variation. To evaluate the performance of the proposed method, a real indoor test was conducted. The results show that the proposed method performs well for velocity estimation. It can be applied to the autonomous navigation of mobile robots when the Global Positioning System (GPS) and code wheels are unavailable.
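The velocity-from-flow step rests on the pinhole model: for a downward-looking camera at height h with focal length f (in pixels), a ground displacement d projects to d·f/h pixels. A minimal sketch of that conversion, with illustrative parameter names and a flat-ground assumption (the paper's calibration details are not reproduced):

```python
import numpy as np

def velocity_from_flow(mean_flow_px, height_m, focal_px, fps):
    """Ground-plane velocity (m/s) from the mean optical flow of a
    downward-looking camera.

    mean_flow_px : (u, v) mean flow over the image, in pixels/frame
    height_m     : camera height above the ground, in metres
    focal_px     : focal length in pixels
    fps          : camera frame rate

    All names are hypothetical; this only illustrates the geometry.
    """
    mean_flow_px = np.asarray(mean_flow_px, dtype=np.float64)
    # pixels/frame -> metres/frame (scale by h/f) -> metres/second
    return mean_flow_px * height_m / focal_px * fps
```

The INS yaw estimate would then rotate this body-frame velocity into the navigation frame before integration.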

