SIFT Feature-Based Video Camera Boundary Detection Algorithm

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Lingqiang Kong

To address the low accuracy of shot boundary detection in film and television, a new SIFT feature-based shot boundary detection algorithm is proposed. First, multiple frames are read in time sequence and converted to grayscale. Each frame is further divided into blocks, and the average gradient of each block is computed to construct a dynamic texture for the frame. The correlation between the dynamic textures of adjacent frames and the degree of SIFT feature matching between the two frames are compared, and pre-detection results are obtained from the matching results. Next, the dynamic texture and SIFT features are compared against a later frame, at a step size below the refresh frequency perceivable by the human eye, to obtain the final result. Experiments on multiple groups of different types of film and television data show that the algorithm achieves high recall and precision, and can detect gradual transitions with complex structure. A shot boundary detection algorithm based on fuzzy clustering is also realized; it detects both abrupt and gradual transitions simultaneously without setting a threshold. It effectively reduces factors that interfere with shot detection, such as flashes in movies, TV programs, and advertisements, and reduces the influence of camera movement on the detected boundaries. However, owing to the complexity of film and television content, some missed and false detections remain in this algorithm and require further study.
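The block-wise "dynamic texture" and its adjacent-frame correlation can be sketched in a few lines of NumPy; the function names and the 8-pixel block size below are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def dynamic_texture(frame, block=8):
    """Block-wise mean gradient magnitude of a grayscale frame."""
    gy, gx = np.gradient(frame.astype(float))
    mag = np.hypot(gx, gy)
    h, w = mag.shape
    h, w = h - h % block, w - w % block  # crop to a multiple of the block size
    mag = mag[:h, :w].reshape(h // block, block, w // block, block)
    return mag.mean(axis=(1, 3))  # one average gradient per block

def texture_correlation(t1, t2):
    """Normalized correlation between the textures of adjacent frames."""
    a, b = t1.ravel() - t1.mean(), t2.ravel() - t2.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 1.0

# Identical frames correlate perfectly; a cut (a very different frame) does not.
rng = np.random.default_rng(0)
f1 = rng.integers(0, 256, (64, 64))
f2 = rng.integers(0, 256, (64, 64))
same = texture_correlation(dynamic_texture(f1), dynamic_texture(f1))
cut = texture_correlation(dynamic_texture(f1), dynamic_texture(f2))
```

A low correlation flags a candidate boundary, which the full algorithm then confirms by SIFT feature matching.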

Symmetry ◽  
2019 ◽  
Vol 11 (4) ◽  
pp. 537 ◽  
Author(s):  
Huafeng Wu ◽  
Qingshun Meng ◽  
Jiangfeng Xian ◽  
Xiaojun Mei ◽  
Christophe Claramunt ◽  
...  

Wireless Sensor Networks (WSNs) have been extensively applied in ecological environment monitoring. Typically, event boundary detection is an effective method to determine the scope of an event area in large-scale environment monitoring. This paper proposes a novel lightweight Entropy-based Event Boundary Detection algorithm (EEBD) in WSNs. We first develop a statistical model using information entropy to estimate the probability that a sensor is a boundary sensor. The EEBD is executed independently on each wireless sensor to judge whether it is a boundary node by comparing the entropy value against a threshold that depends on the boundary width. Simulation results demonstrate that the EEBD is computationally lightweight and offers good detection accuracy of boundary nodes at both low and high network node densities. This study also includes experiments verifying that the EEBD is applicable in a real ocean environmental monitoring scenario using WSNs.
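The per-node entropy test can be sketched as follows; the neighbor readings, threshold value, and function names are illustrative assumptions, not the EEBD's exact formulation:

```python
import math

def shannon_entropy(readings):
    """Shannon entropy (bits) of a node's binarized neighborhood readings."""
    n = len(readings)
    counts = {}
    for r in readings:
        counts[r] = counts.get(r, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def is_boundary(neighbor_events, threshold=0.8):
    """A node whose neighborhood mixes event and non-event readings has
    high entropy, suggesting it sits on the event boundary."""
    return shannon_entropy(neighbor_events) >= threshold

# Deep inside the event region all neighbors report 1 -> entropy 0, not a boundary.
inside = is_boundary([1, 1, 1, 1, 1, 1])
# On the boundary roughly half report the event -> entropy near 1 bit.
edge = is_boundary([1, 1, 1, 0, 0, 0])
```

Each sensor runs only this local computation, which is what keeps the scheme lightweight.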


2018 ◽  
Vol 29 (1) ◽  
pp. 364-377
Author(s):  
B. Sirisha ◽  
B. Sandhya ◽  
Chandra Sekhar Paidimarry ◽  
A.S. Chandrasekhara Sastry

Abstract
Conventional integer-order differential operators suffer from poor feature detection accuracy and noise immunity, which leads to image misalignment. A new affine-based fractional-order feature detection algorithm is proposed to detect syntactic and semantic structures from the backscattered signal of a TerraSAR-X band stripmap image. To further improve alignment accuracy, we propose adapting a view synthesis approach into the standard pipeline of feature-based image alignment. Experiments were performed to test the effectiveness and robustness of the view synthesis approach using a fractional-order feature detector. The evaluation results showed that the proposed method achieves high-precision, robust alignment of look-angle-varied TerraSAR-X images. The affine features detected using the fractional-order operator are more stable and have a strong capacity to suppress speckle noise.
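A fractional-order difference generalizes the integer-order operators criticized above. A minimal 1-D Grünwald-Letnikov sketch illustrates how such a mask is built (the paper's detector is 2-D and affine-adapted, so this shows only the coefficient construction):

```python
def gl_coefficients(v, n):
    """Grünwald-Letnikov coefficients (-1)^k * C(v, k), computed via the
    recurrence c_k = c_{k-1} * (k - 1 - v) / k."""
    c = [1.0]
    for k in range(1, n):
        c.append(c[-1] * (k - 1 - v) / k)
    return c

def fractional_diff(signal, v, taps=5):
    """1-D fractional-order difference of order v: convolve the signal with
    the truncated Grünwald-Letnikov mask (valid positions only)."""
    c = gl_coefficients(v, taps)
    out = []
    for i in range(taps - 1, len(signal)):
        out.append(sum(ck * signal[i - k] for k, ck in enumerate(c)))
    return out

ramp = list(range(10))           # a linear ramp
d1 = fractional_diff(ramp, 1.0)  # order 1 reduces to the backward difference
d05 = fractional_diff(ramp, 0.5) # order 0.5 gives a long-memory response
```

For v = 1 the mask collapses to [1, -1, 0, ...], i.e., the ordinary backward difference, while non-integer orders weight a longer history of samples, which is the source of the improved noise behavior.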


2018 ◽  
Vol 2018 ◽  
pp. 1-8
Author(s):  
Jia Li ◽  
Pei Wu ◽  
Feilong Kang ◽  
Lina Zhang ◽  
Chuanzhong Xuan

The study of the self-protective behaviors of dairy cows suffering from dipteran insect infestation is important for evaluating the breeding environment and for cows' selective breeding. Current practice measures dairy cows' self-protective behaviors mostly by human observation, which is not only tedious but also inefficient and inaccurate. In this paper, we develop an automatic monitoring system based on video analysis. First, an improved optical flow tracking algorithm based on Shi-Tomasi corner detection is presented. By combining the morphological features of head, leg, and tail movements, this method effectively reduces the number of Shi-Tomasi points, eliminates interference from background movement, reduces the computational complexity of the algorithm, and improves detection accuracy. The detection algorithm then counts tail, leg, and head movements using an artificial neural network. The detection accuracy for tail and head movements reached [0.88, 1] and the recall rate [0.87, 1]. The proposed method provides objective measurements that can help researchers analyze dairy cows' self-protective behaviors and living environment more effectively in dairy cow breeding and management.
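The Shi-Tomasi score that seeds such a tracker is the minimum eigenvalue of the local structure tensor; a small NumPy sketch (window size and test image are illustrative, and the optical-flow stage is omitted):

```python
import numpy as np

def box_sum(a, win):
    """Sum of each win x win window (valid positions only)."""
    h, w = a.shape
    out = np.zeros((h - win + 1, w - win + 1))
    for dy in range(win):
        for dx in range(win):
            out += a[dy:dy + h - win + 1, dx:dx + w - win + 1]
    return out

def shi_tomasi_response(img, win=3):
    """Minimum eigenvalue of the local structure tensor: the Shi-Tomasi
    corner score used to pick points worth tracking with optical flow."""
    iy, ix = np.gradient(img.astype(float))
    sxx = box_sum(ix * ix, win)
    syy = box_sum(iy * iy, win)
    sxy = box_sum(ix * iy, win)
    trace = sxx + syy
    return (trace - np.sqrt((sxx - syy) ** 2 + 4 * sxy ** 2)) / 2

# A bright square: its corners score high, its edge midpoints score ~0,
# which is why the score suppresses straight edges and keeps true corners.
img = np.zeros((16, 16))
img[4:12, 4:12] = 255.0
r = shi_tomasi_response(img)
```

In practice OpenCV's `goodFeaturesToTrack` computes this same score; restricting it to head, leg, and tail regions is what cuts the point count, as the abstract describes.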


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4870
Author(s):  
Liyun Xiao ◽  
Peng Zhou ◽  
Ke Xu ◽  
Xiaofang Zhao

To address the low detection rate caused by closely spaced, multi-oriented text words in practical applications, and the need to improve detection speed, this paper proposes a multi-directional text detection algorithm based on an improved YOLOv3 and applies it to natural-scene text detection. To detect text in multiple directions, a box definition based on sliding vertices is introduced. Then, a new rotated-box loss function, MD-Closs, based on CIOU is proposed to improve detection accuracy. In addition, a step-by-step NMS method is used to further reduce the amount of computation. Experimental results on the ICDAR 2015 dataset show an accuracy of 86.2%, a recall of 81.9%, and a speed of 21.3 fps, demonstrating that the proposed algorithm detects text well in natural scenes.
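Greedy NMS of the kind applied step by step here can be sketched for the simpler axis-aligned case (rotated-box IoU, which the paper's sliding-vertex boxes require, needs polygon intersection and is omitted):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the best-scoring box and drop
    neighbors that overlap it beyond the IoU threshold."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)  # the second box overlaps the first and is dropped
```

A step-by-step variant reduces computation by first suppressing with a cheap axis-aligned IoU and only then computing the expensive rotated overlap for the survivors.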


Author(s):  
Dongxian Yu ◽  
Jiatao Kang ◽  
Zaihui Cao ◽  
Neha Jain

Current traffic sign detection technology struggles to detect signs correctly under the interference of various complex factors, and its robustness is weak; to address this, a traffic sign detection algorithm based on region-of-interest extraction and a double filter is designed. First, to reduce environmental interference, the input image is preprocessed to enhance the main color of each sign. Second, to improve region extraction, a region-of-interest (ROI) detector based on Maximally Stable Extremal Regions (MSER) and the Wave Equation (WE) is defined, and candidate regions are selected by the ROI detector. Then, an effective Histogram of Oriented Gradients (HOG) descriptor is introduced as the detection feature for traffic signs, and a Support Vector Machine (SVM) classifies each candidate as a traffic sign or background. Finally, a context-aware filter and a traffic-light filter are used to further reject false traffic signs and improve detection accuracy. Tests on three kinds of traffic signs in the GTSDB database (indicative, prohibitory, and danger signs) show that the proposed algorithm has higher detection accuracy and robustness than current traffic sign recognition technology.
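The HOG descriptor is built from per-cell orientation histograms of gradients; a minimal single-cell sketch (bin count and test patch are illustrative, and the block normalization and SVM stages are omitted):

```python
import math

def hog_cell(patch, bins=9):
    """Orientation histogram of gradients for one cell: the building block
    of a HOG descriptor (unsigned gradients, 0-180 degrees)."""
    n = len(patch)
    hist = [0.0] * bins
    for y in range(1, n - 1):
        for x in range(1, n - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang / 180.0 * bins) % bins] += mag  # vote by magnitude
    return hist

# A vertical edge produces purely horizontal gradients, so all the
# magnitude votes land in the first (0-degree) orientation bin.
patch = [[0, 0, 9, 9]] * 4
h = hog_cell(patch)
```

Concatenating normalized cell histograms over a detection window yields the feature vector that the SVM then separates into sign versus background.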


2021 ◽  
Vol 13 (10) ◽  
pp. 1909
Author(s):  
Jiahuan Jiang ◽  
Xiongjun Fu ◽  
Rui Qin ◽  
Xiaoyan Wang ◽  
Zhifeng Ma

Synthetic Aperture Radar (SAR) has become one of the important technical means of marine monitoring in the field of remote sensing due to its all-day, all-weather capability. Ship monitoring in national territorial waters supports maritime law enforcement, maritime traffic control, and national maritime security, so ship detection has long been a research hot spot and focus. As detection methods evolved from traditional approaches to deep learning, most research has relied on ever-growing Graphics Processing Unit (GPU) computing power to propose increasingly complex and computationally intensive strategies, while transplanting optical-image detectors without accounting for the low signal-to-noise ratio, low resolution, and single-channel nature imposed by the SAR imaging principle. Pursuing detection accuracy while ignoring detection speed and practical deployment, almost all such algorithms depend on powerful clustered desktop GPUs and cannot be deployed on the front line of marine monitoring to cope with changing realities. To address these issues, this paper proposes a multi-channel fusion SAR image processing method that makes full use of the image information and the network's feature-extraction capacity; it is built on the latest You Only Look Once version 4 (YOLO-V4) deep learning framework for the model architecture and training. The YOLO-V4-light network was tailored for real-time deployment, significantly reducing model size, detection time, parameter count, and memory consumption, and the network was refined for three-channel images to compensate for the accuracy lost in light-weighting.
The test experiments were completed entirely on a portable computer and achieved an Average Precision (AP) of 90.37% on the SAR Ship Detection Dataset (SSDD), keeping the model simple while outperforming most existing methods. The proposed YOLO-V4-light ship detection algorithm has strong practical value for maritime safety monitoring and emergency rescue.
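One plausible reading of the multi-channel fusion is to stack the raw single SAR channel with derived views so the three-channel network backbone has complementary inputs; the specific channel choices below (a crudely despeckled copy and a gradient-magnitude map) are illustrative assumptions, not necessarily the paper's exact fusion:

```python
import numpy as np

def to_three_channel(sar):
    """Stack the raw single-channel SAR image with a despeckled copy and a
    gradient-magnitude map, giving a detector three complementary views."""
    sar = sar.astype(float)
    # 3x3 mean filter as a crude despeckler
    pad = np.pad(sar, 1, mode="edge")
    smooth = sum(pad[dy:dy + sar.shape[0], dx:dx + sar.shape[1]]
                 for dy in range(3) for dx in range(3)) / 9.0
    gy, gx = np.gradient(sar)
    edges = np.hypot(gx, gy)  # highlights ship-sea boundaries
    return np.stack([sar, smooth, edges], axis=-1)

img = np.random.default_rng(1).random((32, 32))
fused = to_three_channel(img)  # shape (32, 32, 3), ready for an RGB-style backbone
```

The point of such a stack is that a network pretrained on three-channel optical imagery can consume SAR data without wasting two input channels on duplicates.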


2021 ◽  
Vol 11 (2) ◽  
pp. 851
Author(s):  
Wei-Liang Ou ◽  
Tzu-Ling Kuo ◽  
Chin-Chieh Chang ◽  
Chih-Peng Fan

In this study, for the application of visible-light wearable eye trackers, a pupil tracking methodology based on deep-learning technology is developed. By applying deep-learning object detection based on the You Only Look Once (YOLO) model, the proposed pupil tracking method can effectively estimate and predict the center of the pupil in visible-light mode. Testing the developed YOLOv3-tiny-based model, the detection accuracy is as high as 80% and the recall rate is close to 83%. In addition, the average visible-light pupil tracking errors of the proposed YOLO-based deep-learning design are smaller than 2 pixels in training mode and 5 pixels in the cross-person test, much smaller than those of a previous ellipse-fitting design without deep learning under the same visible-light conditions. Combined with the calibration process, the average gaze tracking errors of the proposed YOLOv3-tiny-based pupil tracking models are smaller than 2.9 and 3.5 degrees in the training and testing modes, respectively, and the proposed visible-light wearable gaze tracking system runs at up to 20 frames per second (FPS) on the GPU-based embedded software platform.


2021 ◽  
Vol 11 (13) ◽  
pp. 6016
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

For autonomous vehicles, it is critical to be aware of the driving environment to avoid collisions and drive safely. The recent evolution of convolutional neural networks has contributed significantly to accelerating the development of object detection techniques that enable autonomous vehicles to handle rapid changes in various driving environments. However, collisions in an autonomous driving environment can still occur due to undetected obstacles and various perception problems, particularly occlusion. Thus, we propose a robust object detection algorithm for environments in which objects are truncated or occluded, employing RGB images and light detection and ranging (LiDAR) bird's eye view (BEV) representations. This structure combines independent detection results obtained in parallel through "you only look once" networks using an RGB image and a height map converted from the BEV representation of LiDAR's point cloud data (PCD). The region proposal of an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. A performance evaluation of the proposed scheme was performed using the KITTI vision benchmark suite dataset. The results demonstrate that the detection accuracy when the PCD BEV representations are integrated is superior to that obtained when only an RGB camera is used. In addition, robustness is improved by significantly enhancing detection accuracy even when the target objects are partially occluded when viewed from the front, demonstrating that the proposed algorithm outperforms the conventional RGB-based model.
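The BEV height map used as the second detector input can be rasterized from the point cloud as follows; the grid extents and resolution are illustrative assumptions, not KITTI-specific settings:

```python
import numpy as np

def bev_height_map(points, x_range=(0, 40), y_range=(-20, 20), res=0.5):
    """Project LiDAR points (x, y, z) onto a bird's-eye-view grid, keeping
    the maximum height per cell: one common way to rasterize a point cloud."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]))
    x, y, z = x[keep], y[keep], z[keep]
    w = int((x_range[1] - x_range[0]) / res)
    h = int((y_range[1] - y_range[0]) / res)
    grid = np.zeros((h, w))
    col = ((x - x_range[0]) / res).astype(int)
    row = ((y - y_range[0]) / res).astype(int)
    np.maximum.at(grid, (row, col), z)  # unbuffered per-cell maximum
    return grid

pts = np.array([[10.0, 0.0, 1.5], [10.1, 0.1, 0.4], [35.0, -5.0, 2.0]])
bev = bev_height_map(pts)  # an 80 x 80 single-channel image a YOLO head can consume
```

Because the BEV image is a regular grid, the same 2-D detector architecture used on the RGB stream can run on it unchanged, and the two sets of boxes can then be merged by NMS as described.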


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1081
Author(s):  
Tamon Miyake ◽  
Shintaro Yamamoto ◽  
Satoshi Hosono ◽  
Satoshi Funabashi ◽  
Zhengxue Cheng ◽  
...  

Gait phase detection, which detects foot-contact and foot-off states during walking, is important for various applications, such as synchronous robotic assistance and health monitoring. Gait phase detection systems have been proposed with various wearable devices, sensing inertial, electromyography, or force myography information. In this paper, we present a novel gait phase detection system with static standing-based calibration using muscle deformation information. The gait phase detection algorithm can be calibrated within a short time using muscle deformation data by standing in several postures; it is not necessary to collect data while walking for calibration. A logistic regression algorithm is used as the machine learning algorithm, and the probability output is adjusted based on the angular velocity of the sensor. An experiment is performed with 10 subjects, and the detection accuracy of foot-contact and foot-off states is evaluated using video data for each subject. The median accuracy is approximately 90% during walking based on calibration for 60 s, which shows the feasibility of the static standing-based calibration method using muscle deformation information for foot-contact and foot-off state detection.
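The core of the detector, a logistic regression whose probability output is adjusted by the sensor's angular velocity, can be sketched as below; the weights, feature values, and gating rule are hypothetical illustrations, not the paper's fitted model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def detect_contact(features, weights, bias, gyro, gyro_limit=2.0):
    """Logistic-regression foot-contact probability from muscle-deformation
    features, damped when the measured angular velocity is implausibly high
    for a foot that is on the ground."""
    p = sigmoid(sum(w * f for w, f in zip(weights, features)) + bias)
    if abs(gyro) > gyro_limit:  # swing-like rotation: discount contact
        p *= 0.5
    return p

w, b = [1.2, -0.8], -0.1      # hypothetical coefficients from calibration
p_still = detect_contact([2.0, 0.5], w, b, gyro=0.3)  # confident contact
p_swing = detect_contact([2.0, 0.5], w, b, gyro=4.0)  # same features, damped
```

Calibrating only the linear coefficients from a few static standing postures is what lets the system skip walking-data collection, as the abstract emphasizes.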


2016 ◽  
Vol 23 (4) ◽  
pp. 579-592 ◽  
Author(s):  
Jaromir Przybyło ◽  
Eliasz Kańtoch ◽  
Mirosław Jabłoński ◽  
Piotr Augustyniak

Abstract
Videoplethysmography is currently recognized as a promising noninvasive heart rate measurement method, advantageous for ubiquitous monitoring of humans in natural living conditions. Although the method is considered for application in several areas including telemedicine, sports, and assisted living, its dependence on lighting conditions and camera performance is still not investigated enough. In this paper we report on research into various image acquisition aspects including the lighting spectrum, frame rate, and compression. In the experimental part, we recorded five video sequences in various lighting conditions (fluorescent artificial light, dim daylight, infrared light, incandescent light bulb) using a programmable frame rate camera, with a pulse oximeter as the reference. For video sequence-based heart rate measurement we implemented a pulse detection algorithm based on the power spectral density, estimated using Welch's technique. The results showed that lighting conditions and selected video camera settings, including compression and the sampling frequency, influence heart rate detection accuracy. The average heart rate error varies from 0.35 beats per minute (bpm) for fluorescent light to 6.6 bpm for dim daylight.
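Welch's PSD estimate averages periodograms of overlapping windowed segments; a plain-NumPy sketch of the pulse-rate estimation (segment length and heart-rate band are illustrative choices; `scipy.signal.welch` provides a production implementation):

```python
import numpy as np

def welch_psd(x, fs, seg_len=256):
    """Averaged periodogram over 50%-overlapping Hann-windowed segments:
    the core of Welch's method."""
    win = np.hanning(seg_len)
    step = seg_len // 2
    psds = []
    for start in range(0, len(x) - seg_len + 1, step):
        seg = x[start:start + seg_len]
        seg = (seg - np.mean(seg)) * win      # detrend, then window
        psds.append(np.abs(np.fft.rfft(seg)) ** 2)
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    return freqs, np.mean(psds, axis=0)

def heart_rate_bpm(signal, fs):
    """Peak of the PSD inside a plausible heart-rate band (40-180 bpm)."""
    freqs, psd = welch_psd(signal, fs)
    band = (freqs >= 40 / 60) & (freqs <= 180 / 60)
    return 60.0 * freqs[band][np.argmax(psd[band])]

fs = 30.0                           # camera frame rate
t = np.arange(0, 30, 1 / fs)        # 30 s of video
ppg = np.sin(2 * np.pi * 1.2 * t)   # synthetic 1.2 Hz pulse, i.e., 72 bpm
hr = heart_rate_bpm(ppg, fs)        # recovers ~72 bpm up to bin resolution
```

Averaging segments trades frequency resolution for variance reduction, which is exactly the trade-off that matters for noisy skin-color signals at low frame rates.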

