Athlete Behavior Recognition Technology Based on Siamese-RPN Tracker Model

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Changhui Gao

With the rapid development of deep learning algorithms, they have gradually been applied to UAV (unmanned aerial vehicle) piloting, visual recognition, target tracking, behavior recognition, and other fields. In sports, many researchers have proposed target tracking and recognition technology based on deep learning to capture athletes' trajectories and behavior. Building on target tracking, this paper combines a region proposal network (RPN) with a Siamese (twin) network to study the tracking and recognition of athlete behavior. An adaptively updated network then tracks the athlete's behavior target, and a simulation model of behavior recognition is established. Unlike the traditional Siamese network algorithm, this algorithm can accurately take the athlete's behavior as the target candidate box during model training and reduce the interference of the environment and other factors on recognition. The results show that the Siamese-RPN algorithm reduces interference from the background and environment when tracking an athlete's target behavior trajectory. It improves the trained behavior recognition model, ignores background interference in the behavior image, and raises the accuracy and overall performance of the model. Compared with the traditional Siamese network method for sports behavior recognition, the Siamese-RPN algorithm studied in this paper can run offline and distinguish interference from the athlete's background environment. It quickly captures the feature points of the athlete's behavior as input to the tracking model, so it has excellent value for popularization and application.
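The matching step at the heart of a Siamese tracker can be sketched in a few lines: the template (target) feature map is slid over the search-region feature map, and the peak of the resulting response map marks the most likely target location. The numpy sketch below shows only this normalized cross-correlation step, not the paper's full Siamese-RPN, which adds anchor-based classification and regression branches on top of it; all names and sizes are illustrative.

```python
import numpy as np

def cross_correlate(search_feat, template_feat):
    """Slide the template feature map over the search feature map and
    return a normalized response map; the peak marks the best match.
    Inputs are (C, H, W) arrays with the template smaller than the search."""
    c, th, tw = template_feat.shape
    _, sh, sw = search_feat.shape
    t_norm = np.linalg.norm(template_feat)
    response = np.zeros((sh - th + 1, sw - tw + 1))
    for i in range(response.shape[0]):
        for j in range(response.shape[1]):
            window = search_feat[:, i:i + th, j:j + tw]
            response[i, j] = np.sum(window * template_feat) / (
                np.linalg.norm(window) * t_norm)
    return response

# Toy check: the template is an exact crop of the search region,
# so the response peak should land at the crop offset (2, 3).
rng = np.random.default_rng(0)
search = rng.random((4, 8, 8))
template = search[:, 2:5, 3:6].copy()
response = cross_correlate(search, template)
peak = tuple(int(v) for v in np.unravel_index(np.argmax(response), response.shape))
print(peak)  # (2, 3)
```

A full tracker would compute both feature maps with a shared convolutional backbone and decode the peak back to image coordinates; the response map itself is what the RPN branches consume.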

Author(s):  
Qianfan Wu ◽  
Adel Boueiz ◽  
Alican Bozkurt ◽  
Arya Masoomi ◽  
Allan Wang ◽  
...  

Predicting disease status for a complex human disease from genomic data is an important, yet challenging, step in personalized medicine. Among many challenges, the so-called curse of dimensionality leads to unsatisfactory performance from many state-of-the-art machine learning algorithms. A major recent advance in machine learning is the rapid development of deep learning algorithms, which can efficiently extract meaningful features from high-dimensional, complex datasets through a stacked, hierarchical learning process. Deep learning has shown breakthrough performance in several areas, including image recognition, natural language processing, and speech recognition. However, its performance in predicting disease status from genomic datasets is still not well studied. In this article, we review the four relevant articles identified through a thorough literature search. All four used auto-encoders to project high-dimensional genomic data into a low-dimensional space and then applied state-of-the-art machine learning algorithms to predict disease status from the low-dimensional representations. This deep learning approach outperformed existing prediction approaches, such as prediction based on probe-wise screening or on principal component analysis. The limitations of the current deep learning approach and possible improvements are also discussed.
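The pipeline shared by the four reviewed articles, compressing high-dimensional genomic data with an auto-encoder and then classifying in the low-dimensional code space, can be illustrated end to end on synthetic data. The sketch below is a minimal stand-in, not any of the reviewed models: it uses a single-layer linear autoencoder trained by plain gradient descent and a nearest-centroid classifier, and all sizes, learning rates, and the synthetic data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "genomic" data: 200 samples, 50 probes. Two latent factors
# drive the measurements, and the disease label shifts the first factor.
n, d, k = 200, 50, 2
y = rng.integers(0, 2, n)
latent = rng.normal(size=(n, k))
latent[:, 0] += 3.0 * (2 * y - 1)         # cases and controls separate in latent space
mixing = rng.normal(size=(k, d)) / np.sqrt(d)
X = latent @ mixing + 0.1 * rng.normal(size=(n, d))

# Linear autoencoder (50 -> 2 -> 50) trained on the reconstruction error,
# a stand-in for the deeper auto-encoders used in the reviewed papers.
W_enc = rng.normal(scale=0.1, size=(d, k))
W_dec = rng.normal(scale=0.1, size=(k, d))
lr = 0.05
for _ in range(2000):
    code = X @ W_enc
    err = code @ W_dec - X                # reconstruction residual
    grad_dec = code.T @ err / n
    grad_enc = X.T @ (err @ W_dec.T) / n
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# Predict disease status in the 2-D code space with a nearest-centroid rule.
codes = X @ W_enc
c0 = codes[y == 0].mean(axis=0)
c1 = codes[y == 1].mean(axis=0)
pred = (np.linalg.norm(codes - c1, axis=1)
        < np.linalg.norm(codes - c0, axis=1)).astype(int)
print(f"accuracy in code space: {(pred == y).mean():.2f}")
```

The reviewed papers replace the two pieces with stronger components (nonlinear multi-layer auto-encoders and classifiers such as SVMs), but the data flow is the same: fit the encoder on the high-dimensional matrix, then train the predictor only on the codes.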


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Tao Hong ◽  
Qiye Yang ◽  
Peng Wang ◽  
Jinmeng Zhang ◽  
Wenbo Sun ◽  
...  

Unmanned aerial vehicles (UAVs) have increased the convenience of urban life. Reflecting the recent rapid development of drone technology, UAVs are widely used with fifth-generation (5G) cellular networks and the Internet of Things (IoT), for example in aerial photography, express delivery, and traffic supervision. However, because drones fly at low altitude and low speed and present small visual targets, they are difficult to monitor and detect, resulting in frequent intrusions and collisions. Traditional methods of monitoring drone safety are mostly expensive and difficult to implement. In smart city construction, large numbers of smart IoT cameras connected to 5G networks are installed throughout the city. Captured drone images are transmitted to the cloud via the high-speed, low-latency 5G network, and machine learning algorithms are used for target detection and tracking. In this study, we propose a method for real-time tracking of drone targets that uses the existing monitoring network to obtain drone images in real time and applies deep learning so that drones in urban environments can be guided. To achieve real-time tracking of UAV targets, we adopted the tracking-by-detection paradigm, with a modified YOLOv3 (You Only Look Once v3) network as the target detector and Deep SORT as the tracking association algorithm. We built a drone tracking dataset containing four types of drones and 2800 pictures in different environments. The tracking model we trained achieved 94.4% accuracy in real-time UAV target tracking at a speed of 54 FPS. These results demonstrate that our model achieves high-precision, real-time UAV target tracking at reduced cost.
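Tracking-by-detection hinges on associating each frame's detections with the existing tracks. Deep SORT does this with a Kalman-filter motion model, deep appearance features, and Hungarian assignment; the sketch below reduces the association step to greedy IoU matching so the data flow is visible. Box coordinates and the threshold are illustrative, not values from the paper.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, iou_threshold=0.3):
    """Greedy IoU matching of existing tracks to new detections,
    a simplified stand-in for Deep SORT's assignment step."""
    pairs = []
    used = set()
    for t, tbox in enumerate(tracks):
        best, best_iou = None, iou_threshold
        for d, dbox in enumerate(detections):
            if d in used:
                continue
            score = iou(tbox, dbox)
            if score > best_iou:
                best, best_iou = d, score
        if best is not None:
            pairs.append((t, best))
            used.add(best)
    return pairs

# Two tracked drones; the new frame's detections arrive in a different order.
tracks = [(10, 10, 50, 50), (100, 100, 140, 140)]
detections = [(105, 102, 143, 141), (12, 11, 52, 49)]
print(associate(tracks, detections))  # [(0, 1), (1, 0)]
```

Unmatched detections would spawn new tracks and unmatched tracks would age out; Deep SORT additionally gates matches by appearance-feature distance so identity survives occlusions.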




Kybernetes ◽  
2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Shubham Bharti ◽  
Arun Kumar Yadav ◽  
Mohit Kumar ◽  
Divakar Yadav

Purpose — With the rise of social media platforms, an increasing number of cyberbullying cases have emerged. Every day, many people, especially teenagers, become victims of cyber abuse. Cyberbullying can have a long-lasting impact on the victim's mind: the victim may develop social anxiety, engage in self-harm, fall into depression or, in extreme cases, be driven to suicide. This paper aims to evaluate various techniques for automatically detecting cyberbullying in tweets using machine learning and deep learning approaches.
Design/methodology/approach — The authors first applied classical machine learning algorithms and, after analyzing the experimental results, found that deep learning algorithms perform better for the task. Word-embedding techniques were used for word representation in model training. The pre-trained embedding GloVe was used to generate word embeddings; different versions of GloVe were used and their performance compared. A bi-directional long short-term memory network (BLSTM) was used for classification.
Findings — The dataset contains 35,787 labeled tweets. The GloVe840 word embedding together with the BLSTM gave the best results on the dataset, with accuracy, precision and F1 measure of 92.60%, 96.60% and 94.20%, respectively.
Research limitations/implications — If a word is not present in the pre-trained embedding (GloVe), it may be given a random vector representation that does not correspond to its actual meaning. An out-of-vocabulary (OOV) word may therefore not be represented suitably, which can affect the detection of cyberbullying tweets. The problem may be rectified through the use of character-level embeddings of words.
Practical implications — The findings may inspire entrepreneurs to leverage the proposed approach to build deployable systems for detecting cyberbullying in contexts such as the workplace or school, and may draw the attention of lawmakers and policymakers to create systemic tools to tackle the ills of cyberbullying.
Social implications — Effective detection of cyberbullying may save victims from various psychological problems, which, in turn, may lead society to a healthier and more productive life.
Originality/value — The proposed method produces results that outperform state-of-the-art approaches for detecting cyberbullying in tweets. It uses a large dataset created by intelligently merging two publicly available datasets. Further, a comprehensive evaluation of the proposed methodology is presented.
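The OOV limitation can be made concrete. Below is a minimal sketch, with a toy embedding table standing in for the real GloVe file: unknown words fall back to a deterministic average of hashed character n-gram vectors (the idea behind character-level and fastText-style embeddings), so the same OOV word always maps to the same vector instead of a random one. Dimensions, the hash scheme, and the vocabulary are all invented for illustration.

```python
import numpy as np

DIM = 8  # toy dimension; GloVe releases use 50 to 300

# A toy pre-trained table standing in for GloVe; real use would parse
# a file such as glove.840B.300d.txt line by line.
rng = np.random.default_rng(1)
pretrained = {w: rng.normal(size=DIM) for w in ["you", "are", "stupid"]}

def char_ngram_vector(word, n=3):
    """Deterministic fallback for out-of-vocabulary words: average hashed
    character n-gram vectors, so a creative misspelling still shares
    sub-word vectors with the word it distorts."""
    padded = f"<{word}>"                  # boundary markers, fastText-style
    grams = [padded[i:i + n] for i in range(len(padded) - n + 1)]
    vecs = []
    for g in grams:
        seed = sum(ord(c) * 31 ** k for k, c in enumerate(g)) % (2 ** 32)
        vecs.append(np.random.default_rng(seed).normal(size=DIM))
    return np.mean(vecs, axis=0)

def embed(word):
    """Known words come from the pre-trained table; OOV words get the
    character-level fallback instead of a fresh random vector."""
    return pretrained.get(word, char_ngram_vector(word))

v1, v2 = embed("stupiiid"), embed("stupiiid")
print(np.allclose(v1, v2))  # True: same OOV word, same vector
```

Feeding such vectors into the BLSTM keeps OOV tokens informative rather than noise, which is exactly the gap the limitations section identifies.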


Author(s):  
Jia Lu ◽  
Wei Qi Yan

As the cost of security monitoring equipment such as cameras decreases, video surveillance has been widely applied to public security and safety in places such as banks, transportation hubs, and shopping malls, allowing police to monitor abnormal events. Through deep learning, the authors achieve high performance in human behavior detection and recognition via model training and testing. This chapter uses the public Weizmann and KTH datasets to train deep learning models. Four deep learning models were investigated for human behavior recognition. Results show that the YOLOv3 model performed best, achieving an mAP of 96.29% on the Weizmann dataset and 84.58% on the KTH dataset. The chapter conducts human behavior recognition using deep learning and evaluates the outcomes of the different approaches on these datasets.
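mAP figures like those above are means of per-class average precision. The sketch below shows one common AP formulation (precision averaged at each true-positive rank over the number of ground-truth instances) with made-up detection results; a real evaluation first matches detections to ground-truth boxes by IoU and sorts them by confidence.

```python
def average_precision(ranked_hits, num_gt):
    """Average precision for one class. ranked_hits is the detector's
    output sorted by confidence: True where a detection matched a
    ground-truth box (IoU above threshold), False for a false positive."""
    tp = 0
    precisions = []
    for rank, hit in enumerate(ranked_hits, start=1):
        if hit:
            tp += 1
            precisions.append(tp / rank)  # precision at each recall step
    if not precisions:
        return 0.0
    return sum(precisions) / num_gt

# 4 ground-truth instances; the detector finds 3 of them,
# with one false positive ranked second.
ap = average_precision([True, False, True, True], num_gt=4)
print(round(ap, 3))  # 0.604
```

mAP is then simply the mean of this quantity over all behavior classes, which is why a single hard class (as on KTH here) can pull the overall score down.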


2022 ◽  
Author(s):  
Nils Koerber

In recent years, the amount of data generated by imaging techniques has grown rapidly, along with increasing computational power and the development of deep learning algorithms. To address the need for powerful automated image analysis tools for a broad range of applications in the biomedical sciences, we present the Microscopic Image Analyzer (MIA). MIA combines a graphical user interface that obviates the need for programming skills with state-of-the-art deep learning algorithms for segmentation, object detection, and classification. It runs as a standalone, platform-independent application and is compatible with commonly used open-source software packages. The software provides a unified interface for easy image labeling, model training, and inference. Furthermore, the software was evaluated in a public competition and performed among the top three for all tested datasets. The source code is available at https://github.com/MIAnalyzer/MIA.


2020 ◽  
Vol 39 (6) ◽  
pp. 8927-8935
Author(s):  
Bing Zheng ◽  
Dawei Yun ◽  
Yan Liang

Under the impact of COVID-19, research on behavior recognition is highly needed. In this paper, we combine a self-adaptive encoder with a recurrent neural network to study behavior pattern recognition. At present, most research on human behavior recognition focuses on video data, but because of the complexity of video image data, it can easily violate personal privacy. With the rapid development of Internet of Things technology, sensor-based behavior recognition has attracted the attention of a large number of experts and scholars. Researchers have tried many machine learning methods, such as random forests, support vector machines and other shallow learning methods, which perform well in the laboratory but remain far from practical application. In this paper, a recurrent neural network based on long short-term memory (LSTM) is proposed to recognize behavior patterns and improve the accuracy of human activity recognition.
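The LSTM recursion such a model relies on can be written out directly. Below is a minimal numpy sketch of a single LSTM cell applied to a toy sensor sequence, with random weights and illustrative sizes; it is a stand-in for a trained network, not the paper's model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step: input (i), forget (f) and output (o) gates plus
    a candidate value (g) control what the memory cell c keeps."""
    z = W @ x + U @ h + b                 # all four gates in one matrix product
    H = len(h)
    i = sigmoid(z[0:H])
    f = sigmoid(z[H:2 * H])
    o = sigmoid(z[2 * H:3 * H])
    g = np.tanh(z[3 * H:4 * H])
    c_new = f * c + i * g                 # forget old memory, write new
    h_new = o * np.tanh(c_new)            # expose a gated view of the cell
    return h_new, c_new

# Run a toy sensor sequence (e.g. accelerometer frames) through the cell.
rng = np.random.default_rng(0)
D, H = 3, 5                               # input and hidden sizes
W = rng.normal(scale=0.1, size=(4 * H, D))
U = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(10):
    x = rng.normal(size=D)                # one frame of sensor readings
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (5,)
```

In a full model the final hidden state h (or the whole state sequence) feeds a softmax layer over activity classes, and the weights are learned by backpropagation through time rather than drawn at random.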


2020 ◽  
Vol 2 ◽  
pp. 58-61 ◽  
Author(s):  
Syed Junaid ◽  
Asad Saeed ◽  
Zeili Yang ◽  
Thomas Micic ◽  
Rajesh Botchu

The advances in deep learning algorithms, exponential growth in computing power, and unprecedented availability of digital patient data have led to a wave of interest and investment in artificial intelligence (AI) in health care. No radiology conference is complete without a substantial dedication to AI. Many radiology departments are keen to get involved but are unsure of where and how to begin. This short article provides a simple road map to help departments engage with the technology, demystifies key concepts, and aims to pique interest in the field. We break the journey down into seven steps: problem, team, data, kit, neural network, validation, and governance.

