Real-time Respiratory Tumor Motion Prediction based on Temporal Convolutional Neural Network (Preprint)

2021 ◽  
Author(s):  
Panchun Chang ◽  
Jun Dang ◽  
Jianrong Dai ◽  
Wenzheng Sun

BACKGROUND Dynamic tumor tracking with the radiation beam in radiation therapy requires predicting the real-time target location ahead of beam delivery, because treatment with beam or gating tracking introduces a time latency. OBJECTIVE In this study, a deep learning model based on a temporal convolutional neural network (TCN) was developed to predict the internal target location from multiple external markers. METHODS Respiratory signals from 69 treatment fractions of 21 cancer patients treated with the CyberKnife Synchrony device were used to train and test the model. The model's performance was evaluated by comparison with a long short-term memory (LSTM) model in terms of the root-mean-square error (RMSE) between real and predicted respiratory signals. In addition, the effect of the number of external markers was also investigated. RESULTS The average RMSEs (mm) for 480-ms-ahead prediction using the TCN model in the superior–inferior (SI), anterior–posterior (AP), left–right (LR) and radial directions were 0.49, 0.28, 0.25 and 0.67, respectively. CONCLUSIONS The experimental results demonstrate that the TCN respiratory prediction model can predict respiratory signals with sub-millimeter accuracy.
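The abstract does not include an implementation; at the heart of any TCN is a causal dilated 1-D convolution, which guarantees that the prediction at time t uses only past samples of the signal. A minimal NumPy sketch (function and variable names are my own, not from the paper):

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation):
    """Causal dilated 1-D convolution: the output at time t depends only
    on x[t], x[t-d], x[t-2d], ... (the signal is left-padded with zeros,
    so no future samples leak into the prediction)."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    y = np.zeros(len(x))
    for t in range(len(x)):
        # taps at times t, t-d, ..., t-(k-1)d in the padded signal
        taps = xp[t + pad - np.arange(k) * dilation]
        y[t] = np.dot(w, taps)
    return y
```

Stacking such layers with dilations 1, 2, 4, ... grows the receptive field exponentially, which is how a TCN can look several hundred milliseconds into the respiratory history at low cost.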

Entropy ◽  
2020 ◽  
Vol 22 (10) ◽  
pp. 1186
Author(s):  
Ranjana Koshy ◽  
Ausif Mahmood

Face liveness detection is a critical preprocessing step in face recognition for avoiding face spoofing attacks, where an impostor can impersonate a valid user for authentication. While considerable research has recently been done on improving the accuracy of face liveness detection, the best current approaches use a two-step process of first applying non-linear anisotropic diffusion to the incoming image and then using a deep network for the final liveness decision. Such a two-step approach is not viable for real-time face liveness detection. We develop two end-to-end real-time solutions in which non-linear anisotropic diffusion based on an additive operator splitting scheme is first applied to an incoming static image; this enhances the edges and surface texture and preserves the boundary locations in the real image. The diffused image is then forwarded to a pre-trained Specialized Convolutional Neural Network (SCNN) and to Inception network version 4, which identify the complex and deep features for face liveness classification. We evaluate the performance of our integrated approach using the SCNN and Inception v4 on the Replay-Attack and Replay-Mobile datasets. The entire architecture is created in such a manner that, once trained, face liveness detection can be accomplished in real time. We achieve promising results of 96.03% and 96.21% face liveness detection accuracy with the SCNN, and 94.77% and 95.53% accuracy with Inception v4, on the Replay-Attack and Replay-Mobile datasets, respectively. We also develop a novel deep architecture for face liveness detection on video frames that uses diffusion of the images followed by a deep Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network to classify the video sequence as real or fake. Even though the use of a CNN followed by an LSTM is not new, combining it with diffusion (which has proven to be the best approach for single-image liveness detection) is novel. Performance evaluation of our architecture on the Replay-Attack dataset gave 98.71% test accuracy and 2.77% Half Total Error Rate (HTER), and on the Replay-Mobile dataset gave 95.41% accuracy and 5.28% HTER.
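The paper performs diffusion with an additive operator splitting (AOS) scheme; as a rough illustration of the edge-preserving idea only, here is a single explicit Perona–Malik diffusion step in NumPy. This is a simplified stand-in for AOS, not the scheme the authors use, and all names are my own:

```python
import numpy as np

def diffuse_step(img, kappa=30.0, lam=0.2):
    """One explicit anisotropic (Perona-Malik) diffusion step.
    The edge-stopping function g(s) = 1 / (1 + (s/kappa)^2) makes
    diffusion weak across strong gradients, so texture is smoothed
    while object boundaries are preserved."""
    # neighbour images with replicated borders (zero flux at the edges)
    n = np.roll(img, 1, 0);  n[0] = img[0]
    s = np.roll(img, -1, 0); s[-1] = img[-1]
    w = np.roll(img, 1, 1);  w[:, 0] = img[:, 0]
    e = np.roll(img, -1, 1); e[:, -1] = img[:, -1]
    out = img.copy()
    for nb in (n, s, w, e):
        d = nb - img                                  # gradient toward neighbour
        out = out + lam * (1.0 / (1.0 + (d / kappa) ** 2)) * d
    return out
```

In the real pipeline the diffused image, rather than the raw frame, is what gets fed to the SCNN or Inception v4 classifier; AOS allows much larger time steps than this explicit scheme.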


Author(s):  
Funa Zhou ◽  
Zhiqiang Zhang ◽  
Danmin Chen

Analysis of one-dimensional vibration signals is the most common method used for safety analysis and health monitoring of rotary machines. How effectively the features involved in one-dimensional sequence data are extracted is crucial to the accuracy of real-time fault diagnosis. This article aims to develop more effective means of extracting useful features potentially involved in one-dimensional vibration signals. First, an improved parallel long short-term memory, called parallel long short-term memory with peephole, is designed by adding a peephole connection before each forget gate to prevent useless information from being transferred within the cell. It not only solves the memory bottleneck problem of traditional long short-term memory on long sequences but also makes full use of all information helpful for feature extraction. Second, a fusion network with a new training mechanism is designed to fuse the features extracted by the parallel long short-term memory with peephole and by a convolutional neural network, respectively. The fusion network can incorporate the two-dimensional screenshot image into comprehensive feature extraction. It provides a more accurate fault diagnosis result, since the two-dimensional screenshot image is another form of expression of the one-dimensional vibration sequence, carrying additional trend and locality information. Finally, the real-time two-dimensional screenshot image is fed into the convolutional neural network to secure real-time online diagnosis, which is the primary requirement of engineers in health monitoring. The validity of the proposed method is verified by fault diagnosis of rolling bearings and a gearbox.
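The peephole modification described above lets the forget gate inspect the previous cell state before deciding what to discard. A minimal NumPy sketch of one such cell step follows; the stacked weight layout, names, and the choice to apply the peephole only to the forget gate are my own assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def peephole_lstm_step(x, h, c, W, U, P, b):
    """One step of an LSTM cell whose forget gate also sees the previous
    cell state c (the 'peephole' connection).
    W (4H x D), U (4H x H), b (4H,) hold the input/recurrent weights and
    biases for the forget, input, output gates and candidate, stacked in
    that order; P (H,) is the peephole weight for the forget gate."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    f = sigmoid(z[0:H] + P * c)      # forget gate peeks at the cell state
    i = sigmoid(z[H:2 * H])          # input gate
    o = sigmoid(z[2 * H:3 * H])      # output gate
    g = np.tanh(z[3 * H:4 * H])      # candidate cell update
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

Running several such cells in parallel over different segments of the vibration sequence, then fusing their outputs with the CNN branch, matches the parallel-plus-fusion structure the abstract describes.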


2020 ◽  
Vol 13 (4) ◽  
pp. 627-640 ◽  
Author(s):  
Avinash Chandra Pandey ◽  
Dharmveer Singh Rajpoot

Background: Sentiment analysis is contextual mining of text which determines the viewpoint of users with respect to sentimental topics commonly present on social networking websites. Twitter is one of the social sites where people express their opinions about any topic in the form of tweets. These tweets can be examined using various sentiment classification methods to find the opinions of users. Traditional sentiment analysis methods use manually extracted features for opinion classification. The manual feature extraction process is a complicated task since it requires predefined sentiment lexicons. On the other hand, deep learning methods automatically extract relevant features from data; hence, they provide better performance and richer representation capability than traditional methods. Objective: The main aim of this paper is to enhance sentiment classification accuracy and to reduce the computational cost. Method: To achieve this objective, a hybrid deep learning model based on a convolutional neural network and a bi-directional long short-term memory neural network has been introduced. Results: The proposed sentiment classification method achieves the highest accuracy for most of the datasets. Further, the efficacy of the proposed method has been validated through statistical analysis. Conclusion: Sentiment classification accuracy can be improved by creating veracious hybrid models. Moreover, performance can also be enhanced by tuning the hyperparameters of deep learning models.
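In CNN–BiLSTM hybrids of this kind, the CNN branch typically slides filters over the embedded token sequence and max-pools over time before the recurrent and dense layers. A minimal NumPy sketch of that branch (shapes and names are illustrative, not from the paper):

```python
import numpy as np

def conv_maxpool_features(E, filters):
    """CNN branch of a text-classification hybrid: slide each filter
    over the embedded token sequence E (T x d), then take the max
    activation over time per filter (max-over-time pooling).
    The pooled vector would feed the BiLSTM / dense layers."""
    T, d = E.shape
    feats = []
    for F in filters:                 # each filter: (width, d)
        w, _ = F.shape
        # valid convolution positions along the sequence
        acts = [np.sum(E[t:t + w] * F) for t in range(T - w + 1)]
        feats.append(max(acts))       # strongest n-gram response
    return np.array(feats)
```

Because each filter spans full embedding width, a width-w filter acts as a learned w-gram detector, which is why such branches capture local sentiment cues cheaply.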


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4916
Author(s):  
Ali Usman Gondal ◽  
Muhammad Imran Sadiq ◽  
Tariq Ali ◽  
Muhammad Irfan ◽  
Ahmad Shaf ◽  
...  

Urbanization has been a big concern for both developed and developing countries in recent years. People move themselves and their families to urban areas for the sake of better education and a modern lifestyle. Due to rapid urbanization, cities face huge challenges, one of which is waste management, as the volume of waste is directly proportional to the number of people living in the city. Municipalities and city administrations use traditional waste classification techniques, which are manual, very slow, inefficient, and costly. Therefore, automatic waste classification and management is essential for urbanizing cities to enable better recycling of waste. Better recycling of waste provides the opportunity to reduce the amount of waste sent to landfills by reducing the need to collect new raw material. In this paper, the idea of a real-time smart waste classification model is presented that uses a hybrid approach to classify waste into various classes. Two machine learning models, a multilayer perceptron and a multilayer convolutional neural network (ML-CNN), are implemented. The multilayer perceptron provides binary classification, i.e., metal or non-metal waste, and the CNN identifies the class of non-metal waste. A camera is placed in front of the waste conveyor belt, which takes a picture of the waste so it can be classified. Upon successful classification, an automatic hand hammer pushes the waste into the assigned labeled bucket. Experiments were carried out in a real-time environment with image segmentation. The training, testing, and validation accuracy of the proposed model was 0.99 under different training batches with different input features.
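The two-stage dispatch described above (a perceptron gate for metal vs. non-metal, then a CNN for the fine-grained class) can be sketched as a simple pipeline. The predictor callables here are hypothetical stand-ins for the trained models, not the authors' code:

```python
def classify_waste(features, image, mlp_predict, cnn_predict):
    """Two-stage hybrid classification: an MLP first decides metal vs.
    non-metal from the item's features; only non-metal items are passed
    on to the CNN for fine-grained class prediction (e.g. plastic,
    paper, glass). mlp_predict and cnn_predict stand in for the
    trained models."""
    if mlp_predict(features) == "metal":
        return "metal"                 # metal items skip the CNN stage
    return cnn_predict(image)          # CNN labels the non-metal class
```

Gating with the cheap binary model first means the heavier CNN only runs on the subset of items that actually need fine-grained labeling, which helps the conveyor-belt pipeline stay real-time.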

