Neural Denoising for Path Tracing of Medical Volumetric Data

Author(s):  
Nikolai Hofmann ◽  
Jana Martschinke ◽  
Klaus Engel ◽  
Marc Stamminger

In this paper, we transfer machine learning techniques previously applied to denoising surface-only Monte Carlo renderings to path-traced visualizations of medical volumetric data. In the domain of medical imaging, path-traced videos have proven an effective means to visualize and understand internal structures, in particular for less experienced viewers such as students or patients. However, the computational demands for rendering high-quality path-traced videos are very high due to the large number of samples necessary for each pixel. To accelerate the process, we present a learning-based technique for denoising path-traced videos of volumetric data that effectively increases the sample count per pixel, both through spatial filtering (integrating neighboring samples) and temporal filtering (reusing samples over time). Our approach uses a set of additional features and a loss function both specifically designed for the volumetric case. Furthermore, we present a novel network architecture tailored for our purpose, and introduce reprojection of samples to improve temporal stability and reuse samples across frames. As a result, we achieve good image quality even from severely undersampled input images, as visible in the teaser image.
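The spatial and temporal filtering the abstract describes can be illustrated with a minimal sketch: a box filter integrates neighboring samples, and an exponential moving average reuses samples over time. This is an assumption-laden simplification, not the authors' learned network — the filter weights, blend factor `alpha`, and the absence of reprojection are all illustrative choices.

```python
import numpy as np

def spatial_filter(frame, radius=1):
    """Average each pixel with its neighbors (simple box filter)."""
    padded = np.pad(frame, radius, mode="edge")
    out = np.zeros_like(frame)
    h, w = frame.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += padded[radius + dy: radius + dy + h,
                          radius + dx: radius + dx + w]
    return out / (2 * radius + 1) ** 2

def temporal_accumulate(history, frame, alpha=0.2):
    """Blend the current frame with the running history (sample reuse)."""
    return alpha * frame + (1 - alpha) * history

# Noisy one-sample-per-pixel frames converge toward the clean signal.
rng = np.random.default_rng(0)
clean = np.full((8, 8), 0.5)
history = clean + rng.normal(0, 0.3, clean.shape)
for _ in range(50):
    noisy = clean + rng.normal(0, 0.3, clean.shape)
    history = temporal_accumulate(history, spatial_filter(noisy))
```

A learned denoiser replaces these fixed weights with per-pixel kernels predicted by the network, but the variance-reduction principle is the same.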

2020 ◽  
pp. 1042-1059 ◽  
Author(s):  
Ammar Almomani ◽  
Mohammad Alauthman ◽  
Firas Albalas ◽  
O. Dorgham ◽  
Atef Obeidat

As network traffic grows, attacks on it become more complicated and harder to detect. Researchers have recently begun to combine machine learning techniques with cloud computing technologies to classify network threats, and new and creative approaches are needed to enhance intrusion detection systems. This article addresses these issues by detecting intrusions in cloud computing before they further disrupt normal network operations, since malicious attack techniques have evolved beyond traditional direct attacks and now span several attack classes, such as DoS, Probe, R2L, and U2R, including zero-day attacks in online mode. The proposed online intrusion detection cloud system (OIDCS) adopts the principles of NeuCube, a recent spiking neural network architecture, and is proposed as the first filtering system to utilize the NeuCube algorithm. OIDCS inherits NeuCube's hybrid (supervised/unsupervised) learning and applies it in an online setting with lifelong learning, classifying input while the system continues to learn. The system is accurate, especially against zero-day attacks, reaching approximately 97% accuracy based on the to-be-remembered (TBR) encoding algorithm.
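Spiking systems such as NeuCube ingest continuous signals as spike trains. A plausible reading of the to-be-remembered (TBR) encoding mentioned here is threshold-based spike generation: emit a signed spike whenever the signal has drifted more than a threshold from its value at the last spike. The sketch below is an assumption about the encoding's shape, not the paper's exact algorithm.

```python
def tbr_encode(signal, threshold=0.5):
    """Threshold-based encoding: emit +1/-1 spikes when the signal
    moves more than `threshold` away from the value at the last spike."""
    spikes = []
    last = signal[0]
    for x in signal[1:]:
        diff = x - last
        if abs(diff) >= threshold:
            spikes.append(1 if diff > 0 else -1)
            last = x
        else:
            spikes.append(0)
    return spikes

# A rising then falling signal yields positive then negative spikes.
sig = [0.0, 0.6, 1.3, 1.4, 0.7, 0.1]
print(tbr_encode(sig))  # → [1, 1, 0, -1, -1]
```

Such sparse event streams are what the spiking reservoir then processes, which is what makes online, lifelong learning on streaming traffic tractable.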


2019 ◽  
Author(s):  
Jacob Witten ◽  
Zack Witten

Antimicrobial peptides (AMPs) are naturally occurring or synthetic peptides that show promise for treating antibiotic-resistant pathogens. Machine learning techniques are increasingly used to identify naturally occurring AMPs, but there is a dearth of purely computational methods to design novel effective AMPs, which would speed AMP development. We collected a large database, Giant Repository of AMP Activities (GRAMPA), containing AMP sequences and associated MICs. We designed a convolutional neural network to perform combined classification and regression on peptide sequences to quantitatively predict AMP activity against Escherichia coli. Our predictions outperformed the state of the art at AMP classification and were also effective at regression, for which there were no publicly available comparisons. We then used our model to design novel AMPs and experimentally demonstrated activity of these AMPs against the pathogens E. coli, Pseudomonas aeruginosa, and Staphylococcus aureus. Data, code, and neural network architecture and parameters are available at https://github.com/zswitten/Antimicrobial-Peptides.
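The combined classification/regression CNN over peptide sequences can be sketched at forward-pass level: one-hot encode the amino acids, scan learned motif kernels over the sequence, pool, and branch into a sigmoid classification head and a linear regression head (predicting, say, log MIC). Kernel count, width, and the example peptide (magainin-like) are illustrative; the actual architecture is in the linked repository.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(seq, max_len=30):
    """One-hot encode a peptide, zero-padded to a fixed length."""
    x = np.zeros((max_len, len(AMINO_ACIDS)))
    for i, aa in enumerate(seq[:max_len]):
        x[i, AMINO_ACIDS.index(aa)] = 1.0
    return x

def conv_forward(x, kernels):
    """1D convolution over the sequence axis + global max pooling."""
    k_len = kernels.shape[1]
    windows = np.stack([x[i:i + k_len].ravel()
                        for i in range(x.shape[0] - k_len + 1)])
    feats = windows @ kernels.reshape(kernels.shape[0], -1).T
    return feats.max(axis=0)  # one pooled feature per kernel

rng = np.random.default_rng(1)
kernels = rng.normal(size=(8, 5, len(AMINO_ACIDS)))   # 8 motif detectors
w_cls, w_reg = rng.normal(size=8), rng.normal(size=8)

feats = conv_forward(one_hot("GIGKFLHSAKKFGKAFVGEIMNS"), kernels)
p_amp = 1 / (1 + np.exp(-feats @ w_cls))   # classification head: P(is AMP)
log_mic = feats @ w_reg                     # regression head: activity
```

Sharing the convolutional trunk between the two heads is what lets classification labels and MIC measurements jointly shape the learned sequence features.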


2021 ◽  
Vol 1 (4) ◽  
pp. 22-26
Author(s):  
Ankita Saha ◽  
Chanda Pathak ◽  
Sourav Saha

The importance of cybersecurity is on the rise as we have become more technologically dependent on the internet than ever before. Cybersecurity is the process of protecting and recovering computer systems, networks, devices, and programs from cyber attacks. Cyber attacks are an increasingly sophisticated and evolving danger to our sensitive data, as attackers employ new methods to circumvent traditional security controls. Cryptanalysis is mainly used to break cryptographic security systems and gain access to the contents of encrypted messages, even if the key is unknown. It focuses on deciphering encrypted data, working with ciphertext, ciphers, and cryptosystems to understand how they work and to find techniques for weakening them. For classical cryptanalysis, recovering the plaintext from ciphertext is difficult, as the time complexity is exponential; traditional cryptanalysis requires a significant amount of time, known plaintexts, and memory. Machine learning may reduce this computational complexity, and machine learning techniques have recently been applied in cryptanalysis, steganography, and other data-security-related applications. Deep learning is an advanced field of machine learning which mainly uses deep neural network architectures. Deep learning techniques are now being explored extensively to solve many challenging problems of artificial intelligence, but not much work has been done on deep learning-based cryptanalysis. This paper attempts to summarize various machine learning-based approaches to cryptanalysis, along with discussions on the scope of application of deep learning techniques in cryptography.
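A classical example of the statistics-driven cryptanalysis that machine learning generalizes is the index of coincidence: a single hand-crafted feature plus a threshold already forms a tiny classifier that separates monoalphabetic from polyalphabetic ciphertext. The threshold value below is an illustrative choice; learned models replace such hand-tuned features with trained ones.

```python
from collections import Counter

def index_of_coincidence(text):
    """Probability that two randomly chosen letters match; high for
    English-like letter distributions (preserved by monoalphabetic
    substitution), low for flatter polyalphabetic output."""
    counts = Counter(c for c in text.upper() if c.isalpha())
    n = sum(counts.values())
    return sum(v * (v - 1) for v in counts.values()) / (n * (n - 1))

def classify_cipher(text, threshold=0.05):
    """One feature + one threshold: the simplest possible classifier."""
    ic = index_of_coincidence(text)
    return "monoalphabetic" if ic > threshold else "polyalphabetic"

# Caesar shift preserves letter frequencies, so the IC stays high:
print(classify_cipher("DWWDFNDWGDZQ"))              # ATTACKATDAWN, shift 3
print(classify_cipher("ABCDEFGHIJKLMNOPQRSTUVWXYZ"))  # flat distribution
```

Neural cryptanalysis follows the same pattern at scale, learning distinguishing statistics directly from ciphertext rather than relying on features chosen by the analyst.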


2018 ◽  
Vol 8 (2) ◽  
pp. 96-112 ◽  
Author(s):  
Ammar Almomani ◽  
Mohammad Alauthman ◽  
Firas Albalas ◽  
O. Dorgham ◽  
Atef Obeidat

As network traffic grows, attacks on it become more complicated and harder to detect. Researchers have recently begun to combine machine learning techniques with cloud computing technologies to classify network threats, and new and creative approaches are needed to enhance intrusion detection systems. This article addresses these issues by detecting intrusions in cloud computing before they further disrupt normal network operations, since malicious attack techniques have evolved beyond traditional direct attacks and now span several attack classes, such as DoS, Probe, R2L, and U2R, including zero-day attacks in online mode. The proposed online intrusion detection cloud system (OIDCS) adopts the principles of NeuCube, a recent spiking neural network architecture, and is proposed as the first filtering system to utilize the NeuCube algorithm. OIDCS inherits NeuCube's hybrid (supervised/unsupervised) learning and applies it in an online setting with lifelong learning, classifying input while the system continues to learn. The system is accurate, especially against zero-day attacks, reaching approximately 97% accuracy based on the to-be-remembered (TBR) encoding algorithm.


2015 ◽  
Vol 813-814 ◽  
pp. 1058-1062 ◽  
Author(s):  
M. Saimurugan ◽  
T. Praveenkumar ◽  
P. Krishnakumar ◽  
K.I. Ramachandran

A gearbox is the medium that balances the power and torque relations for the appropriate operating conditions; at very high speeds it controls the power output of the drive unit. It is widely used in automotive and industrial applications. Condition monitoring of a gearbox assesses the operating condition of its components, such as gears and bearings, so that condition-based maintenance can be performed to avoid machine downtime and operational losses. This paper identifies the accelerometer position best suited to acquiring vibration signals for identifying gear faults using machine learning techniques. The study includes 2 fault classes, 2 gear settings (1st and 4th gear), 3 loading conditions, and 3 operating speeds for each of 2 sensor locations. Vibration signals were collected for each class at both sensor locations, statistical features were extracted, and classification efficiencies were calculated for both the SVM and J48 decision tree algorithms.
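The statistical features extracted from vibration signals in studies like this are typically simple time-domain descriptors. A minimal sketch, assuming a common feature set (the abstract does not list the exact features used):

```python
import math

def statistical_features(signal):
    """Time-domain features commonly fed to classifiers such as SVM or J48.
    Kurtosis and crest factor are sensitive to the impulsive transients
    that gear and bearing faults introduce."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    std = math.sqrt(var)
    rms = math.sqrt(sum(x * x for x in signal) / n)
    kurtosis = (sum((x - mean) ** 4 for x in signal) / n) / var ** 2 if var else 0.0
    crest = max(abs(x) for x in signal) / rms if rms else 0.0
    return {"mean": mean, "std": std, "rms": rms,
            "kurtosis": kurtosis, "crest_factor": crest}

f = statistical_features([1.0, -1.0, 1.0, -1.0])
```

One such feature vector per signal segment, per sensor location, is what allows the classifiers to compare how informative each accelerometer position is.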


Author(s):  
Jagruti Jain ◽  
Chitra Desai ◽  
Mrunali Chavan

Palm vein authentication achieves a high level of accuracy because the vein pattern lies inside the body, does not change over a lifetime, and cannot be stolen. This paper presents an analysis of palm vein pattern recognition algorithms, techniques, methodologies, and systems. It discusses the technical aspects of recent approaches to the following processes: detection of the region of interest (ROI), segmentation of the palm vein pattern, feature extraction, and matching. The results show that no benchmark database exists for palm vein recognition, and that for each of these processes there are machine learning techniques with very high accuracy.
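Two of the pipeline stages surveyed here, ROI detection and matching, can be illustrated in a heavily simplified form: crop the bounding box of bright (vein-containing) pixels, then score two patterns by normalized cross-correlation. Real systems use hand-geometry keypoints for the ROI and far more robust matchers; the threshold and correlation matcher below are illustrative stand-ins.

```python
import numpy as np

def extract_roi(image, threshold=0.2):
    """Crop the smallest box containing all pixels above `threshold`
    (a stand-in for proper hand segmentation and keypoint alignment)."""
    mask = image > threshold
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return image[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

def match_score(pattern_a, pattern_b):
    """Normalized cross-correlation of two equally sized vein patterns;
    1.0 means a perfect match."""
    a = pattern_a - pattern_a.mean()
    b = pattern_b - pattern_b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

img = np.zeros((10, 10))
img[2:5, 3:7] = 0.5      # synthetic "palm" region
img[3, 4] = 1.0          # synthetic "vein" highlight
roi = extract_roi(img)
```

The survey's point stands out even in this toy form: every stage has tunable choices, which is why comparable results require a benchmark database.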


2020 ◽  
Author(s):  
Said Ouala ◽  
Lucas Drumetz ◽  
Bertrand Chapron ◽  
Ananda Pascual ◽  
Fabrice Collard ◽  
...  

<p>Within the geosciences community, data-driven techniques have met with great success in recent years, principally due to the success of machine learning techniques in several image and signal processing domains. However, when considering the data-driven simulation of ocean and atmospheric fields, applying these methods remains an extremely challenging task, because the underlying dynamics usually depend on several complex hidden variables that complicate both learning and simulation.</p><p>In this work, we aim to extract Ordinary Differential Equations (ODE) from partial observations of a system. We propose a novel neural network architecture guided by physical and mathematical considerations of the underlying dynamics. Specifically, our architecture is able to simulate the dynamics of the system from a single initial condition, even if that initial condition does not lie in the attractor spanned by the training data. We show on different case studies the effectiveness of the proposed framework, both in capturing long-term asymptotic patterns of the system's dynamics and in addressing data assimilation issues, which relate to the short-term forecasting performance of our model.</p>
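The core difficulty, hidden variables behind partial observations, has a classical workaround that this line of work builds on: augment the observed variable with lagged copies (a Takens-style delay embedding) so the dynamics become well-posed in the augmented state, then fit and roll out a dynamical model. The sketch below uses a plain least-squares linear map instead of the paper's neural ODE; the embedding dimension and test signal are illustrative.

```python
import numpy as np

def delay_embed(series, dim=3):
    """Augment a scalar observation with lagged copies: a simple
    stand-in for the latent states a learned architecture infers."""
    return np.stack([series[i:len(series) - dim + i + 1]
                     for i in range(dim)], axis=1)

def fit_linear_dynamics(states):
    """Least-squares fit of the one-step map x_{t+1} ≈ A x_t."""
    X, Y = states[:-1], states[1:]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W.T

def simulate(A, x0, steps):
    """Roll the learned map forward from a single initial condition."""
    xs = [x0]
    for _ in range(steps):
        xs.append(A @ xs[-1])
    return np.array(xs)

# A partially observed sinusoid is reproduced from one initial state.
t = np.linspace(0, 20, 400)
obs = np.sin(t)
states = delay_embed(obs)
A = fit_linear_dynamics(states)
pred = simulate(A, states[0], 50)
```

For chaotic ocean-scale dynamics the linear map must give way to a learned nonlinear vector field integrated as an ODE, but the augment-then-simulate structure is the same.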


2020 ◽  
Vol 638 ◽  
pp. A134
Author(s):  
José A. de Diego ◽  
Jakub Nadolny ◽  
Ángel Bongiovanni ◽  
Jordi Cepa ◽  
Mirjana Pović ◽  
...  

Context. The accurate classification of hundreds of thousands of galaxies observed in modern deep surveys is imperative if we want to understand the universe and its evolution. Aims. Here, we report the use of machine learning techniques to classify early- and late-type galaxies in the OTELO and COSMOS databases using optical and infrared photometry and available shape parameters: either the Sérsic index or the concentration index. Methods. We used three classification methods for the OTELO database: (1) u − r color separation, (2) linear discriminant analysis using u − r and a shape parameter classification, and (3) a deep neural network using the r magnitude, several colors, and a shape parameter. We analyzed the performance of each method by sample bootstrapping and tested the performance of our neural network architecture using COSMOS data. Results. The accuracy achieved by the deep neural network is greater than that of the other classification methods, and it can also operate with missing data. Our neural network architecture is able to classify both OTELO and COSMOS datasets regardless of small differences in the photometric bands used in each catalog. Conclusions. In this study we show that the use of deep neural networks is a robust method to mine the cataloged data.
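The simplest of the three methods compared, u − r color separation, amounts to a one-dimensional cut: early-type galaxies are redder, so a large u − r color flags them. The cut value below is illustrative, not the one fitted in this study, and a real pipeline would calibrate it on labeled data (the linear discriminant and neural network then add the shape parameter and further photometry).

```python
def classify_galaxy(u_mag, r_mag, cut=2.2):
    """One-feature baseline: early-type galaxies have large u - r color.
    `cut` is an illustrative threshold, to be calibrated per survey."""
    return "early-type" if (u_mag - r_mag) > cut else "late-type"

print(classify_galaxy(20.0, 17.0))  # red galaxy, u - r = 3.0
print(classify_galaxy(19.0, 18.0))  # blue galaxy, u - r = 1.0
```

The deep network's advantage reported in the abstract, tolerance of missing photometric bands, is precisely what such a hard two-band cut cannot offer: if either magnitude is missing, this baseline has no answer.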


2006 ◽  
Author(s):  
Christopher Schreiner ◽  
Kari Torkkola ◽  
Mike Gardner ◽  
Keshu Zhang

2020 ◽  
Vol 12 (2) ◽  
pp. 84-99
Author(s):  
Li-Pang Chen

In this paper, we investigate the analysis and prediction of time-dependent data, focusing on four different stocks selected from the Yahoo Finance historical database. To build models and predict future stock prices, we consider three machine learning techniques: Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN), and Support Vector Regression (SVR). By treating the close price, open price, daily low, daily high, adjusted close price, and volume of trades as predictors in these methods, we show that prediction accuracy is improved.
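Whichever of the three models is used, the series must first be reshaped into supervised (window, next-value) pairs. A minimal sketch, with an assumed lookback of 5 days and the six predictors the abstract lists (the synthetic ramp data is only for demonstration):

```python
import numpy as np

def make_windows(features, target, lookback=5):
    """Turn a multivariate price series into (window, next-value) pairs,
    the supervised format LSTM/CNN/SVR models are trained on."""
    X, y = [], []
    for i in range(lookback, len(target)):
        X.append(features[i - lookback:i])  # last `lookback` days, all predictors
        y.append(target[i])                 # next day's close price
    return np.array(X), np.array(y)

# 6 predictors per day: open, high, low, close, adjusted close, volume.
days, n_feats = 100, 6
series = np.arange(days * n_feats, dtype=float).reshape(days, n_feats)
X, y = make_windows(series, series[:, 3])
```

LSTM and CNN consume the `(lookback, n_feats)` windows directly, while SVR requires flattening each window into a single feature vector; the target definition is shared across all three.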

