A New Pooling Approach Based on Zeckendorf’s Theorem for Texture Transfer Information

Entropy ◽  
2021 ◽  
Vol 23 (3) ◽  
pp. 279
Author(s):  
Vincent Vigneron ◽  
Hichem Maaref ◽  
Tahir Q. Syed

The pooling layer is at the heart of every convolutional neural network (CNN), contributing to its invariance to data variation. This paper proposes a pooling method based on Zeckendorf’s number series. The maximum pooling layers are replaced with Z-pooling layers, which capture texels from input images, convolution layers, etc. It is shown that the properties of Z pooling are better adapted to segmentation tasks than those of other pooling functions. The method was evaluated on a traditional image segmentation task and on a dense labeling task, carried out with a series of deep learning architectures in which the usual maximum pooling layers were altered to use the proposed pooling mechanism. Not only does it arbitrarily increase the receptive field in a parameterless fashion, but it also better tolerates rotations, since the pooling layers are independent of the geometric arrangement or sizes of the image regions. Different combinations of pooling operations produce images capable of emphasizing low/high frequencies, extracting ultrametric contours, etc.
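Zeckendorf’s theorem states that every positive integer has a unique representation as a sum of non-consecutive Fibonacci numbers. A minimal sketch of that decomposition, the number series the Z-pooling operator is built on (the pooling layer itself is not reproduced here):

```python
def zeckendorf(n):
    """Greedy Zeckendorf decomposition: represent n as a sum of
    non-consecutive Fibonacci numbers, largest term first."""
    assert n > 0
    # Build Fibonacci numbers up to n (starting 1, 2 avoids a duplicate 1).
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):
        if f <= n:
            n -= f
            parts.append(f)
    return parts

print(zeckendorf(100))  # -> [89, 8, 3]
```

The greedy choice automatically yields non-consecutive Fibonacci terms, which is what makes the representation unique.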

2020 ◽  
Vol 10 (21) ◽  
pp. 7817
Author(s):  
Ivana Marin ◽  
Ana Kuzmanic Skelin ◽  
Tamara Grujic

The main goal of any classification or regression task is to obtain a model that generalizes well to new, previously unseen data. With the recent rise of deep learning and the many state-of-the-art results obtained with deep models, deep learning architectures have become among the most widely used model architectures. To generalize well, a deep model needs to learn the training data well without overfitting, which ties the choice of optimization algorithm and regularization technique to generalization performance. In this work, we explore the effect of the optimization algorithm and regularization techniques on the final generalization performance of models with convolutional neural network (CNN) architectures widely used in computer vision. We give a detailed overview of optimization and regularization techniques, with a comparative analysis of their performance for three CNNs on the CIFAR-10 and Fashion-MNIST image datasets.
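To make the comparison concrete, here is an illustrative-only sketch of two of the update rules such a study compares (plain SGD vs. SGD with momentum, with L2 weight decay as the regularizer), applied to a toy 1-D quadratic loss rather than a CNN:

```python
# Toy loss L(w) = 0.5 * w**2, so dL/dw = w; weight decay adds lambda * w.
def sgd_step(w, grad, lr=0.1, weight_decay=0.01):
    """Plain SGD with L2 weight decay folded into the gradient."""
    return w - lr * (grad + weight_decay * w)

def momentum_step(w, v, grad, lr=0.1, beta=0.9, weight_decay=0.01):
    """SGD with momentum: the velocity v accumulates past gradients."""
    v = beta * v + grad + weight_decay * w
    return w - lr * v, v

w_sgd, w_mom, v = 5.0, 5.0, 0.0
for _ in range(50):
    w_sgd = sgd_step(w_sgd, grad=w_sgd)            # grad = dL/dw = w
    w_mom, v = momentum_step(w_mom, v, grad=w_mom)
print(abs(w_sgd), abs(w_mom))  # both decay toward the regularized optimum 0
```

The hyperparameters here are arbitrary; the paper’s actual experiments tune full optimizers and regularizers on real CNNs.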


2020 ◽  
Author(s):  
Hao Zhang ◽  
Jianguang Han ◽  
Heng Zhang ◽  
Yi Zhang

The seismic waves exhibit various types of attenuation while propagating through the subsurface, which is strongly related to the complexity of the earth. Anelasticity of the subsurface medium, quantified by the quality factor Q, causes dissipation of seismic energy. Attenuation distorts the phase of the seismic data and decays the higher frequencies more than the lower ones. A strong attenuation effect caused by geology such as a gas pocket is a notoriously challenging problem for high-resolution imaging, because it strongly reduces the amplitude and degrades the imaging quality of deeper events. To compensate for this attenuation effect, we first need an accurate estimate of the attenuation model (Q). However, it is challenging to derive a laterally and vertically varying attenuation model in the depth domain directly from surface reflection seismic data. This paper proposes a method to derive the anomalous Q model corresponding to strongly attenuative media from marine reflection seismic data using a deep learning approach, the convolutional neural network (CNN). We treat the Q anomaly detection problem as a semantic segmentation task and train an encoder-decoder CNN (U-Net) to perform a pixel-by-pixel prediction on the seismic section, assigning each pixel a probability of belonging to each attenuation level; these probabilities are then used to build the attenuation model. The proposed method uses a volume of marine 3D reflection seismic data for network training and validation, and requires only a very small training set thanks to U-Net, an encoder-decoder CNN architecture designed for semantic segmentation. Finally, to evaluate the attenuation model predicted by the proposed method, we validate the predicted heterogeneous Q model using de-absorption pre-stack depth migration (Q-PSDM), obtaining a high-resolution depth image with reasonable compensation.
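In a semantic-segmentation formulation, the network outputs one probability map per attenuation class and the Q model is assembled by taking the most likely class at each pixel. A minimal, framework-free sketch of that final step (the class layout and shapes are illustrative, not taken from the paper):

```python
def assemble_q_model(prob_maps):
    """prob_maps: list of 2-D class-probability maps (one per attenuation
    level), all the same shape. Returns a 2-D map of class indices,
    i.e. the predicted attenuation level at each pixel."""
    rows, cols = len(prob_maps[0]), len(prob_maps[0][0])
    return [[max(range(len(prob_maps)),
                 key=lambda c: prob_maps[c][i][j])   # argmax over classes
             for j in range(cols)]
            for i in range(rows)]

# Two classes on a tiny 2x2 section: background (0) vs Q anomaly (1).
background = [[0.9, 0.2], [0.8, 0.4]]
anomaly    = [[0.1, 0.8], [0.2, 0.6]]
print(assemble_q_model([background, anomaly]))  # -> [[0, 1], [0, 1]]
```

The U-Net that produces the probability maps is the substantive part of the method and is not sketched here.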


2020 ◽  
Vol 8 (3) ◽  
pp. 234-238
Author(s):  
Nur Choiriyati ◽  
Yandra Arkeman ◽  
Wisnu Ananta Kusuma

An open challenge in bioinformatics is the analysis of metagenomes sequenced from various environments. Several studies have demonstrated bacteria classification at the genus level using k-mers for feature extraction, where higher values of k give better accuracy but are costly in terms of computational resources and time. The spaced k-mers method was used to extract sequence features using the pattern 111 1111 10001, where 1 marks a position that must match and 0 a position that may or may not match. Currently, deep learning provides the best solutions to many problems in image recognition, speech recognition, and natural language processing. In this research, two different deep learning architectures, a Deep Neural Network (DNN) and a Convolutional Neural Network (CNN), were trained for taxonomic classification of metagenome data, with the spaced k-mers method used for feature extraction. The results showed that the DNN classifier reached 90.89% and the CNN classifier 88.89% accuracy at the genus-level taxonomy.
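A minimal sketch of spaced k-mer extraction: the seed pattern slides over the sequence and only the characters under a 1 are kept, while 0 positions are wildcards (reading the spaces in the paper’s pattern as formatting only; a shorter seed is used in the demo below):

```python
def spaced_kmers(seq, pattern="111111110001"):
    """Slide the spaced seed over seq; at each offset keep only the
    characters under a '1' (a '0' is a don't-care position)."""
    span = len(pattern)
    keep = [i for i, bit in enumerate(pattern) if bit == "1"]
    return [''.join(seq[off + i] for i in keep)
            for off in range(len(seq) - span + 1)]

# Short seed so the behavior is visible on a tiny sequence.
print(spaced_kmers("ACGTACGT", pattern="1101"))
# -> ['ACT', 'CGA', 'GTC', 'TAG', 'ACT']
```

Compared with contiguous k-mers of the same weight, the wildcard positions make the features tolerant to mismatches at those positions, which is the point of spaced seeds.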


Author(s):  
Helen Chen ◽  
Shubhankar Mohapatra ◽  
George Michalopoulos ◽  
Xi He ◽  
Ian McKillop

Using deep learning to advance personalized healthcare requires data about patients to be collected and aggregated from disparate sources that often span institutions and geographies. Researchers regularly come face-to-face with legitimate security and privacy policies that constrain access to these data. In this work, we present a vision for privacy-preserving federated neural network architectures that permit data to remain at a custodian’s institution while enabling the data to be discovered and used in neural network modeling. Using a diabetes dataset, we demonstrate that the accuracy and processing efficiency of federated deep learning architectures are equivalent to those of models built on centralized datasets.
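The core federated idea can be sketched with federated averaging over per-site model weights: each institution trains locally on its own records, and only the weights travel to the server (the paper’s actual architecture and privacy machinery are more involved; the linear model below is purely illustrative):

```python
def local_update(w, data, lr=0.1):
    """One local round at a custodian's site: gradient steps of a toy
    linear model y = w * x on that site's records only."""
    for x, y in data:
        w -= lr * (w * x - y) * x   # squared-error gradient
    return w

def federated_average(site_weights):
    """Server step: average the locally trained weights.
    Raw patient records never leave the sites."""
    return sum(site_weights) / len(site_weights)

# Two hospitals, each holding its own (x, y) records drawn from y = 2x.
sites = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(20):
    w = federated_average([local_update(w, data) for data in sites])
print(round(w, 2))  # -> 2.0
```

Weighting each site by its record count (as in standard FedAvg) is a common refinement omitted here for brevity.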


2019 ◽  
Vol 26 (11) ◽  
pp. 1181-1188 ◽  
Author(s):  
Isabel Segura-Bedmar ◽  
Pablo Raez

Abstract
Objective: The goal of the 2018 n2c2 shared task on cohort selection for clinical trials (track 1) is to identify which patients meet the selection criteria for clinical trials. Cohort selection is a particularly demanding task to which natural language processing and deep learning can make a valuable contribution. Our goal is to evaluate several deep learning architectures for this task.
Materials and Methods: Cohort selection can be formulated as a multilabeling problem whose goal is to determine which criteria are met for each patient record. We explore several deep learning architectures: a simple convolutional neural network (CNN), a deep CNN, a recurrent neural network (RNN), and a CNN-RNN hybrid architecture. Although our architectures are similar to those proposed in existing deep learning systems for text classification, our research also studies the impact of using a fully connected feedforward layer on the performance of these architectures.
Results: The RNN and hybrid models provide the best results, though without statistical significance. The fully connected feedforward layer improves the results for all architectures except the hybrid one.
Conclusions: Despite the limited size of the dataset, deep learning methods show promising results in learning useful features for cohort selection. They can therefore serve as a first filter for cohort selection in any clinical trial with minimal human intervention, significantly reducing the cost and time of clinical trials.
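In a multilabel formulation, each criterion gets its own independent probability and a record is tagged with every criterion whose probability clears a threshold; any subset of labels can therefore be predicted, unlike single-label softmax classification. A minimal sketch of that decision step (criterion names and scores are illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def met_criteria(logits, threshold=0.5):
    """Multilabel decision: each criterion is an independent yes/no,
    decided by its own sigmoid probability."""
    return [name for name, z in logits.items()
            if sigmoid(z) >= threshold]

# Per-criterion scores (logits) for one patient record -- illustrative.
logits = {"ABDOMINAL": 2.1, "CREATININE": -0.7, "MAJOR-DIABETES": 0.4}
print(met_criteria(logits))  # -> ['ABDOMINAL', 'MAJOR-DIABETES']
```

Training such a model uses a per-label binary cross-entropy loss rather than a categorical one.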


Computation ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. 3
Author(s):  
Sima Sarv Ahrabi ◽  
Michele Scarpiniti ◽  
Enzo Baccarelli ◽  
Alireza Momenzadeh

In parallel with the vast medical research on clinical treatment of COVID-19, an important step toward bringing the disease fully under control is careful monitoring of patients. Detection of COVID-19 relies mostly on viral tests; however, the study of X-rays is helpful because of their ready availability. Various studies employ Deep Learning (DL) paradigms to reinforce radiography-based recognition of lung infection by COVID-19. In this regard, we compare the noteworthy approaches devoted to the binary classification of infected images using DL techniques, and we also propose a variant of a convolutional neural network (CNN) with optimized parameters, which performs very well on a recent COVID-19 dataset. The proposed model’s effectiveness is all the more notable given its uncomplicated design, in contrast to other published models. In our approach, we randomly set several images of the dataset aside as a hold-out set; the model detects most of the COVID-19 X-rays correctly, with an excellent overall accuracy of 99.8%. In addition, the results obtained by testing on different datasets of diverse characteristics (which, specifically, were not used in training) demonstrate the effectiveness of the proposed approach, with an accuracy of up to 93%.


Machines ◽  
2021 ◽  
Vol 9 (7) ◽  
pp. 130
Author(s):  
Hyunsoo Lee ◽  
Seok-Youn Han ◽  
Keejun Park ◽  
Hoyoung Lee ◽  
Taesoo Kwon

Train running safety is considered one of the key criteria for advanced highway trains and bogies. While a number of existing studies have focused on its measurement and monitoring, this study proposes a new and effective train running safety prediction framework. The wheel derail coefficient, wheel rate of load reduction, and wheel lateral pressure are considered the decision variables of the safety framework. Actual measured rail conditions and vibration-based signals are used as the input data. However, advanced trains and bogies are influenced more by their inertial structures and mechanisms than by railway conditions and external environments. To reflect these inertial influences, past values of the output variables are fed back as recurrent inputs. The proposed framework combines the advantages of a general deep neural network and a recurrent neural network. To prove the effectiveness of the proposed hybrid deep learning framework, numerical analyses using an actual measured train-railway model and transit simulation are conducted and compared with existing deep learning architectures.
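The framework’s key structural idea, feeding past output values back in alongside current measurements, can be sketched as follows (the predictor is stubbed out as a toy weighted blend; in the paper it is a hybrid deep/recurrent network):

```python
from collections import deque

def predict_safety(measurement, past_outputs):
    """Stub predictor, for structure only: blends the current signal
    with an 'inertia' term from the recurrent window of past outputs."""
    inertia = sum(past_outputs) / len(past_outputs)
    return 0.7 * inertia + 0.3 * measurement

history = deque([0.0, 0.0, 0.0], maxlen=3)  # recurrent window of outputs
for rail_signal in [1.0, 1.0, 1.0, 1.0, 1.0]:
    y = predict_safety(rail_signal, history)
    history.append(y)  # past predictions become future inputs
print(round(y, 3))  # drifts toward the signal, smoothed by the inertia term
```

The fixed-length `deque` plays the role of the recurrent state: the prediction at each step depends on both the fresh measurement and the model’s own recent outputs.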


2021 ◽  
Author(s):  
Rohan Bhansali ◽  
Rahul Kumar

Abstract
Burns are the fourth most prevalent unintentional injury worldwide, and when left untreated can become permanent and sometimes fatal. An important aspect of treating burn injuries is accurate and efficient diagnosis. Classifying the three primary types of burns (superficial dermal, deep dermal, and full thickness) is essential in determining the necessity of surgery, which is often critical to the afflicted patient’s survival. Unfortunately, reconstructive burn surgeons and dermatologists are only able to diagnose these types of burns with approximately 50-75% accuracy. As a result, we propose the use of an eight-layer convolutional neural network, BurnNet, for rapid and precise burn classification with 99.87% accuracy. We applied affine transformations to artificially augment our dataset and found that our model attained near-perfect metrics across the board, demonstrating the strong potential of deep learning architectures for burn classification.
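Affine augmentation enlarges a training set with label-preserving transforms of each image. A minimal sketch for two such transforms (90° rotation and horizontal flip) on a toy image grid; the paper’s actual augmentation pipeline is not specified here:

```python
def rotate90(img):
    """Rotate a 2-D image (list of rows) 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def hflip(img):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in img]

def augment(img):
    """Return the original plus two affine variants; all three share
    the original's burn-type label, multiplying the training set."""
    return [img, rotate90(img), hflip(img)]

img = [[1, 2],
       [3, 4]]
print(augment(img))
# -> [[[1, 2], [3, 4]], [[3, 1], [4, 2]], [[2, 1], [4, 3]]]
```

General affine transforms (shears, scalings, small rotations) follow the same pattern but need interpolation, which a library such as an image-processing toolkit would normally supply.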

