Images super-resolution by optimal deep AlexNet architecture for medical application: A novel DOCALN

2020, Vol 39 (6), pp. 8259-8272
Author(s):
Sudhakar Sengan
L. Arokia Jesu Prabhu
V. Ramachandran
V. Priya
Logesh Ravi
...  

Over the last decade, much research has focused on Image Super-Resolution (SR); this reconstruction and enhancement task is vital in many research areas. Recently, deep learning algorithms have proved useful for improving the resolution of medical images. Here, we devise a novel deep convolutional network model with an optimal learning rate for the Rectified Linear Unit (ReLU), intended for Medical Image Super-Resolution (MISR). To obtain the optimal values of the deep-learning AlexNet structure, Modified Crow Search (MCS) is utilized, which depends mainly on the behavior of crow flocks. The chosen AlexNet lacks suitable supervision for improving performance and therefore tends to overfit. The proposed MISR design, named Deep Optimal Convolutional AlexNet (DOCALN), derives the optimal learning-rate values for the ReLU activation function. Low-resolution (LR) medical images can then be applied to this optimal deep-learning structure. Experimental results of our proposed model are compared with variants of convolutional neural networks (CNNs) on different measures, such as image quality assessment, SR efficiency analysis, and execution time.
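The abstract does not give the MCS update rule. As a minimal sketch of the crow-search idea applied to tuning a single scalar hyperparameter (here a stand-in "learning rate" scored by a toy quadratic loss in place of the AlexNet's validation error; the bounds, flock size, and objective are all illustrative assumptions, not the paper's settings):

```python
import random

def crow_search_lr(loss, bounds=(1e-4, 1e-1), n_crows=10, iters=200,
                   flight_length=2.0, awareness=0.1, seed=0):
    """Toy crow search over a scalar hyperparameter.

    `loss` stands in for the validation error obtained when training
    with that learning-rate value; in the paper this would be the
    DOCALN model's error, which is far too costly to evaluate here.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [rng.uniform(lo, hi) for _ in range(n_crows)]
    memory = pos[:]                      # each crow's best-known position
    for _ in range(iters):
        for i in range(n_crows):
            j = rng.randrange(n_crows)   # crow i follows a random crow j
            if rng.random() > awareness:
                # j is unaware: move toward j's remembered food cache
                new = pos[i] + rng.random() * flight_length * (memory[j] - pos[i])
            else:
                # j is aware it is being followed: i ends up at a random spot
                new = rng.uniform(lo, hi)
            pos[i] = min(max(new, lo), hi)
            if loss(pos[i]) < loss(memory[i]):
                memory[i] = pos[i]       # update crow i's memory
    return min(memory, key=loss)

# Stand-in objective whose optimum is 0.01.
best = crow_search_lr(lambda lr: (lr - 0.01) ** 2)
```

The memory update is what distinguishes crow search from a plain random walk: each crow always retains the best position it has personally found.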

2020, Vol 10 (3), pp. 854
Author(s):
Jiali Tang
Chenrong Huang
Jian Liu
Hongjin Zhu

Current mainstream super-resolution algorithms based on deep learning use a deep convolutional neural network (CNN) framework to realize end-to-end learning from low-resolution (LR) to high-resolution (HR) images, and have achieved good image restoration results. However, as the number of network layers increases, better results are not necessarily obtained, and problems arise such as slow training convergence, mismatched sample blocks, and unstable image restoration. We propose a preclassified deep-learning algorithm (MGEP-SRCNN) using Multilabel Gene Expression Programming (MGEP), which screens out a sample sub-bank with high relevance to the target image before image-block extraction, preclassifies the samples in a multilabel framework, and then performs nonlinear mapping and image reconstruction. The algorithm is verified on standard images and achieves better objective image quality. The restoration effect under different magnification conditions is also better.
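The screening step that builds the high-relevance sample sub-bank is not specified in the abstract. One plausible sketch, assuming a simple Pearson-correlation relevance score between flattened patches (the threshold and the patch data are invented for illustration):

```python
def correlation(a, b):
    """Pearson correlation between two equal-length flattened patches."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

def screen_sample_bank(bank, target, threshold=0.8):
    """Keep only training patches highly relevant to the target patch,
    before the (unshown) block extraction and reconstruction stages."""
    return [p for p in bank if correlation(p, target) >= threshold]

target = [1, 2, 3, 4]
bank = [[2, 4, 6, 8],      # perfectly correlated -> kept
        [4, 3, 2, 1],      # anti-correlated     -> dropped
        [1, 2, 2, 4]]      # strongly correlated -> kept
sub_bank = screen_sample_bank(bank, target)
```

Only the retained sub-bank would then be passed to the multilabel preclassification and nonlinear-mapping stages.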


Author(s):  
A. Valli Bhasha
B. D. Venkatramana Reddy

Diverse image super-resolution (SR) techniques have been implemented to reconstruct high-resolution (HR) images from input images with lower spatial resolutions. However, evaluating the perceptual quality of SR images remains an important and complex research problem. This paper proposes a new image SR model that aims to attain the maximum Peak Signal-to-Noise Ratio (PSNR). Low-resolution (LR) images are obtained from the HR images by bicubic interpolation-based downsampling and upsampling. Then, four sub-bands of the LR and HR images are generated by a novel adaptive wavelet lifting approach, in which the filter modes are optimized using the proposed Self-Adaptive Colliding Bodies Optimization (SA-CBO). From this technique, LR wavelet sub-bands (LRSB) and HR wavelet sub-bands (HRSB) are formed. With the help of the LRSB and HRSB images, residual images are produced by a deep convolutional neural network (CNN) with an optimized activation function and optimized hidden neurons. Both the adaptive wavelet lifting approach and the deep CNN are improved by SA-CBO. Finally, the inverse adaptive wavelet lifting approach produces the final SR image. Experimental results on publicly available SR image quality databases confirm the effectiveness and generalization ability of the proposed method compared with traditional image quality assessment algorithms.
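The paper's adaptive lifting filters are optimized by SA-CBO and are not given in the abstract; a plain Haar lifting step, however, shows the split/predict/update structure that such a scheme adapts, and why the inverse lifting recovers the signal exactly (the 1-D signal below is illustrative):

```python
def haar_lift(signal):
    """One level of Haar wavelet lifting: split, predict, update.

    Returns (approximation, detail) sub-bands; input length must be even.
    A 2-D version applied to rows then columns yields the four image
    sub-bands mentioned in the abstract.
    """
    even = signal[0::2]
    odd = signal[1::2]
    detail = [o - e for o, e in zip(odd, even)]         # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update step
    return approx, detail

def haar_unlift(approx, detail):
    """Inverse lifting: undo the update, undo the predict, merge."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

x = [4.0, 6.0, 10.0, 12.0, 14.0, 14.0, 16.0, 18.0]
lo, hi = haar_lift(x)
rebuilt = haar_unlift(lo, hi)
```

Because each lifting step is inverted by simply reversing its sign, perfect reconstruction holds for any choice of predict/update filters, which is what lets the filter modes be optimized freely.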


PLoS ONE, 2020, Vol 15 (10), pp. e0241313
Author(s):
Zhengqiang Xiong
Manhui Lin
Zhen Lin
Tao Sun
Guangyi Yang
...  

Author(s):  
Qiang Yu
Feiqiang Liu
Long Xiao
Zitao Liu
Xiaomin Yang

Deep-learning (DL)-based methods are of growing importance in single image super-resolution (SISR). The practical application of these DL-based models remains a problem, however, because of their heavy computation and storage requirements. The powerful feature maps of the hidden layers in convolutional neural networks (CNNs) help the model learn useful information, yet there is redundancy among the feature maps that can be further exploited. To address these issues, this paper proposes a lightweight efficient feature generating network (EFGN) for SISR, constructed from efficient feature generating blocks (EFGBs). Specifically, the EFGB applies cheap operations to the original features to produce additional feature maps with only a slight increase in parameters. With the help of these extra feature maps, the network can extract more useful information from low-resolution (LR) images to reconstruct the desired high-resolution (HR) images. Experiments on benchmark datasets demonstrate that the proposed EFGN outperforms other deep-learning-based methods in most cases while having relatively low model complexity. Additionally, running-time measurements indicate the feasibility of real-time monitoring.
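The abstract does not say which cheap operations the EFGB uses; a toy sketch of the general idea, assuming simple shift and scale transforms as the stand-in cheap operations (real blocks would use, e.g., inexpensive depthwise convolutions on 2-D maps):

```python
def cheap_ops(feature):
    """Two cheap transforms of one 1-D 'feature map' (illustrative):
    a spatial shift and a channel-wise scaling, each nearly free in
    parameters compared with a full convolution."""
    shifted = [0.0] + feature[:-1]
    scaled = [0.5 * v for v in feature]
    return [shifted, scaled]

def efficient_feature_generate(base_maps):
    """Expand a few intrinsic maps into more maps via cheap operations,
    mimicking the EFGB idea: extra features at little parameter cost."""
    out = list(base_maps)
    for f in base_maps:
        out.extend(cheap_ops(f))
    return out

base = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]    # 2 intrinsic feature maps
expanded = efficient_feature_generate(base)  # 2 + 2*2 = 6 maps
```

The parameter saving comes from producing most of the output maps with these fixed or near-free transforms rather than with additional learned convolution kernels.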


Technologies, 2021, Vol 9 (1), pp. 14
Author(s):
James Dzisi Gadze
Akua Acheampomaa Bamfo-Asante
Justice Owusu Agyemang
Henry Nunoo-Mensah
Kwasi Adu-Boahen Opare

Software-Defined Networking (SDN) is a new paradigm that realizes the idea of a software-driven network through the separation of the control and data planes, addressing the problems of traditional network architectures. Nevertheless, this architecture is exposed to several security threats, e.g., the distributed denial-of-service (DDoS) attack, which is hard to contain in such software-based networks. The centralized controller in SDN makes it a single point of attack as well as a single point of failure. In this paper, deep-learning models, long short-term memory (LSTM) and convolutional neural network (CNN), are investigated to assess their feasibility and efficiency for detecting and mitigating DDoS attacks. The paper focuses on TCP, UDP, and ICMP flood attacks that target the controller. The performance of the models was evaluated on accuracy, recall, and true negative rate, and compared against classical machine learning models. We further report the time taken to detect and mitigate an attack. Our results show that the RNN LSTM is a viable deep-learning algorithm for detecting and mitigating DDoS attacks on the SDN controller. Our proposed model achieved an accuracy of 89.63%, outperforming linear models such as SVM (86.85%) and Naive Bayes (82.61%). Although KNN outperformed our proposed model (achieving an accuracy of 99.4%), the proposed model provides a good trade-off between precision and recall, which makes it suitable for DDoS classification. In addition, we observed that the train/test split ratio can change the measured performance of a deep-learning algorithm: the model performed best with a 70/30 split, compared with 80/20 and 60/40 splits.
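The three evaluation measures named above follow directly from a binary confusion matrix over attack vs. benign flows. A short sketch, with hypothetical counts (not the paper's data) chosen purely to show how the measures are computed:

```python
def evaluate(tp, fp, tn, fn):
    """Accuracy, recall, true-negative rate, and precision from a
    binary confusion matrix (attack = positive class)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)      # attack flows correctly flagged
    tnr = tn / (tn + fp)         # benign flows correctly passed
    precision = tp / (tp + fp)   # flagged flows that really were attacks
    return {"accuracy": accuracy, "recall": recall,
            "tnr": tnr, "precision": precision}

# Hypothetical counts for an attack/benign flow classifier.
m = evaluate(tp=850, fp=60, tn=940, fn=150)
```

A high accuracy alone can hide a poor recall/precision balance, which is why the comparison above weighs the trade-off rather than the headline accuracy figure.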

