Performance Evaluation of Super-Resolution Methods Using Deep-Learning and Sparse-Coding for Improving the Image Quality of Magnified Images in Chest Radiographs

2017 ◽  
Vol 07 (03) ◽  
pp. 100-111 ◽  
Author(s):  
Kensuke Umehara ◽  
Junko Ota ◽  
Naoki Ishimaru ◽  
Shunsuke Ohno ◽  
Kentaro Okamoto ◽  
...  
2017 ◽  
Author(s):  
Junko Ota ◽  
Kensuke Umehara ◽  
Naoki Ishimaru ◽  
Shunsuke Ohno ◽  
Kentaro Okamoto ◽  
...  

Author(s):  
Sven Rothlubbers ◽  
Hannah Strohm ◽  
Klaus Eickel ◽  
Jurgen Jenne ◽  
Vincent Kuhlen ◽  
...  

2020 ◽  
Vol 7 (3) ◽  
pp. 432
Author(s):  
Windi Astuti

Among the many kinds of image processing that computers can perform, improving image quality remains one of the most popular. Improving the quality of an image is necessary so that it can be observed clearly and in detail, without disturbance. An image can suffer major disturbances or errors; screenshots, which are used as the samples in this study, are a typical case. Screenshots have the lowest sharpness and smoothness, so the image is usually enlarged to obtain a better view. After the screenshot is captured, the image is cropped, and disturbances such as blur and cracking become visible. Enlarging (zooming) an image requires adding new pixels, which is done here with the super-resolution method. Super-resolution has three stages of completion: registration, interpolation, and reconstruction. Magnification is performed by linear interpolation, and reconstruction uses a median filter for image refinement. This method is expected to solve the problem of improving image quality in image-enlargement applications. This study describes how image enlargement based on the super-resolution method is implemented, using MATLAB R2013a as the editor for writing the programs.
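The interpolation-plus-median-filter pipeline described above can be sketched in a few lines. The study itself uses MATLAB R2013a; the NumPy version below is only an illustrative sketch, with bilinear interpolation standing in for the linear-interpolation magnification step and a 3x3 median filter for the refinement step.

```python
import numpy as np

def bilinear_zoom(img, factor):
    """Enlarge a 2-D grayscale image by `factor` using bilinear interpolation."""
    h, w = img.shape
    new_h, new_w = int(h * factor), int(w * factor)
    # Map each output pixel back to fractional source coordinates.
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Blend the four surrounding source pixels by their fractional weights.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def median_filter3(img):
    """3x3 median filter (edge-padded) to smooth interpolation artifacts."""
    padded = np.pad(img, 1, mode="edge")
    stack = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
             for dy in range(3) for dx in range(3)]
    return np.median(np.stack(stack), axis=0)

# Tiny hypothetical "screenshot crop": enlarge 2x, then refine.
small = np.array([[0., 100.], [100., 200.]])
zoomed = median_filter3(bilinear_zoom(small, 2))
```

The registration stage is omitted here because the sketch enlarges a single image; registration matters when multiple shifted frames are fused.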


2021 ◽  
Vol 12 ◽  
Author(s):  
Ashika Mani ◽  
Tales Santini ◽  
Radhika Puppala ◽  
Megan Dahl ◽  
Shruthi Venkatesh ◽  
...  

Background: Magnetic resonance (MR) scans are routine clinical procedures for monitoring people with multiple sclerosis (PwMS). Patient discomfort, timely scheduling, and financial burden motivate the need to accelerate MR scan time. We examined the clinical application of a deep learning (DL) model in restoring the image quality of accelerated routine clinical brain MR scans for PwMS.
Methods: We acquired fast 3D T1w BRAVO and fast 3D T2w FLAIR MRI sequences (half the phase encodes and half the number of slices) in parallel to conventional parameters. Using a subset of the scans, we trained a DL model to generate images from fast scans with quality similar to the conventional scans and then applied the model to the remaining scans. We calculated clinically relevant T1w volumetrics (normalized whole brain, thalamic, gray matter, and white matter volume) for all scans and T2 lesion volume in a sub-analysis. We performed paired t-tests comparing conventional, fast, and fast with DL for these volumetrics, and fit repeated measures mixed-effects models to test for differences in correlations between volumetrics and clinically relevant patient-reported outcomes (PRO).
Results: We found statistically significant but small differences between conventional and fast scans with DL for all T1w volumetrics. There was no difference in the extent to which the key T1w volumetrics correlated with clinically relevant PROs of MS symptom burden and neurological disability.
Conclusion: A deep learning model that improves the image quality of accelerated routine clinical brain MR scans has the potential to inform clinically relevant outcomes in MS.
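The paired t-tests in the Methods compare the same patients' volumetrics under two scan conditions. A minimal sketch of that computation is shown below; the volume numbers are synthetic and hypothetical, not the study's data, and this is not the authors' analysis code.

```python
import numpy as np

def paired_t(a, b):
    """Paired t statistic for matched measurements a and b
    (e.g. one patient's brain volume from the conventional scan
    vs. the DL-restored fast scan)."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    # Mean difference divided by the standard error of the differences.
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

# Hypothetical normalized whole-brain volumes (cm^3) for 5 patients.
conventional = [1510.2, 1487.9, 1532.4, 1465.1, 1501.8]
fast_with_dl = [1508.9, 1489.5, 1530.1, 1466.0, 1499.7]
t_stat = paired_t(conventional, fast_with_dl)
```

To obtain a p-value, the statistic would be compared against a t distribution with n−1 degrees of freedom (in practice, `scipy.stats.ttest_rel` does both steps).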


2021 ◽  
Vol 13 (19) ◽  
pp. 3859
Author(s):  
Joby M. Prince Czarnecki ◽  
Sathishkumar Samiappan ◽  
Meilun Zhou ◽  
Cary Daniel McCraine ◽  
Louis L. Wasson

The radiometric quality of remotely sensed imagery is crucial for precision agriculture applications because estimations of plant health rely on the underlying quality. Sky conditions, and specifically shadowing from clouds, are critical determinants in the quality of images that can be obtained from low-altitude sensing platforms. In this work, we first compare common deep learning approaches to classify sky conditions with regard to cloud shadows in agricultural fields using a visible spectrum camera. We then develop an artificial-intelligence-based edge computing system to fully automate the classification process. Training data consisting of 100 oblique angle images of the sky were provided to a convolutional neural network and two deep residual neural networks (ResNet18 and ResNet34) to facilitate learning two classes, namely (1) good image quality expected, and (2) degraded image quality expected. The expectation of quality stemmed from the sky condition (i.e., density, coverage, and thickness of clouds) present at the time of the image capture. These networks were tested using a set of 13,000 images. Our results demonstrated that ResNet18 and ResNet34 classifiers produced better classification accuracy when compared to a convolutional neural network classifier. The best overall accuracy was obtained by ResNet34, which was 92% accurate, with a Kappa statistic of 0.77. These results demonstrate a low-cost solution to quality control for future autonomous farming systems that will operate without human intervention and supervision.
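The reported 92% accuracy and Kappa statistic of 0.77 measure different things: accuracy is raw agreement, while Cohen's kappa discounts agreement expected by chance. As a generic illustration (not the authors' evaluation code), kappa can be computed from true and predicted sky-condition labels as follows:

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: agreement between two label sequences beyond chance."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    labels = np.unique(np.concatenate([y_true, y_pred]))
    po = np.mean(y_true == y_pred)                     # observed agreement
    pe = sum(np.mean(y_true == c) * np.mean(y_pred == c)
             for c in labels)                          # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical labels: 0 = good image quality expected, 1 = degraded.
truth = [0, 0, 0, 1, 1, 1, 1, 0]
preds = [0, 0, 1, 1, 1, 1, 0, 0]
kappa = cohens_kappa(truth, preds)
```

A kappa near 0 means the classifier does little better than chance even if its raw accuracy looks high, which is why imbalanced sky-condition datasets are better summarized with both numbers.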


Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6457
Author(s):  
Hayat Ullah ◽  
Muhammad Irfan ◽  
Kyungjin Han ◽  
Jong Weon Lee

Due to recent advancements in virtual reality (VR) and augmented reality (AR), the demand for high-quality immersive content is a primary concern for production companies and consumers. Similarly, the recent record-breaking performance of deep learning across many domains of artificial intelligence has drawn researchers' attention to various fields of computer vision. To ensure the quality of immersive media content using these advanced deep learning technologies, several learning-based stitched image quality assessment methods have been proposed with reasonable performance. However, these methods are unable to localize, segment, and extract stitching errors in panoramic images, and they rely on computationally complex procedures for quality assessment. With these motivations, in this paper we propose a novel three-fold Deep Learning based No-Reference Stitched Image Quality Assessment (DLNR-SIQA) approach to evaluate the quality of immersive content. In the first fold, we fine-tune the state-of-the-art Mask R-CNN (Region-based Convolutional Neural Network) on manually annotated, stitching-error-based cropped images from two publicly available datasets. In the second fold, we segment and localize the various stitching errors present in the immersive content. Finally, based on the distorted regions present in the immersive content, we measure the overall quality of the stitched images. Unlike existing methods that only measure image quality using deep features, our proposed method can efficiently segment and localize stitching errors and estimate image quality by investigating the segmented regions. We also carried out extensive qualitative and quantitative comparisons with full-reference image quality assessment (FR-IQA) and no-reference image quality assessment (NR-IQA) methods on two publicly available datasets, where the proposed system outperformed existing state-of-the-art techniques.
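The final fold turns segmented error regions into an overall quality score. The abstract does not give the exact scoring function, so the sketch below assumes a simple area-based aggregation purely for illustration: quality is the fraction of pixels not covered by any detected stitching-error mask.

```python
import numpy as np

def stitched_quality_score(error_masks, image_shape):
    """Assumed area-based aggregation: the fraction of pixels NOT covered
    by any detected stitching-error region (1.0 = no visible errors).
    This is an illustrative stand-in, not the paper's metric."""
    combined = np.zeros(image_shape, dtype=bool)
    for mask in error_masks:  # boolean masks, e.g. from a Mask R-CNN head
        combined |= mask
    return 1.0 - combined.mean()

# Two hypothetical error masks on an 8x8 stitched image.
m1 = np.zeros((8, 8), dtype=bool); m1[:2, :2] = True   # 4 distorted pixels
m2 = np.zeros((8, 8), dtype=bool); m2[6:, 6:] = True   # 4 distorted pixels
score = stitched_quality_score([m1, m2], (8, 8))
```

In a real system the per-region score would likely also weight error type and severity, not just area.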


2022 ◽  
Author(s):  
Torsten Schlett ◽  
Christian Rathgeb ◽  
Olaf Henniger ◽  
Javier Galbally ◽  
Julian Fierrez ◽  
...  

The performance of face analysis and recognition systems depends on the quality of the acquired face data, which is influenced by numerous factors. Automatically assessing the quality of face data in terms of biometric utility can thus be useful to detect low-quality data and make decisions accordingly. This survey provides an overview of the face image quality assessment literature, which predominantly focuses on visible-wavelength face image input. A trend towards deep learning based methods is observed, including notable conceptual differences among the recent approaches, such as the integration of quality assessment into face recognition models. Besides image selection, face image quality assessment can also be used in a variety of other application scenarios, which are discussed herein. Open issues and challenges are pointed out, among others highlighting the importance of comparability for algorithm evaluations, and the challenge for future work to create deep learning approaches that are interpretable in addition to providing accurate utility predictions.

