Principal Component Wavelet Networks for Solving Linear Inverse Problems

Symmetry ◽  
2021 ◽  
Vol 13 (6) ◽  
pp. 1083
Author(s):  
Bernard Tiddeman ◽  
Morteza Ghahremani

In this paper we propose a novel learning-based wavelet transform and demonstrate its utility as a representation for solving a number of linear inverse problems. These are asymmetric problems: the forward problem is easy to solve, but the inverse is difficult and often ill-posed. The wavelet decomposition applies an invertible 2D wavelet filter bank of symmetric and anti-symmetric filters, combined with a set of 1×1 convolution filters learnt from Principal Component Analysis (PCA). The 1×1 filters are needed to control the size of the decomposition. We show that applying PCA across wavelet subbands in this way produces an architecture equivalent to a separable Convolutional Neural Network (CNN), with the principal components forming the 1×1 filters and the subtraction of the mean forming the bias terms. The use of an invertible filter bank and (approximately) invertible PCA allows us to create a deep autoencoder very simply, and avoids issues of overfitting. We investigate the construction and learning of such networks, and their application to linear inverse problems via the Alternating Direction Method of Multipliers (ADMM). We use our network as a drop-in replacement for the traditional discrete wavelet transform, using wavelet shrinkage as the projection operator. The results show good potential on a number of inverse problems such as compressive sensing, in-painting, denoising and super-resolution, and significantly close the performance gap with Generative Adversarial Network (GAN)-based methods.
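The building blocks this abstract describes — a fixed invertible filter bank, PCA across subbands yielding 1×1 filters with mean-subtraction biases, and wavelet shrinkage as the projection operator inside ADMM — can be sketched roughly as follows. This is an illustrative NumPy sketch using a Haar filter bank; the function names and normalization are our own assumptions, not the authors' implementation.

```python
import numpy as np

def haar_subbands(img):
    # 2D Haar analysis on 2x2 blocks: LL, LH, HL, HH subbands,
    # built from symmetric (sum) and anti-symmetric (difference) filters
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2
    lh = (a + b - c - d) / 2
    hl = (a - b + c - d) / 2
    hh = (a - b - c + d) / 2
    return np.stack([ll, lh, hl, hh], axis=-1)

def fit_pca_1x1(subbands):
    # PCA across the subband (channel) axis: the principal components act
    # as 1x1 convolution filters, and subtracting the mean supplies the
    # bias terms, as the abstract's CNN equivalence describes
    x = subbands.reshape(-1, subbands.shape[-1])
    mean = x.mean(axis=0)
    _, _, vt = np.linalg.svd(x - mean, full_matrices=False)
    return mean, vt  # rows of vt are the 1x1 filter weights

def shrink(coeffs, t):
    # soft-thresholding (wavelet shrinkage), usable as the ADMM projection
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```

Because the SVD basis is orthonormal, keeping all components makes the PCA step exactly invertible, which is what allows the encoder/decoder pair to be built "very simply" as the abstract notes.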

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Erick Costa de Farias ◽  
Christian di Noia ◽  
Changhee Han ◽  
Evis Sala ◽  
Mauro Castelli ◽  
...  

Abstract
Robust machine learning models based on radiomic features might allow for accurate diagnosis, prognosis, and medical decision-making. Unfortunately, the lack of standardized radiomic feature extraction has hampered their clinical use. Since radiomic features tend to be affected by low voxel statistics in regions of interest, increasing the sample size would improve their robustness in clinical studies. Therefore, we propose a Generative Adversarial Network (GAN)-based lesion-focused framework for Computed Tomography (CT) image Super-Resolution (SR); for the lesion (i.e., cancer) patch-focused training, we incorporate Spatial Pyramid Pooling (SPP) into GAN-Constrained by the Identical, Residual, and Cycle Learning Ensemble (GAN-CIRCLE). At 2× SR, the proposed model achieved better perceptual quality with less blurring than the other considered state-of-the-art SR methods, while producing comparable results at 4× SR. We also evaluated the robustness of our model's radiomic features in terms of quantization on a different lung cancer CT dataset using Principal Component Analysis (PCA). Intriguingly, the most important radiomic features in our PCA-based analysis were the most robust features extracted on the GAN-super-resolved images. These achievements pave the way for the application of GAN-based image Super-Resolution techniques in radiomics studies for robust biomarker discovery.
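Spatial Pyramid Pooling, which this framework adds to GAN-CIRCLE for lesion-patch training, can be illustrated with a minimal sketch: the pyramid pools a feature map at several grid resolutions and concatenates the results, so patches of varying size yield a fixed-length descriptor. The pooling levels and the choice of max-pooling below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    # Pool a (H, W) feature map over an n x n grid for each pyramid level
    # and concatenate the per-cell maxima into one fixed-length vector;
    # the output length depends only on `levels`, never on H or W
    h, w = fmap.shape
    out = []
    for n in levels:
        ys = np.linspace(0, h, n + 1).astype(int)
        xs = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                out.append(fmap[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].max())
    return np.array(out)
```

With levels (1, 2, 4) every input patch maps to a 1 + 4 + 16 = 21-dimensional vector, which is what lets lesion patches of different sizes feed a single fully connected head.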


2021 ◽  
Author(s):  
Ruhallah Ahmadian ◽  
Mehdi Ghatee ◽  
Johan Wahlstrom

Driver identification is an important research area in intelligent transportation systems, with applications in commercial freight transport and usage-based insurance. One way to perform the identification is to use smartphones as sensor devices. By extracting features from smartphone-embedded sensors, various machine learning methods can identify the driver. The identification becomes particularly challenging when the number of drivers increases, since there is often not enough data for successful driver identification. This paper uses a Generative Adversarial Network (GAN) for data augmentation to address this lack of data. Since the GAN diversifies the drivers' data, it extends the applicability of driver identification. Although GANs are commonly used in image processing for image augmentation, their use for driving-signal augmentation is novel. Our experiments demonstrate their utility in generating driving signals derived from the Discrete Wavelet Transform (DWT) of smartphone accelerometer and gyroscope signals. After collecting the augmented data, their histograms along overlapping windows are fed to machine learning methods combined by a Stacked Generalization Method (SGM). The presented hybrid GAN-SGM approach identifies drivers with 97% accuracy, 98% precision, 97% recall, and 97% F1-measure, outperforming standard machine learning methods that process features extracted by statistical, spectral, and temporal approaches.
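The feature pipeline this abstract describes — a DWT of the inertial signals followed by histograms over overlapping windows — can be sketched as follows. The Haar wavelet, window length, step, and bin settings here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def dwt_haar(signal):
    # one-level Haar DWT: approximation and detail coefficients
    s = signal[: len(signal) // 2 * 2]  # drop a trailing odd sample
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def windowed_histograms(coeffs, win=64, step=32, bins=10, rng=(-3, 3)):
    # histogram features over overlapping windows of the DWT coefficients;
    # each row is one window's normalised histogram, ready for the
    # stacked-generalization ensemble
    feats = []
    for start in range(0, len(coeffs) - win + 1, step):
        h, _ = np.histogram(coeffs[start:start + win], bins=bins, range=rng)
        feats.append(h / win)
    return np.array(feats)
```

In a stacked-generalization setup, these per-window histogram rows would be scored by several base classifiers whose outputs a meta-learner then combines.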


2021 ◽  
Vol 13 (9) ◽  
pp. 1858
Author(s):  
Xubin Feng ◽  
Wuxia Zhang ◽  
Xiuqin Su ◽  
Zhengpu Xu

High spatial quality (HQ) optical remote sensing images are very useful for target detection, target recognition and image classification. Due to the limited accuracy of imaging equipment and the influence of the atmospheric environment, HQ images are difficult to acquire, while low spatial quality (LQ) remote sensing images are easy to acquire. Hence, denoising and super-resolution (SR) reconstruction are effective, low-cost ways to improve the quality of remote sensing images. Most existing methods employ only denoising or only SR to obtain HQ images. However, because of the complex structure and heavy noise of remote sensing images, the quality achievable with a denoising method or an SR method alone cannot meet actual needs. To address these problems, a method for reconstructing HQ remote sensing images based on a Generative Adversarial Network (GAN), named "Restoration Generative Adversarial Network with ResNet and DenseNet" (RRDGAN), is proposed, which acquires better-quality images by incorporating denoising and SR into a unified framework. The generative network fuses a Residual Neural Network (ResNet) and a Dense Convolutional Network (DenseNet) in order to address denoising and SR at the same time. Total variation (TV) regularization is then used to further enhance edge details, and the idea of the Relativistic GAN is explored to make the whole network converge better. Our RRDGAN is implemented in the wavelet transform (WT) domain, since different frequency components can be handled separately there. The experimental results on three different remote sensing datasets show the feasibility of our proposed method for acquiring remote sensing images.
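Two of the loss ingredients named in this abstract, TV regularization and the relativistic discriminator objective, admit compact sketches. The formulas below are the standard textbook forms (anisotropic TV; relativistic average loss with binary cross-entropy on logits); how RRDGAN weights them is not specified here.

```python
import numpy as np

def total_variation(img):
    # anisotropic TV: sum of absolute horizontal and vertical differences;
    # added as a regularizer to sharpen edge details
    dh = np.abs(np.diff(img, axis=1)).sum()
    dv = np.abs(np.diff(img, axis=0)).sum()
    return dh + dv

def relativistic_d_loss(real_logits, fake_logits):
    # Relativistic average GAN: the critic scores how much more realistic
    # real samples are than the average fake, and vice versa
    def bce_with_logits(logits, target):
        # equals -log(sigmoid(l)) for target 1, -log(1-sigmoid(l)) for 0
        return np.mean(np.log1p(np.exp(-logits)) + (1 - target) * logits)
    d_real = real_logits - fake_logits.mean()
    d_fake = fake_logits - real_logits.mean()
    return bce_with_logits(d_real, 1.0) + bce_with_logits(d_fake, 0.0)
```

When real and fake logits are indistinguishable, both relativistic differences are zero and the loss sits at 2·ln 2, the usual GAN equilibrium value.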


2021 ◽  
Vol 58 (8) ◽  
pp. 0810005
Author(s):  
查体博 Zha Tibo ◽  
罗林 Luo Lin ◽  
杨凯 Yang Kai ◽  
张渝 Zhang Yu ◽  
李金龙 Li Jinlong

Author(s):  
Khaled ELKarazle ◽  
Valliappan Raman ◽  
Patrick Then

Age estimation models can be employed in many applications, including soft biometrics, content access control, targeted advertising, and more. However, as some facial images are taken in unconstrained conditions, their quality degrades, which results in the loss of several essential ageing features. This study investigates how introducing a new layer of data processing based on a super-resolution generative adversarial network (SRGAN) model can influence the accuracy of age estimation by enhancing the quality of both the training and testing samples. Additionally, we introduce a novel convolutional neural network (CNN) classifier to distinguish between several age classes. We train one of our classifiers on a reconstructed version of the original dataset and compare its performance with an identical classifier trained on the original version of the same dataset. Our findings reveal that the classifier trained on the reconstructed dataset produces better classification accuracy, opening the door for more research into building data-centric machine learning systems.
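The evaluation protocol described here — train one identical classifier per dataset variant (original vs. SRGAN-reconstructed) and compare accuracy on a shared test split — can be sketched generically. The age-class boundaries and the `fit`/`predict` stand-in interface below are hypothetical illustrations, not the paper's.

```python
import numpy as np

def age_class(age, bounds=(13, 20, 37, 66)):
    # map an age in years to a discrete age class; the boundaries here
    # are illustrative, not taken from the study
    return int(np.searchsorted(bounds, age, side="right"))

def compare_variants(train_sets, test_x, test_y, fit, predict):
    # train the same classifier once per dataset variant and report
    # accuracy on a shared held-out test split, so any difference is
    # attributable to the training data (e.g. SR reconstruction) alone
    return {name: float(np.mean(predict(fit(x, y), test_x) == test_y))
            for name, (x, y) in train_sets.items()}
```

With a real CNN, `fit` would return the trained model and `predict` its class predictions; any classifier exposing that pair of callables plugs into the same comparison.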

