Nonlinear wavefront reconstruction with convolutional neural networks for Fourier-based wavefront sensors

2020 ◽  
Vol 28 (11) ◽  
pp. 16644
Author(s):  
R. Landman ◽  
S. Y. Haffert
Author(s):  
Robin Swanson ◽  
Kiriakos Kutulakos ◽  
Suresh Sivanandam ◽  
Masen Lamb ◽  
Carlos Correia

Mathematics ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 15
Author(s):  
Sergio Luis Suárez Gómez ◽  
Francisco García Riesgo ◽  
Carlos González Gutiérrez ◽  
Luis Fernando Rodríguez Ramos ◽  
Jesús Daniel Santos

Mathematical modelling methods have several limitations when addressing complex physics whose calculations require a considerable amount of time. This is the case in adaptive optics, a set of techniques used to process and improve the resolution of astronomical images acquired by ground-based telescopes, which suffer from aberrations introduced by the atmosphere. In adaptive optics, the wavefront is usually measured with sensors and then reconstructed and corrected by means of a deformable mirror. This work presents an improvement in wavefront reconstruction, using convolutional neural networks (CNNs) on data obtained from the Tomographic Pupil Image Wavefront Sensor (TPI-WFS). The TPI-WFS is a modified curvature sensor designed to measure atmospheric turbulence from defocused wavefront images. CNNs are well known for their capacity to model and predict complex systems. The results of the presented reconstructor, named Convolutional Neural Networks in Defocused Pupil Images (CRONOS), are compared with those of the Wave-Front Reconstruction (WFR) software originally developed for TPI-WFS measurements, which is based on a least-squares fit. The performance of both reconstruction techniques is tested for 153 Zernike modes and with simulated noise. CRONOS outperformed WFR on most turbulence profiles, with the largest gains on the most turbulent ones; overall, it obtained around a 7% improvement in wavefront restoration and an 18% improvement in Strehl ratio.
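The WFR baseline described above recovers modal coefficients by a least-squares fit. A minimal NumPy sketch of that idea, with all sizes, the random stand-in Zernike basis `Z`, and the noise level invented for illustration (the real TPI-WFS pipeline and CRONOS are not reproduced here):

```python
import numpy as np

# Hypothetical least-squares modal fit: given phase samples phi at pupil
# points and a matrix Z whose columns are Zernike modes evaluated at those
# points, recover coefficients a by solving min_a ||Z a - phi||^2.
rng = np.random.default_rng(0)
n_points, n_modes = 500, 15                   # illustrative sizes only
Z = rng.standard_normal((n_points, n_modes))  # stand-in for an evaluated Zernike basis
a_true = rng.standard_normal(n_modes)         # "true" modal coefficients
phi = Z @ a_true + 0.01 * rng.standard_normal(n_points)  # noisy phase samples

# Solve the overdetermined system in the least-squares sense.
a_hat, *_ = np.linalg.lstsq(Z, phi, rcond=None)
```

With low noise the recovered `a_hat` matches `a_true` closely; a CNN reconstructor such as CRONOS replaces this linear fit with a learned nonlinear mapping from defocused pupil images to the same modal coefficients.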


2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classifying degraded images is important in practice because real images are often degraded by compression, noise, blurring, etc. Nevertheless, most research on image classification focuses only on clean images without any degradation. Some papers have proposed deep convolutional neural networks that combine an image restoration network with a classification network to classify degraded images. This paper proposes an alternative approach that uses the degraded image together with an additional degradation parameter for classification. The proposed classification network takes two inputs: the degraded image and the degradation parameter. A network that estimates the degradation parameter is also incorporated for cases where the degradation parameters of the input images are unknown. Experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained on degraded images alone.
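The core architectural idea, a classifier conditioned on both the image and a degradation parameter, can be sketched with a toy one-layer forward pass in NumPy. This is not the authors' network; the sizes, random weights, and the scalar-parameter concatenation are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    """Numerically stable softmax over class scores."""
    e = np.exp(x - x.max())
    return e / e.sum()

def classify(image, degradation_param, W, b):
    """Toy two-input classifier head: flatten the image, append the
    scalar degradation parameter, apply one dense layer + softmax."""
    features = np.concatenate([image.ravel(), [degradation_param]])
    return softmax(W @ features + b)

n_classes, img_shape = 10, (8, 8)               # made-up sizes
W = 0.1 * rng.standard_normal((n_classes, 8 * 8 + 1))
b = np.zeros(n_classes)

image = rng.standard_normal(img_shape)          # stand-in "degraded" image
probs = classify(image, degradation_param=0.5, W=W, b=b)
```

When the degradation parameter is unknown, the paper's estimation network would supply the `degradation_param` value instead of it being given.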


Author(s):  
Edgar Medina ◽  
Roberto Campos ◽  
Jose Gabriel R. C. Gomes ◽  
Mariane R. Petraglia ◽  
Antonio Petraglia
