Sub-Diffraction Visible Imaging Using Macroscopic Fourier Ptychography and Regularization by Denoising

Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 3154 ◽  
Author(s):  
Zhixin Li ◽  
Desheng Wen ◽  
Zongxi Song ◽  
Gang Liu ◽  
Weikang Zhang ◽  
...  

Imaging beyond the diffraction limit is of great significance for optical systems. Fourier ptychography (FP) is a novel coherent imaging technique that can achieve this goal, and it is widely used in microscopic imaging. Most phase retrieval algorithms for FP reconstruction are based on Gaussian measurements and cannot be extended straightforwardly to a long-range, sub-diffraction imaging setup because of corruption by laser speckle noise. In this work, a new FP reconstruction framework is proposed for macroscopic visible imaging. Compared with existing work, the reweighted amplitude flow algorithm is adopted for better signal modeling, and the Regularization by Denoising (RED) scheme is introduced to reduce the effects of speckle. Experiments demonstrate that the proposed method obtains state-of-the-art recovered results on both visual and quantitative metrics without increasing the computation cost, and it is flexible for real imaging applications.
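For orientation, the minimal sketch below combines an amplitude-flow data-fidelity step with a RED regularization step in a single gradient update. The operator names (`A`, `At`), the denoiser, and the step sizes are illustrative assumptions rather than the authors' implementation, and the reweighting of the amplitude-flow gradient is omitted for brevity.

```python
import numpy as np

def red_amplitude_flow_step(x, y_amp, A, At, denoise, mu=0.5, lam=0.1):
    """One hedged iteration: amplitude-flow data step plus RED regularization.

    x       : current complex object estimate
    y_amp   : measured Fourier amplitudes
    A, At   : forward operator and its adjoint (e.g. masked FFTs)
    denoise : any off-the-shelf image denoiser acting on x
    """
    z = A(x)
    # Amplitude-flow gradient: enforce the measured modulus, keep the current phase.
    grad_data = At(z - y_amp * np.exp(1j * np.angle(z)))
    # RED gradient: x minus its denoised version (Regularization by Denoising).
    grad_red = x - denoise(x)
    return x - mu * (grad_data + lam * grad_red)
```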

2016 ◽  
Vol 49 (4) ◽  
pp. 1245-1252 ◽  
Author(s):  
Stefano Marchesini ◽  
Hari Krishnan ◽  
Benedikt J. Daurer ◽  
David A. Shapiro ◽  
Talita Perciano ◽  
...  

Ever brighter light sources, fast parallel detectors and advances in phase retrieval methods have made ptychography a practical and popular imaging technique. Compared to previous techniques, ptychography provides superior robustness and resolution at the expense of more advanced and time-consuming data analysis. By taking advantage of massively parallel architectures, high-throughput processing can expedite this analysis and provide microscopists with immediate feedback. These advances allow real-time imaging at wavelength-limited resolution, coupled with a large field of view. This article describes a set of algorithmic and computational methodologies used at the Advanced Light Source and US Department of Energy light sources. These are packaged as a CUDA-based software environment named SHARP (http://camera.lbl.gov/sharp), aimed at providing state-of-the-art high-throughput ptychography reconstructions for the coming era of diffraction-limited light sources.
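SHARP itself is a CUDA package; purely for orientation, the sketch below shows the classic ePIE-style object/probe update that high-throughput ptychography engines parallelize over scan positions. Variable names and step sizes are illustrative and do not reflect SHARP's internal API.

```python
import numpy as np

def epie_update(obj, probe, diff_amp, pos, alpha=1.0, beta=1.0):
    """One ePIE-style update at a single scan position (illustrative only).

    obj      : complex object estimate (2-D array)
    probe    : complex probe estimate (2-D array)
    diff_amp : measured diffraction amplitude at this scan position
    pos      : (row, col) top-left corner of the probe on the object
    """
    r, c = pos
    h, w = probe.shape
    patch = obj[r:r + h, c:c + w]
    exit_wave = patch * probe
    # Replace the Fourier modulus with the measurement, keep the phase.
    F = np.fft.fft2(exit_wave)
    exit_new = np.fft.ifft2(diff_amp * np.exp(1j * np.angle(F)))
    diff = exit_new - exit_wave
    # Standard ePIE object and probe refinement steps.
    obj[r:r + h, c:c + w] += alpha * np.conj(probe) / (np.abs(probe).max() ** 2) * diff
    probe += beta * np.conj(patch) / (np.abs(patch).max() ** 2) * diff
    return obj, probe
```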


2019 ◽  
Vol 9 (20) ◽  
pp. 4377
Author(s):  
Li ◽  
Wen ◽  
Song ◽  
Jiang ◽  
Zhang ◽  
...  

Imaging correlography, an effective method for long-distance imaging, recovers an object using only knowledge of the Fourier modulus, without needing phase information. It is not sensitive to atmospheric turbulence or optical imperfections. However, the unreliability of traditional phase retrieval algorithms in imaging correlography has hindered its development. In this work, we combine imaging correlography with ptychography to overcome such obstacles. Instead of detecting the whole object at once, the object is measured part by part with a probe moving in a ptychographic manner. A flexible optimization framework is proposed to reconstruct the object rapidly and reliably within a few iterations. In addition, a novel image-space denoising regularization term is incorporated into the loss function to reduce the effects of input noise and improve the perceptual quality of the recovered image. Experiments demonstrate that four-fold resolution gains are achievable with the proposed imaging method, and satisfactory results on both visual and quantitative metrics are obtained with one-sixth of the measurements required by conventional imaging correlography. Therefore, the proposed imaging technique is well suited to long-range practical applications.
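The idea of alternating between a per-probe Fourier-modulus constraint and an image-space denoising step can be sketched as follows. The probe model (binary masks), step size, and denoiser are placeholders for illustration, not the authors' optimization framework.

```python
import numpy as np

def correlography_ptycho_iter(obj, probes, moduli, denoise, tau=0.2):
    """One illustrative outer iteration over all probe positions.

    obj     : current real-valued object estimate (2-D array)
    probes  : list of binary masks selecting the illuminated parts
    moduli  : list of measured Fourier moduli, one per probe position
    denoise : any image-space denoiser acting as the regularizer
    """
    for probe, mod in zip(probes, moduli):
        part = obj * probe
        F = np.fft.fft2(part)
        # Fourier-modulus constraint: keep the phase, replace the modulus.
        part_new = np.real(np.fft.ifft2(mod * np.exp(1j * np.angle(F))))
        obj = obj + probe * (part_new - part)   # overwrite only the illuminated part
    # Image-space denoising regularization pulls the estimate toward a clean image.
    return (1 - tau) * obj + tau * denoise(obj)
```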


2019 ◽  
Vol 11 (24) ◽  
pp. 2921 ◽  
Author(s):  
Jingyu Li ◽  
Ying Li ◽  
Yayuan Xiao ◽  
Yunpeng Bai

In order to remove speckle noise from original synthetic aperture radar (SAR) images effectively and efficiently, this paper proposes a hybrid dilated residual attention network (HDRANet) with residual learning for SAR despeckling. First, HDRANet employs hybrid dilated convolution (HDC) in a lightweight network architecture to enlarge the receptive field and aggregate global information. Then, a simple yet effective attention module, the convolutional block attention module (CBAM), is integrated into the proposed model to constitute a residual HDC attention block through a skip connection, which further enhances the representation power and performance of the model. Extensive experimental results on synthetic and real SAR images demonstrate the superior performance of HDRANet over state-of-the-art methods in terms of quantitative metrics and visual quality.
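A hedged PyTorch sketch of such a residual HDC attention block is given below. The channel count, the dilation pattern (1, 2, 5), and the CBAM reduction ratio are assumptions chosen to illustrate the structure, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional block attention: channel attention followed by spatial attention."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Conv2d(ch, ch // reduction, 1),
                                 nn.ReLU(inplace=True),
                                 nn.Conv2d(ch // reduction, ch, 1))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Channel attention from global average- and max-pooled descriptors.
        ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True)) +
                           self.mlp(x.amax((2, 3), keepdim=True)))
        x = x * ca
        # Spatial attention from channel-wise average and max maps.
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sa

class ResidualHDCAttentionBlock(nn.Module):
    """Hybrid dilated convolutions (dilations 1, 2, 5) + CBAM + skip connection."""
    def __init__(self, ch=64):
        super().__init__()
        layers = []
        for d in (1, 2, 5):   # HDC dilation pattern chosen to avoid gridding artifacts
            layers += [nn.Conv2d(ch, ch, 3, padding=d, dilation=d),
                       nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers)
        self.cbam = CBAM(ch)

    def forward(self, x):
        return x + self.cbam(self.body(x))   # residual (skip) connection
```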


2020 ◽  
Vol 8 (1) ◽  
pp. 84-90
Author(s):  
R. Lalchhanhima ◽  
Debdatta Kandar ◽  
R. Chawngsangpuii ◽  
Vanlalmuansangi Khenglawt ◽  
...  

Fuzzy C-Means (FCM) is an unsupervised algorithm for the automatic clustering of data. Synthetic aperture radar (SAR) image segmentation is a challenging task because of the presence of speckle noise. Therefore, the segmentation process cannot rely on intensity information alone but must consider several derived features to obtain satisfactory results. In this paper, the fuzzy nature of classification is exploited for unsupervised region segmentation using FCM. Different features are obtained by filtering the image with different spatial filters and are selected as segmentation criteria. Segmentation performance is evaluated by comparing accuracy against different state-of-the-art techniques proposed recently.
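As a reference point, a plain numpy implementation of FCM on per-pixel feature vectors is sketched below; in this setting each row of `X` would stack the responses of the chosen spatial filters for one pixel. The cluster count, fuzzifier, and stopping rule are illustrative defaults, not the paper's settings.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, eps=1e-6):
    """Plain FCM on feature vectors X of shape (n_pixels, n_features)."""
    rng = np.random.default_rng(0)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance of every pixel to every cluster centre.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
        # Standard FCM membership update: u_ik proportional to d_ik^(-2/(m-1)).
        U_new = 1.0 / (d ** (2 / (m - 1)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < eps:
            break
        U = U_new
    return U.argmax(axis=1), centers               # hard labels and cluster centres
```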


2019 ◽  
Vol 11 (16) ◽  
pp. 1933 ◽  
Author(s):  
Yangyang Li ◽  
Ruoting Xing ◽  
Licheng Jiao ◽  
Yanqiao Chen ◽  
Yingte Chai ◽  
...  

Polarimetric synthetic aperture radar (PolSAR) image classification is a recent technology with great practical value in the field of remote sensing. However, because data collection is time-consuming and labor-intensive, few labeled datasets are available. Furthermore, most available state-of-the-art classification methods suffer heavily from speckle noise. To solve these problems, this paper proposes a novel semi-supervised algorithm based on self-training and superpixels. First, the Pauli-RGB image is over-segmented into superpixels to obtain a large number of homogeneous areas. Then, features that can mitigate the effects of speckle noise are obtained by spatial weighting within each superpixel. Next, the training set is expanded iteratively using a semi-supervised unlabeled-sample selection strategy that carefully exploits the spatial relations provided by the superpixels. In addition, a stacked sparse auto-encoder is self-trained on the expanded training set to obtain the classification results. Experiments on two typical PolSAR datasets verified the method's capability of suppressing speckle noise and showed excellent classification performance with limited labeled data.
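Two ingredients of this pipeline, superpixel-level spatial weighting of features and spatially constrained pseudo-label expansion, can be sketched as follows. The simple mean weighting and the confidence threshold are illustrative stand-ins for the paper's actual selection strategy.

```python
import numpy as np

def superpixel_average_features(features, superpixels):
    """Replace each pixel's feature vector by the mean over its superpixel,
    suppressing speckle within homogeneous regions (simple spatial weighting).

    features    : array of shape (H, W, F)
    superpixels : integer label map of shape (H, W)
    """
    smoothed = np.zeros_like(features)
    for label in np.unique(superpixels):
        mask = superpixels == label
        smoothed[mask] = features[mask].mean(axis=0)
    return smoothed

def expand_training_set(probs, superpixels, labeled_mask, threshold=0.95):
    """Self-training selection: add confident pixels whose superpixel already
    contains labeled pixels (an illustrative spatial criterion).

    probs        : classifier probabilities of shape (H, W, C)
    labeled_mask : boolean map of currently labeled pixels
    """
    conf = probs.max(axis=-1)
    pred = probs.argmax(axis=-1)
    new_mask = np.zeros_like(labeled_mask)
    for label in np.unique(superpixels):
        sp = superpixels == label
        if not labeled_mask[sp].any():
            continue                     # only grow around existing labels
        new_mask |= sp & (conf > threshold) & ~labeled_mask
    return pred, new_mask
```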


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Rujia Li ◽  
Liangcai Cao

Abstract Phase retrieval seeks to reconstruct the phase from the measured intensity, which is an ill-posed problem. A phase retrieval problem can be solved with physical constraints by modulating the investigated complex wavefront. Orbital angular momentum has recently been employed as a type of reliable modulation. The topological charge l is robust during propagation through atmospheric turbulence. In this work, topological modulation is used to solve the phase retrieval problem. Topological modulation offers an effective dynamic range of intensity constraints for reconstruction. The maximum intensity value of the spectrum is reduced by a factor of 173 under topological modulation when l is 50. The phase is iteratively reconstructed without a priori knowledge. The stagnation problem during the iteration can be avoided using multiple topological modulations.
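A toy numpy sketch of topological (vortex-phase) modulation is given below: a spiral phase of charge l is applied to the wavefront before the intensity measurement, spreading the central spectral peak and easing the detector's dynamic-range requirement. The plane-wave test field is an assumption for illustration; the ratio it prints will not reproduce the factor of 173 reported for the authors' object.

```python
import numpy as np

def topological_modulation(field, l):
    """Apply a spiral (vortex) phase of topological charge l to a complex field
    and return the modulated Fourier intensity used as the measurement."""
    n = field.shape[0]
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    theta = np.arctan2(y, x)
    modulated = field * np.exp(1j * l * theta)        # vortex phase mask
    spectrum = np.fft.fftshift(np.fft.fft2(modulated))
    return np.abs(spectrum) ** 2

# Larger |l| spreads the central peak, lowering the maximum spectral intensity.
field = np.ones((512, 512), dtype=complex)            # toy plane-wave object
ratio = topological_modulation(field, 0).max() / topological_modulation(field, 50).max()
print(f"peak intensity reduced by a factor of {ratio:.1f} for l = 50")
```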


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
J. Buijs ◽  
J. van der Gucht ◽  
J. Sprakel

Abstract Laser speckle imaging is a powerful imaging technique that visualizes microscopic motion within turbid materials. Currently, two methods are widely used to analyze speckle data: one is fast but qualitative, the other quantitative but computationally expensive. We have developed a new processing algorithm based on the fast Fourier transform, which converts raw speckle patterns into maps of microscopic motion and is both fast and quantitative, providing a dynamic spectrum of the material over a frequency range spanning several decades. In this article we show how to apply this algorithm and how to measure a diffusion coefficient with it. We show that this method is quantitative and several orders of magnitude faster than the existing quantitative method. Finally, we harness the potential of this new approach by constructing a portable laser speckle imaging setup that performs quantitative data processing in real time on a tablet.
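The core idea of FFT-based speckle processing, turning a stack of raw speckle frames into a per-pixel temporal spectrum of the dynamics, can be sketched as below. This is a simplified illustration of the approach, not the authors' exact algorithm or normalization.

```python
import numpy as np

def speckle_power_spectrum(stack, frame_rate):
    """Per-pixel temporal power spectrum of a speckle movie.

    stack      : array of shape (n_frames, H, W) with raw speckle intensities
    frame_rate : frames per second, which sets the frequency axis
    Returns the frequency axis and a per-pixel power spectral density,
    i.e. a map of microscopic dynamics over several frequency decades.
    """
    fluct = stack - stack.mean(axis=0)                 # remove the static component
    spectrum = np.fft.rfft(fluct, axis=0)              # temporal FFT per pixel
    psd = np.abs(spectrum) ** 2 / stack.shape[0]
    freqs = np.fft.rfftfreq(stack.shape[0], d=1.0 / frame_rate)
    return freqs, psd
```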


Author(s):  
Hengyi Cai ◽  
Hongshen Chen ◽  
Yonghao Song ◽  
Xiaofang Zhao ◽  
Dawei Yin

Humans benefit from previous experiences when taking actions. Similarly, related examples from the training data also provide exemplary information for neural dialogue models when responding to a given input message. However, effectively fusing such exemplary information into dialogue generation is non-trivial: useful exemplars need to be not only literally similar but also topically related to the given context. Noisy exemplars impair the neural dialogue models' understanding of the conversation topics and can even corrupt response generation. To address these issues, we propose an exemplar-guided neural dialogue generation model in which exemplar responses are retrieved in terms of both text similarity and topic proximity through a two-stage exemplar retrieval model. In the first stage, a small subset of conversations is retrieved from the training set given a dialogue context. These candidate exemplars are then finely ranked by topical proximity to choose the best-matched exemplar response. To further induce the neural dialogue generation model to consult the exemplar response and the conversation topics more faithfully, we introduce a multi-source sampling mechanism that provides the dialogue model with both local exemplary semantics and global topical guidance during decoding. Empirical evaluations on a large-scale conversation dataset show that the proposed approach significantly outperforms the state-of-the-art in terms of both quantitative metrics and human evaluations.
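The two-stage retrieval idea, coarse candidates by literal similarity and a rerank by topic proximity, can be illustrated with the sketch below. The TF-IDF retrieval, cosine reranking, and the caller-supplied topic vectors are stand-ins for the paper's neural retrieval and ranking models.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_exemplar(context, corpus_contexts, corpus_responses,
                      topic_vecs, context_topic_vec, k=20):
    """Two-stage retrieval: literal similarity first, topic proximity second.

    topic_vecs / context_topic_vec are topic distributions (e.g. from a topic
    model); how they are produced is left to the caller in this sketch.
    """
    # Stage 1: coarse candidates by TF-IDF cosine similarity to the context.
    vec = TfidfVectorizer().fit(corpus_contexts + [context])
    sims = cosine_similarity(vec.transform([context]),
                             vec.transform(corpus_contexts))[0]
    candidates = np.argsort(-sims)[:k]
    # Stage 2: rerank the candidates by proximity of their topic distributions.
    topic_sims = cosine_similarity([context_topic_vec],
                                   topic_vecs[candidates])[0]
    best = candidates[int(np.argmax(topic_sims))]
    return corpus_responses[best]
```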

