Efficient and Noise Robust Photon-Counting Imaging with First Signal Photon Unit Method

Photonics ◽  
2021 ◽  
Vol 8 (6) ◽  
pp. 229
Author(s):  
Kangjian Hua ◽  
Bo Liu ◽  
Zhen Chen ◽  
Liang Fang ◽  
Huachuang Wang

Efficient photon-counting imaging at low signal photon levels is challenging, especially when noise is intense. In this paper, we report a first signal photon unit (FSPU) method to rapidly reconstruct depth images from sparse signal photon counts with strong noise robustness. The method consists of an acquisition strategy and a reconstruction strategy. Different statistical properties of signal and noise are exploited to quickly distinguish signal units during acquisition. Three steps, including maximum likelihood estimation (MLE), anomaly censorship, and total variation (TV) regularization, are implemented to recover high-quality images. Simulations demonstrate that the method performs much better than traditional photon-counting methods such as the peak and cross-correlation methods, and it also outperforms the state-of-the-art unmixing method. In addition, it can reconstruct much clearer images than the first photon imaging (FPI) method when noise is severe. An experiment with our photon-counting LIDAR system was conducted, which indicates that our method has advantages in sparse photon-counting imaging applications, especially when the signal-to-noise ratio (SNR) is low. Without knowledge of the noise distribution, our method reconstructed the clearest depth image, with the lowest mean square error (MSE) of 0.011, even when the SNR was as low as −10.85 dB.
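The acquisition idea can be sketched as follows, under the assumption that signal photons cluster in one time bin while background counts are uniform in time; the histogram size, Poisson false-alarm level, and the depth-from-mean MLE below are illustrative choices, not the authors' actual FSPU parameters:

```python
import math

C = 3e8  # speed of light, m/s

def detect_signal_bin(timestamps, t_max, n_bins, p_false=1e-3):
    """Histogram photon arrival times and return the first bin whose count
    is improbably high under a uniform (noise-only) Poisson model."""
    width = t_max / n_bins
    counts = [0] * n_bins
    for t in timestamps:
        counts[min(int(t / width), n_bins - 1)] += 1
    lam = len(timestamps) / n_bins  # expected count per bin if all were noise
    # smallest k with P(Poisson(lam) >= k) < p_false
    k, cum = 0, 0.0
    while cum < 1.0 - p_false:
        cum += math.exp(-lam) * lam ** k / math.factorial(k)
        k += 1
    for i, c in enumerate(counts):
        if c >= k:
            return i, width
    return None, width

def mle_depth(timestamps, bin_idx, width):
    """For a symmetric pulse, the sample mean of the arrival times inside
    the detected bin is the ML estimate of the round-trip time."""
    sig = [t for t in timestamps
           if bin_idx * width <= t < (bin_idx + 1) * width]
    return C * (sum(sig) / len(sig)) / 2
```

For instance, five signal photons clustered near 100 ns among 20 uniformly spread noise counts are flagged as the signal bin, and the mean arrival time then converts to depth via c·t/2.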

Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5548
Author(s):  
Milan Smetana ◽  
Lukas Behun ◽  
Daniela Gombarska ◽  
Ladislav Janousek

This study concerns the solution of the inverse problem in eddy-current non-destructive evaluation of material defects. A new inverse algorithm incorporating three methods is proposed: the wavelet transform of sensed eddy-current responses, complemented by principal component analysis and followed by neural network classification. The goal is to increase the noise robustness of the evaluation. The proposed inverse algorithm is tested using real eddy-current response data gained from artificial electro-discharge-machined notches made in austenitic stainless-steel biomaterial. Eddy-current responses due to the material defects are acquired using a newly developed eddy-current probe that separately senses three spatial components of the perturbed electromagnetic field. The presented results clearly show that the error in evaluating material defect depth using the proposed algorithm is less than 10% even when the signal-to-noise ratio is as low as 10 dB.
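As an illustration of the first stage of such a pipeline, a wavelet decomposition of a sensed response can be written in a few lines; the Haar basis and the choice to keep only the coarse approximation as the noise-robust feature vector are illustrative assumptions, since the abstract does not specify the wavelet used:

```python
def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: pairwise sums give
    the approximation (low-pass) and pairwise differences the detail
    (high-pass) coefficients, both scaled by 1/sqrt(2)."""
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def coarse_features(signal, levels=2):
    """Keep only the coarse approximation after `levels` decompositions,
    discarding fine-scale detail where sensor noise concentrates."""
    a = list(signal)
    for _ in range(levels):
        a, _ = haar_dwt(a)
    return a
```

In the full algorithm, such coefficients would then be reduced by principal component analysis before being fed to the neural network classifier.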


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ibtissame Khaoua ◽  
Guillaume Graciani ◽  
Andrey Kim ◽  
François Amblard

For a wide range of purposes, one faces the challenge of detecting light from extremely faint and spatially extended sources. In such cases, detector noises dominate over the photon noise of the source, and quantum detectors in photon-counting mode are generally the best option. Here, we combine a statistical model with an in-depth analysis of detector noises and calibration experiments, and we show that visible light can be detected with an electron-multiplying charge-coupled device (EM-CCD) at a signal-to-noise ratio (SNR) of 3 for fluxes less than $30\,\text{photon}\,\text{s}^{-1}\,\text{cm}^{-2}$. For green photons, this corresponds to $12\,\text{aW}\,\text{cm}^{-2} \approx 9 \times 10^{-11}$ lux, i.e. 15 orders of magnitude less than typical daylight. The strong nonlinearity of the SNR with the sampling time leads to a dynamic range of detection of 4 orders of magnitude. To detect possibly varying light fluxes, we operate in conditions of maximal detectivity $\mathcal{D}$ rather than maximal SNR. Given the quantum efficiency $QE(\lambda)$ of the detector, we find $\mathcal{D} = 0.015\,\text{photon}^{-1}\,\text{s}^{1/2}\,\text{cm}$, and a non-negligible sensitivity to blackbody radiation for T > 50 °C. This work should help design highly sensitive luminescence detection methods and develop experiments to explore dynamic phenomena involving ultra-weak luminescence in biology, chemistry, and material sciences.
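The quoted conversion from photon flux to power density follows directly from the photon energy E = hc/λ; a quick check, assuming 532 nm for "green" (the exact wavelength is not stated above):

```python
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def flux_to_irradiance(photon_flux_cm2, wavelength_m):
    """Power density (W cm^-2) of a photon flux (photons s^-1 cm^-2):
    each photon carries E = h*c/lambda joules."""
    return photon_flux_cm2 * H * C / wavelength_m

p = flux_to_irradiance(30, 532e-9)   # ~1.1e-17 W cm^-2, i.e. ~11-12 aW cm^-2
```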


Author(s):  
Сергей Клавдиевич Абрамов ◽  
Виктория Валерьевна Абрамова ◽  
Сергей Станиславович Кривенко ◽  
Владимир Васильевич Лукин

The article analyzes the efficiency and expedience of applying filtering based on the discrete cosine transform (DCT) to one-dimensional signals distorted by white Gaussian noise with a known or a priori estimated variance. It is shown that efficiency varies within wide limits depending on the input signal-to-noise ratio and the complexity of the processed signal. A method is proposed for predicting filtering efficiency according to traditional quantitative criteria: the ratio of mean square error to the variance of the additive noise, and the improvement of the signal-to-noise ratio. Forecasting is performed based on dependences obtained by regression analysis. These dependences can be described by simple functions of several types, whose parameters are determined by least-squares fitting. It is shown that for sufficiently accurate prediction, only one statistical parameter calculated in the DCT domain needs to be evaluated beforehand (before filtering), and this parameter can be calculated over a relatively small number of non-overlapping or partially overlapping blocks of standard size (for example, 32 samples). Variations of the efficiency criteria across a set of realizations are analyzed, and the factors that influence prediction accuracy are studied. It is demonstrated that the filtering efficiency can be forecast for several possible values of the DCT-filter parameter used for threshold setting, and the best value can then be recommended for practical use. An example is given of using such an adaptation procedure to set the filter parameter for processing an ECG signal that was not used in determining the regression dependences. As a result of adaptation, the filtering efficiency can be substantially increased; the benefit can reach 0.5–1 dB.
An advantage of the proposed prediction and adaptation procedures is their universality: they can be applied to different types of signals and different signal-to-noise ratios.
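A toy version of the DCT-domain hard thresholding that such filters perform can be sketched as follows; the unnormalized DCT-II/inverse pair and the threshold rule β·σ with β = 2.6 (a value commonly used for DCT-based filters) are illustrative assumptions, not the article's exact formulation:

```python
import math

def dct2(x):
    """Unnormalized DCT-II: X_k = sum_n x_n cos(pi*k*(2n+1)/(2N))."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N)) for k in range(N)]

def idct2(X):
    """Inverse of the unnormalized DCT-II above."""
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                            for k in range(1, N))) * 2 / N for n in range(N)]

def dct_denoise(x, sigma, beta=2.6):
    """Hard-threshold the AC coefficients at beta*sigma (scaled for the
    unnormalized transform) and invert; the DC term is always kept."""
    X = dct2(x)
    thr = beta * sigma * math.sqrt(len(x) / 2)
    X = [X[0]] + [c if abs(c) > thr else 0.0 for c in X[1:]]
    return idct2(X)
```

The statistical parameter the article uses for prediction would be computed from such block-wise DCT coefficients before any filtering is applied.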


2020 ◽  
Vol 6 (3) ◽  
pp. 11
Author(s):  
Naoyuki Awano

Depth sensors are important in several fields to recognize real space. However, there are cases where many depth values in a depth image captured by a sensor are missing because the depths of distal objects are not always captured. This often occurs when a low-cost depth sensor or structured-light depth sensor is used. It also occurs frequently in applications where depth sensors are used to replicate human vision, e.g., when the sensors are mounted in head-mounted displays (HMDs). One ideal inpainting (repair or restoration) approach for depth images with large missing areas, such as partial foreground depths, is to inpaint only the foreground; however, conventional inpainting studies have attempted to inpaint entire images. Thus, under the assumption of an HMD-mounted depth sensor, we propose a method to partially inpaint and reconstruct an RGB-D depth image so as to preserve foreground shapes. The proposed method comprises a smoothing process for noise reduction, filling of defects in the foreground area, and refinement of the filled depths. Experimental results demonstrate that the inpainted results produced by the proposed method preserve object shapes in the foreground area, and that the inpainted areas are accurate with respect to the real depth in terms of the peak signal-to-noise ratio metric.
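A minimal sketch of the defect-filling step, assuming missing depths are marked with 0 and filled by iteratively averaging valid 4-neighbours; the authors' actual filling and refinement rules are more elaborate than this:

```python
def fill_depth(depth, missing=0):
    """Iteratively replace missing depth values with the mean of their
    valid 4-neighbours until no missing pixel with a valid neighbour remains."""
    h, w = len(depth), len(depth[0])
    d = [row[:] for row in depth]
    changed = True
    while changed:
        changed = False
        for y in range(h):
            for x in range(w):
                if d[y][x] == missing:
                    nb = [d[ny][nx]
                          for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                          if 0 <= ny < h and 0 <= nx < w and d[ny][nx] != missing]
                    if nb:
                        d[y][x] = sum(nb) / len(nb)
                        changed = True
    return d
```

Restricting such filling to a foreground mask, rather than the whole image, is what distinguishes the foreground-preserving approach described above.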


2020 ◽  
Vol 8 (9) ◽  
pp. 678
Author(s):  
Nan Zou ◽  
Zhenqi Jia ◽  
Jin Fu ◽  
Jia Feng ◽  
Mengqi Liu

Considering the requirement of near-field calibration under strong underwater multipath conditions, a high-precision geometric calibration method based on maximum likelihood estimation is proposed. It can be used for both auxiliary-calibration and self-calibration. Based on the near-field geometry error model, the objective function of the nonlinear optimization problem is constructed using the unconditional maximum likelihood estimator. The influence of multipath on geometric calibration is studied: the strong reflections are treated as coherent sources, and a compensation strategy for auxiliary-calibration is realized. An optimization method (differential evolution, DE) is used to solve for the geometry errors and the sources' positions. The proposed method is compared with the eigenvector method. Simulation results show that it is more accurate than the eigenvector method, especially at high signal-to-noise ratio (SNR) and in multipath environments. Experimental results further verify its effectiveness.
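The DE optimizer used to minimise the objective can be sketched generically; this is a minimal DE/rand/1/bin loop with illustrative population size, mutation factor F, and crossover rate CR, not the paper's tuned settings:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           iters=100, seed=0):
    """Minimal DE/rand/1/bin minimiser: mutate three distinct members,
    crossover with the target, and keep the trial if it improves the cost."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(p) for p in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [a[d] + F * (b[d] - c[d]) if rng.random() < CR else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            fc = f(trial)
            if fc < cost[i]:
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]
```

Minimising a simple quadratic such as (x − 2)² + (y + 1)² converges to (2, −1); in the paper, the objective would instead be the negative log-likelihood derived from the near-field geometry error model.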


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7052
Author(s):  
Pei-Chun Su ◽  
Elsayed Z. Soliman ◽  
Hau-Tieng Wu

An automatic, accurate T-wave end (T-end) annotation for the electrocardiogram (ECG) has several important clinical applications. While several algorithms have been proposed, their performance usually deteriorates when the signal is noisy. Therefore, we need new techniques to support noise robustness in T-end detection. We propose a new algorithm based on a signal quality index (SQI) for the T-end, coined tSQI, and on optimal shrinkage (OS). For segments with low tSQI, OS is applied to enhance the signal-to-noise ratio (SNR). We validated the proposed method using eleven short-term ECG recordings from the QT database available at Physionet, as well as four 14-day ECG recordings that were visually annotated at a central ECG core laboratory. We evaluated the correlation between the real-world signal quality for the T-end and tSQI, and the robustness of the proposed algorithm to various additive noises of different types and SNRs. The performance of the proposed algorithm on arrhythmic signals was also illustrated on the MITDB arrhythmia database. The labeled signal quality is well captured by tSQI, and the proposed OS denoising helps stabilize existing T-end detection algorithms under noisy conditions by decreasing the mean detection error. Even when applied to ECGs with arrhythmia, the proposed algorithm still performed well when a proper metric was applied. We proposed a new T-end annotation algorithm whose efficiency and accuracy make it a good fit for clinical applications and large ECG databases. This study is limited by the small size of the annotated datasets.
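The gating logic can be illustrated with a hypothetical quality index; the normalised cross-correlation against a clean template below is not the paper's actual tSQI definition, only a stand-in to show how low-quality segments are routed through the denoiser:

```python
def quality_index(segment, template):
    """Hypothetical SQI: Pearson correlation between an ECG segment and a
    clean template; near 1 means clean, near 0 means noisy or flat."""
    n = min(len(segment), len(template))
    s, t = segment[:n], template[:n]
    ms, mt = sum(s) / n, sum(t) / n
    num = sum((a - ms) * (b - mt) for a, b in zip(s, t))
    den = (sum((a - ms) ** 2 for a in s) * sum((b - mt) ** 2 for b in t)) ** 0.5
    return num / den if den else 0.0

def maybe_denoise(segment, template, denoise, threshold=0.8):
    """Apply the denoiser only to segments whose quality index falls
    below the threshold, leaving clean segments untouched."""
    return denoise(segment) if quality_index(segment, template) < threshold else segment
```

In the paper, optimal shrinkage plays the role of `denoise`, applied only where tSQI indicates poor signal quality.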


2020 ◽  
Vol 10 (6) ◽  
pp. 1930
Author(s):  
Chengkun Fu ◽  
Huaibin Zheng ◽  
Gao Wang ◽  
Yu Zhou ◽  
Hui Chen ◽  
...  

Three-dimensional (3D) imaging under conditions of weak light and low signal-to-noise ratio is a challenging task. In this paper, a 3D imaging scheme based on time-correlated single-photon counting technology is proposed and demonstrated. The scheme, which is composed of a pulsed laser, a scanning mirror, single-photon detectors, and a time-correlated single-photon counting module, employs time-correlated single-photon counting technology for 3D LiDAR (Light Detection and Ranging). Aided by range-gated technology, experiments show that the proposed scheme can image the object when the signal-to-noise ratio is decreased to −13 dB and improve the structural similarity index of the imaging results by a factor of 10. We then show that the proposed scheme can image the object in three dimensions with a lateral imaging resolution of 512 × 512 and an axial resolution of 4.2 mm in 6.7 s. Finally, a high-resolution 3D reconstruction of an object is also achieved using the photometric stereo algorithm.
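The core range measurement in such a TCSPC LiDAR reduces to locating the histogram peak and converting the round-trip time to distance; a minimal sketch (the bin width and bin-centre convention are illustrative):

```python
C = 2.998e8  # speed of light, m/s

def tof_to_range(histogram, bin_width_s):
    """Range from a TCSPC histogram: take the peak bin as the round-trip
    time-of-flight (bin centre) and halve the corresponding path length."""
    peak = max(range(len(histogram)), key=lambda i: histogram[i])
    t = (peak + 0.5) * bin_width_s
    return C * t / 2
```

Range gating, as used above, simply discards counts outside a time window around the expected peak before this conversion.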

