Fingerprint Image De-Noising Using Wavelet Transform with the Comparison of Filtered and After Compression Filtered Noise Image

2019 ◽  
Vol 11 (11) ◽  
pp. 1125-1133
Author(s):  
Munmun Mondal ◽  
Md. Rafiqul Islam

Fingerprints are becoming part of our day-to-day lives, from the home to the workplace, and nowadays they are given prime importance for security and safety purposes. Fingerprint identification is one of the most popular biometric technologies and is widely used in criminal investigations, commercial applications, and so on. The performance of a fingerprint image-matching algorithm depends heavily on the quality of the input fingerprint images, so it is very important to acquire good-quality images. The use of the wavelet transform improves the quality of an image and reduces the noise level. In this research, different compression techniques are used to address this problem, and different wavelet transforms are applied for the compression of fingerprint images. Image quality before and after compression is measured by Mean Squared Error (MSE), Signal-to-Noise Ratio (SNR), and Peak Signal-to-Noise Ratio (PSNR). This work was done in MATLAB using the DSP and Wavelet toolboxes. Finally, we compared the filtered noise image method with the compression filtered noise image method.
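The three quality metrics named in this abstract are standard and can be sketched in a few lines of Python with NumPy (the authors worked in MATLAB; this is an illustrative re-implementation, not their code):

```python
import numpy as np

def mse(original, processed):
    """Mean Squared Error between two equally sized images."""
    diff = original.astype(np.float64) - processed.astype(np.float64)
    return np.mean(diff ** 2)

def snr_db(original, processed):
    """Signal-to-Noise Ratio in dB: signal power over error power."""
    sig = np.mean(original.astype(np.float64) ** 2)
    return 10 * np.log10(sig / mse(original, processed))

def psnr_db(original, processed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB, assuming 8-bit images by default."""
    return 10 * np.log10(peak ** 2 / mse(original, processed))
```

Higher SNR/PSNR (and lower MSE) indicate that the compressed or denoised image is closer to the reference.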

2014 ◽  
Vol 2 (2) ◽  
pp. 47-58
Author(s):  
Ismail Sh. Baqer

A two-level image quality enhancement is proposed in this paper. In the first level, the Dualistic Sub-Image Histogram Equalization (DSIHE) method decomposes the original image into two sub-images based on the median of the original image. The second level deals with spike-shaped noise that may appear in the image after processing. We present three methods of image enhancement, GHE, LHE, and the proposed DSIHE, that improve the visual quality of images. A comparative calculation is carried out on the above-mentioned techniques to examine objective and subjective image-quality parameters, e.g., Peak Signal-to-Noise Ratio (PSNR), entropy H, and Mean Squared Error (MSE), to measure the quality of grayscale enhanced images. For handling gray-level images, conventional histogram equalization methods such as GHE and LHE tend to shift the mean brightness of an image to the middle of the gray-level range, limiting their appropriateness for contrast enhancement in consumer electronics such as TV monitors. The DSIHE method seems to overcome this disadvantage, as it tends to preserve both brightness and contrast enhancement. Experimental results show that the proposed technique gives better results in terms of discrete entropy, Signal-to-Noise Ratio, and Mean Squared Error than the global and local histogram-based equalization methods.
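The core DSIHE idea, splitting at the median and equalizing each sub-image into its own half of the gray range so the output median stays near the input median, can be sketched as follows (a minimal version without the paper's second-level spike handling):

```python
import numpy as np

def dsihe(img):
    """Dualistic Sub-Image Histogram Equalization (sketch).
    Splits the image at its median gray level and equalizes each
    sub-image into its own half of the 8-bit output range, which
    tends to preserve the mean brightness."""
    img = img.astype(np.uint8)
    m = int(np.median(img))
    out = np.empty_like(img)
    lo, hi = img <= m, img > m
    if lo.any():
        # equalize the lower sub-image into [0, m]
        hist, _ = np.histogram(img[lo], bins=256, range=(0, 256))
        cdf = np.cumsum(hist) / lo.sum()
        out[lo] = np.round(cdf[img[lo]] * m).astype(np.uint8)
    if hi.any():
        # equalize the upper sub-image into [m + 1, 255]
        hist, _ = np.histogram(img[hi], bins=256, range=(0, 256))
        cdf = np.cumsum(hist) / hi.sum()
        out[hi] = (m + 1 + np.round(cdf[img[hi]] * (254 - m))).astype(np.uint8)
    return out
```

Because each sub-image is remapped within its own half of the range, pixels below the median stay below it and pixels above stay above, unlike GHE, which can shift the overall brightness toward mid-gray.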


Author(s):  
Mourad Talbi ◽  
Med Salim Bouhlel

Background: In this paper, we propose a secure image watermarking technique which is applied to grayscale and color images. It consists of applying the SVD (Singular Value Decomposition) in the Lifting Wavelet Transform domain to embed a speech image (the watermark) into the host image. Methods: It also uses a signature in the embedding and extraction steps. Its performance is justified by the computation of PSNR (Peak Signal-to-Noise Ratio), SSIM (Structural Similarity), SNR (Signal-to-Noise Ratio), SegSNR (Segmental SNR), and PESQ (Perceptual Evaluation of Speech Quality). Results: The PSNR and SSIM are used for evaluating the perceptual quality of the watermarked image compared to the original image. The SNR, SegSNR, and PESQ are used for evaluating the perceptual quality of the reconstructed or extracted speech signal compared to the original speech signal. Conclusion: The results obtained from the computation of PSNR, SSIM, SNR, SegSNR, and PESQ show the performance of the proposed technique.
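The SVD embedding step can be illustrated with the classic singular-value perturbation scheme, shown here on a plain matrix; the authors apply it to Lifting Wavelet Transform subbands with an added signature, which this sketch omits:

```python
import numpy as np

def svd_embed(host, watermark, alpha=0.05):
    """Embed a watermark vector into the singular values of the host
    (sketch of SVD-domain watermarking, not the authors' full scheme).
    Returns the watermarked matrix and the key needed for extraction."""
    U, S, Vt = np.linalg.svd(host, full_matrices=False)
    Sw = S + alpha * watermark                # perturb singular values
    Uw, Sww, Vwt = np.linalg.svd(np.diag(Sw), full_matrices=False)
    marked = U @ np.diag(Sww) @ Vt            # rebuild with perturbed values
    return marked, (Uw, Vwt, S)

def svd_extract(marked, key, alpha=0.05):
    """Recover the watermark using the stored key matrices."""
    Uw, Vwt, S = key
    _, Sm, _ = np.linalg.svd(marked, full_matrices=False)
    D = Uw @ np.diag(Sm) @ Vwt                # undo the inner SVD
    return (np.diag(D) - S) / alpha
```

Because only the singular values are perturbed, the watermarked matrix stays visually close to the host for small `alpha` (a parameter chosen here for illustration).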


10.14311/606 ◽  
2004 ◽  
Vol 44 (4) ◽  
Author(s):  
V. Matz ◽  
M. Kreidl ◽  
R. Šmíd

In ultrasonic testing it is very important to recognize the fault echoes buried in a noisy signal. A fault echo characterizes a flaw in the material. An important requirement on ultrasonic signal filtering is zero time shift, because the position of the ultrasonic echoes is essential. This requirement is met using the discrete wavelet transform (DWT), which is used to improve the signal-to-noise ratio. This paper evaluates the quality of filtering using the discrete wavelet transform. Computer simulations of the proposed algorithms are also presented.
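A one-level Haar DWT with soft thresholding illustrates why wavelet filtering introduces no time shift: the transform is orthogonal and applied in place, so echo positions are preserved exactly (a minimal sketch; the paper's wavelet choice and threshold rule are not specified here):

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar DWT denoising with soft thresholding (sketch).
    Analysis, thresholding of detail coefficients, and synthesis are
    all sample-aligned, so the filtered signal has zero time shift."""
    x = np.asarray(signal, dtype=float)
    n = len(x) - len(x) % 2                  # even-length working region
    a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
    y = np.empty(n)
    y[0::2] = (a + d) / np.sqrt(2)           # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    out = x.copy()
    out[:n] = y
    return out
```

Small detail coefficients (noise) are suppressed while large ones (echoes) pass through with reduced amplitude but unchanged position.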


Segmentation separates an image into different sections based on the desire of the user. Segmentation is carried out on an image until the region of interest (ROI) of an object is extracted. Segmentation reliability indicates how well the various segmentation techniques perform. In this paper, various segmentation methods are proposed, and the quality of segmentation is verified using quality metrics such as Mean Squared Error (MSE), Signal-to-Noise Ratio (SNR), Peak Signal-to-Noise Ratio (PSNR), Edge Preservation Index (EPI), and the Structural Similarity Index Metric (SSIM).
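Of the metrics listed, the Edge Preservation Index is the least standardized; one common definition is the correlation between high-pass-filtered versions of the two images, sketched here with a Laplacian filter (an assumption, since the paper's exact EPI formula is not given):

```python
import numpy as np

def epi(original, processed):
    """Edge Preservation Index (sketch): correlation coefficient between
    the Laplacian-filtered (high-pass) versions of the two images.
    Values near 1 mean edges survived the processing."""
    def laplacian(img):
        img = img.astype(float)
        out = np.zeros_like(img)
        out[1:-1, 1:-1] = (4 * img[1:-1, 1:-1]
                           - img[:-2, 1:-1] - img[2:, 1:-1]
                           - img[1:-1, :-2] - img[1:-1, 2:])
        return out
    a = laplacian(original)
    b = laplacian(processed)
    a -= a.mean()
    b -= b.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b))
```

An unchanged image scores EPI = 1, while blurring or noise that disturbs edges pulls the score below 1.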


2018 ◽  
Vol 7 (3.31) ◽  
pp. 1
Author(s):  
Bavanari Satyanarayana ◽  
Aama Abdulelah

In this work, the Discrete Laguerre Wavelet Transform (DLWT) was used in the processing of images: the images were divided into blocks, each block with dimensions equal to those of the matrix obtained from the DLWT. Peak Signal-to-Noise Ratio (PSNR) and Mean Squared Error (MSE) were used to evaluate the results. Examples were used to demonstrate the efficiency of the proposed method, and good, convincing results were obtained.


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5540
Author(s):  
Nayeem Hasan ◽  
Md Saiful Islam ◽  
Wenyu Chen ◽  
Muhammad Ashad Kabir ◽  
Saad Al-Ahmadi

This paper proposes an encryption-based image watermarking scheme using a combination of second-level discrete wavelet transform (2DWT) and discrete cosine transform (DCT) with an auto extraction feature. The 2DWT has been selected based on the analysis of the trade-off between imperceptibility of the watermark and embedding capacity at various levels of decomposition. The DCT operation is applied to the selected area to gather the image coefficients into a single vector using a zig-zag operation. We have utilized the same random bit sequence as the watermark and seed for the embedding zone coefficient. The quality of the reconstructed image was measured according to bit correction rate, peak signal-to-noise ratio (PSNR), and similarity index. Experimental results demonstrated that the proposed scheme is highly robust under different types of image-processing attacks. Several image attacks, e.g., JPEG compression, filtering, noise addition, cropping, sharpening, and bit-plane removal, were examined on watermarked images, and the results of our proposed method outstripped existing methods, especially in terms of the bit correction ratio (100%), which is a measure of bit restoration. The results were also highly satisfactory in terms of the quality of the reconstructed image, which demonstrated high imperceptibility in terms of peak signal-to-noise ratio (PSNR ≥ 40 dB) and structural similarity (SSIM ≥ 0.9) under different image attacks.
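The zig-zag gathering of DCT coefficients into a single vector can be sketched as follows (a generic JPEG-style scan; the paper does not specify its exact ordering, so this is illustrative):

```python
import numpy as np

def zigzag(block):
    """Return the elements of a 2-D block in JPEG-style zig-zag order,
    gathering coefficients into a single 1-D vector. Anti-diagonals
    (constant i + j) alternate direction: odd sums run top-to-bottom,
    even sums bottom-to-top."""
    h, w = block.shape
    order = sorted(((i, j) for i in range(h) for j in range(w)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return np.array([block[i, j] for i, j in order])
```

After a DCT, this ordering places the low-frequency coefficients (top-left of the block) at the start of the vector, which is where watermark bits are commonly embedded for robustness.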


This research presents a distinctive technique of image steganography. The procedure used for the study is the Fractional Random Wavelet Transform (FRWT). The difference between the wavelet transform and the FRWT is that the latter comprises all the benefits and features of the wavelet transform with additional properties, randomness and a fractional-order parameter, built into it. As a consequence of the fractional value and the randomness, the algorithm gains strength and additional security layers for steganography. The stego image obtained after applying the algorithm contains not only the cover image but also the concealed image. Despite the overlapping of the two images, no diminution in image quality is perceived. Through this steganographic process, we aim for improvements in both security and capacity. After running the algorithm, metrics such as Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) are calculated. The intended algorithm shows a rise in robustness and imperceptibility, and it can also withstand various transformations, such as scaling, translation, and rotation, better than previously existing algorithms. The results demonstrate that the suggested algorithm is indeed effective.


2019 ◽  
Vol 829 ◽  
pp. 252-257
Author(s):  
Azhari ◽  
Yohanes Hutasoit ◽  
Freddy Haryanto

CBCT is a modern technology for producing radiographic images in dentistry. Image quality is very important for clinicians interpreting the image, so that the resulting diagnosis becomes more accurate and working time is minimized. This research aimed to assess image quality using a blank acrylic polymethylmethacrylate (PMMA, (C5H8O2)n) phantom with a density of 1.185 g/cm3 to evaluate the homogeneity and uniformity of the image produced. The acrylic phantom was supported by a tripod and laid on the chin rest of the CBCT device; the phantom was then fixed, with its edge touching the bite block. X-ray exposures of the acrylic phantom were made at various kVp and mA settings: 80 to 90 kV in steps of 5 kV, with tube currents of 3, 5, and 7 mA. The exposure time was kept constant at 25 seconds. Samples were taken from the CBCT acrylic images, and five ROIs (Regions of Interest) were chosen for analysis. The ROIs were analyzed with the ImageJ® software to determine the influence of kVp and mA on image uniformity, noise, and SNR. The lowest kVp and mA settings yielded uniformity, homogeneity, and signal-to-noise ratio values of 11.22, 40.35, and 5.96, respectively. The highest kVp and mA settings yielded values of 16.96, 26.20, and 5.95, respectively. There were significant differences in image uniformity and homogeneity between the lowest and highest kVp and mA settings, as analyzed with ANOVA followed by a Student's t post-hoc test with α = 0.05. However, there was no significant difference in SNR according to the ANOVA analysis. The use of higher kVp and mA settings improved image homogeneity and uniformity compared to lower settings.
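The ROI-based SNR analysis described here can be sketched as follows. The abstract does not give the exact formulas used in ImageJ, so this assumes the common definition of SNR as the ratio of mean pixel value to its standard deviation within each ROI:

```python
import numpy as np

def roi_snr(image, rois):
    """Average SNR over a set of rectangular ROIs (sketch).
    Each ROI is (row0, row1, col0, col1); SNR per ROI is taken as
    mean / standard deviation of its pixel values, a common
    homogeneity measure for uniform phantom images."""
    img = np.asarray(image, dtype=float)
    snrs = []
    for r0, r1, c0, c1 in rois:
        patch = img[r0:r1, c0:c1]
        snrs.append(patch.mean() / patch.std())
    return np.mean(snrs)
```

Applied to the five ROIs of a uniform PMMA phantom image, a higher value indicates lower noise relative to the mean signal.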


2019 ◽  
Author(s):  
Wei Yi Lee ◽  
Rosita Hamidi ◽  
Deva Ghosh ◽  
Mohd Hafiz Musa

1988 ◽  
Vol 132 ◽  
pp. 35-38
Author(s):  
Dennis C. Ebbets ◽  
Sara R. Heap ◽  
Don J. Lindler

The G-HRS is one of four axial scientific instruments that will fly aboard the Hubble Space Telescope (ref 1,2). It will produce spectroscopic observations in the 1050 Å ≤ λ ≤ 3300 Å region with greater spectral, spatial, and temporal resolution than has been possible with previous space-based instruments. Five first-order diffraction gratings and one echelle provide three modes of spectroscopic operation with resolving powers of R = λ/Δλ = 2000, 20000, and 90000. Two magnetically focused, pulse-counting Digicon detectors, which differ only in the nature of their photocathodes, produce data whose photometric quality is usually determined by statistical noise in the signal (ref 3). Under ideal circumstances the signal-to-noise ratio increases as the square root of the exposure time. For some observations, detector dark counts, instrumental scattered light, or granularity in the pixel-to-pixel sensitivity will cause additional noise. The signal-to-noise ratio of the net spectrum will then depend on several parameters and will increase more slowly with exposure time. We have analyzed data from the ground-based calibration programs and have developed a theoretical model of the HRS performance (ref 4). Our results allow observing and data-reduction strategies to be optimized when factors other than photon statistics influence the photometric quality of the data.
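The noise behavior described above can be illustrated with a generic counting-detector SNR model. This is not the authors' HRS performance model (ref 4); the rate parameters and the multiplicative granularity term are illustrative assumptions:

```python
import numpy as np

def snr_with_noise_sources(count_rate, t, dark_rate=0.0, scatter_rate=0.0,
                           granularity=0.0):
    """Photometric SNR of a pulse-counting detector (sketch).
    With pure photon statistics the variance equals the signal, so
    SNR grows as sqrt(t); dark counts, scattered light, and a
    fixed-pattern granularity term slow that growth."""
    signal = count_rate * t                     # accumulated counts
    variance = (signal                          # Poisson photon noise
                + dark_rate * t                 # detector dark counts
                + scatter_rate * t              # instrumental scattered light
                + (granularity * signal) ** 2)  # pixel-to-pixel sensitivity
    return signal / np.sqrt(variance)
```

With only the first term present, quadrupling the exposure time doubles the SNR; with any of the other terms, the SNR increases more slowly, which is the effect the abstract describes.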

