Quantization-Based Image Watermarking by Using a Normalization Scheme in the Wavelet Domain

Information ◽  
2018 ◽  
Vol 9 (8) ◽  
pp. 194 ◽  
Author(s):  
Jinhua Liu ◽  
Qiu Tu ◽  
Xinye Xu

To improve the invisibility and robustness of quantization-based image watermarking, we developed an improved quantization watermarking method based on the wavelet transform and a normalization strategy. During watermark encoding, a sorting strategy over the wavelet coefficients is used to calculate the quantization step size. Robustness stems from the normalization-based embedding and from controlling the amount of modification applied to each wavelet coefficient, by choosing a proper quantization parameter in high-entropy image regions. Watermark detection does not require the original unmarked image, and the probabilities of false alarm and of detection are examined through experimental simulation. Experimental results show the effectiveness of the proposed watermarking method and its stronger robustness compared with alternative quantization-based algorithms.
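The quantization step described above follows the general quantization index modulation (QIM) pattern. A minimal sketch (not the authors' code; an arbitrary fixed step size stands in for the paper's sorting-derived, entropy-dependent one):

```python
import numpy as np

def qim_embed(coeff, bit, step):
    """Embed one bit in a wavelet coefficient by snapping it to one of
    two interleaved quantizer lattices (offset by +/- step/4)."""
    offset = step / 4 if bit else -step / 4
    return np.round((coeff - offset) / step) * step + offset

def qim_detect(coeff, step):
    """Blind detection: choose the lattice closest to the coefficient,
    so the original unmarked image is not needed."""
    d0 = abs(coeff - qim_embed(coeff, 0, step))
    d1 = abs(coeff - qim_embed(coeff, 1, step))
    return int(d1 < d0)
```

In the paper the step size is computed per region from sorted wavelet coefficients of the normalized image; here it would simply be passed in.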

Mathematics ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. 1506 ◽  
Author(s):  
Kai Zhou ◽  
Yunming Zhang ◽  
Jing Li ◽  
Yantong Zhan ◽  
Wenbo Wan

In robust image watermarking, watermarks are usually embedded in the direct current (DC) coefficients of the discrete cosine transform (DCT) domain, since DC coefficients have a larger perceptual capacity than any alternating current (AC) coefficient. However, DC coefficients are often excluded from watermark embedding to avoid block artifacts in the watermarked image. Studies of human vision suggest that exploiting perceptual characteristics can yield better image fidelity. From this perspective, we propose a novel spatial–perceptual embedding for color image watermarking guided by a robust just-noticeable difference (JND) model. A logarithmic transform function is used for quantization embedding, and an adaptive quantization step is modeled by incorporating partial AC coefficients. The novelty and effectiveness of the proposed framework stem from the JND perceptual guidance applied to spatial pixels. Experiments validate that the proposed watermarking algorithm achieves significantly better performance.


Author(s):  
Surya Prasada Rao Borra ◽  
Kongara Ramanjaneyulu ◽  
K. Raja Rajeswari

An image watermarking method using the Discrete Wavelet Transform (DWT) and a Genetic Algorithm (GA) is presented for applications such as content authentication and copyright protection. The method is robust to various image attacks, and the cover image is not required for watermark detection/extraction. Grayscale images of size 512 × 512 serve as cover images and binary images of size 64 × 64 as watermarks in the simulation of the proposed method. Watermark embedding is done in the DWT domain: third- and second-level detail sub-band coefficients are selected for further processing and arranged into blocks. The block size and the number of blocks depend on the size of the watermark, and one watermark bit is embedded in each block. An inverse DWT then yields the watermarked image, which is used for transmission and distribution. In case of a dispute over ownership, the hidden watermark is decoded to resolve it. A threshold-based method is used for watermark extraction. Control parameters are identified and optimized with the GA for targeted performance in terms of PSNR and NCC. Performance comparison with existing works shows substantial improvement.
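The one-bit-per-block embedding with threshold-based blind extraction could look roughly like the following (a hypothetical simplification: a plain mean shift stands in for the paper's embedding rule, and the scalar `alpha` for its GA-optimized strength parameter):

```python
import numpy as np

def embed_block_bit(block, bit, alpha=5.0):
    """Embed one watermark bit in a block of detail-sub-band DWT
    coefficients by shifting the block mean up or down by `alpha`."""
    return block + (alpha if bit else -alpha)

def extract_block_bit(block, threshold=0.0):
    """Blind threshold-based extraction: detail coefficients are
    roughly zero-mean, so the sign of the block mean recovers the bit."""
    return int(np.mean(block) > threshold)
```

The GA would then search over `alpha` (and the threshold) to trade PSNR against NCC under attacks.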


Entropy ◽  
2018 ◽  
Vol 20 (12) ◽  
pp. 945 ◽  
Author(s):  
Jinhua Liu ◽  
Shan Wu ◽  
Xinye Xu

Conventional quantization-based watermarking can be easily estimated by averaging over a set of watermarked signals, owing to its uniform quantization approach. Moreover, it neglects the visual perceptual characteristics of the host signal, so perceptible distortions may be introduced in some parts of the host signal. In this paper, inspired by Watson's entropy masking model and logarithmic quantization index modulation (LQIM), a logarithmic quantization-based image watermarking method is developed using the wavelet transform. The method improves robustness through a logarithmic quantization strategy that embeds the watermark data into image blocks with high entropy values. The main significance of this work is that the trade-off between invisibility and robustness is addressed simply by the logarithmic quantization approach, which applies the entropy masking model and a distortion-compensated scheme to the watermark embedding. In this manner, the optimal quantization parameter, obtained by minimizing the quantization distortion function, effectively controls the watermark strength. For watermark decoding, we model the wavelet coefficients of the image with the generalized Gaussian distribution (GGD) and calculate the bit error probability of the proposed method. The performance of the proposed method is analyzed and verified by simulation on real images. Experimental results demonstrate that the proposed method offers imperceptibility and strong robustness against attacks including JPEG compression, additive white Gaussian noise (AWGN), Gaussian filtering, salt-and-pepper noise, scaling, and rotation.
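The LQIM step can be sketched as a mu-law-style log compression, dithered uniform quantization, and expansion back (generic LQIM with assumed parameters `mu` and `xs`; the paper's entropy masking and distortion compensation are omitted):

```python
import numpy as np

def lqim_embed(x, bit, step, mu=10.0, xs=255.0):
    """Compress the host sample with a mu-law style log map, quantize
    on one of two dither-shifted lattices, then expand back."""
    c = np.log(1 + mu * abs(x) / xs) / np.log(1 + mu)     # log compression
    d = step / 2 if bit else 0.0
    cq = np.floor((c - d) / step + 0.5) * step + d        # dithered quantizer
    return np.sign(x) * (xs / mu) * ((1 + mu) ** cq - 1)  # log expansion

def lqim_detect(y, step, mu=10.0, xs=255.0):
    """Blind detection in the compressed domain: nearest lattice wins."""
    c = np.log(1 + mu * abs(y) / xs) / np.log(1 + mu)
    d0 = abs(c - np.floor(c / step + 0.5) * step)
    d1 = abs(c - (np.floor((c - step / 2) / step + 0.5) * step + step / 2))
    return int(d1 < d0)
```

Because the quantizer operates on log-compressed magnitudes, the embedding distortion scales with the host amplitude, which is what gives LQIM its perceptual advantage over uniform QIM.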


Electronics ◽  
2019 ◽  
Vol 8 (12) ◽  
pp. 1462 ◽  
Author(s):  
Jinhua Liu ◽  
Yunbo Rao ◽  
Yuanyuan Huang

Imperceptibility and robustness are two fundamental but competing requirements of any digital image watermarking method. To improve the invisibility and robustness of multiplicative image watermarking, a complex-wavelet-based watermarking algorithm is proposed that uses human visual texture masking and a visual saliency model. First, image blocks with high entropy are selected as the watermark embedding space to achieve imperceptibility. Then, an adaptive multiplicative embedding strength factor is designed by utilizing texture masking and visual saliency to enhance robustness. Furthermore, the complex wavelet coefficients of the low-frequency sub-band are modeled by a Gaussian distribution, and a watermark decoding method based on the maximum likelihood criterion is proposed. Finally, the effectiveness of the watermarking is validated through experiments using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). Simulation results demonstrate the invisibility of the proposed method and its strong robustness against various attacks, including additive noise, image filtering, JPEG compression, amplitude scaling, rotation, and combinational attacks.
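Multiplicative embedding takes the general form y = x·(1 + α·b) with b ∈ {−1, +1}; a minimal sketch, with a scalar `alpha` standing in for the paper's adaptive, masking-and-saliency-derived strength factor:

```python
import numpy as np

def multiplicative_embed(coeffs, bits, alpha):
    """Multiplicative watermarking: scale each transform coefficient
    up or down by a factor (1 +/- alpha) according to the bit."""
    b = 2 * np.asarray(bits) - 1          # map {0,1} -> {-1,+1}
    return coeffs * (1 + alpha * b)
```

Unlike additive rules, the distortion here is proportional to the coefficient magnitude, which is why the strength factor can be pushed higher in textured, non-salient regions.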


2013 ◽  
Vol 416-417 ◽  
pp. 1210-1213
Author(s):  
Hua Wen Ai ◽  
Ping Feng Liu ◽  
Sheng Cong Dong

To resist print-and-scan attacks, a digital halftone image watermarking algorithm based on edge detection and improved error diffusion is proposed. The edges of the gray image are obtained with the Canny detector, and the noise visibility function value of each edge point is calculated. These values are sorted in ascending order, and a number of points equal to the watermark length is selected as the embedding locations. As the grayscale image is converted to a halftone image by the improved error diffusion algorithm, the binary watermark is embedded at the selected edge positions. The watermark is scrambled with the Arnold transform before embedding to improve its security. Experimental results show that the algorithm offers good resistance to print-and-scan attacks, as well as to cropping (shearing), noise, and JPEG compression attacks.
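The halftoning stage is error diffusion; the baseline Floyd–Steinberg version (the paper's improved variant and the edge-position bit embedding are not reproduced here) can be sketched as:

```python
import numpy as np

def floyd_steinberg(img):
    """Standard Floyd-Steinberg error diffusion: binarize each pixel
    and push the quantization error onto unprocessed neighbors with
    weights 7/16, 3/16, 5/16, 1/16."""
    f = img.astype(float).copy()
    h, w = f.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = f[y, x]
            new = 255 if old >= 128 else 0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16
    return out
```

Watermarking would then flip the binarization decision at the selected edge positions, letting the diffused error absorb the resulting distortion.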


2021 ◽  
pp. 1-12
Author(s):  
Junqing Ji ◽  
Xiaojia Kong ◽  
Yajing Zhang ◽  
Tongle Xu ◽  
Jing Zhang

The traditional blind source separation (BSS) algorithm mainly deals with signal separation under a noiseless model and does not apply to data with a low signal-to-noise ratio (SNR). To solve this problem, an adaptive variable-step-size natural gradient BSS algorithm based on an improved wavelet threshold is proposed in this paper. First, an improved wavelet threshold method is used to reduce the noise of the signal. Second, the wavelet coefficient layers with obvious periodicity are denoised using a morphological component analysis (MCA) algorithm, and the processed wavelet coefficients are recombined to obtain the ideal model. Third, the recombined signal is pre-whitened, and a new separation-matrix update formula for the natural gradient algorithm is constructed by defining a new separation-degree estimation function. Finally, the adaptive variable-step-size natural gradient BSS algorithm is used to separate the noise-reduced signal. The results show that the algorithm not only adaptively adjusts the step size according to different signals, but also improves the convergence speed, stability, and separation accuracy.
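The first stage is wavelet-domain thresholding. The classical soft-threshold rule that improved threshold functions typically interpolate away from can be sketched as (illustrative; this is not the paper's improved function):

```python
import numpy as np

def soft_threshold(coeffs, thr):
    """Soft wavelet thresholding: zero out coefficients below `thr`
    and shrink the rest toward zero by `thr`."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)
```

Improved threshold functions keep the soft rule's continuity at the threshold while reducing its constant bias on large coefficients, moving them closer to the hard rule.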


Author(s):  
Evan S. Bentley ◽  
Richard L. Thompson ◽  
Barry R. Bowers ◽  
Justin G. Gibbs ◽  
Steven E. Nelson

Abstract Previous work has considered tornado occurrence with respect to radar data, both WSR-88D and mobile research radars, and a few studies have examined techniques to potentially improve tornado warning performance. To date, though, there has been little work focusing on systematic, large-sample evaluation of National Weather Service (NWS) tornado warnings with respect to radar-observable quantities and the near-storm environment. In this work, three full years (2016–2018) of NWS tornado warnings across the contiguous United States were examined, in conjunction with supporting data in the few minutes preceding warning issuance, or tornado formation in the case of missed events. The investigation herein examines WSR-88D and Storm Prediction Center (SPC) mesoanalysis data associated with these tornado warnings, with comparisons made to the current Warning Decision Training Division (WDTD) guidance. Combining low-level rotational velocity and the significant tornado parameter (STP), as used in prior work, shows promise as a means to estimate tornado warning performance, as well as relative changes in performance as criteria thresholds vary. For example, low-level rotational velocity peaking in excess of 30 kt (15 m s⁻¹), in a near-storm environment that is not prohibitive for tornadoes (STP > 0), results in an increased probability of detection and reduced false alarms compared to observed NWS tornado warning metrics. Tornado warning false alarms can also be reduced by limiting warnings with weak (<30 kt), broad (>1 nm) circulations in a poor (STP = 0) environment, by careful elimination of velocity data artifacts like sidelobe contamination, and through greater scrutiny of human-based tornado reports in otherwise questionable scenarios.


2018 ◽  
Vol 33 (6) ◽  
pp. 1501-1511 ◽  
Author(s):  
Harold E. Brooks ◽  
James Correia

Abstract Tornado warnings are one of the flagship products of the National Weather Service. We update the time series of various metrics of performance in order to provide baselines over the 1986–2016 period for lead time, probability of detection, false alarm ratio, and warning duration. We have used metrics (mean lead time for tornadoes warned in advance, fraction of tornadoes warned in advance) that work in a consistent way across the official changes in policy for warning issuance, as well as across points in time when unofficial changes took place. The mean lead time for tornadoes warned in advance was relatively constant from 1986 to 2011, while the fraction of tornadoes warned in advance increased through about 2006, and the false alarm ratio slowly decreased. The largest changes in performance take place in 2012 when the default warning duration decreased, and there is an apparent increased emphasis on reducing false alarms. As a result, the lead time, probability of detection, and false alarm ratio all decrease in 2012. Our analysis is based, in large part, on signal detection theory, which separates the quality of the warning system from the threshold for issuing warnings. Threshold changes lead to trade-offs between false alarms and missed detections. Such changes provide further evidence for changes in what the warning system as a whole considers important, as well as highlighting the limitations of measuring performance by looking at metrics independently.
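The two headline metrics above come directly from a 2×2 warning/event contingency table; a minimal sketch:

```python
def warning_metrics(hits, misses, false_alarms):
    """Probability of detection (fraction of tornadoes warned in
    advance) and false alarm ratio (fraction of warnings with no
    verifying tornado), from a 2x2 contingency table."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far
```

Raising the warning threshold moves events from the false-alarm cell to the miss cell, which is exactly the POD/FAR trade-off that signal detection theory separates from the underlying quality of the warning system.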


2017 ◽  
Vol 14 ◽  
pp. 187-194 ◽  
Author(s):  
Stefano Federico ◽  
Marco Petracca ◽  
Giulia Panegrossi ◽  
Claudio Transerici ◽  
Stefano Dietrich

Abstract. This study investigates the impact of the assimilation of total lightning data on the precipitation forecast of a numerical weather prediction (NWP) model. The impact of the lightning data assimilation, which uses water vapour substitution, is investigated at different forecast time ranges, namely 3, 6, 12, and 24 h, to determine how long and to what extent the assimilation affects the precipitation forecast of long-lasting rainfall events (> 24 h). The methodology developed in a previous study is slightly modified here and applied to twenty case studies that occurred over Italy, using a mesoscale model run at convection-permitting horizontal resolution (4 km). The performance is quantified by dichotomous statistical scores computed using a dense rain gauge network over Italy. Results show the important impact of lightning assimilation on the precipitation forecast, especially at the 3 and 6 h ranges. The probability of detection (POD), for example, increases by 10 % for the 3 h forecast with lightning data assimilation, compared to the simulation without it, for all precipitation thresholds considered. The equitable threat score (ETS) is also improved by the lightning assimilation, especially for thresholds below 40 mm day⁻¹. Results show that the forecast time range is very important, because performance decreases steadily and substantially with forecast time: the POD, for example, is improved by only 1–2 % for the 24 h forecast, compared to 10 % for the 3 h forecast. The impact of false alarms on model performance is also evidenced by this study.
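The ETS cited above comes from the same dichotomous contingency table as the POD, with a correction for hits expected by chance; a minimal sketch:

```python
def equitable_threat_score(hits, misses, false_alarms, total):
    """ETS: the threat score (CSI) adjusted for the number of hits a
    random forecast would score, given `total` verified cases."""
    hits_random = (hits + misses) * (hits + false_alarms) / total
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)
```

Because the random-hit correction grows with the event and forecast frequencies, ETS penalizes over-forecasting at low precipitation thresholds more than the plain threat score does.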


Cryptography ◽  
2020 ◽  
pp. 480-497
Author(s):  
Lin Gao ◽  
Tiegang Gao ◽  
Jie Zhao

This paper proposes a reversible medical image watermarking scheme using the Redundant Discrete Wavelet Transform (RDWT) and sub-sampling. To meet the high demand for perceptual quality, the proposed scheme embeds the watermark by modifying the RDWT coefficients. A sub-sampling scheme is introduced to enhance the embedding capacity. Moreover, to meet security requirements, a PWLCM-based image encryption algorithm is introduced to encrypt the image after watermark embedding. The experimental results suggest that the proposed scheme not only meets the high demand for perceptual quality, but also achieves a higher embedding capacity than the former DWT-based scheme; the encryption scheme also protects the image content efficiently.

