Low-Complexity Rate-Distortion Optimization of Sampling Rate and Bit-Depth for Compressed Sensing of Images

Entropy ◽  
2020 ◽  
Vol 22 (1) ◽  
pp. 125
Author(s):  
Qunlin Chen ◽  
Derong Chen ◽  
Jiulu Gong ◽  
Jie Ruan

Compressed sensing (CS) offers a framework for image acquisition with excellent potential in image sampling and compression applications due to its sub-Nyquist sampling rate and low complexity. In engineering practice, the resulting CS samples are quantized to a finite number of bits for transmission. When the bit budget for image transmission is constrained, knowing how to choose the sampling rate and the number of bits per measurement (bit-depth) is essential for the quality of CS reconstruction. In this paper, we first present a bit-rate model that accounts for the compression performance of CS, quantization, and the entropy coder. The bit-rate model reveals the relationship between bit rate, sampling rate, and bit-depth. Then, we propose a relative peak signal-to-noise ratio (PSNR) model for evaluating distortion, which reveals the relationship between relative PSNR, sampling rate, and bit-depth. Finally, the optimal sampling rate and bit-depth are determined based on rate-distortion (RD) criteria using the bit-rate model and the relative PSNR model. Experimental results show that the actual bit rate obtained with the optimized sampling rate and bit-depth is very close to the target bit rate. Compared with the traditional CS coding method with a fixed sampling rate, the proposed method provides better rate-distortion performance, and the additional computation amounts to less than 1%.
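The joint choice described above can be sketched as a small search over candidate (sampling rate, bit-depth) pairs under a bit budget. The bit-rate and distortion models below are illustrative placeholders, as are all function names; the paper's actual models are more detailed:

```python
import itertools
import math

def bit_rate(sampling_rate, bit_depth, entropy_ratio=0.9):
    """Illustrative bit-rate model (bits per pixel): measurements per pixel
    times bits per measurement, scaled by an entropy-coding factor."""
    return sampling_rate * bit_depth * entropy_ratio

def relative_psnr(sampling_rate, bit_depth):
    """Toy distortion proxy: quality grows with the sampling rate, while
    coarse quantization caps it (a placeholder, not the paper's model)."""
    return 10 * math.log10(sampling_rate * (1 - 2 ** (-bit_depth)) + 1e-9)

def optimize(target_bpp):
    """Pick the (sampling rate, bit-depth) pair that maximizes the distortion
    proxy while staying within the target bit rate."""
    best = None
    for s, b in itertools.product([i / 100 for i in range(1, 51)], range(2, 13)):
        if bit_rate(s, b) <= target_bpp:
            q = relative_psnr(s, b)
            if best is None or q > best[0]:
                best = (q, s, b)
    return best[1], best[2]
```

An exhaustive grid search is used here for clarity; the paper's point is precisely that closed-form models make this selection cheap (under 1% extra computation).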

Entropy ◽  
2021 ◽  
Vol 23 (10) ◽  
pp. 1354
Author(s):  
Qunlin Chen ◽  
Derong Chen ◽  
Jiulu Gong

Block compressed sensing (BCS) is a promising technology for image sampling and compression in resource-constrained applications, but it needs to balance the sampling rate and quantization bit-depth under a bit-rate constraint. In this paper, we unify the commonly used CS quantization frameworks into a single framework, and a new bit-rate model and a model of the optimal bit-depth are proposed for this unified CS framework. The proposed bit-rate model reveals the relationship between the bit rate, sampling rate, and bit-depth based on the information entropy of the generalized Gaussian distribution. The optimal bit-depth model can predict the optimal bit-depth of CS measurements at a given bit rate. We then propose a general algorithm for choosing the sampling rate and bit-depth based on the proposed models. Experimental results show that the proposed algorithm achieves near-optimal rate-distortion performance for both the uniform quantization framework and the predictive quantization framework in BCS.
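The bit-rate model builds on the information entropy of the generalized Gaussian distribution (GGD), which is commonly fitted to CS measurements. A minimal sketch of the quantity such models start from, the GGD differential entropy h = 1/β + ln(2αΓ(1/β)/β) nats for scale α and shape β (a standard closed form, not the paper's full model):

```python
import math

def ggd_entropy_bits(alpha, beta):
    """Differential entropy (in bits) of a generalized Gaussian distribution
    with density f(x) = beta / (2*alpha*Gamma(1/beta)) * exp(-(|x|/alpha)**beta)."""
    nats = 1.0 / beta + math.log(2.0 * alpha * math.gamma(1.0 / beta) / beta)
    return nats / math.log(2)
```

For beta = 2 and alpha = sigma*sqrt(2) this reduces to the familiar Gaussian entropy 0.5*log2(2*pi*e*sigma^2), a useful sanity check.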


2018 ◽  
Vol 8 (2) ◽  
pp. 343-375 ◽  
Author(s):  
Sajjad Beygi ◽  
Shirin Jalali ◽  
Arian Maleki ◽  
Urbashi Mitra

Modern image and video compression codes employ elaborate structures in an effort to encode signals using a small number of bits. Compressed sensing (CS) recovery algorithms, on the other hand, use such structures to recover signals from a few linear observations. Despite the steady progress in the field of CS, the structures that are often used for signal recovery are still much simpler than those employed by state-of-the-art compression codes. The main goal of this paper is to bridge this gap by answering the following question: can one employ a compression code to build an efficient (polynomial-time) CS recovery algorithm? In response to this question, the compression-based gradient descent (C-GD) algorithm is proposed. C-GD, a low-complexity iterative algorithm, is able to employ a generic compression code for CS and therefore enlarges the set of structures usable in CS to those used by compression codes. Three theoretical contributions are provided: a convergence analysis of C-GD, a characterization of the required number of samples as a function of the rate-distortion function of the compression code, and a robustness analysis of C-GD to additive white Gaussian noise and other non-idealities in the measurement process. Finally, the presented simulation results show that, in image CS, using compression codes such as JPEG2000, C-GD outperforms state-of-the-art methods by about 2–3 dB in peak signal-to-noise ratio, on average.


2020 ◽  
Vol 12 (7) ◽  
pp. 120 ◽  
Author(s):  
Thanuja Mallikarachchi ◽  
Dumidu Talagala ◽  
Hemantha Kodikara Arachchi ◽  
Chaminda Hewage ◽  
Anil Fernando

Video playback on mobile consumer electronic (CE) devices is plagued by fluctuations in network bandwidth and by limits on the processing and energy available at individual devices. State-of-the-art adaptive streaming mechanisms address the first aspect, yet efficient control of the decoding complexity and the energy used when decoding the video remains unaddressed. The end-users' quality of experience (QoE), however, depends on the capability to adapt the bit streams to both of these constraints (i.e., network bandwidth and the device's energy availability). As a solution, this paper proposes an encoding framework capable of generating video bit streams with arbitrary bit rates and decoding-complexity levels using a decoding-complexity–rate–distortion model. The proposed algorithm allocates rate and decoding-complexity levels across frames and coding tree units (CTUs) and adaptively derives the CTU-level coding parameters to achieve the imposed targets with minimal distortion. The experimental results reveal that the proposed algorithm can achieve the target bit rate and decoding complexity with 0.4% and 1.78% average errors, respectively, for multiple bit rate and decoding-complexity levels. The proposed algorithm also demonstrates stable frame-wise rate and decoding-complexity control when achieving a decoding-complexity reduction of 10.11 %/dB. The resulting decoding-complexity reduction translates into an overall energy-consumption reduction of up to 10.52 %/dB for a 1 dB peak signal-to-noise ratio (PSNR) quality loss compared to HM 16.0 encoded bit streams.


A new progressive image transmission system is proposed in this paper for effective usage of communication bandwidth. First, a superpixel-based saliency detection method is used to segment the foreground region from the background region, because it yields more saliency information from an image with the benefit of color contrast. Then, an Integer Wavelet Transform (IWT) is applied to the foreground image, which delivers good image quality together with a decent compression ratio. Additionally, an optimized neural network and a modified Set Partitioning in Hierarchical Trees (SPIHT) algorithm are applied to the background image, which delivers good rate-distortion properties in a noise-free environment and also enhances the visual experience. In the modified SPIHT, the sub-tree roots are not excluded, which helps to encode and quantize the wavelet coefficients effectively; it also preserves more information at image edges, which improves the subjective visual experience. Experimental results showed that the proposed work improves the Peak Signal-to-Noise Ratio (PSNR) by up to 5 dB compared to existing work.
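The IWT maps integers to integers, so the foreground can be reconstructed exactly. The simplest instance is one level of the integer Haar (S-transform) lifting step, sketched below; the paper does not specify which wavelet it uses, so this is an illustration of the transform family, not the paper's exact filter:

```python
def haar_iwt_step(signal):
    """One level of the integer Haar lifting transform: integer averages
    (approximation) and differences (detail), exactly invertible."""
    s = [(a + b) >> 1 for a, b in zip(signal[::2], signal[1::2])]  # approximation
    d = [a - b for a, b in zip(signal[::2], signal[1::2])]         # detail
    return s, d

def haar_iwt_inverse(s, d):
    """Exact inverse: floor((a+b)/2) = b + floor((a-b)/2), so b is recovered
    from the average and the difference with integer arithmetic only."""
    out = []
    for avg, diff in zip(s, d):
        b = avg - (diff >> 1)
        a = b + diff
        out.extend([a, b])
    return out
```

Perfect invertibility under pure integer arithmetic is what makes the IWT attractive for progressive transmission: detail coefficients can be sent incrementally without any rounding drift.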


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6217
Author(s):  
Sovannarith Heng ◽  
Phet Aimtongkham ◽  
Van Nhan Vo ◽  
Tri Gia Nguyen ◽  
Chakchai So-In

The transmission of high-volume multimedia content (e.g., images) is challenging for a resource-constrained wireless multimedia sensor network (WMSN) due to its energy consumption requirements. Redundant image information can be compressed using traditional compression techniques at the cost of considerable energy consumption. Fortunately, compressed sensing (CS) has been introduced as a low-complexity coding scheme for WMSNs. However, the storage and processing of CS-generated images and measurement matrices require substantial memory. Block compressed sensing (BCS) can mitigate this problem. Nevertheless, allocating a fixed sampling rate to all blocks is impractical, since each block holds different information. Although solutions such as adaptive block compressed sensing (ABCS) exist, they lack robustness across various types of images. As a solution, we propose a holistic WMSN architecture for image transmission that performs well on diverse images by leveraging saliency and standard deviation features. A fuzzy logic system (FLS) is then used to weigh these features when allocating samples, and each corresponding block is resized using CS. The combined FLS and BCS algorithms are implemented with smoothed projected Landweber (SPL) reconstruction to determine the convergence speed. The experiments confirm the promising performance of the proposed algorithm compared with conventional and state-of-the-art algorithms.
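The core idea of adaptive allocation can be sketched with one of the two features the paper uses: give each block a share of the measurement budget proportional to its standard deviation. Plain proportional weighting stands in here for the paper's fuzzy logic system, and all names are illustrative:

```python
import numpy as np

def allocate_samples(blocks, total_samples, min_per_block=1):
    """Adaptive sampling allocation (sketch): blocks with more variation
    (higher std) receive more CS measurements; flat blocks get a floor of
    min_per_block so every block remains recoverable."""
    stds = np.array([b.std() + 1e-6 for b in blocks])  # epsilon avoids /0 on flat blocks
    weights = stds / stds.sum()
    alloc = np.maximum(min_per_block,
                       np.round(weights * total_samples).astype(int))
    return alloc
```

An FLS replaces the linear weighting with rules that also consider saliency, which is what gives the paper's scheme its robustness across image types.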


Electronics ◽  
2019 ◽  
Vol 8 (11) ◽  
pp. 1285 ◽  
Author(s):  
Ahmad Mouri Zadeh Khaki ◽  
Ebrahim Farshidi ◽  
Sawal Hamid MD Ali ◽  
Masuri Othman

An all-digital voltage-controlled oscillator (VCO)-based second-order multi-stage noise-shaping (MASH) ΔΣ time-to-digital converter (TDC) is presented in this paper. A prototype of the proposed TDC was implemented on an Altera Stratix IV FPGA board. To improve performance over conventional TDCs, a multirating technique is employed in which a higher sampling rate is used for the later stages. Experimental results show that the multirating technique significantly improved the signal-to-noise ratio (SNR), from 43.09 dB without multirating to 61.02 dB with it (a gain of 17.93 dB) by quadrupling the sampling rate of the second stage. As the proposed design works in the time domain and contains no loop or calibration block, no time-to-voltage conversion is needed, which results in low complexity and power consumption. A built-in oscillator and the phase-locked loops (PLLs) of the FPGA board are used to generate sampling clocks at different frequencies, so no external clock needs to be applied to the proposed TDC. Two cases with different sampling rates were examined to demonstrate the capability of the technique. These results imply that, by employing the multirating technique and increasing the sampling frequency, a higher SNR can be achieved.
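For context, ideal Lth-order noise shaping improves SNR by (2L+1)·10·log10(2) dB per octave of oversampling, i.e., about 15 dB per doubling for a full second-order modulator. A quick arithmetic check against the reported figures (the ideal figure is a textbook bound for the whole modulator, not from the paper; multirating only the second stage is expected to realize less of it):

```python
import math

L = 2                                           # second-order noise shaping
per_octave = (2 * L + 1) * 10 * math.log10(2)   # ideal gain per doubling of OSR
ideal_gain = 2 * per_octave                     # quadrupling = two octaves
measured_gain = 61.02 - 43.09                   # reported SNR gain in dB
```

The measured 17.93 dB sits well below the ~30 dB full-modulator bound, consistent with the rate increase applying to the second stage only.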


Entropy ◽  
2020 ◽  
Vol 22 (1) ◽  
pp. 76 ◽  
Author(s):  
Jiayin Yu ◽  
Shiyu Guo ◽  
Xiaomeng Song ◽  
Yaqin Xie ◽  
Erfu Wang

In this paper, a new image encryption transmission algorithm based on a parallel mode is proposed. The algorithm aims to improve information transmission efficiency and security under existing hardware conditions. To improve efficiency, this paper adopts parallel compressed sensing to realize image transmission. Compressed sensing can perform data sampling and compression at a rate much lower than the Nyquist sampling rate. To enhance security, the algorithm combines a sequence signal generator with chaotic cryptography. The sensitivity of chaos to initial conditions, exploited in the measurement matrix, improves the security of the encryption algorithm. The cryptographic characteristics of chaotic signals can be fully utilized by flexible digital logic circuits. Simulation experiments and analyses show that the algorithm achieves the goal of improving transmission efficiency and can resist illegal attacks.
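The key-sensitivity idea can be sketched by generating a CS measurement matrix from a logistic map seeded by a secret key: two nearly identical keys produce completely different matrices. The logistic map and the ±1 mapping are illustrative stand-ins; the paper uses a hardware sequence signal generator combined with chaotic cryptography:

```python
import numpy as np

def logistic_sequence(x0, n, mu=3.99, skip=500):
    """Logistic-map chaotic sequence; x0 in (0, 1) acts as the secret key.
    The transient is discarded so the output starts on the attractor."""
    x = x0
    for _ in range(skip):
        x = mu * x * (1 - x)
    seq = np.empty(n)
    for i in range(n):
        x = mu * x * (1 - x)
        seq[i] = x
    return seq

def chaotic_measurement_matrix(m, n, key):
    """Build an m x n CS measurement matrix from the chaotic sequence,
    mapped to +/-1 entries and normalized by sqrt(m) (a sketch only)."""
    seq = logistic_sequence(key, m * n)
    return np.where(seq > 0.5, 1.0, -1.0).reshape(m, n) / np.sqrt(m)
```

Because only the scalar key must be shared, the full matrix never needs to be transmitted, which is what makes chaotic matrices attractive in this setting.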


Entropy ◽  
2020 ◽  
Vol 22 (3) ◽  
pp. 345
Author(s):  
Emeka Abakasanga ◽  
Nir Shlezinger ◽  
Ron Dabora

Man-made communications signals are typically modelled as continuous-time (CT) wide-sense cyclostationary (WSCS) processes. As modern processing is digital, it is applied to discrete-time (DT) processes obtained by sampling the CT processes. When sampling is applied to a CT WSCS process, the statistics of the resulting DT process depend on the relationship between the sampling interval and the period of the statistics of the CT process: when these two parameters have a common integer factor, the DT process is WSCS. This situation is referred to as synchronous sampling. When this is not the case, referred to as asynchronous sampling, the resulting DT process is wide-sense almost cyclostationary (WSACS). The sampled CT processes are commonly encoded using a source code to facilitate storage or transmission over wireless networks, e.g., using compress-and-forward relaying. In this work, we study the fundamental tradeoff between rate and distortion for source codes applied to sampled CT WSCS processes, characterized via the rate-distortion function (RDF). We note that while the RDF characterization for synchronous sampling follows directly from classic information-theoretic tools that rely on ergodicity and the law of large numbers, when sampling is asynchronous, the resulting process is not information stable. In such cases, the commonly used information-theoretic tools are inapplicable to RDF analysis, which poses a major challenge. Using the information-spectrum framework, we show that the RDF for asynchronous sampling in the low-distortion regime can be expressed as the limit superior of a sequence of RDFs, each corresponding to a synchronously sampled WSCS process (though their limit is not guaranteed to exist). The resulting characterization allows us to introduce novel insights into the relationship between sampling synchronization and the RDF. For example, we demonstrate that, unlike for stationary processes, small differences in the sampling rate and the sampling time offset can notably affect the RDF of sampled CT WSCS processes.
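Denoting by R_n(D) the RDF of the n-th synchronously sampled WSCS process in the approximating sequence (notation assumed here for illustration), the characterization described above reads:

```latex
R_{\mathrm{async}}(D) \;=\; \limsup_{n \to \infty} R_n(D),
```

where the limit superior is required precisely because the ordinary limit of the sequence R_n(D) is not guaranteed to exist.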


e-Polymers ◽  
2020 ◽  
Vol 20 (1) ◽  
pp. 103-110
Author(s):  
Anfu Guo ◽  
Hui Li ◽  
Jie Xu ◽  
Jianfeng Li ◽  
Fangyi Li

The performance of polystyrene microporous foaming (PS-MCF) materials is influenced by their microstructures. Investigating the relationship between microstructure and material properties is therefore essential for industrializing these materials. In this study, the relationships between the microstructure, compressive properties, and thermal conductivity of PS-MCF materials were studied systematically. The results show that the ideal foaming pressure for obtaining good compression performance is around 20 MPa. In addition, increasing the foaming temperature decreases the sample density; accordingly, the compression modulus and strength increase as the foaming temperature decreases. Because a higher expansion rate and a larger cell diameter reduce the cell-wall thickness, both are negatively correlated with the mechanical properties. Moreover, the thermal conductivity is negatively and linearly correlated with the cell rate, whereas the cell diameter is positively correlated with the thermal conductivity.

