Adaptive Binary Arithmetic Coder-Based Image Feature and Segmentation in the Compressed Domain

2012 ◽  
Vol 2012 ◽  
pp. 1-14 ◽  
Author(s):  
Hsi-Chin Hsin ◽  
Tze-Yun Sung ◽  
Yaw-Shih Shieh ◽  
Carlo Cattani

Image compression is necessary in various applications, especially for efficient transmission over a band-limited channel. It is thus desirable to segment an image directly in the compressed domain so that the computational burden of decompression can be avoided. Motivated by the adaptive binary arithmetic coder (MQ coder) of JPEG2000, we propose an efficient scheme to segment the feature vectors that are extracted from the code stream of an image. We modify the Compression-based Texture Merging (CTM) algorithm to alleviate the overmerging problem by making use of rate-distortion information. Experimental results show that MQ coder-based image segmentation is preferable in terms of the boundary displacement error (BDE) measure. It has the advantage of saving computational cost, since the segmentation results are satisfactory even at low bit rates (bits per pixel, bpp).
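The MQ coder at the heart of this approach is an adaptive binary arithmetic coder. As a hedged illustration of the underlying principle only (the actual MQ coder uses a finite-state probability-estimation table with integer renormalization, omitted here), the following sketch shows adaptive binary arithmetic coding by interval subdivision with a simple Laplace probability estimate:

```python
# Minimal floating-point adaptive binary arithmetic coder sketch.
# This is NOT the JPEG2000 MQ coder; it only illustrates adaptive
# interval subdivision with a Laplace (add-one) probability estimate.

def encode(bits):
    low, high = 0.0, 1.0
    c0, c1 = 1, 1                      # adaptive symbol counts
    for b in bits:
        p0 = c0 / (c0 + c1)            # current probability of a 0 bit
        mid = low + (high - low) * p0
        if b == 0:
            high = mid
            c0 += 1
        else:
            low = mid
            c1 += 1
    return (low + high) / 2            # any number inside the final interval

def decode(code, n):
    low, high = 0.0, 1.0
    c0, c1 = 1, 1
    out = []
    for _ in range(n):
        p0 = c0 / (c0 + c1)
        mid = low + (high - low) * p0
        if code < mid:
            out.append(0); high = mid; c0 += 1
        else:
            out.append(1); low = mid; c1 += 1
    return out

bits = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]
assert decode(encode(bits), len(bits)) == bits   # round-trips exactly
```

Because the decoder mirrors the encoder's probability updates, both sides stay synchronized without any side information, which is the same property the segmentation schemes above exploit.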

2012 ◽  
Vol 2012 ◽  
pp. 1-7
Author(s):  
Ying-Shen Juang ◽  
Hsi-Chin Hsin ◽  
Tze-Yun Sung ◽  
Yaw-Shih Shieh ◽  
Carlo Cattani

Images are often compressed for communication applications. To avoid the burden of decompression computations, it is desirable to segment images directly in the compressed domain. This paper presents a simple rate-distortion-based scheme to segment images in the JPEG2000 domain. It is based on a binary arithmetic code table used in the JPEG2000 standard, which is available at both the encoder and the decoder; thus, there is no need to transmit the segmentation result. Experimental results on the Berkeley image database show that the proposed algorithm is preferable in terms of running time and the quantitative measures: probabilistic Rand index (PRI) and boundary displacement error (BDE).
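The boundary displacement error used for evaluation measures how far apart two segmentation boundaries lie. A minimal sketch of a symmetric BDE, assuming boundaries are given as coordinate arrays (the brute-force nearest-neighbour search is for illustration; practical implementations typically use a distance transform):

```python
import numpy as np

def bde(boundary_a, boundary_b):
    """Mean distance from each boundary point in A to its nearest
    boundary point in B, averaged symmetrically.  Boundaries are
    (N, 2) arrays of pixel coordinates."""
    def one_way(src, dst):
        # pairwise Euclidean distances, then nearest neighbour per row
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
        return d.min(axis=1).mean()
    return 0.5 * (one_way(boundary_a, boundary_b) +
                  one_way(boundary_b, boundary_a))

# two parallel vertical boundaries one pixel apart
a = np.array([[0, 0], [0, 1], [0, 2]], dtype=float)
b = np.array([[1, 0], [1, 1], [1, 2]], dtype=float)
print(bde(a, b))   # every point is exactly one pixel away -> 1.0
```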


Author(s):  
Wei Li ◽  
Peng Ren

Context-adaptive binary arithmetic coding (CABAC) is the entropy coding used in H.265/HEVC for higher coding efficiency. However, its complexity creates a bottleneck for low-delay applications, owing to the inter-symbol dependency that CABAC employs. In this paper, a fast bit-rate estimation method is proposed that skips the actual CABAC entropy coding during mode decision to meet the requirements of low-delay implementations. The presented scheme first parses the characteristics of the syntax elements and then, guided by the principles of CABAC, derives an efficient bit-cost estimate. This is very beneficial for reducing computational complexity and saving encoding time in H.265/HEVC mode decision. Experimental results demonstrate that the proposed fast algorithm reduces CABAC encoding time by 68% on average with negligible degradation in rate-distortion performance.
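A common way to skip the arithmetic-coding pass in mode decision, in the spirit of the estimation described above, is to approximate the cost of each bin by its information content, -log2(p). The probabilities below are illustrative placeholders, not the actual H.265/HEVC context-model tables:

```python
import math

# Entropy-based bit-cost estimate: instead of running the full CABAC
# engine, approximate the cost of coding a bin whose probability of
# being zero is p0 by -log2(p) bits, where p is the probability of
# the value actually coded.

def estimated_bits(bins_with_probs):
    """bins_with_probs: list of (bin_value, prob_of_zero) pairs."""
    total = 0.0
    for b, p0 in bins_with_probs:
        p = p0 if b == 0 else 1.0 - p0
        total += -math.log2(p)
    return total

# A likely symbol costs a fraction of a bit, an unlikely one much more:
print(estimated_bits([(0, 0.9)]))   # about 0.152 bits
print(estimated_bits([(1, 0.9)]))   # about 3.32 bits
```

Summing such per-bin estimates gives a rate figure usable in the rate-distortion cost without invoking the sequential (and hence slow) arithmetic-coding engine.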


1988 ◽  
Vol 32 (6) ◽  
pp. 717-726 ◽  
Author(s):  
W. B. Pennebaker ◽  
J. L. Mitchell ◽  
G. G. Langdon ◽  
R. B. Arps

2021 ◽  
Vol 28 (2) ◽  
pp. 163-182
Author(s):  
José L. Simancas-García ◽  
Kemel George-González

Shannon’s sampling theorem is one of the most important results of modern signal theory. It describes the reconstruction of a band-limited signal from its samples. On the other hand, although less well known, there is the discrete sampling theorem, proved by Cooley while he was working on the development of an algorithm to speed up the calculation of the discrete Fourier transform. Cooley showed that a sampled signal can be resampled by selecting a smaller number of its samples, which reduces computational cost. It is then possible to reconstruct the original sampled signal by a reverse process. In principle, the two theorems are not related. However, in this paper we show that in the context of Non-Standard Mathematical Analysis (NSA) and the hyperreal number system *R, the two theorems are equivalent: the difference between them becomes a matter of scale. With the scale changes that the hyperreal number system allows, discrete variables and functions become continuous, and Shannon’s sampling theorem emerges from the discrete sampling theorem.
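Cooley's discrete resampling can be illustrated numerically: a length-N signal whose spectrum occupies only a few low DFT bins can be decimated to n samples (with n exceeding twice the highest harmonic) and recovered exactly by zero-padding the short DFT. A sketch, with sizes and the test signal chosen for illustration:

```python
import numpy as np

N, n = 64, 16                       # original and decimated lengths
k = np.arange(N)
# band-limited test signal: harmonics 1..3 only (3 < n/2, so no aliasing)
x = (np.cos(2*np.pi*1*k/N) + 0.5*np.sin(2*np.pi*2*k/N)
     + 0.25*np.cos(2*np.pi*3*k/N))

y = x[::N // n]                     # resample: keep every 4th sample

# reverse process: zero-pad the length-n spectrum out to length N,
# placing the negative-frequency half at the top of the long spectrum
Y = np.fft.fft(y)
X = np.zeros(N, dtype=complex)
X[:n//2] = Y[:n//2]
X[-(n//2):] = Y[-(n//2):]
x_rec = np.fft.ifft(X).real * (N / n)   # rescale for the length change

print(np.allclose(x_rec, x))        # True: exact recovery
```

The factor N/n compensates for the DFT length change; the decimated signal carries the full information because its n samples still resolve every nonzero spectral bin.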


Geophysics ◽  
2019 ◽  
Vol 84 (2) ◽  
pp. N15-N27 ◽  
Author(s):  
Carlos A. M. Assis ◽  
Henrique B. Santos ◽  
Jörg Schleicher

Acoustic impedance (AI) is a widely used seismic attribute in stratigraphic interpretation. Because of the frequency-band-limited nature of seismic data, seismic amplitude inversion cannot determine AI itself, but it can only provide an estimate of its variations, the relative AI (RAI). We have revisited and compared two alternative methods to transform stacked seismic data into RAI. One is colored inversion (CI), which requires well-log information, and the other is linear inversion (LI), which requires knowledge of the seismic source wavelet. We start by formulating the two approaches in a theoretically comparable manner. This allows us to conclude that both procedures are theoretically equivalent. We proceed to check whether the use of the CI results as the initial solution for LI can improve the RAI estimation. In our experiments, combining CI and LI cannot provide superior RAI results to those produced by each approach applied individually. Then, we analyze the LI performance with two distinct solvers for the associated linear system. Moreover, we investigate the sensitivity of both methods regarding the frequency content present in synthetic data. The numerical tests using the Marmousi2 model demonstrate that the CI and LI techniques can provide an RAI estimate of similar accuracy. A field-data example confirms the analysis using synthetic-data experiments. Our investigations confirm the theoretical and practical similarities of CI and LI regardless of the numerical strategy used in LI. An important result of our tests is that an increase in the low-frequency gap in the data leads to slightly deteriorated CI quality. In this case, LI required more iterations for the conjugate-gradient least-squares solver, but the final results were not much affected. Both methodologies provided interesting RAI profiles compared with well-log data, at low computational cost and with a simple parameterization.
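The linear-inversion route can be sketched on synthetic data: model the stacked trace as the convolution of a known wavelet with a reflectivity series, solve a damped least-squares system for the reflectivity, and integrate it to obtain RAI. Everything below is a toy placeholder, and the direct normal-equations solve stands in for the conjugate-gradient least-squares solver discussed in the paper:

```python
import numpy as np

n = 128
t = np.arange(-16, 17)
f0 = 0.08                                       # dominant frequency (cycles/sample)
wavelet = (1 - 2*(np.pi*f0*t)**2) * np.exp(-(np.pi*f0*t)**2)   # Ricker wavelet

refl = np.zeros(n)
refl[[20, 45, 80, 100]] = [0.3, -0.2, 0.25, -0.15]   # sparse reflectivity spikes

trace = np.convolve(refl, wavelet, mode="same")      # synthetic stacked trace

# Build the convolution matrix W so that trace == W @ refl
W = np.array([np.convolve(np.eye(n)[i], wavelet, mode="same")
              for i in range(n)]).T

# Damped least squares: a small Tikhonov term stabilizes the inversion
# against the band-limited (near-singular) wavelet spectrum
refl_est = np.linalg.solve(W.T @ W + 1e-3*np.eye(n), W.T @ trace)

rai = np.cumsum(refl_est)        # relative AI: running sum of reflectivity

resid = np.linalg.norm(trace - W @ refl_est) / np.linalg.norm(trace)
print(resid < 0.05)              # True: the damped fit explains the trace
```

The missing low frequencies noted in the abstract show up here as near-zero singular values of W; the damping term is what keeps the solve well-posed, at the price of a slightly biased reflectivity estimate.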


2014 ◽  
Vol 989-994 ◽  
pp. 3605-3608
Author(s):  
Cong Lin ◽  
Chi Man Pun

A novel adaptive image feature reduction approach for object tracking using vectorized texture features is proposed in this paper. Our contributions are threefold: 1) we propose a statistical discriminative appearance model based on texture features; 2) the majority of feature dimensions are removed by judging their errors against the chosen distribution model, so that only the dimensions most discriminative for the classification task remain, which reduces the computational cost of the classification stage; 3) an adaptive learning rate is proposed to handle drift caused by long-term occlusion. Preliminary experimental results are satisfactory and compare well with state-of-the-art object tracking methods.
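The error-based dimension pruning is described only at a high level. One common way to realize the idea of keeping the most discriminative dimensions is a per-dimension Fisher score, sketched below; the criterion and data here are illustrative assumptions, not the authors' exact method:

```python
import numpy as np

def fisher_scores(fg, bg):
    """Per-dimension discriminability between foreground and background
    feature samples.  fg, bg: (num_samples, num_dims) matrices."""
    mu_f, mu_b = fg.mean(axis=0), bg.mean(axis=0)
    var_f, var_b = fg.var(axis=0), bg.var(axis=0)
    return (mu_f - mu_b) ** 2 / (var_f + var_b + 1e-12)

rng = np.random.default_rng(1)
fg = rng.normal(0.0, 1.0, size=(200, 8))   # synthetic texture features
bg = rng.normal(0.0, 1.0, size=(200, 8))
fg[:, 2] += 3.0                            # dimension 2 is made discriminative

scores = fisher_scores(fg, bg)
keep = np.argsort(scores)[::-1][:2]        # keep the 2 best dimensions
print(2 in keep)                           # the separable dimension survives
```

Dropping the low-score dimensions before classification gives the computational saving the paper describes, since the classifier then operates on a much shorter feature vector.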


MATEMATIKA ◽  
2019 ◽  
Vol 35 (1) ◽  
pp. 1-11 ◽  
Author(s):  
Manoj Kumar ◽  
Pratik Gupta

Signcryption schemes are compact and especially suited for efficiency-critical applications such as smart-card-dependent systems. Researchers have demonstrated many significant applications of signcryption, such as authenticated key recovery and key establishment in one small data packet, secure ATM networks, lightweight electronic transaction protocols, and multicasting over the Internet. In this paper we propose an efficient symmetric-key signcryption scheme using elliptic curves that reduces the sender's computational cost. It requires only two elliptic-curve point multiplications for the sender and no inverse computation for either the sender or the recipient; a comparative study of the computational cost for sender and recipient is also provided. This makes the scheme more practical than comparable ones.
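The sender's dominant cost in such a scheme is elliptic-curve point multiplication. Below is a textbook double-and-add sketch over a toy short-Weierstrass curve y² = x³ + ax + b (mod p); the curve parameters are illustrative only, and a real deployment would use a standardized curve with constant-time arithmetic:

```python
# Toy curve parameters (NOT cryptographically secure)
P_MOD, A, B = 97, 2, 3

def ec_add(P, Q):
    """Add two points; None represents the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                          # P == -Q
    if P == Q:                               # tangent (doubling) slope
        s = (3*x1*x1 + A) * pow(2*y1, -1, P_MOD) % P_MOD
    else:                                    # chord slope
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s*s - x1 - x2) % P_MOD
    return (x3, (s*(x1 - x3) - y1) % P_MOD)

def ec_mul(k, P):
    """Scalar multiplication k*P by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

G = (3, 6)   # on the curve: 6^2 = 36 and 3^3 + 2*3 + 3 = 36 (mod 97)
# group-law consistency: 2G + 3G must equal 5G
assert ec_add(ec_mul(2, G), ec_mul(3, G)) == ec_mul(5, G)
```

Double-and-add needs O(log k) point operations, which is why schemes are compared by counting point multiplications, as in the abstract above. (The three-argument `pow` used for modular inverses requires Python 3.8+.)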

