Rice sequences of relations

Author(s):  
Antonio Montalbán

We propose a framework for studying the computational complexity of definable relations on a structure. Many of the notions we discuss are old, but the viewpoint is new, and we believe that all the pieces fit together smoothly under this new point of view. We also survey related results in the area. More concretely, we study the space of sequences of relations over a given structure, and on this space we develop notions of c.e.-ness, reducibility, join and jump. These notions turn out to be equivalent to notions studied in other settings; we explain the equivalences and the differences between them.
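
For background, the classical single-relation notion that sequence-based frameworks of this kind generalize can be stated as follows; this is standard terminology from computable structure theory, a sketch rather than a definition quoted from the paper:

    % Standard background notion (single-relation case), not quoted from the paper
    A relation $R$ on a structure $\mathcal{A}$ is \emph{relatively
    intrinsically c.e.} if, for every copy
    $(\mathcal{B}, R^{\mathcal{B}}) \cong (\mathcal{A}, R)$, the set
    $R^{\mathcal{B}}$ is c.e.\ relative to the atomic diagram $D(\mathcal{B})$.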

Author(s):  
Maciej Liskiewicz ◽  
Ulrich Wölfel

This chapter provides an overview, based on current research, of theoretical aspects of digital steganography, a relatively new field of computer science that deals with hiding secret data in unsuspicious cover media. We focus on the formal analysis of the security of steganographic systems from a computational-complexity point of view and provide models of secure systems that make realistic assumptions about the limited computational resources of the parties involved. This allows us to ground steganographic secrecy in reasonable complexity assumptions similar to those commonly accepted in modern cryptography. We also expand the analysis of stego-systems beyond security aspects to the question of why provably secure systems are so difficult to implement (if not impossible to realize) and what makes them different from the systems used in practice.
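
To make the complexity-theoretic flavor concrete, here is a minimal sketch of steganographic secrecy phrased as a distinguishing game, in the style of cryptographic indistinguishability. All interfaces (channel_sample, embed, adversary) are hypothetical placeholders, not the chapter's formal model:

    import random

    def distinguishing_game(channel_sample, embed, adversary, key, msg, trials=1000):
        """Empirically estimate an adversary's advantage in telling
        covertexts from stegotexts. A simplified sketch with hypothetical
        interfaces; not the chapter's formal definition."""
        correct = 0
        for _ in range(trials):
            b = random.randrange(2)  # 0: genuine covertext, 1: stegotext
            doc = channel_sample() if b == 0 else embed(key, msg, channel_sample)
            if adversary(doc) == b:  # adversary guesses which world it is in
                correct += 1
        # advantage close to 0 means the system is (empirically) indistinguishable
        return abs(correct / trials - 0.5)

In this style of definition, a stego-system counts as secure when every resource-bounded adversary has only negligible advantage.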


2008 ◽  
Vol 32 ◽  
pp. 525-564 ◽  
Author(s):  
S. Bouveret ◽  
J. Lang

We consider the problem of fairly allocating a set of indivisible goods among agents, from the point of view of compact representation and computational complexity. We start by assuming that agents have dichotomous preferences expressed by propositional formulae. We express efficiency and envy-freeness in a logical setting, which reveals unexpected connections to nonmonotonic reasoning. We then identify the complexity of determining whether an efficient and envy-free allocation exists, for several notions of efficiency, when preferences are represented succinctly (as well as for restrictions of this problem). We first study the problem under the assumption that preferences are dichotomous, and then in the general case.
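
Under dichotomous preferences, envy-freeness takes a particularly simple form: an agent can only envy another if it is unsatisfied with its own bundle yet would be satisfied with the other's. A minimal sketch (toy goods and formulas, not taken from the paper):

    def envy_free(agents, allocation):
        """Check envy-freeness for dichotomous preferences.
        agents: dict name -> predicate over a frozenset of goods (True = satisfied),
        allocation: dict name -> frozenset of goods."""
        for i, pref_i in agents.items():
            if pref_i(allocation[i]):
                continue                      # satisfied agents envy no one
            for j in agents:
                if j != i and pref_i(allocation[j]):
                    return False              # i envies j
        return True

    # Toy example (hypothetical formulas): agent a wants goods g1 AND g2,
    # agent b wants good g3.
    agents = {
        "a": lambda bundle: {"g1", "g2"} <= bundle,
        "b": lambda bundle: "g3" in bundle,
    }
    allocation = {"a": frozenset({"g1", "g2"}), "b": frozenset({"g3"})}
    print(envy_free(agents, allocation))      # True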


2007 ◽  
Vol 07 (02) ◽  
pp. 303-320
Author(s):  
MOHAMED ALI BEN AYED ◽  
AMINE SAMET ◽  
NOURI MASMOUDI

A merging procedure joining search patterns and variable-block-size motion estimation for H.264/AVC is proposed in this paper. The principal purpose of the proposed method is to reduce the computational complexity of the block-matching module. There are numerous contributions in the literature aimed at reducing the computational cost of motion estimation. The best solution from a qualitative point of view is the full search, which considers every candidate position; however, the computational effort it requires is enormous, making motion estimation by far the most important computational bottleneck in video coding systems. Our approach exploits the center-biased characteristics of real-world video sequences, aiming to achieve acceptable image quality while reducing the computational complexity. Simulation results demonstrate that the proposal performs well.
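
For reference, a minimal full-search block-matching sketch, the qualitative baseline mentioned above; it is not the authors' merged search pattern. Center-biased fast methods examine only a small, (0, 0)-centered subset of these candidates:

    import numpy as np

    def full_search(ref, cur, bx, by, bsize=16, srange=8):
        """Exhaustive block matching: return the motion vector minimizing
        the sum of absolute differences (SAD) within +/- srange pixels."""
        block = cur[by:by+bsize, bx:bx+bsize].astype(np.int32)
        best, best_mv = None, (0, 0)
        for dy in range(-srange, srange + 1):
            for dx in range(-srange, srange + 1):
                y, x = by + dy, bx + dx
                if y < 0 or x < 0 or y+bsize > ref.shape[0] or x+bsize > ref.shape[1]:
                    continue                  # candidate falls outside the frame
                cand = ref[y:y+bsize, x:x+bsize].astype(np.int32)
                sad = np.abs(block - cand).sum()
                if best is None or sad < best:
                    best, best_mv = sad, (dx, dy)
        return best_mv, best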


2003 ◽  
Vol 358 (1435) ◽  
pp. 1293-1309 ◽  
Author(s):  
Jean-Daniel Zucker

In artificial intelligence, abstraction is commonly used to account for the use of various levels of detail in a given representation language, or for the ability to change from one level to another while preserving useful properties. Abstraction has been studied mainly in problem solving, theorem proving, knowledge representation (in particular for spatial and temporal reasoning) and machine learning. In such contexts, abstraction is defined as a mapping between formalisms that reduces the computational complexity of the task at hand. By analysing the notion of abstraction from an information-quantity point of view, we pinpoint the differences and the complementary roles of reformulation and abstraction in any representation change. We contribute to extending existing semantic theories of abstraction by grounding them in perception, where the notion of information quantity is easier to characterize formally. In the author's view, abstraction is best represented using abstraction operators, as they provide semantics for classifying different abstractions and support the automation of representation changes. The usefulness of a grounded theory of abstraction is illustrated in the cartography domain. Finally, the importance of explicitly representing abstraction for designing more autonomous and adaptive systems is discussed.


2005 ◽  
Vol 02 (01) ◽  
pp. 45-53 ◽  
Author(s):  
S. E. EL-KHAMY ◽  
M. M. HADHOUD ◽  
M. I. DESSOUKY ◽  
B. M. SALAM ◽  
F. E. ABD EL-SAMIE

This paper presents a least-squares block-by-block adaptive approach for the acquisition of high-resolution (HR) images from available low-resolution (LR) images. The suggested algorithm is based on segmenting the image into overlapping blocks and interpolating each block separately; the overlapping of blocks avoids edge effects. An adaptive 2D least-squares approach, which takes the image acquisition model into account, is used to minimize the estimation error of each block. In this algorithm, a weight matrix of moderate dimensions is estimated in a small number of iterations to interpolate each block, avoiding the large computational complexity of the large matrices required to interpolate the image as a whole. The performance of the proposed algorithm is studied for different LR images with different SNRs, and is compared, from the PSNR point of view, to the standard as well as the warped-distance cubic O-MOMS image interpolation algorithms.
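
A much-simplified, closed-form sketch of the blockwise least-squares idea, estimating one weight matrix from example LR/HR block pairs; the paper's algorithm is adaptive and iterative and incorporates the acquisition model, which this sketch omits:

    import numpy as np

    def blockwise_ls_weights(lr_blocks, hr_blocks):
        """Estimate a weight matrix W mapping vectorized LR blocks to HR
        blocks by least squares: W = argmin ||H - W L||_F. A closed-form
        stand-in for the paper's iterative adaptive scheme."""
        L = np.stack([b.ravel() for b in lr_blocks], axis=1)   # (n_lr, n_blocks)
        H = np.stack([b.ravel() for b in hr_blocks], axis=1)   # (n_hr, n_blocks)
        # Solve L^T W^T = H^T in the least-squares sense.
        W_T, *_ = np.linalg.lstsq(L.T, H.T, rcond=None)
        return W_T.T

    def interpolate_block(W, lr_block, hr_shape):
        """Apply the learned weights to one LR block."""
        return (W @ lr_block.ravel()).reshape(hr_shape)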


Author(s):  
Sumit Kaur ◽  
R.K Bansal

Superpixel segmentation has proved to be a useful preprocessing step in many computer vision applications. The purpose of superpixels is to reduce redundancy in the image and increase efficiency from the point of view of the next processing task. This has led to a variety of algorithms for computing superpixel segmentations, each with individual strengths and weaknesses. A drawback of most of these methods is their high computational complexity, and hence high computation time. The k-means-based SLIC method shows better performance than the others when evaluated on parameters such as undersegmentation error and boundary recall.
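
SLIC is essentially k-means clustering in a joint color-and-position feature space. A minimal usage sketch with scikit-image (parameter names follow that library; defaults may vary across versions):

    from skimage import data, segmentation

    # Compute SLIC superpixels: k-means in combined color (Lab) and
    # spatial (x, y) space; compactness trades color adherence for
    # spatial regularity.
    image = data.astronaut()
    labels = segmentation.slic(image, n_segments=250, compactness=10.0)
    print(labels.shape, labels.max())   # per-pixel label map, ~250 segments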


1996 ◽  
Vol 61 (2) ◽  
pp. 515-540 ◽  
Author(s):  
Patrick Cegielski ◽  
Yuri Matiyasevich ◽  
Denis Richard

Abstract: Let ℳ be a first-order structure; we denote by DEF(ℳ) the set of all first-order definable relations and functions within ℳ. Let π be any one-to-one function from ℕ into the set of prime integers. Let ∣ and • be, respectively, the divisibility relation and multiplication as a function. We show that the sets DEF(ℕ, π, ∣) and DEF(ℕ, π, •) are equal. However, there exists a function π such that the set DEF(ℕ, π, ∣), or, equivalently, DEF(ℕ, π, •), is not equal to DEF(ℕ, +, •). Nevertheless, in all cases there is a {π, •}-definable, and hence also {π, ∣}-definable, structure over π which is isomorphic to 〈ℕ, +, •〉. Hence the theories TH(ℕ, π, ∣) and TH(ℕ, π, •) are undecidable. The binary relation of equipotence between two positive integers, saying that they have an equal number of prime divisors, is not definable within the divisibility lattice over the positive integers. We prove this first by comparing a lower bound on the computational complexity of the additive theory of the positive integers with an upper bound on the computational complexity of the theory of the mentioned lattice. The last section provides a self-contained alternative proof of this latter result, based on a decision method linked to an elimination of quantifiers via specific tables.
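
The inclusion DEF(ℕ, π, ∣) ⊆ DEF(ℕ, π, •) is immediate, since divisibility is first-order definable from multiplication by a standard formula (the converse inclusion is the substance of the theorem):

    % Divisibility defined from multiplication (standard, not quoted from the paper)
    x \mid y \;\Longleftrightarrow\; \exists z \, (x \cdot z = y)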


2012 ◽  
Vol 2012 ◽  
pp. 1-10 ◽  
Author(s):  
Apinya Innok ◽  
Peerapong Uthansakul ◽  
Monthippa Uthansakul

The method of MIMO beamforming has gained a lot of attention. The eigen beamforming (EB) technique provides the best performance but requires full channel information. However, it is impossible to fully acquire the channel in a real fading environment. To overcome this limitation of the EB technique, the quantized beamforming (QB) technique was proposed, using only a few feedback bits instead of full channel information to calculate suitable beamforming vectors. Unfortunately, the complexity of finding the beamforming vectors is the limitation of the QB technique. In this paper, we propose a new technique, named angular beamforming (AB), to overcome the drawbacks of the QB technique. The proposed technique offers low computational complexity for finding suitable beamforming vectors. We also present a feasible implementation of the proposed AB method. The experiments are undertaken mainly to verify the concept of the AB technique by utilizing a Butler matrix as a two-bit AB processor. The experimental implementation and results demonstrate that the proposed technique is attractive from the point of view of easy implementation, low computational complexity, and low cost.
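
A toy numerical sketch contrasting the two baselines discussed above: EB takes the principal right singular vector of the channel (full channel knowledge), while QB exhaustively searches a small codebook, the cost that AB is designed to avoid. The codebook entries are hypothetical; a Butler matrix realizes a fixed set of phase-progressive beams of this kind:

    import numpy as np

    def eigen_beamforming(H):
        """EB: transmit along the principal right singular vector of H,
        which requires full channel knowledge at the transmitter."""
        _, _, Vh = np.linalg.svd(H)
        return Vh.conj().T[:, 0]

    def quantized_beamforming(H, codebook):
        """QB: exhaustively pick the codebook vector maximizing received
        power ||H w|| -- the search whose cost motivates AB."""
        gains = [np.linalg.norm(H @ w) for w in codebook]
        return codebook[int(np.argmax(gains))]

    # Toy 2x2 channel and a hypothetical 2-bit phase codebook.
    rng = np.random.default_rng(0)
    H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    codebook = [np.array([1, np.exp(1j * np.pi * k / 2)]) / np.sqrt(2)
                for k in range(4)]
    print(np.linalg.norm(H @ eigen_beamforming(H)),
          np.linalg.norm(H @ quantized_beamforming(H, codebook)))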


Author(s):  
Wenkai Liu ◽  
Jianyuan Kang ◽  
Xianya Fu ◽  
Mengmeng Zhang ◽  
Zhi Liu ◽  
...  

For virtual reality 360° videos, equirectangular projection (ERP) is a commonly used projection format. However, its high resolution brings extraordinarily high computational complexity in encoding. In order to speed up the intra coding process, a fast coding unit (CU) partitioning algorithm based on regional decision trees is proposed in this paper. The frame image is divided into two regions from a statistical point of view, and early-split and pruned decision trees are established using lightweight sample attributes for each region. With the help of these decision trees, the CU partitioning process is accelerated. Compared with the original algorithm of HM16.20, the proposed algorithm reduces the encoding time by 28%, while the BD-rate only increases by 0.27%.
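
In outline, the approach replaces exhaustive rate-distortion evaluation of CU partitions with a cheap classifier query. A sketch using scikit-learn; the features here (variance and mean gradients) are illustrative guesses, not the lightweight attributes chosen by the authors:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def block_features(block):
        """Cheap per-CU features: variance and mean gradient magnitudes."""
        gy, gx = np.gradient(block.astype(np.float64))
        return [block.var(), np.abs(gx).mean(), np.abs(gy).mean()]

    def train_split_classifier(blocks, split_labels):
        """Fit a shallow split/no-split tree; shallow keeps queries cheap."""
        X = np.array([block_features(b) for b in blocks])
        return DecisionTreeClassifier(max_depth=4).fit(X, split_labels)

    # Tiny synthetic demo (random blocks and labels), just to show the shapes.
    # At encode time, "split" skips RD evaluation of the unsplit CU and
    # "no split" prunes the recursive partitioning early.
    rng = np.random.default_rng(1)
    blocks = [rng.integers(0, 255, (32, 32)) for _ in range(64)]
    labels = rng.integers(0, 2, 64)
    clf = train_split_classifier(blocks, labels)
    print(clf.predict([block_features(blocks[0])]))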

