approximate decision
Recently Published Documents


TOTAL DOCUMENTS: 23 (FIVE YEARS: 2)
H-INDEX: 6 (FIVE YEARS: 0)

Author(s):  
Mario Barbareschi ◽  
Salvatore Barone ◽  
Nicola Mazzocca

Abstract: So far, multiple classifier systems have increasingly been designed to take advantage of hardware features such as high parallelism and computational power. Indeed, compared to software implementations, hardware accelerators guarantee higher throughput and lower latency. Although combining multiple classifiers yields high classification accuracy, the required area overhead can make the design of a hardware accelerator infeasible, hindering the adoption of commercial configurable devices. For this reason, in this paper we exploit the approximate computing design paradigm to trade hardware area overhead for classification accuracy. In particular, starting from trained decision tree (DT) models and employing the precision-scaling technique, we explore approximate decision tree variants by means of a multi-objective optimization problem, demonstrating a significant performance improvement when targeting field-programmable gate array (FPGA) devices.
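
The authors' tool-flow is not reproduced here; the following is a minimal Python sketch of the precision-scaling idea on a software decision tree, assuming scikit-learn's DecisionTreeClassifier, features rescaled to [0, 1), and the implementation detail that `tree_.threshold` is exposed as a writable view. The name `quantize_thresholds` is illustrative, not the authors' code.

```python
# Minimal sketch (assumptions noted above): quantize split thresholds to a
# reduced number of fractional bits and observe the accuracy trade-off,
# emulating the smaller fixed-point comparators of an approximate accelerator.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier


def quantize_thresholds(clf, frac_bits):
    """Round every split threshold to `frac_bits` fractional bits."""
    scale = 2.0 ** frac_bits
    thresholds = clf.tree_.threshold          # writable view into the tree
    internal = clf.tree_.children_left != -1  # leaf nodes hold threshold -2
    thresholds[internal] = np.round(thresholds[internal] * scale) / scale


X, y = load_iris(return_X_y=True)
X = MinMaxScaler().fit_transform(X)  # fixed-point-friendly feature range
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

exact = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print(f"exact thresholds: accuracy {exact.score(X_te, y_te):.3f}")

# Sweep the area/accuracy trade-off: fewer bits mean smaller comparators.
for bits in (8, 6, 4, 2):
    approx = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
    quantize_thresholds(approx, bits)
    print(f"{bits} fractional bits: accuracy {approx.score(X_te, y_te):.3f}")
```

In the paper this sweep is driven by a multi-objective optimizer over per-node precisions rather than a single global bit width; the sketch only shows the underlying precision-scaling knob.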


2018 ◽  
Vol 61 (10) ◽  
pp. 813-817 ◽  
Author(s):  
A. V. Zimin ◽  
I. V. Burkova ◽  
V. V. Mit'kov ◽  
V. V. Zimin

One of the important factors, perhaps the main one, determining the duration of the initial (trial) operation of an Enterprise Resource Planning (ERP) system is the quality of user training for collaboration within the integrated (at the level of elementary transactions) control system. The duration of trial operation, and the corresponding losses from incidents arising during ERP operation, can obviously be reduced not only through high-quality design and testing of the ERP system but also, to a considerable extent, by raising the competence level of users through training. The article gives a mathematical formulation of the problem of developing a training program for the ERP users of a large metallurgical company. The main criterion is the total increment of users' competences achieved by implementing the training program. The solution procedure is based on the network programming method, which relies on structurally similar network representations of the criterion and the constraints. A general scheme and a worked example are provided in which the individual evaluation subproblems are solved by dichotomous programming. The approximate solutions obtained can be improved by finding the global optimum of the original problem with the branch-and-bound method, in which the criterion value of the approximate solution found serves as the bound. In practice it is also expedient to consider the inverse of the problem described in the article, in which the training costs are the criterion. Owing to the structural similarity of the competence-increment and training-cost functions, the inverse problem can be solved by the same scheme as proposed for the direct problem. The considered problem can be generalized to take into account users' preferences regarding the significance of individual programs relative to others by introducing appropriate weights; the general solution scheme remains unchanged.
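
The abstract's model is not fully specified; as an illustration of the bounding idea it describes, here is a hedged Python sketch of a knapsack-style simplification: select training programs to maximize the total competence increment under a cost budget, with a greedy approximate solution seeding the branch-and-bound incumbent. All names and the (increment, cost) data are illustrative.

```python
# Hypothetical sketch: branch-and-bound seeded with an approximate solution,
# in the spirit of the scheme described above. Not the authors' formulation.
def greedy_value(items, budget):
    """Approximate solution: take programs in decreasing increment/cost order."""
    total, spent = 0.0, 0.0
    for inc, cost in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if spent + cost <= budget:
            total, spent = total + inc, spent + cost
    return total


def branch_and_bound(items, budget):
    items = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    best = greedy_value(items, budget)  # approximate value seeds the bound

    def upper_bound(i, value, spent):
        # Fractional relaxation of the remaining programs.
        for inc, cost in items[i:]:
            if spent + cost <= budget:
                value, spent = value + inc, spent + cost
            else:
                return value + inc * (budget - spent) / cost
        return value

    def explore(i, value, spent):
        nonlocal best
        if i == len(items):
            best = max(best, value)
            return
        if upper_bound(i, value, spent) <= best:
            return  # prune: this subtree cannot beat the incumbent
        inc, cost = items[i]
        if spent + cost <= budget:
            explore(i + 1, value + inc, spent + cost)  # include program i
        explore(i + 1, value, spent)                    # skip program i

    explore(0, 0.0, 0.0)
    return best


# Example: (competence increment, training cost) per program, budget 10.
programs = [(6.0, 4.0), (5.0, 3.0), (4.0, 5.0), (3.0, 2.0)]
print(branch_and_bound(programs, budget=10.0))  # -> 14.0
```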


2018 ◽  
Author(s):  
Tom Gur

The study of property testing is concerned with algorithms that solve approximate decision problems while probing only a small fraction of their inputs. More specifically, a tester for a property Π receives query access to an object x and is required to determine whether x is in Π or far from Π, using as few queries to x as possible.

A fundamental question that arises in any computational model is to understand the power of proof systems within that model. Indeed, the P ≠ NP conjecture, which deals with the power of proofs in polynomial-time computation, is arguably the most important open problem in the theory of computation. The focus of this thesis is on understanding the power and limitations of proof systems within the framework of property testing.

We study locally verifiable proofs of proximity (LVPP), which are probabilistic proof systems wherein the verifier queries a sublinear number of bits of a statement and is only required to reject statements that are far from valid. In their most basic form, the verifier receives, in addition to query access to the statement, free access to a proof of sublinear length; such proof systems are called Merlin-Arthur proofs of proximity (MAP) and can be viewed as the MA (i.e., "randomized NP") analogue of property testing.

Other notable forms of LVPPs include interactive proofs of proximity (IPP), in which the verifier is allowed to communicate with an omniscient prover (rather than obtain a static proof), and probabilistically checkable proofs of proximity (PCPP), in which the verifier is only allowed to make a small number of queries to both the statement and the proof (which, in the case of PCPPs, is typically longer than the statement). These proof systems can be viewed as the IP and PCP analogues of property testing.

In this thesis, we initiate the study of some types of LVPPs and continue the study of others. Our main contributions include:

- Introducing the notion of non-interactive (Merlin-Arthur) proofs of proximity (MAP) and initiating its systematic study.
- Exponential separations between the power of property testers, MAPs, and IPPs. In particular, denoting by PT, MAP, and IPP the classes of properties that admit testers and verifiers of polylogarithmic query and communication complexity, we show that PT ⊊ MAP ⊊ IPP, which can be interpreted as separating BPP, MA, and IP in the setting of property testing.
- A hierarchy theorem for IPPs, which shows that the power of IPPs gradually increases with the number of rounds of communication allowed between the prover and the verifier.
- Constructions of MAPs and IPPs for several complexity classes, including constraint satisfaction problems (such as 3SAT formulas), properties of graphs, languages accepted by small branching programs, and context-free languages, as well as a strong form of PCPPs for affine subspaces.
- Several constructions of error-correcting codes admitting local features (such as a strong form of local testability, a relaxed form of local decodability, and testability of numerous subcodes) that are useful for applications to LVPPs.
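
To make the testing model concrete, here is a toy Python sketch, not a construction from the thesis: a tester for the simple property "all symbols of x are equal" that queries only O(1/ε) positions of x and rejects, with high probability, inputs that are ε-far from the property.

```python
# Toy illustration of the property-testing model: the tester sees x only
# through the `query` oracle and reads O(1/eps) positions in total.
import random


def test_all_equal(query, n, eps, reps=2):
    """Accept iff no sampled position disagrees with x[0].

    If x is all-equal, we always accept. If x is eps-far from all-equal
    (at least eps*n symbols must change), each uniform sample disagrees
    with the majority symbol with probability >= eps, so ~reps/eps samples
    catch a violation with high probability.
    """
    first = query(0)
    for _ in range(int(reps / eps) + 1):
        if query(random.randrange(n)) != first:
            return False  # witnessed a violation: x cannot be all-equal
    return True


n, eps = 10_000, 0.1
good = "0" * n                         # has the property
far = "0" * (n // 2) + "1" * (n // 2)  # 1/2-far from the property

print(test_all_equal(lambda i: good[i], n, eps))  # True (always)
print(test_all_equal(lambda i: far[i], n, eps))   # False (w.h.p.)
```

A MAP for a richer property would additionally hand the verifier a short proof string, letting it reject far inputs with even fewer queries; the thesis studies exactly how much such proofs help.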


Author(s):  
LEEHTER YAO ◽  
KUEI-SUNG WENG

A fuzzy classifier that uses multiple ellipsoids to approximate decision regions for classification is designed in this paper. To learn the sizes and orientations of the ellipsoids, an algorithm called the evolutionary ellipsoidal classification algorithm (EECA), which integrates the genetic algorithm (GA) with the Gustafson-Kessel algorithm (GKA), is proposed. Within EECA, the GA is employed to learn the size of every ellipsoid. With the size of every ellipsoid encoded and intelligently estimated in the GA chromosome, GKA is utilized to learn the corresponding ellipsoid; for an assigned ellipsoid size, GKA adapts the distance norm to the underlying distribution of the prototype data points. A process called directed initialization is proposed to improve EECA's learning efficiency. Because EECA learns the data point distribution in every cluster by adjusting an ellipsoid of suitable size and orientation, the information contained in the ellipsoid is further utilized to improve cluster validity. A cluster validity measure is defined in this paper based on the ratio of the summed intra-cluster scatter to the inter-cluster separation. The proposed measure takes advantage of EECA's learning capability and serves as an effective index for determining the number of ellipsoids required for classification.
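
The paper's exact index definition is not reproduced in the abstract; the following Python sketch only illustrates a validity measure of this ratio form, with each cluster's scatter measured by the Mahalanobis (ellipsoidal) distance to its centroid. The function name and the details are assumptions, not the authors' formula.

```python
# Hypothetical sketch: ratio of summed intra-cluster (ellipsoidal) scatter
# to inter-cluster separation; lower values suggest compact, well-separated
# clusters. Illustrative only; the paper's definition may differ.
import numpy as np


def ellipsoidal_validity(X, labels):
    centroids, scatter = [], 0.0
    for c in np.unique(labels):
        pts = X[labels == c]
        mu = pts.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(pts, rowvar=False))
        d = pts - mu
        # Mean squared Mahalanobis distance: intra-cluster scatter.
        scatter += np.mean(np.einsum("ij,jk,ik->i", d, cov_inv, d))
        centroids.append(mu)
    centroids = np.array(centroids)
    # Inter-cluster separation: smallest centroid-to-centroid distance.
    dists = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)
    return scatter / dists[dists > 0].min()


rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)
print(ellipsoidal_validity(X, labels))  # small value: two tight clusters
```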

