An Experimental Comparison of Viewpoint-Specific and Viewpoint-Independent Models of Object Representation

2000 ◽  
Vol 53 (3) ◽  
pp. 792-824 ◽  
Author(s):  
Michael B. Johnston ◽  
Anthony Hayes

Four experiments that investigate the cognitive representation of objects in human observers are reported. Two broad classes of theory were examined: viewpoint-specific and viewpoint-independent models. The former postulate that the data structures underpinning object recognition correspond to discrete views and require additional processing to access them from unfamiliar viewpoints. The latter postulate data structures that are independent of any particular viewpoint and can be directly accessed from a wide range of viewpoints. Two experimental tasks were used: a sequential matching paradigm and a cognitive learning paradigm. Findings favour viewpoint-specific models over viewpoint-independent models.

2018 ◽  
Vol 18 (3-4) ◽  
pp. 470-483 ◽  
Author(s):  
GREGORY J. DUCK ◽  
JOXAN JAFFAR ◽  
ROLAND H. C. YAP

Malformed data structures can lead to runtime errors such as arbitrary memory access or corruption. Despite this, reasoning over data-structure properties for low-level heap-manipulating programs remains challenging. In this paper we present a constraint-based program analysis that checks data-structure integrity, with respect to given target data-structure properties, as the heap is manipulated by the program. Our approach is to automatically generate a solver for the properties using the type definitions from the target program. The generated solver is implemented using a Constraint Handling Rules (CHR) extension of built-in heap, integer, and equality solvers. A key property of our program analysis is that the target data-structure properties are shape neutral, i.e., the analysis does not check for properties relating to a given data-structure graph shape, such as doubly linked lists versus trees. Nevertheless, the analysis can detect errors in a wide range of data-structure-manipulating programs, including those that use lists, trees, DAGs, graphs, etc. We present an implementation that uses the Satisfiability Modulo Constraint Handling Rules (SMCHR) system. Experimental results show that our approach works well for real-world C programs.
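The idea of a shape-neutral check can be illustrated with a toy sketch (our own, not the SMCHR-based analysis from the paper): model the heap as a map from addresses to cells, and verify only per-cell, type-derived properties, such as every pointer field referring to an allocated cell, without assuming whether the structure is a list, tree, or graph. The type table and field names below are invented for illustration.

```python
# Hypothetical sketch of a shape-neutral heap-integrity check.
# The heap is a dict from address to cell; we check only that each
# pointer field refers to an allocated cell, regardless of graph shape.

HEAP_TYPES = {
    "node": ["next"],   # assumed type definition: a 'node' has one pointer field
}

def check_integrity(heap):
    """Return a list of integrity violations; empty means the heap is well formed."""
    errors = []
    for addr, cell in heap.items():
        typ = cell["type"]
        for field in HEAP_TYPES.get(typ, []):
            target = cell.get(field)
            if target is not None and target not in heap:
                errors.append(f"{addr}.{field} -> dangling address {target}")
    return errors

# A cyclic structure passes (shape is irrelevant); a dangling pointer fails.
ok_heap = {
    1: {"type": "node", "next": 2},
    2: {"type": "node", "next": 1},   # cycle: still well formed
}
bad_heap = {
    1: {"type": "node", "next": 99},  # 99 was never allocated
}
print(check_integrity(ok_heap))   # []
print(check_integrity(bad_heap))  # one dangling-pointer report
```

Note that the cyclic heap passes: the check is agnostic to whether the cells form a list, a ring, or a tree, which is the sense in which such a property is shape neutral.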


2017 ◽  
Vol 44 (2) ◽  
pp. 203-229 ◽  
Author(s):  
Javier D Fernández ◽  
Miguel A Martínez-Prieto ◽  
Pablo de la Fuente Redondo ◽  
Claudio Gutiérrez

The publication of semantic web data, commonly represented in Resource Description Framework (RDF), has experienced outstanding growth over the last few years. Data from all fields of knowledge are shared publicly and interconnected in active initiatives such as Linked Open Data. However, despite the increasing availability of applications managing large-scale RDF information such as RDF stores and reasoning tools, little attention has been given to the structural features emerging in real-world RDF data. Our work addresses this issue by proposing specific metrics to characterise RDF data. We specifically focus on revealing the redundancy of each data set, as well as common structural patterns. We evaluate the proposed metrics on several data sets, which cover a wide range of designs and models. Our findings provide a basis for more efficient RDF data structures, indexes and compressors.
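To make the notion of structural metrics concrete, here is a minimal sketch, with metric names and formulas of our own (not the paper's), computing two simple statistics over RDF triples represented as plain tuples.

```python
from collections import Counter

# Illustrative sketch: two simple structural metrics over RDF triples,
# in the spirit of characterising a data set's structure. The metric
# definitions here are our own, not those proposed in the article.

triples = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:name",  "Alice"),
    ("ex:bob",   "foaf:name",  "Bob"),
    ("ex:bob",   "foaf:knows", "ex:alice"),
]

def subject_out_degrees(ts):
    """Number of triples per subject: a basic structural pattern."""
    return Counter(s for s, _, _ in ts)

def predicate_ratio(ts):
    """Distinct predicates per triple: a crude proxy for redundancy
    (fewer distinct predicates per triple means more repeated structure)."""
    return len({p for _, p, _ in ts}) / len(ts)

print(subject_out_degrees(triples))  # each subject appears in 2 triples
print(predicate_ratio(triples))      # 0.5
```

Low predicate ratios of this kind are one signal that a data set has regular, compressible structure, which is the connection the abstract draws to more efficient RDF indexes and compressors.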


Author(s):  
Madoc Sheehan

Developing an engineering student's awareness of sustainability through the embedding of sustainability curricula is widely considered to be essential to modernising chemical engineering degree programs. In this chapter, the chemical engineering program at James Cook University is used as a case study to illustrate the design and sequencing of embedded curricula associated with developing a student's awareness of sustainability. There are a wide range of examples of skills, techniques, and characteristics associated with developing this awareness. In this chapter, an approach is described whereby a set of generic and interdisciplinary capabilities are developed to provide a degree of flexibility in how sustainability is interpreted and taught. A cognitive learning matrix is utilised as a design tool that facilitates determination of new subject learning outcomes aligned with the sustainability capabilities. A variety of curriculum examples are introduced and described.


2020 ◽  
Vol 15 (2) ◽  
pp. 136-143
Author(s):  
Omid Akbarzadeh ◽  
Mohammad R. Khosravi ◽  
Mehdi Shadloo-Jahromi

Background: Achieving the best possible classification accuracy is the main purpose of any pattern recognition scheme. An interesting area of classifier design is biomedical signal and image processing. Materials and Methods: In the current work, in order to increase recognition accuracy, a theoretical framework for the combination of classifiers is developed. This method uses different pattern representations to show that a wide range of existing algorithms can be incorporated as particular cases of compound classification, in which all the pattern representations are used jointly to make an accurate decision. Results: The results show that the combination rules developed under the Naive Bayes and fuzzy-integral methods outperform other classifier combination schemes. Conclusion: The performance of different combination schemes has been studied through an experimental comparison of different classifier combination plans. The dataset used in the article was obtained from biological signals.
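One of the combination schemes named in the abstract, the Naive Bayes (product) rule, can be sketched as follows. The classifier outputs below are made up for illustration; this is a generic textbook rule, not the authors' specific implementation.

```python
import math

# Sketch of the product (Naive Bayes) combination rule: multiply each
# classifier's posterior class-wise and renormalise. The rule assumes
# the classifiers' errors are independent.

def product_rule(posteriors):
    """Combine per-classifier posterior vectors into one fused posterior."""
    n_classes = len(posteriors[0])
    combined = [math.prod(p[c] for p in posteriors) for c in range(n_classes)]
    z = sum(combined)
    return [x / z for x in combined]

# Three hypothetical classifiers, two classes; the third mildly disagrees.
outputs = [[0.8, 0.2], [0.7, 0.3], [0.4, 0.6]]
fused = product_rule(outputs)
decision = max(range(len(fused)), key=fused.__getitem__)
print(fused, "-> class", decision)
```

Because the product rule multiplies evidence, two confident classifiers can outvote one uncertain dissenter, which is one reason such rules often beat simple majority voting in experimental comparisons like the one reported here.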


2015 ◽  
Vol 27 (2) ◽  
pp. 277-295 ◽  
Author(s):  
MAXIME CROCHEMORE ◽  
COSTAS S. ILIOPOULOS ◽  
ALESSIO LANGIU ◽  
FILIPPO MIGNOSI

Given a set $\mathcal{D}$ of q documents, the Longest Common Substring (LCS) problem asks, for any integer 2 ⩽ k ⩽ q, for the longest substring that appears in k documents. LCS is a well-studied problem with a wide range of applications in bioinformatics, from microarrays to DNA sequence alignment and analysis. The problem was solved by Hui (2000, International Journal of Computer Science and Engineering, 15, 73–76) using a famous constant-time solution to the Lowest Common Ancestor (LCA) problem in trees, coupled with the use of suffix trees. In this article, we present a simple method for solving the LCS problem by using suffix trees (STs) and classical union-find data structures. In turn, we show how this simple algorithm can be adapted to work with other space-efficient data structures, such as enhanced suffix arrays (ESA) and the compressed suffix tree.
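The problem statement itself can be pinned down with a naive baseline (this is only a brute-force definition check, not Hui's LCA/suffix-tree algorithm or the union-find variant of the article): for a given k, return the longest substring occurring in at least k of the q documents, by enumerating substrings of the shortest document, longest first.

```python
# Brute-force baseline for the LCS-in-k-documents problem, for tiny inputs
# only; real solutions use suffix trees/arrays as described in the article.

def lcs_in_k(docs, k):
    """Longest substring appearing in at least k of the documents."""
    base = min(docs, key=len)               # candidates come from the shortest doc
    n = len(base)
    for length in range(n, 0, -1):          # try longest substrings first
        for i in range(n - length + 1):
            sub = base[i:i + length]
            if sum(sub in d for d in docs) >= k:
                return sub
    return ""

docs = ["banana", "ananas", "bandana"]
print(lcs_in_k(docs, 3))  # 'ana' appears in all three documents
print(lcs_in_k(docs, 2))  # 'anana' appears in two documents
```

This baseline is cubic or worse in the input size; the point of the suffix-tree and union-find machinery in the article is to answer all values of k efficiently at once.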


Author(s):  
Hammad Mazhar

This paper describes an open source parallel simulation framework capable of simulating large-scale granular and multi-body dynamics problems. This framework, called Chrono::Parallel, builds upon the modeling capabilities of Chrono::Engine, another open source simulation package, and leverages parallel data structures to enable scalable simulation of large problems. Chrono::Parallel is distinctive in that it was designed from the ground up to leverage parallel data structures and algorithms so that it scales across a wide range of computer architectures while retaining a rich modeling capability for simulating many different types of problems. The modeling capabilities of Chrono::Parallel are demonstrated in the context of additive manufacturing and 3D printing by modeling the Selective Laser Sintering layering process and simulating large complex interlocking structures that require compression and folding to fit into a 3D printer's build volume.


2015 ◽  
Vol 22 (12) ◽  
pp. 2359-2363 ◽  
Author(s):  
Mengjie Hu ◽  
Zhenzhong Wei ◽  
Mingwei Shao ◽  
Guangjun Zhang
