physical symbol system
Recently Published Documents


TOTAL DOCUMENTS: 8 (FIVE YEARS: 0)

H-INDEX: 2 (FIVE YEARS: 0)

2020 ◽  
Vol 07 (01) ◽  
pp. 25-38
Author(s):  
Piotr Bołtuć

The main problem for AI consciousness is to operate within the right kind of AI. We distinguish between traditional computing (GOFAI) and computing based on stochastic pattern optimization; the latter is called here computing at the edge of chaos. Optimization of learning patterns, the gist of its success, often happens between areas of overly repetitive order and areas of stochastic processes that are hard to predict and control. This shifts the focus away from the opposition between symbolic and sub-symbolic computing: symbols can appear at different granularities, and the divide between the Physical Symbol System Hypothesis and neural nets no longer seems the most productive cut to make. Computing at the edge of chaos is promising for AGI, and especially for AGI consciousness. The second problem for AI consciousness is to work with the right definitions of consciousness.


Author(s):  
Frances Egan

The article gives an overview of several distinct theses that go under the name "representationalism" in cognitive science. Strong representationalism is the view that representational mental states have a specific form: they are functionally characterizable relations to internal representations. Proponents of strong representationalism typically suggest that the system of internal representations constitutes a language with a combinatorial syntax and semantics. Braddon-Mitchell and Jackson argued that mental representations might be more analogous to maps than to sentences; Waskan argued that they are akin to scale models. Fodor, and Fodor and Pylyshyn, argued that certain pervasive features of thought can be explained only by the hypothesis that thought takes place in a linguistic medium. The physical symbol system (PSS) hypothesis is a version of strong representationalism. Representational content plays a significant role in computational models of cognitive capacities: the internal states and structures posited in computational theories of cognition are distally interpreted, and the distal objects and properties that determine their representational content serve to type-individuate a computationally characterized mechanism. Strong representationalism, as exemplified by the PSS hypothesis, thus construes mental processes as operations on internal representations.


2008 ◽  
Vol 31 (2) ◽  
pp. 109-130 ◽  
Author(s):  
Derek C. Penn ◽  
Keith J. Holyoak ◽  
Daniel J. Povinelli

Abstract: Over the last quarter century, the dominant tendency in comparative cognitive psychology has been to emphasize the similarities between human and nonhuman minds and to downplay the differences as "one of degree and not of kind" (Darwin 1871). In the present target article, we argue that Darwin was mistaken: the profound biological continuity between human and nonhuman animals masks an equally profound discontinuity between human and nonhuman minds. To wit, there is a significant discontinuity in the degree to which human and nonhuman animals are able to approximate the higher-order, systematic, relational capabilities of a physical symbol system (PSS) (Newell 1980). We show that this symbolic-relational discontinuity pervades nearly every domain of cognition and runs much deeper than even the spectacular scaffolding provided by language or culture alone can explain. We propose a representational-level specification as to where human and nonhuman animals' abilities to approximate a PSS are similar and where they differ. We conclude by suggesting that recent symbolic-connectionist models of cognition shed new light on the mechanisms that underlie the gap between human and nonhuman minds.


2002 ◽  
Vol 21 (1) ◽  
pp. 9-19
Author(s):  
Thow Yick Liang

As humankind ventures deeper into the intelligence era, a totally redefined mindset is essential to ensure its continuity. In the emerging environment, human organizations must behave as intelligent beings, in the same way that biological entities compete for survival in an ecological system: they must learn, self-organize, adapt, compete and evolve. Human systems can no longer run like machines, so the structures and characteristics of the industrial era will have to be dismantled. This paradigm shift requires human organizations to redesign their structure and operations around intelligence. To strategize for the future, the first initiative an organization must take is to establish an intelligent structure and to nurture an orgmind and its collective intelligence. A significant component of the orgmind is an intelligence enhancer comprising three entities, namely intelligence, knowledge structure and theory. These entities interact continuously among themselves, supported by at least one physical symbol system. The accuracy and appropriateness of the language used helps to enhance the engagement of the interacting agents in the organization. In this respect, the ability to learn continuously, adapt quickly and evolve effectively is sustained by the intelligence enhancer.

