Colocationist Answers to the Grounding Problem
Theoria, 2021
Author(s): Marta Campdelacreu

2010, Vol 156 (2), pp. 173-197
Author(s): Louis deRosset

2017, Vol 16, pp. 15002
Author(s): Luca Ghislanzoni, Luca Benetti, Tommaso Misuri, Giovanni Cesaretti, Lorenzo Fontani

2018, Vol 82, pp. 333-364
Author(s): Kathrin Koslicki

Abstract: Concrete particular objects (e.g. living organisms) figure saliently in our everyday experience as well as in our scientific theorizing about the world. A hylomorphic analysis of concrete particular objects holds that these entities are, in some sense, compounds of matter (hūlē) and form (morphē or eidos). The Grounding Problem asks why an object and its matter (e.g. a statue and the clay that constitutes it) can apparently differ with respect to certain of their properties (e.g. the clay's ability to survive being squashed, as compared to the statue's inability to do so), even though they are otherwise so much alike. In this paper, I argue that a hylomorphic analysis of concrete particular objects, in conjunction with a non-modal conception of essence of the type encountered, for example, in the works of Aristotle and Kit Fine, has the resources to yield a solution to the Grounding Problem.


Author(s): Andrew Brenner

Composition occurs when one or more objects are parts of another object. The metaphysics of composition concerns the nature of composition, i.e. what it is and how it works. Some of the more important questions philosophers have regarding the metaphysics of composition are the following. (1) When does composition occur? This is van Inwagen's 'Special Composition Question'. Four prominent answers to this question are: (i) objects compose another object when the former objects are in contact; (ii) any two or more objects compose another object; (iii) objects never compose another object; (iv) objects compose another object when the activities of the former objects constitute a life. (2) Are composite objects identical with their parts? Proponents of 'composition as identity' answer 'yes' to this question. There are two primary variants of composition as identity, 'strong' composition as identity and 'weak' composition as identity. The most prominent objection to strong composition as identity is an objection from Leibniz's Law: composite objects cannot be identical with their parts, since they seem to have properties which their parts do not have. (3) Is it possible for one object to constitute another object? Here 'constitution' is the relation alleged to obtain between, for example, a clay statue and the lump of clay from which it is formed. We can distinguish between the thesis that constitution is identity and the thesis that constitution is not identity. The chief motivation leading some philosophers to reject the thesis that constitution is not identity is the 'grounding problem' for that thesis: if the statue and the lump are distinct, what grounds their differing in their modal properties while they share all their microphysical properties? (4) Are there, in addition to composite objects, the 'forms' of those objects, and if so, what is the relationship between composite objects and their forms?
We can distinguish between (at least) two variants of hylomorphism (the thesis that objects have forms), with the main distinction between the two views being whether or not they regard forms as being among the parts of composite objects.


Author(s):  
Angelo Loula ◽  
João Queiroz

The topic of representation acquisition, manipulation and use has been a major trend in Artificial Intelligence since its beginning and persists as an important matter in current research. In particular, owing to the initial focus on the development of symbolic systems, this topic is usually related to research on symbol grounding by artificial intelligent systems. Symbolic systems, as proposed by Newell and Simon (1976), are characterized as high-level cognition systems in which symbols are seen as "[lying] at the root of intelligent action" (Newell and Simon, 1976, p. 83). Moreover, they stated the Physical Symbol System Hypothesis (PSSH), making the strong claim that "a physical symbol system has the necessary and sufficient means for general intelligent action" (p. 87). This hypothesis thus asserts an equivalence between symbol systems and intelligent action: every intelligent action would originate in a symbol system, and every symbol system would be capable of intelligent action. The symbol system described by Newell and Simon (1976) is a computer program capable of manipulating entities called symbols, 'physical patterns' combined in expressions, which can be created, modified or destroyed by syntactic processes. Two main capabilities of symbol systems were said to provide the system with the properties of closure and completeness, so that the system itself could be built upon symbols alone (Newell & Simon, 1976). These capabilities were designation (expressions designate objects) and interpretation (expressions can be processed by the system). The question, from which much of the criticism of symbol systems arose, was how these systems, built upon and manipulating just symbols, could designate anything outside their own domain. Symbol systems lack 'intentionality', stated John Searle (1980) in an important essay in which he described a widely known thought experiment (Gedankenexperiment), the 'Chinese Room Argument'.
In this experiment, Searle places himself in a room where he is given correlation rules that permit him to determine answers, in Chinese, to questions also posed in Chinese, although Searle, as the interpreter, knows no Chinese. To an outside observer (who understands Chinese), the man in the room understands Chinese quite well, even though he is actually manipulating non-interpreted symbols using formal rules. For the outside observer the symbols in the questions and answers do represent something, but for the man in the room the symbols lack intentionality. The man in the room acts like a symbol system, which relies only on the manipulation of symbolic structures by formal rules. For such systems, the manipulated tokens are not about anything, and so they cannot even be regarded as representations. The only intentionality that can be attributed to these symbols belongs to whoever uses the system, sending inputs that represent something and interpreting the output that comes out of the system (Searle, 1980). Intentionality, therefore, is the important feature missing from symbol systems. The concept of intentionality is that of aboutness, a "feature of certain mental states by which they are directed at or about objects and states of affairs in the world" (Searle, 1980), as when a thought is about a certain place.1 Searle (1980) points out that a 'program' by itself cannot achieve intentionality, because programs involve only formal relations, while intentionality depends on causal relations. Along these lines, Searle leaves open a possibility for overcoming the limitations of mere programs: 'machines', physical systems causally connected to the world and possessing 'causal internal powers', could reproduce the necessary causality, an approach in the same direction as situated and embodied cognitive science and robotics. It is important to notice that these 'machines' should not be just robots controlled by a symbol system as described before.
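The purely syntactic character of such a system can be made concrete with a small sketch. The following Python fragment is an illustration only: the token strings and the rule table are invented for the example, not drawn from Searle or from Newell and Simon. The point is that the program answers by pattern matching over uninterpreted tokens; nothing in it "knows" what any token means.

```python
# A toy "Chinese Room": a rule table mapping input token sequences to
# output token sequences. All tokens and rules are hypothetical; the
# lookup is purely formal, with no access to what the tokens mean.
RULES = {
    ("ni", "hao"): ("ni", "hao"),          # a greeting answered with a greeting
    ("ni", "e", "ma"): ("wo", "bu", "e"),  # a question answered with a stock reply
}

def room(symbols):
    """Return the output the rule table prescribes, or a default token pair.

    Tuples of uninterpreted tokens go in; tuples of uninterpreted
    tokens come out. No semantics is involved at any step.
    """
    return RULES.get(tuple(symbols), ("bu", "dong"))

# To an outside observer the replies may look meaningful; to the system
# they are only token manipulations.
print(room(["ni", "hao"]))       # ('ni', 'hao')
print(room(["ni", "e", "ma"]))   # ('wo', 'bu', 'e')
```

However rich the rule table were made, the mapping would remain formal in exactly the sense the argument targets: swapping the keyboard for a camera would change only where the input tuples come from, not what they are to the system.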
If the input came not from a keyboard but from a video camera, and the output went not to a monitor but to motors, it would make no difference, since the symbol system is not aware of the change; even in this case, the robot would not have intentional states (Searle, 1980). Symbol systems should not depend on formal rules only, if symbols are to represent something to the system. This issue brought up another question: how could symbols be connected to what they represent? Or, as stated by Harnad (1990) in defining the Symbol Grounding Problem: "How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols?" The Symbol Grounding Problem therefore reinforces two important points. First, symbols do not represent anything to a system, at least not what they were said to 'designate'; only someone operating the system could recognize those symbols as referring to entities outside the system. Second, the symbol system cannot maintain its closure by relating symbols only to other symbols; something else is necessary to establish a connection between symbols and what they represent. An analogy made by Harnad (1990) is with someone who knows no Chinese but tries to learn Chinese from a Chinese/Chinese dictionary. Since terms are defined by means of other terms, none of which is known beforehand, the person is kept in a 'dictionary-go-round' without ever understanding those symbols. The great challenge for Artificial Intelligence researchers, then, is to connect symbols to what they represent, and also to identify the consequences that implementing such a connection would have for a symbol system; for example, many of the descriptions of symbols by means of other symbols would become unnecessary once descriptions through grounding are available. It is important to notice that the grounding process is not just a matter of giving sensors to an artificial system so that it can 'see' the world, since this 'trivializes' the symbol grounding problem and ignores the important issue of how the connection between symbols and objects is established (Harnad, 1990).
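The dictionary-go-round can likewise be sketched in a few lines. In this hedged illustration the terms and their definitions are invented; the dictionary maps each term only to other terms in the same dictionary, so chasing definitions never escapes the symbol set, which is the closure Harnad's analogy targets.

```python
# A toy Chinese/Chinese dictionary: every (made-up) term is defined
# solely by other terms in the same dictionary.
DICTIONARY = {
    "ma": ["dongwu"],
    "dongwu": ["shengwu"],
    "shengwu": ["dongwu", "ma"],
}

def reachable(term):
    """Collect every term reachable by chasing definitions from `term`."""
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(DICTIONARY.get(t, []))
    return seen

# Following definitions from any entry only ever yields other entries:
assert reachable("ma") <= set(DICTIONARY)
```

For someone who already knows what some of the terms mean, the dictionary is informative; for a system starting from the symbols alone, the traversal above is all there is.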

