A Framework for Bridging the Gap Between Symbolic and Non-Symbolic AI

Author(s): Gehan Abouelseoud, Amin Shoukry
1987, Vol 1 (2), pp. 95-109
Author(s): P. Smolensky

2021
Author(s): Luciano Serafini, Artur d’Avila Garcez, Samy Badreddine, Ivan Donadello, Michael Spranger, ...

The recent availability of large-scale data combining multiple data modalities has opened various research and commercial opportunities in Artificial Intelligence (AI). Machine Learning (ML) has achieved important results in this area mostly by adopting a sub-symbolic distributed representation. It is now generally accepted that such purely sub-symbolic approaches can be data inefficient and struggle at extrapolation and reasoning. By contrast, symbolic AI is based on rich, high-level representations ideally built from human-readable symbols. Despite being more explainable and having success at reasoning, symbolic AI usually struggles when faced with incomplete or inaccurate knowledge, large data sets, and combinatorial complexity. Neurosymbolic AI attempts to benefit from the strengths of both approaches, combining reasoning over complex representations of knowledge with efficient learning from multiple data modalities. Hence, neurosymbolic AI seeks to ground rich knowledge into efficient sub-symbolic representations and to explain sub-symbolic representations and deep learning by offering high-level symbolic descriptions for such learning systems. Logic Tensor Networks (LTN) is a neurosymbolic AI system for querying, learning and reasoning with rich data and abstract knowledge. LTN introduces Real Logic, a fully differentiable first-order language with concrete semantics such that every symbolic expression has an interpretation that is grounded onto real numbers in the domain. In particular, LTN converts Real Logic formulas into computational graphs that enable gradient-based optimization. This chapter presents the LTN framework and illustrates its use on knowledge completion tasks to ground the relational predicates (symbols) into a concrete interpretation (vectors and tensors). It then investigates the use of LTN for semi-supervised learning, learning of embeddings and reasoning. LTN has been applied recently to many important AI tasks, including semantic image interpretation, ontology learning and reasoning, and reinforcement learning, which use LTN for supervised classification, data clustering, semi-supervised learning, embedding learning, reasoning and query answering. The chapter presents some of the main recent applications of LTN before analyzing results in the context of related work and discussing the next steps for neurosymbolic AI and LTN-based AI models.
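The grounding idea can be made concrete with a small sketch. The following is a minimal, self-contained illustration of the Real Logic idea written directly in PyTorch; it does not use the actual LTN library API, and the predicate architecture, the product t-norm, the p-mean quantifier, the toy data and all names in it are illustrative assumptions. A predicate is grounded as a neural network whose output in [0, 1] is read as a truth degree, a tiny knowledge base is turned into a differentiable satisfaction score, and gradient descent maximizes that score.

```python
# Minimal sketch of Real-Logic-style grounding (illustrative, not the LTN API).
import torch
import torch.nn as nn

torch.manual_seed(0)

class Predicate(nn.Module):
    """Grounds a unary predicate P as a network whose output in [0, 1] is the truth degree of P(x)."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 16), nn.ReLU(),
            nn.Linear(16, 1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

# Fuzzy connectives and quantifier (one common choice; other t-norms/aggregators are possible).
def Not(v):
    return 1.0 - v                                   # standard negation

def And(v, w):
    return v * w                                     # product t-norm

def Forall(v, p=2.0):
    # p-mean-error aggregator: close to 1 only if all truth values are close to 1
    return 1.0 - ((1.0 - v) ** p).mean() ** (1.0 / p)

# Toy data: points that should satisfy P and points that should satisfy not-P (assumed for illustration).
pos = torch.randn(64, 2) + 2.0
neg = torch.randn(64, 2) - 2.0

P = Predicate(in_dim=2)
opt = torch.optim.Adam(P.parameters(), lr=0.01)

for step in range(500):
    opt.zero_grad()
    # Knowledge base: (forall x in pos: P(x)) AND (forall x in neg: not P(x))
    sat = And(Forall(P(pos)), Forall(Not(P(neg))))
    loss = 1.0 - sat                                 # maximize satisfaction of the knowledge base
    loss.backward()
    opt.step()

print(f"knowledge-base satisfaction after training: {sat.item():.3f}")
```

In the full LTN framework the same construction extends to functions, binary predicates, existential quantifiers and richer aggregators, and querying or reasoning amounts to evaluating further Real Logic formulas under the learned grounding.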


Author(s): Peter Kåhre

My proposal is based on my doctoral dissertation On the Shoulders of AI-technology: Sociology of Knowledge and Strong Artificial Intelligence, which I successfully defended on May 29th, 2009 (e-published at http://www.lu.se/o.o.i.s?id=12588&postid=1389611). The dissertation is concerned with Sociology’s stance in the debate on Strong Artificial Intelligence, i.e. AI systems that are able to shape knowledge on their own. There is a need for sociologists to realize the difference between two approaches to constructing AI systems: Symbolic AI (or Classic AI) and Connectionist AI in a distributed model – DAI. Sociological literature shows a largely critical attitude towards Symbolic AI, an attitude that is justified. The main theme of the dissertation is that DAI is not only compatible with Sociology’s approach to what is social, but also constitutes an apt model of how a social system functions. This is consolidated with help from the German sociologist Niklas Luhmann’s social systems theory. Many sociologists criticize AI because they think that diversity is important and can only be comprehended in informal circumstances that only humans interacting together can handle. They mean that social intelligence is needed to make something out of diversity and informalism. Luhmann’s systems theory gives the opposite perspective. It tells us that it is social systems that communicate and produce new knowledge structures out of contingency. Psychological systems, i.e. humans, can only think within the circumstances the social system offers. In that way human thoughts are bound by formalism. Diversity is constructed when the social systems interact with complexity in their environments. They reduce the complexity and try to present it as meaningful diversity. Today, when most academic literature is stored electronically and is accessible through the Internet from all over the world, DAI can help social systems to observe and reduce complexity in this global dimension. It is pointed out that human consciousness is limited in handling this global dimension. Therefore it is reasonable to argue that DAI, in at least this dimension, has a stronger intelligence than humans have. I will argue that Luhmann’s social theory and DAI give a good model for analyzing the conditions for diversity in the Internet society. Further, the discussion about strong AI gives many opportunities to discuss what sort of information literacy (IL) is needed, and it also gives some perspective on the concept of IL. I have observed that the concept has evolved from something that denoted certain formal capacities to something that has to do with a capacity to observe informal relations. That discussion can easily be compared to a parallel discussion within the debate about strong AI.


Author(s): Michael Sioutis, Diedrich Wolter

Qualitative Spatial & Temporal Reasoning (QSTR) is a major field of study in Symbolic AI that deals with the representation of, and reasoning about, spatio-temporal information in an abstract, human-like manner. We survey the current status of QSTR from the viewpoint of reasoning approaches, and identify certain future challenges that, we believe, once overcome, will allow the field to meet the demands of and adapt to real-world, dynamic, and time-critical applications of highly active areas such as machine learning and data mining.
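As a flavour of what such reasoning looks like computationally, the sketch below (not taken from the surveyed work) propagates qualitative constraints in the simple point algebra, whose base relations are "<", "=", ">"; full QSTR calculi such as Allen's interval algebra or RCC-8 follow the same pattern of composing possibly disjunctive relations via a composition table.

```python
# Illustrative sketch of qualitative constraint composition in the point algebra.
# COMP[(r1, r2)] gives the possible relations of (x ? z), given (x r1 y) and (y r2 z).
COMP = {
    ("<", "<"): {"<"},            ("<", "="): {"<"},   ("<", ">"): {"<", "=", ">"},
    ("=", "<"): {"<"},            ("=", "="): {"="},   ("=", ">"): {">"},
    (">", "<"): {"<", "=", ">"},  (">", "="): {">"},   (">", ">"): {">"},
}

def compose(r1: set, r2: set) -> set:
    """Weak composition of two (possibly disjunctive) qualitative relations."""
    out = set()
    for a in r1:
        for b in r2:
            out |= COMP[(a, b)]
    return out

# Known constraints: x < y and y <= z; infer the possible relations between x and z.
x_y = {"<"}
y_z = {"<", "="}
print(compose(x_y, y_z))  # {'<'} : x must precede z
```

Constraint-network algorithms such as path consistency repeat exactly this composition-and-intersection step over all triples of variables until a fixed point is reached.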


Data, 2019, Vol 4 (2), pp. 63
Author(s): Dimitrios Koutsomitropoulos, Spiridon Likothanassis, Panos Kalnis

One cannot help but classify the continuous birth and demise of Artificial Intelligence (AI) trends into the everlasting theme of the battle between connectionist and symbolic AI [...]


2020, Vol 30 (2), pp. 157-193
Author(s): Sergey Astakhov

A conflict between artificial intelligence (AI) researchers and phenomenologist Hubert Dreyfus arose in the 1960s and continued until the 2000s. The creators of the first AI programs believed that skill acquisition is a matter of solving problems by using particular mental representations, or heuristics. Dreyfus set out to prove that heuristics are not needed for skill acquisition because the human mind and body are capable of reacting to problematic situations in a flexible way without any mental representations. By clarifying the backstory of the conflict and analyzing the fundamental contradictions between the two theories of skill, the article shows how the phenomenology of skill acquisition originated from a critique of symbolic AI. Dreyfus developed his understanding of interconnections between mind and body in opposition to the associationism in the theories of Herbert Simon, Allen Newell and Edward Feigenbaum. He maintained that human beings have fringe consciousness, insight and tolerance of ambiguity, and that they have a specific body structure and needs which make it possible to discriminate between relevant and irrelevant features in the environment and get a maximum grip on it. The author analyzes how theories of learning created within symbolic AI influenced Dreyfus’s five-stage model of skill acquisition. That model explained why programs by Simon and his colleagues achieved initial success, but it also exposed their limitations. To clarify the teleology of skill, Dreyfus explored how the idea of motor intentionality is connected with neural network modelling. Two perspectives on the role of Dreyfus in the history of AI are outlined, together with the reasons why his philosophy had almost no effect on the AI community even though it was influential in the social sciences and humanities. Finally, current challenges facing the phenomenology of skill acquisition are explored.


Author(s): Bruce MacLennan

The history of artificial intelligence (AI) is commonly supposed to begin with Turing’s (1950) discussions of machine intelligence, and to have been defined as a field at the 1956 Dartmouth Summer Research Project on Artificial Intelligence. However, the ideas on which AI is based, and in particular those on which symbolic AI (see below) is based, have a very long history in the Western intellectual tradition, dating back to ancient Greece (see also McCorduck, 2004). It is important for modern researchers to understand this history, for it reflects problematic assumptions about the nature of knowledge and cognition: assumptions that can impede the progress of AI if accepted uncritically.


2007, Vol 19 (1), pp. 17-28
Author(s): Qingxiang Wu, David A. Bell, Girijesh Prasad, Thomas Martin McGinnity
