Allen Newell
Recently Published Documents

TOTAL DOCUMENTS: 40 (FIVE YEARS: 0)
H-INDEX: 4 (FIVE YEARS: 0)

Author(s):  
Jonathan Bendor

Although Herbert Simon and Allen Newell studied problem-solving by experts as well as nonexperts, political scientists generally understand “bounded rationality” to refer primarily to cognitive constraints: how we fall short of completely rational decision-making. This incomplete understanding deprives us of an enormously useful intellectual legacy, built not only by Newell and Simon but also by a wide array of cognitive scientists who have explored how humans have collectively solved very difficult problems such as eliminating smallpox or designing nuclear submarines. This chapter surveys this richer understanding of bounded rationality. Cognitive capacities receive as much attention as cognitive constraints. The chapter reports work on how cultural storehouses of knowledge and certain organizational arrangements amplify our cognitive capacities in both the short and the long run. Finally, it extracts from the literature a set of thematically related propositions that are building blocks for constructing macro-theories of politics out of cognitively realistic micro-premises.


Author(s):  
Pascual Martínez Freire

After a brief characterization of the cognitive sciences, this paper identifies two mistaken approaches in the philosophy of knowledge (one erudite-historical, the other merely speculative) and argues instead for the approach proper to the cognitive sciences, in which the cognizing subject is an information processor (following Alan Turing's idea) and may equally be a human, a machine, or an animal. Second, the cognitive subject is characterized as a physical symbol system (PSS), in accordance with the thesis of Allen Newell and Herbert Simon, and this thesis is discussed and illustrated with examples. Finally, the paper distinguishes various degrees of cognition in the broad sense, in animals as well as in machines, and singles out cognition in the proper sense, or intelligent cognition, which comprises the operations of conceiving, judging, and reasoning.
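The Newell–Simon thesis mentioned above holds that intelligent action arises from the manipulation of symbols. A minimal sketch (illustrative only; the function and symbols below are not from the article) is a tiny production system: symbols are tokens, expressions are strings of symbols, and processes rewrite expressions until a goal expression is reached.

```python
def run_production_system(state, rules, goal, max_steps=100):
    """Apply rewrite rules (pure symbol manipulation) until `goal` appears.

    `rules` is a list of (pattern, replacement) pairs; the first rule whose
    pattern occurs in the current expression fires, replacing one occurrence.
    """
    for _ in range(max_steps):
        if state == goal:
            return state
        for pattern, replacement in rules:
            if pattern in state:
                state = state.replace(pattern, replacement, 1)
                break
        else:
            break  # no rule applies: the system halts
    return state

# Example: transform the expression "AAB" into "BBB" by rewriting symbols.
result = run_production_system("AAB", [("A", "B")], goal="BBB")
print(result)  # BBB
```

The point of the sketch is that nothing in it depends on what the symbols "mean": cognition, on this thesis, is the lawful transformation of symbol structures, whether the substrate is a human, a machine, or an animal.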


Author(s):  
Serapio Cazana Canchis

At the beginning of the eighteenth century, Gottfried Wilhelm Leibniz designed a calculating machine, believing that reasoning could be expressed in logical terms. Jacques de Vaucanson took up a similar task, building automata that exhibited basic behaviours. This no longer surprises us, but at the time it must have seemed astonishing, even magical or heretical. Since the beginnings of artificial intelligence, in both its theory and its technological applications, the possibility has been taking shape of creating a kind of non-human intelligence capable of emulating or performing the same behaviours, up to and including thought itself, the core of human intelligence. Is this possible? This essay analyses that possibility from the standpoint of Allen Newell's concept of artificial intelligence.


Author(s):  
Gian Piero Zarri

In 1982, Allen Newell introduced the “knowledge level” principle (Newell, 1982) and revolutionized the traditional way of conceiving the relationships between knowledge management and computer science. According to this principle, the knowledge level represents the highest level in the description of any structured system: Situated above and independent from the “symbol level,” it describes the observed behaviour of the system as a function of the knowledge employed, and independently of the way this knowledge is eventually represented/implemented at the symbol level. “The knowledge level permits predicting and understanding behaviour without having an operational model of the processing that is actually being done by the agent” (Newell, 1982, p. 108). An arbitrary system is then interpreted as a rational agent that interacts with its environment in order to attain, according to the knowledge it owns, a given goal in the best way; from a strict knowledge level point of view, this system is then considered as a sort of “black box” to be modeled on the basis of its input/output behaviour, without making any hypothesis on its internal structure. To sum up, the knowledge level principle emphasises the why (i.e., the goals), and the what (i.e., the different tasks to be accomplished and the domain knowledge) more than the how (i.e., the way of implementing these tasks and of putting this domain knowledge to use).
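Newell's distinction can be made concrete with a toy sketch (hypothetical, not drawn from the chapter): two agents with different symbol-level implementations, a lookup table and a linear search, exhibit identical knowledge-level behaviour, so an observer predicting actions from goals and knowledge alone cannot tell them apart.

```python
# Knowledge-level view: predict behaviour from goals + knowledge alone.
# Two different symbol-level implementations of the same knowledge behave
# identically, so the knowledge level treats both as the same "black box".

KNOWLEDGE = {"paris": "france", "rome": "italy"}  # what the agent knows

def table_agent(goal_city):
    """Symbol level A: direct lookup in a table."""
    return KNOWLEDGE[goal_city]

def search_agent(goal_city):
    """Symbol level B: linear search over the same knowledge."""
    for city, country in KNOWLEDGE.items():
        if city == goal_city:
            return country

# Knowledge-level prediction: given the goal "name the country of Paris"
# and the knowledge above, any rational agent answers "france",
# regardless of how the answer is computed internally.
assert table_agent("paris") == search_agent("paris") == "france"
```

This is the sense in which the knowledge level emphasises the why and the what over the how: the prediction above needs no hypothesis about the agent's internal structure.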


Author(s):  
Attila Benko ◽  
Cecília Sik Lányi

George Boole was the first to describe a formal language for logical reasoning, in 1847. The next milestone in the history of artificial intelligence came in 1936, when Alan M. Turing described the Turing machine. Warren McCulloch and Walter Pitts created the model of artificial neurons in 1943, and in 1944 J. von Neumann and O. Morgenstern developed the theory of decision, which provided a complete and formal framework for specifying the preferences of agents. In 1949 Donald Hebb presented a rule for changing the weights of connections between artificial neurons, opening the way to learning, and Marvin Minsky and Dean Edmonds created the first neural computer in 1951. Artificial intelligence (AI) was born in the summer of 1956, when John McCarthy first coined the term; it was discussed at a conference at Dartmouth, where the subject first caught the attention of researchers. The next year, the first General Problem Solver was tested, and one year later McCarthy, regarded as the father of AI, announced the LISP language for creating AI software. Lisp, which stands for list processing, is still used regularly today. Herbert Simon stated in 1965: “Machines will be capable, within twenty years, of doing any work a man can do.” However, years later scientists realized that creating an algorithm that can do anything a human can do is nearly impossible. Nowadays, AI has a new meaning: creating intelligent agents to help us do our work faster and easier (Russell & Norvig, 2005; McDaniel, 1994; Shirai & Tsujii, 1982; Mitchell, 1996; Schreiber, 1999). Perceptrons, a demonstration of the limits of simple neural networks, was published by Marvin Minsky and Seymour Papert in 1969. In 1969, the first International Joint Conference on Artificial Intelligence was held in Washington, DC. PROLOG, a new language for building AI systems, was created by Alain Colmerauer in 1972. In 1983, John Laird and Paul Rosenbloom completed CMU dissertations on Soar, supervised by Allen Newell.


Author(s):  
S. G. Pulman

Karen Spärck Jones produced over 200 publications, including nine books, in her long research career. She received many awards and honours, including the Association for Computing Machinery (Special Interest Group in Information Retrieval) Salton Award in 1988; the American Society for Information Science and Technology Award of Merit in 2002; and the joint Association for Computing Machinery and Association for the Advancement of Artificial Intelligence Allen Newell Award in 2007. Karen also worked hard to try to improve the position of women in computing and to attract more women to the discipline. She was a founder member of the ‘women@cl’ network based at the Computer Laboratory and was always unstinting with her time when women students and researchers asked her advice.


2010 ◽  
Vol 15 (1) ◽  
pp. 259-281 ◽  
Author(s):  
Flávio Marcelo Risuenho dos Santos ◽  
Richard Perassi Luiz de Sousa

There is a wide diversity of interpretations of the concepts behind terms such as "information" and "knowledge", involving the ideas of the value of information and the value of knowledge, with their merely structural characteristics. This hinders the understanding and acceptance of new fields of study such as, for example, Knowledge Engineering and Management. Based on a review of part of this theoretical diversity and of its critiques, this paper proposes the inclusion and expansion of the term "cognition" as a factor needed to differentiate the terms "information" and "knowledge" within the field of Knowledge Engineering and Management. In addition, it highlights some implications of this epistemological position for a review of the knowledge-conversion processes proposed by Nonaka and Takeuchi (1997) and of knowledge-level modeling, proposed by Allen Newell (1981).

