IDA: A Cognitive Agent Architecture
Stan Franklin, Arpad Kelemen, and Lee McCauley
Institute for Intelligent Systems, The University of Memphis

For most of its four decades of existence, artificial intelligence has devoted its attention primarily to studying and emulating individual functions of intelligence. During the last decade, researchers have expanded their efforts to include systems modeling a number of cognitive functions (Albus, 1991, 1996; Ferguson, 1995; Hayes-Roth, 1995; Jackson, 1987; Johnson and Scanlon, 1987; Laird, Newell, and Rosenbloom, 1987; Newell, 1990; Pollack, ????; Riegler, 1997; Sloman, 1995). There's also been a movement in recent years toward producing systems situated within some environment (Akman, 1998; Brooks, 1990; Maes, 1990b). Some recent work by the first author and his colleagues has combined these two trends by experimenting with cognitive agents (Bogner, Ramamurthy, and Franklin, to appear; Franklin and Graesser, forthcoming; McCauley and Franklin, to appear; Song and Franklin, forthcoming; Zhang, Franklin, and Dasgupta, 1998; Zhang et al., 1998). This paper briefly describes the architecture of one such agent.

By an autonomous agent (Franklin and Graesser, 1997) we mean a system situated in, and part of, an environment, which senses that environment and acts on it, over time, in pursuit of its own agenda. It acts in such a way as to possibly influence what it senses at a later time; that is, the agent is structurally coupled to its environment (Maturana, 1975; Maturana and Varela, 1980). Biological examples of autonomous agents include humans and most animals. Non-biological examples include some mobile robots and various computational agents, including artificial life agents, software agents, and computer viruses. Here we'll be concerned with autonomous software agents 'living' in real-world computing systems. Such autonomous software agents, when equipped with cognitive features (interpreted broadly) chosen from among multiple senses, perception, short- and long-term memory, attention, planning, reasoning, problem solving, learning, emotions, moods, attitudes, multiple drives, etc., will be called cognitive agents (Franklin, 1997).

Such agents promise to be more flexible, more adaptive, and more human-like than any currently existing software because of their ability to learn and to deal with novel input and unexpected situations. But how do we design such agents? One way is to model them after humans. We've chosen to design and implement such cognitive agents within the constraints of the global workspace theory of consciousness, a psychological theory that gives a high-level, abstract account of human consciousness and broadly sketches its architecture (Baars, 1988, 1997). We'll call such agents "conscious" software agents. Global workspace theory postulates that human cognition is implemented by a multitude of relatively small, special-purpose processes, almost always unconscious.
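The definition of an autonomous agent above is operational enough to sketch in code. The following minimal Python sketch is illustrative only and does not reflect IDA's actual implementation; the class names, the sense/act signatures, and the simple goal-seeking "agenda" are all hypothetical, chosen just to make concrete the idea of sensing and acting over time in a way that influences later percepts.

import random  # stdlib only; no external dependencies assumed


class Environment:
    """A toy environment the agent is structurally coupled to:
    the agent's actions change the state it will sense next."""

    def __init__(self):
        self.state = 0

    def sense(self):
        return self.state

    def apply(self, action):
        self.state += action  # actions influence future sensing


class AutonomousAgent:
    """Hypothetical sketch of the Franklin and Graesser (1997) definition:
    a system situated in an environment, sensing and acting over time
    in pursuit of its own agenda (here, driving the state toward a goal)."""

    def __init__(self, goal):
        self.goal = goal  # the agent's own agenda

    def act(self, percept):
        # Choose an action that moves the sensed state toward the goal.
        if percept < self.goal:
            return +1
        if percept > self.goal:
            return -1
        return 0


env = Environment()
agent = AutonomousAgent(goal=3)
for step in range(5):
    percept = env.sense()        # sense the environment...
    action = agent.act(percept)  # ...act on it in pursuit of an agenda...
    env.apply(action)            # ...thereby influencing later percepts

print(env.state)  # -> 3

The loop is the essential point: because each action feeds back into the next percept, the agent and environment form a single coupled system rather than an input-output pipeline.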