Four principal features of autonomous control systems are left both unaddressed and unaddressable by present-day engineering methodologies: (1) the ability to operate effectively in environments that are only partially known at design time; (2) a level of generality that allows a system to reassess and redefine the fulfillment of its mission in light of unexpected constraints or other unforeseen changes in the environment; (3) the ability to operate effectively in environments of significant complexity; and (4) the ability to degrade gracefully: to continue striving toward its main goals when resources become scarce, or when other expected or unexpected constraining factors impede its progress. We describe new methodological and engineering principles for addressing these shortcomings, which we have used to design a machine that becomes increasingly better at behaving in underspecified circumstances, in a goal-directed way, on the job, by modeling itself and i...
We present a cognitive architecture whose main constituents are allowed to grow through situated experience in the world. This architectural growth is bootstrapped from minimal initial knowledge, and the architecture itself is built around the biologically inspired notion of internal models. The key idea, supported by findings in cognitive neuroscience, is that the same internal models used in overt goal-directed action execution can be covertly re-enacted in simulation to provide a unifying explanation of a number of apparently unrelated individual and social phenomena, such as state estimation, action and intention understanding, imitation learning, and mindreading. Thus, rather than reasoning over abstract symbols, we rely on biologically plausible processes firmly grounded in the actual sensorimotor experience of the agent. The article describes how such internal models are learned in the first place, either through individual experience or by observing and imitating other skilled agents, and how they are used in action planning and execution. Furthermore, we explain how the architecture continuously adapts its internal agency and how increasingly complex cognitive phenomena, such as continuous learning, prediction, and anticipation, result from an interplay of simpler principles. We describe an early evaluation of our approach in a classical AI problem-solving domain: the Sokoban puzzle.
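The core idea of the abstract above can be sketched in a few lines of code. The following is a minimal toy illustration, not the authors' architecture: a single forward model is reused both for overt execution and for covert re-enactment, in which candidate action sequences are simulated internally before one is committed to. All names, the toy dynamics, and the numeric values are invented for the example.

```python
import itertools

class ForwardModel:
    """Toy internal model: predicts the next state from state and action.
    Here the state is a 1-D position and actions simply shift it."""
    def predict(self, state, action):
        return state + action

def simulate(model, state, seq):
    # Covert re-enactment: run the model internally, without acting overtly.
    for a in seq:
        state = model.predict(state, a)
    return state

def plan(model, state, goal, actions, horizon=3):
    """Enumerate short action sequences in simulation and return the first
    action of the sequence whose predicted end state lies closest to the goal."""
    best_seq = min(
        itertools.product(actions, repeat=horizon),
        key=lambda seq: abs(goal - simulate(model, state, seq)),
    )
    return best_seq[0]

model = ForwardModel()
action = plan(model, state=0, goal=5, actions=[-1, 1, 2])  # first step toward 5
```

The same `predict` call that would drive overt execution is what the planner re-enacts covertly, which is the unifying role the abstract assigns to internal models.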
An important part of human intelligence is the ability to use language. Humans learn how to use language in a society of language users, which is probably the most effective way to learn a language from the ground up. Principles that might allow artificial agents to learn language this way are not known at present. Here we present a framework that begins to address this challenge. Our auto-catalytic, endogenous, reflective architecture (AERA) supports the creation of agents that can learn natural language by observation. We present results from two experiments where our S1 agent learns human communication by observing two humans interacting in a real-time mock television interview, using gesture and situated language. Results show that S1 can learn multimodal complex language and multimodal communicative acts, using a vocabulary of 100 words with numerous sentence formats, by observing unscripted interaction between the humans, with no grammar provided to it a priori, and only high-level information about the format of the human interaction in the form of high-level goals of the interviewer and interviewee and a small ontology. The agent learns the pragmatics, semantics, and syntax of complex sentences spoken by the human subjects on the topic of recycling objects such as aluminum cans, glass bottles, plastic, and wood, as well as the use of manual deictic reference and anaphora.
Advances in Intelligent Systems and Computing, 2013
A key goal in designing an artificial intelligence capable of performing complex tasks is a mechanism that allows it to efficiently choose appropriate and relevant actions in a variety of situations and contexts. Nowhere is this more obvious than in the case of building a general intelligence, where the contextual choice and application of actions must be done in the presence of large numbers of alternatives, both subtly and obviously distinct from each other. We present a framework for action selection based on the concurrent activity of ...
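One common way to realize contextual action selection of the kind described above is to let each candidate action accumulate activation from concurrently active context features and pick the most activated one. The sketch below is only an illustration of that general idea under assumed feature names and weights; it is not the framework the abstract refers to, whose details are truncated here.

```python
def select_action(context, weights):
    """Pick the action with the highest total activation.
    context: set of currently active feature names.
    weights: {action: {feature: strength}} -- illustrative values only."""
    activation = {
        action: sum(w for feat, w in feats.items() if feat in context)
        for action, feats in weights.items()
    }
    return max(activation, key=activation.get)

# Hypothetical actions and context features, invented for the example.
weights = {
    "grasp":  {"object_near": 0.9, "hand_free": 0.6},
    "search": {"object_near": -0.5, "goal_unmet": 0.7},
}
chosen = select_action({"object_near", "hand_free"}, weights)
```

Because activations are summed over whatever features happen to be active at once, subtly different contexts can shift the ranking among many similar alternatives, which is the scaling concern the abstract raises.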
Systems intended to operate in dynamic, complex environments – without intervention from their designers or significant amounts of domain-dependent information provided at design time – must be equipped with a sufficient level of existential autonomy. This feature of naturally intelligent systems has largely been missing from cognitive architectures created to date, due in part to the fact that high levels of existential autonomy require systems to program themselves; good principles for self-programming have remained elusive. Achieving this with the major programming methodologies in use today is not likely, as these are without exception designed to be used by the human mind: producing self-programming systems that can grow from first principles using these would therefore require first solving the AI problem itself – the very problem we are trying to solve. Advances in existential autonomy call for a new programming paradigm, with self-programming squarely at its center. The principles of such a paradigm are likely to be fundamentally different from prevailing approaches; among the desired features for a programming language designed for automatic self-programming are (a) support for autonomous knowledge acquisition, (b) real-time and any-time operation, (c) reflectivity, and (d) massive parallelization. With these and other requirements guiding our work, we have created a programming paradigm and language called Replicode. Here we discuss the reasoning behind our approach and the main motivations and features that set this work apart from prior approaches.
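Two of the desired features listed above, autonomous knowledge acquisition (a) and reflectivity (c), can be conveyed with a toy example. The sketch below is written in Python, not Replicode, and shows nothing of Replicode's actual syntax or semantics; it only illustrates the bare idea of a system whose program is data it can extend at runtime. The rule-induction scheme and all names are invented for the example.

```python
def induce_rule(examples):
    """Hypothesize a linear rule y = a*x + b from two (x, y) observations."""
    (x1, y1), (x2, y2) = examples
    a = (y2 - y1) / (x2 - x1)
    b = y1 - a * x1
    return lambda x: a * x + b

class SelfProgrammingAgent:
    def __init__(self):
        self.rules = []  # the agent's own, growing "program"

    def observe(self, examples):
        # Autonomous knowledge acquisition: synthesize a new rule from
        # experience and adopt it as part of the agent's own program.
        self.rules.append(induce_rule(examples))

    def act(self, x):
        # Reflectivity (in miniature): behavior is driven by rules the
        # agent itself created, which it can inspect and replace.
        return self.rules[-1](x) if self.rules else None

agent = SelfProgrammingAgent()
agent.observe([(0, 1), (2, 5)])  # observations consistent with y = 2*x + 1
result = agent.act(3)
```

A human-oriented language makes the programmer write `induce_rule`'s output by hand; the paradigm argued for above makes such rule synthesis the system's own continuous job.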