Explanation can form the basis, in any lawfully behaving environment, of plans, summaries, justifications, analyses and predictions, and serve as a method for probing their validity. For systems with general intelligence, an equally important reason to generate explanations is for directing cumulative knowledge acquisition: Lest they be born knowing everything, a general machine intelligence must be able to handle novelty. This can only be accomplished through a systematic logical analysis of how, in the face of novelty, effective control is achieved and maintained—in other words, through the systematic explanation of experience. Explanation generation is thus a requirement for more powerful AI systems, not only for their owners (to verify proper knowledge and operation) but for the AI itself—to leverage its existing knowledge when learning something new. In either case, assigning the automatic generation of explanation to the system itself seems sensible, and quite possibly unavoidable. In this paper we argue that the quality of an agent's explanation generation mechanism is based on how well it fulfils three goals – or purposes – of explanation production: uncovering unknown or hidden patterns, highlighting or identifying relevant causal chains, and identifying incorrect background assumptions. We present the arguments behind this conclusion and briefly describe an implemented self-explaining system, AERA (Autocatalytic Endogenous Reflective Architecture), capable of goal-directed self-explanation: autonomously explaining its own behavior as well as its acquired knowledge of tasks and environment.
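As a rough illustration of the three purposes named above, the following sketch is hypothetical: the Explanation structure and explanation_quality scorer are invented for this example and are not part of AERA. It simply scores an explanation by how many of the three purposes it serves at all.

```python
# Hypothetical sketch, not AERA's actual representation or API.
from dataclasses import dataclass, field

@dataclass
class Explanation:
    revealed_patterns: list = field(default_factory=list)    # previously hidden regularities
    causal_chain: list = field(default_factory=list)          # ordered cause -> effect steps
    revised_assumptions: list = field(default_factory=list)   # background assumptions found incorrect

def explanation_quality(e: Explanation) -> float:
    """Toy score: one point per purpose the explanation serves at all."""
    return sum(bool(x) for x in (e.revealed_patterns, e.causal_chain, e.revised_assumptions)) / 3.0

if __name__ == "__main__":
    e = Explanation(causal_chain=["button pressed", "circuit closed", "light on"])
    print(explanation_quality(e))  # 0.333...: only the causal-chain purpose is served
```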
Journal of Artificial Intelligence and Consciousness, 2021
This paper introduces a new metamodel-based knowledge representation that significantly improves autonomous learning and adaptation. While interest in hybrid machine learning/symbolic AI systems leveraging, for example, reasoning and knowledge graphs is gaining popularity, we find there remains a need for both a clear definition of knowledge and a metamodel to guide the creation and manipulation of knowledge. Some of the benefits of the metamodel we introduce in this paper include a solution to the symbol grounding problem, cumulative learning and federated learning. We have applied the metamodel to problems in time series analysis, computer vision and natural language understanding, and have found that the metamodel enables a wide variety of learning mechanisms, ranging from machine learning to graph network analysis and learning by reasoning engines, to interoperate in a highly synergistic way. Our metamodel-based projects have consistently exhibited unprecedented accurac...
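The toy sketch below suggests one way a metamodel-style knowledge item might look; the KnowledgeItem fields and the merge update are assumptions made purely for illustration, not the paper's actual representation.

```python
# Illustrative sketch only; field names and update rule are assumed, not from the paper.
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class KnowledgeItem:
    concept: str               # symbolic label
    grounding: Any             # raw evidence (time-series window, image patch, text span)
    relations: Dict[str, str]  # typed links to other concepts (graph edges)
    confidence: float          # updated as new evidence arrives (cumulative learning)

def merge(existing: KnowledgeItem, new_evidence: Any, weight: float = 0.1) -> KnowledgeItem:
    """Toy cumulative update: keep the item, fold in new evidence, nudge confidence."""
    existing.confidence = min(1.0, existing.confidence + weight)
    existing.grounding = new_evidence
    return existing

item = KnowledgeItem(concept="engine-overheating", grounding=[98.1, 99.4, 101.2],
                     relations={"causes": "shutdown"}, confidence=0.5)
print(merge(item, [102.0, 103.5]).confidence)  # prints roughly 0.6
```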
An important feature of human learning is the ability to continuously accept new information and unify it with existing knowledge, a process that proceeds largely automatically and without catastrophic side-effects. A generally intelligent machine (AGI) should be able to learn a wide range of tasks in a variety of environments. Knowledge acquisition in partially known and dynamic task-environments cannot happen all at once, and AGI-aspiring systems must thus be capable of cumulative learning: efficiently making use of existing knowledge while learning new things, increasing the scope of ability and knowledge incrementally, without catastrophic forgetting or damaging existing skills. Many aspects of such learning have been addressed in artificial intelligence (AI) research, but relatively few examples of cumulative learning have been demonstrated to date and no generally accepted explicit definition exists of this category of learning. Here we provide a general definition of cumulativ...
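A minimal sketch of the cumulative-learning property described above, assuming an invented CumulativeLearner class that is not from the paper: new observations are merged into an existing store rather than replacing it, and prior knowledge can be reused when a new task is encountered.

```python
# Illustrative sketch only; class and method names are assumed for this example.
class CumulativeLearner:
    def __init__(self):
        self.knowledge = {}  # task -> accumulated observations (toy stand-in for models/rules)

    def learn(self, task: str, observation: str) -> None:
        # New knowledge is merged with what exists; nothing is overwritten wholesale,
        # which is the property the paper describes as learning without catastrophic forgetting.
        self.knowledge.setdefault(task, []).append(observation)

    def transfer(self, new_task: str, related_task: str) -> list:
        # Reuse of prior knowledge when learning something new.
        return list(self.knowledge.get(related_task, []))

learner = CumulativeLearner()
learner.learn("stack-blocks", "gripper must close before lifting")
print(learner.transfer("stack-cups", "stack-blocks"))
```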
The AAAI-05 workshops were held on Saturday and Sunday, July 9-10, in Pittsburgh, Pennsylvania. The thirteen workshops were Contexts and Ontologies: Theory, Practice and Applications, Educational Data Mining, Exploring Planning and Scheduling for Web Services, Grid and Autonomic Computing, Human Comprehensible Machine Learning, Inference for Textual Question Answering, Integrating Planning into Scheduling, Learning in Computer Vision, Link Analysis, Mobile Robot Workshop, Modular Construction of ...
A key goal in designing an artificial intelligence capable of performing complex tasks is a mechanism that allows it to efficiently choose appropriate and relevant actions in a variety of situations and contexts. Nowhere is this more obvious than in the case of building a general intelligence, where the contextual choice and application of actions must be done in the presence of large numbers of alternatives, both subtly and obviously distinct from each other. We present a framework for action selection based on the concurrent activity of ...
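One simple reading of context-dependent action selection is concurrent scoring of every candidate action against the currently active context features; the sketch below uses invented names and is not the framework presented in the paper.

```python
# Hedged sketch: context-weighted scoring of candidate actions; names are assumed.
def select_action(context: dict, candidates: dict) -> str:
    """candidates maps action name -> {feature: weight}; context maps feature -> activation."""
    def score(weights: dict) -> float:
        # Each candidate accumulates activation from all currently active context features.
        return sum(context.get(f, 0.0) * w for f, w in weights.items())
    return max(candidates, key=lambda a: score(candidates[a]))

ctx = {"object_visible": 1.0, "gripper_free": 1.0, "battery_low": 0.0}
acts = {"grasp": {"object_visible": 0.6, "gripper_free": 0.4},
        "recharge": {"battery_low": 1.0}}
print(select_action(ctx, acts))  # -> "grasp"
```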
Presence: Teleoperators and Virtual Environments, 1993
The most common visual feedback technique in teleoperation is in the form of monoscopic video displays. As robotic autonomy increases and the human operator takes on the role of a supervisor, three-dimensional information is effectively presented by multiple, televised, two-dimensional (2-D) projections showing the same scene from different angles. To analyze how people go about using such segmented information for estimations about three-dimensional (3-D) space, 18 subjects were asked to determine the position of a stationary pointer in space; eye movements and reaction times (RTs) were recorded during a period when either two or three 2-D views were presented simultaneously, each showing the same scene from a different angle. The results revealed that subjects estimated 3-D space by using a simple algorithm of feature search. Eye movement analysis supported the conclusion that people can efficiently use multiple 2-D projections to make estimations about 3-D space without reconstru...
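As a side illustration of why no full 3-D reconstruction is needed, the sketch below (with assumed axis assignments; this is not the study's procedure) reads a 3-D position directly off two orthogonal 2-D views once the same feature has been located in each.

```python
# Illustrative sketch only; axis assignments are assumed for this example.
def position_from_views(front_xy, side_zy):
    """Front view gives (x, y); side view gives (z, y); y is shared between the views."""
    x, y_front = front_xy
    z, y_side = side_zy
    y = (y_front + y_side) / 2.0  # the redundant coordinate doubles as a consistency check
    return (x, y, z)

print(position_from_views(front_xy=(2.0, 1.5), side_zy=(4.0, 1.5)))  # (2.0, 1.5, 4.0)
```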