
Argument Schemes and a Dialogue System for Explainable Planning

Published: 30 September 2023

Abstract

Artificial Intelligence (AI) is being increasingly deployed in practical applications. However, there is a major concern about whether AI systems will be trusted by humans. To establish trust in AI systems, users need to understand the reasoning behind their solutions. Therefore, systems should be able to explain and justify their output. Explainable AI Planning is a field that involves explaining the outputs, i.e., solution plans produced by AI planning systems, to a user. The main goal of a plan explanation is to help humans understand the reasoning behind the plans produced by the planners. In this article, we propose an argument scheme-based approach to provide explanations in the domain of AI planning. We present novel argument schemes to create arguments that explain a plan and its key elements, together with a set of critical questions that allow interaction between the arguments and enable the user to obtain further information regarding the key elements of the plan. Furthermore, we present a novel dialogue system that uses the argument schemes and critical questions to provide interactive dialectical explanations.

1 Introduction

Artificial intelligence (AI) researchers are increasingly concerned about whether the systems they build will be trusted by humans. One mechanism for increasing trust is to make AI systems capable of explaining their reasoning. In this article, we provide a mechanism for explaining the output of an AI planning system.
Automated planning [16] is one of the subfields of AI that focuses on developing techniques to create efficient plans, i.e., sequences of actions that should be performed to achieve a set of goals. In practical applications, for instance, this set of actions can be passed to a robot, or a manufacturing system, that can follow the plan and produce the desired result. Explainable AI Planning (XAIP) [15] is a field that involves explaining AI planning systems to a user. The main goal of a plan explanation is to help humans understand the reasoning behind the plans produced by the planners. Approaches to this problem include explaining planner decision-making processes as well as forming explanations from the models. Previous work on model-based explanations of plans includes Reference [32].
To provide explanations for plans, we make use of argumentation. Argumentation [31] is a logical model of reasoning that is aimed at the evaluation of possible conclusions or claims by considering reasons for and against them. These reasons, i.e., arguments and counter-arguments, provide support for and against the conclusions or claims, through a combination of dialectical and logical reasoning. Argumentation is connected to the idea of establishing trust in AI systems by explaining the results and processes of the computation of a solution or decision, and has been used in many applications in multi-agent planning [35] and practical reasoning [2]. Argumentation has also been used in explanation dialogues. A dialogue system [6] for argumentation and explanation consists of a communication language that defines the speech acts and protocols that allow transitions in the dialogue. This allows the explainee to challenge and interrogate the given explanations to gain further understanding.
In this article, our objective is to present a novel approach that shows how to use argumentation to generate explanations in the domain of AI planning. In particular, given a planning problem, and a plan that solves it, we want to be able to answer questions such as “Why a?” or “Why a rather than b?,” where a and b are actions in the plan, or “How g?,” where g is a goal. Questions like these are inherently based upon definitions held in the domain related to a particular problem and solution. Furthermore, questions regarding particular state information may arise, such as “Why a here?” To answer such questions, it is necessary to extract relevant information about actions, states, and goals from the model that underlies the plan. This information allows us to provide supporting evidence to draw conclusions within the explanation process. In addition, it allows us to create a summary explanation of the whole plan and, through dialogue and questioning, extract further information regarding the elements of the plan. These explanations are selective and social as mentioned in References [3, 21]. Selective explanations present salient points unless more detail is required by the recipient. Social explanations involve a transfer of knowledge between the explainer and explainee.
Our approach is built around a set of argument schemes [36] that create arguments that explain and justify a plan and its key elements (i.e., actions, states, and goals). We call such arguments plan explanation arguments. These schemes are complemented by critical questions that allow a user to seek further information regarding the plan elements, and allow interaction between different arguments. Plan explanation arguments can be constructed through the instantiation of argument schemes, and can be questioned with a given scheme’s associated critical questions (CQs). Given a plan explanation argument A instantiating an argument scheme, CQs are possible counter-arguments to A; they question the premises, i.e., presumptions of A about the key elements of the plan, and so shift the burden of proof such that further arguments must be put forward to argue for the plan elements in the premises questioned. Until this burden of proof is met, a plan explanation argument A cannot be said to be justified. The plan explanation arguments enable a planning system to answer such questions at different levels of granularity. The aim of these explanation arguments is to enable users to understand the reasoning behind a given plan. Furthermore, we use the concepts of validity and suitability of a plan to ensure that a planner is able to answer any user questions as long as the plan being explained is valid and suitable.
In addition, we present a dialogue system consisting of a communication language with dialogue moves and a protocol utilizing the argument schemes and critical questions, providing an interactive approach to explanation and query answering, and we provide algorithms that describe the mechanization of a dialogue conversation between the planner and user in the dialogue system. To make our argumentation-based approach to explanations for planning concrete, we use a version of the classic blocks world. Note that our approach is planner- and domain-independent; therefore, it can work on a wide range of input plans in classical planning.
The remainder of this article is structured as follows. In Section 2, we present related work. Section 3 presents background on abstract argumentation frameworks and the acceptability semantics for acceptable sets of arguments, also known as extensions. This is followed by background on the planning model in Section 4. In Section 5, we present the argument schemes for explaining plans and the critical questions. Section 6 presents the dialogue system using the argument schemes and critical questions. Finally, we conclude and suggest future work in Section 7.

2 Related Work

Our research is inspired by work in practical reasoning and argumentation for multi-agent planning. However, our argument scheme-based approach generates explanations for a plan created by an AI planner, which we assume to be a single entity. There are several works in practical reasoning and argumentation for multi-agent and automated planning, and some that are very close to our work on generating argumentation-based explanations for planning decisions and outcomes. Below, we review the relevant related work and discuss its similarities to and differences from our approach.
One of the most well-known scheme-based approaches in practical reasoning, on which most approaches to practical reasoning build, is presented in Reference [2]. This work uses an Action-based Alternating Transition System (AATS) [38], which is based on the agent’s knowledge of actions and the values they promote. An AATS is accompanied by a set of argument schemes and critical questions. Arguments are generated for each available action and organized into a Value-based Argumentation Framework (VAF) [5], where the preference between arguments is defined according to the values the actions promote and the goals they achieve, thus allowing an agent to evaluate the outcomes on the basis of the social values highlighted by the arguments. Reference [20] is an extension of Reference [2] specifically designed for multi-agent planning. In this work, a dialogue game is presented that enables agents to discuss the suitability of plans based on an argumentation scheme and associated critical questions. To improve the efficiency of dialogues, two dialogue strategies are presented for reducing the number of questions required to reach agreement, although it is argued that the relative effectiveness of the strategies is dependent on characteristics of the problem domain.
In Reference [34], a model for arguments is presented that contributes to deliberative dialogues based on argumentation schemes for arguing about norms and actions in a multi-agent system for collaborative planning. Norms take precedence over goals, and the agent is forced to always comply with norms, since violation of norms is not permitted. Reference [25] builds on the work of Reference [2] to present a scheme-based approach for normative practical reasoning where arguments are constructed for a sequence of actions. Unlike Reference [2], it permits practical reasoning in the presence of norms; hence, preferences between arguments are defined considering all possible interactions between norms and goals instead of values and goals. Reference [30] proposes a framework that integrates both the reasoning and dialectical aspects of argumentation to perform normative practical reasoning, enabling an agent to act in a normative environment under conflicting goals and norms and to generate an explanation for agent behaviour, i.e., a justification for the best plan identified, which is the solution to a normative planning problem. Reference [9] presents a dialogue-based approach for explaining, understanding, and debugging the behaviour of a Belief-Desire-Intention (BDI) agent. Dialogue traces are generated and analyzed to identify divergences or agreement in the traces to understand and explain the actions chosen by the BDI agent.
Reference [4] explores the use of situation calculus as a language to present arguments about a common plan in a multi-agent system, and Reference [33] presents an argumentation-based approach to deliberation, the process by which two or more agents reach a consensus on a course of action. Reference [27] proposes a formal model of argumentative dialogues for multi-agent planning, with a focus on cooperative planning, and Reference [14] presents a practical solution for multi-agent planning based upon an argumentation-based defeasible planning framework for ambient intelligence applications. In Reference [18], a grounded interaction protocol is presented, which is derived from explanation dialogues and formalized as a new atomic dialogue type [37] in the Agent Dialogue Framework [19].
The works that are closest to our research for generating plan explanations using argumentation are given in References [8, 12, 26]. In References [8, 26], a tool has been developed that uses formal argumentation and dialogue theory, coupled with natural language generation, to explain the rationale of a hybrid software-human many-party joint plan during its enactment. Formal argumentation has been used to create a dialectical proof based on the grounded semantics [7] to justify the actions executed in a plan. More recently, in Reference [12], an assumption-based argumentation (ABA) framework [11] is used to model the planning problem and generate explanations using the related admissible semantics [13]. An argumentation-based model takes plans written in a STRIPS-like language as its inputs and returns ABA frameworks as its outputs. The plan construction mapping for the planning problem has a solution if and only if its corresponding ABA framework has a set of related admissible arguments with the planning goal as its topic. Our work differs from both, since we present argument schemes to generate the explanation arguments for all the key elements of the plan, and critical questions to allow interaction between the arguments. Whilst previous research provides a static explanation, in our approach, a dialogue system is presented that allows the user to engage in a dialogue with the AI planner to challenge and interrogate the plan explanation arguments. An overall summarized comparison of the current work to the related work is shown in Table 1.
Table 1.
Reference | Foundation | AF | Contributions
Atkinson and Bench-Capon [2] | AATS | VAF | AATS based on agent’s knowledge of actions and the values they promote.
Oren [25] | AATS | ExAF \(^{1}\) | Argument scheme-based approach for normative practical reasoning.
Toniolo et al. [34] | SC \(^{2}\) | BAF \(^{3}\) | Argumentation-based approach for the collaborative planning problem in teams of agents.
Shams et al. [30] | A STRIPS-based planning language | AAF \(^{4}\) | Argumentation-based framework for planning problems in normative environments.
Fan [12] | A STRIPS-based planning language | ABA | Assumption-based Argumentation Framework for generating explainable plans.
Caminada et al. [8], Oren et al. [26] | A STRIPS-based planning language | ASPIC [23] | Argumentation and natural language generation for plan explanation.
Mahesar and Parsons (Current Work) | A PDDL-based planning language [17] | AAF | Argument scheme-based approach and a dialogue system for explainable planning.
Table 1. Summarized Comparison of the Related Work
\(^{1}\) ExAF stands for Extended Argumentation Framework [22]. \(^{2}\) SC stands for Situation Calculus [29]. \(^{3}\) BAF stands for Bipolar Argumentation Framework [1]. \(^{4}\) AAF stands for Abstract Argumentation Framework [10].

3 Argumentation Model

In this section, we describe plan explanation arguments and their interactions at the level of abstract argumentation [10], together with a set of critical questions that are suitable for arguing over plan explanations, as we will later show.
An argumentation framework is a set of arguments and a binary attack relation among them. Given an argumentation framework, argumentation theory allows us to identify the sets of arguments that can survive the conflicts expressed in the framework.
Definition 3.1 (Abstract Argumentation Framework [10]).
An abstract argumentation framework (AAF) is a pair \(\mathit {AAF} = (\mathcal {A}, \mathcal {R})\) , where \(\mathcal {A}\) is a set of arguments and \(\mathcal {R}\) is an attack relation \((\mathcal {R} \subseteq \mathcal {A} \times \mathcal {A})\) . The notation \((A,B) \in \mathcal {R}\) where \(A,B \in \mathcal {A}\) denotes that argument A attacks argument B.
Dung [10] originally introduced an extension approach to define the acceptability of arguments, i.e., semantics for an abstract argumentation framework. An extension is a subset of \(\mathcal {A}\) that represents the set of arguments that can be accepted together. For an \(\mathit {AAF} = (\mathcal {A}, \mathcal {R})\) :
A set \(\mathcal {E} \subseteq \mathcal {A}\) is said to be conflict free if and only if there are no \(A, B \in \mathcal {E}\) such that \((A,B) \in \mathcal {R}\) .
A set \(\mathcal {E} \subseteq \mathcal {A}\) is said to be admissible if and only if it is conflict free and defends all its arguments. \(\mathcal {E}\) defends A if and only if for every argument \(B \in \mathcal {A}\) , if we have \((B,A) \in \mathcal {R}\) , then there exists \(C \in \mathcal {E}\) such that \((C,B) \in \mathcal {R}\) .
A set \(\mathcal {E} \subseteq \mathcal {A}\) is a complete extension if and only if \(\mathcal {E}\) is an admissible set that contains all the arguments it defends.
A set \(\mathcal {E} \subseteq \mathcal {A}\) is a grounded extension if and only if \(\mathcal {E}\) is a minimal (for set inclusion) complete extension.
In Section 5, we will show how to formulate the explanation of a plan (and its key elements) as an argument in such a way that the argument is only acceptable if the plan is valid and suitable. We then adopt the grounded semantics to establish acceptability so that the explanation argument will only be acceptable if none of the objections, established using critical questions, are supported by the planning model.
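To make the grounded semantics concrete, the following sketch (our own illustration, not part of the paper) computes the grounded extension of a small abstract argumentation framework as the least fixed point of the characteristic function, i.e., by repeatedly collecting the arguments defended by the current set:

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an AAF (arguments, attacks)
    as the least fixed point of F(E) = {a | E defends a}."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}
    extension = set()
    while True:
        # a is defended if every attacker of a is attacked by `extension`
        defended = {
            a for a in arguments
            if all(any((c, b) in attacks for c in extension)
                   for b in attackers[a])
        }
        if defended == extension:
            return extension
        extension = defended

# Example: A attacks B, B attacks C; the grounded extension is {A, C}.
print(grounded_extension({"A", "B", "C"}, {("A", "B"), ("B", "C")}))
```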

4 Planning Model

In this section, we introduce the planning model that we use. This is based on an instance of the most widely used planning representation, Planning Domain Definition Language (PDDL) [17].
The main components are:
Definition 4.1 (Planning Problem).
A planning problem is a tuple \(P = \langle O, \mathit {Pr}, \bigtriangleup _I, \bigtriangleup _G, A, \Sigma , G \rangle\) , where:
(1)
O is a set of objects;
(2)
\(\mathit {Pr}\) is a set of predicates;
(3)
\(\bigtriangleup _I \subseteq \mathit {Pr}\) is the initial state;
(4)
\(\bigtriangleup _G \subseteq \mathit {Pr}\) is the goal state, and G is the set of goals;
(5)
A is a finite, non-empty set of actions;
(6)
\(\Sigma\) is the state transition system.
We define the predicates, action, state transition system, and goal as follows.
Definition 4.2 (Predicates).
\(\mathit {Pr}\) is a set of domain predicates, i.e., properties of objects that we are interested in, that can be true or false. For a state \(s \subseteq Pr\) , \(s^+\) are predicates considered true, and \(s^- = Pr\setminus s^+\) . A state s satisfies predicate pr, denoted as \(s \models pr\) , if \(pr \in s\) , and satisfies predicate \(\lnot pr\) , denoted \(s \models \lnot pr\) , if \(pr \not\in s\) .
Definition 4.3 (Action).
An action \(a = \langle pre, post\rangle\) is composed of sets of predicates pre, post that represent a’s pre and post conditions, respectively. Given an action \(a=\langle pre, post\rangle\) , we write \(pre(a)\) and \(post(a)\) for pre and post. Postconditions are divided into \(post(a)^+\) and \(post(a)^-\) postcondition sets. An action a can be executed in state s iff the state satisfies its preconditions. The postconditions of an action are applied in the state s at which the action ends, by adding \(post(a)^+\) and deleting \(post(a)^-\) .
Definition 4.4 (State Transition System).
The state-transition system is denoted by \(\Sigma =(S,A,\gamma)\) , where:
S is the set of states.
A is a finite, non-empty set of actions.
\(\gamma : S \times A \rightarrow S\) where:
\(\gamma (S,a) \rightarrow (S \setminus post(a)^-) \cup post(a)^+\) , if a is applicable in S;
\(\gamma (S,a) \rightarrow \mathit {undefined}\) otherwise;
S is closed under \(\gamma\) .
Definition 4.5 (Goal).
A goal achieves a certain state of affairs. Each \(g \in G\) is a set of predicates \(g=\lbrace r_1,\ldots ,r_n\rbrace\) , known as goal requirements (denoted as \(r_i\) ), that should be satisfied in the state to satisfy the goal.
We then define a plan and the associated state transitions as follows.
Definition 4.6 (Plan).
A plan \(\pi\) is a sequence of actions \(\langle a_1,\ldots ,a_n \rangle\) . A plan \(\pi\) is a solution to a planning problem P, i.e., plan \(\pi\) is valid iff:
(1)
Only the predicates in \(\bigtriangleup _I\) hold in the initial state: \(S_1 = \bigtriangleup _I\) ;
(2)
the preconditions of action \(a_i\) hold at state \(S_i\) , where \(i=1,2,\ldots ,n\) ;
(3)
\(\gamma (S,\pi)\) satisfies the set of goals G;
(4)
the set of goals \(G_\pi\) satisfied by plan \(\pi\) is a non-empty ( \(G_\pi \ne \emptyset\) ) and consistent subset of goals.
Furthermore, a plan \(\pi = \langle a_1,\ldots ,a_n \rangle\) is a suitable plan iff:
(1)
Plan \(\pi\) is a valid plan;
(2)
For each action \(a_i \in \pi\) , where \(i=1,2,\ldots ,n\) , there is no other action \(a_j\) , where \(i\ne j\) , such that the preconditions of action \(a_j\) hold at state \(S_i\) , or action \(a_j\) achieves a goal g that action \(a_i\) has achieved.
Definition 4.7 (Extended State Transition System).
The extended state transition function for a plan is defined as follows:
\(\gamma (S, \pi) \rightarrow S\) if \(|\pi |=0\) (i.e., if \(\pi\) is empty);
\(\gamma (S, \pi) \rightarrow \gamma (\gamma (S,a_1),\langle a_2,\ldots ,a_n\rangle)\) if \(|\pi |\gt 0\) and \(a_1\) is applicable in S;
\(\gamma (S, \pi) \rightarrow \mathit {undefined}\) otherwise.
Each action in the plan can be performed in the state that results from the application of the previous action in the sequence. After performing the final action, the set of goals \(G_\pi\) will be true. We present the following Blocks World example, which we use as a running example, to illustrate.
Example 4.1.
A classic blocks world consists of the following:
a flat surface such as a tabletop;
an adequate set of identical blocks that are identified by letters; and
the blocks can be stacked one on another to form towers of unlimited height.
We have three predicates to capture the domain:
(1)
\(\mathit {On(X,Y)}\) , block X is on block Y;
(2)
\(\mathit {Ontable(X)}\) , block X is on the table; and
(3)
\(\mathit {Clear(X)}\) , block X has nothing on it.
We have two actions, \(a_1\) and \(a_2\) :
(1)
\(a_1: \mathit {Unstack(X,Y)}\) —move clear block X from block Y onto the table;
\(\mathit {pre(a_1)}: \lbrace \mathit {Clear(X)}, \mathit {On(X,Y)} \rbrace\)
\(\mathit {post(a_1)^+}: \lbrace \mathit {Ontable(X)}, \mathit {Clear(Y)} \rbrace\)
\(\mathit {post(a_1)^-}: \lbrace \mathit {On(X,Y)} \rbrace\)
(2)
\(a_2: \mathit {Stack(X,Y)}\) —place block X from the table onto clear block Y;
\(\mathit {pre(a_2)}: \lbrace \mathit {Ontable(X)}, \mathit {Clear(X)}, \mathit {Clear(Y)} \rbrace\)
\(\mathit {post(a_2)^+}: \lbrace \mathit {On(X,Y)} \rbrace\)
\(\mathit {post(a_2)^-}: \lbrace \mathit {Ontable(X)}, \mathit {Clear(Y)} \rbrace\)
The initial and goal states of the blocks world problem are shown in Figure 1.
Fig. 1. Blocks World Example.
The initial state \(\bigtriangleup _I\) is given by
\begin{equation*} \lbrace \mathit {Ontable(C)}, \mathit {On(B,C)}, \mathit {On(A,B)}, \mathit {Clear(A)} \rbrace , \end{equation*}
and the goal state \(\bigtriangleup _G\) is given by
\begin{equation*} \lbrace \mathit {On(C,A)}, \mathit {Ontable(A)},\mathit {Ontable(B)}, \mathit {Clear(C)},\mathit {Clear(B)} \rbrace . \end{equation*}
The action sequence \(\langle \mathit {Unstack(A,B)}, \mathit {Unstack(B,C)},\mathit {Stack(C,A)} \rangle\) is a valid plan and, furthermore, it is a suitable plan.
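To make the planning model concrete, the following Python sketch (our own illustration; the set-based encoding of predicates and actions is an assumption, not taken from the paper) encodes the blocks world instance of Example 4.1, applies the transition function \(\gamma\) of Definition 4.4, and checks the validity conditions of Definition 4.6:

```python
# Minimal sketch of the blocks world instance of Example 4.1.
# Predicates are strings; an action is (name, pre, post_plus, post_minus).
def unstack(x, y):
    return (f"Unstack({x},{y})",
            {f"Clear({x})", f"On({x},{y})"},              # pre
            {f"Ontable({x})", f"Clear({y})"},              # post+
            {f"On({x},{y})"})                              # post-

def stack(x, y):
    return (f"Stack({x},{y})",
            {f"Ontable({x})", f"Clear({x})", f"Clear({y})"},
            {f"On({x},{y})"},
            {f"Ontable({x})", f"Clear({y})"})

def gamma(state, action):
    """State transition of Definition 4.4: apply `action` if its
    preconditions hold, otherwise return None (undefined)."""
    name, pre, post_plus, post_minus = action
    if not pre <= state:
        return None
    return (state - post_minus) | post_plus

def is_valid(initial, goals, plan):
    """Check conditions (1)-(3) of Definition 4.6 for a plan."""
    state = set(initial)
    for action in plan:
        state = gamma(state, action)
        if state is None:          # a precondition failed
            return False
    return goals <= state          # all goals hold in the final state

initial = {"Ontable(C)", "On(B,C)", "On(A,B)", "Clear(A)"}
goals = {"On(C,A)", "Ontable(A)", "Ontable(B)", "Clear(C)", "Clear(B)"}
plan = [unstack("A", "B"), unstack("B", "C"), stack("C", "A")]
print(is_valid(initial, goals, plan))  # True
```

Running the sketch confirms that the three-action plan above transforms the initial state into a state that satisfies all the goals.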

5 Argument Schemes for Explaining Plans

In scheme-based approaches [36], arguments are expressed in natural language and a set of critical questions is associated with each scheme, identifying how the scheme can be attacked. Below, we introduce a set of argument schemes for explaining a plan and its key elements, i.e., action, state, and goal, and illustrate these on our example. Though the example is kept simple for ease of understanding, the approach can be used for any classical planning problem in the sense of Definition 4.1 expressed in PDDL. The set of critical questions allows a user to ask for a summary explanation of the plan and consequently interrogate the elements of the plan by questioning the premises of the arguments put forward by the planner. The explanation arguments constructed using the argument schemes allow the planner to answer any user questions.
We first define the terms that are used in our argument schemes definitions as follows.
Definition 5.1.
Given a planning problem P:
\(\mathit {HoldPrecondition(pre(a),S)}\) denotes that the precondition \(pre(a)\) of action a holds at the state S.
\(\mathit {HoldGoal(g, S)}\) denotes that the goal g holds at the state S.
\(\mathit {HoldGoals(G, \bigtriangleup _G)}\) denotes that all the goals in the set of goals G hold at the goal state \(\bigtriangleup _G\) .
\(\mathit {ExecuteAction(a,S)}\) denotes that action a is executed at state S.
\(\mathit {NecessaryAction(a, b, S)}\) denotes that it is necessary to execute action a in the state S rather than action b.
\(\mathit {AchieveGoal(a,g)}\) denotes that action a achieves goal g.
\(\mathit {AchieveGoals(\pi ,G)}\) denotes that sequence of actions \(\pi\) achieves the set of goals G.
\(\mathit {Solution(\pi , P)}\) denotes that \(\pi\) is a solution to the planning problem P.
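For concreteness, several of these relations can be read as simple checks over the planning model. The sketch below is our own illustration, reusing the set-based encoding and the `gamma` function from the earlier blocks world sketch; a goal is treated as a set of requirements per Definition 4.5, with a single-predicate goal encoded as a singleton set:

```python
def hold_precondition(action, state):
    """HoldPrecondition(pre(a), S): all preconditions of a hold in S."""
    _name, pre, _post_plus, _post_minus = action
    return pre <= state

def hold_goal(goal, state):
    """HoldGoal(g, S): every requirement of goal g holds in S."""
    return goal <= state

def achieve_goal(action, goal, state):
    """AchieveGoal(a, g): executing a in S leads to a state where g holds."""
    next_state = gamma(state, action)
    return next_state is not None and hold_goal(goal, next_state)
```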
We define the argument scheme for explaining a possible action, followed by an example.
Definition 5.2 (Possible Action Argument Scheme \(\mathit {Arg}_a\) )
A possible action argument \(\mathit {Arg}_a\) explains how it is possible to execute an action a:
Premise 1: \(\mathit {HoldPrecondition(pre(a), S_1)}\) . In the current state \(S_1\) , the pre-condition \(pre(a)\) of action a holds.
Conclusion: \(\mathit {ExecuteAction(a, S_1)}\) . Therefore, it is possible to execute action a in the current state \(S_1\) .
Example 5.1.
We consider the blocks world of Example 4.1. The possible action explanation argument for the first action \(\mathit {Unstack(A,B)}\) is shown as follows, where:
\(pre(\mathit {Unstack(A,B)}) = \lbrace \mathit {Clear(A), On(A,B)} \rbrace\) .
\(S_1 = \lbrace \mathit {Ontable(C), On(B,C), On(A,B), Clear(A)} \rbrace\) .
Premise 1:
\(\mathit {HoldPrecondition(pre(\mathit {Unstack(A,B)}), S_1)}\)
In the current state \(\lbrace \mathit {Ontable(C), On(B,C), On(A,B)}, \mathit {Clear(A)} \rbrace\) , the pre-condition \(\lbrace \mathit {Clear(A), On(A,B)} \rbrace\) of action \(\mathit {Unstack(A,B)}\) holds.
Conclusion:
\(\mathit {ExecuteAction(Unstack(A,B), \; S_1)}\)
Therefore, it is possible to execute action \(\mathit {Unstack(A,B)}\) in the current state \(\lbrace \mathit {Ontable(C), On(B,C), On(A,B), Clear(A)} \rbrace\) .
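A possible action argument can be instantiated directly from the model: the argument exists exactly when Premise 1 holds. The sketch below is our own illustration, reusing `hold_precondition` and the `unstack` encoding from the previous sketches, and builds \(\mathit {Arg}_a\) for \(\mathit {Unstack(A,B)}\) in the initial state:

```python
def possible_action_argument(action, state):
    """Instantiate the possible action scheme Arg_a (Definition 5.2);
    returns None if Premise 1 (the precondition) does not hold."""
    name, pre, _post_plus, _post_minus = action
    if not hold_precondition(action, state):
        return None
    return {
        "scheme": "Arg_a",
        "premise_1": ("HoldPrecondition", sorted(pre), sorted(state)),
        "conclusion": ("ExecuteAction", name, sorted(state)),
    }

s1 = {"Ontable(C)", "On(B,C)", "On(A,B)", "Clear(A)"}
print(possible_action_argument(unstack("A", "B"), s1))
```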
We define the argument scheme for explaining a necessary action, followed by an example.
Definition 5.3 (Necessary Action Argument Scheme \(\mathit {Arg}_{na}\) )
A necessary action argument \(\mathit {Arg}_{na}\) explains why it is necessary to execute an action a rather than action b:
Premise 1: \(\mathit {HoldPrecondition(pre(a), S_1)}\) . In the current state \(S_1\) , the pre-condition \(pre(a)\) of action a holds.
Premise 2: \(\gamma (S_1,a) \rightarrow S_2\) . In the current state \(S_1\) , we can execute the action \(a \in \pi\) , that results in the next state \(S_2\) .
Premise 3: \(\mathit {HoldGoal(g, S_2)}\) . In the next state \(S_2\) , the goal g holds.
Premise 4: \(\mathit {AchieveGoal(a,g)}\) : The action a achieves the goal g.
Premise 5: \(\mathit {\lnot HoldPrecondition(pre(b), S_1)}\) OR \(((\gamma (S_1,b) \rightarrow S_3) \; \&\& \; \mathit {\lnot HoldGoal(g, S_3))}\) . In the current state \(S_1\) , either the pre-condition \(pre(b)\) of action b does not hold, or executing action b results in a next state \(S_3\) in which the goal g does not hold.
Conclusion: \(\mathit {NecessaryAction(a, b, S_1)}\) . Therefore, it is necessary to execute action a in the current state \(S_1\) rather than action b.
Example 5.2.
The necessary action explanation argument for the action \(\mathit {Unstack(A,B)}\) rather than action \(\mathit {Unstack(B,C)}\) is shown as follows, where:
\(pre(\mathit {Unstack(A,B)}) = \lbrace \mathit {Clear(A), On(A,B)} \rbrace\) .
\(pre(\mathit {Unstack(B,C)}) = \lbrace \mathit {Clear(B), On(B,C)} \rbrace\) .
\(S_1 = \lbrace \mathit {Ontable(C), On(B,C), On(A,B), Clear(A)} \rbrace\) .
\(S_2 = \lbrace \mathit {Ontable(C)}, \mathit {On(B,C)}, \mathit {Clear(A)}, \mathit {Clear(B)}, \mathit {Ontable(A)} \rbrace\) .
Premise 1:
\(\mathit {HoldPrecondition(pre(\mathit {Unstack(A,B)}), S_1)}\)
In the current state \(\lbrace \mathit {Ontable(C), On(B,C), On(A,B)}, \mathit {Clear(A)} \rbrace\) , the pre-condition \(\lbrace \mathit {Clear(A), On(A,B)} \rbrace\) of action \(\mathit {Unstack(A,B)}\) holds.
Premise 2:
\(\gamma (S_1, \; \mathit {Unstack(A,B)}) \rightarrow S_2\)
In the current state \(\lbrace \mathit {Ontable(C)}, \mathit {On(B,C)}, \mathit {On(A,B)}, \mathit {Clear(A)} \rbrace\) , we can execute the action \(\mathit {Unstack(A,B)}\) , that results in the next state \(\lbrace \mathit {Ontable(C)}, \mathit {On(B,C)}, \mathit {Clear(A)}, \mathit {Clear(B)},\) \(\mathit {Ontable(A)} \rbrace\) .
Premise 3:
\(\mathit {HoldGoal}(\mathit {Ontable(A)}, \; S_2)\) .
In the next state \(\lbrace \mathit {Ontable(C)}, \mathit {On(B,C)}, \mathit {Clear(A)}, \mathit {Clear(B)}, \mathit {Ontable(A)} \rbrace\) , the goal \(\mathit {Ontable(A)}\) holds.
Premise 4:
\(\mathit {AchieveGoal(\mathit {Unstack(A,B)}, \mathit {Ontable(A)})}\) .
The action \(\mathit {Unstack(A,B)}\) achieves the goal \(\mathit {Ontable(A)}\) .
Premise 5:
\(\mathit {\lnot HoldPrecondition(pre(\mathit {Unstack(B,C)}), S_1)}\) .
In the current state \(\lbrace \mathit {Ontable(C), On(B,C), On(A,B)}, \mathit {Clear(A)} \rbrace\) , the pre-condition \(\lbrace \mathit {Clear(B), On(B,C)} \rbrace\) of action \(\mathit {Unstack(B,C)}\) does not hold.
Conclusion:
\(\mathit {NecessaryAction(Unstack(A,B), Unstack(B,C), S_1)}\) .
Therefore, it is necessary to execute action \(Unstack(A,B)\) in the current state \(\lbrace \mathit {Ontable(C), On(B,C), On(A,B), Clear(A)} \rbrace\) rather than action \(Unstack(B,C)\) .
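The necessary action scheme can likewise be checked mechanically: action a must achieve the goal from the current state, while action b must either be inapplicable or fail to achieve the goal (Premise 5). The sketch below is again our own illustration, reusing the helpers defined in the earlier sketches:

```python
def necessary_action_argument(a, b, goal, state):
    """Instantiate the necessary action scheme Arg_na (Definition 5.3)."""
    if not achieve_goal(a, goal, state):                 # Premises 1-4
        return None
    s3 = gamma(state, b)
    if hold_precondition(b, state) and s3 is not None and hold_goal(goal, s3):
        return None                                      # Premise 5 fails
    return {
        "scheme": "Arg_na",
        "conclusion": ("NecessaryAction", a[0], b[0], sorted(state)),
    }

s1 = {"Ontable(C)", "On(B,C)", "On(A,B)", "Clear(A)"}
print(necessary_action_argument(unstack("A", "B"), unstack("B", "C"),
                                {"Ontable(A)"}, s1))
```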
We define the argument scheme for explaining a state as follows.
Definition 5.4 (State Argument Scheme \(\mathit {Arg}_S\) )
A state argument \(\mathit {Arg}_S\) explains how the state S becomes true:
Premise 1: \(\gamma (S_1,a) \rightarrow (S_1 \setminus post(a)^-) \cup post(a)^+ = S\) . In the current state \(S_1\) , we can execute the action \(a \in \pi\) , after which the negative postconditions \(post(a)^-\) do not hold and the positive postconditions \(post(a)^+\) hold, that results in the state S.
Conclusion: Therefore, the state S is true.
Example 5.3.
The state argument \(\mathit {Arg}_S\) for the state \(S = \lbrace \mathit {On(B,C), Clear(A), Clear(B),}\) \(\mathit {Ontable(A),}\) \(\mathit {Ontable(C)} \rbrace\) in Example 4.1 is shown as follows, where:
\(a = \mathit {Unstack(A,B)}\) .
\(post(a)^- = \lbrace \mathit {On(A,B)} \rbrace\)
\(post(a)^+ = \lbrace \mathit {Ontable(A), Clear(B)} \rbrace\)
\(S_1 = \lbrace \mathit {Ontable(C), On(B,C), On(A,B), Clear(A)} \rbrace\) .
Premise 1:
\(\gamma (S_1, \; \mathit {Unstack(A,B)}) \rightarrow (S_1 \setminus post(a)^-) \cup post(a)^+ = S\) .
In the current state \(\lbrace \mathit {Ontable(C), On(B,C), On(A,B), Clear(A)} \rbrace\) , we can execute the action \(\mathit {Unstack(A,B)}\) , after which the negative postconditions \(\lbrace \mathit {On(A,B)} \rbrace\) do not hold and the positive postconditions \(\lbrace \mathit {Ontable(A), Clear(B)} \rbrace\) hold, that results in the state \(\lbrace \mathit {On(B,C), Clear(A), Clear(B), Ontable(A), Ontable(C)} \rbrace\) .
Conclusion:
Therefore, the state \(\lbrace \mathit {On(B,C), Clear(A), Clear(B), Ontable(A), Ontable(C)} \rbrace\) is true.
We define the argument scheme for explaining a goal as follows.
Definition 5.5 (Goal Argument Scheme \(\mathit {Arg}_g\) )
A goal argument \(\mathit {Arg}_g\) explains how a goal is achieved by an action in the plan:
Premise 1: \(\gamma (S_1,a) \rightarrow S_2\) . In the current state \(S_1\) , we can execute the action \(a \in \pi\) , that results in the next state \(S_2\) .
Premise 2: \(\mathit {HoldGoal(g, S_2)}\) . In the next state \(S_2\) , the goal g holds.
Conclusion: \(\mathit {AchieveGoal(a,g)}\) : Therefore, the action a achieves the goal g.
Example 5.4.
The goal argument \(\mathit {Arg}_g\) for the goal \(g = \mathit {Ontable(A)}\) in Example 4.1 is shown as follows, where:
\(a = \mathit {Unstack(A,B)}\) .
\(S_1 = \lbrace \mathit {Ontable(C)}, \mathit {On(B,C)}, \mathit {On(A,B)}, \mathit {Clear(A)} \rbrace\) .
\(S_2 = \lbrace \mathit {Ontable(C)}, \mathit {On(B,C)}, \mathit {Clear(A)}, \mathit {Clear(B)}, \mathit {Ontable(A)} \rbrace\) .
Premise 1:
\(\gamma (S_1, \; \mathit {Unstack(A,B)}) \rightarrow S_2\)
In the current state \(\lbrace \mathit {Ontable(C)}, \mathit {On(B,C)}, \mathit {On(A,B)}, \mathit {Clear(A)} \rbrace\) , we can execute the action \(\mathit {Unstack(A,B)}\) , that results in the next state \(\lbrace \mathit {Ontable(C)}, \mathit {On(B,C)}, \mathit {Clear(A)}, \mathit {Clear(B)},\) \(\mathit {Ontable(A)} \rbrace\) .
Premise 2:
\(\mathit {HoldGoal}(\mathit {Ontable(A)}, \; S_2)\) .
In the next state \(\lbrace \mathit {Ontable(C)}, \mathit {On(B,C)}, \mathit {Clear(A)}, \mathit {Clear(B)}, \mathit {Ontable(A)} \rbrace\) , the goal \(\mathit {Ontable(A)}\) holds.
Conclusion:
\(\mathit {AchieveGoal(\mathit {Unstack(A,B)}, \mathit {Ontable(A)})}\) .
Therefore, the action \(\mathit {Unstack(A,B)}\) achieves the goal \(\mathit {Ontable(A)}\) .
We then define the argument scheme for explaining a plan summary as follows.
Definition 5.6 (Plan Summary Argument Scheme \(\mathit {Arg}_{\pi }\) )
A plan summary argument \(\mathit {Arg}_{\pi }\) explains that a proposed sequence of actions \(\pi =\langle a_1,a_2,\ldots ,a_n \rangle\) is a solution to the planning problem P, because it achieves a set of goals G:
Premise 1: \(\gamma (S_1,a_1) \rightarrow S_2\) , \(\gamma (S_2,a_2) \rightarrow S_3\) ,..., \(\gamma (S_n,a_n) \rightarrow S_{n+1}\) . In the initial state \(S_1 = \bigtriangleup _I\) , we can execute the first action \(a_1\) in the sequence of actions \(\pi\) that results in the next state \(S_2\) and execute the next action \(a_2\) in the sequence in the state \(S_2\) that results in the next state \(S_3\) and carry on until the last action \(a_n\) in the sequence is executed in the state \(S_n\) that results in the goal state \(S_{n+1}=\bigtriangleup _G\) .
Premise 2: \(\mathit {HoldGoals(G, \bigtriangleup _G)}\) . In the goal state \(\bigtriangleup _G\) , all the goals in the set of goals G hold.
Premise 3: \(\mathit {AchieveGoals(\pi ,G)}\) . The sequence of actions \(\pi\) achieves the set of all goals G.
Conclusion: \(\mathit {Solution(\pi , P)}\) . Therefore, \(\pi\) is a solution to the planning problem P.
Example 5.5.
The plan summary argument \(\mathit {Arg}_{\pi }\) for the solution plan given in Example 4.1 is shown as follows, where:
\(S_1 = \bigtriangleup _I = \lbrace \mathit {Ontable(C), On(B,C), On(A,B)}, \mathit {Clear(A)} \rbrace\)
\(S_2 = \lbrace \mathit {On(B,C), Clear(A), Clear(B), Ontable(A)},\mathit {Ontable(C)} \rbrace\)
\(S_3 = \lbrace \mathit {Clear(A), Clear(B), Clear(C), Ontable(A), Ontable(B), Ontable(C)} \rbrace\)
\(S_4 = \bigtriangleup _G = \lbrace \mathit {On(C,A)}, \mathit {Ontable(A)}, \mathit {Ontable(B)}, \mathit {Clear(C)}, \mathit {Clear(B)} \rbrace\)
\(G = \lbrace \mathit {On(C,A)}, \mathit {Ontable(A)}, \mathit {Ontable(B)}, \mathit {Clear(C)}, \mathit {Clear(B)} \rbrace\)
\(\pi = \langle \mathit {Unstack(A,B)}, \mathit {Unstack(B,C)}, \mathit {Stack(C,A)} \rangle\)
Premise 1:
\(\gamma (S_1, \mathit {Unstack(A,B)}) \rightarrow S_2\)
In the initial state \(\lbrace \mathit {Ontable(C), On(B,C), On(A,B), Clear(A)} \rbrace\) , we can execute the action \(\mathit {Unstack(A,B)}\) that results in the next state \(\lbrace \mathit {On(B,C), Clear(A), Clear(B), Ontable(A),}\) \(\mathit {Ontable(C)} \rbrace\) .
\(\gamma (S_2, \mathit {Unstack(B,C)}) \rightarrow S_3\)
In the state \(\lbrace \mathit {On(B,C), Clear(A), Clear(B), Ontable(A), Ontable(C)} \rbrace\) , we can execute the action \(\mathit {Unstack(B,C)}\) that results in the next state \(\lbrace \mathit {Clear(A), Clear(B), Clear(C), Ontable(A),}\) \(\mathit { Ontable(B), Ontable(C)} \rbrace\) .
\(\gamma (S_3, \mathit {Stack(C,A)}) \rightarrow S_4\)
In the state \(\lbrace \mathit {Clear(A), Clear(B), Clear(C), Ontable(A), Ontable(B), Ontable(C)} \rbrace\) , we can execute the action \(\mathit {Stack(C,A)}\) that results in the goal state \(\lbrace \mathit {On(C,A)}, \mathit {Ontable(A)},\) \(\mathit {Ontable(B)}, \mathit {Clear(C)}, \mathit {Clear(B)} \rbrace\) .
Premise 2:
\(\mathit {HoldGoals}(G, \; \bigtriangleup _G)\)
In the goal state \(\lbrace \mathit {On(C,A)}, \mathit {Ontable(A)}, \mathit {Ontable(B)}, \mathit {Clear(C)}, \mathit {Clear(B)} \rbrace\) , all the goals in the set of goals \(\lbrace \mathit {On(C,A)}, \mathit {Ontable(A)}, \mathit {Ontable(B)}, \mathit {Clear(C)}, \mathit {Clear(B)} \rbrace\) hold.
Premise 3:
\(\mathit {AchieveGoals}(\pi , \; G)\)
The sequence of actions \(\langle \mathit {Unstack(A,B)},\mathit {Unstack(B,C)}, \mathit {Stack(C,A)} \rangle\) achieves the set of all goals \(\lbrace \mathit {On(C,A)}, \mathit {Ontable(A)}, \mathit {Ontable(B)}, \mathit {Clear(C)}, \mathit {Clear(B)} \rbrace\) .
Conclusion:
\(\mathit {Solution(\pi , \; P)}\)
Therefore, \(\langle \mathit {Unstack(A,B)}, \mathit {Unstack(B,C)}, \mathit {Stack(C,A)} \rangle\) is a solution to the planning problem P.
Having described the schemes and shown how they are used in our running example, we turn to the CQs. The five CQs given below describe the ways in which the arguments built using the argument schemes can interact with each other. These CQs are associated with (i.e., attack) one or more premises of the arguments constructed using the argument schemes and are in turn answered (i.e., attacked) by the other arguments, which are listed in the description.
CQ1: Is it possible for the plan \(\pi\) to be a solution?
This CQ is the first question that the user asks when presented with a solution plan \(\pi\) . The argument scheme \(\mathit {Arg}_{\pi }\) answers the CQ by constructing the summary argument for the plan \(\pi\) .
CQ2: Is it possible to execute the action a?
This CQ is associated with the following argument schemes: \(\mathit {Arg}_{\pi }\) , \(\mathit {Arg}_{S}\) , \(\mathit {Arg}_{g}\) . The argument scheme \(\mathit {Arg}_{a}\) answers the CQ by constructing the explanation argument that explains how it is possible to execute the action a.
CQ3: Is it possible to have the state S?
This CQ is associated with the following argument schemes: \(\mathit {Arg}_{\pi }\) , \(\mathit {Arg}_{a}\) , \(\mathit {Arg}_{g}\) , \(\mathit {Arg}_{na}\) . The argument scheme \(\mathit {Arg}_{S}\) answers the CQ by constructing the explanation argument for the state S.
CQ4: Is it possible to achieve the goal g?
This CQ is associated with the argument scheme \(\mathit {Arg}_{\pi }\) . The argument scheme \(\mathit {Arg}_{g}\) answers the CQ by constructing the explanation argument for the goal g.
CQ5: Is it necessary to execute action a rather than action b?
This CQ is associated with the following argument schemes: \(\mathit {Arg}_{\pi }\) , \(\mathit {Arg}_{S}\) , \(\mathit {Arg}_{g}\) . The argument scheme \(\mathit {Arg}_{na}\) answers the CQ by constructing the explanation argument that explains why it is necessary to execute action a rather than action b.
Figure 2 presents a visualization of an example argumentation graph for some arguments, CQs, and attack relations, where the grounded extension \(\mathit {Gr} = \lbrace \mathit {Arg_{\pi }}, \mathit {Arg_a}, \mathit {Arg_S}, \mathit {Arg_g}, \mathit {Arg_{na}}, \mathit {Arg_{S^{\prime }}},\) \(\mathit {Arg_{a^{\prime }}}\rbrace\) .
Fig. 2. Visualization of an example argumentation graph.
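A graph of this shape can be checked with the grounded-extension sketch from Section 3. The framework below is our own simplified illustration (the argument names and exact attack pairs are indicative, not copied from Figure 2): CQ2–CQ5 question premises of the plan summary argument, and every CQ is answered (attacked) by the scheme listed above, so the grounded extension contains all the plan explanation arguments and none of the CQs:

```python
arguments = {"Arg_pi", "Arg_a", "Arg_S", "Arg_g", "Arg_na",
             "CQ1", "CQ2", "CQ3", "CQ4", "CQ5"}
attacks = {
    ("Arg_pi", "CQ1"),                       # Arg_pi answers CQ1
    ("CQ2", "Arg_pi"), ("Arg_a", "CQ2"),     # Arg_a answers CQ2
    ("CQ3", "Arg_pi"), ("Arg_S", "CQ3"),     # Arg_S answers CQ3
    ("CQ4", "Arg_pi"), ("Arg_g", "CQ4"),     # Arg_g answers CQ4
    ("CQ5", "Arg_pi"), ("Arg_na", "CQ5"),    # Arg_na answers CQ5
}
# Prints {Arg_pi, Arg_a, Arg_S, Arg_g, Arg_na}: every CQ is defeated,
# so the plan explanation arguments are all acceptable.
print(grounded_extension(arguments, attacks))
```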

5.1 Properties to Evaluate the Plan Explanation Arguments

We organise the arguments and their interactions by mapping them into a Dung [10] abstract argumentation framework denoted by \(\mathit {AAF} = (\mathcal {A}, \mathcal {R})\) , where \(\mathcal {A}\) is a set of arguments and \(\mathcal {R}\) is an attack relation \((\mathcal {R} \subseteq \mathcal {A} \times \mathcal {A})\) . \(\mathit {Args} \subset \mathcal {A}\) and \(\mathit {CQs} \subset \mathcal {A}\) , where \(\mathit {Args} = \lbrace \mathit {Arg}_{\pi }, \mathit {Arg}_a, \mathit {Arg}_{na}, \mathit {Arg}_S, \mathit {Arg}_g\rbrace\) and \(\mathit {CQs} = \lbrace \mathit {CQ}_1, \mathit {CQ}_2, \mathit {CQ}_3, \mathit {CQ}_4, \mathit {CQ}_5\rbrace\) . Given the way that the plan explanation arguments were constructed, they will be acceptable under the grounded semantics if the plan is valid and suitable.
We present the properties of the plan explanation arguments as follows.
Lemma 5.1.
For a valid and suitable plan \(\pi\) , the set of arguments \(\mathit {Args}\) is complete, in that, if a \(\mathit {CQ} \in \mathit {CQs}\) exists, then it will be answered (i.e., attacked) by an \(\mathit {Arg} \in \mathit {Args}\) .
Proof.
Since \((\mathit {Arg}_{\pi }, CQ1) \in \mathcal {R}\) , \((\mathit {Arg}_a, CQ2) \in \mathcal {R}\) , \((\mathit {Arg}_S, CQ3) \in \mathcal {R}\) , \((\mathit {Arg}_g, CQ4) \in \mathcal {R}\) , and \((\mathit {Arg}_{na}, CQ5) \in \mathcal {R}\) , each \(\mathit {CQ} \in \mathit {CQs}\) is attacked by a unique \(\mathit {Arg} \in \mathit {Args}\) . Thus, \(\mathit {Args}\) is complete.□
In other words, if the plan is valid and suitable, then all the objections that can be put forward regarding the plan and its elements do not hold. In particular:
Lemma 5.2.
For a valid and suitable plan \(\pi\) , \(\mathit {Arg}_{\pi } \in \mathit {Gr}\) iff \(CQ \not\in \mathit {Gr}\) when \((\mathit {CQ}, \mathit {Arg}_{\pi }) \in \mathcal {R}\) , \(CQ \in CQs\) .
Proof.
Follows from Lemma 5.1. Since any CQ that attacks \(\mathit {Arg}_{\pi }\) is in turn attacked by an \(\mathit {Arg} \in \mathit {Args}\) , therefore, \(CQ \not\in \mathit {Gr}\) . Thus, \(\mathit {Arg}_{\pi } \in \mathit {Gr}\) .□
In other words, any single objection that can be provided regarding the plan and its elements does not hold.
In a very similar way, we can show the following:
Lemma 5.3.
For a valid and suitable plan \(\pi\) , \(\mathit {Arg}_{\pi } \in \mathit {Gr}\) iff \(\forall g \in G \; \mathit {Arg}_g \in \mathit {Gr}\) .
Proof.
Since plan \(\pi\) achieves all goals \(g \in G\) , and \(\mathit {Arg}_{g\in G}\) attack all CQs that attack the goals in the premises of \(\mathit {Arg}_{\pi }\) , therefore, \(\forall g \in G \; \mathit {Arg}_g \in \mathit {Gr}\) . Thus, \(\mathit {Arg}_{\pi } \in \mathit {Gr}\) .□
In other words, all the objections that a user might have regarding the set of goals G achieved by the plan \(\pi\) do not hold.
Lemma 5.4.
For a valid and suitable plan \(\pi\) , \(\mathit {Arg}_{\pi } \in \mathit {Gr}\) iff \(\forall a \in A \; \mathit {Arg}_a \in \mathit {Gr}\) and \(\forall na \in A \; \mathit {Arg}_{na} \in \mathit {Gr}\) .
Proof.
Since all possible actions \(a \in A\) and all necessary actions \(na \in A\) can be executed in the plan \(\pi\) , and furthermore, \(\mathit {Arg}_{a\in A}\) and \(\mathit {Arg}_{na \in A}\) attack all CQs that attack the actions in the premises of \(\mathit {Arg}_{\pi }\) , therefore, \(\forall a \in A \; \mathit {Arg}_a \in \mathit {Gr}\) and \(\forall na \in A \; \mathit {Arg}_{na} \in \mathit {Gr}\) . Thus, \(\mathit {Arg}_{\pi } \in \mathit {Gr}\) .□
In other words, all the objections of the user regarding the set of actions A in the plan \(\pi\) do not hold.
Lemma 5.5.
For a valid and suitable plan \(\pi\) , \(\mathit {Arg}_{\pi } \in \mathit {Gr}\) iff \(\forall S_i \in S \; \mathit {Arg}_{S_i} \in \mathit {Gr}\) .
Proof.
Since all states \(S_i \in S\) can be established to be true (at a certain action step) in the plan \(\pi\) , and \(\mathit {Arg}_{S_i\in S}\) attack all CQs that attack the states in the premises of \(\mathit {Arg}_{\pi }\) , therefore, \(\forall S_i \in S \; \mathit {Arg}_{S_i} \in \mathit {Gr}\) . Thus, \(\mathit {Arg}_{\pi } \in \mathit {Gr}\) .□
In other words, all the objections of the user regarding the set of states S (where \(\exists S_i \in S\) held at each action step) in the plan \(\pi\) do not hold.
Theorem 5.1.
For a plan \(\pi\) , \(\mathit {Arg}_{\pi } \in \mathit {Gr}\) iff plan \(\pi\) is valid and suitable.
Proof.
Follows immediately from Lemmas 5.1, 5.2, 5.3, 5.4, and 5.5.□
In other words, all the objections regarding the plan \(\pi\) and its elements do not hold. Therefore, plan \(\pi\) is valid and suitable, and therefore, the explanation argument is acceptable.
Theorem 5.2.
For a plan \(\pi ^{\prime }\) , \(\mathit {Arg}_{\pi ^{\prime }} \not\in \mathit {Gr}\) iff plan \(\pi ^{\prime }\) is invalid and not suitable.
Proof.
Follows immediately from Lemmas 5.1, 5.2, 5.3, 5.4, and 5.5.□
In other words, there is one (or more) objection(s) regarding the plan \(\pi ^{\prime }\) that hold(s). Therefore, plan \(\pi ^{\prime }\) is invalid and not suitable, and therefore, the explanation argument is not acceptable.
An overall summary of the properties is presented in Table 2. Collectively these properties align the notion of a valid and suitable plan with an acceptable plan explanation argument. If we assume, as we do going forward, that a user is rational in the sense of (1) accepting the arguments in the grounded extension of a Dung framework and (2) only holding as true those facts in the planning model, then if a plan is valid and suitable, that user will accept that the plan explanation argument holds. This is why we consider the argument to be a suitable explanation—it can justify all the plan elements through arguments that are grounded in the planning model, interactively constructing the kind of explanation studied by Reference [12].
Table 2.
Property | Purpose | Lemmas Followed
Lemma 5.1 | Argument Completeness |
Lemma 5.2 | Argument Interaction | 5.1
Lemma 5.3 | Goal Achievement |
Lemma 5.4 | Action Execution |
Lemma 5.5 | State Existence |
Theorem 5.1 | Valid and Suitable Plan | 5.1, 5.2, 5.3, 5.4, 5.5
Theorem 5.2 | Not Valid and Not Suitable Plan | 5.1, 5.2, 5.3, 5.4, 5.5
Table 2. Summary of Properties for Plan Explanation Arguments

6 Dialogue System Using Argument Schemes and Critical Questions

In this section, we present a system for a formal dialogue between planner and user that allows the user to explore the plan explanation arguments, raising objections and having them answered. It provides an interactive process that recursively unpacks the explanation arguments and allows the user to assure themselves of their acceptability (and hence the validity and suitability of the plan).
The dialogue takes place between two participants: (1) planner and (2) user.
The communication language consists of the legal dialogue moves by the two participants. The moves that the planner and user can use in a dialogue are:
Definition 6.1 (Planner Moves).
Planner moves consist of the following:
(1)
\(\mathit {Arg_{\pi }}\) , plan summary argument.
(2)
\(\mathit {Arg_a}\) , possible action argument.
(3)
\(\mathit {Arg_{na}}\) , necessary action argument.
(4)
\(\mathit {Arg_S}\) , state argument.
(5)
\(\mathit {Arg_g}\) , goal argument.
Definition 6.2 (User Moves).
User moves consist of the following:
(1)
\(\mathit {CQ_1}\) , is it possible for the plan \(\pi\) to be a solution?
(2)
\(\mathit {CQ_2}\) , is it possible to execute the action a?
(3)
\(\mathit {CQ_3}\) , is it possible to have the state S?
(4)
\(\mathit {CQ_4}\) , is it possible to achieve the goal g?
(5)
\(\mathit {CQ_5}\) : Is it necessary to execute action a rather than action b?
A joint commitment store is used for holding the planner and user moves (i.e., arguments) used within a dialogue.
Definition 6.3 (Commitment Store).
A commitment store denoted by \(\mathit {CS} \subseteq (\mathit {Args} \cup \mathit {CQs})\) holds all the arguments (i.e., planner moves and user moves) that the planner and user are dialectically committed to. \(\mathit {CS(pl)}\) denotes all the arguments of the planner and \(\mathit {CS(us)}\) denotes all the arguments of the user.
To ensure that the dialogue conversation ends, we define the termination conditions for the dialogue as follows.
Definition 6.4 (Termination conditions).
The dialogue terminates when any one of the following three conditions holds:
\(T_1: \mathit {PlannerMove} = null\) . When the planner is unable to generate the argument, because one of the premises of the argument is not true.
\(T_2: \mathit {UserMove} = null\) . When the user has exhaustively asked all CQs regarding the components of the plan.
\(T_3: \mathit {UserMove} = none\) . When the user does not want to ask any more questions.
Termination of the dialogue conversation results in an outcome.
Definition 6.5 (Dialogue Outcomes).
There are three possible outcomes of the dialogue:
\(O_1 =\) “Plan is invalid or not suitable, and therefore, the explanation is unacceptable.”
\(O_2 =\) “Plan is valid and suitable, and therefore, the explanation is acceptable.”
\(O_3 =\) “Explanation is acceptable.”
The user and planner both have to follow rules.
Definition 6.6 (User Move Rules).
The moves that are allowed for the user to put forward depend on the previous moves of the planner. The allowed user moves in response to the planner moves are given below, where the first move does not require any previous planner moves.
(1)
\(\mathit {CQ_1}\)
(2)
\(\mathit {Arg_a}\) : \(\mathit {CQ_3}\)
(3)
\(\mathit {Arg_{na}}\) : \(\mathit {CQ_3}\)
(4)
\(\mathit {Arg_S}\) : \(\mathit {CQ_2, CQ_5}\)
(5)
\(\mathit {Arg_g}\) : \(\mathit {CQ_2, CQ_3, CQ_5}\)
(6)
\(\mathit {Arg_{\pi }}\) : \(\mathit {CQ_2, CQ_3, CQ_4, CQ_5}\)
Definition 6.7 (Planner Move Rules).
The moves that are allowed for the planner to put forward depend on the previous move by the user. The allowed planner moves in response to the user moves are given below.
(1)
\(\mathit {CQ_1}\) : \(\mathit {Arg_{\pi }}\)
(2)
\(\mathit {CQ_2}\) : \(\mathit {Arg_a}\)
(3)
\(\mathit {CQ_3}\) : \(\mathit {Arg_S}\)
(4)
\(\mathit {CQ_4}\) : \(\mathit {Arg_g}\)
(5)
\(\mathit {CQ_5}\) : \(\mathit {Arg_{na}}\)
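Read together, Definitions 6.6 and 6.7 amount to two lookup tables: one from the planner's last move to the CQs the user may ask next, and one from a user CQ to the scheme that answers it. A minimal sketch of this reading (our own encoding of the rules above):

```python
# User move rules (Definition 6.6): planner move -> CQs allowed next.
USER_MOVES = {
    "Arg_a":  ["CQ3"],
    "Arg_na": ["CQ3"],
    "Arg_S":  ["CQ2", "CQ5"],
    "Arg_g":  ["CQ2", "CQ3", "CQ5"],
    "Arg_pi": ["CQ2", "CQ3", "CQ4", "CQ5"],
}

# Planner move rules (Definition 6.7): user CQ -> answering scheme.
PLANNER_MOVES = {
    "CQ1": "Arg_pi",
    "CQ2": "Arg_a",
    "CQ3": "Arg_S",
    "CQ4": "Arg_g",
    "CQ5": "Arg_na",
}
```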
The dialogue has to follow certain rules (i.e., a protocol).
Definition 6.8 (Dialogue Rules).
Following are the rules of the dialogue denoted by \(\mathit {DR}\) :
(1)
The first move in the dialogue is made by the user, which is \(\mathit {CQ_1}\) .
(2)
Both players, i.e., the planner and the user, can put forward a single move at a given step in response to each other.
(3)
Once the move is put forward, it is stored in the commitment store \(\mathit {CS}\) .
(4)
The user cannot put forward a move, i.e., an argument, that is already present in the commitment store \(\mathit {CS}\) for a plan component, and the same goes for the planner.
(5)
Each user move has to follow the user move rules given in Definition 6.6.
(6)
Each planner move has to follow the planner move rules given in Definition 6.7.
(7)
The dialogue ends when any one of the termination conditions \(T_1\) , \(T_2\) , or \(T_3\) holds.
The dialogue between the planner and user is then defined.
Definition 6.9 (Dialogue).
We define a dialogue to be a sequence of moves \(\mathcal {D}= [M_0,M_1,\ldots ,M_n]\) . The dialogue takes place between the two participants, i.e., planner and user. Each dialogue participant must follow the dialogue rules \(\mathit {DR}\) for making moves. Each move put forward by both participants is recorded and stored in the commitment store \(\mathit {CS}\) . The dialogue terminates when any one of the termination conditions \(T_1, T_2,\) or \(T_3\) holds. Based on the termination condition T, the outcome of the dialogue can be:
If \(T=T_1\) , then outcome of the dialogue is:
\(O_1 =\,\) “Plan is invalid or not suitable, and therefore, the explanation is unacceptable.”
If \(T=T_2\) , then outcome of the dialogue is:
\(O_2 =\) “Plan is valid and suitable, and therefore, the explanation is acceptable.”
If \(T=T_3\) , then outcome of the dialogue is:
\(O_3 =\) “Explanation is acceptable.”
To describe the dialectical process, we introduce Algorithms 1–4. Algorithm 4 presents an operational description of the dialogue \(\mathcal {D}\) in the dialogue system specified above, which allows the planner and user to find moves, i.e., arguments to put forward in a dialogue, based on the counterarguments put forward by the other party. The dialogue starts when the planner presents a plan \(\pi\) to the user and the user asks the first \(CQ_1\) . During the dialogue conversation, the planner is either able to construct an appropriate argument via the argument schemes, or the argument returned is null, indicating that it is unable to construct the argument, i.e., \(T_1\) holds, which consequently terminates the dialogue. Similarly, the user either has the option of asking a critical question regarding the premises of any one argument from the previous arguments put forward by the planner, or the user has no more questions to ask, i.e., \(T_3\) holds, which consequently terminates the dialogue. Furthermore, the user may exhaustively ask all the critical questions (and the planner successfully answers them), i.e., \(T_2\) holds, and thus, the dialogue terminates. The dialogue finishes with one of the three possible outcomes \(O_1\) , \(O_2\) , or \(O_3\) .
Algorithms 1 and 2 provide an operational description of ways that a rational user may behave. We can think of these as both a mechanism for proving that the dialogue works as desired and as a mechanism for generating allowable moves in an implementation that walks the user through an exploration of the plan. In Algorithm 1, the user is able to choose a previous planner argument in the dialogue and find all the allowed moves that she can put forward in response to that. The input of Algorithm 1 is the set of all previous planner moves, i.e., arguments generated via the argument schemes and the output is a set of user moves consisting of arguments, i.e., CQs, from which the user can further choose a single CQ. In Algorithm 2, the user can select a particular move from the set of user moves or choose not to ask any more questions. Thus, the output of Algorithm 2 is either the chosen CQ or none. In Algorithm 3, the planner can find the relevant move (i.e., argument generated via the argument schemes) to answer the user CQ. The input of Algorithm 3 is the previous user move, i.e., CQ and the output is an appropriate planner move corresponding to the user CQ. If the planner is unable to generate the argument via the argument schemes, then we assume the output planner move to be null.
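The overall control flow of the dialogue can be pictured as a simple loop. The sketch below is our own abstraction of Definition 6.9 and Algorithm 4 (the algorithms themselves are not reproduced here); `build_argument` stands in for Algorithm 3 (scheme instantiation, returning None when a premise fails) and `ask_user` for Algorithms 1 and 2 (returning None when all CQs are exhausted, or the string "none" when the user stops asking):

```python
def dialogue(ask_user, build_argument):
    """Abstract sketch of the dialogue loop of Definition 6.9.
    Returns one of the outcomes O1, O2, or O3."""
    commitment_store = []                 # CS: all moves made so far
    user_move = "CQ1"                     # the user always opens with CQ1
    while True:
        commitment_store.append(("user", user_move))
        planner_move = build_argument(user_move, commitment_store)
        if planner_move is None:          # T1: a premise is not true
            return "O1: plan is invalid or not suitable; explanation unacceptable"
        commitment_store.append(("planner", planner_move))
        user_move = ask_user(commitment_store)
        if user_move is None:             # T2: all CQs exhausted and answered
            return "O2: plan is valid and suitable; explanation acceptable"
        if user_move == "none":           # T3: the user has no more questions
            return "O3: explanation is acceptable"
```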

6.1 Properties to Evaluate the Dialogue System

A dialogue \(\mathcal {D}\) generated in the dialogue system has the following properties. In these results, \(\mathit {Args_{\mathcal {D}}}\) denotes all the moves, i.e., arguments used by the planner in a dialogue \(\mathcal {D}\) , where \(\mathit {Args_{\mathcal {D}}} \subseteq \mathit {Args}\) . \(\mathit {CQs_{\mathcal {D}}}\) denotes all the moves, i.e., CQs used by the user in a dialogue \(\mathcal {D}\) , where \(\mathit {CQs_{\mathcal {D}}} \subseteq CQs\) .
The first four properties (i.e., Lemma 6.1 and Propositions 6.1, 6.2, 6.3) align the validity and suitability of a plan with the successful defence of the explanation argument.
Lemma 6.1.
For a given dialogue \(\mathcal {D}\) , plan \(\pi\) is valid and suitable, and therefore, its explanation is acceptable iff the planner has exhaustively answered all CQs regarding the plan and its elements, i.e., \(\mathit {CQs_{\mathcal {D}}} = CQs\) .
Proof.
Follows from Lemma 5.1. Since all the objections (i.e., CQs) regarding the plan \(\pi\) put forward by the user in the dialogue \(\mathcal {D}\) do not hold, plan \(\pi\) is valid and suitable. If we assume that a user is rational in the sense of accepting the arguments in the grounded extension of a Dung framework, as shown in Lemma 5.1, and only holding as true those facts in the planning model, then if a plan is valid and suitable, that user will accept that the plan explanation holds.□
Proposition 6.1.
For a given dialogue \(\mathcal {D}\) , plan \(\pi\) ’s explanation is acceptable iff the planner has answered all CQs of the user and the user does not have any more CQs to ask.
Proof.
Follows from Lemma 6.1. Since all the objections (i.e., CQs) regarding the plan \(\pi\) put forward by the user in the dialogue \(\mathcal {D}\) do not hold, therefore, plan \(\pi\) is not proven to be invalid or not suitable, which implies that user will accept that the plan explanation holds.□
Proposition 6.2.
For a given dialogue \(\mathcal {D}\) , plan \(\pi ^{\prime }\) is invalid or not suitable, and therefore, its explanation is unacceptable to the user iff the planner is unable to answer, i.e., generate an appropriate argument via the argument schemes for at least one \(\mathit {CQ} \in \mathit {CQs}_\mathcal {D}\) .
Proof.
Follows from Theorem 5.2. Since plan \(\pi ^{\prime }\) is invalid or not suitable, there is one (or more) objection(s) (i.e., CQ(s)) regarding the plan \(\pi ^{\prime }\) put forward by the user in the dialogue \(\mathcal {D}\) that hold(s). Therefore, the dialogue \(\mathcal {D}\) for an invalid plan \(\pi ^{\prime }\) results in a plan explanation that is unacceptable to the user.□
Proposition 6.3.
A dialogue \(\mathcal {D}\) for a valid and suitable plan \(\pi\) results in an explanation that is always acceptable.
Proof.
Follows from Theorem 5.1. Since plan \(\pi\) is valid and suitable, all the objections (i.e., CQs) regarding the plan \(\pi\) put forward by the user in the dialogue \(\mathcal {D}\) do not hold. Therefore, the dialogue \(\mathcal {D}\) for a valid and suitable plan \(\pi\) results in a plan explanation that is always acceptable.□
Next, we consider termination of the dialogue.
Theorem 6.1.
A dialogue \(\mathcal {D}\) for a plan \(\pi\) always terminates.
Proof.
The three termination conditions \(T_1, T_2,\) and \(T_3\) ensure that the dialogue \(\mathcal {D}\) always terminates. We prove this by considering all three conditions:
(1)
\(T_1\) : This arises when the planner is unable to construct an appropriate argument in response to a user question. Whenever this happens the dialogue \(\mathcal {D}\) terminates.
(2)
\(T_2\) : This arises when the user has asked all possible CQs and the planner has successfully answered them. Whenever this happens the dialogue \(\mathcal {D}\) terminates, and the user is not able to put forward any more questions.
(3)
\(T_3\) : This arises when the user has no further questions to ask. Whenever this happens the dialogue \(\mathcal {D}\) terminates.
For a valid and suitable plan \(\pi\), the \(T_2\) condition ensures that the dialogue terminates once the user has exhaustively asked all CQs regarding the elements of the plan (the worst case), and the dialogue rules DR ensure that the user cannot ask the same CQ (for the same plan element) again; therefore, the dialogue \(\mathcal {D}\) for a valid and suitable plan \(\pi\) always terminates. For an invalid or not suitable plan \(\pi ^{\prime }\), the \(T_1\) condition will arise before \(T_2\) can be reached; since we have already shown that the dialogue terminates by \(T_2\) in the worst case, the dialogue \(\mathcal {D}\) for an invalid or not suitable plan \(\pi ^{\prime }\) also always terminates.□
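To illustrate why the three termination conditions guarantee termination, the following self-contained sketch runs a dialogue loop over a finite set of CQs that shrinks on every iteration. The representation of CQs and answers (a dictionary mapping each still-askable CQ to the planner's answer, with None standing for an unanswerable CQ) is an assumption made purely for illustration.

```python
# Illustrative sketch of the top-level dialogue loop and conditions T1, T2, T3.

def run_dialogue(remaining_cqs, choose):
    """Loop until one of T1, T2, or T3 arises; the loop terminates because the
    finite set of remaining CQs shrinks on every iteration."""
    planner_args = []
    while True:
        if not remaining_cqs:
            return "acceptable", planner_args       # T2: all CQs asked and answered
        cq = choose(set(remaining_cqs))             # user move (Algorithms 1 and 2, sketched)
        if cq is None:
            return "acceptable", planner_args       # T3: user has no further questions
        answer = remaining_cqs.pop(cq)              # planner move (Algorithm 3, sketched)
        if answer is None:
            return "unacceptable", planner_args     # T1: planner cannot answer the CQ
        planner_args.append(answer)

# Example run: the user asks every question; all are answerable, so T2 arises.
cqs = {"CQ2: action possible?": "Arg_a", "CQ4: goal achieved?": "Arg_g"}
print(run_dialogue(cqs, choose=lambda allowed: next(iter(allowed))))
```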
Next, we show that there is a sense in which the dialogue is sound and complete with respect to the arguments that are expressed.
Theorem 6.2.
A dialogue \(\mathcal {D}\) for a valid and suitable plan \(\pi\) is complete, in that the planner has an argument for each element of the plan and the user has a CQ for each element of the plan. Furthermore, if the user has a \(\mathit {CQ} \in \mathit {CQs_{\mathcal {D}}}\), then the planner has a corresponding argument \(\mathit {Arg} \in \mathit {Args_{\mathcal {D}}}\) to respond with.
Proof.
Follows from Theorem 5.1. Lines 2–17 of Algorithm 1 help the user in finding a CQ for each plan element corresponding to the planner argument (or the plan). Similarly, Lines 2–12 of Algorithm 3 help the planner in finding an argument for each plan element (or plan summary) corresponding to a user CQ.□
Theorem 6.3.
A dialogue \(\mathcal {D}\) for a valid and suitable plan \(\pi\) is sound, in that \(\forall \mathit {CQ} \in \mathit {CQs_{\mathcal {D}}}\), each user move \(\mathit {CQ}\) is correct. Similarly, \(\forall \mathit {Arg} \in \mathit {Args_{\mathcal {D}}}\), each planner move \(\mathit {Arg}\) is correct.
Proof.
Both participants of the dialogue \(\mathcal {D}\), i.e., the planner and the user, must follow the dialogue rules DR. This ensures that each of them always picks a move that is legally allowed, i.e., correct, at every step of the dialogue \(\mathcal {D}\). Since Algorithm 1 finds the set of all allowed user moves following the dialogue rules DR, as given in Lines 2–17, \(\forall \mathit {CQ} \in \mathit {CQs_{\mathcal {D}}}\), each user move \(\mathit {CQ}\) is correct. Similarly, Algorithm 3 finds a planner move following the dialogue rules DR, as given in Lines 2–12, so \(\forall \mathit {Arg} \in \mathit {Args_{\mathcal {D}}}\), each planner move \(\mathit {Arg}\) is correct.□
Our experience is that for many planning problems, although the AI planner will produce valid and suitable plans, users do not necessarily understand how all the elements of the plan are correct or suitable. This leads them to question the validity and suitability of the plan. We believe that our dialogue process will overcome this kind of misapprehension by allowing users to question plan elements and have the planner explain to them why those elements are included in the plan.

6.2 Example of a Dialogue

The following is one instance of a dialogue generated by the dialogue system given the blocks world of Example 4.1.
(1)
The planner presents a plan \(\pi\) , i.e., a sequence of actions to the user, which is given as follows:
\(\pi = \langle \mathit {Unstack(A,B)}, \mathit {Unstack(B,C)}, \mathit {Stack(C,A)} \rangle .\)
(2)
Algorithm 1 is called by the user to find a set of possible moves, and then, Algorithm 2 is called by the user to select a particular move (or end the dialogue if the user has no questions). Since \(\pi\) is not an argument, it is assumed that the planner has presented the plan \(\pi\) . The user asks:
CQ1: Is it possible for the plan \(\pi\) to be a solution?
(3)
Algorithm 3 is called by the planner to find a suitable move, i.e., \(\mathit {Arg}_{\pi }\), which is the plan summary argument. The planner presents the argument \(\mathit {Arg}_{\pi }\), which is given as follows:
Premise 1:
\(\gamma (S_1, \mathit {Unstack(A,B)}) \rightarrow S_2\)
In the initial state \(\lbrace \mathit {Ontable(C), On(B,C), On(A,B), Clear(A)} \rbrace\) , we can execute the action \(\mathit {Unstack(A,B)}\) that results in the next state \(\lbrace \mathit {On(B,C), Clear(A), Clear(B), Ontable(A), Ontable(C)} \rbrace\) .
\(\gamma (S_2, \mathit {Unstack(B,C)}) \rightarrow S_3\)
In the state \(\lbrace \mathit {On(B,C), Clear(A), Clear(B), Ontable(A), Ontable(C)} \rbrace\) , we can execute the action \(\mathit {Unstack(B,C)}\) that results in the next state \(\lbrace \mathit {Clear(A), Clear(B), Clear(C), Ontable(A), Ontable(B), Ontable(C)} \rbrace\) .
\(\gamma (S_3, \mathit {Stack(C,A)}) \rightarrow S_4\)
In the state \(\lbrace \mathit {Clear(A), Clear(B), Clear(C), Ontable(A), Ontable(B), Ontable(C)} \rbrace\) , we can execute the action \(\mathit {Stack(C,A)}\) that results in the goal state \(\lbrace \mathit {On(C,A)}, \mathit {Ontable(A)}, \mathit {Ontable(B)},\mathit {Clear(C)}, \mathit {Clear(B)} \rbrace\) .
Premise 2:
\(\mathit {HoldGoals}(\bigtriangleup _G, \; G)\)
In the goal state \(\lbrace \mathit {On(C,A)}, \mathit {Ontable(A)}, \mathit {Ontable(B)}, \mathit {Clear(C)}, \mathit {Clear(B)} \rbrace\) , all the goals in the set of goals \(\lbrace \mathit {On(C,A)}, \mathit {Ontable(A)}, \mathit {Ontable(B)}, \mathit {Clear(C)}, \mathit {Clear(B)} \rbrace\) hold.
Premise 3:
\(\mathit {AchieveGoals}(\pi , \; G)\)
The sequence of actions \(\langle \mathit {Unstack(A,B)},\mathit {Unstack(B,C)}, \mathit {Stack(C,A)} \rangle\) achieves the set of all goals \(\lbrace \mathit {On(C,A)}, \mathit {Ontable(A)}, \mathit {Ontable(B)}, \mathit {Clear(C)}, \mathit {Clear(B)} \rbrace\) .
Conclusion:
\(\mathit {Solution(\pi , \; P)}\)
Therefore, \(\langle \mathit {Unstack(A,B)}, \mathit {Unstack(B,C)}, \mathit {Stack(C,A)} \rangle\) is a solution to the planning problem P.
(4)
Algorithm 1 is called by the user, which returns a set of CQs, \(\lbrace CQ_2, CQ_3, CQ_4, CQ_5 \rbrace\) , and after that, Algorithm 2 is called by the user to select one of the CQs or end the dialogue. The user asks:
CQ2: Is it possible to execute the action \(a = \mathit {Unstack(A,B)}\) ?
(5)
Algorithm 3 is called by the planner to find a suitable move, i.e., \(\mathit {Arg}_a\), which is the possible action argument for action \(a=\mathit {Unstack(A,B)}\). The planner presents the argument \(\mathit {Arg}_a\), which is given as follows:
Premise 1:
\(\mathit {HoldPrecondition(pre(\mathit {Unstack(A,B)}), S_1)}\)
In the current state \(\lbrace \mathit {Ontable(C), On(B,C), On(A,B)}, \mathit {Clear(A)} \rbrace\) , the pre-condition \(\lbrace \mathit {Clear(A), On(A,B)} \rbrace\) of action \(\mathit {Unstack(A,B)}\) holds.
Conclusion:
\(\mathit {ExecuteAction(Unstack(A,B), \; S_1)}\)
Therefore, it is possible to execute action \(\mathit {Unstack(A,B)}\) in the current state \(\lbrace \mathit {Ontable(C), On(B,C), On(A,B), Clear(A)} \rbrace\) .
(6)
Algorithm 1 is called by the user, where the user chooses the previous planner argument \(\mathit {Arg}_{\pi }\) to question,8 which returns a set of CQs, \(\lbrace CQ_2, CQ_3, CQ_4, CQ_5\rbrace\) . After that, Algorithm 2 is called by the user to select a CQ or end the dialogue. The user asks:
CQ3: Is it possible to have the state \(S = \lbrace \mathit {On(B,C), Clear(A), Clear(B), Ontable(A), Ontable(C)} \rbrace\) ?
(7)
Algorithm 3 is called by the planner to find a suitable move, i.e., \(\mathit {Arg}_S\), which is the state argument for state \(S = \lbrace \mathit {On(B,C), Clear(A), Clear(B), Ontable(A), Ontable(C)} \rbrace\). The planner presents the argument \(\mathit {Arg}_S\), which is given as follows:
Premise 1:
\(\gamma (S_1, \; \mathit {Unstack(A,B)}) \rightarrow (S_1 \setminus post(a)^-) \cup post(a)^+ = S\) .
In the current state \(\lbrace \mathit {Ontable(C), On(B,C), On(A,B), Clear(A)} \rbrace\), we can execute the action \(\mathit {Unstack(A,B)}\), after which the negative postconditions \(\lbrace \mathit {On(A,B)} \rbrace\) no longer hold and the positive postconditions \(\lbrace \mathit {Ontable(A), Clear(B)} \rbrace\) hold, which results in the state \(\lbrace \mathit {On(B,C), Clear(A), Clear(B), Ontable(A), Ontable(C)} \rbrace\) (a short executable sketch of this state-transition computation is given after Figure 3).
Conclusion:
Therefore, the state \(\lbrace \mathit {On(B,C), Clear(A), Clear(B), Ontable(A), Ontable(C)} \rbrace\) is true.
(8)
Algorithm 1 is called by the user, where the user chooses the previous planner argument \(\mathit {Arg}_{\pi }\) to question, which returns a set of CQs, \(\lbrace CQ_2, CQ_3, CQ_4, CQ_5\rbrace\) . After that, Algorithm 2 is called by the user to select a CQ or end the dialogue. The user asks:
CQ4: Is it possible to achieve the goal \(g = \mathit {Ontable(A)}\) ?
(9)
Algorithm 3 is called by the planner to find a suitable move, i.e., \(\mathit {Arg}_g\), which is the goal argument for goal \(g = \mathit {Ontable(A)}\). The planner presents the argument \(\mathit {Arg}_g\), which is given as follows:
Premise 1:
\(\gamma (S_1, \; \mathit {Unstack(A,B)}) \rightarrow S_2\)
In the current state \(\lbrace \mathit {Ontable(C)}, \mathit {On(B,C)}, \mathit {On(A,B)}, \mathit {Clear(A)} \rbrace\), we can execute the action \(\mathit {Unstack(A,B)}\), which results in the next state \(\lbrace \mathit {Ontable(C)}, \mathit {On(B,C)}, \mathit {Clear(A)}, \mathit {Clear(B)}, \mathit {Ontable(A)} \rbrace\).
Premise 2:
\(\mathit {HoldGoal}(\mathit {Ontable(A)}, \; S_2)\) .
In the next state \(\lbrace \mathit {Ontable(C)}, \mathit {On(B,C)}, \mathit {Clear(A)}, \mathit {Clear(B)}, \mathit {Ontable(A)} \rbrace\) , the goal \(\mathit {Ontable(A)}\) holds.
Conclusion:
\(\mathit {AchieveGoal(\mathit {Unstack(A,B)}, \mathit {Ontable(A)})}\) .
Therefore, the action \(\mathit {Unstack(A,B)}\) achieves the goal \(\mathit {Ontable(A)}\) .
(10)
Algorithm 1 is called by the user, where the user chooses the previous planner argument \(\mathit {Arg}_a\) to question, which returns a set of CQs, \(\lbrace CQ_3\rbrace\) . After that, Algorithm 2 is called by the user to select a CQ or terminate the dialogue. The user decides to terminate the dialogue and, thus, finds the explanation acceptable.
Figure 3 presents the argumentation graph of the above dialogue, where the grounded extension \(\mathit {Gr} = \lbrace \mathit {Arg_g}, \mathit {Arg_S}, \mathit {Arg_a}, \mathit {Arg_{\pi }} \rbrace\). The outcome of the dialogue is “Explanation is acceptable,” since the user has no further questions to ask and the plan \(\pi\) has not been proven to be invalid or not suitable. However, if at any point the planner were unable to construct a suitable argument, then the explanation would be considered unacceptable, which would imply that the plan is invalid or not suitable. Furthermore, if the user has exhaustively asked all CQs and the planner has successfully answered them, then the explanation is considered acceptable and the plan valid and suitable.
Fig. 3. Argumentation graph of the example dialogue.
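To make the state-transition reasoning in the example concrete (in particular, Premise 1 of the state argument in step (7)), the following is a minimal Python sketch that applies \(\gamma (S, a) = (S \setminus post(a)^-) \cup post(a)^+\) to the plan of Example 4.1 and checks that all goals hold in the final state. The STRIPS-style precondition and postcondition sets below are hand-written for this illustration (those of the first action match the argument above; the rest are inferred analogously) rather than extracted from a planning model.

```python
# Illustrative sketch of the state-transition function gamma over sets of ground atoms.

def gamma(state, pre, post_neg, post_pos):
    """Return the successor state, or None if the precondition does not hold in state."""
    if not pre <= state:
        return None
    return (state - post_neg) | post_pos

S1 = frozenset({"Ontable(C)", "On(B,C)", "On(A,B)", "Clear(A)"})

# Unstack(A,B): pre {Clear(A), On(A,B)}, del {On(A,B)}, add {Ontable(A), Clear(B)}
S2 = gamma(S1, {"Clear(A)", "On(A,B)"}, {"On(A,B)"}, {"Ontable(A)", "Clear(B)"})
# Unstack(B,C): pre {Clear(B), On(B,C)}, del {On(B,C)}, add {Ontable(B), Clear(C)}
S3 = gamma(S2, {"Clear(B)", "On(B,C)"}, {"On(B,C)"}, {"Ontable(B)", "Clear(C)"})
# Stack(C,A): pre {Clear(C), Clear(A), Ontable(C)}, del {Ontable(C), Clear(A)}, add {On(C,A)}
S4 = gamma(S3, {"Clear(C)", "Clear(A)", "Ontable(C)"}, {"Ontable(C)", "Clear(A)"}, {"On(C,A)"})
# (Each successor call assumes the preceding precondition held, so no None-guard is shown.)

goals = {"On(C,A)", "Ontable(A)", "Ontable(B)", "Clear(C)", "Clear(B)"}
print(S4 is not None and goals <= S4)  # True: all goals hold in the goal state
```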

7 Conclusions and Future Work

We have presented a novel argument scheme-based approach for generating interactive explanations in the domain of AI planning. Although the main focus of our study has been explainable AI planning, our proposed approach is likely to be applicable to many other domains, in particular the legal domain, for explaining legal decision making [3].
The main contributions of our work are:
(1)
to present novel argument schemes to create the arguments that directly provide an explanation of the key elements of a plan;
(2)
to use the concept of critical questions to allow interaction between the arguments;
(3)
to study the properties that evaluate the plan explanation arguments using abstract argumentation theory;
(4)
to study a novel dialogue system using the argument schemes and critical questions to provide dialectical interaction between the planner and user; and
(5)
to study the properties that evaluate the dialogue system, such as soundness, completeness, and termination.
Note that our approach to generating explanation arguments is planner independent and can therefore work on a wide range of input plans in classical planning; in the future, we intend to extend it to richer planning formalisms such as partial-order and temporal planning. We aim to develop algorithms based on the argument schemes to automatically extract the arguments from the input planning model. We also plan to carry out an empirical evaluation of our proposed work on real-world examples with human participants. Furthermore, we aim to explore dialogue strategies [20] for finding good moves in a dialogue, for instance, to help the user find relevant information about the plan with a minimum number of moves, or to reduce the number of moves required to reach agreement. Another avenue of future research is to model user trust [24, 28] during a dialogue between the user and the planner and to determine how the user's trust ratings of the planner's explanation arguments should affect the outcome of the dialogue.

Acknowledgments

We extend our thanks to the anonymous reviewers for their insightful comments toward improving this article. We are also thankful to Trevor Bench-Capon for valuable feedback on an earlier version of this article.

Footnotes

1
We define a valid and suitable plan in Section 4.
2
An AI planner will always come up with a valid plan for the problem it is given. However, a user might not understand how the elements of the plan are computed and may question some of the choices in plans generated by the planner.
3
We assume these are ground predicates.
4
This does not apply to the initial state \(\bigtriangleup _I\) , and we assume that the user knows the initial state is true by default.
5
The user can select any one of the previous moves put forward by the planner.
6
We assume that the planner has already presented the plan \(\pi\) to the user and do not consider this a move.
7
Since a previously asked user CQ for a plan component is not allowed, we assume that the planner will not repeat the same argument.
8
At this point, the user can choose any previous planner argument including the plan summary argument \(\mathit {Arg}_{\pi }\) to question further.

References

[1]
Leila Amgoud, Claudette Cayrol, Marie-Christine Lagasquie-Schiex, and P. Livet. 2008. On bipolarity in argumentation frameworks. Int. J. Intell. Syst. 23, 10 (2008), 1062–1093.
[2]
Katie Atkinson and Trevor J. M. Bench-Capon. 2007. Practical reasoning as presumptive argumentation using action based alternating transition systems. Artif. Intell. 171, 10-15 (2007), 855–874.
[3]
Katie Atkinson, Trevor J. M. Bench-Capon, and Danushka Bollegala. 2020. Explanation in AI and law: Past, present and future. Artif. Intell. 289 (2020), 103387.
[4]
Alexandros Belesiotis, Michael Rovatsos, and Iyad Rahwan. 2010. Agreeing on plans through iterated disputes. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS’10). IFAAMAS, 765–772.
[5]
Trevor J. M. Bench-Capon. 2003. Persuasion in practical argument using value-based argumentation frameworks. J. Log. Comput. 13, 3 (2003), 429–448.
[6]
Floris Bex and Douglas Walton. 2016. Combining explanation and argumentation in dialogue. Argu. Comput. 7, 1 (2016), 55–68.
[7]
Martin Caminada and Mikolaj Podlaszewski. 2012. Grounded semantics as persuasion dialogue. In Proceedings of the Conference on Computational Models of Argument (COMMA’12)(Frontiers in Artificial Intelligence and Applications, Vol. 245). IOS Press, 478–485.
[8]
Martin W. A. Caminada, Roman Kutlák, Nir Oren, and Wamberto Weber Vasconcelos. 2014. Scrutable plan enactment via argumentation and natural language generation. In Proceedings of the International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS’14). IFAAMAS/ACM, 1625–1626.
[9]
Louise A. Dennis and Nir Oren. 2022. Explaining BDI agent behaviour through dialogue. Auton. Agents Multi Agent Syst. 36, 1 (2022), 29.
[10]
P. M. Dung. 1995. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artif. Intell. 77 (1995), 321–357.
[11]
Phan Minh Dung, Robert A. Kowalski, and Francesca Toni. 2009. Assumption-Based Argumentation. Springer US. 199–218.
[12]
Xiuyi Fan. 2018. On generating explainable plans with assumption-based argumentation. In Proceedings of the 21st International Conference on Principles and Practice of Multi-Agent Systems (PRIMA’18)(Lecture Notes in Computer Science, Vol. 11224). Springer, 344–361.
[13]
Xiuyi Fan and Francesca Toni. 2015. On computing explanations in argumentation. In Proceedings of the 29th AAAI Conference on Artificial Intelligence. AAAI Press, 1496–1502.
[14]
Sergio Pajares Ferrando and Eva Onaindia. 2012. Defeasible argumentation for multi-agent planning in ambient intelligence applications. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems. 509–516.
[15]
Maria Fox, Derek Long, and Daniele Magazzeni. 2017. Explainable planning. In Proceedings of the IJCAI Workshop on Explainable AI. Retrieved from https://arxiv.org/abs/1709.10256
[16]
Malik Ghallab, Dana S. Nau, and Paolo Traverso. 2004. Automated Planning—Theory and Practice. Elsevier.
[17]
Patrik Haslum, Nir Lipovetzky, Daniele Magazzeni, and Christian Muise. 2019. An Introduction to the Planning Domain Definition Language (2nd ed.). Morgan and Claypool Publishers, 1–169.
[18]
Prashan Madumal, Tim Miller, Liz Sonenberg, and Frank Vetere. 2019. A grounded interaction protocol for explainable artificial intelligence. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS’19). International Foundation for Autonomous Agents and Multiagent Systems, 1033–1041.
[19]
Peter McBurney and Simon Parsons. 2002. Games that agents play: A formal framework for dialogues between autonomous agents. J. Log. Lang. Inf. 11, 3 (2002), 315–334.
[20]
Rolando Medellin-Gasque, Katie Atkinson, Trevor J. M. Bench-Capon, and Peter McBurney. 2013. Strategies for question selection in argumentative dialogues about plans. Argu. Comput. 4, 2 (2013), 151–179.
[21]
Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artif. Intell. 267 (2019), 1–38.
[22]
Sanjay Modgil. 2007. An abstract theory of argumentation that accommodates defeasible reasoning about preferences. In Proceedings of the 9th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU’07)(Lecture Notes in Computer Science, Vol. 4724), Khaled Mellouli (Ed.). Springer, 648–659.
[23]
Sanjay Modgil and Henry Prakken. 2013. A general account of argumentation with preferences. Artif. Intell. 195 (2013), 361–397.
[24]
Gideon Ogunniye, Alice Toniolo, and Nir Oren. 2017. A dynamic model of trust in dialogues. In Proceedings of the 4th International Workshop on Theory and Applications of Formal Argumentation (TAFA’17), Revised Selected Papers(Lecture Notes in Computer Science, Vol. 10757). Springer, 211–226.
[25]
Nir Oren. 2013. Argument schemes for normative practical reasoning. In Proceedings of the 2nd International Workshop on Theory and Applications of Formal Argumentation (TAFA’13), Revised Selected papers(Lecture Notes in Computer Science, Vol. 8306). Springer, 63–78.
[26]
Nir Oren, Kees van Deemter, and Wamberto Weber Vasconcelos. 2020. Argument-based plan explanation. In Knowledge Engineering Tools and Techniques for AI Planning. Springer, 173–188.
[27]
Pere Pardo, Sergio Pajares, Eva Onaindia, Lluís Godo, and Pilar Dellunde. 2011. Multiagent argumentation for cooperative planning in DeLP-POP. In Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 3. 971–978.
[28]
Simon Parsons, Katie Atkinson, Zimi Li, Peter McBurney, Elizabeth Sklar, Munindar P. Singh, Karen Zita Haigh, Karl N. Levitt, and Jeff Rowe. 2014. Argument schemes for reasoning about trust. Argument Comput. 5, 2-3 (2014), 160–190.
[29]
Raymond Reiter. 1991. The frame problem in the situation calculus: A simple solution (sometimes) and a completeness result for goal regression. In Artificial and Mathematical Theory of Computation, Papers in Honor of John McCarthy on the Occasion of His Sixty-fourth Birthday, Vladimir Lifschitz (Ed.). Academic Press/Elsevier, 359–380.
[30]
Zohreh Shams, Marina De Vos, Nir Oren, and Julian A. Padget. 2020. Argumentation-based reasoning about plans, maintenance goals, and norms. ACM Trans. Auton. Adapt. Syst. 14, 3 (2020), 9:1–9:39.
[31]
Guillermo Ricardo Simari and Iyad Rahwan (Eds.). 2009. Argumentation in Artificial Intelligence. Springer.
[32]
D. E. Smith. 2012. Planning as an iterative process. Proc. Natl. Conf. Artif. Intell. 3 (Jan. 2012), 2180–2185.
[33]
Yuqing Tang and Simon Parsons. 2005. Argumentation-based dialogues for deliberation. In Proceedings of the 4th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS’05). ACM, 552–559.
[34]
Alice Toniolo, Timothy J. Norman, and Katia P. Sycara. 2011. Argumentation schemes for collaborative planning. In Proceedings of the 14th International Conference on Principles and Practice of Multi-Agent Systems PRIMA(Lecture Notes in Computer Science, Vol. 7047). Springer, 323–335.
[35]
Alejandro Torreño, Eva Onaindia, Antonín Komenda, and Michal Štolba. 2018. Cooperative multi-agent planning. Comput. Surveys 50, 6 (2018), 1–32.
[36]
D. N. Walton. 1996. Argumentation Schemes for Presumptive Reasoning. L. Erlbaum Associates.
[37]
D. N. Walton and E. C. W. Krabbe. 1995. Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning. State University of New York Press.
[38]
Michael J. Wooldridge and Wiebe van der Hoek. 2005. On obligations and normative ability: Towards a logical analysis of the social contract. J. Appl. Log. 3, 3-4 (2005), 396–420.
