Artificial Intelligence and Law 5: 243–248, 1997.
© 1997 Kluwer Academic Publishers. Printed in the Netherlands.
Introduction
Papers from the Jurix ’95 Conference
JAAP HAGE
Department of Metajuridica, Maastricht University, P.O. Box 616, 6200 MD Maastricht,
The Netherlands
E-mail: jaap.hage@metajur.unimaas.nl
Since 1988, the Dutch Foundation for Legal Knowledge Systems, JURIX, has
organised an annual conference at alternating Dutch universities. In 1995, the
conference was held at Maastricht University. The proceedings of this conference
(Hage et al. 1995) contain 15 papers, including (summaries of) the three invited
speeches. Four selected papers were elaborated for this special issue of Artificial
Intelligence and Law.
1. Argument in Artificial Intelligence and Law
In his paper based on his invited speech at the conference, Trevor Bench-Capon
reviews the topic of argument as addressed in Artificial Intelligence and Law.
Bench-Capon contrasts arguments with logical proofs. He considers formal logic as
an abstraction which is designed to deal solely with the relations between the truth
values of propositions. As a consequence, logic can neither deal with the rhetorical
features of legal arguments, nor can it handle procedural aspects. And, finally,
logical proofs lack the context which is so often essential for arguments.
Bench-Capon discusses two traditions in which arguments play an important
role. The first tradition, of which the TAXMAN, the HYPO, the CABARET, and
the BankXX systems are well-known examples, bases arguments on cases.
The second tradition, which is discussed somewhat more extensively, is the
rule-based one. Here Bench-Capon distinguishes three strands. The first strand
uses Toulmin-schemes for the presentation of reasoning and explanation of its
results. In the AI and Law world, this line has been taken by Dick and by Zeleznikow
and Stranieri.
The second strand is the use of arguments to deal with normative conflict and
the non-monotonicity of legal reasoning. The idea is that it is possible to generate
arguments on the basis of some underlying logic, which plead for incompatible
conclusions. Which of these conclusions can justifiably be drawn depends on the
comparison of these arguments. Work along this line stems from Prakken, Sartor,
Hage and Verheij, and Gordon.
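The core idea of this strand can be illustrated with a minimal sketch, which is not any of the cited systems: two rules plead for incompatible conclusions, and a priority ordering over the rules (here simply a hypothetical integer) determines which argument prevails.

```python
# Minimal sketch of defeasible argument comparison; the rules, facts
# and priority scheme are illustrative, not drawn from the cited work.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    condition: str
    conclusion: str
    priority: int  # higher priority wins in case of conflict

rules = [
    Rule("statute", "vehicle_in_park", "prohibited", priority=1),
    Rule("exception", "vehicle_is_ambulance", "not_prohibited", priority=2),
]

def justified_conclusion(facts):
    """Return the conclusion of the highest-priority applicable rule."""
    applicable = [r for r in rules if r.condition in facts]
    if not applicable:
        return None
    return max(applicable, key=lambda r: r.priority).conclusion

print(justified_conclusion({"vehicle_in_park"}))
print(justified_conclusion({"vehicle_in_park", "vehicle_is_ambulance"}))
```

With only the first fact the statute's conclusion is justified; once the exception also applies, the higher-priority rule defeats it, which is the nonmonotonic behaviour the strand studies.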
The third strand, finally, deals with arguments as processes. This work usually
takes the form of developing dialogue games, where two parties can produce
interacting arguments. Examples of this tradition are the work of Bench-Capon and
colleagues, Gordon’s Pleadings Game, and the work of Lodder and colleagues.
In his paper, Bench-Capon offers an illuminating discussion of these traditions and their strands. It is especially interesting how he draws attention to underlying similarities between seemingly very diverse work. Moreover, his paper also evokes intriguing questions. For instance, the parallel which Bench-Capon discovers between the case-based and the rule-based tradition, and their subtraditions, is based on a notion of argument which is characterised as being richer than the proofs of formal logic. This raises the question, which is also discussed by Bench-Capon, of the precise relation between logic and arguments.
Bench-Capon answers this question by stripping down formal logic to the minimal relationship between the truth values of the premises and conclusion of an argument
(in the narrow sense of formal logic). Arguments are richer in that they take other
aspects of reasoning into account. This answer has the disadvantage of presuming a very narrow characterisation of formal logic. In particular, it excludes nonmonotonic logics and logics which deal with rules. Nonmonotonic ‘logics’, or at least some
of them, do not define valid arguments in terms of the truth values of their premises
and conclusion, and are therefore not proper logics in the sense of Bench-Capon.
And rules are traditionally assumed not to have truth values, and consequently they
are not amenable to formal logic in the sense of Bench-Capon. Since reasoning in
the AI and Law-tradition is largely involved with rules which have no truth values,
and since reasoning with these rules is defeasible, it seems that logic in the sense
of Bench-Capon has no role at all in connection with the law. This conclusion does
not seem attractive to me; apparently Bench-Capon has stripped down formal logic
too far.
However, if we try to make up for the deficiencies of stripped down logic by
allowing formal logic to deal with more aspects of reasoning, it becomes less
clear how arguments are to be distinguished from logical proofs. Theories of good
argumentation, which aim to answer the question of which arguments are good ones and which are not, appear to deal with the same issue as logic in a broader sense. If
formal logic is allowed to deal with more aspects of reasoning than the traditional
semantic notion of logical validity (an argument is valid if its conclusion must be
true if the premises are all true), it cannot easily be distinguished from such theories
of good argumentation. Maybe this explains Bench-Capon’s narrow definition of
formal logic, because this definition is necessary to delimit, by contrast, his central
topic of argument. But maybe the artificiality of this narrow definition shows that, where reasoning with rules is concerned, formal logic cannot, or should not, be distinguished from general theories of good arguments.
On this conjecture, the similarities which Bench-Capon notices between different research traditions in the AI and Law community do not disappear, but they
are to be explained in a different way. I think that the research on AI and Law
has provided a fresh approach to the general theory of legal justification, where
the traditional chasm between (semi-)formal approaches (Tammelo, Weinberger,
Alexy), and informal approaches (Viehweg, Perelman) is left behind. The notion
of an argument encompasses the work of both the (semi-)formal and the informal
tradition and makes it possible to address all issues which are relevant for legal
decision making. That is why this notion plays a central role in the work on AI and
Law. Bench-Capon deserves the credit for having pointed out this central role.
2. Representing Law in Partial Information Structures
Where the representation of legal knowledge (or – maybe better – of the law)
is at stake, there is a trade-off between what may be called easy and powerful
representation. Easy representation takes as its starting point the way in which
humans intuitively see legal rules. This way of seeing rules involves, I think, an
idea of the type of cases to which the rule applies, and the legal consequences
which the rule attaches to cases of this kind. Easy representation is easy, because
the translation from the mental to the formal representation of the law is not a big
step. Easy representation has the disadvantage that it will often abstract from the
way in which the law is represented in its official sources (read: in legislation).
Representation of the law which attempts to follow the structure of the source
(often called isomorphic representation) is more powerful in that interconnections
within and between source units are maintained in the representation. It is more
difficult, because humans cannot rely on their intuitions about the contents of rules when creating the representation. This is particularly clear if a legal source makes use
of complex constructions to denote the cases to which it is applicable.
In his paper on representing law in partial information structures, Peek mentions
the case of so-called deeming provisions, in which the scope of application of a
rule is modified by reference to its original scope. For instance, Article 1, Section 2
of the Dutch Opium Act says that salts of particular substances are considered to
be those substances (of which they are salts). An isomorphic representation of the
Dutch Opium Law asks for a representation of this article section which mirrors
this construction of a deeming provision.
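The effect of such a provision can be sketched in a few lines; the substances and the string-based encoding below are purely illustrative, not Peek's formalism or the actual text of the Act.

```python
# Hypothetical sketch of a deeming provision: salts of a listed substance
# are treated as that substance, so the prohibition's scope is extended
# without rewriting the underlying rule itself.
listed_substances = {"heroin", "cocaine"}

def deemed_as(substance):
    """Map 'salt of X' onto X, per the deeming provision; otherwise identity."""
    prefix = "salt of "
    return substance[len(prefix):] if substance.startswith(prefix) else substance

def is_controlled(substance):
    """The original rule, applied after the deeming provision has done its work."""
    return deemed_as(substance) in listed_substances

print(is_controlled("heroin"))          # True
print(is_controlled("salt of heroin"))  # True, but only via the deeming provision
print(is_controlled("sugar"))           # False
```

The point an isomorphic representation must preserve is that the second call succeeds through a separate provision that modifies the first rule's scope, rather than through an enlarged substance list.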
In his paper, Peek offers us a formalism for the representation of (amongst
others) the law, which is rather powerful. This formalism, the so-called feature
structure formalism, makes use of tools that were developed in the field of computational linguistics. Feature structures are a kind of partial description of objects, situations or events, or – which comes down to the same thing – full descriptions of classes of them. They can be nested, and can be subjected to a kind of set-theoretical operation.
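The flavour of such operations can be conveyed with a small sketch of feature structures as nested dictionaries combined by unification, the standard operation from computational linguistics; this is an illustration, not Peek's actual formalism.

```python
# Illustrative sketch: feature structures as nested dicts, combined by
# unification. Unifying two partial descriptions yields a description
# satisfied by exactly the objects satisfying both, or fails on a clash.
def unify(fs1, fs2):
    """Merge two partial descriptions; return None if they are incompatible."""
    result = dict(fs1)
    for key, val in fs2.items():
        if key not in result:
            result[key] = val
        elif isinstance(result[key], dict) and isinstance(val, dict):
            sub = unify(result[key], val)  # recurse into nested structures
            if sub is None:
                return None
            result[key] = sub
        elif result[key] != val:
            return None  # clash: no object fits both descriptions
    return result

rule = {"type": "norm", "subject": {"role": "seller"}}
case = {"subject": {"role": "seller", "name": "Jansen"}}
print(unify(rule, case))   # merged description, more specific than either input
print(unify({"subject": {"role": "seller"}},
            {"subject": {"role": "buyer"}}))  # None: incompatible
```

Nesting and the subsumption-like behaviour of `unify` are what make the formalism apt for mirroring the internal structure of statutory provisions.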
Peek uses these feature structures in the form of rule frames as they were
developed by Van Kralingen (1995). These frames are represented by means of
feature structures, which is relatively easy to do because feature structures are
very much like frames. Peek then goes on to show that rather complex legislative
constructions such as deeming provisions can be represented isomorphically by
means of his formalism.
The main question with which I was left after reading Peek’s paper is how
important isomorphic representation really is, and how much trouble we should
undertake to obtain this kind of representation. Peek’s formalism allows a very
explicit rendering of the structure of statutory provisions and can consequently
be used for powerful forms of legal reasoning. Its power is also its weakness, however, because the formalism invites a legal knowledge engineer to make explicit much that might have remained implicit for most practical purposes. The task
of legal knowledge representation becomes more complex than some builders of
legal knowledge systems would find palatable. Peek’s paper has the advantage that
it makes this dilemma very clear. In choosing a formalism for legal knowledge
representation one is advised to exhibit only as much structure of the represented
knowledge as is necessary for the applications of the representation. Peek has
provided us with a formalism that might do the job when much structure is desirable.
3. The Legitimacy of Legal Decision Systems
An important aspect of legal knowledge systems is that they not only model
the law, but also determine its contents. Oskamp and Tragter call this phenomenon
the ‘regulating effects’ of legal knowledge systems. In their paper
Automated Legal Decision Systems in Practice: The Mirror of Reality, they discuss
what happens when automated decision systems are used to enforce the law and
what implications this should have for the development and the control of these
systems.
Oskamp and Tragter distinguish four kinds of systems. First, there are automated calculation systems, which calculate, given a certain input, the amount of, for example, alimony or smart money. Second, there are legal decision support systems which do not aim to standardise the application of the law, but nevertheless have regulating effects through their built-in interpretations of rules or the way of operating they prescribe. The third kind consists of systems that aim to enforce pre-existing legislation. Such systems often contain interpretations of the policies adopted by the administration in exercising its discretionary powers. And finally, there are systems which are co-developed with legislation, where the legislation is built on the supposition that it will be enforced automatically.
It is in particular this final category on which Oskamp and Tragter focus. By
means of a case study of the Dutch Study Grant Act, they show that the need
to automate the enforcement of the Act severely limits the political freedom to
determine the Act’s contents. Moreover, it holds generally for automated legal
decision systems that they contain an interpretation of the law which goes beyond
what is politically authorised. It is not always clear what this additional content
amounts to, let alone that it is under the control of politically responsible organs. In their
conclusion after the case studies, Oskamp and Tragter write ‘that the developers
of the present generation of automated legal decision systems have not sufficiently
taken into account the general points of attention concerning the legitimacy of these
systems and their acceptability’.
The question which follows from this conclusion is how the situation can be
improved. Oskamp and Tragter devote the rest of their paper (Section 6f.) to an
answer to this question. In this connection they discuss the evaluation of automated
legal decision systems (by which standards? when? and by whom?), control
over the system, both technical and from the point of view of legitimacy, and
the important role of system documentation, both technical and concerning the
system’s contents.
An important issue regarding the evaluation of legal decision systems is their degree of transparency. Systems built with conventional technology can effectively hide their legal contents, because these contents are compiled away. This makes them less easy to evaluate and control. An interesting question would be to what extent the explicit representation of the law contributes to the transparency of legal decision systems.
Oskamp and Tragter focus strongly on the examples from the Dutch law which they
discuss, and which are (to my knowledge) not knowledge systems in the sense of systems which explicitly represent knowledge. As a consequence, this question
is not dealt with by the authors, but it might be a topic for further research.
4. Legal Agents
Legal relations evolve in time, not only because the law changes in time, but also
because events and actions cause changes in the legal situation. This aspect of the
law has received relatively little attention in the AI and Law community, with the
notable exception of the work of Gardner. The paper of Heesen, Homburg and
Offereins about the Laca-architecture provides another exception.
In their An Agent View on Law, Heesen et al. describe legal agents which can
operate in the field of Dutch administrative law. To provide the agents with
this capability, the agents are endowed, not only with knowledge on how the law
evolves as a consequence of certain events, but also with knowledge about how to
undertake particular actions and when to undertake them. The actions which the
agents can perform are speech acts, and the authors list a large number of them,
together with the (perlocutionary) effects that obtain if the acts are performed.
The listed speech acts include acknowledging a message, providing certain information, accepting and rejecting a request, and consulting some other agent.
The effects of the speech acts are modelled by means of state transition diagrams, and the resulting state descriptions in turn trigger the control knowledge which governs the behaviour of the agents.
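This modelling style can be sketched as a simple transition table; the state names and speech acts below are hypothetical, not the authors' actual inventory.

```python
# Hypothetical sketch of speech acts as state transitions: each
# (state, speech act) pair maps to the resulting state of the exchange.
transitions = {
    ("awaiting_request", "request"): "request_pending",
    ("request_pending", "acknowledge"): "request_pending",
    ("request_pending", "accept"): "granted",
    ("request_pending", "reject"): "denied",
}

def perform(state, speech_act):
    """Return the new state after a speech act, or raise if it is not permitted."""
    try:
        return transitions[(state, speech_act)]
    except KeyError:
        raise ValueError(f"{speech_act!r} not permitted in state {state!r}")

state = "awaiting_request"
for act in ("request", "acknowledge", "accept"):
    state = perform(state, act)
print(state)  # granted
```

The resulting state is what the agents' control knowledge would consult to decide which act to undertake next, and the `KeyError` branch captures that not every speech act is available in every state.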
The paper strongly focuses on the domain of administrative law. However, the
Laca-architecture for legal agents provides a starting point for the development of
systems which can both model and contribute to the development of the law by
means of acts in the law, in particular legislation. In combination with traditional
legal reasoning systems, such systems could deal with almost every aspect of legal
reasoning. One system models the development of legal rules and relations in time,
while the other system models the legal consequences of cases, given the law as
it is at some point in time. In this way, Law and AI research extends its scope
from the statics to the dynamics of law.
5. Conclusion
As will have become clear from this introduction, the four papers deal with various aspects of the Law and AI enterprise. Clearly, I have given only a global indication of the contents of the papers. The best remedy for this shortcoming is to read the papers yourself, and that is precisely what I recommend the reader to do.
References
Hage, J.C., Bench-Capon, T.J.M., Cohen, M.J. & Van den Herik, H.J. (eds.) (1995). Legal Knowledge
Based Systems. Telecommunication and AI & Law. Koninklijke Vermande: Lelystad.
Van Kralingen, R.W. (1995). Frame-Based Conceptual Models of Statute Law. Thesis, Department
of Law and Computer Science, University of Leiden, Leiden.