1 Introduction
E-government emerged during the ’90s and early 2000s as the use of ICT to provide public services to citizens and enterprises. Methods and approaches were developed, as in [26], to define novel digital interactions between citizens and their government (C2G), between governments and other government agencies (G2G), between government and citizens (G2C), between government and employees (G2E), and between government and business/commerce (G2B). The flip side of this digitalization shift has been the proliferation of many different applications and complex information systems. These systems nowadays form a repository of legacy systems that, per se, do not integrate novel techniques (such as the ones proposed by Artificial Intelligence - AI) to achieve further improvements. They lack extensibility and plug-and-play capabilities, making them challenging to upgrade due to their complexity and high costs. On the other hand, their redesign would be too long and costly, and previous investments also need to be preserved. In the Italian justice system, despite significant ICT investments over the past decade, more than 55 different information systems still exist, with an average age of over 20 years. In the case of the information system managing civil judicial processes, the underlying state transition graph comprises more than 110,000 edges.
Nowadays, Italian court offices can rely on information systems to manage the processes driving trials, but they face challenges in harnessing the full potential of the generated data for analysing and enhancing trial efficiency. This calls for exploiting and improving data and process methods and tools, to make these data actionable and to quantify pertinent indicators for better resource usage.
In this commentary article, we discuss some possible research directions for applying AI-driven data and process science [19, 34] within the judicial domain. Drawing from the authors’ experience and related work on the domain in Italy, we focus on the analysis and enhancement of information systems supporting Italian judicial processes. This article introduces research challenges, possible research directions, and corresponding methods for improvement actions to support the work of specific categories of legal users. These users encompass judges, their aides, the administration, and Process Officers. The latter, recently introduced in the Italian judicial system, support judges and the administrative offices in addressing the current backlog, which affects trial duration.
2 Challenges
Digitalization of judicial procedures is an ongoing process worldwide, and many consolidated systems are emerging to support the management of both criminal and civil cases. In-depth analyses of the judicial systems are periodically performed in Europe by the Council of Europe European Commission for the Efficiency of Justice (CEPEJ), also focusing on ICT technologies in the field and comparing the progress of different member states in terms of quality and efficiency.
The initial focus of digitalization has been on representing data and processes, starting from their execution traces, in order to design information systems. While there is continuous progress in digitalization, some issues affecting data and process management are still evident. Regarding data, the analyses currently supported by the systems are mainly oriented toward representing cases as a whole and examining case-level statistics across different Courts or regions of the country. The large amount of data available nowadays enables a more fine-grained analysis for proposing improvements in the efficiency of handling cases.
Regarding processes, precise monitoring of their duration is still a challenge due to the complexity of trial execution. Therefore, a proxy indicator (Disposition Time) has been adopted in the official reports [18], which considers the current pace of work based on the new and closed cases in a yearly period. This indicator has revealed marked differences among countries and across the different levels of judgment. Considering Italy, disposition times are still largely above the European average, and the report attributes the main reason for delays to the high number of pending cases.
Starting from these considerations, in this manuscript we focus on how AI techniques can support efficiency, in particular to reduce the duration of cases, starting from the requirements that emerged in the Ministry of Justice PON Projects in which the authors are involved, and from the analysis of data and usage of the Civil Telematic Process (PCT), which has been in place for more than ten years in Courts for civil cases [24, 25].
In current systems, the typical flow of work in Courts is represented, and all possible variants of work are captured in a very detailed model of the underlying process execution. With this type of information, the rapid collection of data about incoming and terminated processes is facilitated, and several indicators, including those considered in CEPEJ reports, can be easily extracted and analyzed.
Reducing the trial duration while still maintaining the quality of the process implies addressing the following requirements that emerged during the analysis of factors affecting the duration of cases:
— Using resources efficiently: an important cause of delays is significant understaffing and limited space, so the efficient use of resources is crucial. Specifically, cases should be evenly distributed among judges, also considering their complexity, and judges could be provided with tools to facilitate their work. In addition, available hearing slots in courtrooms should not be left empty.
— Dealing with discrepancies between the laws dictating rules and constraints for trial execution and the processes as represented in the system: since laws evolve continuously and the evolutionary maintenance of the system necessarily lags behind their issuing, judges and chancellors must perform workarounds on the predefined processes to comply with legal changes. Inspecting and executing such cases requires an in-depth analysis of the impact of the workarounds on the whole process.
— Understanding the impact of different legal proceedings on case execution, such as shortened rites or different ways of performing the process steps depending on the matter at hand. Shortened rites and case matters may have implications for the global process execution, and exploring different cases, studying similarities among processes, could lead to more efficient process management.
— Identifying factors that have an important impact on process duration. These factors may have either external causes, which are not under the control of the Court, or internal ones, due to the organization of work. The latter could be examined for further improvement, to prevent disruptions of work or to send alerts to the parties involved in, or responsible for, the execution of the process. In addition, studying the similarity factors of cases can also support a more efficient organization of work.
To improve the efficiency of civil proceedings, it is essential to address the requirements mentioned above, speeding up the process while ensuring a fair trial for all parties involved. Given the large amount of data available on past cases, the emphasis has recently shifted toward analyzing data and process executions in more detail. Here AI, and in particular methods from knowledge representation and machine learning, can play a major role, provided that they can be plugged on top of existing legacy information systems. The following challenges emerged in addressing the aforementioned requirements.
◼ Information extraction about processes, including the analysis of legal documents such as court decisions or other acts of the courts, represents a challenging task for extracting and conceptualizing information from these documents [4, 31]. A good process analysis requires good data about the processes under analysis, which inevitably involves data quality and data integration. As often occurs, useful data are scattered among the different systems involved in the process. Moreover, especially for judicial processes, data are contained in documents, seldom structured, and sometimes not even digitized. In addition to the techniques that can help acquire and integrate the required data (e.g., ontology-based data management, Natural Language Processing - NLP), it is fundamental to have methods and tools that can guide data identification, preparation, and collection. As discussed above, detailed models of the flow of work in Courts, covering all its variants, have recently enabled the rapid collection of data about incoming and terminated processes and the extraction of several indicators, as well as real-time dashboards that analyze ongoing cases on a daily basis instead of over long periods, as advocated by CEPEJ initiatives [17].
Can we derive process logs, suitable for process mining, without severe technical interventions on the legacy information systems?
Can we derive events concerning the process enactment at the “surface” of the system, through the observation of data and events, without deeply understanding the internals of the systems? How can relevant information be extracted?
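As a minimal illustration of what deriving a log at the “surface” of a legacy system could look like, the following sketch (table layout and column names are hypothetical) reinterprets a registry of dossier state transitions as a case/activity/timestamp event log, without any intervention on the system internals:

```python
import pandas as pd

# Hypothetical extract from a legacy registry: one row per state
# transition of a dossier (case file), as recorded by the system.
transitions = pd.DataFrame({
    "dossier_id": ["C1", "C1", "C1", "C2", "C2"],
    "from_state": ["FILED", "ASSIGNED", "HEARING", "FILED", "ASSIGNED"],
    "to_state":   ["ASSIGNED", "HEARING", "DECIDED", "ASSIGNED", "DECIDED"],
    "timestamp":  pd.to_datetime([
        "2023-01-10", "2023-02-01", "2023-06-15",
        "2023-01-12", "2023-03-03",
    ]),
})

# Reinterpret each transition as an event: the activity is the state
# being entered. This yields a case/activity/timestamp log usable by
# standard process mining tools, without touching the legacy system.
event_log = (
    transitions
    .rename(columns={"dossier_id": "case_id", "to_state": "activity"})
    [["case_id", "activity", "timestamp"]]
    .sort_values(["case_id", "timestamp"])
)
print(event_log)
```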
◼ Process modeling. Process analysis requires process modeling, which is typically performed by means of simple specification languages, including statecharts, dataflows, or more recent ones such as BPMN. Due to the multiplicity of applications in which the data are employed, a more general-purpose specification language, equipped with a powerful logic, is needed. In the judicial domain, properties of data about trials are always strictly connected with temporal constraints. Therefore, the specification language must be able to describe temporal behaviours and should be declarative rather than control-flow oriented: it must be able to express ordering constraints among events as well as specific and detailed temporal constraints.
Can we devise novel approaches to process modeling that better support process analysis about the temporal facet?
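As an illustration of this declarative, temporally aware style, the following sketch checks a Declare-like “response” constraint with a metric deadline over a single trace; the activity names and the 120-day deadline are purely illustrative, not actual legal terms:

```python
from datetime import datetime, timedelta

# One trace: (activity, timestamp) pairs, ordered in time.
trace = [
    ("PETITION_FILED", datetime(2023, 1, 10)),
    ("FIRST_HEARING",  datetime(2023, 4, 20)),
]

def response_within(trace, trigger, target, max_delay):
    """Declare-style 'response' constraint with a metric deadline:
    every occurrence of `trigger` must be followed by `target`
    within `max_delay`. Returns True if the trace complies."""
    for i, (act, ts) in enumerate(trace):
        if act != trigger:
            continue
        if not any(a == target and ts < t <= ts + max_delay
                   for a, t in trace[i + 1:]):
            return False
    return True

# Hypothetical rule: a first hearing must follow the petition
# within 120 days (an invented deadline, for illustration only).
ok = response_within(trace, "PETITION_FILED", "FIRST_HEARING",
                     timedelta(days=120))
print("compliant" if ok else "violation")
```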
◼ Real-time process monitoring. By exploiting the knowledge developed with process analysis, it should be possible to intercept temporal anomalies as soon as they occur (in particular, delays with respect to the average times of the phases, fine-grained by process typology), thus making it possible to verify whether the temporal anomalies have a legitimate reason or whether actions are needed.
Can we adapt traditional event-condition-action paradigms or develop novel reactive techniques tailored to the context of the judicial domain?
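A minimal sketch of the Event-Condition-Action paradigm applied to this setting follows; the rule, baselines, and case data are hypothetical, and a real deployment would derive baselines from historical logs, broken down by case typology:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Callable

@dataclass
class Event:
    case_id: str
    activity: str
    timestamp: datetime

@dataclass
class EcaRule:
    """Event-Condition-Action rule: when an event of type `on`
    arrives, evaluate `condition`; if it holds, fire `action`."""
    on: str
    condition: Callable[[Event, dict], bool]
    action: Callable[[Event], None]

# Hypothetical per-activity duration baselines and case start times
# (in practice learned/loaded from the historical event log).
baseline = {"FIRST_HEARING": timedelta(days=90)}
filing_time = {"C42": datetime(2023, 1, 10)}

rule = EcaRule(
    on="FIRST_HEARING",
    condition=lambda e, ctx: e.timestamp - ctx["filing_time"][e.case_id]
                             > ctx["baseline"][e.activity],
    action=lambda e: print(f"ALERT: case {e.case_id} exceeded the "
                           f"average time to {e.activity}"),
)

def dispatch(event: Event, rules, ctx):
    # Evaluate every rule registered for this event type.
    for r in rules:
        if r.on == event.activity and r.condition(event, ctx):
            r.action(event)

dispatch(Event("C42", "FIRST_HEARING", datetime(2023, 6, 1)),
         [rule], {"baseline": baseline, "filing_time": filing_time})
```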
◼ Compliance checking techniques (e.g., conformance checking [7]) can be useful especially for a performance analysis of the processes. When analyzing trials, two main indicators are of interest: (i) the Disposition Time (DT), to measure the duration of administrative procedures in courthouses and to estimate the impact of delays in terms of procedure costs; (ii) the Clearance Rate (CR), defined as the ratio between adjudged trials and new incoming cases. While such indicators are easy to compute at an aggregate level (for instance, over all cases in a period), the challenge lies in analyzing the details of trials to identify patterns that may be associated with strong deviations of the above indicators. It is also fundamental to tackle the problem of checking the correctness of the procedures followed by the process. This entails verifying that the process follows the steps required by the legislation (which can be described with some formalism) and that the software guarantees that all terminating paths align with the specifications, while non-terminating paths are never produced.
Can we devise novel compliance-checking approaches specifically tailored to the analysis of temporal indicators, but also suitable to identify anomalies and the patterns originating them? Is explainable compliance checking a reality?
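For concreteness, a minimal sketch of the two aggregate indicators, following the CEPEJ definitions; the yearly figures below are invented for illustration:

```python
def clearance_rate(resolved: int, incoming: int) -> float:
    # CR (%) = resolved cases / incoming cases * 100; above 100%
    # the backlog shrinks, below 100% it grows.
    return 100.0 * resolved / incoming

def disposition_time(pending_end: int, resolved: int) -> float:
    # DT (days) = pending cases at period end / resolved cases * 365:
    # the estimated time needed to absorb the current caseload at
    # the current pace of work.
    return 365.0 * pending_end / resolved

# Illustrative (made-up) yearly figures for a single court:
print(f"CR = {clearance_rate(4800, 5000):.1f}%")        # 96.0%
print(f"DT = {disposition_time(3200, 4800):.0f} days")  # 243 days
```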
◼ Analyzing the impact on judges’ workload is being investigated as a strategy for accelerating the delivery of justice while still preserving the right to a fair trial [16]. This analysis is challenging for several reasons: not all the stages of justice procedures are under the judges’ control, and the conditions under which “unreasonable delays” can be identified in the administration of justice are not universally recognized. Some approaches have recently investigated the efficiency of single-member judicial panels compared to three-member ones [2], highlighting the need to identify the proper complexity or gravity of cases when assigning them to single-member panels. Although the authors of [2] focused on the judicial system in Greece, some conditions also hold for the Italian one.
Can we devise novel indicators for the analysis of judges’ efficiency? How can we classify judicial cases to assign them to single-member panels? How can we extract the relevant information to identify “unreasonable delays” and to establish the complexity of judicial cases?
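As a sketch of how such a classification could be approached, the following example trains a standard classifier on hypothetical per-case features; all feature names, values, and labels are illustrative, and real labels would come from past assignments or expert annotation:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical per-case features (all names illustrative): number of
# parties, number of filed documents, claim value band, matter code.
X = [
    [2, 14, 1, 3], [5, 80, 3, 7], [2, 10, 1, 3], [4, 55, 2, 7],
    [3, 20, 1, 5], [6, 95, 3, 2], [2, 12, 1, 3], [5, 70, 3, 2],
]
# Label: 0 = suitable for a single-member panel, 1 = three-member
# panel (in practice derived from past assignments or annotation).
y = [0, 1, 0, 1, 0, 1, 0, 1]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=100,
                             random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```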
◼ Legal judgment writing. One of the activities that significantly impacts the duration of civil and criminal cases is the lengthy process of drafting legal documents, particularly judgments. Addressing these needs requires new tools to improve the quality of court decisions and to speed up procedures and actions. This approach would not only save time but also improve the uniformity and quality of the written output. Although the idea of creating a model that can assist judges in writing legal text is promising, several challenges need to be addressed. First, legal language is known for its highly specialized terminology and complex lexical structures, making it difficult to develop an accurate language model. In addition, the fine-tuning phase requires collecting many textual documents. Then, since legal documents contain sensitive data, they need to be preliminarily anonymized for privacy preservation; however, automatic anonymization solutions are far from trivial [15]. Another critical issue arises from the fact that pre-training datasets may contain human biases and stereotypes that are then propagated to the language model (e.g., gender bias). This raises the question of whether AI is an enabler of discrimination or whether it can be utilized to support decisions with an anti-discrimination function.
Can we devise language models adapted to Italian to analyze legal documents from a process perspective?
Can a writing assistant help the Italian justice system?
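On the anonymization side, a first-pass sketch using off-the-shelf Italian named-entity recognition might look as follows; it assumes the spaCy it_core_news_sm model is installed and, as noted above, statistical NER alone is not sufficient and would need domain-specific validation:

```python
import spacy

# Minimal anonymization pass over an Italian legal text, assuming
# the small Italian pipeline is available:
#   python -m spacy download it_core_news_sm
nlp = spacy.load("it_core_news_sm")

text = "Il giudice Mario Rossi ha emesso la sentenza a Milano."
doc = nlp(text)

# Replace person and location mentions with neutral placeholders.
# Statistical NER will miss some mentions, so in a real setting
# this step needs review and domain-specific fine-tuning.
redacted = text
for ent in reversed(doc.ents):  # right-to-left keeps offsets valid
    if ent.label_ in {"PER", "LOC"}:
        redacted = (redacted[:ent.start_char] + f"[{ent.label_}]"
                    + redacted[ent.end_char:])
print(redacted)
```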
◼ Predictive techniques, and in particular machine learning techniques, are being explored to predict the duration of processes, forecast process variants (or paths), analyze outliers [24], and classify processes according to their impact on the judicial system, assigning weights to case files that can be adopted for balancing the workload among judges. Recent approaches focus on analyzing the duration of single phases by applying deep learning techniques [11].
Which techniques can be adopted in order to make accurate predictions and retain explainability?
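As a sketch of a prediction pipeline that retains a first level of explainability, the following example fits a gradient-boosting regressor on synthetic case features and inspects impurity-based feature importances; all features and data are made up, and more faithful explanations would require dedicated techniques such as SHAP:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 200
# Hypothetical features: matter code, n. of parties, n. of hearings,
# court backlog at filing time (all synthetic, for illustration).
X = np.column_stack([
    rng.integers(1, 8, n),       # matter code
    rng.integers(2, 7, n),       # parties
    rng.integers(1, 10, n),      # hearings
    rng.integers(100, 5000, n),  # backlog
])
# Synthetic duration (days), driven mainly by hearings and backlog.
y = 30 * X[:, 2] + 0.05 * X[:, 3] + rng.normal(0, 20, n)

model = GradientBoostingRegressor(random_state=0).fit(X, y)
# Impurity-based importances give a coarse first explanation of
# which factors drive the predicted durations.
for name, imp in zip(["matter", "parties", "hearings", "backlog"],
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```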
Very few studies on these topics exist in the literature for the judicial system. One example is the application of process mining techniques to Brazilian judicial data, where the results of the analysis have led to the identification of the most common activities and process bottlenecks, raising awareness about the root causes of inefficiencies [32], together with the ability to extract temporal information from legal documents [30]. Indeed, process mining is cited as a potential revolution in how judicial process management is approached [27], as well as a concrete support for improving judicial efficiency [10]. Some initial results on analyzing process characteristics in the Italian judicial system are showing promising research directions toward advanced and predictive monitoring of trials [6, 14, 24].
3 Research Directions
AI techniques can help address the above challenges. In this manuscript, we consider AI in a broad interpretation, including in particular those fields commonly referred to as Knowledge Representation and Reasoning (KRR), Machine Learning (ML), and Natural Language Processing (NLP). We focus here on discussing new research directions arising from the challenges mentioned in the previous section.
◼ Process mining. The adoption of process mining techniques requires information about the activities performed within the process. Data about the process tasks can be gathered from documents or event logs. On the one hand, the problem of extracting key concepts or relevant data from unstructured documents has recently captured the attention of researchers, seeking to distill knowledge from disparate corpora of natural language content. Automatic retrieval of temporal information from texts such as verdicts, and the creation of links to judicial process events, could allow the identification of critical points of the process. As pointed out, this would allow identifying events without intervening on the information systems. While this activity is often performed manually through document annotation, a possible research direction for automatic text processing is the discovery of new procedures for the creation of process models from documents, leveraging preliminary work from [3, 33]. Finally, recent process mining techniques are investigating systematic approaches to analyze the impact of specific events on the duration of a case. An initial attempt is presented in [22], where different causes of delays are identified and ML techniques are proposed to detect them, focusing on batching, resource contention and unavailability, and prioritization. In the Italian judicial domain, advanced monitoring techniques based on process mining are being proposed, investigating the relationship between states and events in trial execution [29]. Further investigation is needed in these directions to explore the causes of delays specific to judicial cases and to adapt the existing approaches.
On the other hand, an important source of information is offered by the logs of the programs used to manage the process files. These programs, built starting from finite state automata that describe the possible state transitions of the processes, generate logs that describe the series of state transitions of the trial documents (dossiers) and that, in some way, also tell the history of the processes. In this context, data may take the form of infinite data streams rather than finite stored data sets, and several aspects of data management need to be reconsidered in their presence. We aim to focus primarily on the problem of defining methods to describe and verify properties on data, capable of alerting the user if some of these properties are violated. Moreover, as discussed in the previous section, other problems related to event logs concern the difficulty of obtaining suitable logs, i.e., complete and correct process data.
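A minimal discovery sketch over such state-transition logs, using the open-source pm4py library as one possible tool (the column names and data are hypothetical, reusing the log format sketched in Section 2):

```python
import pandas as pd
import pm4py

# Hypothetical case/activity/timestamp log derived from the registry.
df = pd.DataFrame({
    "case_id": ["C1", "C1", "C1", "C2", "C2"],
    "activity": ["ASSIGNED", "HEARING", "DECIDED",
                 "ASSIGNED", "DECIDED"],
    "timestamp": pd.to_datetime(["2023-01-10", "2023-02-01",
                                 "2023-06-15", "2023-01-12",
                                 "2023-03-03"]),
})
log = pm4py.format_dataframe(df, case_id="case_id",
                             activity_key="activity",
                             timestamp_key="timestamp")

# Directly-follows graph: a first view of how dossiers actually move
# between states, to be contrasted with the normative process model.
dfg, start, end = pm4py.discover_dfg(log)
print(dfg)
```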
◼ Process modeling and temporal analysis. Process modeling of courthouse administrative procedures deserves attention, both to study the proper representational formalism for including different perspectives and to combine knowledge-driven and data-driven techniques in log analysis, so as to take into account the semantics of logs stemming from existing law regulations and limits. Knowledge representation approaches based on ontologies, temporal logics, and declarative process models, while maintaining reasoning tractability, may offer the opportunity to better infer interesting facets. Clearly, to ensure the seamless adoption of novel approaches, validation studies with justice practitioners are needed.
Once processes are modeled, the current approaches in judicial process analysis mainly address the identification of temporal outliers using pragmatic approaches [9] or process mining techniques [24, 35] focusing on single cases. Possible new research directions target the analysis of processes more in general, considering multiple cases to predict the effect of improvement actions. Such analyses and predictions can successfully exploit machine learning techniques [20]. Some initial work on analyzing process metrics using process mining techniques is described in [32]; however, further investigation in this field is needed to better understand judicial processes, their execution, and their potential for improvement.
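As a sketch of data-driven temporal outlier identification over multiple cases, the following example flags cases whose phase durations deviate from the bulk; the durations are synthetic, whereas in practice they would be computed from the event log:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Per-case phase durations in days (synthetic): [filing->hearing,
# hearing->decision]. Most cases cluster; a few are anomalous.
rng = np.random.default_rng(1)
normal = rng.normal(loc=[60, 120], scale=[10, 25], size=(300, 2))
slow = np.array([[200, 500], [180, 450]])  # anomalous cases
X = np.vstack([normal, slow])

# Isolation forests flag cases whose phase timings deviate from the
# bulk, a starting point for root-cause analysis of delays.
iso = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = iso.predict(X)  # -1 = outlier
print("flagged cases:", np.where(flags == -1)[0])
```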
◼ Process simulation and digital twins. Techniques are needed to assess the impact of possible improvement actions on trials in general and to better use resources. For instance, a possible direction that could be adapted to trial process analysis has been proposed in [28], where process mining has been combined with the simulation of significant events in an approach based on digital twins for evaluating the impact of improvement actions, including process model changes and resource allocation. Process forecasting of the duration of activities and phases using ML techniques is proposed in [1, 11]. While forecasting of case and phase durations in the judicial domain is still in its initial phases, it will be very important in the future for resource allocation and alerting.
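A toy what-if sketch in this spirit, using the SimPy discrete-event simulation library; the arrival rate, hearing lengths, and courtroom counts are invented for illustration, and cases still in progress when the day ends are simply not counted:

```python
import random
import simpy

random.seed(0)
AVG_HEARING_MIN = 30  # average hearing length (illustrative)
durations = []

def case(env, courtroom):
    arrival = env.now
    with courtroom.request() as slot:
        yield slot  # wait for a free hearing slot
        yield env.timeout(random.expovariate(1 / AVG_HEARING_MIN))
    durations.append(env.now - arrival)

def arrivals(env, courtroom, rate_per_hour):
    while True:
        yield env.timeout(random.expovariate(rate_per_hour / 60))
        env.process(case(env, courtroom))

def run(n_courtrooms):
    env = simpy.Environment()
    courtroom = simpy.Resource(env, capacity=n_courtrooms)
    env.process(arrivals(env, courtroom, rate_per_hour=5))
    durations.clear()
    env.run(until=8 * 60)  # one 8-hour day, in minutes
    return sum(durations) / len(durations)

# What-if analysis: effect of adding a courtroom on time-to-hearing.
for k in (2, 3):
    print(f"{k} courtrooms: avg wait + hearing = {run(k):.0f} min")
```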
◼ Writing assistant for legal users. AI models and methods offer the potential to automate administrative processes, including automatic document generation. Through the implementation of Natural Language Generation (NLG) techniques, significant advancements in performance have been achieved, resulting in improved process efficiency and providing valuable support to judges (and their assistants) in drafting legal judgments. These recent advancements in language models have enhanced the functionality and effectiveness of writing assistants, leading to the development of deep language models, with notable examples including Bidirectional Encoder Representations from Transformers (BERT) [13] and the Generative Pretrained Transformer (GPT) series. The rapid expansion of ChatGPT in the field of text generation serves as evidence of the potential for AI to support the judicial domain. However, to overcome the challenges mentioned in this context, it is necessary to make available specific datasets consisting of Italian language resources related to the legal domain, or generally to the language of legal texts, which are not included in the general archives available online (e.g., Italian news websites [21] and Wikipedia articles [8]). The linguistic issue remains the biggest challenge so far, but it can be addressed thanks to recent advancements in adapting GPT-2 to other languages using transfer learning methods, without unnecessary retraining [12]. An interesting approach involves fine-tuning a transformer on a pre-processed corpus of Italian civil judgments, resulting in a GPT-2 language model that can be deployed as a writing assistant for legal users, significantly improving efficiency in writing text [5, 23].
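A minimal completion sketch along these lines, using a publicly available Italian GPT-2 obtained via transfer learning (the GroNLP/gpt2-small-italian checkpoint is one such model; a production writing assistant would first fine-tune it on anonymized judgments):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load an Italian GPT-2 obtained by recycling English GPT-2 with
# transfer learning; any comparable Italian checkpoint would do.
tok = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-italian")
model = AutoModelForCausalLM.from_pretrained(
    "GroNLP/gpt2-small-italian")

# Suggest a continuation for the opening of a (fictitious) judgment.
prompt = "Il Tribunale, letti gli atti, osserva che"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40, do_sample=True,
                     top_p=0.9, pad_token_id=tok.eos_token_id)
print(tok.decode(out[0], skip_special_tokens=True))
```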
◼ Process and juridical documents analysis. Understanding how document flow affects judges’ workload and case outcomes, rather than restricting the analysis to mining process flows, would help achieve better results from the analysis of document-enriched processes. A first big data pipeline toward the creation of data lakes for the judicial domain has been proposed in [14], in which both structured data (court databases) and unstructured data (sentences and documents without a structured format) were used to extract new information. However, there is a need to better link the analysis of processes with that of juridical documents.
The role of AI techniques is significant in many of the directions proposed above. First of all, process mining tools can learn the actual structure of processes and support the comparison with modeled processes. Natural Language Processing (NLP) can support the extraction of information about events both from documents and from existing unstructured process logs. The Event-Condition-Action (ECA) paradigm can support very quick reactions to anomalies. Finally, data-driven approaches can support the identification of structural anomalies in process execution. A summary of research challenges, directions, and AI contributions for process analysis in judicial systems is presented in Table 1. For each research challenge to improve judicial tasks and processes, the impact on the collected requirements is indicated. Some initial results in this direction in the Italian context are reported in [6, 15, 23].
4 Concluding Remarks
All the research directions discussed in this article are based on data-driven approaches, which are still in their initial stages within the judicial domain. As highlighted in the CEPEJ reports, extracting quantitative data from existing systems is a challenge in itself, to the point that indicators such as case duration are not systematically computed, and proxies like the Disposition Time, which relies on the number of incoming and finalized cases, are currently preferred.

Data-driven approaches therefore introduce new challenges of their own. The first problem to be addressed is the ability to systematically extract information about previous cases, including all details about their events, phases, and associated documents. This may entail distinct legal issues depending on the current legislation and country [36]. In general, process traces (i.e., events and states reached by cases during their execution) are more easily accessible than documents about the process. For instance, in some countries verdicts are public, while in others they are confidential. Other issues arising from data-driven approaches are related to the acceptance by judges of previously unfamiliar data (e.g., duration vs. disposition times for cases) and of inferred values in general, as the perception of case execution may differ from a data-driven analysis. Various factors, including inherent case differences and workload variations, can influence the course of trials. On the other hand, a data-driven approach holds the promise of identifying potential areas for improvement and offering insights into outliers. However, explaining the results of a data-driven approach can be complex in general, as it is distant from the consideration of single cases. Similar challenges may arise in the acceptance of generated texts, due to possible errors that have to be detected and corrected.

Nevertheless, the potential for improvement is substantial, and AI and ML support in the judicial domain has already shown some promising initial results. These developments must consider all external factors and adhere to legal constraints, and they hold great potential for the future, as demonstrated by the rapid growth of ChatGPT in text generation. However, the authors’ experience shows that the closed-source nature of ChatGPT presents significant challenges for scientific research. Users are unable to modify or access the underlying source code, which hampers data interpretation and limits model transparency. Furthermore, concerns regarding liability in case the model generates inaccurate or discriminatory results are pressing and need to be addressed. In conclusion, further work is needed to ensure the acceptability of AI, which depends not only on technical aspects but also on cultural, social, and legal factors. These areas warrant further investigation in the future.