Proceedings of the 26th Workshop of
the UK Planning and Scheduling
Special Interest Group
PlanSIG 2007
December 17-18, 2007
Prague, Czech Republic
Editor: Roman Barták
Proceedings of PlanSIG 2007
The 26th workshop of the UK Planning and Scheduling Special Interest Group
December 17-18, 2007
Prague, Czech Republic
Workshop Chair
Roman Barták
Charles University in Prague
Faculty of Mathematics and Physics
Malostranské nám. 2/25, 118 00 Praha 1, Czech Republic
e-mail: bartak@ktiml.mff.cuni.cz
Programme Committee
Ruth Aylett, Heriot-Watt University, UK
Chris Beck, University of Toronto, Canada
Ken Brown, University College Cork, Ireland
Edmund Burke, University of Nottingham, UK
Luis Castillo, University of Granada, Spain
Amedeo Cesta, ISTC, Italy
Alex Coddington, University of Strathclyde, UK
Stefan Edelkamp, Universität Dortmund, Germany
Susana Fernández, Universidad Carlos III de Madrid, Spain
Maria Fox, University of Strathclyde, UK
Antonio Garrido, Universidad Politecnica Valencia, Spain
Tim Grant, University of Pretoria, South Africa
Joerg Hoffmann, University of Innsbruck, Austria
Peter Jarvis, NASA Ames Research Center, USA
Graham Kendall, University of Nottingham, UK
Philippe Laborie, ILOG, France
John Levine, University of Strathclyde, UK
Derek Long, University of Strathclyde, UK
Lee McCluskey, University of Huddersfield, UK
Amnon Meisels, Ben-Gurion University, Israel
Barry O'Sullivan, University College Cork, Ireland
Sanja Petrovic, University of Nottingham, UK
Nicola Policella, European Space Agency, Germany
Julie Porteous, University of Strathclyde, UK
Patrick Prosser, University of Glasgow, UK
Hana Rudová, Masaryk University, Czech Republic
Wheeler Ruml, University of New Hampshire, USA
Rong Qu, University of Nottingham, UK
Sam Steel, University of Essex, UK
Andrew Tuson, City University, UK
Jozsef Vancza, SZTAKI, Hungary
Roman van der Krogt, 4C, Ireland
Petr Vilím, ILOG, France
ISSN 1368-5708
Table of Contents
Planning-based Scheduling for SLA-awareness and Grid Integration
Dominic Battré, Matthias Hovestadt, Odej Kao, Axel Keller, Kerstin Voss.............................. 1
An Enhanced Weighted Graph Model for Examination/Course Timetabling
Julie R. Carrington, Nam Pham, Rong Qu, Jay Yellen ............................................................. 9
A Multi-Component Framework for Planning and Scheduling Integration
Amedeo Cesta, Simone Fratini, Federico Pecora.................................................................... 17
Scheduling Monotone Interval Orders on Typed Task Systems
Benoît Dupont de Dinechin ..................................................................................................... 25
A Note on Concurrency and Complexity in Temporal Planning
Maria Fox, Derek Long ........................................................................................................... 32
Optimisation of Generalised Policies via Evolutionary Computation
Michelle Galea, John Levine, Henrik Westerberg, Dave Humphreys .................................... 36
Assimilating Planning Domain Knowledge from Other Agents
Tim Grant ................................................................................................................................. 44
The Dimensions of Driverlog
Peter Gregory, Alan Lindsay .................................................................................................. 52
VLEPpO: A Visual Language for Problem Representation
Ourania Hatzi, Dimitris Vrakas, Nick Bassiliades, Dimosthenis Anagnostopoulos, Ioannis
Vlahavas .................................................................................................................................. 60
Constraint Programming Search Procedure for Earliness/Tardiness Job Shop Scheduling
Problem
Jan Kelbel, Zdeněk Hanzálek .................................................................................................. 67
Single-machine Scheduling with Tool Changes: A Constraint-based Approach
András Kovács, J. Christopher Beck ........................................................................................ 71
Comprehensive approach to University Timetabling Problem
Wojciech Legierski, Łukasz Domagała ................................................................................... 79
Opmaker2: Efficient Action Schema Acquisition
T.L. McCluskey, S.N. Cresswell, N.E. Richardson, M.M. West .............................................. 86
Feasibility Criteria for Investigating Potential Application Areas for AI Planning
T.L. McCluskey ........................................................................................................................ 93
Planning in Supply Chain Optimization Problem
N.H. Mohamed Radzi, Maria Fox, Derek Long .................................................................... 100
Velocity Tuning in Currents Using Constraint Logic Programming
Michaël Soulignac, Patrick Taillibert, Michel Rueher ......................................................... 106
SHORT PAPERS
Planning as a software component: A Report from the trenches
Olivier Bartheye, Éric Jacopin .............................................................................................. 117
Nurse Scheduling Web Application
Zdeněk Bäumelt, Přemysl Šůcha, Zdeněk Hanzálek ............................................................. 120
PSPSolver: An Open Source Library for the RCPSP
Javier Roca, Filip Bossuyt, Gaetan Libert ............................................................................ 124
Planning-based Scheduling
for SLA-awareness
and Grid Integration∗
Dominic Battré and Matthias Hovestadt
and Odej Kao
Technical University of Berlin, Germany
{battre,maho,okao}@cs.tu-berlin.de
Axel Keller and Kerstin Voss
Paderborn Center for Parallel Computing
University of Paderborn, Germany
{kel,kerstinv}@upb.de

Abstract

Service level agreements (SLAs) are powerful instruments for describing all obligations and expectations in a business relationship, and they are of central importance for deploying Grid technology in commercial applications. The EC-funded project HPC4U (Highly Predictable Clusters for Internet Grids) aimed at introducing SLA-awareness into local resource management systems, while the EC-funded project AssessGrid introduced the notion of risk, which is associated with every business contract. This paper highlights the concept of planning-based resource management and describes the SLA-aware scheduler developed and used in these projects.

∗ This work has been partially supported by the EU within the 6th Framework Programme under contract IST-031772 "Advanced Risk Assessment and Management for Trustable Grids" (AssessGrid) and IST-511531 "Highly Predictable Cluster for Internet-Grids" (HPC4U).

Introduction

In the academic domain Grid computing is well known, if not long established. Researchers use Grid middleware systems such as Unicore or the Globus Toolkit to create virtual organizations that dynamically share transparent access to distributed resources. Grid computing started with the purely technical question of how to provide access to distributed high-performance compute resources. Thanks to numerous projects and initiatives funded by national and international bodies worldwide, Grid systems have evolved significantly since then, making Grid technology adoptable in a large variety of usage scenarios.

Companies like IBM, Hewlett-Packard, and Microsoft recognized the potential of Grid computing already in the early days of Grid development, investing noticeable effort in research and in the support of research communities. However, the Grid has not yet really entered the commercial domain. Already in 2003 the European Commission (EC) convened a group of experts to clarify the demands of future Grid systems and to identify which properties and capabilities are missing in existing Grid infrastructures. Their work resulted in the idea of the Next Generation Grid (NGG) (Priol & Snelling 2003; Jeffery (edt.) 2004; De Roure (edt.) 2006) and clearly identified that the guaranteed provision of reliability, transparency, and Quality of Service (QoS) is an important prerequisite for successfully commercializing future Grid systems. In particular, commercial users will not use a Grid system for computing business-critical jobs if it operates on a best-effort basis only.

In this context, a Service Level Agreement (SLA) is a powerful instrument for describing all expectations and obligations in the business relationship between service consumer and service provider (Sahai et al. 2002). Such an SLA specifies the QoS requirement profile of a job. At the Grid middleware layer, many research activities already focus on integrating SLA functionality.

The EC-funded project BeInGrid (Business Experiments in Grid (BeInGrid), EU-funded Project) aims at fostering the commercial uptake of the Grid. BeInGrid encompasses numerous business experiments in which Grid technology is introduced to specific business domains. Successful experiments reached the goal of proving the benefit of applying Grid technology for commercial customers. Following the NGG, a major objective in these BeInGrid experiments is the provision of reliability as contractually expressed in negotiated SLAs.

Current resource management systems (RMS) work on a best-effort basis, giving the user no guarantees on job completion. Since these RMS offer their resources to Grid systems, the Grid middleware has only limited means of fulfilling all terms of negotiated SLAs. To close this gap between the requirements of SLA-enabled Grid middleware and the capabilities of RMS, the project HPC4U (Highly Predictable Cluster for Internet-Grids) started working on an SLA-aware RMS, utilizing the mechanisms of process, storage, and network subsystems to realize application-transparent fault tolerance.
As the central component of the HPC4U project, the RMS OpenCCS was selected, since its planning-based nature seemed well-suited for realizing SLA-awareness. Within the project all features required for SLA-awareness and SLA-compliance have been developed, e.g. an SLA-aware scheduler, mechanisms for transparent checkpointing of parallel applications, and the negotiation of new SLAs.

The HPC4U project ends in 2007. The outcome of the project allows the Grid to negotiate SLAs with the RMS. The RMS is only allowed to accept a new SLA if it can ensure its fulfillment. For this, the RMS provides mechanisms like process and storage checkpointing to realize fault tolerance and to assure adherence to given SLAs even in the case of resource failures. The HPC4U system is even able to act as an active Grid component, migrating checkpointed jobs to arbitrary Grid resources if that allows the completion of the job according to its SLA.

In this paper we first highlight the concept of planning-based resource management, a foundation for realizing SLA-aware RMS. The main part of the paper focuses on the specific demands of different job types on scheduling, as well as the scheduling impact of Grid integration. The paper ends with an overview of related work and a short conclusion.

Planning Based Resource Management

Compute clusters have a long tradition beginning in the early 1970s with the UNIX operating system (Pfister 1997). Since then many resource management systems have evolved, offering functionality targeted at their specific usage domain, e.g. load balancing capabilities. Classic systems are mostly used in capacity computing environments, processing large amounts of data in a time-uncritical context.

Most resource management systems available today can be classified as queuing-based systems. The scheduler of such an RMS operates one or more queues, each with different priorities, properties, or constraints (e.g. a high-priority queue or a weekend queue) (Windisch et al. 1996). Each incoming job request is assigned to one of these queues. The scheduling component of the RMS then orders each queue according to the currently active scheduling policy. A very basic strategy is FCFS (First Come, First Served), assigning resources to jobs according to the job's entry time into the system. Modern RMS also use priority queues, reflecting the status of the particular jobs. Resources are assigned to jobs from the queue head if the system has enough free resources. If this results in idle resources, backfilling strategies can be applied to select matching jobs from one of the queues for immediate out-of-order execution.

Many different backfilling strategies have evolved, each optimizing for a specific objective or usage environment. Commonly known strategies are conservative and EASY backfilling, which differ only in how they select jobs for backfilling. While conservative backfilling demands that the backfilled job may not delay any other waiting request (Mu'alem & Feitelson 2001), EASY backfilling only demands that the job at the queue head not be delayed (Lifka 1995). To decide on the impact of a backfilling decision on the delay of jobs in the queues, the system has to have runtime information for these jobs. Hence, such backfilling strategies can only be applied in environments where these estimates are available.

By switching the focus from classic high-throughput computing to the computation of deadline-bound and business-critical jobs, the demands on the RMS and its scheduler component change as well. When negotiating service level agreements, the system has to know about future utilization, i.e. whether it is possible to agree on finishing the new job as requested.

Planning is an alternative approach to system scheduling (Hovestadt et al. 2003). In contrast to queuing, planning does not only regard currently free resources and assign them to waiting jobs. Instead, planning-based systems also plan for the future, assigning a start time to all waiting requests. This way a schedule is generated that encompasses all jobs. With such a schedule available, the system scheduler is able to determine which jobs are scheduled to be executed at what time. Table 1 summarizes the most significant differences between queuing-based and planning-based systems.

A prerequisite for a planning-based resource management system is the availability of runtime estimates for all jobs. Without this information the scheduler has no means of deciding how long a specific resource will be used by a job; hence it could not assign start times to subsequent jobs in the schedule. In case the user underestimated the runtime, the system can try to extend the runtime of the job. If this is not possible because other jobs scheduled on the resource have a priority too high to be pushed away, the job has to be terminated or suspended in order to make the resources available for the other jobs. This may be considered a drawback of planning-based resource management. A further drawback concerns the cost of scheduling: the scheduling process itself is significantly more complex than in queuing-based systems.

This novel approach to scheduling in planning-based resource management systems allows the development of new scheduling policies and paradigms. Beside classic policies like FCFS, SJF (Shortest Job First), or LJF (Longest Job First), novel policies could help realize new objectives or new functionalities. We are convinced that planning-based resource management is a good starting point for realizing SLA-awareness.
Scheduling for Typical Scenarios
This section describes typical scenarios. Starting with the submission of a regular local job, the degree of service quality increases with each scenario. For realizing SLA-awareness in the EC-funded projects HPC4U and AssessGrid, the resource management system OpenCCS has been used. OpenCCS is a planning-based resource management system developed at the University of Paderborn; details can be found in (Keller & Reinefeld 2001).
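The planning approach such a system follows, assigning a start time to every waiting request rather than merely queuing it, can be illustrated with a minimal sketch. This is an illustration only, under simplifying assumptions (homogeneous nodes, FCFS order, a conservative capacity check); the names are hypothetical, not OpenCCS APIs:

```python
def plan(jobs, total_nodes):
    """Assign a start time to every waiting job, in FCFS order.

    jobs: list of (name, nodes, runtime_estimate); runtime estimates
    are mandatory, as in any planning-based RMS.
    Returns {name: start_time}.
    """
    allocs = []       # (start, end, nodes) of already planned jobs
    schedule = {}
    for name, nodes, runtime in jobs:
        # candidate start times: now, or whenever a planned job ends
        candidates = sorted({0} | {end for _, end, _ in allocs})
        for t in candidates:
            # conservative check: sum all allocations overlapping the window
            used = sum(n for s, e, n in allocs if s < t + runtime and e > t)
            if used + nodes <= total_nodes:
                allocs.append((t, t + runtime, nodes))
                schedule[name] = t
                break
    return schedule
```

Note that a small job can be placed at time 0 next to an earlier large one whenever capacity allows: in a planning-based system, backfilling falls out of the placement rule implicitly, as Table 1 indicates.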
                          queuing system         planning system
planned time frame        present                present and future
reception of new request  insert in queues       replanning
start time known          no                     all requests
runtime estimates         not necessary (1)      mandatory
reservations              difficult              yes, trivial
backfilling               optional               yes, implicit
examples                  PBS, NQE/NQS, LL       CCS, Maui Scheduler (2)

(1) exception: backfilling
(2) Maui may be configured to operate like a planning system (Jackson, Snell, & Clement 2001)

Table 1: Differences of queuing and planning systems (Hovestadt et al. 2003)
Local Job Submission

Local job submission is the classic case, where a user connects locally to the resource management system and submits a new job. Since OpenCCS is planning-based, it requires all users to specify the expected duration of their requests. The OpenCCS planner distinguishes between Fix-Time and Var-Time resource requests. A Fix-Time request reserves resources for a given time interval and cannot be shifted on the time axis. In contrast, Var-Time requests can move on the time axis to an earlier or later slot (depending on the policy used). Such a shift might occur when other requests terminate before their specified estimated duration.

The Planning Manager (PM) is a central component of the OpenCCS architecture, responsible for computing a valid, machine-independent schedule. Likewise, the Machine Manager (MM) is responsible for machine-dependent scheduling. The separation between the hardware-independent PM and the system-specific MM allows system-specific mapping heuristics to be encapsulated in separate modules. With this approach, system-specific requests (e.g. for I/O nodes, specific partition topologies, or memory constraints) can be considered. One task of the MM is to verify whether a schedule received from the PM can be realized with the available hardware. The MM checks this by mapping the user-given specification against the static (e.g. topology) and dynamic (e.g. PE availability) information on the system resources. Since OpenCCS is a planning-based RMS, the PM generates a schedule for both current and future resource usage. It thereby supports classic scheduling strategies like FCFS, SJF, and LJF, considering aspects like project limits or system-wide node limits. The system administrator can change the strategy at runtime.

The PM manages two lists while computing a schedule, both sorted according to the active policy:

• The New list (N-list): Each incoming request is placed in this list and waits there until the next planning phase begins.

• The Planning list (P-list): These jobs have already been accepted by the system. The PM takes jobs from this list to generate the system schedule.

The PM first checks whether the N-list has to be sorted according to the active policy (e.g. SJF or LJF). It then plans all elements of the N-list. Depending on the request type (Fix-Time or Var-Time) the PM calls an associated planning function. For example, when planning a Var-Time request, the PM tries to place it as soon as possible: it starts in the present and moves into the future until it finds a suitable place in the schedule.

Figure 1 depicts a typical schedule situation in a planning-based RMS. When a user submits a new job request, the system matches the request properties against the current schedule, i.e. the PM and MM components of OpenCCS check whether a new valid system schedule can be generated. If so, the user's job request is accepted, directly returning the latest time at which the job will be allocated. If the request cannot be realized (e.g. because the user requested a time slot with insufficient available resources), the job is rejected. In this situation, the user can query the system for the earliest possible time to start the job request.

Figure 1: Schedule in a planning-based RMS

Deadline Bound Jobs

Deadline-bound jobs have to be completed by a specific time at the latest. A classic example is a weather service that has to complete the computation of a weather forecast by 5am, since the forecast is to be broadcast on TV at 6am. Deadlines are also of particular importance for executing workflows, where the workflow is executed in multiple parallel branches whose results need to be joined by a given time, so that the overall workflow result can be delivered in time.

From the resource management system's point of view, a deadline-bound job is a Var-Time resource request. The user has to provide three key parameters:

• the number of required resources
• the duration of job execution
• the deadline for job completion

The deadline-bound job is a specific case of a Var-Time resource request, since it may not shift arbitrarily on the time axis, but only within the boundaries given by the earliest possible start time and by the deadline. This constraint has to be regarded during the scheduling process, assigning resources early enough to allow the job to complete in time.
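For deadline-bound requests, the quantity the scheduler works with is the latest allocation time, and an urgency-based ordering such as EDF follows directly from it. A minimal sketch (the field names are hypothetical, not the OpenCCS interface):

```python
def latest_start(deadline, runtime_estimate):
    # Latest possible resource allocation: the specified deadline
    # minus the user's estimated runtime.
    return deadline - runtime_estimate

def edf_order(requests):
    """Order deadline-bound Var-Time requests by Earliest Deadline First,
    i.e. by increasing latest possible start time."""
    return sorted(requests,
                  key=lambda r: latest_start(r["deadline"], r["runtime"]))
```

For the weather-forecast example, a job with a 5am deadline and a two-hour runtime estimate must be allocated resources by 3am at the latest; with a correct estimate, a later start makes the deadline unreachable.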
The latest time for resource allocation corresponds to the specified deadline minus the user's specified runtime.

In the case of deadline-bound jobs, the correctness of the estimated runtime is crucial for meeting the deadline. It is the user's responsibility to give a correct estimate: if the provider allocates resources at the latest possible start time and the job does not complete in time because the user underestimated its runtime, the user is at fault. However, users tend to overestimate the runtime of their jobs to prevent such a situation, so in the typical case the job ends long before the estimated (and scheduled) end time. Generally assuming the specified runtime to be overestimated would allow postponing the point of latest resource allocation by the assumed amount of overestimation. However, this strategy is risky, since jobs with correctly estimated runtimes would then not be able to finish by their deadline.

Due to the nature of deadline-bound jobs, the scheduler has to place them after placing all Fix-Time resource requests, but before placing regular Var-Time resource requests. In doing so, it follows the main scheduling policy, e.g. FCFS. The scheduler executes the following steps on an initially empty schedule, trying to place Var-Time resource requests at the earliest possible place in the new schedule:

1. sort all requests according to the current policy
2. place all Fix-Time resource requests (first from P-list, then from N-list)
3. place all deadline-bound Var-Time resource requests (first from P-list, then from N-list)
4. place all remaining Var-Time resource requests (first from P-list, then from N-list)

Placing deadline-bound Var-Time jobs according to policies like FCFS does not always result in good schedule quality. Placing a job at the front of the schedule just because it arrived in the system early (thereby blocking valuable resources) can prevent the execution of other jobs with perhaps even nearer deadlines. Hence, other strategies could be applied when placing these deadline-bound requests.

As an alternative, Deadline Monotonic Scheduling (DMS) (Audsley 1993) could be applied here, where a job's priority increases the nearer it gets to its deadline, i.e. its latest possible start time. By applying Earliest Deadline First (EDF) (Buttazzo & Stankovic 1993), the scheduler would sort all deadline-bound jobs by increasing remaining time until their latest possible point of start. This ensures that valuable resources are used first for urgent jobs.

Resource Failures and Fault Tolerance

A cluster system consists of multiple nodes. Partitions of these nodes are assigned to running applications, so that multiple applications execute in parallel. If one of the nodes of a partition fails (e.g. due to a power outage), the execution of the application running on this node is typically aborted. In the case of parallel applications, not only the processes running on the affected node are aborted; the entire parallel application is affected.

Cluster systems are used to speed up the execution of complex problems, but with an increasing degree of parallelism and an increasing job runtime (due to problem complexity), the probability of a job crash also increases, because only one of the nodes has to fail during execution. This is a real problem for jobs running on dozens or hundreds of nodes over multiple days or weeks.

In the EC-funded project HPC4U, mechanisms have been developed for transparently checkpointing parallel applications, i.e. all mechanisms can be applied without any modification of the job or relinking of the binary, and without the job owner having to take any notice of the mechanisms at all. This mechanism requires a patch to the Linux kernel, so that the process runs inside a virtual bubble. At checkpoint time, the entire bubble is saved. For parallel applications, the MPI implementation also has to be enhanced, so that a consistent image of all parallel instances can be generated. For this purpose, the cooperative checkpoint protocol (CCP) has been developed.

Beside this stack of tools, the project also evaluated other existing checkpointing solutions. Fairly good experiences have been made with the tools Berkeley Checkpointing and Restart (BLCR) and LAM-MPI. Even though parallel checkpointing is possible with them, these tools have significant functionality drawbacks compared to the HPC4U stack.

By periodically checkpointing an application, the job can be restarted from the latest checkpointed state. Hence, only the computation steps after the latest checkpoint have to be repeated, instead of restarting the job from scratch. Even though the mechanisms have negligible impact on job execution performance and the checkpointing of large jobs can be executed in a few seconds or minutes, this overhead has to be considered at scheduling time.

Firstly, the effort for performing checkpoints enters the computation of the latest possible point of start. Since this time increases with the number of nodes and the amount of used memory, the system can predict quite exactly the time required for each checkpoint operation. The number of checkpoints determines the maximum time that can be lost due to a resource outage. It is a trade-off between reducing the worst-case loss of computational results and reducing the overhead of checkpointing.

The impact of the chosen checkpoint frequency on the runtime of a job is depicted in Figure 2. It assumes a job with a total runtime of one hour and a duration of two minutes for each checkpoint. The three curves represent the number of assumed resource outages. The curve for the case of no resource outages has its minimum at n = 0, i.e. with no checkpoints generated. Since each checkpoint generation delays the completion of the job, each generated checkpoint is unnecessary overhead if no resource outage occurs. If no resource outages are expected, or if a job restart is acceptable (as for best-effort jobs), the best option is to execute without checkpoints.

Figure 2: Impact of Checkpoint Frequency on Runtime

In the case of resource outages, things look different. An increasing number of checkpoints decreases the amount of computation lost through a resource outage, since the system is able to resume from the latest checkpointed state. The curves have their minima at the point of optimal trade-off between lost computation and the additional effort of executing the checkpoint operations. This optimal number increases with the number of expected outages: where it is optimal to generate approximately four checkpoints in the case of one expected outage, it is approximately seven in the case of two.

Secondly, the scheduling policy has to be adapted to handle failures. If a job is affected by a resource outage, the entire job (not only the part on the failed node) is removed from the schedule. It leaves the P-list and is added to the Defect list (D-list), which encompasses all jobs affected by failures.

The scheduler then starts the computation of a new system schedule, following the policies described above, placing jobs from the D-list after jobs from the P-list, but before jobs from the N-list. This impacts new jobs (which may be rejected now), but does not impact other, already planned jobs.
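The trade-off behind Figure 2 can be reproduced with a simple cost model. The paper does not state its formula, so the following is an assumption: each checkpoint adds a fixed cost, and each outage loses at worst one full segment of recomputation between checkpoints.

```python
def overall_time(runtime, cp_cost, n, failures):
    # Estimated completion time with n evenly spaced checkpoints:
    # base runtime, plus checkpointing overhead, plus (assumed model)
    # one lost segment of recomputation per resource outage.
    segment = runtime / (n + 1)
    return runtime + n * cp_cost + failures * segment

def best_n(runtime, cp_cost, failures, max_n=10):
    # Number of checkpoints minimizing the estimated completion time.
    return min(range(max_n + 1),
               key=lambda n: overall_time(runtime, cp_cost, n, failures))
```

With the parameters from Figure 2 (a one-hour job, two-minute checkpoints), this model yields a minimum at 0 checkpoints without outages, about 4 for one expected outage, and 7 for two, matching the curves described above.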
the data to the compute cluster. The time necessary for this
can be neglected in general. In case of Grid jobs, this so
called stage-in process has to be executed using slow WAN
connections.
For this reason, the Grid user does not only have to specify parameters like estimated runtime, number of nodes, or
deadline in the negotiation process, but also the earliest time
for starting. The deadline can only be met if both the computation and the stage-in can be completed until this time.
Since providers are usually connected over high bandwidth
connections to the Internet, the bottleneck usually is the Internet network connection of the customer. Knowing the
total amount of data that needs to be staged-in, he has to
estimate the time required for transferring it over the Internet. The earliest point for starting the job is the time where
the SLA has been committed (i. e. when the stage-in process
could start) plus the total transfer time.
As long as the schedule has sufficient free space, the job
may directly start after the estimated duration of the stage-in
process. Overestimating the time for stage-in is uncritical,
because this would only result in having the data available
at RMS side earlier than expected. In contrast, if the user
underestimated the stage-in time, the RMS is unable to start
the job at the planned time. This directly threatens the fulfillment of the deadline, if the runtime is estimated correctly
and there is no buffer between the planned end of the job
and the deadline. The RMS has two options to handle such
a situation, differing significantly in their demands on system management:
1. keeping the partition available for the job, waiting the start
until stage-in is completed
2. assigning other waiting jobs to the pending job’s resources, executing the pending job as soon as stage-in is
completed
The first option does not require any specific RMS mechanisms, since the nodes of the pending job’s partition simply
remain idle. As soon as the stage-in process has been completed, the RMS starts the job. Even if this option is simple and easily manageable, it has two major disadvantages.
First, the job is in danger of not finishing until the planned
end, since the allocation time (i. e. the estimated runtime)
is running while nodes are idle. Secondly, the overall cluster utilization is impacted, because nodes run idle instead of
computing jobs.
The second option solves both of these problems, since
nodes are used for computing other jobs and allocation time
only starts when stage-in is completed. However, this option
demands the system to support preemption of jobs. For this,
we again use the checkpointing mechanisms developed in
the HPC4U project. Since this solution provides transparent
checkpointing for parallel applications, we are able to realize preemption for parallel jobs. For preempting a job, the
job is first checkpointed and then stopped.
If other jobs are started in the partition of the pending job, these jobs have to be preempted. The scheduler is then able to rebuild the schedule after:
• subtracting the already executed runtime of the preempted jobs from their estimated runtime;
• setting the end of node allocation to the minimum of the specified deadline and the current time plus the estimated remaining runtime.
This way, the job has its entire estimated runtime available, as long as the delay in stage-in is not larger than the original buffer between the end of computation and the deadline. It has to be noted that the deadline compliance of the preempted jobs is not endangered, because they have already executed the time by which they are now started later. Rebuilding the schedule in this way may delay the pending job (which cannot be rejected now), but it does not impact other already planned jobs. However, if policies like DMS are applied, the time until the job's latest point of start has to be recomputed, taking into account not the originally user-specified job runtime but the remaining runtime at the time of the last checkpoint.
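The two rebuild steps above can be sketched as follows. This is a minimal illustration in our own notation, not the RMS's actual API; the field and function names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Job:
    deadline: float           # absolute time by which the job must finish
    estimated_runtime: float  # originally negotiated runtime estimate
    executed: float = 0.0     # runtime already consumed before preemption

def replan_after_preemption(job: Job, now: float):
    """Apply the two rebuild steps: subtract the already executed runtime,
    then cap the end of node allocation at min(deadline, now + remaining)."""
    remaining = job.estimated_runtime - job.executed
    end_of_allocation = min(job.deadline, now + remaining)
    return remaining, end_of_allocation
```

With a deadline of 100, an estimate of 50, and 20 units already executed, replanning at time 40 yields 30 units of remaining runtime and an allocation end of 70, preserving the job's full estimated runtime.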
The impact of resource failures on the system schedule can be reduced by introducing a failure horizon. A resource management system uses its internal monitoring mechanisms to detect problems within the cluster as soon as possible. If such a problem cannot be solved by the internal recovery mechanisms of the RMS itself, the cluster administrators are informed. The failure horizon represents the typical time required by administrators to solve such reported errors (e.g., 12 hours). The RMS moves to the D-list only those jobs that are planned on the defective resources within the failure horizon, assuming that the resource is available again by the allocation time of all other jobs.
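The failure-horizon rule can be sketched as below. The data structures are our own illustration, not the RMS's internals; only jobs planned on a defective resource before the horizon elapses are moved to the D-list.

```python
FAILURE_HORIZON = 12 * 3600  # e.g., 12 hours, expressed in seconds

def jobs_to_replan(schedule, defective_nodes, now):
    """Return the ids of jobs that must move to the D-list: those planned
    to run on a defective resource before the failure horizon elapses."""
    horizon_end = now + FAILURE_HORIZON
    defective = set(defective_nodes)
    return [job_id
            for job_id, nodes, planned_start in schedule
            if planned_start < horizon_end and nodes & defective]
```

A job planned on the defective node well beyond the horizon stays in place, since the resource is assumed to be repaired by its allocation time.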
SLA Negotiation
The process of SLA negotiation differs significantly from the regular job submission interface of a resource management system. There, a user submits his job description and directly receives a rejection or acceptance in return. In the latter case, the job has already entered the system schedule.
In case of service level agreements, a multi-phase negotiation is conducted before the job finally enters the system.
The GRAAP working group (MacLaren 2003) of the Open Grid Forum (OGF) (Open Grid Forum) described such a negotiation process in the WS-Agreement Negotiation specification (Andrieux et al. 2004). Here, the provider answers a job request with an SLA offer. The user has to commit to this offer before the SLA is actually enforced.
For the scheduling component of an RMS, this negotiation process has significant implications: once the RMS has issued an SLA offer, it has to adhere to this offer until it is committed or cancelled by the user. Timeout mechanisms ensure that SLA offers automatically expire after a given time period (e.g., a few seconds). However, at least during this timeout period the system has to reserve system capacity for the job under negotiation.
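The offer lifecycle can be sketched as a small class. This is an assumption-laden illustration (the text only says offers expire after "some seconds"; the 30-second value and all names are ours): an issued offer reserves capacity until it is committed, cancelled, or times out.

```python
import time

OFFER_TIMEOUT = 30.0  # seconds; illustrative, the text only says "some seconds"

class SlaOffer:
    """An issued SLA offer reserves capacity until it is committed,
    cancelled, or its timeout elapses."""
    def __init__(self, job_id, issued_at=None):
        self.job_id = job_id
        self.issued_at = time.monotonic() if issued_at is None else issued_at
        self.committed = False

    def expired(self, now=None):
        now = time.monotonic() if now is None else now
        return not self.committed and (now - self.issued_at) > OFFER_TIMEOUT
```

An uncommitted offer older than the timeout is expired and its reserved capacity can be released; a committed offer never expires this way.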
For this purpose, a novel list is introduced into the system: the SLA-offer list (O-list). Jobs from this list are scheduled within the regular scheduling process in the order P-list before D-list before O-list before N-list. It is preferable to privilege jobs from the D-list over the O-list, since jobs in the O-list are not yet binding, so that the system would not actually break an SLA contract but only an SLA offer. Again, the general policy for handling failures is to not affect other jobs, keeping the implications of a failure as local as possible. This also implies that issued SLA offers should be kept if possible.
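The list-priority order described above can be sketched as a sort key. The tuple representation of jobs is our own illustration, not the scheduler's actual data model.

```python
# P-list (planned) before D-list (deadline) before O-list (offers) before N-list
LIST_PRIORITY = {"P": 0, "D": 1, "O": 2, "N": 3}

def planning_order(jobs):
    """jobs: list of (job_id, list_name, submit_time) tuples.
    Plan P-list jobs first, then D-, O-, and finally N-list jobs."""
    return sorted(jobs, key=lambda j: (LIST_PRIORITY[j[1]], j[2]))

jobs = [("j1", "N", 1), ("j2", "O", 2), ("j3", "D", 3), ("j4", "P", 4)]
print([j[0] for j in planning_order(jobs)])  # ['j4', 'j3', 'j2', 'j1']
```

Within one list, submission time breaks ties; across lists, committed SLAs always dominate mere offers.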
Data Staging of Grid Jobs
A second significant difference between locally submitted jobs and jobs coming from the Grid is the aspect of data staging. For local jobs it can be assumed that all necessary job data (e.g., the application binary and all input data) are available on a local computer system, so that fast local network connections can be used for transferring this data.
Accepting or Rejecting New Job Requests
The previous sections outlined how the demands on scheduling and system management increase with deadline support or a Grid interface. However, the general procedure for accepting or rejecting new job requests remains the same.
If a resource request is submitted to the RMS, the scheduler tries to build a new valid schedule that contains this new request. If the scheduler succeeds, i.e., if the deadline of the new job can be met without violating any other Fix-Time resource request or deadline-bound Var-Time request, the new request is accepted by the system.
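This admission procedure can be sketched as follows. The planner itself is passed in as a stand-in function (this is not OpenCCS code, and the dictionary job representation is our assumption): the request is accepted only if a tentative rebuild yields a schedule in which every deadline-bound job still meets its deadline.

```python
def admit(request, jobs, rebuild_schedule):
    """Tentatively rebuild the schedule with the new request; accept only
    if a valid schedule exists in which no deadline is violated.
    rebuild_schedule returns a list of (job, planned_end) pairs or None."""
    candidate = rebuild_schedule(jobs + [request])
    if candidate is None:
        return False, None
    feasible = all(end <= job["deadline"] for job, end in candidate
                   if job.get("deadline") is not None)
    return (True, candidate) if feasible else (False, None)
```

On acceptance the candidate schedule is committed; on rejection the previously valid schedule stays in force, so other jobs are never affected by the attempt.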
Conclusion and Future Work
Introducing SLA-awareness is a mandatory prerequisite for the commercial uptake of the Grid. Consequently, SLA-awareness also has to be introduced to local resource management systems, which currently operate on a best-effort basis. The EC-funded project HPC4U aims at providing an application-transparent, software-only solution for such an SLA-aware RMS, addressing the demand for reliability and fault tolerance. The HPC4U system already allows the Grid user to negotiate new SLAs, which are realized by means of process, network, and storage checkpointing.
In this paper we have described the requirements of various job types and their demands on SLA-aware scheduling. In particular, we addressed the implications of Grid integration for the scheduling policies. The described scheduling rules have been implemented within the OpenCCS resource management system, which is used in the HPC4U project. Benefiting from the mechanisms of checkpointing and restart, the scheduler has proved to be well suited for executing jobs according to their negotiated SLAs. Provided that spare resources are not allocated by other SLA-bound jobs, the system is able to cope with resource outages while fulfilling the SLAs of all jobs. Thanks to the transparent checkpointing capabilities, these mechanisms also apply to the execution of commercial applications, where no source code is available and recompiling or relinking is not possible. The user does not even have to modify the way the job is executed in the Grid. Hence, HPC4U has reached its goal of providing transparent fault tolerance.
However, the availability of spare resources proved to be the limiting factor at restart time. If all resources of the cluster system are allocated by SLA-bound jobs, the system has no means of restarting the failure-affected job, thus violating the terms of its SLA.
Improving this situation is the subject of ongoing work. First, the notion of buffer nodes is being introduced into the SLA-aware scheduler. These buffer nodes may only be used for executing best-effort jobs, so that an outage either hits the buffer nodes themselves, or the best-effort jobs running on them can be displaced by SLA-bound jobs affected by the resource outage. Second, the checkpoint and restart mechanisms will be used for suspending the execution of running jobs in accordance with their SLAs, thus freeing allocated resources for restarting outage-affected jobs. Third, the scheduler will actively select jobs for migration over the Grid, so that
they can be finished on remote resources according to their SLA.
The scheduler is also the foundation for work done in the EC-funded project AssessGrid. Here, the notion of risk awareness and risk management is introduced into all layers of the Grid. This implies that the scheduler of the RMS has to consider the risk of SLA violations in all scheduling decisions.
Related Work
The worldwide research in Grid computing has resulted in numerous different Grid packages. Beside many commodity Grid systems, general-purpose toolkits exist, such as Unicore (UNICORE Forum e.V.) or Globus (Globus Alliance: Globus Toolkit). Although Globus represents the de-facto standard for Grid toolkits, all these systems have proprietary designs and interfaces. To ensure future interoperability of Grid systems as well as the opportunity to customize installations, the OGSA (Open Grid Services Architecture) working group within the OGF aims to develop the architecture for an open Grid infrastructure (GGF Open Grid Services Architecture Working Group (OGSA WG) 2003).
In (Jeffery 2004), important requirements for the Next Generation Grid (NGG) were described. Among those needs, one of the major goals is to support resource sharing in virtual organizations all over the world, thus attracting commercial users to use the Grid, to develop Grid-enabled applications, and to offer their resources in the Grid. Mandatory prerequisites are flexibility, transparency, reliability, and the application of SLAs to guarantee a negotiated QoS level.
An architecture that supports the co-allocation of multiple resource types, such as processors and network bandwidth, was presented in (Foster et al. 1999). The Globus Architecture for Reservation and Allocation (GARA) provides "wrapper" functions to enhance a local RMS not capable of supporting advance reservations with this functionality. This is an important step towards an integrated QoS-aware resource management. In our paper, this approach is enhanced by SLA and monitoring facilities. These enhancements are needed in order to guarantee compliance with all accepted SLAs. This means it has to be ensured that the system works as expected at any time, not only at the time a reservation is made. The GARA component of Globus currently supports neither the definition of SLAs nor malleable reservations, nor does it provide resilience mechanisms to handle resource outages or failures.
The requirements and procedures of a protocol for negotiating SLAs were described in SNAP (Czajkowski et al. 2002). However, the important issue of how to map, implement, and assure those SLAs during the whole lifetime of a request on the RMS layer remains to be solved. This issue is also addressed by the architecture presented in this paper.
The Grid community has identified the need for a standard for SLA description and negotiation. This led to the development of WS-Agreement/-Negotiation (Andrieux et al. 2004).
References
Andrieux, A.; Czajkowski, K.; Dan, A.; Keahey, K.; Ludwig, H.; Nakata, T.; Pruyne, J.; Rofrano, J.; Tuecke, S.; and Xu, M. 2004. Web Services Agreement Specification (WS-Agreement). http://www.gridforum.org/Meetings/GGF11/Documents/draft-ggf-graap-agreement.pdf.
Audsley, N. 1993. Deadline monotonic scheduling theory and application. Control Engineering Practice 1:71–78.
Business Experiments in Grid (BeInGrid), EU-funded project. http://www.beingrid.eu.
Buttazzo, G. C., and Stankovic, J. 1993. RED: A robust earliest deadline scheduling algorithm. In 3rd Intl. Workshop on Responsive Computing Systems.
Czajkowski, K.; Foster, I.; Kesselman, C.; Sander, V.; and Tuecke, S. 2002. SNAP: A Protocol for Negotiating Service Level Agreements and Coordinating Resource Management in Distributed Systems. In Feitelson, D. G.; Rudolph, L.; and Schwiegelshohn, U., eds., Job Scheduling Strategies for Parallel Processing, 8th International Workshop, Edinburgh.
De Roure, D., ed. 2006. Future for European Grids: GRIDs and Service Oriented Knowledge Utilities. Technical report, Expert Group Report for the European Commission, Brussels.
Foster, I.; Kesselman, C.; Lee, C.; Lindell, B.; Nahrstedt, K.; and Roy, A. 1999. A Distributed Resource Management Architecture that Supports Advance Reservations and Co-Allocation. In 7th International Workshop on Quality of Service (IWQoS), London, UK.
GGF Open Grid Services Architecture Working Group (OGSA WG). 2003. Open Grid Services Architecture: A Roadmap.
Globus Alliance: Globus Toolkit. http://www.globus.org.
Highly Predictable Cluster for Internet-Grids (HPC4U), EU-funded project IST-511531. http://www.hpc4u.org.
Hovestadt, M.; Kao, O.; Keller, A.; and Streit, A. 2003. Scheduling in HPC Resource Management Systems: Queuing vs. Planning. In Job Scheduling Strategies for Parallel Processing: 9th International Workshop, JSSPP, Seattle, WA, USA.
Jackson, D.; Snell, Q.; and Clement, M. 2001. Core Algorithms of the Maui Scheduler. In Feitelson, D. G., and Rudolph, L., eds., Proceedings of the 7th Workshop on Job Scheduling Strategies for Parallel Processing, volume 2221 of Lecture Notes in Computer Science, 87–103. Springer Verlag.
Jeffery, K., ed. 2004. Next Generation Grids 2: Requirements and Options for European Grids Research 2005–2010 and Beyond. ftp://ftp.cordis.lu/pub/ist/docs/ngg2_eg_final.pdf.
Keller, A., and Reinefeld, A. 2001. Anatomy of a resource management system for HPC clusters. Annual Review of Scalable Computing 3:1–31.
Lifka, D. A. 1995. The ANL/IBM SP Scheduling System. In Feitelson, D. G., and Rudolph, L., eds., Proceedings of the 1st Workshop on Job Scheduling Strategies for Parallel Processing, volume 949 of Lecture Notes in Computer Science, 295–303. Springer Verlag.
MacLaren, J. 2003. Advanced Reservations – State of the Art. Technical report, GRAAP Working Group, Global Grid Forum. http://www.fz-juelich.de/zam/RD/coop/ggf/graap/sched-graap-2.0.html.
Mu'alem, A., and Feitelson, D. G. 2001. Utilization, Predictability, Workloads, and User Runtime Estimates in Scheduling the IBM SP2 with Backfilling. IEEE Trans. Parallel & Distributed Systems 12(6):529–543.
Open Grid Forum. http://www.ogf.org.
Pfister, G. 1997. In Search of Clusters. Prentice Hall.
Priol, T., and Snelling, D. 2003. Next Generation Grids: European Grids Research 2005–2010. ftp://ftp.cordis.lu/pub/ist/docs/ngg_eg_final.pdf.
Sahai, A.; Graupner, S.; Machiraju, V.; and van Moorsel, A. 2002. Specifying and Monitoring Guarantees in Commercial Grids through SLA. Technical Report HPL-2002-324, Internet Systems and Storage Laboratory, HP Laboratories Palo Alto.
UNICORE Forum e.V. http://www.unicore.org.
Windisch, K.; Lo, V.; Feitelson, D.; and Nitzberg, B. 1996. A Comparison of Workload Traces from Two Production Parallel Machines. In 6th Symposium on the Frontiers of Massively Parallel Computing, 319–326.
An Enhanced Weighted Graph Model
for Examination/Course Timetabling
Julie R. Carrington*, Nam Pham†, Rong Qu†, Jay Yellen*†
* Department of Mathematics and Computer Science, Rollins College, Winter Park, Florida, USA
{jcarrington, jyellen}@rollins.edu
† Automated Scheduling, Optimisation and Planning Research Group, School of Computer Science, University of Nottingham, Nottingham, UK
{nxp, rxq, jzy}@cs.nott.ac.uk
Abstract
We introduce an enhanced weighted graph model whose vertices and edges have several attributes that make it adaptable to a variety of examination and course timetabling scenarios. In addition, some new vertex- and colour-selection heuristics arise naturally from this model, and our implementation allows for the use and manipulation of various combinations of them, along with or separate from the classical heuristics that have been used for decades. We include a brief description of some preliminary results for our current implementation and discuss the further development and testing of the ideas introduced here.
Introduction
Background
Using graph colouring to model timetabling problems has a long history (e.g., Broder 1964, Welsh and Powell 1967, Wood 1968, Neufeld and Tartar 1974, Brelaz 1979, Mehta 1981, and Krarup and de Werra 1982). Several survey papers have been written on this topic (e.g., Schmidt and Strohlein 1980, de Werra 1985, Carter 1986, Schaerf 1999, Burke, Kingston, and de Werra 2004, and Qu et al. 2006).
In a standard graph representation of a timetabling problem, the events to be scheduled are represented by vertices. A constraint (conflict) between two events, indicating that they should be assigned different time slots, is represented by an edge between the two corresponding vertices. In our case, the events are exams (or courses) and the constraints might be that some students are enrolled in both exams or that the same professor is giving both courses. Ideally, then, such exams (courses) would be assigned different time slots. If we associate each possible time slot with a different colour, then creating a conflict-free timetable is equivalent to constructing a feasible (or proper or valid) colouring of the vertices of the graph, that is, a vertex colouring such that adjacent vertices (two vertices joined by an edge) are assigned different colours.
Two events with a constraint between them are generally prohibited from being assigned the same time slot, i.e., the edge represents a hard constraint. In some university timetabling scenarios, another objective is to minimize the number of students who have to take exams close together (or courses far apart). This proximity restriction is generally regarded as a soft constraint.
Given that vertex colouring is NP-hard (Papadimitriou and Steiglitz 1982), the development of heuristics and corresponding approximate algorithms, which forfeit the guarantee of optimality, has been a central part of the research effort.
The weighted graph model introduced in 1992 (Kiaer and Yellen 1992a) was designed to handle timetabling instances for which the number of available time slots (colours) is smaller than the minimum needed to construct a feasible colouring. (This minimum number is called the chromatic number of the graph.) For instance, in course timetabling, there is likely to be a limited number of time slots that can be used during the week, and a conflict-free timetable may not exist. If conflicts are unavoidable, then a choice must be made as to which ones to accept.
Distinguishing among conflicts
Clearly, certain conflicts are worse than others. If two exams (or courses) require the same professor to be present or use the same equipment that cannot be shared, then those two exams must not be scheduled at the same time. On the other hand, if two exams happen to have one student in common, then scheduling those two exams in the same time slot may need to be considered acceptable. In fact, there may be situations where the distinction between hard and soft constraints becomes less clear. For instance, a timetable having a single student scheduled to take two exams in the same time slot (forcing some special accommodation) may actually be preferred to one that has 50 students taking back-to-back exams.
Scope of Paper
This paper introduces an extension of the weighted graph model of Kiaer and Yellen (1992a). This enhanced model holds and keeps track of more of the information relevant to two sometimes opposing objectives: minimizing total conflict penalty (or keeping it zero) and minimizing total proximity penalty. A natural byproduct of this approach is the emergence of some new heuristics that appear to hold promise for use, separately or in combination, in fast, one-pass, approximate algorithms.
Such algorithms can prove useful in a number of ways. Because solutions are produced quickly, they can be used within a flexible, interactive decision-support system that can be adapted to a variety of timetabling scenarios. These solutions can also be used as initial solutions in local-search and improvement-based techniques (e.g., Tabu Search, Simulated Annealing, Large Neighborhood Search, Case-Based Reasoning), or as upper bounds for a branch-and-bound algorithm (Kiaer and Yellen 1992b). Recent research has demonstrated that these algorithms, when hybridized effectively or integrated with other techniques such as meta-heuristics, are highly effective at solving timetabling problems (Qu et al. 2006).
Also, because the model lends itself to using various combinations of heuristics for vertex and colour selection, it may prove useful in the context of hyper-heuristics (Burke et al. 2003) and/or in an evolutionary computation approach that might involve automatic generation of combinations and switching from one combination to another as the colouring progresses (see Burke et al. 2007).
For an up-to-date survey that includes a broad overview and extensive bibliography of the research in this area over the last ten years, see Qu et al. (2006).
Description of the Model
Although we restrict our attention in this paper to examination timetabling, our model is also applicable to course timetabling. Moreover, it incorporates more of the problem information at input and keeps track of more information pertaining to the partial colouring during the colouring process than do existing timetabling models. These features led us to the design of some new vertex- and colour-selection heuristics, which we introduce in this paper.
We represent the various components of a typical instance of an examination timetabling problem using a weighted graph model. Each vertex and each edge is weighted with several attributes, some that hold information from the problem instance and others that hold and update information that helps guide the colouring process.
Each vertex in the graph corresponds to an exam to be scheduled and each colour corresponds to a different time slot. Accordingly, assigning colour c to vertex v is taken to mean that the exam corresponding to v is scheduled in the time slot corresponding to c.
Associated with each vertex is the set of students who must take that exam. Two vertices are joined by an edge, and are said to be adjacent or neighbors, if it is undesirable to schedule the corresponding exams in the same time slot. Each edge carries information that tells us how undesirable it would be for the corresponding exams to be scheduled in the same time slot or in time slots near each other. In particular, each edge has two attributes: the set of students taking both exams (the intersection subset), and a positive integer indicating the conflict severity if the exams are scheduled in the same time slot. This second attribute is currently tied to the size of the intersection subset. However, it can also reflect factors not tied to this intersection. For instance, if the same professor is assigned to both exams, then the severity is likely to be set at a high level.
To illustrate our model, suppose there are four available time slots, 0, 1, 2, and 3, for five exams, E1, E2, E3, E4, and E5. The set of students taking each of the exams is as follows:
E1: {a, b, …, j}
E2: {k, l, …, z}
E3: {a, e, k}
E4: {b, c, d, x, y, z}
E5: {a, c, e, g, i, j}
Each edge in the graph shown in Figure 1 is labeled with the subset of students enrolled in both exams corresponding to the endpoints of that edge. In general, it may be undesirable to assign the same time slot (colour) to a given pair of exams for a variety of reasons. For this example, however, we consider two vertices to be adjacent only if there is at least one student taking both exams.
[Figure 1: Student intersections for pairs of exams. Edges: E1–E3 {a,e}; E1–E4 {b,c,d}; E1–E5 {a,c,e,g,i,j}; E2–E3 {k}; E2–E4 {x,y,z}; E3–E5 {a,e}; E4–E5 {c}.]
For our example, we set the conflict severity equal to 1, 5, or 25, according to the size of the intersection. In particular, we set the conflict severity to 1 if the intersection size is 1 or 2, to 5 if the intersection size is 3 or 4, and to 25 if the intersection size is 5 or greater (see Figure 2). We emphasize that these thresholds for conflict severity are arbitrarily chosen here. If a conflict-free timetable is a requirement, as it is in the University of Toronto problem instances (Carter, Laporte, and Lee 1996), then all conflict severities can simply be set to one, since all conflicts are regarded as equally bad.
Of course, as mentioned, there will be many situations in which the conflict severity depends on other factors. In these situations, an edge might exist even when it corresponds to an empty intersection of students.
[Figure 2: Additional edge attributes [conflictSeverity, intersectionSize]. Edges: E1–E3 [1, 2]; E1–E4 [5, 3]; E1–E5 [25, 6]; E2–E3 [1, 1]; E2–E4 [5, 3]; E3–E5 [1, 2]; E4–E5 [1, 1].]
The proximity penalty of assigning colours c_i and c_j to the endpoints of an edge is a function of how close c_i and c_j are and of the size of the intersection. For the Toronto problem instances, where the time slots are simply c_i = i, i = 0, 1, …, the intersection size is multiplied by a proximity weight that equals 2^(5-|i-j|) when |i-j| <= 5 and 0 otherwise. Our implementation uses this same evaluation for comparison purposes with the Toronto benchmark results. However, if the time slots are specified by a day, a start time, and a duration, then our colour attributes can easily be modified to allow for the appropriate change in the proximity evaluation function.
Our overall objective is to produce colourings (timetables) with minimum total conflict penalty (zero may be required) and minimum total proximity penalty.
Knowing the conflict severity and the size of the intersection for each edge makes it straightforward to keep track of the two kinds of penalties as the colouring progresses. When a vertex gets coloured c, that colour becomes less desirable (or forbidden) to its neighbors, as do colours in proximity with colour c.
Our model keeps track of these two kinds of colour undesirability as follows. Each vertex v has a colour-penalties vector that indicates the undesirability of assigning each colour to that vertex with respect to conflict penalty and proximity penalty. That is, the component of the colour-penalties vector corresponding to colour c has two values: one is the conflict penalty incurred if v is coloured c, and the other is the resulting proximity penalty.
Using our example and a simplified proximity function, we illustrate how the colour-penalties vectors change as the graph is coloured. Suppose that any two colours i and j of the colours 0, 1, 2, and 3 are within proximity if they differ by 1; then the proximity penalty incurred when the colours of the endpoints of an edge differ by 1 equals the intersection size. Suppose further that the colour-penalties vectors for all of the vertices are initialized with [0, 0] for all of their colour components. Figure 3 shows the result of colouring vertex E1 with colour 1.
[Figure 3: Colour-penalties vectors after E1 is coloured 1. E1: colour 1; E2: ( [0,0], [0,0], [0,0], [0,0] ); E3: ( [0,2], [1,0], [0,2], [0,0] ); E4: ( [0,3], [5,0], [0,3], [0,0] ); E5: ( [0,6], [25,0], [0,6], [0,0] ).]
There may be other factors that make certain time slots undesirable for an individual exam. For instance, suppose professor X is assigned to exam A and cannot be on campus before noon. Then any colour corresponding to a morning time slot for exam A would be given a prohibitively large conflict penalty value before the colouring begins.
As each vertex is coloured, its adjacent vertices' colour-penalties vectors are updated. The ease with which we are able to keep track of both hard and soft constraints as the colouring progresses creates new opportunities for the use of more sophisticated heuristics tied to this readily accessible information.
The Basic Approximate Algorithm
Our basic algorithm consists of two steps: select a vertex and then colour that vertex. We repeat these two steps until all vertices are coloured. Notice that while our model will easily accommodate more computation-intensive algorithms involving backtracking, local improvement, etc., we chose for this first phase of our research to concentrate on producing fast, essentially one-pass colourings.
Summary of the Model Features and Parameters
In preparation for the next section's discussion of heuristics, we list the key features and parameters on which the heuristics are based. The two edge attributes, conflict severity and intersection size, give rise to two different versions of the traditional concept of the weighted degree of a vertex.
• Conflict severity (of an edge) – a measure of how undesirable it is to assign the same colour to both endpoints of the edge. In general, this would depend on several factors, and it could be set interactively by the end-user.
• Intersection size (of an edge) – the size of the intersection of the two sets corresponding to the endpoints of the edge. In exam timetabling, this is simply the number of students taking both exams.
• Conflict degree (of a vertex) – the sum of the conflict severities of the edges incident on the vertex.
• Intersect degree (of a vertex) – the sum of the intersection sizes of the edges incident on the vertex.
• Bad-conflict edge – an edge whose conflict severity
exceeds a specified threshold value. If a conflict-free
timetable (i.e., a feasible colouring) is required, then
this threshold is set to zero, as we do for the Toronto
problem instances.
• Bad-intersect edge – an edge whose intersection size
exceeds a specified threshold. In our current implementation, this threshold is a function of the average
of the intersection sizes of all edges; specifically, we
use the average intersection size times some constant
multiplier.
• Conflict penalty (for the colour assignment of a vertex) – a measure of how undesirable it is to assign
that colour to the vertex. This will depend on the colour assignments of the vertex’s neighbors and the
conflict severities of the relevant edges, but it could
also depend on other factors (e.g., professor, room, or
equipment constraints).
• Proximity (of two colours) – a measure of how close
together (in the case of exam timetabling) or spread
apart (for course timetabling) the two colours are.
This is often a secondary objective to optimize in
school timetabling and is typically referred to as a
soft constraint.
• Proximity penalty (for the colour assignment of a
vertex) – the sum of the proximity penalties resulting
from that colour assignment and the colour assignments of all neighbors of that vertex (determined by
the function described immediately following Figure
2).
• Colour-penalties vector (of a vertex) – indicates for
each colour the conflict penalty and proximity penalty of assigning that colour to the vertex. When a
vertex is coloured, the colour-penalties vector of each
of that vertex’s neighbors must be updated accordingly.
• Bad-conflict colour (for a vertex) – a colour whose
conflict penalty for that vertex exceeds some specified threshold (also set to zero for the Toronto instances since feasible colourings are required).
• Bad-proximity colour (for a vertex) – a colour whose
proximity penalty for that vertex exceeds some specified threshold. Similar to the bad-intersect-edge
threshold, we use average intersection size times a
(possibly different) constant multiplier.
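The colour-penalties bookkeeping summarized above can be illustrated with the paper's five-exam example and the simplified proximity rule (colours within proximity iff they differ by 1). This is our own sketch, not the authors' implementation; the edge attributes are those of Figure 2.

```python
NUM_COLOURS = 4

# Edge attributes from Figure 2: (exam, exam) -> (conflict severity, intersection size)
EDGES = {
    ("E1", "E3"): (1, 2), ("E1", "E4"): (5, 3), ("E1", "E5"): (25, 6),
    ("E2", "E3"): (1, 1), ("E2", "E4"): (5, 3), ("E3", "E5"): (1, 2),
    ("E4", "E5"): (1, 1),
}

# Each colour component is [conflict penalty, proximity penalty], initialized [0, 0].
penalties = {v: [[0, 0] for _ in range(NUM_COLOURS)]
             for v in ("E1", "E2", "E3", "E4", "E5")}

def colour_vertex(v, c):
    """Colour v with c and update each neighbor's colour-penalties vector:
    colour c accrues the edge's conflict severity, and colours within
    proximity (here: differing by 1) accrue the intersection size."""
    for (a, b), (severity, isize) in EDGES.items():
        if v not in (a, b):
            continue
        n = b if a == v else a
        penalties[n][c][0] += severity
        for c2 in range(NUM_COLOURS):
            if abs(c2 - c) == 1:
                penalties[n][c2][1] += isize

colour_vertex("E1", 1)
print(penalties["E3"])  # [[0, 2], [1, 0], [0, 2], [0, 0]], as in Figure 3
```

Colouring E1 with colour 1 reproduces the vectors of Figure 3: E2, which is not adjacent to E1, keeps all-zero components, while E3, E4, and E5 accrue conflict penalties on colour 1 and proximity penalties on colours 0 and 2.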
The thresholds for badness are easily adaptable to the requirements of the problem, and, in a decision-support system, they could be specified interactively by the end-user. Part of this ongoing research is to study the effect that the values of the thresholds have on the quality of the solution and to identify features of a problem instance that determine that effect.
Heuristics
Vertex selection and colour selection are the two key components of our simple, constructive algorithm, and our strategies for both are flexible in the varied ways they use new heuristics and variations of the traditional ones. Our current implementation uses ten 'primitive' heuristics for selecting the next vertex to be coloured and four to select a colour for that vertex.
Ten Primitive Vertex-Selection Heuristics
Our colouring strategies are based on the classical and intuitive idea that the most troublesome vertices should be coloured first. Some of the commonly used heuristics based on this idea have been largest saturation degree, largest degree, and largest weighted degree.
We use variations of these, and we introduce some new ones that focus more on the number of bad edges and the number of bad colours. Some of these new heuristics rely on the information kept in each vertex's colour-penalties vector, while others use information tied to the edges incident on each vertex. The primitive heuristics on which our vertex selectors are based are:
0. Maximum number of bad-conflict edges to uncoloured neighbors – vertices having the most bad-conflict edges among their incident edges to uncoloured neighbors.
1. Maximum number of bad-conflict colours – vertices having the most bad-conflict colours. For the Toronto data set, this heuristic reduces to largest saturation degree.
2. Maximum number of bad-proximity colours – vertices having the most bad-proximity colours.
3. Maximum conflict sum – vertices with the largest sum of their conflict colour penalties.
4. Maximum proximity sum – vertices with the largest sum of their proximity colour penalties.
5. Maximum conflict degree to uncoloured neighbors – vertices whose incident edges to uncoloured neighbors have the largest sum of conflict severities.
6. Maximum number of bad-conflict edges – vertices having the most bad-conflict edges among their incident edges. For the Toronto data set, this reduces to largest degree (since every edge is considered a bad-conflict edge).
7. Maximum number of bad-intersect edges to uncoloured neighbors – vertices having the most bad-intersect edges among their incident edges to uncoloured neighbors.
8. Maximum intersect degree to uncoloured neighbors – vertices whose incident edges to uncoloured neighbors have the largest sum of intersection sizes.
9. Maximum number of bad colours – a consolidation of heuristics 1 and 2; a bad colour is one whose conflict penalty or whose proximity penalty exceeds its respective threshold.
Observe that heuristic 7 may be better at evaluating the difficulty of a vertex than its sum counterpart, heuristic 8. To illustrate, suppose that the edge weights in Figure 4 represent intersection size and that all neighbors of vertices v1 and v2 are uncoloured. Then heuristic 8 would select v1, whereas, for any bad-intersect-edge threshold greater than one, heuristic 7 would select v2, which appears to be more difficult. A similar observation can be made for heuristic 2 versus heuristic 4.
[Figure 4: Intersection sizes of the edges incident on v1 and v2; v1 has one edge of weight 200 and three edges of weight 1, while v2 has four edges of weight 40 and one of weight 1.]
Early in the colouring, the only vertices with any bad-conflict colours will tend to be those few that have neighbors that have already been coloured. Thus, until several vertices are coloured, the order in which they are selected will tend toward a simple breadth-first order and not be an effective predictor of the difficult-to-colour vertices.
Accordingly, the compound vertex selectors used early in the colouring process begin with a primitive heuristic based on the weights of incident edges (e.g., heuristic 0). Then, after a designated number of vertices have been selected and coloured, we switch to a compound selector that begins with heuristic 1, when it is more likely to be a stronger predictor of the difficulty of a vertex.
40
Figure 4: Heuristic 8 would select v1 before v2.
Four Primitive Colour-Selection Heuristics
Given a vertex v that has been selected, the primitive
heuristics that we use to choose a colour for v are:
Vertex Partitioning
0. Minimum conflict penalty – a colour that has minimum conflict penalty for vertex v.
1. Minimum proximity penalty – a colour that has minimum proximity penalty for vertex v.
2. Least bad for neighbors with respect to conflict penalty – a colour which when assigned to v causes the
fewest good-to-bad conflict penalty switches for the
uncoloured neighbors of v.
3. Least bad for neighbors with respect to proximity
penalty – same as heuristic 2 but with respect to
proximity penalty.
A final innovation involves a preprocessing step that
partitions the vertex set and allows us to reduce the
amount of computation without incurring additional conflict penalties. The preprocessing is based on the following simple observation. If v is a vertex with degree less
than k, and v initially has k colours available, then v can
safely be left until last to colour, since it will always
have at least one non-conflict colour available, independent of how its neighbors are coloured and how
heavy the edge-weights are between v and its neighbors.
The preprocessing uses an iterative partitioning algorithm that places all vertices whose colouring can be
done last into the easiest-to-colour subset, say S1 . Next,
for each vertex in S1 , we calculate a reduced (quasi-)
degree of each of its neighbors and put all vertices whose
reduced degree is less than the number of colours available into the next-easiest-to-colour subset, S2 . Again, as
long as a vertex in S2 is coloured before any of its
neighbors in S1 , it can safely be left uncoloured until its
other neighbors are coloured. The process continues until
no additional vertices can be removed from
the ‘hardest’ subset and the vertices in that last subset of
the partition must be coloured first using the specified
selection criteria.
Combining Heuristics
One of the innovations of our model and implementation
is the ability to combine any number of the primitive
heuristics to form compound vertex selectors and compound colour selectors. A compound vertex selector
starts with one of the 10 primitive vertex-selection heuristics listed above. Typically there will be several vertices identified as the most difficult with respect to that
heuristic. This subset of vertices is then narrowed down
by applying a second primitive heuristic, and so on.
Thus, a compound vertex selector consists of a sequence
of primitive heuristics, where all but the first one in the
sequence, is regarded as a tiebreaker for the ones before
it. Once the subset of vertices is pared down by the combination of heuristics, some vertex is chosen from the
subset (typically the first one in the list). Compound colour selectors are similarly constructed from the four
primitive colour-selection heuristics listed above.
As long as the subsets are done in order (last to first),
vertices in all subsets except for the hardest one can be
selected arbitrarily with no possibility of incurring a conflict penalty. One simply chooses an available colour,
whose existence is guaranteed by the construction. Thus,
in a fairly sparse graph, computation can be considerably
reduced. Notice that because any penalties that result
from the colouring occur in the process of colouring the
hardest cell, any local improvement algorithms could be
applied only to that set of vertices before moving on to
colour the rest of the graph, again without incurring additional penalties at a later stage.
Switching Selectors in the Middle of a Coloring
Another feature of our model is the ability to switch from
one combination of heuristics to another at various stages
of the colouring. Including this feature was motivated by
the general observation that the effectiveness of a heuristic is likely to change as the colouring progresses. The
primitive vertex-selection heuristic 1 is perhaps the simplest illustration of this behavior. As we mentioned earlier, this heuristic is essentially the traditional saturation
degree, which has proven to be among the most preferred
heuristics for classical graph colouring. However, applying heuristic 1 in the very early stages of a colouring will
produce a huge number of ties. Moreover, early in a col-
Another potential advantage to this partitioning strategy
is that the vertex-selection process after the hardest subset has been coloured can be based solely on proximity
considerations.
13
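The iterative partitioning above can be sketched as a simple peeling loop. This is a minimal sketch under our own assumptions: the adjacency representation (a dict of neighbour sets) and the return convention are illustrative choices, not the authors' implementation.

```python
def partition_vertices(adj, k):
    """Iteratively peel off vertices whose reduced degree is below k.

    adj: dict mapping each vertex to the set of its neighbours
    k:   number of colours available
    Returns (subsets, hardest): subsets[0] is the easiest-to-colour
    set S1 (coloured last), subsets[1] is S2, and so on; 'hardest'
    is the remaining core, which must be coloured first.
    """
    remaining = set(adj)
    subsets = []
    while True:
        # reduced (quasi-)degree: only neighbours still remaining count
        peel = {v for v in remaining if len(adj[v] & remaining) < k}
        if not peel:
            break
        subsets.append(peel)
        remaining -= peel
    return subsets, remaining
```

Colouring then proceeds with the hardest core first, followed by the subsets in reverse order (S2 before S1); every vertex outside the core is guaranteed a non-conflict colour when its turn comes.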
Some Preliminary Results

We present the preliminary results of applying our approach to the Toronto benchmarks, which are available at ftp://ftp.mie.utoronto.ca/pub/carter/testprob/. This dataset was first introduced in (Carter, Laporte, and Lee 1996) and has since been extensively studied using a wide range of algorithms in the literature. We set the number of colours equal to the number of time slots in the Toronto dataset. Because two versions of the datasets have been circulated under the same name over the last ten years, the problems have been renamed in (Qu et al. 2006). We used version I of the data in our experiments.

Problem | Best results | switch | PC | IE | vs | cs | Best reported
car91 I | 5.22 | 1/23 | 90 | 1 | vs2 | cs0 | 4.97
car92 I | 4.40 | 1/13 | 126 | 2 | vs2 | cs0 | 4.32
ear83 I | 39.28 | 1/5.2 | 115.5 | 1,2 | vs2 | cs0 | 36.16
hec92 I | 12.35 | 1/5 | 16 | 1,2 | vs1 | cs0 | 10.8
kfu93 I | 19.04 | 1/14 | 134 | 1,2 | vs2 | cs0 | 14.0
lse91 | 12.05 | 1/32 | 192 | 1,2 | vs2 | cs0 | 10.5
rye92 | 10.21 | 1/28 | 133.5 | 2 | vs2 | cs0 | 7.3
sta83 I | 163.05 | 1/26.5 | 81 | 1 | vs2 | cs1 | 158.19
tre92 | 8.62 | 1/39 | 207 | 20 | vs2 | cs0 | 8.38
ute92 | 3.62 | 1/16 | 50 | 1,2 | vs1 | cs0 | 3.36
uta92 I | 30.60 | 1/5 | 369 | 1,2 | vs2 | cs1 | 25.8
yor83 I | 42.05 | 1/17 | 340 | 2 | vs2 | cs0 | 39.8

Table 1. Best results with the corresponding settings for the Toronto benchmarks.

Table 1 presents the best results we have obtained so far. We obtained them using the following two groups of three compound vertex selectors:

vs1: 0 7 8 1 2 4 | 1 0 2 4 7 8 | 2 4 7 8
vs2: 0 7 8 9 4 | 9 0 7 8 2 4 | 2 4 7 8

The numbers refer to the primitive vertex-selection heuristics introduced earlier, and the vertical lines separate the three compound selectors that form each group. The first compound selector in a group is applied to the hardest subset until a designated fraction (the switch fraction) of the vertices have been selected and coloured. Then the second compound selector is applied to the rest of the hardest subset. Finally, the third selector, which consists of the four proximity-related primitive heuristics, is applied to the remaining (non-hard) vertices.

We used the following two groups of two compound colour selectors:

cs0: 0 1 2 3 | 0 1 3
cs1: 0 2 3 1 | 0 3 1

The first compound selector in each group was applied to the entire subset of hardest-to-colour vertices, and the second one was applied to the rest of the vertices.

As we described earlier, the thresholds for a bad-proximity colour and a bad-intersect edge were set equal to the average intersection size times two different constant multipliers. In the table, PC is the multiplier for the bad-proximity colour, and IE is the one for the bad-intersect edge. The Settings columns give the values of the switch fraction and the multipliers PC and IE, and indicate the vertex and colour selectors used to produce the given result.

Testing is ongoing and much more needs to be done. However, we can make some initial observations.

Although we haven't fully tested it yet, partitioning appears to improve solution quality most of the time. Except for the "sta83 I" problem instance, all results in column 2 of the table were produced using the partitioning pre-processing.

The results in Table 1 also suggest that, for vertex selection, vs2 outperforms vs1; 10 of the 12 best results were achieved using vs2. Changing the threshold values for badness and changing the switch point between the first and second compound vertex selectors clearly affect the performance of our algorithm.

The last column of Table 1 gives the best results reported in the literature, which were obtained by different constructive methods; no single algorithm outperformed the others on all the problems tested here. Although our totals for proximity penalty are, on average, 13% worse than the best ones reported, we believe our approach still holds promise, particularly in view of the fact that it is, at the moment, a one-pass algorithm without any backtracking or local improvement.

In general, these preliminary results indicate that the performance of the algorithm is sensitive to the settings of the switch points and thresholds. Although we have some initial observations on which settings perform better on which Toronto problems, the relationship between these parameter settings and particular problems is not yet clear. More research effort is needed to develop more intelligent mechanisms that adaptively choose these settings for different problems.

One of our future directions is to use heuristics to choose how to construct the combinations of heuristics. This hyper-heuristic approach (see Burke et al. 2003) has been applied successfully in a range of scheduling and optimization problems, including timetabling. It is well known in meta-heuristics research that different heuristics perform better on different problems, or even on different instances of the same problem. One of the research challenges concerns the automatic design of heuristics for solving a wider range of problems. Developing an automatic algorithm that can intelligently operate on a search space of vertex and colour selectors, switch-point selectors, and threshold settings will be one of our primary research efforts in the future.
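The selector groups and switch fraction described above can be sketched as follows. This is our own illustrative rendering, not the authors' code: primitive heuristics are passed in as plain scoring functions, and the `state` argument stands for whatever bookkeeping (penalty vectors, coloured sets) an implementation maintains.

```python
def make_compound_selector(heuristics):
    """A compound selector is a sequence of primitive heuristics:
    each later heuristic only breaks ties left by the earlier ones."""
    def select(candidates, state):
        pool = list(candidates)
        for h in heuristics:
            scores = {v: h(v, state) for v in pool}
            best = max(scores.values())
            pool = [v for v in pool if scores[v] == best]
            if len(pool) == 1:
                break
        return pool[0]  # any remaining tie is broken arbitrarily
    return select

def colour_hard_subset(hard, first, second, switch_fraction, colour, state):
    """Apply the first compound selector until the switch fraction of
    the hard subset has been coloured, then switch to the second."""
    uncoloured = set(hard)
    switch_at = int(len(hard) * switch_fraction)
    for done in range(len(hard)):
        selector = first if done < switch_at else second
        v = selector(uncoloured, state)
        colour(v, state)
        uncoloured.remove(v)
```

A group such as vs2 would then be built by composing three such compound selectors and the switch fraction from the Settings column.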
Features of the Model Not Being Used Yet

There are some features of our model not used in our current implementation that add to its robustness.

Our model can handle pre-coloured vertices, that is, exams that must be assigned to certain time slots. Furthermore, if certain time slots are forbidden for a particular exam (for example, because the professor is only available on certain days and times), this can easily be handled by setting an initial nonzero penalty for the relevant colour.

As we noted earlier, each colour, which represents a time slot, can have attributes carrying fairly general information, such as start time, duration, and/or finish time. For this paper we used only a single attribute, an integer value between zero and the maximum number of time slots in use, since we were testing our implementation on the Toronto benchmark problems.

Ongoing and Future Work

The robust model presented in this paper can easily be extended or integrated with other techniques to develop more advanced and powerful algorithms. Below we give some possible (and ongoing) research directions.

• Study the effects of varying the switch points, the badness threshold values, and the use of different heuristic combinations. In the context of hyper-heuristics, there are a number of different search spaces to consider:
  o The set of all combinations of one or more of the primitive vertex selectors and of the colour selectors.
  o For a given group of compound vertex selectors, the set of all switch points.
  o For a given group of compound vertex selectors, the set of threshold values for badness.
• In the context of case-based reasoning, test heuristic combinations, thresholds, and switch points with randomly generated problem instances in the Toronto format to see whether certain performance patterns emerge. Previous work on using case-based reasoning to intelligently select graph colouring heuristics (see Burke, Petrovic and Qu, 2006) demonstrated that there are significant, wide-ranging possibilities for research in knowledge-based heuristic design.
• Adding a backtracking component to the algorithm is likely to lower the total proximity penalty. For instance, when every colour assignment for a selected vertex incurs a proximity penalty above some threshold, the algorithm un-colours or re-colours some other vertex in order to reduce the selected vertex's proximity penalty.
• Write an improvement method that takes a colouring produced by our algorithm and looks for vertices whose colours can be changed to decrease the total proximity penalty while maintaining feasibility.
• With the current implementation, we have not yet made full use of the varying conflict severity of edges, nor have we allowed any trade-off between conflict penalty and proximity penalty. In timetabling situations where conflicts must be tolerated, the end-user might specify that a certain amount of conflict penalty is equivalent to a certain amount of proximity penalty, e.g., a proximity violation involving 50 students equals a conflict involving one student. This might lead naturally to a single objective function to be minimized.
• As we mentioned at the start, the model can be adapted to a variety of scenarios, in which a number of parameters would be specified interactively by the end user through an appropriate interface. Follow-up work will include building such an interface.

Acknowledgements

The research for this paper was supported by the University of Nottingham, UK, the Engineering and Physical Sciences Research Council (EPSRC), UK, and an Ashforth Grant from Rollins College, USA.

References

Brelaz, D., New Methods to Color the Vertices of a Graph, Comm. of the ACM 22 (1979), 251-256.

Broder, S., Final Examination Scheduling, Comm. of the ACM 7 (1964), 494-498.

Burke, E.K., Hart, E., Kendall, G., Newall, J., Ross, P. and Schulenburg, S., Hyperheuristics: an Emerging Direction in Modern Search Technology. In: Glover, F. and Kochenberger, G. (eds.), Handbook of Metaheuristics, 457-474, 2003.

Burke, E.K., Kingston, J.H., and de Werra, D., Applications to Timetabling. In: J.L. Gross and J. Yellen (eds.), The Handbook of Graph Theory, Chapman Hall/CRC Press (2004), 445-474.

Burke, E.K., McCollum, B., Meisels, A., Petrovic, S. and Qu, R., A Graph-Based Hyper-Heuristic for Timetabling Problems, European Journal of Operational Research 176 (2007), 177-192.

Burke, E.K., Petrovic, S., and Qu, R., Case-Based Heuristic Selection for Timetabling Problems, Journal of Scheduling 9 (2006), 115-132.

Carter, M.W., A Survey of Practical Applications of Examination Timetabling Algorithms, Operations Research 34 (1986), 193-201.

Carter, M.W., Laporte, G., and Lee, S., Examination Timetabling: Algorithmic Strategies and Applications, J. of the Operations Research Society 47 (1996), 373-383.

de Werra, D., An Introduction to Timetabling, Euro. J. Oper. Res. 19 (1985), 151-162.

Kiaer, L., and Yellen, J., Weighted Graphs and University Timetabling, Computers and Operations Research, Vol. 19, No. 1 (1992a), 59-67.

Kiaer, L., and Yellen, J., Vertex Coloring for Weighted Graphs With Application to Timetabling, Technical Report Series - RHIT, MS TR 92-12 (1992b).

Krarup, J., and de Werra, D., Chromatic Optimisation: Limitations, Objectives, Uses, References, Euro. J. Oper. Res. 11 (1982), 1-19.

Mehta, N.K., The Application of a Graph Coloring Method to an Examination Scheduling Problem, Interfaces 11 (1981), 57-64.

Neufeld, G.A. and Tartar, J., Graph Coloring Conditions for the Existence of Solutions to the Timetable Problem, Comm. of the ACM 17 (1974), 450-453.

Papadimitriou, C.H. and Steiglitz, K., Combinatorial Optimization: Algorithms and Complexity, Prentice-Hall, 1982.

Qu, R., Burke, E.K., McCollum, B., Merlot, L.T.G., and Lee, S.Y., A Survey of Search Methodologies and Automated Approaches for Examination Timetabling, Technical Report NOTT-CS-TR-2006-4 (2006).

Schaerf, A., A Survey of Automated Timetabling, Artificial Intelligence Review 13 (1999), 87-127.

Schmidt, G., and Strohlein, T., Timetable Construction - an Annotated Bibliography, The Computer Journal 23 (1980), 307-316.

Welsh, D.J.A., and Powell, M.B., An Upper Bound for the Chromatic Number of a Graph and its Application to Timetabling Problems, The Computer Journal 10 (1967), 85-86.

Wood, D.C., A System for Computing University Examination Timetables, The Computer Journal 11 (1968), 41-47.
A Multi-Component Framework for Planning and Scheduling Integration
Amedeo Cesta, Simone Fratini and Federico Pecora
ISTC-CNR, National Research Council of Italy
Institute for Cognitive Science and Technology
Rome, Italy
{name.surname}@istc.cnr.it
Abstract
This paper presents our recent work on OMPS, a new timeline-based software architecture for planning and scheduling whose features support software development for space mission planning applications. The architecture is based on the notion of domain components and is deeply grounded in constraint-based reasoning. Components are entities whose properties may vary in time and which model one or more physical subsystems that are relevant to a given planning context. Decisions can be taken on components, and constraints among decisions modify the components' behaviors in time.
Introduction
This paper describes OMPS, the Open Multi-component Planning and Scheduling architecture. OMPS implements a timeline-driven solving strategy. The choice of using timelines lies in their suitability for real-world problem specifications, particularly those of the space mission planning context. Furthermore, timelines are very close to the operational approach adopted by human planners in current space mission planning. Previous timeline-based approaches have been described in (Muscettola et al. 1992; Muscettola 1994; Cesta & Oddi 1996; Jonsson et al. 2000; Frank & Jónsson 2003; Smith, Frank, & Jonsson 2000). We are evolving from our previous work on a planner called OMP (Fratini & Cesta 2005), in which we proposed a uniform view of state variable and resource timelines to integrate Planning & Scheduling (P&S). While the OMP experience led to a proof-of-concept solver for small-scale demonstrations, the current development of OMPS is taking place within the Advanced Planning and Scheduling Initiative (APSI) of the European Space Agency (ESA). This has led to a substantial effort both in re-engineering and in extending our previous work.
The general goal of OMPS is to provide a development environment enabling the design and implementation of mission planning decision support systems to be used by ESA staff. OMPS also inherits our previous experience in developing planning and scheduling support tools for ESA, namely the MEXAR, MEXAR2 and RAXEM systems (Cesta et al. 2007), currently in active duty at ESA's control center. Our aim within APSI is to generalize the approach to mission planning decision support by creating a software framework that facilitates product development.
The OMPS architecture is not only influenced by constraint-based reasoning work, but also introduces the notion of domain components as a primitive entity for knowledge modeling. Components are entities whose properties may vary in time and which model one or more physical subsystems that are relevant to a given planning context. Decisions can be taken on components, and constraints among decisions modify the components' behaviors in time. Components provide the means to achieve modular decision support tool development. A component can be designed to incorporate into a constraint-based reasoning framework entire decisional modules which have been developed independently. The underlying philosophy of OMPS is to provide a development environment within which different, independently developed reasoning modules can be integrated seamlessly. It is useful to see a component as an entity having both static and dynamic aspects. Static descriptions are used to describe "what a component is", e.g., the static property of a light bulb is that it can be "on" or "off". Dynamic properties are instead those features which define how the static properties of the component may vary over time, e.g., a light bulb can go from "on" to "off" and vice versa.
It is tempting to associate components with the concept of state variable à la HSTS (Muscettola et al. 1992; Muscettola 1994). The reason for not doing so is that a state variable models an entity with static properties. The way this entity can change over time is typically specified through constraints on the possible transitions and durations of the states (e.g., through a timed automaton). A component as we define it here represents a more general concept: its behavior over time can be determined by non-trivial reasoning which is internal to the component itself. This distinction is important, as it provides a way to seamlessly incorporate into the OMPS reasoning framework objects which are themselves capable of modifying their behavior according to non-trivial processes, such as sophisticated reasoning algorithms.
This paper is organized as follows. First, we define the basic building block, namely the component, providing examples which show how such an entity can be instantiated to represent a "classical" state variable, a resource, or even a more complex object whose temporal behavior can be described according to its own "internal dynamics". Second, we describe the notion of decision on a component. Again, we provide examples to show how this concept is instantiated on different common types of components. Third, we introduce the concepts of timeline and domain theory, the former providing the driving feature of the solving approach, the latter describing how components interact and how decisions taken on components affect other components. Finally, we briefly illustrate the solving strategy implemented in the current OMPS framework and provide an example. It is worth noting that this paper describes the general approach underlying the OMPS architecture. We do not dwell on the theoretical aspects underlying the architecture, for which the interested reader is referred to (Fratini 2006).
Components and Behaviors
An intrinsic property of components is that they evolve over time, and that decisions can be taken on components which alter their evolution. In OMPS, a component is an entity that has a set of possible temporal evolutions over an interval of time H. The component's evolutions over time are named behaviors. Behaviors are modeled as temporal functions over H, and can be defined over continuous time or as stepwise-constant functions of time.
In general, a component can have many different behaviors. Each behavior describes a different way in which the
component’s properties vary in time during the temporal interval of interest. It is in general possible to provide different representations for these behaviors, depending on (1) the
chosen temporal model (continuous vs. discrete, or time
point based vs. interval based), (2) the nature of the function’s range D (finite vs. infinite, continuous vs. discrete,
symbolic vs. numeric) and (3) the type of function which
describes a behavior (general, piecewise linear, piecewise
constant, impulsive and so on).
Not every function over a given temporal interval can be
taken as a valid behavior for a component. The evolution
of components in time is subject to “physical” constraints
(or approximations thereof). We call consistent behaviors
the ones that actually correspond to a possible evolution in
time according to the real-world characteristics of the entity
we are modeling. A component's consistent behaviors are defined by means of consistency features. In essence, a consistency feature is a function fC which determines which behaviors adhere to the physical attributes of the real-world entity modeled by the component.
It is in general possible to have many different representations of a component’s consistency features: either explicit
(e.g., tables or allowed bounds) or implicit (e.g., constraints,
assertions, and so on). For instance, let us model a light bulb
component. A light bulb’s behaviors can take three values:
“on”, “off” and “burned”. Supposing the light bulb cannot
be fixed, a rule could state that any behavior that takes the
value “burned” at a time t is consistent if and only if such a
value is taken also for any time t′ > t. This is a declarative approach to describing the consistency feature fC. Different actual representations of this function can be used, depending also on the representation of the behavior.
A few more concrete examples of components and their
associated consistency features are the following.
State variable. Behaviors: piecewise constant functions
over a finite, discrete set of symbols which represent the
values that can be taken by the state variable. Each behavior represents a different sequence of values taken by
the component. Consistency Features: a set of sequence
constraints, i.e., a set of rules that specify which transitions between allowed values are legal, and a set of lower
and upper bounds on the duration of each allowed value.
The model can be for instance represented as a timed automaton (Alur & Dill 1994) (e.g., the three state variables
in Figure 2).
Note that a distinguishing feature of a state variable is that
not all the transitions between its values are allowed.
Resource (renewable). Behaviors: integer or real functions of time (piecewise linear, exponential, or even more complex, depending on the model you want to set up).
Each behavior represents a different profile of resource
consumption. Consistency Feature: minimum and maximum availability. Each behavior is consistent if it lies
between the minimum and maximum availability during
the entire planning interval.
Note that a distinguishing feature of a resource is that there
are bounds of availability.
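Consistency features like the two just listed can be sketched as predicates over behaviors. Representing a behavior as a list of (time, value) breakpoints of a stepwise-constant function is one of the representations the paper allows; it is chosen here purely for illustration.

```python
def resource_consistent(behavior, min_avail, max_avail):
    """f_C for a renewable resource: a consumption profile is
    consistent iff its level stays within the availability bounds."""
    return all(min_avail <= level <= max_avail for _, level in behavior)

def light_bulb_consistent(behavior):
    """f_C for the light-bulb example: once the value 'burned' is
    taken at some time, every later value must also be 'burned'."""
    burned = False
    for _, value in behavior:
        if burned and value != 'burned':
            return False
        burned = burned or value == 'burned'
    return True
```

For a state variable, an analogous predicate would check each pair of adjacent values against the legal transitions and each value's duration against its bounds.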
In general, the component-based approach makes it possible to accommodate a pre-existing solving component into a larger planning problem. For instance, it is possible to incorporate the MEXAR2 application (Cesta et al. 2007) as a component, the consistency property of which is not computed directly on the values taken by the behaviors, but as a function of those behaviors.1
Component Decisions
Now that we have defined the concept of component as the fundamental building block of the OMPS architecture, the next step is to define how component behaviors can be altered (within the physical constraints imposed by consistency features).
We define a component decision as a pair ⟨τ, ν⟩, where τ is a given temporal element and ν is a value. Specifically, τ can be:
• A time instant (TI) t representing a moment in time.
• A time interval (TIN), a pair of TIs defining an interval [ts, te) such that te > ts.
The specific form of the value ν depends on the type of component on which the decision is defined. For instance, this can be an amount of resource usage for a resource component, or a disjunction of allowed values for a state variable. Regardless of the type of component, the value of any component decision can contain parameters. In OMPS, parameters can be numeric or enumerations, and can be used to express complex values, such as "transmit(?bitrate)" for a state variable which models a communications system. Further details on value parameters will be given in the following section.

1 Basically, it is computed as the difference between external uploads and the downloaded amount stated by the values taken by the behaviors. See (Cesta et al. 2007) for details on the MEXAR2 application.
Figure 1: The update function computes the result of a decision on a component's set of behaviors. The figure exemplifies this effect given two decisions: δ′ imposes a value d′ for the behaviors of the component at time instant t1; δ′′ imposes that the values of all behaviors converge to d′′ after time instant t2.
Overall, a component decision is something that happens somewhere in time and modifies a component's behaviors as described by the value ν. In OMPS, the consequences of these decisions are computed by the components by means of an update function fU. This function determines how the component's behaviors change as a consequence of a given decision. In other words, a decision changes a component's set of behaviors, and fU describes how. One decision could state, for instance, "keep all the behaviors that are equal to d′ at t1", and another could state "all the behaviors must be equal to d′′ after t2". Given a decision on a component with a given set of behaviors, the update function computes the resulting set (see Figure 1).
In the following, we instantiate the concept of decision for
the two types of components we have introduced so far.
State variable. Temporal element: a TIN. Value: a subset of the values that can be taken by the state variable (the range of its behaviors) in the given time frame. Update Function: this kind of decision implies a choice of values in a given time interval; the subset of values is interpreted as a disjunction of allowed values in that interval. Applying a decision to a set of behaviors excludes from the set all behaviors that do not take any of the chosen values in the given interval.
Resource (renewable). Temporal element: a TIN. Value: the quantity of resource allocated in the given interval; a decision is basically an activity, an amount of allocated resource in a time interval. Update Function: the resource profile is modified to take this allocation into account. Outside the specified interval the profile is not affected.
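The state-variable case of the update function fU can be sketched as a filter over a behavior set. As before, behaviors are illustrative stepwise-constant (time, value) step lists; `value_at` is a helper we introduce for the sketch, and time is sampled at integer points for simplicity.

```python
def value_at(behavior, t):
    """Value of a stepwise-constant behavior at time t; the behavior
    is a list of (start_time, value) pairs in increasing time order."""
    current = behavior[0][1]
    for start, value in behavior:
        if start <= t:
            current = value
    return current

def apply_state_variable_decision(behaviors, ts, te, allowed):
    """f_U for a state-variable decision over [ts, te): keep only the
    behaviors that take one of the allowed (disjunctive) values
    throughout the interval, sampled at integer time points."""
    return [b for b in behaviors
            if all(value_at(b, t) in allowed for t in range(ts, te))]
```

A resource decision would instead transform each profile by adding the allocated amount over [ts, te), leaving the profile untouched elsewhere.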
Domain Theory
So far, we have defined components in isolation. When components are put together to model a real domain, they cannot be considered reciprocally decoupled; rather, we need to take into account the fact that they influence each other's behavior.
In OMPS, it is possible to specify such inter-component relations in what we call a domain theory. Specifically, given a set of components, a domain theory is a function fDT which defines how decisions taken on one component affect the behaviors of other components. Just as a consistency feature fC describes which behaviors are allowed with respect to the features of a single component, the domain theory specifies which of the behaviors belonging to all modeled components are concurrently admissible.
In practice, a domain theory is a collection of synchronizations. A synchronization essentially represents a rule
stating that a certain decision on a given component (called
the reference component) can lead to the application of a
new decision on another component (called target component). More specifically, a synchronization has the form
⟨Ci, V⟩ −→ ⟨Cj, V′, R⟩, where: Ci is the reference component; V is the value of a component decision on Ci which makes the synchronization applicable; Cj is the target component on which a new decision with value V′ will be imposed; and R is a set of relations which bind the reference and target decisions.
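A synchronization can be pictured as a simple record together with an applicability test. The sketch below is our own illustration (component names "C1"/"C2" and the values are hypothetical, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class Synchronization:
    """Sketch of a synchronization <C_i, V> --> <C_j, V', R>
    (representation and names are ours, not OMPS code)."""
    reference: str      # C_i, the reference component
    ref_value: str      # V, the value making the rule applicable
    target: str         # C_j, the target component
    target_value: str   # V', the value imposed on C_j
    relations: tuple    # R, relations binding reference and target decisions

def applicable(sync, component, value):
    """A decision (component, value) triggers the rule when it matches
    the reference side of the synchronization."""
    return sync.reference == component and sync.ref_value == value

# A hypothetical rule: value "v" on component "C1" leads to imposing
# value "w" on component "C2", with the two decisions overlapping in time.
rule = Synchronization("C1", "v", "C2", "w", ("DURING",))
```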
In order to clarify how such inter-component relationships
are modeled as a domain theory, let us give an example.
Example 1 The planning problem consists in deciding data
transmission commands from a satellite orbiting Mars to
Earth within given visibility windows. The spacecraft’s orbits for the entire mission are given, and are not subject to
planning. The fundamental elements which constitute the
system are: the satellite’s Transmission System (TS), which
can be either in “transmit mode” on a given ground station or idle; the satellite’s Pointing System (PS); and the
satellite’s battery (BAT). In addition, an external, uncontrollable set of properties is also given, namely Ground Station
Visibility (GSV) and Solar Flux (SF). Station visibility windows are intervals of time in which given ground stations
are available for transmission, while the solar flux represents the amount of power generated by the solar panels
given the spacecraft’s orbit. Since the orbits are given for
the entire mission, the power provided by the solar flux is a
given function of time sf(t). The satellite’s battery accumulates power through the solar flux and is discharged every
time the satellite is slewing or transmitting data. Finally, it
is required that the spacecraft’s battery is never discharged
beyond a given minimum power level (in order to always
maintain a minimum level of charge in case an emergency
manoeuvre needs to be performed).
Instantiating this example into the OMPS framework thus
equates to defining five components:
PS, TS and GSV. The spacecraft’s pointing and transmission systems, as well as station visibility are modeled with
three state variables. The consistency features of these
state variables (possible states, bounds on their duration,
and allowed transitions) are depicted in Figure 2. The figure also shows the synchronizations involving the three
components: one states that the value “locked(?st3)” on
component PS requires the value “visible(?st6)” on component GSV (where ?st3 = ?st6, i.e., the two values must refer to the same station); another synchronization asserts that transmitting on a certain station requires the PS component to be locked on that station; lastly, both slewing and transmission entail the use of a constant amount of power from the battery.
Figure 2: State variables and domain theory for the running example.
SF. The solar flux is modeled as a reusable resource. Given that the flight dynamics of the spacecraft are given (i.e., the angle of incidence of the Sun’s radiation with the solar panels is given), the profile of the solar flux resource is a given function of time sf(t) which is not subject to change. Thus, decisions are never imposed on this component (i.e., the SF component has only one behavior); rather, its behavior is solely responsible for determining power production on the battery (through the synchronization between the SF and BAT components).
BAT. The spacecraft’s battery component is modeled as follows. Its consistency features are a maximum and a minimum power level (max, min), the former representing the battery’s maximum capacity, the latter representing the battery’s minimum depth of discharge. The BAT component’s behavior is a temporal function bat(t) representing the battery’s level of charge. Assuming that power consumption decisions resulting from the TS and PS components are described by the function cons(t), the update function calculates the consequences of power production (sf(t)) and consumption on bat(t) as follows:
bat(t) = L0 + α ∫₀ᵗ (sf(t) − cons(t)) dt,   if L0 + α ∫₀ᵗ (sf(t) − cons(t)) dt ≤ max;
bat(t) = max,   otherwise;
where L0 is the initial charge of the battery at the beginning of the planning horizon and α is a constant parameter which approximates the charging profile.
In summary, we employ components of three types: state variables to model the PS, TS and GSV elements, a reusable resource to maintain the solar flux profile, and an ad-hoc component to model the spacecraft’s battery. Notice that this latter component is essentially an extension of a reusable resource: whereas a reusable resource’s update function is trivially the sum operator (imposing an activity on a reusable resource entails that the resource’s availability is decreased by the value of the activity), the BAT’s update function calculates the consequences of activities as per the above integration over the planning horizon.
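The BAT update function can be approximated numerically. The discretization below is our own sketch (the step size and the flux/consumption profiles are invented for illustration): the level accumulates α·(sf(t) − cons(t)) from the initial charge L0, clipped at the maximum capacity.

```python
def battery_level(sf, cons, L0, alpha, max_level, horizon, dt=1.0):
    """Discretized sketch of the BAT update function: accumulate
    alpha * (sf(t) - cons(t)) starting from L0, clipping at max_level
    (per-step clipping; OMPS's actual numerical scheme may differ)."""
    levels, bat, t = [], L0, 0.0
    while t < horizon:
        bat = min(max_level, bat + alpha * (sf(t) - cons(t)) * dt)
        levels.append(bat)
        t += dt
    return levels

# Constant solar flux, one transmission burst in [3, 6): the level dips
# during the burst and recharges afterwards, never exceeding max_level.
lv = battery_level(sf=lambda t: 2.0,
                   cons=lambda t: 5.0 if 3 <= t < 6 else 0.0,
                   L0=10.0, alpha=1.0, max_level=12.0, horizon=10)
```

A BAT consistency check then amounts to testing that min(lv) never falls below the component's minimum power level.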
The fundamental tool for defining dependencies among
component decisions are relations, of which O MPS provides
three types; temporal, value and parameter relations.
Given two component decisions, a temporal relation is
a constraint among the temporal elements of the two decisions. A temporal relation among two decisions A and B
can prescribe temporal requirements such as those modeled
by Allen’s interval algebra (Allen 1983), e.g., A EQUALS
B, or A OVERLAPS [l,u] B.
A value relation between two component decisions is a
constraint among the values of the two decisions. A value
relation among two decisions A and B can prescribe requirements such as A EQUALS B, or A DIFFERENT B (meaning that the value of decision A must be equal to or different
from the value of decision B). Notice that temporal relations
can involve any two component decisions, e.g., an activity (a
resource decision) should occur BEFORE a value choice (a
state variable decision). Conversely, value relations are defined among decisions pertaining to components of the same
type.
Lastly, a parameter relation among component decisions
is a constraint among the values of the parameters of the
two decisions. Such relations can prescribe linear inequalities between parameter variables. For instance, a parameter constraint between two decisions with values “available(?antenna, ?bandwidth)” and “transmit(?bitrate)” can be
used to express the requirement that transmission should not
use more than half the available bandwidth, i.e., ?bitrate
≤ 0.5·?bandwidth.
Component decisions and relations are maintained in a decision network: given a set of components C, a decision network is a graph ⟨V, E⟩, where each vertex δ^n_Ci ∈ V is a component decision defined on a component Ci ∈ C, and each edge (δ^n_Ci, δ^m_Cj) is a temporal, value or parameter relation among component decisions δ^n_Ci and δ^m_Cj.
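The decision network and the three relation types can be pictured as a small labeled graph. The sketch below (representation, names and parameter values are our own illustration) also shows the bandwidth parameter relation from the example above, checked on instantiated parameters:

```python
class DecisionNetwork:
    """Sketch of a decision network: vertices are component decisions,
    edges are temporal, value or parameter relations (representation ours)."""
    def __init__(self):
        self.decisions = []   # (component, value, parameters)
        self.relations = []   # (i, j, kind, constraint)

    def add_decision(self, component, value, params=None):
        self.decisions.append((component, value, params or {}))
        return len(self.decisions) - 1

    def relate(self, i, j, kind, constraint):
        assert kind in ("temporal", "value", "parameter")
        self.relations.append((i, j, kind, constraint))

net = DecisionNetwork()
a = net.add_decision("GSV", "available(?antenna, ?bandwidth)",
                     {"?bandwidth": 400.0})
b = net.add_decision("TS", "transmit(?bitrate)", {"?bitrate": 180.0})
net.relate(a, b, "temporal", "A CONTAINS B")   # visibility covers transmission
net.relate(a, b, "parameter", "?bitrate <= 0.5 * ?bandwidth")

# Checking the parameter relation on the instantiated parameters:
ok = net.decisions[b][2]["?bitrate"] <= 0.5 * net.decisions[a][2]["?bandwidth"]
```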
We now define the concepts of initial condition and goal.
An initial condition for our problem consists in a set of
value choices for the GSV state variable. These decisions
reflect the visibility windows given by the Earth’s position
with respect to the (given) orbit of the satellite. Notice that
the allowed values of the GSV component are not references
for a synchronization, thus they cannot lead to the insertion
in the plan of new component decisions.
Conversely, a goal consists in a set of component decisions which are intended to trigger the solving strategy to exploit the domain theory’s synchronizations to synthesize decisions. In our example, this set consists in value choices on
the TS component which assert a desired number of “transmit(?st5)” values. Notice that these value choices can be
allocated flexibly on the timeline.
In general, the characterizing feature of decisions which
define an initial condition is that these decisions do not lead
to application of the domain theory. Conversely, goals directly or indirectly entail the need to apply synchronizations
in order to reach domain theory compliance. This mechanism is the core of the solving process described in the following section.
Reasoning About Timelines in OMPS
OMPS implements a solving strategy which is based on the notion of timeline. A timeline is defined for a component as an ordered sequence of its values. A component’s timeline is defined by the set of decisions imposed on that component. Timelines represent the consequences of the component decisions over the time axis, i.e., a timeline for a component is the collection of all its behaviors as obtained by applying the f U function given the component decisions taken on it.
The overall solving process implemented in OMPS is composed of three main steps, namely domain theory application, timeline management and solution extraction. More in detail, timeline management consists in extraction, scheduling and completion. Indeed, a fundamental principle of the OMPS approach is its timeline-driven solving process.
Domain Theory Application
Component decisions possess an attribute which changes during the solving process, namely whether or not a decision is justified. OMPS’s domain theory application step consists in iteratively tagging decisions as justified according to the following rules (iterated over all decisions δ in the decision network):
1. If δ unifies with another decision in the network, then mark δ as justified;
2. If δ’s value unifies with the reference value of a synchronization in the domain theory, then mark δ as justified and add the target decision(s) and relations to the decision network;
3. If δ does not unify with any reference value in the domain theory, mark δ as justified.
The previous definitions of initial condition and goal can be understood in terms of domain theory application as follows: an initial condition is a set of component decisions whose justification follows trivially from the domain, i.e., it is the direct result of the application of step 3; a goal, on the other hand, is a set of component decisions whose justification leads to the application of synchronizations in the domain theory (i.e., step 2).
Timeline Management
Timeline management is a collection of procedures which are necessary to go from a decision network to a completely instantiated set of behaviors. These behaviors ultimately represent a solution to the planning problem. Timeline management may introduce new component decisions as well as new relations to the decision network. For this reason, the OMPS solving process iterates domain theory application and timeline management steps until the decision network is fully justified and a consistent set of behaviors can be extracted from all component timelines. The specific procedures which compose timeline management are timeline extraction, timeline scheduling and timeline completion. Before showing how these procedures are composed to form the core of our planning approach, we describe the three steps in detail.
Timeline Extraction. The outcome of the domain theory application step is a decision network where all decisions are justified. Nevertheless, since every component decision’s temporal element (which can be a TI or TIN) is maintained in an underlying flexible temporal network, these decisions are not fixed in time; rather, they are free to move between the temporal bounds obtained as a consequence of the temporal relations imposed on the temporal elements. For this reason, a timeline must be extracted from the decision network, i.e., the flexible placement of temporal elements implies the need of synthesizing a total ordering among floating decisions. Specifically, this process depends on the component for which extraction is performed. For a resource, for instance, the timeline is computed by ordering the allocated activities and summing the requirements of those that overlap. For a state variable, the effects of temporally overlapping decisions are computed by intersecting the required values, to obtain (if possible) in each time interval a value which complies with all the decisions that overlap during the time interval.
Figure 3: Three value choices on a state variable, and the resulting earliest start time (EST) timeline.
In the current implementation, we follow for every type
of component an earliest start-time (EST) approach, i.e., we
have a timeline where all component decisions are assumed
to occur at their earliest start time and last the shortest time
possible. Figure 3 shows the timeline extraction mechanism
for a state variable. The example illustrates two properties
of timelines, namely flaws and inconsistencies.
The first of these features depends on the fact that decisions imposed on the state variable do not result in a complete coverage of the planning horizon. The timeline in the figure contains what we call a flaw in the
interval [30, 40]. A flaw is a segment of time in which no
decision has been taken, thus the state variable within this
segment of time is not constrained to take on certain values, rather it can, in principle, assume any one of its allowed
values. The process of deciding which value(s) are admissible with respect to the state variable’s internal consistency
features (i.e., the component’s f C function) is clearly a nontrivial process. Indeed, this is precisely the objective of timeline completion.
In addition to flaws, inconsistencies can arise in the timeline. The nature of inconsistencies depends on the specific component we are dealing with. In the case of state
variables, an inconsistency occurs when two or more value
choices whose intersection is empty overlap in time. In the
example above, this occurs in the interval [0, 10]. As opposed to flaws, inconsistencies do not require the generation
of additional component decisions, rather they can be resolved by posting further temporal constraints. For instance,
the above inconsistency can be resolved by imposing a BEFORE constraint which forces (C(z)) to occur after (A(x),
B(y)). In the case of the BAT component mentioned earlier,
an inconsistency occurs when slewing and/or transmission
decisions have led to a situation in which bat(t) ≤ min
for some t ∈ H. As in the previous example, BAT inconsistencies can be resolved by posting temporal constraints
between the over-consuming activities. In general, we call
the process of resolving inconsistencies timeline scheduling.
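The flaw and inconsistency detection just described can be sketched for a state variable as follows (the representation is our own; the three decisions are loosely modeled on the Figure 3 situation, with an empty intersection in [0, 10) and an uncovered segment in [30, 40)):

```python
def extract_est_timeline(decisions, horizon):
    """Sketch of EST timeline extraction for a state variable.
    decisions: (start, end, value_set) triples placed at earliest start times.
    Overlapping decisions are intersected; an empty intersection is an
    inconsistency, an uncovered segment is a flaw (representation ours)."""
    points = sorted({0, horizon, *(p for s, e, _ in decisions for p in (s, e))})
    segments, flaws, inconsistencies = [], [], []
    for lo, hi in zip(points, points[1:]):
        active = [v for s, e, v in decisions if s <= lo and hi <= e]
        if not active:
            flaws.append((lo, hi))                       # nothing covers [lo, hi)
        elif (common := set.intersection(*active)):
            segments.append((lo, hi, common))            # values all decisions allow
        else:
            inconsistencies.append((lo, hi))             # empty intersection
    return segments, flaws, inconsistencies

# Two decisions overlap in [0, 10) with no common value (an inconsistency),
# and no decision covers [30, 40) (a flaw).
decs = [(0, 30, {"A", "B"}), (0, 10, {"C"}), (40, 60, {"B", "C"})]
segs, flaws, incons = extract_est_timeline(decs, 60)
```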
Timeline Scheduling. The scheduling process deals with
the problem of resolving inconsistencies. Once again, the
process depends on the component. For a resource, activity overlapping results in an inconsistency if the combined
usage of the overlapping activities requires more than the
resource’s capacity. For a state variable, any overlap of
decisions that require conflicting values must be avoided.
The timeline scheduling process adds constraints to
the decision network to avoid such inconsistencies through
a constraint posting algorithm (Cesta, Oddi, & Smith 2002).
Timeline Completion. This process is needed for components, such as state variables, which require that every interval of time in a solution be covered by a decision (this is trivially true for reusable resources as we have defined them in this paper). If it is not possible to force an ordering among decisions in such a way that the entire planning horizon is decided, then a flaw completion routine is triggered. This step adds new decisions to the plan.
Solution Extraction
Once domain application and timeline management have
successfully converged on a set of timelines with no inconsistencies or flaws, the next step is to extract from the timelines one or more consistent behaviors. Recall that a behavior is one particular choice of values for each temporal segment in a component’s timeline. The previous domain theory application and timeline management steps have filtered
out all behaviors that are not, respectively, consistent with
respect to the domain theory and the components’ consistency features. Behavior extraction deals with the problem
of determining a consistent set of fully instantiated behaviors for every component. Since every segment of a timeline potentially represents a disjunction of values, behavior
extraction must choose specific behaviors consistently. Furthermore, not all values in timeline segments are fully instantiated with respect to parameters, thus behavior extraction must also take into account the consistent instantiation
of values across all components.
Overall Solving Process
In the current OMPS solver the previously illustrated steps
are interleaved as sketched in Figure 4.
Figure 4: The OMPS solving process.
The first step in the planning process is domain theory application, whose aim is to support non-justified decisions. If
there is no way to support all the decisions in the plan, the
algorithm fails.
Once every decision has been supported, the solver tries
to extract a timeline for each component. At this point, it can
happen that some timelines are not consistent, meaning that
there exists a time interval over which conflicting decisions
overlap (an inconsistency). In such a situation, a scheduling
step is triggered. If the scheduler cannot solve all conflicts,
the solver backtracks directly to domain theory application,
and searches for a different way of supporting goals.
If the solver manages to extract a conflict-free set of timelines, it then triggers a timeline-completion step on any timeline which is found to have flaws. It may happen that some
timelines cannot be completed. In this case, the solver backtracks again to the previous domain theory application step,
and again searches for a way of justifying all decisions. If
the completion step succeeds for all timelines, the solver
returns to domain theory application, as timeline completion
has added decisions which are not justified.
Once all timelines are conflict-free and complete, the
solver is ready to extract behaviors. If behavior extraction
fails, the solver attempts to backtrack to timeline completion. This is because our currently implemented completion
algorithm attempts to complete all incomplete timelines separately: thus it may easily happen that a completion over
a timeline compromises behavior extraction on a different
timeline (since values are linked with synchronizations). If
this fails, the solver must return to domain theory application
in order to search for a different plan altogether.
Finally, the whole process ends when the solver succeeds
in extracting at least one behavior for each timeline. This
collection of mutually consistent behaviors represents a fully
instantiated solution to the planning problem.
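The control flow just described can be summarized in pseudocode (every call below is a placeholder for the corresponding step; the backtracking bookkeeping of the actual solver is omitted):

```
solve(network):
    loop:
        if not apply_domain_theory(network): fail         # cannot justify all decisions
        timelines <- extract_EST_timelines(network)
        if inconsistent(timelines):
            if not schedule(timelines): restart loop      # backtrack to domain theory
        if flawed(timelines):
            if not complete(timelines): restart loop      # backtrack to domain theory
            restart loop                                  # new decisions need justification
        behaviors <- extract_behaviors(timelines)
        if behaviors found: return behaviors              # fully instantiated solution
        if not retry_completion(timelines): restart loop  # else another completion attempt
```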
Figure 5: EST timelines for the TS and GSV state variables.
Going back to our running example, the timelines of the
GSV and TS components resulting from the application of
a set of initial condition and goal decisions are shown in
Figure 5 (no initial decision or goal is specified for the PS
component). Notice that the GSV timeline is fully defined,
reflecting the fact that the GSV component is not controllable, rather it represents the evolution in time of station visibility given the fully defined flight dynamics of the satellite. The TS timeline contains five “transmit” value choices,
through which we represent our goal. These value choices
are allocated within flexible time bounds (the figure shows
an EST timeline for the component, in which these decisions are anchored to their earliest start time and duration).
As opposed to the GSV timeline, the TS timeline contains
flaws, and it is precisely these flaws that will be “filled” by
the solving algorithm. In addition, it is the application during the solving process of the synchronization between the GSV and PS components that will determine the construction of the PS timeline (which is completely void of component decisions in the initial situation), reflecting the fact that it is necessary to point the satellite towards the visible target before initiating transmission.
The behaviors extracted from the TS and PS components’ timelines after applying this solving procedure on our example are shown in Figure 6.
Figure 6: EST behaviors for the TS and PS state variables.
Related Work
The synthesis of OMPS is aimed at creating an extensible problem-solving architecture to support the development of different applications. It is worth making a comparison with other systems that, for different reasons, share the same goal as OMPS.
Similarly to OMPS’s timelines, IxTeT (Ghallab & Laruelle 1994) follows a domain representation ontology based
on state attributes which assume values in a given domain.
Unlike OMPS, in IxTeT system dynamics are represented
with a STRIPS-like logical formalism. Resource reasoning
is used as a conflict analyzer on top of the plan representation.
Visopt ShopFloor (Bartak 2002) is grounded on the
idea of working with dynamic scheduling problems where it
is not possible to describe in advance activity sets that have
to be scheduled. That is the same principle behind the integration of planning into scheduling done in both OMP and OMPS: to put a domain theory behind a scheduling problem to gain flexibility in managing tasks and goal-driven problem solving. Dynamic aspects of the problem are described using resources with complex behaviors. These resources are close to our state variables, but they are managed using global constraints instead of a precedence constraint posting approach as we are currently doing. Moreover, although we are working on P&S integration, we maintain a clear distinction between planning and scheduling at the level of modeling problem features.
HSTS (Muscettola et al. 1992; Muscettola 1994) was the first system to propose a modeling language with an explicit representation of timelines, using the concept of state variables. In fact, we are extending an HSTS-like state-variable modeling language with a generic timeline-oriented approach: in OMPS, timelines represent not only state variable evolutions, but also multi-capacity and consumable resources, and may come to include generic components having temporal functions as behaviors. A clear difference w.r.t.
HSTS is that in our approach we see different types of timelines as separate modules, while HSTS, and its derivatives
RAX-PS and EUROPA, view resources as specialized state
variables. Their view is certainly appealing but leaves the
problem of integrating in a clean way multi-capacity resources open. In fact, while it is immediate to represent binary resources as state variables, it is quite difficult to model
and handle cumulative resources. We believe that in these
cases the best way is to exploit state-of-the-art scheduling
technologies, hence our choice of treating resources as an
independent type of component.
Conclusions
In this article we have given a preliminary overview of OMPS, a P&S system which follows a component-based, timeline-driven approach to planning and scheduling integration. The approach draws from and attempts to generalize our previous experience in mission planning tool development for ESA (Cesta et al. 2007) and to extend our previous work on the OMP planning system (Fratini & Cesta
2005).
A distinctive feature of the OMPS architecture is that it
provides a framework for reasoning about any entity which
can be modeled as a component, i.e., as a set of properties that vary in time. This includes “classical” concepts
such as state variables (as defined in HSTS (Muscettola
et al. 1992; Muscettola 1994) and studied also in subsequent work (Cesta & Oddi 1996; Jonsson et al. 2000;
Frank & Jónsson 2003)), and renewable/consumable resources (Laborie 2003; Cesta, Oddi, & Smith 2002).
Another feature of the component-based architecture is
the possibility to modularize the reasoning algorithms that
are specific to each type of component within the component
itself, e.g., profile-based scheduling routines for resource inconsistency resolution are implemented within the resource
component itself. The most important consequence of this is
the possibility to include previously implemented/deployed
ad-hoc components within the framework. We have given
an example of this in this paper with the battery component,
which essentially extends a reusable resource. The ability
to encapsulate potentially complex modules within OMPS
components provides a strong added value in developing
real-world planning systems. Specifically, this capability
can be leveraged to include entire decisional modules which
are already present in the overall decision process within
which OMPS is deployed. An example is the MEXAR2 system (Cesta et al. 2007)², whose ability to solve the Mars Express memory dumping problem can be encapsulated into
an ad-hoc component.
The ability to employ previously developed subsystems
like MEXAR2 benefits decision support system development
in a number of ways. From the engineering point of view, it
facilitates the task of fast prototyping, providing a means
to incorporate complex functionality by employing previously developed decision support aids. Also, this feature
contributes to increasing the reliability of development prototypes, as existing components (especially in the context
of ESA mission planning) have typically undergone intensive testing before being deployed. Second, the component-based architecture allows us to leverage the efficiency of problem decomposition. Again, MEXAR2 provides a meaningful example, as it is a highly optimized decision support system for solving the very specific problem of memory dumping. Lastly, the ability to re-use components brings with it
the advantage of preserving potentially crucial user interface
paradigms, the re-engineering of which may be a strong deterrent for adopting innovative problem solving strategies.
Acknowledgments
The authors are currently supported by the European Space Agency (ESA) within the Advanced Planning and Scheduling Initiative (APSI). APSI partners are VEGA GmbH, ONERA, the University of Milan and ISTC-CNR. Thanks to Angelo Oddi and Gabriella Cortellessa for their constant support, and to Carlo Matteo Scalzo for contributing an implementation of the battery component.
² The MEXAR2 system is a specific decision support aid developed by the Planning and Scheduling Team which is currently in daily use within ESA’s Mars Express mission.
References
Allen, J. 1983. Maintaining knowledge about temporal intervals.
Communications of the ACM 26(11):832–843.
Alur, R., and Dill, D. L. 1994. A theory of timed automata. Theor.
Comput. Sci. 126(2):183–235.
Bartak, R. 2002. Visopt ShopFloor: On the edge of planning and
scheduling. In van Hentenryck, P., ed., Proceedings of the 8th International Conference on Principles and Practice of Constraint
Programming (CP2002), LNCS 2470, 587–602. Springer Verlag.
Cesta, A., and Oddi, A. 1996. DDL.1: A Formal Description
of a Constraint Representation Language for Physical Domains.
In Ghallab, M., and Milani, A., eds., New Directions in AI
Planning. IOS Press.
Cesta, A.; Cortellessa, G.; Fratini, S.; Oddi, A.; and Policella, N.
2007. An innovative product for space mission planning – an a
posteriori evaluation. In Proceedings of the 17th International
Conference on Automated Planning & Scheduling (ICAPS-07).
Cesta, A.; Oddi, A.; and Smith, S. F. 2002. A Constraint-based
method for Project Scheduling with Time Windows. Journal of
Heuristics 8(1):109–136.
Frank, J., and Jónsson, A. 2003. Constraint-based attribute and
interval planning. Constraints 8(4):339–364.
Fratini, S., and Cesta, A. 2005. The Integration of Planning into
Scheduling with OMP. In Proceedings of the 2nd Workshop on
Integrating Planning into Scheduling (WIPIS) at AAAI-05, Pittsburgh, USA.
Fratini, S. 2006. Integrating Planning and Scheduling in a
Component-Based Perspective: from Theory to Practice. Ph.D.
Dissertation, University of Rome “La Sapienza”, Faculty of Engineering, Department of Computer and System Science.
Ghallab, M., and Laruelle, H. 1994. Representation and Control
in IxTeT, a Temporal Planner. In Proceedings of the Second International Conference on Artificial Intelligence Planning Scheduling
Systems. AAAI Press.
Jonsson, A.; Morris, P.; Muscettola, N.; Rajan, K.; and Smith,
B. 2000. Planning in Interplanetary Space: Theory and Practice.
In Proceedings of the Fifth Int. Conf. on Artificial Intelligence
Planning and Scheduling (AIPS-00).
Laborie, P. 2003. Algorithms for Propagating Resource Constraints in AI Planning and Scheduling: Existing Approaches and
new Results. Artificial Intelligence 143:151–188.
Muscettola, N.; Smith, S.; Cesta, A.; and D’Aloisi, D. 1992. Coordinating Space Telescope Operations in an Integrated Planning
and Scheduling Architecture. IEEE Control Systems 12(1):28–37.
Muscettola, N. 1994. HSTS: Integrating Planning and Scheduling. In Zweben, M., and Fox, M. S., eds., Intelligent Scheduling.
Morgan Kaufmann.
Smith, D.; Frank, J.; and Jonsson, A. 2000. Bridging the gap between planning and scheduling. Knowledge Engineering Review
15(1):47–83.
Scheduling Monotone Interval Orders
on Typed Task Systems
Benoît Dupont de Dinechin
STMicroelectronics STS/CEC
12, rue Jules Horowitz - BP 217. F-38019 Grenoble
benoit.dupont-de-dinechin@st.com
Abstract
We present a modification of the Leung-Palem-Pnueli parallel
processors scheduling algorithm and prove its optimality for
scheduling monotone interval orders with release dates and
deadlines on Unit Execution Time (UET) typed task systems
in polynomial time. This problem is motivated by the relaxation of Resource-Constrained Project Scheduling Problems
(RCPSP) with precedence delays and UET operations.
Introduction
Scheduling problems on typed task systems (Jaffe 1980) generalize the parallel processors scheduling problems by introducing k types {τr}1≤r≤k and Σ1≤r≤k mr processors, with
mr processors of type τr . Each operation Oi has a type
τi ∈ {τr }1≤r≤k and may only execute on processors of type
τi . We denote typed task systems with Σk P in the α-field of
the α|β|γ scheduling problem denotation (Brucker 2004).
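As a toy illustration of the Σk P model (our own sketch, not from the paper): a greedy list scheduler for UET typed task systems, where at each time unit every type τr can start at most mr ready operations.

```python
# Toy greedy list scheduler for UET typed task systems (our own sketch).
# tasks: name -> (type, predecessor set); m: type -> number of processors.
# Assumes the precedence relation is acyclic.
def list_schedule(tasks, m):
    start, t = {}, 0
    while len(start) < len(tasks):
        used = {r: 0 for r in m}                    # per-type usage at time t
        for v, (r, preds) in tasks.items():
            # UET: a predecessor started at s completes at s + 1.
            ready = v not in start and all(start.get(p, t) < t for p in preds)
            if ready and used[r] < m[r]:            # a processor of type r is free
                start[v] = t
                used[r] += 1
        t += 1
    return start

# Two types with one processor each: "a" and "b" compete for the adder,
# "c" and "d" for the multiplier, subject to the precedences shown.
tasks = {"a": ("add", set()), "b": ("add", set()),
         "c": ("mul", {"a"}), "d": ("mul", {"a", "b"})}
sched = list_schedule(tasks, {"add": 1, "mul": 1})
```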
Scheduling typed task systems is motivated by two main
applications: resource-constrained scheduling in high-level
synthesis of digital circuits (Chaudhuri, Walker, & Mitchell
1994), and instruction scheduling in compilers for VLIW
processors (Dupont de Dinechin 2004). In high-level synthesis, execution resources correspond to the synthesized
functional units, which are partitioned by classes such as
adder or multiplier with a particular bit-width. Operations
are typed by these classes and may have non-unit execution
time. In compiler VLIW instruction scheduling, operations
usually have unit execution time (UET), however on most
VLIW processors an operation requires several resources
for execution, like in the Resource-Constrained Project
Scheduling Problems (RCPSP) (Brucker et al. 1999). In
both cases, the pipelined implementation of functional units yields scheduling problems with precedence delays, that is, the time required to produce a value is larger than the minimum delay between two activations of a functional unit.
We are aware of the following work in the area of typed task systems. Jaffe (Jaffe 1980) introduces them to formalize instruction scheduling problems that arise in high-performance computers and data-flow machines, and studies the performance bounds of list scheduling. Jansen (Jansen 1994) gives a polynomial-time algorithm for problem Σk P |intOrder; pi = 1|Cmax , that is, scheduling
interval-ordered typed UET operations. Verriet (Verriet
1998) solves problem Σk P |intOrder; cji = 1; pi = 1|Cmax
in polynomial time, that is, interval-ordered typed UET operations subject to unit communication delays.
Interval orders are a class of precedence graphs where UET scheduling on parallel processors is polynomial-time, while non-UET scheduling on 2 processors is strongly NP-hard (Papadimitriou & Yannakakis 1979). In particular, Papadimitriou and Yannakakis solve P |intOrder; pi = 1|Cmax in polynomial time. Scheduling interval orders with communication delays on parallel processors is also polynomial-time, as the algorithm by Ali and El-Rewini (Ali & El-Rewini 1992) solves P |intOrder; cji = 1; pi = 1|Cmax . Verriet (Verriet 1996) further proposes a deadline modification algorithm that solves P |intOrder; cji = 1; ri ; pi = 1|Lmax in polynomial time.
Scheduling interval orders with precedence delays on parallel processors was first considered by Palem and Simons (Palem & Simons 1993), who introduced monotone interval orders and solved P |intOrder(mono lij ); pi = 1|Lmax in polynomial time. This result is generalized by the Leung-Palem-Pnueli algorithm (Leung, Palem, & Pnueli 2001).
In the present work, we modify the algorithm of Leung,
Palem and Pnueli (Leung, Palem, & Pnueli 2001) in order
to solve Σk P |intOrder(mono lij ); ri ; di ; pi = 1|− feasibility problems in polynomial time. The resulting algorithm
thus operates on typed tasks, allows precedence delays, and
handles release dates and deadlines. Thanks to these properties, it provides useful relaxations of the RCPSP with UET
operations and precedence delays.
The Leung-Palem-Pnueli algorithm (Leung, Palem, &
Pnueli 2001) is a parallel processors scheduling algorithm
based on deadline modification and the use of lower modified deadline first priority in a Graham list scheduling algorithm. The Leung-Palem-Pnueli algorithm (LPPA) solves
the following feasibility problems in polynomial time:
• 1|prec(lij ∈ {0, 1}); ri ; di ; pi = 1|−
• P 2|prec(lij ∈ {−1, 0}); ri ; di ; pi = 1|−
• P |intOrder(mono lij ); ri ; di ; pi = 1|−
• P |inT ree(lij = l); di ; pi = 1|−
Here, the lij are precedence delays with pi + lij ≥ 0.
The presentation is as follows. In the first section, we extend
the α|β|γ scheduling problem denotation and we discuss the
Graham list scheduling algorithm (GLSA) for typed task
systems. In the second section, we present our modified
Leung-Palem-Pnueli algorithm (LPPA) and prove its optimality for scheduling monotone interval orders with release
dates and deadlines on UET typed task systems in polynomial time. In the third section, we discuss the application of
this algorithm to VLIW instruction scheduling.
Deterministic Scheduling Background
Figure 1: Set of intervals and the corresponding interval order graph.
Machine Scheduling Problem Denotation
In parallel processors scheduling problems, an operation set {Oi }1≤i≤n is processed on m identical processors. Each operation Oi requires the exclusive use of one processor for pi time units, starting at its schedule date σi . Scheduling problems may involve release dates ri and due dates di . This constrains the schedule date σi of operation Oi as σi ≥ ri , and there is a penalty whenever Ci > di , with Ci the completion date of Oi defined as Ci := σi + pi . For problems where Ci ≤ di is mandatory, the di are called deadlines.
A precedence Oi ≺ Oj between two operations constrains the schedule with σi +pi ≤ σj . In case of precedence
delay lij between Oi and Oj , the scheduling constraint becomes σi + pi + lij ≤ σj . The precedence graph has one arc
(Oi , Oj ) for each precedence Oi ≺ Oj . Given an operation
Oi , we denote succOi the set of direct successors of Oi and
predOi the set of direct predecessors of Oi in the precedence
graph. The set indepOi contains the operations that are not
connected to Oi in the undirected precedence graph.
Given a scheduling problem over operation set {Oi }1≤i≤n with release dates {ri }1≤i≤n and deadlines {di }1≤i≤n , the precedence-consistent release dates {ri+ }1≤i≤n are recursively defined as ri+ := max(ri , maxOj ∈predOi (rj+ + pj + lji )). Likewise, the precedence-consistent deadlines {di+ }1≤i≤n are recursively defined as di+ := min(di , minOj ∈succOi (dj+ − pj − lij )).
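These two recursions can be computed in one forward and one backward sweep over a topological order of the precedence graph. The sketch below assumes a simple arc-list encoding (i, j, l_ij) of the precedences; the function name and instance format are illustrative, not from the paper.

```python
from collections import deque

def precedence_consistent_dates(n, arcs, r, d, p):
    """Compute r_i^+ and d_i^+ for a DAG of precedences with delays.
    arcs: list of (i, j, l_ij) meaning sigma_i + p[i] + l_ij <= sigma_j."""
    succ = [[] for _ in range(n)]
    pred = [[] for _ in range(n)]
    indeg = [0] * n
    for i, j, l in arcs:
        succ[i].append((j, l))
        pred[j].append((i, l))
        indeg[j] += 1
    # Kahn's algorithm for a topological order
    order, queue = [], deque(i for i in range(n) if indeg[i] == 0)
    while queue:
        i = queue.popleft()
        order.append(i)
        for j, _ in succ[i]:
            indeg[j] -= 1
            if indeg[j] == 0:
                queue.append(j)
    r_plus, d_plus = list(r), list(d)
    for i in order:            # r_i^+ = max(r_i, max over preds of r_j^+ + p_j + l_ji)
        for j, l in pred[i]:
            r_plus[i] = max(r_plus[i], r_plus[j] + p[j] + l)
    for i in reversed(order):  # d_i^+ = min(d_i, min over succs of d_j^+ - p_j - l_ij)
        for j, l in succ[i]:
            d_plus[i] = min(d_plus[i], d_plus[j] - p[j] - l)
    return r_plus, d_plus
```

Each pass touches every arc once, so the sweep is linear in the size of the precedence graph.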
Machine scheduling problems are denoted by a triplet
α|β|γ (Brucker 2004), where α describes the processing environment, β specifies the operation properties and γ defines
the optimality criterion. Values of α, β, γ include:
α : 1 for a single processor, P for parallel processors, P m
for the given m parallel processors. We denote typed task
systems with k types by Σk P .
β : ri for release dates, di for deadlines (if γ = −) or due
dates, pi = 1 for Unit Execution Time (UET) operations.
γ : − for the feasibility, Cmax or Lmax for the minimization of these objectives.
The makespan is Cmax := maxi Ci and the maximum lateness is Lmax := maxi Li with Li := Ci − di . The meaning of the additional β fields is:
prec(lij ) Precedence delays lij , assuming lij ≥ −pi .
prec(lij = l) All the precedence delays lij equal l.
inT ree The precedence graph is an in-tree.
intOrder(mono lij ) The precedence graph weighted by w(Oi , Oj ) := pi + lij is a monotone interval order.
An interval order is the transitive orientation of the complement of an interval graph (Papadimitriou & Yannakakis
1979) (see Figure 1). The important property of interval
orders is that given any two operations Oi and Oj , either
predOi ⊆ predOj or predOj ⊆ predOi (similarly for successors). This is easily understood by referring to the underlying intervals that define the interval order. Adding operations without predecessors and successors to, or removing them from, an interval order yields an interval order. Also, interval orders are transitively closed, that is, any transitive successor (predecessor) must be a direct successor (predecessor).
A monotone interval order graph (Palem & Simons 1993)
is an interval order whose precedence graph (V, E) is
weighted with a non-negative function w on the arcs such
that, given any (Oi , Oj ), (Oi , Ok ) ∈ E : predOj ⊆
predOk ⇒ w(Oi , Oj ) ≤ w(Oi , Ok ). Monotone interval
orders are motivated by the application of interval orders
properties to scheduling problems with precedence delays.
Indeed, in scheduling problems with interval orders, the precedence arc weight considered between any two operations Oi and Oj is w(Oi , Oj ) := pi , with pi the processing time of Oi . In case of monotone interval orders, the arc weights are w(Oi , Oj ) := pi + lij , with lij the precedence
delay between Oi and Oj . An interval order graph where
all arcs leaving any given node have the same weight is
obviously monotone, so interval order precedences without
precedence delays imply monotone interval order graphs.
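Both the inclusion property of interval orders and the monotonicity condition can be checked directly from the predecessor sets. A sketch, assuming the graph is given transitively closed, with predecessor sets as frozensets and arc weights w(Oi, Oj) = pi + lij in a dict; this encoding is an illustrative assumption.

```python
def is_interval_order(pred):
    """pred: node -> frozenset of (all) predecessors, graph transitively closed.
    Interval order iff predecessor sets are totally ordered by inclusion."""
    nodes = list(pred)
    for a in nodes:
        for b in nodes:
            if not (pred[a] <= pred[b] or pred[b] <= pred[a]):
                return False
    return True

def is_monotone(pred, w):
    """w: (i, j) -> p_i + l_ij for each arc.  Monotone iff for arcs (i, j)
    and (i, k), pred_j <= pred_k implies w(i, j) <= w(i, k)."""
    by_src = {}
    for (i, j), wij in w.items():
        by_src.setdefault(i, []).append((j, wij))
    for outs in by_src.values():
        for j, wij in outs:
            for k, wik in outs:
                if pred[j] <= pred[k] and wij > wik:
                    return False
    return True
```

For example, intervals A=[0,2], B=[1,3], C=[3,5], D=[4,6] give pred(C)={A} and pred(D)={A,B}, which are comparable by inclusion, so the order is an interval order.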
Graham List Scheduling Algorithm Extension
The Graham list scheduling algorithm (GLSA) is a classic
scheduling algorithm where the time steps are considered in
non-decreasing order. For each time step, if a processor is idle, the highest-priority operation available at this time is scheduled. An operation is available if the current time step
is not earlier than the release date and all direct predecessors
have completed their execution early enough to satisfy the
precedence delays. On typed task systems, the operation
type must match the type of an idle processor.
The GLSA is optimal for P |ri ; di ; pi = 1|− and
P |ri ; pi = 1|Lmax when using the earliest deadlines (or due
dates) di first as priority (Brucker 2004) (Jackson’s rule).
This property directly extends to typed task systems:
Theorem 1 The GLSA with Jackson’s rule optimally solves
Σk P |ri ; di ; pi = 1|− and Σk P |ri ; pi = 1|Lmax .
Proof: In typed task systems, operations are partitioned by
processor type. In problem Σk P |ri ; di ; pi = 1|− (respectively Σk P |ri ; pi = 1|Lmax ), there are no precedences between operations. Therefore, optimal scheduling can be
achieved by considering operations and processors of each
type independently. For each type, the problem reduces to
P |ri ; di ; pi = 1|− (respectively P |ri ; pi = 1|Lmax ), which
is optimally solved with Jackson’s rule.
In this work, we allow precedence delays lij = −pi ⇒ σi ≤ σj , that is, precedences with zero start-start time lags.
Thus we extend the GLSA as follows: in cases of available
operations with equal priorities, schedule first the earliest
operations in the precedence topological sort order.
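The extended GLSA for UET typed task systems can be sketched as follows, with earliest (modified) deadline first priority, ties broken by position in a topological order of the operations, and a rescan within each time step so that zero start-start lags can release successors in the same step. The instance encoding is illustrative only.

```python
def glsa_typed(ops, m, horizon):
    """Graham list scheduling on a UET typed task system (earliest deadline first).
    ops[i]: {'type': tau_i, 'r': release, 'd': deadline, 'preds': [(j, l_ji), ...]},
    listed in topological order; m: type -> processor count.
    Returns the schedule dates, or None if some deadline is missed."""
    n = len(ops)
    sigma = [None] * n
    for t in range(horizon):
        free = dict(m)                    # idle processors of each type at time t
        scheduled_one = True
        while scheduled_one:              # rescan: zero lags may release successors
            scheduled_one = False
            avail = [i for i in range(n)
                     if sigma[i] is None and ops[i]['r'] <= t
                     and all(sigma[j] is not None and sigma[j] + 1 + l <= t
                             for j, l in ops[i]['preds'])]
            # earliest deadline first, ties broken by topological position
            for i in sorted(avail, key=lambda i: (ops[i]['d'], i)):
                if free.get(ops[i]['type'], 0) > 0:
                    sigma[i] = t
                    free[ops[i]['type']] -= 1
                    scheduled_one = True
                    break
    if all(s is not None and s + 1 <= ops[i]['d'] for i, s in enumerate(sigma)):
        return sigma
    return None
```

A zero start-start lag is encoded as l_ji = −1 here, so a successor becomes available at the very time step where its predecessor was just placed.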
The Modified Leung-Palem-Pnueli Algorithm
Algorithm Description
The Leung-Palem-Pnueli algorithm (LPPA) is similar to
classic UET scheduling algorithms on parallel processors
like Garey & Johnson (Garey & Johnson 1976), in that it
uses a lower modified deadlines first priority in a GLSA.
Given a scheduling problem with deadlines {di }1≤i≤n ,
modified deadlines {d′i }1≤i≤n are such that ∀i ∈ [1, n] :
σi + pi ≤ d′i ≤ di for any schedule {σi }1≤i≤n . The distinguishing feature of the LPPA is the computation of its modified deadlines, which we call fixpoint modified deadlines1 .
Precisely, the LPPA defines a backward scheduling problem denoted B(Oi , Si ) for each operation Oi . An optimal
backward scheduling procedure computes the latest possible schedule date σi′ of operation Oi in each B(Oi , Si ). Optimal backward scheduling of B(Oi , Si ) is used to update
the current modified deadline of Oi as d′i ← σi′ + pi . This
process of deadline modification is iterated over all problems B(Oi , Si ) until a fixpoint of the modified deadlines
{d∗i }1≤i≤n is reached (Leung, Palem, & Pnueli 2001).
We modify the Leung-Palem-Pnueli algorithm (LPPA) to
compute the fixpoint modified deadlines {d∗i }1≤i≤n by executing the following procedure:
(i) Compute the precedence-consistent release dates
{ri+ }1≤i≤n ,
the precedence-consistent deadlines
{d+
i }1≤i≤n and initialize the modified deadlines
{d′i }1≤i≤n with the precedence-consistent deadlines.
(ii) For each operation Oi , define the backward scheduling problem B(Oi , Si ) with Si := succOi ∪ indepOi .
(1) Let Oi be the current operation in some iteration over
{Oi }1≤i≤n .
(2) Compute the optimal backward schedule date σi′ of Oi by
optimal backward scheduling of B(Oi , Si ).
¹ Leung, Palem and Pnueli call them “consistent and stable modified deadlines”.
(3) Update the modified deadline of Oi as d′i ← σi′ + 1.
(4) Update the modified deadlines of each Ok ∈ predOi with
d′k ← min(d′k , d′i − 1 − lki ).
(5) Go to (1) until a fixpoint of the modified deadlines
{d′i }1≤i≤n is reached.
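Steps (1)-(5) amount to the following fixpoint loop, sketched here around an abstract oracle backward_latest(i, d_mod) standing in for optimal backward scheduling of B(Oi, Si); the interface is hypothetical.

```python
def fixpoint_modified_deadlines(n, preds, l, d_plus, backward_latest):
    """Iterate deadline modification until a fixpoint (steps (1)-(5)).
    backward_latest(i, d_mod): latest backward schedule date of O_i under
    the current modified deadlines, or None if B(O_i, S_i) is infeasible.
    preds[i]: predecessors of O_i; l[(k, i)]: precedence delay l_ki."""
    d_mod = list(d_plus)       # start from the precedence-consistent deadlines
    changed = True
    while changed:             # at most n^2 iterations (Leung, Palem, & Pnueli)
        changed = False
        for i in range(n):
            s = backward_latest(i, d_mod)
            if s is None:
                return None    # some backward scheduling problem is infeasible
            if s + 1 < d_mod[i]:
                d_mod[i] = s + 1                    # step (3): d'_i <- sigma'_i + 1
                changed = True
            for k in preds[i]:                      # step (4): propagate upward
                nk = min(d_mod[k], d_mod[i] - 1 - l[(k, i)])
                if nk < d_mod[k]:
                    d_mod[k] = nk
                    changed = True
    return d_mod
```

With a trivial oracle that never tightens (returning d_mod[i] − 1), only the predecessor propagation of step (4) fires, which already illustrates the fixpoint behaviour.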
In our modified LPPA, we define the backward scheduling problem B(Oi , Si ) as the search for a set of dates
{σj′ }Oj ∈{Oi }∪Si that satisfy:
(a) ∀Oj ∈ Si : Oi ≺ Oj ⇒ σi′ + 1 + lij ≤ σj′
(b) ∀t ∈ ℕ, ∀r ∈ [1, k] : |{Oj ∈ {Oi } ∪ Si : τj = τr ∧ σj′ = t}| ≤ mr
(c) ∀Oj ∈ {Oi } ∪ Si : rj+ ≤ σj′ < d′j
Constraints (a) state that only the precedences between Oi
and its direct successors are kept in the backward scheduling
problem B(Oi , Si ). Constraints (b) are the resources limitations of typed task systems with UET operations. Constraints (c) ensure that operations are backward scheduled
within the precedence-consistent release dates and the current modified deadlines. An optimal backward schedule for
Oi maximizes σi′ in B(Oi , Si ).
Let {rj+ }1≤j≤n be the precedence-consistent release dates and {d′j }1≤j≤n be the current modified deadlines. The simplest way to find the optimum backward schedule date of Oi in B(Oi , Si ) is to search for the latest s ∈ [ri+ , d′i − 1] such that the constrained backward scheduling problem (σi′ = s) ∧ B(Oi , Si ) is feasible. Even though each such constrained problem can be solved in polynomial time by reducing it to some Σk P |rj ; dj ; pj = 1|− over {Oi } ∪ Si , optimal backward scheduling of B(Oi , Si ) would require pseudo-polynomial time, as there are up to d′i − ri+ constrained backward scheduling problems to solve. Note that a simple dichotomy search for the latest feasible s ∈ [ri+ , d′i − 1] does not work, as the infeasibility of (σi′ = s) ∧ B(Oi , Si ) does not imply the infeasibility of (σi′ = s + 1) ∧ B(Oi , Si ).
In order to avoid the pseudo-polynomial time complexity
of optimal backward scheduling, we rely instead on a procedure with two successive dichotomy searches for feasible
relaxations of constrained backward scheduling problems,
like in the original LPPA. Describing this procedure requires
further definitions. Assume lij = −∞ if Oi ⊀ Oj . Given a constrained backward scheduling problem (σi′ ∈ [p, q]) ∧ B(Oi , Si ), we define a relaxation Σk P |r̂j ; d̂j ; pj = 1|− over the operation set {Oi } ∪ Si such that:
• r̂i := p
• d̂i := q + 1
• Oj ∈ Si ⇒ r̂j := max(rj+ , q + 1 + lij )
• Oj ∈ Si ⇒ d̂j := d′j
In other words, the precedences from Oi to each direct successor Oj ∈ Si are converted into release dates, assuming the release date and deadline of Oi respectively equal p and q + 1. We call type 2 relaxation the resulting scheduling
problem Σk P |r̂j ; d̂j ; pj = 1|−, and type 1 relaxation this problem when disregarding the resource constraints of Oi .

Figure 2: Optimal backward scheduling proof. [Illustration not recoverable from the text extraction.]

Figure 3: Modified Leung-Palem-Pnueli algorithm proof. [Illustration not recoverable from the text extraction.]
Both type 1 and type 2 relaxations are optimally solved by
the GLSA with the earliest dˆj first priority (Theorem 1). If
any relaxation is infeasible, so is the constrained backward
scheduling problem (σi′ ∈ [p, q]) ∧ B(Oi , Si ).
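Building the release dates and deadlines of the type 2 relaxation from (σi′ ∈ [p, q]) ∧ B(Oi, Si) is mechanical; the type 1 relaxation additionally drops the resource requirement of Oi. A sketch with an illustrative encoding, where l.get((i, j)) defaults to −∞ when Oi does not precede Oj:

```python
NEG_INF = float('-inf')

def type2_relaxation(i, S, p, q, r_plus, d_mod, l):
    """Release dates r_hat and deadlines d_hat of the type 2 relaxation
    over {O_i} ∪ S_i for the constrained problem (sigma'_i in [p, q]).
    l: dict (i, j) -> precedence delay; absent pairs behave as -infinity."""
    r_hat = {i: p}          # r_hat_i := p
    d_hat = {i: q + 1}      # d_hat_i := q + 1
    for j in S:
        lij = l.get((i, j), NEG_INF)
        r_hat[j] = max(r_plus[j], q + 1 + lij)   # precedence turned into release
        d_hat[j] = d_mod[j]                      # current modified deadline
    return r_hat, d_hat
```

Operations of Si that are independent of Oi keep their precedence-consistent release dates, since max(rj+, −∞) = rj+.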
Observe that the type 1 relaxation is increasingly constrained as q increases, independently of the value of p. And
for any fixed q, the type 2 relaxation is increasingly constrained as p increases. Therefore, it is correct to explore
the feasibility of any of these relaxations using dichotomy
search. So the optimal backward scheduling procedure is
based on two dichotomy searches as follows.
The first dichotomy search initializes p = ri+ and q = d′i − 1. Then it proceeds to find the latest q such that the
type 1 relaxation is feasible. The second dichotomy search
keeps q constant and finds the latest p such that the type 2
relaxation is feasible. Whenever both searches succeed, the
optimum backward schedule date of Oi is taken as σi′ = p so
the new modified deadline is d′i = p + 1. If any dichotomy
search fails, B(Oi , Si ) is assumed infeasible.
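The procedure reduces to two standard binary searches against feasibility oracles for the relaxations, using the monotonicity observed above (type 1 in q; type 2 in p for fixed q). A sketch with the oracles left abstract; the interface is hypothetical.

```python
def backward_latest_date(r_i_plus, d_i_mod, type1_feasible, type2_feasible):
    """Optimal backward schedule date of O_i via two binary searches.
    type1_feasible(q) and type2_feasible(p, q) are feasibility oracles for
    the type 1 and type 2 relaxations.  Returns sigma'_i, or None if
    B(O_i, S_i) is deemed infeasible."""
    lo, hi = r_i_plus, d_i_mod - 1
    if lo > hi or not type1_feasible(lo):
        return None
    while lo < hi:                       # latest q with feasible type 1 relaxation
        mid = (lo + hi + 1) // 2
        if type1_feasible(mid):
            lo = mid
        else:
            hi = mid - 1
    q = lo
    lo, hi = r_i_plus, q
    if not type2_feasible(lo, q):
        return None
    while lo < hi:                       # latest p with feasible type 2 relaxation
        mid = (lo + hi + 1) // 2
        if type2_feasible(mid, q):
            lo = mid
        else:
            hi = mid - 1
    return lo                            # sigma'_i = p
```

Each search performs O(log(d′i − ri+)) oracle calls, which is what removes the pseudo-polynomial factor of the naive linear search.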
Algorithm Proofs

Theorem 2 The optimal backward scheduling procedure computes the latest schedule date σi′ of Oi among the schedules that satisfy conditions (a), (b), (c) of B(Oi , Si ).

Proof: The two dichotomy searches are equivalent to linear searches, respectively by increasing q and by increasing p. If no feasible relaxation Σk P |r̂j ; d̂j ; pj = 1|− exists in any of these linear searches, the backward scheduling problem B(Oi , Si ) is obviously infeasible.

If a feasible relaxation exists in the second linear search, this search yields a backward schedule with σi′ = p. Indeed, let {σ̂j }Oj ∈{Oi }∪Si be schedule dates for the type 2 relaxation of (σi′ ∈ [p, q]) ∧ B(Oi , Si ). We have σ̂i = p because the type 2 relaxation of problem (σi′ ∈ [p + 1, q]) ∧ B(Oi , Si ) is infeasible and the only difference between these two relaxations is the release date of Oi . Moreover, the dates {σ̂j }Oj ∈{Oi }∪Si satisfy (a), (b), (c). Condition (a) is satisfied from the definition of r̂j and because σ̂i = p ≤ q. Conditions (b) and (c) are satisfied by the GLSA.

Let us prove that the backward schedule found by the second search is in fact optimal, that is, there is no s ∈ [p + 1, q] such that problem (σi′ ∈ [s, s]) ∧ B(Oi , Si ) is feasible. This is obvious if p = q, so consider cases where p < q. The type 2 relaxation of problem (σi′ ∈ [p, q]) ∧ B(Oi , Si ) is feasible while the type 2 relaxation of problem (σi′ ∈ [p + 1, q]) ∧ B(Oi , Si ) is infeasible, which implies there is a set Σ of operations that fills all slots of type τi in range [p + 1, q] and prevents the GLSA from scheduling Oi in that range (Figure 2). So Oj ∈ Σ ⇒ d̂j ≤ d̂i = q + 1 ∧ r̂j ≥ p + 1. Now assume there exists some s ∈ [p + 1, q] such that problem (σi′ ∈ [s, s]) ∧ B(Oi , Si ) is feasible. This implies that problem (σi′ ∈ [p + 1, s]) ∧ B(Oi , Si ) is also feasible. The type 2 relaxation of (σi′ ∈ [p + 1, s]) ∧ B(Oi , Si ) differs from the type 2 relaxation of (σi′ ∈ [p + 1, q]) ∧ B(Oi , Si ) only by the decrease of the release dates r̂j of some operations Oj ∈ Si , yet r̂j ≥ p + 1 as r̂j = max(rj+ , s + 1 + lij ) ≥ p + 1 + 1 + lij . As all the operations of Σ must still be scheduled in range [p + 1, q] in the type 2 relaxation of (σi′ ∈ [p + 1, s]) ∧ B(Oi , Si ), there is still no scheduling slot for Oi in that range. So problem (σi′ ∈ [p + 1, s]) ∧ B(Oi , Si ) and problem (σi′ ∈ [s, s]) ∧ B(Oi , Si ) are infeasible.

Theorem 3 The modified algorithm of Leung, Palem and Pnueli solves any feasible problem Σk P |intOrder(mono lij ); ri ; di ; pi = 1|−.

Proof: The correctness of this modified Leung-Palem-Pnueli algorithm (LPPA), like the correctness of the original LPPA, rests on two arguments. The first argument is that the fixpoint modified deadlines are indeed deadlines of the original problem. This is apparent, as each backward scheduling problem B(Oi , Si ) is a relaxation of the original scheduling problem and optimal backward scheduling computes the latest schedule date of Oi within B(Oi , Si ) by Theorem 2. Let us call core the GLSA that uses the earliest fixpoint modified deadlines first as priorities. The second correctness argument is a proof that the core GLSA does not miss any fixpoint modified deadline.

Precisely, assume that some Oi is the earliest operation that misses its fixpoint modified deadline d∗i in the core GLSA schedule. In a similar way to (Leung, Palem, & Pnueli 2001), we will prove that an earlier operation Ok necessarily misses its fixpoint modified deadline d∗k in the same schedule. This contradiction ensures that the core GLSA schedule does not miss any fixpoint modified deadline. The details of this proof rely on a few definitions and observations illustrated in Figure 3.

Let r = τi be the type of operation Oi . An operation Oj is said to be saturated if τj = r and d∗j ≤ d∗i . Define tu < d∗i as the latest time step that is not filled with saturated operations on the processors of type r. If tu < 0, the problem is infeasible, as there are not enough slots to schedule operations of type r on mr processors within the deadlines. Else,
some scheduling slots of type r at tu are either empty or
filled with operations Ou : d∗u > d∗i of lower priority than
saturated operations in the core GLSA. Define the operation set Σ := {Oj saturated : tu < σj < d∗i } ∪ {Oi }. Define the operation subset Σ′ := {Oj ∈ Σ : rj+ ≤ tu }.
Consider problem Σk P |intOrder(mono lij ); ri ; di ; pi = 1|−. In an interval order, given two operations Oi and Oj ,
either predOi ⊆ predOj or predOj ⊆ predOi . Select Oj ′
among Oj ∈ Σ′ such that |predOj | is minimal. As Oj ′ ∈ Σ′
is not scheduled at date tu or earlier by the core GLSA, there
must be a constraining operation Ok that is a direct predecessor of operation Oj′ with σk + 1 + lkj′ = σj′ > tu ⇒ σk + 1 > tu − lkj′ . Note that Ok can have any type. Operations in predOj′ are the direct predecessors of all operations Oj ∈ Σ′ and no predecessor of Oj′ is in Σ′ . Thus Ok ∉ Σ′ and Ok is a direct predecessor of all operations Oj ∈ Σ′ .
We call stable backward schedule any optimal backward schedule of B(Ok , Sk ) where the modified deadlines equal the fixpoint modified deadlines. Since Sk := succOk ∪ indepOk , we have Σ ⊆ Sk . By the fixpoint property, we may assume that a stable backward schedule of B(Ok , Sk ) exists. Such a stable backward schedule must fit the mr (d∗i − 1 − tu ) + 1 operations of Σ before d∗i on mr processors, so at least one operation Oj ∈ Σ′ is scheduled at date tu or earlier by any stable backward schedule of B(Ok , Sk ).
Theorem 2 ensures that optimal backward scheduling of B(Ok , Sk ) satisfies the precedence delays between Ok and Oj . Thus σk′ + 1 + lkj ≤ tu , so d∗k − 1 + 1 + lkj ≤ tu and d∗k ≤ tu − lkj . By the monotone interval order property, predOj′ ⊆ predOj ⇒ w(Ok , Oj′ ) ≤ w(Ok , Oj ) ⇒ 1 + lkj′ ≤ 1 + lkj ⇒ lkj′ ≤ lkj for the Oj′ selected above and Oj ∈ Σ′ , so d∗k ≤ tu − lkj′ . However, in the core GLSA schedule σk + 1 > tu − lkj′ , so Ok misses its fixpoint modified deadline d∗k .
The overall time complexity of this modified LPPA is
the sum of the complexity of initialization steps (i-ii), of
the number of iterations times the complexity of steps (1-5)
and of the complexity of the core GLSA. Leung, Palem and
Pnueli (Leung, Palem, & Pnueli 2001) observe that the number of iterations to reach a fixpoint is upper bounded by n²,
a fact that still holds for our modified algorithm. As the time
complexity of the GLSA on typed task systems with k types
is within a factor k of the time complexity of the GLSA on
parallel processors, our modified LPPA has polynomial time
complexity.
In their work, Leung, Palem and Pnueli (Leung, Palem, & Pnueli 2001) describe further techniques that lower the overall complexity of their algorithm. The first is a proof that applying optimal backward scheduling in reverse topological order of the operations directly yields the
is a proof that applying optimal backward scheduling in reverse topological order of the operations directly yields the
fixpoint modified deadlines. The second is a fast implementation of list scheduling for problems P |ri ; di ; pi = 1|−.
These techniques apply to typed task systems as well.
Table 1: ST200 VLIW processor resource availabilities and operation class resource requirements

Resource      Issue  Memory  Control  Align
Availability      4       1        1      2
ALU               1       0        0      0
ALUX              2       0        0      1
MUL               1       0        0      1
MULX              2       0        0      1
MEM               1       1        0      0
MEMX              2       1        0      1
CTL               1       0        1      1
Application to VLIW Instruction Scheduling
ST200 VLIW Instruction Scheduling Problem
We illustrate VLIW instruction scheduling problems on the
ST200 VLIW processor manufactured by STMicroelectronics. The ST200 VLIW processor executes up to 4 operations per time unit with a maximum of one control operation (goto, jump, call, return), one memory operation
(load, store, prefetch), and two multiply operations per time
unit. All arithmetic operations operate on integer values with
operands belonging either to the General Register file (64 ×
32-bit) or to the Branch Register file (8 × 1-bit). In order
to eliminate some conditional branches, the ST200 VLIW
architecture also provides conditional selection instructions.
The processing time of any operation is a single time unit
(pi = 1), while the precedence delays lij between operations
range from -1 to 2 time units.
The resource availabilities of the ST200 VLIW processor and the resource requirements of each operation are displayed in Table 1. The resources are: Issue for the instruction issue width; Memory for the memory access unit;
Control for the control unit. An artificial resource Align is
also introduced to satisfy some encoding constraints. Operations with identical resource requirements are factored into
classes: ALU, MUL, MEM and CTL correspond respectively to the arithmetic, multiply, memory and control operations. The classes ALUX, MULX and MEMX represent
the operations that require an extended immediate operand.
Operations named LDH, MULL, ADD, CMPNE, BRF belong
respectively to classes MEM, MUL, ALU, ALU, CTL.
A sample C program and the corresponding ST200 VLIW
processor operations for the inner loop are given in Figure 4. The operations are numbered in their appearance
order. In Figure 5, we display the precedence graph between operations of the inner loop of Figure 4 after removing the redundant transitive arcs. As usual in RCPSP, the
precedence graph is augmented with dummy nodes O0 and
On+1 : n = 7 with null resource requirements. Also, the
precedence arcs are labeled with the corresponding start-start time-lag, that is, the value of pi + lij . The critical path of this graph is O0 → O1 → O2 → O3 → O7 → O8 , so the makespan is lower bounded by 7.
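The critical-path lower bound is a longest-path computation over the start-start time-lag graph from the dummy source; a generic sketch over an arc-list encoding (the format and the weights used in the example are illustrative, not those of Figure 5):

```python
from collections import deque

def longest_path_bound(num_nodes, arcs):
    """Longest source-to-node distances in a DAG of start-start time-lags.
    arcs: list of (i, j, w) with w = p_i + l_ij; node 0 is the dummy source.
    dist[sink] lower-bounds the makespan."""
    succ = [[] for _ in range(num_nodes)]
    indeg = [0] * num_nodes
    for i, j, w in arcs:
        succ[i].append((j, w))
        indeg[j] += 1
    dist = [0] * num_nodes
    queue = deque(i for i in range(num_nodes) if indeg[i] == 0)
    while queue:                       # relax arcs in topological order
        i = queue.popleft()
        for j, w in succ[i]:
            dist[j] = max(dist[j], dist[i] + w)
            indeg[j] -= 1
            if indeg[j] == 0:
                queue.append(j)
    return dist
```

On a chain with weights 0, 3, 3, 1, 0 from source to sink, the sink distance is 7, matching the kind of bound quoted above.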
This example illustrates that null start-start time-lags, or precedence delays lij = −pi , occur frequently in actual VLIW instruction scheduling problems. Moreover, the start-
int prod(int n, short a[], short b) {
  int s=0, i;
  for (i=0;i<n;i++) {
    s += a[i]*b;
  }
  return s;
}

L?__0_8:
  LDH_1    g131 = 0, G127
  MULL_2   g132 = G126, g131
  ADD_3    G129 = G129, g132
  ADD_4    G128 = G128, 1
  ADD_5    G127 = G127, 2
  CMPNE_6  b135 = G118, G128
  BRF_7    b135, L?__0_8
Figure 4: A sample C program and the corresponding ST200 operations
[Precedence graph with dummy nodes O0 and O8; arc labels (start-start time-lags) not recoverable from the text extraction.]
start time-lags are non-negative, so classic RCPSP schedule generation schemes (Kolisch & Hartmann 1999) (list
scheduling) are guaranteed to build feasible (sub-optimal)
solutions for these VLIW instruction scheduling problems.
In this setting, the main value of VLIW instruction scheduling problem relaxations such as typed task systems is to
strengthen the bounds on operation schedule dates including the makespan. Improving bounds benefits scheduling
techniques such as solving time-indexed integer linear programming formulations (Dupont de Dinechin 2007).
ST200 VLIW Compiler Experimental Results
We implemented our modified Leung-Palem-Pnueli algorithm in the instruction scheduler of the production compiler
for the ST200 VLIW processor family. In order to apply this
algorithm, we first relax instances of RCPSP with UET operations and non-negative start-start time-lags to instances of
scheduling problems on typed task systems with precedence
delays, release dates and deadlines:
• Expand each operation that requires several resources to
a chain of sub-operations that use only one resource type
per sub-operation. Set the chain precedence delays to -1
(zero start-start time-lags).
• Assign to each sub-operation the release date and deadline
of its parent operation.
The result is a UET typed task system with release dates and
deadlines, whose precedence graph is arbitrary.
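The relaxation steps above can be sketched as follows, chaining one single-type sub-operation per required resource unit and linking consecutive sub-operations with delay −1 (zero start-start lags). The sub-operation naming scheme and instance format are illustrative assumptions, not the compiler's internal representation.

```python
def expand_to_typed_chain(op_id, requirements, release, deadline):
    """Relax one UET RCPSP operation that uses several resources into a
    chain of single-resource sub-operations linked by delay -1.
    requirements: dict resource -> required units (zero entries are skipped).
    Returns (sub_ops, chain_arcs)."""
    sub_ops = []
    for res, units in requirements.items():
        for u in range(units):
            sub_ops.append({'id': f'{op_id}/{res}{u}', 'type': res,
                            'r': release, 'd': deadline})
    # chain with zero start-start time-lags (precedence delay -1, UET)
    arcs = [(sub_ops[k]['id'], sub_ops[k + 1]['id'], -1)
            for k in range(len(sub_ops) - 1)]
    return sub_ops, arcs
```

For instance, an ALUX operation (Issue 2, Align 1 in Table 1) becomes three typed sub-operations sharing its release date and deadline.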
Applying our modified Leung-Palem-Pnueli algorithm to
an arbitrary precedence graph implies that optimal scheduling is no longer guaranteed. However, the fixpoint modified
deadlines are still deadlines of the UET typed task system
considered, as the proof of Theorem 2 does not involve the
precedence graph properties. From the way we defined the
relaxation to typed task systems, it is apparent that these fixpoint modified deadlines are also deadlines of the original
problem (UET RCPSP with non-negative time-lags).
In Table 2, we collect the results of lower bounding the
makespan of ST200 VLIW instruction scheduling problems
with our modified LPPA for typed task systems. These
results are obtained by first computing the fixpoint modified deadlines on the reverse precedence graph, yielding
strengthened release dates. The modified LPPA is then
applied to the precedence graph with strengthened release
dates, and this computes fixpoint modified deadlines including a makespan lower bound. The benchmarks used to extract these results include an image processing program, and
the c-lex SpecInt program.
The first column of Table 2 identifies the code block that
defined the VLIW instruction scheduling problem. Column
n gives the number of operations to schedule. Columns
Resource, Critical, MLPPA respectively give the makespan
lower bound in time units computed with resource use,
critical path, and the modified LPPA. The last column
ILP gives the optimal makespan as computed by solving a
time-indexed linear programming formulation (Dupont de
Dinechin 2007). According to this experimental data, there exist cases where using the modified LPPA yields a significantly stronger relaxation than critical path computation.
Summary and Conclusions
We present a modification of the algorithm of Leung, Palem
and Pnueli (LPPA) (Leung, Palem, & Pnueli 2001) that
schedules monotone interval orders with release dates and
deadlines on UET typed task systems (Jaffe 1980) in poly-
Table 2: ST200 VLIW compiler results of the modified Leung-Palem-Pnueli algorithm

Label          n  Resource  Critical  MLPPA  ILP
BB26          41        11        15     19   19
BB23          34        10        14     18   18
BB30          10         3         5      5    5
BB29          16         5        10     10   10
1 31          34         9        14     18   18
BB9 Short     16         4        10     10   10
BB22          16         4        10     10   10
LAO021        22         6         6      7    7
LAO011        20         6        18     18   18
BB80          14         6        17     17   17
LAO033        41        11        31     32   32
4 1362        23         9        38     38   38
BB916         34        14        30     31   31
4 1181        15         8        18     19   19
4 1180         7         2         9     10   10
4 998         14         4        10     11   11
4 1211         9         2         9      9    9
4 1209        14         7        18     18   18
4 1388         6         2         8      9    9
4 949         13         5        12     13   13
BB740         11         4        13     14   14
LAO0160       17         7         7     11   11
nomial time. In an extended α|β|γ denotation, this is problem Σk P |intOrder(mono lij ); ri ; di ; pi = 1|−.
Compared to the original LPPA (Leung, Palem, & Pnueli
2001), our main modifications are: use of the Graham list
scheduling algorithm (GLSA) adapted to typed task systems
and to zero start-start time-lags; new definition of the backward scheduling problem B(Oi , Si ) that does not involve
the transitive successors of operation Oi ; core LPPA proof
adapted to typed task systems and simplified thanks to the
properties of monotone interval orders.
Like the original LPPA, our modified algorithm optimally solves a feasibility problem: after scheduling with
the core GLSA, one needs to check if the schedule meets
the deadlines. By embedding this algorithm in a dichotomy
search for the smallest Lmax such that the scheduling problem with deadlines di + Lmax is feasible, one also solves
Σk P |intOrder(mono lij ); ri ; pi = 1|Lmax in polynomial time. This is a significant generalization over the
Σk P |intOrder; pi = 1|Cmax problem solved by Jansen
(Jansen 1994) in polynomial time.
Our motivation for the study of typed task systems with precedence delays is their use as relaxations of the Resource-Constrained Project Scheduling Problems (RCPSP) with Unit Execution Time (UET) operations and non-negative start-start time-lags. In this setting, precedence delays are important, yet no previous polynomial-time scheduling algorithm for typed task systems considers them. The facts that interval orders include operations without predecessors and successors, and that the LPPA enforces release dates and deadlines, are also valuable for these relaxations.
References
Ali, H. H., and El-Rewini, H. 1992. Scheduling Interval Ordered Tasks on Multiprocessor Architecture. In SAC
’92: Proceedings of the 1992 ACM/SIGAPP Symposium on
Applied computing, 792–797. New York, NY, USA: ACM.
Brucker, P.; Drexl, A.; Möhring, R.; Neumann, K.; and
Pesch, E. 1999. Resource-Constrained Project Scheduling:
Notation, Classification, Models and Methods. European
Journal of Operational Research 112:3–41.
Brucker, P. 2004. Scheduling Algorithms, 4th edition.
Springer-Verlag.
Chaudhuri, S.; Walker, R. A.; and Mitchell, J. E. 1994. Analyzing and Exploiting the Structure of the Constraints in
the ILP Approach to the Scheduling Problem. IEEE Transactions on VLSI 2(4).
Dupont de Dinechin, B. 2004. From Machine Scheduling to VLIW Instruction Scheduling. ST Journal of Research 1(2). http://www.st.com/stonline/press/magazine/stjournal/vol0102/.
Dupont de Dinechin, B. 2007. Time-Indexed Formulations and a Large Neighborhood Search for the Resource-Constrained Modulo Scheduling Problem. In 3rd Multidisciplinary International Scheduling Conference: Theory and Applications (MISTA). http://www.cri.ensmp.fr/classement/2007.html.
Garey, M. R., and Johnson, D. S. 1976. Scheduling Tasks
with Nonuniform Deadlines on Two Processors. J. ACM
23(3):461–467.
Jaffe, J. M. 1980. Bounds on the Scheduling of Typed Task
Systems. SIAM J. Comput. 9(3):541–551.
Jansen, K. 1994. Analysis of Scheduling Problems
with Typed Task Systems. Discrete Applied Mathematics
52(3):223–232.
Kolisch, R., and Hartmann, S. 1999. Algorithms for Solving the Resource-Constrained Project Scheduling Problem: Classification and Computational Analysis. In Węglarz, J., ed., Handbook on Recent Advances in Project Scheduling. Kluwer Academic. Chapter 7.
Leung, A.; Palem, K. V.; and Pnueli, A. 2001. Scheduling Time-Constrained Instructions on Pipelined Processors. ACM Trans. Program. Lang. Syst. 23(1):73–103.
Palem, K. V., and Simons, B. B. 1993. Scheduling Time-Critical Instructions on RISC Machines. ACM Trans. Program. Lang. Syst. 15(4):632–658.
Papadimitriou, C. H., and Yannakakis, M. 1979. Scheduling Interval-Ordered Tasks. SIAM J. Comput. 8(3):405–
409.
Verriet, J. 1996. Scheduling Interval Orders with Release Dates and Deadlines. Technical Report UU-CS-1996-12, Department of Information and Computing Sciences, Utrecht University.
Verriet, J. 1998. The Complexity of Scheduling Typed Task
Systems with and without Communication Delays. Technical Report UU-CS-1998-26, Department of Information
and Computing Sciences, Utrecht University.
A Note on Concurrency and Complexity in Temporal Planning
Maria Fox and Derek Long
Department of Computer & Information Sciences
University of Strathclyde, Glasgow, UK
Abstract

Rintanen recently reported (Rintanen 2007) that the complexity of temporal planning with durative actions of fixed durations in propositional domains depends on whether it is possible for multiple instances of the same action to execute concurrently. In this paper we explore the circumstances in which such a situation might arise and show that the issue is directly connected to previously established results for the compilation of conditional effects in propositional planning.

1 Introduction

In his paper Complexity of Concurrent Temporal Planning (Rintanen 2007), Jussi Rintanen shows that temporal planning in propositional domains, with durative actions of fixed durations, can be encoded directly in a propositional planning framework by using (propositionally encoded) counters to capture the passage of time. Actions are split into their end points, in much the same way as shown in the semantics of PDDL2.1 (Fox & Long 2003) and as implemented in some planners (Halsey, Long, & Fox 2004; Long & Fox 2003). This encoding allows him to deduce that the complexity of this form of temporal planning is equivalent to that of classical planning when the number of such counters is polynomial in the size of the original (grounded) domain. However, if multiple instances of the same action may execute concurrently then it is not sufficient to have a single counter for each action; instead, as many counters are required as there are potential instances of the same action that may run concurrently. Rintanen observes that this could be exponential in the size of the domain encoding, placing the planning problem into a significantly worse complexity class than classical planning: EXPSPACE-hard instead of PSPACE-hard.
In this paper, we explore the situations in which instances of the same action can run concurrently and link the complexity costs to the previously recognised problem of compiling conditional effects into classical propositional encodings (Gazen & Knoblock 1997; Nebel 2000).

2 Preliminaries

We begin by providing some definitions on which the remainder of the paper is based.
Definition 1 A classical propositional planning action, a, is a triple, ⟨P, A, D⟩, where each of P, A and D is a set of atomic propositions. The action is applicable in a state, S, also represented by a set of atomic propositions, if P ⊆ S. The effect of executing a is to transform the state into the new state a(S) = (S − D) ∪ A.
Although states are sets of propositions, not all sets of propositions form valid states for a given domain. For a given domain, consisting of an initial state, a collection of actions and a goal condition, the set of states for the domain is the set of all sets of propositions that can be reached by legal applications of the actions. In the rest of the paper, when we quantify over states we intend this to be over all the valid states for the (implicit) domain in question.
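The set-based semantics of Definition 1 translates directly into code. The following minimal sketch (the names and the frozenset representation are our own illustrative choices) shows applicability and state transition:

```python
from typing import FrozenSet, Tuple

# An action a = (P, A, D): precondition, add and delete sets, per Definition 1.
Action = Tuple[FrozenSet[str], FrozenSet[str], FrozenSet[str]]

def applicable(a: Action, s: FrozenSet[str]) -> bool:
    # a is applicable in state S iff P ⊆ S
    pre, _, _ = a
    return pre <= s

def apply_action(a: Action, s: FrozenSet[str]) -> FrozenSet[str]:
    # a(S) = (S − D) ∪ A
    _, add, delete = a
    return (s - delete) | add
```

For example, an action with precondition {p}, add set {q} and delete set {p} maps the state {p, r} to {q, r}.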
Definition 2 A simple durative propositional action, D, with fixed duration (Fox & Long 2003), is the 4-tuple ⟨As, Ae, I, d⟩, where d is the duration (a fixed rational), As and Ae are classical propositional planning actions that define the pre- and post-conditions at the start and end points of D respectively, and I is an invariant condition, which is a set of atomic propositions that must hold in every state throughout the execution of D.
We do not choose to emphasise here the conditions under which two classical actions are considered mutex (see (Fox & Long 2003) for details), but note that concurrent execution of two instances of the same durative action in which the end points coincide will not be possible if the end points are mutex. This means that, for such execution to be possible, the coinciding end points cannot delete or add the same propositions; since the two end points are identical, they can therefore have no effects at all. Hence, there is no role for these actions in a plan and they can be ignored
in planning. Therefore, we assume that all our durative actions must, if two instances of the same action are to run concurrently, be executed with some offset between them.

3 Key Properties of Actions

We now proceed to define some essential properties that help to characterise the ways in which actions can interact with one another or with aspects of the states to which they are applied.
Definition 3 A classical propositional action, a = ⟨P, A, D⟩, is repeatable if in every state S in which a is applicable, P ⊆ a(S).
A repeatable action can be applied twice in succession without any intervening action to reset the state of resources that might be used by the action. As we shall see, repeatable actions are constrained in the impact they may have on a state.
Definition 4 A classical propositional action, a = ⟨P, A, D⟩, is weakly conditional if there are two states S1 and S2 such that a is applicable in both states and either there is a proposition p ∈ A such that p ∈ S1 and p ∉ S2, or there is a proposition p ∈ D such that p ∈ S1 and p ∉ S2.
A weakly conditional action is one that can be executed in situations in which some of its positive effects are already true, despite not being preconditions of the action, or some of its negative effects are already false. The reason we call these actions weakly conditional is that these effects are semantically equivalent to the simple conditional effects (when (not p) p) and (when p (not p)) for positive and negative effects respectively. These expressions are obviously part of a richer language than the classical propositional actions. In fact, they make use of both negative preconditions and conditional effects. This combination is known to be an expensive extension to the classical propositional framework (Gazen & Knoblock 1997; Nebel 2000). Nevertheless, weakly conditional actions are obviously valid examples of classical propositional actions. Notice that we require weakly conditional actions to be applicable in states that capture both possibilities in the implicit condition. This constraint ensures that situations in which the preconditions of an action imply that a deleted condition must also hold, without that condition being explicitly listed as a precondition (or the analogous case for an add effect), are not treated as examples of weakly conditional behaviour.
We now define some actions with reduced structural content of one form or another.
Definition 5 A classical propositional action a = ⟨P, A, D⟩ is a null action if P, A and D are all empty.
Definition 6 A classical propositional action a = ⟨P, A, D⟩ is a null-effect action if for every state S such that P ⊆ S, S = a(S).
Note that one way in which an action can be a null-effect action is that the action simply reasserts any propositions it deletes and all of its effects are already true in the state to which it is applied. Actions that reassert conditions they delete are not entirely useless, provided they also achieve some other effects that are not already true. Some encodings of the blocks world domain can lead to ground actions that both delete and add the proposition that there is space on the table, simply to avoid having to write special actions to deal with the table. Also observe that null actions are a special case of null-effect actions.
We can now prove a useful property of repeatable actions:
Theorem 1 Any repeatable action is either a weakly conditional action, a null action or a null-effect action.
Proof: Suppose a repeatable action, a = ⟨P, A, D⟩, is not a null-effect action (and, therefore, not a null action). Then there must be some state in which a can be applied, Sa, such that a(Sa) ≠ Sa. Since a is repeatable, it must be that P ⊆ a(Sa) = (Sa − D) ∪ A ≠ Sa. Suppose a is not weakly conditional. Then for every p ∈ D, p ∈ Sa iff p ∈ a(Sa), and for every p ∈ A, p ∈ Sa iff p ∈ a(Sa). Since A ⊆ a(Sa), the latter implies that A ⊆ Sa. The fact that a(Sa) ≠ Sa then implies that there is some p ∈ D such that p ∈ Sa and p ∉ a(Sa). This contradicts our assumption, so a must be weakly conditional.
We now consider the ways in which these classical actions can appear in certain roles in durative actions.
Definition 7 A simple durative action, D = ⟨As, Ae, I, d⟩, is a pseudo-durative action if Ae is a null action and I is empty.
Definition 8 A simple durative action, D = ⟨As, Ae, I, d⟩, is a purely state-preserving action if Ae = ⟨P, A, D⟩ is a null-effect action and every state satisfying I also satisfies P.

3.1 Deadlocking Actions

One last variety of action is so significant that we choose to devote a separate subsection to it.
Definition 9 A simple durative action, D = ⟨As, Ae, I, d⟩, is a deadlocking action if there is a state, S, such that I ⊆ S but Ae is not applicable in S.
Thus, a deadlocking action is one that could begin execution and then, either by execution of intervening actions, or possibly simply by leaving the state unchanged, arrive in a situation in which the action cannot terminate because the conditions for its termination are not satisfied.
Deadlocking actions are clearly not natural actions: there is no real situation in which it is possible to stop time advancing by entering a state in which an action must terminate before time progresses, but cannot because the conditions for its termination are not satisfied. If we adopt a model in which a durative action has a fixed duration then the conditions for its termination must be inevitable, but the effects it has might well be conditional on the state at that time. In domains where deadlock is possible (for example, in the execution of parallel processes), the effect is not to stop time, of course, but to stop execution of the processes. This means that if one were to consider the behaviour of the parallel processes to be modelled by durative actions, the failure to terminate would be handled by the actions having unlimited duration. Therefore, we contend that no natural domains need to be modelled with deadlocking actions.

4 Self-Overlapping Actions

We now turn our attention to the circumstances in which two instances of the same simple durative action can be executed concurrently. Figure 1 shows the intervals that are created by the overlapping execution of such a pair of action instances. Note that when such an overlap occurs there are two places where classical propositional actions might be repeated: As and Ae.

[Figure: two offset executions of the same action, each running from As to Ae over duration d with invariant I; the overlap divides the timeline into the intervals X, Y and Z.]
Figure 1: The intervals created by overlapping execution of two instances of the same action.

Theorem 2 If two instances of a simple durative action, a = ⟨As, Ae, I, d⟩, can execute concurrently, then a is either a deadlocking, pseudo-durative or purely state-preserving action, or else Ae is weakly conditional.
Proof: Suppose that two instances of a can execute concurrently and consider the two instances of Ae at the ends of the action instances. Either a is deadlocking, or else it must be possible for these two instances to be repeatable, since there is no requirement that an action be inserted in the period Z. Then, by our earlier result, Ae must be either a null action, a null-effect action or else weakly conditional. If Ae is a null action then a is either pseudo-durative (if I is empty) or else it is purely state-preserving. Finally, if a is not deadlocking and Ae is a null-effect action, then any preconditions of Ae must be true in any state satisfying I (otherwise there would be a state in which I was satisfied, yet a could not terminate, implying that a is deadlocking) and therefore a is either pseudo-durative (if I is empty) or else it is purely state-preserving.
Now that we have classified the simple durative actions that may execute concurrently with themselves, we briefly analyse the alternatives. We have already argued that deadlocking actions do not appear in natural domains. Pseudo-durative actions can be treated as though they were classical propositional actions, without duration, provided that a simple check is carried out on completed plans to ensure that adequate time is allowed for any instances of these actions to complete. Purely state-preserving actions are more interesting. An example of such an action is one that interacts with a savings account and thereby triggers a constraint that the money in the account must be left untouched for some fixed period. Clearly, such an action is not unreasonable, even if it is uncommon. Fortunately, Rintanen's translation of temporal domains into classical domains can be achieved for purely state-preserving actions without additional counters to monitor the duration of overlapping instances of these actions. This is because the only important thing about these actions is how long the conditions they encapsulate must be preserved. Each time a new instance is executed, the clock must be restarted to ensure that the preservation period continues for the full length of the action from that point. Since the end of the action has no effects, it is not necessary to apply it except when the counter reaches zero, at which point the invariant constraint becomes inactive.
Thus, the source of the complexity gap that Rintanen identifies can be traced, for all practical purposes, to the use of durative actions terminated by weakly conditional actions. Weakly conditional actions can be compiled into non-weakly-conditional actions by the usual expedient of creating multiple versions of the actions. The idea is to have one version for the case where the condition is true and one for the case where the condition is false, each with the appropriate additional precondition to capture the case, and the appropriate version carrying the conditional effect, but now as an unconditional effect. The problem with this compilation is that the number of variants it creates is exponential in the size of the collection of conditional effects.
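The compilation of conditional effects described above can be made concrete. The sketch below (the function name and the 'not-<p>' encoding of negative preconditions are our own illustrative choices, not the paper's) produces one unconditional variant per truth assignment to the conditions, making the blow-up of 2^k variants for k conditional effects explicit:

```python
from itertools import product

def compile_conditional_effects(pre, add, dele, cond_effects):
    """Compile away conditional effects by case splitting (a sketch of the
    Gazen & Knoblock-style expedient discussed in the text).

    `cond_effects` is a list of (condition, add_set, del_set) triples.
    Each variant fixes every condition to true or false, adding the
    condition (or its negation, written 'not-<p>') as an extra
    precondition and folding the triggered effects in unconditionally.
    """
    variants = []
    for choices in product([True, False], repeat=len(cond_effects)):
        p, a, d = set(pre), set(add), set(dele)
        for on, (cond, c_add, c_del) in zip(choices, cond_effects):
            if on:
                p.add(cond)            # case: the condition holds
                a |= c_add
                d |= c_del
            else:
                p.add("not-" + cond)   # case: it fails (negative precondition)
        variants.append((frozenset(p), frozenset(a), frozenset(d)))
    return variants
```

An action with k conditional effects yields 2^k variants, which is precisely the exponential expansion referred to in the text; note too that the "false" branches require negative preconditions, themselves an extension of the pure classical framework.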
5 Relevance to Practical Planning

The relevance to practical planner design of the result we have demonstrated is two-fold. Firstly, we have shown that overlapping instances of the same action can occur only under limited conditions. These conditions can often be identified automatically using standard domain analysis techniques (Fox & Long 1998). This means that it is possible to determine whether the machinery needed to handle the special case must be activated. Avoiding the use of techniques that would be redundant is useful in practical planner design, as a way to achieve improved efficiency. Secondly, the results demonstrate that the focus of temporal planning should be, in the first place, on handling concurrency between distinct action instances and on the treatment of weakly conditional effects. The latter phenomenon is one that has not, to the best of our knowledge, been highlighted in the past, but it is clearly a significant issue, since compilation of such effects into unconditional actions is both non-trivial and, potentially, exponentially costly.

6 Conclusions

We have shown what kinds of simple durative actions can run concurrently with instances of themselves. Identifying the conditions that allow this has led to the discovery of a close link between the complexity gap identified by Rintanen and the complexity induced by the extension of propositional domains to those with conditional effects. A further important consequence of this analysis is that if actions have bounded effect lists then the complexity of temporal planning is PSPACE-complete, even if self-overlapping actions are allowed.
In general, the current collection of benchmark domains does not appear to contain durative actions with repeatable terminating actions (although in many cases this is because the states in which the end actions can be executed are limited by the necessary application of the start effects of the durative actions to which they belong). This means that the problem of self-overlapping actions does not arise in these domains.
In domains in which there are repeatable terminating actions, it is non-trivial to identify which effects contribute to the weakly conditional behaviour. Delete effects are simpler to manage: any delete effect that is not listed as a precondition must be assumed to have the potential to be a weakly conditional effect. Add effects are more problematic: unless an add effect is shown to be mutually exclusive with the preconditions of the action, it must be assumed to be weakly conditional. It is possible to use mutex inference, such as that used in Graphplan (Blum & Furst 1995) or that performed by TIM (Fox & Long 1998), to identify which add effects must be considered weakly conditional. In general, to ensure that the weakly conditional behaviour has been completely compiled out, it is necessary to make a conservative assumption about any effects that cannot be shown to be ruled out. Nevertheless, in practical (propositional) domains the number of effects is tightly limited (ADL domains with quantified effects are not quite so amenable) and this makes it possible to compile out the weakly conditional effects with only a limited expansion in the number of actions.

References

Blum, A., and Furst, M. 1995. Fast planning through planning graph analysis. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI-95), 1636–1642.
Fox, M., and Long, D. 1998. The automatic inference of state invariants in TIM. Journal of AI Research 9.
Fox, M., and Long, D. 2003. PDDL2.1: An extension of PDDL for expressing temporal planning domains. Journal of AI Research 20:61–124.
Gazen, B., and Knoblock, C. 1997. Combining the expressivity of UCPOP with the efficiency of Graphplan. In ECP-97, 221–233.
Halsey, K.; Long, D.; and Fox, M. 2004. Crikey - a planner looking at the integration of scheduling and planning. In Proceedings of the Workshop on Integrating Scheduling Into Planning at the 13th International Conference on Automated Planning and Scheduling (ICAPS'03), 46–52.
Long, D., and Fox, M. 2003. Exploiting a graphplan framework in temporal planning. In Proceedings of ICAPS'03.
Nebel, B. 2000. On the expressive power of planning formalisms: Conditional effects and boolean preconditions in the STRIPS formalism. In Minker, J., ed., Logic-Based Artificial Intelligence. Kluwer. 469–490.
Rintanen, J. 2007. Complexity of concurrent temporal planning. In Proceedings of the International Conference on Automated Planning and Scheduling, 280–287.
Optimisation of Generalised Policies via Evolutionary Computation
Michelle Galea and John Levine and Henrik Westerberg∗
Dave Humphreys†
Department of Computer & Information Sciences
University of Strathclyde
Glasgow G1 1XH, UK
CISA, School of Informatics
University of Edinburgh
Edinburgh EH8 9LE, UK
Abstract
This paper investigates the application of Evolutionary Computation to the induction of generalised policies. A policy is here defined as a list of rules that specify which actions are to be performed under which conditions. A policy is domain-specific and is used in conjunction with an inference mechanism (to decide which rule to apply) to formulate plans for problems within that domain. Evolutionary Computation is concerned with the design and application of stochastic population-based iterative methods inspired by natural evolution. This work illustrates how it may be applied to the induction of policies, compares the results on one domain with those obtained by a state-of-the-art approximate policy iteration approach, and highlights both the current limitations (such as a simplistic knowledge representation) and the advantages (including optimisation of rule order within a policy) of our system.

[Figure: a planner combines a problem with a domain model (Domain A, B, C) and the corresponding domain-specific policy; an inference method applies the policy to produce a plan.]
Figure 1: Planning using generalised policies and inference mechanisms
Introduction
We present an evolution-inspired system that induces generalised policies from available solutions to planning problems.
The term generalised policy was coined by Martin & Geffner
(2004) for a function that maps pairs of initial and goal states
to actions. The actions it outputs should, when performed,
achieve the specified goal state from the specified initial state.
Figure 1 presents a simplified view of a planner based on
generalised policies. A distinction is made here between a
policy – the knowledge used to solve a problem, and the inference mechanism that utilises the policy – the decision procedure that dictates when and how the knowledge is applied.
A domain model defines a specific domain in terms of relevant objects, actions and their effects.
A policy in this work is a list of domain-specific IF-THEN rules. If the conditions stated in the IF-part of a rule match the current state, then the action in the THEN-part may be applied. The currently implemented inference mechanism is a common and simple one: rules within a policy are ordered and the action of the first rule that may be applied is performed. If more than one valid combination of variable bindings exists then orderings on the variables and their values are adopted and the first valid combination is effected.
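The first-matching-rule inference mechanism just described is easy to state as code. A minimal, purely propositional sketch (variable bindings are abstracted away; all names here are our own):

```python
def select_action(policy, state, goal):
    """Return the action of the first rule whose conditions all match.

    `policy` is an ordered list of (conditions, action) pairs; each
    condition is either a ground atom checked against `state`, or a
    goal-referencing condition written ('goal', atom) and checked
    against `goal`.  Returns None if no rule fires.
    """
    for conditions, action in policy:
        matched = all(
            (atom[1] in goal) if isinstance(atom, tuple) and atom[0] == "goal"
            else (atom in state)
            for atom in conditions
        )
        if matched:
            return action
    return None
```

Because rule order decides which action is chosen when several rules match, reordering the list changes the behaviour of the policy, which is exactly what the rule-order optimisation mentioned in the abstract exploits.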
∗Currently at Systems Biology Unit, Centre for Genomic Regulation, C/Dr Aiguader
88, Barcelona 08003, Spain
†Currently at Mobile Detect Inc., Ottawa, Ontario, Canada K1L 6P5
It should be noted that these policies contain a particular type of control knowledge. Control knowledge is domain-specific knowledge often used by some planners to prune search during the construction or identification of a plan. Control knowledge is often expressed as IF-THEN type rules, but the conditions and actions relate to goal, domain operator and/or variable-binding decisions to be taken during the search process. Examples of work that induces such knowledge include (Leckie & Zukerman 1998) and (Aler, Borrajo, & Isasi 2002).
In this work a policy determines domain operator selection and each rule describes the conditions necessary for a
particular operator to be applied. The inference mechanism
makes all other decisions (which rule to apply, and which variable bindings to implement) without recourse to any search, leading to highly efficient planners.
The induction of policies is carried out using Evolutionary Computation (EC) in a supervised learning context. EC
is the application of methods inspired by Darwinian principles of evolution to computationally difficult problems, such
as search and combinatorial optimisation. Its popularity is
due in great part to its parallel development and modification
of multiple solutions in diverse areas of the solution space,
discouraging convergence to a suboptimal solution.
We compare the performance of one evolved policy with
that obtained using a state-of-the-art Approximate Policy Iteration (API) (Bertsekas & Tsitsiklis 1996) approach. We
focus on the knowledge representation language (KR) and
learning mechanism highlighting both the current limitations
and strengths of our system. The rest of this paper reviews
the literature on generalised policy induction, describes our
implemented system, and discusses experiment results and
future research directions.
Related Work
Early work on inducing generalised policies utilises genetic programming (GP) (Koza 1992), a particular branch of
EC. Evolutionary algorithms in general iteratively apply
genetic-inspired operators to a population of solutions, with
fitter individuals of a generation (according to some predefined fitness criteria) more likely to be selected for modification and insertion into successive generations than weaker
members. On average, therefore, each new generation tends
to be fitter than the previous one. GP is distinguished by a tree
representation of individuals that makes it a natural candidate
for the representation of functional programs.
Koza (1992) describes a GP algorithm for solving a
blocksworld problem variant – the goal is a program capable of producing a tower of blocks that spells “UNIVERSAL”, starting from a range of different initial tower configurations. The tree-like individuals in a generation are
constructed from sets of functions (such as move to stack
and move to table) and terminals that act as arguments to the functions (such as top block of stack and
next needed block).
Each individual in the population is assessed by its performance on a set of 166 initial configurations. Generation 10
produces a program that correctly stacks the tower for each
of the given configurations, though it uses unnecessary block
movements and contains unnecessary functions. When elements that penalise these inefficiencies are included in the fitness assessment, the algorithm outputs a parsimonious program that produces solutions that are both correct
and optimal (in terms of plan length).
Spector (1994) uses Koza’s algorithm with different function and terminal sets to induce solutions to the Sussman
Anomaly – the initial state is block C on block A, with blocks
A and B on the table; the goal state is block A on B, which
is on C, which is on the table. In a first experiment the author uses functions such as newtower (move X to table if X
is clear) and puton (put X on Y if both are clear), and the
terminals are the names of the blocks A, B and C. The goal is
a program that can attain the goal state from the initial state,
and individuals are assessed on this one problem. The fitness
function includes elements that reward parsimony and efficiency as well as correctness, and the goal is achieved well
before the final generation.
In further experiments the author introduces new functions
and replaces the block-specific terminals with ones that refer
to blocks by their positions in goals. The number of problems
on which individuals are assessed is also increased. One experiment is designed to produce a program that achieves the
Sussman goal state from a range of different initial states.
The resulting program achieves this particular goal state even
from initial configurations that are not used during learning.
However, it is incapable of achieving a different goal state
from that on which it was trained, even a simplified one such
as (ON B A).
Another experiment seeks a program capable of achieving 4 different goal states (maximum 3 blocks), from different initial states. This evolved program is capable of attaining any of the 4 specified goal states from initial states not
observed during the evolutionary process. The author indicates that it is also capable of solving some 4-block problems,
though its generalisation power for this and larger problems
has not been fully analysed.
The work of Khardon (1999) for inducing policies has inspired and/or often been cited by later work. It uses a deterministic learning method to induce decision lists of IF-THEN
rules from examples of solved problems, with the first rule
in the list that matches the observed state being applied. The
learning strategy is one of iterative rule learning where the
following step is iterated until no examples are left in the
training set – a number of rules are generated, the best (according to some criterion) is determined, examples that are
covered by this rule are removed from the training data, and
the rule is added to a growing rulebase. The number of rules
generated in each iteration must be finite and tractable and
this is controlled in part by setting limits to the number of
conditions and variables in the IF-part of a rule; all possible rules for each action are then generated in each iteration.
The training data is formulated by extracting examples from
planning problems and their solutions – each state and action
encountered in a plan constitutes one example.
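The iterative rule-learning strategy described above can be sketched as a simple loop, with rule generation, coverage testing and scoring abstracted as hypothetical callables (the real components bound the number of conditions and variables per rule):

```python
def learn_decision_list(examples, generate_rules, covers, score):
    """Iterative (separate-and-conquer) rule learning, as described above.

    `generate_rules(examples)` enumerates candidate rules (hypothetical);
    `covers(rule, example)` tests whether a rule covers an example;
    `score(rule, examples)` ranks candidates.  Each round keeps the
    best rule and removes the examples it covers, until none remain.
    """
    policy = []
    remaining = list(examples)
    while remaining:
        candidates = generate_rules(remaining)
        best = max(candidates, key=lambda r: score(r, remaining))
        if not any(covers(best, ex) for ex in remaining):
            break  # guard: no progress is possible, stop rather than loop
        policy.append(best)
        remaining = [ex for ex in remaining if not covers(best, ex)]
    return policy
```

The returned list is ordered by the round in which each rule was learned, which is also the order in which the inference mechanism tries the rules.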
In addition to the training examples and a standard STRIPS
domain description Khardon provides the learning algorithm
with background knowledge he calls support predicates –
concepts such as above and inplace for the blocksworld
domain. The resulting policy is an ordered list of existentially
quantified rules with predicates in the condition part that may
or may not be negated, and may or may not refer to a subgoal. For instance, holding(x1) ¬clear(x2) G(on(x1, x2)) → PUTDOWN(x1) represents a rule that says: if x1 is currently held, x2 is not clear, and in the goal state x1 should be on x2, put down x1.
Blocksworld policies are generated using different training sets containing examples drawn from solutions to 8-block
problems, and are tested on new problems of sizes ranging from 7 to 20 blocks. Their performance varies from a high of 83% of
7-block problems solved, down to 56% of 20-block problems.
Similar experiments are carried out for the logistics domain
with training of policies on examples obtained from solutions
to problems with 2 packages, 3 cities, 3 trucks, 2 locations
per city, and 2 airplanes. Policies are tested on problems with similar dimensions to the training problems, and the number
of packages is varied from 2, solving 80% of problems, to
30, solving 68% of problems.
Martin & Geffner (2004) suggest that the generalisation
power of Khardon’s policies over large problems is weak, and
that obtaining domain-dependent background knowledge is
not always a trivial task. They use the same learning method
as Khardon but propose to overcome both weaknesses by using description logics (Baader et al. 2003) as the KR. This
enables the representation of concepts that describe classes
of objects, such as the concept of a well-placed block.
A blocksworld policy induced from 5-block problem examples solves 99% of the 25-block test problems. With the
addition of an incremental refinement procedure a policy is
eventually induced that solves 100% of test problems: a policy is induced and tested on 5-block problems; optimal solutions are found for the problems it fails on, and examples
are extracted from these and added to the training set; then, a
new policy is induced from the larger dataset. The authors repeat this procedure several times until a policy solves all the
25-block test problems presented (test problems are new each
time the policy is tested). It should be noted however that as
well as the KR and the refinement extension to the learning
algorithm, the way training examples are extracted from solutions is different from that in Khardon’s work – Martin &
Geffner use as examples all actions for each state that lead to
an optimal plan; this may have some impact on the quality of
the induced policies.
Fern, Yoon, & Givan (2006) learn policies for a long random walk (LRW) problem distribution using a form of API.
A policy is a list of action-selection rules where the action
of the first rule that matches the current and goal states is
applied. An LRW distribution randomly generates an initial
state for a problem, executes a long sequence of random actions, and sets the goal as a subset of properties of the final
resulting state. For a given domain API iteratively improves
a policy until no further improvement is observed or some
other stopping criterion is used. The expectation is that if
a learned policy πn performs well on problems drawn from
random walks of length n, then it will provide reasonable performance or guidance on problems drawn from random walks
of length m, where m is only moderately larger than n. πn is
therefore used to bootstrap API iterations to find πm , i.e. to
find a policy that handles problems drawn from increasingly
longer random walks.
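The LRW generation scheme described above can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the helper names (applicable_actions, apply_action) and the encoding of a state as a set of facts are assumptions.

```python
import random

def lrw_problem(initial_state, applicable_actions, apply_action, n, goal_size=3):
    """Draw one problem from an LRW distribution: from an initial state,
    execute a walk of n random actions and take a subset of the final
    state's properties as the goal."""
    state = initial_state
    for _ in range(n):
        actions = applicable_actions(state)
        if not actions:
            break  # dead end: stop the walk early
        state = apply_action(state, random.choice(actions))
    # the goal is a random subset of the facts true in the final state
    k = min(goal_size, len(state))
    goal = frozenset(random.sample(sorted(state), k))
    return initial_state, goal
```

Bootstrapping then amounts to solving problems drawn with walk length n before moving on to length m > n.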
Within each iteration, trajectories (sequences of alternating states and actions) for an improved policy are generated
using policy rollout (Tesauro & Galperin 1996), and then an
improved policy is learned using the trajectories as training
data. The policy learning component follows an iterative rule
learning strategy. The difference between this learning strategy and that of Khardon and Martin & Geffner lies in the
rule generation procedure where a greedy heuristic search is
used instead of exhaustively enumerating all rules. The KR
(based on taxonomic syntax) is also different, and is expressive enough so that no support predicates need be supplied to
the learning process.
This work is currently state-of-the-art in this particular
research area, i.e. where policies that are learned are used
with a simple and efficient decision procedure to solve planning problems. It presents policies for several domains and
tests them rigorously on deterministic and stochastic problems from an LRW distribution and from the 2000 planning
competition; the results compare favourably with those obtained by the FF planning system (Hoffmann & Nebel 2001).
In this paper we explore the Briefcase domain API-generated policy and compare its performance with one
evolved by our system, focusing on the limitations of our KR
and the strength of our policy optimisation mechanism.
(1)  Create initial population
(2)  WHILE termination criterion false
(3)    Evaluate current generation
(4)    WHILE new generation not full
(5)      Perform reproduction
(6)      Perform recombination
(7)      Perform mutation
(8)      Perform local search
(9)    ENDWHILE
(10) ENDWHILE
(11) Output fittest individual
Figure 2: Pseudocode outline of L2Plan
Learning Policies using L2Plan
L2Plan (Learn to Plan) induces policies of rules similar to
Khardon’s, but the learning mechanism used is a population-based iterative approach inspired by natural evolution.
Input to L2Plan consists of an untyped STRIPS domain
description, additional domain knowledge if available (e.g.
concept of a well-placed block), and domain examples on
which to evaluate the policies being learned. The output is
a domain-specific policy that is used in conjunction with an
inference mechanism to solve problems within that domain.
A policy consists of a list of rules, with each rule being a
specialised IF-THEN rule (also known as a production rule).
The IF-part is composed of two condition statements, where
each is a conjunction of ungrounded predicates which may be
negated:

IF condition AND goalCondition THEN action
condition relates to the current state and goalCondition to the goal state. If variable bindings exist such
that predicates in condition match with the current state,
and predicates in goalCondition match with the goal state,
then the action may be performed. Note though that the action’s precondition must also be satisfied in the current state.
The list of rules is ordered and the first applicable rule is used.
Variable and domain orderings are followed if more than one
combination of bindings is possible.
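The rule-application procedure just described (try rules in order; return the action of the first rule whose condition matches the current state and whose goalCondition matches the goal under some variable binding) can be sketched as below. The tuple encoding of predicates and the function names are illustrative assumptions; the sketch omits the action-precondition check and the variable/domain orderings mentioned in the text.

```python
def match(patterns, facts, binding=None):
    """Yield variable bindings under which every positive pattern unifies
    with a fact in `facts` and no negated pattern does.  A pattern is a
    tuple like ('at', '?obj', '?to'); ('not', p) marks negation."""
    binding = dict(binding or {})
    pos = [p for p in patterns if p[0] != 'not']
    neg = [p[1] for p in patterns if p[0] == 'not']

    def bind(pattern, fact, b):
        if len(pattern) != len(fact) or pattern[0] != fact[0]:
            return None
        b = dict(b)
        for v, t in zip(pattern[1:], fact[1:]):
            if v.startswith('?'):
                if b.get(v, t) != t:
                    return None  # inconsistent binding
                b[v] = t
            elif v != t:
                return None
        return b

    def ground(p, b):
        return tuple(b.get(x, x) for x in p)

    def search(i, b):
        if i == len(pos):
            if all(ground(p, b) not in facts for p in neg):
                yield b
            return
        for f in facts:
            b2 = bind(pos[i], f, b)
            if b2 is not None:
                yield from search(i + 1, b2)

    yield from search(0, binding)

def apply_policy(rules, state, goal):
    """Return the grounded action of the first rule whose condition
    matches the current state and whose goalCondition matches the goal."""
    for cond, goal_cond, action in rules:
        for b in match(cond, state):
            for b2 in match(goal_cond, goal, b):
                return tuple(b2.get(x, x) for x in action)
    return None
```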
Figure 2 presents an outline of the system. Each iteration starts with a population of policies (line (2)). The performance of these policies is evaluated on training data generated from planning problems from the domain under consideration (line (3)). The resulting measure of fitness for a
policy is used to determine whether it is replicated in the next
iteration (line (5)), or whether it may be used in combination
with another policy to reproduce ‘offspring’ that may be inserted into the next iteration (also called crossover, line (6)).
All policies to be inserted in the next iteration may undergo
some form of random mutation (i.e. small change, line (7)),
and a local search procedure that attempts to increase the fitness of the policy (line (8)).
The system terminates if a predefined maximum number
of generations have been created, or a policy attains maximum fitness by correctly solving all examples, or, the average difference in policy fitness in an iteration falls below a
predefined user-set threshold (indicating convergence of all
individuals to similar policies).
Since the results of the evaluation process influence the
creation of the next generation, the average fitness of all policies is expected to improve from one generation to the next.
The fact that several policies are present in each iteration allows the
possibility of exploring different regions of the solution space
at once. This, coupled with an element of randomness used in
the selection of policies for crossover and mutation, may
help to prevent all policies from converging to a local optimum solution.

(:rule position briefcase to pickup misplaced object
 :condition (and (at ?obj ?to))
 :goalCondition (and (not (at ?obj ?to)))
 :action movebriefcase ?bc ?from ?to)

Figure 3: Example of a briefcase rule with a variable in
condition that is not a parameter of the action
The following paragraphs describe the creation of the initial population, policy evaluation, and the genetic operators
used to create new policies from old.
(define (example blocks1 1)
(:domain blocksworld)
(:objects 5 4 3 2 1)
(:initial
... )
(:goal
... )
(:actions
(move-b-to-b 1 3 4) 1
(move-b-to-b 1 3 5) 1
(move-b-to-b 4 2 1) 1
(move-b-to-b 4 2 5) 1
(move-b-to-t 1 3) 0
(move-b-to-t 4 2) 0
(move-t-to-b 5 1) 2
(move-t-to-b 5 1) 2) )
Figure 4: A training example generated from a blocksworld
problem
Generating the Initial Population
L2Plan first generates an initial population of policies – the first generation (Fig. 2, line (1)). The number of individuals
in a population is predefined by the user (generally 100), and
stays fixed until the system terminates. The number of rules
in a policy at this stage is randomly set between user-defined
minimum and maximum values (4 and 8 respectively).
The condition and goalCondition statements of a rule
are also generated randomly, within certain constraints. The
action, i.e. the THEN part of the IF - THEN rule, is first selected
randomly from all domain actions.
The size of goalCondition in the IF-part of the rule
is determined by drawing a random integer between user-defined minimum and maximum values (set to 1 and 3 respectively), which determines the number of predicates. A
predicate is first selected, and then the appropriate number of
variables are randomly selected from all possible variables.
Predicates are randomly negated.
The size of condition in the IF-part of the rule is currently determined by the number of parameters of the selected
action, and a random selection of predicates. A predicate is
selected randomly, and then variables for the predicate are
randomly selected from the action’s parameters. Predicates
are selected, and variables assigned, until all of an action’s parameters are present in at least one predicate of condition.
Each predicate is randomly negated.
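The two generation procedures just described can be sketched as below; the predicate encoding, the 50% negation chance, and the function names are illustrative assumptions rather than L2Plan's actual code.

```python
import random

def random_goal_condition(predicates, variables, min_preds=1, max_preds=3):
    """goalCondition: draw a random number of predicates (between the
    user-defined bounds), fill each with randomly chosen variables, and
    randomly negate each one."""
    cond = []
    for _ in range(random.randint(min_preds, max_preds)):
        name, arity = random.choice(predicates)
        pred = (name,) + tuple(random.choice(variables) for _ in range(arity))
        cond.append(('not', pred) if random.random() < 0.5 else pred)
    return cond

def random_condition(predicates, action_params):
    """condition: keep adding random predicates over the action's
    parameters until every parameter appears in at least one predicate."""
    cond, covered = [], set()
    while covered != set(action_params):
        name, arity = random.choice(predicates)
        pred = (name,) + tuple(random.choice(action_params) for _ in range(arity))
        covered |= set(pred[1:])
        cond.append(('not', pred) if random.random() < 0.5 else pred)
    return cond
```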
However, early experiments highlighted that restricting the
parameters in condition strictly to those in the set of parameters for an action severely limits the knowledge that can
be expressed by a rule. For example, the system is unable
to learn the rule in Fig. 3 due to this constraint. This rule
specifies that if an object is misplaced (i.e. its current location is not the location specified for it in the goal state), then
a briefcase is moved to the current location of the object. A
temporary quick fix has been implemented that inserts an extra unary predicate in the domain description. With this predicate added to the precondition of each action/operator, L2Plan is able to create rules such as the one
in Fig. 3.
Note that a policy need not contain a rule to describe each
action in the domain, and that the initially set number of rules
for a policy, and the number of predicates in the conditions
of a rule, are liable to change with the application of genetic
operators.
Evaluating a Policy
The training data on which a policy is evaluated is composed
of a number of examples that are generated from a number
of planning problems. Each example consists of a state encountered on an optimal plan for the problem from which it is
extracted, and a number of actions which may be taken from
that state, each with an associated cost.
Consider a planning problem that includes an initial state
SI and a goal state SG . Each possible action that may be
taken from SI is performed, leading to new states. For each
new state a solution that attains SG is found using an available planner. The length of each solution is determined, and
the smallest-size solution is deemed the optimal plan. A cost
is now attached to each action performed from SI : the action that leads to the optimal plan is given a cost of zero, and
all other actions are given a cost that is the difference between the length of the solution that they form a part of, and
the length of the optimal plan. This now forms one training
example on which an evolving policy may be evaluated. Figure 4 shows the representation used for a training example,
which is consistent, as far as possible, with STRIPS syntax.
For each state on the optimal plan just determined, the
same procedure is followed as for SI , i.e. all possible actions from the next state on the optimal plan, say Sn , are performed, solutions for each of the resulting states are found,
and costs for each possible action taken from Sn are determined from the solutions’ length. Each training problem
therefore yields as many examples as there are states encountered on the optimal plan. Duplicate training examples are
removed so as not to bias L2Plan towards any particular scenario(s).
The planner used to generate training examples, i.e. when
determining plans to SG from any state Sn , is a simple one
using breadth-first search. This ensures that an optimal plan
is obtained and that actions in examples designated as optimal are in fact actions for states encountered on some plan
of minimal length. For some domains (e.g. blocksworld and
briefcase), in order to speed up the generation of examples
hand-coded control rules to prune branches from the search
are used; these control rules are designed to ensure that an
optimal plan is still determined.
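The costing procedure above can be sketched with a breadth-first planner, as the text describes. This is a minimal toy sketch: the fact-set state encoding and the helper names are assumptions for illustration.

```python
from collections import deque

def optimal_plan_length(state, goal, applicable_actions, apply_action):
    """Breadth-first search, so the returned plan length is optimal."""
    if goal <= state:
        return 0
    seen, frontier = {state}, deque([(state, 0)])
    while frontier:
        s, d = frontier.popleft()
        for a in applicable_actions(s):
            s2 = apply_action(s, a)
            if goal <= s2:
                return d + 1
            if s2 not in seen:
                seen.add(s2)
                frontier.append((s2, d + 1))
    raise ValueError("goal unreachable")

def label_actions(state, goal, applicable_actions, apply_action):
    """Cost each applicable action from `state` as in the example-generation
    procedure: 0 for an action on an optimal plan, otherwise the extra
    length of the plan it forms part of over the optimal plan."""
    lengths = {
        a: 1 + optimal_plan_length(apply_action(state, a), goal,
                                   applicable_actions, apply_action)
        for a in applicable_actions(state)
    }
    best = min(lengths.values())
    return {a: n - best for a, n in lengths.items()}
```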
The fitness of a policy is determined by averaging its performance over all examples, where for each example presented it is scored based on whether the selected action forms
part of an optimal plan or not. Formula (1) below describes
the fitness function where m is the number of training examples and actionCosti is the cost of the action taken by the
policy for training example i:
fitness = (1/m) · Σ_{i=1}^{m} 1 / (1 + actionCost_i)    (1)
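Formula (1) translates directly into code; the sketch below assumes each example is reduced to the cost of the action the policy selects on it.

```python
def fitness(action_cost, examples):
    """Formula (1): average over the m training examples of
    1 / (1 + actionCost_i), where actionCost_i is the cost of the action
    the policy selects on example i.  A policy that always picks an
    optimal action (cost 0) scores 1.0."""
    return sum(1.0 / (1.0 + action_cost(ex)) for ex in examples) / len(examples)
```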
Creating a New Generation of Policies
Current L2Plan settings are such that the individuals comprising the fittest 5% of a generation are reproduced, improved by
a local search mechanism, and then inserted into the next generation. The remainder of the next generation is populated by
individuals selected from the current generation and on which
various genetic operations are performed. The fitter individuals in the current population have a greater chance of being
selected for recombination and mutation, in the expectation
that their offspring and/or mutations result in even fitter individuals. However, randomness plays a part in their selection
and in the application of genetic operators in an attempt to
search different areas of the solution space and to avoid local
minima.
Selection of two individuals is performed using tournament selection with a size of 2 (Miller & Goldberg 1995).
Crossover or mutation is then applied with some predefined
probability (0.9 for crossover, 0.1 for mutation). The output
of these operators is a single policy – for crossover the fittest
of parents and offspring, and for mutation the fittest of the
original policy or mutants. Local search is performed on this
policy before it is inserted into the new generation. This procedure is repeated until the new generation is full.
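The generation loop just described can be sketched as follows, using the settings quoted in the text (5% elitism, crossover probability 0.9, mutation probability 0.1, tournament size 2). The operator interfaces are illustrative assumptions; `crossover` is assumed to return a list of offspring and `mutate` a single mutant.

```python
import random

def tournament(population, fitness, k=2):
    """Tournament selection of size k: the fitter of k random individuals."""
    return max(random.sample(population, k), key=fitness)

def next_generation(population, fitness, crossover, mutate, local_search,
                    elite_frac=0.05, p_cross=0.9):
    """One L2Plan-style generation: the fittest 5% are reproduced (after
    local search); the rest come from tournament selection followed by
    crossover (p=0.9) or mutation (p=0.1), then local search."""
    ranked = sorted(population, key=fitness, reverse=True)
    n_elite = max(1, int(elite_frac * len(population)))
    new = [local_search(p) for p in ranked[:n_elite]]
    while len(new) < len(population):
        a, b = tournament(population, fitness), tournament(population, fitness)
        if random.random() < p_cross:
            # the operator's output is a single policy: the fittest of
            # parents and offspring
            child = max([a, b] + list(crossover(a, b)), key=fitness)
        else:
            # likewise, the fittest of the original policy and its mutant
            child = max([a, mutate(a)], key=fitness)
        new.append(local_search(child))
    return new
```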
There are three types of crossover that may be performed
on the 2 selected policies, and 4 types of mutation that may
be performed on the first selected policy:
Single Point Rule Level Crossover A crossover point is
randomly chosen in each of the 2 policies, with valid points
being before any of the rules (points need not be the same
in the 2 policies). Two offspring policies are then created by
merging part of the policy of one parent (as delineated by the
crossover point), with a part of the other parent (the first part
of parent A with the second part of parent B, and the second
part of parent A with the first part of parent B).
Single Rule Swap Crossover A randomly selected rule
from policy A is swapped with a randomly selected rule from
policy B, resulting in two new policies. The replacing rule
is inserted in the same position in the policy as the one it is
replacing.
Similar Action Rule Crossover Two rules with the same
action are randomly selected from the parent policies, one
from each. Two new rules are created from the selected rules,
one by using condition from the first selected rule and
goalCondition from the second, and the other new rule is
created by using goalCondition from the first selected rule
and condition from the second. Each of the two newly created rules replaces the original rule in each of the two parent
policies, resulting in 4 new policies.
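As an illustration, the first of the three crossover operators can be sketched as below, treating a policy as an ordered list of rules; the function name is an assumption.

```python
import random

def single_point_rule_crossover(policy_a, policy_b):
    """Single Point Rule Level Crossover: pick a cut point in each parent
    (valid points lie before any of the rules, and need not be the same
    in both parents), then exchange the parts."""
    i = random.randrange(len(policy_a))
    j = random.randrange(len(policy_b))
    return policy_a[:i] + policy_b[j:], policy_b[:j] + policy_a[i:]
```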
Rule Addition Mutation A new rule is generated and inserted at a random position in the policy.
Rule Deletion Mutation A randomly selected rule is removed from the policy (if the policy contains more than one
rule).
Rule Swap Mutation Two randomly selected rules have
their position swapped in the policy (if the policy has more
than one rule).
Rule Condition Mutation A randomly selected rule has
its condition and/or goalCondition statement mutated,
by replacing the condition statement with a newly generated
one, or by removing a predicate from the statement, or by
adding a new predicate.
The local search procedure currently used is aimed at increasing the fitness of a policy as quickly as possible. It performs rule condition mutations a predefined number of times
(called the local search branching factor). The fittest mutant
replaces the original policy, and again, rule condition mutations are performed on the new policy the same predefined
number of times. This process is repeated until either no
improvement in fitness is exhibited by any mutant over their
originator policy, or for a preset maximum number of times
(called the local search depth factor).
A Comparison of Two Policies
This study focusses on comparing two policies for the briefcase domain, one generated by L2Plan and the other by the
API approach introduced in the Related Work section (Fern,
Yoon, & Givan 2006). The comparison serves two purposes:
• it highlights a current limitation of L2Plan, which is the
limited expressiveness of the KR; and,
• demonstrates the advantage offered by its policy discovery
mechanism, which optimises the rule order in a policy.
The Briefcase domain is chosen partly because it is as yet
one of the few domains for which we have evolved L2Plan
policies, and partly because the knowledge expressed in the
API induced policy is such that can be expressed as IF - THEN
rules.
The API Policy
Figure 5 presents the briefcase domain policy induced
by the API algorithm. A policy provides a mapping
from states to actions for a specific domain and consists
of a decision list of ‘action-selection rules’ of the form
a(x1 , ..., xk ) : L1 , L2 , ...Lm where a is a k-argument action
type, xi an action argument variable and Li is a literal. An
API policy is utilised in the same way as an L2Plan policy.
Each rule describes the action to be taken if a variable binding
exists for the rule that matches both the current state and the
goal. The current state must also satisfy the preconditions of
the action specified by the rule. The rules in a policy are ordered and the rule that is applied in a state is the first rule for
which a valid variable binding exists. A lexicographic ordering is imposed on objects in a problem to deal with situations
where more than one variable binding for the same rule may
be possible.
Below is a simpler example policy illustrating the main
features of the KR used: a policy for a blocksworld domain where the goal in all problems is to make all red blocks
clear.
1. putdown(x1): x1 ∈ holding
2. pickup(x1): x1 ∈ clear, x1 ∈ (on* (on red))
1. PUT-IN: (X1 ∈ (GAT⁻¹ (NOT IS-AT)))
2. MOVE: (X2 ∈ (AT (NOT (CAT LOCATION)))) ∧ (X2 ∈ (NOT (AT (GAT⁻¹ CIS-AT))))
3. MOVE: (X2 ∈ (GAT IN)) ∧ (X1 ∈ (NOT (CAT IN)))
4. TAKE-OUT: (X1 ∈ (CAT⁻¹ IS-AT))
5. MOVE: (X2 ∈ GIS-AT)
6. MOVE: (X2 ∈ (AT (GAT⁻¹ CIS-AT)))
7. PUT-IN: (X1 ∈ UNIVERSAL)

Figure 5: API briefcase policy in taxonomic syntax

1. PUT-IN misplaced package in briefcase
2. MOVE briefcase to pickup misplaced package, if briefcase is at its goal location and package does not have same goal location as briefcase
3. MOVE to goal location of package in briefcase, if there is no package in briefcase whose goal location is the same as the current location of briefcase
4. TAKE-OUT package that has arrived at its goal location
5. MOVE briefcase to its goal location
6. MOVE to pickup misplaced package, if briefcase is at its goal location and package has same goal location as briefcase
7. PUT-IN package in briefcase.

Figure 6: API briefcase policy in common language

1. TAKE-OUT package that has arrived at its goal location
2. PUT-IN misplaced package in briefcase
3. MOVE briefcase to pickup misplaced package
4. MOVE to goal location of package in briefcase
5. MOVE briefcase to its goal location

Figure 7: L2Plan briefcase policy in common language

Parameter                            Setting
Range of initial policy size         [4–8]
Population size                      100
Maximum number of generations        100
Proportion of policies reproduced    5%
Crossover probability                0.9
Mutation probability                 0.1
Local search branching               10
Local search depth                   10
Tournament selection size            2

Table 1: L2Plan parameter settings
The primitive classes (unary predicates) in this domain are
red, clear, and holding, while on is a primitive relation (binary predicate). If a domain contains predicates of greater
arity, these are converted to equivalent multiple binary predicates. A prefix of g indicates a predicate in the goal state
(e.g. gclear), while a comparison predicate c indicates that
a predicate is true in both the current state and the goal (e.g.
cclear). A primitive class (relation) is a current-state predicate, goal predicate or comparison predicate, and it is interpreted as the set of objects for which the class (relation) is
true in a state s. Compound expressions are formed by the
‘nesting’ of classes/relations, and/or the application of additional language features such as R∗ indicating a chain of a
relation R. Expressions have a depth associated with them
so that, for instance, the first expression in rule 2 above has
depth 1 and the second expression has depth 3.
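The blocksworld example above can be made concrete with a small interpreter for class expressions; the tuple encoding and the semantics sketched here (image of a class under a relation, with '*' giving transitive closure) are our reading of the syntax, not the authors' implementation.

```python
def evaluate(expr, classes, relations):
    """Evaluate a taxonomic-syntax class expression to the set of objects
    it denotes.  A string names a primitive class; ('R', C) denotes
    {x : R(x, y) for some y in C}; a relation name ending in '*' applies
    the relation zero or more times (transitive closure)."""
    if isinstance(expr, str):
        return set(classes.get(expr, ()))
    rel, sub = expr
    objs = evaluate(sub, classes, relations)
    pairs = relations[rel.rstrip('*')]
    image = lambda s: {x for (x, y) in pairs if y in s}
    if not rel.endswith('*'):
        return image(objs)
    result = set(objs)           # zero applications included
    frontier = image(result)
    while frontier - result:
        result |= frontier
        frontier = image(frontier)
    return result
```

With on(c, b), on(b, a) and a red, (on red) denotes {b} and (on* (on red)) denotes {b, c}, so rule 2 above picks up a clear block sitting anywhere above a red one.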
Figure 6 is a translation of the policy in Fig. 5 into common
language. Upon inspection it is clear that there is potential in
this policy to perform unnecessary steps. For instance, rule
2 moves the briefcase away from its current location without
first depositing any packages it contains that have as a goal
location the current briefcase location. Furthermore, two of
the four MOVE rules have as a necessary condition that the
briefcase must be at its goal location – this can cause problems and is discussed later on.
This API policy is translated into L2Plan-style IF - THEN
rules and tested using our implemented inference mechanism
on the same problems as our evolved policy. However, it is
important to note differences in the KR which highlight the
limited expressiveness of our current formulation of IF - THEN
rules. Consider rule 3 in Fig. 6 – it states that the briefcase is
moved to a goal location of a package within it, only if there
are NO other packages in the briefcase whose goal locations
are the same as the current location of the briefcase. If this
is so, then rule 4 is fired instead of rule 3, i.e. packages at
their goal location are taken out of the briefcase before the
briefcase is moved, despite the order and actions suggested
by these two rules.
As yet we cannot write rule 3 in L2Plan-style rules. This
limitation is partly due to the fact that we can only specify
individual packages using this KR and not sets of packages.
However, if we simplify the API policy’s rule 3 and switch
the order of the simplified rule 3 with rule 4, then we obtain
an equivalent policy we can test and compare with L2Plan’s
policy. The new rule 3 states: TAKE - OUT package that is at
its goal location, and the new rule 4 is: MOVE to goal location
of package in briefcase.
The L2Plan Policy
Figure 7 presents the L2Plan evolved policy against which
the API policy is compared. Note that the first four rules
are equivalent to the hand-coded control policy introduced in
(Pednault 1987), which is used to prune search for this
domain by the TLPlan system (Bacchus & Kabanza 2000).
To produce this policy L2Plan was run 15 times with identical parameter settings (Table 1) though each time the training examples were generated from 30 different randomly
generated problems and their solutions. The training problem complexity is however the same: 5 cities, 2 objects and
1 briefcase. Using different training data for different experiments gives some indication of the impact of different examples on the induced policies, though it should be noted that
the element of randomness used in solution construction will
also have some influence.
Three of the 15 policies solve all test problems presented
(i.e. problems different from the ones used for training), and
the policy in Fig. 7 was selected from one of these three.
Note that though additional domain knowledge other than the
standard STRIPS description may be used for inducing a policy,
Figure 8: Number of optimal plans produced by a policy

Figure 9: Average number of extra steps in suboptimal plans
none was used during the induction of briefcase policies. Furthermore, little system parameter tuning has been done at this
stage, and the settings in Table 1 appear to provide reasonable
policies for evolving both briefcase and blocksworld policies
(to be discussed briefly later).
Results
Both the API and L2Plan policies are run on the same 400 test
problems with 1 briefcase: 100 problems each with 2 objects
and 5 cities, 2 objects and 10 cities, 4 objects and 5 cities, and
4 objects and 10 cities. These test problems all contain a goal
location for the briefcase.
Each policy solves all 400 problems. Figure 8 however depicts the number of problems that a policy manages to solve
optimally, i.e. where the plan produced by the policy is no
longer than a known optimal plan. Figure 9 shows the average number of extra steps produced per plan by each policy
for the problems that were solved suboptimally (i.e. the total number of extra steps over all 400 solutions is divided by only
the number of suboptimally solved problems). In both respects
the L2Plan policy considerably outperforms the API policy
– it finds more optimal solutions for problems and generates
shorter plans than the API policy when a suboptimal solution
is found.
These results are a consequence of the rule order in the
respective policies. The API policy moves the briefcase away
from its current location without first checking whether an
object inside it might be deposited in the current location.
L2Plan uses several of the crossover and mutation operations
to optimise rule order so that the evolved policy does the most
it can at the current briefcase location – picking up misplaced
objects and depositing those whose goal is the current location
– before the briefcase is moved.
The API policy also exhibits an apparent dependency on
the goal location of the briefcase with several rules checking
its location before an action may be taken. To confirm this
dependency both policies are run on a new suite of 400 test
problems, with the same complexity as the previous suite but
without a goal location for the briefcase. Table 2 gives the
results achieved by each policy – it shows the total number of
problems solved for each problem type, with the number of
problems solved optimally (out of the total given) in brackets.
Table 2: Performance of briefcase policies on problems without a goal location for the briefcase (number of optimal plans
found in brackets)

          Problem size [objects-cities]
          [2-5]      [2-10]     [4-5]      [4-10]
L2Plan    100 (93)   100 (94)   100 (72)   100 (74)
API       10 (10)    4 (4)      13 (11)    4 (4)
The L2Plan policy again solves all 400 problems with a high
proportion of them solved optimally.
The performance of the API policy on this suite of problems is however quite different – only a small number of
problems are solved, though most of these are solved optimally. This behaviour is a direct consequence of the requirement placed on two of its MOVE rules that the briefcase
should be at its goal location before it may be moved. If the
briefcase is not at its goal location and no other action can
be taken, then rule 5 in this policy moves the briefcase to its
goal location and other actions then become possible. The
type of problems that this policy has a chance of solving are
those where the briefcase starts out by being in the same location as one or more of the misplaced packages. The policy
dictates that the misplaced packages are put in the briefcase
(rule 1), the briefcase is moved to the goal location of one of
the packages (rule 3), and the package deposited (rule 4). The
policy again dictates that any misplaced packages are picked
up from this new location, and again the briefcase is moved to
the goal location of a package inside it. However, if the briefcase ends up at some location empty after having delivered a
misplaced package, and there are still misplaced packages in
other locations then no further action will be possible (since
there is no goal location in the problem to which the briefcase
can be taken by rule 5).
The L2Plan policy has evolved such that the briefcase is
moved to its goal location only when all objects have been
deposited at their own goal locations (rule 5), and no other
rule is dependent on the location of the briefcase.
Conclusions and Future Work
This work suggests that EC is a viable approach for learning
generalised policies, and highlights both the limitations and
strengths of the current implementation.
IF-THEN rules are a highly comprehensible but also a simplistic KR. As discussed in a previous section, they currently
cannot capture knowledge that concerns a group of objects,
though this may be resolved by the addition of existential
and universal quantifiers. Even so, it is doubtful that using
this KR L2Plan could evolve policies that include recursive
concepts. In experiments on the Blocksworld domain, for instance, efficient and effective policies have been evolved but
only by adding similar support predicates to those used by
Khardon (1999) – the concept of a well-placed block is added
to the domain description.
What L2Plan currently lacks in KR expressiveness, however, it compensates for by optimising rule order in policies.
An iterative rule learning strategy is highly dependent on the
training data, which is often biased towards a few actions that
occur frequently in plan solving. Since criteria for defining
a ‘best’ rule often concern the number of training examples
covered, it is therefore quite likely that the first rules added
to any policy dictate the most frequent action found in examples. However, the most frequent actions need not, indeed
should not, always be performed first if the aim is an efficient
solution. Several crossover and mutation operators in L2Plan
essentially optimise this aspect of the policy.
This is early-stage work on utilising EC for generalised
policy induction and our experiments suggest several avenues
for investigation. As indicated the KR is a major theme, and
exploring how far we can push a comprehensible though simplistic language, i.e. which domains and which specific features of these domains require a more expressive language,
will be highly informative. Description logics and taxonomic
syntax are certainly more expressive (at some cost to comprehensibility), and well-worth investigating. It is interesting
to note, though, that Fern, Yoon, & Givan (2006) cite a
limitation in their KR as a possible reason for their weak
policies in the Logistics and Freecell domains.
Not explored in this work is L2Plan’s potential for also optimising individual rules within a policy. (Khardon 1999),
(Martin & Geffner 2004) and (Fern, Yoon, & Givan 2006)
all impose limits on the size of rules that may be constructed
(as otherwise the search would be prohibitive), thereby restricting a search in the solution space of rules to prespecified
regions. One crossover and mutation operation on L2Plan
rules enables their size to vary, thereby allowing a search in a
much wider solution space.
A future improvement is expected from the implementation of typing. The current untyped system means that at least
some rules in some policies will be invalid (since predicates
can be created that contain variables of the wrong type), presenting lost opportunities for acting on training examples and
learning from the evaluation. Typing is therefore expected
to reduce the number of iterations necessary to evolve good
policies, and/or to present increased opportunities for learning better ones.
Furthermore, analysis of some experiment results also suggests that the current learning process is too highly selective.
For instance, only the very best individuals are inserted into
the following generation, restricting exploration perhaps too
soon in other regions of the search space. This is suggested
by the early convergence, and therefore termination of the
learning process, to policies that do not perform particularly
well on test problems. If the system were allowed to explore a
larger area for longer, then it may be possible to evolve better
policies.
With regard to improving system efficiency, one area of investigation will be the impact of training examples on the quality of the induced policies. A significant computational expense is incurred in the production of optimal plans from which to generate training examples. One approach, naturally, is the use of non-optimal planners to generate solutions from which to extract examples. The impact of suboptimal examples on induced policies will therefore also be explored, as empirical studies suggest that a noisy training environment is not necessarily detrimental to the learning process (Ramsey, Schultz, & Grefenstette 1990).
References
Aler, R.; Borrajo, D.; and Isasi, P. 2002. Using genetic programming to learn and improve control knowledge. Artificial Intelligence 141:29–56.
Baader, F.; Calvanese, D.; McGuinness, D.; Nardi, D.; and Patel-Schneider, P. 2003. The Description Logic Handbook: Theory, Implementation, and Applications. Cambridge University Press.
Bacchus, F., and Kabanza, F. 2000. Using temporal logics to express search control knowledge for planning. Artificial Intelligence
116:123–191.
Bertsekas, D. P., and Tsitsiklis, J. N. 1996. Neuro-Dynamic Programming. Athena Scientific.
Fern, A.; Yoon, S.; and Givan, R. 2006. Approximate policy iteration with a policy language bias: Solving relational Markov decision processes. Journal of Artificial Intelligence Research 25:75–118.
Hoffmann, J., and Nebel, B. 2001. The FF planning system: Fast
plan generation through heuristic search. Journal of Artificial Intelligence Research 14:263–302.
Khardon, R. 1999. Learning action strategies for planning domains. Artificial Intelligence 113:125–148.
Koza, J. R. 1992. Genetic Programming: On the Programming
of Computers by Means of Natural Selection. Bradford Book, The
MIT Press.
Leckie, C., and Zukerman, I. 1998. Inductive learning of search
control rules for planning. Artificial Intelligence 101:63–98.
Martin, M., and Geffner, H. 2004. Learning generalized policies
from planning examples using concept languages. Applied Intelligence 20:9–19.
Miller, B. L., and Goldberg, D. E. 1995. Genetic algorithms,
tournament selection, and the effects of noise. Technical Report
95006, Department of General Engineering, University of Illinois
at Urbana-Champaign, Urbana, IL.
Pednault, E. 1987. Toward a Mathematical Theory of Plan Synthesis. PhD thesis, Stanford University, USA.
Ramsey, C. L.; Schultz, A. C.; and Grefenstette, J. J. 1990.
Simulation-assisted learning by competition: Effects of noise differences between training model and target environment. In Proc.
7th International Conference on Machine Learning, 211–215.
Spector, L. 1994. Genetic programming and AI planning systems. In Proc. 12th National Conference on Artificial Intelligence (AAAI-94), 1329–1334.
Tesauro, G., and Galperin, G. 1996. On-line policy improvement using Monte-Carlo search. In Advances in Neural Information Processing Systems 9.
Assimilating Planning Domain Knowledge from Other Agents
Tim Grant
Netherlands Defence Academy
P.O. Box 90.002
4800 PA Breda, Netherlands
tj.grant@nlda.nl / tgrant@cs.up.ac.za
Abstract

Mainstream research in planning assumes that input information is complete and correct. There are branches of research into plan generation with incomplete planning problems and with incomplete domain models. Approaches include gaining knowledge aimed at making the input information complete or building robust planners that can generate plans despite the incompleteness of the input. This paper addresses planning with complete and correct input information, but where the domain models are distributed over multiple agents. The emphasis is on domain model acquisition, i.e. the first approach. The research reported here adopts the view that the agents must share knowledge if planning is to succeed. This implies that a recipient must be able to assimilate the shared knowledge with its own. An algorithm for inducing domain models from example domain states is presented. The paper shows how the algorithm can be applied to knowledge assimilation and discusses the choice of representation for knowledge sharing. The algorithm has been implemented and applied successfully to eight domains. For knowledge assimilation it has been applied to date just to the blocks world.

Introduction

The plan generation process takes as its input a planning problem, consisting of initial and goal states, and a domain model, typically consisting of planning operators. Its output is a sequence of actions – a plan – that will, on execution, transform the initial state into the goal state.

To locate the research reported here, we place the planning process in its wider context. In Figure 1 the Planning process is central. Its output – a plan – is ingested by the Controlling process. In executing the plan, the Controlling process issues commands to the Process Under Control (PUC), and receives sensory information back.

The Planning process itself has inputs: the domain model and the initial and goal states. The usual assumption is that these inputs come directly from the Controlling process. However, we take the view that each input is developed by an intervening process: initial states result from State Estimation 1, goal states from Goal Setting, and domain models from Modelling. It is these three processes that receive feedback from the Controlling process in the form of the observed sensory information. State Estimation uses the feedback to identify the PUC's current state. Goal Setting determines whether the current goal state has been achieved, can be maintained, or must be replaced by another goal state. Modelling assesses whether the domain model remains a complete and correct description. If not, it uses the feedback to modify or extend the domain model. This paper centres on the Modelling process.

[Figure 1 shows the Planning process at the centre, fed with the current state by State Estimation, the goal state by Goal Setting, and the domain model by Modelling; the resulting plan passes to the Controlling process, which commands the PUC.]
Figure 1. Planning in context.
Mainstream research in planning assumes that the input information is complete and correct 2. In practical applications, however, information about the domain model, the planning problem, or both may be incomplete and/or incorrect. In the literature there are two approaches to planning with incomplete and/or incorrect input information (Garland & Lesh, 2002):
• Gain better information, either during plan generation or during plan execution. This may be done by using sensors embedded in the PUC to acquire information, by consulting an oracle (e.g. an expert), or by trial-and-error learning from performing experiments in the domain. The acquired information may be used in state estimation, in goal setting, and/or in modelling.
• Build robust planners that can generate plans that succeed regardless of the incompleteness and/or incorrectness of the input information. Conformant planning (Goldman & Boddy, 1996) is planning with incomplete knowledge about the initial state and/or the effects of actions. Model-lite planning (Kambhampati, 2007) is planning with an incomplete or evolving domain model. Erroneous planning (Grant, 2001) has the more limited aim of characterizing the types of erroneous plans generated if the planner is not robust (the error phenotypes), based on concepts drawn from the literature on human error, and trying to understand the causes for the observed errors (the error genotypes). Knowledge of the error phenotypes and genotypes could then be used for plan repair (Krogt, 2005).

Copyright © 2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

1 By convention, the goal state is usually a formula describing a set of (goal) states.
2 The term is borrowed from the process control literature.
By contrast, this paper is concerned about planning with
complete and correct input information, but where that
information is distributed across multiple agents. In
particular, it is concerned with distributed domain
models. While the domain model is complete for some
set of agents, each individual agent’s domain model is
(initially) incomplete.
This paper adopts the view that, where knowledge about
the planning domain is distributed over multiple agents,
the agents must share that knowledge if planning is to
succeed. To do so, they must be interoperable. The
source of the knowledge and its recipient must adopt a
common knowledge representation, as well as
coordinating their knowledge-sharing actions. Moreover,
the recipient must be capable of assimilating the
knowledge gained (Lefkowitz & Lesser, 1988) into other
knowledge it may already have. Assimilation of another
agent’s domain model is an extension of the Modelling
process. This paper focuses on the knowledge
assimilation capability and choosing a suitable
representation for knowledge sharing.
The subject matter in this paper touches on several
theoretical areas. Firstly, it is based on the application of
machine learning to planning, because knowledge
assimilation is a learning process. More specifically, it is
concerned with applying machine learning techniques to
the acquisition of planning operators. Secondly, because
the recipient’s domain model is evolving, it touches on
model-lite and erroneous planning. Thirdly, it is based on
communication theory and, in particular, on information
or knowledge sharing concepts drawn from management
and organization theory.
The paper is divided into seven chapters. Chapter 2
describes the author’s algorithm for modelling planning
domains by acquiring planning operators from example
domain states. Chapter 3 introduces knowledge sharing
based on the Shannon (1948) model of communication.
Chapter 4 describes the assimilation of planning domain
knowledge, and chooses a suitable representation for
sharing that knowledge between agents. Chapter 5
describes two simple worked examples. Chapter 6
surveys related research. Finally, Chapter 7 draws
conclusions, identifying the key contributions of this
paper, its limitations, and where further research is
needed.
Modelling Planning Domains
The author’s algorithm for modelling planning domains
by acquiring planning operators from example domain
states is known as Planning Operator Induction (POI)
(Grant, 1996). As the name indicates, POI employs
inductive learning from examples. More specifically, it
embeds Mitchell’s (1982) version space and candidate
elimination algorithm, taking selected domain states as
input examples and inducing STRIPS-style planning
operators.
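Since POI embeds Mitchell's (1982) version-space and candidate-elimination machinery, the core move — minimally generalising the maximally specific hypothesis to cover each new positive example — can be illustrated in miniature. The sketch below uses plain attribute tuples with "?" as the "any value" wildcard; this is a simplification of Mitchell's algorithm for illustration, not POI's relational representation:

```python
# Minimal sketch of the specific-to-general half of Mitchell's (1982)
# candidate elimination, over conjunctive attribute tuples ("?" = any value).

def generalise(hypothesis, example):
    """Minimally generalise a hypothesis so that it covers the example:
    keep attributes that agree, wildcard those that differ."""
    return tuple(h if h == e else "?" for h, e in zip(hypothesis, example))

def specific_boundary(positives):
    """Fold positive examples into the maximally specific hypothesis
    that still covers them all."""
    hyp = positives[0]
    for example in positives[1:]:
        hyp = generalise(hyp, example)
    return hyp
```

For instance, from the positive examples ("red", "round") and ("red", "square"), the specific boundary generalises to ("red", "?"): colour is retained, shape is wildcarded.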
The POI algorithm has been implemented and applied
successfully to eight domains (Grant, 1996), including
the blocks world, the dining philosophers problem, and a
model of a real-world spacecraft payload based on a
chemical laboratory instrument. For knowledge
assimilation it has been applied to date just to the blocks
world.
The ontology employed in POI separates the domain representation into static and dynamic parts. The static part of the POI ontology represents invariant domain entities in terms of object-classes and -instances, of inter-object relationships, and of inter-relationship constraints. By convention, relationships and constraints are binary 3. For example, the blocks world consists of Hand, Block, and Table object-classes. The holding relationship links an instance of the Hand (object-)class to an instance of the Block class, and the onTable relationship links a Block instance to a Table instance. The holding and onTable relationships are constrained in that no Block instance may be both held and on the table simultaneously. Such constraints are known in the planning literature as domain axioms or invariants, and in the database literature as cardinality and exclusion constraints (Nijssen & Halpin, 1989). The static part of the ontology (less the exclusion constraints) may be depicted using Chen's (1976) Entity-Relationship Diagramming (ERD) notation 4.

The dynamic part of the POI ontology represents domain entity behaviour in terms of states, transitions, and planning operators. Planning operators are reformulations of classes of domain transitions. Instantiated relationships synchronise the states of objects. For example, the holding hand1 block2 relationship synchronises the states of the objects hand1 and block2: hand1 must be holding block2 and block2 must be held by hand1 simultaneously. If hand1 ceases to be holding block2, then block2 must simultaneously cease being held by hand1. Transitions combine synchronised changes in relationships. For example, the cessation of the
holding hand1 block2 relationship may be combined with the advent of the onTable block2 table1 relationship. In terms of Allen's (1983) temporal logic, we would say that the relationships meet. Note that they meet because the domain constraint from the previous paragraph forbids them from overlapping. Grant (1995) says that the transition pivots around the (instantiated) binary domain constraint.

3 Higher-arity relationships and constraints can be reduced to binary relationships and constraints by changing how the object- and relationship-classes, respectively, are modelled. Details are given in Grant (1996).
4 The ERD notation is limited to depicting constraints between two instances of the same relationship, i.e. cardinality constraints. It cannot depict constraints between instances of two different relationships, i.e. exclusion constraints. The POI ontology and algorithm are not so limited.

The POI ontology is similar to McCluskey and Porteus' (1997) object-centred representation for the specification of planning domains. The key differences are that, in POI, the relationships and constraints are strictly binary. Moreover, the constraints hold only between relationships. In addition, objects, relationships, constraints, states, and transitions all have classes, i.e. sorts in McCluskey and Porteus' terminology.

The POI algorithm has two parts:
• Part 1: Acquisition. The purpose of the first part of POI is to acquire a static, object-oriented model of the domain from example domain states. Unlike other algorithms for acquiring planning operators, POI does not require that the example domain states form a valid sequence, plan-segment, or plan. However, the examples may have to be carefully chosen. Part 1 subdivides into three steps:
• Step 1.1: Acquire domain state description(s).
• Step 1.2: Recognise the objects and relationships in the state description(s).
• Step 1.3: Compile cardinality and exclusion constraints from the objects and relationships. The constraints can be generated exhaustively by constructing all possible pairs of relationships that share an object. For example, pairing the relationship holding ?Hand1 ?Block1 with holding ?Hand1 ?Block2 expresses the domain constraint that a hand cannot hold two (or more) blocks simultaneously 5. By default, a constraint is assumed to hold if no counterexample can be found among the acquired domain state descriptions. Thus, if an agent observed a domain state in which a hand was indeed holding two blocks, then this constraint would no longer hold.
• Part 2: Induction. The purpose of the second part of POI is to induce a dynamic model of domain behaviour from the static, object-oriented domain model. Domain behaviour is modelled using a state-transition network from which planning operators can be extracted. Part 2 subdivides into six steps:
• Step 2.1: Generate the description language for the domain. The description language is the set of all relationships between object-instances that satisfy the cardinality and exclusion constraints.
• Step 2.2: Construct the version space for the description language, using the cardinality and exclusion constraints to eliminate invalid candidate nodes. The version space is a partial lattice of valid nodes, with each node being described in terms of relationships between the domain object-instances.
• Step 2.3: Extract the domain states from the version space. The domain states are the lattice nodes in the maximally specific boundary of the version space.
• Step 2.4: Using the Single Actor / Single State-Change (SA/SSC) meta-heuristic, determine the domain transitions between the domain states. The SA/SSC heuristic is that a single object (the actor) initiates the transition, undergoing a change in just one of its relationships. The actor is at the root of a causal hierarchy of state-changes in the other participating objects. For example, in the blocks world when a robot hand picks up a block from the table, the hand is the actor, making true its holding relationship with the block being picked up. The hand's action causes the block both to act on itself so that it is no longer clear and to act on the table, breaking the onTable relationship.
• Step 2.5: Generalise the domain transitions as transition-classes.
• Step 2.6: Reformat the transition-classes as planning operators.
Depending on how POI is to be used, Part 1 may be optional. If an agent observes an existing domain and uses POI to gain knowledge about how to plan actions in that domain, then Part 1 is essential. By contrast, if an ERD or equivalent static model of a domain (which may not yet exist) is available, then modelling can proceed directly to Part 2. In knowledge assimilation, one agent (the source) performs Part 1 and another (the recipient) performs Part 2.
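The exhaustive pairing and counterexample test of Step 1.3 can be sketched as follows. This is an illustrative reconstruction, not the POI implementation: relationships are simplified to ground (name, arg1, arg2) tuples rather than POI's variablised constraints, and a domain state is a frozenset of such tuples.

```python
from itertools import combinations

def shares_object(r1, r2):
    """True when two relationship instances mention a common (non-nil) object."""
    return bool({a for a in r1[1:] if a != "nil"} &
                {a for a in r2[1:] if a != "nil"})

def candidate_constraints(states):
    """Exhaustively pair relationships that share an object (Step 1.3).

    Each pair is a candidate exclusion constraint: 'these two relationships
    must not hold simultaneously'."""
    rels = {rel for state in states for rel in state}
    return {frozenset(pair)
            for pair in combinations(sorted(rels), 2)
            if shares_object(*pair)}

def surviving_constraints(states):
    """A candidate holds by default unless some observed state contains both
    members of the pair, i.e. is a counterexample."""
    return {c for c in candidate_constraints(states)
            if not any(c <= state for state in states)}
```

For instance, given one observed state in which hand1 holds block1 and another in which block1 is on table1, the pair survives, recovering a "block cannot be both held and on a table" constraint; a single state containing both members of a pair refutes that candidate.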
Knowledge Sharing
Information and knowledge sharing have been extensively studied in management and organization theory. For simplicity, we will take the terms "information" and "knowledge" as being interchangeable, pace Ackoff (1989). Information sharing is a dyadic exchange of
information between a source and a recipient (adapted
from Szulanski (1996), p.28). Sharing involves the dual
problem of “searching for (looking for and identifying)
and transferring (moving and incorporating) knowledge
across organizational subunits” (Hansen, 1999, p.83).
For the purposes of this paper, we will take knowledge
sharing as meaning knowledge transfer. Searching for or
discovery of other agents that have suitable
complementary knowledge about a domain is an area for
future research.
Shannon’s (1948) model of communication is useful for
thinking about knowledge sharing. In the Shannon
model, the source and recipient each operate within their
own organizational contexts. Information transfer begins
when the source generates a message. The message is
encoded into a form (a signal) in which it is transmitted
by means of a communications medium, such as
electromagnetic waves, telephone cables, optical fibres,
or a transportable electronic storage medium. Random
noise and systematic distortion may be added during
transmission. The recipient decodes the signal and assimilates the decoded message into its own store of knowledge.

5 By convention, the same instance of the Block object-class cannot be matched to the two different variables ?Block1 and ?Block2.
Figure 2. Linking source and recipient agents using the Shannon (1948) model.

For the purposes of this paper, we assume that the source and recipient are agents with an internal structure as shown in Figure 1. In general the agents should be able to exchange the outputs of their respective Planning, Controlling, State Estimation, Goal Setting, and Modelling processes, given suitable encoders and decoders (Figure 2). We concentrate here on the Modelling process, how the source's knowledge should be encoded, and what decoder the recipient needs to assimilate that knowledge. We neglect the issue of noise and distortion in this paper.

Assimilating Planning Domain Knowledge

Lefkowitz and Lesser (1988) discuss knowledge assimilation in the context of acquiring domain knowledge from human experts. Their implemented system, KnAc, was developed to assist experts in the construction of knowledge bases using a frame-like representation. Assimilated knowledge represented domain objects, relationships, and events. The main contribution of their research was in developing several generic techniques for matching sets of entities and collections of constraints. Research questions included:
• How does the expert's domain description correlate with the description contained in the knowledge base?
• How should the knowledge base be modified based on the expert's new information?
• What should be done when the expert's description differs from the existing one?
Despite the contextual differences, there are strong parallels between Lefkowitz and Lesser's (1988) work and assimilating planning domain knowledge. Assimilation of domain knowledge should be integrated with plan generation and execution. It should permit a variety of ways of learning, including learning-by-seeing (i.e. by observing the domain and inferring what actions are possible), learning-by-being-told (e.g. by domain experts or other agents), and learning-by-doing (i.e. by generating and executing plans). When knowledge is distributed over multiple agents, individual agents may need to combine different ways of learning. In particular, an agent may well need to combine knowledge it gained from its own observations of a domain with information it has gained by being told by another agent. As for Lefkowitz and Lesser, learning concerns domain objects, relationships, constraints, and events. Analogues of Lefkowitz and Lesser's research questions apply; here we are concerned with the planning analogue of their second question.

Considering the POI algorithm from the viewpoint of encoding and decoding, we see that there are three forms in which knowledge relating to the domain model could be exchanged:
• As cases. The source agent could transmit the domain states it has observed, i.e. the input information to Part 1 of the POI algorithm. The source agent would not have to process its observations before transferring them to the recipient. The recipient agent would then have to add the source's domain states to its own database of domain states, and perform Parts 1 and 2 of the POI algorithm to obtain a set of planning operators. Exchanging knowledge in this form is likely to be verbose for real-world domains, possibly with duplicated observations. More importantly, it would limit knowledge assimilation to learning-by-seeing. The only thing that knowledge sharing achieves is that the recipient can "see" both what it can itself observe and what the source has observed.
• As static domain models. The source agent could transmit its static domain model, i.e. the information as output by Part 1 and as input to Part 2. The source agent would have had to perform Part 1 before transmitting its static domain model to the recipient. The recipient agent would then have to add the source's objects, relationships, and constraints to its own database of objects, relationships, and constraints. Where the source and recipient agents disagree on whether a constraint holds, the constraint is assumed not to hold (because one of the agents will have seen a counterexample). The recipient retains its own list of object-instances and does not assimilate the source's object-instance list, because the recipient may not be able to execute plans on objects that it cannot see. Then the recipient would perform Part 2 of the POI algorithm to obtain a set of planning operators. Exchanging knowledge in this form is likely to be concise. Moreover, it would allow learning-by-seeing, learning-by-being-told, and their combination.
• As planning operators. The source agent could transmit its dynamic domain model, i.e. the information as output by Part 2. The recipient agent would then simply have to add the planning operators obtained from the source to its own planning operators. Exchanging knowledge in this form is still more concise, but assumes that (1) the source and the recipient agents' observations are sufficiently rich for both of them to be able to induce a set of planning operators, and that (2) their sets of planning operators are complementary. There is no way for additional planning operators to be induced by synergy.
In this research, the encoding-decoding schema has been determined by the researcher. Ideally, the source and recipient agents should themselves be able to negotiate a suitable encoding-decoding schema, depending on considerations such as privacy, security, and communications bandwidth. Further research is needed to provide agents with such a capability.
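The paper's assimilation rule for static domain models — union the shared relations, but drop any constraint on which the agents disagree, since disagreement implies one of them has seen a counterexample, while the recipient keeps its own object-instance list — can be sketched as follows. The dict layout and function name are illustrative assumptions, not taken from the implementation:

```python
def assimilate_static_model(recipient, source):
    """Merge a source agent's static domain model into the recipient's.

    Each model is a dict with 'instances' (a list), and 'relations' and
    'constraints' (sets). Relations are unioned; a constraint survives
    only if both agents agree it holds; the recipient keeps its own
    object-instance list, since it cannot act on objects it cannot see."""
    return {
        "instances": list(recipient["instances"]),
        "relations": recipient["relations"] | source["relations"],
        "constraints": recipient["constraints"] & source["constraints"],
    }
```

After the merge, the recipient would re-run POI Part 2 over the combined vocabulary, which is where synergistic knowledge (new candidate constraints and, ultimately, new operators) can arise.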
AND onTable ?Block1
nil
THEN INVALID
-- block cannot be held
and not on a table
NOTE: This constraint
does not hold because
the observed state is a
counterexample.
Worked Examples
Two worked examples should make the key issues clear.
The first example is the one-block world and the second
is taken from the three-blocks world. Because the oneblock world is simple, the first example is described in
more detail. The second example illustrates the need to
select example states carefully if the agents are to induce
a full set of planning operators.
Suppose two agents each observe a different state of a
one-block world (Slaney & Thiebaux, 2001), as
represented by Nilsson (1980) 6 . There are two possible
states7 : [[holding hand1 block1] [onTable
block1 nil] [onTable nil table1]] and
[[holding hand1 nil] [holding nil block1]
[onTable block1 table1]]. Let us suppose that
Agent1 is given the first state description and Agent2 the
second.
The following table depicts the static domain model that
would result from their performing Part 1 separately, i.e.
without knowledge sharing and assimilation:
Object classes
Object instances
Relations
Constraints
Agent1’s model
Agent2’s model
Hand, Block, Table
Hand, Block, Table
hand1, block1, table1
hand1, block1, table1
holding ?Hand ?Block
onTable ?Block nil
onTable nil ?Table
IF holding ?Hand1
?Block1
AND holding ?Hand1
?Block2
THEN INVALID
-- hand cannot hold
multiple blocks
holding ?Hand nil
holding nil ?Block
onTable ?Block ?Table
IF holding ?Hand1 nil
AND holding ?Hand2 nil
THEN INVALID
-- multiple hands cannot be
empty at same time
IF holding ?Hand1
?Block1
AND holding ?Hand2
?Block1
THEN INVALID
-- block cannot be held
by multiple hands
IF holding ?Hand1
?Block1
IF onTable ?Block1
nil
AND onTable ?Block2
nil
THEN INVALID
-- multiple blocks cannot
be off the table
IF onTable nil ?Table1
AND onTable nil
?Table2
THEN INVALID
-- multiple tables cannot
be clear at same time
and on a table
NOTE: This constraint does
not hold because the
observed state is a
counterexample.
IF onTable ?Block1
?Table1
AND onTable ?Block1
?Table2
THEN INVALID
-- block cannot be on
multiple tables
IF onTable ?Block1
?Table1
AND onTable ?Block2
?Table1
THEN INVALID
-- table cannot hold
multiple blocks
Neither of the agents would be able to induce any
planning operators, because POI Part 2 would simply
result in the induction of a single state, namely the state
each agent had observed originally. There needs to be a
minimum of two states for the SA/SSC heuristic to find
any transitions.
Now suppose that the agents share their domain
knowledge. Since neither of them can induce planning
operators separately, exchanging data in the form of
planning operators is not feasible. However, they can
exchange knowledge in the form either of cases or of
their static domain models. For the one-block world it is
simpler for the agents to exchange cases, but this does
not apply to complex, real-world examples.
Sharing their domain models enables the agents to create
synergistic knowledge. Firstly, Agent1 learns from
Agent2 that blocks can be on tables, and Agent2 learns
from Agent1 that hands can hold blocks. Secondly,
additional constraints can be identified, as shown in the
following table:
IF holding nil ?Block1
AND holding nil
?Block2
THEN INVALID
-- multiple blocks cannot be
not held
IF holding nil ?Block1
AND onTable ?Block1
?Table1
THEN INVALID
-- block cannot be not held
and on a table
Synergistic knowledge
Relations
6
Distinguishing three object-classes (Hand, Block, Table) and
yielding four operators (pickup, putdown, stack, unstack). See
Grant et al, 1994.
7
A third state would be observed in an orbiting spacecraft:
[[holding hand1 nil] [holding nil block1] [onTable
block1 nil] [onTable nil table1]]. During development of
the POI algorithm the three states were indeed induced, resulting in the
induction of a set of six operators (pickup, putdown,
floatoff, floaton, letgo, capture). The author
observed that he had failed to represent the action of gravity. To do so
while retaining the Nilsson (1980) domain representation requires a
triple constraint, stating in effect that a block must be either held by a
hand or supported by a table or by another block. This can be solved by
extending the POI ontology, either by allowing constraints of arity
higher than two or by introducing an inheritance hierarchy of objectclasses. The author adopted the latter solution, because this has the
synergistic consequence of reducing the complexity of the version
space, leading to savings in induction time and memory requirements
(Grant, 1996).
Constraints
holding ?Hand ?Block
holding ?Hand nil
holding nil ?Block
onTable ?Block ?Table
onTable ?Block nil
onTable nil ?Table
IF holding ?Hand1 nil
AND holding ?Hand1 ?Block1
THEN INVALID
-- hand cannot be both empty and holding a block
IF holding nil ?Block1
AND holding ?Hand1 ?Block1
THEN INVALID
-- hand cannot be both held by a hand and not held
IF holding ?Hand1 ?Block1
AND onTable ?Block1 ?Table1
THEN INVALID
-- block cannot be both held and on a table
IF onTable nil ?Table1
48
learning, neural network, inductive logic programming,
and reinforcement learning. They observed that early
research emphasised learning search control heuristics to
speed up planning. This has fallen out of favour as faster
planners have become available. There is now a trend
towards learning or refining sets of planning operators to
enable a planner to become effective with an incomplete
domain model or in the presence of uncertainty.
“Programming by demonstration” can be applied so that
the user of an interactive planner could create plans for
example problems that the learning system would then
parse to learn aspects peculiar to the user.
In terms of Zimmerman and Kambhampati’s (2003)
survey, this paper applies Mitchell’s (1982) inductive
version space and candidate elimination algorithm to
planning. The POI algorithm could be used before
planning, during planning, or during execution. It centres
on the learning of domain models in the form of planning
operators. It exhibits an element of “programming by
demonstration” in that the user shows POI example
domain states, rather than example plans or execution
traces.
In his 2006 lectures on learning and planning at the
Machine Learning Summer School, Kambhampati
distinguished three applications of learning to planning:
learning search control rules and heuristics, learning
domain models, and learning strategies. Research in
learning domain models could be classified along three
dimensions: the availability of information about
intermediate states, the availability of partial action
models, and interactive learning in the presence of
humans. POI does not need information about
intermediate states nor partial action models, and it does
not require the presence of humans. By comparison,
other operator learning algorithms require as input:.
• Background domain knowledge: Porter & Kibler
(1986), Shen (1994), Levine & DeJong (2006).
• Partial domain model (i.e. operator refinement, rather
than ab initio operator learning): Gil (1992),
DesJardins (1994), McCluskey et al (2002).
• Example plans or traces: Oates and Cohen (1996),
Wang (1996), Yong et al (2005).
• Input from human experts: McCluskey et al (2002).
POI can accept a static domain model from a human
expert (e.g. for a domain that does not yet exist)
instead of observing domain states, but this is not
applicable to assimilating domain knowledge
distributed over multiple agents.
POI is closest to Mukherji and Schubert (2005) in that it
takes state descriptions as input and discovers planning
invariants. The differences are that POI also discovers
objects and relationships and uses the information it has
discovered to induce planning operators. Like
McCluskey and his collaborators (McCluskey & Porteus,
1997; McCluskey et al, 2002), POI models domains in
terms of object-classes (sorts, in McCluskey’s
terminology), relationships, and constraints.
AND onTable ?Block1 ?Table1
THEN INVALID
-- table cannot be supporting both a block and nothing
IF onTable ?Block1 nil
AND onTable ?Block1 ?Table1
THEN INVALID
-- block cannot be both off and supported by a table
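Rules of this form can be checked mechanically against a candidate state. The sketch below is our own Python illustration of the idea, not POI's actual implementation; the encoding of facts as tuples and the rule representation are assumptions:

```python
# Hypothetical sketch of POI-style pairwise invariant checking.
# A state is a set of ground relationship tuples, e.g. ("onTable", "A", "T1");
# a rule is a pair of patterns that, when both match, mark the state INVALID.

def match(pattern, fact, binding):
    """Unify one pattern against one ground fact, extending binding.
    Variables start with '?'; anything else must match literally."""
    if len(pattern) != len(fact):
        return False
    for p, v in zip(pattern, fact):
        if p.startswith("?"):
            if binding.setdefault(p, v) != v:
                return False
        elif p != v:
            return False
    return True

def violates(state, rules):
    """True if any pair of distinct facts matches an IF/AND/THEN-INVALID rule."""
    facts = list(state)
    for pat1, pat2 in rules:
        for f1 in facts:
            for f2 in facts:
                if f1 is f2:
                    continue
                binding = {}  # fresh bindings for each candidate pair
                if match(pat1, f1, binding) and match(pat2, f2, binding):
                    return True
    return False

# "block cannot be both off and supported by a table"
RULES = [(("onTable", "?Block1", "nil"),
          ("onTable", "?Block1", "?Table1"))]
```

For example, a state containing both `("onTable", "A", "nil")` and `("onTable", "A", "T1")` violates the rule, while a state in which every block has a single support does not.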
The synergistic knowledge, together with the additional
constraints, enables the agents to induce the pickup
and putdown planning operators. They do not have
enough knowledge to induce the stack and unstack
operators because stacks of blocks and the on
relationship between blocks do not exist in the one-block world.
The three-blocks world has 22 states, falling into five
state-classes (Grant et al, 1994). Experiments with the
implemented POI algorithm, adapted for knowledge
assimilation, showed that it is not necessary for the
agents to observe all 22 states (Grant, 1996). Just two,
judiciously-chosen, example states sufficed.⁸ In one state
the hand must be empty, and in the other it must be
holding a block. One state must show a stack of at least
two blocks, and one state must show two or more blocks
on the table. Inspection shows that there are four pairings
of the five state-classes that can meet these requirements.
Two can be rejected on the grounds that they are
adjacent, i.e. that they are separated by the application of
just one operator. Successful knowledge assimilation has
been demonstrated for the remaining two state-pairs: for
all three blocks on the table paired with the state in
which one block is held and the other two are stacked,
and for a stack of three blocks paired with the state in
which one block is held and the other two are on the
table. Moreover, the induced set of planning operators
can be used to generate and successfully execute a plan
that passes through at least one novel state, i.e. a state
that the agents had not previously observed.
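The figure of 22 states can be confirmed by brute-force enumeration. The sketch below is our own check (not code from the POI work): it counts arrangements of labelled blocks into towers, plus the states in which the hand holds one block.

```python
# Count the states of the one-hand, one-table, n-blocks world.
from itertools import permutations, product

def tower_configs(blocks):
    """Every arrangement of the given labelled blocks into towers.
    A configuration is a frozenset of towers; a tower is a tuple
    listing its blocks from bottom to top."""
    if not blocks:
        return {frozenset()}
    configs = set()
    n = len(blocks)
    for perm in permutations(blocks):
        # cut each permutation into contiguous segments (one tower each)
        for cuts in product([False, True], repeat=n - 1):
            towers, start = [], 0
            for i, cut in enumerate(cuts):
                if cut:
                    towers.append(tuple(perm[start:i + 1]))
                    start = i + 1
            towers.append(tuple(perm[start:]))
            configs.add(frozenset(towers))  # tower order is irrelevant
    return configs

def count_states(blocks):
    """Hand-empty states plus, for each block, the states holding it."""
    n_empty_hand = len(tower_configs(blocks))
    n_holding = sum(len(tower_configs([b for b in blocks if b != h]))
                    for h in blocks)
    return n_empty_hand + n_holding

print(count_states(["A", "B", "C"]))  # 13 + 3*3 = 22
```

The three-blocks world gives 13 hand-empty configurations plus 9 holding-a-block configurations, matching the 22 states quoted above; the same function gives 2 states for the one-block world.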
It is not known whether two (judiciously-chosen)
example states suffice in all domains for the induction of
a full set of planning operators. Hand simulations and
experiments have only been done for the (one-hand, one-table, and) one- and three-blocks worlds. More research
is needed, e.g. by applying knowledge assimilation using
POI to the International Planning Competition benchmark
domains and to real-world domains where planning
knowledge is distributed geographically or
organizationally.
Related Work
In 2003, Zimmerman and Kambhampati surveyed the
research on applying machine learning to planning. They
identified three opportunities for learning: before
planning, during planning, and during execution.
Learning techniques applied fell into two groups:
inductive versus deductive (or analytical) learning.
Inductive techniques used included decision tree
⁸ Introspection suggests that there may be a single state in the two-hands, four-block world that could provide all the information needed
to induce all four operators, but then the knowledge could not be
distributed over multiple agents.
• Extending the POI ontology to model inheritance and
aggregation relationships, with the eventual aim of
using the Unified Modeling Language (UML) as a
representation for the static, object-oriented and
dynamic, behavioural domain models in the POI
algorithm.
• Developing an integrated planning environment that
incorporates domain modelling, plan generation, plan
execution, state estimation, and goal setting to act on
real and simulated domains.
• Extending agent capability to (1) negotiate mutually-acceptable encoding-decoding schemes, and (2)
discover agents that have complementary knowledge.
• Investigating the application of knowledge assimilation
using POI to real-world domains where planning
knowledge is distributed geographically or
organizationally. Example domains include air traffic
control and military Command & Control (Grant,
2006).
Conclusions
This paper has addressed the topic of planning with a
domain model that is complete and correct but
distributed across multiple agents. The paper takes the
view that the agents must share their knowledge if
planning is to succeed. The Planning Operator Induction
(POI) algorithm (Grant, 1996) has been introduced as a
means of acquiring planning operators from carefully-chosen examples of domain states. Unlike other
algorithms for acquiring planning operators (Porter &
Kibler, 1986; Gil, 1992; Shen, 1994; DesJardins, 1994;
Wang, 1996; Oates & Cohen, 1996; McCluskey et al,
2002; Yang et al, 2005; Mukherji & Schubert, 2005;
Levine & DeJong, 2006), the example domain
states do not need to form a valid sequence, plan-segment, or plan, nor do preceding or succeeding
transitions have to be given. When agents share their
partial knowledge of the domain model, the two parts of
the POI algorithm can be divided between the source and
recipient in the knowledge-sharing process. The agents
exchange the static, object-oriented domain model
resulting from Part 1 of the POI algorithm. This enables
the recipient to identify synergies between the shared
knowledge and knowledge it already has and to perform
the induction, i.e. Part 2 of the algorithm.
This paper makes several contributions. Its primary
contribution is in showing how planning domain
knowledge that is distributed across multiple agents may
be assimilated by sharing partial domain models.
Secondary contributions include:
• The POI domain-modelling algorithm is presented that
acquires planning operators from example domain
states. The example domain states do not need to form
a valid sequence, plan-segment, or plan, nor do
preceding or succeeding transitions have to be given.
• The ontology used in the POI algorithm extends
McCluskey and Porteus’ (1997) object-centred
representation. Relationships and constraints are
strictly binary. Constraints are between pairs of
relationships, rather than domain-level axioms. Hence,
both relationships and constraints are associated with
(classes of) domain objects.
A key limitation of the research reported here is that,
while knowledge assimilation using the POI algorithm
has been implemented, it has only been tested for the
(one-hand, one-table, and) one- and three-blocks worlds.
Future research should include:
• Applying POI-based knowledge assimilation to a wider
variety of planning domains, e.g. International
Planning Competition benchmark domains. One
research question to be addressed is whether two
(judiciously-chosen) example states suffice in all
domains for the induction of a full set of planning
operators.
• Elucidating the conceptual links between the POI
algorithm and plan generation using planning graphs.
• Applying the POI algorithm to sense-making, i.e. the
modelling of novel situations (Weick, 1995). An
approach has been outlined in Grant (2005).
References
Ackoff, R. 1989. From Data to Wisdom. Journal of
Applied Systems Analysis, 16, 3-9.
Allen, J.F. 1983. Maintaining Knowledge about
Temporal Intervals. Communications of the ACM, 26,
11, 832-843.
Chen, P.P-S. 1976. The Entity-Relationship Model:
Towards a unified view of data. ACM Transactions on
Database Systems, 1, 9-36.
DesJardins, M. 1994. Knowledge Development Methods
for Planning Systems. Proceedings, AAAI-94 Fall
Symposium series, Planning and Learning: On to real
applications. New Orleans, LA, USA.
Fikes, R., & Nilsson, N.J. 1971. STRIPS: A new
approach to the application of theorem proving to
problem solving. Artificial Intelligence Journal, 2, 189-208.
Garland, A., & Lesh, N. 2002. Plan Evaluation with
Incomplete Action Descriptions. TR2002-05, Mitsubishi
Electric Research Laboratories, Cambridge,
Massachusetts, USA.
Gil, Y. 1992. Acquiring Domain Knowledge for
Planning by Experimentation. PhD thesis, School of
Computer Science, Carnegie Mellon University,
Pittsburgh, PA, USA.
Goldman, R., & Boddy, M. 1996. Expressive Planning
and Explicit Knowledge. Proceedings, AIPS-96, 110-117, AAAI Press.
Grant, T.J. 1995. Generating Plans from a Domain
Model. Proceedings, 14th workshop of the UK Planning
and Scheduling Special Interest Group, 22-23 November
1995, University of Essex, Colchester, UK.
Grant, T.J. 1996. Inductive Learning of Knowledge-Based Planning Operators. PhD thesis, University of
Maastricht, The Netherlands.
Grant, T.J. 2001. Towards a Taxonomy of Erroneous
Planning. Proceedings, 20th workshop of the UK
Planning and Scheduling Special Interest Group, 13-14
December 2001, University of Edinburgh, Scotland.
Grant, T.J. 2005. Integrating Sensemaking and Response
using Planning Operator Induction. In Van de Walle, B.
& Carlé, B. (eds.), Proceedings, 2nd International
Conference on Information Systems for Crisis Response
and Management (ISCRAM), Royal Flemish Academy
of Science and the Arts, Brussels, Belgium, 18-20 April
2005. SCK.CEN and University of Tilburg, 89-96.
Grant, T.J. 2006. Measuring the Potential Benefits of
NCW: 9/11 as case study. In Proceedings, 11th
International Command & Control Research &
Technology Symposium (ICCRTS06), Cambridge, UK,
paper I-103.
Grant, T.J., Herik, H.J. van den, & Hudson, P.T.W.
1994. Which Blocks World is the Blocks World?
Proceedings, 13th workshop of the UK Planning and
Scheduling Special Interest Group, University of
Strathclyde, Glasgow, Scotland.
Hansen, M. T. 1999. The Search-Transfer Problem: The
role of weak ties in sharing knowledge across
organization subunits. Administrative Science Quarterly,
44 (1), 82-111.
Kambhampati, S. 2006. Lectures on Learning and
Planning. 2006 Machine Learning Summer School
(MLSS’06), Canberra, Australia.
Kambhampati, S. 2007. Model-lite Planning for the Web
Age Masses: The challenges of planning with incomplete
and evolving domain models. Proceedings, American
Association for Artificial Intelligence.
Krogt, R. van der. 2005. Plan Repair in Single-Agent
and Multi-Agent Systems. PhD thesis, TRAIL Thesis series T2005/18, TRAIL Research School, Netherlands.
Lefkowitz, L. S., and Lesser, V. R. 1988. Knowledge
Acquisition as Knowledge Assimilation. International
Journal of Man-Machine Studies, 29, 215-226.
Levine, G., & DeJong, G. 2006. Explanation-Based
Acquisition of Planning Operators. Proceedings, ICAPS
2006.
McCluskey, T.L., & Porteus, J.M. 1997. Engineering
and Compiling Planning Domain Models to Promote
Validity and Efficiency. Artificial Intelligence Journal,
95, 1-65.
McCluskey, T.L., Richardson, N.E., & Simpson, R.M.
2002. An Interactive Method for Inducing Operator
Descriptions. Proceedings, ICAPS 2002.
Mitchell, T.M. 1982. Generalization as Search. Artificial
Intelligence Journal, 18, 203-226.
Mukherji, P., & Schubert, L.K. 2005. Discovering
Planning Invariants as Anomalies in State Descriptions.
Proceedings, ICAPS 2005.
Nijssen, G.M., & Halpin, T.A. 1989. Conceptual Schema
and Relational Database Design: A fact-oriented
approach. Prentice-Hall Pty Ltd, Sydney, Australia.
Nilsson, N.J. 1980. Principles of Artificial Intelligence.
Tioga Publishing Company, Palo Alto, California, USA.
Oates, T., & Cohen, P.R. 1996. Searching for Planning
Operators with Context-Dependent and Probabilistic
Effects. Proceedings, AAAI, 865-868.
Porter, B., & Kibler, D. 1986. Experimental Goal
Regression: A method for learning problem-solving
heuristics. Machine Learning, 1, 249-284.
Shannon, C.E. 1948. A Mathematical Theory of
Communication. Bell System Technical Journal, 27, 379-423 (July) & 623-646 (October).
Shen, W.-M. 1994. Discovery as Autonomous Learning
from the Environment. Machine Learning, 12, 143-156.
Slaney, J., & Thiébaux, S. 2001. Blocks World Revisited.
Artificial Intelligence Journal, 125, 119-153.
Szulanski, G. 1996. Exploring Internal Stickiness:
Impediments to the transfer of best practice within the
firm. Strategic Management Journal, 17, 27-43.
Wang, X. 1994. Learning Planning Operators by
Observation and Practice. PhD thesis, Computer Science
Department, Carnegie Mellon University, Pittsburgh,
PA, USA.
Weick, K. 1995. Sensemaking in Organizations. Sage,
Thousand Oaks, CA, USA. ISBN 0-8039-7178-1.
Yang, Q., Wu, K., & Jiang, Y. 2005. Learning Action
Models from Plan Examples with Incomplete
Knowledge. Proceedings, ICAPS 2005, 241-250.
Zimmerman, T., & Kambhampati, S. 2003. Learning-Assisted Automated Planning: Looking back, taking
stock, going forward. AI Magazine, 73-96 (Summer
2003).
The Dimensions of Driverlog
Peter Gregory and Alan Lindsay
University of Strathclyde
Glasgow
UK
{pg|al}@cis.strath.ac.uk
Abstract
The International Planning Competition has provided a
means of comparing the performance of planners. It is supposed to be a driving force for planning technology. As the
competition has advanced, more and more complex domains
have been introduced. However, the methods for generating
the competition instances are typically simplistic. At best,
this means that our planners are not tested on the broad range
of problem structures that can be expressed in each of the domains. At worst, it means that some search techniques (such
as symmetry-breaking and graph-abstraction) are ineffective
for the competition instances.
It is our opinion that a competition with interesting instances
(those with varied structural properties) would better drive the
community to develop techniques that address real-world
issues, rather than just solve contrived competition test-cases.
Towards this end, we present a preliminary problem generator for the Driverlog domain, and introduce several important
qualities (or dimensions) of the domain. The performance
of three planners on instances generated by our generator
is compared with their performance on the competition instances.
Introduction
The International Planning Competitions have been a driving force for the development of planning technology. Each
competition in turn has added to the expressivity of the standard language of AI Planning: PDDL. The domains that
have been created for each competition have also increased
in complexity and structure. For domains tested in the early
planning competitions, such as Blocksworld, problem generation was not considered a difficult problem: generate two
random configurations of the blocks and use those as the initial and goal states.
Slaney and Thiébaux showed that even for Blocksworld,
problem generation is an interesting problem. Using the intuitive technique to generate states will not generate all possible states (Slaney & Thiébaux 2001). If a simple, intuitive problem generation strategy is not satisfactory for a domain such as Blocksworld, it seems highly unlikely that a
similar strategy would be satisfactory for a modern, highly-structured domain.
This work addresses two questions. The first is how to
generate an interesting benchmark set for a complex structured domain (the Driverlog domain). The second question
asks whether or not the competition results accurately reflect the performance of the competing planners across the
benchmark problems that have been created.
Ideally, a set of benchmarks should test current planning
technology to its limits. More than simply supplying problems that reach outside of the scope of current planners,
a benchmark set should highlight the particular structural
properties that planners struggle with. This provides focus
for future research. Studying the reasons why our planners
fail to solve certain types of problems reveals where future
improvements might be made.
Benchmarks should, when appropriate, model reality in
a useful way. Of course, it is infeasible to expect planners
to solve problems on a massive scale. But it is possible to
retain structural features of real-world problems. Nobody
would write a logistics instance in which a particular package was in more than one location in the initial state, although this would probably be allowed by the domain file.
The structural property that objects cannot occupy more than
one location is intuitive, but there may be other real-world
structural properties that are not as obvious.
The final function that a good benchmark set should provide is a solid foundation for critical analysis of different
planners. One criticism of the IPC could be that there are
simply not enough instances to know which planner is best
and when. Ideally, there should be enough results to prove
that some planner is faster, or produces higher quality plans
to a statistically significant level.
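One way to make such a claim precise is an exact sign test over per-instance wins. The following sketch is our own illustration of the statistics involved, not a procedure used by the competition:

```python
# Exact two-sided sign test over paired per-instance results.
from math import comb

def sign_test_p(wins_a, wins_b):
    """Probability of a win split at least this lopsided if both
    planners were equally likely to win each instance
    (tied instances should be discarded beforehand)."""
    n = wins_a + wins_b
    k = min(wins_a, wins_b)
    # two-sided tail of Binomial(n, 0.5)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# e.g. if planner A is faster on 40 of 50 decided instances:
print(sign_test_p(40, 10))  # far below 0.05 -> significant difference
```

With only a handful of instances, even a clean sweep yields a weak p-value, which is exactly the "not enough instances" criticism made above.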
The Driverlog Problem
A transportation problem involves a set of trucks moving
packages from a starting location to a goal destination in
an efficient way. The trucks drive along roads that connect
the locations together, and a package can be picked up from
or dropped off to a truck’s current location. The Driverlog
domain extends this model by introducing drivers. Drivers
have their own path network that connects the locations together, allowing them to walk between locations. Trucks in
Driverlog can only move if they are being driven by a driver.
This introduces an enabler role, moving away from a simple
deliverable/transporter model. As well as this, goal locations are often set for the drivers and the trucks, not just the
packages.
Transportation domains can cover problems with interesting road structures. However, Driverlog adds interesting
challenges, as there can be complicated interactions between
the two graph structures, and there are many more factors to
consider when deciding how to deliver the packages, including additional goal types and useful driver-truck pairings.
The Dimensions of Driverlog
A Driverlog problem comprises the following things: a set
of drivers, a set of trucks, a set of packages and a set of
locations. All of the drivers, trucks and packages are initially
at a location. A subset of the drivers, trucks and packages
have a goal location. Locations are connected in two distinct
ways: by paths (which drivers can walk along) and by roads
(which trucks can drive along).
We propose eight dimensions that we feel could be combined to create interesting and challenging Driverlog problems. The dimensions largely focus on introducing structural features to the graphs; however, we also consider the
types of goals and the numbers of the separate objects in the
problem. These could greatly affect the difficulty of the
problem.
Graph topology There are several options relating to the
graph topology: its connectivity, and whether it is planar or non-planar.
Planar graphs are graphs that can be drawn on a plane,
with no intersecting edges, a property existing in many
real road networks. This domain can be used to represent
other problems and it is likely that non-planar graphs will
also be of interest and increase the problem space. The
connectivity of the graph, from sparse to dense can also
be set, allowing a whole range of interesting structures to
be explored.
Numbers of objects The number of trucks, drivers, packages and locations are the traditional parameters for generating Driverlog problems. This dimension can be used
to set the size of the problem and can have some effect on
the difficulty.
Types of goals There are only three possible types of goal
in Driverlog: the goal location for trucks, drivers or packages. In real world transportation problems, the planner
can never consider delivering the packages in isolation;
the final destination of the drivers and trucks is also extremely important. Allowing the types of goals to be selected provides control over the emphasis of the problem.
Disconnected drivers There are two separate graphs in the
Driverlog domain, the road graph and the path graph. The
interesting interactions that can happen between the two
graph structures are usually ignored. We want to encourage exploration of these interactions. Disconnected
drivers provide problems where drivers must traverse both
graphs (walking, or in a truck) to solve the problem.
One-way streets Links can be added between two locations
in a single direction. This means that trucks can move
from one location to another, but may have to find a different route to return to the original location. Solving problems with directed graph structure forces careful planning
of how the trucks are moved around the structure. If the
wrong truck is moved down one of the one-way streets,
then many wasted moves could be incurred as the truck
traverses back through the graph. As well as adding an
interesting level of difficulty, we think this dimension is
particularly relevant, because of the increasing number of
one-way streets in the transport network.
Dead ends Dead ends are locations, or groups of locations, that are joined to the main location structure in one direction only. This means that a truck cannot return once it has
moved into one of these groups of locations. This forces
the planner to carefully decide when to send the truck into
one of these groups, as it will be lost for the remainder of
the plan. For example, on difficult terrain there can be
craters that a robot can manage to move into, but are too
steep for the robot to get out again. In this case the planner
may want to balance the importance of the scientific gain
with the cost of the robot and termination of the mission.
SAT/UNSAT This dimension allows the possibility of unsolvable problems. Solvable means that there is a sequence of actions moving the state from the initial state to
a state that satisfies the goal formula. This option might
allow the exploration of more interesting properties in the
other dimensions, as sometimes it is impossible to ensure
that certain combinations are solvable.
Symmetry in objects Symmetry occurs in the Driverlog
problem when different objects or configurations of objects are repeated. For example, three trucks that have
the same start and goal conditions are symmetric. Also,
the underlying road network may be symmetric. Planners
that perform symmetry breaking can exploit symmetry to
reduce the amount of necessary search.
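Several of the graph-based dimensions above can be checked mechanically. As a hypothetical illustration (not part of any published generator interface), the sketch below detects dead-end locations: those a truck can reach from a given start location but from which it can never return.

```python
# Detect dead-end locations in a directed road map.
from collections import defaultdict

def reachable(edges, start):
    """Forward-reachable set from `start` over directed edges."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    seen, stack = {start}, [start]
    while stack:
        for v in adj[stack.pop()]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def dead_ends(edges, start):
    """Locations a truck starting at `start` can enter but never
    leave again (no route back to `start`)."""
    fwd = reachable(edges, start)
    back = reachable([(v, u) for u, v in edges], start)  # reverse graph
    return fwd - back

# One-way spur: 1 <-> 2, plus 2 -> 3 with no road back from 3.
roads = [(1, 2), (2, 1), (2, 3)]
print(dead_ends(roads, 1))  # {3}
```

On an undirected map the forward and backward reachable sets coincide, which is why the dead-ends and one-way-streets dimensions only arise once directed links are allowed.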
The Instance Generators
Four different generators have been written for this work.
In the future, these will be reduced to a single generator.
But since this is preliminary work, different generators were
produced for different important dimensions. These are Planar, Non-Planar, Dead-ends and Disconnected Drivers. The
generators explore the different dimensions identified as interesting in the previous section. Three of these dimensions
are not explored: Symmetry in objects, types of goals and
SAT/UNSAT. The majority of modern planners have been
built around the assumption that instances will be satisfiable, and so this dimension may not produce any interesting discussion. In all of the instances, each driver, truck and
package has a goal destination (unless otherwise specified).
Symmetry in objects cannot be explicitly varied in any of
the generators. It is our intention to add the capacity to vary
these dimensions in the future. One more restriction is that
except in the Disconnected Drivers generator, the path map
is identical to the link map. The generators do the following:
Planar
Generates instances with planar maps. The user can vary
the number of drivers, trucks, packages and locations. The
user is required to supply the probability of two locations
being connected. The user specifies if the map is directed
or not. All of the generated maps will be connected. In the
implementation, if a generated map is not connected, it is
simply discarded and a new one generated.
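The discard-and-retry step can be sketched as rejection sampling. The code below is our illustration of only the connectivity check described above; how planarity is tested or enforced is not specified here.

```python
# Rejection-sample random undirected maps until a connected one appears.
import random

def is_connected(n, edges):
    """Depth-first reachability check over undirected edges."""
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def random_connected_map(n_locations, p, seed=None):
    """Sample maps with edge probability p; discard disconnected ones,
    as in the generator described above.  Returns (u, v) edges, u < v.
    (With very small p this may retry many times.)"""
    rng = random.Random(seed)
    while True:
        edges = {(u, v) for u in range(n_locations)
                 for v in range(u + 1, n_locations)
                 if rng.random() < p}
        if is_connected(n_locations, edges):
            return edges
```

For moderate connection probabilities the discard loop terminates quickly; it is only for very sparse settings that rejection sampling becomes expensive.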
Non-Planar
The Non-Planar generator is similar to the Planar generator
except that the user specifies a particular number of links in
the road-map and, of course, the resultant road-maps may
not be planar.
Dead-ends
To test road maps with dead-ends, the following method is
used for generating an instance. A random tree is constructed as the
road map, connecting each location n with a random
location whose index is lower than n. There are t trucks and drivers, initially located at location 1. The last t locations are then used
as destination locations for the packages. The trucks do not
have a destination location specified.
Each package is randomly assigned a destination from
those last t locations. Each package is then initially placed
at any location on the path between location 1 and its destination. The challenge in this problem is simply to drive a
truck to each destination and only load a truck with packages that are supposed to be delivered to that truck’s destination. Figure 1 shows one example. In this example, normal fonts represent the initial location of packages, italicised
fonts represent their goal locations.
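The construction just described can be sketched directly. This is our reading of the method; the function and variable names are illustrative.

```python
# Sketch of the Dead-ends instance construction described above.
import random

def dead_end_instance(n_locations, n_trucks, n_packages, seed=None):
    """Random tree of one-way roads rooted at location 1; trucks and
    drivers start at location 1; package goals are the last n_trucks
    locations; each package starts somewhere on the path from
    location 1 to its goal."""
    rng = random.Random(seed)
    parent = {1: None}
    for n in range(2, n_locations + 1):
        parent[n] = rng.randint(1, n - 1)      # one-way road parent -> n
    roads = [(parent[n], n) for n in range(2, n_locations + 1)]

    destinations = list(range(n_locations - n_trucks + 1, n_locations + 1))
    packages = []
    for _ in range(n_packages):
        goal = rng.choice(destinations)
        path = [goal]                          # walk back up to the root
        while parent[path[-1]] is not None:
            path.append(parent[path[-1]])
        start = rng.choice(path)               # anywhere on the 1..goal path
        packages.append((start, goal))
    return roads, packages
```

Because every road points away from location 1 in the tree, any detour down the wrong branch strands the truck, which is what makes these instances hard for the planners tested below.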
Disconnected Drivers
The Disconnected Driver generator is designed to explore
the Disconnected Driver dimension. In order to do this, a
map with no paths is created. Each driver is paired with
a truck: the goal locations of the truck and driver are the
same. Their initial locations are not the same (although each
driver has a truck available). The challenge in the instances
generated is in swapping the drivers into the truck that shares
its goal location.
Experiments
To test the generators, we have used three of the most successful planners of recent times, FF (Hoffmann & Nebel
2001), LPG (Gerevini & Serina 2002) and SGPlan (Chen,
Wah, & Hsu 2006). We used FF version 2.3, LPG version
1.2 and SGPlan version 41. All of the tests were performed
on a desktop computer with a dual-core Intel Pentium 4
2.60GHz CPU. The tests were limited to using 10 minutes
and 300MB of memory. The timings for FF and LPG measure system time + user time. Sadly, there is no simple way
of calculating this measure for SGPlan, and so clock time is
used. This could mean that SGPlan seems slightly slower
than in reality. However, system load was minimal during
testing, and any scaling in performance should be a very
small constant factor. The quality of plans is measured by
number of actions. As FF only produces sequential plans,
and LPG by default optimises number of actions, this was
thought a fairer measure than makespan.
We generated a huge number of benchmark test cases,
and then after some preliminary small-scale tests chose an
interesting selection of problems that covered a range of
difficulty for all of the planners. We highly recommend
this method. Without preliminary tests, it is impossible to
know what range of problems may provide difficulties for
the planners. It is far too easy to construct a benchmark set
composed entirely of either very easy or impossible to solve
problems.
We provide detailed results for planar road maps with four
drivers, four trucks, nine packages, and number of locations
varying between 10 and 30, in steps of five. For each size
of map, we generated 50 instances with the probability of two
nodes being connected ranging between 0.1 and 0.9, both for directed and undirected graphs. This gives 250 instances each for
directed and undirected graphs. Planar graphs were selected
as they have a similar structure to real-world road networks.
We also used 180 of the generated Dead End instances.
These instances have between one and four trucks; all have
nine packages and 15 locations.
Results
The results of performing the above experiments can be seen
in Figure 2 to Figure 5. These graphs show the planar directed results (both time and quality) for FF vs. LPG, FF vs.
SGPlan and LPG vs. SGPlan respectively. The graphs of the
timings are log-scaled, whereas the graphs showing quality
are linear scaled.
Time vs. Quality
The results shown in Figure 2 to Figure 5 show that there is
little to choose between the planners in terms of plan quality.
In each comparison, each of the two planners wins in
roughly half of the cases. However, of
the three planners, LPG is considerably better in terms of
time taken than the other planners. This highlights the fact
that planners have been built specifically for the task of
finding a satisficing plan, rather than for optimising metrics.
SGPlan Dependence on FF
It was noticed that in many problems that were found difficult by FF, SGPlan also struggled. FF’s performance seems
to dominate that of SGPlan. This is unusual, as each goal in
a Driverlog problem should be reasonably straightforward
to solve in isolation. However, it is perhaps due to the fact
that some of the goals in Driverlog are strongly dependent
on resources that participate in other goals. This could mean
that combining the sub-plans becomes difficult for SGPlan.
Dead-end Example
Figure 1 shows an example of the Dead End instances generated. This instance had three trucks. The numbers in Figure 1 represent package locations. The italicised numbers
represent the goal locations of the packages. All three planners were incapable of solving this simple problem.
This highlights the fact that the planners do not reason
about resource allocation intelligently. If the problem is
viewed as a task of assigning packages to trucks, then the
problem becomes very simple. It also shows that the planners do not reason about the consequences of irreversible
actions.
Figure 1: Dead-end instance in which all three planners fail
Instance   FF   LPG  SGPlan      Instance   FF   LPG  SGPlan
   1        9     7     9           11       26    25    25
   2       23    27    24           12       53    42    39
   3       13    13    13           13       37    33    36
   4       17    16    21           14       39    78    44
   5       23    31    24           15       49    60    47
   6       14    14    14           16        –   284     –
   7       18    21    18           17        –   143     –
   8       24    25    30           18        –   179     –
   9       32    32    35           19        –   230     –
  10       21    22    19           20        –   176     –
Table 1: Plan Quality for the 2002 IPC Benchmark Instances
Directed vs. Undirected
Figure 2 and Figure 3 show the results of the Planar Directed
and Undirected tests respectively. For each of the planners,
there was no large difference in the results between the directed and undirected test cases. It was thought that for the
same reason the planners deal badly with dead-ends, they
may also deal badly with one-way streets. This appears not
to be the case, although further experiments may reveal more
specific forms of dead-end roads in which the planners struggle.
Competition Comparisons
The planning competition provides a strong motivation in
our field and directs the activity of the community. In this
study we examined the generator used in the 2002 IPC (Long
& Fox 2003): the Driverlog problem generator. We feel that
the generator does not provide problems that capture the full
potential of what is a structurally rich domain. Therefore it
is our opinion that the competition has failed to fully explore
how the planners behave in this domain. Our approach focusses on generating problems with several varied structural
Instance   FF   LPG  SGPlan      Instance   FF    LPG  SGPlan
   1        0     0     0           11        0     0      0
   2        0     0     0           12        0.4   0.1    0.2
   3        0     0     0           13        0.2   0.1    0.1
   4        0     0     0           14        0.3   0.2    0.1
   5        0     0     0           15        0.1   0.4    0.1
   6        0     0     0.1         16        –    85.9    –
   7        0     0     0           17        –     2.8    –
   8        0     0     0           18        –     8.1    –
   9        0     0     0           19        –    52.1    –
  10        0     0     0           20        –    72.1    –

Table 2: Execution Time (sec) for the 2002 IPC Benchmark Instances
features and we feel our results provide more understanding of the planners’ strengths and weaknesses. We believe
that this provides a far stronger base for making comparisons between the planners. In this section we describe the
Driverlog generator used in the competitions and discuss the
differences between the results of the competition and the
results found in this study.
Driverlog is a domain that is rich in structure; however, the
current competition generator uses a very simple approach
to creating the test cases. The parameters to the generator
are the number of trucks, drivers, packages and road locations. The connectivity of the road graph is determined by
making (number of road locations × 4) connections between
any two random locations. If the graph is not connected,
then additional connections are added between disconnected
locations until it is. It is highly likely that this method will
produce a very densely connected graph. The same happens for the path graph, except that only (2 × the number of
locations) connections are made, increasing the chances of a sparser
path graph. These graphs are both undirected, removing any
chance of one-way streets or dead-ends, and each covers all
the road locations, removing the possibility of disconnected
drivers. As the graphs are so densely connected, it is unlikely
that they will be planar, and even less likely that they will resemble real-world road networks.
The objects are positioned randomly across the locations
and their goal locations (if required) are chosen randomly
too. The decision on whether an object has a goal is randomly made, with 95% chance of a package having a goal
destination and 70% for both drivers and trucks. This means
that no control is given to the types of goal in the problem
and no effort is made to position the goals in an interesting
way.
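The goal-assignment step, under the stated probabilities, amounts to no more than the following sketch (function and object names are illustrative, not the generator's own):

```python
import random

def assign_goals(packages, drivers, trucks, locations, seed=0):
    """Each package gets a goal destination with probability 0.95;
    each driver and each truck with probability 0.70. Goal locations
    are chosen uniformly at random, with no positional control."""
    rng = random.Random(seed)
    goals = {}
    weighted = ([(o, 0.95) for o in packages]
                + [(o, 0.70) for o in drivers]
                + [(o, 0.70) for o in trucks])
    for obj, p in weighted:
        if rng.random() < p:
            goals[obj] = rng.choice(locations)
    return goals
```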
We feel that the planning competition should be able to show, to a statistically significant level, that one planner is faster or produces better quality plans than another. It should also be possible to identify how a planner performs in a particular area of planning. In our approach we generated problems that incorporated several interesting structural features and spanned a
whole range of difficulties. This provides a solid base for
judging the performance of the planners across the whole
domain and additionally provides invaluable insight into
how the planner behaves when faced with specific structural
features. We believe that the competition generator fails to
explore the interesting features of this domain and makes no
attempt to incorporate real-world structures into the problems. We also feel that too few problems were generated to determine the performance of the planners. Our results show that our problems spanned a whole range of difficulties, whereas the competition problems were found to be either too hard or too easy. It is our opinion that the results presented here are sufficient to determine the best planner over the whole domain and, in addition, provide useful information to planner designers regarding their planners' capabilities.
Depth of results: Number and Range
Figure 2: FF vs. LPG Planar Directed Road Network. (a) Time (sec); (b) Quality (#actions).

Figure 3: FF vs. LPG Planar Undirected Road Network. (a) Time (sec); (b) Quality (#actions).

Figure 4: FF vs. SGPlan Planar Directed Road Network. (a) Time (sec); (b) Quality (#actions).

Figure 5: LPG vs. SGPlan Planar Directed Road Network. (a) Time (sec); (b) Quality (#actions).

Figure 6: Competition Benchmark vs. Planar Undirected Graph Density. (a) Competition; (b) Planar Undirected.

One of the motivations for this work was to improve the quality of results that the planning competition could provide. We feel that the competition would greatly benefit the
community if it not only suggested an overall winner, but
also highlighted particular features of planning that individual planners excelled in. The Driverlog domain provides an
opportunity to test the planners on many interesting structural problems. However, in the competition only 20 problem instances are generated, hardly enough to make a full
exploration of the domain. Table 2 shows the time results
for FF, LPG and SGPlan on the competition instances. It is difficult to form any kind of comparison, as the results are so similar. In contrast, Figure 2(a) shows the time results for FF and LPG on our planar problem set. The large problem set, ranging over the entire dimension, provides results that clearly show how the planners compare throughout an entire range of problem difficulties.
The results that we present for each dimension come from
a full range of problem difficulties. We feel that this gives
us a strong base to make informed claims about each planner’s abilities in terms of these dimensions. In the 2002
competition, the first 15 of the problems for Driverlog provided no challenge to the planners, and the last 5 were all
found extremely difficult (mostly impossible) (Long & Fox
2003). The problems failed to provide a smooth range of difficulty. We feel that if claims are going to be made about the
quality of plans a planner makes or how quickly it produces
those plans, then the planner must have been tested across
the whole range of possible problems.
Interesting structure
Driverlog problems have the potential of containing all sorts
of structural features. We feel that the dimensions introduced earlier, capture a very interesting selection of these.
The competition generator constructs the graphs by randomly forming many connections between nodes, and this
results in densely connected graphs. All of the graphs are
undirected and the road and path graphs must visit every
point. This means that the dimensions we highlighted either cannot appear, or are very unlikely to appear, in any of the problems generated for the competition. The competition
therefore fails to explore much of the interesting structure
possible in this domain.
Our generators cover several structural features; the problems therefore test the planners across these features. This
means that our results can be used to determine more than
just the best planner: they also identify how a planner performs on problems with a particular structural feature. In the
results section, we identified the dead-end feature as a particular problem for FF, LPG and SGPlan. We feel that this
sort of information will provide invaluable feedback to the
planner designer, allowing them to focus their research on
the areas of weak performance. As discussed, it is unlikely
that the competition generator will provide many problems
with interesting structure. As a result, it is impossible to
identify when a planner performs poorly using the competition instances.
Density and realism
The planning competition is a force that directs the planning community and in our opinion it should be used to
push planning towards dealing with real-world situations.
Although current planners cannot deal with large real-world
problems, we feel that realistic structures should be incorporated into planning problems wherever possible. The road connections in real-world transport networks often form planar graphs. As we described previously, the competition
Driverlog generator is likely to generate very dense graphs,
contrasting with the real model. Figure 6 a) highlights the
connectivity of a typical competition problem, where b)
shows the more realistic, sparse structure generated by our
planar graph generator. The dimensions that we have presented in this work, have been designed specifically to test
planners on real-world structural features. It is therefore our
opinion that our generator is more likely to include realistic
structures within the problems it generates.
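As one illustration of how sparser, road-like networks could be produced, locations can be placed on a grid and connected only to adjacent grid cells, which guarantees a connected, planar, sparse graph. This is a hypothetical construction of ours for illustration, not the generator described in this paper:

```python
def grid_road_network(width, height):
    """Connect each grid cell to its right and lower neighbour,
    yielding a connected planar graph of maximum degree 4 --
    far closer to a real road map than a dense random graph."""
    edges = []
    for x in range(width):
        for y in range(height):
            if x + 1 < width:
                edges.append(((x, y), (x + 1, y)))
            if y + 1 < height:
                edges.append(((x, y), (x, y + 1)))
    return edges
```

A width × height grid has exactly 2·width·height − width − height edges, so edge count grows linearly with the number of locations rather than with the ×4 multiplier of the competition scheme.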
Future Work
This short study aims to motivate researchers to take the
problem of instance generation more seriously. To further
this work, several things can be done:
Create More Generators Driverlog is just one domain
from many previous competition domains. Instance
generators for the full range of competition domains
would help to further refine where planning technology’s
strengths and weaknesses are.
Complete Driverlog Generator Even the Driverlog generators as described in this work are not complete. New interesting dimensions may be identified, which would require extending the generator to create problems across
this new dimension. One of the current dimensions
(amount of symmetry) is not yet varied explicitly in the
generators. Adding this capacity is part of the future work
for this project.
Richer Modelling Language PDDL is capable of expressing far more than the propositional instances generated
by our current generator. In the IPC, numeric and temporal versions of Driverlog were tested alongside the purely
propositional forms of the problem. These included durations on each of the actions, and also fuel costs for driving.
They also had different metrics to optimise. Clearly expanding the generators to these dimensions is essential to
further planning technology in these areas.
Real-world Derived Instances Real logistics problems are
different from typical Driverlog instances both in size
and structure. Real logistics problems have huge numbers of locations. The structure of their underlying maps
will remain constant: road networks rarely change significantly. If one goal of the planning community is to
address real-world problems, then real-world benchmarks
are required. Techniques to exploit structures that are constant between different instances could be developed to
tackle these problems.
A Generator Generator There are common, repetitive
structures that occur in different planning domains. For
instance, there are many problems similar to Driverlog,
in which movement across a graph is required. If these
structures can be identified, then the dimensions identified here that relate to graph structures could be used as
generic dimensions in other problems with similar structures. Therefore, if enough different structures could be
identified, then a generic problem generator could be created which would be able to generate instances of any domain that have interesting structure.
Conclusions
In this paper, we have tried to show that the problem of instance generation is of critical importance to the planning
community. Having complex domains is not enough. To test planners effectively, benchmarks that explore all possible structural dimensions of our domains have to be created.
We have identified several structural dimensions for the
Driverlog domain, and have created instance generators that
explore several of these. After creating many instances our
results show that, for the planners tested, there is little difference in plan quality. The planners also cannot handle resource allocation intelligently (as seen in the dead-end example).
We have shown that the IPC generator does not generate structurally interesting instances, and have made various
criticisms of the competition benchmarks. It must be remembered that running the IPC already requires a great deal
of work, and so this work is not created to undermine the efforts of the organisers. However, it does show that creating
instance generators should not simply be the responsibility
of the competition organisers.
This work is still preliminary, and a completely unified
Driverlog generator that can generate instances anywhere in
the structural dimensions is essential. There is still plenty of work to be done to understand what structural properties
underlie difficult instances. Hopefully this work will convince its readers that instance generation is an important
topic both for comparing our planners and for understanding what makes a difficult planning problem.
References

Chen, Y.; Wah, B. W.; and Hsu, C. 2006. Temporal Planning using Subgoal Partitioning and Resolution in SGPlan. Journal of Artificial Intelligence Research 26:323–369.
Gerevini, A., and Serina, I. 2002. LPG: A Planner Based on Local Search for Planning Graphs with Action Costs. In AIPS, 13–22.
Hoffmann, J., and Nebel, B. 2001. The FF Planning System: Fast Plan Generation Through Heuristic Search. Journal of Artificial Intelligence Research 14:253–302.
Koehler, J. 1999. RIFO within IPP. Technical report, Albert-Ludwigs University at Freiburg.
Long, D., and Fox, M. 2003. The 3rd International Planning Competition: Results and Analysis. Journal of Artificial Intelligence Research 20:1–59.
Slaney, J., and Thiébaux, S. 2001. Blocks World Revisited. Artificial Intelligence 125(1–2):119–153.
VLEPpO: A Visual Language for Problem Representation
Ourania Hatzi1, Dimitris Vrakas2, Nick Bassiliades2, Dimosthenis Anagnostopoulos1 and
Ioannis Vlahavas2
1
Harokopio University of Athens, Athens, Greece
{raniah, dimosthe}@hua.gr
2
Dept. of Informatics, Aristotle University of Thessaloniki, Thessaloniki, 54124, Greece
{dvrakas, nbassili, vlahavas}@csd.auth.gr
Abstract
AI planning constitutes a field of wide interest, as its techniques can be applied to many areas. Contemporary systems deal with certain aspects of planning and focus mainly on advanced features such as resources, time and numerical expressions. This paper
presents VLEPpO, a Visual Language for Enhanced
Planning problem Orchestration. VLEPpO is a visual
programming environment that allows the user to easily
define planning domains and problems, acquire their
PDDL representations, as well as receive solutions,
utilizing web services infrastructure.
1. Introduction

AI planning has been an active research field for a long time, and its applications are manifold. A great number of techniques and systems have been proposed during this period in order to accommodate the design and solving of planning domains and problems. In addition, various formalisms and languages have been developed for the definition of these domains, with the Planning Domain Definition Language (PDDL) [4][5][6] being dominant among them.
Research among contemporary planning systems has revealed a lack of appropriate integrated visual environments for accurately representing PDDL elements and structures, and consequently using these structures to produce quality plans. This provided the motivation for the work presented in this paper.
The proposed visual tool is intended to cover the need for such an environment by providing an easy-to-use, efficient graphical user interface, as well as interoperability with planning systems implemented as web services. The elements offered in the interface correspond to PDDL elements and structures, making the representation of most contemporary planning domains possible. Furthermore, features for importing from and exporting to PDDL are provided as well. Drag-and-drop operations along with validity checks make the environment easy to use even for users not particularly familiar with the language.
The rest of the paper is organised as follows: Section 2 reviews related work in the field by presenting several planning systems, while Section 3 discusses the prominent formalisms for representing planning domains and problems. Section 4 presents our visual tool and demonstrates its use through examples, and finally, Section 5 concludes and discusses future goals.

2. Related Work

There have been a few experimental efforts to construct general-purpose tools which offer user interfaces for defining planning domains and problems, as well as executing planners which provide solutions to the problems.
The GIPO system [1] is based on an object-centric view of the world. The main idea behind it is the notion of change in the state of objects throughout plan execution. Therefore, domains are modelled by describing the possible changes to the objects existing in the domain. The GIPO system is designed to work with both classical and HTN (Hierarchical Task Network) domains. In both cases, it offers graphical editors for domain creation, planners, animators for the derived plans, and validation tools. The domain models are represented mainly in an internal representation language called OCL (Object Centered Language) [8], which is, as the name implies, object oriented, in accordance with the GIPO system. Translators from and to PDDL have been developed, but they cover only a few parts of the language (typed / conditional PDDL).
SIPE-2 [2] is another system for interactive planning and execution of the derived plans. As it is designed to be performance-oriented, it embodies many heuristics for increased efficiency. Another useful feature is plan execution monitoring, which enables the user to feed new information to the system in case there is some change in the world. In addition, the system offers graphical interfaces for knowledge acquisition and representation, as well as plan visualization. SIPE-2 is an elaborate system with a wide range of capabilities. However, it uses the ACT formalism, which is quite complicated and does not correspond directly to PDDL, although PDDL descended partially from this formalism, as well as from other formalisms such as ADL. Therefore, there is no easy way to use a PDDL file to construct a domain in SIPE-2, or to export a domain or problem to PDDL.
ASPEN is an environment for automated planning and scheduling. It is an object-oriented system, originally targeted at space mission operations. Its features include an expressive constraint modelling language which is used for defining the application domain, systems for defining activity requirements and resource constraints, as well as temporal constraints. In addition, a graphical user interface is included, but its use is confined to visualization of plans and schedules, in systems where the problem solving process is interactive. ASPEN was developed for the specific purposes of space mission operations and therefore it has only a few vague correspondences to PDDL. Furthermore, it does not offer a graphical interface for creating the planning domains.
In conclusion, although the above systems are useful,
none of them offers direct visual representation of PDDL
elements, a feature which would make the design very
efficient for the users already familiar with the language.
Moreover, even the systems which offer translation to
PDDL do not cover important features of the language. It
should be mentioned that a few other systems which provide user interfaces can be found in the literature, but they are not discussed in this section because they were developed for specific purposes.
The VLEPpO tool is based on ViTAPlan [3], a visualization environment for planning built on the HAPRC planning system. VLEPpO extends ViTAPlan in numerous ways, providing the user with visualization capabilities for most of the advanced features of PDDL [6] and a more accurate and expressive visual language.
3. Problem Representation

A crucial step in the process of solving a search problem is its representation in a formal language. The choice of the language can significantly affect not only the comprehensiveness of the representation but also the efficiency of the solver. The PDDL language is nowadays the standard for representing planning problems. PDDL is partially based on the STRIPS [7] formalism. Since the environment presented in this work has a close connection with PDDL, a brief description of the most important language elements is provided in the following section.

3.1. The PDDL Definition Language

PDDL [4] stands for Planning Domain Definition Language. Although it was initially designed for planning competitions such as AIPS and IPC, it has become a standard in the planning community for modelling planning domains. PDDL focuses on expressing the physical properties of the domain at hand in each planning problem, such as the available predicates and actions. However, the language has no structures or elements to provide the planner with advice, that is, guidelines about how to search the solution space, although extended notation may be used, depending on the planner.
Each domain definition in PDDL consists of several declarations, which include types of entities, variables, constants, literals that are true at all times (called timeless), and predicates. In addition, there are declarations of actions, axioms and safety constraints. These elements are explained in the following paragraphs.
Variables have the same semantics as in any other definition language, and are used in conjunction with built-in functions for expression evaluation. In more recent versions of PDDL, fluents seem to gain momentum over variables when there is a need for values that can change over time as a result of an action.
Constants represent objects whose values do not change, and can be used in the domain operators or in the problems associated with a domain.
Relations between objects in the domain are represented by predicates. A predicate may have an arbitrary number of arguments, whose ordering is important in PDDL. Predicates are used to describe the state of the world at a specific moment. Moreover, they are used as preconditions and results of actions.
Timeless predicates are predicates that are true at all times. Therefore, they cannot appear as a result of an action unless they also appear among its preconditions. In other words, timeless predicates are not affected by any actions available to the planner.
Actions enable transitions between successive situations. An action declaration mentions the parameters and variables involved, as well as the preconditions that must hold for the action to be applied. PDDL offers two choices when it comes to defining the results of an action: the results can either be a list of predicates called effects, or an expansion, but not both at the same time. The effects, which can be both conditional and universally quantified, express how the world situation changes after the action is applied. More specifically, inspired by the STRIPS formalism, the effects include the predicates that will be added to the world state and the predicates that will be removed from it.
Axioms, in contrast to actions, state relationships among propositions that hold within the same situation. The necessity of axioms arises from the fact that action definitions do not mention all the changes in all predicates that might be affected by an action. Therefore, additional predicates are concluded by axioms after the application of each action. These are called derived predicates, as opposed to primitive ones. In more recent versions of the language the notion of derived predicates has replaced axioms, but the general idea remains the same.
Safety constraints in PDDL are background goals which may be broken during the planning process, but must ultimately be restored. Constraint violations present in the initial situation are not required to be repaired by the planner.
After a planning domain has been defined, problems can be defined with respect to it. A problem definition in PDDL must specify an initial situation and a final situation, referred to as the goal. The initial situation can be specified either by name, or as a list of literals assumed to be true, or a combination of both. In the last case, literals are treated as effects; therefore they are added to the initial situation stated by name. Again, the closed-world assumption holds, unless stated otherwise: all predicates which are not explicitly defined to be true in the initial state are assumed to be false. The goal can be either a goal description, using function-free first-order predicate logic including nested quantifiers, or an expansion of actions, or both. The solution to a problem is a sequence of actions which can be applied to the initial situation, eventually producing the situation stated by the goal description, and satisfying the expansion, if there is one.
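The problem/solution semantics of Section 3.1 can be made concrete with a small sketch. This is our own illustration of the STRIPS-style add/delete semantics, not part of PDDL or of the tool; the literal strings are illustrative.

```python
def apply(state, action):
    """Apply a ground STRIPS-style action, given as a triple
    (preconditions, add, delete) of sets of literal strings:
    the preconditions must hold in the current state, then the
    delete effects are removed and the add effects included."""
    prec, add, delete = action
    if not prec <= state:
        raise ValueError("preconditions not satisfied")
    return (state - delete) | add

def valid_plan(initial, goal, plan):
    """A solution is a sequence of actions applicable in order from
    the initial situation whose final state satisfies the goal."""
    state = initial
    for action in plan:
        state = apply(state, action)
    return goal <= state
```

Note how the closed-world assumption shows up: a state is only the set of literals known to be true, and everything absent is taken to be false.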
PDDL 2.1 [5] was designed to be backward compatible with PDDL 1.2, and to preserve its basic principles. It was developed out of the necessity for a language capable of expressing temporal and numeric properties of planning domains.
The first of the extensions introduced were numeric
expressions. Primitive numeric expressions are values of
functions which are associated with tuples of domain
objects. Further numeric expressions can be constructed
using primitive ones and arithmetic operators. In order to
support numeric expressions, new elements were added
to the language. Functions are now part of the domain
definition and, as mentioned above, they associate a
number of objects with an arithmetic value. Moreover,
conditions were introduced, which are in fact
comparisons between pairs of numeric expressions.
Finally, assignment operations are possible, with the use
of built-in assignment operators such as assign, increase
and decrease. The actual value for each combination of
objects given by the functions is not stated in the domain
definition but must be provided to the planner in the
problem definition.
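The assignment operations described above can be sketched as follows. The representation of the numeric state and the fuel function are our illustrative assumptions; only the operator names mirror PDDL 2.1's assign, increase and decrease.

```python
# A toy ground numeric state: primitive numeric expressions are
# function values keyed by the function name and its argument tuple,
# supplied (as in PDDL 2.1) by the problem, not the domain, definition.
state = {("fuel", ("truck1",)): 10.0}   # fuel and truck1 are illustrative

def assign(state, key, value):
    state[key] = value

def increase(state, key, value):
    state[key] += value

def decrease(state, key, value):
    state[key] -= value

# Driving consumes fuel; refuelling restores some of it.
decrease(state, ("fuel", ("truck1",)), 3.5)
increase(state, ("fuel", ("truck1",)), 1.0)
```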
A further extension to PDDL facilitated by numeric
expressions is plan metrics. Plan metrics specify the way
a plan should be evaluated, when a planner is searching
not for any plan, but for the optimal plan according to
some metric.
Other extensions in this version include durative
actions, both discretised and continuous. Up to now,
actions were considered instantaneous. Durative actions,
as the term implies, have a duration which is declared
along with their definition. Furthermore, as far as
discretised durative actions are concerned, temporal
annotations are introduced to their conditions and effects.
A condition can be annotated to hold at the start of the
interval, at the end of the interval, or all over the interval
during which the action lasts. An effect can be annotated
as immediate, that is, it takes place at the start of the
interval, or delayed, that is, it takes place at the end of the
interval.
In PDDL 3.0 [6] the language was enhanced with
constructs that increase its expressive power regarding
the plan quality specification. The constraints and goals
are divided into strong, which must be satisfied by the
solution, and soft, which may not be satisfied, but are
desired. In addition, the notion of plan trajectories is
introduced, which allows the specification of
intermediate states that a solution has to reach, before it
reaches the final state.
4. The Visual Language

VLEPpO (Visual Language for Enhanced Planning problem Orchestration) is an integrated system for visually designing and solving planning problems, implemented in Java. It offers an efficient and easy-to-use graphical interface, as well as compatibility and interoperability with PDDL. The main goal during the implementation of the graphical component of the tool was to keep the interface as simple and efficient as possible while, at the same time, representing the features of PDDL accurately and flexibly. The range of PDDL elements that can be represented in the tool is quite wide, and covers the elements that are used most frequently in contemporary planning domains and problems. In the following, the features of the tool are discussed in more detail.

4.1. The Entity – Relation Model

The entity – relation model is used to design the structure of data in a system. Our visual tool employs this well-known formalism, adapting it to PDDL. Therefore, the entities in a planning domain described in PDDL are the objects of the domain, while the relations are the predicates. These elements are represented visually in the tool by various shapes and connections between them.
A class of objects is represented visually in the tool by a coloured circle. A class in PDDL represents a type of domain objects or action parameters. From a class the user can create parameters of this type in operators, and objects of this type in problems, by dragging and dropping the class on an operator or a problem, respectively. The type of a parameter or object is denoted by its colour, which is the same as that of the corresponding class.
Consider for example the Gripper domain, where a robot with N grippers moves in a space composed of K rooms that are all connected with each other. All the rooms are modelled as points and there are connections between each pair of points; therefore the robot is able to reach all rooms, starting from any one of them, with a simple movement. In the Gripper domain there are L numbered balls which the robot must carry from their initial positions to their destinations.
Following a simple analysis, the domain described above can be encoded using four classes: robot, gripper, room and ball. However, since the domain does not support the existence of multiple robots, the class robot can be implicitly defined and therefore there is no need for it. The three remaining classes are represented in VLEPpO using three coloured circles, as outlined in Figure 1.

Figure 1. The classes in the Gripper domain.

A relation is represented by a coloured rectangle with a black outline. A relation corresponds to a domain predicate in PDDL and is used for defining connections among classes. The relations in PDDL, and therefore in VLEPpO, are of various arities. Unary relations are usually used to define properties of classes that can be modelled as binary expressions that are either true or false (e.g. closed(Door1)).
If at least one class and one relation are present in the domain, the user can add connections between them. Each connection represents an argument of a relation, and the class shows the type of this argument. A relation may have as many arguments as the user wishes, of any type the user wishes. The arguments are ordered according to the numbers on each connection, because this ordering is important in PDDL.
The Gripper domain has four relations, as depicted in Figure 2: a) at-robot, which specifies the position of the
robot and is connected with only one instance of room; b) at, which specifies the room in which each ball resides and is therefore connected with an instance of ball and an instance of room; c) holding, which defines the alternative position of a ball, i.e. being held by the robot, and is therefore connected with an instance of ball and an instance of gripper; and d) empty, which is connected only with an instance of gripper and states that the gripper currently does not hold any ball.
Figure 3. The operators in the Gripper domain.
The default view for an operator is the preconditions / results view, which follows a declarative schema different from the classical STRIPS/PDDL approach.
However, there is a direct way to transform definitions
from one approach to the other.
Although the preconditions / results view is more
appropriate for visualizing operators, the system gives the
user the option to use the classical add / delete lists view,
so the STRIPS formalism is accommodated as well. If selected, the column on the left, as before, shows the preconditions that must hold for the action to be executed, but the column on the right shows the facts that will be added to and deleted from the current state of the world upon execution of the action.
Figure 2. The relations in the Gripper domain.
Note here that although non-typed PDDL requires
only the arity for each predicate and not the type of
objects for the arguments, the interface obliges the user to
connect each predicate with specific object classes and
this is used for the consistency check of the domain
design. According to the design of Figure 2, the arity of
predicate holding, for example, is two and the specific
predicate can only be connected with one object of class
ball and one object of class gripper.
The aforementioned elements, classes, relations and
connections combined together form the entity – relation
model of the data for the planning domain the user is
dealing with.
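The consistency check that the class/relation connections enable can be sketched as a few lines of validation logic. This is our own minimal illustration of the idea, not the tool's implementation; the relation signatures follow the Gripper relations of Figure 2.

```python
# Each relation declares the class of every argument (its signature),
# and every ground fact must match that signature in arity, type and
# argument order -- order matters in PDDL.
relations = {
    "at-robot": ("room",),
    "at": ("ball", "room"),
    "holding": ("ball", "gripper"),
    "empty": ("gripper",),
}
objects = {"b1": "ball", "r1": "room", "g1": "gripper"}  # names illustrative

def check(fact):
    """Return True iff the fact's arguments match the relation's
    declared argument classes, position by position."""
    name, args = fact
    sig = relations[name]
    return len(args) == len(sig) and all(
        objects[a] == t for a, t in zip(args, sig))
```

For example, check(("holding", ("b1", "g1"))) passes, while check(("holding", ("g1", "b1"))) fails, because swapping the arguments violates the declared ordering.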
4.2. Representing Operators
Operators have direct correspondence to PDDL actions,
which enable transitions between successive situations.
The main parts of the operator definition are its
preconditions and results, as well as the parameters.
Preconditions include the predicates that must hold for
the action to be applied. Results are the predicates that
will be added or removed from the world state after the
application of the action. Operators in the visual tool are represented by light blue resizable rectangles in the Operator Editor, consisting of three columns. The left
column holds the preconditions, the right column holds
the effects, and the middle one the parameters.
Dragging and dropping a relation on an operator will
add the predicate to the preconditions or effects,
depending on which half of the operator the shape was
dropped on. Parameters can be created in operators by
dropping classes on them. Adding a connection in the
ontology enables the user to add corresponding
connections in the operators. Other elements that can be
imported in operators will be discussed in more detail in
the section about advanced features.
For example, in the Gripper domain there are three operators: a) move, which allows the robot to move between rooms; b) pick, which is used to lift a ball using a gripper; and c) drop, which is the direct opposite of pick and is used to put a ball down on the ground (Figure 3).
Figure 4. Pick operator in add/delete lists view.
As an example, the pick operator of the Gripper
domain is considered. According to the STRIPS
formalism, the operator is defined by the following three
lists, also depicted in Figure 4.
prec = {empty(GripperObj1), at-robot(RoomObj1), at(BallObj1, RoomObj1)}
add  = {holding(GripperObj1, BallObj1)}
del  = {empty(GripperObj1), at(BallObj1, RoomObj1)}
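In PDDL, the three STRIPS lists above translate to an action along these lines (an illustrative typed sketch, not necessarily the tool's literal output):

```pddl
(:action pick
  :parameters (?b - ball ?r - room ?g - gripper)
  :precondition (and (empty ?g)      ; prec list
                     (at-robot ?r)
                     (at ?b ?r))
  :effect (and (holding ?g ?b)       ; add list
               (not (empty ?g))      ; del list
               (not (at ?b ?r))))
```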
The equivalent operator in Preconditions / Results
view is presented in Figure 5.
Figure 5. Pick operator in preconditions / results view.
4.3. Representing Problems

For every domain defined in PDDL, a large number of problems corresponding to that domain can be defined. A problem definition states an initial and a goal situation, and the task of a planner is to find a sequence of operators that, applied to the initial situation, produces the goal situation. The problem shape in the visual tool is much like an operator in form, but different semantically. It is represented by a three-column resizable rectangle in the Problem Editor: the left column holds the predicates of the initial state, the right column holds the predicates of the goal state, and the middle column holds the objects that take part in the problem definition.

Figure 6. A problem instance of the Gripper domain.

Figure 6 presents a problem instance of the Gripper domain, which contains two rooms (Bedroom and Kitchen), one ball (Ball1) and a robot with two grippers (rightGripper and leftGripper). The initial state of the problem defines the starting locations of the robot and the ball (Kitchen and Bedroom, respectively) and states that both grippers are free. The goals specify that the destination of both the ball and the robot is the kitchen.

4.4. Advanced Features

The basic PDDL features described above are adequate for simple planning domains and problems. However, the language has many more features, divided into subsets referred to as requirements. An effort has been made for the visual tool to embody the most significant and frequently used among them.

An advanced design element offered by the system, with a direct representation in PDDL, is the constant. A constant is visually represented similarly to a class, but is enhanced with a surrounding red circle to discriminate it from a class. A constant must be of some type, which the user denotes by dragging and dropping the constant onto a class. Constants can be used either in an operator or in a problem, where they are treated similarly to parameters or objects, respectively.

A derived predicate is another advanced PDDL feature represented by a group of design elements in the visual tool. The term refers to predicates that are not affected by operators, but are derived from other relations using a set of rules. Derived predicates in fact existed in the first version of the PDDL language as well, under the notion of axioms. Visually, they are represented by a rounded rectangle with a specific colour, but they are not complete unless they are enhanced with an AND/OR tree that indicates the way they are derived from other relations. Consequently, AND, OR and NOT nodes for the construction of the tree are also offered as design elements. In the current implementation, AND and OR nodes are binary, that is, they accept exactly two arguments, while NOT nodes are unary. Each node argument can be either another node of any type or a relation. An example of a derived predicate is depicted in Figure 7.

Figure 7. A derived predicate with AND/OR tree.

Among the advanced features is the option to mark a predicate as timeless, that is, true at all times. This operation involves a number of validity checks, which are explained in the paragraph on syntax and validity checking.

Another PDDL feature incorporated in the tool is numerical expressions. For numerical expressions to function properly, a number of other elements must be defined, so a combination of design elements in each frame is used. First of all, in the ontology frame the user can import functions, which are represented by rectangles with a double outline. These functions may or may not have arguments. As with simple relations, functions can be dragged onto operators. However, in order to appear in the PDDL description of an operator, they must be involved in a condition or in an assignment, so the next step is to import into the operator the conditions and assignments that involve these functions. A dialog box facilitates this import by showing all the available options the user can select among. Furthermore, for each function imported into the tool, a new rectangle appears in the problem frame; it is used to declare the initial values of the function for the problem at hand.

The system also supports discretised durative actions. The definition of a durative action includes setting the duration of an operator, in combination with temporal annotations (Figure 8). In this case, the action is considered to last a specific period of time, and each precondition can be specified to hold at the beginning of this period, at its end, or over the whole period (combinations of these choices are also possible). Effects can be immediate, happening at the beginning of the action, or delayed, happening at the end of the action.

Figure 8. An example of a durative action.
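For illustration, a discretised durative variant of the pick operator could be written in PDDL2.1 syntax roughly as follows (a sketch with a made-up duration, not the tool's actual export):

```pddl
(:durative-action pick
  :parameters (?b - ball ?r - room ?g - gripper)
  :duration (= ?duration 2)
  :condition (and (at start (empty ?g))     ; holds at the beginning
                  (at start (at ?b ?r))
                  (over all (at-robot ?r))) ; holds over the whole period
  :effect (and (at start (not (empty ?g)))  ; immediate effects
               (at start (not (at ?b ?r)))
               (at end (holding ?g ?b))))   ; delayed effect
```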
4.5. Syntax and Validity Checking

A very important aspect of every tool for designing and editing planning domains is syntax and validity checking. Planning domains have to be checked for consistency within their own structures, and planning problems have to be checked for consistency and for correspondence to the related domains. The visual tool attempts to detect inconsistencies at the moment they are created and to notify the user about them, before they propagate through the domain. In the remainder of this paragraph several examples are given to illustrate the validity checking processes of the system.

Whenever the user attempts to insert a new connection in an operator or in a problem, the necessary checks are performed, and if a corresponding connection cannot be found in the ontology, an appropriate error message is shown. Special care is taken to verify that the types of parameters and objects match the types of the arguments of the predicates.

As already mentioned, the system supports timeless predicates, which are, by definition, true at all times. They are therefore allowed to appear in the preconditions of an operator, but not in the add or delete lists. Consequently, if the user adds a timeless predicate to the preconditions part of an operator, it will automatically appear in the effects part as well, so that the add and delete lists are not affected. Furthermore, if the user tries to make a predicate timeless, checks are performed to determine whether this operation is allowed. Finally, timeless predicates are not allowed to appear in a problem. In all of the above cases, error messages warn the user and help them correct the domain inconsistencies.

Another example is that of constants. Checks are performed to confirm that the class of a constant has already been defined before the user attempts to use the constant in an operator or a problem. Furthermore, additional checks are performed on the types of arguments, similar to those performed for simple objects.

Finally, a very useful element for problem design is the map. Maps represent a special kind of relation that has exactly two arguments of the same type and is expected to have many instances in the initial state of a problem (Figure 9). A map can be created for each relation that fulfills these conditions. Objects that take part in instances of the relation can then be dragged onto the map, and connections can be created between them; each of these connections represents an instance of the relation that the map corresponds to. Maps do not have an exact representation in PDDL, but they express a part of the initial state of the world, thus making the problem shape more readable. The use of maps is not mandatory, as the same relations can simply be represented in the problem shape.

Figure 9. A map for the relation connected(C1, C2).

4.6. Translation to and from PDDL

The capability to export the domains and problems designed in the tool to PDDL constitutes another important feature. All of the design elements that the user has imported into the domain, such as predicates and operators, along with comments, are exported to a PDDL file, which is enhanced with the appropriate requirements tag. The user is offered the option to use typing, so the same domain can produce two different PDDL files, one with the :typing requirement and one without it. Details about exporting are presented in the remainder of this paragraph.

Although a class in the visual tool always represents the same notion, namely the type of domain objects or parameters, it takes different forms when the domain is exported. If the :typing requirement is declared, the class name is included in the (:types ) construct of the domain definition, and a type must be declared for each object, parameter and constant. If typing is not used, classes are treated as timeless unary predicates and appear in the corresponding part of the domain definition. In addition, for each parameter of an operator, a precondition denoting the type of the parameter must then be added to the PDDL definition, although it does not appear visually in the tool. Likewise, for each object, a new initial literal denoting the type of the object must be included in the problem definition.

The elements in the Ontology Editor are combined in order to formulate the domain constructs in the syntax that the language imposes. For example, relations, connections and, if typing is used, classes are combined to formulate the predicates construct. Likewise, the functions and derived predicates constructs are formed. As far as constants are concerned, they may appear in the place of parameters in operators and of objects in problems, and they also appear in the special construct (:constants ) in the domain definition.

Exporting the operators is rather more complicated, because it requires a combination of several elements of the Operator Editor and the Ontology Editor. Slight changes occur in an operator definition depending on whether the :typing requirement is declared.

Finally, exporting the problems is quite similar to exporting the operators, but the problems are stored in a different PDDL file, so numerous problems can be defined for the same domain. If maps are used, care must be taken to include the information they embody in the list of predicates of the initial state. Furthermore, if functions are used, the initial values provided by the user in the Problem Editor become part of the declaration of the initial state of the problem, in the corresponding construct.

The visual tool also offers the feature of importing planning domains and problems expressed in PDDL, visualizing them, and thus enabling the user to manipulate them. However, importing PDDL is subject to some restrictions. The most important is that the domains and problems must declare the :typing requirement. If no typing is used, syntax alone is not enough: semantic information is necessary in order to discriminate the types of objects from common unary predicates.

4.7. Interface with Planning Systems

As the tool is intended to be an integrated system not only for designing but also for solving planning problems, an interface with planning systems is necessary. This is achieved by providing the ability to discover and communicate with web services that offer implementations of various planning algorithms. To this end, a dynamic web service client has been developed as a subsystem. The requirement for flexibility in selecting and invoking a web service justifies the decision to implement a dynamic client instead of a static one: the system can exploit alternative planning web services according to the problem at hand, and it can cope with changes in the definitions of these web services.

Communication with the web services is performed by exchanging SOAP messages, as the web service paradigm dictates. At a higher level, however, the communication is facilitated by the use of the PDDL language, which constitutes the common ground between the visual tool and the planners. An additional advantage of using PDDL is that the visual tool is relieved of the obligation to determine which PDDL features a planner can handle, leaving each planning system to decide for itself.

Employing web services technology for the interface makes the visual tool independent of the planning or problem solving module. Such a decoupling is essential, since new planning systems that outperform the current ones are continually being developed. Each of them can be exposed as a web service and then invoked to solve a planning problem without any further changes to the visual tool or to the domains and problems already designed and exported as PDDL files.

5. Conclusions and Future Work

In this paper a visual tool for defining planning domains and problems was proposed. The tool offers an efficient user interface, as well as interoperability with PDDL, the standard language for planning domain definition. The elements represented in the tool cover a wide range of the language, while the user is significantly assisted by the validity checks performed during the design process. The use of the tool is not confined to designing planning problems: the ability to solve them by invoking planners implemented as web services is offered as well. The tool can therefore be considered an integrated system for designing and solving planning problems.

Our future goals include extending the tool to represent even more complex PDDL language elements, as well as other planning approaches, such as HTN (Hierarchical Task Network) planning. Such an extension should broaden the range of real-world problems that can be represented and solved by the tool. Visual representation of the produced plans, along with plan metrics, is also among our imminent goals.

Acknowledgements

This work was partially supported by a PENED program (EPAN M.8.3.1, No. 03 73), jointly funded by the European Union and the Greek government (General Secretariat of Research and Technology).

References

[1] T. L. McCluskey, D. Liu and R. M. Simpson, "GIPO II: HTN Planning in a Tool-supported Knowledge Engineering Environment", International Conference on Automated Planning and Scheduling (ICAPS), 2003.

[2] D. E. Wilkins, T. J. Lee and P. Berry, "Interactive Execution Monitoring of Agent Teams", Journal of Artificial Intelligence Research, 18 (2003), pp. 217-261.

[3] D. Vrakas and I. Vlahavas, "A Visualization Environment for Planning", International Journal on Artificial Intelligence Tools, Vol. 14 (6), 2005, pp. 975-998, World Scientific.

[4] M. Ghallab, A. Howe, C. Knoblock, D. McDermott, A. Ram, M. Veloso, D. Weld and D. Wilkins, "PDDL -- The Planning Domain Definition Language", Technical Report, Yale University, New Haven, CT, 1998.

[5] M. Fox and D. Long, "PDDL2.1: An Extension to PDDL for Expressing Temporal Planning Domains", Journal of Artificial Intelligence Research, 20 (2003), pp. 61-124.

[6] A. Gerevini and D. Long, "Plan Constraints and Preferences in PDDL3", Technical Report R.T. 2005-08-47, Department of Electronics for Automation, University of Brescia, Italy, 2005.

[7] R. Fikes and N. J. Nilsson, "STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving", Artificial Intelligence, Vol. 2 (1971), pp. 189-208.

[8] D. Liu and T. L. McCluskey, "The OCL Language Manual, Version 1.2", Technical Report, Department of Computing and Mathematical Sciences, University of Huddersfield, 2000.
Constraint Programming Search Procedure for Earliness/Tardiness Job Shop
Scheduling Problem
Jan Kelbel and Zdeněk Hanzálek
Centre for Applied Cybernetics, Department of Control Engineering
Czech Technical University in Prague, Czech Republic
{kelbelj,hanzalek}@fel.cvut.cz
Abstract

This paper describes a constraint programming approach to solving a scheduling problem with earliness and tardiness costs using a problem-specific search procedure. The presented algorithm is tested on a set of randomly generated instances of the job shop scheduling problem with earliness and tardiness costs. The experiments are also executed for three other algorithms, and the results are compared.

Introduction

Scheduling problems with storage costs for early finished jobs and delay penalties for late jobs are common in industry. This paper describes a constraint programming (CP) approach (Barták 1999) to solving a scheduling problem with earliness and tardiness costs, which, for distinct due dates, is NP-complete already on one resource (Baker & Scudder 1990).

This paper focuses on the job shop scheduling problem with earliness and tardiness costs. This problem, introduced in (Beck & Refalo 2001; 2003), is solved there using a hybrid approach based on probe backtrack search (El Sakkout & Wallace 2000) integrating constraint programming and linear programming. The hybrid approach performed significantly better than generic (naive) CP and MIP algorithms. Another hybrid approach, combining local search and linear programming (Beck & Refalo 2002), obtained results slightly worse than those of (Beck & Refalo 2001). The large neighborhood search of (Danna & Perron 2003), applied to the same earliness tardiness job shop problem, outperformed both hybrid approaches of Beck & Refalo.

This paper describes a search procedure for scheduling problems with earliness and tardiness costs which initially tries to assign to the variables those values that lead to a solution with minimal cost. It was developed by improving the search procedure used in (Kelbel & Hanzálek 2006), where constraint programming is applied to an industrial case study on lacquer production scheduling. While in (Kelbel & Hanzálek 2006) tardy jobs were not allowed, the procedure described in this paper allows both early and tardy jobs, i.e. optimal solutions are not discarded.

The proposed search procedure is tested on a set of randomly generated instances of the job shop scheduling problem with earliness and tardiness costs. It significantly outperforms the simple (default) models introduced in (Beck & Refalo 2003), and on average it gives better results than the Unstructured Large Neighborhood Search (Danna & Perron 2003).

Earliness Tardiness Job Shop Scheduling Problem
The definition of the earliness tardiness job shop scheduling
problem (ETJSSP) is based on (Beck & Refalo 2003). We
assume a set of jobs J = {J1, ..., Jn}, where job Jj consists of a set of tasks Tj = {Tj,1, ..., Tj,nj}. Each task has a given processing time pj,i and requires a dedicated unary resource from the set R = {R1, ..., Rm}. The starting times Sj,i of the tasks, with completion times defined as Cj,i = Sj,i + pj,i, determine the result of the scheduling problem. For each job Jj there are precedence relations between tasks Tj,i and Tj,i+1 such that Cj,i ≤ Sj,i+1 for all i = 1, ..., nj − 1, i.e. the set of tasks Tj is ordered.
Concerning earliness and tardiness costs, each job is assigned a due date dj, i.e. the time by which the last task of the job should be finished. In general, the due dates are distinct. The cost of job Jj is αj(dj − Cj,nj) if the job is early and βj(Cj,nj − dj) if it is tardy, where αj and βj are the earliness and tardiness costs of the job per time unit. Taking both alternatives into account, the cost function of the job can be expressed as
fj = max(αj(dj − Cj,nj), βj(Cj,nj − dj)).    (1)

An optimal solution of the ETJSSP is one with the minimal possible sum of costs over all jobs:

min ∑_{Jj ∈ J} fj.
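To make the cost definition concrete, here is a small Python check of equation (1) and the objective, using made-up completion times, due dates and weights (the numbers are illustrative, not from the paper):

```python
# Earliness/tardiness cost of one job (equation (1)):
# f_j = max(alpha_j * (d_j - C_j), beta_j * (C_j - d_j)),
# where C_j is the completion time of the job's last task.
def job_cost(completion, due, alpha, beta):
    return max(alpha * (due - completion), beta * (completion - due))

# Objective of the ETJSSP: minimise the sum of f_j over all jobs.
def total_cost(jobs):
    return sum(job_cost(c, d, a, b) for (c, d, a, b) in jobs)

# Illustrative data: (completion, due date, alpha, beta) per job.
jobs = [(8, 10, 1, 2),   # early by 2  -> cost 1 * 2 = 2
        (13, 10, 1, 2),  # tardy by 3  -> cost 2 * 3 = 6
        (10, 10, 1, 2)]  # on time     -> cost 0
print(total_cost(jobs))  # -> 8
```

Note that a job completing exactly at its due date incurs zero cost, since both terms of the max are then nonpositive.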
In this article, a specific ETJSSP will be considered in
order to be consistent with the original problem instances
(Beck & Refalo 2003). All jobs have task sets of the same cardinality, equal to the number of resources, i.e. nj = m for all j. Each of the nj tasks of a job is processed on a different resource. Moreover, the problem has a work flow structure: the set of resources R is partitioned into two disjoint sets R1 and R2 of about the same cardinality, and the tasks of each job must use all resources from the first set before any resource from the second set, i.e. task Tj,i for all i = 1, ..., |R1| requires a resource from set R1, and task Tj,i for all i = |R1| + 1, ..., nj requires a resource from set R2.
The Model With Search Procedure for
ETJSSP
When solving constraint satisfaction problems (Barták
1999), constraint programming systems employ two
techniques: constraint propagation and search. The search consists of constructing a search tree with a search procedure (also called a labeling procedure) and applying a search strategy (e.g. depth-first search) to explore the tree. The search procedure typically makes decisions about variable selection (i.e. which variable to choose next) and about value assignment (i.e. which value from its domain to assign to the selected variable).
Our approach to solving the ETJSSP is based on a usual constraint programming model with a problem-specific search procedure. The scheduling problem is modeled directly using the formulation from the previous section, but with the higher-level abstractions for scheduling (e.g. tasks and resources) available in ILOG OPL Studio (ILO 2002). The model uses the scheduling-specific edge-finding propagation algorithm for disjunctive resource constraints (Carlier & Pinson 1990). In the CP system used, we obtained better performance when the cost function (1) was expressed as fj ≥ αj(dj − Cj,nj) ∧ fj ≥ βj(Cj,nj − dj).
Most constraint programming systems have a default search procedure that builds the search tree by assigning values from the domains to the variables in increasing order. The idea of our search procedure is based on the fact that only Cj,nj, the completion time of the last task of a job, influences the value of the cost function, and that the values of Cj,nj inducing the lowest values of the cost functions fj should be examined first.
The search procedure, inspired by time-directed labeling (Van Hentenryck, Perron, & Puget 2000), is directed by the cost only once, at the beginning of the search, as an initialization of the search tree. It is therefore denoted cost-directed initialization (CDI) and performs as described in Algorithm 1: the variables representing the completion times Cj,nj are selected in increasing order of the size of their domains, and for each the value inducing the lowest value of the cost function is selected. In the second branch of the search tree, this value is excluded. This is done only once for each task Tj,nj; the search then continues with the default search procedure.
Slice Based Search, available in (ILO 2002), based on (Beck & Perron 2000) and similar to Limited Discrepancy Search (Harvey & Ginsberg 1995), is used as the search strategy to explore the search tree constructed by the CDI procedure. This is necessary for good performance: with depth-first search instead, the algorithm was unable to find any solution for about 50% of the larger instances of the ETJSSP.
Algorithm 1 – CDI search procedure
1. Sort the last tasks of all jobs, Tj,nj for all j, according to the nondecreasing domain size of Cj,nj.
2. For each task from the sorted list, select from the domain of Cj,nj a value vj leading to minimal fj and create two alternatives in the search tree:
   • Cj,nj = vj
   • Cj,nj ≠ vj
3. Continue with the default search procedure for all variables.
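The sorting and value-selection steps of Algorithm 1 can be sketched in Python (a standalone, hypothetical illustration with made-up data; the actual implementation runs inside the CP system's search, where the right branch posts Cj,nj ≠ vj):

```python
# Cost function (1) for a candidate completion time of a job's last task.
def job_cost(completion, due, alpha, beta):
    return max(alpha * (due - completion), beta * (completion - due))

# Step 2 value selection: pick the domain value minimising f_j
# (ties broken here by the smaller completion time).
def select_value(domain, due, alpha, beta):
    return min(sorted(domain), key=lambda c: job_cost(c, due, alpha, beta))

# Step 1 + 2: jobs sorted by nondecreasing domain size, each yielding
# the left-branch assignment C_{j,nj} = v_j (the right branch excludes v_j).
def cdi_branches(jobs):
    order = sorted(jobs, key=lambda j: len(j["domain"]))
    for j in order:
        v = select_value(j["domain"], j["due"], j["alpha"], j["beta"])
        yield (j["name"], v)

jobs = [{"name": "J1", "domain": range(5, 15), "due": 9, "alpha": 1, "beta": 3},
        {"name": "J2", "domain": range(0, 4), "due": 2, "alpha": 2, "beta": 2}]
print(list(cdi_branches(jobs)))  # -> [('J2', 2), ('J1', 9)]
```

J2 is branched on first because its domain is smaller, and each selected value is the one whose earliness/tardiness cost is zero when the due date lies inside the domain.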
Experimental Results
The proposed algorithm CDI was tested against two simple generic models introduced in (Beck & Refalo 2003): a mixed integer programming model with a disjunctive formulation of the problem (MIP), and a constraint programming model with the SetTimes heuristic as the search procedure and depth-first search as the search strategy (ST). The third model used for performance comparison is the Unstructured Large Neighborhood Search (uLNS) (Danna & Perron 2003), obtained by enabling Relaxation Induced Neighborhood Search (RINS) via the IloCplex::MIPEmphasis=4 switch in Cplex 9.1 (Danna, Rothberg, & Le Pape 2005; ILO 2005), while using the same MIP model as in (Beck & Refalo 2003). The hybrid algorithm from (Beck & Refalo 2003) was not used due to its implementation complexity.
The benchmarks are randomly generated instances of the ETJSSP, following Section 6.1 of (Beck & Refalo 2003). The problem instances have a work flow structure. Processing times of tasks are uniformly drawn from the interval [1, 99]. Considering the lower bound tlb of the makespan of the job shop according to (Taillard 1993), and a parameter called the looseness factor lf, the due date of each job is uniformly drawn from the interval [0.75 · tlb · lf, 1.25 · tlb · lf]. The job shops were generated for three n × m sizes, 10 × 10, 15 × 10, and 20 × 10, and for lf ∈ {1.0, 1.3, 1.5}. Twenty instances were generated for each combination of lf and size.
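The instance generation just described can be sketched as follows (a reconstruction under stated assumptions: the original random streams are not reproduced, and tlb is approximated by the maximum of machine loads and job lengths rather than Taillard's full bound):

```python
import random

def generate_instance(n_jobs, n_machines, lf, seed=0):
    """Random ETJSSP instance in the style of Beck & Refalo (2003)."""
    rng = random.Random(seed)
    half = n_machines // 2
    jobs = []
    for _ in range(n_jobs):
        # Processing times uniformly drawn from [1, 99]; work flow
        # structure: visit all machines of R1 before any machine of R2.
        p = [rng.randint(1, 99) for _ in range(n_machines)]
        r1 = rng.sample(range(half), half)
        r2 = rng.sample(range(half, n_machines), n_machines - half)
        jobs.append({"proc": p, "route": r1 + r2})
    # Simple makespan lower bound: max of machine loads and job lengths
    # (an approximation of the bound from Taillard 1993).
    load = [0] * n_machines
    for j in jobs:
        for m, t in zip(j["route"], j["proc"]):
            load[m] += t
    tlb = max(max(load), max(sum(j["proc"]) for j in jobs))
    # Due date uniformly drawn from [0.75 * tlb * lf, 1.25 * tlb * lf].
    for j in jobs:
        j["due"] = rng.uniform(0.75 * tlb * lf, 1.25 * tlb * lf)
    return jobs, tlb

jobs, tlb = generate_instance(10, 10, 1.3)
print(len(jobs), all(0.75 * tlb * 1.3 <= j["due"] <= 1.25 * tlb * 1.3
                     for j in jobs))  # -> 10 True
```

A looser lf stretches the due-date window proportionally, which is why instances with lf = 1.5 are the easiest to solve to optimality in the tables below.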
The tests were executed using ILOG OPL Studio 3.6 with
ILOG Solver and Scheduler for the CP models, and ILOG
Cplex 9.1 for the MIP models, all running on a PC with CPU
AMD Opteron 248 at 2.2 GHz with 4 GB of RAM. The time
limit for each test was 600 s, after which the execution of
the test computation was stopped, and the best solution so
far was returned.
Table 1 shows the average ratio of the costs of the best
solutions obtained by the MIP, uLNS, and ST to the best
solutions obtained by CDI, for all types of instances.
In Tables 2 and 3 the ST algorithm is not included due to its poor performance. Table 2 shows the number of instances solved to optimality within the 600 s time limit, and also the number of instances for which the algorithm proved the optimality of the solution. The CDI usually needed less time than the MIP or uLNS to find a solution with optimal cost, but in many cases the solution was not proven optimal within the given time or memory limit. In Table 2, a solution found by the CDI model was considered as the optimal solution when
size     ratio       lf = 1.0   lf = 1.3   lf = 1.5
10 × 10  MIP/CDI        1.8        4.8        3.8
10 × 10  uLNS/CDI       1.2        1.8        2.1
10 × 10  ST/CDI         2.6        9.2        8.1
15 × 10  MIP/CDI        4.7       18.4        7.9
15 × 10  uLNS/CDI       3.1        5.3        1.9
15 × 10  ST/CDI         6.2       28.3       37.9
20 × 10  MIP/CDI        5.3       14.0        5.5
20 × 10  uLNS/CDI       4.9       14.3        5.7
20 × 10  ST/CDI         6.7       25.8       50.6

Table 1: Average ratio for the best values of cost functions of solutions found within 600 s
the value of the objective function was equal to that of a proven optimal solution found by the MIP models or to a lower bound found by the MIP.

Table 3 is inspired by (Beck & Refalo 2002). For each problem instance, the lowest cost obtained by any of the algorithms is selected. Table 3 then contains the number of instances for which each algorithm found a solution with the best cost, i.e. equal to the lowest cost, and the number of solutions with uniquely best cost, i.e. where no other algorithm found a solution with the same or lower cost.

size     lf     MIP F/P   uLNS F/P   CDI F/P
10 × 10  1.0      0/0        0/0       0/0
10 × 10  1.3      0/0        0/0       0/0
10 × 10  1.5      7/7        9/9      10/4
15 × 10  1.0      0/0        0/0       0/0
15 × 10  1.3      1/1        2/2       5/0
15 × 10  1.5      5/5        9/9      12/3
20 × 10  1.0      0/0        0/0       0/0
20 × 10  1.3      0/0        0/0       0/0
20 × 10  1.5      3/3        4/4       5/3

Table 2: Number of optimal solutions (F)ound and (P)roven within 600 s

size     lf     MIP B/U   uLNS B/U   CDI B/U
10 × 10  1.0      0/0        5/5      15/15
10 × 10  1.3      0/0        2/2      18/18
10 × 10  1.5      8/0       12/0      20/8
15 × 10  1.0      0/0        0/0      20/20
15 × 10  1.3      1/0        2/0      20/18
15 × 10  1.5      5/0       14/3      16/6
20 × 10  1.0      0/0        0/0      20/20
20 × 10  1.3      1/1        1/1      18/18
20 × 10  1.5      3/0        7/4      16/12

Table 3: Number of (B)est and (U)niquely best solutions found within 600 s

Conclusion and Future Work

We have presented an algorithm called cost-directed initialization (CDI), designed to solve the earliness tardiness scheduling problem. The algorithm was compared to the MIP, uLNS, and ST algorithms on randomly generated earliness tardiness job shop benchmarks. The CDI was usually able to find within 600 s a better solution than any of the MIP, uLNS, or ST. With respect to the best obtained value of the cost function, the CDI algorithm thus performed better than the other algorithms. However, the weak point of the CDI is that the optimum, even when it is found, is usually not proved.

Since the CDI search procedure does not exploit the structure of the job shop problem, it can be applied to other earliness/tardiness problems, though the results may vary. Revisiting the lacquer production scheduling problem (Kelbel & Hanzálek 2006) with the CDI, the solution of the case study was further improved, from a cost of 886,535 to 777,249, thanks to the allowance of tardy jobs.

The earliness tardiness job shop scheduling problem, as considered in this paper, does not fully correspond to real production, since only the last task of each job has a direct impact on the cost of the schedule. If there is enough time, i.e. the looseness factor is large, there can be quite a long delay between the tasks of the same job, so storage would also be needed during production, but at no cost (since no such cost is defined). The paid storage of the final product can thus be replaced by free storage during production.

There are approaches that bring the formulation of this problem closer to real life: either by assigning a due date, earliness cost, and tardiness cost to every task (Baptiste, Flamini, & Sourd, to appear in 2008), or by introducing buffers of limited capacity that are used during production (Brucker et al. 2006).

The approach with limited buffers is also used in the formulation of the lacquer production scheduling problem (Kelbel & Hanzálek 2006), where each job needs a limited buffer (a mixing vessel) during the whole time of its execution.

In the future, we would like to focus on the formulation and solution of job shop problems with earliness and tardiness costs and with generic limited buffers.

Acknowledgement

The work described in this paper was supported by the Czech Ministry of Education under Project 1M0567. Also, we would like to thank the anonymous reviewers for useful comments.

References

Baker, K. R., and Scudder, G. D. 1990. Sequencing with earliness and tardiness penalties: A review. Operations Research 38(1):22–36.

Baptiste, P.; Flamini, M.; and Sourd, F. To appear in 2008. Lagrangian bounds for just-in-time job-shop scheduling. Computers & Operations Research 35(3):906–915.

Barták, R. 1999. Constraint programming – what is behind? In Proc. of CPDC99 Workshop.

Beck, J. C., and Perron, L. 2000. Discrepancy-bounded depth first search. In Second International Workshop on Integration of AI and OR Technologies for Combinatorial Optimization Problems (CP-AI-OR'00).

Beck, J. C., and Refalo, P. 2001. A hybrid approach to scheduling with earliness and tardiness costs. In Third International Workshop on Integration of AI and OR Techniques (CP-AI-OR'01).

Beck, J. C., and Refalo, P. 2002. Combining local search and linear programming to solve earliness/tardiness scheduling problems. In Fourth International Workshop on Integration of AI and OR Techniques (CP-AI-OR'02).

Beck, J. C., and Refalo, P. 2003. A hybrid approach to scheduling with earliness and tardiness costs. Annals of Operations Research 118(1–4):49–71.

Brucker, P.; Heitmann, S.; Hurink, J.; and Nieberg, T. 2006. Job-shop scheduling with limited capacity buffers. OR Spectrum 28(2):151–176.

Carlier, J., and Pinson, E. 1990. A practical use of Jackson's pre-emptive schedule for solving the job-shop problem. Annals of Operations Research 26:269–287.

Danna, E., and Perron, L. 2003. Structured vs. unstructured large neighborhood search: A case study on job-shop scheduling problems with earliness and tardiness costs. In Ninth International Conference on Principles and Practice of Constraint Programming, 817–821.
Danna, E.; Rothberg, E.; and Le Pape, C. 2005. Exploring relaxation induced neighborhoods to improve MIP solutions. Mathematical Programming 102(1):71–90.
El Sakkout, H., and Wallace, M. 2000. Probe backtrack
search for minimal perturbation in dynamic scheduling.
Constraints 5(4):359–388.
Harvey, W. D., and Ginsberg, M. L. 1995. Limited discrepancy search. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI95), 607–615.
ILOG S.A. 2002. ILOG OPL Studio 3.6 Language Manual.
ILOG S.A. 2005. ILOG Cplex 9.1 User’s Manual.
Kelbel, J., and Hanzálek, Z. 2006. A case study on earliness/tardiness scheduling by constraint programming. In
Proceedings of the CP 2006 Doctoral Programme, 108–
113.
Taillard, E. 1993. Benchmarks for basic scheduling problems. European Journal of Operational Research 64:278–
285.
Van Hentenryck, P.; Perron, L.; and Puget, J.-F. 2000.
Search and strategies in OPL. ACM Transactions on Computational Logic 1(2):285–320.
Single-machine Scheduling with Tool Changes: A Constraint-based Approach
András Kovács
Computer and Automation Research Institute, Hungarian Academy of Sciences
akovacs@sztaki.hu
J. Christopher Beck
Dept. of Mechanical & Industrial Engineering, University of Toronto
jcb@mie.utoronto.ca
Abstract
The paper addresses the scheduling of a single machine with tool changes in order to minimize the total completion time. A constraint-based model is proposed that makes use of global constraints and also incorporates various dominance rules. With these techniques, our constraint-based approach outperforms previous exact solution methods.
Introduction
This paper addresses the problem of scheduling a single machine with tool changes, in order to minimize the total completion time of the activities. The regular replacement of the tool is necessary due to wear, which results in a limited, deterministic tool life. We note that this problem is mathematically equivalent to scheduling with periodic preventive maintenance, where there is an upper bound on the continuous running time of the machine, after which a fixed-duration maintenance activity has to be performed.
Our main intention is to demonstrate the applicability of constraint programming (CP) to an optimization problem that requires complex reasoning with constraints on sum-type expressions, a field where CP is generally thought to be at a disadvantage. We show that, when appropriate global constraints are available to deal with such expressions, CP indeed outperforms other exact optimization techniques. In particular, we would like to illustrate the efficiency of the global COMPLETION constraint (Kovács & Beck 2007), which has been proposed recently for propagating the total weighted completion time of activities on a single unary resource.
For this purpose, we define a constraint model of the scheduling problem. The model makes use of global constraints, and also incorporates various dominance properties described as constraints. A simple branch and bound search is used for solving the problem. We show in computational experiments that the proposed approach can outperform all previous exact optimization methods known for this problem.
The paper is organized as follows. After reviewing the related literature, we give a formal definition of the problem and outline some of its basic characteristics. Then, we propose a constraint-based model of the problem. The algorithms used for propagating the global constraints that are crucial for the performance of our solver are presented. Afterwards, the branch and bound search procedure is introduced. Finally, experimental results are presented and conclusions are drawn.
Related Work
The problem studied in this paper has been introduced
independently in the periodic maintenance context by
Qi, Chen, & Tu (1999) and in the tool changes context by Akturk, Ghosh, & Gunes (2003). Its practical
relevance is underlined in (Gray, Seidmann, & Stecke
1993), where it is pointed out that in many industries
tool change induced by wear is ten times more frequent
than change due to the different requirements of subsequent activities. Also, in some industries, e.g. in metal
working, tool change times can dominate actual processing times (Tang & Denardo 1988).
Akturk, Ghosh, & Gunes (2003) proposed a mixedinteger programming (MIP) approach and compared
the performance of various heuristics on this problem.
The basic properties of the scheduling problem have
been analyzed and the performance of the Shortest Processing Time (SPT) schedules evaluated in (Akturk,
Ghosh, & Gunes 2004). Three different heuristics have
been analyzed and a branch and bound algorithm proposed by Qi, Chen, & Tu (1999). The performance of
four different MIP models has been compared in (Chen 2006a).
The same problem has been considered with different objective criteria, including makespan (Chen 2007b;
Ji, He, & Cheng 2007), maximum tardiness (Liao &
Chen 2003), and total tardiness (Chen 2007a). In (Akturk, Ghosh, & Kayan 2007), the model is extended to
controllable activity durations, where there are several
execution modes available for each activity to balance
between manufacturing speed and tool wear. The basic model with several tool types has been investigated
by Karakayalı & Azizoğlu (2006). A slightly different
problem, in which maintenance periods are strict, i.e.
the machine has to wait idle if activities complete earlier than the end of the period, has been investigated
in (Chen 2006b).
A brief introduction to constraint-based scheduling is
given in (Barták 2003), while an in-depth presentation
of the modeling and solution techniques can be found
in (Baptiste, Le Pape, & Nuijten 2001).
Problem Definition and Notation
There are n non-preemptive activities Ai to be scheduled on a single machine. Activities are characterized by their durations pi, and are available from time 0. Processing the activities requires a type of tool that is available in an unlimited number, but has a limited tool life, TL. Worn tools can be replaced with a new one, but only without interrupting activities. This change requires TC time. It is assumed that ∀i pi ≤ TL, because otherwise the problem would have no solution. The objective is to determine the start times Si of the activities and the start times tj of the tool changes such that the total completion time of the activities is minimal.
Constraint programming uses inference during search on the current domains of the variables. The minimum and maximum values in the current domain of a variable X will be denoted by X̌ and X̂, respectively. Hence, Ši will stand for the earliest start time of activity Ai, and Ĉi for its latest finish time.
The above parameters and the additional notation used in the paper are summarized in Fig. 1. We assume that all data are integral. A sample schedule is presented in Fig. 2.
n - Number of activities
pi - Duration of activity Ai
pmax - Maximum duration of the activities
TL - Tool life
TC - Tool change time
Si - Start time of activity Ai
Ci - End (completion) time of activity Ai
tj - (Start) time of the jth tool change
aj - Number of activities processed after the jth tool change
bj - Number of activities processed before the jth tool change
X̌ - Minimum value in the domain of variable X
X̂ - Maximum value in the domain of variable X
Figure 1: Notation
[Figure 2: A sample schedule. Wall clock time representation. (Diagram: activities A1-A5 separated by tool changes of duration TC, each tool used for at most TL time.)]
Basic Properties
The single-machine scheduling problem with tool changes, denoted as 1|tool-changes|∑ Ci, has been proven to be NP-hard in the strong sense in (Akturk, Ghosh, & Gunes 2004). The same paper and (Qi, Chen, & Tu 1999) investigated properties of optimal solutions. Below we outline these properties, in conjunction with a symmetry breaking rule that can also be exploited to increase the efficiency of solution algorithms.
Property 1 (No-wait schedule) Activities must be scheduled without any waiting time between them, apart from the tool change times.
Property 2 (SPT within tool) Activities executed with the same tool must be sequenced in SPT order.
Property 3 (Tool utilization) The total duration of activities processed with the jth tool is at least TL − p_j^minafter + 1, where p_j^minafter is the minimal duration of the activities processed with tools j′ > j.
Consequence Every tool, except for the last one, is utilized during at least Umin = TL − pmax + 1 time, where pmax is the largest activity duration. Hence, the number of tools required is at most ⌈(∑_{i=1}^{n} pi) / Umin⌉.
Property 4 (Activities per tool) The number of activities processed using the jth tool is a non-increasing function of j.
Property 5 (Symmetry breaking) There exists an optimal schedule in which for any two activities Ai and Aj such that pi = pj and i < j, Ai precedes Aj.
Modeling the Problem
In our constraint model we apply a so-called machine time representation, which considers only the active periods of the machine. It exploits the fact that the optimal solution is a no-wait schedule (see Property 1), and contracts each tool change into a single point in time, as shown in Fig. 3. Then, a solution corresponds to a sequencing of the activities, with the last activity ending at ∑_i pi, and instantaneous tool changes between them.
[Figure 3: Machine time representation of the sample schedule. (Diagram: the same activities A1-A5 with each tool change TC contracted to a single point in time.)]
The objective value of a schedule in the machine time representation takes the form
∑_{i=1}^{n} Ci + TC · ∑_{j=1}^{m} aj.
Technically it will be easier to work with bj than with aj; hence, we rewrite the objective function in the equivalent form
∑_{i=1}^{n} Ci + TC · ∑_{j=1}^{m} (n − bj).
We decompose this function into K1 = ∑_{i=1}^{n} Ci and K2 = TC · ∑_{j=1}^{m} (n − bj). Note that K1 corresponds to the total completion time without tool changes, while K2 represents the effect of introducing tool changes.
The variables in the model are the start times Si of the activities, the times tj of the tool changes, and the number of activities processed before the jth tool change, bj. The two cost components K1 and K2 are also handled as model variables. For the sake of brevity, we also use Ci = Si + pi to denote the end time of activity Ai.
Then, the problem consists of minimizing K1 + K2 subject to
(c1) Time window constraints, stating ∀i: Si ≥ 0 and Ci ≤ ∑_i pi;
(c2) Resource capacity constraint: at most one activity can be processed at any point in time;
(c3) Activities are not interrupted by tool changes: ∀i, j: Ci ≤ tj ∨ Si ≥ tj;
(c4) Limited tool life: ∀j: tj+1 − tj ≤ TL;
(c5) Property 3 holds: ∀j: tj+1 − tj ≥ TL − pmax + 1;
(c6) Property 4 holds: ∀j: bj − bj−1 ≥ bj+1 − bj;
(c7) Property 5 holds: ∀i1, i2 such that i1 < i2 and pi1 = pi2: Ci1 ≤ Si2;
(c8) The total completion time of activities Ai is K1;
(c9) The number of activities that end before tj is bj;
(c10) K2 = TC · ∑_{j=1}^{m} (n − bj).
Note that while constraints c1-c4 and c8-c10 are fundamental elements of our model, c5-c7 incorporate dominance rules to facilitate stronger pruning of the search space. All ten constraints can be expressed in the languages of common constraint solvers. However, a significant improvement in performance can be achieved by applying dedicated global constraints for propagating c8 and c9. We discuss those global constraints in detail in the next section.
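As an illustration of how the objective decomposes, the following sketch (our own illustrative code, with hypothetical helper names, not part of the authors' solver) evaluates K1 + K2 for a concrete machine-time schedule, counting bj as in constraint c9:

```python
def objective(C, t, TC):
    """K1 + K2 for a machine-time schedule.

    C  -- completion times Ci of the activities (machine time)
    t  -- times tj of the tool changes (machine time)
    TC -- tool change duration

    b_j counts the activities ending by t_j (constraint c9); charging TC
    once per activity scheduled after each change (c10) recovers the
    wall-clock total completion time."""
    n = len(C)
    K1 = sum(C)                                     # c8
    b = [sum(1 for c in C if c <= tj) for tj in t]  # c9
    K2 = TC * sum(n - bj for bj in b)               # c10
    return K1 + K2

# Three activities of durations 2, 3, 5 in SPT order, one tool change
# after the first two (machine time 5), TC = 10: wall-clock completions
# are 2, 5, 20, so the total completion time is 27.
print(objective([2, 5, 10], [5], 10))  # -> 27
```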
Propagation Algorithms for Global Constraints
Below, both for c8 and c9, we first present how the constraint can be expressed in typical constraint languages. Then, we introduce a dedicated global constraint and a corresponding propagation algorithm for each of them, in order to strengthen pruning.
Total Completion Time
The typical way of expressing the total completion time of a set of activities in constraint-based scheduling is posting a sum constraint on their end times: K = ∑_i Ci. However, the sum constraint, ignoring the fact that the activities require the same unary resource, assumes that all of them can start at their earliest start times. This leads to very loose initial lower bounds on K; in the present application, Ǩ = ∑_i pi.¹
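To see how loose the sum-constraint bound can be, compare it with the true one-machine optimum on a small instance (an illustrative sketch of ours; the function names are hypothetical):

```python
def naive_bound(durations):
    # The sum constraint lets every activity start at its earliest start
    # time (0 here), so the initial bound is just the sum of durations.
    return sum(durations)

def spt_total_completion(durations):
    # True minimum total completion time on one machine without tool
    # changes: sequence the activities in SPT order.
    total = elapsed = 0
    for p in sorted(durations):
        elapsed += p
        total += elapsed
    return total

print(naive_bound([2, 3, 5]))           # -> 10
print(spt_total_completion([2, 3, 5]))  # -> 17
```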
In order to achieve tight lower bounds on K and
strong back propagation to the start time variables
Si , the COMPLETION constraint has been introduced
in (Kovács & Beck 2007) for the total weighted completion time of activities on a unary capacity resource.
Formally, it is defined as
COMPLETION([S1, ..., Sn], [p1, ..., pn], [w1, ..., wn], K)
and enforces K = ∑_i wi (Si + pi). Checking generalized bounds consistency on the constraint requires solving 1|ri, di|∑ wi Ci, a single-machine scheduling problem with release times and deadlines and an upper bound on the total weighted completion time. This problem is NP-hard and hence cannot be solved efficiently each time the COMPLETION constraint has to be propagated. Instead, our propagation algorithm filters domains with respect to the following relaxation of the above problem.
The preemptive mean busy time relaxation (Goemans et al. 2002), denoted by 1|ri, pmtn|∑ wi Mi, involves scheduling preemptive activities on a single machine with release times respected, but deadlines disregarded. It minimizes the total weighted mean busy times Mi of the activities, where Mi is the average point in time at which the machine is busy processing Ai, i.e., the mean of the time points at which activity Ai is executed. This relaxed problem can be solved to optimality in O(n log n) time.
¹ The lower bound is a little tighter if symmetry breaking constraints (c7) are present to increase the earliest start times of some activities.
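A sketch of this relaxation, assuming the dynamic highest weight-to-duration ratio rule of Goemans et al. (our own illustrative implementation, not the authors' code):

```python
import heapq

def mbt_relaxation(jobs):
    """Lower bound via 1|ri,pmtn|sum wi*Mi: preemptively run, at every
    instant, the released unfinished job with the highest w/p ratio.
    jobs: list of (release, duration, weight) tuples."""
    n = len(jobs)
    order = sorted(range(n), key=lambda i: jobs[i][0])  # by release time
    heap, k, t = [], 0, 0.0
    rem = [p for _, p, _ in jobs]      # remaining processing time
    busy = [0.0] * n                   # integral of time over busy periods
    while k < n or heap:
        if not heap:                   # machine idle: jump to next release
            t = max(t, jobs[order[k]][0])
        while k < n and jobs[order[k]][0] <= t:
            i = order[k]
            heapq.heappush(heap, (-jobs[i][2] / jobs[i][1], i))
            k += 1
        prio, i = heapq.heappop(heap)
        run = rem[i]                   # run until done or the next release
        if k < n:
            run = min(run, jobs[order[k]][0] - t)
        busy[i] += (2 * t + run) / 2 * run  # integral of s ds over [t, t+run]
        rem[i] -= run
        t += run
        if rem[i] > 1e-9:              # preempted: re-insert the remainder
            heapq.heappush(heap, (prio, i))
    # Mi = busy[i] / pi; the objective is sum of wi * Mi
    return sum(w * busy[i] / p for i, (_, p, w) in enumerate(jobs))
```

For a single job of duration 4 released at 0, the machine is busy on [0, 4] and the mean busy time is 2, as expected.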
The COMPLETION constraint filters the domains of
the start time variables by computing the cost of the
optimal preemptive mean-busy time relaxation for each
activity Ai and each possible start time t of activity Ai ,
with the added constraint that activity Ai must start
at time t. If the cost of the relaxed solution is greater
than the current upper bound, then t is removed from
the domain of Si . The naive computation of all these
relaxed schedules is likely to be too expensive, computationally. The main contribution of (Kovács & Beck
2007) is showing that for each activity it is sufficient
to compute relaxed solutions for a limited number of
different values of t, and that subsequent relaxed solutions can be computed iteratively by a permutation
of the activity fragments in previous solutions. For a
detailed presentation of this algorithm and the COMPLETION constraint, in general, readers are referred
to the above paper.
Number of Activities before a Tool Change
Constraint c9 describes a complex global property of the schedule. Standard CP languages make it possible to express this property with the help of binary logical variables indicating whether a given activity ends before a point in time, i.e., yi,j = 1 if Ci ≤ tj, and yi,j = 0 otherwise. Then, bj can be computed as bj = ∑_i yi,j. This representation would be rather inefficient; however, implementing a global constraint for this purpose is straightforward.
The NBEFORE global constraint states that given
activities Ai that have to be executed on the same unary
resource, the number of activities that can be completed
before time tj is exactly bj :
NBEFORE([S1 , ..., Sn ], tj , bj )
The propagation algorithm for this global constraint
is presented in Fig. 4. It first determines the set of
activities M that must be executed before tj , and the
set of activities P that are possibly executed before tj .
Computing the minimal (maximal) number of activities scheduled before tj is performed by sorting P by
non-decreasing duration, and then selecting the activities that have the highest (lowest) durations. The algorithm completes by updating b̌j , b̂j , and ťj . The time
complexity of the propagator is O(n log n), which is the
time needed for sorting P .
We note that it is straightforward to extend this algorithm with propagation from bj and tj to Si, and also to t̂j. This extension has been implemented, but
did not achieve additional pruning, and therefore it has
been later omitted.
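The sort-and-select rule above can be sketched as follows (our own illustrative code; durations stand in for the activities, and the caller would intersect the returned bounds with the current domains):

```python
def nbefore_bounds(M, P, t_lo, t_hi):
    """Bounds for NBEFORE.  M  -- durations of activities that must end
    before the tool change; P -- durations of those that possibly do;
    [t_lo, t_hi] -- current domain of the change time t_j.
    Returns (min b_j, max b_j, lower bound on t_j)."""
    P = sorted(P)                       # non-decreasing duration
    base = sum(M)
    # kmin: fewest activities of P whose total duration reaches t_lo
    # on top of M -- greedily take the longest ones.
    need, kmin = t_lo - base, 0
    for p in reversed(P):
        if need <= 0:
            break
        need -= p
        kmin += 1
    # kmax: most activities of P fitting within t_hi -- shortest first.
    budget, kmax = t_hi - base, 0
    for p in P:
        if p > budget:
            break
        budget -= p
        kmax += 1
    # New lower bound on t_j: all of M plus the kmin shortest of P.
    return len(M) + kmin, len(M) + kmax, base + sum(P[:kmin])

print(nbefore_bounds([3], [1, 2, 4], 6, 9))  # -> (2, 3, 4)
```

In the example, at least one activity of P (the one of duration 4) is needed to reach t_lo = 6, and at most two (durations 1 and 2) fit within t_hi = 9; the last value is combined with the existing bound ťj by taking the maximum.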
A Branch and Bound Search
We apply a branch and bound search that exploits the
dominance properties identified for the problem. It constructs a schedule chronologically, by fixing the start
times of activities and the times of tool changes. In
each node it selects, according to the SPT rule, the
minimal duration unscheduled activity A∗ that can be
scheduled next. The algorithm first checks if one of the
following dominance rules can be applied at this phase
of the search.
• If the remaining activities can all be scheduled without any tool changes, then A∗ must be scheduled
immediately, because all the unscheduled activities
must be scheduled according to the SPT rule. See
Property 2 and lines 4-5 of the algorithm.
• If A∗ cannot be performed before the next tool
change, then no unscheduled activities can be performed before the next tool change, since none of
them have shorter durations than A∗ . Therefore the
next tool change must be performed immediately.
See Property 1 and lines 6-7 of the algorithm.
If one of the dominance rules can be applied, then
the algorithm adds the inferred constraint, which may
trigger further propagation, and then reselects A∗
w.r.t. the new variable domains. Otherwise, it creates two children of the current search node, according
to whether
• A∗ is scheduled immediately and the next tool change
is performed after (but not necessarily immediately
after) A∗ ; or
• A∗ is scheduled after the next tool change.
In the latter case, it also adds the constraint that
another activity must be scheduled before the next tool
change. Hence, the next tool change must be performed
after Cmin , which is the lowest among the end times
of unscheduled activities (see line 9). Note that Cmin
exists because if there is an unscheduled activity (A∗ ),
then there are at least two unscheduled activities.
Also observe that the initial solution found by this
branch and bound algorithm is the SPT schedule.
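That initial SPT schedule can be sketched as follows (a simplified wall-clock construction of ours, inserting a tool change whenever the next activity would exceed the remaining tool life):

```python
def spt_schedule_cost(durations, TL, TC):
    """Total completion time of the SPT schedule with tool changes.
    Assumes every duration is at most TL (as in the problem definition)."""
    assert all(p <= TL for p in durations)
    total = wall = used = 0
    for p in sorted(durations):        # SPT order (Property 2)
        if used + p > TL:              # next activity needs a fresh tool
            wall += TC
            used = 0
        wall += p
        used += p
        total += wall                  # completion time of this activity
    return total

print(spt_schedule_cost([2, 3, 5], TL=5, TC=10))    # -> 27
print(spt_schedule_cost([2, 3, 5], TL=100, TC=10))  # -> 17
```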
Experimental Results
We ran computational experiments to evaluate the performance of the proposed CP approach from several aspects. We sought to understand how the COMPLETION and NBEFORE global constraints improve the performance of our model compared to models using only the standard tools of CP solvers. We also measured how problem characteristics influence the performance of our approach, and finally, we compared it to previous exact solution methods.
All models and algorithms have been implemented in ILOG Solver and Scheduler version 6.1. The experiments were run on a 2.53 GHz Pentium IV computer with 760 MB of RAM.
Two different problem sets were used for the experiments. The first set was generated as in (Qi, Chen, & Tu 1999), the second as in (Akturk, Ghosh, & Gunes 2003). Qi, Chen, & Tu (1999) took activity durations randomly from the interval [1, 30] and fixed
1 PROCEDURE Propagate()
2   M = {Ai | Ŝi < ťj}
3   P = {Ai | Či ≤ t̂j} \ M
4   Sort P by non-decreasing duration
5   kmin = min number of activities in P with total duration ≥ ťj − ∑_{Ai∈M} pi
6   kmax = max number of activities in P with total duration ≤ t̂j − ∑_{Ai∈M} pi
7   b̌j = |M| + kmin
8   b̂j = |M| + kmax
9   ťj = ∑_{Ai∈M} pi + total duration of the kmin shortest activities in P
Figure 4: Algorithm for propagating the NBEFORE constraint.
1  WHILE there are unscheduled activities
2    A∗ = unscheduled activity with min ŠA∗, min pA∗
3    T = earliest tool change time with T̂ > ŠA∗
4    IF there is no such T
5      ADD SA∗ = ŠA∗ (Property 2)
6    ELSE IF T̂ < ČA∗
7      ADD T = ŠA∗ (Property 1)
8    ELSE
9      Cmin = min Či of unscheduled activities Ai ≠ A∗
10     BRANCH: - SA∗ = ŠA∗ and CA∗ ≤ T
11             - SA∗ ≥ T and T ≥ Cmin
Figure 5: Pseudo-code of the search algorithm.
the value of T C to 10. The number of activities n
has been varied between 15 and 40 in increments of
5, while values of the tool life T L have been taken from
{50, 60, 70, 80}. We generated ten instances with each
combination of n and T L, which resulted in 240 problem instances. The time limit for these problems was
set to one hour.
In (Akturk, Ghosh, & Gunes 2003), in order to obtain instances with different characteristics, four parameters of the generator were varied, each having a low (L) and a high (H) value. These parameters were the mean and the range of the durations (MD and RD), the tool life (TL), and the tool change time (TC). Generating ten 20-activity instances with each combination of the parameters resulted in 2⁴ · 10 = 160 instances. Since this set contains harder instances, we set the time limit to two hours.
We did not perform comparisons with the MIP models proposed in (Chen 2006a), because that paper
presents experimental results only on very easy instances containing few (in most cases only one) tool
changes over the scheduling horizon.
Results on Qi’s Instances and Comparison
to Naive Models
We compared the performance of four different CP models of the problem that represent the two cost components K1 and K2 in different ways. K1 was expressed either by a sum constraint (Sum) or by the COMPLETION constraint (COMPL), while K2 was described using binary variables (Bin) or the NBEFORE constraint (NBEF). Note that COMPL/NBEF is the model proposed in this paper.
The achieved results are displayed in Table 1. Each
row contains cumulative results for ten instances with
a given value of n and T L. For each of the models,
column Opt shows the number of instances for which
the optimal solution has been found and optimality has
been proven, column Nodes contains the average number of search nodes, and Time the average search time
in seconds. Nodes and Time also contain the effort
needed for proving optimality.
The results show that the proposed approach, COMPL/NBEF, solves instances with up to 30-35 activities to optimality. It outperforms the alternative CP representations that do not benefit from the pruning strength of the COMPLETION and NBEFORE constraints. Instances with a short tool life, and hence many tool changes, are more challenging. This is due to the poorer performance of the SPT heuristic and the higher importance of the bin packing aspect of the problem. In contrast, Qi, Chen, & Tu (1999) report that the average solution time of 20-activity instances with their branch and bound approach was in the range of [3.57, 55.94] seconds, depending on the value of TL, and their algorithm could not cope with larger problems.
 n  TL |       Sum/Bin          |      COMPL/Bin         |       Sum/NBEF          |     COMPL/NBEF
       | Opt   Nodes     Time   | Opt   Nodes     Time   | Opt   Nodes      Time   | Opt   Nodes     Time
15  50 | 10    36278     10.8   | 10    877       0.0    | 10    31134      5.4    | 10    49        0.0
15  60 | 10    55477     13.6   | 10    1018      0.2    | 10    49975      7.7    | 10    76        0.0
15  70 | 10    18275     3.1    | 10    358       0.0    | 10    14357      1.5    | 10    17        0.0
15  80 | 10    19748     2.9    | 10    303       0.0    | 10    15502      1.4    | 10    19        0.0
20  50 | 6     5365305   2605.5 | 10    42853     35.1   | 8     6579567    1685.3 | 10    7183      3.7
20  60 | 7     5365603   1778.5 | 10    19092     16.2   | 7     7511826    1436.0 | 10    133       0.0
20  70 | 9     2544734   735.1  | 10    8051      7.1    | 9     3119249    558.0  | 10    84        0.0
20  80 | 10    910496    241.8  | 10    1957      1.4    | 10    762404     127.8  | 10    46        0.0
25  50 | 0     6282502   3600.0 | 10    639147    727.3  | 0     11727713   3600.0 | 10    99239     78.0
25  60 | 0     9132083   3600.0 | 10    91385     126.4  | 0     15404729   3600.0 | 10    1126      0.4
25  70 | 1     10815570  3587.7 | 10    83095     104.2  | 2     16222223   3327.3 | 10    979       0.2
25  80 | 1     11484097  3358.2 | 10    91029     122.1  | 1     16808958   3287.7 | 10    1082      0.6
30  50 | -     -         -      | 3     2581475   3229.5 | -     -          -      | 9     230088    452.5
30  60 | -     -         -      | 4     2093233   2804.0 | -     -          -      | 10    55374     46.9
30  70 | -     -         -      | 8     961460    1640.2 | -     -          -      | 10    7877      6.6
30  80 | -     -         -      | 10    318435    560.9  | -     -          -      | 10    1721      1.1
35  50 | -     -         -      | 0     3108739   3600.0 | -     -          -      | 7     1724651   2002.6
35  60 | -     -         -      | 0     3193284   3600.0 | -     -          -      | 9     355709    449.5
35  70 | -     -         -      | 0     2858550   3600.0 | -     -          -      | 10    160239    166.9
35  80 | -     -         -      | 2     2000949   3162.0 | -     -          -      | 10    8121      8.9
40  50 | -     -         -      | -     -         -      | -     -          -      | 1     2371440   3297.7
40  60 | -     -         -      | -     -         -      | -     -          -      | 6     1088871   1597.6
40  70 | -     -         -      | -     -         -      | -     -          -      | 10    279844    393.5
40  80 | -     -         -      | -     -         -      | -     -          -      | 10    85854     143.3
Table 1: Experimental results on instances from (Qi, Chen, & Tu 1999): number of instances where optimality has been proven (Opt), average number of search nodes (Nodes), and average solution time in seconds (Time), for four different CP models. The models use binary variables (Bin) or the NBEFORE constraint, and a Sum or a COMPLETION constraint to express the objective function. A dash '-' means that none of the instances with the given n could be solved to optimality.
Results on Akturk’s Instances and Effect of
Problem Characteristics
Experimental results on the instances from (Akturk,
Ghosh, & Gunes 2003) are presented in Table 2. The results on the l.h.s. have been achieved by a naive model with sum back propagation instead of the COMPLETION constraint; the results on the r.h.s. by the complete CP model.
Each row displays data belonging to a given choice
of parameters M D, RD, T L, and T C, as shown in the
leftmost columns. While the COMPLETION model
managed to solve all instances to optimality and also
proved optimality, the sum model missed finding the
optimum for 2 instances and proving optimality in 5
cases. The COMPLETION model was 10 times faster
on average than the sum model.
These results confirm that short tool life implies
many tool changes and renders problems more complicated for our model. Low mean duration makes things
easier, which is probably due to the higher number of
symmetric activities, since these activities can be ordered a priori. Although a low range of durations has a
similar effect, it also has a negative impact on the performance of the SPT heuristic; of these two effects, the latter seems to be the stronger.
Compared to the MIP approach presented in (Akturk, Ghosh, & Gunes 2003), our CP model solves more
instances, and does this more quickly: the MIP model
achieved an average solution time of 1904 seconds, it
was not able to solve all instances, and for 15% of the instances it found worse solutions than one of the
heuristics.
Conclusion
A constraint-based approach has been presented to single machine scheduling with tool changes. The proposed model outperforms previous exact optimization
methods known for this problem. This result is especially significant because the problem requires complex
reasoning with sum-type formulas, which does not belong to the traditional strengths of constraint programming. This was made possible by two algorithmic techniques: global constraints and dominance rules. Specifically, we applied the recently introduced COMPLETION constraint to propagate total completion time,
and defined a new global constraint, NBEFORE, to
compute the number of activities that complete before
a given point in time. Furthermore, we could formulate
the known dominance properties as constraints in the
model.
MD RD TL TC |       NBEF/Sum              |      NBEF/COMPL
            | Opt  MRE   Nodes     Time   | Opt  MRE  Nodes    Time
 L  L  L  L | 10   0     1891018   529.9  | 10   0    38128    23.3
 L  L  L  H | 10   0     968087    205.9  | 10   0    102237   52.1
 L  L  H  L | 10   0     79344     11.9   | 10   0    237      0.1
 L  L  H  H | 10   0     12269     1.6    | 10   0    73       0.0
 L  H  L  L | 10   0     667659    171.8  | 10   0    3692     2.3
 L  H  L  H | 10   0     127866    23.7   | 10   0    78955    25.7
 L  H  H  L | 10   0     78775     13.2   | 10   0    27       0.0
 L  H  H  H | 10   0     6664      0.7    | 10   0    29       0.0
 H  L  L  L | 7    1.71  16430139  3548.8 | 10   0    1614494  596.4
 H  L  L  H | 10   0     5606737   1018.0 | 10   0    47902    25.1
 H  L  H  L | 10   0     2170750   357.9  | 10   0    895      0.3
 H  L  H  H | 10   0     222435    40.6   | 10   0    9023     3.6
 H  H  L  L | 8    0     6020041   2102.8 | 10   0    81249    43.9
 H  H  L  H | 10   0     186735    35.7   | 10   0    23214    11.3
 H  H  H  L | 10   0     86856     12.5   | 10   0    20       0.0
 H  H  H  H | 10   0     154639    19.2   | 10   0    1648     0.8
Table 2: Experimental results on instances from (Akturk, Ghosh, & Gunes 2003), for models using sum and COMPLETION back propagation: number of instances where optimality has been proven (Opt), mean relative error in percent (MRE), average number of search nodes (Nodes), and average solution time in seconds (Time).
The introduced model can be easily extended with constraints on the number of tools and with weighted activities. The machine-time representation is applicable to solving the same problem with other regular
optimization criteria, such as minimizing makespan, or
maximum or total tardiness. However, it seems to be
impractical to apply this model to multiple-machine
problems, because the time scales would differ machine
by machine.
Acknowledgments The authors are grateful to the
anonymous reviewers for their helpful comments. A.
Kovács was supported by the János Bolyai Research
Scholarship of the Hungarian Academy of Sciences and
by the NKFP grant 2/010/2004.
References
Akturk, M. S.; Ghosh, J. B.; and Gunes, E. D. 2003.
Scheduling with tool changes to minimize total completion time: A study of heuristics and their performance. Naval Research Logistics 50:15–30.
Akturk, M. S.; Ghosh, J. B.; and Gunes, E. D. 2004.
Scheduling with tool changes to minimize total completion time: Basic results and SPT performance. European Journal of Operational Research 157:784–790.
Akturk, M. S.; Ghosh, J. B.; and Kayan, R. K. 2007.
Scheduling with tool changes to minimize total completion time under controllable machining conditions.
Computers and Operations Research 34:2130–2146.
Baptiste, P.; Le Pape, C.; and Nuijten, W. 2001.
Constraint-based Scheduling. Kluwer Academic Publishers.
Barták, R. 2003. Constraint-based scheduling: An
introduction for newcomers. In Intelligent Manufacturing Systems 2003, 69–74.
Chen, J.-S. 2006a. Single-machine scheduling with
flexible and periodic maintenance. Journal of the Operational Research Society 57:703–710.
Chen, W. J. 2006b. Minimizing total flow time in
the single-machine scheduling problem with periodic
maintenance. Journal of the Operational Research Society 57:410–415.
Chen, J.-S. 2007a. Optimization models for the tool
change scheduling problem. Omega (to appear).
Chen, J.-S. 2007b. Scheduling of nonresumable jobs
and flexible maintenance activities on a single machine
to minimize makespan. European Journal of Operational Research (to appear).
Goemans, M. X.; Queyranne, M.; Schulz, A. S.; Skutella, M.; and Wang, Y. 2002. Single machine scheduling with release dates. SIAM Journal on Discrete Mathematics 15(2):165–192.
Gray, E.; Seidmann, A.; and Stecke, K. E. 1993. A synthesis of decision models for tool management in automated manufacturing. Management Science 39:549–
567.
Ji, M.; He, Y.; and Cheng, T. C. E. 2007. Singlemachine scheduling with periodic maintenance to minimize makespan. Computers & Operations Research
34:1764–1770.
Karakayalı, I., and Azizoğlu, M. 2006. Minimizing total flow time on a single flexible machine. International
Journal of Flexible Manufacturing Systems 18:55–73.
Kovács, A., and Beck, J. C. 2007. A global constraint
for total weighted completion time. In Proceedings of
CPAIOR’07, 4th Int. Conf. on Integration of AI and
OR Techniques in Constraint Programming for Com-
binatorial Optimization Problems (LNCS 4510), 112–
126.
Liao, C. J., and Chen, W. J. 2003. Single-machine
scheduling with periodic maintenance and nonresumable jobs. Computers & Operations Research 30:1335–
1347.
Qi, X.; Chen, T.; and Tu, F. 1999. Scheduling the
maintenance on a single machine. Journal of the Operational Research Society 50:1071–1078.
Tang, C. S., and Denardo, E. V. 1988. Models arising
from a flexible manufacturing machine, Part I: Minimization of the number of tool switches. Operations
Research 36:767–777.
Comprehensive approach to University Timetabling Problem
Wojciech Legierski, Łukasz Domagała
Silesian University of Technology, Institute of Automatic Control, 16 Akademicka str., 44-100 Gliwice, Poland
Wojciech.Legierski@polsl.pl, Lukasz.Domagala@student.polsl.pl
Abstract
The paper proposes a comprehensive approach to the University Timetabling Problem, presents a constraint-based approach to automated solving, and describes a system that allows concurrent access by multiple users. The timetabling needs to take into account a variety of complex constraints and uses special-purpose search strategies. Local search incorporated into the constraint programming is used to further optimize the timetable after a satisfactory solution has been found. The paper is based on experience gained during the implementation of the system at the Silesian University of Technology, which involves coordinating the work of over 100 people constructing the timetable and over 1000 teachers who can influence the timetabling process. Original features of the real system presented in the paper include multi-user access to the timetabling database with the possibility of offline work, a solver extended with week definitions, and dynamic resource assignment.
Class-Teacher Timetabling (CTT) – the problem amounts to allocating a timeslot to each course provided for a class of students that has a common programme.
Room Assignment (RA) – each course has to be placed in a suitable room (or rooms), with a sufficient number of seats and the equipment needed by this course.
Course Timetabling (CT) – the problem assumes that students can choose courses and need not belong to classes.
Staff Allocation (SA) – the problem consists of assigning teachers to different courses, taking into account their preferences. The problem assumes that one course can be conducted by several teachers.
Until now, Examination Timetabling (ET) has not been required, but it is planned to be added in the future.
Comprehensive approach to University Timetabling
Problem (UTP), besides taking into account different
timetabling problems, also assumes following tasks:
- formulating timetable data requires a lot of
flexibility,
- automated methods should be available for subproblems and should be able to take into account
many soft and hard constraints,
- timetabling can be conducted by many users,
simultaneously, which requires assistance in
manual timetabling and quick availability to
different resources’ plans.
Introduction
Timetabling is regarded as a hard scheduling problem,
where the most complicated issue is the changing of
requirements along with the institution for which the
timetable is produced. Automated timetabling can be
traced back to the 1960s [Wer86]. Some trials of
comprehensively approaching the timetabling problem are
presented in the timetabling research center lead by prof.
Burke [PB04]
and in several PhD thesis
[M05],[Rud01],[Mar02]. There are works connected with
general data formulation , metaheuristic approaches, and
user interfaces for timetabling. The paper presents a
proposition of a comprehensive approach to the realworld problem at Silesian University of Technology. This
paper presents the methods used for automated
timetabling, data description and user interaction
underlining connection of different idea to built whole
timetabling system.
Constraints
Timetable of UTP has to fulfill the following constraints,
which can be expressed as hard or soft:
- resources assigned to a course (classes, teachers,
rooms, students) have time of unavailability and
undesirability,
- courses with the same resource cannot overlap,
- some courses must be run simultaneously or in
defined order,
- some resources can be constrained not to have
courses in all days and more than some number
during a day,
- no gaps constraint between courses for the same
resource or gaps between some specific courses
can be strictly defined,
Problems description
A number of timetabling problems have been discussed in
the literature [Sch95]. Based on the detailed classification
proposed by Reise and Oliver [RL01], the presented
problem consists of a mixture of following categories:
79
-
Timetable designers often do not want to introduce all the
constraints and trust the computer in putting courses in
the best places. Manual timetabling assistance with
constraint explanation seems to be a very important step
in making timetable system useful. The assistance
requires very quick access to a lot of data and relations
between them to provide a satisfactory interface.
Therefore after dragging the course, colors of unavailable
timeslots change to color defining what sort of constraints
will be violated. For example overlapping of rooms
courses has gray color and undesirable hours of a teacher
leaded the course has yellow color.
the number of courses per day should be roughly
equal for defined resource - p,
courses should start from early morning hours.
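The central hard constraint above — courses sharing a resource must not overlap — can be sketched as follows. This is a minimal illustration with a hypothetical course record of (start, duration, resources), not the system's actual code:

```python
def overlaps(course_a, course_b):
    """True if the two courses share a resource and occupy
    intersecting time intervals (times counted in timeslots).
    The (start, duration, resources) shape is assumed for
    illustration only."""
    a_start, a_dur, a_res = course_a
    b_start, b_dur, b_res = course_b
    share_resource = bool(set(a_res) & set(b_res))
    in_time = a_start < b_start + b_dur and b_start < a_start + a_dur
    return share_resource and in_time

# Two lectures of the same teacher: the second starts before the first ends.
print(overlaps((9, 2, ["teacher_32"]), (10, 2, ["teacher_32"])))  # True
print(overlaps((9, 2, ["teacher_32"]), (11, 2, ["teacher_32"])))  # False (back-to-back)
```

A hard constraint forbids such a pair outright, while a soft constraint (e.g. undesirable hours) would merely add to the cost of the timetable.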
Data representation
Although the UTP data is gathered in a relational database for multi-user access, the data for the solver is saved as an XML file, which also expresses the sub-problems for the solver. The main advantage of using an XML file is the ability to define the relations between courses, resources and constraints in a very flexible way. The flexibility of the UTP features:
- resources (classes, teachers, rooms, students) can be defined arbitrarily,
- several different resources can be assigned to one course,
- an assigned resource can be treated as a disjunction of several resources, where the number of resources to be chosen can also be defined,
- constraints can be imposed on every resource and every course.
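The flexible resource description can be pictured with a small data model (the names and structure are illustrative, not the system's schema): each course holds groups of candidate resources, and a group states how many of its members must actually be chosen:

```python
from dataclasses import dataclass

@dataclass
class ResourceGroup:
    members: list    # candidate resources, e.g. room identifiers
    required: int    # how many of them the course actually needs

    def is_fixed(self):
        # when 'required' equals the group size, no choice is left
        return self.required == len(self.members)

rooms = ResourceGroup(["classroom_122", "classroom_123"], required=1)
students = ResourceGroup(["students_group_23"], required=1)
print(rooms.is_fixed(), students.is_fixed())  # False True
```

A group with a genuine choice (`required` below the group size) is exactly the disjunction of resources mentioned above.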
Additionally, in the UTP we are supposed to produce a plan which is coherent during a certain time-span (for example one semester), with courses taking place cyclically with some period (most often one week). But frequently we face a situation where some courses do not fit into this period; for example, some of them should appear only in odd weeks or only in even weeks, and thus have a two-week period. Seeking a solution to this problem, we introduced the idea of a “week definition”. Different week definitions can be defined in the timetable, together with the information about which of them have common weeks and which courses are assigned to them.
Structure of the system
The proposed solution for a comprehensive approach to the UTP requires the use of different languages and technologies for its different features. The proposed system therefore consists of four parts, as presented in Figure 1. The system was first presented by the author in [LW03]; it has since been extended, mainly with multi-user access.
The UTP requires taking into account that many timetable designers are engaged in the timetabling process. Teachers and students are asked to submit information about their choices as well as their time preferences. The management of this user interaction is solved by introducing three levels of rights, assigned to each user and connected with a set of resources such as groups, teachers, students and rooms:
- a user can be an administrator of the resource,
- a user can be a planner of the resource,
- a user can only use the resource for courses.
Additionally, each resource and each course has a user called its “owner”. Owners and administrators can block resources to prevent them from being changed.
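The three-level rights model can be sketched with a hypothetical helper (the storage of rights per user and resource is assumed, not taken from the system):

```python
# Rights ordered from weakest to strongest; names are illustrative.
LEVELS = {"user": 0, "planner": 1, "admin": 2}

def may_edit(rights, user, resource):
    """A planner or administrator of a resource may change it;
    a plain user may only schedule courses on it."""
    level = rights.get((user, resource), "user")
    return LEVELS[level] >= LEVELS["planner"]

rights = {("alice", "room_215"): "admin", ("bob", "room_215"): "user"}
print(may_edit(rights, "alice", "room_215"))  # True
print(may_edit(rights, "bob", "room_215"))    # False
```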
Figure 1. Diagram of the four parts of the system, their dependencies, and their data formats: the Web application (PHP and JavaScript) produces dynamic web pages, the database is accessed through SQL statements and provides multi-user access, the Timetable Manager (VC++) uses a dedicated file format (STTL), and the solver (ECLiPSe) reads an XML file with the problem description and writes an XML file with the solution.
Web application
HTML seems to be an obvious solution for presenting the results of the timetabling process on the Internet, but it provides only static pages, which are not sufficient for such a system.
Manual timetabling assistance
Small timetables, which automatically show the schedules for the resources of a selected course, can be freely placed by the user. During manual scheduling the available timeslots are shown, and constraint violations are explained by the corresponding colors. Data can be saved in two ways:
- data is saved locally in a dedicated file format,
- data is synchronized with the remote database.
The system takes the privileges of users into account and does not allow unauthorized changes of the data. An example screen of the Timetable Manager is presented in Figure 3.
JavaScript improves the user interface and provides the capability to create dynamic pages. As a client-side extension it allows an application to place elements on an HTML form and respond to user events such as mouse clicks, form input, and page navigation. Server-side web-scripting languages give developers the ability to quickly and efficiently build web applications with database communication, and in the last decade they have become a standard for dynamic pages. PHP, one of the most popular scripting languages, was chosen for developing the system; its main advantage is its facilities for database communication. People are used to web services which provide structured information, search engines, e-mail applications, etc., and the proposed system had to be similar to those services in its functionality and generality. Most timetabling applications make it possible to save the schedules of particular classes, teachers and rooms as HTML code, but they do not allow interaction with their users. Creating a timetable is a process which often involves a lot of people: not only the timetable designers, but also the teachers, who should be able to send their requirements and preferences to the database. This is to a large extent facilitated by a web application which allows interaction between teachers, students and timetable designers. An example screen of the web timetabling application is presented in Figure 2.
Figure 3. An example screen of the manual assistance in the Timetable Manager.
Multi-user support

Allowing users to work locally forces the development of a data synchronization mechanism between the locally changed data and the remote database. The proposed mechanism is based on the idea of versioning systems like CVS or SVN; however, timetable data is much harder to handle than text files, because of the complicated relations between the data. The main advantages of the mechanism are the following:
- simultaneous changes are allowed, and in case of a conflict the user is given the possibility to discard the changes or introduce them,
- users have very quick access to the whole timetable without blocking it for other users,
- changes can be applied to a lot of data at once (e.g. through a local solver),
- if data has not been changed by a user, or the user has no rights to change it, it is updated without informing the user,
- the default values for changes are chosen in such a way that the newest changes, or the changes made with the higher level of rights, are taken.
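The default conflict resolution described above (prefer the change made with the higher level of rights, otherwise the newer change) can be sketched as follows; the record shape with `rights` and `time` fields is assumed for illustration, not the actual database layout:

```python
def default_choice(local, remote):
    """Pick the default winner of a conflicting edit.

    Each change is a dict with a numeric 'rights' level and a 'time'
    stamp (illustrative fields only). Higher rights win; on equal
    rights the newer change wins."""
    if local["rights"] != remote["rights"]:
        return local if local["rights"] > remote["rights"] else remote
    return local if local["time"] >= remote["time"] else remote

a = {"who": "planner", "rights": 1, "time": 10}
b = {"who": "admin", "rights": 2, "time": 5}
print(default_choice(a, b)["who"])  # admin
```

The user can still override this default and accept or reject each conflicting change by hand.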
Figure 2. An example screen of the Web application.
Timetable Manager

It is hard to develop a fully functional user interface using only Internet technologies. Therefore VC++ was used to build the Timetable Manager, a program for timetable designers. The idea of the program is to simplify manual timetabling and to provide as much information as possible during this process. Operating on the data locally significantly increases performance compared with data manipulation based on SQL queries to the database. Well-known features such as drag-and-drop could be implemented, and the layout is based on tree navigation. One of the most important features of the Timetable Manager is its assistance during manual timetabling.
Two actions are proposed to take care of the integrity of the data:

Import/update (required if the data has been changed remotely and the user wants to export)
1. Assume a unique index for each course and resource, together with the date of the last change. The indexes of deleted resources are remembered in a separate table; maxIndex is the greatest value of all indexes.
2. Remember locally the current state (local_UT) and the whole state of the last imported timetable (last_remote_UT).
3. Select from the database the changes which are newer than last_remote_UT.
4. Introduce the changes into last_remote_UT and build remote_UT.
5. The indexes of local resources which are greater than last_remote_UT.maxIndex are increased by remote_UT.maxIndex - last_remote_UT.maxIndex.
6. Compare all the data of the two timetables (remote_UT and local_UT) and check what kind of data was changed locally or remotely. Give the user the possibility to accept or reject the changes for data which changed both locally and remotely.
7. By default assume acceptance of the changes.
8. Replace last_remote_UT with remote_UT.

Export/commit
1. Make an import to check the changes and build remote_UT. Export is available only if last_remote_UT does not differ from remote_UT; otherwise an import is forced.
2. Compare local_UT with remote_UT based on the last change date, to show the user what changes will be exported.
3. Assume the default introduced changes and send them to the database.
4. If some resources or courses were removed, store their indexes together with the data in a special table.

Multi-user support was the most desired feature of the whole timetable system. It could have been achieved by working online on a database with multi-user access, transactions and table locking, but this solution was rejected because of its low performance in the case of simultaneous work by many users.

Automated timetabling based on the Constraint Programming paradigm

The presented solver is written in ECLiPSe [ECL] using the Constraint Programming paradigm; it replaced a solver written in the Mozart/Oz language. The main ideas of the methodology are similar to those presented in the author's PhD thesis and related work [Leg06], [Leg03]. The main ideas of the solver were the following:
- effective search methods are customized to take soft constraints into account during the search, based on the idea of value assessment [AM00],
- custom-tailored distribution strategies are developed, based on the idea of constraining while distributing, which allows constraints to be handled effectively and 'good' timetables to be found straight away,
- custom-tailored search methods are developed to enhance the effectiveness of the search for timetabling solutions,
- the integration of Local Search techniques into the Constraint Programming paradigm enhances the optimization of the timetabling solutions.
Additionally, the flexibility of the timetable definition was widened by the week definitions and the dynamic resource assignment.

Week definitions

The idea of incorporating week definitions into the problem definition comes from the fact that scheduling an “odd” course will cause unused time in the “even” weeks, and vice versa. This might cause long gaps between courses and could also render the problem unsolvable. To deal with this disadvantage we could prolong the scheduling period from one week to two, to take account of courses with a longer cycle. This unfortunately has the drawback of doubling the domains of the courses' start variables and the necessity of adding special constraints (to enforce that the weekly scheduled courses happen at the same time during both weeks). The aforementioned solution would moreover not apply in some situations, for example when some courses are required to happen only a few times in the semester, or only in the second half of the semester. It would also increase the size of the variable domains, causing a great computational overhead. We can eliminate these drawbacks thanks to the introduction of week definitions. Week definitions are logical structures that group a certain number of time periods from the whole time-span. Referring to the previous examples, a week definition of the odd weeks would consist of the weeks numbered <1, 3, 5, ..., 15>, a week definition of all the weeks of <1, 2, 3, ..., 16>, and so on; more examples below:

week_def{id:”A”, weeks:[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]}   (all weeks)
week_def{id:”O”, weeks:[1,3,5,7,9,11,13,15]}   (odd weeks)
week_def{id:”E”, weeks:[2,4,6,8,10,12,14,16]}   (even weeks)
week_def{id:”SHO”, weeks:[9,11,13,15]}   (second half of the semester, odd weeks)
week_def{id:”F4W”, weeks:[1,2,3,4]}   (first four weeks)
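Whether two week definitions are in conflict is simply a question of whether their week sets intersect; a short sketch over the week definitions above (plain Python sets as a stand-in for the solver's data):

```python
from itertools import combinations

week_defs = {
    "A":   set(range(1, 17)),      # all weeks
    "O":   set(range(1, 17, 2)),   # odd weeks
    "E":   set(range(2, 17, 2)),   # even weeks
    "SHO": {9, 11, 13, 15},        # second half of the semester, odd weeks
    "F4W": {1, 2, 3, 4},           # first four weeks
}

# Two definitions conflict when they share at least one week.
conflicts = [(a, b) for a, b in combinations(week_defs, 2)
             if week_defs[a] & week_defs[b]]
print(len(conflicts))  # 7 conflicting pairs, e.g. ("O", "F4W")
```

Disjoint definitions, such as the odd and even weeks, never conflict, so courses assigned to them never compete for resources.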
For certain pairs of week definitions we state whether they are in conflict, which corresponds to the fact that their sets of weeks have common elements. Based on the example above, the conflicts are:

week_def_conflict{id1:”A”, id2:”O”}
week_def_conflict{id1:”A”, id2:”E”}
week_def_conflict{id1:”A”, id2:”SHO”}
week_def_conflict{id1:”A”, id2:”F4W”}
week_def_conflict{id1:”O”, id2:”SHO”}
week_def_conflict{id1:”O”, id2:”F4W”}
week_def_conflict{id1:”E”, id2:”F4W”}

Having defined the week definitions, we take from the input data an assignment of at least one of them to each course:

course{id:”act1”, start_time:SA1, duration:5, ..., week_defs:[”O”], ...}
   (“act1” taking place in the odd weeks)
course{id:”act2”, start_time:SA2, duration:7, ..., week_defs:[”F4W”, ”SHO”], ...}
   (“act2” taking place in the first four weeks of the semester and in the odd weeks of the second half of the semester)

This information is used at the constraint setup phase. Pairs of courses which do not contain any conflicting definitions are excluded from the constraint setup, because they occur in different weeks and there is therefore no risk that they would require the same resources at the same time. Pairs of courses that contain at least one pair of conflicting week definitions are potential competitors for the same resources at the same time, and need to be taken into consideration during constraint setup. The idea of week definitions is a generalized and convenient approach to handling courses which are exceptional and do not occur regularly within each time period.

Dynamic resource assignment

We have taken the approach that resources do not have to be instantiated at the phase of problem definition, which on the one hand enforces a more complex programmatic approach, but on the other hand better reflects the nature of real timetabling problems and also allows greater flexibility at the search phase (the possibility of balancing resource usage and moving courses between resources might lead to further optimization of the cost function). Normally we would assume that a course requires a fixed set of resources to take place: for example, a group of teachers, a group of classrooms and a group of students, all known and stated at the time of problem definition. We extend this model by enabling the user to state how many elements from a group of resources are required, without an explicit specification of which ones should be used. This flexibility is achieved thanks to the definition and management of resource groups implemented in our XML interface and processed by the solver. The data structure is such that for every course we define resources. The resources are defined by a (theoretically unlimited) number of resource groups. Each group contains indexes that correspond to certain resources and, as a property, the number of required resources. The number of required resources can range from one to the cardinality of the group. When the number of required resources is maximal, all the resources within the group need to be used, but for any number below the maximum we are left with a choice of resources.

<Course>
  ...
  <Resources>
    <Group required=2>
      <Resource>teacher_32</Resource>
      <Resource>teacher_78</Resource>
      <Resource>teacher_93</Resource>
    </Group>
    <Group required=1>
      <Resource>classroom_122</Resource>
      <Resource>classroom_123</Resource>
      <Resource>classroom_144</Resource>
      <Resource>classroom_215</Resource>
    </Group>
    <Group required=1>
      <Resource>students_group_23</Resource>
    </Group>
    <... optionally more groups>
  </Resources>
</Course>

This structure is translated into a list of resource variables in each course:

course{..., resource_variables_list:[Teacher1, Teacher2, Classroom1, StudentGroup1], ...}

For every group of resources we create as many resource variables as the number of required resources, give each of them a domain of all the resources in the group, and then constrain them to be all-different (since we cannot use any resource twice in one course). The domains of the variables present in the list are:

domain(Teacher1) = domain(Teacher2) = [teacher_32, teacher_78, teacher_93]
domain(Classroom1) = [classroom_122, classroom_123, classroom_144, classroom_215]

For those groups where all the resources are required, the variables get instantiated right away, which corresponds to the model with fixed resources.
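The translation of resource groups into resource variables can be sketched in plain Python as a stand-in for the ECLiPSe variables (all names are illustrative): each group contributes as many variables as resources it requires, all sharing the group's domain, and a group that requires all of its members is effectively instantiated at once.

```python
def make_resource_variables(groups):
    """groups: list of (members, required) pairs.

    Returns a list of (name, domain) variables; a domain of size one
    means the variable is effectively instantiated. The real solver
    additionally posts an all-different constraint within each group."""
    variables = []
    for g, (members, required) in enumerate(groups):
        if required == len(members):        # fixed group: no choice left
            for i, m in enumerate(members):
                variables.append((f"g{g}v{i}", [m]))
        else:                               # choice is left to the search
            for i in range(required):
                variables.append((f"g{g}v{i}", list(members)))
    return variables

course_groups = [(["teacher_32", "teacher_78", "teacher_93"], 2),
                 (["students_group_23"], 1)]
vars_ = make_resource_variables(course_groups)
print([len(dom) for _, dom in vars_])  # [3, 3, 1]
```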
For example, the single-member students group is instantiated immediately:

StudentGroup1 = students_group_23
What we need to ensure now is that any two courses do
not use the same resource at the same time. This is
achieved for instantiated resources by imposing a
constraint that prevents courses from overlapping in time,
for every pair of courses that use the same instantiated
resource and are in conflict according to week definitions.
It is sometimes possible to set up global constraints
involving more than two courses that require the same
resource but only if each pair in the group is in conflict
according to their week definitions, which is not always
the case.
What still needs to be handled are the uninstantiated resource variables with domains. To do this we impose a suspended test on every pair of courses that have at least one common resource in the domains of their resource variables and are in conflict according to their week definitions. The tests wait for the instantiation of both resources that could potentially be the same, and check whether they are. If the test succeeds, the constraint that prevents the pair of courses from overlapping is imposed on the courses. The invocation of the tests, and consequently the imposition of the constraints, happens at the search phase, when the resources get instantiated by the search algorithm.

To enhance the constraint propagation it is useful to impose a second set of tests on the courses, to ensure that the same resources are not chosen for courses that overlap in time. To achieve this, for each pair of courses that are in conflict according to their week definitions we impose a test checking whether the courses overlap (different conditions guarding for domain updates are acceptable here: domain bound changes as well as variable instantiation). If the test succeeds, an all-different constraint is imposed on the resource lists of the two courses, stating that none of the variables in one list takes the same value as any variable in the other (since the courses cannot use the same resources whilst overlapping in time and belonging to conflicting week definitions).

This second set of tests (considering the courses' start times) is redundant: its declarative meaning is the same as that of the first set of tests (considering the resource variables). But when we proceed through the search tree by instantiating both the start times of the courses and the resource variables, we get better constraint propagation and avoid exploring some parts of the search tree which do not contain a solution. These suspended tests, which set up constraints during the search phase, are needed because at the constraint setup phase we do not know which activities will overlap in time or which will use the same resources; we therefore need to wait for further instantiation of the variables. This slight complication is a consequence of using dynamic resource assignment.

Results

The final results cannot be presented yet, because the whole system is still at the implementation stage. Some results were obtained with the previous solver, written in the Mozart/Oz language, for two small real problems: one from a high school and one from a department of the Silesian University of Technology. The results presented in Figure 4 show that using too complicated a propagator can double the time and memory consumption.

Figure 4. Comparison of two types of no-overlap constraints.

Schedule.serialize is a strong propagator implementing capacity constraints. It employs edge-finding, an algorithm which takes into consideration all the tasks using a resource. This propagator is very effective for job-shop problems; however, for the analyzed cases it is not suitable, because most tasks have small durations and the computational effort is too heavy compared with the rather disappointing results. FD.disjoint, on the other hand, may cut holes into the domains, but must be applied to every two courses that cannot overlap. These constraints also enable the handling of some special situations connected with the week definitions described in the previous section.

The popular first-fail (FF) strategy was compared with a custom-tailored distribution strategy (CTDS) based on constraining while distributing and on choosing for the variables those values which have the smallest assessment (the assessment of a value is increased when soft constraints are violated). For optimization, the popular branch-and-bound was compared with the idea of incorporating local search into constraint programming. This idea is based on the following steps, performed after a feasible solution has been found:
1. Find a course which introduces the highest cost (e.g. makes gaps between courses).
2. Find a second course to swap with the first one.
3. Create a new search for the original problem from the memorized initial space.
4. Instantiate all courses (besides the two previously chosen) to their values from the solution. This can be done in one step because they surely do not violate the constraints.
5. Schedule the first course in the place of the second one.
6. Find the best start time for the second course.
7. Compute the cost function. If it has improved, the solution is memorized; otherwise another swap is performed.
The results of the comparisons are presented in Figure 5.
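The swap-based improvement loop can be sketched abstractly; every helper below stands in for a solver operation and is purely illustrative, not the paper's implementation:

```python
def improve(solution, cost, worst_course, pick_swap, reschedule, rounds=10):
    """Greedy swap loop run after a feasible solution is found.

    solution: mapping course -> start time; cost: solution -> number.
    worst_course picks the course contributing most to the cost,
    pick_swap proposes a partner, and reschedule swaps the two courses
    and re-solves them; all callbacks are assumed for illustration."""
    best, best_cost = solution, cost(solution)
    for _ in range(rounds):
        a = worst_course(best)
        b = pick_swap(best, a)
        candidate = reschedule(best, a, b)
        if candidate is not None and cost(candidate) < best_cost:
            best, best_cost = candidate, cost(candidate)  # keep the improvement
    return best

# Toy usage: minimize a weighted sum of start times by swapping two courses.
sol = {"c1": 5, "c2": 1}
cost = lambda s: s["c1"] * 2 + s["c2"]       # c1's slot is twice as costly
worst = lambda s: "c1"
partner = lambda s, a: "c2"
def swap(s, a, b):
    t = dict(s); t[a], t[b] = t[b], t[a]; return t
print(improve(sol, cost, worst, partner, swap))  # {'c1': 1, 'c2': 5}
```

In the real solver, step 3's "memorized initial space" makes the re-instantiation cheap, since all untouched courses are known to be consistent.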
Figure 5. Comparison of two types of distribution strategy and optimisation methods.

Conclusion and future work

The presented system, embodying a comprehensive approach to a real-world University Timetabling problem, is still being implemented at the Silesian University of Technology. Most parts of the system have already been implemented, but they are not yet used to their full extent. The multi-user paradigm has already been implemented and tested; it is one of the features most appreciated by the users, who currently use only the manual assistance of the presented system. The authors plan to test different methodologies based on Constraint Programming and Local Search after gathering the data from the whole university. The different search methods will be tested similarly to the Iterative Forward Search presented in [M05].

References

[AM00] S. Abdennadher and M. Marte. University course timetabling using constraint handling rules. Journal of Applied Artificial Intelligence, 14(4):311–326, 2000.
[ECL] The ECLiPSe Constraint Programming System, http://eclipse.crosscoreop.com/
[Leg03] W. Legierski. Search strategy for constraint-based class-teacher timetabling. In Practice and Theory of Automated Timetabling IV, volume 2740 of Lecture Notes in Computer Science, pages 247–261. Springer-Verlag, 2003.
[Leg06] W. Legierski. Automated timetabling via Constraint Programming. PhD thesis, Silesian University of Technology, Gliwice, 2006.
[LW03] W. Legierski and R. Widawski. System of automated timetabling. In Proceedings of the 25th International Conference Information Technology Interfaces ITI 2003, Lecture Notes in Computer Science, pages 495–500, 2003.
[M05] T. Muller. Constraint-based Timetabling. PhD thesis, Charles University in Prague, Faculty of Mathematics and Physics, 2005.
[Mar02] M. Marte. Models and Algorithms for School Timetabling – A Constraint-Programming Approach. PhD thesis, Ludwig-Maximilians-Universität München, 2002.
[PB04] S. Petrovic and E. K. Burke. University Timetabling. Chapter 45 in J. Leung (ed.), Handbook of Scheduling: Algorithms, Models, and Performance Analysis. CRC Press, 2004.
[RL01] E. Oliveira and L. P. Reise. A language for specifying complete timetabling problems. In Practice and Theory of Automated Timetabling III, volume 2079 of Lecture Notes in Computer Science, pages 322–341. Springer-Verlag, 2001.
[Rud01] H. Rudová. Constraint Satisfaction with Preferences. PhD thesis, Masaryk University Brno, 2001.
[Sch95] A. Schaerf. A survey of automated timetabling. TR CS-R9567, CWI, 1995.
[Wer86] J. Werner. Timetabling in Germany: a survey. Interfaces, 16(4):66–74, 1986.
✂✁☎✄✝✆✟✞✡✠☞☛✍✌✏✎✒✑✔✓✖✕☞✗✘✠✚✙✜✛✣✢✤✕✍✛✥✗✧✦✡✙✩★✪✕✬✫✭✠✮✄✯✆✝✢✰✕✲✱✪✳☎✗✧✴✵✗✶✛✥✗✧✦✡✙
✷✹✸✻✺✪✸✻✼✾✽❀✿❂❁❄❃✮❅❇❆❉❈❋❊❉●■❍❏✸✻❑✒✸▲✿◆▼❖❈P❅◗❅❙❘❚❈P❁❄❁✘●✚❑✒✸❱❯❚✸✚❲❨❳❩✽❭❬☞❪✥▼❖❫✚❅❵❴❜❛❝❪❞❛✚❫✣✼❡✸✻✼❡✸❣❢❤❈P❅❵✐
❹ ♠♣❶✟❺☞②♣①③❻❙❶❸❷❽❥❧❼❄①❾❦♥✇➀♠♣❿⑥♦◗♦⑦♦❙r✥q❋➁☞♦❙r❞✈❖s❜⑧♣⑧♣♦❵❶✧t✜❷♥✉❖❼➀➂❖✈❧❶❸✇❄q③①③⑧➄②♣➃❖④⑥➁☞⑤⑦✈❖②❭⑧❖⑧⑩⑧❧⑨✬❶✧❷♥②♣❼➀④❙➂❭①③❶✧②♣q➅❶✧⑧➆❶❸❷❄➁✚①③②♣➇■④ ➈➊➉❵➇✚➁■➃❖❺✮➋
❄✇⑤⑦❼❩❷♥✉Ñ❷❽⑤❙❶✜❶❸❦✘❦✧⑤❙❶➜①➤➂❭❼✧②❂➃❭❦❸⑤➒♦❵✉❭✇❽❷❽♦❇①❾⑧♣❼❄♦❵❶✧❼❄②÷❷❽①③ê♣❶❸♦⑦q❾⑧✃❿☎r✮❼❄⑤❵❼❄❶✘✈♣❦✶✇➊✇❄✉♣①③♦❙✉♣♦❙r➼②✹q③①③⑤❵❶❸❼❄⑧☎❦✶❦♥✇❽♠♣①❾ê◗♦❵❶❸❿☎②◆t➮⑤➮①③⑤✭②❖✇❽❼❩rÛ❷❽✇❽❷❄⑤❙⑤⑦♦❵①❾②❭t②♣❦✘❶❸❶❸❷❸❶✘❼❸ï ý♣➃P✁⑤⑦ì✲t✏q➅⑤⑦♠❖✉♣②♣❶✧q③❷❽②♣❶✪❶✜①③②♣✉♣❶➜④➮q➅⑤❙⑤⑦❦♥✇❽②❖♠❚❷❽②♣⑤❵①❾⑤❙❦✘②❖❶➜❦✘④ ❼ñ
✇❄❼➀✇♥①③♦❙⑤⑦②❂②❖❦✧①❾❶❸②❖❼✍❼❩✇❽✇❄⑤❙♠❭②❖⑤➒❦✘✇✚❶✏⑤❙①➅❷❄❼✟❶➊①➅⑤✆⑧❧Ñ☎ ❶✧❶➜②❇❦✶✇❄✇❄①❾➂❖❶➜⑧➆❶➜⑧◆♦❙❷✮ê◗❿ ⑤❙❷❄②❭❶✟⑤⑦②♣t✏❶➜❶✏❦✘❶➜✉♣❼❄❼❽q❾✈❭⑤⑦❼❱❷❽①❾q③✇❄❿⑥♠♣❶➮✉♣❷❽♦❵❶❸✄ê ❼❄➀✂❶✧❶❸②❇❦✘✇✲✇✟ê♣①❾✈❧②❧✇ñ
rÛ✉♣②♣♦❙❷❄♦⑦❷❽♦❵✇❞t✏ê♣✝⑤ q③⑤⑦➬☎ ❶✧✇❄t➮❶❸①③♦❙❦✘❼✧✇❄②✭ï❶❸⑧➄②♣➃❙♦❵ê◗❷❄t➮❿➊⑤⑦⑤❙q③❦✘q③✇❄❿✪①③♦❙❶✘②✜ý❧✉Ñ❶✘ý❧❶❸❦✘❶❸✇❄❦✧❶❸✈❧⑧✃✇❄①③♦❙⑤❙②➄❼☞ï ⑤⑥❹ ❼❄♠♣♦❙①➅q③❼✥✈❧✇❽①③❼✵①❾♦❵✇❽②✭♠♣❶➼✇❄♦⑥ë❇①③✉♣②❖q➅⑧✡⑤⑦②❖♦⑦②♣rÑ①❾①❾②❖②❧④ ñ
✇❄î➲♠♣②ù①③❶✧♦❙✈♣②✃✇❽❷❽♠♣①➅①③❼➀①③②❇❼✏✇❽✇❄①③♦ð❦❸✉❭❼⑥⑤⑦úÛ✉Ñ❷❽rÛ❷❽❶➜❶✧♦❙û❩❷✏❦✘t❤♦❵ì❜②❖❶☎✇❄❼➀❷♥✇❽⑤⑦⑧❧❷❄①③❶➜✈❭②♣❼❄❦✶①③❦✧✇❄②♣❷❄①③④✹②♣①③êÑ④✭❶⑩❼❄❶❸⑤❙✇❽❼❽❦✘♠♣❼❄✇❄①❾❶✭①③♦❵♦❙②❖②❚❷❄❼✪❶➜❼❩❼❄ì✲✈❖❦♥♠♣q➤♠❖✇♥❶❸①③❼✡t➮❦♥♠ß♦❙⑤⑩r❏❦✘⑤❙⑤❙♦❙②❖②✹t✏⑧ ✉Ñ①③②◗✉♣♦❵❻❙q➅❼❄⑤⑦❶❸❶✃②❖❼❩✇❄②♣♦❙①③①❾④❵r■②❖⑤⑦④⑤ ñ
①✇❄③♠❖❼✭①③⑤⑦♦❙②❖②❖✉Ñ⑧❧❼✟♦❵rÛ❼❽✈♣♦❙❼❩r➼q✚①③ê♣⑤❵♦❙q③❶❂❦✶r✡✇❽①❾⑤❙rÛ♦❵♦❙❦✘②❚❷☎✇❄①③♦❙❼❽⑤⑦❦♥②❨②Ø♠♣❶❸✇❽⑤⑦t✏❷❽④❵⑤❵❶✧⑤⑩❦✘②❇❶➜rÛ✇✭❼✧❷❽ï ♦❙✇❄t ♦ÿ❹ ①❾♠♣❼❩②❭①③❶✃②♣⑧❧④❙✈❖t➮q③❦✘❶⑥⑤⑦❶ð①③⑤❙②Ò⑧❧❦✘❶✧✇❄❷❽①③✇❽♦❙❶❸⑤❙❼❄②❂①❾✈♣q③❶❸q❾✇❄⑧Ø✇⑩❷♥⑤❙①➅❼❄❦✧❼⑥✉❭❶❸❼✟❶➜✇❽♠❖❦✘⑤❙①❾⑤➒➂❭✈❧✇✭❦✧✇❄♦❙⑤⑦①❾✇ññ
t✏t✏⑤⑦⑤⑦✇❄✇❄①➅①③♦❙❦✧⑤❙②✏q❾q③rÛ❿❙♦❵➃✲❷✬ì✲❶❸⑤❵①❾✇❄❦♥♠♣♠➮♦❵✇❄✈❧❷♥✇✭⑤⑦①③❷❄②♣❶➜①③ö❵②♣✈❖④➊①❾❷❽❶✧①❾ý❧②❖⑤❙④ðt✏①❾✉♣②❇q❾✇❄❶❵❶❸ï ❷❄t✏❹ ♠❖❶➜⑧❧❶☞①③✇❄⑤⑦❷♥✇❄⑤❙❶❂⑧♣❶✘❼➀✇♥ñ▲✆♦⑤➒⑩✇❄☎ ❶◆①➅❼✬①❾②♣✇❄rÛ♠❖♦❙⑤⑦❷❄✇ñ
✇❄⑤⑦♠♣②❇❶➮✇❽❼✲⑤⑦⑧❧④❵❶➜❶✧❼❄②❇❦✧✇❸❷❄ó ①③❼❏ê♣⑧❧①③②♣♦❙④✏t➮♦❙⑤⑦ê✄①③②❚➀✂ ❶➜❦✶⑧❧✇☞❶❸❼❽❷❽❦✘❶✧❷❽q➅①❾⑤➒✉♣✇❽✇❄①❾♦❵①③♦❙②❖②◆❼✲❼❄⑤⑦♠♣②❖♦❵⑧➆✈♣♦❵q③⑧◆ê✄➀✂ ❦✘❶❸♦❵❦✘②❵✇✲✇♥❼❩⑤⑦✇❽①③⑤⑦②❖✇❄❼❏❶➜❼✧①❾②◗ï ❻➒❹ ⑤⑦❷❽♠❖①❾❶ ñ
①❾②♣②❖①❾②❖⑧♣✈❖④➆❦✘❶❸❶➜②♣⑧Ò④❙①③⑤❙②♣❦✘❶➜✇❄❼✧①③♦❙ï✟②❖❂✞ ❼➆❶➮⑤⑦❷❽✉♣❶◆❷❽❶❸⑧♣❼❄❶✘❶✧✇❽②❇⑤❙✇■①❾q③❶❸⑤⑦⑧Ò②❚⑤⑦❶✧②♣q③④❙♦❵♦❵✈♣❷❄④❙①❾✇❄♠ ♠❖t✣rÛ♦❙❷✭rÛ♦❙✈❖❷➊❼❄④❵❶✃❶✧②♣①③②✔❶❸❷❽✉♣⑤⑦✇❄q➅⑤⑦①③②♣②♣④ ñ
❼❩❼❄✈❖❦♥♠♣❦♥♠➊❶❸t➮⑧♣⑤✡♦❙t➮❦❸⑤⑦⑤⑦②➆①③②➊ê❭t✏❶✟♦❧ê♣⑧❧✈❖❶✧①❾q➅q❾❼✧✇✮➃⑦✈♣⑤❙✉✭②❖⑧➊①③②❵❼❄✇❽♠♣♦➮♦➒ì❚⑧❧♦❙♠♣t➮♦➒⑤⑦ì❚①③②➆✇❄♠❖t✜❶❜♦❧✉♣⑧❧❷❽❶❸①③t✜q③❼❸①❾ï ✇❄①③❻❙❶✍⑤❙❦✘✇❄①③♦❙②
✪➌ ➍✵➎❸➏✧➐➒➑♣➓❙➏
➦➥➀➞➔✥➧❵→❵→❵➨❏➣✍➣❏➯➲➫❉↔❧↔❙➦➤➣♥↕➀➭❵➙➜➳♥➣♥➦➺➛❇➯➲➸❇➝➞↔❙➳✶➣♥↕➀➢✧➟✒➣✶➥➀➢➜➦➞➙➜➭✡➙❸➧❵➠❀➡❇➯❜➠➅↔❵➙❸➙✧↕➀➥❩➠➄➟✚➢✧➭❙➩➽➡❇➣✍➻❙➝➤➧❇➢✧➙✧➥➀➢❸➠➄➦➞➟✟➧❵➾❜➨✡➦➞➚P➳✍➩⑦↔❇➩⑦➧❇➝➤➧❇➢✧➙✶➙✶➧❇➫❉➫❉➧❇➝➞➦➞➝➞➣✶➧❵➣✶➭❙➭❙➨❵➨➜➨➜➪✵➣❱➣➼➶✏➛❇➦➞➯✬➢❸➢❸➯➲➳❩➢✚➣♥→❇➯✲➛❇➦➞➧❇➢✧➳♥➣✍↕➲➙❸↕➀➧➒➝➤➦➞➣♥➥❩➣❽➢✧➢❸↕❉↕➀➦➞➥➀➧❵➧❵➙ ➵➵
➦➞➦➞➧❵➧❵➨✬↔❇➡❵→❇➥➀➢❸➯✥➯❭↕➀➛❧➣♥➣♥➘⑦➣♥➡❵➧✲➦➺↕➀➡❇➣✶➯➲➭✜➣♥➭✲➢✘↕➀➫❉➣✲➦➺➣♥➥➀→☞➦➺➥➀→❇➯➲➙➜➣❽➟✟↕❞➥➀➣➄➙➒➯➲➙✟➡❵➳♥➭❙➳♥➣❽➣♥➥❩➯➲➢❸➯❭➦➞➝➞➦➤➣✶➧✲➭♣➥➀➹◗→❵➙❸➣P↕❞↔❇➥➀➢❸→❵➯▲➣➼➥✶➹✧➝➞➣✶➛❇➢✧➡❙↕➀➥❭➧❵➥➀➦➞➧❇→❵➨➣
↔❙➦➞➧❵↕➀➨❏➙➒➳♥→◗➣♥➢✧➯➲➯✥➯✚➛❧→◗➣♥➢❸➣♥➯✮➧■↕➀➳♥➣✶➙❸➘➒➧❵➡❇➸◗➦➺↕➀➧❵➣✶➣✶➭➆➭✟➟➊➥➀➙✚➢✧➧➒↔❙➻➆↕➀➙➜➣❽↔❧➴❵➙➜➢✧➯➲➟✟➦➺➥➀➦➞↔❇➙➜➝➞➧❇➣♥➢❸➯♥➝❭➪⑥➢❸➷❇➳❽➡❙➥➀➦➞↕➲➙❸➥➀➧❇→❇➯❉➣❽↕✶➙❸➹➬↕❞➝➤↔❇➣♥➢✧➢✧↕➀↕➲➧❵➥➀➯➵
➙✧➦➞➳✶➠❀➢✧➢❸➝❋➳❽➣♥➥➀➧❇➦➞➙➜➨❸➧❵➦➞➧❇➯➼➣♥➯➲➣❽➡❇↕➀➦➞➳❩➧❵→⑥➨■➢❸→❇➯❜➢❸↔❵➯❜↕➀→◗➣♥➳❽➢➜➙➜➭➮➧◗➢✧➭❙➧➮➦➺➥➀➦➤➦➞➙❸➟✟➧❇↔◗➯♥➪✍➢❸➳❄➔✥➥✍→❇➙❸➣☞➧✏➸◗➥➀➣♥→❇➝➤➣✲➭✏➫❉➙✧➠✵➦➤➭❵➙❸➣❽➧⑦↕✍➥➀➳♥➙❸➙➜➝➞➙➜➟❱➨❸➵➵
➟✚❒ ➯▲➡❇➥❩➢✧➧❵➥➀➦➞➦➞➥➱➳♥➻☎❮⑥➦➞➯▲➧✃➥➲↕➀➥➀➡❇→❇➳❄➢✧➥➀➥■➡❵↕❩➢✧➢❸↔❇➝P↔❇➩⑦➝➞➦➞➧❇➳✶➙✶➢✘➫❉➥➀➦➤➝➞➙❸➣✶➧❚➭❵➨❸➙➜➣✟➧➒➙❸➥➀➙❸➠✬➝➤➙❸➢✧➨➜↔❇➦➞↔❇➣♥➝➞➯⑥➦➞➳✶➢✘❐➅➫❉➥➀➦➤→❇➙❸➦➞➧❇➳❩➯❩→❂❰☞➳♥➢✧➙❸↕➀➧➒➣➊➥❩➛❧➢❸➦➞➣❽➧ ➵
➥➀➳❽➟✟→❇➙➜➙⑦➟✟➢✧➭❙➥➊➦➞➣♥➧❇➦➞➝❧➯✟➨✮➙❸➛◗➫❉➠➬➢✧➢❸➦❾➭❙➯➲➧✡➣✶➣♥➭◆➯➲➢❸↔❵↔❵➙❸↕➀↔❇➣♥➧◆➝➞➢➜➦➞➳✶➭❖➥➀➢✧→❵➪P➥➀➣✜➦➞Ï❜➙➜➣❽➧❖➣❽➴❙↕➀➪P➣➼➦➞➯▲Ð✍➥➀➫✥➣♥➯➲➣✲➦➞➧❇➧❇➳❽➦➞➨✚➧➒➣✏➥➲➢✚↕➀➙❸➙⑦➠☞➯➲➭❵➟➊➢✪➡❵➢✧➳♥➯▲➝➤➣✲➝❧➥➲↕➀➧⑦➢❏➙➜➡❵➧❵➟✟➟❏➨➆➣❽➛❧➥➀➯▲➣❽→❵➥➲↕❀↕➀➙⑦➡❇➙❸➭❵➳❄➠Ñ➙❸➥➀➝➤➡❇➡❵➙❸➯➲↕❩➨❸➣❽➢❸➻ ↕ ➝
➥➲➦➞➧❇↕❩➢✧➭❵➦➞➡❇➧❇➳❽➦➞➧❇➣❏➨✏➢✧➳❽➯➲➥➀➣♥➦➞➘⑦➙➜➡❵➧✪➣♥➧❇➯➲➳❄➳❽→❵➣♥➣♥➯♥➟➊➹➬➫✥➢➊➣✟➢❸➧◗➦➞➝➞➭➮➝➞➡❇➯▲➳♥➥➲➙❸↕❩➟✟➢✘➥➀↔❧➣■➙➜→❵➡❵➙✶➧◗➫Ò➭✏➟✟➥➀→❵➣❽➣❱➥➀→❵➟✟➙⑦➣❽➭❵➥➀➯♥→❵➪✲➙⑦➔❭➭✪➙✡➳✶➭❙➢✧➧➙
➥➀➦➞➧❇→❵➭❵➦➞➯✥➡❇➫✥➳❽➣✟➣➼➢❸➣❽➳❽➴⑦➥➀➥➀➦➞➣♥➙❸➧❇➧❇➭✜➯☞Ó✍➠③↕➀➙➜➚ÕÔ✵➟ÙÖ✚➥➲× ↕❩➯✬➢✧Ö✍➦➞➧❇↔❵➦➞➧❇➟➊➨➮➢❸➩➽➯➲➣✶➣❽➘➒↕✵➡❵➯▲➣♥➻❙➧❇➯▲➥➀➳♥➣♥➣❽➟Ø➯➼➫❉➯➲➙✚➦➞➥➀→❵➥➀→❇➙➜➡❙➢✧➥❞➥✚➦➺➦➞➥❉➧➒➥➀➳✶➣❽➢✧↕➲➧ ➵
➟✟➧⑦➡❵➣♥➟❏➭❵➦➤➛❧➢✧➣❽➥➀↕➀➣✲➯❜➯▲➙❸➥❩➠❞➢✘➥➀➣❽➣✲➴❵➦➤➢✧➧❙➟✟➠➅➙✧↔❇↕➀➝➞➟➊➣♥➯♥➢✧➪➊➥➀➦➞➔✥➙➜➧⑥→❵➦➤➢✧➯✲➧◗➟✟➭✡➣❄➫❉➥➀→❇➦➺➥➀➙⑦→❇➭⑥➙❸➡❵➯➲→❇➥❉➙✶↕➀➫❉➣✶➘➒➯✲➡❇➥➀➦➺→❇↕➀➦➞➣❱➧❇➨❏↔❧➙✧➝❾➢✘➥➀↕➀➣♥➨➜➧❵➣ ➵
➥➀➣❽➦➤➧❇➢✧➨➜➝♣➦➞➠③➧❵➙❸➣♥↕❉➣❽↕➀➳♥➦➞➙❸➧❇➧❇➨❙➯➲➹◗➦➤➭❵➦➞➧✜➣❄↕❩➥➀➢❸→◗➛❇➢✘➝➺➻✟➥❉➦➺↕➀➥✬➣✶➭❙➫✵➡❇➙➜➳♥➡❵➦➞➝❾➧❵➭✡➨✮➛❧➥➀➣✲→❵➣❜↔❧➛❵➙➜➯➲➡❵➯➲↕❩➦➞➭❵➛❇➣❽➝➞➣✲➧■➥➀➙✧➙❏➠➬➣♥➩⑦➟✚➧❇➙✶➛❧➫❉➣✶➭■➝➞➣✶➭❙➥➀→❵➨➜➣➣
➟✟➘➒➡❵➣❄➦➞➥➀↕➀→❇➣♥➭➮➙⑦➭☞➥➀➙✡➦➤➧➒➥➀➭❵➙✲➙■➢✧↔❇➧✟➝➤➢✧➢❸➧❇➡❙➧❇➥➀➦➞➙➜➧❵➧❵➨❵➙➜➪➼➟✟Ü➆➙➜➡❵➣❏➯❋➦➞↔❵➝➞➝➤↕➀➡❵➙❸➯▲➨❸➥➲↕❩↕❩➢❸➢✧➥➀➟Ú➣✮❐Û➥➀➢✧→❇➨➜➣❱➣♥➧➒➢✧➥❄➝➞➨➜❰Ñ➙❸➫❉↕➀➦➺→❇➥➀➦➞→❵➳❩➟Ý→❏↕➀➢❸➣❽➯➵
↔❇➟✟➢✧➙⑦↕➲➭❙➥✵➣♥➙❸➝❣➠Ñ➹P➢❸➢❸➧➊➧❇➭✭➙✘Þ➽➳♥➣❽➙➜↕❩➟✟➢❸➝➞➝❧➟✟➟✟➣♥➧➒➣❽➥✮➥➀→❵➙➜➙⑦➧☎➭❏➦➞➥➀➧❵➙☞➦➞➥➀➦➤➦➤➧❇➢✧➭❵➝✥➡❵↕➀➳♥➣♥➣❜➯➲➡❇➯▲➝➺➥➲➥➀↕➀➯✚➡❵➳❽➥➀➥➀→❇➡❵➢✧↕➀➥✚➣♥➭✟➯➲→❇➭❵➙✶➙➜➫ß➟➊➥➀➢✧→❵➦➞➧ ➣
➣❄à✟➳✶➢❸➳❄➻✜➙✧➠➬➥➀→❇➣✲➦➞➧◗➭❙➡❇➳♥➣♥➭✡➟✟➙⑦➭❵➣❽➝❖➣❽➟✟↔❇➦➺↕➀➦➞➳✶➢❸➝➞➝➺➻➒➪
Introduction
❹❼❄✉❭♠♣❶➜❶❜❦✘①❾✉♣➂❭❷❄❦✧♦❵⑤⑦ê♣✇❄q③①③❶✧♦❙tØ②❖❼✲♦⑦♦❙r♣r❉rÛ♦❵⑧❧❷❄❿◗t✡②❖✈♣⑤⑦q➅t✏⑤➒✇❄①➅❦❱①③②♣ë❇④☞②❖ë◗♦➒②♣ì✲♦➒q❾ì✲❶➜⑧❧q③❶❸④❙⑧♣❶✟④❙①➅❶❉❼☞ê❖⑤➮⑤❵❼❩ê❖❶➜⑤❙❼❀❷❄❦✧❷❽♦❙①③❶✧②❇❷➼✇❽⑤❙✇❄♦✏①❾②❖✇❽①❾②♣♠♣④❶
➂❖ì✲ì✲q③①➅①❾❶✟✇❄⑧❧♠♣❶❸⑤⑦①③❼❄✉♣②⑥✉♣✉♣❷❽ò✮q③❶❸①➅❦✧⑤❙í❏⑤➒⑧❂❥❧✇❽①❾í➊✈♣♦❵✉♣②❖ó ❼❜✇❽❼➼⑤⑦ô✃❼❩ë❵✈❭❶➮⑤⑦❦♥❷♥♠⑩♦⑦❼❞r✲õ✲⑤❙í☞❼❜♦➒❻❙î❱✇❽♠♣❶❸✉♣❷❞❶❱q➅❷❄⑤⑦✈❖❶➜②♣❼❩ö❵②❖❶❱✈❖①❾②♣♦❙①❾❷❽r❀④❭❶➼ï✭✉♣✉❭q➅⑤⑦❶❸s❜❷❽②♣✈♣❼❄②❖①③❷❽❼❩①❾❷❽②♣✇❄❶✧❶❸④✡②❇②❇✇➊✇❄✇✬❶➜♠♣❷❄❦♥❶➜①③♠♣④❙❼❩②♣♠ð♦❵♦❵✈♣q❾✉♣❷♥♦❵❦✘❷❽④❙♦⑦❶❸❿ ❼ñ
②✇❄♣❦✘♠❖♦❵❶❸❶➆t✏❶✧❷♥✉♣②❖❼✧➃❵❷❄❶✧①➅î➲❶❸❼❄②➮⑧ù①❾②♣✉❖④✃✇❄⑤⑦♦❚❷❄♦⑦✇❄♠❖r✲①➅❦✘⑤❙✇❄✈♣❷❽❶➜⑧✹q➅⑤⑦⑤⑦t➮❷➜❦✘➃❙♦❧❼✡⑤✟⑧❧♦⑦❶✭✉❖r☞❷❄⑤⑦♠♣♦❵②❖ê♣①③④❙⑧✹q❾♠♣❶❸tøq③t➮❿ð⑤⑦r✻❼❩⑤❵②◗ë◗❦✘✈❖①③①③q❾⑤❙②♣q③q❾❶❸④➊q③⑧÷❿ðí☞ë❇t➮î✥②❖①➅⑤⑦♦➒❼❞①③ì✲②❵✇❄✇♥♦■q❾❶➜⑤⑦♦➒⑧❧①③②✹❻❙④❙❶✧❶⑥⑤❵❷♥❦✘❦✶❶❸♦❵✇❄②♣①③t✏④❙♦❙①❾② ❶ñ
✇❄♠♣❼❽♦❵❦♥❶❸♠♣②♣q❾✉✏❶✧♦❙t➮t✡♦➒❻❙⑤✟❿♣❶❸û✶ì✲❷❽ïü❦✧①❾♦❙✇❄î▲♠♣t✏✇✏①③②✪❶❉①➅❼✏✇❽⑤❙♠♣④❙✉Ñ❶➼❶✧♦❵②❇✉♣❼❽✇♥❼❩❷❽❼✮①③♦❙ê♣ê♣ú✻q③⑤➊❶✪q③❶✧✉♣t✭✇❽♦❂❷❄➃⑦♦❵❶❸ê♣✈❭④❏q③❼❩❶✧❶✭✈❭tÙ❼❩q③①③❶❸ì✲②♣⑤⑦④❏❷❽♠♣②♣①➅✇❄❦♥①③♦◗②♣♠➮♦❙④◆q➅q❾❼✥①③t✏✇❄ì✲❶❸①➤❦♥♠♣✇♥♠❖❼✬①➅②♣❦♥✇❽♠✡①③♠♣ö❇①③❶✧✈♣②❖①③❶➜❷❜⑧❧❼✜✈❭⑤❙✇❄✈❧❦✘♦❶ ñ
❶✧⑤❙t✡❦✘✇❄ê❭①③♦❙❶➜②❖⑧ÿ❼❱⑤⑦♦❵④❙❷❱❶❸t✜②❇✇❽❶✧❼✏✇❄♠♣ì✲♦❧①❾⑧♣✇❄♠ÿ❼✟rÛ✇❄❷❄♠❖♦❵❶✃t✰⑤⑦❶✘ê♣ý♣①③q③⑤⑦①➤✇➀t✏❿ù✉♣✇❄q③❶❸♦ð❼❸ï⑥①③②❖þ✚⑧❧✈❖②♣❦✧❶➮❶✭t✏✇❄♠♣❶✘✇❄❶✃♠❖⑧♣♦◗⑧◆❶✘✇❽⑤❙①➅❼❱①❾q③❶❸✇❄⑧♦
î➲t✏②Ú⑤❙①❾♦❙②÷✈♣❷❂⑤❵❦✶✉♣✇❽①❾❷❽♦❵❶✧②❖❻◗❼■①❾♦❵❦❸✈❖⑤⑦❼☎②ðì✍êÑ♦❙❶✪❷❽ë✔①③②❖⑧❧ì✍✈❖❶÷❦✧❶❸♠❖⑧❚⑤➽❻❵rÛ❶ù❷❽♦❙t❼❄♠♣♦➒❶✘ì✲ý♣②Ú⑤⑦t✏♠♣✉♣♦➒q③ì✡
✠ ☛Ñ⑤➒✇❸ó❱⑧❧♦❙ñ
❸
❶
❸
❼
ï
❦❼❩✧♦❵⑤⑦②➄②ð➃✛êÑ✚ ❶✪①③❥❧②❖①❾⑧❧t✏✈❭✉❖❦✘❼❄❶❸♦❙⑧ù②✌✈❖✜✗❼❄①❾✢✗②♣✢✣④✌❵✜ û✡☞✎ì✲✍✑♠♣✏✓①③❦♥✒✕♠❨✔✗✖✙✘÷♠❭⑤❙ú✻❼⑥ô✃êÑ❦➜❶✧s❜❶❸q③② ✈❖❼❩❶✧ë❵t✡❶✧❿❵ê❭➃✥❶➜õ☞⑧♣í✮⑧❧①③❦♥❶➜❦✘♠❖⑧ÿ✇❄⑤❙①③♦❙❷❽①❾②❖②❧⑧❧❼ññ
✇❄✜✆❶✧✢✗❷♥✢✳⑤❙✲❙❦✘û✘✇❄✴ï ①③❻❙❶❸✤✧q❾❿ ✦✩★✪①③②✥☞✰✤✧⑤❙✦✩①③⑧❖★✪❼➮☞ ⑧❧♦❙ú➱❥◗t➮①③t✏⑤⑦①③✉❖②ÿ❼❩♦❵❦✘②✫♦❙②❭✖✙❼➀✬✭✇❽❷❄✒✆✈❖✮✰❦✘✯✱✇❄①③✜✗♦❙✢✗②➄✢♣➃❜➈➽✆♦ û✶➬☎ ➃✍❶✧ú➱❷❽❥◗①❾①③②❖t✏④◆✉❖❶➜❼❩⑧❧♦❵①➤② ñ
✇❄✉♣♦❙q③⑤❙❷♥❼❸②♣➃⑦②♣❻➒①③②♣⑤⑦q③④ ①③⑧♣✇❽⑤⑦♦◗✇❄♦❙①③♦❙q➅❼✧②✡ïù✇❄♦◗þ✚♦❙✈❧q➅❼❸✇❽➃❵✉♣⑤❏✈❧✇✏④❙❷♥rÛ⑤⑦❷❄✉♣♦❵✵
♠❖①③❦❸⑤⑦✤✶q❧✦✷q③★✪①➤rÛ☞✂❶✧ñÕ♠♣①③①➅❼✡❼❩✇❄✇❄♦❙♠❖❷❽❶➆❿➊❶➜❦✧♦❙⑧❧t✏①➤✇❽♦❙✉♣❷❉q③❶✘⑤❙✇❄②❖❶➜⑧⑧
t
⑤⑦✸❏②❖î ⑧✹✁ þ✡❻➒⑤⑦ó ❼➬q③①③①❾⑧♣②❇⑤⑦✇❄✇❄❶❸❶➜❷❄⑧ü②❭⑤⑦⑧♣q⑦♦❙q➅⑤⑦t➮②♣⑤⑦④❵①③✈❖②✹⑤⑦④❵êÑ✭❶ ❶✧①③☞✪②♣④ð✹✎✺➆t✏✼ú ♦❧✻➄⑧❧①③✽✈ ❶✧q③✹✚q③❶❸⑧ÿô ①③❦➽②ÿs❜q❾⑤❂✈❭❼❩❻➒ë❵⑤⑦❶✧❷❽❿✪①➅⑤⑦✜✗②❇✢✗✇➮✢✣✢❵♦⑦ûr
♦❙❼❩♦✃❷ ✇❄✁ ♠❖➇❏⑤⑦✇✜➇✾①❾✻✬✇✏ï✮❦❸➁☞⑤⑦②✹❶❸❷❄❶❚①❾②❖⑧♣ì✍✈❖❶❚❦✘❶✭❶✘ý◗⑤❙✇❄❦✘❶❸✇❄②❖①③♦❙✥⑧ ②❖❼■✸❏rÛî ❷❄✁ ♦❵þ✡t ó ❼☎✇❄❷♥þ✚⑤⑦①③✉❖②♣t✏①③②♣⑤❙④◆ë❙❶❸❼❄❷✭❶❸ö❇❼❄✈♣❿❧❶❸❼➀②❖✇❽❶✧❦✘❶➜t ❼
❼♦⑦➀⑤⑦✇♥r❜②❖⑤➒⑧✏❶✧✇❄ý♣❶❱①❾✇❽⑤⑦①③❼✍t✏②❧rÛ❼➀✉♣♦❙✇♥q③⑤➒❷❽❶❸t➮✇❽❼❸①③❦✮ï ⑤➒✇❽♦❙❹ ①❾ê✄♦❵♠♣②➆➀✂ ①③❶➜❼■❦✶⑤❙✇❜②❖❦✘♦❙⑧✪t✏②❭♦❧ì✲❼❩⑧❧①➅①❾⑧❧✇❄❶❸❶✧♠❖qÑ❷♥♦❙⑤❙⑤⑦✈❧q❾ê❖♦❵✇☞q❾②♣❿ ❷❽❶❙❶❸➃❇❷❽ö❇❶❸ì✲✈♣⑧♣①③①➤✈❖❷❄✇❽①③♠♣❦✘②♣❶➜♦❙④✜❼❏✈♣✇❜q③✇❄⑤❙♠♣①③❷❄②❵❶➮④❵✇❽❶❱❶✧ê♣❷❽✈❖②❇t✏✈❖❷❽⑧❧t■❶❸❶❸⑧❧êÑ②❚①➅⑤➒❶✧✇❽♦⑦❷♥❶❼r
ë❇✉❭②❖❶❸❷❩♦➒rÛì✲♦❵❷❄q❾❶➜tÙ⑧❧④❙ë◗❶✡②♣♦➒❶✧②♣ì✲④❵q③❶❸①❾②♣⑧❧❶❸④❵❶✧❶✮❷❽①❾⑤❙②❖❦❸④❖ö❵➃❋✈❖❼❩①③♦⑩❼❄①➤✇❽✇❄①❾♠❭♦❵②⑥⑤➒✇➊❷♥⑤➒⑤✪✇❄✉❖♠❖❷❄❶✧♦❵❷❜④❙✇❄❷♥♠❖⑤⑦⑤❙t ②⑥①➤ú✻✇✲⑤⑦④❵♦❧❶✧❦✧②❇❦✧✈♣✇♥û✚❷❽❷❄❦✧①③⑤❙②♣②④
✇❄❼❩♠♣✈❖❷❽❦♥♦❙♠✭✈♣⑤❙④❵✿❼ ♠ü✸❏⑤❂î ✁ ♠◗þ✡✈♣t➮ï ⑤⑦②♣ñ▲⑧❧❷❽①③❻❙❶✧②✹✉♣❷❽♦◗❦✧❶❸❼❽❼➮❼❩✈❖✉♣✉❭♦❵❷❩✇❽❶❸⑧üê◗❿ù⑤◆✇❄♦◗♦❙q
❹❶❸❹ ⑧♣♠♣♠♣④❙❶❱❶➊❶➼⑤❙❷♥ê◗⑤➒❦❸❿✡✇❽ö❵①❾✈❖⑤⑦♦❵④❵①③②❖❼❄❶✧⑤⑦①➤②❇✇❽q③①❾❶❱✇❽♦❵❼✥②✁rÛ♦❙①➅❼✥❷✚❱❷❷❽❼❩❽❶✧❶✧❶✘q➅✇❩⑤➒➂❭✇❄✇❄②♣①③①③②♣❶✧❻❙t✏④⑥❶❸q❾❿✡❶✧✈❖②❇✉✭❼➀✇✍✇❽✇❄❷❽♦⑦♠♣⑤❙r➄①➅①❾❼✮④❵r✻⑤❵♠❵✉❖❦✶✇❄❷❄✇❄rÛ♦❵✈❭♦❙ê♣❷❽⑤⑦ì✍q❾qÑ❶❸⑤❙♦❙t ❷❽❷➼⑧➄❼➀①➅ï❋✇♥❼✮⑤➒î➲⑤❵②✡✇❄❼✲①➅❦✚✇❽rÛ♠♣♦❙ë❇❶☞q③②❖q③♦➒❦✘♦➒♦❵ì☞ì✲②❧❼✧q➤ññ ï
✇❄❷❽❶✧❶❸ý◗⑤❙❼❄✇➆♦❙♦❙②❭r➊⑤⑦ê♣✇❄q③♠♣❶⑥❶❚✇❽♠❖①③②❇⑤➒✇❄✇⑥❶✧❷❽⑤⑦②♣②ÿ❶✧✇✭⑤⑦④❵⑤❙❶✧②❖②❇⑧ ✇➮♦❵❦✧✉❭⑤⑦❶❸②ü②✔⑤❙❼❄❦❸❿◗ö❵❼❩✈❖✇❄①❾❶❸❷❽t➮❶➆❼✧⑤❙➃✚②❖⑧÷①➤✇☎❷❽①➅❶✘❼⑩➂❭②❖②♣♦⑦❶☎✇✭❼❩✈❖✈♣❦♥②❧♠ ñ
ë◗❶✘ý❧②♣✉❖♦➒⑤❙ì✲②❖q③❶❸❼❄①❾⑧❧♦❵④❵②✪❶➮♦⑦ì✲r✵①➤④❙✇❽q③♠ù♦❙ê❖❼❩⑤❙♦❵q❾t✏q③❿⑥❶✪⑤❙⑧❧❦❸❶✧❦✘④❵❶➜❷❄❼❄❶❸❼❄①❾❶⑥ê❖q❾♦⑦❶❏r✮♦❙⑤❙②❇✈❧✇❄✇❄♦❵♦❵q❾♦❵②♣④❙♦❙①③t✡❶❸❼❜❿❙ïì✲①➤❹ ✇❽♠♣♠♣①③❶⑩②✭❷❽❼❩⑤❙✇❽⑤❙✉♣②❧①③⑧ ñ
①③î➲⑧♣②❵②✭⑤❙✇❽❷❽❶✧❦✘⑧✹q③♦❙q❾①③②❇rÛ④❙✇❽♦❙❶❸❷❽❷❽②❇⑤❵t➮✇❜❼➀✇➜⑤➒⑤⑦➃❵✇♥④❵✇❽❼⑥❶✧♠♣②❇❼❩❶✟✇❽✈❭❼❞⑤⑦❦♥t✏♠ì✲①③♦❙⑤❙q③✈❖q❭❼⑩②❵♠❖✇✲þ✿⑤➽❻❵♦❙✞ ❶☞rP⑤❙✻❉❶ ☎➬❦❸➃✍❦✘♦❙❼❩❶❸❷❄✈♣❼❽✇✲❼✥✉❖②♣✉❭✇❽❶✧♦➊♦❵❶➜❷❩r✻⑧❧✇➮⑤❙❶❸❦✘⑧✪✇❽✇❄♠♣✈❖✇❄❶✃⑤❙♦✏q❖②♣❶✧ë◗♦❙②❭②♣✇❄❦✘♦➒①③♦❧♦❙ì✲⑧❧②üq③❶❱❶❸✇❄⑧❧ê♣♠❭④❵✈♣⑤➒❶❙④ ✇ ï
rÛ✇❄❷❽①➅❦✧❶✧❼❸❶❙➃❋➃❵⑤❙⑤⑦❦✧②❖❦✧⑧➆✈♣❷♥✇❽⑤➒♦✪✇❄t➮❶➼⑤❙⑤⑦❦✘①③②❇✇❄①③✇❽♦❙⑤❙②✏①❾②☎❼❩✉Ñ✇❽♠♣❶❸❦✧❶✧①➤t✭➂Ñ❦✧➃➬⑤➒①➅✇❽❼✚①❾♦❵❼❄②❖①❾④❵❼❞②♣⑤⑦①❾➂❭②❭❦✧⑧■⑤❙✉♣②❵q➅✇➜⑤⑦ï✮②❖íø②♣①❾②❖②❖④❏❶❸❦✘♠❖❶➜❶✧❼❄✈♣❼❽❷❽⑤⑦①③❷❽❼❩❿ ñ
✇❄✉♣❦✧❶➜⑤⑦❷❽❦♥✇❄❶❸♠♣①③❦✧♦❙②♣♦❙②➄②❖♦❵➃❭q❾⑧❧♦❵⑤⑦①❾④❙✇❄②❖①③❿✜⑧✭♦❙②☎①③①③❼❜②☎♦❙✇❄r❞♠❭t➮⑤➒✇❄⑤⑦♠♣✇✬②◗❶✡✇❽❿⑩♠♣✈❖❶✧❦❸❼❄❷❽⑤❙❶➊❶❏❼❄♦⑦❶❸❶✘r✬❼❸ý❧➃❧①③❦✧❼❩♠♣✈♣✇❽❶✧❷❄❼✍✈♣❷❽⑤■❶✧❷❽②❇①➅⑧♣❼➀✇✚✇❽❶✘①③⑤❙✇❽❦✟⑤❙✈❧ë◗①❾✇❄q③②♣♦❵❶❸⑧⑩t➮♦➒ì✲⑤➒⑤❙q③✇❄❦✘❶❸❶➜✇❄⑧♣⑧✭①③♦❙④❙②✪❶❙✉♣ï❜q➅❼❄⑤⑦➁☞✉❭②♣❶➜❶✧②♣❦✘②❭①③①❾②♣➂♣❦✘④❶ ñ
✄② ✂✬rÛ♦❙❷✮❶❸❻❙❶❸❷❄❿✪⑤❙④❙❶❸②❵✇✲✇❽♠❖⑤➒✇❏❦✧⑤⑦②☎✉Ñ❶✧❷❄ñ
rÛ①❾ì✍✇❽♦❙❼➆❶■❷❽t✂❦✧⑤❙⑤❙❦✶✉♣②✭✇❽①❾q③♦❵⑤❙⑤❵②Ò②♣❼❩ë✪②♣⑧❧①③✇❄②♣❶➜♠♣④❖❼❄❶✡❦✧➃❖❷❄t■ö❵①③✉❧✈❖✈❖✇❄❶❸①③❼❩❼❩♦❙✇✮✇❄②❭①③ì✍♦❙✆❼ ✖
❶➊♠❖⑤⑦②❭⑧☎❦✘♦❧⑧❧❶➊⑤⑦②❭⑧✭♠❖⑤⑦②❖⑧✭t➮⑤❙①❾②❇✇❽⑤❙①❾②
☎ ò☞♦❭➃☞①➤r✜⑤⑦④❙❶❸②❇✇❽❼✪⑤❙❷❄❶ ✇❄♦✹⑤❵❦♥♠♣①③❶✧❻❙❶
✇❄q③❶❸♠❖⑤⑦①③❼✚❷❽②♣ë◗①③②♣①❾②❖④✏⑧ ⑤⑦♦⑦②❖r✬⑧➆⑤⑦❷❽✈♣❶✘✇❄➂❖♦❙②♣②❖①③♦❙②♣t■④➮❿❵⑤❙➃❖❦✘✇❽✇❄♠♣①③♦❙❶✧②➆② ë◗✇❄②♣♠♣♦➒❶❸ì✲❿✭q❾❼❄❶➜♠♣⑧❧♦❵④❙✈♣❶❱q③⑧⑤⑦②❭ê❭⑧⑩❶✜♠❖❦✧❶✧⑤⑦✈♣✉❭❷❽⑤⑦①③❼❩ê♣✇❄q③①➅❶■❦✧❼❸♦❙ï r
î➀⑤✟ò ✠ t✏✁ ❺ ♦◗⑧♣❹ ❶✧✂❀qÕó♣í✚♦⑦❼❄r❀❼❄✇❄✈♣♠♣t✏❶❱❶➮ì✍✇❄♦❙♠❖❷❽❶⑥q③⑧❋①❾➃❧②❖⑤❙✉♣②❖✈❧⑧⑩✇■⑤✏✇❽♦☎❼❄❶✘✇❄✇✲♠❖❶✪♦⑦r❀q③❶❸✇❄❷♥⑤⑦⑤⑦❷❽①③②♣②♣①③①③②♣②♣④☎④➮✉♣❼❩❷❽❶➜♦❙ö❵ê♣✈❖q③❶✧❶✧②❖t ❦✧❶❸①③❼❸❼ ➃
④❙➈❙①③ï③❻❙➈❜❶✧ñ❀②➆✇❄♠♣⑤❙❼✍❶❸❷❄rÛ❶☞♦❵q❾⑤❙q③♦➒❷❄❶✲ì☞⑤✟✽❼ ✂ ②◗✈♣t✡ê❭❶❸❷❉♦⑦r❋❦✘q➅⑤❙❼❽❼❄❶❸❼❞❶➜⑤❙❦♥♠✏❦✘♦❙②❇✇♥⑤⑦①③②♣①❾②❖④➊⑤✟❼❩❶✧✇
♦⑦➈❙r➬ï ✜☞♦❙ñ✵ê✄➀✂❶❸❶➜⑤❵❦✶❦♥✇❽♠✜❼❸➃❙♦❵❶➜✄ê ⑤❙➀✂ ❦♥❶❸♠✜❦✘✇❉♦❙♦⑦ê✄r❋➀✂ ❶➜❶❸❦✶⑤❵✇❉❦♥♠✜êÑ❶✧❦✧q③q③♦❙⑤❵②♣❼❄④❇❼❉❼✥t✏✇❄⑤➽♦✟❿✡♦❵ê❭②♣❶✮❶☞❷❄❼❄❶❸❶✘q③✇✮⑤⑦ú✻✇❄❦❸❶❸⑤⑦⑧✡q③q❾✇❽❶➜♦➊⑧➮♦❙⑤✟ê✄❼❩➀✂ ♦❵❶➜❦✶❷❩✇♥✇✶❼û
♦⑦ì✲r➬①➤✇❽♦⑦♠❂✇❄♠❖❼❄❶✧❶✘❷✬✇❱❦✧♦⑦q③⑤❵r✍❼❄ê❖❼❄❶❸⑤❙❼❸❼❄①③➃❙❦■⑤⑦②❭❻➒⑤⑦⑧✏q③✈♣♠❖❶❸⑤➽❼➮❻❙❶➼ú✻ê❭✉♣♦◗❷❽♦❙♦❙✉Ñq③❶❸❶✧⑤❙❷❄②✃✇➀❿✟♦❵ñ❀❷❏❻➒❼❽⑤❙❦✧q❾⑤❙✈♣q③❶✲⑤❙❷♥❷❽û✘❶✧ï q➅⑤➒❹ ✇❽①❾♠♣♦❵❶✜②❖❼❄❷❄♠♣❶❸①❾q③✉❭⑤⑦❼ñ
✇❄✉♣①③❷❄♦❙❶➜②❖⑧❧❼➼①➅❦✧⑤⑦⑤➒②❭✇❽⑧✪❶❸❼❸✉♣ï ❷❽♦❙✉Ñ❶✧❷❄✇❄①③❶❸❼✍⑤⑦❷❽❶❱⑧❧❶✘➂❖②❖❶❸⑧⑩①❾②⑩✇❄♠♣❶✟✈❭❼❩✈❖⑤❙q➬ì✍⑤➽❿✏✈❖❼❄①❾②❖④
♠❖❼❩➈❙♠♣⑤❙ï ➉❚①③❼✍✉✭ñ✡⑤✡ì✲❶➜➂♣①➤⑤❙✇❽ý❧♠☎❦♥❶❸♠❨⑧ ♦⑦✇❽♦❙✠➺❼❩♠♣ê ✇❽❶✧✂➀⑤➒❷☞❶❸✇❽❦✶❶❙♦❵✇➆ó③✄ê ï ➀✂♦⑦❹❶❸r➊❦✘♠♣✇❽❶➜①➅❼☞⑤❙❼✍❦♥⑤❙❼❩♠ ②❖✇❽⑤⑦⑧✾❦✘✇❄➒♦q➅❶❏⑤❙❙❼❽①③❷➼❼✲❼✪✇❄⑧♣♠❖⑤⑦❶✘❶✟✇⑩➂❖❻➽②♣⑤÷⑤❙❶➜q❾⑧⑥t✏✈❖❶➊♦❵ê◗t✜❿⑥♦⑦r✥❶❸①❾✇❽✉♣②❇❼✍✇✪❷❽♦❙❷❽❶✧①❾✉Ñ②Òq➅❶✧⑤➒❷❄✇❄✇❄✇❄①③①③①③♦❙t✏❶❸②♣❼❸❶ ñ ï
❹ ♠♣❶✧❷❽❶■⑤⑦❷❽❶■⑤⑩❼❩t➮⑤⑦q③qÕ➃❖➂❖②❖①➤✇❽❶✡②◗✈♣t✡ê❭❶❸❷✮♦⑦r✬❼❩✇❽⑤⑦✇❄❶➜❼☞rÛ♦❵❷✮❶❸⑤❵❦♥♠ ♦❙ê♣ñ
✂➀❶❸❦✶✇✮❦✧q③⑤❵❼❄❼❸ï
④❙t✜➈❙①③ï ❻❙✿✚♦❵❶✧②◆ñ❀②☎✇❄❼❄♠♣⑤❙❶✧②❖❶❸ê❭❷❄❼❄♦➒❶✲❶❙❻❵ó❋①➅❶❙❼✬①③ï✍②❧⑤✟î➲rÛ②❧❶❸❼❄❷❄rÛ❶✘♦❙❶❸✇❜②❖❷❽î✵t➮❦✘❶✏♦❙⑤⑦r❋q③❦✧q❾①❾⑤❙❿❵②◗②◆➃❭❻➒⑤⑦⑤➮êÑ❷❽❼❄①③❶✏⑤❙❶✘②❇✇✮t✏✇❽①➅⑤❵❼✥❼✮⑧❧❷❽⑤❵❶✡❶✧⑧❧q➅⑤➒rÛ❶❸❷❄✇❄ö❇♦❵①③✈❖②♣t✣⑤⑦④❱✇❄✇❄✇❽❶➊♠♣♠♣①❾❶❸❶☞r❉t✭✉♣⑤⑦❷❽➃P②◗❶❸❿✱❼❩⑧❧✈❭①➅➺✠ ❦♥❦✧❦✘♠❂⑤⑦♦❵✇❄t✜⑤❵❶➜❼❼ñ
②♣➈❙♦❙ï ✲■❷❽t➮⑤✏⑤⑦❼❄q➬❶✘✇☞①❾②♣♦⑦rÛr✵❶✧❷❽✇❄❶✧❷♥②❖⑤⑦❦✧①③②♣❶❸①③❼✲②♣⑤⑦④✜êÑ✉♣♦❙q③✈♣⑤❙✇✮②❖❼✲❼❩✉❖♦❙⑤⑦rP✇❄✇❄①➅⑤⑦♠❖q❋❶❱❷❽rÛ❶✧♦❙q➅❷❽⑤➒t ✇❽①❾♦❵②❖❼✧ï
úÛ①③②♣①➤✇❽①③⑤❙qP❼❩✇❽⑤➒✇❽❶❙➃◗➂❖②❭⑤⑦qP❼❩✇❽⑤⑦✇❄❶➜û
✥ ✒✝✏ ✖❁✧❀ ✍❂❀❄❃✱❅✼❀
✥ ✒✝✏ ✖❇❆ ✍✻❆❈❃✱❅❉❆
ï❾✥ ï ✒✝✏ ✖✼✧❊ ✍❋❊✫❃✱❅●❊
✇❄①❾✥ ②♣❷♥✒✝⑤⑦①❾✇❄✏ ①③①➅②♣⑤⑦✖❁①❾q❇❀■②❖❼➀❍❏④➆❍❑✇♥✥✑⑤➒✉❖✒✝✇❽q③❶❉✏ ⑤❙①❾②➄✖✽②❇☎❊➃✵✇❽♦➼⑤❙⑤❙②❖✇❄❷❄♠♣⑧◆❶☎❶✬✇❄✇❽➂❖♠♣♠♣②❖❶❸❶ ⑤⑦❿❂q❇②❖⑤❙❼➀⑤❙✇♥❷❄t✏⑤➒❶➮✇❽❶❸❶❙⑤❵❼⑥ï❀❼❄❼❄♦❙➁✮✈♣r❱❶✧t✏❷❽✇❄❶❶❸♠♣⑧◆❶▼✍ ❀ ✇❄▲❚❃ ♦✭✍◆⑤❙❆●✇❽❦✶❷❽❃✼✇❽⑤❙❍❑❍①❾②❖✍ ♦❵❼❩❊②❖rÛ❼➮♦❙⑤❙❷❽❷❄①③t✰❶❞②❨❶➜⑤❙✇❄✇❄♠❖♠❖❦♥♠ ❶❶
q❾♦❙①➅✶❷❼➀✇♥✠ ❼P✉♣❷❽♦❙❶✧r❖❻➒♦❙⑤⑦ê ①③q➱✂➀ó✧❶❸♦❙❦✶✄ê✇✵➀✂ ②❖❶➜⑤⑦❦✶t✏✇❽❼P❶➜❷❄❼✍❶➜ö❇ú❖✈♣✇❄♠❖①❾❷❽❶✧❶❸❿➊⑧❏❦✘ê◗♦❵❿❱✈♣⑤⑦q➅⑧✟②✟ê❭⑤❙❶✍❦✘✇❄②◗①③♦❙✈♣②➄q③q➅➃➒û❀⑤⑦♦⑦②❭r♣⑧❖✈❖②❖❅✼❦♥❀■♠❖❃■❅⑤⑦❆②❖❃✽④❙❍❏①③❍P②♣❅●④❊
♦⑦✉♣⑤⑦r❖❷❽❷❄❶☞❶➜✇❽❼❩♠♣❶❸❶❸❶☞⑤❵②❇❦♥⑤❙✇❱♠⑥❦✘①③✇❄②ðq❾①③①➅♦❙❼❩❼❩✇❽②➄♦❵❼❜ï✵t✏♦⑦⑨✬rP❶⑥⑤❵♦❵❦♥❼➀✄ê ♠■✇♥➀✂⑤➒❶❸♦⑦✇❄❦✘rÑ❶❵✇❜➃P✇❄♠❖②❖ê♣❶✍⑤❙✈♣t✜q③✇✟①➅❼➀❶➜✇❄✇❉❼❜♠❖♦❙⑤⑦⑤✆r❭✇✡Ñ☎ ✉♣❶➜❼➀❷❽❦✶✇♥❶✧✇❽⑤➒❻➒❶❸✇❽⑤⑦⑧⑥❶⑥①③q❧ê◗⑧❧♦❙❿✜♦◗ê ❶❸✇❄✂➀❼❱♠❖❶❸❦✶❶✚②♣✇♥♦❙❶✧❼✥✇■ý◗t■❶➜❦♥❦✘✈❖♠❖✈❧❼❩⑤⑦✇❽✇❉②♣①❾♦❵êÑ④❵② ❶❶
⑧❧✈♣❷❽①❾②♣④➮⑤❵❦✶✇❄①③♦❙②➆❶✘ý❧❶➜❦✘✈❧✇❽①❾♦❵②➄ï
ñ✃✞❨⑤ ①➤✇❽♠♣ú✻①❾✉❭②Ø♦❇❼❄✇❽❼❄♠♣①③ê♣①➅❼☎q❾❿✒rÛ♦❙❶✧❷❽t✏t■✉❧✈❖✇➀q③⑤⑦❿♣✇❄û❚①③♦❙❼❄②➄❶✘➃✟✇÷⑤❙♦⑦❦✘r➆✇❄①③♦❙❶✘ý❧②Ú①➅❼➀❼❽✇❽❦♥①❾②♣♠♣④ø❶✧t➮⑤❵⑤❨❦✶✇❽⑤⑦①❾♦❵❷❽②❶ð✉❖❼❽❦♥⑤❙♠♣❷❽⑤❙❶✧t➮t✜⑤❖❶✧ñ ï
✇❄❶✧❷❽①➅❼❩❶➜⑧✪♦❵ê✄➀✂ ❶❸❦✘✇✍✇❽❷❽⑤❙②❖❼❩rÛ♦❙❷❽t✏⑤⑦✇❄①③♦❙②❖❼❸ï
ì✲❦✧þ❏⑤⑦①➤❺ ②ù✇❽♠☎✛❹ ê❭✁✇❄❶✭♠♣❺ ❶✜①③②❖❹ ❼➀❼➀✇♥✇♥⑤➒⑤❚⑤⑦✇❽②❇①③❼❩❦✡✇❄❶✧①➅✇➮⑤➒⑧❧✇❽♦❙♦⑦❶❸t➮r❏⑧ù⑤⑦⑤❵①❾①③❦✶②❇②☎✇❽✇❄①❾♦◆♦❵t✏②ü✇❄♦❧♠❖⑧❧❼❽❶⑩❦♥❶✧♠♣q✥✇❽❶✧❷❽❦✘t➮⑤❙♦❵①❾t✜⑤✃②❖①❾✉Ñ②♣✇❄♦❙♠❖④◆②❖⑤⑦❶✧✇➮✉♣②❇q➅ñ➊⑤⑦✇❽❼✜②❖①③❼✭❼➮ú➀➈❵❦✧ú➀ï❾➈❵♦❙➈✟ï②❖✲❵ñ♥❼❄û■➈❙①➅❼➀ï ✿❇❼❄✇❽✈♣❶✧❉û ❖✚②❇✉❧✇ññ
✉♣❼➀✇♥q❾⑤➒①③❶❸✇❄⑧➄❶❜➃❭♠♣⑤⑦❶✧②❭✈❖⑧⑩❷❄①➅❼➀ì✲✇❽①③①③❦❸q③q❋❼✥✇❄⑧❧❷♥❶✧⑤⑦❷❽②❖①❾❻❵❼❩rÛ❶❸♦❙⑧❱❷❽t rÛ❷❽♦❙✇❽tØ♠♣❶✟✇❽♠♣①③②♣❶✍①➤✇❽✇❄①③❷♥⑤❙⑤⑦q❀①③②♣❼❩✇❽①③②♣⑤⑦④✮✇❄❶✟✉♣①③q➅②❵⑤⑦✇❽②❭♦✏❼P✇❄✇❽♠♣♠❖❶➊⑤➒✇❞➂❖❦✧②❖⑤❙⑤⑦② q
ê❭❶✟✈❖❼❄❶❸⑧✪✇❄♦➮④❵✈♣①➅⑧❧❶✟⑤✡✉❖q③⑤❙②♣②♣❶✧❷
◗ ✟ ã ✞ å❞æ
❹❼❄❦✧♠♣❷❄①③❶✡✉❧✇❄q③❶❸①③♦❙⑤❙②➆❷❄②❖①③①❾✁② ②♣④⑩✵✜ ①③t✏④❙✈♣❶✘✇❄❷❽♠❖❶✏♦◗➈❙⑧◆ï❞①➅î➲❼❏②✭❼❄♦❙✉Ñ✈❧❶❸✇❽❦✘q❾①❾①③➂❖②♣❶➜❶❵⑧✃➃◗✇❄ê◗♠♣❿✭❶✟✇❄t✏♠❖❶✘❶✏✇❽♠♣⑤❙♦❧q❾④❵⑧➆♦❙❷❽①③❼✼①➤✇❽✂ ♠♣t✣⑧❧❶✘ñ
❦①❾♥úÛ②♣①❣♠❖û■④ÿ⑤⑦②♣✈❭❶✧④❵❼❩ý❧❶☎①❾②❖⑤❙t✏⑤❂④☞❼➀✉♣❼❩✇♥❶✧q❾⑤➒❶❵✇✏✇❄➃❏❶➜♦❙❼P✇❽r✚⑤⑦♦❙ë◗♠♣r❧①③❶❸❶❸②♣✈♣⑤❙④ ❷❽❦♥①③♠■❼❩⑤❙✇❄⑧♣♦❙①➅❦✧ê✄❻➽❼➮⑤❙➀✂ ❶➜②❇⑤❙❦✶✇❽②❖✇P⑤⑦⑧ù❷❽④❵❶✘❶◆rÛ①③②❧❶❸♦❙❷❄rÛr✡❶❸❷❽❶❸❷❄✇❽❶❸⑧❏♠♣②❖✇❽❶÷❦✘♦✮❶➜❼❩❼■ì✲✇❽①❾⑤➒✇❽✇❄♦❂✇❽♠♣①③❦❙①③✇❄②■➃✟❷♥⑤❙⑤✲♦❙❦♥ê✄✇❽ë❚❷❽➀✂ ⑤❙❶➜✇❄①❾❦✶♠❖②♣✇❄❶ ññ
t✜❼➀✇♥⑤➒♦❧✇❄⑧❧❶ÿ❶❸q➱①③ï⑩②❧rÛî➲♦❙②❧❷❽rÛt➮❶✧❷➊⑤➒rÛ✇❽✈♣①❾♦❵q❾q✍②Ù⑧❧⑤❙❶✘✇♥②❖⑤⑦⑧ ①③q③❼➊①❾②◗♦⑦❻➒r✲⑤⑦♦❙❷❽①➅ê✄⑤⑦➀✂ ②❇❶➜✇❽❦✶❼❂✇✟ì✲✇❽❷❽①❾⑤❙✇❄②❖♠♣❼❩①③②Ù①❾✇❄①③♦❙✇❄②❭♠♣❼❏❶ rÛ⑧♣♦❙❷➊♦❙t➮❶➜⑤❙⑤⑦❦♥①③②♠
◗ä â❜é▲â✏✎✒✑⑥ä❇å✔✓✖✕ ✟✍✗
❶✧②❖❶✧❷❄❷♥❶➜⑤⑦⑤❙q❜❼❄♦❙❼❄①❾②♣✇❄①③✈❖②♣⑤⑦④❚✇❄①③♦❙⑤⑦②÷êÑ♦❙①➅✈❧❼■✇✪♦❵⑤❵②♣❦✶❶✪✇❽①❾ì✲♦❵②❖♠❖❼✜❶✧❷❽✇❄❶⑩♦ð⑤⑦⑤❙②✹❦♥♠❖⑤⑦①❾④❙❶❸❶❸❻❙②❇❶☎✇✡⑤❚②♣❶❸⑧❧❶❸❶❸⑧♣❼❄❼■①❾❷❽❶❸✇❄⑧♦
✉Ñ④❙❹ ♦❇❶✧♠♣❷❄⑤⑦❶⑩rÛqÕ♦❙➃❖④❙❷❽⑤⑦t✾
⑤❷❽⑦❶❸②❂⑤⑦q❭❶✧②◗ì❜❻◗♦❵②❭①③❷❄❷❄⑧☎q➅♦❵⑧✜②♣①❾♦❙②☎t✏✉Ñ❶✧✉❭❶✧②❇⑤⑦❷♥✇✮⑤➒❷❄✇❄✇❽✇❽①➅①❾♠❖❦✘♦❵✈♣⑤➒②❖✇✟q➅❼❞⑤⑦①❾✇❽❷☞✇➊♠❖✉Ñ⑤➒♠❖❶✧✇➼⑤❙❷❄❼❏❦♥rÛ♦❙♠❖ë◗❷❽⑤⑦t✂②♣②♣♦➒④❵ì✲✉♣❶➼q③q③⑤❙✇❄❶❸♠♣②☎⑧❧❶❏④❵④❙❶■❼➀❶❸✇♥②♣♦⑦⑤➒r➀❶✧✇❽ï✏❷♥❶✮⑤➒í✮♦⑦✇❽r❋①❾❦✘♦❵♦❵✇❄②➆①③ê✄♦❙➀✂ì✲②❖❶❸❼✟①❾❦✘✇❄✇➜♠❖⑤❙ú✻①❾❷❄❼♥② ❶û
♦⑦①③⑧❧②ùr❀①➅❼➀♦❙✇❽✇❽①❾✄ê♠♣②❭➀✂❶☎❦✶❶➜✇❉❦✶ì❜✇❽❦✘♦❵❼❸q➅➃◗⑤❙❷❄q➅❼❽⑤⑦⑧÷❼❩②❖❶➜⑧⑩①③❼✧②ÿï❀❦✘î▲♦❙❼❩✇❞♦❵q③q③ë❇t✜❶❸②❖❦✶❶✭✇❽♦➒①❾ì☞♦❵ì➼②❖❼P⑤➽❼❜✇❽❿❙♠♣♦⑦ï ❶➼r❀❹ ❼❄✉❭①❾♦❇♠♣t✏❼❄❶☎①③❼❄q③①③⑤❙⑤⑦ê♣❷❜④❙q❾❶✲❶❸♦❵②❇❼❩✄ê ✇✜✇❽➀✂ ⑤⑦❶❸♠❖✇❄❦✘❶❸⑤❵✇❽❼✵❼✏❼❜♦❙t➮ë◗rÑ②♣⑤⑦⑤✮♦➒ë◗✇➀ì✲①❾❿❇②❖q❾✉❖❶➜④■①③⑧❧❦❸✈♣④❙⑤⑦✉ ❶ q
✇❄♦❙⑤⑦♠❭❷❽ê ❶➼⑤➒✂➀❶❸✇✟ì✲❦✶♦⑦✇✲❷❄✇❽①❾✇❩♦❙♠♣✇❽r❀❶✧❶✧❷✟❶❸②➮⑤❵⑤⑦①③❦♥②✏④❵♠➆❶✧✇❄②❇❦✧❶❸✇❽q③❷❄⑤❵❼❸t➮❼❄➃➬❼❸❼❞♦❙ï✥❷✟♦❙î▲rÑ✇✲⑤✪❻❵♠❖✇❄❶✧⑤❙❷♥❷❽⑤⑦❼➼ê❖①③❼❞ë◗②♣②♣❶✧⑤⑦❷➜②❖♦➒➃➬ì✲⑧➮♠❖q③✆⑤ ❶❸⑤❙Ñ☎⑧❧❼❏❶➜④❵✈❖❦✶❶✚✇❽❼❄♦❙❶❸❶❸⑧✜⑧❋r❀ï❶✧♦❙ý◗✄ê ❹ ①➅➀✂❼❩♠♣❶➜✇❄❦✶①③❶❸②♣✇♥❼❄❼☞④✏❶✜ú❋✉♣✉♣✉♣q③q③⑤❙⑤❙①➅②❖②❖❦♥ë ❼❼
✈♣✘✲✉ðû✶ï➬ê♣í✮q③♦❧⑧♣❦♥⑧♣ë◆①➤✇❽íÝ①❾♦❵②❖ì✲⑤⑦q③①❾q❾✇❄❿❵♠ð➃➬④❵✇❄♠♣❷❄①③❶✜✉♣✉Ñ⑤⑦④❵❶✧✙❷❶✧②❇❏✘ ✇✚➃P①➅❼✚q③①❾r❣⑤❵✇■❼❄❼❄✈♣✈♣✉ðt✏ì✲❶❸⑧➆♠♣❶❸✇❽❶✧♦⑩q✬♠❭íÝ⑤➽❻❙ì✲❶■①❾✇❄⑤⑦♠ ý◗①③✂❩♦❙⑤❵t➮❦♥ë ❼
⑧❧⑤⑦④❵❶➜❼❄❶✧❦✧②❇❷❄✇❞①③ê♣♠❖①③⑤❙②♣❼✥④☎②❖⑤⑩♦⑦✇❉②❭⑤⑦⑤⑦②✜①③❻❙❶✘❶✏ý❧✉♣✉♣♠◗q③①③❿❧❦✧①➤❼❩✇✬①➅❦✧❼❄❼❏✉Ñ❶❸♦⑦❦✘r✍①❾➂❭✇❄♠❖❦❸❶➮⑤➒✇❄ì❜①③♦❙♦❵②✜❷❄q➅♦⑦⑧❋r➬ï✏⑤❙❦✘➁☞✇❄①③♦➒♦❙ì✍②❖❶✧❼✥❻❙①③❶❸②➮❷❸➃➬❼❩✈❖✇❽♠♣❦♥♠ ❶
⑤⑩⑤⑦④❵ì✍❶✧②❇⑤➽✇❉❿➆⑧❧✇❄♦◗♠❭❶❸⑤➒❼✥✇✟♠❖①➤⑤➽✇➊❻❵❦❸❶✍⑤⑦❼❄②✃✈❖❦♥❷❽♠✏❶❸⑤❵⑤❱❼❩♦❵❼❩②✃✉Ñ❶❸⑤⑦❦✧êÑ①➤➂❭♦❙❦❸✈♣⑤➒✇✚✇❽①❾✇❽♦❵♠♣②✡❶✧①③ê♣❷➊✈♣❼❩✇❉❿◗②♣②❇❶✧✇❄❶➜♠❖⑧♣❶❸❼✥❼❄①③✇❄❼✜♦❱úÛ♦❵❷❽❶✘❷✚➂❖✇❽②❖♠♣❶❙❶ ➃
t➮⑤⑦①③②❇✇❽⑤⑦①③②➆♦❙❷✲❶✧❻❵♦❙q③❻❙❶❏①➤✇✶û✶ï
rÛ✈❖✸❏✈♣❼❄①③q③❶❸❻❙q➬⑧✡❶✧✉❖②■✇❄⑤⑦♦➊❷♥✇❄⑤⑦♠♣⑧♣t✏①➅♦❏❼✥❶✘❼❩✉❖✇❽①❾q③❶✧✇❄⑤❙✈❭❷❽②♣①③⑤➒❼❄②♣✇❄❶❸①③①③⑧✏②♣♦❙②P✛④ ❼❄✉❭❇✚➃➽✇❄⑤⑦❶➜♠♣②❖❦✘①❾❶➼⑧✡➂❭q③❦✧✇❄❶❸⑤⑦♦✟⑤⑦✇❄❷❽①③①③②♣②❖♦❙①③②⑥⑧❧②♣✈❭④☞♦❙❦✘rP✉♣❶✲⑤❙❷❽♠♣♦❙❦✘❶❸✇❄ê❖✈♣①③q❾♦❙❶❸❷❄②❖①➅tÚ❼❩❼✬✇❄①➅ì✲①③❦✧❼❀❼✥♠♣✇❄①③♦❏ì✲❦♥♠⑩♠♣①③②❖①➅❦✧❦♥⑧❧⑤⑦♠➮✈❖②➮❦✧❦✧❶✍⑤⑦êÑ②⑤❶
êÑ❦✘♦❵❶✮t✏✈❖✉♣❼❄❶❸✈❧⑧✏✇❽⑤⑦✇❽✇❄♦■①③♦❙t➮②❖⑤❙⑤⑦q❾ë❵q③❿❂❶✲✇❄✇❄♠❖❷♥❶✮⑤❙❦✘❷❽✇❽❶❸⑤⑦⑤❙ê❖❼❄q❾♦❙❶❵②❖✢ï ①❾②♣♣✜ ④✟✈♣①③❷❄②◗✇❄❻❙♠♣♦❵❶❸q❾❷❸❻❵➃❉❶❸✇❄⑧✜♠♣①❾❶☎②✪⑤⑦✇❄④❵♠♣❶✧❶✚②❇✉♣✇✜q➅⑤⑦❼❄♠♣②♣♦❵②♣✈♣①③②♣q③⑧④
✇❄êÑ❹ ①③❶⑥♦❙♠♣②◆❶■⑤⑦ê♣⑤❵♦⑦q③r➼❦✶❶✜✇❽⑤❙①❾✇❄♦❵❦✶♦ ②✇❽①❾♦❵✘✩❼❩✖✤②❖✉Ñ✣✦❼❸❶❸✥ ➃➬❦✧✖⑥①➤⑤❙➂❭②❖⑤⑦❦❸⑧ ⑤➒②◗✇❽❿ ♠❖①❾♦❵❶✧❶✘②❖✈♣ý❧❼✮❷❽①➅①③❼➀❼❄❼❩✇❽♠♣✇❄①❾①➅②❖♦❙❦✧④➆✈❖❼❸➃➬q③✉❭⑧✭✇❄⑤⑦♠❭êÑ❷♥⑤➒⑤⑦❶■✇✟t✏⑧❧①➤❶✘✇➊❶✧✇❽✇❽❦✧❶✧⑤⑦✈♣❷❽①③①③q③❷❄❼❄❶❸❷❽❶❸⑧☎❶✧⑧ ②❇❶❸✇❽❼❄②♣q❾✉Ñ❿✭♦❙❶❸✈♣♠♣❦✘④❵①❾♦❵➂❭♠☎q③❦❸⑧♣❼❩⑤➒❼❸♦ ñ ï
✇❄②♣♠❭♦❵⑤➒q❾♦❵✇✜④❙✇❄❿ð♠♣❶❸⑤❙❿❚❼✜❦❸❶✧⑤⑦✉❖②÷①➤✇❽♦❙êÑt✏❶✪①③❼❄①③②♣❶❸⑧✹✉♣✈♣ê◗✇✜❿÷✇❽♦✃❦✘♦❵t➮t✜⑤⑦✉Ñ①③❶✘②❖✇❽❼❩①➤✇❄✇❽❷❽♦❙❶❸❷♥⑤❙❼✏t ①❾②✹✉♣✇❄q➅⑤⑦♠❖②❖❶➆②♣î ①❾②❖✁ ④ s ✇❄❶➜ú❣✇❽❦♥♠♣♠❧❶ ñ
ê♣①❾ñ▲⑤❙②♣②◗✈❖⑤⑦q❋①③②❇✇❄❶❸❷❄②❖⑤⑦✇❄①③♦❙②❖⑤❙q➬✉♣q③⑤❙②♣②♣①③②♣④➮❦✘♦❵t✜✉Ñ❶✘✇❽①➤✇❽①❾♦❵②❭û✶ï
➌ ✧✔✩❭✫➐ ✪✭✬✯✻✮ ➑❧✱➏ ✰✲✩✍✳✴✩✍✲✵ ➏✱✶✸✷✺✹❱➐✫❭✩ ✯➍ ✮✤✷✻✪
★
✞❂❶❱rÛ♦❙❷❽t■✈♣q➅⑤➒✇❽❶✚✇❽♠♣❶✟q③❶❸⑤⑦❷❽②♣①③②♣④✜✉♣❷❄♦❵ê♣q③❶✧t ⑤❵❼✍rÛ♦❙q③q❾♦➒ì☞❼✼✂
✝✁✞✠✟☛✡☞✟✍✌
program Opmaker2

INPUT:
  - partial domain model
  - training sequence TS with n actions; each a in TS has components
    a.name, a.prevail, a.changing : name, unchanging objects, changing objects
OUTPUT:
  - parameterised action descriptions and HTN methods

1.  Definitions:
      x.c : current state of an object x
      x.s : sort of object x
      x.f : final state of an object x
      S^c : state class of any ground state S
      x^c : a distinct parameter which ranges through the sort of object x
      V^s : set of all sorts of parameters and objects in expression V
2.  for each a in TS do
3.    form P = list of x^c for all x in a.prevail + a.changing
4.    for each x in list a.prevail do
5.      store component of the prevail (x.s, x^c, x.c^c)
6.    end for
7.    for each x in list a.changing do
8.      if x is not affected by actions in the rest of TS
9.      then let F = x.f^c
10.     else choose F from the state classes of x.c such that
          P^s contains F^s
11.     store transition tr = (x.s, x^c, x.c^c => F)
12.     match free vars in tr with those in P
13.   end for
14.   form actions from cross-product of all stored transitions
15.     such that the actions are consistent with invariants
16.   end for
17. end for
18. produce a method from the sequences of actions as in Opmaker.

procedure match free vars in tr with those in P
1. repeat
2.   for each parameter p in transition tr, p /= x^c,
3.     choose a parameter q in P to match with
4.       p such that q /= x^c and sort(p) = sort(q)
5.   end for
6. until the parameter match set is consistent
7. end

Figure 1: Outline Design of the Opmaker2 Algorithm
dynamic object; and (ii) use the techniques of the original Opmaker algorithm (McCluskey, Richardson, & Simpson 2002) to generalise object references and create parameterised operator schema from the specific object transitions extracted in (i) from the training examples.

To illustrate the main innovations of the method, we will use an example walk-through taken from our empirical evaluation involving an extended tyre-change domain. Assume a training sequence is input into Opmaker2, and this has components as follows:

name: do_up; prevail: wrench0, jack0, trim1; changing: hub1, nuts1
name: jack_down; changing: hub1, jack0
name: tighten; prevail: wrench0, hub1, trim1; changing: nuts1
name: apply_trim; prevail: hub1; changing: trim1, wheel5

This illustrates a short procedure for making a car wheel ready for operation once it has been hung on to an appropriate wheel hub. Informally, do_up is the operation of putting the nuts on the hub of a wheel when it is jacked up. The names such as wrench0, hub1 are references to actual objects. The prevail objects have to be necessarily present in a particular state but remain unaffected ('wrench0' is available, 'jack0' is jacking up the wheel, 'trim1' is hub1's wheel trim and has to have been removed). These objects need to be in particular states for the action to execute, and those states 'prevail' or stay the same during execution of the action. The 'changing' objects change state (hub1 becomes fastened up, the nuts1 are fastened up).
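The training-sequence components above can be captured as simple structured data. The following is a minimal Python sketch of that input format — our own illustration, not code from the Opmaker2 system; the class and field names are invented, while the action and object names are those of the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrainingAction:
    """One step of a training sequence: a name, the 'prevail' objects
    (required in a given state but unchanged), and the 'changing' objects."""
    name: str
    prevail: tuple
    changing: tuple

# The walk-through's four-step sequence for readying a car wheel.
sequence = (
    TrainingAction("do_up",      ("wrench0", "jack0", "trim1"), ("hub1", "nuts1")),
    TrainingAction("jack_down",  (),                            ("hub1", "jack0")),
    TrainingAction("tighten",    ("wrench0", "hub1", "trim1"),  ("nuts1",)),
    TrainingAction("apply_trim", ("hub1",),                     ("trim1", "wheel5")),
)

# All objects referenced anywhere in the sequence.
objects = sorted({o for a in sequence for o in a.prevail + a.changing})
```

A learner consuming this structure knows, for each step, which objects must merely hold their state (prevail) and which must undergo a state transition (changing).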
To illustrate some of the definitions in Line 1 of the algorithm in Figure 1, we have components of an object as follows:

hub1.c = {unfastened(hub1), jacked_up(hub1, jack0)}
hub1.f = {on_ground(hub1), fastened(hub1)}
hub1.s = hub

Examples of other operations are (h and j are parameters):

hub1.c^c = {unfastened(h), jacked_up(h,j)}
hub1.c^s = {hub, jack}

Line 2 iterates through all the training examples. For the first training example, the problem is to determine what the new states are of hub1 and nuts1. In Line 3, let P = {w, j, t, n, h}. In Lines 4-6, the prevail components are got from the current state classes of wrench0, jack0 and trim1, as in the original Opmaker algorithm. The loop starting on line 7 is intended to determine the destination of each object that is changed by the action being learned. hub1 is the first changing object. From the given partial definition of the domain, it has four state classes which we name S1-4:

S1 = {on_ground(h), fastened(h)}
S2 = {jacked_up(h,j), fastened(h)}
S3 = {free(h), jacked_up(h,j), unfastened(h)}
S4 = {unfastened(h), jacked_up(h,j)}

hub1's current state is not necessarily its final one, as in the training sequence it is referred to again (in the second action of the sequence, jack_down) as a changing object. Hence line 10 is executed. F cannot be S4 (since this is currently the generalisation of the object's current state, and the object has to change state class). In Line 11, P^s = {wrench, jack, trim, nuts, hub} contains all the sorts in each of state classes S1, S2 and S3, and so this does not narrow down the choices. Hence 3 transitions are stored:

(hub, h, {unfastened(h), jacked_up(h,j)} => {on_ground(h), fastened(h)})
(hub, h, {unfastened(h), jacked_up(h,j)} => {free(h), jacked_up(h,j), unfastened(h)})
(hub, h, {unfastened(h), jacked_up(h,j)} => {jacked_up(h,j), fastened(h)})

Iteration of line 7 with object nuts1 occurs next. It has three states:

T1 = {tight(N,h)}
T2 = {loose(N,h)}
T3 = {have_nuts(N)}

This leads to 2 possible transitions:

(nuts, N, {have_nuts(N)} => {tight(N,h)})
(nuts, N, {have_nuts(N)} => {loose(N,h)})

and hence 6 possible induced action schema (line 15). These six options are then checked for consistency with the domain invariants, which are shown in Figure 2. The conjunction of state constraints in both the LHS and RHS of transitions of the newly formed action schema must be consistent with these invariants. In cases where they are not, the action schema is discarded.

This reduces the number of options to a single action schema. Processing of the other 3 actions in the training sequence leads to a single interpretation of state changes, as the changing objects involved are all in their final states, and hence 3 more generalised action schemas are generated. Finally, a hierarchical method is generated (line 18) by combining the 4 action schema in a similar fashion to the original Opmaker system (McCluskey, Richardson, & Simpson 2002).
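The candidate generation and filtering at the heart of this walk-through — every changing object may move to another state class, candidate schemas are the cross-product of the per-object transitions, and inconsistent combinations are discarded — can be sketched as follows. This is our own reading of the algorithm in Figure 1, not the authors' implementation; the atom spellings are ours, and only invariant 7 of Figure 2 is encoded, so here two of the six candidates are rejected rather than five.

```python
from itertools import product

def candidate_transitions(current, classes):
    """All transitions (current -> F) with F a different state class."""
    return [(current, f) for f in classes if f != current]

def induce_schemas(per_object_transitions, consistent):
    """Cross-product of per-object transitions, kept only when the
    combined pre-state and post-state both pass the invariant check."""
    kept = []
    for combo in product(*per_object_transitions):
        pre = frozenset().union(*(t[0] for t in combo))
        post = frozenset().union(*(t[1] for t in combo))
        if consistent(pre) and consistent(post):
            kept.append(combo)
    return kept

# State classes from the walk-through.
S1 = frozenset({"on_ground(h)", "fastened(h)"})
S2 = frozenset({"jacked_up(h,j)", "fastened(h)"})
S3 = frozenset({"free(h)", "jacked_up(h,j)", "unfastened(h)"})
S4 = frozenset({"unfastened(h)", "jacked_up(h,j)"})
T1 = frozenset({"tight(N,h)"})
T2 = frozenset({"loose(N,h)"})
T3 = frozenset({"have_nuts(N)"})

hub_trans = candidate_transitions(S4, [S1, S2, S3, S4])   # 3 candidates
nuts_trans = candidate_transitions(T3, [T1, T2, T3])      # 2 candidates

def inv7(state):
    """Invariant 7: nuts tight on a hub imply the hub is on the ground."""
    return "tight(N,h)" not in state or "on_ground(h)" in state

schemas = induce_schemas([hub_trans, nuts_trans], inv7)
```

With the full invariant set of Figure 2 the six candidates reduce to a single schema, as described above.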
Experiments and Results
The method has been implemented and merged with the original Opmaker system. We are using the same experimental approach as we used to test the original system:

- We hand-craft training sequences from a range of domains, selecting actions that will build sensible methods for that domain.
- We use Opmaker2 to induce actions and hierarchical (HTN-type) methods from the training sequences.
- Using standard planners, we compare performance using old hand-crafted action schema to the use of induced schema.

Success will be judged using the following criteria:

- If a valid set of unique new actions is defined as actions that can solve the same problems the original training sequences were aimed at, can Opmaker2 induce these without having to encode a great deal of invariants into the domain models?
- Is it more efficient, in terms of effort time, to construct a domain using Opmaker2?
- Is it at least as efficient, in terms of planning time, to reach goals using Opmaker2-defined actions and methods?

Up to now we have experimented with 2 domain models: the extended tyre world, and the hiking domain (see http://planform.hud.ac.uk/gipo for details of these). Since induction sequences deliver several actions and a single method, initial sequences were tailored to pro-
1. Equivalence between hub fastened and nuts tight or loose on hub.
   ∀h∈hub : fastened(h) ⇔ ∃n∈nuts : (tight(n,h) ∨ loose(n,h))
2. Equivalence between jack_in_use and jacked_up.
   ∀h∈hub ∀j∈jack : jack_in_use(j,h) ⇔ jacked_up(h,j)
3. Equivalence between hub not free and wheel_on hub.
   ∀h∈hub : ¬free(h) ⇔ ∃w∈wheel : wheel_on(w,h)
4. Equivalence between trim_on_wheel and trim_on.
   ∀t∈wheel_trim ∀w∈wheel : trim_on_wheel(t,w) ⇔ trim_on(w,t)
5. Only a single set of nuts can be on a hub.
   ∀h∈hub ∀n1∈nuts ∀n2∈nuts :
     (tight(n1,h) ∨ loose(n1,h)) ∧ (tight(n2,h) ∨ loose(n2,h)) → (n1 = n2)
6. Only a single wheel can be on a hub.
   ∀h∈hub ∀w1∈wheel ∀w2∈wheel :
     wheel_on(w1,h) ∧ wheel_on(w2,h) → (w1 = w2)
7. Domain constraint: if nuts are tight on a hub then the hub must be on the ground.
   ∀h∈hub : (∃n∈nuts : tight(n,h)) → on_ground(h)
8. Domain constraint: if a trim is on a wheel, then the wheel is on a hub and the nuts are tight.
   ∀w∈wheel : (∃t∈wheel_trim : trim_on_wheel(t,w)) →
     ∃h∈hub : (wheel_on(w,h) ∧ ∃n∈nuts : tight(n,h))

Figure 2: Invariants encoded in the Extended Tyre World
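Invariants of the kind listed in Figure 2 are straightforward to turn into executable consistency checks over a ground state. The sketch below is our own illustration, not GIPO code; it encodes invariants 1 and 5 against a state represented as a set of atom strings, with atom spelling chosen for illustration.

```python
def inv1_fastened_iff_nuts(state, hubs, nuts):
    """Invariant 1: a hub is fastened iff some nuts are tight or loose on it."""
    return all(
        (f"fastened({h})" in state) ==
        any(f"tight({n},{h})" in state or f"loose({n},{h})" in state
            for n in nuts)
        for h in hubs)

def inv5_single_nuts_per_hub(state, hubs, nuts):
    """Invariant 5: only a single set of nuts can be on a hub."""
    for h in hubs:
        on_h = [n for n in nuts
                if f"tight({n},{h})" in state or f"loose({n},{h})" in state]
        if len(on_h) > 1:
            return False
    return True

consistent_state = {"fastened(hub1)", "tight(nuts1,hub1)", "on_ground(hub1)"}
bad_state = {"fastened(hub1)"}  # fastened, yet no nuts on the hub
```

Checking a candidate action schema then amounts to running every invariant over the conjunction of its LHS atoms and, separately, its RHS atoms.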
duce a meaningful method, and sufficient initial sequences were composed to cover all the major sub-tasks that could be required by the domain. In each case the agent began by knowing domain knowledge but had sketchy or non-existent facts about its potential actions. For the Extended Tyre World we devised 7 sequences of between 2 and 5 actions in length. After adding 8 invariants to the domain we induced a set of actions and methods, and using these we produced a domain with 22 actions and 7 methods. The new version was tested over 8 tasks in two ways - firstly using just actions in the planning, and secondly using either just methods, or a combination of methods and actions. To illustrate the results, two of the actions that were induced from the running example were as follows:
operator(tighten(wrench0,hub1,nuts1,trim1),
  [se(wrench,wrench0,[have_wrench(wrench0)]),
   se(hub,hub1,[on_ground(hub1),fastened(hub1)]),
   se(wheel_trim,trim1,[trim_off(trim1)])],
  [sc(nuts,nuts1,[loose(nuts1,hub1)]=>
      [tight(nuts1,hub1)])],
  []).

operator(jack_down(hub1,jack0),
  [],
  [sc(hub,hub1,[jacked_up(hub1,jack0),fastened(hub1)]=>
      [on_ground(hub1),fastened(hub1)]),
   sc(jack,jack0,[jack_in_use(jack0,hub1)]=>
      [have_jack(jack0)])],
  []).

Where just actions were used in planning, plan times for short plans of up to 10 to 12 actions were about the same as for the hand-crafted version of the domain. For plans longer than 12 actions both versions took increasingly long times to solve. However, where methods or combinations of actions and methods were used, plan times were significantly shorter. The full planning problem for this extended domain is defined to be: "A car is found to have two flat tyres; one is found to be flat and can be fixed by use of the pump, whilst the other is punctured and requires the full tyre change described in the previous version of the domain". Using just actions, no solution was found to this problem after 30 hours, but using methods and just a few actions a correct solution
was found after 11 seconds.
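An induced OCL operator of the kind shown above pairs prevail conditions (se clauses) with object transitions (sc clauses). A minimal sketch of reading one operationally — our own illustration, not GIPO/Opmaker2 code — over a state held as a set of ground atoms:

```python
def applicable(state, prevail, transitions):
    """True when every prevail condition and every transition LHS holds."""
    return (all(p <= state for p in prevail) and
            all(lhs <= state for lhs, _ in transitions))

def apply_op(state, transitions):
    """Remove each transition's LHS atoms and add its RHS atoms."""
    out = set(state)
    for lhs, rhs in transitions:
        out -= lhs
        out |= rhs
    return out

# jack_down, transcribed from the induced operator: empty prevail list,
# two object transitions (for hub1 and jack0).
jack_down_prevail = []
jack_down_trans = [
    ({"jacked_up(hub1,jack0)", "fastened(hub1)"},
     {"on_ground(hub1)", "fastened(hub1)"}),
    ({"jack_in_use(jack0,hub1)"},
     {"have_jack(jack0)"}),
]

state = {"jacked_up(hub1,jack0)", "fastened(hub1)", "jack_in_use(jack0,hub1)"}
new_state = (apply_op(state, jack_down_trans)
             if applicable(state, jack_down_prevail, jack_down_trans)
             else state)
```

The prevail conditions constrain applicability only; the sc transitions are the sole source of state change, which mirrors the prevail/changing split in the training sequences.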
Experiments with the hiking domain are at an earlier stage. As yet we have not added sets of invariants to the domain, and without these we do not obtain a unique set of actions from the examples generated for induction; but already we have identified potential methods for this domain from example sets of no larger than 6 actions.

From the results obtained so far we can conclude that an agent, given a 'working knowledge' of domain objects and states, and with sequences of actions available, will be able to generate parameterised actions and to use them to plan efficiently and autonomously, without having to be supplied with every possible object combination.

In (Nejati, Langley, & Konik 2006) the authors describe how skills, in the form of reactive programs, are learned from expert traces. The methods derived from the structure of the sample traces are used to produce new hierarchical structures. The method is applied to 'Depots', which involves crates that can be loaded into trucks and stacked; however, the domain so constructed resulted in the successful solution of very few problems.

An approach to learning for HTN planning is introduced in (Ilghami et al. 2005), where a learner is presented with plan traces; in the early work complete information about method preconditions was required, a limitation overcome in later work by the same group with the 'HDL' algorithm (HTN Domain Learner) (Ilghami, Nau, & Munoz-Avila 2006), which learns method descriptions from plan traces, although between 70 and 200 plan traces are required to induce the descriptions. 'HTN-MAKER' (Hogg & Munoz-Avila 2007) receives as input a STRIPS domain model, a collection of STRIPS plans and task definitions; the hypothesis is that after a few problems have been analysed, an HTN domain model will be obtained that is able to solve most solvable problems. A version of the logistics-transportation domain is chosen for the experiments and good results are obtained; however, the good results are not replicated for the blocks-world domain. One problem is that the large number of meth-
ì✲❼❩✈♣♠♣t✏①③❦♥❶❱♠✡⑤⑦♠❖②♣⑤➽♦❙❻❙✇❄❶✬♠♣✇❄❶❸♦❏❷❸ï ê❭❹ ❶➼♠♣q③❶❸❶❸⑤❙❿⑥❷❄②♣❼❩✈♣❶➜⑧❋④❵④❙➃➒❶➜ì✲❼➀♠♣✇➼❶❸❦♥❷❄♠♣❶✍♦◗♦❙♦❵②♣❼❄❶✍①❾②♣t✏④➊❶✘✇❄✇❽♠♣♠♣❶❱♦❧t✏⑧■♦❵t✏❼❩✇❜①③④❙④❵♠❇❶✧✇✥②♣❼❄❶❸✈♣❷❽ê❧⑤❙ñ q
✇❄t✜t✏♠♣❶✧⑤❙❶☎✇❄②♣♠♣✉♣②♣♦❧q➅❶❸⑤⑦⑧✭❷❸②❖ï ì✲②♣❶✧♠♣❷➮❶❸❷❄✇❽❶➊♦ð✇❄✈❖♠♣①➅❼❄❼☞❶☎①➅t✜❼✲✇❽❶✧♠♣✇❄♠♣❶✡♦❧❦❸⑧♣⑤❙❼⑥❼❄❶❙①❾ï✲② í☞⑤❙②♣②ü♦❙✇❄①③②❧♠♣➂❖❶❸❷✮②❖①➤✉♣✇❽❷❽❶✧♦❙q③❿✹ê❖q❾❷❽❶❸❶❸t❝❦✧✈♣①➅❷❽❼✲❼❄①③rÛ❻❙♦❵❶❷
✑ å❀â❜è✛▲✕ ç✴❀❵é➱å✵✴â ❀
❼①❾➀þ✚②♣✇❽✈♣❷❄④⑥✈❖❷✏⑤➽❦✘❻➒ì❜✇❄✈♣⑤❙♦❵❷❽①❾❷❄q➅❶❸⑤⑦ëð⑧✡ê♣⑤⑦q③❻◗❶❙②❭①❾ï❶❸⑧÷ì✹✞❨✇❽♦❙♠♣♠♣r➬❶☎❶❸⑧♣❷❄❷❄❶➜♦❙❶➜⑤❙t➮❼❩❼✲✈❖⑤⑦①③q➤①③②☎✇♥②✏❼✡✉♣ë❇❷❽❷❽②❖❶✧♦❙♦➒✉Ñ✉Ñì✲♦❙♦❵❷❄q❾❼❄❶➜✇❄①➤❶➜⑧❧✇❽⑧÷①❾④❙♦❵❶➼②❖♠♣⑤⑦⑤⑦❶❸êÑqÕ❷❄➃❖❶☎♦❙❦✘✈❧q➅⑧❧✇❉⑤❙❶❸❼❽♦❵✉❭❼❄ê✄①③❶❸❦❸✂➀②❖⑤⑦❶❸⑧ü❦✘q➄✇❽✉♣❼❞♦❙q➅②ü⑤⑦êÑ②♣❶✘⑤ ññ
②♣⑤❙❼❽①❾②❖❼❩✈♣④✪t✏❼➀✇♥❶✲⑤➒✇❽✇❽♠❖❶❸❼✮⑤➒✇✍⑤❙✇❄❷❄♠❖❶❱❶❏r✻⑤❙❼❄①❾✉❖❷❽⑤❙q③❿⑩❦✧❶✮⑤⑦♦❙❷❽ê♣r❀①❾❼➀✇❄✇♥❷♥⑤➒⑤⑦✇❽❷❽❶❸❿⑩❼❜❼❄①③❶✘❼❜✇♥❷❄❼☞❶➜♦⑦❼➀✇❽r❞❷❄✉♣①➅❦✶❷❽✇❽♦❙❶❸✉Ñ⑧➮♦❵①③❼❄②⑥①❾✇❄✇❽①③♦❙♠❖②❖⑤➒❼❸✇➼➃♣♦❙ì✍ê♣❶ ñ
✂➀❶❸❦✶✇♥❼❱⑤⑦❷❽❶✡✉♣❷❽❶✘ñ➲❦✘♦❵②❖❦✘❶❸①❾❻❵❶❸⑧➆✇❽♦➆♠❭⑤➽❻❙❶✡⑤✪➂♣ý❧❶❸⑧◆❼❩❶✧✇✟♦⑦r✬✉♣q➅⑤⑦✈❭❼❩①③ê♣q③❶
t✜❼➀❼➀✇♥✇♥⑤➒⑤➒❶✧✇❄✇❄✇❄♠♣❶➜❶✮❼✧♦❧♦⑦ï ⑧◆r➬✇❄rÛ✞❨♠♣♦❙❶❏❷■①❾✇❄⑤⑦①❾♠♣②❖❷❄①③✇❉⑧♣② ✈❖①❾②⑥❦✘✇❽♠♣①③✇❄②♣①③♠❖❼⑩④ ⑤⑦✇❜rÛ⑤❙❷♥❦✘①❾⑤⑦✇✬✇❄t✏①③❷❄♦❙❶➜❶✧②ðö❇ì✍✈♣❼❽♦❙①❾❦♥❷❽❷❽♠♣ë➬❶❸❶✧❼❉➃✲t➮②♣ì❜⑤➆♦➊❶◆✇❄①③②❵♠❖♠❭✇❽⑤➽⑤➒❶✧❻❙✇■❷❽❶◆t✏⑤❙⑧♣⑧❧❶❸❻➽⑧❧❶❸⑤❙❼❽①➅⑤➒❦✘②❖✇❽❷❽❦✘❶✮①❾❶➜êÑ❼❱❼➀❶❸✇♥⑧Ò✇❄⑤➒♠❖✇❽⑤❶❶
①❾②❧rÛ♦❵❷❄t➮⑤⑦✇❄①③♦❙②➄➃♣♦❵❷✮q➅⑤⑦❷❽④❙❶❱②◗✈♣t■êÑ❶✧❷♥❼☞♦❙r✥✇❄❷♥⑤⑦①③②♣①❾②❖④➮❶✘ý♣⑤⑦t✏✉♣q③❶❸❼❸➃♣✇❽♦
ã ✟ æ✁ å❀ä✄✂
❻rÛ❙❹ ❷❽❶❸♦❙♠♣q❾t♦❵❶✮✉❭⑤❙❶➜❶✘✈❧⑧✃ý♣✇❄⑤⑦♠❖⑤➆t✏♦❙❷♥❼❩✉♣❼✥❿❧q③❼❩♦⑦❶❸✇❄❼❸r❂❶❸ï t✾ú ✸❱❹ úÕ⑤⑦♠❖s❜❷❽❶✧q③♦❙①③⑤❙❷✡q③②❖q➅⑤⑦ì✍⑧❋④❙♦❙➃❇❶❸❷❽õ✲②❭ë◆û✮❿❵⑤❙①➅ì✲❼✜q❾qÕ♠♣➃✣❼❩①③①③ß✚❦♥t✏♠◆①❾õ✲q➅q❾⑤⑦❶➜①➅❦♥❷■⑤⑦♠ ❷❽✇❄②❖♦❂✜✗❼✚✢✗♦❙❖✢✇❽✈♣⑤❵➈➜❼❩❷♥û❀ë☎❼➊♠❖①③⑤➽t✏②ð❻❙♦❧❶☞✇❄⑧❧♠❭⑧♣❶✧⑤➒❶✘q➅✇❼ñ
✇❄✇❄♠❖♠❖❶✧❶❸❿➮❼❄❶✭❼❩♠❖❦✧♦❙♦➒②❇ìü✇❽⑤❙♦❵①❾❷❽②ü⑧♣❶✧êÑ❷❽♦⑦①❾②♣✇❄④❇♠ÿ❼❞✉♣♦⑦❷❽r❋①③t✜✇❽♠♣①❾❶✚✇❄①③✇❽❻❙⑤❙❶➜❼❄❼➮ë✡⑤⑦✇❄②❖♦✡⑧✹⑤❵❦♥②♣♠♣♦❵①③②❧❶✧ñÕ❻❙✉❖❶☞❷❄✇❄①③t✏♠♣❶✮①➤✇❽✇❽①❾❻❵⑤❙❼❄❶❸ë➮❼❸ï✹⑤⑦②❖î➲②⑧
⑤②♣ðú ✞❂❶❸❼❄❶❸✈➄❿◗⑧❨❼❩➃✄✇❄✬☎rÛ❶❸♦❵t⑤⑦❷⑩②♣④❭✈❭①③②❨➃✳❼❩❶❸✝✚ ❷⑩ì✲✆❵♠♣①③②❇①③①➅⑤❙✇❄❦♥②♣❶✧♠❨❷❽④ ❻❙♦❙✜✆❶❸✉Ñ②❇✢✣❶✧✇❄✢✣❷♥①③✲❙♦❙⑤➒ûP✇❄②➄♦❵✇❽ïÙ♠♣❷❽❼✏❶✮➁☞⑤❙⑤⑦♦➒❷❄✈♣ì✍❶☎✇❄♠♣❶✧q③❻❵♦❵❶❸❶✧⑤❙❷❽❷⑩❼❞❷❄②♣⑧❧í✚❶➜❶➜⑧ùõ☞❼❄❦✧ô✃ì✲❷❄①③êÑ①❾❥ ✇❄❶✮♠❖❷❄♦❙í✮❶➜✈❧ö❵õ☞✇⑥✈❖ô◆①❾✇❽❷❽♠♣❶❸❥➬❶ ❼ ➃
t➮ö❇✈♣⑤⑦❶✧②◗②❭❿✹❦✘❶❸✇❄❼❸❷♥➃◗⑤⑦⑤⑦①③②♣②❖①❾⑧✏②❖④ù✉♣❷❽❶❸❶✘ý♣❼❄❶✧⑤❙②❇t✜✇❽q❾✉❖❿✜q❾❶➜①③❼✍❼✪❦✧❦✘⑤❙♦❵✉❖②❇⑤⑦✇❽ê❖⑤⑦①③q❾②♣❶☞①③♦❙②♣r❋④÷①❾②❖❻➒⑧♣⑤⑦✈❖q③①➅❦✘⑧ ①③②♣❼❩④■♦❵q❾♦❵✈♣②♣✇❄q③①③❿ ♦❙② ✠ ☛❭⑤➒❼❄❶✘✇➜ñ ó
⑧❧♦❵t✏⑤❙①❾②❭❼✧ï
êÑ❶✧þ✚②❭♦⑦✈♣❦✧✇❄❷➼⑤⑦♠➆✉❭ì✍⑤❙❼❩♦❙❦✘✈♣❷❽✇❄q➅ë➮①③⑤➒♦❙✇❽②⑥①③①❾❼✲②❖❼❽④❂⑤❙❦♥q③♠♣❼❄❼❄♦✜❶✧❶✧t➮❻❵⑤❙❶✧⑤■①❾❷♥t✏⑤⑦⑤❙q➼❶➜②❖⑧⑩❼❄⑧➮❦♥⑤⑦♠♣♠♣✇✍❶❸①③❶✧t➮q③❶❸❷♥⑤⑦⑤❙⑤♣❷♥❷❄ï ②♣❦♥♠♣①③✁ ②♣①➅❦✧④✏❷❽⑤❙⑤❵q❖⑧❧❦✶✇❄♦❙❼❄①➅❦♥t➮❦✧♠❖⑤❙⑤❙❶✧q❜t➮①❾②❖✉❖⑤✪❼✲q③⑤❙❦✧ú✻②♣t✜♦❙②♣②❇❶✧①③✇❽②♣✇❄⑤❙♠♣④❂①❾②❖♦❧⑧♣⑧♣①❾②♣♦⑦❼♥④ ûñ
t➮⑧❧❶➜⑤⑦❦✘①③♦❙②❖t✏❼❱✉Ñ⑤⑦♦❵❷❽❶✡❼❄①➤✇❽ê❭①❾♦❵⑤❙②➄❼❄❶❸ï ⑧ ❹ ♦❵♠♣② ❶✜✠ ❦♥♠♣♠❖①③❶✧①❾❶✧❷♥r❜⑤⑦❷♥⑧❧❦♥①✰♠♣☎Ñ①③❶❸❦❸❷❄⑤⑦❶❸qP②❖✇❽❦✘⑤❙❶✡❼❄ë☎ê❭❶✧②♣✇➀❶✧ì❜✇➀❶❸ì❜❶✧♦❵② ❷❄ë➬✇❽♠♣ó✥❶✏ú✻➁ ➁ ❹ ❹ ò❏ò û
✉❖✠➺❦✘⑤❙♦❙❷❽t✏⑤❵⑧❧✉Ñ①❾♦❙④❵✈♣t ②❭⑧❋⑤⑦óÑ②❖✇❽⑧✹⑤❙❼❄❦✘ë❧q➅❼✚⑤❙❼❽❦❸❼❩⑤⑦①➅❦✧② ⑤❙q✍êÑ❶➮⑧❧♦❵⑧❧t➮❶❸❦✧⑤⑦♦❙①③②❖t✏❼■✉Ñ①➅♦❵❼✡❼❄❶❸✇❄⑧☎♠❖⑤⑦①③✇➮②❇✇❄①❾♦⑩②ù✇❄✇❄♠♣♠♣❶✏❶➆❼❩rÛ①③♦❙t✏❷❽✉♣t✏q③❶✧❶✧❷❷
❦ì✍✧✠ ✇❽⑤❙⑤❙♦❙②✏❼❄❷❽ë❧ë❙ê❭❼✧❶❸❶✮ó❭⑧➊⑧♣✉❖①❾②✜①⑤⑦❄⑥❷❄✉♣✇❄❦✘①➅❷❽✈❖❦✘♦◗✈❖q➤⑧♣✇❉q③✈❖⑤❙✇❽❷✲❦✘♦➊①③✇❄②♣❦✘♦✪④✚♦❵②❖❦✧✇❄❼➀q③♠♣⑤❵✇❽❶➜❷❄❼❄❼❩✈❭❼❄❶➼①➅❦✶❦✧✇❉✈❖⑤⑦❼❄t➮q❀①❾②❖⑧❧⑤⑦④✚♦❙②◗t➮✈❖t✏⑤❙⑤⑦❶✘q❾①③✇❽q③②❖❿■♠♣❼❸♦❧⑤⑦ï☞⑧♣②❭➁☞❼✵⑧➮♦➒rÛ❷❽⑤⑦ì✍♦❙✈❧❶✧tÚ✇❽❻❵♠♣❶✧t➮♦❙❷☞❷♥⑤❙❼✥➁ ❦♥♠❖❹♠♣⑤➽①③ò✚②♣❻❙❶❶❼
✚ ò✮⑤❙✈ ➈✟✞✠✞ ✢❵û✪✇❄♠♣❶÷⑤❙✈❧ñ
✇✇❄❄q③❶❸♠❭♠❖⑤⑦♦❙⑤⑦❷❽②❚❷♥②♣❼⑥①③✇❄②♣⑤⑦♠❖④❖❷❽♦❵ï✤
④❙❼❄✈♣❶⑥❶✭î➲♦⑦② ✇❽r☞♠❖❦✘⑤➒q➅ú✻✇➆⑤❙⑨✬❼❽❷❄➁ ❼❄♦❵①③❹❦❸q➱➃➊⑤⑦ò✣q✬➁✮⑧❧♦❵❶✧♦❵②❖✉❭t➮⑧♣❶❸❷❽q❾⑤⑦❶❸⑤⑦①③❷❸②❖✇❄➃✽♦❙❼➊❷♥❤
❼➮⑤❙❼■⑤⑦ì❜❷❽❶☎❶❸q❾t✏q✍⑤❙♦❙❼✟❷❽❶☎êÑ❶✘❶✧ý❧①③②♣✉♣④ ❷❽❶❸t✏❼❽❼❩♦❵①③❻❙❷❄❶❶
✇❄❶❆✜✆①③⑥❄ ✢✣♦❙✢✚②❭❦✧❙✣ ①❾❼✧❶❸û✶ó❧②❵ï❞ú✻✇➜➁✾⑨✬ï ✻P⑤❵❹ ❦♥í✚♠⑩♠❖❼❽❶✧ûP➁ ♦❙①③❷❽✻❀❼✵❶✘íÚ✉♣✇❽①③❷❽❦❸⑤❙❶❸⑤⑦❼❄⑧❧q❋❶✧t✏②❇✈♣✇❽①❾②❭✇❽❶❸⑧❧❼✲⑧✟❶✧♦❵①③❷❽②②♣✉♣❶❱①③②♣ú➱♦❙ô②♣❷✲①③②♣⑤⑦t✏❷❄④✏✇❄♦❙♠❖rÛ❷❽♦❙①➱❶✓➃✝❷ ❂✞ ✘✩✠➺➁☞✤✖ ✣✦♦❙①③q❾✥④❙rÛ❶❙♠ ✖✙✆➃✏ ✻P✚ ✖❉❶✧✥ ❻❵õ☞✬ ❶✧✎✬✈❖qP❼❄①③í✚②❇❼❄❶✧✇❄❦✶q③♦ ñq
q③❼❄❼❄❶✧♦❙❶❸❻❙ö❇✈♣❶❸✈♣②❖q❱❶✧⑧❨②❭♦❙❦✘❷➆⑤❙❶❸②❖❼✪✉♣⑧ ❷❄♦⑦①③t✏❦✧r➊♦❙①➤⑤❙t✏✇❽❦✘①❾❻❵✇❄✉♣①③❶❙q③♦❙❶✘ï ②❖✇❄❶◆❼❸❹ ➃✲⑤⑦♠❖ì✲q③❶◆④❙♠♣♦❙❶❸✉❖❷❽❷❄①❾⑤❙❶❂✇❄✉❭♠♣❶❸⑤⑦t❤❷✭② ①❾ì✲⑤❙②❇❦✘♠♣✇❄✇❄❷❽①➅①③❦♥♦❧♦❙♠❨⑧❧②❨✈❖①③❦✧❼⑩t✏❶❸❼➆①❾①③t✏④❙♠❇⑤ù✉♣✇⑩q③✉♣❶✧êÑ❷❽t✏❶✃♦➒❶❸❻➒②❵♠♣⑤⑦✇❽ê♣①③④❙❶❸q③♠⑧❿
✈❖⑤❙⑧♣❼❄①❾❻➽②❖⑤❙④ü②❇✇❽⑤ÿ⑤⑦④❵❥ ❶✟❹ ♦⑦õ✲r î ✠➺✁❼❩♦❵❥❇✈♣ñÕ②❖q③①③⑧☎ë❙❶✃⑤⑦②❭q➅⑤⑦⑧☎②❖④❙❦✘♦❙✈❖t✏⑤❙④❙✉♣❶❙q③ï❶✘✇❽❶❙❹ óÑ♠♣⑧❧❶❚❶❸❼❽❦✘⑤⑦❷❽q③④❙①❾✉♣♦❵✇❄❷❄①③①❾♦❙✇❄②❖♠❖❼✚t ⑤⑦②❖✇❽⑤⑦⑧❋ë❵➃❭❶❸①➤❼r
❁ ✟ ✕✌
91
①③②❖⑧❧✈❖❦✧❶➊⑤✡❻➒⑤❙q❾①➅⑧✭⑤❙❦✶✇❽①❾♦❵②➆❼❽❦♥♠♣❶❸t✏⑤✏❼❄❶✘✇➜✦ï ✜♣✈♣❷❄✇❄♠♣❶❸❷❸➃❖♦❙✈♣❷✲✉♣❷❽❶✧q③①③t✜①❾ñ
❖② ⑤❙❷❄❿✚❷❄❶➜❼❩✈♣q❾✇❽❼❀❼❩♠❖♦➒ì✃✇❽♠❖⑤➒✇❀✇❄♠♣❶❉♠❖①❾❶❸❷❽⑤❙❷❽❦♥♠♣①➅❦✧⑤❙q⑦t✏❶✘✇❽♠♣♦❧⑧♣❼➄①③②❖⑧❧✈❖❦✧❶❸⑧
ì✲①❾✇❄♠ð✇❽♠♣❶✪⑤❙❦✘✇❄①③♦❙②÷❼❽❦♥♠♣❶✧t➮⑤☎❦❸⑤⑦②ðq③❶❸⑤❙⑧❂✇❽♦ t✏♦❙❷❽❶➮❶❆⑥❄ ❦✘①③❶✧②❇✇✡⑧♣♦⑦ñ
t➮⑤⑦①③②✭t✜♦❧⑧❧❶❸q③❼❸ï
þ✚✉♣t➮⑤⑦ë❵❶✧❷ ✜✜①③❼❏⑤⑦②✃①❾t✏✉♣❷❽♦➒❻❙❶❸t✜❶❸②❇✇✮♦❙②❂þ✚✉❖t✏⑤❙ë❙❶❸❷✮①③② ✇❄♠❭⑤➒✇✮✇❽♠♣❶
q➅⑤➒✇❩✇❽❶✧❷ ❷❽❶❸ö❇✈♣①③❷❽❶❸❼✭①③②❵✇❽❶✧❷❽t✏❶❸⑧❧①➅⑤➒✇❽❶÷❼➀✇♥⑤➒✇❄❶ð①③②❧rÛ♦❵❷❄t➮⑤➒✇❽①❾♦❵②ß⑧♣✈♣❷❄①③②♣④
q③❶❸⑤⑦❷❽②♣①③②♣④❖ï■þ✚✉♣t➮⑤❙ë❙❶✧❷ ✜✏⑤⑦✈♣✇❄♦❙t➮⑤⑦✇❄①➅❦✧⑤⑦q③q③❿➆①③②❧rÛ❶❸❷❽❼✚✇❄♠❖①③❼❏①③②❵✇❽❶✧❷❽t✏❶✘ñ
⑧❧①➅⑤➒✇❽❶✡❼❩✇❽⑤➒✇❽❶➊①③②❧rÛ♦❙❷❽t➮⑤➒✇❄①③♦❙② ⑤❙②❖⑧➆✇❄♠❖❶✧②✃✉❖❷❄♦❧❦✘❶❸❶❸⑧♣❼☞①❾②☎✇❽♠♣❶✡❼❽⑤⑦t✏❶
r✻⑤❙❼❄♠♣①③♦❙② ⑤❵❼❚þ✚✉❖t✏⑤❙ë❙❶❸❷◆⑤⑦②❭⑧ø①③②❖⑧❧✈❖❦✧❶❸❼✃✇❄♠♣❶ÿ❼❄⑤❙t✏❶ù♦❙✉Ñ❶✧❷♥⑤➒✇❽♦❙❷
❼❽❦♥♠♣❶✧t➮⑤♣ïÒþ✚✉♣t➮⑤❙ë❙❶✧❷ ✜ ❦❸⑤⑦②üq❾♦❵④❙①➅❦✧⑤❙q❾q③❿ðê❭❶ ❼❄❶✧❶❸②ü⑤❵❼➮⑤❚❼❩✈♣✉Ñ❶✧❷❄ñ
❼❄❶✘✇✡♦❙r❏þ✚✉♣t➮⑤⑦ë❵❶✧❷➜➃❀ì✲♠♣❶❸❷❄❶✪✇❄♠♣❶⑩❶✘ý◗✇❄❷♥⑤✭rÛ✈♣②❭❦✶✇❄①③♦❙②❭⑤⑦q③①➤✇➀❿❚①③②üþ✚✉❧ñ
t➮⑤⑦ë❵❶✧❷ ✜✭❷❄❶❸t✏♦➒❻❙❶❸❼➊✇❄♠♣❶➆②♣❶❸❶❸⑧ð✇❽♦◆⑤❵❼❩ë❂✇❄♠❖❶✪✇❽❷❽⑤❙①❾②♣❶❸❷■rÛ♦❵❷✡t✏♦❵❷❄❶
①③②❧rÛ♦❙❷❽t➮⑤➒✇❄①③♦❙②Pï
þ✚✈♣❷✲❶✧ý◗✉Ñ❶✧❷❽①③t✜❶❸②❇✇❽❼➼ì✲①❾✇❄♠➆✇❄♠♣❶ ♠❽➁☞①③ë◗①❾②♣④➮➇✚♦❙t➮⑤⑦①③❚② ✏♦ ❼❄♠♣♦➒ì❨✇❄♠❭⑤➒✇
rÛ✈♣❷❄✇❄♠♣❶❸❷ü⑧♣❶✧❻❙❶❸q❾♦❵✉♣t✏❶✧②❇✇✹②❖❶✧❶❸⑧❖❼÷✇❽♦ êÑ❶Òt➮⑤❙⑧❧❶Ò✇❄♦✒✇❄♠♣❶Úþ✚✉❧ñ
t➮⑤⑦ë❵❶✧❷ ✜➼⑤⑦q③④❙♦❙❷❽①❾✇❄♠♣tØ❼❩♦☞✇❄♠❖⑤⑦✇✵①❾✇✵❦❸⑤⑦②■❦✧♦❙✉Ñ❶❉ì✲①➤✇❽♠✡⑧❧♦❵t➮⑤⑦①③②❖❼➄ì✲①❾✇❄♠
♠❄❼❩✇❽⑤⑦✇❄①➅✴❦ ✡
♦ ë◗②♣♦➒ì✲q❾❶➜⑧❧④❙❶❵ï
References

Erol, K.; Hendler, J.; and Nau, D. S. 1996. Complexity Results for HTN Planning. Annals of Mathematics and Artificial Intelligence 18(1):69–93.

Garland, A.; Ryall, K.; and Rich, C. 2001. Learning hierarchical task models by defining and refining examples. In Proceedings of the First International Conference on Knowledge Capture.

Hogg, C., and Munoz-Avila, H. 2007. Learning Hierarchical Task Networks from Plan Traces. In Proceedings of the ICAPS'07 Workshop on Artificial Intelligence Planning and Learning.

Ilghami, O.; Nau, D. S.; Munoz-Avila, H.; and Aha, D. W. 2005. Learning preconditions for planning from plan traces and HTN structure. Computational Intelligence 21(4):388–413.

Ilghami, O.; Nau, D. S.; and Munoz-Avila, H. 2006. Learning to do HTN planning. In Proceedings of the Sixteenth International Conference on Automated Planning and Scheduling.

Liu, D., and McCluskey, T. L. 2000. The OCL Language Manual, Version 1.2. Technical report, Department of Computing and Mathematical Sciences, University of Huddersfield.

Marthi, B.; Wolfe, J.; and Russell, S. 2007. Semantics for High-level Actions. In Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS 2007).

McCluskey, T. L.; Richardson, N. E.; and Simpson, R. M. 2002. An Interactive Method for Inducing Operator Descriptions. In Proceedings of the Sixth International Conference on Artificial Intelligence Planning Systems.

Nejati, N.; Langley, P.; and Konik, T. 2006. Learning hierarchical task networks by observation. In ICML '06: Proceedings of the 23rd International Conference on Machine Learning. New York, NY, USA: ACM Press.

Simpson, R. M.; McCluskey, T. L.; Zhao, W.; Aylett, R. S.; and Doniat, C. 2001. GIPO: An Integrated Graphical Tool to Support Knowledge Engineering in AI Planning. In Proceedings of the 6th European Conference on Planning.

Simpson, R. M. 2005. GIPO Graphical Interface for Planning with Objects. In Proceedings of the International Conference on Knowledge Engineering in Planning and Scheduling.

Wu, K.; Yang, Q.; and Jiang, Y. 2005. ARMS: Action-Relation Modelling System for Learning Acquisition Models. In Proceedings of the First International Competition on Knowledge Engineering for AI Planning.
Feasibility Criteria for Investigating Potential Application Areas of AI Planning

T. L. McCluskey
School of Computing and Engineering
The University of Huddersfield, Huddersfield HD1 3DH, UK
lee@hud.ac.uk
Abstract

In this paper we address the problem of deciding whether it is feasible to apply AI planning technology (involving currently available planning engines) to an application area. We develop some criteria based on motivation, technological infrastructure and knowledge engineering aspects of an application, and we go on to apply these criteria to two application areas. The criteria both help to evaluate the overall feasibility, and in cases where development continues, help us to focus on the parts of the application which are likely to be most troublesome.

Introduction

In recent AI conferences (ICAPS, ECAI) there have been a number of workshops devoted to AI planning applications, and ICAPS itself gives an award to 'best application' paper. While many applications tend to be in AI-rich environments such as Space Technology, there is a growing body of applications from a wider range of areas. A notable example is the SIADEX (Fdez-Olivares et al. 2003) project, developing tools for helping people to manage forest fire fighting resources. Several other notable applications were described in the recent ICAPS 'Moving Planning and Scheduling to the Real World' workshop (Myers et al. 2007). However, we still appear to be very far away from the point where automated planning technology can be franchised to the software engineering community.

Our work is motivated by investigations into the use of AI planning in large-scale control applications. Automated assessment and prediction via monitoring and modelling is quite well developed in these kinds of applications, but there is a need to develop software support that enables active decision support or even autonomous control, eg in water/flood control (Rob 2007), or road transport network control (Various 1999). However, how feasible is the use of AI planning tools within such an application area? How could we evaluate an application in terms of whether it can benefit from AI planning technology, and how can we determine what areas of the application would cause the most problems? In this paper we explore the characteristics of an application area that make the application of AI planning feasible. To motivate the discussion, we use two particular applications from the Transport and Water Management service industries respectively. These are wide ranging, complex, involve many stakeholders and organisations, and have allied research and development areas.

This endeavour has much in common with the general area of business process change through the introduction of new technology, and in particular the introduction of knowledge-based AI technology. The potential problem areas in the application of automated planning are in some cases similar to challenges already well known when implementing KBS systems. These include the 'knowledge bottleneck' - the difficulty of knowledge elicitation and formulation, the availability of experts and expertise, and the verification, validation, and maintenance of knowledge bases. The subject of this paper can be taken in the context of the well known reasons for failure of early KBS, to do with their brittleness and stand-alone nature. However, applications of automated planning can also take advantage of the more recent developments that alleviate the 'knowledge bottleneck': the development of shared ontologies and globally accessible knowledge, and the development of standard tool support environments for the engineering of knowledge. For a discussion of the similarities and distinguishing features between knowledge engineering for AI planning and KBS, the reader is referred to section 7 of PLANET's Roadmap (Biundo et al. 2003).

In this paper we address the problem of evaluating the feasibility of applying AI planning technology, by devising a set of evaluation criteria based on motivation, technological infrastructure and knowledge engineering aspects of an application. To both illustrate and evaluate the usefulness of these criteria we use them to investigate the feasibility of applying automated planning technology to the applications. For each feature we rank it as low, medium or high, indicating its contribution towards an overall feasibility factor. We conclude with a short discussion of the use of the criteria.
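The ranking scheme just described (scoring each criterion as low, medium or high and combining the scores into an overall feasibility factor) can be sketched in Python. The numeric scale, the equal weighting and the overall_feasibility helper are illustrative assumptions for this sketch, not something the paper specifies:

```python
# Illustrative only: the 1/2/3 scale and the equal-weight average
# are assumptions made for this sketch, not part of the paper.
SCORES = {"low": 1, "medium": 2, "high": 3}

def overall_feasibility(rankings):
    """Aggregate per-criterion low/medium/high ranks into one overall factor."""
    values = [SCORES[rank] for rank in rankings.values()]
    mean = sum(values) / len(values)
    if mean < 1.67:      # nearer to "low" than to "medium"
        return "low"
    if mean < 2.34:      # nearer to "medium" than to "high"
        return "medium"
    return "high"

# Hypothetical ranks for a road-network-management style study:
rnm = {"motivation": "high",
       "technological context": "high",
       "knowledge engineering": "medium"}
print(overall_feasibility(rnm))  # -> high
```

In practice the weights and thresholds would be chosen per application; the paper itself leaves the aggregation informal.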
Feasibility Evaluation Criteria

We assume that an 'application area' has been identified, and there is a prima facie case for the use of automated planning within it. In the case of the two applications considered below: (i) road network management: planning can be used for drawing up plans to ease congestion or alleviate the effects of incidents (ii) flood prevention and management: planning may be applied to form plans for evacuations. Given this context, we postulate a number of key questions that need to be considered in evaluating the feasibility and effectiveness of utilising automated planning. We group them into 3 areas:

1. Motivation Factors

Motivation factors include the fundamental and underlying reasons for the introduction of planning technology. If a current system delivers an optimum solution, or a subset of stakeholders are satisfied with the operation of the system using current technology, then the motivation may be too weak. An example of low motivation is where there may be pressure to introduce advanced technology for its own sake, rather than to satisfy a perceived need.

Overall the questions that should be asked include: are there compelling reasons for the introduction of technology: is it likely to deliver a step change in quality of service being offered, eg increased reliability of plans, correctness of plans, real-time speed-up in the generation, and execution or distribution of plans? Will AI planning enable a significantly more cost-effective solution to some perceived problem?

2. Technological Context and Human Factors

Feasibility with respect to the application's context increases if there is already a high level of technological development within the service or industry. In human controlled systems there are several well defined phases in control: understanding what is happening in the system, evaluating that understanding (is there a problem?) and generating an effective plan to help alleviate the problem. The introduction of planning technology is more likely to be feasible if IT is already heavily used in the collecting, processing and interpretation of data, and in providing support for the current decision processes. If the data collected has an uncertain interpretation, or is incomplete, then the feasibility factor is lessened.

Given the nature of the technological change, it is helpful if there already exist experimental platforms to support the introduction of new technology. Typically this would comprise historical data and a simulation system which can be used to investigate the effectiveness of techniques off-line.

Feasibility also depends on human factors: if key stakeholders are unwilling to accept the kind of autonomy delivered by automated planning then it will not be feasible. An example is where the current problem owners contract out the planning task to a third party. The third party is not necessarily going to be a willing partner in the venture if the new technology threatens that contract! The third party, however, may hold knowledge that is necessary for success. In summary, the key questions are:

Existing technological infrastructure:
Within the computer systems that are currently being used in the potential application area, are sophisticated systems used extensively for management information, and/or for decision support? What is the level of technological take-up in the area? Are the current systems stand-alone or fully interoperable?

Data availability and quality:
Is there a ready supply of data to supply state information on the observed system? Is the data in high level (information extracted) form, or is it in a very low level (eg numerical) form? Is the data trustworthy or does it contain a significant amount of uncertainty? Can data be extracted from a standard data interface? Is there historical data and/or a simulation environment that can be used to test new technology off-line?

Human factors:
Are the problem owners (the current providers of solutions) and other stakeholders open and supportive of innovation to help improve their methods and systems?

3. Knowledge Engineering Factors

There is a well known characterisation of AI planning technology that it requires the pre-engineering of a specific database of actions, heuristics etc. The task of engineering knowledge into such a particular form is itself made feasible by the presence of a number of factors, such as: existing high level formalisations of the domain, existing high level formalisations of plans, or the existence of similar planning domain models. These factors are very relevant in knowledge engineering for KBS in general, as it is well known that if all the expertise lies solely in the brains of experts, then the amount of effort involved in knowledge elicitation and knowledge formulation can be very high indeed. Applications where there are existing encodings of actions and plans are thus very attractive. Hence, if knowledge of the current planning process in the application area is not in written form, or there are no examples of precisely encoded plans, then the feasibility is low, or at least the amount of resource needed to create a domain model and planning heuristics may be prohibitive. The key questions to consider are:

Closeness to previous applications:
Is the application area close or analogous to a previously defined planning benchmark, or a current fielded system? Can parts of domain models or previously engineered constraints be re-used?

Procedure formalisation in the problem area:
Are there existing encoded, formalised or written-down procedures or plans? Is the current system managed by experts using their own experience, or do they have recourse to manuals and training aids? Are there readily available examples of the kinds of plans that are needed to be generated?

Appropriateness for an AI planning solution:
How far does the construction of a plan fall into the classic definition of generating orderings of instantiated action schema to achieve goals or decompose tasks? Does plan generation involve a great deal of uncertainty, mixed discrete and continuous variables, or large amounts of human skill?

Application Area 1: Road Network Management

Description

Road network management (RNM) relies on complex, ... problems, such as excessive fuel emissions, in an increasingly complex environment.

Evaluation using the Criteria

Motivation Factors: High

Within RNM, there is a well defined split between understanding what is happening in the system, and generating an effective plan to help alleviate the problem. In the former case, there are many real time data feeds from which knowledge about the system can be extracted, including loop detectors, ANPR (automatic number plate recognition), and CCTV. In the latter case, the traffic manager can manage a situation by initiating a range of actions - this includes the setting of traffic light timings, variable message signs (VMS), variable speed limits (VSL), ramp metering and radio broadcasting. In real time traffic control of large road networks it has been demonstrated that necessary processing and decision making is beyond the capabilities of human operators alone, and as the demand for road usage increases, this difficulty in managing traffic effectively becomes more acute. Additionally, the cost of congestion is increasing over time and in the UK alone is expected to rise to £30 billion by 2010. Improvement to the efficiency of traffic control and management also can be linked to the reduction of emissions of air pollutants produced by road traffic.

An application for AI planning could be to generate traffic and transport system plans and courses of actions in real-time to enable more effective control of incidents and events. A similar application might be to help with crisis management across the LA and HA controlled networks by generating plans which take account of LA and HA priorities and interactions. Hence there is a clear aim and motivation for the introduction of this kind of technology: to increase the quality of plans (which involve lights, VSLs, VSMs, etc), taking into account an increasing amount of information flow, which will benefit the quality of life through reduced congestion and pollutant emissions.

Technological Context and Human Factors

Existing technological infrastructure: High

There has been a good record of adoption of computer systems in road network traffic management, and currently there are emerging common service platforms which will be beneficial to products and services delivered by technology providers. High level data platforms such as the HA's Travel Information Highway allow sophisticated software packages to both monitor and disseminate traffic information both to other services and to the general public (eg in the UK we have www.trafficengland.co.uk). The development of
❛❝❜❭❵q❦✡❞❭❧q❡✛❵❍❦✌❢✼♣❍s❯♣➤❵❍❦✌❬✧♣å❵❍❙ ❬➢❦✡❦✗❵✮❛r❜❴P✡❧❍❦➐❡❲♣❍❛r❜❴❞✾❧q❦✌ð❑❫❚❛❝❧❍❦✌❬➢❦✡❜❑❵q♣
❫❚❪Ð❙❲❜✽❵❍❘❚❦❋❧❍❙❑❡❲❢✵❜❚❦✡❵t❮❩❙❭❧❍❶✵♣❍❪❣❦➐P✗❛r✈❴❦✌❢✽❮✹❛➛❵q❘❚❛r❜✮❪❣❙❭❱r❛①P✗s÷❢❚❙❯P✡❫◗❰
❬➢❦✡❜❑❵q♣✹❳➬❧❍❙❭❬×P✗❦✌❜❭❵q❧q❡❲❱✥❡✛❜❴❢✐❱❝❙❯P✌❡✛❱◆❞❭❙❾♦❲❦✡❧q❜❚❬➢❦✡❜❑❵➐⑩ ♠ ❘❚❦✝❧q❦✌♣❍❪Ð❙❲❜◗❰
♣❍❛ ❒ ❛❝❱❝❛➛❵ts➢❳➬❙❭❧❩❬✧❡✛❜❣❡✛❞❲❛❝❜❚❞✜❵❍❘❚❦✫❧❍❙❑❡❲❢➢❛❝❜◗❳➬❧✎❡❲♣➤❵❍❧q❫❴P➟❵q❫❚❧❍❦④❛r❜❹❵❍❘❚❦✝♥④⑦
❧q❦✌♣➤❵q♣✰❮✹❛➛❵q❘✯❵q❘❚❦✹✉☎❛❝❞❲❘❯❮➝❡⑨s◗♣❉➷☎❞❲❦✌❜❴P✗s❄➮➂✉✱➷ ✃ ❳➬❙❭❧✰❵❍❘❚❦➺❬➢❙✛❵q❙❲❧q❮➝❡⑨s
❡✛❜❣❢❄❵q❧❍❫❚❜❴❶❹❧q❙❭❡❲❢❹❜❚❦✗❵t❮➝❙❲❧q❶❄❡✛❜❣❢❄❵q❘❚❦✯ü✥❙◗P✡❡❲❱✥➷☎❫◗❵q❘❚❙❲❧q❛r❵ts➒➮Ùü❉➷ ✃
❳➬❙❲❧④❵❍❘❴❦★❫❚❧ ❒ ❡❲❜➭❜❚❦✗❵t❮➝❙❲❧q❶✑⑩✫❖❯❘❴❙❲❧❍❵✱❵❍❦✌❧❍❬✼❵q❧q❡✛ú✧P✜❦✡♦❭❦✡❜❑❵q♣✌✇◆♣➤❫❴P✎❘
❡❲♣❉❧q❙❭❡❲❢❃❮➝❙❲❧q❶◗♣✡✇⑨❡❲P✌P✗❛①❢◗❦✡❜❑❵➐✇⑨❡❭❢◗♦❲❦✌❧q♣❍❦✖❮➝❦✌❡✛❵❍❘❚❦✌❧❉P✗❙❭❜❴❢◗❛r❵❍❛❝❙❲❜❴♣✌✇⑨❙◗P➟❰
P✗❫❴❧❍❧q❛r❜❴❞✐❙❭❜❋❦✌❛➛❵q❘❚❦✡❧❃❵q❘❚❦✧❬➢❙✛❵q❙❲❧q❮➝❡⑨s✂❙❲❧✝❫❴❧ ❒ ❡❲❜➎❜❴❦✗❵t❮➝❙❲❧q❶➭P✡❡✛❜
❘❴❡⑨♦❭❦✫❢◗❦✌♦❾❡❲♣➤❵q❡❾❵q❛r❜❴❞➢✻❡ Ð☞ ❦➐P➟❵✎♣➺❙❲❜②❙❲❜❚❦✝❡✛❜❴❙✛❵❍❘❴❦✡❧➐⑩✖❨❩❫❴❧❍❧q❦✡❜❑❵❍❱❝s✧❘❯❫◗❰
❬✧❡✛❜❁❙❲❪Ð❦✡❧✎❡❾❵❍❙❭❧q♣✜❧❍❦➐♣➤❪Ð❙❲❜❣❢❐❵❍❙➎❵❍❘❚❛①♣✲❶❯❛r❜❴❢➓❙✛❳✱❪❴❧❍❙ ❒ ❱r❦✌❬ ❫❴♣➤❛❝❜❚❞
❵❍❘❴❦✡❛❝❧✲❦✡Õ◗❪❣❦✌❧➤❵➢❶❯❜❚❙❾❮✹❱r❦➐❢◗❞❲❦❭✇ ❒ ❫◗❵➢❵❍❘❚❦✌❛r❧✲❦✌✑☞ ❦✌P✗❵❍❛❝♦❲❦✌❜❚❦✌♣q♣✲❛❝♣✲❱❝❛r❬✲❰
❛r❵❍❦✌❢➒❡❲♣➺❵q❘❚❦✡s②❘❴❡⑨♦❲❦❃❵q❙❄❛❝❜❭❵q❦✡❧q❪❚❧q❦✗❵④P✡❙❲❬➢❪❚❱❝❦✗Õ✂❛r❜◗❳➬❙❭❧❍❬✧❡✛❵❍❛❝❙❲❜②❳➬❦✌❢
❵❍❙✯❵❍❘❚❦✌❬②✇◗❢◗❦➐P✗❛①❢◗❦☎❙❭❜✧❮✹❘❚❛❝P✎❘✧❙❲❳✥❡✛❜❄❡✛❧q❧✎❡⑨s✯❙✛❳✩❡❭P➟❵❍❛❝❙❲❜❣♣✟❵❍❙✝❵✎❡✛❶❭❦❲✇
❡✛❜❣❢☛❢◗❦✌❡❲❱✩❮✹❛r❵❍❘☛❵q❘❚❦✜❛❝❜❑❵❍❦✡❧❍❳Ù❡❲P✡❦ ❒ ❦✗❵t❮➝❦✡❦✡❜➒❫❚❧ ❒ ❡✛❜➭❡✛❜❣❢✂❬✲❙❲❵❍❙❭❧➤❰
❮➺❡⑨s✐❵q❧q❡✛ú❄P✜P✗❙❭❜❑❵❍❧q❙❲❱➵⑩✫ÑÒ❛r❵❍❘❴❛r❜➎❵❍❘❚❦➢♥✱⑦þ❵❍❘❚❦✌❧❍❦★❛❝♣❃❡②❢◗❫◗❵ts☛❙❲❜
ü✩➷④♣❩❵❍❙✲❬➢❡❲❜❴❡✛❞❭❦☎❵q❘❚❦✡❛❝❧➝❵❍❧✎❡❾ú❄P☎❜❚❦✡❵t❮❩❙❭❧❍❶◗♣❩❦✗ú❄P✗❛❝❦✡❜❑❵q❱rs❄❡❲❜❴❢✧❧q❦✗❰
❢◗❫❴P✡❦❥❵q❧q❡✛ú❄P❥❪Ð❙❲❱❝❱r❫❚❵❍❛❝❙❲❜✥⑩✖❨❩❱r❦➐❡✛❧q❱rs❃❵q❘❚❦✡❧q❦➝❛❝♣✟❡④❜❚❦✡❦➐❢✝❵❍❙❃❢◗❦✌♦❲❦✡❱❝❙❲❪
♣❍s❯♣➤❵❍❦✌❬✧♣✝❵❍❘❣❡❾❵✜❮✹❛❝❱r❱➺♣❍❫❚❪❚❪Ð❙❲❧❍❵✯❵❍❘❚❦✐❧❍❙❑❡❲❢❋❜❚❦✡❵t❮❩❙❭❧❍❶➎❙❲❪Ð❦✡❧✎❡❾❵❍❙❭❧q♣
❙ ❒ ät❦✌P➟❵q❛r♦❭❦✌♣④❮✹❘❚❦✡❜➭❵❍❘❚❦✌s✂❵❍❧qs✐❵q❙❹❵✎❡❲P✎❶❯❱❝❦✜P✗❙❭❜❚❞❲❦➐♣t❵q❛r❙❭❜➭❙❲❧④❙✛❵❍❘❴❦✡❧
95
❜❚❦✗❵t❮➝❙❲❧q❶✑✇❹❮✹❛r❵❍❘×♣❍❙❲❬➢❦✽❡❲❧qP✌♣✽➮➬❧q❙❭❡❭❢❚♣ ✃ ❒ ❛✂❡✛❜❴❢ÿ♣❍❙❲❬➢❦÷❫❴❜❚❛➛❰
❢◗❛r❧q❦✌P✗❵❍❛❝❙❲❜❴❡❲❱➂⑩➝➷☎❱①♣➤❙❣✇❚❵q❘❚❦❄Ï ❵❍❧✎❡✛❜❴♣❍❪❣❙❭❧➤❵q❦✡❧➐Ï✥➮➬❪❚❛❝❪Ð❦✜❙❲❧✱❧❍❙❑❡❲❢ ✃ ❢◗❙❯❦✌♣
❜❚❙✛❵➝❬✲❙❾♦❭❦✹❰✟❙ ❒ ät❦➐P➟❵q♣❩❬➢❙❾♦❲❦✱❡❲❱r❙❭❜❚❞✝❵q❘❚❦✡❬✂⑩✖③④❦✌♣❍❪❚❛➛❵q❦✱❵q❘❚❦✡❧q❦ ❒ ❦✗❰
❛r❜❚❞☛❮➺❡⑨s◗♣❃❵❍❙➎❡ ❒ ♣➤❵❍❧✎❡❲P✗❵✝❵❍❘❚❦②P✗❙❲❬➢❪❚❱❝❦✗Õ◗❛r❵ts➎❙❲❳✹❧❍❙❑❡❲❢❋❜❚❦✗❵t❮➝❙❲❧q❶◗♣
➮➬❦✡❞ ❒ s ❒ ❫❚❜❴❢❚❱r❛❝❜❚❞➭❵q❧q❡✛ú❄P➢❛r❜❑❵❍❙➎❢◗❛❝♣➤❵❍❛❝❜❴P✗❵✲ð❑❫❴❡✛❜❑❵✎❡ ✃ ❵❍❘❴❦✐P✗❙❭❬✲❰
❪❚❱r❦✡Õ◗❛➛❵ts☛❙❲❳✖❵❍❘❚❦➢❧q❙❭❡❭❢➭❜❚❦✡❵t❮❩❙❭❧❍❶✂❬✧❡⑨s✂❮❩❦✌❱r❱✖P✌❡✛❫❴♣❍❦★❡②❪❚❧q❙ ❒ ❱❝❦✡❬
❙✛❳✰♣qP✡❡❲❱r❦✫❵q❙➢P✡❫❚❧q❧❍❦✌❜❭❵✹❪❚❱①❡✛❜❴❜❚❛r❜❴❞✲❦✌❜❚❞❲❛❝❜❚❦➐♣✡⑩
✝ ï í➐ì Ü ▲ ❊◗ï❍Ü ❈✗í ❋ï ➢
❖ Þ❾ïqÜqÞ ☎ ❖➢Ü ▲ ❀ ❊P❖
❖ Þ✛ß ❀❃❂ Þ❾Ý ❀ í❾î ❀ î Ý ❅ Ü ✁ ï í ❏✗ß❝Ü✌✻
➱✖❱①❡✛❜❴♣❉❢◗❙✱❦✡Õ❯❛①♣➤❵❉❙❲❜★❪❴❡✛❪Ð❦✡❧➐✇ ❒ ❫◗❵✳❡✛❧q❦❥❜❚❙❲❵❉❪❚❱❝❦✡❜❑❵q❛➛❳➬❫❚❱➵⑩✳③✱❦➐P✗❛①♣➤❛❝❙❲❜❣♣
❡✛❜❴❢✐❪❚❱①❡✛❜❴♣☎❡❲❧❍❦❃❬✧❡❲❢❚❦ ❒ s✐❦✗Õ◗❪Ð❦✡❧❍❵q♣➺❙❲❜②❵q❘❚❦ ❒ ❡❲♣❍❛①♣✹❙✛❳✳P✗❙❭❱r❱①❡❾❵q❦✌❢
❛r❜◗❳➬❙❭❧❍❬✧❡✛❵❍❛❝❙❲❜✵❙✛❳❃❵❍❘❴❦➒❧q❙❭❡❲❢å❜❚❦✡❵t❮❩❙❭❧❍❶✑⑩Ô❨❩❫❚❧q❧❍❦✌❜❭❵❄❪❴❧❍❙◗P✗❦➐❢◗❫❚❧q❦
❳➬❙❲❧q❬✜❫❚❱①❡❾❵q❛r❙❭❜❹❛①♣☎❡❾❵➺❵❍❘❴❦✝❱r❦✌♦❲❦✡❱◆❙❲❳✳❖✒✫✑ ü❋P✡❙❲❜❴♣➤❵❍❧q❫❴P✗❵q♣✌⑩
✞ ✁✄✁ ï í ✁ ï ❀ Þ❾Ý➑Ü î Ü ❂❆❂ ❈✗í ï ✞ ✡ ✁ ßrÞ î❣î ❀ ❲î ❄ ❂ í ß ❊◗Ý ❀ í✛î ☎ ❖➢Ü ▲ ❀ ❊P❖
➱✳❡✛❧✎❡✛❬➢❦✗❵q❦✡❧q❛❝♣❍❦✌❢②❡❲P✗❵❍❛❝❙❲❜❴♣④P✡❡✛❜ ❒ ❦✜❳➬❙❲❧q❬✲❦➐❢✂❵❍❙✐❬➢❙❯❢❚❦✡❱❉❵❍❘❚❦➢❡❲P✗❰
❵❍❛❝❙❲❜❴♣➝❬➢❦✡❜❑❵❍❛❝❙❲❜❴❦✌❢②❡ ❒ ❙❾♦❲❦❲✇❯❡✛❱r❵❍❘❴❙❲❫❚❞❭❘❹❵q❘❚❦❃✌❦ ✑☞ ❦✌P➟❵✎♣➺❙✛❳❉♣❍❫❴P✎❘✂❡❲P✗❰
❵❍❛❝❙❲❜❴♣④❬✧❡⑨s ❒ ❦✲❢❚❛➛ú❄P✗❫❴❱➛❵✫❵❍❙✐❦✡❜❴P✡❙❯❢❚❦✜❛❝❜➒❪❚❧q❙❲❪Ð❙❭♣❍❛➛❵q❛r❙❭❜❴❡✛❱✥❳➬❙❲❧q❬②⑩
➱✖❧q❙❲❪Ð❙❭♣❍❛➛❵q❛r❙❭❜❴❡✛❱✖❢◗❦➐♣❍P✡❧❍❛❝❪◗❵q❛r❙❭❜❴♣✝❙✛❳✹❧q❙❭❡❭❢➎❜❴❦✗❵t❮➝❙❲❧q❶➒♣t❵✎❡❾❵q❫❴♣✜❡❲❜❴❢
❞❲❙❭❡❲❱◆P✗❧q❛➛❵q❦✡❧q❛❝❡✲❡✛❧q❦❃❜❚❙✛❵✹❞❲❦✌❜❚❦✡❧✎❡✛❱❝❱❝s✧❫❴♣➤❦➐❢②❛r❜☛P✗❫❴❧❍❧q❦✡❜❑❵☎♣➤s◗♣➤❵❍❦✌❬➢♣✌⑩
♣❍❦✡❱r❳ê❰➲❡❭❢❚❡✛❪◗❵q❛r❜❴❞➎P✗❙❭❬➢❪❚❫◗❵❍❦✌❧✲♣❍s◗♣t❵q❦✡❬✧♣✲♣➤❫❴P✎❘å❡❲♣➢❖❴❨✹Û✝Û ✁♠ ❘❣❡❲♣
❒ ❦✡❦✡❜å❙❲❜❴❦②❙❲❳✱❵q❘❚❦✂❬✲❙❑♣t❵➢❛❝❬✲❪Ð❙❲❧❍❵q❡❲❜❑❵✧♣❍❛❝❜❚❞❲❱❝❦②❢❚❦✡♦❲❦✌❱r❙❭❪❚❬➢❦✡❜❑❵q♣✌⑩
❖❴❨✹Û✝Û ♠ ♣➤s◗♣➤❵❍❦✡❬✧♣✟❡✛❧q❦➺❫❴♣❍❦✌❢★❮❩❙❭❧❍❱①❢◗❮✹❛①❢◗❦❥❵q❙✯P✗❙❭❜❭❵q❧❍❙❭❱❑❵❍❘❚❦➝❵❍❛❝❬✲❰
❛❝❜❚❞❭♣☎❡❲❜❴❢②✻❙ ◆☞ ♣➤❦✡❵q♣✹❙❲❳✰❞❭❧❍❙❭❫❚❪❴♣➺❙❲❳❉❵q❧q❡✛ú✧P❃❱❝❛❝❞❲❘❑❵q♣☎P✡❙❲❜❚❜❴❦✌P➟❵q❦✌❢ ❒ s
❡❋❱r❙◗P✌❡✛❱➺❧❍❙❑❡❲❢➓❜❚❦✗❵t❮➝❙❲❧q❶✑⑩ ♠ ❘❚❦✡s❁❡❲❢❚❡❲❪◗❵★❵q❙ë❢◗❛ ☞✑❦✡❧q❦✡❜❑❵➢❵❍❧✎❡❾ú❄P
❱❝❦✡♦❲❦✌❱❝♣✌✇◆❡✛❫◗❵q❙❲❬✧❡❾❵q❛❝P✌❡✛❱❝❱rs✂❡❲❢❾ät❫❴♣t❵q❛r❜❴❞✐❱❝❛r❞❭❘❑❵✱❵❍❛❝❬➢❛r❜❴❞❭♣❃❡❾❵✫❧❍❦✌❱❝❡✛❵❍❦✌❢
ät❫❚❜❴P✗❵❍❛❝❙❲❜❴♣★❛❝❜ë❧❍❦➐❡❲P➟❵q❛r❙❭❜❋❵❍❙➒♣❍❫❴❢❚❢◗❦✌❜ç❙❲❧✜❞❲❧✎❡❲❢◗❫❴❡❲❱❥P✎❘❴❡❲❜❚❞❲❦➐♣❃❛r❜
❵❍❧✎❡❾ú❄P④ò❴❙❾❮☎♣✌⑩
í ❅ ❀ ❄❉❅
❍ Þ❾Ý➑Þ❹❬Þ ❾■ Þ ❀ ß❝Þ✤❏ ❀ ß ❀ Ý ❑☛Þ î ▲ ▼ ❊❴Þ❾ß ❀ ●Ý ❑ ☎ ❖✧Ü ▲ ❀ ❊P×
❖ Ý✰
➘ ❜✲❵❍❘❚❦☎♥④⑦➢✇⑨❵❍❘❚❦☎♥ ♠ æ➎✄❨ ✂➺❛①♣✖❡✫❧❍❦✌❱❝❡✛❵❍❛❝❙❲❜❴❡❲❱❯❢❴❡❾❵q❡ ❒ ❡❲♣❍❦➺P✗❙❲❜❯♦❭❦✡❜◗❰
❵❍❛❝❙❲❜✂❳➬❙❭❧✱❢❚❡✛❵q❡❄P✡❙❲❱❝❱r❦➐P➟❵❍❦➐❢✂❡✛❜❴❢☛❢◗❛①♣➤❵❍❧q❛ ❒ ❫❚❵❍❦✌❢✂❛❝❜☛❵q❘❚❦✜P✗❙❭❫❚❧✎♣➤❦✝❙❲❳
❵❍❧✎❡❾ú❄P➝❬✧❡✛❜❴❡❲❞❲❦✌❬✲❦✌❜❑❵✌⑩❉♥ ♠ æ➎❨ç❪❚❧❍❙❾♦❯❛①❢◗❦✌♣✟❡✫❘❚❛❝❞❲❘✲❱r❦✌♦❲❦✌❱➂✇❲♣➤❵q❡❲❜◗❰
❢❚❡❲❧q❢☛❪❴❱❝❡✛❵➤❳➬❙❲❧q❬✼❳➬❙❲❧④❵❍❧✎❡❾ú❄P✜❡❲❪❚❪❚❱❝❛❝P✌❡❾❵q❛r❙❭❜❴♣✹❵q❙✐❫❴♣❍❦★❡❲❜❴❢☛❛❝❜❑❵❍❦✡❧❍❰
❙❲❪Ð❦✡❧✎❡❾❵q❦❲⑩✰ü✥❙◗P✡❡❲❱❑➷✱❫◗❵❍❘❚❙❭❧❍❛r❵❍❛❝❦✌♣✟❫❣♣➤❦➺♣➤s◗♣➤❵❍❦✌❬➢♣✟♣❍❫❴P✎❘★❡❭♣✰❖❣❨✹Û✝Û ♠
❡✛❜❣❢☛♥ ♠ æ➎❨✽❵q❙❹❬✧❡✛❶❭❦✝❦✌✑☞ ❦✌P➟❵q❛r♦❭❦★❡❲❜❴❢✂❦✗ú❄P✗❛❝❦✡❜❑❵✫❫❴♣➤❦✜❙✛❳✳❵❍❦➐P✎❘◗❰
❜❚❙❭❱r❙❭❞❲s☛❛❝❜➎❬✧❡✛❜❣❡✛❞❲❛❝❜❚❞✐❵❍❘❚❦➢❱❝❙◗P✡❡✛❱✳❧q❙❭❡❭❢➭❜❚❦✡❵t❮❩❙❭❧❍❶✑⑩✜✉✱❙❾❮❩❦✌♦❲❦✌❧✌✇
❵❍❘❴❦✡❧q❦④❡✛❧q❦④♣❍❙❲❬➢❦✫♣➤❘❚❙❭❧➤❵❍❳Ù❡✛❱❝❱❝♣❥❮✹❛➛❵q❘✐P✡❫❚❧❍❧q❦✡❜❑❵➺♣➤s◗♣➤❵❍❦✡❬✧♣✌✇❑❡❲❜❴❢✧❧q❛❝♣➤❰
❛❝❜❚❞❃❵❍❧✎❡❾ú❄P➺❱❝❦✡♦❲❦✌❱❝♣✖❮✹❛r❱❝❱❴❙❭❜❚❱rs★❦✗Õ❚❡❲P✡❦✡❧ ❒ ❡❾❵q❦➝❵❍❘❚❦✱♣❍❛r❵❍❫❴❡✛❵❍❛❝❙❲❜✥⑩ ♠ ❘❚❦
❬➢❙❭♣➤❵✱♣➤❦✌❧❍❛❝❙❲❫❣♣➺❪❚❧❍❙ ❒ ❱❝❦✡❬ÿ❛①♣➺❵❍❘❣❡❾❵✌✇Ð❡✛❱r❵❍❘❚❙❭❫❚❞❲❘②❬➢❙✛❵q❙❲❧q❮➝❡⑨s◗♣➺❡✛❜❴❢
P✗❛r❵ts✜P✡❦✡❜❑❵❍❧q❦✌♣✟❘❴❡⑨♦❭❦➝❵❍❧✎❡❾ú❄P❩ò❴❙❾❮➓❬➢❙❲❜❴❛➛❵q❙❲❧q❦✌❢◆✇❾❵q❧q❡✛ú❄P❥ò❣❙❾❮➓❙❲❫❚❵➤❰
♣❍❛❝❢◗❦④❙✛❳✥❵q❘❚❦✌♣❍❦④❡❲❧❍❦➐❡❲♣✳❛①♣➝❱❝❡❲❧❍❞❭❦✡❱❝s✜❫❚❜❴❶❑❜❴❙❾❮✹❜✥✇◗❡✛❜❴❢➢❵q❘❚❦✡❧q❦✱❛①♣➝♣t❵q❛r❱❝❱
❡➢❘❚❛❝❞❲❘➭❢◗❦✌❞❲❧q❦✡❦✝❙❲❳✰❫❚❜❣P✗❦✡❧❍❵q❡❲❛r❜❑❵ts✐❙✛❳✟❵❍❘❚❦★♣t❵✎❡❾❵❍❫❣♣☎❙✛❳✳♣❍❙❲❬➢❦✯❜❚❦✡❵➤❰
❮➝❙❲❧q❶❯♣✌⑩
è✹❦✌❞❭❡✛❧✎❢◗❛❝❜❚❞☎❦✌♦⑨❡❲❱r❫❣❡❾❵❍❛❝❙❲❜★❪❚❱①❡❾❵➤❳➬❙❭❧❍❬✧♣✩❳➬❙❲❧❉❵❍❦➐♣t❵q❛r❜❚❞✫❜❚❦✌❮ë❵❍❦➐P✎❘❚❜❚❙❲❱r❰
❙❲❞❭s➎❙✤❣☞ ❰➲❱❝❛r❜❚❦❭✇✰❵❍❘❚❦✌❧❍❦✐❛❝♣★❡➭❱❝❙❲❜❚❞➒♣➤❵q❡❲❜❴❢◗❛❝❜❚❞➭❘❚❛①♣➤❵❍❙❲❧qs❋❙✛❳✹❵❍❧✎❡✛❜❣♣t❰
❪Ð❙❲❧❍❵④❧q❦✌♣❍❦✌❡✛❧✎P✎❘☛❫❴♣❍❛r❜❴❞✐♣❍❫❴P✎❘➭❬➢❦✗❵q❘❚❙◗❢❚♣✡✇◆❮✹❛r❵❍❘➎❱❝❡❲❧❍❞❭❦✜❡✛❬➢❙❭❫❚❜❑❵q♣
❙✛❳✳❢❚❡✛❵q❡➢❡⑨♦❾❡✛❛❝❱❝❡ ❒ ❱r❦④❳➬❙❲❧➺❵q❦✌♣➤❵❍❛❝❜❚❞❄❡✛❜❣❢✐♣❍❛r❬★❫❚❱①❡❾❵❍❛❝❙❲❜✩⑩
◆ ❊P✧
❖ Þ î ◗ Þ ì Ý í ï ❂ ☎ ❅ ❀ ❄✪❅
❤✳Õ◗❪Ð❦✡❧❍❵❍❛①♣➤❦✹❛❝❜✧❬✧❡✛❜❴❡❲❞❲❦✌❬✲❦✌❜❑❵❥❡✛❜❴❢➢❙❭❪❣❦✌❧q❡✛❵❍❛❝❙❲❜➢❙✛❳✑❵❍❘❴❦☎❜❚❦✗❵t❮➝❙❲❧q❶
❡✛❪❴❪❣❦➐❡✛❧✎♣✟❵❍❙ ❒ ❦✹❵q❘❚❛r❜✩✇❯❡❲❜❴❢✲❵❍❘❴❦✡❧q❦✹❛❝♣❩❡✝❧❍❦➐❡✛❱❝❛❝♣q❡❾❵q❛r❙❭❜★❮✹❛r❵❍❘❴❛r❜✧❵q❘❚❦
♣❍❦✡❧q♦❑❛①P✗❦➭❵❍❘❴❡✛❵✐❵q❘❚❛❝♣✌✇✱❡❲❜❴❢Ò❵q❘❚❦➒❞❭❧❍❙❾❮✹❛❝❜❚❞çP✗❙❭❬➢❪❚❱r❦✡Õ◗❛➛❵ts✵❙✛❳✝❵q❘❚❦
❪❚❧q❙ ❒ ❱❝❦✡❬✂✇✯❮✹❛❝❱❝❱✜❧q❦✌ð❑❫❚❛❝❧❍❦ë❬➢❙❲❧q❦❐❵❍❦➐P✎❘❚❜❚❙❭❱r❙❭❞❲❛①P✡❡✛❱✝❛❝❜❯♦❲❦➐♣t❵q❬✲❦✌❜❑❵✌⑩
♠ ❘❚❦✌❧❍❦✜❡❲❧❍❦✯❡➢❧✎❡✛❜❴❞❲❦✝❙✛❳✳❘❚❛❝❞❲❘❚❰➂❵q❦✌P✎❘➭♣❍❦✡❧q♦❑❛①P✗❦✝❪❴❧❍❙❾♦❯❛①❢◗❦✡❧✎♣➺❛r❜✂❵q❘❚❦
♣❍❦✌P➟❵q❙❲❧❩❮✹❘❚❙★❡❲❧❍❦④❦✗Õ◗❪❣❦✌❧❍❛❝❦✡❜❣P✗❦✌❢❄❛r❜❹❵❍❦✌P✎❘❴❜❚❙❲❱❝❙❲❞❭❛❝P✌❡✛❱❚❛❝❜❚❜❚❙❾♦❾❡❾❵q❛r❙❭❜✥⑩
➷☎❱❝❱✥♣➤❵q❡✛❶❭❦✡❘❚❙❭❱❝❢❚❦✡❧✎♣❩❡❲❪❚❪❣❦➐❡✛❧➝❧❍❦➐❡❲❢◗s➢❵q❙★❦✌❬ ❒ ❧✎❡❲P✡❦☎❳➬❫❚❧❍❵❍❘❴❦✡❧➺❵❍❦➐P✎❘◗❰
❜❚❙❭❱r❙❭❞❲❛①P✡❡✛❱✳❛❝❜❚❜❚❙❾♦❾❡✛❵❍❛❝❙❲❜✵➮➬❦✌♣❍❪Ð❦✌P✗❛①❡✛❱❝❱❝s➒❞❲❛❝♦❲❦✌❜❋❵❍❘❚❦✧❪❴❡❭♣t❵✜♣❍❫❴P✌P✗❦✌♣q♣
❙✛❳✖❖❴❨✹Û✝Û ♠☎✃ ⑩
❚✷✛✡✕❲❱ ✬ ✧❲✳✫✭✒✧✷❨ ✛✡✭ ✗ ✛✡✧✲✧ ❬❼ ✗ ✛✡✭❭✢ ❽❚❿❲❻ ✕ ❼⑨❺
ß í ❂ Ü î Ü ❂❆❂ Ý ✂í ✁ ï❍✺Ü ■ ❀ í ❊ ❂ Þ ✁✄✁ ß ❀ ì Þ❾Ý ❀ í❾î ❂✆☎ ❖➢Ü ▲ ❀ ❊P❖
ÑÒ❛r❵❍❘❚❛❝❜☛❵❍❘❴❦✯❡✛❧q❦✌❡❚✇❚❵❍❘❚❦✌❧❍❦✝❘❴❡⑨♦❭❦ ❒ ❦✌❦✡❜➭❡❾❵❍❵❍❦✌❬✲❪❚❵q♣✹❵q❙❄❛❝❜❴P✗❙❭❧❍❪Ð❙✛❰
❧✎❡❾❵❍❦✖♣❍❙❲❬➢❦✖❶❯❛❝❜❴❢❚♣✩❙✛❳❚♣➤❪Ð❦✌P✡❛➛✈ÐP✖❡✛❫◗❵q❙❲❬✧❡❾❵q❦✌❢❃❧q❦✌❡❭♣➤❙❭❜❚❛r❜❴❞➺♣❍s◗♣t❵q❦✡❬✧♣
❛❝❜❭❵q❙✝❵❍❘❴❦✱P✗❙❭❜❭❵q❧❍❙❭❱❚❙✛❳◆❬➢❙✛❵q❙❲❧q❮➝❡⑨s✝❛❝❜❴P✡❛❝❢◗❦✌❜❑❵q♣❩❦✡❞✝❛❝❜➢❵❍❘❚❦✫æ➎Û✫ü✩➷
♣❍s❯♣➤❵❍❦✌❬ÿ➮➂❖❑❵q❛r❱❝❱➵✇❭➱✟⑩ ùë❡✛❜❴❢★✉✱❡✛❧ ❒ ❙❲❧✎❢◆✇✛ù✫⑩ ☎❣⑩❯⑤✌ô❲✝ô ✆ ✃ ⑩❉ý☎❙❃♣❍❫❴P✎❘★❡✛❵➤❰
❵❍❦✌❬➢❪◗❵❉❘❴❡❭♣ ❒ ❦✡❦✡❜✜❬➢❡❭❢◗❦✖❛❝❜✝❱r❙◗P✌❡✛❱❭❡❲❫◗❵❍❘❴❙❲❧q❛➛❵ts❑❰➑P✗❙❲❜❑❵q❧❍❙❭❱r❱❝❦✌❢✱❧❍❙❑❡❲❢❚♣
❛❝❜✐❵q❘❚❦✯♥✱⑦➢⑩
è✹❦✌❞❭❡✛❧✎❢◗❛❝❜❚❞ ♣➤❛❝❬➢❛r❱①❡✛❧Ò❢◗❙❭❬➢❡❲❛r❜❣♣✡✇☛❵q❘❚❦Ó➱✖❛❝❪❣❦➐♣➤❮➝❙❲❧q❱①❢×❢◗❙❭❬➢❡❲❛r❜
❳➬❧q❙❲❬ ➘ ➱➝❨❥❰✟✞④♣❍❘❴❡❲❧❍❦➐♣❉♣❍❙❲❬➢❦❩P✎❘❣❡✛❧✎❡❲P➟❵q❦✡❧q❛❝♣➤❵❍❛①P✡♣✩❮✹❛➛❵q❘★❧q❙❭❡❭❢❃❵❍❧✎❡✛❜❣♣t❰
❪Ð❙❲❧❍❵✌û✲❵❍❘❚❦ ❒ ❡❲♣❍❛①P❹❢◗❙❭❬➢❡❲❛r❜çP✡❙❲❜❴♣❍❛①♣t❵✎♣✯❙✛❳✱❡❲❜➓❡✛❧✎P✡♣✜❡✛❜❴❢ë❜❴❙❯❢❚❦✌♣
✠ ➅❾➃➑➃t☛➉ ✡ ☞✌✗☞ ➍✳➍✳➍✹➩ ➇➑➙✎➔❾➔✡➃➑➻➂➧❲➃t➙➐➩ ➙✎➔➐✍
↔ ☞
✎ ➅❾➃➑➃t☛➉ ✡ ☞✌✗☞ ➍✳➍✳➍✹➩ ➧❭➃t↔✝➙✌➩ ➜✌➔✗➦❚➩ ➧❭✏➳ ☞
☞✍✌✂✌ ❊ ➹ ❆ ✂ ➾❚➹t➪❥➽ ☞ ➚ ✄▲ ✂✏✎ ✁ ❊ ➪➝➪✹➶ ✌ ➚ ▲ ✟❥▲ ➽❩➾❚➹t➪❥➽ ✂ ➽✱➶
✘ ✂ ➽ ✂✗✖✖▲✱✘✾▲ ➽❥➾
Ø❉❱❝❙❯❙◗❢✐❪❴❧❍❦✌♦❲❦✡❜❑❵q❛r❙❭❜②❡❲❜❴❢②❬➢❡❲❜❴❡✛❞❭❦✡❬➢❦✡❜❑❵✯➮ÙØ✟➱❩æ ✃ ❛❝❜❑♦❭❙❲❱❝♦❲❦➐♣✡✇❚❡❭♣
r❛ ❜❋è☎ý✱æë✇✩❱r❙◗P✡❡❲❱✳❡✛❜❴❢➒❜❴❡❾❵q❛r❙❭❜❴❡✛❱✳❡✛❫❚❵❍❘❚❙❭❧❍❛r❵❍❛❝❦✌♣✌✇✥♣➤❦✌❧❍♦❯❛①P✗❦★❛r❜❴❢❚❫❴♣t❰
❵❍❧q❛r❦➐♣✡✇➝❡✛❜❴❢➓❧❍❦➐♣➤❦➐❡✛❧✎P✎❘ç❛r❜❣♣t❵q❛➛❵q❫◗❵❍❦➐♣✡⑩ ♠ ❘❚❛①♣➢❛❝♣✧❢❚❫❚❦②❵❍❙❐❛➛❵✎♣➢❪❣❦✌❧➤❰
P✗❦✡❛❝♦❲❦➐❢✝❛❝❬✲❪Ð❙❲❧❍❵q❡❲❜❴P✗❦❭û◆❵❍❘❴❧❍❙❭❫❚❞❲❘❚❙❭❫◗❵✟❬➢❡❲❜❯s✝❪❴❡✛❧❍❵q♣❉❙✛❳❚❵q❘❚❦➺❮❩❙❭❧❍❱①❢
❵❍❘❚❦Ò❪❚❧q❦✡♦❭❦✡❜❑❵❍❛❝❙❲❜✩✇❄❦✌❡❲❧❍❱❝sÔ❮➺❡✛❧q❜❚❛r❜❴❞❴✇❄P✗❧q❛①♣➤❛①♣ë❡✛❜❴❢✕❪Ð❙❭♣➤❵➤❰➑P✗❧q❛❝♣❍❛①♣
❬➢❡❲❜❴❡✛❞❭❦✡❬➢❦✡❜❑❵➝❙✛❳✩❮➺❡❾❵q❦✡❧➺❛r❜❴❜❑❫❴❜❴❢❚❡❾❵q❛r❙❭❜❹❛①♣➺❡✛❜❹❛❝❬➢❪❣❙❭❧➤❵✎❡✛❜❑❵➺❳Ù❡❲P✗❰
❵❍❙❲❧✧❛❝❜✵❘❯❫❚❬✧❡✛❜✵❮❩❦✌❱r❱r❰ ❒ ❦✌❛r❜❚❞❣⑩ÒÑ➎❦☛❘❣❡⑨♦❲❦✂❛❝❢❚❦✡❜❑❵❍❛r✈❴❦✌❢❁❵t❮➝❙ë❡✛❧❍❰
❦✌❡❲♣☎❮✹❘❚❛❝P✎❘➒❛r❜❴P✡❙❲❧q❪❣❙❭❧q❡✛❵❍❦❃❵t❮➝❙❄❪Ð❙✛❵❍❦✌❜❑❵❍❛①❡✛❱❉❡❲❪❚❪❚❱❝❛❝P✌❡❾❵q❛r❙❭❜❴♣☎❙❲❳❥➷ ➘
❪❚❱❝❡❲❜❚❜❚❛❝❜❚❞❴û✐❳➬❙❲❧➢❱❝❙❲❜❚❞➎❵❍❦✌❧❍❬ ❪❚❱❝❡❲❜❚❜❚❛❝❜❚❞❋❙✛❳④❛❝❜◗❳➬❧✎❡❲♣➤❵❍❧q❫❴P➟❵q❫❚❧q❦❹❵q❙
❪❚❧❍❦✌♦❲❦✌❜❭❵➺❙❲❧✹❱❝❦✌♣q♣❍❦✡❜②❵❍❘❚❦✝❧q❛①♣➤❶❄❙❲❳❉ò❣❙❑❙◗❢◗❛❝❜❚❞❣✇❚❡✛❜❴❢✐❳➬❙❲❧✹❧q❦✌❡❲❱➛❰➵❵❍❛❝❬➢❦
❪❚❱❝❡❲❜❚❜❚❛❝❜❚❞★❵❍❙➢♣➤❫❚❪❴❪❣❙❭❧➤❵➺ò❴❙❯❙◗❢❄❦✌♦❲❦✡❜❑❵✹❬✧❡✛❜❴❡❲❞❲❦✌❬✲❦✌❜❑❵✌⑩ ♠ ❘❚❦✫❳➬❙❲❧❍❰
❬✲❦✌❧➝❡❲❧❍❦➐❡✜P✡❙❲❜❴♣❍❛❝❢❚❦✡❧✎♣❥♣❍❫❴P✎❘✐P✡❧❍❛r❵❍❦✌❧❍❛①❡✜❡❭♣❩P✡❱r❛❝❬✧❡❾❵❍❛①P✫P✎❘❴❡✛❜❚❞❭❦④❡❲❜❴❢
❪❣❙❭❪❚❫❚❱①❡❾❵❍❛❝❙❲❜ëP✎❘❴❡❲❜❚❞❲❦❭✇✩❡✛❜❣❢➎❬✧❡⑨s➎❛r❜❯♦❲❙❭❱r♦❭❦✲ò❴❙❯❙❯❢❐❢◗❦✡❳➬❦✡❜❴P✡❦❄❢◗❦✗❰
♣➤❛❝❞❲❜ç❙❭❧✜❦✌♦❲❦✡❜➓❧❍❛❝♦❲❦✌❧★❢◗❦➐♣➤❛❝❞❲❜✥⑩ ♠ ❘❚❦②❱❝❡✛❵➤❵q❦✡❧➢❡✛❧q❦✌❡✂❳Ù❡❲❱r❱①♣✜❫❴❜❴❢◗❦✡❧
❵❍❘❚❦➭❘❴❦✌❡❲❢❚❛r❜❚❞➓❙✛❳✯P✗❧q❛❝♣❍❛①♣✧❬✧❡✛❜❴❡❲❞❲❦✡❬➢❦✌❜❭❵➐✇✹❡✛❜❴❢✵❬✧❡⑨s➓❛❝❜❴P✡❙❲❧q❪❣❙❲❰
❧q❡✛❵❍❦➺❦✡♦❾❡❲P✡❫❴❡❾❵q❛r❙❭❜➢❬➢❡❲❜❚❞❲❦✌❬➢❦✡❜❑❵✌⑩✟✉☎❦✌❧❍❦✜➮➂❡❲♣✳❛❝❜❄è☎ý④æ ✃ ❵q❘❚❦✡❧q❦✹❛❝♣
❵❍❘❚❦✝❜❚❦✌❦✌❢②❵q❙✧❫❚❜❴❢◗❦✌❧q♣➤❵q❡❲❜❴❢②❮✹❘❴❡❾❵✹❵q❘❚❦✜♣➤❵q❡❾❵q❫❴♣✹❙❲❳❉❵q❘❚❦✝❦✡♦❭❦✡❜❑❵☎❛❝♣
❰✖❵❍❘❚❛①♣☎❛①♣☎❦➐♣❍♣❍❦✡❜❑❵q❛❝❡❲❱◆❵❍❙❄♣❍❫❚❪❚❪Ð❙❲❧❍❵☎❵❍❘❴❦✜❡❲P✗❵❍❛❝♦❲❦✝❬✧❡✛❜❴❡❲❞❲❦✌❬✲❦✌❜❑❵☎❙✛❳
❡✛❜❯s✧❛❝❢❚❦✡❜❑❵❍❛r✈❴❦✌❢②❪❚❧q❙ ❒ ❱r❦✌❬➢♣✌⑩
ù❩❦✌❱r❙❾❮ ❮❩❦☛P✡❙❲❜❴P✡❦✡❜❑❵❍❧✎❡❾❵q❦②❙❭❜✵❦✡♦❾❡✛❱❝❫❴❡❾❵q❛r❜❴❞❐❵❍❘❚❦✂❳➬❦➐❡❲♣❍❛ ❒ ❛❝❱❝❛➛❵ts❁❙✛❳
➷ ➘ ❪❚❱①❡✛❜❚❜❚❛❝❜❚❞☛❵q❙➒♣❍❫❚❪❚❪Ð❙❲❧❍❵✯ò❴❙❯❙◗❢❐❦✡♦❲❦✌❜❑❵✯❬➢❡❲❜❴❡✛❞❭❦✡❬➢❦✡❜❑❵➐✇✰❡❲❜❴❢
❫❴♣➤❦➝❵q❘❚❦✹❛r❜❚❳➬❙❲❧q❬➢❡✛❵❍❛❝❙❲❜✲❳➬❧q❙❲❬ ❢◗❦✌❱r❛❝♦❲❦✌❧q❡ ❒ ❱❝❦✌♣✰❙✛❳Ð❵❍❘❚❦☎P✡❫❚❧q❧❍❦✌❜❭❵✖❤❩♥
❪❚❧❍❙❲ät❦✌P✗❵✲Ï Ø✟ü✰Û✝Û✫③✫♣➤❛r❵❍❦❭Ï✑❵❍❙☛♣➤❫❴❪❚❪❣❙❭❧➤❵✝❛r✔❵ ✧✓ Ø✰ü✟Û✝Û✫③④♣❍❛r❵❍❦➢❡✛❛❝❬➢♣
❵❍❙✱❢◗❦✌♦❲❦✌❱r❙❭❪❃❵❍❙❯❙❲❱①♣✑❵❍❙④❘❚❦✡❱❝❪✯❛❝❜✝❦✡♦❾❡❲P✡❫❴❡❾❵q❛r❙❭❜✝❬✧❡✛❜❴❡❲❞❲❦✡❬➢❦✌❜❭❵➐✇✌❪❣❡✛❧❍❰
❵❍❛①P✗❫❚❱①❡✛❧q❱rs➢❬➢❦✗❵✎❡❾❰➵❵❍❙❯❙❲❱①♣❥❡❲❜❴❢✧❳➬❧✎❡✛❬➢❦✌❮❩❙❭❧❍❶◗♣✟❳➬❙❭❧❥❵q❘❚❦ ❒ ❫❚❛r❱①❢◗❛❝❜❚❞✲❙✛❳
♣➤❪Ð❦✌P✡❛➛✈❣P❃❢◗❦➐P✗❛①♣➤❛❝❙❲❜✂♣❍❫❚❪❚❪Ð❙❲❧❍❵✱♣➤s◗♣➤❵❍❦✡❬✧♣✝➮➂③✫❖❚❖ ✃ ⑩
✕ ➅❾➃➑➃t✒➉ ✡ ☞✖➟☞ ➍✳➍✳➍✹➩ ✗❑➔❾➔✛➏❭➇➑➆➞➃t➋➐➩ ➁❑➋❍✘➃ ☞
96
✲ ✳✶✵✙✷✹✸ ✵ ✬✾★✫✮ ✰
✸ ✢✄★✹✰✭✺
❬✲❙◗❢◗❦✌❱❝♣✌⑩
î ◗ Þ ì Ý í ï ❂✆☎ ❖✧Ü ▲ ❀ ❊P✄
◆ ❊P✧
❖ Þ❘
❖ ❅ ❀ ❄✪❅
è✹❦✌♣❍❦✌❡❲❧qP✎❘✂❡✛❜❴❢➭❛r❜❚❜❴❙❾♦⑨❡✛❵❍❛❝❙❲❜☛❛❝❜✂❵❍❘❚❛①♣✫❡✛❧q❦✌❡➢❛①♣✱❡❲P✌P✗❦✌❪◗❵❍❦➐❢➭❡❲♣☎❡❲❜
❦✌♣q♣➤❦✌❜❭❵q❛❝❡❲❱❣❙❭❜❚❞❲❙❭❛r❜❴❞★❡❲P✗❵❍❛❝♦❯❛➛❵q❛r❦➐♣ ❒ s❹♣➤❵q❡❲❶❲❦✌❘❚❙❲❱①❢◗❦✡❧✎♣✖❛❝❜✐❵❍❘❚❦✫✈❴❦✌❱❝❢◆✇
❘❚❦✡❜❴P✡❦✱❵q❘❚❦✡❧q❦✫❮❩❙❭❫❚❱❝❢ ❒ ❦✫❜❚❙★❵❍❘❚❧q❦✌❡✛❵q♣❩❵❍❙★❳➬❦✌❡❭♣➤❛ ❒ ❛❝❱r❛r❵ts❲⑩✖æ➒❡✛❜❯s➢❙✛❳
❵❍❘❚❦➢❪Ð❙✛❵❍❦✌❜❑❵❍❛①❡✛❱✖❫❴♣❍❦✡❧✎♣✌✇✥❘❚❙❾❮➝❦✡♦❲❦✌❧✌✇✑❮➝❙❲❫❚❱①❢❋❜❚❙✛❵ ❒ ❦ ➘t♠ ❱❝❛➛❵q❦✡❧✎❡❾❵q❦
❡✛❜❴❢✧❘❴❦✡❜❴P✡❦④❡✛❜❯s➢➷ ➘ ♣❍❙✛❳ê❵t❮➺❡✛❧q❦✹❮➝❙❲❫❚❱①❢✧❜❚❦✌❦✌❢➢❵q❙ ❒ ❦④❦✌❬ ❒ ❦✌❢❚❢◗❦➐❢
❮✹❛➛❵q❘❚❛r❜✂❫❴♣❍❦✡❧❍❰➵❳➬❧❍❛❝❦✡❜❴❢❚❱rs✧❛❝❜❑❵❍❦✡❧❍❳Ù❡❲P✡❦✌♣✌⑩
❚✷✛✫✕❲❱❳✬✚✧❲✳✡✭✜✧ ❨❩✛✫✭ ✗ ✛✡✧❲✧ ✪❼ ✗ ✛✫✭❭✢ ❽❴❿✛❻ ✕ ❼❾❺
ß í ❂ Ü î Ü ❂❆❂ Ý í ✁ ïq✌Ü ■ ❀ í ❊ ❂ Þ ✁✄✁ ß ❀ ì Þ✛Ý ❀ í✛î ❂ ☎ ❖✧Ü ▲ ❀ ❊P❖
♠ ❘❚❛❝♣❹❡✛❧q❦✌❡❐❛①♣❄P✗❱❝❦✌❡❲❧❍❱❝s➓❧q❦✡❱①❡❾❵q❦✌❢❁❵❍❙ë❵q❘❚❦➭❬➢❙❲❧q❦☛❞❲❦✌❜❚❦✡❧✎❡✛❱④❡✛❧q❦✌❡
❙✛❳❩P✡❧❍❛①♣➤❛①♣✫❪❚❧q❦✡♦❲❦✌❜❑❵❍❛❝❙❲❜➎❡❲❜❴❢➎❬➢❡❲❜❴❡✛❞❭❦✡❬➢❦✡❜❑❵➐⑩ ♠ ❘❚❦✌❧❍❦✲❘❴❡❲♣ ❒ ❦✡❦✌❜
❡➭❞❲❧q❦✌❡✛❵✲❢◗❦➐❡✛❱➺❙✛❳④❮❩❙❭❧❍❶❐❙❲❜å❢◗❦✌P✡❛❝♣❍❛❝❙❲❜❁♣❍❫❚❪❚❪Ð❙❲❧❍❵✲❳➬❙❲❧✧P✗❧q❛❝♣❍❛①♣✜❙❭❧
❢◗❛❝♣q❡❲♣➤❵❍❦✌❧✱❬✧❡✛❜❣❡✛❞❲❦✌❬➢❦✡❜❑❵✌✇✑❧✎❡✛❜❚❞❭❛r❜❚❞ ❒ ❡❲P✎❶✂❬➢❙❲❧q❦✯❵q❘❴❡✛❜ ❰✹⑥
s❲❦✌❡❲❧q♣✌✇④❡❲❱➛❵q❘❚❙❲❫❴❞❲❘➴❙❲❜❚❱❝s✽❡❁❳➬❧q❡❭P➟❵q❛r❙❭❜✮❙❲❳★❵q❘❚❛❝♣✂❮➝❙❲❧q❶÷❘❴á✛❡❭â ♣☛❡❾❵❍â ❰
❵❍❦✡❬➢❪◗❵q❦✌❢✜❵❍❙❃❡✛❫◗❵q❙❲❬✧❡❾❵q❦❥❞❭❦✡❜❚❦✌❧q❡✛❵❍❛❝❙❲❜✯❙❲❳❴❪❚❱①❡✛❜❴♣✌⑩✰➷✱❜✜❦✡Õ◗P✡❦✡❪◗❵q❛r❙❭❜
❛❝♣➺❵q❘❚❦✜❙❭❜❚❞❲❙❭❛r❜❴❞✲❮➝❙❲❧q❶✐❡❲❛r❬➢❦➐❢☛❡❾❵④❢◗❛①♣q❡❲♣➤❵❍❦✡❧✱❬➢❡❲❜❴❡✛❞❭❦✡❬➢❦✡❜❑❵✹❳➬❙❭❧
❦✡❧q❫❚❪◗❵❍❛❝❙❲❜❣♣❉❙❲❳❴❵q❘❚❦☎➱✟❙❲❪Ð❙◗P✡❡❾❵q❦✡❪Ð❦✗❵q❱❑♦❭❙❲❱①P✡❡❲❜❚❙④❛❝❜➢æ☛❦✗Õ◗❛①P✗❙❴✇❲❮✹❘❚❦✡❧q❦
❵❍❘❚❦★❵❍❦➐P✎❘❚❜❚❛①ð❭❫❴❦✌♣✫❫❴♣❍❦✌❢❋❡✛❧q❦ ❒ ❡❭♣➤❦➐❢➭❙❲❜❋❡✛❜❴♣❍❮➝❦✡❧❍❰➲♣❍❦✗❵✱❪❴❧❍❙❭❞❲❧✎❡✛❬✲❰
❬✲❛❝❜❚❞➴➮➵❨❩❙❲❧❍❵❍❦➐♣✡✇✱❖◗❙❲❱❝❜❚❙❲❜✥✇✆☎ æ➭❡✛❧❍❵❍❜❚❦✌Ú á❲â❲â ✃ ⑩ ♠ ❘❴❛❝♣②❮➝❙❲❧q❶
❛❝♣④❡✛❛❝❬✲❦➐❢☛❡❾❵✱❛❝❜❑❵❍❦✌❞❲❧✎❡❾❵❍❛❝❜❚❞❄❡✧❪❴❱❝❡❲❜❚❜❚❛❝❜❚❞❄❳➬❫❚❜❣P➟❵❍❛❝❙❲❜➒❮✹❛➛❵q❘➭❦✗Õ◗❛①♣t❵❍❰
❛r❜❚❳❞ ✍ ➘ ❖❄♣❍s◗♣t❵q❦✡❬✧♣✌⑩ ♠ ❘❴❦④❱①❡✛❜❚❞❭❫❴❡✛❞❭❦✱❫❴♣❍❦✌❢❄❳➬❙❲❧➺❧q❦✡❪❚❧q❦✌♣❍❦✡❜❑❵q❡✛❵❍❛❝❙❲❜
❛r❜❴P✡❙❲❧q❪❣❙❭❧q❡✛❵❍❦➐♣✩♣❍❙❲❬➢❦❥❬➢❦✌❡❭♣➤❫❴❧❍❦➐♣✩❙❲❳❣❫❚❜❴P✡❦✡❧❍❵q❡❲❛r❜❑❵ts❲✇❾❡❲❜❴❢✝❵q❘❚❦✹♣➤s◗♣➤❰
❵❍❦✡❬✼❘❴❡❭♣➺❵❍❘❚❦✜❪❣❙❲❵❍❦✡❜❑❵q❛❝❡❲❱◆❳➬❙❲❧☎❞❭❦✡❜❚❦✌❧q❡✛❵❍❛❝❜❚❞✧♣➤❛❝❬➢❪❚❱❝❦✯❦✡❬➢❦✡❧q❞❲❦✌❜❴P✗s
❦✡♦❾❡❲P✡❫❴❡❾❵q❛r❙❭❜②❪❴❱❝❡❲❜❴♣✡⑩☎✉✱❙❾❮❩❦✌♦❲❦✌❧✌✇❯❵q❘❚❦★❡❲❪❚❪❚❱❝❛❝P✌❡❾❵❍❛❝❙❲❜➭❡❲❪❚❪Ð❦✌❡✛❧✎♣☎❡❭♣
s❲❦✗❵✹❜❚❙❲❵✹❛r❬➢❪❚❱❝❦✡❬➢❦✡❜❑❵q❦✌❢◆⑩
❤✖♦❾❡❲P✡❫❴❡❾❵q❛r❙❭❜➴❪❴❱❝❡❲❜❚❜❚❛❝❜❚❞÷❛①♣➎❡❲❜Ô❡❲P➟❵q❛r♦❯❛r❵ts✽❵q❘❴❡❾❵❋❘❴❡❲♣❋❡✛❱❝❧q❦✌❡❲❢❚s
❒ ❦✌❦✡❜✂❫❴♣❍❦✌❢✐❮✹❛➛❵q❘✂❵❍❘❚❦✯➱❥❱❝❡❲❜❚❜❚❛❝❜❚❞✲P✡❙❲❬➢❬✜❫❴❜❚❛➛❵ts❹❰✖❛➛❵☎❛①♣✹❫❴♣❍❦✌❢✂❡❭♣
❡✛❜✂❦✗Õ❚❡✛❬➢❪❚❱❝❦✝❮✹❛r❵❍❘❚❛❝❜✂❵❍❘❚❦✝❧q❦✌P✡❦✡❜❑❵✹❵❍❦✡Õ❯❵ ❒ ❙❑❙❭❶➎➮●✍✫❘❴❡✛❱❝❱❝❡ ❒ ✇❣ý④❡✛❫✥✇
☎ ♠ ❧q❡⑨♦❭❦✡❧✎♣➤❙ ✃ ⑩✰❖ ➘ ➷④③④❤❥Ö ➮ÙØ✟❢◗❦✌Ú✗❰➤Û④❱r❛❝♦❾❡✛❧q❦✌♣
✃
❛❝♣✧❡❋♣➤s◗♣➤❵❍❦✌❬ á✛❵❍â❭❘❴â ❡✛❵❄❛❝♣✧P✡❫❚❧q❧❍❦✌❜❭❵q❱rsë❫❴❜❴❢◗❦✡❧q❞❲❙❭❛r❜❴❞➭❵❍❦➐Ü✡♣tÝ✟❵✎Þ✛♣✧ßrà❥❛❝❜❁á❲â❲❧qâ❭❦✌ã ❡❲❱
✈❴❧❍❦④✈❴❞❭❘❭❵q❛r❜❴❞➢♣❍❛r❵❍❫❴❡✛❵❍❛❝❙❲❜❴♣✌⑩ ➘ ❵➺❪❴❧❍❙◗❢◗❫❴P✡❦✌♣➺❪❚❱①❡✛❜❣♣✡✇◗❬➢❙❲❜❴❛➛❵q❙❲❧✎♣❩❦✡Õ❯❰
❦✌P✗❫❚❵❍❛❝❙❲❜✥✇✩❡❲❜❴❢❋❛r❜❑❵q❦✡❧✎❡❲P➟❵✎♣❃❮✹❛➛❵q❘❐❘❑❫❴❬➢❡❲❜➎❦✡Õ◗❪❣❦✌❧➤❵✎♣❃❵❍❙☛♣❍❫❚❪❚❪Ð❙❲❧❍❵
❬➢❡❲❜❴❡✛❞❭❦✡❬➢❦✡❜❑❵➝❛r❜✐❳➬❙❲❧q❦✌♣➤❵❥✈❴❧q❦④✈❴❞❲❘❑❵❍❛❝❜❚❞❣⑩ ♠ ❘❴❦✫❛❝❜❴♣❍❛r❞❭❘❑❵q♣➝❧❍❦➐♣➤❫❴❱➛❵❍❰
❛r❜❚❞❄❳➬❧❍❙❭❬×❵q❘❚❦★❖ ➘ ➷④③④❤❥Ö ❛❝❬✲❪❴❱r❦✌❬✲❦✌❜❑❵q❡❾❵q❛r❙❭❜➭❮➝❙❲❫❚❱①❢☛P✗❦✌❧➤❵✎❡✛❛❝❜❚❱rs
P✗❙❲❜❑❵q❧❍❛ ❒ ❫◗❵q❦❃❵❍❙✧❵q❘❚❦★♣❍❫❴P✡P✡❦✌♣q♣✹❙✛❳✖❡✲ò❴❙❯❙◗❢☛❦✌♦❲❦✡❜❑❵☎❬✧❡✛❜❣❡✛❞❲❦✌❬➢❦✡❜❑❵
❡✛❪❚❪❚❱❝❛①P✡❡❾❵q❛r❙❭❜✥⑩
✝ ï í➐ì Ü ▲ ❊◗ï❍Ü ❈✗í ❋ï ➢
❖ Þ✛ß ❀❃❂ Þ❾Ý ❀ í❾î ❀ î Ý ❅ Ü ✁ ï í ❏✗ßrÜ✺❖ Þ❾ï❍Ü✎Þ ☎ ß í ❄
❖➢Ü ▲ ❀ ❊P❖
➘ ❜★❞❭❦✡❜❚❦✌❧q❡❲❱➂✇⑨❪❚❱①❡✛❜❴♣✳❡❲❜❴❢★❪❚❧❍❙◗P✡❦✌❢◗❫❚❧q❦✌♣❉❛❝❜✜❵q❘❚❦☎❡✛❧q❦✌❡④❡✛❧q❦❩❜❚❙❲❵✟❳➬❙❲❧❍❰
❬➢❡❲❱r❛①♣❍❦✌❢✜❡❲❜❴❢★❛➛❳❣❵❍❘❚❦✌s✜❦✗Õ◗❛①♣t❵✳❡✛❧q❦➺♣t❵✎❡❾❵❍❦➐❢✜❛❝❜➢❜❴❡❾❵q❫❚❧✎❡✛❱❯❱❝❡❲❜❚❞❲❫❣❡✛❞❲❦❭⑩
✉☎❙❾❮➝❦✡♦❲❦✌❧✌✇✰❵q❘❚❦✡❧q❦②❡❲❧❍❦✂♣❍❙❲❬➢❦②③❃❖❚❖ç❵❍❘❴❡✛❵✧❦✗Õ◗❪❣❦➐P➟❵➢❦✡❬➢❦✡❧q❞❲❦✌❜❴P✗s
❧❍❦➐♣➤❪Ð❙❲❜❴♣❍❦✯❪❚❱①❡✛❜❣♣✫❡❭♣④❡❲❜➭❛❝❜❚❪❚❫◗❵➐✇✩❡✛❜❴❢➒❦✡♦❾❡✛❱❝❫❴❡✛❵❍❦✯❵q❘❚❦✡❬ ❒ s➭P✌❡✛❱r❰
P✗❫❚❱①❡❾❵q❛r❜❚❞❹❵❍❘❚❦✲✌❦ ✑☞ ❦✌P✗❵✌⑩ ♠ ❘❚❛①♣✫❛r❬➢❪❚❱❝❛❝❦✌♣④❵❍❘❚❦✲❦✗Õ◗❛①♣t❵q❦✡❜❴P✡❦★❙❲❳❩♣❍❙❲❬➢❦
❪❚❱❝❡❲❜❹❳➬❙❭❧❍❬★❫❚❱①❡❾❵❍❛❝❙❲❜❣♣✡⑩
✞ ✁✄✁ ï í ✁ ï ❀ Þ❾Ý➑Ü î Ü ❂❆❂ ❈✗í ï ✞ ✡ ✁ ßrÞ î❣î ❀ ❲
î ❄ ❂ í ß ❊◗Ý ❀ í✛î ☎ ❖➢Ü ▲ ❀ ❊P❖
æ➭❡✛❜❯sÔ❙❲❳✐❵q❘❚❦Ò❛r❜❴❪❚❫◗❵q♣ç❧❍❦➐ð❭❫❴❛r❧q❦✌❢✾❛r❜þ❡➄❪❚❱①❡✛❜❴❜❚❛r❜❴❞➄❢❚❙❲❬✧❡✛❛❝❜
❬✲❙◗❢◗❦✌❱✱❘❴❡⑨♦❭❦ ❒ ❦✌❦✡❜Ò❳➬❙❭❧❍❬✧❡❲❱r❛①♣➤❦➐❢✵❛r❜Ò❪❣❡❲♣➤❵✐❢◗❦✌P✡❛❝♣❍❛❝❙❲❜÷♣❍❫❚❪❚❪Ð❙❲❧❍❵
♣➤s◗♣➤❵❍❦✡❬✧♣✌û✩❡❲P✗❵❍❛❝❙❲❜❴♣✟❡❲❜❴❢✜❬➢❦✡❵❍❘❚❙◗❢❚♣✳❧q❦✡❪❴❧❍❦➐♣➤❦✌❜❭❵q❛r❜❴❞✱❧q❦✌♣❍❙❲❫❚❧✎P✗❦➐♣✥❵q❙
❒ ❦✹❫❴♣❍❦✌❢✜❳➬❙❲❧❥❦✡♦❾❡❲P✡❫❴❡❾❵q❛r❙❭❜✥✇❲❡✛❜❣❢✜❙ ❒ ät❦✌P✗❵q♣✳♣❍❫❴P✎❘✧❡❲♣✳P✡❡❲❧❍❧q❛❝❦✡❧✎♣❉❡❲❜❴❢
✻✽✦✩★✫✬✄✛ ✦✩★ ✵
✓✖✕ ✘❻ ✗✚✙❚❽◗✘❻ ✗ ✕✜✛✹✢
❴❽ ❿✛❻ ✕ ❼❾✄❺ ✿ ✩✡✗ ✭ ✩
♠ ❘❚❦➢❜❚❦✌❦✌❢➒❳➬❙❲❧✝❪❚❱①❡✛❜➎❞❲❦✡❜❴❦✡❧✎❡❾❵❍❛❝❙❲❜❋♣➤❫❴❪❚❪❣❙❭❧➤❵✝❛❝❜➒❵q❘❚❦➢❧❍❦➐❡✛❱r❰➂❵q❛r❬➢❦
♣qP✗❦✡❜❣❡✛❧q❛r❙➢❛①♣✱❢◗❛❝❧❍❦➐P➟❵q❱rs✂♣➤❫❴❪❚❪❣❙❭❧➤❵q❦✌❢ ❒ s✂Ø✰ü✟Û✝Û✫③④♣❍❛r❵❍❦✯❧q❦✌♣❍❦✌❡❲❧qP✎❘✥û
Ï ✍✫❛r♦❭❦✡❜✲❵❍❘❚❦✹❱①❡✛❧q❞❲❦➝♦❾❡✛❧q❛r❦✡❵ts✯❙✛❳✑❪Ð❙❭♣q♣➤❛ ❒ ❱❝❦➺♣qP✗❦✌❜❴❡✛❧q❛❝❙❭♣✰❞❲❦✌❜❚❦✡❧✎❡❾❵q❛r❜❚❞
ò❣❡❭♣➤❘➓ò❴❙❯❙❯❢❴♣✡✇✟❵q❘❚❦✂❪❚❧❍❦✡❰➂ò❣❙❑❙◗❢ë❞❭❦✡❜❚❦✌❧q❡✛❵❍❛❝❙❲❜ë❙❲❳④❡✛❱❝❱➝❵❍❘❚❦✂P✗❙❭❧❍❧q❦✗❰
♣❍❪❣❙❭❜❴❢◗❛❝❜❚❞✜❦✌❬✲❦✌❧❍❞❭❦✡❜❴P✡s★❪❴❱❝❡❲❜❴♣❥❛①♣➝❙❲❫◗❵➝❙✛❳✥❧q❦✌❡❭P✎❘✥ÏÐ➮➂Ø✟ü✰Û✝Û✫③✫♣➤❛r❵❍❦
❮➝❙❲❧q❶❑❪❴❱❝❡❲❜✥✇✥❪❴❡❲❞❲❦ á ⑥ ✃ ⑩ ♠ ❘❚❛①♣✝❛❝❬➢❪❚❱r❛❝❦✌♣✯❵❍❘❴❡✛❵✝❵❍❘❚❦❹❬✲❙❲❵❍❛❝♦❾❡❾❵❍❛❝❙❲❜
❛①♣✜♣❍❛r❬➢❛❝❱❝❡❲❧✝❵❍❙➎❛r❜❴P✡❛❝❢❚❦✡❜❑❵✜❬✧❡✛❜❴❡❲❞❲❦✌❬✲❦✌❜❑❵✯❛r❜ë❵q❘❚❦❹è☎ý④æ ❡✛❪❴❪❚❱r❛r❰
P✡❡✛❵❍❛❝❙❲❜✐❰✳❵❍❙ ❒ ❦✯❡ ❒ ❱❝❦✫❵❍❙✧❪❚❧q❙◗❢◗❫❴P✗❦✝♣❍❙❲❫❚❜❣❢❹❪❚❱①❡✛❜❣♣➺❛r❜✂❧q❦✌❡✛❱Ð❵q❛r❬➢❦
❛❝❜✐❧q❦✌♣❍❪❣❙❭❜❴♣❍❦☎❵❍❙➢❡✲P✗❧q❛❝♣❍❛❝♣❩❛❝❜❹❮✹❘❚❛①P✎❘❹❵q❘❚❦✡❧q❦❃❡✛❧q❦④❡★❜❑❫❴❬ ❒ ❦✡❧✹❡✛❜❴❢
❬➢❛➛Õ②❙✛❳✰❛r❜◗❳➬❙❭❧❍❬✧❡✛❵❍❛❝❙❲❜✂♣t❵q❧❍❦➐❡✛❬✧♣✡⑩
✮✰✕✜✛ ❻ ✧✲✱ ❻➢❽ ✛✫✳ ✴✷✶✡✸ ❽ ✛✣✢ ❽❴❿✛❻ ✕ ❼❾❺
✽✿✾❁❀❃❂ Ý ❀ ❲î ❄ Ý➑Ü ❇ì ◗❅ îÐí ß í ❄ ❀ ì Þ❾ß ❀ ✘î ❈ ï❍Þ ❂ Ý➵ï❋❊ ì Ý ❊◗ï❍Ü ☎ ❖➢Ü ▲ ❀ ❊P✁
❖ ❅ ❀ ❄✪❅
♠ ❘❚❦✌❧❍❦ç❡❲❧❍❦❐❬✧❡✛❜❯s✮❢❚❦✌P✗❛①♣❍❛r❙❭❜➴♣❍❫❚❪❚❪Ð❙❲❧❍❵➒♣➤s◗♣➤❵❍❦✌❬➢♣☛❵❍❘❴❡✛❵➒❘❴❡⑨♦❲❦
❒ ❦✡❦✡❜☛P✡❧❍❦➐❡❾❵❍❦➐❢✧❵q❙✲❘❴❦✡❱❝❪②❛❝❜✐ò❴❙❯❙❯❢②❦✌♦❲❦✡❜❑❵➺❬✧❡✛❜❴❡❲❞❲❦✡❬➢❦✌❜❭❵✹❛❝❜❹❵q❘❚❦
♥✱⑦➢✇✑Ø❚❧✎❡✛❜❣P✗❦✜❡❲❜❴❢☛ý☎❦✡❵❍❘❚❦✌❧❍❱①❡✛❜❴❢❴♣④❡❲❱r❙❭❜❚❦❄➮Ùè✹❙ ❒ á✛â❭â❑é ✃ ⑩ ♠ ❘❚❦➐♣➤❦
③❃❖❚❖➓❡❲❧❍❦✐❵ts❯❪❚❛①P✡❡✛❱❝❱❝s ✍ ➘ ❖❯❰ ❒ ❡❭♣➤❦➐❢➓♣❍❛r❬★❫❚❱①❡❾❵❍❛❝❙❲❜å♣➤s◗♣➤❵❍❦✌❬➢♣✲❮✹❛r❵❍❘
❫❴♣❍❦✡❧❍❰➂❳➬❧q❛❝❦✡❜❴❢◗❱❝s②❛r❜❑❵❍❦✌❧➤❳Ù❡❭P✗❦➐♣✡⑩ ♠ ❘❴❦✡s✂P✡❡✛❜➒❛r❜◗❳➬❙❭❧❍❬ö❙❭❜✂ò❴❙❯❙❯❢➒❢◗❛❝♣➤❰
❵❍❧q❛ ❒ ❫◗❵❍❛❝❙❲❜❣♣✡✇✳❛①❢◗❦✡❜❑❵q❛➛❳➬sç❪Ð❙❲❪❴❫❚❱❝❡✛❵❍❛❝❙❲❜✥✇✟❵q❧q❡❲❜❴♣❍❪❣❙❭❧➤❵★❡❲❜❴❢ç❪❴❧❍❙❭❪❣❦✌❧➤❰
❵❍❛❝❦✌♣☛❡✛❵②❧q❛①♣➤❶ ✼✯❦✌♦⑨❡❲❱r❫❣❡❾❵❍❦❋❵q❘❚❦❐❱r❛❝❶❲❦✌❱rsÒ❦✺Ð☞ ❦➐P➟❵❍❛❝♦❲❦✌❜❚❦✌♣q♣②❙✛❳★ò❴❙❯❙❯❢
❢◗❦✡❳➬❦✡❜❴P✡❦✌♣✝❦✡❵qP✛⑩☛❨❩❙❲❬➢❬✜❫❴❜❚❛❝P✌❡❾❵q❛r❙❭❜ ❒ ❦✗❵t❮➝❦✡❦✌❜åÏ ❡❲P✗❵❍❙❭❧q♣✌Ï◆❛❝♣✯♦❲❦✡❧qs
❛❝❬✲❪Ð❙❲❧❍❵q❡❲❜❑❵❩❛❝❜✧ò❴❙❯❙◗❢✧❦✡♦❭❦✡❜❑❵❩❬✧❡✛❜❣❡✛❞❲❦✌❬➢❦✡❜❑❵④➮Ù❡❭♣❥❛❝❜❄❙❲❵❍❘❚❦✌❧❩❛❝❜❴P✗❛r❰
❢◗❦✌❜❭❵✗ñ❾P✡❧❍❛①♣➤❛①♣➺❬✧❡✛❜❴❡❲❞❲❦✌❬✲❦✌❜❑❵ ✃ ❡✛❜❴❢✂❘❴❦✡❜❴P✡❦✯♣➤s◗♣➤❵❍❦✡❬✧♣✹❡❲❧❍❦✯❡❲❛r❬➢❦✌❢
❡❾❵✰❛r❜❚❳➬❙❲❧q❬➢❡✛❵❍❛❝❙❲❜✯❢◗❛①♣q♣➤❦✌❬✲❛❝❜❴❡✛❵❍❛❝❙❲❜✜❡❲❬✲❙❭❜❚❞☎❦✌❬✲❦✌❧❍❞❭❦✡❜❴P✡s✫♣❍❦✡❧q♦❯❛❝P✡❦✌♣
❡✛❜❣❢✐P✡❙❲❜❚❜❚❦➐P➟❵q❦✌❢②❙❲❧q❞❭❡❲❜❚❛❝♣q❡❾❵q❛r❙❭❜❴♣✌⑩
♠ ❘❚❦❩❜❑❫❴❬ ❒ ❦✡❧✰❙✛❳❴❢◗❦➐P✗❛①♣➤❛❝❙❲❜✜♣❍❫❚❪❚❪Ð❙❲❧❍❵✰♣❍s◗♣t❵q❦✡❬✧♣✩♣❍❫❚❞❭❞❲❦✌♣➤❵q♣❉❡☎❘❚❛❝❞❲❘
❱❝❦✡♦❲❦✌❱❭❙❲❳◗❵❍❦➐P✎❘❚❜❚❙❲❱❝❙❲❞❭❛❝P✌❡✛❱❭❛r❜◗❳➬❧✎❡❲♣➤❵❍❧q❫❴P✗❵❍❫❚❧q❦❲⑩✩✉✱❙❾❮❩❦✌♦❲❦✌❧✌✇➐❧❍❦➐❡✛❱r❰➂❵q❛r❬➢❦
❫❴♣❍❦➢❙✛❳✖❵q❦✌P✎❘❚❜❚❙❭❱r❙❭❞❲s✂❮✹❛r❵❍❘❚❛❝❜❋❵❍❘❚❦✧♣❍❦✌P➟❵q❙❲❧❃❡✛❪❚❪Ð❦✌❡❲❧q♣④❵q❙ ❒ ❦➢❵q❡❲❧➤❰
❞❲❦✡❵❍❦➐❢✲❡✛❵❥❢◗❛①♣❍♣❍❦✡❬➢❛❝❜❴❡❾❵q❛r❜❚❞✝❛❝❜◗❳➬❙❭❧❍❬✧❡❾❵q❛r❙❭❜✲❡ ❒ ❙❭❫◗❵✳❵❍❘❴❦✹❫❚❜◗❳➬❙❲❱①❢◗❛❝❜❚❞
P✗❧q❛①♣➤❛①♣✳❵q❙✯❵❍❘❴❦✱❧✎❡✛❜❚❞❭❦☎❙✛❳✥❦✌❬✲❦✌❧❍❞❭❦✡❜❴P✡s★♣❍❦✡❧q♦❯❛❝P✡❦✌♣✳❵q❘❴❡❾❵➺❡✛❧q❦✱P✡❡❲❱r❱❝❦✌❢
❫❚❪Ð❙❲❜✂❵q❙❹❡❲♣q♣➤❛①♣➤❵✌⑩➺ý☎❙❄♣❍s❯♣➤❵❍❦✌❬✧♣✱♣❍❦✡❦✡❬×❵❍❙❄❦✡Õ◗❛❝♣➤❵☎❵❍❘❣❡❾❵✱❪Ð❦✡❧❍❳➬❙❲❧q❬
♣❍❫❚❪❚❪Ð❙❲❧❍❵❉❳➬❙❭❧❉ò❣❙❑❙◗❢✜❦✡♦❲❦✌❜❑❵❉❬✧❡✛❜❴❡❲❞❲❦✌❬✲❦✌❜❑❵✰❛❝❜✜❞❭❦✡❜❚❦✌❧q❡❲❱➂✇⑨❙❲❧✰❦✡♦❾❡❭P➟❰
❫❴❡✛❵❍❛❝❙❲❜✲❪❚❱❝❡❲❜❚❜❚❛❝❜❚❞❃❳➬❙❲❧✟ò❣❙❑❙◗❢★❦✡♦❭❦✡❜❑❵q♣✟❛❝❜➢❪❴❡✛❧❍❵❍❛①P✗❫❚❱①❡✛❧➐✇ ❒ s✯❞❭❦✡❜❚❦✌❧➤❰
❡❾❵q❛r❜❴❞✝❪❚❱❝❡❲❜❴♣✳❛❝❜✧❧❍❦➐♣➤❪Ð❙❲❜❣♣➤❦➝❵❍❙★❡✝♣➤❪Ð❦✌P✡❛➛✈❣P✹❢❚❛❝♣q❡❲♣➤❵❍❦✌❧✌⑩ ➘ ❜❴❢◗❦✌❦✌❢◆✇❭❛r❜
❵❍❘❴❦➢❡❲❧❍❦➐❡❄❙❲❳✖ò❴❙❯❙◗❢➒❦✌♦❲❦✡❜❑❵❃❬✧❡✛❜❴❡❲❞❲❦✡❬➢❦✌❜❭❵➐✇✑❮❩❦✧P✗❙❭❫❚❱①❢☛✈❴❜❴❢➎❜❚❙
❦✡♦❯❛①❢◗❦✡❜❣P✗❦❃❵❍❘❣❡❾❵☎❵q❘❚❦✡❧q❦✝❦✗Õ◗❛①♣t❵✎♣☎♣➤s◗♣➤❵❍❦✌❬➢♣✹❵❍❘❴❡✛❵✱P✡❡✛❜✂♦❾❡✛❱❝❛①❢❚❡❾❵q❦✝❙❲❧
♣❍❛r❬★❫❚❱❝❡✛❵❍❦✯❪❴❧❍❦✡❰➵❦✡Õ❯❛①♣➤❵❍❛❝❜❚❞✧❦✡♦❾❡❲P✡❫❴❡❾❵q❛r❙❭❜✂❪❚❱❝❡❲❜❴✘♣ ❴✼ ❵q❘❴❡❾❵✱❛①♣✱♣❍s◗♣t❵q❦✡❬✧♣
❵❍❘❣❡❾❵✹❛r❜❴❪❚❫◗❵☎❮➺❡❾❵❍❦✌❧✹❢◗❛❝♣➤❵❍❧q❛ ❒ ❫◗❵❍❛❝❙❲❜②❬➢❙◗❢◗❦✡❱①♣✡✇❚❡❲❜❴❢②♣➤❛❝❬✜❫❚❱①❡❾❵q❦✱❵q❘❚❦
❦✗Õ◗❦➐P✗❫◗❵q❛r❙❭❜➓❙✛❳✫❢◗❛❝♣q❡❲♣➤❵❍❦✌❧★❪❴❱❝❡❲❜❴♣✜❛❝❜➓❧q❦✌❡❲❱➝❵❍❛❝❬✲❦❭✇❥❡✛❜❣❢ç❦✡♦❾❡❲❱r❫❴❡✛❵❍❦
❵❍❘❴❦✡❬✂⑩
❍ Þ❾Ý➑Þ❹❬Þ ❾■ Þ ❀ ß❝Þ✤❏ ❀ ß ❀ Ý ❑☛Þ î ▲ ▼ ❊❴Þ❾ß ❀ ●Ý ❑ ☎ ❖✧Ü ▲ ❀ ❊P✂
❖ ❅ ❀ ❄❉❅
③✫❡❾❵q❡✧❳➬❧❍❙❭❬✼❬➢❦✡❵❍❦✡❙❭❧❍❙❭❱r❙❭❞❲❛①P✡❡❲❱◆❪❚❧❍❦➐❢◗❛①P➟❵❍❛❝❙❲❜❣♣✡✇✑❢❚❡❾❵✎❡❄P✗❙❭❜❴P✗❦✌❧❍❜❴❛r❜❚❞
❪Ð❙❲❪❚❫❚❱①❡❾❵q❛r❙❭❜➭❢◗❦✡❜❣♣➤❛r❵❍❛❝❦✌♣✌✇❣❪Ð❙❲❪❴❫❚❱❝❡✛❵❍❛❝❙❲❜➒P✎❘❣❡✛❧✎❡❲P➟❵q❦✡❧q❛❝♣➤❵❍❛①P✡♣✌✇◗❪❚❘❯s◗♣➤❛r❰
P✡❡❲❱❩❡❭♣❍♣❍❦✗❵✎♣❹➮Ù♣q❡❾❳➬❦ ❒ ❫❚❛❝❱❝❢◗❛❝❜❚❞➎❦✗❵qP ✃ ❡❲❜❴❢ë❦✡♦❾❡❭P✗❫❴❡✛❵❍❛❝❙❲❜ë❧q❙❲❫◗❵q❦✌♣✯❛①♣
❧q❦✌❡❲❢❚❛r❱❝s➒❡⑨♦❾❡✛❛❝❱❝❡ ❒ ❱❝❦❲⑩②Û④❜❋❵❍❘❴❦✧❙✛❵❍❘❴❦✡❧✯❘❴❡❲❜❴❢◆✇✰❮✹❘❚❛❝❱r❦✧❙ ❒ ❵q❡❲❛r❜❚❛❝❜❚❞
❢❚❡✛❵q❡✧❳➬❙❭❧✫♣❍❛❝❬✜❫❚❱①❡❾❵q❛r❙❭❜➭❛❝♣✫❪Ð❙❭♣q♣➤❛ ❒ ❱❝❦❲✇✑❛r❵✫❛①♣❃P✗❫❚❧q❧❍❦✌❜❑❵❍❱❝s✐❜❚❙❲❵✫❪Ð❙❭♣➤❰
♣❍❛ ❒ ❱❝❦✧➮Ù❡❭P✡P✗❙❭❧q❢❚❛r❜❚❞✜❵❍❙✧Ø✟ü✰Û✝Û✫③✫♣➤❛r❵❍❦ ✃ ❵❍❙➢❞❲❦✌❜❚❦✡❧✎❡❾❵q❦④❫❚❪✐❵❍❙✧❢❚❡✛❵❍❦
❬➢❙◗❢◗❦✡❱①♣✝❙✛❳➝❮➝❡✛❵❍❦✌❧✝❱r❦✌♦❲❦✡❱①♣✌✇✥♦❲❦✌❱r❙◗P✗❛r❵❍❛❝❦✌♣❃❦✡❵qP➢❛❝❜❐❧❍❦➐❡✛❱✟❵❍❛❝❬➢❦❲✇✰❢◗❫❚❦
❵❍❙✐❵❍❘❴❦✲❡❲❬✲❙❭❫❚❜❑❵✫❙❲❳❩P✗❙❭❬➢❪❚❫◗❵q❡✛❵❍❛❝❙❲❜❴❡❲❱✩❵q❛r❬➢❦➢❧q❦✌ð❑❫❚❛❝❧❍❦➐❢◆⑩✝✉☎❦✡❜❣P✗❦
❡✛❜❯s✜♣❍❛❝❬✜❫❚❱①❡❾❵q❛r❙❭❜✲♣❍s◗♣t❵q❦✡❬✧♣✳❮➝❙❲❫❚❱①❢✜❜❴❦✡❦✌❢★❵q❙✝❫❴♣➤❦✹❪❴❧❍❦➐P✗❙❲❬➢❪❚❫❚❵❍❦✌❢
✞
✞
97
❈ ➪❩➽ ❆◆❊➤●④■ ➹t➪❥➽ ■
➘ ❜☛❵❍❘❚❛①♣✫❪❴❡✛❪Ð❦✡❧✫❮➝❦✜❛❝❜❭❵q❧❍❙◗❢◗❫❣P✗❦✌❢➎❡❹♣❍❦✗❵❃❙✛❳❥P✡❧❍❛r❵❍❦✌❧❍❛①❡➢❳➬❙❲❧❃❦✌♦⑨❡❲❱r❫❚❰
❡❾❵❍❛❝❜❚❞✜❵❍❘❚❦④❳➬❦✌❡❭♣➤❛ ❒ ❛❝❱r❛r❵ts✧❙✛❳✩❛❝❜❑❵❍❧q❙◗❢◗❫❴P✗❛❝❜❚❞★❪❚❱❝❡❲❜❚❜❚❛❝❜❚❞★❵❍❦✌P✎❘❴❜❚❙❲❱❝❙❲❞❭s
❛r❜❑❵❍❙☛❡❲❜❐❡❲❪❚❪❚❱❝❛❝P✌❡❾❵q❛r❙❭❜❐❡❲❧❍❦➐❡❚⑩✧Ñ❋❦❹❡✛❪❴❪❚❱r❛❝❦✌❢➎❵q❘❚❦✌♣❍❦❄P✗❧q❛➛❵q❦✡❧q❛❝❡✐❵q❙
❵t❮❩❙②❡✛❪❚❪❚❱❝❛①P✡❡❾❵q❛r❙❭❜➎❡❲❧❍❦➐❡❲♣④❮✹❘❚❛❝P✎❘å➮Ù❡❭♣④s❭❦✗❵ ✃ ❘❣❡⑨♦❲❦✜❜❴❙✛❵✯♣➤❦✌❦✡❜❋➷ ➘
❪❚❱❝❡❲❜❚❜❚❛❝❜❚❞✐❡❲❪❚❪❚❱❝❛❝P✌❡❾❵q❛r❙❭❜❴♣✡⑩✫➷☎❱r❵❍❘❚❙❭❫❚❞❲❘➒❮✹❛➛❵q❘➭❵❍❘❴❦✲❡❲❪❚❪❚❱❝❛❝P✌❡❾❵❍❛❝❙❲❜❣♣
P✗❙❲❜❣♣➤❛①❢◗❦✡❧q❦✌❢✯❵❍❘❚❦✹❛❝❜❑❵❍❧q❙◗❢◗❫❴P➟❵q❛r❙❭❜★❙❲❳Ð➷ ➘ ❪❚❱①❡✛❜❴❜❚❛r❜❴❞✫❮➺❡❲♣❉❵q❘❚❙❲❫❚❞❭❘❑❵
❵❍❙ ❒ ❦✹❳➬❦✌❡❭♣➤❛ ❒ ❱❝❦✜➮➬❮✹❛r❵❍❘✐♣➤❙❭❬✲❦✱❧❍❦➐♣➤❦✌❧❍♦❾❡❾❵q❛r❙❭❜❴♣ ✃ ✇❾❵q❘❚❦✱❦✡Õ❯❦✌❧qP✡❛❝♣❍❦☎❡✛❪❚❰
❪❣❦➐❡✛❧✎♣✰❵❍❙✝❛❝❱r❱❝❫❴♣➤❵❍❧✎❡❾❵❍❦☎❵❍❙✝❫❴♣✟❵q❘❚❦✱❢◗❛rú❄P✗❫❚❱r❵ts✲❛r❜➢✈❴❜❣❢◗❛r❜❴❞✯♣➤❫❚❛r❵q❡ ❒ ❱❝❦
❡✛❪❚❪❚❱❝❛①P✡❡❾❵q❛r❙❭❜❄❡✛❧q❦✌❡❭♣✡û✰❡✛❜✐❡❲❪❚❪❚❱❝❛❝P✌❡❾❵q❛r❙❭❜✧❬✜❫❴♣➤❵➺♣qP✗❙❭❧❍❦✱❮❩❦✌❱r❱Ð❙❲❜❹❡❲❱r❱
❵❍❘❚❧q❦✡❦➢❡❲♣❍❪❣❦➐P➟❵✎♣✡û④❬✲❙❲❵❍❛❝♦❾❡❾❵❍❛❝❙❲❜✩✇✑❵❍❦✌P✎❘❴❜❚❙❲❱❝❙❲❞❭❛❝P✌❡✛❱✰❛r❜◗❳➬❧✎❡❲♣➤❵❍❧q❫❴P✗❵❍❫❚❧q❦
❡✛❜❴❢✐❶❯❜❚❙❾❮✹❱r❦➐❢◗❞❲❦✫❦✡❜❴❞❲❛❝❜❚❦✡❦✌❧❍❛❝❜❚❞✧❡❲♣❍❪❣❦➐P➟❵✎♣✡⑩
❖❯❙❲❬➢❦✧❡❲♣❍❪❣❦➐P➟❵✎♣❃❙✛❳➝❵❍❘❚❦❄P✡❧❍❛r❵❍❦✌❧❍❛①❡❹❮➝❦✡❧q❦ ❒ ❡❲♣❍❦✌❢➎❙❲❜➎❵❍❘❴❙❭♣❍❦➢❵❍❘❴❡✛❵
❮❩❙❭❫❚❱❝❢ ❒ ❦☎❫❴♣❍❦✌❢➢❮✹❘❚❦✌❜❄❡❲♣q♣➤❦➐♣❍♣❍❛❝❜❚❞✝❡✛❜❄❡✛❪❴❪❚❱r❛①P✡❡✛❵❍❛❝❙❲❜✲❳➬❙❲❧❥❵❍❘❚❦✱❛r❜◗❰
❵❍❧q❙❯❢❚❫❴P➟❵q❛r❙❭❜✫❙❲❳❯❡✹⑦✝ù✹❖✑✇➐❡❲♣Ð❵q❘❚❦❥♣q❡✛❬➢❦✳❪❚❧q❙ ❒ ❱❝❦✡❬✧♣◆❙❲❳❑❶❯❜❚❙❾❮✹❱❝❦✌❢◗❞❭❦
❦✡❱❝❛❝P✡❛➛❵✎❡❾❵❍❛❝❙❲❜✐❡✛❜❴❢✐❡⑨♦⑨❡❲❛r❱①❡ ❒ ❛r❱❝❛➛❵ts➢❙❲❳✩❦✗Õ◗❪Ð❦✡❧❍❵❍❛①♣➤❦✫❬✧❡⑨s ❒ ❦✫❦✡♦❯❛①❢◗❦✡❜❑❵✌⑩
➘ ❜åP✡❙❲❜❑❵❍❧q❙❲❱☎❡❲❪❚❪❚❱❝❛❝P✌❡❾❵❍❛❝❙❲❜❣♣✡✇❩❘❚❙❾❮❩❦✌♦❲❦✌❧✌✇❥❡➎❳➬❫❚❧❍❵❍❘❴❦✡❧✧❛❝❬✲❪Ð❙❲❧❍❵q❡❲❜❑❵
❳Ù❡❲P➟❵q❙❲❧✖♣❍❦✡❦✌❬➢♣✖❵❍❙ ❒ ❦☎❵q❘❚❦☎❱❝❦✡♦❲❦✌❱❚❙✛❳✑❵q❦✌P✎❘❚❜❚❙❭❱r❙❭❞❲❛①P✡❡❲❱◗❪❚❧❍❙❭❞❲❧q❦✌♣q♣➤❛❝❙❲❜
❮✹❛➛❵q❘❚❛r❜✯❵❍❘❚❦❩❛r❜❴❢❚❫❴♣t❵q❧❍s❭⑩ ➘ ❜✝❙❭❧q❢◗❦✌❧✥❵q❙☎❛r❜❑❵q❦✡❞❲❧✎❡❾❵q❦✳❪❚❱①❡✛❜❚❜❚❛❝❜❚❞✹❵q❦✌P✎❘◗❰
❜❚❙❲❱❝❙❲❞❭s★❛❝❜❑❵❍❙★❡★P✗❫❚❧q❧q❦✡❜❑❵❍❱❝s➢❘❑❫❴❬➢❡❲❜❹P✗❙❭❜❑❵❍❧q❙❲❱❝❱r❦➐❢✧♣➤s◗♣➤❵❍❦✡❬✂✇❑❵❍❘❴❦✡❧q❦
♣➤❘❚❙❭❫❚❱①❢★❡❲❱r❧q❦✌❡❭❢◗s✝❦✗Õ◗❛①♣t❵✖❘❚❛❝❞❲❘✲❱❝❦✡♦❲❦✌❱❝♣✟❙❲❳❣❵q❦✌P✎❘❚❜❴❙❲❱❝❙❲❞❲❛①P✡❡❲❱❯❫❴♣➤❦✹❡❲❜❴❢
❦✗Õ◗❪❣❦✌❧➤❵q❛❝♣❍❦✱❛❝❜❄❵q❘❚❦✫❛r❜❴❢❚❫❴♣t❵q❧❍s❭✇◗♣➤❫❴P✎❘②❡❲♣❩❦✗Õ❚❡✛❬➢❪❚❱❝❦✌♣❩❙✛❳❉❪❴❡❲♣➤❵➺♣❍❫❴P✗❰
P✗❦✌♣q♣➺❮✹❛r❵❍❘➭➷ ➘ ❵q❦✌P✎❘❚❜❚❙❭❱r❙❭❞❲s❭⑩✖➷Ó❪❴❡✛❧✎❡✛❱❝❱❝❦✡❱✥P✡❡❲❜ ❒ ❦✯❢◗❧✎❡⑨❮✹❜✂❮✹❛➛❵q❘
❵❍❘❚❦❹✈❴❦✡❱①❢❐❙✛❳✱➷✱❫◗❵❍❙❭❜❚❙❲❬➢❛①P②❨❩❙❲❬➢❪❚❫❚❵❍❛❝❜❚❞➓➮Ù⑦❃❦✌❪❚❘❴❡❲❧➤❵ ✷❨❩❘❴❦✌♣q♣
⑥ ✃ ✇❭❮✹❘❚❛①P✎❘❹❛①♣❩❵❍❙➢❢◗❙➢❮✹❛➛❵q❘❹❵q❘❚❦✫❬➢❡❲❜❯❫◗❳Ù❡❲P✗❵❍❫❚❧q❦④❙❲❳✩❙❲❬➢❪❚❫❚❵❍❦✡❧
♣➤á✛s◗â❲♣➤â ❵❍❦✡❬✧♣❩❮✹❘❚❛①P✎❘❄❵q❡❲❶❲❦❃P✡❡❲❧❍❦④❙✛❳✥❵q❘❚❦✡❬✧♣❍❦✡❱❝♦❲❦✌♣➝❛❝❜❄❵q❘❴❡❾❵➺❵❍❘❴❦✡s❄P✡❡❲❜
♣➤❦✌❱➛❳ê❰➑P✗❙❭❜◗✈❴❞❲❫❴❧❍❦❭✇❲♣➤❦✌❱➛❳✥❬✧❡❲❛r❜❑❵q❡❲❛r❜✩✇❭♣❍❦✡❱r❳ê❰➵❘❚❦➐❡✛❱❣❦✗❵qP❲⑩ ♠ ❘❚❦☎❪❚❧q❙✛❵✎❡✛❞❲❙❲❰
❜❚❛❝♣➤❵q♣❩❙✛❳✩➷✱❨✵❪❣❙❭❧➤❵q❧q❡⑨s✜❵❍❘❚❦❃❢◗❦✡❪❴❱r❙❾s❯❬➢❦✡❜❑❵❩❙❲❳✩❡✛❫◗❵q❙❲❜❚❙❭❬➢❛❝P④ð❭❫❣❡✛❱r❰
❛➛❵q❛r❦➐♣✯❡❲♣✯❵❍❘❚❦❹P✡❫❚❱❝❬✲❛❝❜❴❡✛❵❍❛❝❙❲❜ë❙✛❳✹❡✂❵q❦✌P✎❘❚❜❚❙❭❱r❙❭❞❲❛①P✡❡❲❱✳❪❚❧❍❙❭❞❲❧q❦✌♣q♣➤❛❝❙❲❜
❡✛❱❝❙❲❜❚❞✲❮✹❘❚❛①P✎❘✂❵❍❘❚❦✝❪❚❧q❙❲❞❭❧❍❦➐♣❍♣➝❙❲❳✟❡❲❜✂❡❲❪❚❪❚❱❝❛❝P✌❡❾❵q❛r❙❭❜②❡❲❧❍❦➐❡✲P✌❡✛❜ ❒ ❦
❵❍❧✎❡❲P✎❶❲❦➐❢◆⑩✟✉☎❦✡❜❣P✗❦❲✇❯❳➬❙❭❧✹❡✛❜✐❡❲❪❚❪❚❱❝❛❝P✌❡❾❵q❛r❙❭❜✐❡✛❧q❦✌❡✯❵❍❙✧❡❲❢❚❙❲❪◗❵✹❡✜❜❴❦✡❮
♣➤s◗♣➤❵❍❦✡❬ ❛r❜❴P✡❙❲❧q❪❣❙❭❧q❡✛❵❍❛❝❜❚❞❐❡✛❫◗❵q❙❲❜❚❙❭❬✲❛①P❹❳➬❦✌❡❾❵q❫❚❧q❦✌♣✌✇❥❵q❘❚❦➭P✗❫❚❧q❧q❦✡❜❑❵
❵❍❦✌P✎❘❴❜❚❙❲❱❝❙❲❞❭s☎❬✜❫❴♣➤❵❉❡❲❱r❧q❦✌❡❭❢◗s ❒ ❦✳❳Ù❡✛❧✩❡❭❢◗♦❾❡✛❜❴P✡❦✌❢➢➮➬❦✌❞✹❵❍❘❴❦❥P✗❫❚❧q❧q❦✡❜❑❵
♣➤s◗♣➤❵❍❦✡❬ ❬✧❡⑨s✐❘❴❡⑨♦❭❦✯♣➤❙❲❳ê❵t❮➝❡❲❧❍❦✝P✡❙❲❬➢❪❣❙❭❜❚❦✡❜❑❵✎♣☎❮✹❛r❵❍❘☛❛❝❜❭❵q❦✡❱❝❱r❛❝❞❲❦✌❜❑❵
P✎❘❴❡✛❧✎❡❲P✗❵❍❦✡❧q❛①♣t❵q❛❝P✌♣ ✃ ⑩ ♠ ❘❚❛①♣✱♣❍❦✡❦✌❬➢♣✹❵❍❘❚❦✜P✌❡❲♣❍❦✯❮✹❛r❵❍❘➭➷ ➘ ❪❚❱①❡✛❜❴❜❚❛r❜❴❞
❵❍❦✌P✎❘❴❜❚❙❲❱❝❙❲❞❭s✫❡❲❱❝♣❍❙❴û✑❵q❘❚❦➝❡❲❪❚❪❚❱❝❛❝P✌❡❾❵q❛r❙❭❜✜❡✛❧q❦✌❡✹❛❝❜✜❞❭❦✡❜❚❦✌❧q❡❲❱❲❬★❫❴♣t❵ ❒ ❦
❵❍❦✌P✎❘❴❜❚❙❲❱❝❙❲❞❭❛❝P✌❡✛❱❝❱rs✲♣➤❙❭❪❚❘❚❛①♣t❵q❛❝P✌❡❾❵❍❦➐❢❄❦✌❜❚❙❲❫❚❞❭❘✧❵❍❙➢♣❍❫❚❪❚❪Ð❙❲❧❍❵❩❶❯❜❚❙❾❮✹❱r❰
❦✌❢◗❞❭❦✱❦✡❜❴❞❲❛❝❜❚❦✡❦✌❧❍❛❝❜❚❞✲❙✛❳✥❵q❘❚❦❃❧❍❦➐ð❭❫❴❛r❧q❦✌❢❄❢❚s❑❜❣❡✛❬➢❛❝P✫❡✛❜❣❢❄❘❚❦✌❫❚❧❍❛①♣➤❵❍❛①P
❶❑❜❴❙❾❮✹❱r❦➐❢◗❞❲❦④❵❍❙➢❬✧❡✛❶❭❦✫❪❴❱❝❡❲❜②❞❭❦✡❜❚❦✌❧q❡✛❵❍❛❝❙❲❜❹❳➬❦✌❡❲♣❍❛ ❒ ❱r❦❭⑩
❧q❙❲❫◗❵q❦✌♣❹➮Ù❦✡❞➎❧❍❙❑❡❲❢❐❜❚❦✡❵t❮❩❙❭❧❍❶◗♣ ✃ ⑩ ♠ ❘❚❦✐❪❚❱❝❡❲❜❚❜❚❛❝❜❚❞➎♣➤❵q❡✛❵❍❦✐❮❩❙❭❫❚❱❝❢
❝❱ ❛r❶❭❦✡❱❝s✲P✡❙❲❜❴♣❍❛❝♣➤❵❥❙❲❳◆ò❴❙❯❙❯❢❄❱r❦✌♦❲❦✡❱①♣✌✇❭♣q❡❾❳➬❦✱❦✌♦❾❡❲P✗❫❣❡❾❵❍❛❝❙❲❜✧Ú✌❙❲❜❚❦➐♣✡✇❑♣➤❪❣❡❾❰
❵❍❛①❡✛❱❣❢◗❛❝♣➤❵❍❧q❛ ❒ ❫◗❵❍❛❝❙❲❜✧❙❲❳◆❛r❜❚❘❣❡ ❒ ❛r❵q❡❲❜❭❵✎♣✡✇❲❵ts❯❪❣❦➐♣✳❙✛❳◆❛❝❜❚❘❴❡ ❒ ❛➛❵✎❡✛❜❑❵q♣☎➮Ù❦✡❞
❡ ❒ ❱r❦✡❰ ❒ ❙◗❢◗❛r❦➐❢✵❙❲❧✐❜❚❙✛❵ ✃ ⑩ÓÑÒ❘❚❛❝❱r❦➒❵❍❘❚❛①♣②❡✛❪❚❪Ð❦✌❡❲❧q♣❄❡❲❪❚❪❚❧q❙❲❪❚❧q❛①❡❾❵❍❦
❳➬❙❲❧✧➷ ➘ ❪❚❱①❡✛❜❚❜❴❛r❜❚❞➎❵❍❦➐P✎❘❚❜❚❙❭❱r❙❭❞❲sÒ➮Ù❡❲❜❴❢ç❵q❘❚❦☛P✗❱❝❙❭♣❍❦✡❜❚❦➐♣❍♣★❙✛❳✱❵❍❘❚❛①♣
❢◗❙❭❬➢❡❲❛r❜❹❵❍❙✲❞❲❦✌❜❚❦✡❧✎❡✛❱✑❢◗❛①♣❍❡❭♣t❵q❦✡❧➝❬➢❡❲❜❴❡✛❞❭❦✡❬➢❦✡❜❑❵➝❛❝♣➺❦✌♦❑❛①❢◗❦✌❜❭❵ ✃ ❵q❘❚❦
P✗❙❭❜❑❵❍❛❝❜❑❫❴❙❲❫❴♣❹❜❴❡❾❵q❫❚❧❍❦➎❙✛❳✯❵q❘❚❦❐❢◗❙❲❬✧❡✛❛❝❜✥✇☎❛❝❜✽❪❣❡✛❧❍❵❍❛①P✗❫❚❱①❡✛❧❹ò❴❙❯❙❯❢
❢◗❛①♣t❵q❧❍❛ ❒ ❫◗❵q❛r❙❭❜✥✇❴❬✧❡⑨s ❒ ❦✜❢◗❛rú❄P✗❫❚❱r❵☎❵q❙✧❧❍❦✌❪❚❧❍❦➐♣➤❦✌❜❑❵➺❮✹❛r❵❍❘☛P✡❫❚❧❍❧q❦✡❜❑❵
❢◗❙❭❬➢❡❲❛r❜②❬➢❙◗❢◗❦✡❱✥❱①❡✛❜❴❞❲❫❴❡❲❞❲❦✌♣✌⑩
÷➹ ■❯❆◆●✱■❯■ ➹t➪❩➽
Ø❚❙❭❧❃❵❍❘❚❦❄è✱ý✱æø❡✛❪❚❪❴❱r❛①P✡❡✛❵❍❛❝❙❲❜✥✇✩♣➤❵❍❧q❦✡❜❴❞✛❵❍❘❣♣④❱❝❛❝❦✧❛r❜➎❵❍❘❚❦➢❵q❦✌P✎❘❚❜❴❙✛❰
❱❝❙❲❞❲❛①P✡❡❲❱❣❛❝❜◗❳➬❧✎❡❲♣➤❵❍❧q❫❴P➟❵q❫❚❧q❦❲✇❑❵❍❘❚❦✫❬➢❙✛❵q❛r♦❾❡❾❵q❛r❙❭❜❄❳➬❙❭❧❩❵q❘❚❦❃❮❩❙❭❧❍❶✑✇❯❡✛❜❴❢
➮➬❛❝❜Ò❵q❘❚❦➎♥④⑦ ❡✛❵✐❱❝❦✌❡❭♣t❵ ✃ ❵❍❘❚❦❋❡⑨♦❾❡✛❛❝❱❝❡ ❒ ❛r❱❝❛r❵ts➓❙✛❳✯❛❝❜❑❵❍❦✌❧❍❙❭❪❣❦✌❧q❡ ❒ ❱r❦
♣❍❦✡❧q♦❑❛①P✗❦➐♣★❢❚❫❚❦✂❛r❜✵❪❴❡❲❧➤❵✲❵❍❙❋❵❍❘❴❦✂♣➤❵q❡❲❜❴❢❚❡❲❧q❢ç❵❍❦✌P✎❘❴❜❚❙❲❱❝❙❲❞❭s❋❪❴❱❝❡✛❵➤❰
❳➬❙❲❧q❬ ➮Ù♥ ♠ æ❋❨ ✃ ⑩ ♠ ❘❚❦✯❦✗Õ◗❛①♣t❵q❦✡❜❴P✡❦✯❙✛❳✖♣❍❙✛❳ê❵t❮➺❡✛❧q❦❃❛r❜✂❵q❘❚❦✜❛❝❜❴❢◗❫❴♣➤❰
❵❍❧qs❋❮✹❛➛❵q❘➓➷ ➘ P✎❘❴❡❲❧q❡❭P➟❵q❦✡❧q❛❝♣➤❵q♣❄➮➵❖❴❨✹Û✝Û ♠ ❡✛❜❴❢➓æ➎Û✫ü✩➷ ✃ ❛❝♣✲❡✛❜
❛❝❬✲❪Ð❙❲❧❍❵q❡❲❜❑❵✝❳Ù❡❲P➟❵q❙❲❧➐⑩ ♠ ❘❚❦✧❬✧❡❲❛r❜❐❪❚❧q❙ ❒ ❱❝❦✡❬✧♣✯♣❍❦✡❦✌❬✻❵q❙ ❒ ❦✧❵q❘❚❦
❱①❡❲P✎❶②❙✛❳❥❘❚❛❝❞❲❘➒❱r❦✌♦❲❦✡❱✰❛❝❜◗❳➬❙❲❧q❬✧❡❾❵❍❛❝❙❲❜➭❡ ❒ ❙❭❫◗❵④❧q❙❭❡❭❢✂❜❚❦✗❵t❮➝❙❲❧q❶✂♣t❵✎❡❾❰
❵❍❫❣♣❩➮Ù❮✹❘❚❛❝P✎❘✜❦✌ð❑❫❴❡❾❵q❦✌♣✥❵❍❙☎❵q❘❚❦✱Ï ❮➝❙❲❧q❱❝❢✝♣➤❵q❡✛❵❍❦❲Ï✌❛r❜✜❪❚❱❝❡❲❜❚❜❚❛❝❜❚❞ ✃ ✇✛❡✛❜❴❢
❵❍❘❴❦✝❱❝❡❭P✎❶✧❙✛❳✟❪❚❧❍❦➐P✗❛①♣➤❦✌❱rs❄❢❚❦✗✈❴❜❚❦➐❢②❪❚❱❝❡❲❜②❢❴❡❾❵q❡ ❒ ❡❲♣❍❦✌♣✌⑩✳Ø❚❙❭❧☎Ø✰➱❩æë✇
❡✛❞❑❡✛❛❝❜✥✇❴❬➢❙❲❵❍❛❝♦⑨❡✛❵❍❛❝❙❲❜➭❡❲❜❴❢②❵❍❦➐P✎❘❚❜❚❙❲❱❝❙❲❞❭❛❝P✌❡✛❱✥❛❝❜◗❳➬❧✎❡❲♣➤❵❍❧q❫❴P➟❵q❫❚❧q❦✜❡✛❜❴❢
❛❝❜❚❜❚❙❾♦❾❡❾❵q❛r❙❭❜★❛①♣✳❞❲❦✌❜❚❦✡❧✎❡✛❱❝❱❝s✝❘❚❛r❞❭❘✥⑩ ♠ ❘❚❦✹❬✧❡✛❛❝❜✲❡❲❧❍❦➐❡❲♣✟❙❲❳✑P✡❙❲❜❴P✡❦✡❧q❜
❡✛❧q❦✝❮✹❛r❵❍❘❚❛❝❜☛❵❍❘❴❦✯❶❯❜❚❙❾❮✹❱r❦➐❢◗❞❲❦✝❦✌❜❚❞❲❛❝❜❚❦✌❦✡❧q❛r❜❚❞❄❡❭♣➤❪Ð❦✌P✗❵q♣✌✇❴❪❴❡❲❧➤❵q❛❝P✡❫◗❰
❱①❡✛❧q❱rs④❪❚❱❝❡❲❜❃❧❍❦➐❡❲♣❍❙❲❜❚❛❝❜❚❞✹❡✛❜❴❢❃❧q❦✡❪❴❧❍❦➐♣➤❦✌❜❭❵✎❡❾❵q❛r❙❭❜✫❡❭♣➤❪Ð❦✌P✗❵q♣✌⑩ ➘ ❜ ❒ ❙❲❵❍❘
❡✛❧q❦✌❡❭♣Ð❵❍❘❚❦✌❜✥✇⑨❛➛❵✰❮❩❙❭❫❚❱①❢✫♣❍❦✡❦✌❬✽❵q❘❴❡❾❵✩❵❍❘❚❦❥❡❲❪❚❪❚❱❝❛❝P✌❡❾❵q❛r❙❭❜❴♣✥❡✛❧q❦✟❳➬❦✌❡❭♣➤❛r❰
❒ ❱❝❦❲✇ ❒ ❫◗❵✖❬➢❙❲❧q❦➺❮❩❙❭❧❍❶✯❛①♣✟❧q❦✌ð❑❫❚❛❝❧❍❦➐❢✯❵q❙✯ð❭❫❣❡✛❜❑❵❍❛r❳➬s✝❵❍❘❴❦☎❧❍❦➐♣➤❙❭❫❚❧✎P✗❦✌♣
❧q❦✌ð❑❫❚❛❝❧❍❦➐❢❄❵q❙➢P✡❙❲❬➢❪❚❱❝❦✗❵q❦❃❵❍❘❚❦✝❶❯❜❚❙❾❮✹❱❝❦✌❢◗❞❭❦④❦✌❜❚❞❲❛❝❜❚❦✌❦✡❧q❛r❜❚❞★❵q❡❭♣➤❶✑⑩
➷☎❜❴❙✛❵❍❘❴❦✡❧✹❡✛❪❚❪❴❱r❛①P✡❡✛❵❍❛❝❙❲❜✐❡❲❧❍❦➐❡✯❮➝❦④❘❣❡⑨♦❲❦✫❛r❜❯♦❲❦➐♣t❵q❛r❞❑❡❾❵q❦✌❢✧❧q❦✌♣❍❫❚❱r❵❍❦✌❢
❛❝❜➎❡❄❧q❦✡❬✧❡❲❧❍❶❾❡ ❒ ❱rs✂❢◗❛ Ð❦✌❧❍❦✌❜❭❵✫❧❍❦➐♣➤❫❴❱➛❵✫❛r❜➒❵❍❘❚❦★❳➬❦✌❡❭♣➤❛ ❒ ❛❝❱r❛r❵ts☛P✗❧q❛➛❵q❦✗❰
❧q❛❝❡❴✇❩❱❝❦✌❡❭❢◗❛r❜❴❞➎❵q❙ë❫❴♣❄❜❴❙✛❵❄❪❚❫❚❧✎♣❍❫❚❛r❜❴❞➎❵q❘❚❦➒❡❲❪❚❪❚❱❝❛❝P✌❡❾❵❍❛❝❙❲❜å❙✛❳✝➷ ➘
❪❚❱①❡✛❜❚❜❴❛r❜❚❞❣⑩ ♠ ❘❚❛❝♣➺❛①♣✹❡✛❜②❘❚❛①♣t❵q❙❲❧q❛❝P✌❡✛❱✑❦✗Õ❚❡❲❬✲❪❴❱r❦✧➮➬❛❝❜✐❵❍❘❴❡✛❵☎❛➛❵✹❬✧❡⑨s
❜❚❙❲❵❉♣t❵q❛r❱❝❱❑❘❚❙❲❱①❢❃❵❍❙◗❢❚❡⑨s④❞❲❛❝♦❲❦✳❵q❘❚❦❩P✎❘❣❡✛❜❚❞❭❦✌♣◆❛❝❜✝❵❍❦➐P✎❘❚❜❚❙❭❱r❙❭❞❲s ✃ ❳➬❧❍❙❭❬
❵❍❘❴❦✫❡❲❧❍❦➐❡✯❙✛❳✰➷☎❛❝❧ ♠ ❧✎❡❾ú❄P✫❨❩❙❭❜❭❵q❧❍❙❭❱➂⑩ ➘ ❵➺❛❝♣ ❒ ❡❭♣➤❦➐❢❄❙❲❜❹❙❲❫❚❧➺❦✌❡❲❧❍❱❝s
❮➝❙❲❧q❶②❛❝❜➎❳➬❙❲❧q❬➢❡❲❱r❛①♣q❡❾❵❍❛❝❙❲❜➒❙✛❳❩➷ ♠ ❨Ô♣➤❦✌❪❴❡✛❧✎❡❾❵q❛r❙❭❜➒P✗❧q❛➛❵q❦✡❧q❛❝❡❋➮Ùæ➒P➟❰
❨❩❱❝❫❴♣➤❶❭❦✡s
⑤➐ô❲✂ô ✁ ✃ ⑩ ♠ ❘❴❦❩❡✛❪❴❪❚❱r❛①P✡❡✛❵❍❛❝❙❲❜✜❡❲❧❍❦➐❡➺❛❝♣✩❵❍❙④❪❚❧q❙❯❢❚❫❴P✗❦
❡★❪❚❱❝❡❲❜❚❜❚❛❝❜❚Ü✗❞★Ý✰Þ❾❡❲ßr❛❝à ❢❄❳➬❙❲❧➝❘❚❦✡❱❝❪❚❛❝❜❚❞➢P✗❙❲❜❚ò❴❛❝P✗❵➝❧q❦✌♣❍❙❲❱❝❫◗❵q❛r❙❭❜❄❙✛❳❉❡❲❛r❧✎P✗❧✎❡❾❳ê❵
❢◗❫❚❧q❛❝❜❚❞ë❦✡❜◗❰➲❧q❙❲❫◗❵q❦✂P✡❙❲❜❑❵❍❧q❙❲❱➺❙❾♦❭❦✡❧✧ý☎❙❭❧➤❵q❘✵➷➝❵q❱❝❡❲❜❑❵❍❛①P☛❡✛❛❝❧q♣❍❪❴❡❲P✡❦❲⑩
♠ ❘❯❫❴♣②❵❍❘❴❦ë❪❚❱❝❡❲❜❚❜❚❦✌❧☛❮❩❙❭❫❚❱❝❢➴❜❚❦✌❦✌❢✽❵❍❙✵❵✎❡✛❶❲❦❐❦✡Õ❯❛①♣➤❵❍❛❝❜❚❞✵❧❍❙❭❫◗❵❍❦
❪❚❱①❡✛❜❴♣✌✇✖❡❲❢❾ät❫❴♣t❵✲❵❍❘❴❦✡❬ ❵❍❙❋P✗❱❝❦✌❡❲❧★❡✛❜❯së❡✛❛❝❧✎♣➤❪❴❡❭P✗❦❹P✡❙❲❜◗ò❣❛❝P✗❵✜❵❍❘❣❡❾❵
❘❴❡❭❢ ❒ ❦✌❦✡❜➒❢◗❦✡❵❍❦➐P➟❵❍❦➐❢ ❒ s☛❡❄P✗❙❭❜◗ò❴❛①P➟❵✱❪❚❧q❙ ❒ ❦❲✇✑❡✛❜❣❢✂❙❲❫◗❵q❪❚❫◗❵✱❵q❘❚❦
❜❚❦✌❮✕❪❴❱❝❡❲❜❴♣✝❵q❙➎❡✛❜ç❡❲❛r❧✯❵❍❧✎❡❾ú❄P❄P✗❙❭❜❑❵❍❧q❙❲❱✖❙❲ú✧P✡❦✡❧➐⑩ ➘ ❜ë❵❍❘❚❛①♣✜❢❚❙✛❰
❬✧❡✛❛❝❜②❵❍❘❚❦✯❱❝❦✡♦❭❦✡❱◆❙❲❳✳P✡❫❚❧q❧❍❦✌❜❭❵➺❵q❦✌P✎❘❚❜❚❙❭❱r❙❭❞❲s❄❮➺❡❲♣➺❘❴❛r❞❭❘✥✇❴❢❚❡✛❵q❡➢❙❲❜
❡✛❛❝❧✎P✗❧✎❡❾❳ê❵✱❪Ð❙❭♣❍❛➛❵q❛r❙❭❜❴♣✱❡❲❜❴❢☛❪❚❱①❡✛❜❣♣✱❮➺❡❲♣✱♦❲❦✡❧qs②❞❲❙❯❙◗❢◆✇Ð❡❲❜❴❢☛☎❙ ➢✄ ❛❝❜❚❦
❦✡♦❾❡❲❱r❫❴❡✛❵❍❛❝❙❲❜å❮➝❡❭♣✲❪❣❙❑♣❍♣❍❛ ❒ ❱r❦❭⑩÷➷✱❢❚❢◗❛r❵❍❛❝❙❲❜❣❡✛❱❝❱rs❭✇❥❵q❘❚❦☛❶❯❜❚❙❾❮✹❱r❦➐❢◗❞❲❦
❦✡❜❴❞❲❛❝❜❚❦✡❦✌❧❍❛❝❜❚❞✐❡❭♣➤❪Ð❦✌P✗❵q♣✫❮➝❦✡❧q❦✜❞❲❙❯❙◗❢◆û✱❵❍❘❚❦✌❧❍❦✲❮❩❦✌❧❍❦★❧❍❫❴❱r❦ ❒ ❙❯❙❭❶❯♣✌✇
❳➬❙❲❧q❬✧❡✛❱❝❛❝♣❍❦✌❢➎❪❚❱①❡✛❜❴♣✌✇✩❪❴❧❍❙❭❪❣❙❑♣➤❛r❵❍❛❝❙❲❜❴❡❲❱✳♣t❵✎❡❾❵❍❦✧❢◗❦➐♣❍P✡❧❍❛❝❪◗❵q❛r❙❭❜❴♣✡✇✰❡✛❜❴❢
❬✜❫❣P✎❘✮❙❲❳✜❵❍❘❴❦çP✗❙❲❜❑❵q❦✗Õ❯❵☛❶❑❜❴❙❾❮✹❱r❦➐❢◗❞❲❦❋❘❴❡❲❢ ❒ ❦✡❦✡❜➴❳➬❙❭❧❍❬✧❡✛❱❝❛①♣➤❦➐❢
❵❍❘❴❧❍❙❭❫❚❞❲❘❁❙❲❫❚❧✲❪❚❧q❦✡♦❯❛r❙❭❫❴♣★❮❩❙❭❧❍❶❐❙❲❜å❡✛❛❝❧qP✡❧q❡✛❳ê❵➢♣❍❦✡❪❣❡✛❧✎❡❾❵❍❛❝❙❲❜çP✡❧❍❛r❰
❵❍❦✌❧❍❛①❡❚⑩ ♠ ❘❚❦✫P✗❧q❛➛❵q❦✡❧q❛❝❡✫❵q❘❴❡❾❵➺♣❍P✡❙❲❧q❦✌❢✲❱r❙❾❮✵❮➝❦✡❧q❦✹❬➢❙✛❵❍❛❝♦❾❡❾❵q❛r❙❭❜❹❡✛❜❴❢
❘❯❫❚❬✧❡✛❜✐❳Ù❡❲P✗❵❍❙❲❧✎♣✌û❉❛❝❜❯♦❲❦✌♣➤❵❍❛❝❞❭❡✛❵❍❛❝❙❲❜❹♣❍❘❚❙❾❮➝❦✌❢❄❵❍❘❴❡✛❵✹❡❲❱➛❵q❘❚❙❲❫❴❞❲❘✐❡❲❫◗❰
❵❍❙❭❬✧❡❾❵❍❦➐❢★❡❲❛❝❢❴♣✟❮❩❦✌❧❍❦➺❢◗❦✌♣❍❛❝❧❍❦➐❢ ❒ s✜♣❍❙❲❬➢❦➺♣t❵✎❡❾❵❍❦➺❘❚❙❭❱❝❢❚❦✡❧✎♣✡✇❾❵q❘❚❦➺❦✡❜
❧q❙❲❫◗❵q❦➢❡❲❛r❧✫❵q❧q❡✛ú✧P➢P✡❙❲❜❑❵❍❧q❙❲❱✟❙✛ú❄P✡❦✡❧✎♣④❮➝❦✡❧q❦➢ð❭❫❴❛➛❵q❦✲❘❣❡✛❪❚❪❯s➭❮✹❛r❵❍❘
❵❍❘❴❦✡❛❝❧✯P✗❫❚❧q❧❍❦✌❜❑❵✫❬➢❦✗❵q❘❚❙◗❢◆✇✩❮✹❘❴❛❝P✎❘❋❮➝❡❭♣❃P✡❡✛❪❣❡ ❒ ❱❝❦➢❙✛❳➝❢❚❦✡❱❝❛r♦❭❦✡❧q❛r❜❚❞
❵❍❘❴❦✝❪❚❱❝❡❲❜❴♣➺❮✹❛r❵❍❘❚❙❭❫◗❵✹❵❍❘❚❦✝❜❴❦✡❦✌❢✐❳➬❙❭❧✹❦✗Õ❯❵❍❧✎❡★❵❍❦✌P✎❘❴❜❚❙❲❱❝❙❲❞❭s❲⑩
☎
✔☞
❴❆ ❏ ➽✱➪✙✕ ❊t▲ ➶ ✖✖▲✙✘✕▲ ➽❩➾ ■
➘ ❮❩❙❭❫❚❱①❢➭❱❝❛r❶❭❦✜❵q❙❹❵q❘❴❡✛❜❚❶☛❪Ð❦✡❧✎♣❍❙❲❜❚❜❚❦✌❱✰❳➬❧❍❙❭❬ ❵q❘❚❦✲❳➬❙❲❱❝❱r❙❾❮✹❛❝❜❚❞✐❙❲❧❍❰
❞❭❡✛❜❴❛❝♣q❡❾❵q❛r❙❭❜❴♣☎❳➬❙❭❧✝❵❍❘❴❦✡❛❝❧✝❛r❜❚❪❴❫◗❵✌û✝❵q❦✡❱❝❦✡❜❑❵✌✇✰➷✩ó✫ü✰❨ ü◆❵✎❢◆✇✰➷✱❦ ✫❛①♣✡✇
♠ ❖◗❤❥♥✜✇ ♠ è✱ü✖✇✳ý✱➷✫❖◗➷✯✇✳Ñ❐❡✛❱❝❱r❛❝❜❚❞❲❳➬❙❲❧✎❢❁❖❯❙❲❳ê❵t❮➝❡❲❧❍❦❭✇✟❡❲❜❴❢➓⑦❃❦✌❜❑❵
❡✛❜❴❢②✉✱❡❲❬✲❪❣♣➤❘❚❛❝❧q❦✯❨❩❙❭❫❚❜❑❵ts②❨❩❙❲❫❴❜❴P✗❛❝❱❝♣✌⑩
☞
❉✍
➚ ▲ ➽ ❆✑▲✩■
ù➝❛r❫❚❜❣❢◗❙❴✇ç❖✑⑩ ✮➷✱s❑❱❝❦✗❵❍❵✌✇çè✝⑩ ✽ù➝❦✡❦✡❵❍Ú❭✇❐æç⑩ ➴ù❩❙❭❧❍❧✎❡❾ät❙❣✇➒③★⑩
❨❩❦✌♣➤❵q❡❴✇ ➷✜⑩ ✫❧q❡❲❜❭❵➐✇ ♠ ⑩ ❇æ➭P⑨❨❩❱r❫❣♣➤❶❭❦✡s❲✇ ♠ ⑩ ❇æ☛❛❝❱❝❡❲❜❚❛➂✇
➷✯⑩ ✼❡❲❜❴❢ ó✳❦✡❧❍❳Ù❡✛❛❝❱r❱❝❛r❦❭✇ ✲⑩ á❲â❲â ⑥❴⑩ ➱❩ü✩➷✱ý④❤ ♠ ❵q❦✌P✎❘◗❰
❜❚❙❲❱❝❙❲❞❭❛❝P✌❡✛❱÷❧q❙❭❡❭❢◗❬✧❡✛❪ø❙❲❜✙➷ ➘ ❪❚❱①❡✛❜❚❜❴❛r❜❚❞ ❡✛❜❴❢ ♣qP✎❘❚❦➐❢◗❫❚❱r❰
❛r❜❴❞❴⑩✻❤✖❱❝❦✌P✗❵❍❧q❙❲❜❚❛①P✡❡❲❱r❱❝s÷❡⑨♦❾❡✛❱❝❛❝❡ ❒ ❱❝❦❋❡✛❵➭❘❭❵❍❵❍❪✥û➈ñ❲ñ❾❮✹❮✹❮✝⑩ ❪❴❱❝❡❲❜❚❦✗❵❍❰
❜❚❙❯❦❲⑩ ❙❲❧q❞❯ñ✛♣➤❦✌❧❍♦❯❛①P✗❦⑨ñ✛è✹❦✌♣❍❙❲❫❴❧qP✡❦✌♣➟ñ❾è✹❙❑❡✛❢◗❬✧❡✛❪Ðñ✛è✹❙❲❡❭❢◗❬✧❡✛❪ á ⑩ ❪✑❢❯❳t⑩
✑✵▲✝✆✎▲
❯✼
✔✼
❯✼
98
❯✼
✍
✔✼
✖✍
✔✼
✔✼
❯✼
❨❩❙❭❧➤❵q❦✌♣✌✇✰❨④⑩✁✰⑩❯✼✖❖❯❙❲❱❝❜❚❙❭❜✥✇✳❨④⑩❯✼✳❡✛❜❴❢❐æ➭❡❲❧➤❵q❜❚❦✡Ú❭✇❉③★⑩✰❖✑⑩ ❴⑩
❥➱ ❱❝❡❲❜❚❜❚❛❝❜❚❞❋❙❲❪Ð❦✡❧✎❡❾❵❍❛❝❙❲❜✩û➭➷☎❜✵❦✗Õ❯❵❍❦✌❜❴♣❍❛r❙❭❜❁❙❲❳❃❡❋❞❲❦✌❙❲❞❭❧q❡❲❪❚á✛❘❚â❲❛①â P✡❡✛❱
❛❝❜◗❳➬❙❭❧❍❬✧❡❾❵q❛r❙❭❜➒♣❍s❯♣➤❵❍❦✌❬✂⑩✜➱✖❧q❙◗P✗❦✡❦➐❢◗❛❝❜❚❞❭♣④❙✛❳✖❵q❘❚❦✂⑤✌♣➤❵ ➘ ❜❑❵q❱➂⑩✥ü❉➷✹❰
ý④æ➭è ✜Ñ❋❙❲❧q❶❯♣❍❘❚❙❭❪✥✇❑➷✱❜❑❵❍❛❝❞❲❫❚❙❄❨❩❙❲❱❝❦✡❞❭❛r❙✜❢❚❦❃❖◗❡❲❜ ➘ ❱①❢◗❦✗❳➬❙❭❜❴♣❍❙❴✇
Planning in Supply Chain Optimization Problem
N.H. Mohamed Radzi, Maria Fox and Derek Long
Department of Computer and Information Sciences
University of Strathclyde, UK
Abstract
The SCO planning problem is a tightly coupled planning and scheduling problem. We have identified some
important features underlying this problem, including
the coordination between actions, the maintenance of
temporal and numerical constraints, and the optimization
metric. These features have been modeled separately
and tested with the state-of-the-art planners
Crikey, Lpg-td and SgPlan5. However, none of these
planners is able to handle all the features successfully.
This indicates that a new planning technology is required to
solve the SCO planning problem. We intend to adopt
Crikey as the basis of the new technology due to its
capability of solving tightly coupled planning and
scheduling problems.
Introduction
Supply Chain Optimization (SCO) covers decision
making at every level and stage of a system that produces products for a customer. The most important issues include decisions about the quantities of
products to be produced, and scheduling the production and
delivery whilst minimizing the system's utilization of resources
within a certain planning period. All these
decisions require reasoning and planning: understanding the factors that are relevant to the decisions and
evaluating the combinatorics of the problem. This
means the planning process in SCO not only decides which action should be chosen to reach the goal
state based on the logical constraints, but also considers
the consequence of selecting the action for the given optimization function. Due to these features the planning
problems in SCO are different from standard planning problems. The SCO planning domains are richer
in temporal structure than most temporal domains in
standard planning.
Temporal domains were introduced in the third International Planning Competition (IPC3) along with
the temporal planning language PDDL2.1 (Long & Fox
2003; Fox & Long 2003). The durative action was introduced as a new feature of the language. This feature allows actions in a domain to be allocated a duration
specifying the time taken to complete the action
(Weld 1994). Furthermore, the quality of the plan is
also measured by the overall length or duration of the
plan generated. The temporal features in the language
were later extended by the introduction of timed initial literals in PDDL2.2 (Edelkamp & Hoffmann 2003).
This is the language used in IPC4. Timed initial literals provide a syntactically simple way of expressing the
exogenous events that are both deterministic and unconditional (Cresswell & Coddington 2003). Another
way to express exogenous events was then introduced in
PDDL3.0 by using hard and soft constraints (Gerevini
& Long 2006). Hard and soft constraints express that
certain facts must be, or are preferred to be, true at a
certain time point as benchmarked in IPC5 (Dimopoulos et al. 2006).
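To make these constructs concrete, the three language features just described might be sketched in PDDL as follows. The action and predicate names here are our own illustrative inventions, not taken from any competition domain:

```pddl
;; PDDL2.1 durative action: :duration specifies the time
;; the action takes to complete.
(:durative-action produce
  :parameters (?i - item)
  :duration (= ?duration 7)
  :condition (and (at start (raw-material ?i))
                  (over all (machine-available)))
  :effect (at end (produced ?i)))

;; PDDL2.2 timed initial literal: a deterministic, unconditional
;; exogenous event, here a delivery window closing at time 8.
(:init (window-open)
       (at 8 (not (window-open))))

;; PDDL3.0 hard constraint: the goal fact must hold within
;; 8 time units of the start of the plan.
(:constraints (within 8 (delivered order1)))
```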
Temporal domains in IPC3 require certain facts to be
true at the end of the planning period. Although domains with deadlines or exogenous events are modeled
in IPC4 and IPC5, none of these domains requires actions
to overlap in time. In contrast, SCO domains require some
collections of facts to be true not only in a particular
final state but also throughout the trajectory. For example, some quantity of a product may be required to
be in production throughout the planning period. In addition, SCO problems also require actions to
be executed concurrently during the planning process.
For instance, there are exogenous events such as order
deadlines that have to be met. We have to maintain
these deadlines and concurrently execute other production activities. Moreover, there might be some threshold values that have to be maintained over the planning
period.
As well as temporal structure, SCO domains are also
rich in numerical structure. Domains with numerical structure were also introduced in IPC3, but
most of the competition domains in the IPCs deal
mainly with the consumption of resources and cost. In
SCO problems, numerical facts and constraints are
used to model more than the consumption of resources and
cost. They are also used
to model multiple actions: actions that have equivalent chances of being selected, where the difference between them lies in the cost associated with performing them. In sum, SCO problems are very complex
planning problems where temporal and numerical con-
straints enforced over time must be met as well as the
logical constraints.
From another point of view, the SCO planning problem
differs from standard planning problems in
the way plans are constructed. Standard or classical planning concentrates on finding
which actions should be carried out in a
plan by reasoning about the consequences of acting, in order to choose among a set of possible courses of action
(Dean & Kambhampati 1997). The number of actions
required in a plan is usually unknown. The temporal
planning problem is basically a combination of classical
planning and scheduling. In a pure scheduling process, the actions are usually known and the choice of
actions is limited compared with planning (Smith, Frank,
& Jonsson 2000). The scheduling process concentrates
on deciding when, and with what resources, to carry
out the actions so as to satisfy various constraints on the order in which they must be performed (Dean
& Kambhampati 1997). Therefore, in temporal planning, the process of constructing a plan combines
decisions about which actions should be applied with when
they should be applied and with what resources (Halsey
2004).
The SCO problem, however, is an example of a combinatorial problem that has strong planning, scheduling and constraint reasoning components. Besides what
and when choices, it also contains choices about how to
act. One way to introduce a how choice is to differentiate actions that achieve the same effect by numerical
values such as duration or resource consumption. The
what choices concern what resources are required for an
action to be applied, and the when choices concern how
the action should be scheduled into the rest of the plan.
A very good example of the problem is the following:
a manufacturer receives several orders from customers
that consist of producing various quantities of several
different items. These orders should be delivered within
specified deadlines. The manufacturer has to schedule
the production of each item. Due to the capacity limitations of the producer, the manufacturer has to decide
which items should be produced using his own facilities and which items should be produced using other
production options that are available. In any case,
the deadlines have to be met and the overall production
cost should be minimized. In this case, the solution is
not as simple as performing a sequence of actions but
could involve executing many actions concurrently.
We have discovered that, although there are a number of planners in the literature that are capable of handling the individual features of PDDL2.1 and PDDL3,
there are no planners currently available that can reliably solve non-trivial problems.
The remainder of this paper is structured as follows.
First we present a description of a simple domain within
the class of problems. We have encoded the domain and
applied several state-of-the-art planners to it. The outcomes of the experiment are discussed in the following
section but, in brief, the best performing planners in
IPC4 and IPC5 are unable to solve the problems we
set. Clearly, SCO problems encompass a huge variety
and would in general be beyond the reach of any automated planner. Therefore, this discussion is followed by
the definition of a subclass of problems that we intend
to focus on in our work. We will develop a planner (by
enhancing an existing planning system) that is capable
of solving this subclass of problems. Later, we briefly
describe our future work including the planner that we
intend to enhance.
Domain Definition
A simple example of a production planning problem in
the supply chain is illustrated in Figure 1. The process
starts with receiving the customers' orders. Each order
has a different combination of products and different delivery deadlines. The process then continues
by selecting the production type for each product. In
our example, each production type has a different processing time and cost: normal-time, over-time and outsource. The outsource action can furthermore be performed by one of several suppliers, each associated with a different lead time and cost. The domain
demonstrates the properties discussed in the above section. The choices of production action represent the
multiple choice of actions for achieving the same task.
These actions can be executed simultaneously as well
as in parallel with other activities. The probability of an
action being selected is dependent on the objective function. Any plan produced by the planner should minimize the overall cost and time taken to produce all items
as well as meeting the specified deadlines. For example,
item1 can be produced either by the normal-time action
or the outsource action but, choosing the normal-time
action might cause a delay in the product delivery so
that it is better to choose the outsource action. In an
efficient plan we might be producing item2 while we are
also producing item1. This domain has been encoded
and presented to some state-of-the-art planners.
(Figure omitted: a flow from receiving an order, through selecting a production type (normal-time, overtime or outsource), to the finished product.)

Figure 1: A simple production process
State-of-the-Art Planners
We chose three different types of temporal planners for
our experiments. All of these planners are claimed to be
able to handle the temporal features of PDDL2.1, as well
as features for expressing deadlines, such as time windows
and hard constraints. The planners are as follows:
SGPlan5 is a temporal planner that received a prize
for overall best performance in IPC5. The planner
works with PDDL3.0, which features timed initial literals, hard constraints and preferences. It generates
a plan by partitioning the planning problem into subproblems and finds a feasible plan for each sub-goal.
The multi-value domain formulation (MDF) is used as
a heuristic technique in the planner for resolving goal
preferences and trajectory and temporal constraints
(Chih-Wei et al. 2006).
The LPG-td planner is an extension of LPG
(Gerevini & Serina 2002) that can handle features in
PDDL2.1 and most features in PDDL2.2, including
timed initial literals and derived predicates. The timed
initial literals represent facts that become true or false
at certain time points, independently of the actions in
the plan. This feature can be used to model a range
of temporal constraints including deadlines. LPG-td is
an incremental planner that can generate plans in domains involving maximization or minimization of complex plan metrics. The incremental process improves
the plan by using the first generated plan to initialize
a new search for a second plan of better quality and so
on. It can be stopped at any time to give the best plan
computed so far (Gerevini, Saetti, & Serina 2004).
CRIKEY is a temporal planner that solves a tightly
coupled type of planning and scheduling problem. This
planner supports PDDL2.1. It implements the
envelope and content concept in order to handle the
communication between planning and scheduling.
Content actions are executed within envelope actions.
Therefore the minimum length of time for the content
actions must be less than or equal to the maximum total
length of time for the envelope actions (Halsey, Long,
& Fox 2004). The envelope and content concepts were
introduced to allow Crikey to solve problems in which
actions must be executed in parallel in order to meet
temporal and resource constraints.
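To give a flavour of the envelope and content idea, a deadline can be modelled as a long envelope action inside which the content actions must fit. The sketch below is our own hypothetical PDDL2.1 encoding, not Crikey's internal representation:

```pddl
;; Envelope action: its duration plays the role of the deadline.
(:durative-action meet-deadline
  :parameters (?o - order)
  :duration (= ?duration 8)
  :condition (at start (order-received ?o))
  :effect (and (at start (envelope-open ?o))
               (at end (not (envelope-open ?o)))
               (at end (delivered ?o))))

;; Content action: the over-all condition forces it to start
;; after the envelope starts and to end before the envelope
;; ends, so its duration cannot exceed that of the envelope.
(:durative-action outsource
  :parameters (?o - order)
  :duration (= ?duration 5)
  :condition (over all (envelope-open ?o))
  :effect (at end (produced ?o)))
```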
Experimental Results
The aim of the experiments is to investigate the capability of each planner to cope with the following features:
(1) temporal constraints that require facts to be
maintained over time; (2) optimization metrics including temporal and numerical optimization; (3) coordination and concurrent actions.
The domain described in the previous section
was encoded using PDDL. There were six actions
modelled in the domain, namely STACK ORDER,
CHOOSE BRAND, OVERTIME, NORMAL TIME,
OUTSOURCE and SHIP ON TIME. Since different
planners can work with different versions of PDDL,
we have exploited PDDL2.1 features to represent domains presented to Crikey, PDDL2.2 for domains presented to Lpg-td and PDDL3.0 for domains presented
to SgPlan5 . We have had to use different syntax to
express the deadlines: timed initial literals for Lpg-td
and hard constraints for SgPlan5 . No specific syntax
is given in PDDL2.1 for expressing deadlines, but it
102
is possible to encode them using envelope actions and
clips (Fox, Long, & Halsey 2004; Cresswell & Coddington 2003). In the first experiment we have encoded only
a single deadline. The encoded problem has been presented three times to each planner, each time with a
different set-up. The problem instances are described
in Table 1. For example, in the first instance the durations of the NORMAL TIME and OVERTIME actions are
7 and 8 time units respectively. The OUTSOURCE action can be performed by either supplier1 or
supplier2, with durations of 5 and 6 time units respectively. The planners are expected to perform one
of these actions in order to meet the deadlines.
The duration of every other action defined in the domain is
1 time unit. Table 2 describes the deadlines and the
plan duration given by each planner (if any) together
with the action selected in the plan.
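For illustration, the two deadline encodings might look as follows. The predicate and object names are hypothetical reconstructions, not the exact contents of our domain files:

```pddl
;; PDDL2.2 (for Lpg-td): a timed initial literal closes the
;; delivery window at the deadline.
(:init (can-deliver order1)
       (at 8.05 (not (can-deliver order1))))

;; PDDL3.0 (for SgPlan5): a hard trajectory constraint
;; requires the order to be delivered within the deadline.
(:constraints (within 8.05 (delivered order1)))
```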
prob   normal-time   overtime   supplier1   supplier2
1      7             8          5           6
2      7             5          7           6
3      5              8          7           9

Table 1: problem instances set-up
prob   d      Crikey               SgPlan              Lpg-td
1      8.05   7.05 (supplier1)     8.004 (supplier1)   8.000 (supplier1)
2      8.05   8.05 (supplier2)     no solution         8.00 (overtime)
3      8.05   7.05 (normal-time)   no solution         8.00 (normal-time)

d: deadline

Table 2: maintaining the time constraint, by planner
As we can see in Table 2, Crikey and Lpg-td planners perform very well in maintaining the temporal constraint. Both planners managed to obtain a plan with
the most appropriate actions so that the completion
time is within the deadline. However, SgPlan5 only generated a plan for the first instance; there are no solutions
for the second and third instances.
Next, a second set of experiments was carried out to see
whether these planners can reason about the optimization metric, for example to minimize the makespan. For
this purpose, the deadlines were excluded from the domains and replaced with the minimization metric
total-time. The same encoded problems were applied to all planners; the problem
instances remained as described in Table 1.
Table 3 shows the completion time of the plan generated by each planner. We can see from this table that
Lpg-td was better at minimizing the makespan than both SgPlan5 and Crikey. As Table 3 shows, SgPlan5 always chooses the same action no
matter what changes are made to the durations of the actions
in the domain. This is why, in experiment 1, SgPlan5 was unable
to produce any plan for instance 2 and instance 3:
the duration set for that action violated the deadline. Crikey performs similarly to
SgPlan5 in this experiment. As depicted in Table 3, the
NORMAL TIME action is chosen regardless of the duration set for the action in each instance. The result
from experiment 1 also indicates that Crikey will only
maintain the temporal constraint by finding a feasible
solution, not an optimal one. Refer to Table 2
for instance 2: although the plan duration given by Crikey
meets the deadline, it is slightly longer than the plan duration generated by Lpg-td.
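Concretely, switching between the two optimization experiments only changes the plan metric declared in the problem file. Here total-time is the built-in PDDL2.1 makespan fluent; total-cost stands for a numeric fluent updated by the actions, whose name is our own choice:

```pddl
;; Experiment 2: minimize the makespan.
(:metric minimize (total-time))

;; Experiment 3: minimize the accumulated action cost.
(:metric minimize (total-cost))
```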
prob   Crikey                SgPlan               Lpg-td
1      9.004 (normal-time)   8.004 (supplier1)    8.000 (supplier1)
2      9.004 (normal-time)   10.004 (supplier1)   8.000 (overtime)
3      7.004 (normal-time)   10.004 (supplier1)   8.000 (normal-time)

Table 3: optimization metric: minimizing makespan
Besides minimizing the makespan, experiments were also
carried out to investigate whether these planners can reason about
numerical values by producing a plan that minimizes the
total cost incurred by action selection. The problems used in experiment 2 were applied
in this experiment, but the optimization metric was
changed to minimize total-cost, and the same duration
was set for every action. The number of instances was
also increased to ten; each instance was set up
with different costs. For SgPlan5 and Lpg-td, the metric value of the plan is
given in the generated plan, whereas for Crikey the metric value can
be identified through the actions selected in the plan.
Table 4 shows the metric values, or total costs, obtained
from the plans generated by each planner. Lpg-td produced plans that minimized the total cost for every instance. In some instances SgPlan5 or Crikey
was also able to produce an optimized plan, but only because the action that planner selects by default happened to have the smallest cost among the actions in the problem. This
is definitely not due to a capability of the planner to reason about numerical values. The domains and
problems involved in the experiment can be accessed at
http://www.cis.strath.ac.uk/∼nor.
As mentioned in the previous section, SCO contains choices about how to act. These choices
affect the quality of the solution as well as the satisfiability of the schedule. We cannot simply perform action selection first and later schedule the actions according to their temporal and numerical information. This
means the planning and scheduling tasks are tightly
coupled and cannot be performed separately. This situation requires coordination between actions and execution of the concurrent actions.

problem   Crikey   SgPlan   Lpg-td
1         9.00     11.00    3.20
2         7.00     4.00     3.20
3         9.00     11.00    4.20
4         12.00    11.00    7.20
5         12.00    5.00     5.20
6         4.00     5.00     4.20
7         7.00     12.00    7.20
8         20.00    15.00    13.20
9         20.00    9.00     9.20
10        4.00     9.00     4.20

Table 4: optimization metric: minimizing total-cost

Coordination is where
the actions can happen together and interact with one another. Meanwhile, concurrency means that more than one
action happens simultaneously, but the actions must not interfere with each other (Halsey 2004). For example, see
Figure 2. There are three deadlines, denoted by x1,
x2 and x3. The deadlines x2 and x3 happen at the same time
point. These deadlines x1, x2 and x3 require actions
(a1, a2, a3), (a1, a4, a5) and (a1, a4, a6) respectively. Either some parts or all parts of the actions' durations
are overlapped in time or executed in parallel. The actions a2 and a3 must interact with action a1. These
actions must execute during the lifetime of action a1.
But there is no interaction between a2 and a3 . The
actions are required to execute simultaneously in order
to achieve the deadline. The actions a4, a5 and a6 are
also examples of coordination, where actions a5 and a6
are executed during some portion of the lifetime of a4. Furthermore, the a5 and a6 actions demonstrate the choice
of how to act. In achieving deadlines x2 and x3 , either a5 or a6 has to be executed following a4 . Another
clear example of a domain in which some actions must
happen in parallel, which has been investigated in the
previous literature, is the Match Domain (Halsey, Long,
& Fox 2004). However, choices about how to act are not
demonstrated in this domain.
(Figure omitted: a timeline over the planning duration marking the deadlines x1, x2 and x3 and the overlapping executions of actions a1 to a6.)

Figure 2: concurrent actions
Due to the importance of the above features in the
SCO domain, we also investigated the capability of these
planners to support them. The coordination and concurrency features were indirectly exercised
in the temporal constraint problem to which Crikey was
applied in experiment 1. In Crikey, actions are either
wrappers or contents with wrappers containing contents
and contents being completely contained within wrappers. Some content actions are also wrappers for other
actions. In experiment 1, we encoded deadlines
as the wrapper actions. The other six actions described
at the beginning of this section were the content actions. These content actions have to start after the
wrapper start and end before the wrapper end. In
other words, the wrapper and the content actions are
performed in parallel. The wrapper and the content actions can be illustrated as actions (a1 ,a2 ,a3 ) in Figure
2. Lpg-td and SgPlan5 were then applied to the same
domain, since both planners are capable of handling all
PDDL2.1 features. Unfortunately, neither planner can
solve the problem. As shown in Table 5, besides the SCO domain,
two other domains including the match domain were
also tried. The driverlog-shift domain (Halsey 2004) is
an extension of the driverlog domain used in IPC3. In
this extended domain, the driver can only work for a
certain amount of time or in a shift. The shift action
is modelled as an envelope action. Therefore, driving
and walking actions must be fitted into the shift action. SgPlan5 produced a plan for this domain, but
in the plan the walking and driving actions are performed after the shift action has finished; in other words,
they are performed in sequence. SgPlan5 and Lpg-td
are able to perform concurrent actions, provided that
the actions do not interfere with each other. This is
a result of both planners solving the temporal planning problem by first finding a sequential solution
and then rescheduling it using temporal information.
domain            Crikey          SgPlan          Lpg-td
SCO               plan obtained   no solution     no solution
match             plan obtained   no solution     no solution
driverlog-shift   plan obtained   plan obtained   no solution

Table 5: domains with concurrent actions
Moreover, domains that are encoded with coordination or concurrent actions will have plans with a shorter
makespan. Refer to Table 2: although Crikey and
Lpg-td choose the same action, the plan duration generated by each planner is different. The plan
produced by Crikey has a shorter duration than the
plan generated by Lpg-td.
The overall performance of the planners on the criteria outlined above, i.e. the properties underlying the SCO problems, is summarized in Table 6. Crikey is
very good at maintaining constraints and coordinating
tasks but very poor at metric optimization. Nevertheless, for this problem Crikey is still able to produce a feasible plan. Lpg-td, although it has a very
good performance both in maintaining constraints and
optimization, cannot perform coordination of actions.
When this is required no plan can be produced at all.
Although SgPlan5 can handle temporal constraints, as
benchmarked in IPC5, the domains involved do not include choices about how to act. An example arises in
the truck domain, which only encodes what actions should be carried out in order to meet the temporal constraints. SgPlan5 seems unable to reason with
choices about how to act; therefore, for some instances
in the experiments conducted, SgPlan5 did not produce any
plan. Unlike Lpg-td, SgPlan5 is sometimes able to produce a plan for a concurrent domain, but the actions
in the plan are executed in sequence.
planner   time constraint   optimization metric   coordination
Lpg-td    very good         very good             cannot be performed
SgPlan    poor              poor                  cannot be performed
Crikey    very good         poor                  very good

Table 6: overall performance of planners
Subclass of SCO problem
As discussed at the beginning of the paper, SCO is a
hard combinatorial problem that requires reasoning not only about the logical relations between actions but
also about their temporal and numeric relations. Since it is very hard to solve
a problem with all of these features, only a subclass of the
problem will be the focus of this research. The properties of the subclass are identified as follows.
The most important properties are maintaining temporal and numerical constraints. The second feature is
the optimization metric in terms of numerical values.
All these properties require coordination between actions as well as actions performed concurrently
in the generated plan. Since the planning problems have a
strong scheduling element, we will have a selection of
alternative actions (planning) within the larger selection
of actions described in the domain. This situation exhibits the how choices in the domain. Within
the alternative actions, there is also a selection of possible resources, giving rise to a scheduling problem. All
these actions are weighted by numerical values representing their costs. At this stage we are not interested
in optimization in terms of temporal metrics.
Conclusion
This paper discusses the features of SCO planning problems and investigates the performance of state-of-the-art planners on domains with these features. We have
run the experiments on the individual features separately. The planners are expected to handle some of
the features, such as minimizing the total-cost or total-time metric and satisfying the hard constraints.
However, as we have seen, none of the state-of-the-art
planners we tried was able to successfully handle all
the features. Therefore, experiments conducted to date
have identified several improvements in the planning
technology that are required in order to solve the SCO
type of domain.
Future Work
In the near future we will develop a subclass of the SCO
problem that combines all the features together. A
more complex optimization metric will be included in
the problem, since the numerical features considered in
the experiments so far are very simple. As numerical
constraints are identified as one of the properties of the
SCO subclass, they will also be
included in the domain. The domain will be used to test
a variety of planners. We plan to adopt Crikey as the
basis of the new technology that we intend to develop.
Crikey is chosen due to its ability to cleanly manage
the tightly coupled interaction between planning and
scheduling as well as other features such as duration
inequalities and interesting metric optimisation.
References
Chih-Wei; Wah, B.; Ruoyun; and Chen, Y. 2006.
Handling soft constraints and goal preferences in SGPlan. In ICAPS Workshop on Preferences and Soft
Constraints in Planning.
Cresswell, S., and Coddington, A. 2003. Planning
with timed literals and deadlines. In Porteous, J., ed.,
Proceedings of the 22nd Workshop of the UK Planning
and Scheduling Special Interest Group, 22–35. University of Strathclyde. ISSN 1368-5708.
Dean, T., and Kambhampati, S. 1997. Planning and
scheduling. In The Computer Science and Engineering
Handbook 1997. CRC Press. 614–636.
Dimopoulos, Y.; Gerevini, A.; Haslum, P.; and Saetti,
A. 2006. The benchmark domains of the deterministic part of IPC-5. In Booklet of the 2006 Planning
Competition, ICAPS’06.
Edelkamp, S., and Hoffmann, J. 2003. PDDL2.2: The
language for the classical part of the 4th international
planning competition.
Fox, M., and Long, D. 2003. An extension of PDDL
for expressing temporal planning domains. Journal of
Artificial Intelligence Research 20:61–124.
Fox, M.; Long, D.; and Halsey, K. 2004. An investigation into the expressive power of PDDL2.1. In
Proceedings of ECAI’04.
Gerevini, A., and Long, D. 2006. Plan constraints
and preferences in PDDL3. In ICAPS Workshop on
Preferences and Soft Constraints in Planning. ICAPS.
Gerevini, A., and Serina, I. 2002. LPG: A planner
based on local search for planning graphs. In Proceedings of the Sixth International Conference on Artificial
Intelligence Planning and Scheduling.
Gerevini, A.; Saetti, A.; and Serina, I. 2004. Planning
with numerical expressions in LPG. In Proceedings
of the 16th European Conference on Artificial Intelligence (ECAI-04). IOS-Press, Valencia, Spain.
Halsey, K.; Long, D.; and Fox, M. 2004. CRIKEY a planner looking at the integration of scheduling and
planning. In Proceedings of the Workshop on Integrating Scheduling Into Planning at the 13th International
Conference on Automated Planning and Scheduling
(ICAPS’03), 46–52.
Halsey, K. 2004. CRIKEY! Its Co-ordination in
Temporal Planning: Minimising Essential Planner–
Scheduler Communication in Temporal Planning.
Ph.D. Dissertation, University of Durham.
Long, D., and Fox, M. 2003. The 3rd international
planning competition: Results and analysis. Journal
of Artificial Intelligence Research 20:1–59.
Smith, D. E.; Frank, J.; and Jonsson, A. 2000. Bridging the gap between planning and scheduling. The
Knowledge Engineering Review 15:47–83.
Weld, D. S. 1994. An introduction to least commitment planning. AI Magazine 15(4):27–61.
Velocity Tuning in Currents Using Constraint Logic Programming
Michaël Soulignac∗,∗∗
Patrick Taillibert∗
Michel Rueher∗∗
* THALES Aerospace
2 Avenue Gay Lussac
78852 Elancourt, FRANCE
** Nice Sophia Antipolis University
I3S/CNRS, BP 145
06903 Sophia Antipolis, FRANCE
{firstname.lastname}@fr.thalesgroup.com
rueher@essi.fr
Abstract
ignoring currents can lead to incorrect or incomplete planners. Such planners may return a physically infeasible path,
or no path at all, even if a valid path exists.
Some extensions have been developed in the field of path
planning, but currents remain neglected during velocity tuning.
Because of its NP-hardness, motion planning among
moving obstacles is commonly divided into two tasks:
path planning and velocity tuning. The corresponding
algorithms are very efficient but ignore weather conditions, in particular the presence of currents. However,
when vehicles are small or slow, the impact of currents
becomes significant and cannot be neglected. Path planning techniques have been adapted to handle currents,
but it is not the case of velocity tuning. That is why
we propose here a new approach, based on Constraint
Logic Programming (CLP). We show that the use of
CLP is both computationally efficient and flexible. It
allows to easily integrate additional constraints, especially time-varying currents.
That is why we propose here a new velocity tuning approach, based on Constraint Logic Programming (CLP). Our
experimental results show that this approach is computationally efficient. Moreover, it offers a flexible framework, allowing to easily integrate other constraints, such as timevarying currents or temporal constraints.
Introduction
Mobile robots are more and more used to collect data in
hostile or hardly accessible areas. For physical or strategic reasons, these robots may not be able to receive directly
orders from a headquarter in real-time. Thus, they have to
embed their own motion planner. Because the environment
is often changing or unknown, this planner has to be very
reactive.
Motion planning is yet a complex task, answering to two
questions simultaneously: where should the robot be, and
when? It is known to be a NP-hard problem (Canny 1988).
That is to say, the computation time grows exponentially
with the number of obstacles.
To guarantee a reasonable response time, motion planning
is commonly divided into two simpler tasks: (1) a path planning task, dealing with the question where, and (2) a velocity
tuning task, dealing with when.
Algorithms associated to these two tasks are generally
based on simple assumptions. For instance, obstacles are
often modeled as polygonal-shaped entities, moving at constant velocity. Data about weather, in particular about (air or
water) currents, are usually ignored.
However, in the case of Unmanned Air Vehicles (UAVs) or Autonomous Underwater Vehicles (AUVs), which may be small or slow, the impact of currents is significant and must be taken into account.
This paper is organized as follows. Section I reviews existing motion planning methods. Section II formalizes the problem of velocity tuning in the presence of currents. Section III introduces our modeling of this problem as a Constraint Satisfaction Problem (CSP) on finite domains. Section IV proposes examples of additional constraints. Finally, section V provides some experimental results, obtained on real wind charts.
I. Motion planning in currents
The decomposition of motion planning into path planning and velocity tuning tasks was first introduced in (Kant & Zucker 1986). This decomposition is widely used in robotics because both tasks can be done in polynomial time.
However, it should be noted that it is a source of incompleteness: the path planning phase may generate a path for which the velocity tuning phase has no solution.
1. Path planning
Path planning methods consist in finding a curve between a start point A and a goal point B, avoiding static obstacles Oi (generally polygonal). They can be divided into four categories: (1) decomposition methods, (2) potential field methods, (3) probabilistic methods, and (4) metaheuristics.
Graph decomposition methods (fig. 1a) are based on a discretization of the environment into elementary entities (generally cells or line segments). These entities (plus A and B) are then modeled as nodes of a graph G. The initial, i.e. concrete, path planning problem is thus reformulated into an abstract one: find the shortest path from node A to node B in G. To do this, classical search techniques are applied, such as the well-known A∗ algorithm (Nilsson 1969) or one of its numerous variants.
In this context, new cost functions have been proposed to find a compromise between following the currents and minimizing the traveled distance (Garau, Alvarez, & Oliver 2005; Petres et al. 2007).
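A current-aware cost of this kind is easy to sketch. The fragment below runs a Dijkstra search on a regular grid where each edge is weighted by its travel time, assuming (our simplification, not the exact cost functions of the papers cited above) that the ground speed along a move is wmax plus the component of the current along that move:

```python
import heapq
import math

def plan_path(grid_size, current, start, goal, w_max=1.0):
    """Dijkstra search on a regular grid. The cost of an edge is its travel
    time: distance / ground speed, where the ground speed is w_max plus the
    component of the current along the move (a simplifying assumption).
    `current` maps a cell (x, y) to a current vector (cx, cy)."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1),
             (1, 1), (1, -1), (-1, 1), (-1, -1)]
    dist, parent = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        t, (x, y) = heapq.heappop(heap)
        if (x, y) == goal:
            break
        if t > dist[(x, y)]:
            continue  # stale heap entry
        for dx, dy in moves:
            nx, ny = x + dx, y + dy
            if not (0 <= nx < grid_size and 0 <= ny < grid_size):
                continue
            d = math.hypot(dx, dy)
            cx, cy = current((x, y))
            speed = w_max + (cx * dx + cy * dy) / d
            if speed <= 1e-9:
                continue  # cannot make headway against this current
            nt = t + d / speed
            if nt < dist.get((nx, ny), math.inf):
                dist[(nx, ny)], parent[(nx, ny)] = nt, (x, y)
                heapq.heappush(heap, (nt, (nx, ny)))
    path, node = [goal], goal
    while node != start:
        node = parent[node]
        path.append(node)
    return path[::-1], dist[goal]
```

With a uniform eastward current, eastward edges become cheaper, so the returned path tends to follow the flow.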
Potential field methods (fig. 1b) (Khatib 1986) consider the robot as a particle under the influence of a potential field U, obtained by adding two types of elementary fields: (a) an attractive field Uatt, associated with B, and (b) repulsive fields Uirep, associated with the obstacles Oi. The point B corresponds to the global minimum of the function U. The path between A and B can thus be computed by applying gradient descent on U, starting from A.
Probabilistic methods (fig. 1c) (LaValle 1998) are based on a random sampling of the environment. These methods are a particular case of decomposition methods: random samples are used as elementary entities, linked to their close neighbors, and modeled by a graph. Probabilistic RoadMaps (PRM) and Rapidly-exploring Random Trees (RRT) are the most famous methods in this category.
Metaheuristics refer to a class of algorithms which simulate natural processes (fig. 1d) (Zhao & Yan 2005). The three main metaheuristics applied to path planning are: (a) genetic algorithms, inspired by Darwin's theory of evolution; (b) particle swarm optimization, inspired by the social behavior of bird flocking and fish schooling; and (c) ant colony optimization, inspired by the way ants find paths between the colony and food.

2. Velocity tuning

Existing velocity tuning approaches generally work in
a 2-D space-time. The first dimension, l ∈ [0, L] (where L is the length of the path), represents the curvilinear abscissa on the path. The second, t ∈ [0, T] (where T is the maximal arrival time), represents the elapsed time since departure. In this space-time:
• Each point of the path is represented by a column. In particular, the start and goal points are represented by the extreme left and right columns.
• Each moving obstacle Oi generates a set of forbidden surfaces Si (often only one). These surfaces contain all couples (l, t) leading to a collision between the robot and Oi. For instance, in figure 2b, the abscissa l = 10 is forbidden between t = 10 and t = 15.
Figure 2: (a) path of fig. 1d, adding two moving obstacles;
(b) the corresponding 2-D space-time.
Once the space-time is built, the initial velocity tuning problem can be reformulated as a path planning problem in this space-time. However, this space-time has specific constraints, notably due to time monotony and velocity bounds. Therefore, specific methods have been applied, such as: (1) adapted decomposition methods, (2) B-spline optimization, and (3) the broken lines algorithm.
Figure 1: Paths (in light grey) obtained by the following
methods: (a) A∗ algorithm on regular cells; (b) potential
fields; (c) RRT; (d) particle swarm optimization.
All these methods have two common characteristics: (1) the cost τ(M, N) between two points M and N represents the Euclidean distance d(M, N), and (2) the computed path is made up of successive line segments. This last property is the basis of our modeling, described in section II.
However, in the presence of currents, the fastest path is not necessarily the shortest. To illustrate, let us consider a swirl: the fastest way to link A and B is more circle-shaped than linear.
As explained before, decomposition methods (figure 3a) divide the space-time into elementary entities and apply graph search techniques. Since many paths are temporally equivalent (they arrive at the same time), an appropriate cost function is necessary. For instance, (Ju, Liu, & Hwang 2002) used a composite cost function balancing the arrival time and the velocity variations.
B-spline optimization techniques (figure 3b) consist in representing the optimal trajectory in the space-time by a B-spline function (Borrow 1988), parameterized by some control points Ki. Graphically, the points Ki locally attract the curve of the B-spline. Their positions are computed in order to minimize the mean travel time, using interior point techniques.
The broken lines algorithm (figure 3c) (Soulignac & Taillibert 2006) tries to link A and B using a unique velocity, i.e. a unique line segment in the space-time. At each intersection of the line with a surface Si, a velocity variation is introduced, by "breaking" this line into two parts. To sum up, this algorithm tries first to arrive as early as possible, and then to minimize velocity variations.
Figure 4: A velocity tuning problem with currents.
Each moving obstacle Oi is a disk of radius ri. This disk corresponds to a point mobile surrounded by a circular safety zone. The position of the mobile, i.e. the center of the disk, is given at every time t by pi. Note that, contrary to most approaches, there is no restriction on the function pi.
Figure 3: Paths (in light grey) obtained in the space-time of fig. 2 by the following methods: (a) visibility graph; (b) B-spline optimization with 4 control points; (c) broken lines algorithm.
All these methods neglect the influence of currents. This is acceptable in the presence of weak currents, since trajectory tracking techniques such as (Park, Deyst, & How 2004) remain applicable to dynamically adjust the robot's velocity.
However, when currents become strong, the robot is neither guaranteed to stay on its path, nor to respect the timeline computed by the velocity tuning algorithm. That is why we propose a new approach, based on CLP techniques.
II. Problem Statement
1. Informal description
A point robot moves on a pre-computed path P from a start site A to a goal site B, with bounded velocity, in a planar environment containing moving obstacles and currents. It has to minimize its arrival time at B while respecting the following constraints: (1) obstacle avoidance and (2) currents handling. Data about obstacles and currents are known in advance.
2. Formalization
The environment is modeled by a 2-D Euclidean space E, with a frame of reference R = (O, x, y). In R, the coordinates of a vector ~u are denoted (ux, uy) and its modulus u.
The path P is defined by a list V of n viapoints, denoted Vi. Each viapoint Vi is situated on P at curvilinear abscissa li. Two successive viapoints (Vi and Vi+1) are linked by a line segment. In other terms, P is made up of successive line segments, which is the form produced by all the path planning methods presented before.
Note that P is assumed to be obtained using adapted cost functions incorporating the influence of currents (otherwise velocity tuning would be meaningless).
Finally, the current can be seen as a 2-D vector field ~c, known either by measurement or by forecasting. Thus, the data about ~c are, by nature, discontinuous, i.e. defined on the nodes of a mesh (not necessarily regular), called current nodes. The mean distance between current nodes may correspond to the resolution of the measurements or to the precision of the forecast model.
The robot's velocity vector relative to the frame R (ground speed) is denoted ~v, and its velocity vector relative to the current ~c (current speed) is denoted ~w.
It is important to understand that ~w only depends on the engine command, whereas ~v is impacted by the current ~c. Indeed, applying the velocity composition law, the quantities ~v, ~c and ~w are linked by the following relation:

~v = ~w + ~c    (1)
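A toy illustration of this composition law (hypothetical numbers):

```python
def compose(w, c):
    """Ground velocity ~v = ~w + ~c (equation 1), component-wise."""
    return (w[0] + c[0], w[1] + c[1])

# A robot commanding 2 m/s due north in a 1 m/s eastward current
# actually drifts north-east over the ground:
v = compose((0.0, 2.0), (1.0, 0.0))
```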
Our problem consists in finding a timing function σ:

σ : M ∈ P ↦ t ∈ [0, T]    (2)

minimizing the arrival time tB = σ(B), with respect to the following constraints:
1. maximal velocity: the modulus of the robot's velocity relative to the current, denoted w, is smaller than wmax. Note that the bound wmax only depends on the robot's engine capabilities;
2. obstacle avoidance: the robot has to avoid a set of m moving obstacles;
3. currents handling: the robot has to take into account the disturbances due to the field ~c.
The quantity T is called the time horizon. It materializes the maximal arrival date at B. This upper bound may be due to the embedded energy or to visibility conditions.
III. Velocity tuning using CLP
Velocity tuning using CLP consists of two steps: (1) defining the constraints describing the velocity tuning problem and (2) solving the corresponding CSP with adequate search strategies.
1. Data representation
The constraints of our model are defined on finite domains. Therefore, the initial data about the environment are reformulated using an appropriate representation.

Time representation
The interval [0, T] is discretized using a constant step ε. The value of ε depends on the context. In our applications, [0, T] contains less than 1000 time steps. For instance, T = 2 hours and ε = 10 seconds leads to 720 time steps.
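In code, this discretization is a one-liner (values taken from the example above):

```python
T = 2 * 3600        # time horizon: 2 hours, in seconds
eps = 10            # discretization step, in seconds
n_steps = T // eps  # number of time steps in [0, T]
```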
Figure 6: Artificial viapoints (white dots) obtained for the
Voronoï diagram of fig. 5a. These viapoints are added to the
initial viapoints (black dots).
Currents representation
As explained before, the current is known at a finite number of points, called current nodes, obtained by measurement or forecasting. Since the current nodes already include an error, we think that it is meaningless to finely interpolate the value of the current between these nodes.
Therefore, we propose the concept of Elementary Current Area (ECA). An ECA is a polygonal region of the environment in which the current is homogeneous. Each ECA contains a unique current node. The value of this node is extended to the whole area.
ECAs are computed by building the Voronoï diagram (Fortune 1986) around the current nodes. This diagram is made up of line segments which are equidistant to the nodes. It is illustrated in figure 5, for uniform and non-uniform distributions.
Note that the currents are constant in time here (time-varying currents are considered in section IV).
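Because an ECA is, by construction, the Voronoï cell of its current node, querying the current anywhere in the environment reduces to a nearest-node search. A minimal sketch (brute-force nearest neighbor, not the authors' implementation):

```python
import math

def current_at(point, nodes):
    """Current at `point`: since an ECA is the Voronoi cell of its current
    node, the current anywhere in the cell is the value measured at the
    nearest node. `nodes` is a list of ((x, y), (cx, cy)) pairs."""
    _, c = min(nodes, key=lambda nc: math.dist(point, nc[0]))
    return c
```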
Artificial viapoints
Artificial viapoints are additional viapoints guaranteeing that the current is constant between two successive viapoints. They are obtained by intersecting the path P and the borders of the ECAs. Since both P and the borders are made up of line segments, these intersections can be computed easily.
The initial list V of viapoints is thus enlarged into V′, containing n′ > n elements. The current between two successive viapoints Vi and Vi+1 is denoted ~ci.

Figure 5: Illustration of ECAs for two distributions of current nodes (grey arrows): (a) uniform and (b) non-uniform.

2. Constraints definition
In this part, we show how the velocity tuning problem can be described thanks to two types of constraints: (a) constraints related to currents and (b) constraints related to moving obstacle avoidance.

a. Constraints related to currents
Let us consider the straight-line move ~di between the viapoints Vi = (xi, yi) and Vi+1 = (xi+1, yi+1). For this move, we define:
• ~ci, the velocity of the current;
• ~vi, the robot's velocity relative to the frame R;
• ~wi, the robot's velocity relative to ~ci.
As explained in equation 1, ~vi and ~wi are linked by ~vi = ~wi + ~ci. Moreover, since we want to impose the move ~di on the robot, ~vi and ~di are collinear.
Thus, if we denote Ci the result of translating Vi by the vector ~ci, we can build the vector ~vi by intersecting:
• the line Li, of direction vector ~di;
• the circle of center Ci and radius wi.
If I1i and I2i are the two intersections obtained¹ (possibly confounded), ~vi can be either the vector ~v1i = Vi I1i or ~v2i = Vi I2i. This is illustrated in figure 7.

Figure 7: Different possibilities for ~vi, for wi < wmax.

¹ Note that we are sure that at least one intersection exists, because the path P is supposed to be entirely feasible.
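The computation of the artificial viapoints mentioned above reduces to segment-segment intersections, since both the path and the ECA borders are polylines. A standard parametric sketch (illustrative; the function names are ours):

```python
def segment_intersection(p, q, a, b, tol=1e-12):
    """Intersection point of segments [p, q] and [a, b], or None.
    Solves p + s*(q - p) = a + u*(b - a) for s, u in [0, 1]."""
    (px, py), (qx, qy), (ax, ay), (bx, by) = p, q, a, b
    rx, ry = qx - px, qy - py
    ex, ey = bx - ax, by - ay
    denom = rx * ey - ry * ex
    if abs(denom) < tol:
        return None  # parallel (or degenerate) segments
    s = ((ax - px) * ey - (ay - py) * ex) / denom
    u = ((ax - px) * ry - (ay - py) * rx) / denom
    if -tol <= s <= 1 + tol and -tol <= u <= 1 + tol:
        return (px + s * rx, py + s * ry)
    return None

def artificial_viapoints(path, borders):
    """Intersect every path segment with every ECA border segment."""
    points = []
    for p, q in zip(path, path[1:]):
        for a, b in borders:
            hit = segment_intersection(p, q, a, b)
            if hit is not None:
                points.append(hit)
    return points
```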
The radius wi = wmax allows us to compute the minimal and maximal modulus of ~vi, denoted vimin and vimax:

vimin = min(v1i, v2i)
vimax = max(v1i, v2i)    (3)

If ~vji and ~di are not in the same direction, the robot is not moving toward the next viapoint Vi+1 but away from it (backward move). In this case, to force a forward move, the modulus vji is replaced by 0 in equation 3.
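The whole construction (translate Vi by the current, intersect the line carrying the move with the circle of radius wmax, clamp backward roots to zero) fits in a few lines. A sketch under the assumption that the move is feasible, i.e. that the discriminant is non-negative:

```python
import math

def speed_bounds(d, c, w_max):
    """Moduli v_min, v_max of the ground velocities compatible with the move
    direction d, the current c and the maximal relative speed w_max.
    Solves |s*u - c| = w_max for the signed modulus s along the unit vector
    u = d/|d|; backward roots are clamped to 0, as in equation 3.
    Assumes the move is feasible (non-negative discriminant)."""
    n = math.hypot(*d)
    ux, uy = d[0] / n, d[1] / n
    along = ux * c[0] + uy * c[1]          # current component along the move
    disc = along ** 2 - (c[0] ** 2 + c[1] ** 2) + w_max ** 2
    root = math.sqrt(disc)
    s1, s2 = along - root, along + root    # the two signed moduli
    v1, v2 = max(s1, 0.0), max(s2, 0.0)    # backward moves replaced by 0
    return min(v1, v2), max(v1, v2)
```

For instance, with the current aligned with the move, the maximal ground speed is wmax plus the current's modulus; with a pure cross current, it shrinks to the speed left after compensating the drift.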
These results allow us to describe the robot's kinematics in the presence of currents:

∀i ∈ [1, n′]: vi ∈ [vimin, vimax]    (Dvi)
ti = ti−1 + di/vi    (Cti,vi)

Note that the quantities di, vimin and vimax are known and constant:
• the distance di is deduced from the positions of the viapoints Vi and Vi+1;
• the velocity bounds vimin and vimax are computed using equation 3.
Therefore, the only variables in the above equations are ti and vi (both scalar).

b. Constraints related to moving obstacle avoidance
As explained in section I.2, moving obstacles can be represented in a 2-D space-time (l, t), where l represents the curvilinear abscissa on the path P, and t the elapsed time since departure.
In this space-time, each moving obstacle Oj generates a set of forbidden surfaces Sj, containing all forbidden couples (l, t), i.e. those leading to a collision between the robot and Oj.
Therefore, for each viapoint Vi, we can define the interval of forbidden times, denoted Fi. This interval has the following meaning: if ti ∈ Fi, then the robot collides with a moving obstacle at viapoint Vi. Fi is computed by intersecting all the surfaces Sj with the line l = li.
As shown in figure 8a, Fi is a union of subintervals F1i ∪ F2i ∪ ... ∪ Fsii, where si denotes the number of intersected surfaces.

Figure 8: (a) forbidden times for two viapoints Vi and Vj: Fi = [0, 3] and Fj = [2, 4] ∪ [10, 14]; (b) impossible move between two successive points Mi and Mi+1: ti ∉ Fi and ti+1 ∉ Fi+1, but [Mi, Mi+1] intersects the forbidden surface S2.

A first idea to model obstacle avoidance would consist in using the simple constraint:

∀i: ti ∉ Fi    (4)

However, this constraint is too weak to avoid collisions in all cases.
To illustrate this point, let us consider two successive viapoints Vi and Vi+1. In the space-time, the visit of these viapoints is symbolized by two points Mi and Mi+1. Even if both points respect equation 4, it is not necessarily the case for all intermediate points lying on the line segment [Mi, Mi+1]. Indeed, this line segment can intersect some forbidden surfaces, as shown in figure 8b.
This problem appears when a forbidden surface is by-passed by one side at point Mi and by the other side at point Mi+1. In the example of figure 8b, Mi is above the surface S2, whereas Mi+1 is below, which leads to an intersection.
A simple way to avoid this situation is to force all the points of the space-time to be on the same side of each forbidden surface. This is modeled by the following constraints, where Fi = [t1, t̄1] ∪ ... ∪ [tsi, t̄si]:

∀i ∈ [1, n′]: ti ∈ [0, T]    (Dti)
∀j ∈ [1, si]: bj ∈ {0, 1}    (Dbj)
ti ≥ t̄j − T · (1 − bj)    (C1ti,bj)
ti ≤ tj + T · bj    (C2ti,bj)

The binary variables bj represent how the forbidden surface Sj is by-passed: bj = 1 if the point Mi is above Sj, else bj = 0.
Since these variables bj are shared by all points Mi, they are forced to by-pass the forbidden surfaces in the same way. Combining the variables bj and T avoids the use of reification techniques: if one constraint is active, the other is naturally disabled (since ∀i: ti ≤ T).

2. CSP solving
a. CSP formulation
A CSP is commonly described as a triplet (X, D, C), where:
• X = ∪{xi} is a set of variables,
• D = Π Di is a set of domains associated with X (Di represents the domain of xi),
• C = ∪{Ci} is a set of constraints on the elements of X.
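The by-pass pair (C1ti,bj), (C2ti,bj) above is a classical big-M encoding with M = T. A small check, on hypothetical integer values, that each value of bj keeps exactly one side of a forbidden interval:

```python
def feasible_times(T, t_lo, t_hi, b):
    """Times t in [0, T] allowed by the pair of constraints
       t >= t_hi - T * (1 - b)   (active when b = 1: pass above the surface)
       t <= t_lo + T * b         (active when b = 0: pass below it)
    for one forbidden interval (t_lo, t_hi). Toy integer horizon."""
    return [t for t in range(T + 1)
            if t >= t_hi - T * (1 - b) and t <= t_lo + T * b]

# Forbidden interval (3, 7) on a horizon T = 10:
below = feasible_times(10, 3, 7, 0)   # b = 0 keeps t <= 3
above = feasible_times(10, 3, 7, 1)   # b = 1 keeps t >= 7
```

Together, the two cases exclude exactly the open interval (3, 7), without any reified constraint.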
Using these notations, our velocity tuning problem can be modeled by the following CSP:
• X = ∪{vi, ti, bj}, i ∈ [1, n′], j ∈ [1, si], where n′ is the number of viapoints (including artificial ones) and si is the number of forbidden surfaces intersected by the line l = li in the space-time,
• D = Dti × Dvi × Dbj,
• C = Cti,vi ∪ C1ti,bj ∪ C2ti,bj.
This CSP has the following properties:
• It contains 2n′ + maxi{si} variables and n′ + 2 maxi{si} constraints. In our applications, n′ < 50 and maxi{si} < 10.
• All constraints are linear².
• Variables are defined on finite domains, with the following sizes:
– |Dti| ≈ 1000 (number of time steps)
– |Dvi| ≈ 100 (number of different velocities)
– |Dbj| = 2 (binary variables)
b. Enumeration strategy
Since many solutions are temporally equivalent, we chose the following enumeration strategy:
• Variable ordering: bj, then ti, then vi (in decreasing order of i).
• Value ordering:
– increasing values for bj (to by-pass the forbidden surface by the bottom first);
– increasing values for ti (to determine the first valid time steps);
– decreasing values for vi (because vi ∼ O(1/ti)).
With this strategy, we try to visit the viapoints as early as possible, from the last viapoint to the first. The variables bj roughly identify a first solution, by determining by which side the forbidden surfaces are by-passed. Then, the variables ti and vi refine this solution. Note that the enumeration mainly concerns the variables ti, because a value of ti imposes a value for vi.

IV. Extension to other constraints
Modeling the velocity tuning problem as a CSP makes it easy to integrate other constraints. This section gives two examples: (1) time-varying currents and (2) temporal constraints.

1. Time-varying currents
In a forecast context, the values of the currents are valid during a time interval ∆T, depending on the application. For instance, in maritime applications, ∆T represents a few hours. As for ECAs, we find that it is useless to interpolate these data between two intervals. We thus consider that a time-varying current is defined by successive levels, as shown in figure 9.

Figure 9: A time-varying current. (a) graph of the cx and cy functions, defined by levels; (b) the corresponding velocity vector.

Let us consider a current ~ci, between the viapoints Vi and Vi+1, changing k times in the interval [0, T]. This interval is thus split into k + 1 subintervals: [0, t1], [t1, t2], ..., [tk−1, tk], [tk, T]. In each subinterval [tj, tj+1], the value of the current is constant, denoted ~cij.
The influence of this time-varying current can be modeled in our CSP by using some binary variables. Indeed, the equation (Dvi) is replaced by the following constraints:

∀j ∈ [1, k − 1]: bj ∈ {0, 1},  ∑j=1..k−1 bj = 1    (5)
ti ≥ tj · bj,  ti < (1 − bj) · T + tj+1    (6)
vi ≥ ∑j=1..k−1 bj · vimin,j    (7)
vi ≤ ∑j=1..k−1 bj · vimax,j    (8)
The binary variables bj identify the subinterval [tj, tj+1] in which the variable ti lies. In other terms, bj = 1 if and only if ti ∈ [tj, tj+1]. This is modeled by equations 5 and 6.
Then, equations 7 and 8 impose velocity bounds on vi according to this subinterval. That is, if ti ∈ [tj, tj+1], then vi ∈ [vimin,j, vimax,j]. The values of vimin,j and vimax,j are computed as explained in part III.1a, substituting ~ci by ~cij.
This model is simple but rough. More precisely, it ignores current changes between two successive viapoints. Therefore, an error is potentially made on the velocity bounds. This error remains negligible if the distance di between viapoints is small.
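Equations 5 to 8 can be mimicked procedurally. In the sketch below (toy values, and the bj are computed from ti instead of being searched by the solver), exactly one bj is set and it selects the velocity bounds of its subinterval:

```python
def velocity_bounds_at(t, boundaries, bounds, T):
    """Velocity bounds selected in the spirit of equations 5-8.
    `boundaries` = [t1, ..., tk] are the instants where the current changes;
    `bounds[j]` = (v_min, v_max) holds on the j-th subinterval of [0, T].
    Here the b_j are computed from t instead of being searched by the solver;
    exactly one b_j is set (equation 5) and it selects the bounds."""
    ts = [0] + list(boundaries) + [T]
    b = [1 if ts[j] <= t < ts[j + 1] else 0 for j in range(len(ts) - 1)]
    v_min = sum(bj * lo for bj, (lo, _) in zip(b, bounds))
    v_max = sum(bj * hi for bj, (_, hi) in zip(b, bounds))
    return v_min, v_max
```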
² After the change of variable vi′ = 1/vi.
Figure 10: Complete example: (a)(b) moving obstacle avoidance, (c) effect of a current change at t = 6 min, (d) loitering during D = 3 min (black square) and (e) effect of a time window, imposing arrival at t = 20 min.
If it is not the case, di can be reduced by artificially subdividing the ECAs. In this way, the size of the ECAs is decreased and the number of artificial viapoints is increased. Therefore, the viapoints are globally closer to each other.
2. Temporal constraints
In this section, we explain how to temporally constrain a viapoint Vi. Especially, we study two temporal constraints particularly mentioned in the literature: (a) time windows and (b) loitering.

Figure 11: The space-time corresponding to fig. 10.

a. Time windows
A time window Wi is a couple (wi, w̄i), specifying the minimum date wi and the maximum date w̄i at which the robot may visit the viapoint Vi.
In a military context, for example, time windows may correspond to strategic data, such as: "the target will be at Vi between wi and w̄i".
The modeling of Wi is quite natural in our CSP, leading to the single constraint:

ti ∈ [wi, w̄i]
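Such temporal constraints are cheap to check on a candidate timing. A sketch (hypothetical values) that chains the visit times ti = ti−1 + di/vi, optionally adding a waiting duration Di at a viapoint, and verifies each window:

```python
def visit_times(d, v, wait, windows):
    """Visit times along the path: t_i = t_{i-1} + d_i / v_i + D_i, where
    D_i is an optional waiting (loitering) duration at viapoint i. Each
    time is checked against an optional window (w_lo, w_hi). Toy values."""
    t, times = 0.0, []
    for d_i, v_i, D_i, w in zip(d, v, wait, windows):
        t += d_i / v_i + D_i
        if w is not None:
            assert w[0] <= t <= w[1], f"time window violated at t={t}"
        times.append(t)
    return times
```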
b. Loitering
The concept of loitering consists in forcing the robot to wait at a viapoint Vi for a given duration Di. From a practical point of view, Di may correspond to the minimum time required to perform a task at Vi.
Here, our goal is not to choose the best value of Di, but to choose the best beginning time ti for the loitering task.
This choice seems hard, because it depends both on the moving obstacles and on the current changes. However, it can be simply modeled in our CSP, replacing the constraint (Cti,vi) by:

ti = ti−1 + di/vi + Di    (9)

V. Experimental results
This section has two objectives: (1) illustrating our approach and (2) evaluating its performance.

1. Illustrative example
We illustrate here all the constraints presented before through a complete example containing: a moving obstacle, a current change, a loitering task and a time window on arrival.
In this example, simple instances of the constraints have been chosen: (1) the current is uniform on the map and (2) the moving obstacle performs a straight-line move at constant velocity.
The result obtained by our approach is depicted in figures 10 and 11. Figure 10 shows the different phases of the velocity tuning in the initial environment, and figure 11 in the space-time.

2. Performance evaluation
In this part, we evaluate experimentally the impact of current changes and moving obstacles on the computation time, in the following conditions:
• Hardware: Our approach has been run on a 1.7 GHz PC with 512 MB of RAM, using the clpfd library (Carlsson, Ottosson, & Carlson 1997) provided by SICStus Prolog.
• Current data: All data come from real wind charts, collected daily during three months from the Meteo France website³ (leading to about 90 different charts). The wind changes are simulated as follows: to simulate k wind changes, the interval [0, T] is divided into k + 1 equal subintervals. A different wind chart is used for each subinterval.
³ http://www.meteofrance.com/FR/mer/carteVents.jsp
• Moving obstacles: As in figure 10, each moving obstacle goes across the environment by performing a straight-line move P1 → P2 at constant velocity. This move is computed in the following way:
1. Two points P1 and P2 are randomly chosen on two borders of the environment, until an intersection I between the path P and the line segment [P1, P2] is detected.
2. The velocity of the obstacle is chosen such that the obstacle and the robot are at point I at the same time.
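This obstacle generation can be sketched as follows; here the interception point I and the robot's arrival time at I are taken as inputs (in the protocol above they are obtained by sampling P1, P2 until [P1, P2] crosses the path):

```python
import math
import random

def make_obstacle(intercept, robot_arrival, env_size, rng=None):
    """One obstacle of the experimental protocol, simplified: it starts at a
    random point P1 on a border of the square environment, and its constant
    speed is chosen so that it reaches the interception point I exactly when
    the robot does."""
    rng = rng or random.Random(0)
    a = rng.uniform(0.0, env_size)
    p1 = rng.choice([(a, 0.0), (a, env_size), (0.0, a), (env_size, a)])
    speed = math.dist(p1, intercept) / robot_arrival
    return p1, speed
```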
The resulting computation times are provided in table 1. Each cell is the mean time obtained on 100 different environments.

Table 1: Average computation time (in ms), for m moving obstacles and k current changes.

k \ m     0     1     2     3     4     5     6
  0       5     7    10    16    51    80    98
  1       9    12    14    21    55    97   127
  2      11    13    15    23    56   104   147
  3      14    16    18    25    68   106   152
  4      17    20    23    34    66   111   159
  5      21    24    28    35    67   112   162
  6      26    27    29    38    71   114   166
From a strictly qualitative point of view, we can observe that the global computation time remains reasonable (a few to a few hundred milliseconds) even in complex environments. Therefore, we think that our approach is potentially usable in on-board planners.
A theoretical study of the time complexity could confirm these results. In particular, it would be interesting to try different enumeration strategies and to evaluate their impact on the computational performance.
Conclusion
In this paper, we proposed a velocity tuning approach based on Constraint Logic Programming (CLP). To our knowledge, this approach is the first able to handle currents. Moreover, it is computationally efficient and flexible.
Indeed, we explained that modeling the velocity tuning problem as a Constraint Satisfaction Problem (CSP) makes it easy to incorporate more complex constraints, in particular time-varying currents. Moreover, our experiments showed that the velocity tuning task can be performed quickly in practice. This means that our approach is potentially usable in on-board planners.
Further work will investigate the coordination of multiple robots sharing the same environment. In particular, we will study how additional constraints could allow the coordination of fleets of UAVs (Unmanned Air Vehicles).
Acknowledgments
The authors would like to thank Paul-Edouard Marson,
Maxime Chivet, Nicolas Vidal and Katia Potiron for their
careful reading of this paper.
References
Borrow, J. E. 1988. Optimal robot path planning using the
minimum-time criterion. Journal of Robotics and Automation 4:443–450.
Canny, J. 1988. The Complexity of Robot Motion Planning.
MIT Press.
Carlsson, M.; Ottosson, G.; and Carlson, B. 1997. An
open-ended finite domain constraint solver. In Proceedings
of Programming Languages: Implementations, Logics, and
Programs.
Fortune, S. 1986. A sweepline algorithm for Voronoi diagrams. In Proceedings of the Second Annual Symposium on Computational Geometry, 313–322.
Garau, B.; Alvarez, A.; and Oliver, G. 2005. Path planning of autonomous underwater vehicles in current fields with complex spatial variability: an A∗ approach. In Proceedings of the International Conference on Robotics and Automation, 194–198.
Ju, M.-Y.; Liu, J.-H.; and Hwang, K.-S. 2002. Real-time velocity alteration strategy for collision-free trajectory planning of two articulated robots. Journal of Intelligent and Robotic Systems 33:167–186.
Kant, K., and Zucker, S. W. 1986. Toward efficient trajectory planning: the path-velocity decomposition. The International Journal of Robotics Research 5:72–89.
Khatib, O. 1986. Real-time obstacle avoidance for manipulators and mobile robots. In Proceedings of the International Conference on Robotics and Automation, volume 2, 500–505.
LaValle, S. M. 1998. Rapidly-exploring random trees: A
new tool for path planning. TR 98-11, Computer Science
Dept., Iowa State Univ.
Nilsson, N. J. 1969. A mobile automaton: An application of artificial intelligence techniques. In Proceedings of the International Joint Conference on Artificial Intelligence, 509–520.
Park, S.; Deyst, J.; and How, J. 2004. A new nonlinear
guidance logic for trajectory tracking. Proceedings of the
AIAA Guidance, Navigation and Control Conference.
Petres, C.; Pailhas, Y.; Patron, P.; Petillot, Y.; Evans, J.; and
Lane, D. 2007. Path planning for autonomous underwater
vehicles. Transactions on Robotics 23:331–341.
Soulignac, M., and Taillibert, P. 2006. Fast trajectory planning for multiple site surveillance through moving obstacles and wind. In Proceedings of the Workshop of the UK
Planning and Scheduling Special Interest Group, 25–33.
Zhao, Q., and Yan, S. 2005. Collision-free path planning
for mobile robots using chaotic particle swarm optimization. In Proceedings of the International Conference on
Advances in Natural Computation, 632–635.
SHORT PAPERS
Planning as a software component: A Report from the trenches ∗
Olivier Bartheye and Éric Jacopin
MACCLIA
Crec Saint-Cyr
Écoles de Coëtquidan
F-56381 Guer Cedex
{olivier.bartheye,eric.jacopin}@st-cyr.terre.defense.gouv.fr
An awarded claim. While the Pengi paper (Agre & Chapman 1987) received a Classic Paper award at AAAI 2006 (News 2006), to our knowledge we have yet to see whether its main claim on classical planning is true (Agre & Chapman 1987, page 269): that "a traditional problem solver for the Pengo domain [could not cope] with the hundreds or thousands of such representations as (AT BLOCKS-213 427 991), (IS-A BLOCK-213 BLOCK), and (NEXT-TO BLOCK-213 BEE-23)". Or, stated differently (Agre & Chapman 1987, page 272): "[The Pengo domain] is one in which events move so quickly that little or no planning is possible, and yet in which human experts can do very well."
The Pengo domain is that of a video game of the eighties where a player navigates a penguin around a two-dimensional maze of pushable ice blocks. The player must collect diamonds distributed across the maze while avoiding being killed by bees; but the player can push an ice block, which kills a bee if it slides into it.
The Pengi system described in the Pengi paper (Agre & Chapman 1987) is a video-game playing system which just happens to fight bees in the Pengo game. Pengi first searches for the penguin on the screen to register its initial position. Then it searches for the most dangerous bee and for an appropriate weapon to kill that bee (that is, an ice block), and then navigates the penguin towards that weapon to kick it. Both written in Lisp, the Pengo game and the Pengi system are in fact the same Lisp program: the search for the penguin and the most dangerous bee can be made directly by looking at the Lisp data structures. According to the on-going conditions of the game, various pieces of code are activated (for instance, you may wish to push an ice block several times before it becomes a weapon). We refer the reader to the Pengi paper for further information on the Pengi system. Finally, "[Pengi] plays Pengo badly, in near real time. It can maneuver behind blocks to use as projectiles and kick them at bees and can run from bees which are chasing it" (Agre & Chapman 1987, page 272).
Interpreted as a finite state machine, the Pengi system can easily be re-implemented so that it not only fights bees reasonably well but also collects diamonds, even in non-trivial mazes (Drogoul, Ferber, & Jacopin 1991).
∗ Special thanks to Maria Fox, Jörg Hoffmann, Jana Koehler and Derek Long about the gripper domain.
The awarded claim is ultimately about space and time complexity in the Pengo domain and of classical planning algorithms around 1987. But since 1987, processors have become several hundred times faster, and the fastest classical planners are able to produce plans with hundreds of actions in a matter of seconds for certain problems. Consequently, we thought it would be interesting and, most surely, fun, to see how current technology could cope with a 1980s video-game. We report here on our very first steps towards the evaluation of the claim about classical planning.
Classical planning, really? As a testbed, we chose Iceblox (Bartlett, Simkin, & Stranc 1996, pages 264–268), a slightly different version of the Pengo game for which there exists an open and widely available Java implementation (Hornell 1996). For instance (cosmetic differences): flames, and not bees, are chasing the penguin-player, who must now collect coins, and not diamonds. Moreover (different actions), coins must be extracted from ice blocks. Extraction means kicking an ice block seven times to destroy the ice, thus making the coin ready for collection. Such an ice block with a coin inside slides as well as any other ice block. So the player must kick in a direction where the ice block cannot slide (e.g. against the edge of the game) in order to extract an iced coin.
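These kicking and sliding rules can be sketched as a tiny grid simulation. This is our own illustrative Python encoding of the rules just described, not Hornell's Java implementation; the cell symbols and the `kick` interface are assumptions.

```python
KICKS_TO_EXTRACT = 7  # seven kicks destroy the ice around a coin

def kick(grid, hits, x, y, dx, dy):
    """Kick the ice block at (x, y) towards (dx, dy). A block with room
    to move slides until it hits something; a block that cannot slide is
    chipped instead, and an iced coin ('C') is extracted after seven such
    kicks. Cells: '.' empty, 'B' plain ice, 'C' iced coin, '#' wall."""
    cell = grid[y][x]
    nx, ny = x + dx, y + dy
    if grid[ny][nx] == ".":
        # The block slides until the next cell is occupied.
        while grid[ny + dy][nx + dx] == ".":
            nx, ny = nx + dx, ny + dy
        grid[y][x], grid[ny][nx] = ".", cell
        return "slid"
    # Blocked: the kick chips the ice instead of moving it.
    hits[(x, y)] = hits.get((x, y), 0) + 1
    if cell == "C" and hits[(x, y)] >= KICKS_TO_EXTRACT:
        grid[y][x] = "."  # ice destroyed; the coin is ready for collection
        return "extracted"
    return "chipped"

# An iced coin kicked rightwards slides up to the wall; once it can no
# longer slide, seven further kicks free the coin.
grid = [list("#####"), list("#.C.#"), list("#####")]
hits = {}
print(kick(grid, hits, 2, 1, 1, 0))  # → slid (the block now rests at (3, 1))
```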
Instead of designing a new planning system, we decided to pick an existing one, and eventually several, in order to compare their relative performance if they had any ability at playing Iceblox. We consequently decided to re-implement Iceblox in Flash (Adobe 2007). Not only would we provide a new implementation of the game, but we could also use the plug-in architecture of the Flash runtime: a call and return mechanism can run (and pass parameters in and out of) any external piece of executable code when put in the appropriate directory.
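In our setting, the external executable is a planner. The call-and-return mechanism can be sketched as follows (in Python rather than ActionScript, purely for illustration; the `./ff` path is an assumption, and the `-o`/`-f` flags follow the FF planner's usual command line):

```python
import subprocess

def parse_plan(stdout_text):
    """Pick out numbered plan steps such as 'step 0: MOVE B D' or
    '1: PICK BALL1 D' from a planner's standard output."""
    steps = []
    for line in stdout_text.splitlines():
        head, sep, tail = line.partition(":")
        tokens = head.split()
        if sep and tokens and tokens[-1].isdigit() and tail.strip():
            steps.append(tail.strip())
    return steps

def call_external_planner(domain_file, problem_file, planner="./ff"):
    """Run a classical planner as an external executable and return its
    plan as a list of action strings."""
    result = subprocess.run([planner, "-o", domain_file, "-f", problem_file],
                            capture_output=True, text=True, timeout=60)
    return parse_plan(result.stdout)
```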
This deviates from the original Pengi system, which was the same Lisp program as the Pengo game (and also deviates from (Drogoul, Ferber, & Jacopin 1991), where everything was implemented in the same Smalltalk program), but it would eventually ease the comparison, as classical planners are not necessarily written in Flash.
However, this dramatically changes the setting of the problem. On one side, a classical planner becomes an external component which happens to provide a planning functionality: fine, that is how we want it to work. On the other side, the world view of the Pengi paper (Agre 1993) is that of the dynamics of everyday life (Agre 1988) (plans do exist, but are better communicated1 to people than built from scratch) and is thus opposed to the heavily intentional (Bratman 1987; Miller, Galanter, & Pribram 1986) world view of planning.
In other words: the Pengi system is always in charge of the actions (moving the penguin, kicking ice blocks), whereas an external component is in charge only when activated and is harmless otherwise: a player must be able to play Iceblox when the planning component is not activated or no component is plugged in. This generates supplementary questions: when is classical planning activated, and for how long? One more constraint: to respect the dynamics of the video-game domain, Iceblox must never stop and must run while the classical planning component is planning: flames keep on chasing the penguin and sliding ice blocks keep on sliding.
Consequently, the classical planning component is activated when the player presses the “p” key. This activation
is ended as soon as the player presses the keyboard again:
the arrow keys to move the penguin up, right, down and left;
and the space key to kick an ice block.
Hopefully, an anonymous classical planner shall build a plan and return it to Iceblox. What shall Iceblox do with this plan? Please note that this question does not immediately entail further questions of interleaving classical planning and execution (Ambros-Ingerson & Steel 1988). To begin with, there is a matter of level of detail: actions in Iceblox correspond to keys pressed by the player. Is the classical planning component really expected to build plans with such actions?
Hints from a gripper video game On one hand, the classical planning component is expected to build plans with keys pressed. First because it seems part of the claim: if the classical planning component (that is, the “traditional problem solver” of the claim) has to cope “with hundreds or thousands” of detailed representations describing the initial and final situations, then we can expect action representations to be as detailed as the initial and final situations. However, the Pengi literature (Agre & Chapman 1987; Agre 1988; 1993; 1997; Chapman 1990) says nothing about this.
On the other hand, classical planners are used to coping with high-level action descriptions. For instance, here is the classical planning Move operator from the well-known gripper (Fox & Long 1999) domain:
Move(X,Y) =
    Preconditions : {at-robby(X)}
    Additions     : {at-robby(Y)}
    Deletions     : {at-robby(X)}
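The semantics of such an operator can be sketched in a few lines of Python. This is our own minimal STRIPS-style rendering over sets of ground atoms, not the internals of any particular planner:

```python
def apply_operator(state, preconditions, additions, deletions):
    """STRIPS-style operator application: the operator is applicable iff
    all preconditions hold in the state (a set of ground atoms); the
    successor state drops the deletions and adds the additions."""
    if not preconditions <= state:
        return None  # preconditions not met: operator not applicable
    return (state - deletions) | additions

# Move(B, D): robby goes from room B to room D.
succ = apply_operator({"at-robby(B)"},
                      preconditions={"at-robby(B)"},
                      additions={"at-robby(D)"},
                      deletions={"at-robby(B)"})
print(succ)  # → {'at-robby(D)'}
```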
1 Official player's guides are good sources of plans communicated to video-game players that would otherwise take some time to build.
Figure 1: An anonymous classical planning system has built (actually, it's FF, plugged into our Flash application as described earlier; but let's say we didn't tell you) a plan for the following gripper video-game problem: 4 balls must be moved from room B to room D. The on-going action (from the plan) is printed in the green area at the bottom of the window: robby is moving from room D to room B; details of the navigation (and of the picking up and putting down of balls) are left to the Flash application.
In the gripper domain, robby-the-robot uses its arms to move balls from one room, along a corridor, to another. Neither bees nor flames prevent robby-the-robot from succeeding in transporting balls from one room to another. It is nevertheless easy to come up with a simplistic two-dimensional gripper video-game: your task is to move a set of balls as fast as possible from their initial location to their final location (see Figure 1).
As stupid as this may sound, this gripper video-game isn't too far from, say, the popular Sokoban video-game (in a maze, blocks must be slid from one place to another, with no time limit) (Charrier 2007). In such a puzzle, the details of the block-pushing activity are important: e.g. a wrong push at a corner can make the problem unsolvable. But more important is the block you push next, which sequences the player's next move. Similar Iceblox situations, where the player only needs to navigate towards iced coins and then extract them, do exist (see Figure 2).
Here are two operators which can combine into a plan and
solve the simple situation of Figure 2: first MoveToCoin(6,4),
then Extract(6,4).
Figure 2: A simplistic level in the Iceblox domain: move to
the ice block containing a coin and then extract it. Details
of the navigation, as far as possible from the flames, and of
the extraction of the coin (seven kicks to the ice block) are
again left to the Flash application.
MoveToCoin(X,Y) =
    Preconditions : {at(X,Z)}
    Additions     : {at-coin(X,Y)}
    Deletions     : {at(X,Z)}

Extract(X,Y) =
    Preconditions : {at-coin(X,Y), iced-coin(X,Y)}
    Additions     : {at(X,Y), extracted(X,Y)}
    Deletions     : {at-coin(X,Y), iced-coin(X,Y)}
Since we have implemented neither flame-fighting nor fleeing operators, flames must be un-aggressive so that the coin of Figure 2 can be extracted. And because of the simple path from the penguin to the coin, the initial and final situations are simply described: {at(1,1), iced-coin(6,4)} and {extracted(6,4)}, respectively.
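How a planner could combine the two operators above into a plan for this minimal problem can be sketched with a naive breadth-first search in Python. This is our own toy stand-in for the anonymous classical planner, with the two operators grounded by hand:

```python
from collections import deque

# Ground operators for the minimal Iceblox problem of Figure 2, written as
# (name, preconditions, additions, deletions); the encoding is our own.
OPS = [
    ("MoveToCoin(6,4)",
     {"at(1,1)"}, {"at-coin(6,4)"}, {"at(1,1)"}),
    ("Extract(6,4)",
     {"at-coin(6,4)", "iced-coin(6,4)"},
     {"at(6,4)", "extracted(6,4)"},
     {"at-coin(6,4)", "iced-coin(6,4)"}),
]

def plan(initial, goal):
    """Naive breadth-first search over states given as sets of atoms."""
    queue = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:
            return steps
        for name, pre, add, delete in OPS:
            if pre <= state:
                succ = frozenset((state - delete) | add)
                if succ not in seen:
                    seen.add(succ)
                    queue.append((succ, steps + [name]))
    return None

print(plan({"at(1,1)", "iced-coin(6,4)"}, {"extracted(6,4)"}))
# → ['MoveToCoin(6,4)', 'Extract(6,4)']
```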
We won’t discuss this extremely low number of formulas needed to describe what could be called a minimal Iceblox problem: up to now, the biggest part of our work has been devoted to staying as close as possible to the spirit of classical planning and video-games while designing a satisfying testbed. In the future, we hope to concentrate more on designing classical planning predicates and operators in order to cope with more complex Iceblox situations.
References
Adobe. 2007. Flash. http://www.adobe.com/.
News, A. 2006. Classic paper award. AI Magazine 27(3): 4.
Agre, P., and Chapman, D. 1987. Pengi: An implementation of a theory of activity. In Proceedings of 6th AAAI, 268–272.
Agre, P. 1988. The Dynamics of Everyday Life. Ph.D. Dissertation, MIT AI Lab Tech Report 1085.
Agre, P. 1993. The symbolic worldview: Reply to Vera and Simon. Cognitive Science 17(1): 61–69.
Agre, P. 1997. Computation and Human Experience. Cambridge University Press.
Ambros-Ingerson, J., and Steel, S. 1988. Integrating planning, execution and monitoring. In Proceedings of 7th AAAI, 83–88.
Bartlett, N.; Simkin, S.; and Stranc, C. 1996. Java Game Programming. Coriolis Group Books.
Bratman, M. 1987. Intentions, Plans and Practical Reason. Harvard University Press.
Chapman, D. 1990. Vision, Instruction and Action. Ph.D. Dissertation, MIT AI Lab Tech Report 1204.
Charrier, D. 2007. Super Sokoban 2.0. http://d.ch.free.fr/.
Drogoul, A.; Ferber, J.; and Jacopin, E. 1991. Viewing cognitive modelling as eco-problem solving: The PENGI experience. In Proceedings of the 1991 European Conference on Modelling and Simulation Multiconference, 337–342.
Fox, M., and Long, D. 1999. The detection and exploitation of symmetry in planning problems. In Proceedings of 16th IJCAI, 956–961.
Hornell, K. 1996. Iceblox. http://www.tdb.uu.se/~karl.
Miller, G.; Galanter, E.; and Pribram, K. 1986. Plans and the Structure of Behavior. Adams-Bannister-Cox.
Nurse Scheduling Web Application
Zdeněk Bäumelt1,2, Přemysl Šůcha1, Zdeněk Hanzálek1,2
1 Department of Control Engineering, Faculty of Electrical Engineering, Czech Technical University in Prague, Czech Republic, {baumez1,suchap,hanzalek}@fel.cvut.cz
2 Merica s. r. o., Czech Republic, {imedica,hanzalek}@merica.cz
Abstract

The focus of this paper is on the development of a web application for solving the Nurse Scheduling Problem. This problem belongs to the domain of scheduling problems, more precisely to timetabling. It is necessary to consider a large number of constraints and interactions among nurses, which can be simplified through web access.

Contributions

This paper uses a Tabu Search approach; the main contribution of this work lies in an application structure designed for access via the web.

Introduction

Preparation of a multishift schedule is a rather difficult process which incorporates a couple of constraints (e.g. a minimum number of nurses for each type of shift, nurses' workload, balanced shift assignment) and the interaction of several users (consideration of nurses' requests). Even though single-user nurse scheduling applications avoid a rather painful manual process, they do not allow easy access for all nurses to interact with each other. This problem can be efficiently solved using modern web technologies, while carefully considering all specific features of such an application, e.g. a large amount of human interaction and a dramatic impact on the satisfaction of individual nurses as well as on the good mood of the nurse team.

Definition of Nurse Scheduling Problem

The Nurse Scheduling Problem (NSP) is an NP-hard problem that belongs to the timetabling or personnel scheduling domain. The solution of this problem should satisfy all constraints that are set on the input. With larger instances (growing with the number of nurses, the number of days in the schedule and the set of constraints), the NSP leads to combinatorial explosion and it becomes harder to find an optimal solution.

Related Works

There are several views on solving the NSP. The background paper (Hung 1995) gives a history of NSP research from the 60's to 1994. Another bibliographic survey with one described approach is (Cheang et al. 2002). A more recent survey is presented in (Burke et al. 2004).

On one hand, there is the branch of optimal solution approaches, which includes linear programming (LP) and integer linear programming (ILP) (Eiselt & Sandblom 2000). On the other hand, there are heuristic approaches. One way to find some solution is to use artificial intelligence methods (e.g. declarative and constraint programming (Okada 1992) or expert systems (Chen & Yeung 1993)). The second way is to use metaheuristics (simulated annealing, tabu search (Berghe 2002) or evolutionary algorithms (Aickelin 1999)).

Application Structure

The structure of the Nurse Scheduling Web Application (NSWA) is shown in Figure 1. Users can work with the application via common web browsers. All application blocks are on the server side, which brings many other advantages (operating system independence, no installation and upgrades on the client side). The scheduling algorithm runs independently as a web service and exchanges data with the application and the database through a communication interface.

Figure 1: NSWA structure - block design.
Scheduling Algorithm
We decided to use a scheduling algorithm that is based on multicriteria programming implemented as a Tabu Search metaheuristic.
Mathematical Model
Our mathematical model is designed as a three-shift model – early (E), late (L) and night shift (N). (In Figure 2 the shifts are early (R), late (O), night (N) and holiday (D); shifts with a star are shifts requested by nurses.) Coverage is per shift; under- and over-coverage is not allowed. We decided on a one-month scheduling period because of data export to the salary administration. There are no qualification groups (all nurses have the same qualification) and we considered a full-time workload in this version of the mathematical model.

Figure 2: Screenshot from our Nurse Scheduling Web Application (July 2007).

We optimize the objective function Z:

    min_{x ∈ X} Z(x)    (1)
where x is one schedule from the state space X of schedules. There are two types of constraints in our mathematical model:

• hard constraints, which always have to be fulfilled;
• soft constraints with penalization fj(x), which are the subject of the objective function Z(x), defined as

    Z(x) = w · f(x) = Σ_{j=1}^{d} wj fj(x),   wj ≥ 0,  d = dim(f(x))    (2)

where w is a vector of weights given by the user, f(x) is a vector function of constraint penalizations and d is the number of soft constraints.

Head nurses can choose which of the hard constraints will be used in our algorithm. Soft constraints are weighted by the head nurses as well. Some hard constraints (hc2, FC) have been converted to soft constraints with very large weights compared to the weights of the soft constraints sc1, sc2, sc3, sc4, sc5.

The outline of the full Nurse Scheduling Algorithm is given in Algorithm 1 below.

Algorithm 1 – Nurse Scheduling Algorithm
1. read the scheduling parameters and the nurses' requests;
2. find a feasible solution xinit satisfying hard constraints;
3. optimization (Algorithm 2);
4. user choice:
   • schedule is acceptable, goto 7;
   • schedule is acceptable with manual corrections, goto 6;
   • schedule is not acceptable, the user can reconfigure the scheduling parameters, goto 5;
5. reconfiguration of scheduling parameters, goto 1, 3 or 6;
6. manual corrections, goto 3, 5 or 7;
7. end of optimization, save the schedule.

In our algorithm we considered the following constraints:
Hard Constraints
• required number of nurses for each shift type (#RE, #RL, #RN)
• one shift assignment per day (hc1a)
• no early shift after a night shift assignment (hc1b)
• no more than five consecutive working days (hc2)
• forbidden shift combinations (FC)
• consideration of days from the previous month (#H)
• consideration of nurses' requests (#R)

Soft Constraints
• nurses' work-load balance (sc1)
• nurses' day/night shift balance (sc2)
• nurses' weekend work-load balance (sc3)
• avoiding isolated working days (sc4)
• avoiding isolated days-off (sc5)

Tabu Search Algorithm
The Tabu Search algorithm, shown in detail in Algorithm 2, is used to reduce the state space of schedules. In our implementation, TabuList represents the list of forbidden shift exchanges and has three attributes: the indexes i1 and i2 represent the origin and target row (nurse) of a shift exchange, and the third index j is the day index. The length of TabuList, the so-called TabuList tenure, was set to 8.

Figure 3: The candidate search, non-permissible shift exchanges.
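The objective function Z(x) of equation (2), a weighted sum over the soft constraints, can be sketched in a few lines of Python. The penalty values below are hypothetical, purely to illustrate the computation:

```python
def objective(weights, penalties):
    """Z(x) = w · f(x): the weighted sum of soft-constraint
    penalizations fj(x) of equation (2)."""
    assert len(weights) == len(penalties) and all(w >= 0 for w in weights)
    return sum(w * f for w, f in zip(weights, penalties))

# Soft constraints sc1..sc5 all weighted 100, as in several Table 1 runs:
w = [100, 100, 100, 100, 100]
f_x = [2, 0, 1, 3, 0]  # hypothetical fj(x) penalties for a schedule x
print(objective(w, f_x))  # → 600
```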
Table 1: NSWA experiments.

 n | m  | #RE | #RL | #RN | #H | #R | hc1 | hc2 | FC         | w(sc1) | w(sc2) | w(sc3) | w(sc4) | w(sc5) | ts [s]
28 | 12 |  4  |  3  |  2  |  0 |  0 |  1  |  1  |            |   0    |   0    |   0    |   0    |   0    |  1.336
28 | 12 |  4  |  3  |  2  |  5 |  0 |  1  |  1  |            |   0    |   0    |   0    |   0    |   0    |  2.396
28 | 12 |  4  |  3  |  2  |  5 |  1 |  1  |  1  |            |   0    |   0    |   0    |   0    |   0    |  1.038
28 | 12 |  4  |  3  |  2  |  5 |  0 |  1  |  1  |            |  100   |  100   |   0    |   0    |   0    |  2.860
28 | 12 |  4  |  3  |  2  |  5 |  0 |  1  |  1  |            |  100   |  100   |  100   |  100   |  100   |  4.342
28 | 12 |  4  |  3  |  2  |  5 |  0 |  1  |  1  |            |  100   |  100   |  100   |  100   |  100   |  6.460
28 | 12 |  4  |  3  |  2  |  5 |  0 |  1  |  1  |            |  100   |  100   |  100   |  100   |  100   |  7.327
28 | 20 |  7  |  5  |  3  |  5 |  0 |  1  |  1  |            |  100   |  100   |  100   |  100   |  100   | 88.588
28 | 20 |  3  |  2  |  2  |  5 |  0 |  1  |  1  | NNLL, LNLE |  100   |  100   |  100   |  100   |  100   | 35.607
Let a candidate be a possible shift exchange in one day that satisfies the hard constraints (see Figure 3 – two candidates are forbidden due to hard constraints hc1b and hc2). Let xcand be the schedule x with the candidate shift exchange applied, and Z(xcand) be the value of the objective function of this schedule.
Algorithm 2 – Tabu Search Algorithm
1. compute Z(xinit);
   x := xinit;
   Z(x) := Z(xinit);
2. xnext := xinit;
   Z(xnext) := Z(xinit);
3. while ((Z(x) > 0) & (∃ not forbidden fj(x))) (Figure 4)
     choose max(wj fj(x)), j ∈ not forbidden constraints;
     for ∀ candidate
       if (candidate ∉ TabuList)
         compute Z(xcand);
         if (Z(xcand) < Z(xnext))
           xnext := xcand;
           Z(xnext) := Z(xcand);
         endif
       endif
     endfor
     if (Z(xnext) < Z(x))
       x := xnext;
       Z(x) := Z(xnext);
       add the opposite exchange to the top of TabuList;
       clear all forbidden constraints (Figure 4, step 2);
     else
       add an empty record to the top of TabuList;
       forbid the chosen constraint (Figure 4, steps 1, 3, 4, 5, 6);
     endif
   endwhile
4. return x, Z(x).

Let xnext be the best candidate (with respect to the objective function) at each optimization step. When we have gone through all possible candidates, we compare the values of Z(x) and Z(xnext) and choose the better one for the next step of optimization. The idea of the soft constraint choice and of algorithm termination is demonstrated in Figure 4 for the case of four soft constraints.

Figure 4: Choice of soft constraints for the next step of optimization and algorithm termination.

Experiments
We used our NSWA, called iMEDICA1, for the instances presented in Table 1. Columns from n to w(sc5) are input parameters (n stands for the number of nurses and m for the number of days in the schedule; the other columns are hard and soft constraints). Column FC shows the considered forbidden shift combinations (e.g. 'LNLE' – late, night, late and early shift). The output parameter ts is the scheduling time in seconds, including steps 1–4 of Algorithm 1 and web communication. The instances were computed on a server with an Intel Pentium 3.4 GHz and 4 GB DDR RAM.
In order to evaluate our NSWA, we implemented an optimal solution via ILP for a simplified two-shift (day and night) mathematical model (Azaiez & Sharif 2005). We used the free solver GLPK2. Scheduling times for instances with n ∼ 10 nurses and m = 28 days were hundreds of seconds (more results are in (Baumelt 2007)).
1 iMEDICA, http://imedica.merica.cz/, a product of Merica s. r. o.
2 GLPK, http://www.gnu.org/software/glpk/
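For intuition, a two-shift model of this kind can be solved exactly by brute force on tiny instances. The sketch below is a Python toy stand-in with exact-coverage constraints, not a reproduction of the GLPK ILP formulation actually used:

```python
from itertools import product

def optimal_two_shift(n_nurses, n_days, req_day, req_night):
    """Exhaustive 0-1 search for a toy two-shift (day/night) model:
    each nurse each day works 'D', 'N' or is off '-', every day must be
    covered by exactly req_day day nurses and req_night night nurses,
    and the total number of assigned shifts is minimised. Usable only
    on tiny instances (3^(n_nurses * n_days) assignments)."""
    best, best_cost = None, None
    for flat in product("DN-", repeat=n_nurses * n_days):
        rows = [flat[i * n_days:(i + 1) * n_days] for i in range(n_nurses)]
        feasible = all(
            sum(r[d] == "D" for r in rows) == req_day and
            sum(r[d] == "N" for r in rows) == req_night
            for d in range(n_days)
        )
        if feasible:
            cost = sum(s != "-" for s in flat)
            if best_cost is None or cost < best_cost:
                best, best_cost = rows, cost
    return best, best_cost

rows, cost = optimal_two_shift(n_nurses=3, n_days=2, req_day=1, req_night=1)
print(cost)  # → 4 (one day and one night nurse on each of the two days)
```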
Conclusions
In this paper we briefly presented our Nurse Scheduling Web Application. We have received feedback from several hospitals in the Czech Republic, and in cooperation with these hospitals we are working on improving the mathematical model and the application interface.
Acknowledgements
This work was supported by the Ministry of Education of
the Czech Republic under Project 2C06017.
References
Aickelin, U. 1999. Genetic Algorithms for Multiple-Choice Optimisation Algorithms. Ph.D. diss., European Business Management School, University of Swansea.
Azaiez, M. N., and Sharif, S. S. 2005. A 0-1 goal programming model for nurse scheduling. Computers & Operations Research 32.
Baumelt, Z. 2007. Hospital Nurse Scheduling. Master's thesis, Department of Control Engineering, Faculty of Electrical Engineering, Czech Technical University in Prague.
Berghe, G. V. 2002. An Advanced Model and Novel Meta-Heuristic Solution Methods to Personnel Scheduling in Healthcare. Ph.D. diss., University of Gent.
Burke, E. K.; de Causmaecker, P.; Berghe, G. V.; and van Landeghem, H. 2004. The state of the art of nurse rostering. Journal of Scheduling 7:441–499.
Cheang, B.; Li, B.; Lim, A.; and Rodrigues, B. 2002. Nurse rostering problems – a bibliographic survey. European Journal of Operational Research.
Chen, J., and Yeung, T. 1993. Hybrid expert system approach to nurse scheduling. Computers in Nursing.
Eiselt, H. A., and Sandblom, C. L. 2000. Integer Programming and Network Models. Springer-Verlag Berlin Heidelberg, 1st edition.
Hung, R. 1995. Hospital nurse scheduling. Journal of Nursing Administration 25.
Okada, M. 1992. An approach to the generalised nurse scheduling problem – generation of a declarative program to represent institution-specific knowledge. Computers and Biomedical Research 25.