
Industrial Control Systems Security via Runtime Enforcement

Published: 09 November 2022
Abstract

    With the advent of Industry 4.0, industrial facilities and critical infrastructures are transforming into an ecosystem of heterogeneous physical and cyber components, such as programmable logic controllers, increasingly interconnected and therefore exposed to cyber-physical attacks, i.e., security breaches in cyberspace that may adversely affect the physical processes underlying industrial control systems.
In this article, we propose a formal approach based on runtime enforcement to ensure specification compliance in networks of controllers, possibly compromised by colluding malware that may locally tamper with actuator commands, sensor readings, and inter-controller communications. Our approach relies on an ad-hoc sub-class of Ligatti et al.’s edit automata to enforce controllers represented in Hennessy and Regan’s Timed Process Language. We define a synthesis algorithm that, given an alphabet 𝒫 of observable actions and a timed correctness property e, returns a monitor that enforces the property e during the execution of any (potentially corrupted) controller with alphabet 𝒫. Our monitors perform mitigation by correcting or suppressing incorrect actions of corrupted controllers and by generating actions in full autonomy when the controller under scrutiny is unable to do so correctly. Besides classical requirements, such as transparency and soundness, the proposed enforcement enjoys deadlock- and divergence-freedom of monitored controllers, together with scalability when dealing with networks of controllers. Finally, we test the proposed enforcement mechanism on a non-trivial case study, taken from the context of industrial water treatment systems, in which the controllers are injected with different malware with different malicious goals.

    1 Introduction

    Industrial Control Systems (ICSs) are physical and engineered systems whose operations are monitored, coordinated, controlled, and integrated by a computing and communication core [57]. They represent the backbone of Critical Infrastructures for safety-critical applications such as electric power distribution, nuclear power production, and water supply.
    The growing connectivity and integration in Industry 4.0 have triggered a dramatic increase in the number of cyber-physical attacks [32] targeting ICSs, i.e., security breaches in cyberspace that adversely affect the physical processes. Some notorious examples are: (i) the Stuxnet worm, which reprogrammed Siemens PLCs of nuclear centrifuges in the nuclear facility of Natanz in Iran [36]; (ii) the CRASHOVERRIDE attack on the Ukrainian power grid, otherwise known as Industroyer [62]; (iii) the recent TRITON/TRISIS malware that targeted a petrochemical plant in Saudi Arabia [19].
    One of the key components of ICSs is Programmable Logic Controllers, better known as PLCs. They control mission-critical electrical hardware such as pumps or centrifuges, effectively serving as a bridge between the cyber and the physical worlds. PLCs have a simple ad-hoc architecture based on a central processing module (CPU) and further modules supporting physical inputs and outputs. The CPU executes the operating system of the PLC and runs a logic program defined by the user, called the user program, which executes simple repeating processes known as scan cycles (IEC 61131-3 [1]). Each scan cycle consists of three phases: (i) reading of sensor measurements of the physical process; (ii) execution of the controller code to compute how the physical process should evolve; (iii) transmission of commands to the actuator devices to govern the physical process as desired.
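To make the three phases concrete, the following minimal sketch (ours, in Python; all function names and the cycle time are illustrative placeholders, not taken from any PLC runtime) shows the overall shape of such a scan loop:

import time

SCAN_CYCLE = 0.1  # seconds; illustrative value only

def scan_loop(read_sensors, user_program, write_actuators, state):
    # Illustrative PLC scan loop: (i) read inputs, (ii) run the user program,
    # (iii) write outputs, then wait for the next cycle.
    while True:
        start = time.monotonic()
        inputs = read_sensors()                       # (i) sample the physical process
        outputs, state = user_program(inputs, state)  # (ii) compute the next commands
        write_actuators(outputs)                      # (iii) drive the actuators
        # On a real PLC, exceeding the maximum cycle limit triggers a fault;
        # here we simply sleep until the next cycle starts.
        time.sleep(max(0.0, SCAN_CYCLE - (time.monotonic() - start)))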
Due to their sensitive role in controlling industrial processes, the successful exploitation of PLCs can have severe consequences on ICSs. In fact, although modern controllers provide security mechanisms to allow only legitimate firmware to be uploaded, the running code can typically be altered by anyone with network or USB access to the controllers (see Figure 1). Published scan data show that thousands of PLCs, exposed to the Internet to improve efficiency, are directly accessible from it [56]. Thus, despite their critical role, controllers are vulnerable to several kinds of attacks, including the PLC-Blaster worm [63], Ladder Logic Bombs [28], and PLC PIN Control attacks [5].
    Fig. 1.
    Fig. 1. A network of compromised PLCs: \(y_i\) denote genuine sensor measurements, \(y_i^{\mathrm{a}}\) are corrupted sensor measurements, \(u_i^{\mathrm{a}}\) corrupted actuator commands, and \(c_i^{\mathrm{a}}\) denote corrupted inter-controller communications.
    Extra trusted hardware components have been proposed to enhance the security of PLC architectures [48, 50]. For instance, McLaughlin [48] proposed a policy-based enforcement mechanism to mediate the actuator commands transmitted by the PLC to the physical plant. Mohan et al. [50] introduced a different architecture, in which every PLC runs under the scrutiny of a monitor which looks for deviations with respect to safe behaviors. Both architectures have been validated by means of simulation-based techniques. However, as far as we know, only in recent years have formal methodologies been used to model and formally enforce security-oriented architectures for ICSs.
Runtime enforcement [22, 43, 60] is a formal verification/validation technique aiming at correcting possibly-incorrect executions of a system-under-scrutiny (SuS). It employs a kind of monitor [23] that acts as a proxy between the SuS and the environment interacting with it. At runtime, the monitor transforms any incorrect executions exhibited by the SuS into correct ones by either replacing, suppressing, or inserting observable actions on behalf of the system. The effectiveness of the enforcement depends on the achievement of the following two general principles [43, 60]:
    transparency, i.e., the enforcement must not alter correct executions of the SuS;
    soundness, i.e., incorrect executions of the SuS must be prevented.
In this article, we propose a formal approach based on runtime enforcement to ensure specification compliance in networks of controllers possibly compromised by colluding malware that may tamper with actuator commands, sensor readings, and inter-controller communications. As pointed out by Pearce et al. [53], the use of runtime enforcement for ensuring security can be thought of as combining behavior-monitoring misuse and anomaly detection models [17] with automatic recovery mechanisms.
    Our goal is to enforce potentially corrupted controllers using secure proxies based on a sub-class of Ligatti et al.’s edit automata [43]. These automata will be synthesized from enforceable timed correctness properties to form networks of monitored controllers, as in Figure 2. The proposed enforcement will enjoy both transparency and soundness together with the following features:
    Fig. 2.
    Fig. 2. A network of monitored controllers: \(y_i\) denote genuine measurements; \(u_i\) and \(c_i\) denote corrected commands and inter-controller communications, respectively; \(y_i^{\mathrm{a}}\) , \(u_i^{\mathrm{a}}\) , and \(c_i^{\mathrm{a}}\) are their corrupted counterparts.
    deadlock-freedom, i.e., the enforcement should not introduce deadlocks;
divergence-freedom, i.e., the enforcement should not introduce divergences;
    mitigation of incorrect/malicious activities;
    scalability, i.e., the enforcement mechanism should scale to networks of controllers.
Obviously, when a controller is compromised, these objectives can be achieved only with the introduction of a physically independent secure proxy, as advocated by McLaughlin and Mohan et al. [48, 50], which does not have any Internet or USB access, and which is connected with the monitored controller via secure channels. This may seem as if we have just moved the problem to securing the proxy. However, this is not the case, because the proxy only needs to enforce a timed correctness property of the system, while the controller does the whole job of controlling the physical process relying on potentially dangerous communications via the Internet or the USB ports. Thus, any upgrade of the control system will be made to the controller and not to the secure proxy. Of course, runtime reconfigurations of the secure proxy should by no means be allowed, as its enforcement should be based on the physics of the plant itself and not on the controller code (only periodic offline updates are allowed to account for possible system drifts).
    Contribution. First of all, we define the attacker model and the attacker objectives in an enforced ICS architecture such as that depicted in Figure 2. Then, we introduce a formal language to specify controller programs. For this very purpose, we resort to process calculi, a successful and widespread formal approach in concurrency theory for representing complex systems, such as mobile systems [16, 29] and cyber-physical systems [37, 42], and used in many areas, including verification of security protocols [3, 4] and security analysis of cyber-physical attacks [40, 41]. Thus, we define a simple timed process calculus, based on Hennessy and Regan’s Timed Process Language (TPL) [30], for specifying controllers, finite-state enforcers, and networks of communicating monitored controllers.
    Then, we define a simple description language to express timed correctness properties that should hold for a (possibly unbounded) number of scan cycles of the monitored controller. This will allow us to abstract over controller implementations, focusing on general properties which may even be shared by completely different controllers. In this regard, we might resort to one of the several logics existing in the literature for monitoring timed concurrent systems, and in particular cyber-physical systems (see, e.g., [9, 24]). However, the peculiar iterative behavior of controllers convinced us to adopt the sub-class of regular expressions that can be recognized by finite automata whose cycles always contain at least one final state; this is the largest class of regular properties that can be enforced by finite-state Ligatti et al.’s edit automata (see Beauquier et al.’s work [10]). In Section 5, we express a wide class of correctness properties for controllers in terms of such regular properties.
    After defining a formal language to describe controller properties, we provide a synthesis function \({\langle \!| \!{-}\! | \! \rangle ^{\mathcal {P}} }\) that, given an alphabet \(\mathcal {P}\) of observable actions (sensor readings, actuator commands, and inter-controller communications) and a deterministic regular property e combining events of \(\mathcal {P}\) , returns an edit automaton \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }\!\) . The resulting enforcement mechanism will ensure the required features mentioned before: transparency, soundness, determinism preservation, deadlock-freedom, divergence-freedom, mitigation, and scalability. Then, we propose a non-trivial case study, taken from the context of industrial water treatment systems, and implemented as follows: (i) the physical plant is simulated in Simulink [47]; (ii) the open-source PLCs are implemented in OpenPLC [8] and executed on Raspberry Pi; (iii) the enforcers run on connected FPGAs. In this setting, we test our enforcement mechanism when injecting the PLCs with five different malware aiming at causing three different physical perturbations: tank overflow, valve damage, and pump damage.
    Outline. Section 2 describes the attacker model and the attacker objectives. Section 3 gives a formal language for monitored controllers. Section 4 defines the case study. Section 5 provides a language of regular properties to express controller behaviors; it also contains a taxonomy of properties expressible in the language. Section 6 contains the algorithm to synthesize monitors from regular properties, together with the main results. Section 7 discusses the implementation of the case study when exposed to five different attacks. Section 8 is devoted to related work. Section 9 draws conclusions and discusses future work. Technical proofs can be found in the appendix.

    2 Attacker Model and Attacker Objectives

Our enforcement-based architecture for ICSs (see Figure 2) assumes physically independent secure proxies, with no Internet or USB access, connected to the controllers via secure channels.
In such a secure architecture, the attacker can basically inject arbitrary code into the user program of PLCs. As a consequence, the attacker has the following capabilities on PLCs: (i) forge/drop actuator commands, (ii) read/modify sensor readings coming from the plant, (iii) forge/drop inter-controller communications. The malware injected in different PLCs of the same field communications network may collaborate/communicate with each other to achieve common objectives and possibly take control of the PLC network communication.
    The attacker assumed in this article has the following two limitations.
Malicious alterations of sensor signals at a network level, or within the sensor devices, are not allowed because the attacker is assumed to reside in the PLC only. However, such sensor attacks can be “locally” simulated by our attackers, who can modify the user program of PLCs, and hence the sensor measurements involved, at will.
Attackers never violate the maximum cycle limit of the PLC under attack, i.e., the maximum time allowed to complete a PLC scan cycle [1]. Such a violation would cause an immediate shutdown of the PLC, preventing more sophisticated attacks [63].
Thus, the attacker’s objectives can be summarized as affecting the runtime evolution of the controlled physical process, possibly transmitting fake measurements to the supervisory control network.

    3 A Formal Language for Monitored Controllers

In this section, we introduce the Timed Calculus of Monitored Controllers, called TCMC, as an abstract formal language to express networks of controllers integrated with edit automata sitting on the network interface of each controller to monitor/correct their interactions with the rest of the system. Basically, TCMC extends Hennessy and Regan’s TPL [30] with monitoring edit automata. As in TPL, time proceeds in discrete time slots separated by \({\scriptstyle \mathsf {tick}}\)-actions.
    Let us start with some preliminary notation. We use \(s, s_k \in \mathsf {Sens}\) to name sensor signals; \(a,a_k \in \mathsf {Act}\) to indicate actuator commands; \(c, c_k \in \mathsf {Chn}\) for channels; \(z, z_k\) for generic names.
Controllers. In our setting, controllers are nondeterministic sequential timed processes evolving through three main phases: sensing of sensor signals, communication with other controllers, and actuation. For convenience, we use five different syntactic categories to distinguish the five main states of a controller: \(\mathbb {C}{𝕥𝕣𝕝}\) for initial states, \(\mathbb {S}{𝕝𝕖𝕖𝕡}\) for sleeping states, \(\mathbb {S}{𝕖𝕟𝕤}\) for sensing states, \(\mathbb {C}{𝕠𝕞𝕞}\) for communication states, and \(\mathbb {Act}\) for actuation states. In its initial state, a controller is a recursive process waiting for signal stabilization in order to start the sensing phase:
    \begin{align*} \begin{array}{rcl} \mathbb {C}{𝕥𝕣𝕝} \ni P & \quad ::= \quad & X \\ \mathbb {S}{𝕝𝕖𝕖𝕡} \ni W & \quad ::= \quad & {\scriptstyle \mathsf {tick}}.W \quad \big | \quad S. \end{array} \end{align*}
The main process describing a controller consists of some recursive process defined via equations of the form \(X = {\scriptstyle \mathsf {tick}}.W\) , with \(W \in \mathbb {S}{𝕝𝕖𝕖𝕡}\) ; here, X is a process variable that may occur (free) in W. For convenience, our controllers always have at least one initial timed action \({\scriptstyle \mathsf {tick}}\) to ensure time-guarded recursion, thus avoiding undesired Zeno behaviors [31]: the number of untimed actions between two \({\scriptstyle \mathsf {tick}}\) -actions is always finite. Then, after a determined sleeping period, when sensor signals get stable, the sensing phase can start.
    During the sensing phase, the controller waits for a finite number of admissible sensor signals. If none of those signals arrives in the current time slot then the controller will timeout moving to the following time slot (we adopt the TPL construct \(\lfloor \cdot \rfloor \cdot\) for timeout). The syntax is the following:
    \begin{equation*} \mathbb {S}{𝕖𝕟𝕤} \ni S \quad \quad ::= \quad \quad \left\lfloor \sum _{i \in I} s_i.S_i \right\rfloor {S} \quad \big | \quad C, \end{equation*}
where \(\sum _{i \in I} s_i.S_i\) denotes the standard construct for nondeterministic choice with \(s_k \ne s_h\) , for \(k,h \in I\) . Once the sensing phase is concluded, the controller starts its calculations that may depend on communications with other controllers governing different physical processes. Controllers communicate with each other mainly for two reasons: either to receive notice about the state of other physical sub-processes or to require an actuation on a physical process which is out of their control. As in TPL, we adopt a channel-based handshake point-to-point communication paradigm. Note that, in order to avoid starvation, communication is always under timeout. The syntax for the communication phase is
    \begin{equation*} \mathbb {C}{𝕠𝕞𝕞} \ni C \quad \quad ::= \quad \quad \lfloor {\sum _{i \in I}c_i.C_i}\rfloor {C}\quad \big | \quad \lfloor \overline{c}.C \rfloor C \quad \big | \quad A, \end{equation*}
    where \(\lfloor \sum _{i \in I}c_i.C_i \rfloor C\) denotes nondeterministic input choice under timeout, with \(c_k \ne c_h\) for \(k,h \in I\) , whereas \(\lfloor \overline{c}.C \rfloor C\) represents output along channel c under timeout.
    In the actuation phase, a controller eventually transmits a finite sequence of commands to actuators, and then, it emits a fictitious system signal \({\scriptstyle \mathsf {end}}\) to denote the end of the scan cycle. After that, the whole scan cycle can restart. Notice that, while \({\scriptstyle \mathsf {tick}}\) -actions model the passage of (discrete) time, \({\scriptstyle \mathsf {end}}\) -actions have a different granularity, as they signal the end of a scan cycle. Formally,
    \begin{align*} \begin{array}{rcl} \mathbb {Act} \ni A & \quad ::= \quad & \overline{a}.A \quad \big | \quad {\scriptstyle \mathsf {end}}.X. \end{array} \end{align*}
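As a small illustrative instance (ours, not taken from the case study; the signal and command names l, h, on, off are placeholders), consider a controller that sleeps for one time slot, senses either a low-level signal l or a high-level signal h, issues the corresponding pump command, and then ends its scan cycle; if no signal arrives in the current time slot, it times out and simply ends the cycle:
\begin{equation*} X \; = \; {\scriptstyle \mathsf {tick}}.\left\lfloor \, l.\overline{ {\mathsf {\scriptstyle on}} }.{\scriptstyle \mathsf {end}}.X \; + \; h.\overline{ {\mathsf {\scriptstyle off}} }.{\scriptstyle \mathsf {end}}.X \, \right\rfloor ({\scriptstyle \mathsf {end}}.X). \end{equation*}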
    Remark 1 (Scan Cycle Duration and Maximum Cycle Limit).
The scan cycle of a PLC must be completed within a specific time, called the maximum cycle limit, which depends on the controlled physical process; if this time limit is violated the controller stops and throws an exception [63]. Thus, the signal \({\scriptstyle \mathsf {end}}\) must occur well before the maximum cycle limit of the controller. We actually work under the assumption that our controllers successfully complete their scan cycle in less than half of the maximum cycle limit. This assumption ensures that even when the controller is completely unreliable and the monitor inserts an entire safe trace, the resulting enforced scan cycles will always end well before a violation of the maximum cycle limit. Notice that it is easy to statically derive the maximum duration of a scan cycle expressed in our calculus by simply counting the maximum number of \({\scriptstyle \mathsf {tick}}\) -prefixes occurring between two subsequent \({\scriptstyle \mathsf {end}}\) -prefixes.
The operational semantics in Table 1 is along the lines of Hennessy and Regan’s TPL [30]. In the following, we use the metavariable \(\alpha\) to range over the set of all (observable) controller actions: \(\mathsf {Sens}\cup \overline{\mathsf {Act}} \cup \mathsf {Chn}\cup \overline{\mathsf {Chn}} \cup \lbrace {\scriptstyle \mathsf {tick}}\rbrace \cup \lbrace {\scriptstyle \mathsf {end}}\rbrace\) .1 These actions denote: sensor readings, actuator commands, channel transmissions, channel receptions, passage of time, and end of scan cycles, respectively. Notice that at our level of abstraction we represent only the observable behavior of PLCs: internal computations within PLCs are not modeled, although we do have \(\tau\) -actions to express communications between two PLCs, as the reader will notice in Table 2.
    Table 1.
    Table 1. LTS for Controllers
    Table 2.
    Table 2. LTS for Field Communications Networks of Monitored Controllers
Remark 2 (Attacker Model and End-signal).
    In our abstract representation of PLCs, the \({\scriptstyle \mathsf {end}}\) -signal is not really part of the (possibly compromised) PLC program but it is rather a system signal denoting the end of a scan cycle. As a consequence, in accordance with our attacker model, we assume that this fictitious signal cannot be dropped or forged by the attacker.
Monitored controllers. The core of our enforcement relies on a (timed) finite-state sub-class of Ligatti et al.’s edit automata [43], i.e., a particular class of automata specifically designed to allow/suppress/insert actions in a generic system in order to preserve its correct behavior. The syntax is as follows:
    \begin{equation*} \mathbb {E}{𝕕𝕚𝕥} \ni \mathsf {E}, \mathsf {F}\quad \quad ::= \quad \quad {\sf go}\ \ \vert \ \ \sum _{i \in I} \lambda _i.\mathsf {E}_i\ \ \vert \ \ \mathsf {X}. \end{equation*}
    The special automaton \({\sf go}\) will admit any action of the monitored system. The edit automaton \(\sum _{i \in I} \lambda _i.\mathsf {E}_i\) enforces an action \(\lambda _i\) , and then continues as \(\mathsf {E}_i\) , for any \(i \in I\) , with I finite. Here, the symbol \(\lambda\) ranges over: (i) \(\alpha\) to allow the action \(\alpha\) , (ii) \(^-\alpha\) to suppress the action \(\alpha\) , and (iii) \(\alpha _1 \prec \alpha _2\) , for \(\alpha _1 \ne \alpha _2\) , to insert the action \(\alpha _1\) before the action \(\alpha _2\) . Recursive automata \(\mathsf {X}\) are defined via equations of the form \(\mathsf {X} = \mathsf {E}\) , where the automata variable \(\mathsf {X}\) may occur (free) in \(\mathsf {E}\) .
    The operational semantics of our edit automata is given via the following transition rules:
    Our monitored controllers, written \(\mathsf {E} \! \bowtie \! {\boldsymbol { \lbrace }}J{\boldsymbol { \rbrace }}\) , consist of a controller J, for \(J \in \mathbb {C}{𝕥𝕣𝕝} \cup \mathbb {S}{𝕝𝕖𝕖𝕡} \cup \mathbb {S}{𝕖𝕟𝕤} \cup \mathbb {C}{𝕠𝕞𝕞} \cup \mathbb {Act}\) , and an edit automaton \(\mathsf {E}\) enforcing the behavior of J, according to the following transition rules, presented in the style of Martinelli and Matteucci [45]:
    Rule (Allow) is used for allowing observable actions emitted by the controller under scrutiny. By an application of Rule (Suppress), incorrect actions \(\alpha\) emitted by (possibly corrupted) controllers J are suppressed, i.e., converted into (silent) \(\tau\) -actions. Rule (Insert) is used to insert an action \(\alpha _1\) before an action \(\alpha _2\) of the controller.
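The intended effect of the three rules can be rendered by the following minimal sketch (ours, in Python; the encoding of monitor moves as tuples is an assumption of the sketch, not the paper's notation):

TAU = 'tau'

def monitored_step(monitor_moves, controller_action):
    # Given the moves offered by the current edit-automaton state and the action
    # attempted by the controller, return the action that becomes observable,
    # mimicking rules (Allow), (Suppress), and (Insert).
    # Moves are encoded as ('allow', a), ('suppress', a), or ('insert', a1, a2).
    for move in monitor_moves:
        if move[0] == 'allow' and move[1] == controller_action:
            return controller_action      # (Allow): the action is emitted unchanged
        if move[0] == 'suppress' and move[1] == controller_action:
            return TAU                    # (Suppress): the action becomes silent
        if move[0] == 'insert' and move[2] == controller_action:
            return move[1]                # (Insert): a1 is emitted; a2 remains pending
    return None                           # no applicable move: the step is blocked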
    Here, we wish to stress that, like Ligatti et al. [43], we are interested in deterministic (and hence implementable) enforcement. With the following technical definitions we extract from enforcer actions \(\lambda\) both: (i) the controller triggering actions, and (ii) the resulting output actions.
    Definition 1.
Let \(\lambda\) be an arbitrary action for edit automata. We write \(\mathit {trigger}(\lambda)\) to denote the controller action triggering \(\lambda\), defined as: \(\mathit {trigger}(\alpha) = \alpha\), \(\mathit {trigger}({^{-}{\alpha }})= \alpha\), and \(\mathit {trigger}(\alpha _1 \prec \alpha _2) = \alpha _2\). Similarly, we write \(\mathit {out}(\lambda)\) to denote the output action prescribed by \(\lambda\), defined as: \(\mathit {out}(\alpha) = \alpha\), \(\mathit {out}({^{-}{\alpha }})= \tau\), and \(\mathit {out}(\alpha _1 \prec \alpha _2) = \alpha _1\). Given a trace \(t= \lambda _1 \cdots \lambda _n\), we write \(\mathit {out}(t)\) for the trace \(\mathit {out}(\lambda _1) \cdots \mathit {out}(\lambda _n)\).
    Now, we provide a definition of deterministic enforcer along the lines of Pinisetty et al. [54].
    Definition 2 (Deterministic Enforcer).
An edit automaton \(\mathsf {E}\in \mathbb {E}{𝕕𝕚𝕥}\) is said to be deterministic iff in every term \(\sum _{i \in I} \lambda _i.\mathsf {E}_i\) that appears in \(\mathsf {E}\) there are no \(\lambda _k\) and \(\lambda _j\) , for \(k,j \in I\) , \(k\ne j\) , such that \(\mathit {trigger}(\lambda _k) = \mathit {trigger}(\lambda _j)\) and \(\mathit {out}({\lambda _k}) = \mathit {out}({\lambda _j})\) .
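Definitions 1 and 2 translate directly into code; the sketch below (ours) reuses the same tuple encoding of enforcer actions as in the previous sketch:

def trigger(lam):
    # The controller action triggering lam (Definition 1).
    return lam[2] if lam[0] == 'insert' else lam[1]

def out(lam):
    # The output action prescribed by lam (Definition 1).
    if lam[0] == 'suppress':
        return 'tau'
    return lam[1]            # for both 'allow' and 'insert' the output is lam[1]

def is_deterministic_state(moves):
    # Definition 2: no two distinct moves of a summation may share
    # both their trigger and their output.
    pairs = [(trigger(lam), out(lam)) for lam in moves]
    return len(pairs) == len(set(pairs))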
    Finally, we can generalize the concept of the monitored controller to a field communications network of parallel monitored controllers, each one acting on different actuators, and exchanging information via channels. These networks are formally defined via a straightforward grammar:
    \begin{equation*} \mathbb {F}\mathbb{N}{𝕖𝕥} \ni N\quad ::= \quad \mathsf {E} \! \bowtie \! {\boldsymbol { \lbrace }}J{\boldsymbol { \rbrace }} \quad \big | \quad N\parallel N, \end{equation*}
    with the operational semantics defined in Table 2.
Notice that monitored controllers may interact with each other via channel synchronization (see Rule (ChnSync)). Moreover, via rule (TimeSync) they may evolve in time only when channel synchronization cannot occur (our controllers do not admit Zeno behaviors). This ensures maximal progress [30], a desirable time property when modeling real-time systems: channel communications will never be postponed.
    In the following, the metavariable \(\beta\) will range over the same set of actions as \(\alpha\) , together with the silent action \(\tau\) . It is used to denote actions of monitored field networks.
    Definition 3 (Execution Traces).
Given three finite execution traces \(t_{\mathrm{c}}=\alpha _1 \ldots \alpha _k\) , \(t_{\mathrm{e}}=\lambda _1 \ldots \lambda _l\) , and \(t_{\mathrm{m}}=\beta _1 \ldots \beta _n\) , for controllers, edit automata, and monitored controllers, respectively, we write: (i) \(P \xrightarrow {t_{\mathrm{c}}} P^{\prime }\) as an abbreviation for \(P=P_0 \xrightarrow {\alpha _1} \cdots \xrightarrow {\alpha _k} P_k=P^{\prime }\) ; (ii) \(\mathsf {E} \xrightarrow {t_{\mathrm{e}}} \mathsf {E}^{\prime }\) as an abbreviation for \(\mathsf {E}=\mathsf {E}_0 \xrightarrow {\lambda _1} \cdots \xrightarrow {\lambda _l} \mathsf {E}_l=\mathsf {E}^{\prime }\) ; (iii) \(N \xrightarrow {t_{\mathrm{m}}} N^{\prime }\) as an abbreviation for \(N=N_0 \xrightarrow {\beta _1} \cdots \xrightarrow {\beta _n} N_n=N^{\prime }\) .
    In the rest of the article, we adopt the following notations.
    Notation 1.
    As usual, we write \(\epsilon\) to denote the empty trace. Given a trace t we write \(|t|\) to denote the length of t, i.e., the number of actions occurring in t. Given a trace t we write \(\hat{t}\) to denote the trace obtained by removing the \(\tau\) -actions. Given two traces \(t^{\prime }\) and \(t^{\prime \prime }\) , we write \(t^{\prime } \cdot t^{\prime \prime }\) for the trace resulting from the concatenation of \(t^{\prime }\) and \(t^{\prime \prime }\) . For \(t= t^{\prime } \cdot t^{\prime \prime }\) we say that \(t^{\prime }\) is a prefix of t and \(t^{\prime \prime }\) is a suffix of t.
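For concreteness, the trace operations of Notation 1 can be read as the following small helpers over Python tuples (an illustrative rendering of ours):

TAU = 'tau'

def strip_tau(t):
    # The trace written \hat{t}: t with all tau-actions removed.
    return tuple(a for a in t if a != TAU)

def is_prefix(t1, t):
    # t1 is a prefix of t iff t = t1 . t2 for some (possibly empty) suffix t2.
    return t[:len(t1)] == t1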

    4 Use Case: the SWaT System

    In this section, we describe how to specify in TCMC a non-trivial network of PLCs to control (a simplified version of) the Secure Water Treatment system (SWaT) [46].
    SWaT represents a scaled down version of a real-world industrial water treatment plant. The system consists of six stages, each of which deals with a different treatment, including: chemical dosing, filtration, dechlorination, and reverse osmosis. For simplicity, in our use case, depicted in Figure 3, we consider only three stages. In the first stage, raw water is chemically dosed and pumped in a tank \(T_1\) , via two pumps \(\mathit {pump}_1\) and \(\mathit {pump}_2\) . A valve connects \(T_1\) with a filtration unit that releases the treated water in a second tank \(T_2\) . Here, we assume that the flow of the incoming water in \(T_1\) is greater than the outgoing flow passing through the valve. The water in \(T_2\) flows into a reverse osmosis unit to reduce inorganic impurities. In the last stage, the water coming from the reverse osmosis unit is either distributed as clean water, if required standards are met, or stored in a backwash tank \(T_3\) and then pumped back, via a pump \(\mathit {pump}_3\) , to the filtration unit. Here, we assume that tank \(T_2\) is large enough to receive the whole content of tank \(T_3\) at any moment.
    Fig. 3.
    Fig. 3. A simplified industrial water treatment system.
The SWaT system has been used to provide a dataset containing physical and network data recorded during 11 days of activity [27]. Part of this dataset contains information about the execution of the system in isolation, while a second part records the effects on the system when exposed to different kinds of cyber-physical attacks. Thus, for instance, (i) drops of commands to activate \({\it pump}_2\) may affect the quality of the water, as they would affect the correct functioning of the chemical dosing pump; (ii) injections of commands to close the valve between \(T_1\) and \(T_2\) may give rise to an overflow of tank \(T_1\) if this tank is full; (iii) integrity attacks on the signals coming from the sensor of the tank \(T_3\) may result in damage to the pump \(\mathit {pump}_3\) if it is activated when \(T_3\) is empty.
    Each tank is controlled by its own PLC. The programs of the three PLCs, expressed in terms of ladder logic, are given in Figure 4. In the following, we give their descriptions in TCMC.
    Fig. 4.
    Fig. 4. Ladder logics of \(\mathrm{PLC_1}\) , \(\mathrm{PLC_2,}\) and \(\mathrm{PLC_3}\) , respectively, controlling the system in Figure 3.
    Let us start with the user program of the controller \(\mathrm{PLC}_1\) managing the tank \(T_1\) . Its definition is given in terms of two equations to deal with the case when the two pumps, \(\mathit {pump}_1\) and \(\mathit {pump}_2\) , are both off and both on, respectively. Intuitively, when the pumps are off, the level of water in \(T_1\) drops until it reaches its low-level (event \(l_1\) ); when this happens both pumps are turned on and they remain so until the tank is refilled, reaching its high-level (event \(h_1\) ). Formally,
Thus, for instance, when the pumps are off, \(\mathrm{PLC}_1\) waits for one time slot (to get stable sensor signals) and then checks the water level of the tank \(T_1\) , distinguishing between three possible states. If \(T_1\) reaches its low-level (signal \(l_1\) ) then the pumps are turned on (commands \(\overline{ {\mathsf {\scriptstyle on_{1}}} }\) and \(\overline{ {\mathsf {\scriptstyle on_{2}}} }\) ) and the valve is closed (command \({\mathsf {\scriptstyle { {\mathsf {\scriptstyle close}} }{\_}req}}\) ). Otherwise, if the tank \(T_1\) is at some intermediate level between low and high (say, some level \(m_1\) ) then \(\mathrm{PLC}_1\) listens for requests arriving from \(\mathrm{PLC}_2\) to open/close the valve. Precisely, if the PLC gets an \({\mathsf {\scriptstyle {open}{\_}req}}\) request then it opens the valve, letting the water flow from \(T_1\) to \(T_2\) ; otherwise, if it gets a \({\mathsf {\scriptstyle {close}{\_}req}}\) request then it closes the valve; in both cases, the pumps remain off. If the level of the tank is high (signal \(h_1\) ) then the requests of water coming from \(\mathrm{PLC}_2\) are served as before, but the two pumps are eventually turned off (commands \(\scriptstyle \overline{\mathsf {off}_1}\) and \(\scriptstyle \overline{\mathsf {off}_2}\) ).
    \(\mathrm{PLC_2}\) manages the water level of tank \(T_2\) . Its user program consists of the two equations to model the filling (state \(\uparrow\) ) and the emptying (state \(\downarrow\) ) of the tank. Formally,
Here, after one time slot, the level of \(T_2\) is checked. If the level is low (signal \(l_2\) ) then \(\mathrm{PLC}_2\) sends a request to \(\mathrm{PLC}_1\) , via the channel \({\mathsf {\scriptstyle {open}{\_}req}}\) , to open the valve that lets the water flow from \(T_1\) to \(T_2\) , and then returns. Otherwise, if the level of tank \(T_2\) is high (signal \(h_2\) ) then \(\mathrm{PLC}_2\) asks \(\mathrm{PLC}_1\) to close the valve, via the channel \({\mathsf {\scriptstyle {close}{\_}req}}\) , and then returns. Finally, if the tank \(T_2\) is at some intermediate level between \(l_2\) and \(h_2\) (say, some level \(m_2\) ) then the valve remains open (respectively, closed) when the tank is refilling (respectively, emptying).
    Finally, \(\mathrm{PLC_3}\) manages the water level of the backwash tank \(T_3\) . Its user program consists of two equations to deal with the case when the pump \(\mathit {pump}_3\) is off and on, respectively. Formally,
Here, after one time slot, the level of tank \(T_3\) is checked. If the level is low (signal \(l_3\) ) then \(\mathrm{PLC}_3\) turns off the pump \(\mathit {pump}_3\) (command \(\scriptstyle \overline{ {\mathsf {\scriptstyle off_{3}}} }\) ), and then returns. Otherwise, if the level of \(T_3\) is high (signal \(h_3\) ) then the pump is turned on (command \(\scriptstyle \overline{ {\mathsf {\scriptstyle on_{3}}} }\) ) until the whole content of \(T_3\) is pumped back into the filtration unit of \(T_2\) .
Examples of correctness properties and attacks. In a system similar to that described above, one would expect a number of properties to capture the correct functioning of system components. Let us provide a few examples of such correctness properties and some specific attacks that may potentially invalidate these properties.
    A first property might say that if \(\mathrm{PLC}_1\) receives a request to open the valve between tanks \(T_1\) and \(T_2\) then the same valve will be eventually closed early enough to prevent water overflow in tank \(T_2\) . This property certainly holds when the system is not exposed to any attack. However, a malware injected in \(\mathrm{PLC}_1\) might try to undermine this property by tampering either with the actuator dedicated to the valve or with the requests of \(\mathrm{PLC}_2\) to open/close the valve. In particular, a malicious request to open the valve might be forged by an attacker injected in \(\mathrm{PLC}_2\) . Another desired correctness property might say that whenever the tank \(T_2\) is full then \(\mathrm{PLC}_2\) will never ask for incoming water from tank \(T_1\) . Finally, another expected property might say that \(\mathit {pump}_3\) will never work without enough water in tank \(T_3\) . Again, an attacker injected in \(\mathrm{PLC}_3\) might try to undermine this property by tampering either with the actuator dedicated to the pump or with the sensor measuring the level of tank \(T_3\) .
In Section 7.3 we will provide formal definitions of pattern templates of structured correctness properties that are suitable for enforcing correct behaviors of our PLCs.

    5 A Formal Language for Controller Properties

    In this section, we provide a simple description language to express correctness properties that we may wish to enforce at runtime in our controllers. As discussed in the Introduction, we resort to (a sub-class of) regular properties as they allow us to express interesting classes of properties referring to one or more scan cycles of a controller.
    The proposed language distinguishes between two kinds of properties: (i) global properties, \(e \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\) , to express general controllers’ execution traces; (ii) local properties, \(p \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) , to express traces confined to a finite number of consecutive scan cycles. The two families of properties are formalized via the following regular grammar:
    \begin{align*} \begin{array}{lcl} e \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G}&\quad ::= \quad & p^\ast \,|\, e_1 \cap e_2 \\ p,q \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}&\quad ::= \quad & \epsilon \,|\, p_1; p_2 \,|\, \bigcup _{i\in I}\pi _i.p_i \,|\, p_1 \cap p_2, \end{array} \end{align*}
where \(\pi _i \in \mathsf {Events}\triangleq \mathsf {Sens}\cup \overline{\mathsf {Act}} \cup \mathsf {Chn}\cup \overline{\mathsf {Chn}} \cup \lbrace {\scriptstyle \mathsf {tick}}\rbrace \cup \lbrace {\scriptstyle \mathsf {end}}\rbrace\) denote atomic properties, called events, that may occur as a prefix of a property. With an abuse of notation, we use the symbol \(\epsilon\) to denote both the empty property and the empty trace.
    The semantics of our logic is naturally defined in terms of sets of execution traces that satisfy a given property; its formal definition is given in Table 3.
    Table 3.
    Table 3. Trace Semantics of Our Regular Properties
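Since the entries of Table 3 are not reproduced here, the following sketch (ours, in Python) fixes one concrete reading of the semantics, under the assumption that Table 3 assigns the standard regular interpretation to each construct; properties are encoded as nested tuples and a finite trace satisfies a property iff matches returns True:

# Property encoding (an assumption of this sketch):
#   ('eps',)                          the empty property
#   ('pre', ((ev1, p1), (ev2, p2)))   union of event-prefixed branches
#   ('seq', p1, p2)                   sequential composition p1 ; p2
#   ('and', p1, p2)                   intersection of p1 and p2
#   ('star', p)                       Kleene closure p*

def matches(p, trace):
    # True iff the finite trace (a tuple of events) belongs to the set of traces of p.
    kind = p[0]
    if kind == 'eps':
        return trace == ()
    if kind == 'pre':
        return any(trace[:1] == (ev,) and matches(q, trace[1:]) for ev, q in p[1])
    if kind == 'seq':
        return any(matches(p[1], trace[:i]) and matches(p[2], trace[i:])
                   for i in range(len(trace) + 1))
    if kind == 'and':
        return matches(p[1], trace) and matches(p[2], trace)
    if kind == 'star':
        return trace == () or any(matches(p[1], trace[:i]) and matches(p, trace[i:])
                                  for i in range(1, len(trace) + 1))
    raise ValueError(p)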
However, the syntax of our logic is a bit too permissive with respect to our intentions, as it allows us to describe partial scan cycles, i.e., cycles that have not been completed. Thus, we restrict ourselves to considering properties that build on top of local properties associated with complete scan cycles, i.e., scan cycles whose last action is an \({\scriptstyle \mathsf {end}}\) -action. Formally,
    Definition 4.
    Well-formed properties are defined as follows:
    the local property \({\scriptstyle \mathsf {end}}. \epsilon\) is well-formed;
    a local property of the form \(p_1 ; p_2\) is well-formed if \(p_2\) is well-formed;
    a local property of the form \(p_1 \cap p_2\) is well-formed if both \(p_1\) and \(p_2\) are well-formed;
    a local property of the form \(\bigcup _{i\in I}\pi _i.p_i\) is well-formed if either \(\pi _i . p_i = {\scriptstyle \mathsf {end}}. \epsilon\) or \(p_i\) is well-formed, for any \(i \in I\) ;
    a global property \(p^\ast\) is well-formed if p is well-formed;
    a global property \(e_1\cap e_2\) is well-formed if both \(e_1\) and \(e_2\) are well-formed.
    In the rest of the article, we always assume to work with well-formed properties. Moreover, we adopt the following notations and/or abbreviations on properties.
    Notation 2.
    We omit trailing empty properties, writing \(\pi\) instead of \(\pi .\epsilon\) . For \(k \gt 0\) , we write \(\pi ^k.p\) as a shorthand for \(\pi .\pi \ldots \pi .p\) , where prefix \(\pi\) appears k consecutive times. Given a local property p we write \(\mathsf {events}(p) \subseteq \mathsf {Events}\) to denote the set of events occurring in p; similarly, we write \(\mathsf {events}(e) \subseteq \mathsf {Events}\) to denote the set of events occurring in a global property \(e \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\) . Given a set of events \({\mathcal {A}} \subseteq \mathsf {Events}\) and a local property p, we use \({\mathcal {A}}\) itself as an abbreviation for the property \(\bigcup _{\pi \in {\mathcal {A}}}\pi .\epsilon\) , and \({\mathcal {A}}.p\) as an abbreviation for the property \(\bigcup _{\pi \in {\mathcal {A}}}\pi .p\) . Given a set of events \({\mathcal {A}}\) , with \({\scriptstyle \mathsf {end}}\not\in {\mathcal {A}}\) , we write \({\mathcal {A}}^{\le k}\) , for \(k \ge 0\) , to denote the well-formed property defined as follows: (i) \({\mathcal {A}}^{\le 0} \triangleq {\scriptstyle \mathsf {end}}\) ; (ii) \({\mathcal {A}}^{\le k} \triangleq {\scriptstyle \mathsf {end}}\cup {\mathcal {A}}.{\mathcal {A}}^{\le k-1}\) , for \(k\gt 0\) . Thus, the property \({\mathcal {A}}^{\le k}\) captures all possible sequences of events of \({\mathcal {A}}\) whose length is at most k, for \(k \in \mathbb {N}\) . We write \(\mathsf {PEvents}\) to denote the set of pure events, i.e., \(\mathsf {Events}\setminus \lbrace {\scriptstyle \mathsf {end}}\rbrace\) . Finally, we write \(\mathsf {PUEvents}\) to denote the set of pure untimed events, i.e., \(\mathsf {Events}\setminus \lbrace {\scriptstyle \mathsf {end}}, {\scriptstyle \mathsf {tick}}\rbrace\) .
Note that our properties are in general nondeterministic. However, since we are interested in deterministic enforcers, in the following we will focus on the enforcement of deterministic properties.
    Definition 5 (Deterministic Properties).
    A global property \(e\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\) is said to be deterministic if for any sub-term \(\bigcup _{i \in I}\pi _i.p_i\) appearing in e, we have \(\pi _k \ne \pi _h\) , for any \(k,h\in I\) , \(k\ne h\) .
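Under the same tuple encoding used above, Definition 5 amounts to checking that every union of prefixed branches has pairwise distinct prefix events; a sketch of ours:

def is_deterministic(p):
    # Definition 5: in every sub-term of the form 'pre', the prefix events are pairwise distinct.
    kind = p[0]
    if kind == 'eps':
        return True
    if kind == 'pre':
        events = [ev for ev, _ in p[1]]
        return len(events) == len(set(events)) and all(is_deterministic(q) for _, q in p[1])
    if kind in ('seq', 'and'):
        return is_deterministic(p[1]) and is_deterministic(p[2])
    if kind == 'star':
        return is_deterministic(p[1])
    raise ValueError(p)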

    5.1 Local Properties

As already said, local properties describe execution traces limited to a finite number of scan cycles. Let us present a number of significant local properties that can be expressed in our language of regular properties. In the following, we assume a fixed maximum number of actions, \({\scriptstyle \mathsf {maxa}}\) , that may occur within a single scan cycle of our controllers, i.e., between two subsequent \({\scriptstyle \mathsf {end}}\) -actions.

    5.1.1 Basic Properties.

    They prescribe conditional, eventual, and persistent behaviors.
    Conditional. These properties say that when a (pure) untimed event \(\pi\) occurs in the current scan cycle then some property p should be satisfied. More generally, for \(\pi _i \in \mathsf {PUEvents}\) and \(p_i\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) , we write \(\mathrm{Case}{(\,\bigcup _{i\in I}\lbrace \mathrel {(\pi _i, p_i)}\rbrace)}\) to denote the property \(q_k\) , for \(k={\scriptstyle \mathsf {maxa}}\) , defined as follows:
    \(q_k \triangleq {\scriptstyle \mathsf {end}}\cup \bigcup _{i\in I} \pi _i.p_i \cup \ (\mathsf {PEvents}{\setminus }\bigcup _{i \in I} \lbrace \pi _i\rbrace).q_{k-1}\) , for \(0 \lt k \le {\scriptstyle \mathsf {maxa}}\)
    \(q_0 \triangleq {\scriptstyle \mathsf {end}}\) .
    When there is only one triggering event \(\pi \in \mathsf {PUEvents}\) and one associated local property \(p \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) , we have a simple conditional property defined as follows: \(\mathrm{Cnd}(\mathit {\pi , p}) \triangleq \mathrm{Case}{(\lbrace \mathrel {(\pi , p)}\rbrace)}\) .
    Conditional properties \(\mathrm{Cnd}(\mathit {\pi , p})\) define a cause-effect relation in which the triggering event \(\pi\) is searched in the current scan cycle; one may think of a more general property \(\mathrm{PCnd}_{\mathit {m}}(\mathit {\pi , p})\) , in which the cause-effect relation persists for \(m \gt 0\) consecutive scan cycles, i.e., the search for the triggering event \(\pi\) continues for at most m consecutive scan cycles. Of course, the triggered local property p may span over a finite number of scan cycles (see Figure 5). Formally, we write \(\mathrm{PCnd}_{\mathit {m}}(\mathit {\pi , p})\) , for \(\pi \in \mathsf {PUEvents}\) , \(p\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) and \(m \gt 0\) , for the property \(q^{m}_{\scriptstyle \mathsf {maxa}}\) defined as follows:
    Fig. 5.
    Fig. 5. A trace satisfying a persistent conditional property \(\mathrm{PCnd}_{\mathit {m}}(\mathit {\pi , p})\) .
    \(q^h_k\triangleq {\scriptstyle \mathsf {end}}.q^{h-1}_{\scriptstyle \mathsf {maxa}}\, \cup \, \pi .p \, \cup \, (\mathsf {PEvents}{\setminus }\lbrace \pi \rbrace).q^{h}_{k-1}\) , for \(1 \lt h \le m\) and \(0 \lt k \le {\scriptstyle \mathsf {maxa}}\)
    \(q^{h}_0\triangleq {\scriptstyle \mathsf {end}}.q^{h-1}_{\scriptstyle \mathsf {maxa}}\) , for \(1 \lt h \le m\)
    \(q^{1}_k\triangleq {\scriptstyle \mathsf {end}}\, \cup \, \pi .p \, \cup \, (\mathsf {PEvents}{\setminus }\lbrace \pi \rbrace).q^{1}_{k-1}\) , for \(0 \lt k \le {\scriptstyle \mathsf {maxa}}\)
    \(q^{1}_0 \triangleq \epsilon\) .
    Obviously, \(\mathrm{Cnd}(\mathit {\pi , p}) = \mathrm{PCnd}_{\mathit {1}}(\mathit {\pi , p})\) .
    Bounded eventually. In this case, an event \(\pi\) must eventually occur within m scan cycles. Formally, for \(\pi \in \mathsf {PUEvents}\) and \(m \gt 0\) , we write \({\mathrm{BE}_{\mathit {m}}(\mathit {\pi })}\) to denote the property \(q_{\scriptstyle \mathsf {maxa}}^m\) defined as follows:
    \(q^h_k\triangleq {\scriptstyle \mathsf {end}}.q_{{\scriptstyle \mathsf {maxa}}}^{h-1} \cup \pi .{\mathsf {PEvents}}^{\le k-1} \cup (\mathsf {PEvents}{\setminus }\lbrace \pi \rbrace).q_{k-1}^{h}\) , for \(1 \lt h \le m\) and \(0 \lt k \le {\scriptstyle \mathsf {maxa}}\)
    \(q_0^h \triangleq {\scriptstyle \mathsf {end}}.q^{h-1}_{{\scriptstyle \mathsf {maxa}}}\) , for \(1 \lt h \le m\)
    \(q^1_k\triangleq \pi .{\mathsf {PEvents}}^{\le k-1} \cup (\mathsf {PEvents}{\setminus }\lbrace \pi \rbrace).q_{k-1}^{1}\) , for \(0 \lt k \le {\scriptstyle \mathsf {maxa}}\)
    \(q_0^1 \triangleq {\pi }.{\scriptstyle \mathsf {end}}\) .
    Bounded persistency. While in \({\mathrm{BE}_{\mathit {m}}(\mathit {\pi })}\) the event \(\pi\) must eventually occur within m scan cycles, bounded persistency prescribes that an event \(\pi\) must occur in all subsequent m scan cycles. Formally, for \(\pi \in \mathsf {PUEvents}\) and \(m \gt 0\) , we write \({\mathrm{BP}_{\mathit {m}}(\mathit {\pi })}\) to denote the property \(q_{\scriptstyle \mathsf {maxa}}^{m}\) defined as follows:
    \(q_{k}^h \triangleq {\pi }. \mathsf {PEvents}^{\le k-1}; q^{h-1}_{\scriptstyle \mathsf {maxa}}\cup {(\mathsf {PEvents}{\setminus }\lbrace \pi \rbrace)}. q_{k-1}^{h}\) , for \(1 \lt h \le m\) and \(0 \lt k \le {\scriptstyle \mathsf {maxa}}\)
    \(q_{0}^h\triangleq \pi .{\scriptstyle \mathsf {end}}.q^{h-1}_{\scriptstyle \mathsf {maxa}}\) , for \(1 \lt h \le m\)
    \(q_{k}^1 \triangleq {\pi }. \mathsf {PEvents}^{\le k-1} \cup {(\mathsf {PEvents}{\setminus }\lbrace \pi \rbrace)}. q_{k-1}^{1}\) , for \(0 \lt k \le {\scriptstyle \mathsf {maxa}}\)
    \(q_0^1 \triangleq \pi .{\scriptstyle \mathsf {end}}\) .
    Bounded absence. The negative counterpart of bounded persistency is bounded absence. This property says that an event \(\pi\) must not appear in all subsequent m scan cycles. Formally, for \(\pi \in \mathsf {PUEvents}\) and \(m\gt 0\) , we write \({\mathrm{BA}_{\mathit {m}}(\mathit {\pi })}\) to denote the property \(q_{m}\) defined as follows:
    \(q_{h} \triangleq (\mathsf {PEvents}{\setminus }\lbrace \pi \rbrace)^{\le {\scriptstyle \mathsf {maxa}}};q_{h-1}\) , for \(0 \lt h \le m\)
    \(q_{0} \triangleq \epsilon\) .
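To illustrate how these recursive definitions unfold into concrete (finite) property terms, the sketch below (ours, using the tuple encoding introduced after Table 3 and treating the alphabet and \({\scriptstyle \mathsf {maxa}}\) as parameters) generates \({\mathcal {A}}^{\le k}\) of Notation 2 and the bounded absence property \({\mathrm{BA}_{\mathit {m}}(\mathit {\pi })}\):

END, EPS = 'end', ('eps',)

def pre(branches):
    return ('pre', tuple(branches))

def at_most(events, k):
    # A^{<=k}: sequences over `events` of length at most k, each closed by an end-event.
    prop = pre([(END, EPS)])                                   # A^{<=0} = end
    for _ in range(k):
        prop = pre([(END, EPS)] + [(ev, prop) for ev in sorted(events)])
    return prop

def bounded_absence(pi, m, pure_events, maxa):
    # BA_m(pi): the event pi must not occur in any of the next m scan cycles.
    allowed = pure_events - {pi}
    prop = EPS                                                 # q_0 = epsilon
    for _ in range(m):                                         # q_h = allowed^{<=maxa} ; q_{h-1}
        prop = ('seq', at_most(allowed, maxa), prop)
    return prop

For instance, with pure events {'a', 'b'} and maxa set to 2, the term bounded_absence('a', 1, {'a', 'b'}, 2) is matched (in the sense of the matches function sketched after Table 3) by the traces ('end',) and ('b', 'b', 'end'), but by no trace containing 'a'.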

    5.1.2 Compound Conditional Properties.

The properties above can be combined to express more detailed PLC behaviors. Let us see a few examples with the help of the use case of Section 4.
    Conditional bounded eventually. According to this property, if a triggering event \(\pi _1\) occurs then a second event \(\pi _2\) must eventually occur between the mth and the nth scan cycle, with \(1\le m \le n\) . Formally, for \(\pi _1,\pi _2 \in \mathsf {PUEvents}\) and \(1\le m \le n\) , we define \(\mathrm{CBE}_{[\mathit {m, n}]}(\mathit {\pi _1, \pi _2})\) as follows:
    \begin{equation*} \mathrm{CBE}_{[\mathit {m, n}]}(\mathit {\pi _1, \pi _2}) \triangleq \mathrm{Cnd}(\mathit {\pi _1 \, , (\mathsf {PEvents}^{\le {\scriptstyle \mathsf {maxa}}})^{m-1};\mathrm{BE}_{\mathit {n-m+1}}(\mathit {\pi _2})}). \end{equation*}
Intuitively, if the event \(\pi _1\) occurs then the event \(\pi _2\) must eventually occur between the scan cycles m and n. If we also required that \(\pi _2\) should not occur before the mth scan cycle, then the property would become: \(\mathrm{Cnd}(\mathit {\pi _1 \, , \mathrm{BA}_{\mathit {m-1}}(\mathit {\pi _2});\mathrm{BE}_{\mathit {n-m+1}}(\mathit {\pi _2})}).\)
As an example, we might enforce a conditional bounded eventually property in \(\mathrm{PLC}_1\) of our use case in Section 4 to prevent water overflow in the tank \(T_2\) due to a misuse of the valve connecting the tanks \(T_1\) and \(T_2\) . Assume that \(z \in \mathbb {N}\) is the time (expressed in scan cycles) required to overflow the tank \(T_2\) when the valve is open and the level of tank \(T_2\) is low. We might consider enforcing a property of the form \(\mathrm{CBE}_{[\mathit {1, w}]}(\mathit { {\mathsf {\scriptstyle {open}{\_}req}} , \overline{ {\mathsf {\scriptstyle close}} }})\) , with \(w \lt \lt z\) ,2 saying that if \(\mathrm{PLC}_1\) receives a request to open the valve, then the valve will be eventually closed within at most w scan cycles (including the current one). This will ensure that if a water request coming from \(\mathrm{PLC}_2\) is received by \(\mathrm{PLC}_1\) then the valve controlling the flow from \(T_1\) to \(T_2\) will remain open for at most w scan cycles, with \(w \lt \lt z\) , preventing the overflow of \(T_2\) .
    Conditional bounded persistency. Another possibility is to combine conditional with bounded persistency to prescribe that if a triggering event \(\pi _1\) occurs then the event \(\pi _2\) must occur in the mth scan cycle and in all subsequent \(n-m\) scan cycles, for \(1\le m \le n\) . Formally, for \(\pi _1,\pi _2 \in \mathsf {PUEvents}\) and \(1 \le m \le n\) , we write \(\mathrm{CBP}_{[\mathit {m, n}]}(\mathit {\pi _1, \pi _2})\) to denote the property defined as
    \begin{equation*} \mathrm{CBP}_{[\mathit {m, n}]}(\mathit {\pi _1, \pi _2}) \triangleq \mathrm{Cnd}(\mathit {\pi _1 \, , (\mathsf {PEvents}^{\le {\scriptstyle \mathsf {maxa}}})^{m-1};\mathrm{BP}_{\mathit {n-m+1}}(\mathit {\pi _2})}). \end{equation*}
As an example, we might enforce a conditional bounded persistency property in \(\mathrm{PLC}_3\) of our use case in Section 4 to prevent damage to \(\mathit {pump}_3\) due to lack of water in tank \(T_3\) . Assume that \(z \in \mathbb {N}\) is the minimum time (in terms of scan cycles) required to fill \(T_3\) , i.e., to pass from level \(l_3\) to level \(h_3\) , when \(\mathit {pump}_3\) is off. We might consider enforcing a property of the form \(\mathrm{CBP}_{[\mathit {1, z}]}(\mathit {l_3, \overline{ {\mathsf {\scriptstyle off_{3}}} }})\) , to prescribe that if the tank reaches its low-level (event \(l_3\) ) then \(\mathit {pump}_3\) must remain off (event \(\overline{ {\mathsf {\scriptstyle off_{3}}} }\) ) for z consecutive scan cycles. This will ensure enough water in tank \(T_3\) to prevent damage to \(\mathit {pump}_3\) .
    Notice that all previous properties have a single triggering event \(\pi _1\) ; in order to deal with multiple triggering events it is enough to replace the conditional operator with the case construct.
Conditional bounded absence (also called Absence timed [24]). Finally, we might combine conditional with bounded absence to formalize a property saying that if a triggering event \(\pi _1\) occurs then another event \(\pi _2\) must not occur in the mth scan cycle and in all subsequent \(n-m\) scan cycles, with \(1\le m \le n\) . Formally, for \(\pi _1,\pi _2 \in \mathsf {PUEvents}\) and \(1\le m \le n\) , we write \(\mathrm{CBA}_{[\mathit {m, n}]}(\mathit {\pi _1, \pi _2})\) to denote the property defined as follows:
    \begin{equation*} \mathrm{CBA}_{[\mathit {m, n}]}(\mathit {\pi _1, \pi _2}) \triangleq \mathrm{Cnd}(\mathit {\pi _1, \, (\mathsf {PEvents}^{\le {\scriptstyle \mathsf {maxa}}})^{m-1}; \mathrm{BA}_{\mathit {n-m+1}}(\mathit {\pi _2})}). \end{equation*}
    Intuitively, if the triggering event \(\pi _1\) occurs then the event \(\pi _2\) must not occur in the time interval between the mth and the nth scan cycle.
As an example, we might enforce a conditional bounded absence property in \(\mathrm{PLC}_2\) of our use case in Section 4 to prevent water overflow in the tank \(T_2\) due to a misuse of the valve connecting the tanks \(T_1\) and \(T_2\) . Assume that \(z \in \mathbb {N}\) is the time (expressed in scan cycles) required to empty the tank \(T_2\) when the valve is closed and the tank \(T_2\) reaches its high-level \(h_2\) . Then, we might consider enforcing a property of the form \(\mathrm{CBA}_{[\mathit {1, w}]}(\mathit {h_2, \overline{ {\mathsf {\scriptstyle {open}{\_}req}} }})\) , for \(w \lt z\) , to prescribe that if the tank reaches its high-level (event \(h_2\) ) then \(\mathrm{PLC}_2\) may not send a request to open the valve (event \(\overline{ {\mathsf {\scriptstyle {open}{\_}req}} }\) ) for the subsequent w scan cycles. This ensures that when \(T_2\) reaches its high-level it will not ask for incoming water for at least w scan cycles, thus preventing tank overflow.

    5.1.3 Compound Persistent Conditional Properties.

Now, we formalize in our language of regular properties a number of correctness properties used by Frehse et al. for the verification of hybrid systems [24]. More precisely, we formalize bounded versions of their properties.
Bounded minimum duration. When a triggering event \(\pi _1 \in \mathsf {PUEvents}\) occurs, if a second event \(\pi _2 \in \mathsf {PUEvents}\) occurs within m scan cycles then this event must persist for at least the subsequent n scan cycles (see Figure 6). Formally, we can express this property as follows:
    \begin{equation*} \mathrm{MinD}(\mathit {\pi _1, \pi _2,m, n }) \triangleq \mathrm{Cnd}(\mathit {\pi _1, \mathrm{PCnd}_{\mathit {m}}(\mathit {\pi _2, \mathrm{BP}_{\mathit {n}}(\mathit {\pi _2})}) }). \end{equation*}
    Fig. 6.
    Fig. 6. A trace satisfying a minimum duration property \(\mathrm{MinD}(\mathit {\pi _1, \pi _2,m, n })\) , for \(m=n=3\) .
Note that the property \(\mathrm{MinD}(\mathit {\pi _1, \pi _2,m, n })\) is satisfied each time \(\mathrm{CBP}_{[\mathit {m, m+n}]}(\mathit {\pi _1, \pi _2})\) is. The converse does not hold, as in \(\mathrm{CBP}_{[\mathit {m, m+n}]}(\mathit {\pi _1, \pi _2})\) the event \(\pi _2\) is required to occur in the whole time interval \([m, m{+}n]\) , while, according to \(\mathrm{MinD}(\mathit {\pi _1, \pi _2,m, n })\) , the event \(\pi _2\) might not occur at all.
Bounded maximum duration. When an event \(\pi _1 \in \mathsf {PUEvents}\) occurs, if a second event \(\pi _2 \in \mathsf {PUEvents}\) occurs within m scan cycles then the same event \(\pi _2\) may persist for at most the subsequent n scan cycles. Formally, we can represent this property as follows:
    \begin{equation*} \mathrm{MaxD}(\mathit {\pi _1, \pi _2, m, n}) \triangleq \mathrm{Cnd}(\mathit {\pi _1, \mathrm{PCnd}_{\mathit {m}}(\mathit {\pi _2, (\mathsf {PEvents}^{\le {\scriptstyle \mathsf {maxa}}})^{n};\mathrm{BA}_{\mathit {1}}(\mathit {\pi _2})})}). \end{equation*}
The property \(\mathrm{MaxD}(\mathit {\pi _1, \pi _2, m, n})\) is satisfied each time the property \(\mathrm{CBP}_{[\mathit {m, m+n}]}(\mathit {\pi _1, \pi _2}); \mathrm{BA}_{\mathit {1}}(\mathit {\pi _2})\) is. Again, the converse does not hold.
    Bounded response. When an event \(\pi _1 \in \mathsf {PUEvents}\) occurs, if a second event \(\pi _2 \in \mathsf {PUEvents}\) occurs within m scan cycles then a third event \(\pi _3 \in \mathsf {PUEvents}\) appears within n scan cycles. Formally, we can express this property as follows:
    \begin{equation*} \mathrm{BR}(\mathit {\pi _1, \pi _2, \pi _3, m, n}) \triangleq \mathrm{Cnd}(\mathit {\pi _1, \mathrm{PCnd}_{\mathit {m}}(\mathit {\pi _2, \mathrm{BE}_{\mathit {n}}(\mathit {\pi _3}) })}). \end{equation*}
    Bounded invariance. Whenever an event \(\pi _1 \in \mathsf {PUEvents}\) occurs, if \(\pi _2 \in \mathsf {PUEvents}\) occurs within m scan cycles then \(\pi _3 \in \mathsf {PUEvents}\) will persistently occur for at least n scan cycles. Formally, we can express this property as follows:
    \begin{equation*} \mathrm{BI}(\mathit {\pi _1, \pi _2, \pi _3, m, n}) \triangleq \mathrm{Cnd}(\mathit {\pi _1, \mathrm{PCnd}_{\mathit {m}}(\mathit {\pi _2, \mathrm{BP}_{\mathit {n}}(\mathit {\pi _3}) })}). \end{equation*}

    5.1.4 Bounded Mutual Exclusion.

    A different class of properties prescribes the possible occurrence of events \(\pi _i \in \mathsf {PUEvents}\) , for \(i \in I\) , in mutual exclusion within m consecutive scan cycles. Formally, for \(\pi _i\in \mathsf {PUEvents}\) , \(i\in I\) and \(m\ge 1\) , we write \(\mathrel {\mathrm{BME}_{\mathit {m}}{(\bigcup _{i\in I}\lbrace \pi _i\rbrace \,)}}\) , for the property \(q^{m}_{\scriptstyle \mathsf {maxa}}\) defined as
    \(q^h_k\triangleq {\scriptstyle \mathsf {end}}.q^{h-1}_{\scriptstyle \mathsf {maxa}}\, \cup \, \bigcup _{i\in I} \pi _i.(\bigcap _{j\in I\setminus \lbrace i\rbrace } \mathrm{BA}_{\mathit {h}}(\mathit {\pi _j})) \cup (\mathsf {PEvents}{\setminus }\bigcup _{i \in I} \lbrace \pi _i\rbrace).q^{h}_{k-1}\) , for \(1 \lt h \le m\) and \(0 \lt k \le {\scriptstyle \mathsf {maxa}}\)
    \(q^{h}_0\triangleq {\scriptstyle \mathsf {end}}.q^{h-1}_{\scriptstyle \mathsf {maxa}}\) , for \(1 \lt h \le m\)
    \(q^{1}_k\triangleq {\scriptstyle \mathsf {end}}\cup \bigcup _{i\in I} \pi _i.(\bigcap _{j\in I\setminus \lbrace i\rbrace } \mathrm{BA}_{\mathit {1}}(\mathit {\pi _j})) \cup (\mathsf {PEvents}{\setminus }\bigcup _{i \in I} \lbrace \pi _i\rbrace).q^{1}_{k-1}\) , for \(0 \lt k \le {\scriptstyle \mathsf {maxa}}\)
    \(q^{1}_0 \triangleq \epsilon\) .
As an example, we might enforce a bounded mutual exclusion property in the \(\mathrm{PLC}_1\) of our use case of Section 4 to prevent chattering of the valve, i.e., rapid opening and closing, which may cause mechanical failures in the long run. In particular, we might consider enforcing a property of the form \(\mathrel {\mathrm{BME}_{\mathit {3}}{(\lbrace \overline{ {\mathsf {\scriptstyle open}} }, \overline{ {\mathsf {\scriptstyle close}} } \rbrace)}}\) saying that within 3 consecutive scan cycles the opening and the closing of the valve (events \(\overline{ {\mathsf {\scriptstyle open}} }\) and \(\overline{ {\mathsf {\scriptstyle close}} }\) , respectively) may only occur in mutual exclusion.
    In Table 4, we summarize all local properties represented and discussed in this section.
    Table 4.
Case: if \(\pi _i\) occurs then \(p_i\) should be satisfied, for \(i\in I\)
Persistent conditional: for m scan cycles, if \(\pi\) occurs then p should be satisfied
Bounded eventually: event \(\pi\) must eventually occur within m scan cycles
Bounded persistency: event \(\pi\) must occur in all subsequent m scan cycles
Bounded absence: event \(\pi\) must not occur in all subsequent m scan cycles
Conditional bounded eventually: if \(\pi _1\) occurs then \(\pi _2\) must eventually occur in the scan cycles \([m,n]\)
Conditional bounded persistency: if \(\pi _1\) occurs then \(\pi _2\) must occur in all scan cycles of \([m,n]\)
Conditional bounded absence: if \(\pi _1\) occurs then \(\pi _2\) must not occur in all scan cycles of \([m, n]\)
(Bounded) Minimum duration: when \(\pi _1\) , if \(\pi _2\) in \([1,m]\) then \(\pi _2\) persists for at least n scan cycles
(Bounded) Maximum duration: when \(\pi _1\) , if \(\pi _2\) in \([1,m]\) then \(\pi _2\) persists for at most n scan cycles
Bounded response: when \(\pi _1\) , if \(\pi _2\) in \([1,m]\) then \(\pi _3\) appears within n scan cycles
Bounded invariance: when \(\pi _1\) , if \(\pi _2\) in \([1,m]\) then \(\pi _3\) persists for at least n scan cycles
Bounded mutual exclusion: events \(\pi _i\) may only occur in mutual exclusion within m scan cycles
    Table 4. Overview of Local Properties, for \(\pi , \pi _1,\pi _2, \pi _3, \pi _i \in \mathsf {PUEvents}\) and \(p_i , p \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\)

    5.2 Global Properties

As expected, the local properties described above become global properties by applying the Kleene operator \(\ast\) . Once in this form, these properties can be combined via intersection. Here, we show two global properties: the first is built on top of conditional bounded persistency properties, and the second on top of a conditional bounded eventually property.
    As a first example, we might consider a global property saying that whenever an event \(\pi\) occurs then all events \(\pi _i\) , for \(i \in I\) , must occur in the mth scan cycle and in all subsequent \(n-m\) scan cycles, for \(1\le m \le n\) . Formally, for \(\pi ,\pi _i \in \mathsf {PUEvents}\) , \(i\in I\) , and \(1\le m\le n\) : \(\bigcap _{i\in I} (\mathrm{CBP}_{[\mathit {m, n}]}(\mathit {\pi , \pi _i}))^\ast\) .
We might enforce this kind of property in \(\mathrm{PLC}_1\) of our use case of Section 4. Let \(z\in \mathbb {N}\) be the time (expressed in scan cycles) required to overflow the tank \(T_1\) when its level is low, both pumps are on, and the valve is closed. Then, the property would be \((\mathrm{CBP}_{[\mathit {1, w}]}(\mathit {l_1, \overline{ {\mathsf {\scriptstyle on_{1}}} }}))^\ast \cap (\mathrm{CBP}_{[\mathit {1, w}]}(\mathit {l_1, \overline{ {\mathsf {\scriptstyle on_{2}}} }}))^\ast\) , with \(w\lt z\) , saying that if the tank \(T_1\) reaches its low level (event \(l_1\) ) then both \(\mathit {pump}_1\) and \(\mathit {pump}_2\) must be on (events \(\overline{ {\mathsf {\scriptstyle on_{1}}} }\) and \(\overline{ {\mathsf {\scriptstyle on_{2}}} }\) ) in all subsequent w scan cycles, starting from the current one.
As a second example, we might consider a more involved global property relying on conditional bounded eventually, persistent conditional, and bounded persistency. Basically, the property says that whenever an event \(\pi _1\) occurs then a second event \(\pi _2\) must eventually occur between the mth scan cycle and the nth scan cycle, with \(1\le m \le n\) ; moreover, it must occur for d consecutive scan cycles, for \(d \ge 1\) (see Figure 7). Formally, the property is the following:
    \begin{equation*} \big (\mathrm{CBE}_{[\mathit {m, n}]}(\mathit {\pi _1, \pi _2})\big)^\ast \cap \, \big (\mathrm{Cnd}(\mathit {\pi _1, \mathrm{PCnd}_{\mathit {n}}(\mathit {\pi _2, \mathsf {PEvents}^{\le {\scriptstyle \mathsf {maxa}}}; \mathrm{BP}_{\mathit {d-1}}(\mathit {\pi _2})})})\big)^\ast , \end{equation*}
for \(\pi _1,\pi _2 \in \mathsf {PUEvents}\) , with \(1\le m\le n\) and \(d \ge 1\) . Intuitively, the property \((\mathrm{CBE}_{[\mathit {m, n}]}(\mathit {\pi _1, \pi _2}))^\ast\) requires that when \(\pi _1\) occurs the event \(\pi _2\) must eventually occur between the mth scan cycle and the nth scan cycle. The remaining part of the property says that if the event \(\pi _2\) occurs within the nth scan cycle (recall that \(m \le n\) ) then it must persist for d scan cycles.
    Fig. 7.
Fig. 7. A trace satisfying the property above, for some m, with \(n = m+4\) and \(d=4\) .
    In this manner, we might strengthen the conditional bounded eventually property given in Section 5.1 for \(\mathrm{PLC}_1\) of our use case to prevent water overflow in the tank \(T_2\) . Let \(z \in \mathbb {N}\) be the time (expressed in scan cycles) required to overflow the tank \(T_2\) when the valve is open and the level of tank \(T_2\) is low. The property is the following:
    \begin{align*} {\big (\mathrm{CBE}_{[\mathit {1, w}]}(\mathit { {\mathsf {\scriptstyle {open}{\_}req}} , \overline{ {\mathsf {\scriptstyle close}} }}) \big)^\ast \, \cap \, \big (\mathrm{Cnd}(\mathit { {\mathsf {\scriptstyle {open}{\_}req}} , \mathrm{PCnd}_{\mathit {w}}(\mathit {\overline{ {\mathsf {\scriptstyle close}} }, \mathsf {PEvents}^{\le {\scriptstyle \mathsf {maxa}}}; \mathrm{BP}_{\mathit {d-1}}(\mathit {\overline{ {\mathsf {\scriptstyle close}} }})})})\big)^\ast , } \end{align*}
where \(w \ll z\) , and \(d \in \mathbb {N}\) is the time (expressed in scan cycles) required to release in \(T_3\) the (maximum) quantity of water that the tank \(T_2\) may accumulate in w scan cycles. The first part of the property says that if \(\mathrm{PLC}_1\) receives a request to open the valve (event \({\mathsf {\scriptstyle {open}{\_}req}}\) ) then the valve must eventually be closed (event \(\overline{ {\mathsf {\scriptstyle close}} }\) must eventually occur) within at most w scan cycles. The remaining part of the property says that when \(\mathrm{PLC}_1\) receives a request to open the valve (event \({\mathsf {\scriptstyle {open}{\_}req}}\) ), if the valve gets closed (event \(\overline{ {\mathsf {\scriptstyle close}} }\) ) within the wth scan cycle, then it must remain closed for d consecutive scan cycles. Here, d depends both on the maximum level of water reachable in \(T_2\) in w scan cycles and on the physical law governing the water flow from \(T_2\) to \(T_3\) .

    6 Monitor Synthesis

In this section, we provide an algorithm to synthesize monitors from regular properties whose events are contained in (the set of events associated with) a fixed set \(\mathcal {P}\) of observable controller actions. More precisely, given a global property \(e \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\) the algorithm returns an edit automaton \(\langle \! | \! e \! | \! \rangle _{}^{\mathcal {P}} \in \mathbb {E}{𝕕𝕚𝕥}\) that is capable of enforcing the property e during the execution of a generic controller whose possible actions are confined to those in \(\mathcal {P}\) . The synthesis algorithm is defined in Table 5 by induction on the structure of the global/local property given in input; as we distinguish global properties from local ones, we define our algorithm in two steps.
    Table 5.
    Table 5. Monitor Synthesis from Properties in \(\mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\) and \(\mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\)
    Remark 3.
We recall that, according to the operational semantics defined in Table 1, all controller actions \(\alpha\) are observable and they basically coincide with the set \(\mathsf {Events}\) used to build up the enforcing properties defined in Section 5. As a consequence, we will synthesize enforcing monitors that may observe any action of the controller under scrutiny and act accordingly.
    The monitor \(\langle \! | \! p^{\ast } \! | \! \rangle _{}^{\mathcal {P}}\) associated with a global property \(p^{\ast }\) is an edit automaton defined via the recursive equation \(\mathsf {X} = { \langle \! | p | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) , to recursively enforce the local property p. The monitor \(\langle \! | \!e_1\cap e_2\! | \! \rangle _{}^{\mathcal {P}}\) is given by the cross product between the edit automata \(\langle \! | e_1 | \! \rangle _{}^{\mathcal {P}}\) and \(\langle \! | e_2 | \! \rangle _{}^{\mathcal {P}}\) , to accept only traces that satisfy both \(e_1\) and \(e_2\) ; the definition of the cross product between two edit automata recalls that for finite state automata, and it is reported in the appendix in Table 6. The monitor \({{ \langle \! | \! p_1\cap p_2 \! | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}}\) is given by the cross product between the edit automata \({{ \langle \! | p_1 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}}\) and \({{ \langle \! | p_2 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}}\) . The monitor \({{ \langle \! | p_1;p_2 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}}\) is given by the automaton \({{ \langle \! | p_1 | \! \rangle _{\mathsf {Z}}^{\mathcal {P}}}}\) , where \(\mathsf {Z} = {{ \langle \! | p_2 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}}\) ; basically \(\mathsf {Z}\) ties the final states of the automaton enforcing \(p_1\) with the initial state of the automaton enforcing \(p_2\) (e.g., \({{ \langle \! | \epsilon ;p_2 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}} = {{ \langle \! | \epsilon | \! \rangle _{\mathsf {Z}}^{\mathcal {P}}}} = \mathsf {Z}\) , for \({\mathsf {Z}} = {{ \langle \! | p_2 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}}\) ). The monitor associated with a union property \(\bigcup _{i\in I}\pi _i.p_i\) does the following: (i) allows all actions associated with the events \(\pi _i\) , (ii) inserts an action associated with some admissible event \(\pi _i\) only when the controller wishes to prematurely complete the scan cycle, i.e., it emits an \({\scriptstyle \mathsf {end}}\) -action, and (iii) suppresses any other action except for \({\scriptstyle \mathsf {tick}}\) - and \({\scriptstyle \mathsf {end}}\) -actions.
    Table 6.
    Table 6. Cross Product Between Two Edit Automata with Alphabet \(\mathcal {P}\)
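To convey the inductive structure of the synthesis concretely, the following Python fragment sketches it for a restricted fragment of the property language (global star and intersection, local sequencing and union). The tuple encoding of properties, the flat transition-table representation, and the names synth, synth_global, and cross are illustrative assumptions, not the algorithm of Table 5, which also handles local intersection and the timed \({\scriptstyle \mathsf {tick}}\) -actions.

```python
# Illustrative sketch, not the synthesis of Table 5: properties are nested tuples,
# enforcers are finite transition tables. Local intersection and timed tick-actions
# are omitted; 'end' is treated as an ordinary event of the alphabet.

states = {}      # state id -> {'allow': {event: next_id}, 'insert': [admissible events]}

def fresh():
    s = len(states)
    states[s] = {'allow': {}, 'insert': []}
    return s

def synth(p, cont):
    """Synthesize a local property; `cont` is the state reached once p is satisfied."""
    kind = p[0]
    if kind == 'empty':                                    # epsilon
        return cont
    if kind == 'seq':                                      # p1 ; p2: tie p1's exit to p2's entry
        return synth(p[1], synth(p[2], cont))
    if kind == 'union':                                    # U_i pi_i . p_i
        s = fresh()
        for event, rest in p[1]:
            states[s]['allow'][event] = synth(rest, cont)  # rule (Allow)
            states[s]['insert'].append(event)              # candidates for rule (Insert)
        return s                                           # anything else falls under (Suppress)
    raise ValueError(kind)

def synth_global(e):
    """Global properties: p* via a recursive equation, e1 ∩ e2 via a cross product."""
    if e[0] == 'star':                                     # X = <| p |>_X
        x = fresh()
        body = synth(e[1], cont=x)
        states[x] = states[body]                           # close the recursion on X
        return x
    if e[0] == 'inter':
        return cross(synth_global(e[1]), synth_global(e[2]))
    raise ValueError(e[0])

def cross(a, b, memo=None):
    """Product automaton: keeps only the behaviour accepted by both enforcers."""
    memo = {} if memo is None else memo
    if (a, b) in memo:
        return memo[(a, b)]
    s = fresh()
    memo[(a, b)] = s
    for ev in states[a]['allow'].keys() & states[b]['allow'].keys():
        states[s]['allow'][ev] = cross(states[a]['allow'][ev], states[b]['allow'][ev], memo)
    states[s]['insert'] = [ev for ev in states[a]['insert'] if ev in states[b]['insert']]
    return s
```

A concrete property is then just a nested tuple built from 'star', 'inter', 'seq', 'union', and 'empty' nodes, and the resulting table is the finite-state skeleton of the corresponding edit automaton.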
Thus, the mitigation of the enforcement is actually implemented in the monitors synthesized from union properties. In practice, when the controller under scrutiny complies with the property enforced by the monitor, the two components, monitor and controller, evolve in a tethered fashion (by applying rule (Allow)), moving through related correct states. However, if the controller gets somehow corrupted (for instance, due to the presence of malware) then the two components will get misaligned, reaching unrelated states. In this case, the enforcer mitigates the attack by suppressing the remaining actions emitted by the controller (by applying rule (Suppress)) until the controller reaches the end of the scan cycle, signaled by the emission of the \({\scriptstyle \mathsf {end}}\) -action.3 After that, if the monitor and the controller are not aligned, the monitor commands the insertion of a safe trace, without any involvement of the controller, via one or more applications of the rule (Insert). Safe traces inserted in full autonomy by our enforcers always terminate with an \({\scriptstyle \mathsf {end}}\) . Thus, once the controller and the monitor are aligned, at the end of the scan cycle they synchronize on the action \({\scriptstyle \mathsf {end}}\) , via an application of the rule (Allow), and from then on they may continue in a tethered fashion.
Note that, according to Remark 1, even when the controller is completely unreliable and the monitor inserts an entire safe trace, the enforced scan cycle will always end well before a violation of the maximum cycle limit.
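The following Python sketch, based on the transition-table encoding of the previous fragment (and again only an illustration: timed \({\scriptstyle \mathsf {tick}}\) -actions and the formal rules of Table 1 are elided), shows how a synthesized monitor mediates one scan cycle through the rules (Allow), (Suppress), and (Insert) just described.

```python
import random

def enforce_scan_cycle(table, state, controller_actions):
    """One enforced scan cycle over a transition table as in the previous sketch.
    `controller_actions` is the (possibly corrupted) sequence emitted by the PLC,
    terminated by 'end'; returns the new monitor state and the trace that is
    actually forwarded to actuators and network."""
    forwarded, aligned = [], True
    for act in controller_actions:
        allow = table[state]['allow']
        if aligned and act in allow:                 # rule (Allow): monitor and PLC in lockstep
            forwarded.append(act)
            state = allow[act]
        elif act != 'end':                           # rule (Suppress): drop until the end of the cycle
            aligned = False
        else:                                        # the controller signals a (possibly premature) end
            while 'end' not in table[state]['allow']:
                safe = random.choice(table[state]['insert'])   # rule (Insert); non-empty by
                forwarded.append(safe)                         # well-formedness of the property
                state = table[state]['allow'][safe]
            forwarded.append('end')                  # re-synchronize on end via (Allow)
            state = table[state]['allow']['end']
            aligned = True
    return state, forwarded
```

In the implementation of Section 7, the inserted action is not chosen at random but according to a priority among admissible safe actions.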
Now, we calculate the complexity of the synthesis algorithm in terms of the number of occurrences of the operator \(\cap\) in e and of the dimension \(\mathsf {dim}(e)\) of e. Intuitively, the dimension (or size) of a property is the number of operators occurring in it.
    Definition 6.
    Let \(\mathsf {dim}() :\mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\cup \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\rightarrow \mathbb {N}\) be a property-size function defined as
    Proposition 1 (Complexity).
    Let \(e \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\) be a global property and \(\mathcal {P}\) be a set of actions such that \(\mathsf {events}(e) \subseteq \mathcal {P}\) . The complexity of the algorithm to synthesize \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }\) is \(\mathcal {O}({|}\mathcal {P}{|} \cdot m^{k+1})\) , with \(m= \mathsf {dim}(e)\) and k being the number of occurrences of the operator \(\cap\) in e.
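As a rough, purely illustrative instance (the figures are assumptions, not measurements from our experiments): for a property e with \(\mathsf {dim}(e) = m = 20\) , \(k = 2\) occurrences of \(\cap\) , and \({|}\mathcal {P}{|} = 10\) observable actions, the bound amounts to roughly \begin{equation*} {|}\mathcal {P}{|} \cdot m^{k+1} \,=\, 10 \cdot 20^{3} \,=\, 80{,}000 \end{equation*} elementary synthesis steps.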
    In the following, we prove that the enforcement induced by our synthesized monitors enjoys the properties stated in the Introduction: determinism preservation, transparency, soundness, deadlock-freedom, divergence-freedom, and scalability. In this section, with a small abuse of notation, given a set of observable actions \(\mathcal {P}\) , we will use \(\mathcal {P}\) to denote also the set of the corresponding events.
    Given a deterministic global property e, our synthesis algorithm returns a deterministic enforcer (according to Definition 2), i.e., an enforcer that can be effectively implemented. Formally,
Proposition 2 (Determinism Preservation).
Let \(e \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\) be a deterministic global property over a set of events \(\mathcal {P}\) . Then, the edit automaton \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }\) is deterministic.
    Let us move to transparency. Intuitively, the enforcement induced by a deterministic property \(e \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\) should preserve any execution trace satisfying e itself (Definition 2 at page 5 of [43]).
    Theorem 1 (Transparency).
    Let \(e \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\) be a deterministic global property, \(\mathcal {P}\) be a set of observable actions such that \(\mathsf {events}(e) \subseteq \mathcal {P}\) , and \(P\in \mathbb {C}{𝕥𝕣𝕝}\) be a controller. Let \(t= \alpha _1 \cdots \alpha _n\) be a trace of the controller P with \(t\in {[\!\![} e{]\!\!]}\) . Then, (1) t is a trace of the edit automaton \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }\!\) , and (2) there is no trace \(t^{\prime }=\alpha _1 \cdots \alpha _k\cdot \lambda\) for \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }\!\) such that \(0 \le k \lt n\) and \(\lambda \in \lbrace {^{-}{\alpha _{k+1}}}, \alpha \prec \alpha _{k+1}\rbrace\) , for some \(\alpha\) .
Basically, conclusion (1) says that all execution traces t (of a controller P) satisfying the enforcing property e are allowed by the associated enforcer \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }\!\) , while conclusion (2) says that allowing the trace t is the only possible option in the enforcement (this follows from the determinism of e).
Another important property of our enforcement is soundness [43]. Intuitively, a controller under the scrutiny of a monitor \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }\) should only yield execution traces that satisfy the enforced property e, i.e., execution traces that are consistent with its semantics \({[\!\![} e{]\!\!]}\) (up to \(\tau\) -actions).
    Theorem 2 (Soundness).
    Let \(e \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\) be a global property, \(\mathcal {P}\) be a set of observable actions such that \(\mathsf {events}(e) \subseteq \mathcal {P}\) , and \(P\in \mathbb {C}{𝕥𝕣𝕝}\) be a controller. If t is a trace of the monitored controller \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} } \! \bowtie \! {\boldsymbol { \lbrace }}P{\boldsymbol { \rbrace }}\) then \(\hat{t}\) is a prefix of some trace in \({[\!\![} e{]\!\!]}\) (see Notation 1 for the definition of the trace \(\hat{t}\) ).
    Here, it is important to stress that in general soundness does not ensure the deadlock-freedom of the monitored controller. That is, it may be possible that the enforcement of some property e causes a deadlock of the controller P under scrutiny. In particular, this may happen in our controllers only when the initial sleeping phase does not match the enforcing property (e.g., \(P={\scriptstyle \mathsf {tick}}.c.{\scriptstyle \mathsf {end}}.P\) and \(e=(c.{\scriptstyle \mathsf {end}})^\ast\) ). Intuitively, a local property will be called a k-sleeping property if it allows k initial time instants of sleep. Formally,
    Definition 7.
    For \(k \in \mathbb {N}^+\) , we say that \(p \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) is a k-sleeping local property, only if \({[\!\![} p{]\!\!]} =\lbrace t | t=t_1\cdot ... \cdot t_n, \text{ for } n\gt 0, \text{ s.t. } t_i = {\scriptstyle \mathsf {tick}}^k{\cdot } t_i^{\prime }{\cdot } {\scriptstyle \mathsf {end}}, {\scriptstyle \mathsf {end}}\notin t_i^{\prime }, \text{ and } 1\le i \le n \rbrace\) . We say that \(p^\ast\) is a k-sleeping global property only if p is, and \(e=e_1\cap e_2\) is k-sleeping only if both \(e_1,e_2\) are k-sleeping.
    The enforcement of k-sleeping properties does not introduce deadlocks in k-sleeping controllers. This is because our synthesized monitors suppress all incorrect actions of the controller under scrutiny, driving it to the end of its scan cycle. Then, the controller remains on stand-by while the monitor yields a safe sequence of actions to mimic a safe completion of the current scan cycle.
    Theorem 3 (Deadlock-freedom).
    Let \(e\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\) be a k-sleeping global property, and \(\mathcal {P}\) be a set of observable actions such that \(\mathsf {events}(e) \subseteq \mathcal {P}\) . Let \(P\in \mathbb {C}{𝕥𝕣𝕝}\) be a controller of the form \(P={\scriptstyle \mathsf {tick}}^{k}.S\) whose set of observable actions is contained in \(\mathcal {P}\) . Then, \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} } \! \bowtie \! {\boldsymbol { \lbrace }}P{\boldsymbol { \rbrace }}\) does not deadlock.
Another important property of our enforcement mechanism is divergence-freedom. In practice, the enforcement does not introduce divergence: monitored controllers will always be able to complete their scan cycles by executing a finite number of actions. This is because we limit our enforcement to well-formed properties (Definition 4), which always terminate with an \({\scriptstyle \mathsf {end}}\) event. In particular, the well-formedness of local properties ensures that in a global property of the form \(p^\ast\) the number of events between two subsequent \({\scriptstyle \mathsf {end}}\) events is always finite.
    Theorem 4 (Divergence-freedom).
    Let \(e\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\) be a global property, \(\mathcal {P}\) be a set of observable actions such that \(\mathsf {events}(e) \subseteq \mathcal {P}\) , and \(P\in \mathbb {C}{𝕥𝕣𝕝}\) be a controller. Then, there exists a \(k \in \mathbb {N}^+\) such that whenever \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} } \! \bowtie \! {\boldsymbol { \lbrace }}P{\boldsymbol { \rbrace }}\mathop \rightarrow^ {t}\mathsf {E} \! \bowtie \! {\boldsymbol { \lbrace }}J{\boldsymbol { \rbrace }}\) , if \(\mathsf {E} \! \bowtie \! {\boldsymbol { \lbrace }}J{\boldsymbol { \rbrace }}\mathop \rightarrow^ {t^{\prime }}\mathsf {E}^{\prime } \! \bowtie \! {\boldsymbol { \lbrace }}J^{\prime }{\boldsymbol { \rbrace }}\) , with \(|t^{\prime }| \ge k\) , then \({\scriptstyle \mathsf {end}}\in t^{\prime }\) .
    Notice that all properties seen up to now scale to field communications networks of controllers. This means that they are preserved when the controller under scrutiny is running in parallel with other controllers in the same field communications network. As an example, by an application of Theorems 1 and 2, we show how both transparency and soundness scale to field networks. A similar result applies to the remaining properties.
    Corollary 1 (Scalability to Networks of PLCs).
    Let \(e \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\) be a global property and \(\mathcal {P}\) be a set of observable actions, such that \(\mathsf {events}(e) \subseteq \mathcal {P}\) . Let \(P\in \mathbb {C}{𝕥𝕣𝕝}\) be a controller and \(N \in \mathbb {F}\mathbb{N}{𝕖𝕥}\) be a field network. If \(({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} } \! \bowtie \! {\boldsymbol { \lbrace }}P{\boldsymbol { \rbrace }}) \parallel N \mathop \rightarrow^ {t} (\mathsf {E} \! \bowtie \! {\boldsymbol { \lbrace }}J{\boldsymbol { \rbrace }}) \parallel N^{\prime }\) , for some t, \(\mathsf {E}\) , J and \(N^{\prime }\) , then
    whenever \(P \mathop \rightarrow^ {t^{\prime }} J\) , with \(t^{\prime } = \alpha _1 \cdots \alpha _n \in {[\!\![} e{]\!\!]}\) , the trace \(t^{\prime }\) is a trace of \({{\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }}\) and there is no trace \(t^{\prime \prime }=\alpha _1 \cdots \alpha _k\cdot \lambda\) of \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }\!\) such that \(0 \le k \lt n\) and \(\lambda \in \lbrace {^{-}{\alpha _{k+1}}}, \alpha \prec \alpha _{k+1}\rbrace\) , for some \(\alpha\) ;
    whenever \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} } \! \bowtie \! {\boldsymbol { \lbrace }}P{\boldsymbol { \rbrace }} \mathop \rightarrow^ {t^{\prime }} \mathsf {E} \! \bowtie \! {\boldsymbol { \lbrace }}J{\boldsymbol { \rbrace }}\) the trace \(\widehat{t^{\prime }}\) is a prefix of some trace in \({[\!\![} e{]\!\!]}\) .

    7 Our Enforcement Mechanism at Work

    In this section, we propose an implementation of our enforcement mechanism in which monitors, running on field-programmable gate arrays (FPGAs) [65], enforce open-source PLCs [8], running on Raspberry Pi devices [25], and governing a physical plant simulated in Simulink [47] (see Figure 8). The section has the following structure. In Section 7.1, we argue why FPGAs are good candidates for implementing secure proxies. In Section 7.2, we describe how we implemented the whole enforcement architecture for the use case of Section 4. In Section 7.3, we test our implementation by injecting the enforced PLCs with five different malware aiming at causing three different physical perturbations: tank overflow, valve damage, and pump damage. The attacks have been chosen to cover as much as possible the attacker model of Section 2. In particular, they include: a drop of the actuator commands of the valve, an integrity attack on the water-level sensors, a forgery of the actuator commands of the valve, a forgery of the message requests to open/close the valve, and a forgery of the actuator commands of the pumps. Section 7.4 discusses the performance of our implementation.
    Fig. 8.
    Fig. 8. Some physical components of our implementation.

    7.1 FPGAs as Secure Proxies for ICSs

FPGAs are semiconductor devices that can be programmed to run specific applications. An FPGA consists of (configurable) logic blocks, routing channels, and I/O blocks. The logic blocks can be configured to perform complex combinational functions and are further made up of transistor pairs, logic gates, lookup tables, and multiplexers. The applications are written using hardware description languages, such as Verilog [64]. Thus, in order to execute an application on the FPGA, its Verilog code is converted into a sequence of bits, called a bitstream, which is loaded into the FPGA.
FPGAs are assumed to be secure when the adversary does not have physical access to the device, i.e., the bitstream cannot be compromised [33]. Recent FPGAs support remote updates of the bitstream by relying on authentication mechanisms to prevent unauthorized uploads of malicious logic [33]. Nevertheless, as said in the Introduction and advocated by McLaughlin and Mohan [48, 50], any form of runtime reconfiguration should be prevented. In summary, under the assumption that the adversary has neither physical access to the FPGA nor the ability to perform remote updates, FPGAs represent good candidates for the implementation of secure enforcing proxies.

    7.2 An Implementation of the Enforcement of the SWaT System of Section 4

    The proposed implementation adopts different approaches for plants, controllers, and enforcers.
Plant. The plant of the SWaT system is simulated in Simulink [47], a framework to model, simulate, and analyze cyber-physical systems, widely adopted in industry and research. A Simulink model is given by blocks interconnected via wires. Our Simulink model contains blocks to simulate water tanks, actuators (i.e., pumps and valves), and sensors (see Figure 9). In particular, water-tank blocks implement the differential equations that model the dynamics of the tanks according to the physical constraints obtained from [27, 46]. Actuation blocks receive commands from PLCs, whereas sensor blocks send measurements to PLCs. State changes of both pumps and valves do not occur instantaneously; for simplicity, they take 1 second.4 We ran our Simulink model on a laptop with a 2.8 GHz Intel i7 7700HQ, 16 GB memory, and Linux Ubuntu 20.04 LTS OS.
    Fig. 9.
    Fig. 9. An implementation in Simulink of the plant of the SWaT system.
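As a rough illustration of the kind of dynamics a water-tank block implements (the actual blocks are Simulink models following the physical constraints of [27, 46]; the flow rates, tank area, and time step below are invented for the example), a single tank can be simulated as a mass balance integrated with forward Euler:

```python
# A toy stand-in for a Simulink water-tank block: a mass balance dh/dt = (Qin - Qout)/A
# integrated with forward Euler. Flow rates, tank area, and time step are invented.

def tank_level_step(level, inflow_on, outflow_open, dt=0.01,
                    area=1.5, q_in=0.002, q_out=0.0025):
    """One simulation step of a single tank: `level` is the water level [m];
    `inflow_on`/`outflow_open` tell whether the feeding pump and the outgoing
    valve are active. Returns the new level, clamped at the bottom of the tank."""
    dlevel = (q_in * inflow_on - q_out * outflow_open) / area
    return max(0.0, level + dt * dlevel)

# Example: the level rises while the pump is on and the outlet is closed.
level = 0.5
for _ in range(1000):                 # 10 simulated seconds at dt = 10 ms
    level = tank_level_step(level, inflow_on=True, outflow_open=False)
```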
Controllers. Controllers are defined in OpenPLC [8], an open-source PLC capable of running user programs in all five languages defined by IEC 61131-3 [1]. Additionally, OpenPLC supports standard SCADA protocols, such as Modbus/TCP, DNP3, and Ethernet/IP. OpenPLC can run on a variety of hardware, from a simple Raspberry Pi to robust industrial boards. We installed OpenPLC on three Raspberry Pi 4 boards [59]; each instance runs one of the three ladder logics seen in Figure 4.
Enforcers. Enforcers are implemented using three NetFPGA-CML development boards [70]. Our synthesis algorithm is implemented in Python to return enforcers written in Verilog, and checked for correctness using ModelSim [49]. The Verilog code is then compiled into a bitstream and executed in the FPGA. More precisely, our algorithm in Python takes as input a JSON file containing the property to be synthesized and other relevant information, such as the number of input/output signals and a priority among admissible output signals to improve the safety of the system (e.g., water levels far from the borders). Then, the property is parsed by means of the ANTLR parser [52]. After the parsing, our algorithm implements the synthesis of Table 5 to derive the enforcing edit automaton; this is written down into a JSON file. At this stage, the derived edit automaton is still somewhat abstract, as both \({\scriptstyle \mathsf {end}}\) - and \({\scriptstyle \mathsf {tick}}\) -actions are explicitly represented. Finally, the algorithm compiles the edit automaton into an enforcer written in Verilog, where the above abstractions are implemented. In particular, the passage of time (i.e., \({\scriptstyle \mathsf {tick}}\) -actions) is represented and monitored via clock variables, while the end of scan cycles (i.e., \({\scriptstyle \mathsf {end}}\) -actions) is implemented via specific code to synchronize enforcers and controllers, relying on clock variables. Thus, before each scan cycle the enforcer forwards the current inputs (coming from the plant) to the controller. Then, when the scan cycle is completed, it receives from the controller all the current outputs, and forwards them to the actuators. Meanwhile, the enforcer monitors the passage of time via its clock variables, and when the scan cycle is completed (i.e., the controller sends all outputs) it moves to the state corresponding to the following scan cycle. Finally, in our FPGAs we also write some code to implement a UDP-based network connecting enforcers, PLCs, and the simulated plant.
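The per-scan-cycle data flow just described can be rendered schematically as follows (the real enforcers are written in Verilog and run on the NetFPGAs; the addresses, ports, packet layout, and strict alternation of datagrams are assumptions, and enforce_scan_cycle refers to the sketch of Section 6):

```python
import socket

# Assumed addresses, ports, and a comma-separated text encoding of signals;
# `enforce_scan_cycle` and the transition `table` come from the sketches of Section 6.
PLANT_ADDR = ('192.168.0.10', 5000)    # simulated plant (Simulink)
PLC_ADDR   = ('192.168.0.20', 5020)    # OpenPLC instance on the Raspberry Pi

def decode(datagram):
    return datagram.decode().split(',')

def encode(actions):
    return ','.join(actions).encode()

def proxy_loop(table, state):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('0.0.0.0', 5010))                    # the enforcer's own UDP endpoint
    while True:                                     # one iteration per scan cycle
        sensors, _ = sock.recvfrom(1024)            # sensor readings from the plant
        sock.sendto(sensors, PLC_ADDR)              # forward the inputs to the controller
        outputs, _ = sock.recvfrom(1024)            # controller outputs at the end of the cycle
        state, safe = enforce_scan_cycle(table, state, decode(outputs) + ['end'])
        sock.sendto(encode(safe[:-1]), PLANT_ADDR)  # forward only the mitigated commands
```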
    The code of the three PLCs, the algorithm in Python, the enforcers written in Verilog, and the Simulink simulations can be found at: https://bitbucket.org/formal_projects/runtime_enforcement.

    7.3 The Enforced SWaT System under Attack

In this section, we consider five different attacks targeting the PLCs of the SWaT system to achieve three possible malicious goals: (i) overflowing the water tanks, (ii) damaging the valve, and (iii) damaging the pumps. In order to simulate the injection of malware in the PLCs, we replace the original PLC ladder logics with compromised ones, containing some additional logic intended to disrupt the normal operations of the PLC [28]. In the following, we will discuss these attacks, grouped by goals, showing how the enforcement of specific properties mitigates the attacks by preserving the correct behavior of the monitored PLCs.
Tank overflow. Our first attack is a DoS attack targeting the valve of \(\mathrm{PLC}_{1}\) by dropping the commands to close the valve. In the left-hand side of Figure 10 we show a possible implementation of this attack in ladder logic. Basically, the malware remains silent for 500 seconds, and then it sets a malicious \(\mathit {drop}\) variable to true (highlighted in yellow). Once the variable \(\mathit {drop}\) becomes true, the \(\mathit {valve}\) variable is forced to be false (highlighted in red), thus preventing the closure of the valve.
    Fig. 10.
    Fig. 10. Tank overflow: Compromised ladder logics in the first (left, \(\mathrm{PLC_{1}}\) ) and the second attack (right, \(\mathrm{PLC_{2}}\) ).
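To make the attack logic concrete, here is a schematic Python rendition of the compromised scan cycle (the actual malware is the ladder logic of Figure 10; the scan period and the encoding of the valve coil as True = close are assumptions consistent with the description above):

```python
SCAN_PERIOD = 0.01          # assumed 10 ms scan cycle
elapsed, drop = 0.0, False  # `drop` is the malicious flag injected by the malware

def compromised_scan(close_request):
    """One scan cycle of the compromised logic of PLC1 (simplified).
    Returns the valve command coil, assuming True encodes 'close the valve'."""
    global elapsed, drop
    elapsed += SCAN_PERIOD
    if elapsed >= 500.0:        # the silent phase is over: activate the malware
        drop = True
    valve = close_request       # legitimate logic (simplified): close when PLC2 asks to
    if drop:
        valve = False           # malicious override: close commands never reach the valve
    return valve
```

Under this logic, every close request arriving after the first 500 seconds is silently dropped; the mitigation discussed below inserts the closure on behalf of \(\mathrm{PLC}_1\) .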
    In order to prevent attacks aiming at overflowing the tanks, we propose three enforcing properties, one for each PLC. The first property \(e_1\) is for preventing overflow of tank \(T_1\) by ensuring that \(\mathit {pump}_1\) and \(\mathit {pump}_2\) are off when the tank is full. The second property \(e_2\) is for preventing overflow of tank \(T_2\) by ensuring that the \(\mathit {valve}\) , regulating the incoming flow, gets closed when the tank is full. Finally, the third property \(e_3\) is for preventing overflow of tank \(T_3\) by ensuring that \(\mathit {pump}_3\) is on when the tank is full.
    Let us define these three properties in our formal language.
\(e_1\triangleq (\mathrm{CBP}_{[\mathit {1, m}]}(\mathit {h_1, \overline{ {\mathsf {\scriptstyle off_{1}}} }}))^\ast \cap \, (\mathrm{CBP}_{[\mathit {1, m}]}(\mathit {h_1, \overline{ {\mathsf {\scriptstyle off_{2}}} }}))^\ast\) , an intersection between two conditional bounded persistency properties to enforce \(\mathrm{PLC_{1}}\) . This property ensures that both pumps \(\mathit {pump}_1\) and \(\mathit {pump}_2\) are off, for m consecutive scan cycles, when the level of \(T_1\) is high (measurement \(h_1\) ). Here, \(m\lt n\) , where \(n\in \mathbb {N}\) is the number of scan cycles required to empty \(T_1\) when its level is high, both pumps are off, and the valve is open.
\(e_2\triangleq (\mathrm{CBP}_{[\mathit {1, u}]}(\mathit {h_2, \overline{ {\mathsf {\scriptstyle {close}{\_}req}} }}))^\ast\) , a conditional bounded persistency property for \(\mathrm{PLC_{2}}\) ensuring that requests to close the valve (event \(\overline{ {\mathsf {\scriptstyle {close}{\_}req}} }\) ) are sent for u consecutive scan cycles when the level of water in the tank \(T_2\) is high (measurement \(h_2\) ). Here, \(u\lt v\) , where \(v \in \mathbb {N}\) is the number of scan cycles required to empty the tank \(T_2\) when the level is high and the valve is closed.
\(e_3\triangleq (\mathrm{CBP}_{[\mathit {1, w}]}(\mathit {h_3, \overline{ {\mathsf {\scriptstyle on_{3}}} }}))^\ast\) , a conditional bounded persistency property for \(\mathrm{PLC_{3}}\) to ensure that \(\mathit {pump}_3\) is on for w consecutive scan cycles when the level of water in tank \(T_3\) is high (measurement \(h_3\) ). Here, \(w\lt z\) , where \(z\in \mathbb {N}\) is the time (expressed in scan cycles) required to empty the tank \(T_3\) when the level is high and \(\mathit {pump}_3\) is on.
    Now, let us analyze the effectiveness of the enforcement induced by these three properties. For instance, in the upper graphs of Figure 11, we report the impact on the tanks \(T_1\) and \(T_2\) of the DoS attack previously described, when enforcing the three properties \(e_1, e_2,\) and \(e_3\) in the corresponding PLCs. Here, the red region denotes when the attack becomes active. As the reader may notice, despite repeated requests to close the valve coming from \(\mathrm{PLC}_2\) , the compromised \(\mathrm{PLC}_1\) never closes the valve causing the overflow of tank \(T_2\) . So, the enforced property \(e_1\) is not up to the task.
    Fig. 11.
    Fig. 11. Tank overflow: DoS attack on \(\mathrm{PLC}_1\) when enforcing \(e_1, e_2,e_3\) (up) and \(e_1^{\prime },e_2, e_3\) (down).
    In order to prevent this attack, we must guarantee that \(\mathrm{PLC}_1\) closes the valve when \(\mathrm{PLC}_2\) requests so. Thus, we should enforce in \(\mathrm{PLC}_1\) a more demanding property \(e_1^{\prime }\) defined as follows: \(e_1 \, \cap \, \mathrm{CBE}_{[\mathit {1, 1}]}(\mathit { {\mathsf {\scriptstyle {close}{\_}req}} , \overline{ {\mathsf {\scriptstyle close}} }})\) . Basically, the last part of the property ensures that every request to close the valve is followed by an actual closure of the valve in the same scan cycle. The impact of the malware on \(\mathrm{PLC}_{1}\) when enforcing the properties \(e_1^{\prime },e_2,e_3\) is represented in the lower graphs of Figure 11. Now, the correct behavior of \(\mathrm{PLC}_1\) is ensured, thus preventing the overflowing of the water tank \(T_2\) . In these graphs, the green highlighted regions denote when the monitor detects the attack and mitigates the activities of the compromised \(\mathrm{PLC}_1\) . In particular, the monitor inserts the commands to close the valve on behalf of \(\mathrm{PLC}_1\) when \(\mathrm{PLC}_2\) sends requests to close the valve.
Having strengthened the enforcing property for \(\mathrm{PLC}_1\) , one may think that the enforcement of \(e_2\) in \(\mathrm{PLC}_2\) is now superfluous to prevent water overflow in \(T_2\) . However, this is not the case if the attacker can compromise \(\mathrm{PLC}_{2}\) . Consider a second attack on \(\mathrm{PLC}_2\) , an integrity attack that adds an offset of \(-30\) to the measured water level of \(T_2\) . We show a ladder logic implementation of such an attack on the right-hand side of Figure 10 where, for simplicity, we omit the initial silent phase lasting 500 seconds. The impact on the tanks \(T_1\) and \(T_2\) of the malware injected in \(\mathrm{PLC}_{2}\) , in the presence of the enforcement of the properties \(e^{\prime }_1\) and \(e_3\) , respectively, is represented on the upper graphs of Figure 12.
    Fig. 12.
    Fig. 12. Tank overflow: integrity attack on \(\mathrm{PLC}_2\) when enforcing \(e_1^{\prime },e_3\) (up) and \(e_1^{\prime },e_2, e_3\) (down).
Again, the red region shows when the attack becomes active. As the reader may notice, the compromised \(\mathrm{PLC}_2\) never sends requests to close the valve, causing the overflow of the water tank \(T_2\) . On the other hand, when enforcing the three properties \(e^{\prime }_1, e_2, e_3\) in the three PLCs, the lower graphs of Figure 12 show that the overflow of tank \(T_2\) is prevented. Again, the green highlighted regions denote when the monitor detects the attack and mitigates the commands of the compromised \(\mathrm{PLC}_2\) . Here, the monitor inserts the request to close the valve on behalf of \(\mathrm{PLC}_2\) when \(T_2\) reaches a high level.
Valve damage. We now consider attacks whose goal is to damage the valve via chattering, i.e., rapid alternation of openings and closings of the valve that may cause mechanical failures in the long run. On the left-hand side of Figure 13, we show a possible ladder logic implementation of a third attack that injects commands to open and close the valve. In particular, the attack repeatedly alternates a stand-by phase, lasting 70 seconds, and an injection phase, lasting 30 seconds (yellow region); during the injection phase, the valve is opened and closed rapidly (red region).
    Fig. 13.
    Fig. 13. Valve damage: Compromised ladder logic in the third (left, \(\mathrm{PLC}_1\) ) and the fourth attack (right, \(\mathrm{PLC}_2\) ).
With no enforcement, the impact of the attack on the tanks \(T_1\) and \(T_2\) is represented on the upper graphs of Figure 14, where the red region denotes when the attack becomes active. From the graph associated with the execution of \(T_1\) , the reader can easily see that the valve is chattering. Note that this is a stealthy attack, as the water level of \(T_2\) is maintained within the normal operation bounds.
    Fig. 14.
    Fig. 14. Valve damage: injection attack on \(\mathrm{PLC}_1\) in the absence (up) and in the presence (down) of enforcement.
In order to prevent this kind of attack, we might consider enforcing in \(\mathrm{PLC}_1\) a bounded mutual exclusion property of the form \(e_1^{\prime \prime } \triangleq (\mathrel {\mathrm{BME}_{\mathit {10000}}{\lbrace \overline{ {\mathsf {\scriptstyle open}} }, \overline{ {\mathsf {\scriptstyle close}} }\rbrace }})^\ast\) to ensure that within 10,000 consecutive scan cycles (10 seconds) openings and closings of the valve may only occur in mutual exclusion. When the property \(e^{\prime \prime }_1\) is enforced in \(\mathrm{PLC}_1\) , the lower graphs of Figure 14 show that the chattering of the valve is prevented. In particular, the green highlighted regions denote when the monitor detects the attack and mitigates the commands on the valve of the compromised \(\mathrm{PLC}_1\) .
A fourth attack with the same goal of chattering the valve may be launched on \(\mathrm{PLC}_{2}\) , by sending rapidly alternating requests to open and close the valve. This can be achieved by means of an integrity attack on the sensor of the tank \(T_2\) , rapidly switching the measurements between low and high. On the right-hand side of Figure 13 we show parts of the ladder logic implementation of this attack on \(\mathrm{PLC}_2\) , where, for simplicity, we omit the machinery for dealing with the alternation of phases. Again, the attack repeatedly alternates between a stand-by phase, lasting 70 seconds, and an active phase, lasting 30 seconds. When the attack is in the active phase (red region), the measured water level of \(T_2\) rapidly switches between low and high, thus sending requests to \(\mathrm{PLC}_1\) to open and close the valve in rapid alternation.
The impact of this attack targeting \(\mathrm{PLC}_{2}\) in the absence of an enforcing monitor is represented in the upper graphs of Figure 15, where the red region shows when the attack becomes active. Notice that the rapidly alternating requests originating from \(\mathrm{PLC}_2\) cause a chattering of the valve. On the other hand, with the enforcement of the property \(e^{\prime \prime }_1\) in \(\mathrm{PLC}_{1}\) , the lower graphs of Figure 15 show that the correct behavior of tanks \(T_1\) and \(T_2\) is ensured. In that figure, the green highlighted regions denote when the enforcer of \(\mathrm{PLC}_{1}\) detects the attack and mitigates the commands (on the valve) of the compromised \(\mathrm{PLC}_2\) . Notice that in this case no enforcement is required in \(\mathrm{PLC}_2\) .
    Fig. 15.
    Fig. 15. Valve damage: integrity attack on \(\mathrm{PLC}_2\) in the absence (up) and in the presence (down) of enforcement.
Pump damage. Finally, we consider attacks whose goal is to damage the pumps, in particular \(\mathit {pump}_3\) . In that case, an attacker may force the pump to start when the water tank \(T_3\) is empty. This can be done with a fifth attack that injects commands to turn on the pump, based on a ladder logic implementation similar to that seen in Figure 10. The impact of this attack on tank \(T_3\) in the absence of enforcement is represented on the left-hand side graphs of Figure 16, where the red region shows when the attack becomes active. As the reader may notice, \(\mathit {pump}_3\) is turned on when \(T_3\) is empty.
    Fig. 16.
    Fig. 16. Pump damage: injection attack on \(\mathrm{PLC}_3\) in the absence (left) and in the presence (right) of enforcement.
Now, we can prevent damage on \(\mathit {pump}_3\) by enforcing on \(\mathrm{PLC}_3\) the following conditional bounded persistency property: \(e_3^{\prime }\triangleq (\mathrm{CBP}_{[\mathit {1, w}]}(\mathit {l_3, \overline{ {\mathsf {\scriptstyle off_{3}}} }}))^\ast\) . The enforcement of this property ensures that \(\mathit {pump}_3\) is off for w consecutive scan cycles when the level of water in tank \(T_3\) is low, for \(w\lt z\) and \(z\in \mathbb {N}\) being the time (expressed in scan cycles) required to fill up tank \(T_3\) when the pump is off. Thus, when the enforcement of \(e^{\prime }_3\) is active, the right-hand side graphs of Figure 16 show that the correct behavior of \(T_3\) is ensured, thus preventing pump damage. In that figure, the green highlighted regions denote when the monitor detects the attack and mitigates the commands (of the pumps) of the compromised \(\mathrm{PLC}_3\) . More precisely, the enforcer suppresses the commands to turn on the pump when the tank is empty, for w consecutive scan cycles.

    7.4 Discussion

In this section, we rely on the Vivado Design Suite 15.2 [66] analysis tool to carry out a performance analysis of our implementation.
As for the hardware resources used by our FPGAs, we measured them in terms of lookup tables and registers used during the enforcement. Their number depends on the number of states of the enforcers implemented in the FPGAs, which in turn is proportional to the number of scan cycles covered by the enforced (local) property. In particular, for each scan cycle, the number k of states of the enforcer depends on the monitored input/output signals and their admissible values. For instance, for scan cycles taking 10 ms (0.1 kHz), an enforced local property lasting 10 seconds will cover 1,000 consecutive scan cycles, and the synthesized enforcer would have \(k \ast 1,000\) states. In our experiments, when enforcing properties covering 1,000 scan cycles the hardware resource use reaches 5%; for 10,000 cycles the resource use rises to 13% due to the increase in the size of the enforcer.
As for the execution speed of the enforcement, in general, all FPGAs are capable of running at a speed of 100 MHz (or higher). The actual execution speed depends on the complexity of the underlying code, in our case the enforcer, plus some extra code to implement the network communication protocol (UDP). In our experiments, FPGAs ran with a frequency of 1 MHz while PLCs ran with a frequency of 0.1–1 kHz. Thus, the overhead introduced by the FPGAs is negligible, independently of the size (the number of states) of the enforcer implemented in the FPGAs. We recall that in Remark 1 we assumed that our enforced controllers successfully complete their scan cycle in less than half of the maximum cycle limit (just in case the scan cycle should be entirely corrected by the enforcer). However, as the overhead introduced by FPGAs is negligible, this constraint can actually be relaxed.
Finally, concerning the communication latency between enforcers, many FPGAs support high-speed and low-latency communications, which are the ones used in industrial control contexts [51]. We used FPGAs with Ethernet ports supporting 1 Gbps speed, i.e., with a latency of 100 microseconds. Furthermore, thanks to our scalability result (Corollary 1), a network of enforcing FPGAs introduces a negligible overhead in terms of communication latency and hardware resources.

    8 Related Work

    The notion of runtime enforcement was introduced by Schneider [60] to enforce security policies via truncation automata, a kind of automata that terminates the monitored system in case of violation of the property. Thus, truncation automata can only enforce safety properties. Furthermore, the resulting enforcement may obviously lead to deadlock (actually termination) of the monitored system with no room for mitigation.
Ligatti et al. [43] studied a hierarchy of enforcement mechanisms, each with different transformational capabilities: Schneider’s truncation automata, suppression automata, insertion automata, and finally, edit automata that combine the power of suppression and insertion automata. Edit automata are capable of enforcing instances of safety and liveness properties, along with other properties such as renewal properties [12, 43]. Ligatti et al. defined different notions of enforcement, and in particular the so-called precise enforcement (Definition 2, page 5), which basically corresponds to the combination of our notions of transparency and soundness, proved in Theorems 1 and 2, respectively. Our enforcers differ from Ligatti et al.’s edit automata in the following three aspects. First, in general, the edit automata have an enumerable number of states, whereas in the current article, we restrict ourselves to finite-state edit automata. Second, Ligatti et al.’s edit automata can insert a non-empty string of symbols in a single step, whereas, without any loss of expressiveness, our enforcers can only insert one symbol per step (i.e., we will need multiple steps to insert a string of symbols). Third, Ligatti et al.’s edit automata are deterministic, in the sense that for any action of the system under scrutiny the enforcer admits only one possible output. In our article, we adopt Pinisetty et al.’s [54] notion of deterministic enforcer, in which the insertion has a certain degree of freedom, since the inserted symbol is chosen within a set of admissible symbols. Despite these differences, we believe that when focusing on finite-state enforcers we are able to enforce the same set of correctness properties as Ligatti et al. [43].
Bielova and Massacci [12, 13] provided a stronger notion of enforceability by introducing a predictability criterion to prevent monitors from transforming invalid executions in an arbitrary manner. Intuitively, a monitor is said to be predictable if one can predict the number of transformations used to correct invalid executions. In our setting, in case of injection of malware that may act in an unpredictable manner, this approach appears infeasible.
Falcone et al. [21, 22] proposed a synthesis algorithm, relying on Streett automata, to translate most of the property classes defined within the safety-progress hierarchy [44] into (a slight variation of) edit automata. In the safety-progress hierarchy, our global properties can be seen as guarantee properties, for which all execution traces that satisfy a property contain at least one prefix that still satisfies the property. However, it should be noted that they consider untimed properties only; as already pointed out before, timed actions play a special role in our enforcement and they cannot be treated as untimed actions.
    Beauquier et al. [10] proved that finite-state edit automata (i.e., those kinds of enforcers we are actually interested in) can only enforce a sub-class of regular properties. Actually, they can enforce all and only the regular properties that can be recognized using finite automata whose cycles always contain at least one final state. This is the case of our enforced regular properties, as well-formed local properties in \(\mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) always terminate with the “final” atomic property \({\scriptstyle \mathsf {end}}\) .
    Pinisetty and Tripakis [55] studied the compositionality of the enforcement of different regular properties \(p_1, \ldots , p_n\) at the same time, by composing the associated enforcing monitors. The idea is to replace a monolithic approach, in which a monitor is synthesized from the property \(p_1 \cap \ldots \cap p_n\) , with a compositional one, where the n monitors enforcing the properties \(p_i\) are somehow put together to enforce \(p_1 \cap \ldots \cap p_n\) . The authors of [55] proved that runtime enforcement is not compositional with respect to general regular properties, neither with respect to serial nor parallel composition. On the other hand, compositionality holds for certain sub-classes of regular properties such as safety (or co-safety) properties. Here, we wish to point out that our notion of scalability is different from their notion of compositionality, as we aim at scaling our enforcement on a network of PLCs and not on multiple regular properties on the same PLC.
Bloem et al. [14] defined a synthesis algorithm that, given a safety property, returns a monitor, called a shield, to enforce untimed properties in reactive systems (which have many aspects in common with control systems). Their algorithm relies on a notion called k-stabilization: when the design reaches a state where a property violation becomes unavoidable for some possible future inputs, the shield is allowed to deviate for at most \(k \in \mathbb {N}\) steps; if a second violation happens during the k-step recovery phase, the shield enters a fail-safe mode where it only enforces correctness, but no longer minimizes the deviation. However, the k-stabilizing shield synthesis problem is unrealizable for many safety-critical systems, because a finite number of deviations cannot be guaranteed. Humphrey et al. [34] addressed this problem by proposing the notion of admissible shields, which was extended and generalized by Könighofer et al. [35] by assuming that systems have a cooperative behavior with respect to the shield, i.e., the shield ensures a finite number of deviations if the system chooses certain outputs. The authors presented a synthesis procedure that maximizes the cooperation between system and environment for satisfying the required enforced properties. This approach has some similarities with our enforcement, in which a violation of a property during a scan cycle induces the suppression of all subsequent controller actions until the PLC reaches the end of the scan, so the monitor can insert a safe trace before permitting the completion of the scan cycle.
Pinisetty et al. [54] proposed a bi-directional runtime enforcement mechanism for reactive systems, and more generally for cyber-physical systems, relying on Berry and Gonthier’s synchronous hypothesis [11], to correct both inputs and outputs. Pinisetty et al. express safety properties in terms of Discrete Timed Automata (DTA), which are more expressive than our class of regular properties. Thus, an execution trace satisfies a required property only if it ends up in a final state of the corresponding DTA. However, as not all regular properties can be enforced [10], they proposed a more permissive enforcement mechanism that accepts execution traces as long as there is still the possibility of reaching a final state. Furthermore, due to the instantaneity of the synchronous approach, their enforcement actions are applied in the same reaction step to ensure reactivity. On the contrary, in our approach, the enforcement takes place before the conclusion of scan cycles, which are clearly delimited via \({\scriptstyle \mathsf {end}}\) -actions. Our notion of deterministic enforcers is taken from Pinisetty et al. [54]. Moreover, when inserting safe actions, our synthesized enforcers follow Pinisetty et al.’s random edit approach, where the inserted safe action is randomly chosen from a list of admissible actions.
    Pearce et al. [53] proposed a bi-directional runtime enforcement over valued signals for PLCs, by introducing smart I/O modules (similar to our secure proxy) between the PLCs and the controlled physical processes, to act as an effective line of defense. The authors express security properties in terms of Values Discrete Timed Automata (VDTA), inspired by the DTA of Pinisetty et al. [54]. Unlike DTA, VDTA support valued signals, internal variables, and guard conditions. As in Pinisetty et al. [54], the article adopts the synchronous hypothesis [11] to correct both inputs and outputs; thus, their enforcement actions are applied in the same reaction step to ensure instantaneous reactivity. The authors do not consider attacks that may tamper with inter-controller communications: their attackers may only manipulate sensor signals and/or actuator commands. Finally, their semantics requires that every enforcer knows the state of all relevant signals and commands in a given system. Thus, as written by the same authors, a networked system featuring multiple I/O modules may significantly complicate the enforcement, as pertinent I/O for a security policy may not be locally available. As a consequence, unlike us, their enforcement does not naturally scale to networks of controllers; we believe this is basically due to the fact that they do bi-directional enforcement. Last but not least, like them, we implement enforcers via FPGAs to ensure efficiency and security at the same time. In particular, when inserting safe actions our implementation fixes a priority between admissible safe actions, similarly to their selected edit approach. However, our implementation differs from theirs in at least the following aspects: (1) our FPGAs do enforce PLC transmissions (with a negligible latency); (2) our enforcement is uni-directional and hence our FPGAs need to know only the state of signals and commands of the corresponding enforced PLCs; (3) as a consequence, our FPGAs can be networked to monitor field communications networks paying only negligible overhead in terms of computational resources and communication latency.
Aceto et al. [6] developed an operational framework to enforce properties in HML logic with recursion ( \(\mu\) HML) relying on suppression only. More precisely, they achieved the enforcement of a safety fragment of \(\mu\) HML by providing a linear automated synthesis algorithm that generates correct suppression monitors from formulas. Enforceability of the modal \(\mu\) -calculus (a reformulation of \(\mu\) HML) was previously tackled by Martinelli and Matteucci [45] by means of a synthesis algorithm which is exponential in the length of the enforceable formula. Cassar [18] defined a general framework to compare different enforcement models and different correctness criteria, including optimality. His work focuses on the enforcement of a safety fragment of \(\mu\) HML, paying attention to both uni-directional and bi-directional notions of enforcement. More recently, Aceto et al. [7] developed an operational framework for bi-directional enforcement and used it to study the enforceability of the aforementioned safety fragment of HML with recursion, via a specific type of bi-directional enforcement monitors called action disabling monitors.
As regards articles in the context of control system security closer to our objectives, McLaughlin [48] proposed the introduction of an enforcement mechanism, called \(\mathrm{C}^2\) , similar to our secure proxy, to mediate the control signals \(u_k\) transmitted by the PLC to the plant. Thus, like our secure proxy, \(\mathrm{C}^2\) is able to suppress commands, but unlike our proxy, it cannot autonomously send commands to the physical devices in the absence of a timely correct action from the PLC. Furthermore, \(\mathrm{C}^2\) does not cope with inter-controller communications, and hence with colluding malware operating on PLCs of the same field network.
Mohan et al. [50] proposed a different approach by defining an ad-hoc security architecture, called Secure System Simplex Architecture (S3A), with the intention of generalizing the notion of “correct system state” to include not just the physical state of the plant but also the cyber state of the PLCs of the system. In S3A, every PLC runs under the scrutiny of a side-channel monitor which looks for deviations with respect to safe executions, taking care of real-time constraints, memory usage, and communication patterns. If the information obtained via the monitor differs from the expected model(s) of the PLC, a decision module is informed to decide whether to pass control from the “potentially compromised” PLC to a safety controller to maintain the plant within the required safety margins. As reported by the same authors, S3A has a number of limitations, including: (i) the possible compromise of the side channels used for monitoring, and (ii) the tuning of the timing parameters of the state machine, which is still a manual process.
The present work is a revised and extended version of the conference article that appeared in [38]. Here, we provide a detailed comparison with that article. In Section 2, we have specified the attacker model and the attacker objectives. In Section 3, we have adopted a simplified operational semantics for edit automata, in the style of Martinelli and Matteucci [45]. In Section 5, we have extended our language of regular properties with the intersection of both local and global properties. With this extension we can express a wide family of correctness properties that can be combined in a modular fashion; these properties include and extend the three classes of properties appearing in the conference article. In Section 6, we have extended our synthesis algorithm to deal with our extended properties: both local and global intersections of properties are synthesized in terms of cross products of edit automata. Notice that, compared to the conference article, our enforcement mechanism does not rely anymore on an ad-hoc semantic rule (Mitigation) to insert safe actions at the end of the scan cycle, but rather on the more standard rule (Insert) together with the syntactic structure of the synthesized enforcers. As stated in Proposition 1, the complexity of our synthesis algorithm now depends on the size of the property in input and on the number of occurrences of its intersection operators. Last but not least, in this journal version we provide an implementation of our use case based on: (i) Simulink to simulate the physical plant, (ii) OpenPLC on Raspberry Pi to run the PLCs, and (iii) FPGAs to implement the enforcers. We have then exposed our implementation to five different attacks targeting the PLCs and discussed the effectiveness of the proposed enforcement mechanism.
    In a preliminary work [39], we proposed an extension of our process calculus with an explicit representation for malware code. In that article, monitors are synthesized from PLC code rather than correctness properties. The focus of that article was mainly on: (i) deadlock-free enforcement, and (ii) intrusion detection via secure proxies. Here, it is worth pointing out that the work in [39] shares some similarities with supervisory control theory [15, 58], a general theory for the automatic synthesis of controllers (supervisors) for discrete event systems, given a plant model and a specification for the controlled behavior. Fabian and Hellgren [20] have pointed out a number of issues to be addressed when adopting supervisory control theory in industrial PLC-based facilities, such as causality, incorrect synchronization, and choice between alternative paths. However, our synthesis is plant-independent as it returns an enforcer from a given logical property (the plant does not play any role).
Finally, Yoong et al. [69] proposed a synchronous semantics for function blocks, a component-oriented model at the core of the IEC 61499 international standard [2] used to design distributed industrial process measurement and control systems. In contrast to the scan-cycle model followed in the current article (IEC 61131 [1]), which prescribes the execution of a sequential portion of code at each scan cycle, the event-driven model of function blocks relies on the occurrence of asynchronous events to trigger program execution. Yoong et al. [69] adopted a synchronous approach to define an execution semantics for function blocks by translating them into a subset of Esterel [11], a well-known synchronous language. Here, we wish to point out that our PLC specification is given at a more abstract level compared to that of [69], and it complies with the sequential scan-cycle standard IEC 61131, rather than the event-driven standard IEC 61499.

    9 Conclusions and Future Work

We have defined a formal language to express networks of monitored controllers, potentially compromised by colluding malware that may forge/drop actuator commands, modify sensor readings, and forge/drop inter-controller communications. The enforcing monitors have been expressed via a finite-state sub-class of Ligatti et al.’s edit automata. In this manner, we have provided a formal representation of field communications networks in which controllers are enforced via secure monitors, as depicted in Figure 2. The room for maneuver of attackers is defined via a proper attacker model. Then, we have defined a simple description language to express timed regular properties that are recognized by finite automata whose cycles always contain at least one final state (denoted via an \({\scriptstyle \mathsf {end}}\) -action). We have used that language to build up formal definitions for pattern templates suitable for expressing a broad family of correctness properties that can be combined in a modular fashion to prescribe precise controller behaviors. As an example, our description language allows us to capture all (bounded variants of the) controller properties studied in Frehse et al. [24]. Having defined a formal language to describe controller properties, we have provided a synthesis function \(\langle \!| \ \!-\!\ |\! \rangle\) that, given an alphabet \(\mathcal {P}\) of observable controller actions and a deterministic regular property e consistent with \(\mathcal {P}\) , returns a finite-state deterministic edit automaton \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }\) . The resulting enforcement mechanism ensures the required features advocated in the Introduction: transparency, soundness, deadlock-freedom, divergence-freedom, mitigation, and scalability.
As a final contribution, we have provided a full implementation of a non-trivial case study in the context of industrial water treatment, where enforcers are implemented via FPGAs. In this setting, we have shown the effectiveness of our enforcement mechanism when exposed to five carefully-designed attacks targeting the PLCs of our use case.
As future work, we wish to test our enforcement mechanism in different domains, such as industrial and cooperative robotic arms (e.g., Kuka, ABB, Universal Robots), which are endowed with control architectures working at a fixed rate [61]. More generally, we would like to consider physical plants with significant uncertainties, in terms of measurement noise, possibly due to malicious alterations of sensor devices, and of physical-process uncertainty. To address such challenges we intend to integrate our secure proxies with physics-based attack detection mechanisms [17, 26] based on well-known control-theory algorithms to correctly estimate the state of the physical plant. Furthermore, notice that in the proposed enforced secure architecture all outputs of the monitored controllers are handed over to the associated proxies via a dedicated logical connection. Thus, the secure proxy has full observability of the controller outputs. However, the measurements coming from the plants are much less reliable, as sensor devices are exposed to failures. So, we would like to extend our enforcement mechanism under the hypothesis of partial observability of the measurements, taking inspiration from the works by Yin and Lafortune [67, 68].
    Appendix

    10 Proofs

In order to prove the results of Section 6, in Table 6 we provide the technical definition of the cross product between two edit automata used in the synthesis of Table 5. As the first three cases are straightforward, we explain only the fourth case, the cross product associated with \(\mathsf {Prod}^{\mathcal {P}}_{\mathsf {Z}}(\sum _{i \in I}\lambda _i.\mathsf {E}_i,\sum _{j \in J}\nu _j.\mathsf {E}_j)\) . Here, we use the abbreviation \(\lambda .\mathsf {E}\oplus \lambda ^{\prime }.\mathsf {E}\) to denote the automaton \(\lambda .\mathsf {E}+ \lambda ^{\prime }.\mathsf {E}\) , if \(\lambda \ne \lambda ^{\prime }\) , and the automaton \(\lambda .\mathsf {E}\) , if \(\lambda = \lambda ^{\prime }\) . Thus, the product does the intersection of those addends \(\lambda _i.\mathsf {E}_i\) and \(\nu _j.\mathsf {E}_j\) , with \((i,j) \in H\) , for which: (a) the prefixes have the same output (e.g., \(\lambda _i=\alpha\) and \(\nu _j=\alpha \! \prec \! \alpha ^{\prime }\) ), (b) the prefixes are not suppressions, and (c) the product of their continuations \(\mathsf {E}_i\) and \(\mathsf {E}_j\) “is not empty”, i.e., it is not a suppression-only automaton. For the other addends \(\lambda _i.\mathsf {E}_i\) and \(\nu _j.\mathsf {E}_j\) , which do not comply with the above conditions (i.e., \((i,j) \not\in H\) ), the product results in a suppression-only automaton.
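    As a small, purely illustrative instance of this abbreviation, consider two distinct prefixes sharing the same output \(\alpha\) , as in condition (a) above, say \(\lambda = \alpha\) and \(\lambda ^{\prime } = \alpha \! \prec \! \alpha ^{\prime }\) . Then the abbreviation unfolds as follows:
    \begin{equation*} \alpha .\mathsf {E}\oplus (\alpha \! \prec \! \alpha ^{\prime }).\mathsf {E} \;=\; \alpha .\mathsf {E} + (\alpha \! \prec \! \alpha ^{\prime }).\mathsf {E}, \qquad \qquad \alpha .\mathsf {E}\oplus \alpha .\mathsf {E} \;=\; \alpha .\mathsf {E}, \end{equation*}
    that is, distinct prefixes give rise to two summands, whereas equal prefixes collapse into a single one.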
    Let us now prove the complexity bound of the synthesis algorithm, as formalized in Proposition 1. For that, we need three technical lemmata. The first lemma shows that our synthesis algorithm always returns an edit automaton in a specific canonical form.
    Lemma 1 (Canonical Form).
    Let \(e \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\) and \(\mathcal {P}\) be a set of actions such that \(\mathsf {events}(e) \subseteq \mathcal {P}\) . Then, either \({ \langle \! | e | \! \rangle _{}^{\mathcal {P}}}=\mathsf {E}\) or \({ \langle \! | e | \! \rangle _{}^{\mathcal {P}}} = \mathsf {Z}\) , with \(\mathsf {Z}= \mathsf {E}\) , for \(\mathsf {E}\) of the following form:
    where \(\alpha _i \in \mathcal {P}\) , \({\mathcal {Q}}= {\mathcal {P}} \setminus (\cup _{i \in I}\alpha _i\cup \lbrace {\scriptstyle \mathsf {tick}},{\scriptstyle \mathsf {end}}\rbrace)\) , and \(\mathsf {E}_i\) and \(\mathsf {F}\) edit automata. A similar result holds when e is replaced with some local property \(p \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) .
    Proof.
    The proof is by induction on the structure of the property e. The most interesting case is when \(e= e_1\cap e_2\) . Then, \({ \langle \! | e_1\cap e_2 | \! \rangle _{}^{\mathcal {P}}}\) returns \(\mathsf {Prod}^{\mathcal {P}}_{\mathsf {X}}({ \langle \! | e_1 | \! \rangle _{}^{\mathcal {P}}},{ \langle \! | e_2 | \! \rangle _{}^{\mathcal {P}}})\) . By inductive hypothesis, \({ \langle \! | e_1 | \! \rangle _{}^{\mathcal {P}}}\) and \({ \langle \! | e_2 | \! \rangle _{}^{\mathcal {P}}}\) have the required form. We prove the case when
    , with \({\mathcal {Q}}_1={\mathcal {P}} \setminus (\cup _{i \in I}\alpha _i\cup \lbrace {\scriptstyle \mathsf {tick}},{\scriptstyle \mathsf {end}}\rbrace)\) and \({\scriptstyle \mathsf {end}}\not\in \cup _{i \in I}\alpha _i.\)
    , with \({\mathcal {Q}}_2={\mathcal {P}} \setminus (\cup _{j \in J}\alpha _j\cup \lbrace {\scriptstyle \mathsf {tick}},{\scriptstyle \mathsf {end}}\rbrace)\) and \({\scriptstyle \mathsf {end}}\in \cup _{j \in J}\alpha _j\) .
    The other cases are similar or simpler. For any \(i \in I\) and \(j\in J\) , we have: (i) \(\mathit {out}(\alpha _i)= \mathit {out}(\alpha _j)\) if and only if \(\alpha _i =\alpha _j\) ; (ii) \(\mathit {out}(\alpha _i\! \prec \! {\scriptstyle \mathsf {end}})=\mathit {out}(\alpha _j)\) holds if and only if \(\alpha _i =\alpha _j\) . We recall that \(\mathit {out}(^{-}\alpha)= \tau\) . Thus, the set H of the definition of cross product in Table 6 for \(\mathsf {Prod}^{\mathcal {P}}_{\mathsf {X}}({ \langle \! | e_1 | \! \rangle _{}^{\mathcal {P}}},{ \langle \! | e_2 | \! \rangle _{}^{\mathcal {P}}})\) is equal to \(\lbrace (i,j) \in I \times J: \alpha _i = \alpha _j { \text{ and } \mathsf {Prod}^{\mathcal {P}}_{ \mathsf {X}_{i,j} }(\mathsf {E}_i,\mathsf {F}_j)}\ne \sum _{ \alpha \in \mathcal {P}\setminus \lbrace {\scriptstyle \mathsf {tick}},{\scriptstyle \mathsf {end}}\rbrace }{^{-}}\alpha . \mathsf {X}_{i,j} \rbrace\) , with \(\mathsf {X}_{i,j}=\mathsf {Prod}^{\mathcal {P}}_{ \mathsf {X}_{i,j}}(\mathsf {E}_i,\mathsf {F}_j)\) . As a consequence, we derive
    \begin{equation*} \mathsf {Prod}^{\mathcal {P}}_{\mathsf {X}}({ \langle \! | e_1 | \! \rangle _{}^{\mathcal {P}}},{ \langle \! | e_2 | \! \rangle _{}^{\mathcal {P}}})= \sum \limits _{(i,j)\in H} \alpha _i. \mathsf {X}_{i,j} + \sum \limits _{(i,j)\in H} {\scriptstyle \alpha _i \prec \, {\scriptstyle \mathsf {end}}}. \mathsf {X}_{i,j}+ \sum \limits _{{{\scriptscriptstyle \alpha \in {\mathcal {Q}} }}} {^{-}}\alpha .\mathsf {X}, \end{equation*}
    with \({\mathcal {Q}} ={\mathcal {P}} \setminus (\cup _{(i,j) \in H}\alpha _i\cup \lbrace {\scriptstyle \mathsf {tick}},{\scriptstyle \mathsf {end}}\rbrace)\) . It remains to prove that \({\scriptstyle \mathsf {end}}\not\in \cup _{(i,j) \in H}\alpha _i\) . Since \({\scriptstyle \mathsf {end}}\not\in \cup _{i \in I}\alpha _i\) and \({\scriptstyle \mathsf {end}}\in \cup _{j \in J}\alpha _j\) , then there is no \((i,j)\in H\) such that \(\alpha _i={\scriptstyle \mathsf {end}}\) . Thus, \({\scriptstyle \mathsf {end}}\not\in \cup _{(i,j) \in H}\alpha _i\) , as required.□
    By an application of Lemma 1, we derive a second lemma which extends a classical result on the complexity of the cross product of finite state automata to the cross product of (synthesized) edit automata.
    Lemma 2.
    Let \(e_1, e_2\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G},\) and \(\mathcal {P}\) be a set of observable actions. Let \(v_1,v_2\) be the number of derivatives of \(\langle \! | e_1 | \! \rangle _{}^\mathcal {P}\) and \(\langle \! | e_2 | \! \rangle _{}^\mathcal {P}\!\) , respectively.5 The complexity of the algorithm to compute \(\mathsf {Prod}^{\mathcal {P}}_{\mathsf {X}}(\langle \! | e_1 | \! \rangle _{}^\mathcal {P}, \langle \! | e_2 | \! \rangle _{}^\mathcal {P})\) is \(\mathcal {O}({|}\mathcal {P}{|} \cdot v_1 \cdot v_2)\) . A similar result holds for edit automata derived from local properties \(p_1, p_2\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) .
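    To give a rough quantitative feeling for this bound (the figures below are purely hypothetical), with an alphabet of \(|\mathcal {P}| = 10\) observable actions and \(v_1 = v_2 = 100\) derivatives, the construction of the cross product requires on the order of
    \begin{equation*} {|}\mathcal {P}{|} \cdot v_1 \cdot v_2 \;=\; 10 \cdot 100 \cdot 100 \;=\; 10^{5} \end{equation*}
    operations, in line with the classical bound for the product of finite-state automata.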
    The third lemma provides an upper bound on the number of derivatives of the automaton \(\langle \! | e | \! \rangle _{}^{\mathcal {P}}\) .
    Lemma 3 (Upper Bound of Number of Derivatives).
    Let \(e \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\) be a global property with \(m= \mathsf {dim}(e)\) , and \(\mathcal {P}\) be a set of observable actions. Then, the number of derivatives of \(\langle \! | e | \! \rangle _{}^{\mathcal {P}}\) is at most \(m^{k+1}\) , where k is the number of occurrences of the symbol \(\cap\) in e.
    Proof.
    The proof is by structural induction on e. Let \(e\equiv e_1 \cap e_2\) and \(m = \mathsf {dim}(e_1 \cap e_2)\) . By definition, the synthesis function calls itself recursively on \(e_1\) and \(e_2\) . Obviously, \(m_1+ m_2=m- 1\) , with \(m_1 = \mathsf {dim}(e_1)\) and \(m_2 = \mathsf {dim}(e_2)\) . Let k, \(k_1,\) and \(k_2\) be the number of occurrences of the symbol \(\cap\) in \(e_1 \cap e_2\) , \(e_1\) , and \(e_2\) , respectively. We deduce that \(k_1+k_2=k-1\) . By the inductive hypothesis, \({ \langle \! | e_1 | \! \rangle _{ }^{\mathcal {P}}}\) has at most \(m_1^{k_1+1}\) derivatives and \({ \langle \! | e_2 | \! \rangle _{}^{\mathcal {P}}}\) has at most \(m_2^{k_2+1}\) derivatives. As the synthesis returns the cross product between \({ \langle \! | e_1 | \! \rangle _{ }^{\mathcal {P}}}\) and \({ \langle \! | e_2 | \! \rangle _{}^{\mathcal {P}}}\) , we derive that the resulting edit automaton will have at most \(m_1^{k_1+1} \cdot m_2^{k_2+1}\) derivatives. The result follows because \(m_1^{k_1+1} \cdot m_2^{k_2+1} \le m^{k_1+1}\cdot m^{k_2+1} \le m^{k_1 + k_2+2} \le m^{k-1+2} \le m^{k+1}\) .
    Let \(e\equiv p^\ast\) , for \(p\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) . In order to analyze this case, as \(m= \mathsf {dim}(p^*) = \mathsf {dim}(p) +1\) and \({\langle \!\mid } p^{\ast } {\mid \!\rangle }^{\mathcal {P}} \triangleq \mathsf {X}\) , for \(\mathsf {X} = \langle \! | p | \! \rangle _{\mathsf {X}}^{\mathcal {P}}\) , we proceed by structural induction on \(p\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) . We focus on the most significant case \(p\equiv \bigcup _{i\in I}\pi _i.p_i\) . We have that \(m-1= \mathsf {dim}(\bigcup _{i\in I}\pi _i.p_i)\) . By definition, the synthesis produces \(|I|\) derivatives, one for each prefix \(\pi _i\) , with \(i \in I\) , and also the derivative \(\mathsf {Z}\) . Furthermore, the synthesis algorithm calls itself recursively \(|I|\) times, once on each \(p_i\) , with \(m_i= \mathsf {dim}(p_i)\) , such that \(m-1 = |I| + \sum _{i \in I} m_i\) . Let k and \(k_i\) be the number of occurrences of \(\cap\) in p and in \(p_i\) , respectively, for \(i\in I\) . We deduce that \(\sum _{i \in I}k_i=k\) . By inductive hypothesis, the synthesis produces at most \(m_i^{k_i+1}\) derivatives on each property \(p_i\) , for \(i\in I\) . Summing up, in this case the number of derivatives is at most \(1+|I| + \sum _{i \in I}m_i^{k_i+1}\) . Finally, the thesis follows as \(1+|I| + \sum _{i \in I}m_i^{k_i+1} \le \sum _{i \in I}m^{k_i+1} \le m^{k+1}\) .□
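    As a purely numerical illustration of the bound (the sizes are hypothetical), consider a global property \(e = e_1 \cap e_2\) with \(m = \mathsf {dim}(e) = 10\) and a single occurrence of \(\cap\) , i.e., \(k=1\) . Then Lemma 3 bounds the number of derivatives of \(\langle \! | e | \! \rangle _{}^{\mathcal {P}}\) by
    \begin{equation*} m^{k+1} \;=\; 10^{2} \;=\; 100 , \end{equation*}
    whereas an intersection-free property of the same size ( \(k=0\) ) yields at most \(m^{1} = 10\) derivatives.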
    Proof of Proposition 1 (Complexity)
    For any property \(e\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\) and any set of observable actions \(\mathcal {P}\) , we proceed by case analysis on the structure of e, examining each synthesis step, in which the synthesis processes \(m = \mathsf {dim}(e)\) symbols. We prove that the recursive structure of the function returning \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }\) can be characterized in the following form: \(T(m) = T(m-1) +{|}\mathcal {P}{|}\cdot m^{k}\) , with \(m= \mathsf {dim}(e)\) and k the number of occurrences of \(\cap\) in e. The result then follows because \(T(m) = T(m-1) +{|}\mathcal {P}{|}\cdot m^{k}\) is \(\mathcal {O}({|}\mathcal {P}{|}\cdot m^{k+1})\) .
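    Indeed, unrolling the recurrence (and assuming a constant cost \(T(0)\) for the base case) gives
    \begin{equation*} T(m) \;=\; T(0) + {|}\mathcal {P}{|}\cdot \sum \limits _{j=1}^{m} j^{k} \;\le \; T(0) + {|}\mathcal {P}{|}\cdot m \cdot m^{k} \;=\; T(0) + {|}\mathcal {P}{|}\cdot m^{k+1}, \end{equation*}
    which is indeed in \(\mathcal {O}({|}\mathcal {P}{|}\cdot m^{k+1})\) .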
    Case \(e \equiv e_1\cap e_2\) . Let \(m= \mathsf {dim}(e_1\cap e_2)\) . By definition, the synthesis \({ \langle \! | e_1\cap e_2 | \! \rangle _{}^{\mathcal {P}}}\) calls itself on \(e_1\) and \(e_2\) , with \(m_1= \mathsf {dim}(e_1)\) and \(m_2= \mathsf {dim}(e_2)\) symbols, respectively, where \(m_1+m_2=m-1\) . Let k be the number of occurrences of \(\cap\) in e and \(k_1,k_2\) be the number of occurrences of \(\cap\) in \(e_1\) and \(e_2\) , respectively. We deduce that \(k_1 + k_2 =k -1\) . By an application of Lemma 2, the complexity of the algorithm to compute \(\mathsf {Prod}^{\mathcal {P}}_{\mathsf {X}}(\langle \! | e_1 | \! \rangle _{}^\mathcal {P}, \langle \! | e_2 | \! \rangle _{}^\mathcal {P})\) is \(\mathcal {O}({|}\mathcal {P}{|} \cdot v_1 \cdot v_2)\) , where \(v_1\) and \(v_2\) are the number of derivatives of \(\langle \! | e_1 | \! \rangle _{}^\mathcal {P}\) and \(\langle \! | e_2 | \! \rangle _{}^\mathcal {P}\) , respectively. By an application of Lemma 3, we have that \(v_1 \le m_1^{k_1+1}\) and \(v_2\le m_2^{k_2+1}\) . Thus, the number of operations required for the cross product between \({ \langle \! | e_1 | \! \rangle _{ }^{\mathcal {P}}}\) and \({ \langle \! | e_2 | \! \rangle _{ }^{\mathcal {P}}}\) is \(\mathcal {O}({|}\mathcal {P}{|}\cdot m_1^{k_1+1}\cdot m_2^{k_2+1})\) . Hence, we can characterize the recursive structure as: \(T(m) = T(m_1) + T(m_2) + {|}\mathcal {P}{|}\cdot m_1^{k_1+1}\cdot m_2^{k_2+1}\) . We notice that the complexity of this recursive form is smaller than the complexity of \(T(m-1) +{|}\mathcal {P}{|}\cdot m^{k}\) .
    Case \(e \equiv p^\ast\) . In order to prove this case, as \(m= \mathsf {dim}(p^*) = \mathsf {dim}(p) +1\) and \({\langle \!\mid } p^{\ast } {\mid \!\rangle }^{\mathcal {P}} \triangleq \mathsf {X}\) , for \(\mathsf {X} = \langle \! | p | \! \rangle _{\mathsf {X}}^{\mathcal {P}}\) , we proceed by case analysis on the local properties \(p\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) . We focus on the most significant case \(p\equiv \bigcup _{i\in I}\pi _i.p_i\) . We have that \(m-1= \mathsf {dim}(\bigcup _{i\in I}\pi _i.p_i)\) . By definition, the synthesis \({\langle \!\mid } \bigcup _{i\in I}\pi _i.p_i {\mid \!\rangle }^{\mathcal {P}}\) consumes all events \(\pi _i\) , for \(i\in I\) . The synthesis algorithm calls itself recursively \({|I|}\) times, once on each \(p_i\) , with \(\mathsf {dim}(p_i)\) symbols, for \(i \in I\) . Furthermore, letting l be the size of the set \(\mathcal {P}\) , the algorithm performs at most l operations due to a summation over \(\alpha \in \mathcal {P}\setminus (\bigcup _{i \in I}\pi _i\cup \lbrace {\scriptstyle \mathsf {tick}},{\scriptstyle \mathsf {end}}\rbrace)\) , with \(|\! \mathcal {P}\setminus (\bigcup _{i \in I}\pi _i\cup \lbrace {\scriptstyle \mathsf {tick}},{\scriptstyle \mathsf {end}}\rbrace) \!| \lt l\) . Thus, we can characterize the recursive structure as \(T(m) = \sum _{i \in I} T(\mathsf {dim}(p_i)) + l\) . Since \(\sum _{i \in I} \mathsf {dim}(p_i) = m-1-{|I|}\le m-1\) , the resulting complexity is smaller than that of \(T(m-1)+{|}\mathcal {P}{|}\cdot m^{k}\) .□
    Proof of Proposition 2 (Deterministic Preservation)
    We reason by contradiction. Suppose there is a sum \(\sum _{i \in I} \lambda _i.\mathsf {E}_i\) appearing in \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }\) such that \(\mathit {trigger}(\lambda _k) = \mathit {trigger}(\lambda _j)\) and \(\mathit {out}({\lambda _k}) = \mathit {out}({\lambda _j})\) , for some \(k,j \in I\) , \(k\ne j\) . We proceed by case analysis on the structure of the property e. Let us focus on the case \(e=\bigcup _{i\in I}\pi _i.p_i\) . The other cases are simpler. Then, \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }\) is equal to \(\mathsf {Z}\) , for
    and \({\mathcal {Q}} = {\mathcal {P}} \setminus (\cup _{i \in I}\pi _i\cup \lbrace {\scriptstyle \mathsf {tick}},{\scriptstyle \mathsf {end}}\rbrace)\) . Since e is deterministic (Definition 5), it follows that \(\pi _h \ne \pi _l\) , for any \(h,l \in I\) , \(h\ne l\) . As a consequence, it cannot be the case that \(\lambda _k= \pi _h\! \prec \! {\scriptstyle \mathsf {end}}\) , for \(h \in I\) , and \(\lambda _j= \pi _l\! \prec \! {\scriptstyle \mathsf {end}}\) , for \(l \in I\) , \(h \ne l\) , because \(\mathit {out}({\lambda _k})= \pi _h \ne \pi _l = \mathit {out}({\lambda _j})\) . Thus, the only possibility for \(\mathsf {Z}\) to be nondeterministic is that \(\lambda _k = \pi _h\) , for \(h \in I\) , and \(\lambda _j= \pi _l\! \prec \! {\scriptstyle \mathsf {end}}\) , for \(l \in I\) , in the case \({\scriptstyle \mathsf {end}}\not\in \cup _{i \in I}\pi _i\) . However, this is not admissible because \({\scriptstyle \mathsf {end}}\not\in \cup _{i \in I}\pi _i\) implies \(\mathit {trigger}(\lambda _k) = \pi _h \ne {\scriptstyle \mathsf {end}}= \mathit {trigger}(\lambda _j)\) .□
    In order to prove Theorem 1, we first need a standard correctness result for the cross product of edit automata: any execution trace associated with the intersection of two regular properties is also a trace of the cross product of the edit automata associated with those properties, and vice versa.
    Lemma 4 (Correctness of Cross Product).
    Let \(e_1,e_2 \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\) (respectively, \(p_1,p_2 \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) ) and \(\mathcal {P}\) be a set of actions such that \(\mathsf {events}(e_1\cap e_2) \subseteq \mathcal {P}\) (respectively, \(\mathsf {events}(p_1\cap p_2) \subseteq \mathcal {P}\) ). Then, it holds that:
    If t is a trace of \({\mathsf {Prod}^{\mathcal {P}}_{\mathsf {X}}({ \langle \! | e_1 | \! \rangle _{ }^{\mathcal {P}}},{ \langle \! | e_2 | \! \rangle _{ }^{\mathcal {P}}})}\) (respectively, \({\mathsf {Prod}^{\mathcal {P}}_{\mathsf {X}}({ \langle \! | p_1 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}},{ \langle \! | p_2 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}})}\) ), then \(\widehat{\mathit {out}(t)}\) is a prefix of some trace in the semantics \({[\!\![} e_1\cap e_2{]\!\!]}\) (respectively, \({[\!\![} p_1\cap p_2{]\!\!]}\) ).
    If t is a trace in \({[\!\![} e_1\cap e_2{]\!\!]}\) (respectively, \({[\!\![} p_1\cap p_2{]\!\!]}\) ) then there exists a trace \(t^{\prime }\) of \({\mathsf {Prod}^{\mathcal {P}}_{\mathsf {X}}({ \langle \! | e_1 | \! \rangle _{ }^{\mathcal {P}}},{ \langle \! | e_2 | \! \rangle _{ }^{\mathcal {P}}})}\) (respectively, \({\mathsf {Prod}^{\mathcal {P}}_{\mathsf {X}}({ \langle \! | p_1 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}},{ \langle \! | p_2 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}})}\) ) such that \(\widehat{\mathit {out}(t^{\prime })}=t\) .
    Proof of Theorem 1 (Transparency)
    We prove a stronger result. Let \(e\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\) be a global deterministic property and \(P \in \mathbb {C}{𝕥𝕣𝕝}\) be a controller such that \({P}\mathop \rightarrow^ {t} J\) , for some trace \(t=\alpha _1 \cdots \alpha _n\) . If t is a prefix of some trace in the semantics \({[\!\![} e{]\!\!]}\) , then the following sub-results hold:
    There exists a unique \(\mathsf {E}\) such that \({ \langle \! | e | \! \rangle _{ }^{\mathcal {P}}} \mathop \rightarrow^ {t}\mathsf {E}\) where either \(\mathsf {E}= { \langle \! | p^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) or \(\mathsf {E}= \mathsf {Z}\) , with \(\mathsf {Z}= { \langle \! | p^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) , for some \(p^{\prime }\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) and some automaton variable \(\mathsf {X}\) .
    There is a trace \(t^{\prime }\in {[\!\![} p^{\prime }{]\!\!]}\) such that \(t\cdot t^{\prime }\) is a prefix of some trace in \({[\!\![} e{]\!\!]}\) .
    There is no trace \(t^{\prime \prime }=\alpha _1 \cdots \alpha _k\cdot \lambda\) for \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }\!\) such that \(0 \le k \lt n\) and \(\lambda \in \lbrace {^{-}{\alpha _{k+1}}}, \alpha \prec \alpha _{k+1}\rbrace\) , for some \(\alpha\) .
    These three sub-results imply the required result. We proceed by induction on the length n of trace t.
    Base case: \(n=1\) . That is \(t=\alpha\) , with \(\alpha \in \mathsf {Sens}\cup \overline{\mathsf {Act}}\cup \mathsf {Chn}\cup \overline{\mathsf {Chn}} \cup \lbrace {\scriptstyle \mathsf {tick}},{\scriptstyle \mathsf {end}}\rbrace\) . We proceed by induction on the structure of e.
    Case \(e \equiv p^\ast\) , for some \(p \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) . We prove the following three results:
    (i) there exists a unique \(\mathsf {E}\) such that \({ \langle \! | p | \! \rangle _{\mathsf {X}^{\prime }}^{\mathcal {P}}} \mathop \rightarrow^ {\alpha } \mathsf {E}\) and either \(\mathsf {E}= { \langle \! | p^{\prime } | \! \rangle _{\mathsf {X}^{\prime }}^{\mathcal {P}}}\) or \(\mathsf {E}= \mathsf {Z}\) , with \(\mathsf {Z}= { \langle \! | p^{\prime } | \! \rangle _{\mathsf {X}^{\prime }}^{\mathcal {P}}}\) , for some \(p^{\prime }\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) and some automaton variable \(\mathsf {X}^{\prime }\) ;
    (ii) there is a trace \(t^{\prime } \in {[\!\![} p^{\prime }{]\!\!]}\) such that \(\alpha \cdot t^{\prime }\) is a prefix of some trace in \({[\!\![} p{]\!\!]}\) ;
    (iii) there is no \(\lambda \in \lbrace {^{-}{\alpha }}, \alpha ^{\prime } \prec \alpha \rbrace\) such that \({ \langle \! | p | \! \rangle _{\mathsf {X}^{\prime }}^{\mathcal {P}}} \mathop \rightarrow^ {\lambda } \mathsf {E}^{\prime }\) , for some \(\mathsf {E}^{\prime }\) .
    As \({\langle \!| \!{p^\ast }\! | \! \rangle ^{\mathcal {P}} } \triangleq \mathsf {X}\) , with \({\mathsf {X}} = { \langle \! | p | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) , results (i)–(iii) imply the required facts (1)–(3) for \(e=p^\ast\) . We proceed as follows: first, we prove items (i) and (ii) by induction on the structure of p, and then we prove item (iii) by contradiction.
    We prove items (i) and (ii). We focus on the most significant cases: \(p= \bigcup _{i\in I}\pi _i.p_i\) and \(p \equiv p_1 \cap p_2\) . The other cases are similar or simpler.
    Let \(p \equiv \bigcup _{i\in I}\pi _i.p_i\) . In this case, \(\alpha\) is a prefix of some trace in \({[\!\![} p{]\!\!]}\) and \({ \langle \! | p | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) returns \(\mathsf {Z}^{\prime }\) , for
    where \({\mathcal {Q}} = {\mathcal {P}} \setminus (\cup _{i \in I}\pi _i\cup \lbrace {\scriptstyle \mathsf {tick}},{\scriptstyle \mathsf {end}}\rbrace)\) . Since \(\alpha\) is a prefix of some trace in \({[\!\![} p{]\!\!]}\) , \(\pi _i\ne \epsilon\) for any \(i\in I\) , and e is deterministic, we derive that \(\alpha =\pi _k\) , for a unique index \(k \in I\) .
    Let us prove (i). Since k is the unique index such that \(\alpha =\pi _k\) , we derive that \({ \langle \! | p | \! \rangle _{\mathsf {X}}^{\mathcal {P}}} \mathop \rightarrow^ {\alpha } \mathsf {E}\) is the unique transition labeled \(\alpha\) such that either \(\mathsf {E}= { \langle \! | p_k | \! \rangle _{\mathsf {Z}^{\prime }}^{\mathcal {P}}}\) or \(\mathsf {E}= \mathsf {Z}_1\) , with \(\mathsf {Z}_1 = { \langle \! | p_k | \! \rangle _{\mathsf {Z}^{\prime }}^{\mathcal {P}}}\) .
    Let us prove (ii). Since \(P\mathop \rightarrow^ {\alpha }J\) and \(\alpha =\pi _k\) , by inductive hypothesis there exists \(t^{\prime }\in {[\!\![} p_k{]\!\!]}\) such that \(\alpha \cdot t^{\prime }\) is a prefix of some trace in \({[\!\![} \pi _k.p_k{]\!\!]}\) , and hence also in \({[\!\![} p{]\!\!]}\) , as required.
    Let \(p \equiv p_1\cap p_2\) . In this case, we have that \(\alpha\) is prefix of some trace in \({[\!\![} p{]\!\!]}\) and the synthesis \({ \langle \! | p | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) returns the edit automaton \(\mathsf {E}_p=\mathsf {Prod}^{\mathcal {P}}_{\mathsf {X}}({ \langle \! | p_1 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}},{ \langle \! | p_2 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}})\) .
    Let us prove (i). By definition of cross product in Table 6, the most interesting case is when \({ \langle \! | p_1 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}} ={\sum _{i \in I}\lambda _i.\mathsf {E}_i}\) and \({ \langle \! | p_2 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}} = {\sum _{j \in J}\nu _j.\mathsf {F}_j}\) . In this case,
    for \(\mathsf {X}_{i,j}=\mathsf {Prod}^{\mathcal {P}}_{ \mathsf {X}_{i,j}}(\mathsf {E}_i,\mathsf {F}_j)\) and \({\mathcal {Q}}= (\mathcal {P}\setminus \lbrace {\scriptstyle \mathsf {tick}},{\scriptstyle \mathsf {end}}\rbrace) \setminus \bigcup _{(i,j) \in H}\lbrace \lambda _i,\nu _j\rbrace\) and \(H =\lbrace (i,j) \in I {\times } J: \mathit {out}(\lambda _i) = \mathit {out}(\nu _j) \ne \tau { \text{ and } \mathsf {Prod}^{\mathcal {P}}_{ \mathsf {X}_{i,j} }(\mathsf {E}_i,\mathsf {F}_j)}\ne \sum \limits _{ \alpha \in \mathcal {P}\setminus \lbrace {\scriptstyle \mathsf {tick}},{\scriptstyle \mathsf {end}}\rbrace }\!\!{^{-}}\alpha . \mathsf {X}_{i,j} \rbrace\) . Now, since \(\alpha\) is a prefix of some trace in \({[\!\![} p {]\!\!]}\) , then \(\alpha\) is a prefix of some trace in both \({[\!\![} p_1{]\!\!]}\) and \({[\!\![} p_2{]\!\!]}\) . Thus, since \(P \mathop \rightarrow^ {\alpha } J\) , by inductive hypothesis there exists a unique \(\mathsf {E}\) such that \({ \langle \! | p_1 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}} \mathop \rightarrow^ {\alpha } \mathsf {E}\) , and either \(\mathsf {E}= { \langle \! | p_{1}^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) or \(\mathsf {E}= \mathsf {Z}_{1}\) , with \(\mathsf {Z}_{1} = { \langle \! | p_{1}^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) , for some \(p_1^{\prime }\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) . Similarly, there exists a unique \(\mathsf {F}\) such that \({ \langle \! | p_2 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}} \mathop \rightarrow^ {\alpha } \mathsf {F}\) , and either \(\mathsf {F}= { \langle \! | p_{2}^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) or \(\mathsf {F}= \mathsf {Z}_{2}\) , with \(\mathsf {Z}_{2} = { \langle \! | p_{2}^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) , for some \(p_2^{\prime }\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) . Since \({ \langle \! | p_1 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}} ={\sum _{i \in I}\lambda _i.\mathsf {E}_i}\) and \({ \langle \! | p_2 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}} = {\sum _{j \in J}\nu _j.\mathsf {F}_j}\) , we have that there exist \(i\in I\) and \(j\in J\) such that \(\mathsf {E}=\mathsf {E}_i\) and \(\mathsf {F}=\mathsf {F}_j\) . By Lemma 4 and by definition of cross product, we have that \((i,j)\in H\) , \(\alpha =\lambda _i\) and \(\mathsf {E}_p \mathop \rightarrow^ {\alpha } \mathsf {X}_{i,j}\) , with \(\mathsf {X}_{i,j}=\mathsf {Prod}^{\mathcal {P}}_{ \mathsf {X}_{i,j}}(\mathsf {E}_i,\mathsf {F}_j)=\mathsf {Prod}^{\mathcal {P}}_{ \mathsf {X}_{i,j}}(\mathsf {E},\mathsf {F})\) . Thus, since \(\mathsf {E}\) and \(\mathsf {F}\) are unique, it follows that \(\mathsf {E}_p \mathop \rightarrow^ {\alpha } \mathsf {X}_{i,j}\) is the only possible transition for \(\mathsf {E}_p\) with label \(\alpha\) . Finally, we have that \(\mathsf {Prod}^{\mathcal {P}}_{ \mathsf {X}_{i,j}}(\mathsf {E},\mathsf {F})= \mathsf {Prod}_{\mathsf {X}}({ \langle \! | p_{1}^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}},{ \langle \! | p_{2}^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}})={ \langle \! | p^{\prime }_1\cap p_2^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) , as required.
    Let us prove (ii). As \(\mathsf {E}_p \mathop \rightarrow^ {\alpha } \mathsf {Prod}^{\mathcal {P}}_{ \mathsf {X}_{i,j}}(\mathsf {E},\mathsf {F})= \mathsf {Prod}^{\mathcal {P}}_{\mathsf {X}}({ \langle \! | p_1^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}},{ \langle \! | p_2^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}) = { \langle \! | p_1^{\prime }\cap p_2^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) , by Lemma 4 we derive that \({[\!\![} p_1^{\prime } \cap p_2^{\prime }{]\!\!]} \ne \emptyset\) . Thus, there exists \(t^{\prime }\in {[\!\![} p_1^{\prime } \cap p_2^{\prime }{]\!\!]}\) . Again, by Lemma 4 it follows that \(\mathsf {E}_p \mathop \rightarrow^ {\alpha }\mathsf {Prod}^{\mathcal {P}}_{\mathsf {X}}({ \langle \! | p_1^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}},{ \langle \! | p_2^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}) \mathop \rightarrow^ {t ^{\prime }}\mathsf {E}^{\prime }\) , for some \(\mathsf {E}^{\prime }\) , with \(\alpha \cdot t^{\prime }\) prefix of some trace in \({[\!\![} p_1 \cap p_2{]\!\!]}\) , as required.
    We have proved items (i) and (ii), for any \(p\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) . It remains to prove item (iii), namely: if \({ \langle \! | p | \! \rangle _{\mathsf {X}^{\prime }}^{\mathcal {P}}} \mathop \rightarrow^ {\alpha } \mathsf {E}\) , then there is no \(\lambda \in \lbrace {^{-}{\alpha }}, \alpha ^{\prime } \prec \alpha \rbrace\) such that \({ \langle \! | p | \! \rangle _{\mathsf {X}^{\prime }}^{\mathcal {P}}} \mathop \rightarrow^ {\lambda }\mathsf {F}\) , for some \(\mathsf {F}\) . By Lemma 1 we have that \({ \langle \! | p | \! \rangle _{\mathsf {X}^{\prime }}^{\mathcal {P}}}=\mathsf {E}^{\prime }\) or \({ \langle \! | p | \! \rangle _{\mathsf {X}^{\prime }}^{\mathcal {P}}} = \mathsf {Z}\) , with \(\mathsf {Z}= \mathsf {E}^{\prime }\) for
    for \(\alpha _i \in \mathcal {P}\) , \({\mathcal {Q}}= {\mathcal {P}} \setminus (\cup _{i \in I}\alpha _i\cup \lbrace {\scriptstyle \mathsf {tick}},{\scriptstyle \mathsf {end}}\rbrace)\) , and \(\mathsf {E}_i\) and \(\mathsf {E}^{\prime \prime }\) edit automata. Since \({ \langle \! | p | \! \rangle _{\mathsf {X}^{\prime }}^{\mathcal {P}}} \mathop \rightarrow^ {\alpha } \mathsf {E}\) it follows that \(\alpha =\alpha _k\) , for some \(k\in I\) . Let us assume by contradiction that \({ \langle \! | p | \! \rangle _{\mathsf {X}^{\prime }}^{\mathcal {P}}} \mathop \rightarrow^ {\lambda } \mathsf {F}\) , for some \(\lambda \in \lbrace {^{-}{\alpha }}, \alpha ^{\prime } \prec \alpha \rbrace\) and some automaton \(\mathsf {F}\) . Since \(\alpha =\alpha _k\) , with \(k \in I\) , we derive that \(\alpha \not\in {\mathcal {Q}} = {\mathcal {P}} \setminus (\cup _{i \in I}\alpha _i\cup \lbrace {\scriptstyle \mathsf {tick}},{\scriptstyle \mathsf {end}}\rbrace)\) , and hence \(\lambda\) cannot be the suppression \({^{-}{\alpha }}\) ; that is, \(\lambda\) is an insertion \(\alpha ^{\prime } \prec \alpha\) , for some \(\alpha ^{\prime }\) . As in \(\mathsf {E}^{\prime }\) the only insertions are of the form \(\alpha _i\! \prec \! {\scriptstyle \mathsf {end}}\) , it follows that \(\alpha ={\scriptstyle \mathsf {end}}\) and \({\scriptstyle \mathsf {end}}\not\in \cup _{i \in I}\alpha _i\) . However, since \({\scriptstyle \mathsf {end}}\not\in \cup _{i \in I}\alpha _i\) it follows that \(\alpha =\alpha _k \ne {\scriptstyle \mathsf {end}}\) . Contradiction.
    Case \(e \equiv e_1 \cap e_2\) , for some \(e_1,e_2\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\) . This case can be proved with a reasoning similar to that of the case \(p_1 \cap p_2\) .
    Inductive case: \(n\gt 1\) , for \(n\in \mathbb {N}\) . Suppose \(P \mathop \rightarrow^ {t} J\) such that t is a prefix of some trace in \({[\!\![} e{]\!\!]}\) . Since \(n\gt 1\) , \(P \mathop \rightarrow^ {t^{\prime }} J^{\prime } \mathop \rightarrow^ {\alpha } J\) , for some trace \(t^{\prime }\) such that \(t=t^{\prime } \cdot \alpha\) . As t is a prefix of some trace in \({[\!\![} e{]\!\!]}\) then \(t^{\prime }\) is a prefix of some trace in \({[\!\![} e{]\!\!]}\) as well. Thus, by inductive hypothesis we have that:
    There exists a unique \(\mathsf {E}^{\prime }\) such that \({ \langle \! | e | \! \rangle _{ }^{\mathcal {P}}} \mathop \rightarrow^ {t^{\prime }} \mathsf {E}^{\prime }\) , and either \(\mathsf {E}^{\prime } = { \langle \! | p^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) or \(\mathsf {E}^{\prime } = \mathsf {Z}\) , with \(\mathsf {Z}= { \langle \! | p^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) , for some \(p^{\prime }\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) and some automaton variable \(\mathsf {X}\) .
    There is a trace \(t^{\prime \prime }\in {[\!\![} p^{\prime }{]\!\!]}\) such that \(t^{\prime }\cdot t^{\prime \prime }\) is a prefix of some trace in \({[\!\![} e{]\!\!]}\) .
    There is no trace \(t^{\prime \prime \prime }=\alpha _1 \cdots \alpha _k\cdot \lambda\) for \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }\!\) such that \(0 \le k \lt n-1\) and \(\lambda \in \lbrace {^{-}{\alpha _{k+1}}}, \alpha ^{\prime } \prec \alpha _{k+1}\rbrace\) , for some \(\alpha ^{\prime }\) .
    Since from (1) \(\mathsf {E}^{\prime }\) is unique and either \(\mathsf {E}^{\prime } = { \langle \! | p^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) or \(\mathsf {E}^{\prime } = \mathsf {Z}\) , with \(\mathsf {Z}= { \langle \! | p^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) , we have to prove: (i) there exists a unique \(\mathsf {E}^{\prime \prime }\) such that \({ \langle \! | p^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}} \mathop \rightarrow^ {\alpha } \mathsf {E}^{\prime \prime }\) , and either \(\mathsf {E}^{\prime \prime } = { \langle \! | p^{\prime \prime } | \! \rangle _{\mathsf {X}^{\prime }}^{\mathcal {P}}}\) or \(\mathsf {E}^{\prime \prime } = \mathsf {Z}\) , with \(\mathsf {Z}= { \langle \! | p^{\prime \prime } | \! \rangle _{\mathsf {X}^{\prime }}^{\mathcal {P}}}\) , for some \(p^{\prime \prime }\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) and some automaton variable \(\mathsf {X}^{\prime }\) ; (ii) there is a trace \(t^{\prime } \in {[\!\![} p^{\prime \prime }{]\!\!]}\) such that \(\alpha \cdot t^{\prime }\) is a prefix of some trace in \({[\!\![} p^{\prime }{]\!\!]}\) ; (iii) there is no \(\lambda \in \lbrace {^{-}{\alpha }}, \alpha ^{\prime } \prec \alpha \rbrace\) , such \({ \langle \! | p^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}} \mathop \rightarrow^ {\lambda } \mathsf {F}\) , for some \(\mathsf {F}\) . These three facts can be proved as previously done for the base case, \(n=1\) .□
    In order to prove Theorem 2, we need a couple of technical lemmata.
    Lemma 5 (Soundness of the Synthesis).
    Let \(e \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\) be a global property and \(\mathcal {P}\) be a set of observable actions such that \(\mathsf {events}(e) \subseteq \mathcal {P}\) . Let \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }\mathop \rightarrow^ {\lambda _1}\cdots \mathop \rightarrow^ {\lambda _n}\mathsf {E}\) be an arbitrary execution trace of the synthesized automaton \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }\) . Then,
    for \(t=\mathit {out}(\lambda _1)\cdot \ldots \cdot \mathit {out}(\lambda _n)\) the trace \(\hat{t}\) is a prefix of some trace in \({[\!\![} e{]\!\!]}\) ;
    either \(\mathsf {E}= { \langle \! | p^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) or \(\mathsf {E}= \mathsf {Z}\) , with \(\mathsf {Z}= { \langle \! | p^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) , for some \(p^{\prime }\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) and some automaton variable \(\mathsf {X}\) .
    Proof.
    We proceed by induction on the length of the execution trace \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }\mathop \rightarrow^ {\lambda _1}\ldots \mathop \rightarrow^ {\lambda _n}\mathsf {E}\) .
    Base case: \(n=1\) . In this case, \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }\mathop \rightarrow^ {\lambda }\mathsf {E}\) . We proceed by induction on the structure of e.
    Case \(e \equiv p^\ast\) , for some \(p \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) . We prove by induction on the structure of p the following two results: (i) for \(\beta =\mathit {out}(\lambda)\) , \(\hat{\beta }\) is a prefix of some trace in \({[\!\![} p{]\!\!]}\) , and (ii) either \(\mathsf {E}= { \langle \! | p^{\prime } | \! \rangle _{\mathsf {X}^{\prime }}^{\mathcal {P}}}\) or \(\mathsf {E}\!=\!\mathsf {Z}\) , with \(\mathsf {Z}\!=\!{ \langle \! | p^{\prime } | \! \rangle _{\mathsf {X}^{\prime }}^{\mathcal {P}}}\) , for some \(p^{\prime }\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) and some automaton variable \(\mathsf {X}^{\prime }\) . As \({\langle \!| \!{p^\ast }\! | \! \rangle ^{\mathcal {P}} } \triangleq \mathsf {X}\) , for \({\mathsf {X}} = { \langle \! | p | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) , results (i) and (ii) imply the required results (1) and (2), for \(e=p^\ast\) . We show the cases \(p\equiv p_1;p_2\) and \(p\equiv p_1 \cap p_2\) ; the other cases are similar or simpler.
    Let \(p\equiv p_1;p_2\) and \({ \langle \! | p_1; p_2 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}} \mathop \rightarrow^ {\lambda }\mathsf {E}\) . We prove the two results (i) and (ii) for \(p_1 \ne \epsilon\) ; the case \(p_1 = \epsilon\) is simpler. By definition, \({ \langle \! | p_1; p_2 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) returns \({ \langle \! | p_1 | \! \rangle _{\mathsf {Z}^{\prime }}^{\mathcal {P}}}\) , for \(\mathsf {Z}^{\prime }={ \langle \! | p_2 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) , and \(\mathsf {Z}^{\prime } \ne \mathsf {X}\) . As a consequence, from \(p_1 \ne \epsilon\) and \({ \langle \! | p_1; p_2 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}} \mathop \rightarrow^ {\lambda }\mathsf {E}\) it follows that \({ \langle \! | p_1 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\mathop \rightarrow^ {\lambda }\mathsf {E}_1\) , for some \(\mathsf {E}_1\) .
    Let us prove (i). Since \({ \langle \! | p_1 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\mathop \rightarrow^ {\lambda }\mathsf {E}_1\) , by inductive hypothesis we have that \(\hat{\beta }\) is a prefix of some trace in \({[\!\![} p_1{]\!\!]}\) . Thus, \(\hat{\beta }\) is a prefix of some trace in \({[\!\![} p_1;p_2{]\!\!]}\) , as required.
    Let us prove (ii). Again, since \({ \langle \! | p_1 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\mathop \rightarrow^ {\lambda }\mathsf {E}_1\) , by inductive hypothesis either \(\mathsf {E}_1 = { \langle \! | p_1^{\prime } | \! \rangle _{\mathsf {Z}^{\prime }}^{\mathcal {P}}}\) or \(\mathsf {E}_1 = \mathsf {Z}_1\) , with \(\mathsf {Z}_1 = { \langle \! | p_1^{\prime } | \! \rangle _{\mathsf {Z}^{\prime }}^{\mathcal {P}}}\) , for some \(p_1^{\prime }\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) and some automaton variable \(\mathsf {Z}^{\prime }\) . Let us analyze \(\mathsf {E}_1 = { \langle \! | p_1^{\prime } | \! \rangle _{\mathsf {Z}^{\prime }}^{\mathcal {P}}}\) (the case \(\mathsf {E}_1 = \mathsf {Z}_1\) , with \(\mathsf {Z}_1 = { \langle \! | p_1^{\prime } | \! \rangle _{\mathsf {Z}^{\prime }}^{\mathcal {P}}}\) , is similar). As \(\mathsf {E}_1 = { \langle \! | p_1^{\prime } | \! \rangle _{\mathsf {Z}^{\prime }}^{\mathcal {P}}}\) with \(\mathsf {Z}^{\prime }={ \langle \! | p_2 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) , by definition of the synthesis algorithm it follows that \(\mathsf {E}_1 ={ \langle \! | p_1^{\prime };p_2 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) , as required.
    Let \(p\equiv p_1\cap p_2\) and \({ \langle \! | p_1\cap p_2 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}} \mathop \rightarrow^ {\lambda }\mathsf {E}\) . By definition, the synthesis algorithm applied to \({ \langle \! | p_1\cap p_2 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) returns \(\mathsf {E}_p=\mathsf {Prod}_{\mathsf {X}}({ \langle \! | p_1 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}},{ \langle \! | p_2 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}})\) . Let us prove the results (i) and (ii).
    Result (i) follows directly from Lemma 4.
    Let us prove (ii). By inspection of the definition of cross product in Table 6, the most interesting case is when \({ \langle \! | p_1 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}} ={\sum _{i \in I}\lambda _i.\mathsf {E}_i}\) and \({ \langle \! | p_2 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}} = {\sum _{j \in J}\nu _j.\mathsf {F}_j}\) . In this case,
    for \(\mathsf {X}_{i,j}=\mathsf {Prod}^{\mathcal {P}}_{ \mathsf {X}_{i,j}}(\mathsf {E}_i,\mathsf {F}_j)\) and \({\mathcal {Q}}= (\mathcal {P}{\setminus } \lbrace {\scriptstyle \mathsf {tick}},{\scriptstyle \mathsf {end}}\rbrace) {\setminus } \bigcup _{(i,j) \in H}\lbrace \lambda _i,\nu _j\rbrace\) and \(H =\lbrace (i,j) \in I {\times } J: \mathit {out}(\lambda _i) = \mathit {out}(\nu _j) \ne \tau { \text{ and } \mathsf {Prod}^{\mathcal {P}}_{ \mathsf {X}_{i,j} }(\mathsf {E}_i,\mathsf {F}_j)}\ne \sum \limits _{ \alpha \in \mathcal {P}\setminus \lbrace {\scriptstyle \mathsf {tick}},{\scriptstyle \mathsf {end}}\rbrace }\!\!{^{-}}\alpha . \mathsf {X}_{i,j} \rbrace\) . Hence \(\mathsf {E}_p\) has the following two (families of) transitions: (a) \(\mathsf {E}_p \mathop \rightarrow^ {\lambda } {\mathsf {X}_{i,j}}\) , for \(\lambda \in \bigcup _{(i,j) \in H}\lbrace \lambda _i,\nu _j\rbrace\) ; (b) \(\mathsf {E}_p\!\!\mathop \rightarrow^ {{^{-}{\alpha }}} \mathsf {Z}\) , for \(\alpha \in \mathcal {Q}\) . We prove the result for the case (a); the case (b) can be proved in a similar manner. Since \(\lambda \in \bigcup _{(i,j) \in H}\lbrace \lambda _i,\nu _j\rbrace\) we have that \(\lambda =\lambda _i\) or \(\lambda =\nu _j\) , for some \((i,j)\in H\) . By definition of cross product, it holds that \({ \langle \! | p_1 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\mathop \rightarrow^ {\lambda _i}\mathsf {E}_i\) and \({ \langle \! | p_2 | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\mathop \rightarrow^ {\nu _j}\mathsf {F}_j\) , with \(\mathit {out}(\lambda _i)=\mathit {out}(\nu _j)=\mathit {out}(\lambda)\) . Thus, by inductive hypothesis we have that: (1) either \(\mathsf {E}_{i} = { \langle \! | p_{1}^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) or \(\mathsf {E}_{i} = \mathsf {Z}_{1}\) , with \(\mathsf {Z}_{1} = { \langle \! | p_{1}^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) , for some \(p_1^{\prime }\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) ; (2) either \(\mathsf {F}_{j} = { \langle \! | p_{2}^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) or \(\mathsf {F}_{j} = \mathsf {Z}_{2}\) , with \(\mathsf {Z}_{2} = { \langle \! | p_{2}^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) , for some \(p_2^{\prime }\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) . Therefore, by definition of cross product we derive that \(\mathsf {Prod}^{\mathcal {P}}_{ \mathsf {X}_{i,j}}(\mathsf {E}_i,\mathsf {F}_j)=\) \(\mathsf {Prod}_{\mathsf {X}}({ \langle \! | p_{1}^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}},{ \langle \! | p_{2}^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}})\) . Finally, by definition of our synthesis it follows that \(\mathsf {Prod}_{\mathsf {X}}({ \langle \! | p_{1}^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}},{ \langle \! | p_{2}^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}) = { \langle \! | p^{\prime }_1\cap p_2^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) , as required.
    Case \(e = e_1 \cap e_2\) for some \(e_1,e_2\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\) . This case can be proved with a reasoning similar to that seen in the proof of case \(p_1 \cap p_2\) .
    Inductive case: \(n\gt 1\) , for \(n\in \mathbb {N}\) . Suppose \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }\mathop \rightarrow^ {\lambda _1}\cdots \mathop \rightarrow^ {\lambda _{n-1}} \mathsf {E}^{\prime } \mathop \rightarrow^ {\lambda _n}\mathsf {E}\) . Thus, by induction, we have that:
    for \(t^{\prime }=\mathit {out}(\lambda _1)\cdot \ldots \cdot \mathit {out}(\lambda _{n-1})\) the trace \(\widehat{t^{\prime }}\) is a prefix of some trace in \({[\!\![} e{]\!\!]}\) , and
    either \(\mathsf {E}^{\prime } = { \langle \! | p^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) or \(\mathsf {E}^{\prime } = \mathsf {Z}\) , with \(\mathsf {Z}= { \langle \! | p^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) , for some \(p^{\prime }\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) and some automaton variables \(\mathsf {Z}\) and \(\mathsf {X}.\)
    Since either \(\mathsf {E}^{\prime } = { \langle \! | p^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) or \(\mathsf {E}^{\prime } = \mathsf {Z}\) , with \(\mathsf {Z}= { \langle \! | p^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) , then to conclude the proof it is sufficient to prove that given \({ \langle \! | p^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\mathop \rightarrow^ {\lambda _n}\mathsf {E}\) and \(\beta _n=\mathit {out}(\lambda _n)\) , it holds that \(\hat{\beta _n}\) is a prefix of some trace in \({[\!\![} p^{\prime }{]\!\!]}\) . For that, we resort to the proof of the base case.□
    In the next lemma, we prove that, given the execution traces of a monitored controller, we can always extract from them the traces performed by its edit automaton and by its controller in isolation.
    Lemma 6 (Trace Decomposition).
    Let \(\mathsf {E}\in \mathbb {E}{𝕕𝕚𝕥}\) be an edit automaton and \(J \in \mathbb {C}{𝕥𝕣𝕝}\) be a controller. Then, for any execution trace \(\mathsf {E}_0 \! \bowtie \! {\boldsymbol { \lbrace }}J_0{\boldsymbol { \rbrace }}\mathop \rightarrow^ {\beta _1} \ldots \mathop \rightarrow^ {\beta _n}\mathsf {E}_n \! \bowtie \! {\boldsymbol { \lbrace }}J_n{\boldsymbol { \rbrace }}\) , with \(\mathsf {E}_0=\mathsf {E}\) and \(J_0=J\) , it holds that (1) \(\mathsf {E}_{i-1} \mathop \rightarrow^ {\lambda _i} \mathsf {E}_i\) , with \(\beta _i = \mathit {out}(\lambda _i)\) , and (2) either \(J_{i-1}\mathop \rightarrow^ {\alpha _i}J_{i}\) , with \(\alpha _i = \mathit {trigger}(\lambda _i)\) , or \(J_{i}=J_{i-1}\) , for \(1 \le i \le n\) .
    Proof.
    The transition \(\mathsf {E}_{i-1} \! \bowtie \! {\boldsymbol { \lbrace }}J_{i-1}{\boldsymbol { \rbrace }}\smash{\mathop \rightarrow^ {\beta _i}}\mathsf {E}_i \! \bowtie \! {\boldsymbol { \lbrace }}J_i{\boldsymbol { \rbrace }}\) , for \(1 \le i \le n\) , can only be derived by applying one of the following rules: (Allow), (Insert), (Suppress). In the case of an application of rule (Allow), \(\mathsf {E}_{i-1} \! \bowtie \! {\boldsymbol { \lbrace }}J_{i-1}{\boldsymbol { \rbrace }}\smash{\mathop \rightarrow^ {\beta _i}}\mathsf {E}_i \! \bowtie \! {\boldsymbol { \lbrace }}J_i{\boldsymbol { \rbrace }}\) derives from \(\mathsf {E}_{i-1} \mathop \rightarrow^ {\alpha _i} \mathsf {E}_i\) and \(J_{i-1}{\mathop \rightarrow^ {\alpha _i }}J_i\) with \(\beta _i=\alpha _i=\lambda _i\) . Hence, \(\mathit {out}(\lambda _i)=\mathit {trigger}(\lambda _i)=\alpha _i\) , as required. In the case of rule (Insert), \(\mathsf {E}_{i-1} \! \bowtie \! {\boldsymbol { \lbrace }}J_{i-1}{\boldsymbol { \rbrace }}\smash{\mathop \rightarrow^ {\beta _i}}\mathsf {E}_i \! \bowtie \! {\boldsymbol { \lbrace }}J_i{\boldsymbol { \rbrace }}\) derives from \(\mathsf {E}_{i-1} \mathop \rightarrow^ {\alpha \prec \alpha _i} \mathsf {E}_i\) and \(J_{i-1}{\mathop \rightarrow^ {\alpha _i }}J\) , for some \(\alpha\) and J, with \(\beta _i=\alpha\) . Thus, \(\mathit {out}(\lambda _i) {=} \mathit {out}(\alpha \prec \alpha _i)=\beta _i\) and \(J_i=J_{i-1}\) , as required. Finally, in the case of rule (Suppress), \(\mathsf {E}_{i-1} \! \bowtie \! {\boldsymbol { \lbrace }}J_{i-1}{\boldsymbol { \rbrace }}\smash{\mathop \rightarrow^ {\beta _i}}\mathsf {E}_i \! \bowtie \! {\boldsymbol { \lbrace }}J_i{\boldsymbol { \rbrace }}\) derives from \(\mathsf {E}_{i-1} \mathop \rightarrow^ {{^{-}{\alpha _i}}} \mathsf {E}_i\) and \(J_{i-1}{\mathop \rightarrow^ {\alpha _i }}J_i\) , for some \(\alpha _i\) , with \(\beta _i=\tau\) and \(\lambda _i = {^{-}{\alpha _i}}\) . Hence, \(\mathit {out}(\lambda _i)=\tau\) and \(\mathit {trigger}({\lambda _i})=\alpha _i\) , as required.□
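    As a small illustrative run (the actions \(\alpha _1\) , \(\alpha _2\) , and \(\beta\) below are hypothetical), consider a three-step execution in which the monitor applies, in order, rules (Allow), (Insert), and (Suppress):
    \begin{equation*} \mathsf {E}_0 \! \bowtie \! {\boldsymbol { \lbrace }}J_0{\boldsymbol { \rbrace }} \mathop \rightarrow^ {\alpha _1} \mathsf {E}_1 \! \bowtie \! {\boldsymbol { \lbrace }}J_1{\boldsymbol { \rbrace }} \mathop \rightarrow^ {\beta } \mathsf {E}_2 \! \bowtie \! {\boldsymbol { \lbrace }}J_1{\boldsymbol { \rbrace }} \mathop \rightarrow^ {\tau } \mathsf {E}_3 \! \bowtie \! {\boldsymbol { \lbrace }}J_2{\boldsymbol { \rbrace }} . \end{equation*}
    Here, the decomposition of Lemma 6 yields the monitor trace \(\lambda _1 \cdot \lambda _2 \cdot \lambda _3 = \alpha _1 \cdot (\beta \! \prec \! \alpha _2) \cdot {^{-}{\alpha _2}}\) , with outputs \(\alpha _1\) , \(\beta\) , and \(\tau\) , and the controller trace \(\alpha _1 \cdot \alpha _2\) , performed in the first and third steps only, as the controller does not move under rule (Insert).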
    Proof of Theorem 2 (Soundness)
    Let \(t=\beta _1{\cdot }\ldots \cdot \beta _n\) be a trace s.t. \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} } \! \bowtie \! {\boldsymbol { \lbrace }}P{\boldsymbol { \rbrace }} \mathop \rightarrow^ {t} \mathsf {E} \! \bowtie \! {\boldsymbol { \lbrace }}J{\boldsymbol { \rbrace }}\) , for some \(\mathsf {E}\in \mathbb {E}{𝕕𝕚𝕥}\) and some controller J. By an application of Lemma 6 there exist \(\mathsf {E}_i \in \mathbb {E}{𝕕𝕚𝕥}\) and \(\lambda _i\) , for \(1 \le i \le n\) , such that: \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }\mathop \rightarrow^ {\lambda _1}\mathsf {E}_1\mathop \rightarrow^ {\lambda _2}\ldots \mathop \rightarrow^ {\lambda _n}\mathsf {E}_n=\mathsf {E}\) , with \(\beta _i = \mathit {out}(\lambda _i)\) . Thus, \(t=\mathit {out}(\lambda _1)\cdot \ldots \cdot \mathit {out}(\lambda _n)\) . By Lemma 5, \(\hat{t}\) is a prefix of some trace in \({[\!\![} e{]\!\!]}\) , as required.□
    Lemma 7 (Deadlock-freedom of the Synthesis).
    Let \(e \in \mathbb {P}{𝕣𝕠𝕡}\mathbb{G}\) be a global property and \(\mathcal {P}\) be a set of observable actions s.t. \(\mathsf {events}(e) \subseteq \mathcal {P}\) . Then the edit automaton \({ \langle \! | e | \! \rangle _{ }^{\mathcal {P}}}\) does not deadlock.
    Proof.
    Given an arbitrary execution \({ \langle \! | e | \! \rangle _{ }^{\mathcal {P}}}\mathop \rightarrow^ {\lambda _1} \cdots \mathop \rightarrow^ {\lambda _n}\mathsf {E}\) , the proof is by induction on the length n of the execution trace. By an application of Lemma 5 we have that either \(\mathsf {E}= { \langle \! | p | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) or \(\mathsf {E}= \mathsf {Z}\) , with \(\mathsf {Z}= { \langle \! | p | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) , for \(p\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) and some automaton variable \(\mathsf {X}\) . Hence, the result follows by inspection of the synthesis function of Table 5 and by induction on the structure of p.□
    Proof of Theorem 3 (Deadlock-freedom)
    Let t be a trace such that \({{\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} } \! \bowtie \! {\boldsymbol { \lbrace }}P{\boldsymbol { \rbrace }}\mathop \rightarrow^ {t}\mathsf {E} \! \bowtie \! {\boldsymbol { \lbrace }}J{\boldsymbol { \rbrace }}}\) , for some edit automaton \(\mathsf {E}\) and controller J. By contradiction, we assume that \(\mathsf {E} \! \bowtie \! {\boldsymbol { \lbrace }}J{\boldsymbol { \rbrace }}\) is in deadlock. Notice that, by definition, our controllers J never deadlock. By Lemma 7, the automaton \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }\) never deadlocks either. Consequently, we have that for any transition \(J\mathop \rightarrow^ {\alpha }J^{\prime }\) there is no action \(\lambda\) for \(\mathsf {E}\) such that the monitored controller \(\mathsf {E} \! \bowtie \! {\boldsymbol { \lbrace }}J{\boldsymbol { \rbrace }}\) may progress according to one of the rules (Allow), (Suppress), or (Insert). By an application of Lemma 5, we have that either \(\mathsf {E}= { \langle \! | p | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) or \(\mathsf {E}= \mathsf {Z}\) , with \(\mathsf {Z}= { \langle \! | p | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) , for some \(p\in \mathbb {P}{𝕣𝕠𝕡}\mathbb{L}\) and some automaton variable \(\mathsf {X}\) . Now, by Lemma 1, we have that
    for \(\alpha _i \in \mathcal {P}\) , \({\mathcal {Q}}= {\mathcal {P}} \setminus (\cup _{i \in I}\alpha _i\cup \lbrace {\scriptstyle \mathsf {tick}},{\scriptstyle \mathsf {end}}\rbrace)\) , and \(\mathsf {E}_i\) and \(\mathsf {F}\) edit automata. In both cases, \({ \langle \! | p | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) may deadlock the enforcement only when the controller may only perform \({\scriptstyle \mathsf {tick}}\) -actions. From this fact, we derive \(J = {\scriptstyle \mathsf {tick}}^h.S\) , for \(0\lt h\le k\) . Since \({\scriptstyle \mathsf {tick}}\) -actions cannot be suppressed, we have that \(t=t^{\prime } \cdot {\scriptstyle \mathsf {tick}}^{k-h}\) , for some possibly empty trace \(t^{\prime }\) terminating with an \({\scriptstyle \mathsf {end}}\) . By Theorem 2, \(t=t^{\prime } \cdot {\scriptstyle \mathsf {tick}}^{k-h} \in {[\!\![} e{]\!\!]}\) . And since e is k-sleeping we derive \(p={\scriptstyle \mathsf {tick}}^h.p^{\prime }\) , for some \(p^{\prime }\) . Since \({\langle \!| \!{e}\! | \! \rangle ^{\mathcal {P}} }\) is sound (Lemma 5) we derive that \(\mathsf {E}={ \langle \! | p | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}={ \langle \! | {\scriptstyle \mathsf {tick}}^h.p^{\prime } | \! \rangle _{\mathsf {X}}^{\mathcal {P}}}\) . Finally, \(h\gt 0\) implies \(\mathsf {E}\mathop \rightarrow^ {{\scriptstyle \mathsf {tick}}}\mathsf {E}^{\prime }\) , for some \(\mathsf {E}^{\prime }\) , in contradiction with the assumption that \(\mathsf {E} \! \bowtie \! {\boldsymbol { \lbrace }}J{\boldsymbol { \rbrace }}\) is in deadlock.□
    Proof of Theorem 4 (Divergence-freedom)
    Let \(e \in \mathbb{P}𝕣𝕠𝕡\mathbb{G}\) be a global property in its general form, given by the intersection of \(n \ge 1\) global properties \(p_1^\ast \cap \dots \cap p_n^\ast\), for \(p_i\in\mathbb{P}𝕣𝕠𝕡\mathbb{L}\), with \(1\le i \le n\). Since e is well-formed, by Definition 4 all local properties \(p_i\) are well-formed as well, and hence they all terminate with an \({\scriptstyle\mathsf{end}}\) event. Thus, in every global property \(p_i^\ast\), for \(1 \le i \le n\), the number of events between two subsequent \({\scriptstyle\mathsf{end}}\) events is always finite; the same holds for the property e. Now, let t be an arbitrary trace such that \({\langle\!|e|\!\rangle^{\mathcal{P}}} \bowtie {\boldsymbol{\lbrace}}P{\boldsymbol{\rbrace}}\mathop\rightarrow^{t}\mathsf{E} \bowtie {\boldsymbol{\lbrace}}J{\boldsymbol{\rbrace}}\), for some edit automaton \(\mathsf{E}\) and controller J, and let \(k= \max_{1\le i\le n}k_i\), where \(k_i\) is the length of the longest trace of \({[\!\![} p_i{]\!\!]}\), for \(1\le i\le n\). Thus, if \(\mathsf{E} \bowtie {\boldsymbol{\lbrace}}J{\boldsymbol{\rbrace}}\mathop\rightarrow^{t^{\prime}}\mathsf{E}^{\prime} \bowtie {\boldsymbol{\lbrace}}J^{\prime}{\boldsymbol{\rbrace}}\), with \(|t^{\prime}|\ge k\), then, since by Theorem 2 the trace \(t\cdot t^{\prime}\) is a prefix of some trace in \({[\!\![} e{]\!\!]}\), it follows that \({\scriptstyle\mathsf{end}}\in t^{\prime}\).□
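    As a concrete sanity check of the counting argument above, the following small Python snippet (again only our own illustration, restricted to the simplest case n = 1 with a single maximal trace of length k) verifies that every window of k consecutive events in a trace of the global property p* contains an end event, which is precisely why a monitored controller cannot run forever without emitting end.

        # A small numeric check (ours, restricted to the simplest case n = 1, with a
        # single maximal trace of length k): every window of k consecutive events in a
        # trace of the global property p* contains at least one 'end' event.
        p_trace = ["open", "tick", "close", "end"]      # one maximal trace of p, so k = 4
        k = len(p_trace)
        global_trace = p_trace * 5                      # a trace of p* (five iterations)

        assert all("end" in global_trace[i:i + k]
                   for i in range(len(global_trace) - k + 1))
        print(f"every window of {k} consecutive events contains 'end'")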

    Acknowledgments

    We thank the anonymous reviewers for their insightful and careful reviews. We thank Adrian Francalanza, Yuan Gu, Davide Sangiorgi, and Marjan Sirjani for their comments on early drafts of the article.

    Footnotes

    1. Here, \(\overline{\mathsf{Act}}= \lbrace \overline{a}: a \in \mathsf{Act} \rbrace\) and \(\overline{\mathsf{Chn}} = \lbrace \overline{c}: c \in \mathsf{Chn} \rbrace\).
    2. We use Vinogradov’s notation \(w \ll z\) to mean that w is much less than z.
    3. As said in Section 2, malware that aims at taking control of the plant has no interest in delaying the scan cycle and risking a violation of the maximum cycle limit, whose consequence would be the immediate shutdown of the controller [63].
    4. The reader is referred to Section 7.4 to see that such a delay is consistent with the reaction time of our enforced SWaT system.
    5. These numbers are finite, as we deal with finite-state edit automata.

    References

    [1] Int’l Standard IEC 61131-3. 2003. Programmable Controllers - Part 3: Programming Languages. Second ed., Int’l Electrotechnical Commission.
    [2] Int’l Standard IEC 61499-1. 2005. Function Blocks - Part 1: Architecture. First ed., Int’l Electrotechnical Commission.
    [3] M. Abadi, B. Blanchet, and C. Fournet. 2018. The Applied Pi Calculus: Mobile Values, New Names, and Secure Communication. Journal of the ACM 65, 1 (2018), 1:1–1:41.
    [4] M. Abadi and A. D. Gordon. 1997. A calculus for cryptographic protocols: The spi calculus. In Proceedings of the 4th ACM Conference on Computer and Communications Security. ACM, 36–47.
    [5] A. Abbasi and M. Hashemi. 2016. Ghost in the PLC: Designing an undetectable programmable logic controller rootkit via pin control attack. In Proceedings of Black Hat Europe. 1–35.
    [6] L. Aceto, I. Cassar, A. Francalanza, and A. Ingólfsdóttir. 2018. On runtime enforcement via suppressions. In Proceedings of CONCUR. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 34:1–34:17.
    [7] L. Aceto, I. Cassar, A. Francalanza, and A. Ingólfsdóttir. 2021. On bidirectional runtime enforcement. In Proceedings of the International Conference on Formal Techniques for Distributed Objects, Components, and Systems. Springer, 3–21.
    [8] T. R. Alves, M. Buratto, F. M. de Souza, and T. R. Rodrigues. 2014. OpenPLC: An open source alternative to automation. In Proceedings of the IEEE Global Humanitarian Technology Conference. 585–589.
    [9] E. Bartocci, J. V. Deshmukh, A. Donzé, G. E. Fainekos, O. Maler, D. Nickovic, and S. Sankaranarayanan. 2018. Specification-based monitoring of cyber-physical systems: A survey on theory, tools and applications. In Lectures on Runtime Verification - Introductory and Advanced Topics. Springer, 135–175.
    [10] D. Beauquier, J. Cohen, and R. Lanotte. 2013. Security policies enforcement using finite and pushdown edit automata. International Journal of Information Security 12, 4 (2013), 319–336.
    [11] G. Berry and G. Gonthier. 1992. The Esterel synchronous programming language: Design, semantics, implementation. Science of Computer Programming 19, 2 (1992), 87–152.
    [12] N. Bielova. 2011. A Theory of Constructive and Predictable Runtime Enforcement Mechanisms. Ph.D. Dissertation. University of Trento.
    [13] N. Bielova and F. Massacci. 2011. Predictability of enforcement. In Proceedings of Engineering Secure Software and Systems. 73–86.
    [14] R. Bloem, B. Könighofer, R. Könighofer, and C. Wang. 2015. Shield synthesis: Runtime enforcement for reactive systems. In Proceedings of the International Conference on Tools and Algorithms for the Construction and Analysis of Systems. Springer, 533–548.
    [15] B. A. Brandin and W. M. Wonham. 1994. Supervisory control of timed discrete-event systems. IEEE Transactions on Automatic Control 39, 2 (1994), 329–342.
    [16] L. Cardelli and A. Gordon. 2000. Mobile ambients. Theoretical Computer Science 240, 1 (2000), 177–213.
    [17] A. A. Cárdenas, S. Amin, Z. Lin, Y. Huang, C. Huang, and S. Sastry. 2011. Attacks against process control systems: Risk assessment, detection, and response. In Proceedings of the 6th ACM Symposium on Information, Computer and Communications Security. 355–366.
    [18] I. Cassar. 2020. Developing Theoretical Foundations for Runtime Enforcement. Ph.D. Dissertation. University of Malta and Reykjavik University.
    [19] A. Di Pinto, Y. Dragoni, and A. Carcano. 2018. TRITON: The first ICS cyber attack on safety instrument systems. In Proceedings of Black Hat USA. 1–28.
    [20] M. Fabian and A. Hellgren. 1998. PLC-based implementation of supervisory control for discrete event systems. In Proceedings of the 37th IEEE Conference on Decision and Control. IEEE, 3305–3310.
    [21] Y. Falcone, J.-C. Fernandez, and L. Mounier. 2012. What can you verify and enforce at runtime? International Journal on Software Tools for Technology Transfer 14, 3 (2012), 349–382.
    [22] Y. Falcone, L. Mounier, J. Fernandez, and J. Richier. 2011. Runtime enforcement monitors: Composition, synthesis, and enforcement abilities. Formal Methods in System Design 38, 3 (2011), 223–262.
    [23] A. Francalanza. 2021. A theory of monitors. Information and Computation 281 (2021), 104704.
    [24] G. Frehse, N. Kekatos, D. Nickovic, J. Oehlerking, S. Schuler, A. Walsch, and M. Woehrle. 2018. A toolchain for verifying safety properties of hybrid automata via pattern templates. In Proceedings of the Annual American Control Conference. 2384–2391.
    [25] W. Gay. 2014. Mastering the Raspberry Pi. Apress.
    [26] J. Giraldo, D. I. Urbina, A. Cardenas, J. Valente, M. Faisal, J. Ruths, N. O. Tippenhauer, H. Sandberg, and R. Candell. 2018. A survey of physics-based attack detection in cyber-physical systems. ACM Computing Surveys 51, 4 (2018), 76:1–76:36.
    [27] J. Goh, S. Adepu, K. N. Junejo, and A. Mathur. 2017. A dataset to support research in the design of secure water treatment systems. In Proceedings of the International Conference on Critical Information Infrastructures Security. Springer, 88–99.
    [28] N. Govil, A. Agrawal, and N. O. Tippenhauer. 2018. On ladder logic bombs in industrial control systems. In Proceedings of SECPRE@ESORICS’17. Springer, 110–126.
    [29] M. Hennessy, M. Merro, and J. Rathke. 2004. Towards a behavioural theory of access and mobility control in distributed systems. Theoretical Computer Science 322, 3 (2004), 615–669.
    [30] M. Hennessy and T. Regan. 1995. A process algebra for timed systems. Information & Computation 117, 2 (1995), 221–239.
    [31] M. Heymann, F. Lin, G. Meyer, and S. Resmerita. 2005. Analysis of Zeno behaviors in a class of hybrid systems. IEEE Transactions on Automatic Control 50, 3 (2005), 376–383.
    [32] Y. Huang, A. A. Cárdenas, S. Amin, Z. Lin, H. Tsai, and S. Sastry. 2009. Understanding the physical and economic consequences of attacks on control systems. International Journal of Critical Infrastructure Protection 2, 3 (2009), 73–83.
    [33] T. Huffmire, C. Irvine, T. D. Nguyen, T. Levin, R. Kastner, and T. Sherwood. 2010. Handbook of FPGA Design Security. Springer Science & Business Media.
    [34] L. R. Humphrey, B. Könighofer, R. Könighofer, and U. Topcu. 2016. Synthesis of admissible shields. In Proceedings of the Haifa Verification Conference. 134–151.
    [35] B. Könighofer, M. Alshiekh, R. Bloem, L. Humphrey, R. Könighofer, U. Topcu, and C. Wang. 2017. Shield synthesis. Formal Methods in System Design 51, 2 (2017), 332–361.
    [36] D. Kushner. 2013. The real story of Stuxnet. IEEE Spectrum 50, 3 (2013), 48–53.
    [37] R. Lanotte and M. Merro. 2017. A calculus of cyber-physical systems. In Proceedings of the International Conference on Language and Automata Theory and Applications. Springer, 115–127.
    [38] R. Lanotte, M. Merro, and A. Munteanu. 2020. Runtime enforcement for control system security. In Proceedings of the 33rd Computer Security Foundations Symposium. IEEE, 246–261.
    [39] R. Lanotte, M. Merro, and A. Munteanu. 2021. A process calculus approach to detection and mitigation of PLC malware. Theoretical Computer Science 890 (2021), 125–146.
    [40] R. Lanotte, M. Merro, A. Munteanu, and S. Tini. 2021. Formal impact metrics for cyber-physical attacks. In Proceedings of the 34th Computer Security Foundations Symposium. IEEE, 1–16.
    [41] R. Lanotte, M. Merro, A. Munteanu, and L. Viganò. 2020. A formal approach to physics-based attacks in cyber-physical systems. ACM Transactions on Privacy and Security 23, 1 (2020), 3:1–3:41.
    [42] R. Lanotte, M. Merro, and S. Tini. 2021. A probabilistic calculus of cyber-physical systems. Information & Computation 279 (2021), 104618.
    [43] J. Ligatti, L. Bauer, and D. Walker. 2005. Edit automata: Enforcement mechanisms for run-time security policies. International Journal of Information Security 4, 1–2 (2005), 2–16.
    [44] Z. Manna and A. Pnueli. 1987. A Hierarchy of Temporal Properties. Technical Report. Stanford University.
    [45] F. Martinelli and I. Matteucci. 2007. Through modeling to synthesis of security automata. Electronic Notes in Theoretical Computer Science 179 (2007), 31–46.
    [46] A. P. Mathur and N. O. Tippenhauer. 2016. SWaT: A water treatment testbed for research and training on ICS security. In Proceedings of CySWater@CPSWeek. IEEE Computer Society, 31–36.
    [47] MATLAB. 2018. 9.7.0.1190202 (R2019b). The MathWorks Inc., Natick, Massachusetts.
    [48] S. E. McLaughlin. 2013. CPS: Stateful policy enforcement for control system device usage. In Proceedings of the 29th Annual Computer Security Applications Conference. ACM, 109–118.
    [49] Mentor Graphics. 2014. Mentor Graphics ModelSim. Retrieved on 17 July, 2022 from https://cseweb.ucsd.edu/classes/fa10/cse140L/lab/docs/modelsim_user.pdf.
    [50] S. Mohan, S. Bak, E. Betti, H. Yun, L. Sha, and M. Caccamo. 2013. S3A: Secure system simplex architecture for enhanced security and robustness of cyber-physical systems. In Proceedings of the 2nd ACM International Conference on High Confidence Networked Systems. ACM, 65–74.
    [51] G. Nikolakopoulos and S. Manesis. 2018. Introduction to Industrial Automation. Taylor & Francis Group.
    [52] T. Parr. 2013. The Definitive ANTLR 4 Reference. Pragmatic Bookshelf.
    [53] H. Pearce, S. Pinisetty, P. S. Roop, M. M. Y. Kuo, and A. Ukil. 2020. Smart I/O modules for mitigating cyber-physical attacks on industrial control systems. IEEE Transactions on Industrial Informatics 16, 7 (2020), 4659–4669.
    [54] S. Pinisetty, P. S. Roop, S. Smyth, N. Allen, S. Tripakis, and R. V. Hanxleden. 2017. Runtime enforcement of cyber-physical systems. ACM Transactions on Embedded Computing Systems 16, 5s (2017), 178:1–178:25.
    [55] S. Pinisetty and S. Tripakis. 2016. Compositional runtime enforcement. In Proceedings of the NASA Formal Methods Symposium. Springer, 82–99.
    [56] B. Radvanovsky. 2013. Project SHINE: 1,000,000 internet-connected SCADA and ICS systems and counting. 19 (2013). Tofino Security.
    [57] R. Rajkumar, I. Lee, L. Sha, and J. A. Stankovic. 2010. Cyber-physical systems: The next computing revolution. In Proceedings of the Design Automation Conference. ACM, 731–736.
    [58] P. J. Ramadge and W. M. Wonham. 1987. Supervisory control of a class of discrete event processes. SIAM Journal on Control and Optimization 25, 1 (1987), 206–230.
    [59] Raspberry Pi. 2019. Raspberry Pi 4 Model B. Retrieved 10 June, 2022 from https://www.raspberrypi.org/products/raspberry-pi-4-model-b/.
    [60] F. B. Schneider. 2000. Enforceable security policies. ACM Transactions on Information and System Security 3, 1 (2000), 30–50.
    [61] B. Siciliano, L. Sciavicco, L. Villani, and G. Oriolo. 2009. Mobile robots. Robotics: Modelling, Planning and Control (2009), 469–521.
    [62] J. Slowik. 2018. Anatomy of an attack: Detecting and defeating CRASHOVERRIDE. VB’18 (October 2018), 1–23.
    [63] R. Spenneberg, M. Brüggemann, and H. Schwartke. 2016. PLC-Blaster: A worm living solely in the PLC. In Proceedings of Black Hat Asia. 1–16.
    [64] D. Thomas and P. Moorby. 2008. The Verilog® Hardware Description Language. Springer Science & Business Media.
    [65] W. Wolf. 2004. FPGA-based System Design. Pearson Education.
    [66] Xilinx. 2012. Vivado design suite. White Paper 5 (2012), 1–30.
    [67] X. Yin and S. Lafortune. 2016. Synthesis of maximally permissive supervisors for partially observed discrete-event systems. IEEE Transactions on Automatic Control 61, 5 (2016), 1239–1254.
    [68] X. Yin and S. Lafortune. 2016. A uniform approach for synthesizing property-enforcing supervisors for partially-observed discrete-event systems. IEEE Transactions on Automatic Control 61, 8 (2016), 2140–2154.
    [69] L. H. Yoong, P. S. Roop, V. Vyatkin, and Z. A. Salcic. 2009. A synchronous approach for IEC 61499 function block implementation. IEEE Transactions on Computers 58, 12 (2009), 1599–1614.
    [70] N. Zilberman, Y. Audzevich, G. Kalogeridou, N. Manihatty-Bojan, J. Zhang, and A. Moore. 2015. NetFPGA: Rapid prototyping of networking devices in open source. ACM SIGCOMM Computer Communication Review 45, 4 (2015), 363–364.

    Published In

    ACM Transactions on Privacy and Security, Volume 26, Issue 1 (February 2023), 342 pages. ISSN: 2471-2566. EISSN: 2471-2574. DOI: 10.1145/3561959.

    Publisher

    Association for Computing Machinery, New York, NY, United States.

    Publication History

    Published: 09 November 2022. Online AM: 04 July 2022. Accepted: 22 June 2022. Revised: 13 June 2022. Received: 29 April 2021. Published in TOPS Volume 26, Issue 1.

    Author Tags

    1. ICS security
    2. PLC malware
    3. mitigation
    4. runtime enforcement

    Qualifiers

    • Research-article
    • Refereed

    Funding Sources

    • Dipartimenti di Eccellenza 2018
    • Ministry of Universities and Research
