
Semantics Foundation for Cyber-physical Systems Using Higher-order UTP

Published: 13 February 2023

Abstract

Model-based design has become the predominant approach to the design of hybrid and cyber-physical systems (CPSs). It advocates the use of mathematically founded models to capture heterogeneous digital and analog behaviours from domain-specific formalisms, allowing all engineering tasks of verification, code synthesis, and validation to be performed within a single semantic body. Guaranteeing the consistency among the different views and heterogeneous models of a system at different levels of abstraction, however, poses significant challenges. To address these issues, Hoare and He’s Unifying Theories of Programming (UTP) proposes a calculus to capture domain-specific programming and modelling paradigms into a unified semantic framework. Our goal is to extend UTP to form a semantic foundation for CPS design. Higher-order UTP (HUTP) is a conservative extension to Hoare and He’s theory that supports the specification of discrete, real-time, and continuous dynamics, concurrency and communication, and higher-order quantification. Within HUTP, we define a calculus of normal hybrid designs to model, analyse, compose, refine, and verify heterogeneous hybrid system models. In addition, we define respective formal semantics for Hybrid Communicating Sequential Processes and Simulink using HUTP.

1 Introduction

Hybrid systems, also known under the more architecture-centered denomination of cyber-physical systems (CPSs), exploit networked computing units to monitor and control physical processes via wired and/or radio communications. CPSs are omnipresent, from high-speed train control systems, power grids, and automated factories to ground, sea, air, and space transportation. CPSs can be complex, networked systems of systems, and are often entrusted with mission- and safety-critical tasks. Therefore, the efficient and verified development of safe and reliable CPSs is a priority mandated by many standards, yet a notoriously difficult and challenging field.
To address design complexity under the necessity of verified CPS development, model-based design (MBD) [18] has become a predominant development approach in the embedded systems industry. The classical MBD development process can be sketched as follows: (1) start from an abstract model, ideally with a precise mathematical semantics; (2) conduct extensive analysis and verification on this abstract model, to locate and correct errors as early as possible; (3) subsequently, refine that abstract, system-level model into a semantically concrete specification and, eventually, into executable code, by model transformation.
To assist such a development process, design-by-contract (DbC) is an important tool. It allows one to define the abstract model of a composite system from the assembly of simple interfaces declaring the behavioural assumptions and guarantees of its constituents. DbC provides a method to conduct verification and development tasks in a modular and compositional manner. Under the DbC paradigm, developers define contracts with two parts: (1) the required behaviour that a constituent guarantees to implement and (2) the assumptions the constituent makes about its environment. A composite system can thus be characterised by the contracts of its constituents, whose composition fulfills the expected system contract. Each system constituent can be shown to fulfill its guaranteed behaviour under the given assumptions, the violation of which could lead to unpredictable behaviour of the entire system.
By allowing an arbitrary level of system decomposition, vertically (domain-specific) or horizontally (cross-domain), design complexity becomes tractable and controllable. Errors can be identified and corrected at the very early stages of system design; correctness and reliability can be guaranteed by simulation, verification, and refinement; and developers can reuse existing contracts and components, further improving development efficiency. Unsurprisingly, model- and contract-based design practices are strongly recommended by many standards bodies for safety-critical system development.
Nonetheless, guaranteeing the consistency of a system based on the models of its components poses significant challenges. Ideally, addressing them requires a uniform semantic domain capable of capturing all component models, at different abstraction levels and across heterogeneous domain concerns, within the same analysis, automated reasoning, and verification framework.
Motivated by the Grand Unified Theory in physics, Hoare and He’s unifying theories of programming (UTP) [28] was the first proposal toward unifying different programming paradigms and models under a common relational calculus suitable for design and verification. In UTP, programs are interpreted as guarded designs consisting of a guard, a precondition, and a postcondition [24, 28]. UTP defines operators and a refinement relation over guarded designs to form a lattice. Hence, in UTP, arbitrarily different programming constructs, paradigms, and models of computation can be logically interpreted and compared using the Galois connection induced by their lattice ordering.
Guaranteeing consistency among different models at different levels of abstraction is a grand challenge in control and software engineering. To address it for the design of hybrid systems, this article follows the UTP paradigm by extending the classical UTP with higher-order quantification on continuous variables (functions over time) and with differential relations, to form a higher-order UTP (HUTP). HUTP can give uniform semantics to hybrid systems at different levels of abstraction, from higher-level abstract models like Simulink or Hybrid Communicating Sequential Processes (HCSP) down to the low-level models of computation of their implementation by generated source code in SystemC or ANSI-C. Consistency among models of hybrid systems at different levels of abstraction/implementation can then be proved using theorem-proving techniques. Note that the HUTP in this article differs from the higher-order UTP defined in References [28, 61, 62]. The latter focuses on higher-order programming, i.e., programs can be treated as higher-order variables in the paradigm. Higher-order quantifications over functions and predicates are also addressed in related works, such as Reference [62]. However, it is still unclear whether the classical UTP can be extended to hybrid systems in the same way as the extension to higher-order programs.
Our work starts from Reference [10], which extends UTP by allowing differential relations and higher-order quantification, establishing a theory of HUTP. However, Reference [10] defines HUTP syntactically, which leaves open the question of equipping it with an adequate algebraic structure (e.g., complete lattices) to serve as a semantic foundation for CPS design.
In the same period as Reference [10], to address CPS design with UTP, Woodcock et al. introduced time-dependent trajectory variables into the alphabet and defined hybrid relations [17], based on which the semantics of Modelica [42], with its mixture of differential-algebraic equations and event preemption, is given. To unify discrete and hybrid theories, they introduced continuous-time traces into the trace model and defined a generalised theory of reactive processes [16] in terms of trace algebra. Subsequently, in Reference [15], they defined reactive design contracts that provide a basis for modelling and verifying reactive systems in the presence of parallelism and communication, yet limited to discrete controllers. Based on the theory of reactive processes, Reference [13] provides an alternative definition of hybrid relations to define a denotational semantics of hybrid programs [48]. However, this body of work does not address some important features, including:
Super-dense computation. As argued in Reference [38], super-dense computation provides a very appropriate abstract computation model for embedded systems: computers are much faster than other physical devices, so the computation time of a computer is negligible, although the temporal order of computations remains. As Reference [17] considers finite timed traces with right continuity, super-dense computation violates right continuity at time points where a sequence of discrete events happens, and therefore cannot be modelled, although Reference [16] argues that the theory of finite timed traces is extensible to deal with infinite ones.
Parallelism with communication-based synchronisation is not yet addressed, although widely used in CPS designs.
Declaration of local variables and channels. Variables and channels are interpreted as functions over time; declarations of local variables and channels therefore have to resort to higher-order quantification. Reference [14] uses symmetric lenses to deal with local variables, but only discrete variables are considered.
To remedy the drawbacks of our previous work [10], we propose a new theory of HUTP and investigate a suitable algebraic structure for it, taking all the important features of hybrid systems into consideration, so that it can serve as a semantic foundation for CPS design. Inspired by the Duration Calculus [66], we introduce real-time variables, denoted as functions over time. Derivatives of real variables are additionally allowed in predicates, to form differential relations. To model synchronous communication between processes, we introduce timed traces to record the communication history of processes. Previous works with related aims [15, 16, 28] regarded traces as finite abstract sequences. By contrast, timed traces in this article can be infinite; therefore, non-terminating behaviours such as divergence and deadlock can be described explicitly. These definitions ground the elaboration of the concepts of hybrid processes and (normal) hybrid designs, based on which we define formal semantics for HCSP [22, 67] and Simulink [39] using HUTP, and justify the correctness of the translation from Simulink to HCSP in Reference [69] as an application of HUTP. In addition, by abstracting hybrid processes and (normal) hybrid designs, we propose a skeleton for equipping HUTP with compositional assume-guarantee reasoning by means of contracts.
Our work is similar to Reference [15] in some aspects, as both are extensions of traditional UTP [28]. However, they differ in essence, namely:
Our HUTP theory separates the concerns of time, state, and traces. In particular, we introduce continuous state variables and construct a trace model to address concurrency and communication for hybrid systems. In References [15, 16], although continuous states are encapsulated into traces, parallelism based on traces is not made explicit. For example, the interleaving of traces carrying continuous states is confusing since, in reality, physical evolution never interleaves with discrete computations.
Our approach deals well with infinite behaviours, so deadlock and divergence can be modelled formally. For example, Zeno’s paradox (Example 3.5) can be modelled as a normal non-terminating (and not necessarily unpredictable) behaviour. In contrast, it is unclear how to model Zeno’s paradox and other infinite behaviours using Reference [15], as it seems that only finite traces are involved.
Since our HUTP theory supports infinite behaviours, the auxiliary variables \(\mathit {wait}\) and \(\mathit {wait}^{\prime }\) , which denote reactivity in discrete models [15, 28], are interpreted explicitly in our approach.
In summary, our proposed HUTP is a conservative extension to the classical UTP of Reference [28] and contributes:
the separation of concerns in hybrid system design into time, state and trace;
a timed trace model recording execution history and capturing communication behaviours;
real-time variables and their derivatives, which are functions over time, and differential relations over them, which are expressive enough to capture a wide range of continuous dynamics;
higher-order quantification for specifying locality;
a calculus of hybrid processes and (normal) hybrid designs to model and analyse hybrid systems with (potentially) infinite behaviours; and
the HUTP semantics of HCSP and Simulink.
Article Organisation. The rest of the article is organised as follows. Section 2 reviews preliminary concepts of the Unifying Theories of Programming. Section 3 defines a timed trace model that plays an important role in the HUTP theory. Section 4 introduces hybrid processes, which combine time, states, and traces to provide a uniform semantics for hybrid systems. To elaborate a sound theory, we extend hybrid processes to (normal) hybrid designs in Section 5. In Section 6, HUTP is applied to verify the translation from Simulink diagrams to HCSP. Section 7 addresses related work, and Section 8 concludes this article and discusses future work.

2 Unifying Theories of Programming

UTP [28] is an alphabetised refinement calculus unifying heterogeneous programming paradigms. An alphabetised relation consists of an alphabet \(\alpha (P)\) , containing its variables x and their primed versions \(x^{\prime }\) , and a relational predicate P referring to this vocabulary. The terms x and \(x^{\prime }\) are called observable variables: x is observable at the start of execution and \(x^{\prime }\) is observable at the end of execution. The behaviour of a program is encoded as a relation between the observable variables x and \(x^{\prime }\) . In particular, assignment, sequential composition, conditional statement, non-deterministic choice, and recursion of imperative programs can be specified as alphabetised relations below, where \(\mathbf {x}\) and \(\mathbf {x}^{\prime }\) are sequences or vectors of variables, and \(\mathbf {x}\backslash \lbrace x\rbrace\) ( \(\mathbf {x}^{\prime }\backslash \lbrace x^{\prime }\rbrace\) ) denotes excluding x ( \(x^{\prime }\) ) from \(\mathbf {x}\) ( \(\mathbf {x}^{\prime }\) ). To start with, the relational calculus comprises all operators of first-order logic:
\begin{equation*} \begin{array}{rcl} x:= e&\widehat{=}&x^{\prime }=e\wedge \mathbf {x}^{\prime }\backslash \lbrace x^{\prime }\rbrace =\mathbf {x}\backslash \lbrace x\rbrace ,\\ P⨟ Q&\widehat{=}&\exists \mathbf {x}_\ast \cdot P[\mathbf {x}_\ast /\mathbf {x}^{\prime }]\wedge Q[\mathbf {x}_\ast /\mathbf {x}],\\ P\lhd b\rhd Q &\widehat{=}&(b\wedge P)\vee (\lnot b\wedge Q),\\ P\sqcap Q &\widehat{=}& P\vee Q,\\ P\sqcup Q &\widehat{=}& P\wedge Q. \end{array} \end{equation*}
Assignment \(x:= e\) is defined by observing the update \(x^{\prime }\) of variable x once its value e is evaluated, leaving the other variables in the alphabet \(\mathbf {x}\) unchanged. Sequential composition \(P⨟ Q\) is modelled by locally binding a fresh intermediate state \(\mathbf {x}_\ast\) , to which both the final state \(\mathbf {x}^{\prime }\) of P and the initial state \(\mathbf {x}\) of Q are instantiated. Note that \(⨟\) requires that \(\alpha _\mathit {out}(P)=\alpha _\mathit {in}^{\prime }(Q)\) , where \(\alpha _\mathit {out}(P)\) and \(\alpha _\mathit {in}(Q)\) denote the sets of output and input variables in \(\alpha (P)\) and \(\alpha (Q)\) , respectively, and \(\alpha _\mathit {in}^{\prime }(Q)\) is obtained by priming all the variables in \(\alpha _\mathit {in}(Q)\) . The conditional \(P\lhd b\rhd Q\) evaluates as P if b is true and as Q otherwise. \(P\sqcap Q\) non-deterministically chooses P or Q, whereas \(P\sqcup Q\) is the conjunction of P and Q.
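To make the relational reading concrete, the operators above can be prototyped over a finite state space, modelling a predicate extensionally as the set of (initial, final) state pairs it admits. This is only an illustrative sketch; the encoding and all function names are ours, not part of UTP.

```python
# A state is a frozenset of (variable, value) pairs; a predicate is the
# set of (s, s') pairs it relates. (Illustrative encoding, not from UTP.)
State = frozenset

def assign(var, e, states):
    """x := e : update var by evaluating e, leave all other variables unchanged."""
    out = set()
    for s in states:
        d = dict(s)
        d[var] = e(dict(s))
        out.add((s, State(d.items())))
    return out

def seq(P, Q):
    """P ; Q = exists x* . P[x*/x'] /\ Q[x*/x]: chain through a hidden state."""
    return {(s, s2) for (s, s1) in P for (t, s2) in Q if s1 == t}

def cond(P, b, Q):
    """P <| b |> Q = (b /\ P) \/ (~b /\ Q); b is a predicate on initial states."""
    return {(s, s1) for (s, s1) in P if b(dict(s))} | \
           {(s, s1) for (s, s1) in Q if not b(dict(s))}

def nd_choice(P, Q):   # P ⊓ Q = P \/ Q
    return P | Q

def conj(P, Q):        # P ⊔ Q = P /\ Q
    return P & Q
```

For instance, over the alphabet {x, y}, `seq(assign('x', lambda s: 1, states), assign('y', lambda s: s['x'], states))` relates every initial state to the state where x = y = 1, mirroring the hidden-state definition of \(⨟\).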
Let P and Q be two predicates with the same alphabet, say \(\lbrace \mathbf {x},\mathbf {x}^{\prime }\rbrace\) . Then, Q is a refinement of P, denoted \(P\sqsubseteq Q\) , if \(\forall \mathbf {x},\mathbf {x}^{\prime }\cdot Q\Rightarrow P\) . In addition, \(P\sqsubseteq Q\) iff \(P\sqcap Q=P\) iff \(P\sqcup Q=Q\) . With respect to the refinement order \(\sqsubseteq\) , the least ( \(\mu\) ) and greatest ( \(\nu\) ) fixed points of a program function F can be defined as follows:
\begin{equation*} \begin{array}{rcl} \mu F&\widehat{=}&\bigsqcap \lbrace X\mid F(X)\sqsubseteq X\rbrace ,\\ \nu F&\widehat{=}&\bigsqcup \lbrace X\mid X\sqsubseteq F(X)\rbrace . \end{array} \end{equation*}
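If predicates over a finite state space are modelled extensionally as sets of (initial, final) state pairs, then refinement is just reverse set inclusion, and for a \(\sqsubseteq\)-monotonic F both fixed points can be computed by Kleene iteration from the two extremes of the lattice (\(\bot\) is the universal relation true, \(\top\) the empty relation false). A hypothetical sketch with names of our own:

```python
def refines(P, Q):
    """P ⊑ Q iff forall x, x'. Q => P, i.e. Q ⊆ P when predicates are pair sets."""
    return Q <= P

def mu(F, universe):
    """Least fixed point w.r.t. ⊑: iterate from ⊥ = true (the universal relation)."""
    X = frozenset(universe)
    while frozenset(F(X)) != X:
        X = frozenset(F(X))
    return X

def nu(F):
    """Greatest fixed point w.r.t. ⊑: iterate from ⊤ = false (the empty relation)."""
    X = frozenset()
    while frozenset(F(X)) != X:
        X = frozenset(F(X))
    return X
```

On a finite lattice, the iteration from \(\bot\) stabilises at the least fixed point and the iteration from \(\top\) at the greatest, by the Knaster-Tarski/Kleene construction mentioned in Section 2.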
Example 2.1.
Given an alphabet \(\lbrace x, y, x^{\prime }, y^{\prime }\rbrace\) , the semantics of the assignment \(x:= 1\) is different from \(x^{\prime }=1\) . The former is equivalent to \(x^{\prime }=1\wedge y^{\prime }=y\) , while the latter indicates that y and \(y^{\prime }\) can be arbitrary, i.e., \(x^{\prime }=1\sqsubseteq x:= 1\) . They are equivalent if the alphabet contains only x and \(x^{\prime }\) .
A UTP theory is a well-defined subset of alphabetised relations that satisfies certain conditions, called healthiness conditions. If a predicate is a fixed point of the healthiness condition, \(P=\mathcal {H}(P)\) , then it is said to be \(\mathcal {H}\) -healthy. In other words, a healthiness condition \(\mathcal {H}\) defines an invariant predicate set \(\lbrace X\mid \mathcal {H}(X)=X\rbrace\) , and is required to be idempotent ( \(\mathcal {H}\circ \mathcal {H}=\mathcal {H}\) ), which means that taking the medicine twice leaves you as healthy as taking it once (no overdoses). So, in UTP, the healthy predicates of a theory are the fixed points of idempotent functions. When \(\mathcal {H}\) is monotonic with respect to the refinement order \(\sqsubseteq\) , then according to the Knaster-Tarski theorem [57], the UTP theory satisfying \(\mathcal {H}\) forms a complete lattice and, additionally, recursion can be well specified. Multiple healthiness conditions \(\mathcal {H}_0\) , \(\mathcal {H}_1, \ldots , \mathcal {H}_n\) can also be composed as \(\mathcal {H}~\widehat{=}~\mathcal {H}_0\circ \mathcal {H}_1\circ \cdots \circ \mathcal {H}_n\) , provided that \(\mathcal {H}_i\) and \(\mathcal {H}_j\) commute, for all \(0\le i,j\le n\) . Healthiness conditions play an important role in UTP, and distinct healthiness conditions can be defined to specify properties of different programming paradigms. In Section 4, we use healthiness conditions to define the concept of hybrid processes.
Example 2.2.
Consider an alphabet \(\lbrace x, x^{\prime }, \mathit {term}, \mathit {term}^{\prime }\rbrace\) , where the Boolean variable \(\mathit {term}^{\prime }\) denotes whether the program has terminated. If the preceding program has not terminated ( \(\lnot \mathit {term}\) ), then the current program P should do nothing but keep the values of all variables unchanged, as expressed by the following healthiness condition:
\begin{equation*} \begin{array}{ccc} \mathcal {H}(P)&\widehat{=}&P\lhd \mathit {term}\rhd (x^{\prime }=x\wedge \mathit {term}^{\prime }=\mathit {term}). \end{array} \end{equation*}
The statement \(x:= 1\) is not \(\mathcal {H}\) -healthy, but we can make it \(\mathcal {H}\) -healthy by
\begin{equation*} \begin{array}{ccc} \mathcal {H}(x:= 1)&=&(x:= 1)\lhd \mathit {term}\rhd (x^{\prime }=x\wedge \mathit {term}^{\prime }=\mathit {term}). \end{array} \end{equation*}
It is easy to check that the above program is indeed a fixed point of \(X=\mathcal {H}(X)\) . When \(\mathit {term}\) is true, i.e., the preceding program has terminated, the above program executes \(x:= 1\) . Note that this assignment is equivalent to \(x^{\prime }=1\wedge \mathit {term}^{\prime }=\mathit {term}\) . If non-termination is expected, then we can write \(\mathcal {H}(x^{\prime }=1\wedge \lnot \mathit {term}^{\prime })\) . It is a left zero of sequential composition, i.e.,
\begin{equation*} \begin{array}{ccc} \mathcal {H}(x^{\prime }=1\wedge \lnot \mathit {term}^{\prime })⨟ P&=&\mathcal {H}(x^{\prime }=1\wedge \lnot \mathit {term}^{\prime }), \end{array} \end{equation*}
provided that P is any \(\mathcal {H}\) -healthy predicate whose alphabet is \(\lbrace x, x^{\prime }, \mathit {term}, \mathit {term}^{\prime }\rbrace\) .
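The idempotence of \(\mathcal {H}\) from Example 2.2 can be checked mechanically by modelling a predicate as the set of (initial, final) state pairs it admits over a finite state space (an illustrative encoding of our own, not from the paper):

```python
def H(P, states):
    """H(P) = P <| term |> (x' = x /\ term' = term), as in Example 2.2."""
    healthy = set()
    for s in states:
        if dict(s)['term']:
            # preceding program terminated: behave as P
            healthy |= {(t, t1) for (t, t1) in P if t == s}
        else:
            # preceding program did not terminate: skip, keep all variables
            healthy.add((s, s))
    return healthy
```

Applying `H` twice yields the same predicate as applying it once, for any P, which is exactly the no-overdose requirement \(\mathcal {H}\circ \mathcal {H}=\mathcal {H}\).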
Following the convention of UTP, in this article, we define a set of healthiness conditions to identify the features of hybrid systems. The operators for classical sequential programs based on the first-order relational calculus, such as \(⨟\) , \(\sqcap\) and \(\sqcup\) , can be easily lifted to the theory for hybrid systems proposed in this article, which is based on a higher-order differential relational calculus, and we prove that they retain the desired properties from classical UTP.

3 Timed Trace Model

In this section, we construct a timed trace model to capture the communication behaviour of processes. Traces play an important role in our HUTP theory, which separates the concerns of time, state, and trace. For processes in parallel, we assume that no shared variable is allowed between them and exchange of data between processes is described solely by communications. The synchronous communication between concurrent processes is achieved using input and output data channels. The input action \({ch}?x\) receives a value along channel \({ch}\) and assigns it to the variable x, and the output action \(\mathit {ch!e}\) sends the value of the expression e along channel \({ch}\) . A synchronous communication takes place immediately when both the sending and the receiving actions are enabled. If one party is not ready, then the other has to wait.

3.1 Trace Blocks

Each timed trace is a sequence of communication, wait and internal blocks defined below:
A pair \(\langle {ch}\ast , d\rangle\) is a communication block, where \({ch}\ast\) , a channel operation, indicates input ( \({ch}?\) ), output ( \({ch}!\) ) or synchronised communication ( \({ch}\) ), and d is a value along channel \({ch}\) . It represents that a communication along \({ch}\ast\) takes place instantly. For example, \(\langle {ch}?, d\rangle\) describes the communication receiving the value d along the channel \({ch}\) at current time.
A pair \(\langle \delta , \mathit {RS}\rangle\) is a wait block, where \(0\lt \delta \le +\infty\) is the time period and \(\mathit {RS}\) is a set of waiting channel operations, called ready set. It means the process evolves during the period with the channel operations in \(\mathit {RS}\) ready for communication. The dual of \(\mathit {RS}\) is defined by
\begin{equation*} \begin{array}{ccc} \overline{\mathit {RS}}&=&\lbrace \mathit {ch\overline{\ast }}\mid \mathit {ch\ast }\in \mathit {RS}, \ast \in \lbrace ?, !\rbrace \rbrace , \end{array} \end{equation*}
where \(\overline{?}=~!\) and \(\overline{!}=~?\) , and we assume that \(\mathit {RS}\cap \overline{\mathit {RS}}=\emptyset\) .
The symbol \(\tau\) is an internal block standing for an internal or invisible action such as a timeless discrete computation.
Example 3.1.
For the communication of two HCSP processes: \({ch}?x\Vert {ch}!2\) , since \({ch}?x\) and \({ch}!2\) are ready simultaneously at the beginning, the communication takes place instantly according to the semantics of HCSP [64]. The timed traces of \({ch}?x\) and \({ch}!2\) are \(\langle {ch}?, 2\rangle\) and \(\langle {ch}!, 2\rangle\) , respectively.
Example 3.2.
Consider the delayed communication \({ch}?x\Vert ({\boldsymbol {\rm{wait}}}\ 1;{ch}!2)\) . Since the right process waits for 1 time unit before \({ch}!2\) is ready, the left process has to wait for 1 time unit for synchronisation, although it has been ready since the beginning. The communication takes place after 1 time unit. Therefore, the timed trace of the left process is \(\langle 1, \lbrace {ch}?\rbrace \rangle ^{\smallfrown }\langle {ch}?, 2\rangle\) and that of the right process is \(\langle 1, \emptyset \rangle ^{\smallfrown }\langle {ch}!, 2\rangle\) .
Example 3.3.
Consider an ordinary differential equation (ODE) with communication interruption:
\begin{equation*} x:= 1;\langle \dot{x}=x\&\mathbf {true}\rangle \unrhd ⫾ (\mathit {sensor}!x\rightarrow \mathit {actuator}?x) \Vert {\boldsymbol {\rm{wait}}}\ 2;\mathit {sensor}?y; \mathit {actuator}!(y+1). \end{equation*}
The assignment \(x:= 1\) is a timeless discrete action denoting initialisation. The left HCSP process means that the continuous evolution described by the ODE \(\dot{x}=x\) is preempted as soon as the communication \(\mathit {sensor}!x\) takes place, followed by \(\mathit {actuator}?x\) . Since the right process waits for 2 time units before \(\mathit {sensor}?y\) becomes ready, the ODE of the left process evolves for 2 time units before \(\mathit {sensor}!x\) takes place, when the value of x reaches \(\exp (2)\) . Therefore, the timed trace of the left process is
\begin{equation*} \tau ^{\smallfrown }\langle 2, \lbrace \mathit {sensor}!\rbrace \rangle ^{\smallfrown }\langle \mathit {sensor}!, \exp (2)\rangle ^{\smallfrown }\langle \mathit {actuator}?, \exp (2)+1\rangle , \end{equation*}
and the timed trace of the right process is
\begin{equation*} \langle 2, \emptyset \rangle ^{\smallfrown }\langle \mathit {sensor?}, \exp (2)\rangle ^{\smallfrown }\langle \mathit {actuator}!, \exp (2)+1\rangle . \end{equation*}

3.2 Timed Traces

In previous works such as References [15, 16, 28], traces are finite sequences. In our work, we consider infinite timed traces. A timed trace is a finite or infinite sequence of communication, wait, and internal blocks that records the execution history of a process.
Let \(\Sigma\) be the set of all channels and \(\mathit {\Sigma ?}\) and \(\mathit {\Sigma !}\) the respective sets of receiving and sending events along channels in \(\Sigma\) . The sets of communication and wait blocks are, respectively,
\begin{equation*} \begin{array}{ccl} \mathbb {C}&\widehat{=}&(\mathit {\Sigma ?}\cup \mathit {\Sigma !}\cup \Sigma)\times \mathbb {D},\\ \mathbb {W}&\widehat{=}&\lbrace \langle \delta , \mathit {RS}\rangle \mid 0\lt \delta \le +\infty , \mathit {RS}\subseteq (\mathit {\Sigma ?}\cup \mathit {\Sigma !}), \mathit {RS}\cap \overline{\mathit {RS}}=\emptyset \rbrace , \end{array} \end{equation*}
where \(\mathbb {D}\) is some domain.
Definition 3.4 (Timed Trace Set).
Let \(\mathbb {TT}=(\mathbb {C}\cup \mathbb {W}\cup \lbrace \tau \rbrace)^{\ast }\cup (\mathbb {C}\cup \mathbb {W}\cup \lbrace \tau \rbrace)^{\omega }\) be the timed trace set, \(\epsilon\) the empty trace and \(^{\smallfrown }\) the concatenation between trace blocks. Then, the algebra \((\mathbb {TT}, \epsilon , {}{^{\smallfrown }})\) satisfies the following axioms, where \(^{\smallfrown }\) takes precedence over \(=\) .
Associativity
\(\forall \texttt {tt}_0, \texttt {tt}_1, \texttt {tt}_2\in \mathbb {TT}\cdot ({\texttt {tt}_0}^{\smallfrown }{\texttt {tt}_1})^{\smallfrown }\texttt {tt}_2={\texttt {tt}_0}^{\smallfrown }({\texttt {tt}_1}^{\smallfrown }\texttt {tt}_2)\)
Unit
\(\forall \texttt {tt}\in \mathbb {TT}\cdot \epsilon ^{\smallfrown }\texttt {tt}=\texttt {tt}^{\smallfrown }\epsilon =\texttt {tt}\)
Left Zero
\(\forall \texttt {tt}\in \mathbb {TT}\cdot \langle +\infty , \mathit {RS}\rangle ^{\smallfrown }\texttt {tt}=\langle +\infty , \mathit {RS}\rangle\)
Additivity
\(\langle \delta _0, \mathit {RS}\rangle ^{\smallfrown }\langle \delta _1, \mathit {RS}\rangle =\langle \delta _0+\delta _1, \mathit {RS}\rangle\)
The axiom Left Zero gives rise to a simplifying function \(\mathit {lz}:\mathbb {TT}\rightarrow \mathbb {TT}\) that removes the redundant tail from each trace. For example, \(\mathit {lz}(\langle +\infty , \mathit {RS}\rangle ^{\smallfrown }\langle 2,\mathit {RS}\rangle)=\langle +\infty , \mathit {RS}\rangle\) . The axiom Additivity gives rise to a simplifying function \(\mathit {add}:\mathbb {TT}\rightarrow \mathbb {TT}\) that merges all consecutive wait blocks with the same ready set in each trace. For example, \(\mathit {add}(\langle 1, \mathit {RS}\rangle ^{\smallfrown }\langle 2, \mathit {RS}\rangle)=\langle 3, \mathit {RS}\rangle\) . The composite function \(\mathit {add}\circ \mathit {lz}:\mathbb {TT}\rightarrow \mathbb {TT}\) then returns the simplest form of each trace. A trace \(\texttt {tt}\) is said to be in canonical normal form if \(|\texttt {tt}|=|\mathit {add}\circ \mathit {lz}(\texttt {tt})|\) , where \(|\cdot |\) returns the length of a trace. In other words, \(\texttt {tt}\) is canonical if it cannot be simplified. In what follows, all traces are assumed to be in canonical normal form unless otherwise stated.
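Representing blocks as, e.g., `('wait', delta, rs)` tuples and `'tau'` markers (an encoding of our own), the two simplifying functions and the canonicity test can be sketched for finite traces as follows:

```python
import math

def lz(tt):
    """Left Zero: drop every block after an infinite wait block."""
    out = []
    for blk in tt:
        out.append(blk)
        if isinstance(blk, tuple) and blk[0] == 'wait' and blk[1] == math.inf:
            break
    return out

def add(tt):
    """Additivity: merge consecutive wait blocks sharing the same ready set."""
    out = []
    for blk in tt:
        prev = out[-1] if out else None
        if (isinstance(blk, tuple) and blk[0] == 'wait' and isinstance(prev, tuple)
                and prev[0] == 'wait' and prev[2] == blk[2]):
            out[-1] = ('wait', prev[1] + blk[1], blk[2])
        else:
            out.append(blk)
    return out

def canonical(tt):
    """add ∘ lz: the simplest form of a (finite) trace."""
    return add(lz(tt))

def is_canonical(tt):
    return len(tt) == len(canonical(tt))
```

For example, `canonical` maps \(\langle +\infty , \mathit {RS}\rangle ^{\smallfrown }\langle 2,\mathit {RS}\rangle\) to \(\langle +\infty , \mathit {RS}\rangle\) and \(\langle 1, \mathit {RS}\rangle ^{\smallfrown }\langle 2, \mathit {RS}\rangle\) to \(\langle 3, \mathit {RS}\rangle\), matching the two axioms.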
We say \(\texttt {tt}_0\) is equivalent to \(\texttt {tt}_1\) , denoted by \(\texttt {tt}_0=\texttt {tt}_1\) , if each finite prefix of \(\texttt {tt}_0\) is also a prefix of \(\texttt {tt}_1\) , and vice versa. We say \(\texttt {tt}_0\lt \texttt {tt}_1\) if \(\texttt {tt}_0\) is finite and there exists a trace tt such that \({\texttt {tt}_0}^{\smallfrown }\texttt {tt}=\texttt {tt}_1\) , and in addition, we let \(\texttt {tt}_1-\texttt {tt}_0=\texttt {tt}\) . Therefore, \(\texttt {tt}_0\lt \texttt {tt}_1\) means \(\texttt {tt}_0\) is a finite prefix of \(\texttt {tt}_1\) , and \(\texttt {tt}_1-\texttt {tt}_0\) denotes the suffix after deleting \(\texttt {tt}_0\) .
Let \(\lim (\texttt {tt})\) be the sum of the time periods of all the wait blocks in tt. If no wait block appears in tt, then we let \(\lim (\texttt {tt})=0\) . A trace tt is non-terminated if \(\lim (\texttt {tt})=+\infty \vee |\texttt {tt}|=+\infty\) and divergent if \(\lim (\texttt {tt})\lt +\infty \wedge |\texttt {tt}|=+\infty\) . Let \(\texttt {tt}(n)\) be the nth (starting from 0) block of tt. If \(|\texttt {tt}|\lt +\infty\) , then for any \(n\ge |\texttt {tt}|\) , we let \(\texttt {tt}(n)=\texttt {tt}(|\texttt {tt}|-1)\) , i.e., the last block of tt. We say tt is deadlocked if
\begin{equation} \lim (\texttt {tt})=+\infty \wedge \exists N\in \mathbb {N}\cdot \forall n\ge N\cdot \texttt {tt}(n)=\langle \delta , \mathit {RS}\rangle \wedge \mathit {RS}\ne \emptyset , \tag{1} \end{equation}
where \(\mathbb {N}\) is the set of natural numbers. Equation (1) says that, from a certain time point, the process will be waiting for communication forever. Clearly, a deadlock is non-terminated, because \(\lim (\texttt {tt})=+\infty\) . This non-termination occurs because some channel operations are never matched. From the view of failures-divergence model [54], the standard denotational semantics for CSP, a ready set is exactly the dual of a refusal set, which contains the events (channel operations) that are not refused. Thus, Equation (1) means that, from a certain time point, the dual channel operations in tt’s ready set are always refused by the environment, which leads to a deadlock. Especially, if \(\lim (\texttt {tt})=+\infty \wedge |\texttt {tt}|\lt +\infty\) , then the last block of tt must be a wait block \(\langle +\infty , \mathit {RS}\rangle\) , in which case tt will be deadlocked if \(\mathit {RS}\ne \emptyset\) .
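For a finite trace whose last block is an infinite wait, these definitions specialise to a simple check; the following sketch works on finite tuple-encoded traces (an infinite-length trace cannot, of course, be inspected exhaustively, and the helper names are ours):

```python
import math

def lim(tt):
    """Sum of the durations of all wait blocks of a finite trace."""
    return sum(b[1] for b in tt if isinstance(b, tuple) and b[0] == 'wait')

def deadlocked_finite(tt):
    """A finite trace deadlocks iff its last block is <+inf, RS> with RS != ∅."""
    if not tt:
        return False
    last = tt[-1]
    return (isinstance(last, tuple) and last[0] == 'wait'
            and last[1] == math.inf and len(last[2]) > 0)
```

This mirrors the remark above: if \(\lim (\texttt {tt})=+\infty\) and \(|\texttt {tt}|\lt +\infty\), the trace ends with \(\langle +\infty , \mathit {RS}\rangle\) and is deadlocked exactly when \(\mathit {RS}\ne \emptyset\).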
Example 3.5 (Zeno’s Paradox).
Consider a ball falling from an initial height h with initial velocity 0. The ball falls freely with its dynamics given by the ODE \(\dot{v}=-g\) , where v is the velocity and g is the gravitational acceleration. After \(\sqrt {2h/g}\) time units, it hits the ground ( \(h=0\) ) with \(v=-\sqrt {2hg}\) . Then, there is a discontinuous update in its velocity. This discrete change can be modelled as a mode switch with the update given by \(v:= -c\cdot v\) . This assumes that the collision is inelastic and that the velocity decreases by a factor c, for some appropriate constant \(0\lt c\lt 1\) . Next, the ball bounces back with velocity \(c\sqrt {2hg}\) and reaches its peak ( \(v=0\) ) after \(c\sqrt {2h/g}\) time units. The ball then falls again, and this process repeats infinitely. In summary, the timed trace of the bouncing ball is \(\langle \sqrt {2h/g}, \emptyset \rangle ^{\smallfrown }\tau ^{\smallfrown }\langle 2c\sqrt {2h/g}, \emptyset \rangle ^{\smallfrown }\tau ^{\smallfrown }\cdots ^{\smallfrown }\langle 2c^{n+1}\sqrt {2h/g}, \emptyset \rangle ^{\smallfrown }\tau ^{\smallfrown }\cdots\) , where the internal blocks ( \(\tau\) s) denote the mode switch of the velocity ( \(v:= -c\cdot v\) ). The limit of this timed trace is \(\frac{1+c}{1-c}\cdot \sqrt {2h/g}\) , but its length is infinite, i.e., the trace is divergent.
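The total time of the bouncing-ball trace is the geometric series \(\sqrt {2h/g}\cdot (1+2c+2c^2+\cdots)\), whose closed form is the limit stated above. This can be checked numerically (the function names are ours):

```python
import math

def bounce_durations(h, g, c, n):
    """Durations of the first n wait blocks of the bouncing-ball trace."""
    t0 = math.sqrt(2 * h / g)                     # duration of the first fall
    return [t0] + [2 * c**(k + 1) * t0 for k in range(n - 1)]

def zeno_limit(h, g, c):
    """Closed form of the total time: (1+c)/(1-c) * sqrt(2h/g)."""
    return (1 + c) / (1 - c) * math.sqrt(2 * h / g)
```

With, say, h = 10, g = 9.8, and c = 0.5, the partial sums of `bounce_durations` converge rapidly to `zeno_limit(10, 9.8, 0.5)`, even though the trace itself has infinitely many blocks: a divergence in the sense defined above.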

3.3 Parallel Composition

For each pair of processes in parallel, we assume that they communicate over a set of channels I. Concretely, they can only exchange messages or data synchronously via the channels in I, and they communicate with the environment via channels outside I. In Reference [54], this form of parallel composition is called generalised or interface parallel, and in Reference [64], it is called alphabetised parallel and the channels in I are called common channels. Therefore, in this article, the set I is called a common channel set between processes (traces). For simplicity, we also assume:
(1)
Each input or output channel can only be owned by one sequential process, i.e., a channel operation like \({ch}?\) cannot occur in two different sequential processes in parallel.
(2)
Any pair of symmetric input and output channels cannot be owned by one sequential process, i.e., channel operations like \({ch}?\) and \({ch}!\) cannot occur in one sequential process.
Such restrictions are not essential, as the general communication model can always be reduced to this simplified model; please refer to Reference [27] for the details.
Let \(\texttt {tt}_0\Vert _I\texttt {tt}_1\leadsto \texttt {tt}\) denote that the parallel composition of two timed traces \(\texttt {tt}_0\) and \(\texttt {tt}_1\) over a common channel set I derives a new timed trace \(\texttt {tt}\) . Since infinite timed traces are included, we give the following coinductive rules to define the parallel composition of timed traces, where \(^{\smallfrown }\) binds tighter than \(\Vert _I\) , which binds tighter than \(\leadsto\) , and \(\uplus\) denotes disjoint union.
Rule [Empty] says that the parallel composition of two empty traces trivially reduces to an empty trace. Rule [SynWait] states that when both sides are waiting for a communication, they can be synchronised for a period of the same length if their ready sets are unmatched, i.e., \(\overline{\mathit {RS}_0}\cap \mathit {RS}_1=\emptyset\) ; conversely, \(\overline{\mathit {RS}_0}\cap \mathit {RS}_1\ne \emptyset\) indicates that the two sides are waiting for a communication with some channel operations matched, which should be taken immediately, so no time synchronisation is allowed. Rule [SynIO] synchronises the input and output communications along some channel \({ch}\in I\) . Synchronisation is never applied to channels outside I or to internal actions \(\tau\) , as stated by Rules [NoSynIO] and [ \(\tau\) -Act]. In addition, there are symmetric versions of these rules, which means that \(\Vert _I\) is commutative.
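To illustrate how these rules cooperate, the following Python sketch composes finite timed traces under a fixed, deterministic rule order. It is only an approximation of the coinductive definition: infinite traces, non-deterministic interleaving, and the additivity normalisation of adjacent wait blocks (Definition 3.4) are all omitted, and the block encodings `('tau',)`, `('io', op, v)`, and `('wait', d, rs)` are our own.

```python
def dual(op):
    """Dual of a channel operation: 'ch?' <-> 'ch!'."""
    return op[:-1] + ('!' if op.endswith('?') else '?')

def compose(t0, t1, I):
    """Parallel composition of two finite timed traces over the common
    channel set I.  A block is ('tau',), ('io', op, value), or
    ('wait', duration, ready_set).  Returns the merged trace, or None
    if the traces are uncomposable."""
    if not t0 and not t1:
        return []                                            # [Empty]
    for a, b in ((t0, t1), (t1, t0)):
        if a and a[0][0] == 'tau':                           # [tau-Act]
            rest = compose(a[1:], b, I)
            return None if rest is None else [('tau',)] + rest
        if a and a[0][0] == 'io' and a[0][1][:-1] not in I:  # [NoSynIO]
            rest = compose(a[1:], b, I)
            return None if rest is None else [a[0]] + rest
    if t0 and t1 and t0[0][0] == 'io' and t1[0][0] == 'io':  # [SynIO]
        _, op, v = t0[0]
        if t1[0] == ('io', dual(op), v):
            rest = compose(t0[1:], t1[1:], I)
            return None if rest is None else [('io', op[:-1], v)] + rest
        return None
    if t0 and t1 and t0[0][0] == 'wait' and t1[0][0] == 'wait':  # [SynWait]
        _, d0, rs0 = t0[0]
        _, d1, rs1 = t1[0]
        if any(dual(op) in rs1 for op in rs0):
            return None             # matched operations must fire at once
        d = min(d0, d1)
        def shrink(blk):
            return [] if blk[1] == d else [('wait', blk[1] - d, blk[2])]
        rest = compose(shrink(t0[0]) + list(t0[1:]),
                       shrink(t1[0]) + list(t1[1:]), I)
        return None if rest is None else [('wait', d, rs0 | rs1)] + rest
    return None                                              # uncomposable
```

Running this sketch on the traces of Example 3.6 below yields \(\langle 2, \lbrace {ch}?\rbrace \rangle ^{\smallfrown }\langle {ch}, 5\rangle\) , while the traces of Example 3.9 are rejected as uncomposable.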
Property 1 (Associativity).
Let \(I_0\) , \(I_1\) and \(I_2\) be the respective common channel sets between timed traces \(\texttt {tt}_0\) and \(\texttt {tt}_1\) , \(\texttt {tt}_1\) and \(\texttt {tt}_2\) , and \(\texttt {tt}_2\) and \(\texttt {tt}_0\) , then \((\texttt {tt}_0\Vert _{I_0}\texttt {tt}_1)\Vert _{I_1\uplus I_2}\texttt {tt}_2=\texttt {tt}_0\Vert _{I_0\uplus I_2}(\texttt {tt}_1\Vert _{I_1}\texttt {tt}_2)\) .
Example 3.6.
The timed trace \(\texttt {tt}_0=\langle 2, \lbrace {ch}?\rbrace \rangle ^{\smallfrown }\langle {ch}?, 5\rangle\) indicates a delay for 2 time units, during which the channel operation \({ch}?\) is waiting, followed by receiving the value 5 along channel \({ch}\) . The trace \(\texttt {tt}_1=\langle 2, \emptyset \rangle ^{\smallfrown }\langle {ch}!, 5\rangle\) indicates a delay for 2 time units, during which no channel operations are waiting, followed by sending the value 5 along channel \({ch}\) . The parallel composition of \(\texttt {tt}_0\) and \(\texttt {tt}_1\) over the channel \({ch}\) is
\begin{equation*} \begin{array}{ccc} \texttt {tt}_0\Vert _{\lbrace {ch}\rbrace }\texttt {tt}_1 &\leadsto & \langle 2, \lbrace {ch}?\rbrace \rangle ^{\smallfrown }\langle {ch}, 5\rangle \end{array} \end{equation*}
by [SynWait], [SynIO], and [Empty]. In fact, recording \({ch}?\) in the ready set of the composed trace is not necessary, as \({ch}\) is a channel that only connects \(\texttt {tt}_0\) and \(\texttt {tt}_1\) , making it impossible for the environment to synchronise via \({ch}\) . However, recording \({ch}?\) has no side effect; it just indicates that the process itself was waiting for the channel operation \({ch}!\) during the period, no matter whether \({ch}\) is a common channel between \(\texttt {tt}_0\) and \(\texttt {tt}_1\) or not. This notion is in accordance with the semantics of HCSP (please refer to Reference [64] for the details).
Example 3.7.
The timed trace \(\texttt {tt}_0=\langle \mathit {dh}?, 3\rangle ^{\smallfrown }\langle 2, \emptyset \rangle\) indicates receiving the value 3 along channel \(\mathit {dh}\) at the initial time, followed by a delay of 2 time units during which no channel operations are waiting. Let \(\texttt {tt}_1=\langle 1, \emptyset \rangle ^{\smallfrown }\langle 1, \lbrace {ch}?\rbrace \rangle\) . Then, we have
\begin{equation*} \begin{array}{ccc} \texttt {tt}_0\Vert _{\lbrace {ch}\rbrace }\texttt {tt}_1 &\leadsto & \langle \mathit {dh}?, 3\rangle ^{\smallfrown }\langle 1, \emptyset \rangle ^{\smallfrown }\langle 1, \lbrace {ch}?\rbrace \rangle \end{array} \end{equation*}
by [NoSynIO], [SynWait], [Empty], and Additivity of wait blocks (see Definition 3.4). Note that if \(\texttt {tt}_0\) and \(\texttt {tt}_1\) also communicate via \(\mathit {dh}\) , then no rules can be applied for \(\texttt {tt}_0\Vert _{\lbrace {ch}, \mathit {dh}\rbrace }\texttt {tt}_1\) , because \(\mathit {dh}?\) cannot be synchronised, i.e., they are uncomposable.
Example 3.8.
Let \(\texttt {tt}_0=\langle {ch}?, 3\rangle ^{\smallfrown }\langle \mathit {dh}?, 4\rangle\) and \(\texttt {tt}_1=\langle {ch}!, 3\rangle ^{\smallfrown }\langle \mathit {dh}!, 5\rangle\) , then no rules can be applied for \(\texttt {tt}_0\Vert _{\lbrace {ch},\mathit {dh}\rbrace }\texttt {tt}_1\) as the values along channel \(\mathit {dh}\) are different although the heads of \(\texttt {tt}_0\) and \(\texttt {tt}_1\) are matched by [SynIO], i.e., \(\texttt {tt}_0\) and \(\texttt {tt}_1\) are uncomposable.
Example 3.9.
Let \(\texttt {tt}_0=\langle 1, \lbrace {ch}?\rbrace \rangle\) and \(\texttt {tt}_1=\langle 1, \lbrace {ch}!\rbrace \rangle\) , then no rules can be applied for \(\texttt {tt}_0\Vert _{\lbrace {ch}\rbrace }\texttt {tt}_1\) , as it is impossible that the symmetric channel operations \({ch}?\) and \({ch}!\) are waiting simultaneously, indicating that \(\texttt {tt}_0\) and \(\texttt {tt}_1\) are uncomposable.
Example 3.10.
Consider two infinite timed traces:
\begin{equation*} \begin{array}{rcl} \texttt {tt}_0&=&\langle {ch}?, 0\rangle ^{\smallfrown }\langle {ch}?, 1\rangle ^{\smallfrown }\cdots \langle {ch}?, n\rangle ^{\smallfrown }\cdots ,\\ \texttt {tt}_1&=&\tau ^{\smallfrown }\langle {ch}!, 0\rangle ^{\smallfrown }\tau ^{\smallfrown }\langle {ch}!, 1\rangle ^{\smallfrown }\cdots ^{\smallfrown }\tau ^{\smallfrown }\langle {ch}!, n\rangle ^{\smallfrown }\cdots . \end{array} \end{equation*}
The timed trace \(\texttt {tt}_0\) indicates that the process is receiving values along channel \({ch}\) endlessly at the current time point. The timed trace \(\texttt {tt}_1\) indicates infinitely many sending events at the current time point, each preceded by an internal action. Their parallel composition over \({ch}\) is also infinite:
\begin{equation*} \begin{array}{rcl} \texttt {tt}_0\Vert _{\lbrace {ch}\rbrace }\texttt {tt}_1&\leadsto &\tau ^{\smallfrown }\langle {ch}, 0\rangle ^{\smallfrown }\tau ^{\smallfrown }\langle {ch}, 1\rangle ^{\smallfrown }\cdots ^{\smallfrown }\tau ^{\smallfrown }\langle {ch}, n\rangle ^{\smallfrown }\cdots \end{array} \end{equation*}
by [ \(\tau\) -Act] and [SynIO]. It is a divergence as the time never progresses.
Example 3.11.
Let \(\texttt {tt}_0=\tau ^{\smallfrown }\langle 1,\lbrace {ch}?\rbrace \rangle ^{\smallfrown }\langle 2,\emptyset \rangle\) and \(\texttt {tt}_1=\langle 2,\emptyset \rangle ^{\smallfrown }\tau ^{\smallfrown }\langle 1,\lbrace {ch}!\rbrace \rangle\) . Then, their parallel composition via channel \({ch}\) is
\begin{equation*} \begin{array}{rcl} \texttt {tt}_0\Vert _{\lbrace {ch}\rbrace }\texttt {tt}_1 &\leadsto & \tau ^{\smallfrown }\langle 1,\lbrace {ch}?\rbrace \rangle ^{\smallfrown }\langle 1,\emptyset \rangle ^{\smallfrown }\tau ^{\smallfrown }\langle 1,\lbrace {ch}!\rbrace \rangle \end{array} \end{equation*}
by [ \(\tau\) -Act], [SynWait], and Additivity of wait blocks (see Definition 3.4). Notice that in this example, the symmetric channel operations \({ch}?\) and \({ch}!\) appear in one timed trace. Similar to Example 3.6, \({ch}\) is a common channel between \(\texttt {tt}_0\) and \(\texttt {tt}_1\) and therefore does not connect to the environment. The appearance of \({ch}?\) and \({ch}!\) in one timed trace results from parallel composition, indicating that this trace denotes a parallel process, not a sequential one. Indeed, we can construct the following parallel HCSP processes:
\begin{equation*} \begin{array}{ccl} \rm {P}&\widehat{=}&x:= 0;\langle \dot{x}=1\&x\lt 1\rangle \unrhd ⫾ ({ch}?x\rightarrow {\bf \textsf {skip}});{\bf \textsf {wait}}(2),\\ \rm {Q}&\widehat{=}&{\bf \textsf {wait}}(2);y:= 0;\langle \dot{y}=1\&y\lt 1\rangle \unrhd ⫾ ({ch}!y\rightarrow {\bf \textsf {skip}}). \end{array} \end{equation*}
The timed trace of \(\textsf {P}\Vert \rm {Q}\) is exactly \(\tau ^{\smallfrown }\langle 1,\lbrace {ch}?\rbrace \rangle ^{\smallfrown }\langle 1,\emptyset \rangle ^{\smallfrown }\tau ^{\smallfrown }\langle 1,\lbrace {ch}!\rbrace \rangle\) as shown above.

4 Hybrid Processes

Following Hehner’s philosophy “programs as predicates” [25], the first-order relational calculus of UTP [28] can be used to model programs specified in possibly different or heterogeneous programming styles. In this section, we introduce continuous dynamics into relational predicates to model hybrid systems, and use the term hybrid processes to denote such extended predicates. Hybrid processes serve as the basis of the HUTP theory in this article.
To use UTP for modelling the behaviours of hybrid systems, one first needs to extend it with an explicit notion of time. This can be done by introducing two observational variables \(\mathit {ti}, \mathit {ti}^{\prime }\in \mathbb {R}_{\ge 0}\cup \lbrace +\infty \rbrace\) to specify the start- and end-time of the observed behaviour. Let \(\mathit {tr}\in \mathit {add}\circ \mathit {lz}(\mathbb {TT})\) , where \(\mathit {add}\circ \mathit {lz}(\mathbb {TT})\) denotes the set of canonical normal traces, represent the timed trace before the process is started, and let \(\mathit {tr}^{\prime }\in \mathit {add}\circ \mathit {lz}(\mathbb {TT})\) stand for the timed trace up to the moment of observation.
In a real-time setting, process state variables are interpreted as functions over time, and we use \({s̰}\) to represent the real-time value of a state variable s. In summary, we have three versions of each state variable s:
the version \(s\in \mathbb {D}\) stands for its initial value in the domain \(\mathbb {D}\) , i.e., the input state variable, where \(\mathbb {D}\) could be a Banach space;
the primed version \(s^{\prime }\in \mathbb {D}\) stands for the final value, i.e., the output state variable; and
the real time version \({s̰}:[\mathit {ti}, \mathit {ti}^{\prime })\rightarrow \mathbb {D}\) stands for its dynamic trajectory from the start time \(\mathit {ti}\) to the end time \(\mathit {ti}^{\prime }\) , and \({ṡ̰}:(\mathit {ti}, \mathit {ti}^{\prime })\rightharpoonup \mathbb {D}\) is a partial function denoting the derivative of \({s̰}\) .
We use the boldface symbols \(\mathbf {s}\) , \(\mathbf {s}^{\prime }\) , \({\mathbf {s}̰}\) , and \({\mathbf {s}̰̇}\) to denote the respective vectors of input, output, and real-time state variables and the derivatives. By default, the alphabet that our theory depends on is \(\lbrace \mathit {ti}, \mathit {ti}^{\prime }, \mathit {tr}, \mathit {tr}^{\prime }, \mathbf {s}, {\mathbf {s}̰}, {\mathbf {s}̰̇}, \mathbf {s}^{\prime }\rbrace\) . Therefore, the first-order predicate \(P(\mathbf {x},\mathbf {x}^{\prime })\) introduced in Section 2 can be extended to the higher-order differential relation \(P(\mathit {ti}, \mathit {ti}^{\prime }, \mathit {tr}, \mathit {tr}^{\prime }, \mathbf {s}, {\mathbf {s}̰}, {\mathbf {s}̰̇}, \mathbf {s}^{\prime })\) . By introducing healthiness conditions for higher-order differential relations, we can then define hybrid processes.

4.1 Healthiness Conditions

As stated in Section 2, healthiness conditions play an important role in UTP. Therefore, we introduce the following healthiness conditions to identify the features of hybrid processes:
Time must be irreversible, and trace should be monotonically increasing, because processes are not permitted to undo past events, i.e.,
\begin{equation*} \begin{array}{ccc} \mathcal {H}_{\mbox{0}}(X)&=&X\wedge \mathit {ti}\le \mathit {ti}^{\prime }\wedge \mathit {tr}\le \mathit {tr}^{\prime }. \end{array} \end{equation*}
This healthiness condition is a variant of \(\mathbf {R1}\) of reactive processes in References [15, 28]. However, \(\mathbf {R1}\) does not deal with time. Similar variants include \(\mathbf {HC1}\) in Reference [23] and \(\mathbf {HCT1}\) in Reference [17], but they do not involve the concept of traces.
The trace should be consistent with time:
\begin{equation*} \begin{array}{ccc} \mathcal {H}_{\mbox{1}}(X)&=&X\wedge \mathit {ti}=\lim (\mathit {tr})\wedge \mathit {ti}^{\prime }=\lim (\mathit {tr}^{\prime }). \end{array} \end{equation*}
If the preceding process does not terminate, i.e., \(\mathit {ti}=+\infty \vee |\mathit {tr}|=+\infty\) , then the current process should do nothing but keep the time and trace observations unchanged, i.e.,
\begin{equation*} \begin{array}{ccc} \mathcal {H}_{\mbox{2}}(X)&=&(\mathit {ti}=\mathit {ti}^{\prime }\wedge \mathit {tr}=\mathit {tr}^{\prime })\lhd (\mathit {ti}=+\infty \vee |\mathit {tr}|=+\infty)\rhd X, \end{array} \end{equation*}
where \(P\lhd b\rhd Q~\widehat{=}~(b\wedge P)\vee (\lnot b\wedge Q)\) . This healthiness condition is similar to \(\mathbf {R3}\) in Reference [28] and \(\mathbf {R3}_h\) in Reference [15], except that we specify the auxiliary variable \(\mathit {wait}\) explicitly as \(\mathit {ti}=+\infty \vee |\mathit {tr}|=+\infty\) . Hence, unlike in classical UTP, variables \(\mathit {wait}\) and \(\mathit {wait}^{\prime }\) are not used in our HUTP.
If the current process does not terminate, i.e., \(\mathit {ti}^{\prime }=+\infty \vee |\mathit {tr}^{\prime }|=+\infty\) , then the values of the output state variables are unobservable, i.e.,
\begin{equation*} \begin{array}{ccc} \mathcal {H}_{\mbox{3}}(X)&=&(\exists \mathbf {s}^{\prime }\cdot X)\lhd (\mathit {ti}^{\prime }=+\infty \vee |\mathit {tr}^{\prime }|=+\infty)\rhd X. \end{array} \end{equation*}
As stated above, \(\mathit {ti}^{\prime }=+\infty \vee |\mathit {tr}^{\prime }|=+\infty\) specifies the variable \(\mathit {wait}^{\prime }\) in the traditional definition of reactive processes in UTP, where \(\mathit {wait}^{\prime }\) being \(\mathbf {true}\) denotes the observation that all primed variables stand for intermediate observations, not final ones. However, in our work, we hide the primed state variables by an existential quantifier when \(\mathit {ti}^{\prime }=+\infty \vee |\mathit {tr}^{\prime }|=+\infty\) , since observing state values at time infinity makes no sense. The idea of \(\mathcal {H}_{\mbox{3}}\) is similar to \(\mathbf {R4}\) for a healthy process in Reference [9], which expresses the intuition that “program variable state is not visible while waiting for external events.”
If the process evolves for a period of time, i.e., \(\mathit {ti}\lt \mathit {ti}^{\prime }\) , then the real-time value \({\mathbf {s}̰}\) should be right-continuous and semi-differentiable, i.e.,
\begin{equation*} \begin{array}{ccc} \mathcal {H}_{\mbox{4}}(X)&=&X\wedge {RC}\wedge \mathit {SD,} \end{array} \end{equation*}
where
\begin{equation*} \begin{array}{rcl} {RC}&\widehat{=}&\forall i\cdot \forall t\in [\mathit {ti}, \mathit {ti}^{\prime })\cdot {\mathbf {s}̰}_i(t)={\mathbf {s}̰}_i(t^+),\\ {SD}&\widehat{=}&\forall i\cdot \forall t\in (\mathit {ti}, \mathit {ti}^{\prime })\cdot \exists d_0, d_1\cdot {\mathbf {s}̰̇}_i(t^-)=d_0\wedge {\mathbf {s}̰̇}_i(t^+)=d_1 \end{array} \end{equation*}
denote right-continuity and semi-differentiability, respectively, and \(\mathbf {s}_i\) ( \({\mathbf {s}̰}_i\) and \(\mathbf {s}_i^{\prime }\) ) is the ith variable in \(\mathbf {s}\) ( \({\mathbf {s}̰}\) and \(\mathbf {s}^{\prime }\) ). \(\mathcal {H}_{\mbox{4}}\) rules out ill-behaved functions, such as the Dirichlet function (returning 1 if t is a rational number and 0 otherwise) and the Weierstrass function (continuous everywhere but differentiable nowhere).
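As an illustration of how \(\mathcal {H}_{\mbox{0}}\) and \(\mathcal {H}_{\mbox{1}}\) constrain observations, the following sketch (our own encoding, which ignores \(\mathcal {H}_{\mbox{2}}\) – \(\mathcal {H}_{\mbox{4}}\) and represents a trace as a list of blocks as in Section 3) checks time irreversibility, trace monotonicity, and trace/time consistency on a single observation:

```python
def lim(tr):
    """End time of a timed trace: the sum of its wait-block durations
    (communication and tau blocks are instantaneous)."""
    return sum(blk[1] for blk in tr if blk[0] == 'wait')

def is_prefix(tr0, tr1):
    """tr0 <= tr1 in the prefix order on traces."""
    return tr1[:len(tr0)] == tr0

def h0_h1_healthy(obs):
    """Check H0 (time irreversible, trace monotone) and H1 (trace
    consistent with time) on one observation, given as a dict with
    keys ti, ti2 (for ti'), tr, and tr2 (for tr')."""
    return (obs['ti'] <= obs['ti2']            # H0: time irreversible
            and is_prefix(obs['tr'], obs['tr2'])   # H0: trace monotone
            and obs['ti'] == lim(obs['tr'])        # H1: ti = lim(tr)
            and obs['ti2'] == lim(obs['tr2']))     # H1: ti' = lim(tr')
```

For example, an observation that starts at time 0 with an empty trace and ends at time 2 with trace \(\langle 2, \emptyset \rangle ^{\smallfrown }\langle {ch}, 5\rangle\) is healthy, whereas the same observation with the end time changed to 1 violates \(\mathcal {H}_{\mbox{1}}\) .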
Definition 4.1 (Hybrid Process).
A hybrid process HP is a fixed point of \(X=\mathcal {H}_{\mbox{HP}}(X)\) , where
\begin{equation*} \begin{array}{ccc} \mathcal {H}_{\mbox{HP}}&\widehat{=}&\mathcal {H}_{\mbox{0}}\circ \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{2}}\circ \mathcal {H}_{\mbox{3}}\circ \mathcal {H}_{\mbox{4}}, \end{array} \end{equation*}
and the alphabet of HP is \(\alpha (\textsf {HP})~\widehat{=}~\lbrace \mathit {ti}, \mathit {ti}^{\prime }, \mathit {tr}, \mathit {tr}^{\prime }, \mathbf {s}, {\mathbf {s}̰}, {\mathbf {s}̰̇}, \mathbf {s}^{\prime }\rbrace\) .
Since \(\mathit {wait}\) and \(\mathit {wait}^{\prime }\) , used to denote reactivity in discrete models [15, 28], are interpreted, respectively, as \(\mathit {ti}=+\infty \vee |\mathit {tr}|=+\infty\) and \(\mathit {ti}^{\prime }=+\infty \vee |\mathit {tr}^{\prime }|=+\infty\) in \(\mathcal {H}_{\mbox{2}}\) and \(\mathcal {H}_{\mbox{3}}\) , the hybrid processes defined by Definition 4.1 are essentially reactive.
Theorem 4.2 (Idempotence and Monotonicity).
\(\mathcal {H}_{\mbox{HP}}\) is idempotent and monotonic.

4.2 Operations

4.2.1 Meet and Join.

Let \(\textsf {HP}_0\) and \(\textsf {HP}_1\) be two hybrid processes. Non-determinism is simply modelled by logical disjunction:
\begin{equation*} \begin{array}{rcl} \textsf {HP}_0\sqcap \textsf {HP}_1&\widehat{=}&\textsf {HP}_0\vee \textsf {HP}_1. \end{array} \end{equation*}
It stands for a process that internally chooses to execute either \(\textsf {HP}_0\) or \(\textsf {HP}_1\) , regardless of the environment. The dual of non-deterministic choice is conjunction:
\begin{equation*} \begin{array}{rcl} \textsf {HP}_0\sqcup \textsf {HP}_1&\widehat{=}&\textsf {HP}_0\wedge \textsf {HP}_1. \end{array} \end{equation*}
The alphabets are expanded by \(\sqcap\) and \(\sqcup\) , i.e., \(\alpha (\textsf {HP}_0\sqcap \textsf {HP}_1)=\alpha (\textsf {HP}_0\sqcup \textsf {HP}_1)=\alpha (\textsf {HP}_0)\cup \alpha (\textsf {HP}_1)\) . In the literature, \(\sqcap\) is also called demonic choice or meet, while \(\sqcup\) is called angelic choice or join. In general lattice notation, \(\sqcap\) corresponds to \(\wedge\) and \(\sqcup\) to \(\vee\) , but in UTP [15, 28], they always denote \(\vee\) and \(\wedge\) , respectively. Thus, in this article, we follow the UTP convention.
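In a finite toy model where a process is simply a set of observations, \(\sqcap\) and \(\sqcup\) become set union and intersection. The sketch below is an illustration under this simplification, not the full predicate semantics:

```python
def meet(hp0, hp1):
    """Demonic choice HP0 ⊓ HP1, i.e., disjunction: either behaviour
    may occur, so the behaviour sets are united."""
    return hp0 | hp1

def join(hp0, hp1):
    """Angelic choice HP0 ⊔ HP1, i.e., conjunction: only the behaviours
    common to both processes remain."""
    return hp0 & hp1
```

This makes the duality concrete: `meet` can only grow the behaviour set, while `join` can only shrink it, matching the refinement ordering on predicates.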
Property 2.
If \(\textsf {HP}_0\) and \(\textsf {HP}_1\) are \(\mathcal {H}_{\mbox{HP}}\) -healthy, then so are \(\textsf {HP}_0\sqcap \textsf {HP}_1\) and \(\textsf {HP}_0\sqcup \textsf {HP}_1\) .

4.2.2 Sequential Composition.

The sequential composition in traditional UTP can be adapted to hybrid processes:
\begin{equation*} \begin{array}{rcl} \textsf {HP}_0⨟ \textsf {HP}_1&\widehat{=}&\exists \mathit {ti}_\ast , \mathbf {s}_\ast , \mathit {tr}_\ast \cdot \textsf {HP}_0[\mathit {ti}_\ast , \mathbf {s}_\ast , \mathit {tr}_\ast /\mathit {ti}^{\prime }, \mathbf {s}^{\prime }, \mathit {tr}^{\prime }]\wedge \textsf {HP}_1[\mathit {ti}_\ast , \mathbf {s}_\ast , \mathit {tr}_\ast /\mathit {ti}, \mathbf {s}, \mathit {tr}], \end{array} \end{equation*}
provided that \(\alpha _\mathit {out}(\textsf {HP}_0)=\alpha _\mathit {in}^{\prime }(\textsf {HP}_1)\) , where \(\alpha _\mathit {out}(\textsf {HP}_0)\) and \(\alpha _\mathit {in}(\textsf {HP}_1)\) denote the sets of output and input variables (containing time and trace variables) in \(\alpha (\textsf {HP}_0)\) and \(\alpha (\textsf {HP}_1)\) , respectively, and \(\alpha _\mathit {in}^{\prime }(\textsf {HP}_1)\) is the primed version by priming all the variables in \(\alpha _\mathit {in}(\textsf {HP}_1)\) . If \(\alpha _\mathit {out}(\textsf {HP}_0)\ne \alpha _\mathit {in}^{\prime }(\textsf {HP}_1)\) , then we can extend the alphabets by \(\alpha _\mathit {out}^+(\textsf {HP}_0)=\alpha _\mathit {in}^+{}^{\prime }(\textsf {HP}_1)~\widehat{=}~\alpha _\mathit {out}(\textsf {HP}_0)\cup \alpha _\mathit {in}^{\prime }(\textsf {HP}_1)\) to ensure the well-definedness of sequential composition.
Property 3.
If \(\textsf {HP}_0\) and \(\textsf {HP}_1\) are \(\mathcal {H}_{\mbox{HP}}\) -healthy, then so is \(\textsf {HP}_0⨟ \textsf {HP}_1\) .
With the definition of sequential composition, we can prove the following property for the healthiness condition \(\mathcal {H}_{\mbox{3}}\) defined before.
Property 4.
The healthiness condition \(\mathcal {H}_{\mbox{3}}\) is equivalent to
\begin{equation*} \begin{array}{ccl} X&=&X⨟ \textsf {skip}, \end{array} \end{equation*}
where \(\textsf {skip}~\widehat{=}~\mathcal {H}_{\mbox{2}}(\mathit {ti}=\mathit {ti}^{\prime }\wedge \mathit {tr}=\mathit {tr}^{\prime }\wedge \mathbf {s}=\mathbf {s}^{\prime })\) .
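The definition of \(⨟\) and Property 4 can be illustrated on a finite relational toy model, in which a process is a set of (input, output) observation pairs and traces and healthiness conditions are ignored. This sketch is our own simplification:

```python
def seq(hp0, hp1):
    """Sequential composition HP0 ; HP1: existentially quantify the
    intermediate observation by joining outputs of hp0 with matching
    inputs of hp1."""
    return {(i, o) for (i, m0) in hp0 for (m1, o) in hp1 if m0 == m1}

def skip(space):
    """skip as the identity relation on a given observation space."""
    return {(o, o) for o in space}
```

On this model, composing a process with `skip` over its output space returns the process itself, mirroring Property 4 ( \(X=X⨟ \textsf {skip}\) ).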

4.2.3 Complement.

The complement of a hybrid process HP is interpreted by
\begin{equation*} \begin{array}{rcl} \lnot _\wr \textsf {HP}&\widehat{=}&\mathcal {H}_{\mbox{HP}}(\lnot \textsf {HP}). \end{array} \end{equation*}
Note that \(\lnot \textsf {HP}\) is not \(\mathcal {H}_{\mbox{HP}}\) -healthy, as it may violate constraints such as \(\mathit {tr}\le \mathit {tr}^{\prime }\) . Therefore, we enforce healthiness by \(\mathcal {H}_{\mbox{HP}}\) , resulting in \(\lnot _\wr \textsf {HP}\) . Intuitively, \(\lnot _\wr \textsf {HP}\) can be seen as the complement of \(\textsf {HP}\) in the context of hybrid processes, i.e., we can prove that \(\lnot _\wr \textsf {HP}\sqcap \textsf {HP}=\bot _\mbox{HP}\) and \(\lnot _\wr \textsf {HP}\sqcup \textsf {HP}=\top _\mbox{HP}\) . Then, we define implication by
\begin{equation*} \begin{array}{rcl} \textsf {HP}_0\Rightarrow _\wr \textsf {HP}_1&\widehat{=}&\lnot _\wr \textsf {HP}_0\vee \textsf {HP}_1. \end{array} \end{equation*}
These logic operators are similar to \(\lnot _r\) and \(\Rightarrow _r\) , used in Reference [15].

4.2.4 Quantification.

To support declaration, abstraction, and instantiation of variables, as in Reference [28], we introduce quantifications over the three versions s, \(s^{\prime }\) , and \({s̰}\) of an observable state variable s in a hybrid process \(\textsf {HP}\) . Existential quantifications over the first-order variables s and \(s^{\prime }\) are as usual:
\begin{equation*} \begin{array}{lcl} \exists s\cdot \textsf {HP}&\widehat{=}&\bigvee \lbrace \textsf {HP}[d/s]\mid d\in \mathbb {D}\rbrace ,\\ \exists s^{\prime }\cdot \textsf {HP}&\widehat{=}&\bigvee \lbrace \textsf {HP}[d/s^{\prime }]\mid d\in \mathbb {D}\rbrace . \end{array} \end{equation*}
The quantification of the real-time variable \({s̰}\) is higher-order, and it simultaneously quantifies the derivative of \({s̰}\) , as follows:
\begin{equation*} \begin{array}{lcl} \exists {s̰}\cdot \textsf {HP}&\widehat{=}&\bigvee \left\lbrace \textsf {HP}[f,g/{s̰},{ṡ̰}]\bigg \vert \begin{array}{c} f:[\mathit {ti},\mathit {ti}^{\prime })\rightarrow \mathbb {D}\qquad g:(\mathit {ti},\mathit {ti}^{\prime })\rightharpoonup \mathbb {D}\\ \forall t\in (\mathit {ti},\mathit {ti}^{\prime })\cdot \dot{f}(t^-)=g(t^-)\wedge \dot{f}(t^+)=g(t^+) \end{array}\!\! \right\rbrace . \end{array} \end{equation*}
Existential quantification removes the quantified variables from the alphabet of a process, i.e., \(\alpha (\exists s\cdot \textsf {HP})=\alpha (\textsf {HP})\backslash \lbrace s\rbrace\) , \(\alpha (\exists s^{\prime }\cdot \textsf {HP})=\alpha (\textsf {HP})\backslash \lbrace s^{\prime }\rbrace\) and \(\alpha (\exists {s̰}\cdot \textsf {HP})=\alpha (\textsf {HP})\backslash \lbrace {s̰}, {ṡ̰}\rbrace\) . Dually, universal quantifications of s, \(s^{\prime }\) , and \({s̰}\) can be defined by
\begin{equation*} \begin{array}{lcl} \forall s\cdot \textsf {HP}&\widehat{=}&\lnot _\wr (\exists s\cdot \lnot _\wr \textsf {HP}),\\ \forall s^{\prime }\cdot \textsf {HP}&\widehat{=}&\lnot _\wr (\exists s^{\prime }\cdot \lnot _\wr \textsf {HP}),\\ \forall {s̰}\cdot \textsf {HP}&\widehat{=}&\lnot _\wr (\exists {s̰}\cdot \lnot _\wr \textsf {HP}). \end{array} \end{equation*}
Since real-time variables are functions over time, such quantifications are higher-order (functional). It can also be proven that the quantifiers \(\exists\) and \(\forall\) are \(\mathcal {H}_{\mbox{HP}}\) -preserving. In addition, the higher-order \(\exists\) and \(\forall\) enjoy the following properties:
Property 5 (Quantification).
\begin{equation*} \begin{array}{rcl} \exists {x̰}\cdot (\textsf {HP}_0⨟ \textsf {HP}_1) &=& (\exists {x̰}\cdot \textsf {HP}_0)⨟ (\exists {x̰}\cdot \textsf {HP}_1), \\ \forall {x̰}\cdot (\textsf {HP}_0⨟ \textsf {HP}_1) &\sqsupseteq & (\forall {x̰}\cdot \textsf {HP}_0)⨟ (\forall {x̰}\cdot \textsf {HP}_1), \\ \exists {x̰}\cdot (\textsf {HP}_0\sqcap \textsf {HP}_1) &=& (\exists {x̰}\cdot \textsf {HP}_0)\sqcap (\exists {x̰}\cdot \textsf {HP}_1),\\ \forall {x̰}\cdot (\textsf {HP}_0\sqcap \textsf {HP}_1) &\sqsupseteq & (\forall {x̰}\cdot \textsf {HP}_0)\sqcap (\forall {x̰}\cdot \textsf {HP}_1), \\ \exists {x̰}\cdot (\textsf {HP}_0\sqcup \textsf {HP}_1) &\sqsubseteq & (\exists {x̰}\cdot \textsf {HP}_0)\sqcup (\exists {x̰}\cdot \textsf {HP}_1), \\ \forall {x̰}\cdot (\textsf {HP}_0\sqcup \textsf {HP}_1) &=& (\forall {x̰}\cdot \textsf {HP}_0)\sqcup (\forall {x̰}\cdot \textsf {HP}_1), \\ \lnot _\wr \exists {x̰}\cdot \textsf {HP} &=& \forall {x̰}\cdot \lnot _\wr \textsf {HP},\\ \lnot _\wr \forall {x̰}\cdot \textsf {HP} &=& \exists {x̰}\cdot \lnot _\wr \textsf {HP}. \end{array} \end{equation*}
Note that the refinement order \(\sqsubseteq\) for predicates introduced in Section 2 lifts naturally to hybrid processes. With the aid of higher-order quantifiers, the lexical locality of a variable s in a hybrid process \(\textsf {HP}\) can be specified by
\begin{equation*} \begin{array}{rcl} \mathbf {local}~s\cdot \textsf {HP}&\widehat{=}&\exists s, {s̰}, s^{\prime }\cdot \textsf {HP}. \end{array} \end{equation*}
Example 4.3.
Consider a hybrid process with independent state variables x and y:
\begin{equation*} \begin{array}{rcl} \textsf {HP}_0&\widehat{=}&\mathcal {H}_{\mbox{HP}}\! \left(\!\!\! \begin{array}{l} \mathit {ti}\lt \mathit {ti}^{\prime }\wedge \mathit {tr}^{\prime }-\mathit {tr}=\langle \mathit {ti}^{\prime }-\mathit {ti}, \emptyset \rangle \wedge \\ x={x̰}(\mathit {ti})\wedge x^{\prime }={x̰}(\mathit {ti}^{\prime -})\wedge y={y̰}(\mathit {ti})\wedge y^{\prime }={y̰}(\mathit {ti}^{\prime -})\\ \wedge \forall t\in (\mathit {ti}, \mathit {ti}^{\prime })\cdot \dot{{x̰}}(t)=1\wedge \dot{{y̰}}(t)=2 \end{array} \!\!\!\right), \end{array} \end{equation*}
with \(\alpha (\textsf {HP}_0)=\lbrace \mathit {ti}, \mathit {ti}^{\prime }, \mathit {tr}, \mathit {tr}^{\prime }, x, {x̰}, \dot{{x̰}}, x^{\prime }, y, {y̰}, \dot{{y̰}}, y^{\prime }\rbrace\) . A literal abstraction of variable x from \(\textsf {HP}_0\) is
\begin{equation*} \begin{array}{rcl} \mathbf {local}~x\cdot \textsf {HP}_0&=&\mathcal {H}_{\mbox{HP}}\! \left(\!\!\! \begin{array}{l} \mathit {ti}\lt \mathit {ti}^{\prime }\wedge \mathit {tr}^{\prime }-\mathit {tr}=\langle \mathit {ti}^{\prime }-\mathit {ti}, \emptyset \rangle \wedge \\ y={y̰}(\mathit {ti})\wedge y^{\prime }={y̰}(\mathit {ti}^{\prime -})\wedge \forall t\in (\mathit {ti}, \mathit {ti}^{\prime })\cdot \dot{{y̰}}(t)=2 \end{array} \!\!\!\right), \end{array} \end{equation*}
with the alphabet \(\lbrace \mathit {ti}, \mathit {ti}^{\prime }, \mathit {tr}, \mathit {tr}^{\prime }, y, {y̰}, \dot{{y̰}}, y^{\prime }\rbrace\) .
Example 4.4.
Consider another interesting example. Let
\begin{equation*} \begin{array}{rcl} \textsf {HP}_1&\widehat{=}&\mathcal {H}_{\mbox{HP}}\! \left(\!\!\! \begin{array}{l} \mathit {ti}\lt \mathit {ti}^{\prime }\wedge \mathit {tr}^{\prime }-\mathit {tr}=\langle \mathit {ti}^{\prime }-\mathit {ti}, \emptyset \rangle \wedge \\ x={x̰}(\mathit {ti})\wedge x^{\prime }={x̰}(\mathit {ti}^{\prime -})\wedge y={y̰}(\mathit {ti})\wedge y^{\prime }={y̰}(\mathit {ti}^{\prime -})\\ \wedge \forall t\in (\mathit {ti}, \mathit {ti}^{\prime })\cdot \dot{{x̰}}(t)+\dot{{y̰}}(t)=0 \end{array} \!\!\!\right). \end{array} \end{equation*}
The abstraction of x from \(\textsf {HP}_1\) is
\begin{equation*} \begin{array}{rcl} \mathbf {local}~x\cdot \textsf {HP}_1&=&\mathcal {H}_{\mbox{HP}}\left(\mathit {ti}\lt \mathit {ti}^{\prime }\wedge y={y̰}(\mathit {ti})\wedge y^{\prime }={y̰}(\mathit {ti}^{\prime -})\wedge \mathit {tr}^{\prime }-\mathit {tr}=\langle \mathit {ti}^{\prime }-\mathit {ti}, \emptyset \rangle \right), \end{array} \end{equation*}
with the alphabet \(\lbrace \mathit {ti}, \mathit {ti}^{\prime }, \mathit {tr}, \mathit {tr}^{\prime }, y, {y̰}, \dot{{y̰}}, y^{\prime }\rbrace\) . Note that when x is localised, the ODE \(\dot{{x̰}}(t)+\dot{{y̰}}(t)=0\) is also abstracted away.

4.2.5 Parallel Composition.

The parallel composition of two hybrid processes is written as
\begin{equation*} \textsf {HP}_0\Vert _M\textsf {HP}_1, \end{equation*}
which uses the parallel-by-merge scheme developed in UTP [15, 28]. We assume that the state variables of concurrent processes are disjoint, i.e., \(\alpha (\textsf {HP}_0)\cap \alpha (\textsf {HP}_1)=\lbrace \mathit {ti}, \mathit {ti}^{\prime }, \mathit {tr}, \mathit {tr}^{\prime }\rbrace\) . Let the notation \(\textsf {HP}_X\) denote the X-version of \(\textsf {HP}\) , obtained by subscripting the shared output variables ( \(\mathit {ti}^{\prime }\) and \(\mathit {tr}^{\prime }\) ) of \(\textsf {HP}\) with X, i.e.,
\begin{equation*} \begin{array}{ccccc} \textsf {HP}_X&\widehat{=}&\textsf {HP}⨟ (\mathit {ti}=\mathit {ti}_X^{\prime }\wedge \mathit {tr}=\mathit {tr}_X^{\prime }\wedge \mathbf {s}=\mathbf {s}^{\prime })&=&\textsf {HP}[\mathit {ti}_X^{\prime }, \mathit {tr}_X^{\prime }/\mathit {ti}^{\prime }, \mathit {tr}^{\prime }]. \end{array} \end{equation*}
Let \(\mathbf {s}_0\) and \(\mathbf {s}_1\) be state variables ( \(\mathbf {s}_0\cap \mathbf {s}_1=\emptyset\) ) owned by hybrid processes \(\textsf {HP}_0\) and \(\textsf {HP}_1\) , respectively. Let \(\mathbf {s}~\widehat{=}~\mathbf {s}_0\uplus \mathbf {s}_1\) . Then, the parallel composition of \(\textsf {HP}_0\) and \(\textsf {HP}_1\) can be defined as follows:
\begin{equation*} \begin{array}{rcl} \textsf {HP}_0\Vert _M\textsf {HP}_1&\widehat{=}&\mathcal {H}_{\mbox{HP}}\left((\textsf {HP}_{0,X}\wedge \textsf {HP}_{1,Y}\wedge \mathit {tr}=\mathit {tr}^{\prime })⨟ M\right), \end{array} \end{equation*}
where M is a merge predicate with \(\alpha (M)~\widehat{=}~\lbrace \mathit {ti}_X, \mathit {ti}_Y, \mathit {ti}^{\prime }, \mathit {tr}_X, \mathit {tr}_Y, \mathit {tr}, \mathit {tr}^{\prime }, \mathbf {s}, {\mathbf {s}̰}, {\mathbf {s}̰̇}, \mathbf {s}^{\prime }\rbrace\) . Therefore, the alphabet of the parallel composition is \(\alpha (\textsf {HP}_0\Vert _M\textsf {HP}_1)~\widehat{=}~\lbrace \mathit {ti},\mathit {ti}^{\prime }, \mathit {tr}, \mathit {tr}^{\prime }, \mathbf {s}, {\mathbf {s}̰}, {\mathbf {s}̰̇}, \mathbf {s}^{\prime }\rbrace\) . The parallel-by-merge scheme is illustrated pictorially in Figure 1.
Fig. 1.
Fig. 1. Parallel-by-merge scheme ( \(\mathbf {s}=\mathbf {s}_0\uplus \mathbf {s}_1\) and \(\mathbf {s}^{\prime }=\mathbf {s}_0^{\prime }\uplus \mathbf {s}_1^{\prime }\) ).
From Figure 1, we can see that, after renaming the common dashed variables, the two processes \(\textsf {HP}_{0,X}\) and \(\textsf {HP}_{1,Y}\) execute independently, and their respective outputs, as well as their common input trace \(\mathit {tr}\) , are fed into the merge predicate M. Then, M produces the merged result as the output of the parallel composition. Each merge predicate reflects a corresponding parallel scheme, i.e., the parallel composition is parametric in M. Figure 1 in this article resembles Figure 1 in Reference [15], which illustrates the parallel-by-merge dataflow, because both are derived from the parallel-by-merge scheme of the classic UTP of Reference [28], but they differ in the following two aspects:
(1)
Continuous state variables \({\mathbf {s}̰}_0\) and \({\mathbf {s}̰}_1\) are involved, and the state variables \(\mathbf {s}_0\) and \(\mathbf {s}_1\) are disjoint (no shared state variables) in our setting;
(2)
We only transfer the input trace \(\mathit {tr}\) to the merge predicate, as this is all that the merge requires in parallel composition (see Definition 4.5 for an example).
A merge predicate M is well-defined if it satisfies the following three properties:
(1)
\(M=M[\mathit {ti}_X, \mathit {tr}_X, \mathbf {s}_0, \mathit {ti}_Y, \mathit {tr}_Y, \mathbf {s}_1/\mathit {ti}_Y, \mathit {tr}_Y, \mathbf {s}_1, \mathit {ti}_X, \mathit {tr}_X, \mathbf {s}_0]\) , indicating \(\Vert _M\) is commutative.
(2)
The end time of the parallel composition is infinite iff the end time of one process is infinite:
\begin{equation*} \begin{array}{ccl} M&=&M\wedge \left(\mathit {ti}^{\prime }=+\infty \lhd (\mathit {ti}_X=+\infty \vee \mathit {ti}_Y=+\infty)\rhd \mathit {ti}^{\prime }\lt +\infty \right)\!. \end{array} \end{equation*}
(3)
Similarly, the timed trace of the parallel composition is infinite iff the timed trace of one process is infinite:
\begin{equation*} \begin{array}{ccl} M&=&M\wedge \left(|\mathit {tr}^{\prime }|=+\infty \lhd (|\mathit {tr}_X|=+\infty \vee |\mathit {tr}_Y|=+\infty)\rhd |\mathit {tr}^{\prime }|\lt +\infty \right)\!. \end{array} \end{equation*}
The following defines a well-defined merge predicate \(M_I\) that specifies parallelism.
Definition 4.5 (MI)
Let \(\mathbf {s}_0\) and \(\mathbf {s}_1\) be state variables ( \(\mathbf {s}_0\cap \mathbf {s}_1=\emptyset\) ) owned by \(\textsf {HP}_0\) and \(\textsf {HP}_1\) , respectively, and I a set of channels via which \(\textsf {HP}_0\) and \(\textsf {HP}_1\) communicate. Then, we define
\begin{equation*} \begin{array}{rcl} M_I&\widehat{=}&\textsf {syn}⨟ \textsf {mrg}, \end{array} \end{equation*}
where
\begin{equation*} \begin{array}{ccl} \textsf {syn}&\widehat{=}&\mathit {ti}_X^{\prime }=\mathit {ti}_X\wedge \mathit {ti}_Y^{\prime }=\mathit {ti}_Y\wedge \mathbf {s}_0^{\prime }=\mathbf {s}_0\wedge \mathbf {s}_1^{\prime }=\mathbf {s}_1\wedge \mathit {tr}^{\prime }=\mathit {tr}\wedge \\ && \left(\mathit {ti}_X\lt \mathit {ti}_Y\Rightarrow \forall t\in [\mathit {ti}_X,\mathit {ti}_Y)\cdot {\mathbf {s}̰}_0(t)=\mathbf {s}_0\wedge \mathit {tr}_X^{\prime }={\mathit {tr}_X}^{\smallfrown }\langle \mathit {ti}_Y-\mathit {ti}_X,\emptyset \rangle \wedge \mathit {tr}_Y^{\prime }=\mathit {tr}_Y\right)\wedge \\ && \left(\mathit {ti}_X\gt \mathit {ti}_Y\Rightarrow \forall t\in [\mathit {ti}_Y,\mathit {ti}_X)\cdot {\mathbf {s}̰}_1(t)=\mathbf {s}_1\wedge \mathit {tr}_X^{\prime }=\mathit {tr}_X\wedge \mathit {tr}_Y^{\prime }={\mathit {tr}_Y}^{\smallfrown }\langle \mathit {ti}_X-\mathit {ti}_Y,\emptyset \rangle \right)\wedge \\ && \left(\mathit {ti}_X=\mathit {ti}_Y\Rightarrow \mathit {tr}_X^{\prime }=\mathit {tr}_X\wedge \mathit {tr}_Y^{\prime }=\mathit {tr}_Y\right),\\ \textsf {mrg}&\widehat{=}&\mathit {ti}^{\prime }=\max \lbrace \mathit {ti}_X,\mathit {ti}_Y\rbrace \wedge \mathbf {s}_0^{\prime }=\mathbf {s}_0\wedge \mathbf {s}_1^{\prime }=\mathbf {s}_1\wedge (\mathit {tr}_X-\mathit {tr})\Vert _I(\mathit {tr}_Y-\mathit {tr})\leadsto (\mathit {tr}^{\prime }-\mathit {tr}). \end{array} \end{equation*}
The predicate syn synchronises the termination times ( \(+\infty\) if non-terminated) of the two processes. If \(\mathit {ti}_X\lt \mathit {ti}_Y\) , then it delays the termination of the process tagged by X until it reaches \(\mathit {ti}_Y\) . During this delay, the values of the corresponding state variables remain unchanged, and the wait block \(\langle \mathit {ti}_Y-\mathit {ti}_X,\emptyset \rangle\) is appended to the end of the trace \(\mathit {tr}_X\) . Following syn, mrg merges the times, states, and traces, respectively. It takes the maximum time as the termination time ( \(+\infty\) if non-terminated) of the parallel composition ( \(\mathit {ti}^{\prime }=\max \lbrace \mathit {ti}_X,\mathit {ti}_Y\rbrace\) ).
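To make the synchronisation step concrete, the following Python sketch models timed traces as lists of blocks \(\langle d, \mathit{RS}\rangle\) and implements the padding performed by syn. The list representation, the channel name `ch`, and the function name `syn` are our own simplifications for illustration, not part of the formal theory.

```python
# A toy model of the syn step of Definition 4.5 (our own simplified
# representation): a timed trace is a list of blocks (duration, ready_set).
# The side that terminates earlier is padded with a wait block <delta, {}>
# so that both sides end at the same time.
def syn(ti_x, tr_x, ti_y, tr_y):
    if ti_x < ti_y:                        # delay the X side until ti_y
        tr_x = tr_x + [(ti_y - ti_x, frozenset())]
    elif ti_y < ti_x:                      # symmetrically, delay the Y side
        tr_y = tr_y + [(ti_x - ti_y, frozenset())]
    return max(ti_x, ti_y), tr_x, tr_y     # synchronised termination time

# X terminates at time 2 while waiting on channel ch; Y terminates at time 5.
ti, tr_x, tr_y = syn(2.0, [(2.0, frozenset({"ch?"}))],
                     5.0, [(5.0, frozenset())])
# tr_x gains the wait block (3.0, frozenset()); both sides now end at time 5.
```

The mrg step would then consume the two equal-length timed traces and relate them to the merged trace, which is where the actual communication synchronisation over the channel set I happens.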
Property 6 (Well-Definedness).
\(M_I\) is well-defined.

4.3 Complete Lattice

Property 7 (Monotonicity).
The operators \(⨟\) , \(\sqcup\) , \(\sqcap\) , \(\lhd \rhd\) , \(\exists\) and \(\Vert _M\) are monotonic and \(\lnot _\wr\) is anti-monotone, with respect to the refinement order \(\sqsubseteq\) .
A (finite or infinite) set \(\mathcal {S}\) of hybrid processes equipped with the refinement order \(\sqsubseteq\) forms a partially ordered set, denoted by \((\mathcal {S}, \sqsubseteq)\) . Let \(\sqcap \mathcal {S}\) denote the disjunction of all hybrid processes in \(\mathcal {S}\) , i.e., a system that behaves as any of the hybrid processes in \(\mathcal {S}\) . Obviously, \(\textsf {HP}\sqsubseteq \sqcap \mathcal {S}\) iff \(\forall X\in \mathcal {S}\cdot \textsf {HP}\sqsubseteq X\) . Conversely, the conjunction of all hybrid processes in \(\mathcal {S}\) is denoted by \(\sqcup \mathcal {S}\) ; it describes a system that exhibits only the behaviour shared by all hybrid processes in \(\mathcal {S}\) . In lattice theory, \(\sqcap \mathcal {S}\) and \(\sqcup \mathcal {S}\) are called the infimum and supremum of \(\mathcal {S}\) , respectively.
Theorem 4.6 (Complete Lattice).
Let \(\mathbb {HP}\) be the image of \(\mathcal {H}_{\mbox{HP}}\) , i.e., it is the set of all hybrid processes, then it forms a complete lattice with top and bottom:
\begin{equation*} \begin{array}{ccccc} \top _\mbox{HP}&\widehat{=}&\bigsqcup \mathbb {HP}&=&\mathcal {H}_{\mbox{HP}}(\mathbf {false}),\\ \bot _\mbox{HP}&\widehat{=}&\sqcap \mathbb {HP}&=&\mathcal {H}_{\mbox{HP}}(\mathbf {true}). \end{array} \end{equation*}
Recursion can be specified once a complete lattice is formed. Theoretically, recursion is denoted by the fixed points of the equation \(X=F(X)\) , where F constructs the body of the recursion. If \(F(X)\) is monotonic, then the fixed points of \(X=F(X)\) form a complete lattice by the Knaster-Tarski theorem [57]. The least fixed point is denoted by \(\mu X\cdot F(X)\) and the greatest fixed point by \(\nu X\cdot F(X)\) ; their definitions are given in Section 2. In particular, \(\mu X\cdot X=\bot _\mbox{HP}\) and \(\nu X\cdot X=\top _\mbox{HP}\) .
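As an illustration of these fixed-point constructions, the following Python sketch computes \(\mu\) and \(\nu\) for a monotone function on a finite powerset lattice by Kleene iteration from the bottom and top elements. The powerset lattice (ordered by set inclusion) is a stand-in for the hybrid-process lattice, and the universe `U` and function `F` are arbitrary choices of ours; on any finite lattice, such iteration of a monotone function converges to the extremal fixed points.

```python
# Least/greatest fixed points of a monotone F on the powerset lattice of U
# (ordered by set inclusion), computed by iteration from bottom and top.
U = frozenset(range(5))

def lfp(F):                  # mu X . F(X)
    X = frozenset()          # start from the bottom element
    while F(X) != X:
        X = F(X)
    return X

def gfp(F):                  # nu X . F(X)
    X = U                    # start from the top element
    while F(X) != X:
        X = F(X)
    return X

# For the identity function, mu X . X is the bottom element and nu X . X is
# the top element, mirroring mu X.X = bottom and nu X.X = top in the paper.
identity = lambda X: X

# A monotone, non-trivial F: keep 0 and close under successor below 4.
F = lambda X: frozenset({0} | set(X) | {x + 1 for x in X if x + 1 < 4})
```

For this `F`, the least fixed point is `{0, 1, 2, 3}` while the greatest is all of `U`, showing that the two extremes can differ.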

4.4 Non-termination, Deadlock, Divergence, and Miracle

Non-termination has a special property, i.e., being a left zero of sequential composition:
Definition 4.7 (Non-termination).
A hybrid process \(\underline{\textsf {HP}}\) is non-terminated iff \(\underline{\textsf {HP}}⨟ \textsf {HP}=\underline{\textsf {HP}}\) for any hybrid process \(\textsf {HP}\) .
A hybrid process is divergent if its execution time is finite but its trace is infinite, and deadlocked if its execution time is infinite with some channel operations waiting forever; both are specified formally below. By Definition 4.7, it can be shown that both divergence and deadlock are non-terminated.
Definition 4.8 (Divergence).
A divergence is a fixed point of \(X=\mathcal {H}_{\mbox{HP}}\circ \mathit {DIV}(X),\) where
\begin{equation*} \begin{array}{ccc} \mathit {DIV}(X)&=&X\wedge \mathit {ti}^{\prime }\lt +\infty \wedge |\mathit {tr}^{\prime }|=+\infty . \end{array} \end{equation*}
Definition 4.9 (Deadlock).
A deadlock is a fixed point of \(X=\mathcal {H}_{\mbox{HP}}\circ \mathit {DL}(X),\) where
\begin{equation*} \begin{array}{ccc} \mathit {DL}(X)&=&X\wedge \mathit {ti}^{\prime }=+\infty \wedge \exists N\in \mathbb {N}\cdot \forall n\ge N\cdot \mathit {tr}^{\prime }(n)=\langle \delta , \mathit {RS}\rangle \wedge \mathit {RS}\ne \emptyset . \end{array} \end{equation*}
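The two conditions can be read operationally. The following Python sketch classifies a behaviour as divergent or deadlocked in the spirit of Definitions 4.8 and 4.9; the finite summary of a behaviour (end time, trace length, and the ready sets of the eventually repeating wait blocks at the end of the trace) is our own simplification, not part of the formal theory.

```python
import math

# A simplified classifier for Definitions 4.8 and 4.9 (our own finite summary
# of a behaviour): ti_end is the end time, trace_length the length of the
# timed trace, and tail_ready_sets the ready sets of the (eventually
# repeating) wait blocks at the end of the trace.
def classify(ti_end, trace_length, tail_ready_sets):
    # Definition 4.8: finite end time but an infinite trace is divergence.
    if ti_end < math.inf and trace_length == math.inf:
        return "divergent"
    # Definition 4.9: infinite end time, with every tail block a wait block
    # carrying a nonempty ready set, is a deadlock.
    if ti_end == math.inf and tail_ready_sets \
            and all(len(rs) > 0 for rs in tail_ready_sets):
        return "deadlocked"
    return "other"

# Infinitely many instantaneous steps within finite time: divergence.
# Forever waiting to output on a channel ch: deadlock.
```

Note that both cases return a non-terminated behaviour in the sense of Definition 4.7: divergence never runs out of trace, and deadlock never runs out of time.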
A miracle is an infeasible process that can only occur after a non-terminating hybrid process, and thus it is impossible to implement a miracle in reality. Formally,
Definition 4.10 (Miracle).
A miracle is a fixed point of \(X=\mathcal {H}_{\mbox{HP}}\circ \mathit {MIR}(X),\) where
\begin{equation*} \begin{array}{ccc} \mathit {MIR}(X)&=&X\wedge (\mathit {ti}=+\infty \vee |\mathit {tr}|=+\infty). \end{array} \end{equation*}
Theorem 4.11 (Miracle).
\(\top _\mbox{HP}\) is the only miracle of hybrid processes.
The miracle refines any hybrid process HP, i.e., \(\textsf {HP}\sqsubseteq \top _\mbox{HP}\) , and it enjoys the following two desired properties, where Property 8 implies that the miracle is non-terminated by Definition 4.7.
Property 8 (Left Zero of ⨟)
\(\top _\mbox{HP}⨟ \textsf {HP}=\top _\mbox{HP}\) for any hybrid process \(\textsf {HP}\) .
Property 9 (Zero of ||M)
\(\textsf {HP}\Vert _M\top _\mbox{HP}=\top _\mbox{HP}\Vert _M\textsf {HP}=\top _\mbox{HP}\) for any hybrid process \(\textsf {HP}\) .
In contrast to the miracle, \(\bot _\mbox{HP}\) exhibits the most unpredictable behaviour, because it allows non-deterministically choosing a behaviour among possibly infinitely many processes; in other words, it is chaos. Generally, chaos is expected to be non-terminated (a left zero of sequential composition), and the parallel composition of any process with chaos should result in chaos (a zero of parallel composition). However, the chaos \(\bot _\mbox{HP}\) does not enjoy these desired properties as the miracle \(\top _\mbox{HP}\) does. This problem will be solved in Section 5.

4.5 Abstract Hybrid Processes

A hybrid process models the state and trace of a system over time to accurately render its concrete behaviour. For verification purposes, however, such a level of detail is often excessive, and a more modular abstraction may be desired. To address this issue, we propose the notion of an abstract hybrid process, in which traces are simply abstracted away. Formally,
Definition 4.12 (Abstract Hybrid Process).
An abstract hybrid process is a fixed point of
\begin{equation*} \begin{array}{ccc} X&=&\mathcal {H}_{\mbox{0}}^{\rm\small A}\circ \mathcal {H}_{\mbox{2}}^{\rm\small A}\circ \mathcal {H}_{\mbox{3}}^{\rm\small A}\circ \mathcal {H}_{\mbox{4}}(X), \end{array} \end{equation*}
where
\begin{equation*} \begin{array}{ccl} \mathcal {H}_{\mbox{0}}^{\rm\small A}(X)&\widehat{=}&X\wedge \mathit {ti}\le \mathit {ti}^{\prime },\\ \mathcal {H}_{\mbox{2}}^{\rm\small A}(X)&\widehat{=}&(\mathit {ti}=\mathit {ti}^{\prime })\lhd \mathit {ti}=+\infty \rhd X,\\ \mathcal {H}_{\mbox{3}}^{\rm\small A}(X)&\widehat{=}&(\exists \mathbf {s}^{\prime }\cdot X)\lhd \mathit {ti}^{\prime }=+\infty \rhd X. \end{array} \end{equation*}
For brevity, we use \(\mathcal {H}_{\mbox{HP}}^{\rm\small A}\) to denote \(\mathcal {H}_{\mbox{0}}^{\rm\small A}\circ \mathcal {H}_{\mbox{2}}^{\rm\small A}\circ \mathcal {H}_{\mbox{3}}^{\rm\small A}\circ \mathcal {H}_{\mbox{4}}\) .
Theorem 4.13 (Idempotence and Monotonicity).
\(\mathcal {H}_{\mbox{HP}}^{\rm\small A}\) is idempotent and monotonic.
Note that \(\mathcal {H}_{\mbox{0}}^{\rm\small A}\) , \(\mathcal {H}_{\mbox{2}}^{\rm\small A}\) , and \(\mathcal {H}_{\mbox{3}}^{\rm\small A}\) are abstract versions of \(\mathcal {H}_{\mbox{0}}\) , \(\mathcal {H}_{\mbox{2}}\) , and \(\mathcal {H}_{\mbox{3}}\) in Definition 4.1, respectively. A predicate is an abstract hybrid process iff it is \(\mathcal {H}_{\mbox{HP}}^{\rm\small A}\) -healthy. Compared with Definition 4.1 of \(\mathcal {H}_{\mbox{HP}}\) , abstract hybrid processes only consider time ( \(\mathit {ti}\) and \(\mathit {ti}^{\prime }\) ) and state ( \(\mathbf {s}\) , \({\mathbf {s}̰}\) , \({\mathbf {s}̰̇}\) , and \(\mathbf {s}^{\prime }\) ), ignoring the timed trace ( \(\mathit {tr}\) and \(\mathit {tr}^{\prime }\) ). Therefore, \(\mathcal {H}_{\mbox{1}}\) is no longer necessary here. By Theorem 4.13, abstract hybrid processes can form a complete lattice with top and bottom
\begin{equation*} \begin{array}{ccccl} \top _\mbox{AHP}&\widehat{=}&\mathcal {H}_{\mbox{HP}}^{\rm\small A}(\mathbf {false})&\equiv &\mathit {ti}=\mathit {ti}^{\prime }=+\infty ,\\ \bot _\mbox{AHP}&\widehat{=}&\mathcal {H}_{\mbox{HP}}^{\rm\small A}(\mathbf {true})&\equiv &(\mathit {ti}=\mathit {ti}^{\prime })\lhd \mathit {ti}=+\infty \rhd (\mathit {ti}\le \mathit {ti}^{\prime }\wedge {RC}\wedge {SD}). \end{array} \end{equation*}
Clearly, \(\top _\mbox{AHP}\sqsubseteq \top _\mbox{HP}\) and \(\bot _\mbox{AHP}\sqsubseteq \bot _\mbox{HP}\) if we ignore the alphabets as \(\top _\mbox{AHP}\) and \(\bot _\mbox{AHP}\) do not contain \(\mathit {tr}\) or \(\mathit {tr}^{\prime }\) . Abstract hybrid processes also form a Boolean algebra \((\mathbb {AHP}, \sqcap , \sqcup , \lnot _\wr ^{\rm\small A}, \top _\mbox{AHP}, \bot _\mbox{AHP})\) , where \(\mathbb {AHP}\) is the image of \(\mathcal {H}_{\mbox{HP}}^{\rm\small A}\) and the negation \(\lnot _\wr ^{\rm\small A}\) can be defined by \(\lnot _\wr ^{\rm\small A} X\ \widehat{=}\ \mathcal {H}_{\mbox{HP}}^{\rm\small A}(\lnot X)\) . Furthermore, an abstract hybrid condition is a fixed point of \(\lnot _\wr ^{\rm\small A} X=\lnot _\wr ^{\rm\small A} X⨟ \bot _\mbox{AHP}\) . The equation can be denoted by a healthiness condition \(\mathcal {H}_{\mbox{HC}}^{\rm\small A}\) , i.e., a predicate is an abstract hybrid condition iff it is \(\mathcal {H}_{\mbox{HC}}^{\rm\small A}\) -healthy. Abstract hybrid conditions are a subset of abstract hybrid processes.
Additionally, by the definition of the refinement relation, for a hybrid process HP with explicit timed traces, if \(P\sqsubseteq \textsf {HP}\) (ignoring the alphabets), then \(\textsf {HP}\) satisfies the property P, which is represented by an abstract hybrid process. Hence, thanks to the lattice structure induced by \(\sqsubseteq\) , we can define a Galois connection \((\alpha ,\gamma)\) stipulating the concretisation of a property P to be the meet of all hybrid processes that satisfy it, i.e., \(\gamma (P)\ \widehat{=}\ \sqcap \lbrace \textsf {HP}\mid P\sqsubseteq \textsf {HP}\rbrace\) and, dually, the abstraction of a hybrid process HP to be the join of all properties satisfied by it, i.e., \(\alpha (\textsf {HP})\ \widehat{=}\ \sqcup \lbrace P\mid P\sqsubseteq \textsf {HP}\rbrace\) . In particular, \(\gamma (\bot _\mbox{AHP})=\bot _\mbox{HP}\) and \(\alpha (\top _\mbox{HP})=\top _\mbox{AHP}\) .
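The Galois connection can be illustrated on a small finite model. In the sketch below (our own toy model, not the hybrid-process lattice itself), concrete "processes" are sets of (state, trace) pairs, abstract "properties" are sets of states, and set inclusion stands in for the order induced by refinement; the abstraction simply forgets the trace component, mirroring how abstract hybrid processes ignore timed traces.

```python
from itertools import combinations, product

# A finite toy model of the (alpha, gamma) pair of Section 4.5: concrete
# processes live over state x trace, abstract properties over state only.
S = {"s0", "s1"}                     # states (kept by the abstraction)
T = {"t0", "t1", "t2"}               # traces (forgotten by the abstraction)

def alpha(HP):                       # abstraction: forget the trace component
    return {s for (s, _) in HP}

def gamma(P):                        # concretisation: allow any trace
    return set(product(P, T))

def subsets(xs):
    xs = list(xs)
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

# The defining property of a Galois connection, checked exhaustively:
#   alpha(HP) <= P  iff  HP <= gamma(P), for all HP and P.
galois = all((alpha(HP) <= P) == (HP <= gamma(P))
             for HP in subsets(product(S, T)) for P in subsets(S))
```

In this model `gamma(P)` is the largest concrete process whose abstraction is below `P`, matching the intuition that concretisation collects every process satisfying the property.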

5 Hybrid Designs

In this section, we propose the concept of hybrid design. Hybrid designs are a subset of hybrid processes but enjoy some desired algebraic properties that are not featured by hybrid processes. Based on hybrid designs, we will then propose normal hybrid designs, which are the first-class notions in the HUTP theory.
The notion of design in UTP was first introduced by Hoare and He in Reference [28]. A design consists of a pair of predicates \((\mathit {Pre}, \mathit {Post})\) standing for the pre- and post-condition of a program, augmented with the Boolean variables \(\mathit {ok}\) and \(\mathit {ok}^{\prime }\) to express whether the program has started and terminated:
\begin{equation*} \begin{array}{ccccc} \mathit {Pre}\vdash \mathit {Post}&\widehat{=}&(\mathit {ok}\wedge \mathit {Pre})\Rightarrow (\mathit {ok}^{\prime }\wedge \mathit {Post})&\equiv &\lnot \mathit {ok}\vee \lnot \mathit {Pre}\vee (\mathit {ok}^{\prime }\wedge \mathit {Post}), \end{array} \end{equation*}
which states that if the program starts in a state satisfying the pre-condition \(\mathit {Pre}\) , it will terminate, and on termination the post-condition \(\mathit {Post}\) will be true. The pair \((\mathit {Pre}, \mathit {Post})\) describes a contract between the component and its environment, and therefore supports the decomposition of engineering tasks to resolve system design complexity.
The meanings of \(\mathit {ok}\) and \(\mathit {ok}^{\prime }\) vary slightly across related works. For example, in Reference [15], \(\mathit {ok}\) and \(\mathit {ok}^{\prime }\) are used to indicate divergence. Here, since divergence can already be observed from time and trace ( \(\mathit {ti}^{\prime }\lt +\infty \wedge |\mathit {tr}^{\prime }|=+\infty\) ), we give \(\mathit {ok}\) and \(\mathit {ok}^{\prime }\) a different reading. We extend the notion of design to hybrid systems by interpreting \(\mathit {ok}\) and \(\mathit {ok}^{\prime }\) as denoting a process's status: \(\mathit {ok}^{\prime }=\mathbf {true}\) indicates that the process executes normally (not necessarily terminating), and hence \(\mathit {ok}=\mathbf {false}\) means that the preceding process executed abnormally, causing the behaviour of the current process to be unpredictable. Thus, it is necessary to distinguish normal and abnormal non-termination. Specifically, \(\lnot \mathit {ok}^{\prime }\) indicates abnormal behaviour, which must be abnormal non-termination; \(\mathit {ok}^{\prime }\wedge (\mathit {ti}^{\prime }=+\infty \vee |\mathit {tr}^{\prime }|=+\infty)\) indicates normal non-termination; and \(\mathit {ok}^{\prime }\wedge \mathit {ti}^{\prime }\lt +\infty \wedge |\mathit {tr}^{\prime }|\lt +\infty\) indicates (normal) termination.
Definition 5.1 (Hybrid Designs).
Let \(\textsf {HP}_A\) and \(\textsf {HP}_C\) be hybrid processes. Then,
\begin{equation*} \begin{array}{ccc} \textsf {HP}_A\vdash \textsf {HP}_C &\widehat{=}& (\mathit {ok}\wedge \textsf {HP}_A)\Rightarrow _\wr (\mathit {ok}^{\prime }\wedge \textsf {HP}_C) \end{array} \end{equation*}
is a hybrid design with the alphabet \(\alpha (\textsf {HP}_A)\cup \alpha (\textsf {HP}_C)\cup \lbrace \mathit {ok}, \mathit {ok}^{\prime }\rbrace\) , where \(\textsf {HP}_A\) and \(\textsf {HP}_C\) are called assumption and commitment, respectively.
Hybrid designs are a subset of hybrid processes, as \(\mathit {ok}\wedge \textsf {HP}_A\) and \(\mathit {ok}^{\prime }\wedge \textsf {HP}_C\) are \(\mathcal {H}_{\mbox{HP}}\) -healthy and \(\Rightarrow _\wr\) is \(\mathcal {H}_{\mbox{HP}}\) -preserving, which means \(\textsf {HP}_A\vdash \textsf {HP}_C\) is also \(\mathcal {H}_{\mbox{HP}}\) -healthy. Intuitively, when a hybrid design starts normally ( \(\mathit {ok}={\bf true}\) ) and its assumption \(\textsf {HP}_A\) holds, then it executes normally and the commitment \(\textsf {HP}_C\) holds. The hybrid design \(\bot _\mbox{HP}\vdash \textsf {HP}_C\) , where \(\bot _\mbox{HP}\) is a variant of \(\mathbf {true}\) (which is a convention in UTP), is abbreviated as \(\vdash \textsf {HP}_C\) , meaning that no assumption is made.

5.1 Matrix Representation

Given a hybrid design \(\textsf {HP}_A\vdash \textsf {HP}_C\) , if \(\mathit {ok}\) is false, i.e., its predecessor is abnormal, then it will behave unpredictably, i.e., chaos \(\bot _\mbox{HP}\) ; if \(\mathit {ok}\) is true but \(\mathit {ok}^{\prime }\) is false, it indicates that the assumption \(\textsf {HP}_A\) must be violated, i.e., \(\lnot _\wr \textsf {HP}_A\) ; if both \(\mathit {ok}\) and \(\mathit {ok}^{\prime }\) are true, the commitment \(\textsf {HP}_C\) can be guaranteed as soon as the assumption \(\textsf {HP}_A\) holds, i.e., \(\textsf {HP}_A\Rightarrow _\wr \textsf {HP}_C\) . To make this semantics of hybrid designs clearer, we use the matrix encoding proposed in Reference [43] to define hybrid designs as \(2\times 2\) matrices by instantiating \(\mathit {ok}\) and \(\mathit {ok}^{\prime }\) with all four possible combinations:
\begin{equation*} \begin{array}{rcl} \textsf {HP}_A\vdash \textsf {HP}_C&\widehat{=}& \begin{pmatrix} \bot _\mbox{HP} & \bot _\mbox{HP}\\ \lnot _\wr \textsf {HP}_A & \textsf {HP}_A\Rightarrow _\wr \textsf {HP}_C \end{pmatrix}\!, \end{array} \end{equation*}
where the rows are indexed by \(\mathit {ok}\in \lbrace \mathbf {false}, \mathbf {true}\rbrace\) and the columns by \(\mathit {ok}^{\prime }\in \lbrace \mathbf {false}, \mathbf {true}\rbrace\) .
With the aid of the matrix representation, the proofs of the theorems and properties on hybrid designs become more concise and intuitive. The proofs of this section are largely based on this matrix representation and the calculus on it (see the Appendix for the details).
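To give a flavour of this matrix calculus, the following Python sketch encodes a design as a \(2\times 2\) matrix and checks that matrix multiplication over the (disjunction, sequential composition) structure reproduces the sequential-composition law of normal hybrid designs (Theorem 5.2(3)). This is a discrete-state simplification of ours: relations over a tiny finite state space stand in for hybrid processes, and the state space `S` and example designs `A0, C0, A1, C1` are arbitrary illustrative choices.

```python
from itertools import product

# A discrete-state toy model of the matrix encoding of designs: predicates
# are relations over a finite state space S, "true" (chaos) is the full
# relation, and sequential composition is relational composition.
S = {0, 1, 2}
TOP = set(product(S, S))                 # the predicate "true" (chaos)

def comp(R):                             # predicate negation: complement
    return TOP - R

def seq(R, Q):                           # sequential composition R ; Q
    return {(s, u) for (s, t) in R for (t2, u) in Q if t == t2}

def cond(A):                             # lift pre-states to a condition
    return {(s, t) for s in A for t in S}

def design(A, C):                        # 2x2 matrix of the design A |- C,
    nA = comp(cond(A))                   # rows: ok, columns: ok'
    return [[TOP, TOP], [nA, nA | C]]

def mseq(M, N):                          # matrix product over (union, seq)
    return [[seq(M[i][0], N[0][k]) | seq(M[i][1], N[1][k])
             for k in range(2)] for i in range(2)]

# Two example normal designs: assumptions are conditions (sets of
# pre-states), commitments are relations.
A0, C0 = {0, 1}, {(0, 1), (1, 2)}
A1, C1 = {0, 2}, {(2, 0), (0, 0)}

# Theorem 5.2(3) in this model:
#   (A0 |- C0) ; (A1 |- C1) = (A0 and not (C0 ; not A1)) |- (C0 ; C1),
# where the new assumption keeps the pre-states all of whose C0-successors
# satisfy A1.
A2 = {s for s in A0 if all(t in A1 for (u, t) in C0 if u == s)}
lhs = mseq(design(A0, C0), design(A1, C1))
rhs = design(A2, seq(C0, C1))
```

Here sequential composition of designs really is matrix multiplication, with union playing addition and relational composition playing multiplication, which is what makes the matrix-based proofs component-free.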

5.2 Normal Hybrid Designs

In general, UTP allows the assumption to refer to both primed and unprimed versions of variables. However, as explained in Reference [21], only normal designs, whose assumptions are conditions and do not refer to primed (state) variables, are significant. Similarly, only normal hybrid designs, whose assumptions are fixed points of the equation below, make sense in hybrid system design:
\begin{equation*} \begin{array}{ccc} \lnot _\wr X&=&\lnot _\wr X⨟ \bot _\mbox{HP}. \end{array} \end{equation*}
Intuitively, the above equation indicates that if an execution violates an assumption, then we expect that any extension of the execution will also violate the assumption. For example, let \(\textsf {HP}\) be a hybrid process specifying the expected behaviour, then \(\lnot _\wr \textsf {HP}\) represents the unexpected behaviour. It is desirable that all the extensions of \(\lnot _\wr \textsf {HP}\) should also be unexpected, i.e., \(\lnot _\wr \textsf {HP}\sqsubseteq \lnot _\wr \textsf {HP}⨟ \bot _\mbox{HP}\) . Meanwhile, we have \(\lnot _\wr \textsf {HP}⨟ \bot _\mbox{HP}\sqsubseteq \lnot _\wr \textsf {HP}\) according to the monotonicity of \(⨟\) (Property 7). In short, we expect the property \(\lnot _\wr \textsf {HP}=\lnot _\wr \textsf {HP}⨟ \bot _\mbox{HP}\) .
The above equation can be denoted by a healthiness condition:
\begin{equation*} \begin{array}{rcl} \mathcal {H}_{\mbox{HC}}(X) &=& \lnot _\wr (\lnot _\wr X⨟ \bot _\mbox{HP}). \end{array} \end{equation*}
A predicate is called a hybrid condition if it is \(\mathcal {H}_{\mbox{HC}}\) -healthy in a similar manner as reactive conditions in Reference [15]. Hybrid conditions are a subset of hybrid processes. For example, \(\mathcal {H}_{\mbox{HP}}(s\gt 0)\) is a hybrid condition while \(\mathcal {H}_{\mbox{HP}}(s^{\prime }\gt 0)\) is not. Especially, \(\bot _\mbox{HP}\) and \(\top _\mbox{HP}\) are hybrid conditions, because it is easy to check that \(\bot _\mbox{HP}=\lnot _\wr (\lnot _\wr \bot _\mbox{HP}⨟ \bot _\mbox{HP})\) and \(\top _\mbox{HP}=\lnot _\wr (\lnot _\wr \top _\mbox{HP}⨟ \bot _\mbox{HP})\) .
A hybrid design, say \(\textsf {HC}\vdash \textsf {HP}\) , is called normal hybrid design, if \(\textsf {HC}\) is a hybrid condition ( \(\mathcal {H}_{\mbox{HC}}\) -healthy). In what follows, we only consider normal hybrid designs. Operations on normal hybrid designs can be thus reduced to standard matrix operations. In particular, non-deterministic choice (meet) \(\sqcap\) and sequential composition \(⨟\) correspond to matrix addition and multiplication, respectively. Moreover, reasoning is completely component-free (with no mention to \(\mathit {ok}\) and \(\mathit {ok}^{\prime }\) or variable substitutions), i.e., compositional.
Theorem 5.2 (Operations).
(1)
\((\textsf {HC}_0\vdash \textsf {HP}_0)\sqcap (\textsf {HC}_1\vdash \textsf {HP}_1)=(\textsf {HC}_0\wedge \textsf {HC}_1)\vdash (\textsf {HP}_0\vee \textsf {HP}_1)\)
(2)
\((\textsf {HC}_0\vdash \textsf {HP}_0)\sqcup (\textsf {HC}_1\vdash \textsf {HP}_1)=(\textsf {HC}_0\vee \textsf {HC}_1)\vdash ((\textsf {HC}_0\Rightarrow _\wr \textsf {HP}_0)\wedge (\textsf {HC}_1\Rightarrow _\wr \textsf {HP}_1))\)
(3)
\((\textsf {HC}_0\vdash \textsf {HP}_0)⨟ (\textsf {HC}_1\vdash \textsf {HP}_1)=(\textsf {HC}_0\wedge \lnot _\wr (\textsf {HP}_0⨟ \lnot _\wr \textsf {HC}_1))\vdash (\textsf {HP}_0⨟ \textsf {HP}_1)\)
Parallel-by-merge, as defined for hybrid processes in Section 4.2.5, can naturally be extended to normal hybrid designs, except that the form of merge predicates should be adapted to deal with auxiliary variables \(\mathit {ok}\) and \(\mathit {ok}^{\prime }\) . Concretely, the merge predicate for normal hybrid designs can be defined by
\begin{equation*} \begin{array}{rcl} \text{NHD}(M) &\widehat{=}& (\mathit {ok}_X\wedge \mathit {ok}_Y)\Rightarrow _\wr (M\wedge \mathit {ok}^{\prime }), \end{array} \end{equation*}
where \(\alpha (M)\cap \lbrace \mathit {ok}_X, \mathit {ok}_Y, \mathit {ok}, \mathit {ok}^{\prime }\rbrace =\emptyset\) . It states that only if parallel designs execute normally (i.e., \(\mathit {ok}_X\wedge \mathit {ok}_Y\) ) can they merge by M as usual. Otherwise, merge becomes unpredictable.
Theorem 5.3 (Parallel Composition).
\begin{equation*} \begin{array}{rcl} (\textsf {HC}_0\vdash \textsf {HP}_0)\Vert _{\mbox{NHD}(M)}(\textsf {HC}_1\vdash \textsf {HP}_1) &=& \begin{pmatrix}\lnot _\wr (\lnot _\wr \textsf {HC}_0\Vert _{\bot _\mbox{HP}}\lnot _\wr \textsf {HC}_1)\\ \wedge \lnot _\wr (\lnot _\wr \textsf {HC}_0\Vert _{\bot _\mbox{HP}}\textsf {HP}_1)\\ \wedge \lnot _\wr (\lnot _\wr \textsf {HC}_1\Vert _{\bot _\mbox{HP}}\textsf {HP}_0) \end{pmatrix} \vdash \textsf {HP}_0\Vert _M\textsf {HP}_1. \end{array} \end{equation*}
Intuitively, the assumption of parallel composition, above, stipulates that unexpected interactions cannot happen. For example, \(\lnot _\wr \textsf {HC}_0\) denotes the unexpected behaviour of \(\textsf {HC}_0\vdash \textsf {HP}_0\) . Hence, its parallel composition with the behaviour of the other side, i.e., \(\lnot _\wr \textsf {HC}_0\Vert _{\bot _\mbox{HP}}\lnot _\wr \textsf {HC}_1\) and \(\lnot _\wr \textsf {HC}_0\Vert _{\bot _\mbox{HP}}\textsf {HP}_1\) , should also be unexpected, as negated by the assumption, where the merge predicate \(\bot _\text{HP}\) denotes the weakest parallelism, i.e., \((\textsf {P}\Vert _{\bot _\mbox{HP}}\rm {Q})\sqsubseteq (\textsf {P}\Vert _M\rm {Q})\) for any merge predicate M.
If state variables play the dominant role in the assumptions and traces are not the concern, such as \(\mathcal {H}_{\mbox{HP}}(s\gt 0)\) , then we can give an alternative definition to parallel composition as follows, which is simpler than \(\Vert _{\mbox{NHD}(M)}\) .
Definition 5.4 (Parallel Composition).
\begin{equation*} \begin{array}{rcl} (\textsf {HC}_0\vdash \textsf {HP}_0)\Vert _M^\mbox{HP}(\textsf {HC}_1\vdash \textsf {HP}_1) &\widehat{=}& (\textsf {HC}_0\wedge \textsf {HC}_1)\vdash (\textsf {HP}_0\Vert _M\textsf {HP}_1). \end{array} \end{equation*}
For normal hybrid designs, the assumption for non-chaos is made explicit. Since parallel execution involves the evaluation of both processes, it is necessary to ensure that both their assumptions are valid to start with. Thus, the assumption for successful execution of the parallel composition is the conjunction of their separate assumptions rather than their disjunction. It states that the parallel composition \((\textsf {HC}_0\vdash \textsf {HP}_0)\Vert _M^\mbox{HP}(\textsf {HC}_1\vdash \textsf {HP}_1)\) can only be executed when both components successfully start with the assumptions \(\textsf {HC}_0\) and \(\textsf {HC}_1\) holding; otherwise, it is chaos. Note that \(\Vert _M^\mbox{HP}\) differs from \(\Vert _{\mbox{NHD}(M)}\) : the former composes assumptions and commitments separately, while the latter treats hybrid designs as hybrid processes.
Theorem 5.5 (Closure).
Normal hybrid designs are closed on \(\sqcap\) , \(\sqcup\) , \(⨟\) , \(\Vert _{\mbox{NHD}(M)}\) and \(\Vert _M^\mbox{HP}\) .
Theorem 5.6 (Refinement).
\((\textsf {HC}_0\vdash \textsf {HP}_0)\sqsubseteq (\textsf {HC}_1\vdash \textsf {HP}_1)\) iff
\begin{equation*} \textsf {HC}_1\sqsubseteq \textsf {HC}_0\quad {and}\quad \textsf {HP}_0\sqsubseteq (\textsf {HC}_0\wedge \textsf {HP}_1). \end{equation*}
This theorem indicates that the monotonic hybrid refinement relation can be lifted to a contravariant design refinement relation, from which the monotonicity of the operators on normal hybrid designs can be proved, as specified by the following two properties:
Property 10 (Contra-variance).
If \(\textsf {HC}_0\sqsupseteq \textsf {HC}_1\) and \(\textsf {HP}_0\sqsubseteq \textsf {HP}_1\) , then
\begin{equation*} (\textsf {HC}_0\vdash \textsf {HP}_0)\sqsubseteq (\textsf {HC}_1\vdash \textsf {HP}_1). \end{equation*}
Property 11 (Monotonicity).
Operators \(\sqcup\) , \(\sqcap\) , \(⨟\) , \(\Vert _{\mbox{NHD}(M)}\) and \(\Vert _M^\text{HP}\) for normal hybrid designs are monotonic with respect to \(\sqsubseteq\) .

5.3 Complete Lattice

The laws for \(\sqcap\) and \(\sqcup\) can be generalised to arbitrary meets \(\bigsqcap\) and joins \(\bigsqcup\) , respectively, which indicates that normal hybrid designs form a complete lattice, as stated by the following theorem.
Theorem 5.7 (Complete Lattice).
Normal hybrid designs form a complete lattice with top
\begin{equation*} \begin{array}{rcl} \top _\mbox{NHD}&\widehat{=}&\bot _\mbox{HP}\vdash \top _\mbox{HP} \end{array} \end{equation*}
and bottom
\begin{equation*} \begin{array}{rcl} \bot _\mbox{NHD}&\widehat{=}&\top _\mbox{HP}\vdash \bot _\mbox{HP}. \end{array} \end{equation*}
Property 12 (Chaos).
\(\bot _\mbox{NHD}=\top _\mbox{HP}\vdash \textsf {HP}\) for any hybrid process \(\textsf {HP}\) .
Property 13 (Non-termination).
For any normal hybrid design \(\textsf {NHD}\) ,
\begin{equation*} \bot _\mbox{NHD}⨟ \textsf {NHD}=\bot _\mbox{NHD} \quad {and}\quad \top _\mbox{NHD}⨟ \textsf {NHD}=\top _\mbox{NHD}. \end{equation*}
Property 14 (Parallel Composition).
For any normal hybrid design \(\textsf {NHD}\) ,
\begin{equation*} \begin{array}{ccccc} \textsf {NHD}\Vert _{\mbox{NHD}(M)}\top _\mbox{NHD} &=& \top _\mbox{NHD}\Vert _{\mbox{NHD}(M)}\textsf {NHD} &=& \top _\mbox{NHD}. \end{array} \end{equation*}
Property 15 (Parallel Composition).
For any normal hybrid design \(\textsf {NHD}\) ,
\begin{equation*} \begin{array}{ccccc} \textsf {NHD}\Vert _M^\mbox{HP}\bot _\mbox{NHD} &=& \bot _\mbox{NHD}\Vert _M^\mbox{HP}\textsf {NHD} &=& \bot _\mbox{NHD}. \end{array} \end{equation*}
The above conclusions demonstrate that the chaos \(\bot _\mbox{NHD}\) of normal hybrid designs enjoys the desired properties that (1) it is non-terminated (a left zero of sequential composition, see Property 13) and (2) its parallel composition with any normal hybrid design results in itself (a zero of the parallel composition \(\Vert _M^\text{HP}\) , see Property 15). This fixes the gap in the HUTP theory mentioned at the end of Section 4, where the chaos \(\bot _\mbox{HP}\) of hybrid processes does not enjoy these properties.

6 Reflection of HCSP and Simulink with HUTP

In this section, we give the HUTP semantics of HCSP [64] and Simulink [39], two representative imperative and data-flow formalisms for hybrid system design in academia and industry, respectively. Furthermore, we prove the consistency between the operational semantics of HCSP and its HUTP semantics. We illustrate by an example the refinement relation between a Simulink diagram and the corresponding HCSP model within the HUTP framework, demonstrating the expressive capabilities of HUTP as a meta-theory for translation validation.

6.1 Syntactic Sugar

For brevity, we introduce the following syntactic sugar for HUTP. We define the notations \({\lceil\!\!\lceil} \cdot {\rfloor\!\!\rfloor}\) and \(\lceil \cdot \rfloor\) , which are similar to \({\lceil\!\!\lceil} \cdot {\rceil\!\!\rceil}\) and \(\lceil \cdot \rceil\) from Duration Calculus [66]. Let \(P({\mathbf {s}̰}, {\mathbf {s}̰̇})\) denote a predicate relating \({\mathbf {s}̰}\) and its derivative \({\mathbf {s}̰̇}\) .
\begin{equation*} \begin{array}{rcl} {\lceil\!\!\lceil} P({\mathbf {s}̰}, {\mathbf {s}̰̇}), \mathit {RS}{\rfloor\!\!\rfloor} _{\phi (\mathit {ti}, \mathit {ti}^{\prime })} &\widehat{=}& \mathcal {H}_{\mbox{HP}}\! \left(\!\!\! \begin{array}{l} \mathit {ti}\lt \mathit {ti}^{\prime }\lt +\infty \wedge \phi (\mathit {ti}, \mathit {ti}^{\prime })\wedge \mathbf {s}={\mathbf {s}̰}(\mathit {ti})\wedge \mathbf {s}^{\prime }={\mathbf {s}̰}(\mathit {ti}^{\prime -})\wedge \\ \forall t\in (\mathit {ti}, \mathit {ti}^{\prime })\cdot P({\mathbf {s}̰}(t), {\mathbf {s}̰̇}(t))\wedge \mathit {tr}^{\prime }-\mathit {tr}=\langle \mathit {ti}^{\prime }-\mathit {ti}, \mathit {RS}\rangle \end{array} \!\!\!\right) \end{array} \end{equation*}
is a finite continuous process over the time interval \([\mathit {ti}, \mathit {ti}^{\prime })\) , and it says that \({\mathbf {s}̰}\) is differentiable with \(P({\mathbf {s}̰}, {\mathbf {s}̰̇})\) holding at every instant t from \(\mathit {ti}\) to \(\mathit {ti}^{\prime }\) . During this period, the channel operations in the ready set \(\mathit {RS}\) are waiting, recorded by the trace history \(\mathit {tr}^{\prime }-\mathit {tr}\) . The predicate \(\phi (\mathit {ti}, \mathit {ti}^{\prime })\) characterises the relation between \(\mathit {ti}\) and \(\mathit {ti}^{\prime }\) , such as \(\mathit {ti}^{\prime }-\mathit {ti}=2\) . We omit \(\phi (\mathit {ti}, \mathit {ti}^{\prime })\) if \(\phi (\mathit {ti}, \mathit {ti}^{\prime })=(\mathit {ti}\lt \mathit {ti}^{\prime })\) . For brevity,
\begin{equation*} \begin{array}{lcl} {\lceil\!\!\lceil} P({\mathbf {s}̰}, {\mathbf {s}̰̇}), \mathit {RS}{\rfloor\!\!\rfloor} _d&\widehat{=}&{\lceil\!\!\lceil} P({\mathbf {s}̰}, {\mathbf {s}̰̇}), \mathit {RS}{\rfloor\!\!\rfloor} _{\mathit {ti}^{\prime }-\mathit {ti}=d},\\ {\lceil\!\!\lceil} P({\mathbf {s}̰}, {\mathbf {s}̰̇}), \mathit {RS}{\rfloor\!\!\rfloor} _{\sim d}&\widehat{=}&{\lceil\!\!\lceil} P({\mathbf {s}̰}, {\mathbf {s}̰̇}), \mathit {RS}{\rfloor\!\!\rfloor} _{(\mathit {ti}^{\prime }-\mathit {ti})\sim d}, \end{array} \end{equation*}
where \(d\gt 0\) and \(\sim ~\in \lbrace \lt , \le , \ne , \gt , \ge \rbrace\) .
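For instance (our illustration, not one of the definitions above), assuming a single state variable s, the process
\begin{equation*} {\lceil\!\!\lceil} \dot{{s̰}}=1, \emptyset {\rfloor\!\!\rfloor} _{2} \end{equation*}
describes a flow of constant slope 1 lasting exactly 2 time units with no ready channel operations; it entails \(s^{\prime }=s+2\) and \(\mathit {tr}^{\prime }-\mathit {tr}=\langle 2, \emptyset \rangle\) .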
A sequence of operations that is assumed to take no time is called a super-dense computation [38]. The assumption reflects that the computer processing the operations is much faster than the physical devices attached to it, so the time needed to compute the discrete operations is negligible; the temporal order of the computations is nevertheless preserved. Under the assumption of super-dense computation, a discrete process without communication is defined by
\begin{equation*} \begin{array}{rcl} \lceil P(\mathbf {s}, \mathbf {s}^{\prime })\rfloor &\widehat{=}& \mathcal {H}_{\mbox{HP}}(\mathit {ti}=\mathit {ti}^{\prime }\lt +\infty \wedge P(\mathbf {s}, \mathbf {s}^{\prime })\wedge \mathit {tr}^{\prime }-\mathit {tr}=\tau). \end{array} \end{equation*}
It executes instantly at time \(\mathit {ti}=\mathit {ti}^{\prime }\lt +\infty\) , rather than continuously over a time interval. In particular,
\begin{equation*} \begin{array}{rcl} \lceil \rfloor &\widehat{=}&\mathcal {H}_{\mbox{HP}}(\mathit {ti}=\mathit {ti}^{\prime }\lt +\infty \wedge \mathbf {s}=\mathbf {s}^{\prime }\wedge \mathit {tr}=\mathit {tr}^{\prime }). \end{array} \end{equation*}
Note that \(\lceil \rfloor \ne \lceil \mathbf {s}=\mathbf {s}^{\prime }\rfloor\) as their traces differ: the former leaves the trace unchanged ( \(\mathit {tr}=\mathit {tr}^{\prime }\) ), whereas the latter appends a \(\tau\) block. We can prove the following property for \(\lceil \rfloor\) .
Property 16 (Unit).
\(\lceil \rfloor\) is a unit of hybrid processes w.r.t. \(⨟\) .
Corollary 1 (Unit).
\(\vdash \lceil \rfloor\) is a unit of normal hybrid designs w.r.t. \(⨟\) .
Instant communications can be defined by discrete processes with communication:
\begin{equation*} \begin{array}{ccl} \lceil \mathit {ch?}, \mathbf {s}_i\rfloor &\widehat{=}& \mathcal {H}_{\mbox{HP}} (\mathit {ti}=\mathit {ti}^{\prime }\lt +\infty \wedge \exists d\cdot \mathbf {s}_i:= d \wedge \mathit {tr}^{\prime }-\mathit {tr}=\langle \mathit {ch?}, d\rangle),\\ \lceil \mathit {ch!}, e(\mathbf {s})\rfloor &\widehat{=}& \mathcal {H}_{\mbox{HP}}(\mathit {ti}=\mathit {ti}^{\prime }\lt +\infty \wedge \mathbf {s}^{\prime }=\mathbf {s}\wedge \mathit {tr}^{\prime }-\mathit {tr}=\langle \mathit {ch!}, e(\mathbf {s})\rangle), \end{array} \end{equation*}
where \(\mathbf {s}_i\) is the ith variable in \(\mathbf {s}\) and \(e(\mathbf {s})\) is an expression containing \(\mathbf {s}\) .
In addition, we also define infinite continuous processes:
\begin{equation*} \begin{array}{rcl} {\lceil\!\!\lceil} P({\mathbf {s}̰}, {\mathbf {s}̰̇}), \mathit {RS}{\rfloor\!\!\rfloor} _\infty &\widehat{=}& \mathcal {H}_{\mbox{HP}}\! \left(\!\!\! \begin{array}{l} \mathit {ti}\lt \mathit {ti}^{\prime }=+\infty \wedge \mathbf {s}={\mathbf {s}̰}(\mathit {ti})\wedge \\ \forall t\in (\mathit {ti}, +\infty)\cdot P({\mathbf {s}̰}(t), {\mathbf {s}̰̇}(t))\\ \wedge \mathit {tr}^{\prime }-\mathit {tr}=\langle +\infty , \mathit {RS}\rangle \end{array} \!\!\!\right). \end{array} \end{equation*}
Since the state variables cannot be observed at the point at infinity, the process does not output any values of the state variables, i.e., the output state variables \(\mathbf {s}^{\prime }\) are hidden.

6.2 From HCSP to HUTP

HCSP is a formal language for describing hybrid systems. It extends Communicating Sequential Processes by introducing differential equations for modelling continuous evolutions and interrupts for modelling arbitrary interactions between continuous evolutions and discrete jumps. The syntax of HCSP is given as follows, adapted from Reference [64].
\begin{equation} \begin{array}{lll} \textsf {P}&::= &{\bf \textsf {skip}}\mid x:= e\mid {\bf \textsf {wait }}d\mid \mathit {ch?x}\mid \mathit {ch!e}\mid X\mid \mbox{rec }X\cdot \textsf {P}\mid \textsf {P};\textsf {P}\mid \textsf {P}\sqcap \textsf {P}\mid B\rightarrow \textsf {P}\mid \\ &&\langle F(\dot{\mathbf {s}},\mathbf {s})=0\&B\rangle \mid \langle F(\dot{\mathbf {s}},\mathbf {s})=0\& B\rangle \unrhd _d\textsf {P}\mid \langle F(\dot{\mathbf {s}}, \mathbf {s})=0\& B\rangle \unrhd ⫾ _{i\in I}(\mathit {io_i}\rightarrow \textsf {P}_i),\\ \textsf {S}&::= &\textsf {P}\mid \textsf {S}\Vert \textsf {S}. \end{array} \end{equation}
(3)
The HCSP constructs can be defined, respectively, by normal hybrid designs (Section 5.2) as follows:
The skip statement terminates immediately, having no effect on variables, and it is modelled as the relational identity:
\begin{equation*} \begin{array}{rcl} 〚 {\bf \textsf {skip}}〛 _\mbox{HUTP}&\widehat{=}&\vdash \lceil \rfloor . \end{array} \end{equation*}
It is a unit of normal hybrid designs w.r.t. \(⨟\) by Corollary 1.
The assignment of the value e to a variable x is modelled as setting x to e and keeping all other variables constant if e can be successfully evaluated [28]. Therefore, the assignment can be interpreted as follows:
\begin{equation*} \begin{array}{rcl} 〚 x:= e〛 _\mbox{HUTP}&\widehat{=}&\text{E}(e)\vdash \lceil x:= e\rfloor , \end{array} \end{equation*}
where \(\text{E}(e)\) is \(\mathcal {H}_{\mbox{HC}}\) -healthy and specifies the condition under which e can be evaluated. Note that E is assumed to be defined for all expressions. For example, \(\mathcal {H}_{\mbox{HP}}(s\ne 0)\) can be such a condition, where s is a state variable in e.
The wait statement idles for d time units, keeping all variables unchanged:
\begin{equation*} \begin{array}{rcl} 〚 {\bf \textsf {wait }}d〛 _\mbox{HUTP}&\widehat{=}&\vdash {\lceil\!\!\lceil} {\mathbf {s}̰̇}=\mathbf {0}, \emptyset {\rfloor\!\!\rfloor} _d. \end{array} \end{equation*}
The sequential composition \(\textsf {P}_0; \textsf {P}_1\) behaves as \(\textsf {P}_0\) first, and if it terminates, as \(\textsf {P}_1\) afterwards:
\begin{equation*} \begin{array}{rcl} 〚 \textsf {P}_0; \textsf {P}_1〛 _\mbox{HUTP}&\widehat{=}&〚 \textsf {P}_0〛 _\mbox{HUTP}⨟ 〚 \textsf {P}_1〛 _\mbox{HUTP}. \end{array} \end{equation*}
\(\textsf {P}_0\sqcap \textsf {P}_1\) denotes internal choice, which behaves as either \(\textsf {P}_0\) or \(\textsf {P}_1\) , where the choice is made by the process itself. It is interpreted as a demonic choice between the two operands in the standard way:
\begin{equation*} \begin{array}{rcl} 〚 \textsf {P}_0\sqcap \textsf {P}_1〛 _\mbox{HUTP}&\widehat{=}&〚 \textsf {P}_0〛 _\mbox{HUTP}\sqcap 〚 \textsf {P}_1〛 _\mbox{HUTP}. \end{array} \end{equation*}
The alternative \(B\rightarrow \textsf {P}\) , where B is a Boolean expression, behaves as \(\textsf {P}\) if B is true; otherwise it terminates immediately:
\begin{equation*} \begin{array}{rcl} 〚 B\rightarrow \textsf {P}〛 _\mbox{HUTP}&\widehat{=}&〚 \textsf {P}〛 _\mbox{HUTP}\lhd B\rhd 〚 {\bf \textsf {skip}}〛 _\mbox{HUTP}. \end{array} \end{equation*}
A process variable X is interpreted as a predicate variable:
\begin{equation*} \begin{array}{rcl} 〚 X〛 _\mbox{HUTP}&\widehat{=}&X. \end{array} \end{equation*}
The recursion \(\mbox{rec }X\cdot \textsf {P}\) means that the execution of \(\textsf {P}\) can be repeated, by replacing each occurrence of X with \(\mbox{rec }X\cdot \textsf {P}\) itself during the execution of \(\textsf {P}\) , i.e., \(\mbox{rec }X\cdot \textsf {P}\) behaves like \(\textsf {P}[\mbox{rec }X\cdot \textsf {P}/X]\) . The semantics of recursion can be defined as the least or the greatest fixed point by
\begin{equation*} \begin{array}{rcl} 〚 \mbox{rec }X\cdot \textsf {P}〛 _\mbox{HUTP}^\mu &\widehat{=}&\mu X\cdot 〚 \textsf {P}〛 _\mbox{HUTP},\\ 〚 \mbox{rec }X\cdot \textsf {P}〛 _\mbox{HUTP}^\nu &\widehat{=}&\nu X\cdot 〚 \textsf {P}〛 _\mbox{HUTP}. \end{array} \end{equation*}
For example, the semantics of \(\textsf {P}^{\ast }\) can be defined as
\begin{equation*} \begin{array}{ccccc} 〚 \textsf {P}^{\ast }〛 _\mbox{HUTP}&\widehat{=}&〚 \mbox{rec }X\cdot {\bf \textsf {skip}}\sqcap (\textsf {P};X)〛 _\mbox{HUTP}^\mu &=&\exists n\in \mathbb {N}\cdot 〚 \textsf {P}^n〛 _\mbox{HUTP}, \end{array} \end{equation*}
where \(\mathbb {N}\) is the set of non-negative integers and \(\textsf {P}^0\ \widehat{=}\ {\bf \textsf {skip}}\) .
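To see informally why the least fixed point yields this form, note that repeatedly unfolding \(\mu X\cdot ({\bf \textsf {skip}}\sqcap (\textsf {P};X))\) gives
\begin{equation*} {\bf \textsf {skip}}\sqcap (\textsf {P};{\bf \textsf {skip}})\sqcap (\textsf {P};\textsf {P};{\bf \textsf {skip}})\sqcap \cdots \ =\ \sqcap _{n\in \mathbb {N}}\textsf {P}^{n}, \end{equation*}
and a demonic choice over all \(n\in \mathbb {N}\) is expressed by the existential quantification \(\exists n\in \mathbb {N}\cdot 〚 \textsf {P}^n〛 _\mbox{HUTP}\) . This is only a sketch; the formal justification relies on the Knaster–Tarski construction of \(\mu\) .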
A continuous evolution statement \(\langle F(\dot{\mathbf {s}},\mathbf {s})=0\& B\rangle\) says that the process keeps evolving continuously following the differential equation F for as long as the domain constraint B holds, and terminates as soon as B is violated:
\begin{equation*} \begin{array}{rcl} 〚 \langle F(\dot{\mathbf {s}},\mathbf {s})=0\& B\rangle 〛 _\mbox{HUTP}&\widehat{=}&\textsf {exit}\sqcap (\textsf {ode}^\emptyset ⨟ \textsf {exit}), \end{array} \end{equation*}
where
\begin{equation*} \begin{array}{ccl} \textsf {ode}^\mathit {RS}&\widehat{=}&\vdash {\lceil\!\!\lceil} F({\mathbf {s}̰̇}, {\mathbf {s}̰})=0\wedge B[{\mathbf {s}̰}/\mathbf {s}], \mathit {RS}{\rfloor\!\!\rfloor} ,\\ \textsf {exit}&\widehat{=}&\vdash \lceil \lnot B(\mathbf {s})\wedge \mathbf {s}^{\prime }=\mathbf {s}\rfloor . \end{array} \end{equation*}
Note that \(\textsf {ode}^\mathit {RS}\) and \(\textsf {exit}\) take F and B as parameters; for brevity, the parameters are omitted in what follows. The above states that, depending on whether B holds, the process either terminates at the beginning, i.e., \(\textsf {exit}\) , or evolves for a finite period before terminating, without waiting for any communication, i.e., \(\textsf {ode}^\emptyset ⨟ \textsf {exit}\) .
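As a concrete (hedged) instance, consider \(\langle \dot{s}=1\& s\lt 2\rangle\) started in a state with \(s=0\) : the exit branch is initially infeasible since \(B=(s\lt 2)\) holds, so the process evolves with slope 1 until B ceases to hold, i.e.,
\begin{equation*} \vdash {\lceil\!\!\lceil} \dot{{s̰}}=1\wedge {s̰}\lt 2, \emptyset {\rfloor\!\!\rfloor} \quad ⨟ \quad \vdash \lceil s\ge 2\wedge s^{\prime }=s\rfloor , \end{equation*}
which forces \(\mathit {ti}^{\prime }-\mathit {ti}=2\) and \(s^{\prime }=2\) .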
\(\langle F(\dot{\mathbf {s}},\mathbf {s})=0\& B\rangle \unrhd _d\textsf {P}\) behaves like \(\langle F(\dot{\mathbf {s}},\mathbf {s})=0\& B\rangle\) , if the evolution terminates before d time units. Otherwise, after d time units of evolution according to F, it moves on to execute \(\textsf {P}\) :
\begin{equation*} \begin{array}{rcl} 〚 \langle F(\dot{\mathbf {s}},\mathbf {s})=0\& B\rangle \unrhd _d\textsf {P}〛 _\mbox{HUTP}&\widehat{=}& \textsf {exit}\sqcap (\textsf {ode}_{\lt d}^\emptyset ⨟ \textsf {exit})\sqcap (\textsf {ode}_d^\emptyset ⨟ 〚 \textsf {P}〛 _\mbox{HUTP}). \end{array} \end{equation*}
It can either terminate at the beginning, i.e., \(\textsf {exit}\) ; evolve for less than d time units before terminating, i.e., \(\textsf {ode}_{\lt d}^\emptyset ⨟ \textsf {exit}\) ; or evolve for exactly d time units and then continue to execute \(\textsf {P}\) , i.e., \(\textsf {ode}_{d}^\emptyset ⨟ 〚 \textsf {P}〛 _\mbox{HUTP}\) .
\(\langle F(\dot{\mathbf {s}}, \mathbf {s})=0\& B\rangle \unrhd ⫾ _{i\in I}(\mathit {io_i}\rightarrow \textsf {P}_i)\) behaves like \(\langle F(\dot{\mathbf {s}}, \mathbf {s})=0\& B\rangle\) , except that the continuous evolution is preempted as soon as one of the communications \(\mathit {io_i}\) takes place, which is followed by the respective \(\textsf {P}_i\) , where \(io_i\) stands for a communication event, i.e., either \(\mathit {ch?x}\) or \(\mathit {ch!e}\) . Notice that if the continuous part terminates before a communication among \(\lbrace \mathit {io}_i\mid i\in I\rbrace\) occurs, then the process terminates without communicating. Concretely,
\[\begin{eqnarray} 〚 \langle F(\dot{\mathbf {s}}, \mathbf {s})=0\& B\rangle \unrhd ⫾ _{i\in I}(\mathit {io_i}\rightarrow \textsf {P}_i) 〛 _\mbox{HUTP}&\widehat{=}&\textsf {exit}\sqcap \textsf {comm}\sqcap (\textsf {ode}^\mathit {RS}⨟ (\textsf {exit}\sqcap \textsf {comm})). \end{eqnarray}\]
(4)
Without loss of generality, we let \(I=I_?\cup I_!\) with \(I_?\cap I_!=\emptyset\) such that \(\mathit {io_i}=\mathit {ch_i?x_i}\) if \(i\in I_?\) and \(\mathit {ch_i!e_i}\) otherwise ( \(i\in I_!\) ), where \(e_i\) is an arithmetic expression. Note that \(x_i\) and \(x_j\) can denote the same variable even if \(i\ne j\) . Let \(\mathit {RS}=\lbrace \mathit {ch_i?}, \mathit {ch_j!}\mid i\in I_?, j\in I_!\rbrace\) be the set of waiting channel operations. Then, the preemption by communication can be described by
\begin{equation*} \begin{array}{rcl} \textsf {comm}&\widehat{=}& \textsf {input}\sqcap \textsf {output,} \end{array} \end{equation*}
where
\begin{equation*} \begin{array}{ccl} \textsf {input}&\widehat{=}&\sqcap _{i\in I_?}(\vdash \lceil \mathit {ch_i?}, x_i\rfloor \quad ⨟ \quad 〚 \textsf {P}_i〛 _\mbox{HUTP}),\\ \textsf {output}&\widehat{=}&\sqcap _{j\in I_!}(\text{E}(e_j(\mathbf {s}))\vdash \lceil \mathit {ch_j!}, e_j(\mathbf {s})\rfloor \quad ⨟ \quad 〚 \textsf {P}_j〛 _\mbox{HUTP}). \end{array} \end{equation*}
It states that the process can either terminate or communicate at the beginning ( \(\textsf {exit}\sqcap \textsf {comm}\) ), or evolve for a finite period before terminating or communicating ( \(\textsf {ode}^\mathit {RS}⨟ (\textsf {exit}\sqcap \textsf {comm})\) ).
\(\mathit {ch?x}\) receives a value along channel \(\mathit {ch}\) and assigns it to x; dually, \(\mathit {ch!e}\) sends the value of e along channel \(\mathit {ch}\) :
\begin{equation*} \begin{array}{rcl} 〚 \mathit {ch?x}〛 _\mbox{HUTP}&\widehat{=}&〚 \langle \dot{\mathbf {s}}=\mathbf {0}\&\mathbf {true}\rangle \unrhd (\mathit {ch?x}\rightarrow {\bf \textsf {skip}})〛 _\mbox{HUTP},\\ 〚 \mathit {ch!e}〛 _\mbox{HUTP}&\widehat{=}&〚 \langle \dot{\mathbf {s}}=\mathbf {0}\&\mathbf {true}\rangle \unrhd (\mathit {ch!e}\rightarrow {\bf \textsf {skip}})〛 _\mbox{HUTP}. \end{array} \end{equation*}
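For example, unfolding the definition of \(\mathit {ch?x}\) with \(B=\mathbf {true}\) (so that the guard \(\lnot B\) of \(\textsf {exit}\) is unsatisfiable, and \(\textsf {skip}\) is dropped as the unit of \(⨟\) ) leaves, as a sketch,
\begin{equation*} 〚 \mathit {ch?x}〛 _\mbox{HUTP}\ =\ (\vdash \lceil \mathit {ch?}, x\rfloor)\sqcap \left(\left(\vdash {\lceil\!\!\lceil} {\mathbf {s}̰̇}=\mathbf {0}, \lbrace \mathit {ch?}\rbrace {\rfloor\!\!\rfloor} \right)⨟ \vdash \lceil \mathit {ch?}, x\rfloor \right), \end{equation*}
i.e., the process either receives immediately or waits with \(\mathit {ch?}\) ready, keeping the state constant, and then receives.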
A communication takes place when both the sending and the receiving parties are ready and may cause one side to wait if the other side is not ready.
\(\textsf {P}_0\Vert \textsf {P}_1\) behaves as if the sequential processes \(\textsf {P}_0\) and \(\textsf {P}_1\) ran independently, except that all communications along the common channels connecting \(\textsf {P}_0\) and \(\textsf {P}_1\) are synchronised. The processes \(\textsf {P}_0\) and \(\textsf {P}_1\) in parallel can share neither variables nor input or output channels, i.e., if a channel operation \(\mathit {ch\ast }\) appears in \(\textsf {P}_0\) , then it does not appear in \(\textsf {P}_1\) , and vice versa. The parallel composition of \(\textsf {P}_0\) and \(\textsf {P}_1\) is translated as the parallel composition of Definition 5.4 with the merge predicate \(M_I\) of Definition 4.5:
\[\begin{eqnarray} 〚 \textsf {P}_0\Vert \textsf {P}_1〛 _\mbox{HUTP}&\widehat{=}&〚 \textsf {P}_0〛 _\mbox{HUTP}\Vert _{M_I}^\text{HP}〚 \textsf {P}_1〛 _\mbox{HUTP}, \end{eqnarray}\]
(5)
where I is the set of common channels between \(\textsf {P}_0\) and \(\textsf {P}_1\) .
In what follows, we prove the consistency between the HUTP semantics and the structural operational semantics (SOS) of HCSP given in Reference [64]. The latter consists of a collection of transition rules of the following form:
\begin{equation*} \frac{\text{Pre-condition}}{(\textsf {P},(\mathit {ti}, \mathbf {s}))\xrightarrow {\textsf {tb}}(\textsf {P}^{\prime },(\mathit {ti}^{\prime }, \mathbf {s}^{\prime }), {\mathbf {s}̰})}, \end{equation*}
where \(\textsf {P}\) and \(\textsf {P}^{\prime }\) are HCSP processes, \((\mathit {ti}, \mathbf {s})\) and \((\mathit {ti}^{\prime }, \mathbf {s}^{\prime })\) are process states, described by the state variables with timestamps, tb denotes a trace block as defined in Section 3, and \({\mathbf {s}̰}\) is a flow depicting the evolution from \(\mathbf {s}\) to \(\mathbf {s}^{\prime }\) . The flow \({\mathbf {s}̰}\) is omitted if the transition describes a discrete action. For example, the SOS of an assignment is
\begin{equation*} \frac{e\text{ can be evaluated on }\mathbf {s}}{(x:= e, (\mathit {ti}, \mathbf {s}))\xrightarrow {\tau }(\varepsilon , (\mathit {ti}, \mathbf {s}[x\mapsto e(\mathbf {s})]))}, \end{equation*}
where \(\varepsilon\) denotes termination. A more comprehensive understanding of the operational semantics can be found in Reference [64].
We say the HUTP semantics and the SOS of an HCSP process are consistent iff they agree on time, state, and trace. Concretely, the changes of time and state and the trace generated according to the SOS of an HCSP process should be reflected by its HUTP semantics, and vice versa. For example, the SOS for the ODE with communication interruption,
\begin{equation*} \begin{array}{rcl} \textsf {ODE}_\unrhd &\widehat{=}& \langle F(\dot{\mathbf {s}}, \mathbf {s})=0\& B\rangle \unrhd ⫾ _{i\in I}({ch}_i\ast _i\rightarrow \textsf {P}_i), \end{array} \end{equation*}
is described by the following rules:
\begin{equation*} \begin{array}{c} \!\!\displaystyle \frac{\lnot B(\mathbf {s})}{(\textsf {ODE}_\unrhd , (\mathit {ti},\mathbf {s}))\xrightarrow {\tau }(\varepsilon , (\mathit {ti},\mathbf {s}))}\![\text{Int0}], \frac{ \begin{matrix}{\mathbf {s}̰}\text{ is a solution of }F(\dot{\mathbf {s}}, \mathbf {s})=0\\ {\mathbf {s}̰}(\mathit {ti})=\mathbf {s}\quad \mathbf {s}^{\prime }={\mathbf {s}̰}(\mathit {ti}^{\prime -})\quad \forall t\in [\mathit {ti}, \mathit {ti}^{\prime })\cdot B({\mathbf {s}̰}(t)) \end{matrix} }{(\textsf {ODE}_\unrhd ,(\mathit {ti},\mathbf {s}))\xrightarrow {\langle \mathit {ti}^{\prime }-\mathit {ti}, \lbrace \mathit {ch_i?}, \mathit {ch_j!}\mid i\in I_?, j\in I_!\rbrace \rangle }(\textsf {ODE}_\unrhd , (\mathit {ti}^{\prime }, \mathbf {s}^{\prime }), {\mathbf {s}̰})}\![\text{Int1}],\\ \quad \\ \displaystyle \frac{i\in I_?\quad {ch}_i\ast _i={ch}_i?x_i}{(\textsf {ODE}_\unrhd , (\mathit {ti},\mathbf {s}))\xrightarrow {\langle {ch}_i?, d\rangle }(\textsf {P}_i, (\mathit {ti},\mathbf {s}[x_i\mapsto d]))}\![\text{Int2}], \quad \frac{ \begin{matrix}j\in I_!\quad {ch}_j\ast _j={ch}_j!e_j\\ e_j\text{ can be evaluated on }\mathbf {s} \end{matrix} }{(\textsf {ODE}_\unrhd , (\mathit {ti}, \mathbf {s}))\xrightarrow {\langle {ch}_j!, e_j(\mathbf {s})\rangle }(\textsf {P}_j, (\mathit {ti},\mathbf {s}))}\![\text{Int3}]. \end{array} \end{equation*}
The rule [Int0] is reflected by exit and the rule [Int1] is reflected by \(\textsf {ode}^\mathit {RS}⨟ 〚 \textsf {ODE}_\unrhd 〛 _\mbox{HUTP}\) , where \(\mathit {RS}=\lbrace \mathit {ch_i?}, \mathit {ch_j!}\mid i\in I_?, j\in I_!\rbrace\) . The rule [Int2], equivalent to \(\vdash \lceil {ch}_i?, x_i\rfloor\) , depicts a receiving event that interrupts the ODE, followed by the process \(\textsf {P}_i\) . The HUTP semantics of \(\textsf {P}_i\) is denoted by \(〚 \textsf {P}_i〛 _\mbox{HUTP}\) , and therefore, the HUTP semantics for the receiving event can be described by
\begin{equation*} \vdash \lceil {ch}_i?, x_i\rfloor \quad ⨟ \quad 〚 \textsf {P}_i〛 _\mbox{HUTP}. \end{equation*}
Similarly, by rule [Int3], the HUTP semantics of the sending event is denoted by
\begin{equation*} \text{E}(e_j)\vdash \lceil {ch}_j!, e_j\rfloor \quad ⨟ \quad 〚 \textsf {P}_j〛 _\mbox{HUTP}, \end{equation*}
where \(\text{E}(e_j)\) means that \(e_j\) can be evaluated successfully. The process comm summarises the HUTP semantics of the above communications. Note that \(\textsf {ODE}_\unrhd\) terminates after applying any of these rules except [Int1]. In summary, we can derive
\begin{equation*} \begin{array}{rcl} \textsf {ODE}_\unrhd &=& \textsf {exit}\sqcap \textsf {comm}\sqcap (\textsf {ode}^\mathit {RS}⨟ \textsf {ODE}_\unrhd). \end{array} \end{equation*}
In other words, the SOS of \(\textsf {ODE}_\unrhd\) is equivalent to the equation \(X=F(X)\) , where
\begin{equation*} \begin{array}{rcl} F(X)&\widehat{=}&\textsf {exit}\sqcap \textsf {comm}\sqcap (\textsf {ode}^\mathit {RS}⨟ X). \end{array} \end{equation*}
Any solution of \(X=F(X)\) can be treated as the HUTP semantics of \(\textsf {ODE}_\unrhd\) . In this article, we use the least fixed point:
\begin{equation*} \begin{array}{rcl} \mu X.F(X)&=&\textsf {exit}\sqcap \textsf {comm}\sqcap \left(\textsf {ode}^\mathit {RS}⨟ (\textsf {exit}\sqcap \textsf {comm})\right), \end{array} \end{equation*}
as shown in Equation (4). It can be checked that the SOS of \(\textsf {ODE}_\unrhd\) , denoted by the above four transition rules, is reflected by Equation (4), and vice versa.
Remark 1.
Another solution of the equation \(X=F(X)\) above is
\begin{equation*} \textsf {ode}^\mathit {RS}_\infty \sqcap \textsf {exit}\sqcap \textsf {comm}\sqcap (\textsf {ode}^\mathit {RS}⨟ (\textsf {exit}\sqcap \textsf {comm})), \end{equation*}
where \(\textsf {ode}_\infty ^\mathit {RS}~\widehat{=}~\vdash {\lceil\!\!\lceil} F({\mathbf {s}̰̇},{\mathbf {s}̰})=0\wedge B[{\mathbf {s}̰}/\mathbf {s}],\mathit {RS}{\rfloor\!\!\rfloor} _\infty\) denotes that \(\textsf {ODE}_\unrhd\) evolves forever and never terminates. This solution is reasonable, although it is neither the least nor the greatest fixed point.
Theorem 6.1 (Semantic Consistency).
The HUTP semantics of HCSP is consistent with the structural operational semantics of HCSP.

6.3 From Simulink to HUTP

Matlab/Simulink [39] is an industrial de-facto standard for modelling embedded systems. A Simulink diagram consists of a set of blocks. Each block has a set of input and output signals. A discrete block computes the output signal from the input signals periodically or aperiodically. A continuous block computes the output signal continuously by following a differential equation. Blocks can be organised into hierarchical subsystems, such as normal, triggered, and enabled subsystems.
In this subsection, we introduce a formal semantics of Simulink by encoding it into HUTP. Overall, a Simulink diagram is split into discrete and continuous sub-diagrams, consisting of discrete and continuous blocks, respectively. We define the respective semantics of the discrete and continuous sub-diagrams and then compose them. The following Simulink diagrams serve as running examples: the left of Figure 2 shows the original diagram, and the right shows the same diagram with ports added between some blocks.
Fig. 2. Examples of plant-control loop.

6.3.1 Discrete Block.

A discrete block is executed during a finite time interval. The left of Figure 3 shows a typical periodic discrete block performing a math operation, such as Bias and Gain in Figure 2; such blocks are stateless. The block on the right is a periodic Unit Delay block, which is stateful.
Fig. 3. Discrete blocks.
For each discrete block, we use ports to serve as channels for the input and output lines. For a periodic Math Operation block, periodically, one first gets data from the input ports (get), then performs a computation based on the inputs (comp), next puts the computed result to its output ports (put), and finally keeps silent for a period of time (period), as follows:
\begin{equation*} \begin{array}{rcl} \textsf {get} &\widehat{=}& \vdash \lceil \mathit {in}_0?, x\rfloor \wedge \text{G}_0(x^{\prime })\quad ⨟ \quad \vdash \lceil \mathit {in}_1?, y\rfloor \wedge \text{G}_1(y^{\prime }),\\ \textsf {comp} &\widehat{=}& \text{A}(x, y)\vdash \lceil z:= \text{op}(x, y)\rfloor ,\\ \textsf {put} &\widehat{=}& \vdash \lceil \mathit {out}_0!, z\rfloor \quad ⨟ \quad \vdash \lceil \mathit {out}_1!, z\rfloor ,\\ \textsf {period} &\widehat{=}& \vdash {\lceil\!\!\lceil} {ż̰}=0, \emptyset {\rfloor\!\!\rfloor} _{\textsf {st}}. \end{array} \end{equation*}
In the above, get acquires data x and y from channels (ports) \(\mathit {in}_0\) and \(\mathit {in}_1\) , respectively. Note that \(\lceil \mathit {in}_0?, x\rfloor\) and \(\lceil \mathit {in}_1?, y\rfloor\) are instant communications (i.e., they fetch data whenever required). The constraints \(\text{G}_0(x^{\prime })\) and \(\text{G}_1(y^{\prime })\) mean that only the desired data is received. Receiving does not consume time, so the order of the channel operations \(\mathit {in}_0?\) and \(\mathit {in}_1?\) does not matter. comp denotes a computation, where \(\text{A}(x, y)\) means that the operands x and y, provided by get, must satisfy some assumption, such as \(y\ne 0\) . Therefore, the relation \(\text{G}_0(x)\wedge \text{G}_1(y)\Rightarrow \text{A}(x, y)\) must hold. If \(\text{A}(x, y)\) is satisfied, then the computation \(\lceil z:= \text{op}(x, y)\rfloor\) executes. In contrast to get, put simply puts the computed result on channels (ports) \(\mathit {out}_0\) and \(\mathit {out}_1\) . Note that get, comp and put do not consume time. Then, the process keeps quiescent for some period specified by the sample time st, denoted by period, where \(\emptyset\) indicates that the channel operations \(\mathit {in}_0?\) , \(\mathit {in}_1?\) , \(\mathit {out}_0!\) and \(\mathit {out}_1!\) are not ready to communicate with the context during this period. Finally, it can terminate at any time before the next period arrives ( \(\textsf {tail}\) ). In summary, a periodic Math Operation block can be translated as
\[\begin{eqnarray} \textsf {MOP} &\widehat{=}& \textsf {get}⨟ \textsf {comp}⨟ \textsf {put}⨟ (\textsf {period}⨟ \textsf {get}⨟ \textsf {comp}⨟ \textsf {put})^{\ast }⨟ \textsf {tail,} \end{eqnarray}\]
(6)
where
\begin{equation*} \begin{array}{rcl} \textsf {tail}&\widehat{=}&\vdash {\lceil\!\!\lceil} {ż̰}=0, \emptyset {\rfloor\!\!\rfloor} _{\lt \textsf {st}}\quad \sqcap \quad \vdash \lceil \rfloor . \end{array} \end{equation*}
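The control flow of Equation (6) can be sketched as a small Python model (illustrative only: the list-based channels, the synchronous one-iteration-per-period timing, and all names are our assumptions, not part of the HUTP encoding):

```python
# Sketch of the periodic Math Operation block of Equation (6): in each
# period it gets both inputs, checks the assumption A(x, y), computes
# z := op(x, y), and puts z on both output channels. Time is abstracted
# to one loop iteration per sample period st.
def mop(in0, in1, op, assume=lambda x, y: True):
    out0, out1 = [], []
    for x, y in zip(in0, in1):            # get: in0?x ; in1?y (order irrelevant)
        if not assume(x, y):              # A(x, y) violated: the design aborts
            raise ValueError("assumption A(x, y) violated")
        z = op(x, y)                      # comp: z := op(x, y)
        out0.append(z)                    # put: out0!z
        out1.append(z)                    # put: out1!z
    return out0, out1                     # tail: terminate before the next period

print(mop([1, 2, 3], [4, 5, 6], lambda x, y: x + y))
# → ([5, 7, 9], [5, 7, 9])
```

For a division block, `assume` would be `lambda x, y: y != 0`, matching the example assumption \(y\ne 0\) above.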
Unlike a Math Operation block, a Unit Delay block is stateful. The right of Figure 3 is a typical periodic Unit Delay block. At the beginning, its state is initialised (init). Then, periodically, it puts the current state to the output channels (ports) \(\mathit {out}_0\) and \(\mathit {out}_1\) , then gets data from the input channel (port) \(\mathit {in}\) and updates the state (get), and finally holds the state for some period (period). In summary, a Unit Delay block can be translated as follows:
\begin{equation*} \begin{array}{rcl} \textsf {UnitDelay} &\widehat{=}& \textsf {init}⨟ \textsf {put}⨟ \textsf {get}⨟ (\textsf {period}⨟ \textsf {put}⨟ \textsf {get})^{\ast }⨟ \textsf {tail,} \end{array} \end{equation*}
where
\begin{equation*} \begin{array}{rcl} \textsf {init} &\widehat{=}& \vdash \lceil y:= \textsf {init}_{-}\textsf{y}\rfloor ,\\ \textsf {put} &\widehat{=}& \vdash \lceil \mathit {out}_0!, y\rfloor \quad ⨟ \quad \vdash \lceil \mathit {out}_1!, y\rfloor ,\\ \textsf {get} &\widehat{=}& \vdash \lceil \mathit {in}?, x\rfloor \quad ⨟ \quad \vdash \lceil y:= x\rfloor ,\\ \textsf {period} &\widehat{=}& \vdash {\lceil\!\!\lceil} \dot{{y̰}}=0, \emptyset {\rfloor\!\!\rfloor} _{\textsf {st}},\\ \textsf {tail} &\widehat{=}& \vdash {\lceil\!\!\lceil} \dot{{y̰}}=0, \emptyset {\rfloor\!\!\rfloor} _{\lt \textsf {st}}\quad \sqcap \quad \vdash \lceil \rfloor . \end{array} \end{equation*}
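The observable input/output behaviour of UnitDelay can be sketched in Python as well (the same hedged abstractions as before: one loop iteration per sample period, lists as channels):

```python
# Sketch of the periodic Unit Delay block: it is initialised with
# init_y, and in each period it first puts the current state, then
# gets the input and stores it as the next state.
def unit_delay(inputs, init_y=0):
    y = init_y                  # init: y := init_y
    outputs = []
    for x in inputs:            # one iteration per sample period st
        outputs.append(y)       # put: out0!y ; out1!y
        y = x                   # get: in?x ; y := x
    return outputs

print(unit_delay([1, 2, 3]))    # → [0, 1, 2]
```

The put-before-get ordering is exactly what makes the block a delay: each output is the input of the previous period.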

6.3.2 Continuous Block.

Blocks containing continuous states are called continuous blocks. Computing a continuous state requires knowing the derivative of the state variables. The left of Figure 4 shows an Integrator block, and the right a Math Operation block whose sample time is 0; both are typical continuous blocks.
Fig. 4. Continuous blocks.
For an Integrator block, the state is first initialised by an initial condition, as follows:
\begin{equation*} \begin{array}{rcl} \textsf {init} &\widehat{=}& \vdash \lceil y:= \textsf {init}_{-}\textsf{y}\rfloor . \end{array} \end{equation*}
Then, its semantics depends on the context it interacts with:
If the source block is continuous (such as Integrator0 in Figure 2), then the block evolves continuously following the ODE \(\dot{y}=x\) , instead of receiving x periodically or aperiodically from the input port. Meanwhile, it sends the current value of y to the output port \(\mathit {out}_1\) whenever the non-continuous target block requires it (put). In this case, the corresponding HUTP semantics is given as follows:
\[\begin{eqnarray} \textsf {Integrator} &\widehat{=}& \textsf {init}⨟ (\textsf {put}\sqcap \textsf {ode}_0)^{\ast }, \end{eqnarray}\]
(7)
where
\begin{equation*} \begin{array}{ccl} \textsf {put} &\widehat{=}& \vdash \lceil \mathit {out}_1!, y\rfloor ,\\ \textsf {ode}_0 &\widehat{=}& \vdash {\lceil\!\!\lceil} \dot{{y̰}}={x̰}, \lbrace \mathit {out}_1!\rbrace {\rfloor\!\!\rfloor} . \end{array} \end{equation*}
However, if the source block is non-continuous (such as Integrator1 in Figure 2), then, at each iteration, the block will either get data x from the input channel (get), evolve according to the latest received data ( \(\textsf {ode}_1\) ), or put the current value of y to the output channel (put), as follows:
\[\begin{eqnarray} \textsf {Integrator} &\widehat{=}& \textsf {init}⨟ (\textsf {get}\sqcap \textsf {put}\sqcap \textsf {ode}_1)^{\ast }, \end{eqnarray}\]
(8)
where
\begin{equation*} \begin{array}{ccl} \textsf {get} &\widehat{=}& \vdash \lceil \mathit {in}?, x\rfloor ,\\ \textsf {ode}_1 &\widehat{=}& \vdash {\lceil\!\!\lceil} \dot{{y̰}}={x̰}\wedge \dot{{x̰}}=0, \lbrace \mathit {in}?, \mathit {out}_1!\rbrace {\rfloor\!\!\rfloor} . \end{array} \end{equation*}
The semantics of the Math Operation block with sample time 0, on the right of Figure 4, is a bit more complicated: it requires conversions between discrete and continuous signals, as illustrated in Figure 5. Concretely, the conversion of the discrete input signal \(y_0\) into a continuous one (D/A converter) can be specified as follows:
\begin{equation*} \begin{array}{rcl} \textsf {converter0} &\widehat{=}& (\vdash \lceil \mathit {in}_1?, y_0\rfloor \wedge \text{G}(y_0^{\prime })\quad ⨟ \quad \vdash {\lceil\!\!\lceil} \dot{{y̰}_0}=0, \lbrace \mathit {in}_1?\rbrace {\rfloor\!\!\rfloor})^\ast . \end{array} \end{equation*}
At each iteration, converter0 gets the desired data (constrained by \(\text{G}(y_0^{\prime })\) ) from the input channel \(\mathit {in}_1\) and then keeps the data unchanged for some period. Thus, the input \(y_0\) can be treated as a continuous signal from the viewpoint of the continuous block.
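In signal-processing terms, converter0 behaves like a zero-order hold. A hedged Python sketch (the uniform sampling period and all names are our assumptions; the HUTP definition does not fix when samples arrive):

```python
# Zero-order hold: the discrete samples received on in1 are held
# constant between arrivals, so the continuous block sees a
# piecewise-constant signal y0(t).
def zero_order_hold(samples, period, t):
    """Value of the held signal at continuous time t >= 0."""
    k = min(int(t // period), len(samples) - 1)
    return samples[k]

print(zero_order_hold([10, 20, 30], period=0.5, t=0.7))  # → 20
```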
Fig. 5. Continuous Math Operation Block with D/A and A/D converters.
Dually, the conversion of the continuous output signal \(z_1\) into a discrete one (A/D converter) can be specified as follows:
\begin{equation*} \begin{array}{rcl} \textsf {converter1} &\widehat{=}& (\vdash \lceil \mathit {out}_1!, z_1\rfloor \quad ⨟ \quad \vdash \exists {d̰}\cdot {\lceil\!\!\lceil} \dot{{z̰}_1}={d̰}, \lbrace \mathit {out}_1!\rbrace {\rfloor\!\!\rfloor})^\ast . \end{array} \end{equation*}
At each iteration, converter1 puts the current value of \(z_1\) to the output channel \(\mathit {out}_1\) and then lets \(z_1\) continue evolving for some period. Thus, from the viewpoint of \(\mathit {out}_1\) , the signal \(z_1\) is discrete.
Given that all the inputs and outputs are continuous, the computation of the continuous Math Operation block can be translated as follows:
\begin{equation*} \begin{array}{rcl} \textsf {comp} &\widehat{=}& \text{A}({x̰}, {y̰}_1)\vdash {\lceil\!\!\lceil} {z̰}_0=\text{op}({x̰},{y̰}_1){\rfloor\!\!\rfloor} \quad \sqcap \quad \vdash \lceil \rfloor . \end{array} \end{equation*}
It means that if the continuous inputs \({x̰}\) and \({y̰}_1\) satisfy the expected assumption throughout the evolution, then the value of \({z̰}_0\) is, at all times, equal to the result of applying op to \({x̰}\) and \({y̰}_1\) . \(\vdash \lceil \rfloor\) covers the case where the simulation has not yet started ( \(\mathit {ti}=\mathit {ti}^{\prime }\) ). In summary, the semantics of the continuous Math Operation block of Figure 5 can be described by the following parallel composition:
\[\begin{eqnarray*} \nonumber \nonumber \textsf {CMOP} &\widehat{=}& \mathcal {H}_{\mbox{share}}\left(\textsf {converter0}\Vert _\emptyset \textsf {comp}\Vert _\emptyset \textsf {converter1}\right), \end{eqnarray*}\]
where (1) \(\Vert _\emptyset\) denotes \(\Vert ^\mbox{HP}_{\mathit {SYN}_\emptyset }\) with the merge predicate \(\mathit {SYN}_\emptyset ~\widehat{=}~\mathit {ti}_X=\mathit {ti}_Y\wedge M_\emptyset\) ; according to Definition 4.5, \(M_\emptyset\) alone would allow \(\mathit {ti}_X\ne \mathit {ti}_Y\) , whereas blocks in Simulink are synchronous in time, i.e., \(\mathit {ti}_X=\mathit {ti}_Y\) ; and (2) \(\mathcal {H}_{\mbox{share}}\) is the healthiness condition that enforces the consistency between \({y̰}_0\) and \({y̰}_1\) and between \({z̰}_0\) and \({z̰}_1\) as the diagram evolves, specified as follows:
\[\begin{eqnarray*} \nonumber \nonumber \mathcal {H}_{\mbox{share}}(P\vdash Q)&\widehat{=}&P\quad \vdash \quad Q\wedge \forall t\in [\mathit {ti},\mathit {ti}^{\prime })\cdot {y̰}_0(t)={y̰}_1(t)\wedge {z̰}_0(t)={z̰}_1(t). \end{eqnarray*}\]
As the parallel-by-merge of this article (Section 4.2.5) does not support shared state variables, the above healthiness condition is used to enforce parallel composition with shared state variables: it states that the evolutions of \(y_0\) and \(z_0\) must keep consistent with those of \(y_1\) and \(z_1\) , respectively, because \(y_1\) and \(z_1\) are aliases of \(y_0\) and \(z_0\) .

6.3.3 Lines between Discrete Blocks.

To synchronise communications between discrete blocks, lines between blocks serve as buffers to receive data whenever source blocks provide and target blocks require, as illustrated by the buffer in Figure 2. A typical buffer is shown in Figure 6.
Fig. 6.
Fig. 6. A two-branch line between discrete blocks.
Notice that the buffer must respect the causality relation of the ports: if the source block provides data and, at the same instant, a target block requires it, the buffer should guarantee that the target block obtains only the latest data from the source block. In other words, if the channel operations \(\mathit {out}?\) and \(\mathit {in}_0!\) of the buffer occur simultaneously, \(\mathit {out}?\) should take place before \(\mathit {in}_0!\) . Therefore, the buffer is translated as follows:
\[\begin{eqnarray} \textsf {buffer} &\widehat{=}& \left(\textsf {get}⨟ (\textsf {put}_0\sqcap \textsf {put}_1\sqcap \textsf {ready})^\ast ⨟ \textsf {ready}\right)^\ast , \end{eqnarray}\]
(9)
where
\begin{equation*} \begin{array}{ccl} \textsf {get} &\widehat{=}& \vdash \lceil \mathit {out}?, x\rfloor ,\\ \textsf {put}_0 &\widehat{=}& \vdash \lceil \mathit {in}_0!, x\rfloor ,\\ \textsf {put}_1 &\widehat{=}& \vdash \lceil \mathit {in}_1!, x\rfloor ,\\ \textsf {ready} &\widehat{=}& \vdash {\lceil\!\!\lceil} \dot{{x̰}}=0, \lbrace \mathit {out}?, \mathit {in}_0!, \mathit {in}_1!\rbrace {\rfloor\!\!\rfloor} . \end{array} \end{equation*}
This means that, at each iteration, the buffer first gets data from the output port of the source block (get), then puts the received data to the input ports of the target blocks whenever they require it ( \(\textsf {put}_0\) and \(\textsf {put}_1\) ), and finally waits for a period of time, during which channel operations \(\mathit {out}?\) , \(\mathit {in}_0!\) and \(\mathit {in}_1!\) are made ready for communication (ready).
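The informal behaviour described above can be illustrated by a toy Python sketch (our own simplified simulation of the buffer as a latest-value store, not the HUTP semantics; the class and method names are hypothetical):

```python
class Buffer:
    """Toy model of the two-branch line: it stores the latest value
    written by the source block (out?) and serves it to both target
    blocks (in_0!, in_1!) on demand."""
    def __init__(self):
        self.latest = None

    def get(self, value):
        # models out?: receive a fresh value from the source block
        self.latest = value

    def put(self, branch):
        # models in_0! / in_1!: deliver the latest value to target block `branch`;
        # causality requires that out? has happened before any in_i!
        assert self.latest is not None, "causality violated: out? must precede in_i!"
        return self.latest

buf = Buffer()
buf.get(3.14)                # source block provides a value
assert buf.put(0) == 3.14    # target block 0 reads the latest value
assert buf.put(1) == 3.14    # target block 1 reads the same latest value
buf.get(2.71)                # a new value overwrites the old one
assert buf.put(0) == 2.71    # only the latest value is ever delivered
```

Note that the nondeterministic choice \(\sqcap\) and the ready period are collapsed here into plain method calls; the sketch only captures the latest-value and causality discipline.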

6.3.4 Composition.

According to the above analysis, before translating the Simulink diagram on the left of Figure 2, we need to add the necessary ports between some blocks, resulting in the diagram shown on the right of Figure 2. Before execution, the system must be initialised by setting the time to 0 and the trace to \(\epsilon\) , as follows:
\begin{equation*} \begin{array}{rcl} \textsf {INIT} &\widehat{=}& \vdash \mathcal {H}_{\mbox{HP}}(\mathit {ti}^{\prime }=0\wedge \mathit {tr}^{\prime }=\epsilon). \end{array} \end{equation*}
Then, the composite system, the Diagram on the right of Figure 2, can be defined in two parts. The Plant part consists of the continuous blocks Integrator0 and Integrator1. The Control part consists of the discrete blocks Bias and Gain, and a buffer between them. The Diagram can be modelled by the parallel composition of Plant and Control, as follows:
\[\begin{eqnarray} \textsf {Plant} &\widehat{=}& \mathcal {H}_{\mbox{share}}\left(\textsf {Integrator0}~\Vert _\emptyset ~\textsf {Integrator1}\right), \end{eqnarray}\]
(10)
\[\begin{eqnarray} \textsf {Control} &\widehat{=}& \textsf {Bias}~\Vert _{\mathit {out}_0}~\textsf {buffer}~\Vert _{\mathit {in}_1}~\textsf {Gain}, \end{eqnarray}\]
(11)
\[\begin{eqnarray} 〚 \textsf {Diagram}〛 _\mbox{HUTP} &\widehat{=}& \textsf {INIT}⨟ (\textsf {Plant}~\Vert _{\lbrace \mathit {in}_0, \mathit {out}_1\rbrace }~\textsf {Control}), \end{eqnarray}\]
(12)
where \(\Vert _I\) represents \(\Vert _{\mathit {SYN}_I}^\mbox{HP}\) with \(\mathit {SYN}_I~\widehat{=}~\mathit {ti}_X=\mathit {ti}_Y\wedge M_I\) for simplicity, and we write \(\Vert _{ch}\) if \(I=\lbrace {ch}\rbrace\) . By Property 6, \(\mathit {SYN}_I\) is well-defined. In the above case,
\[\begin{eqnarray} \mathcal {H}_{\mbox{share}}(P\vdash Q)&\widehat{=}&P\quad \vdash \quad Q\wedge \forall t\in [\mathit {ti},\mathit {ti}^{\prime })\cdot {a̰}_0(t)={a̰}_1(t)={a̰}(t), \end{eqnarray}\]
(13)
because the continuous blocks Integrator0 and Integrator1 share a, which has two aliases, \(a_0\) and \(a_1\) .
Example 6.2 (Algebraic Loop).
Consider the loop of Bias and Gain in Figure 7. It forms an algebraic loop. According to Equation (6), the respective prefixes of the communication traces of Bias and Gain are \(\langle \mathit {in}_0?, d_0\rangle ^{\smallfrown }\tau ^{\smallfrown }\langle \mathit {out}_0!, d_1\rangle\) and \(\langle \mathit {in}_1?, d_2\rangle ^{\smallfrown }\tau ^{\smallfrown }\langle \mathit {out}_1!, d_3\rangle\) , where \(\tau\) denotes timeless computations. According to Equation (9), the prefixes of the communication traces of the two buffers between Bias and Gain are \(\langle \mathit {out}_0?, d_4\rangle ^{\smallfrown }\tau ^{\smallfrown }\langle \mathit {in}_1!, d_5\rangle\) and \(\langle \mathit {out}_1?, d_6\rangle ^{\smallfrown }\tau ^{\smallfrown }\langle \mathit {in}_0!, d_7\rangle\) . It can be verified that these four traces cannot be composed by \(\Vert _I\) , as defined in Section 3.3, which indicates that this unsound system reduces to \(\top _\mbox{HD}\) , i.e., it can do nothing.
Fig. 7.
Fig. 7. A loop of discrete blocks.
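The deadlock caused by this algebraic loop can be observed with a small Python sketch (our own simplified model: each process is the ordered list of channel operations it must perform, and a step fires only when a matching send/receive pair is enabled; \(\tau\) steps are omitted):

```python
# Each process is the ordered list of channel operations it must perform.
# ('ch', '!') is a send on ch, ('ch', '?') a receive.
processes = {
    'Bias':    [('in_0', '?'), ('out_0', '!')],
    'buffer0': [('out_0', '?'), ('in_1', '!')],
    'Gain':    [('in_1', '?'), ('out_1', '!')],
    'buffer1': [('out_1', '?'), ('in_0', '!')],
}

def step(procs):
    """Fire one synchronised send/receive pair if possible."""
    heads = {name: ops[0] for name, ops in procs.items() if ops}
    for a, (ch_a, d_a) in heads.items():
        for b, (ch_b, d_b) in heads.items():
            if a != b and ch_a == ch_b and {d_a, d_b} == {'!', '?'}:
                procs[a].pop(0)
                procs[b].pop(0)
                return True
    return False  # no pair of processes can synchronise: deadlock

# At the very start every process waits to *receive* first, so no send is
# enabled and the whole system is stuck, mirroring the reduction to top_HD.
assert step(processes) == False
```

The circular wait (each block and buffer must receive before it can send) is exactly why the four traces admit no composition under \(\Vert _I\) .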
In addition to the above, the semantics of hierarchical diagrams, such as normal, triggered, and enabled subsystems, can also be defined using HUTP. Since a clear description of their HUTP semantics would require considerable space, it is omitted from this article. Note that, compared with the HUTP semantics of Simulink given in References [10, 64], the HUTP semantics defined above is much simpler and rests on more solid foundations, as the HUTP theory is fully developed in this article.

6.4 Applying HUTP to Justify the Translation from Simulink to HCSP

In Reference [69], we showed how to translate Simulink (as well as Stateflow) diagrams to corresponding HCSP models for the purpose of verifying them using a Hybrid Hoare Logic prover in Isabelle/HOL [65, 68]. The translation is automatic and is implemented as a module ss2hcsp in the tool chain MARS2 [63, 64], which covers the flow from modelling and simulation to verification and code generation for hybrid systems. In this subsection, as an application of the HUTP proposed in this article, we justify the correctness of the translation by ss2hcsp from the Simulink diagram of Figure 2 to the corresponding HCSP model.
Definition 6.3 (Conditional Equivalence).
Let \(T~\widehat{=}~\lbrace t_n\rbrace \subseteq \mathbb {R}_{\ge 0}\) be a chain of increasing time points ( \(t_n\lt t_{n+1}\) ) with \(\lim _{n\rightarrow +\infty }t_n=+\infty\) and S the set of state variables. Let P and Q be differential relations. Then, we say that P is equivalent to Q w.r.t. T and S iff \(\textsf {P}\wedge \mathit {ti}^{\prime }\in T\) is equivalent to \(\textsf {Q}\wedge \mathit {ti}^{\prime }\in T\) on the state variables in S, denoted by \(\textsf {P}\equiv _{T,S}\textsf {Q}\) .
Intuitively, \(\textsf {P}\equiv _{T,S}\textsf {Q}\) means that if P and Q terminate at a time \(\mathit {ti}^{\prime }\in T\) , then their state variables in S agree. In this subsection, we wish to justify \(〚 \textsf {Diagram}〛 _\mbox{HUTP}\equiv _{T,S}〚 〚 \textsf {Diagram}〛 _\mbox{HCSP}〛 _\mbox{HUTP}\) for some T and S, where Diagram is the Simulink diagram of Figure 2.
For the Simulink Diagram of Figure 2, ss2hcsp generates the following HCSP process:
\begin{equation*} \begin{array}{rcl} 〚 \textsf {Diagram}〛 _\mbox{HCSP} &\widehat{=}& \textsf {Plant}_\mbox{HCSP}\Vert \textsf {Control}_\mbox{HCSP}, \end{array} \end{equation*}
where
\begin{equation*} \begin{array}{rcl} \textsf {Plant}_\mbox{HCSP}&\widehat{=}& a:= 0; x_0:= 0; \left(\langle \dot{x}_0=a, \dot{a}=z_1, \dot{z}_1=0\&\mathbf {true}\rangle \unrhd ⫾ \left(\!\!\! \begin{array}{l} {ch}_x!x_0\rightarrow {\bf \textsf {skip}},\\ {ch}_z?z_1\rightarrow {\bf \textsf {skip}} \end{array} \!\!\! \right) \right)^\ast ,\\ \textsf {Control}_\mbox{HCSP}&\widehat{=}& \mathit {ti}:= 0;{ch}_x?x_1;y:= x_1+1.5;z_0:= y\times 2.4; {ch}_z!z_0;\\ && \left(\!\!\! \begin{array}{cl} {\bf \textsf {wait}}(1);&\mathit {ti}\%2=0\rightarrow ({ch}_x?x_1; y:= x_1+1.5);\\ &\mathit {ti}\%3=0\rightarrow (z_0:= y\times 2.4; {ch}_z!z_0) \end{array} \!\!\! \right)^\ast . \end{array} \end{equation*}
The period of \(\textsf {Control}_\mbox{HCSP}\) is the greatest common divisor (GCD) of the periods of the blocks (Bias and Gain) it contains, i.e., \(\text{GCD}(2, 3)=1\) , as denoted by \({\bf \textsf {wait}}(1)\) . Since no shared variable is allowed between different HCSP processes in parallel, we let \(x_0\) and \(x_1\) be two aliases of x, and \(z_0\) and \(z_1\) aliases of z, as shown in the right diagram of Figure 2.
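The sampling scheme described above can be sanity-checked with a short Python computation (an informal check of the schedule, not part of the formal semantics):

```python
import math

# Sample periods of the two discrete blocks, as in Figure 2.
periods = {'Bias': 2, 'Gain': 3}

# The base period of Control_HCSP is the GCD of the block periods: wait(1).
base = math.gcd(*periods.values())
assert base == 1

# Within one hyper-period lcm(2, 3) = 6, each block fires at the ticks
# where ti % period == 0, matching the guards ti%2=0 and ti%3=0.
bias_times = [t for t in range(0, 6) if t % periods['Bias'] == 0]
gain_times = [t for t in range(0, 6) if t % periods['Gain'] == 0]
assert bias_times == [0, 2, 4]
assert gain_times == [0, 3]
```

So the single wait(1) loop with modulo guards reproduces the individual block periods exactly.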
We first study the conditions under which \(\textsf {Plant}_\mbox{HCSP}\) is consistent with Plant of Equation (10). According to Equation (4), \(\textsf {Plant}_\mbox{HCSP}\) can be translated as follows:
\begin{equation*} \begin{array}{rcl} 〚 \textsf {Plant}_\mbox{HCSP}〛 _\mbox{HUTP} &\widehat{=}& \textsf {init}_a⨟ \textsf {init}_{x_0}⨟ \left(\textsf {put}_x\sqcap \textsf {get}_z\sqcap (\textsf {ode}⨟ (\textsf {put}_x\sqcap \textsf {get}_z))\right)^\ast , \end{array} \end{equation*}
where \(\textsf {init}_a~\widehat{=}\vdash \lceil a:= 0\rfloor\) , \(\textsf {init}_{x_0}~\widehat{=}\vdash \lceil x_0:= 0\rfloor\) , \(\textsf {put}_x~\widehat{=}\vdash \lceil {ch}_x!, x_0\rfloor\) , \(\textsf {get}_z~\widehat{=}\vdash \lceil {ch}_z?, z_1\rfloor\) and
\[\begin{eqnarray} \textsf {ode}&\widehat{=}&\vdash {\lceil\!\!\lceil} \dot{{x̰}}_0={a̰}\wedge \dot{{a̰}}={z̰}_1\wedge {ż̰}_1=0, \lbrace {ch}_x!, {ch}_z?\rbrace {\rfloor\!\!\rfloor} . \end{eqnarray}\]
(14)
According to Equations (7) and (8), Plant of Equation (10) can be unfolded as
\[\begin{eqnarray} \textsf {Plant}&\widehat{=}& \mathcal {H}_{\mbox{share}}\left((\textsf {init}_{x_0}⨟ (\textsf {put}_x\sqcap \textsf {ode}_0)^\ast)\Vert _\emptyset (\textsf {init}_{a_0}⨟ (\textsf {get}_z\sqcap \textsf {ode}_1)^\ast)\right), \end{eqnarray}\]
(15)
where \(\textsf {init}_{a_0}~\widehat{=}~\vdash \lceil a_0:= 0\rfloor\) , \(\textsf {put}_x~\widehat{=}\vdash \lceil \mathit {in}_0!, x_0\rfloor\) , \(\textsf {get}_z~\widehat{=}\vdash \lceil \mathit {out}_1?, z_1\rfloor\) and
\begin{equation*} \begin{array}{ccl} \textsf {ode}_0&\widehat{=}&\vdash {\lceil\!\!\lceil} \dot{{x̰}}_0={a̰}_1, \lbrace \mathit {in}_0!\rbrace {\rfloor\!\!\rfloor} ,\\ \textsf {ode}_1&\widehat{=}&\vdash {\lceil\!\!\lceil} \dot{{a̰}_0}={z̰}_1\wedge {ż̰}_1=0, \lbrace \mathit {out}_1?\rbrace {\rfloor\!\!\rfloor} . \end{array} \end{equation*}
Note that \(\mathit {in}_0\) and \(\mathit {out}_1\) are aliases of \({ch}_x\) and \({ch}_z\) , respectively, so we do not distinguish \(\textsf {put}_x\) and \(\textsf {get}_z\) in \(〚 \textsf {Plant}_\mbox{HCSP}〛 _\mbox{HUTP}\) and Plant. Now, we demonstrate that Plant of Equation (15) is equivalent to
\[\begin{eqnarray} \textsf {init}_a⨟ \textsf {init}_{x_0}⨟ \left(\textsf {get}_z\sqcap \textsf {put}_x\sqcap \textsf {ode}\right)^\ast \end{eqnarray}\]
(16)
if \(\tau\) s are removed from traces, where ode is defined in Equation (14). The traces of \(\textsf {init}_{x_0}⨟ (\textsf {put}_x\sqcap \textsf {ode}_0)^\ast\) and \(\textsf {init}_{a_0}⨟ (\textsf {get}_z\sqcap \textsf {ode}_1)^\ast\) can be modelled by the finite state machines (FSMs) in Figure 8, where \(\tau _x\) and \(\tau _a\) denote the actions of \(\textsf {init}_{x_0}\) and \(\textsf {init}_{a_0}\) , respectively.
Fig. 8.
Fig. 8. FSMs of traces of \(\textsf {init}_{x_0}⨟ (\textsf {put}_x\sqcap \textsf {ode}_0)^\ast\) (left) and \(\textsf {init}_{a_0}⨟ (\textsf {get}_z\sqcap \textsf {ode}_1)^\ast\) (right).
According to the definition of \(\Vert _\emptyset\) , the two FSMs in Figure 8 only synchronise on wait blocks \(\langle \delta _x,\lbrace \mathit {in}_0!\rbrace \rangle\) and \(\langle \delta _z,\lbrace \mathit {out}_1?\rbrace \rangle\) , which indicates \(\delta _x=\delta _z\) . Therefore, their composition is illustrated in Figure 9, which reflects the following result:
\begin{equation} \left((\textsf {init}_{x_0}⨟ \textsf {put}_x^\ast ⨟ \textsf {init}_{a_0})\sqcap (\textsf {init}_{a_0}⨟ \textsf {get}_z^\ast ⨟ \textsf {init}_{x_0})\right)⨟ (\textsf {put}_x\sqcap \textsf {get}_z\sqcap \textsf {ode}_{01})^\ast , \end{equation}
(17)
where
\begin{equation*} \begin{array}{rcl} \textsf {ode}_{01}&\widehat{=}&\vdash {\lceil\!\!\lceil} \dot{{x̰}_0}={a̰}_1\wedge \dot{{a̰}_0}={z̰}_1\wedge \dot{{z̰}_1}=0,\lbrace \mathit {in}_0!,\mathit {out}_1?\rbrace {\rfloor\!\!\rfloor} . \end{array} \end{equation*}
If we ignore \(\tau\) s in traces, then \((\textsf {init}_{x_0}⨟ \textsf {put}_x^\ast ⨟ \textsf {init}_{a_0})\) and \((\textsf {init}_{a_0}⨟ \textsf {get}_z^\ast ⨟ \textsf {init}_{x_0})\) are equivalent to \((\textsf {init}_{a_0}⨟ \textsf {init}_{x_0}⨟ \textsf {put}_x^\ast)\) and \((\textsf {init}_{a_0}⨟ \textsf {init}_{x_0}⨟ \textsf {get}_z^\ast)\) , respectively. Then, Equation (17) can be simplified:
\[\begin{eqnarray} &&\left((\textsf {init}_{a_0}⨟ \textsf {init}_{x_0}⨟ \textsf {put}_x^\ast)\sqcap (\textsf {init}_{a_0}⨟ \textsf {init}_{x_0}⨟ \textsf {get}_z^\ast)\right)⨟ (\textsf {put}_x\sqcap \textsf {get}_z\sqcap \textsf {ode}_{01})^\ast \nonumber \nonumber\\ &&=\quad \textsf {init}_{a_0}⨟ \textsf {init}_{x_0}⨟ (\textsf {put}_x^\ast \sqcap \textsf {get}_z^\ast)⨟ (\textsf {put}_x\sqcap \textsf {get}_z\sqcap \textsf {ode}_{01})^\ast \nonumber \nonumber\\ &&=\quad \textsf {init}_{a_0}⨟ \textsf {init}_{x_0}⨟ (\textsf {put}_x\sqcap \textsf {get}_z\sqcap \textsf {ode}_{01})^\ast . \end{eqnarray}\]
(18)
Note that the above result should be constrained by the healthiness condition of Equation (13), which states that \(a_0\) and \(a_1\) are aliases of a. Therefore, the above result is equivalent to Equation (16), which in turn is equivalent to Plant of Equation (15) (if \(\tau\) s are removed from traces).
Fig. 9.
Fig. 9. The composition of FSMs in Figure 8 by \(\Vert _\emptyset\) .
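The product construction underlying Figure 9 can be sketched generically in Python (a simplified model of our own: transitions are labelled string edges, shared labels must fire jointly, private labels interleave; the states and labels below are illustrative, not those of Figure 8):

```python
def product(t0, t1, shared):
    """Synchronous product of two labelled transition systems.
    t0, t1: dicts mapping (state, label) -> next state.
    Labels in `shared` must fire jointly; other labels interleave."""
    trans = {}
    states0 = {s for s, _ in t0} | set(t0.values())
    states1 = {s for s, _ in t1} | set(t1.values())
    for s0 in states0:
        for s1 in states1:
            for (p, a), q in t0.items():           # private moves of t0
                if p == s0 and a not in shared:
                    trans[((s0, s1), a)] = (q, s1)
            for (p, a), q in t1.items():           # private moves of t1
                if p == s1 and a not in shared:
                    trans[((s0, s1), a)] = (s0, q)
            for (p, a), q in t0.items():           # joint moves on shared labels
                if p == s0 and a in shared and (s1, a) in t1:
                    trans[((s0, s1), a)] = (q, t1[(s1, a)])
    return trans

# Two toy machines: each initialises privately (tau_x / tau_a),
# then both must agree on the shared wait step 'delta'.
left  = {('l0', 'tau_x'): 'l1', ('l1', 'put_x'): 'l1', ('l1', 'delta'): 'l1'}
right = {('r0', 'tau_a'): 'r1', ('r1', 'get_z'): 'r1', ('r1', 'delta'): 'r1'}
prod = product(left, right, shared={'delta'})

# The private initialisations interleave in either order ...
assert (('l0', 'r0'), 'tau_x') in prod and (('l0', 'r0'), 'tau_a') in prod
# ... but the shared wait 'delta' is enabled only once both have initialised.
assert (('l1', 'r1'), 'delta') in prod
assert (('l0', 'r1'), 'delta') not in prod
```

This mirrors how \(\Vert _\emptyset\) lets \(\textsf {init}_{x_0}\) and \(\textsf {init}_{a_0}\) interleave while forcing the wait blocks with \(\delta _x=\delta _z\) to synchronise.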
However, Equation (16) is not equivalent to \(〚 \textsf {Plant}_\mbox{HCSP}〛 _\mbox{HUTP}\) , because the latter always terminates with \(\textsf {put}_x\) or \(\textsf {get}_z\) . Thus, the premises of the equivalence should be (1) \(\phi _0\) : \(\tau\) s are ignored in traces and (2) \(\phi _1\) : Plant terminates with \(\textsf {put}_x\) or \(\textsf {get}_z\) . In summary,
\[\begin{eqnarray} \textsf {Plant}&\equiv _{\phi _0\wedge \phi _1}&〚 \textsf {Plant}_\mbox{HCSP}〛 _\mbox{HUTP}. \end{eqnarray}\]
(19)
Next, we study the conditions under which \(\textsf {Control}_\mbox{HCSP}\) is consistent with Control of Equation (11). \(\textsf {Control}_\mbox{HCSP}\) can be translated as follows:
\begin{equation*} \begin{array}{rcl} 〚 \textsf {Control}_\mbox{HCSP}〛 _\mbox{HUTP} &\widehat{=}& \vdash \lceil \mathit {ti}:= 0\rfloor ⨟ \textsf {get}_x⨟ \textsf {comp}_y⨟ \textsf {comp}_z⨟ \textsf {put}_z⨟ \\ &&\left(\!\!\! \begin{array}{l} \vdash {\lceil\!\!\lceil} \dot{{x̰}}_1=\dot{{y̰}}={ż̰}_0=0, \emptyset {\rfloor\!\!\rfloor} _1⨟ \\ (\textsf {get}_x⨟ \textsf {comp}_y)\lhd \mathit {ti}\%2=0\rhd 〚 {\bf \textsf {skip}}〛 _\mbox{HUTP}⨟ \\ (\textsf {comp}_z⨟ \textsf {put}_z)\lhd \mathit {ti}\%3=0\rhd 〚 {\bf \textsf {skip}}〛 _\mbox{HUTP} \end{array}\!\!\! \right)^\ast , \end{array} \end{equation*}
as illustrated by Figure 10, where \(\textsf {get}_x~\widehat{=}~\vdash \lceil {ch}_x?, x_1\rfloor\) , \(\textsf {put}_z~\widehat{=}~\vdash \lceil {ch}_z!, z_0\rfloor\) , \(\textsf {comp}_y~\widehat{=}~\vdash \lceil y:= x_1+1.5\rfloor\) , and \(\textsf {comp}_z~\widehat{=}~\vdash \lceil z_0:= y\times 2.4\rfloor\) . During the time intervals, the values of \(x_1\) , y, and \(z_0\) remain unchanged, as denoted by the side condition on the left of the time line.
Fig. 10.
Fig. 10. Visual representation of HUTP semantics of \(\textsf {Control}_\mbox{HCSP}\) .
Similarly, the behaviour of Control in Equation (11) can be illustrated by Figure 11, where \(\textsf {put}_y^0~\widehat{=}~\vdash \lceil \mathit {out}_0!,y_0\rfloor\) , \(\textsf {get}_y^0~\widehat{=}~\vdash \lceil \mathit {out}_0?,y\rfloor\) , \(\textsf {put}_y^1~\widehat{=}~\vdash \lceil \mathit {in}_1!,y\rfloor ,\) and \(\textsf {get}_y^1~\widehat{=}~\vdash \lceil \mathit {in}_1?,y_1\rfloor\) . Compared with Figure 10, the behaviour of Control is consistent with \(〚 \textsf {Control}_\mbox{HCSP}〛 _\mbox{HUTP}\) , except that the latter does not contain \(\textsf {syn}_y^0\) (the synchronous communication between \(\textsf {put}_y^0\) and \(\textsf {get}_y^0\) ) and \(\textsf {syn}_y^1\) (the synchronous communication between \(\textsf {put}_y^1\) and \(\textsf {get}_y^1\) ), because \(〚 \textsf {Control}_\mbox{HCSP}〛 _\mbox{HUTP}\) is a sequential process and does not involve synchronous communication. This leads to an inconsistency between the traces of \(〚 \textsf {Control}_\mbox{HCSP}〛 _\mbox{HUTP}\) and Control. In addition, the end time \(\mathit {ti}^{\prime }\) of Control is not necessarily equal to the end time \(\mathit {ti}+n=n\) of \(〚 \textsf {Control}_\mbox{HCSP}〛 _\mbox{HUTP}\) . Thus, the premises of the consistency should be (1) \(\varphi _0\) : trace variables are ignored and (2) \(\varphi _1\) : \(\mathit {ti}^{\prime }\%1=0\) . In summary,
\[\begin{eqnarray} \textsf {Control}&\equiv _{\varphi _0\wedge \varphi _1}&〚 \textsf {Control}_\mbox{HCSP}〛 _\mbox{HUTP}. \end{eqnarray}\]
(20)
Fig. 11.
Fig. 11. Visual representation of the HUTP semantics of Control.
Finally, according to Equation (5), the HUTP semantics of \(〚 \textsf {Diagram}〛 _\mbox{HCSP}\) is
\begin{equation*} \begin{array}{rcl} 〚 〚 \textsf {Diagram}〛 _\mbox{HCSP}〛 _\mbox{HUTP}&=&〚 \textsf {Plant}_\mbox{HCSP}\Vert \textsf {Control}_\mbox{HCSP}〛 _\mbox{HUTP}\\ &=&〚 \textsf {Plant}_\mbox{HCSP}〛 _\mbox{HUTP}\Vert _{M_{\lbrace {ch}_x, \mathit {ch_z}\rbrace }}^\mbox{HP}〚 \textsf {Control}_\mbox{HCSP}〛 _\mbox{HUTP}. \end{array} \end{equation*}
Note that the merge predicate \(M_I\) (Definition 4.5) allows parallel processes to terminate at different times and delays the shorter one for the final time synchronisation. For example, \(〚 \textsf {Plant}_\mbox{HCSP}〛 _\mbox{HUTP}\) always terminates with \(\textsf {put}_x\) or \(\textsf {get}_z\) , while \(〚 \textsf {Control}_\mbox{HCSP}〛 _\mbox{HUTP}\) can terminate with \(\textsf {put}_z⨟ \vdash {\lceil\!\!\lceil} \dot{{x̰}}_1=\dot{{y̰}}={ż̰}_0=0, \emptyset {\rfloor\!\!\rfloor} _1⨟ 〚 {\bf \textsf {skip}}〛 _\mbox{HUTP}⨟ 〚 {\bf \textsf {skip}}〛 _\mbox{HUTP}\) . For time synchronisation, \(M_{\lbrace {ch}_x,{ch}_z\rbrace }\) will delay \(〚 \textsf {Plant}_\mbox{HCSP}〛 _\mbox{HUTP}\) for 1 time unit. In fact, the termination time difference arises when \(〚 \textsf {Control}_\mbox{HCSP}〛 _\mbox{HUTP}\) terminates with \(\mathit {ti}^{\prime }\%2\ne 0\wedge \mathit {ti}^{\prime }\%3\ne 0\) . Thus, if \(\mathit {ti}^{\prime }\%2=0\vee \mathit {ti}^{\prime }\%3=0\) , then \(\Vert _{M_{\lbrace {ch}_x,{ch}_z\rbrace }}^\mbox{HP}\) degrades into \(\Vert _{\lbrace {ch}_x,{ch}_z\rbrace }\) . Consider the condition \(\gamma :\mathit {ti}^{\prime }\%2=0\vee \mathit {ti}^{\prime }\%3=0\) : it implies \(\varphi _1\) and, furthermore, \(\phi _1\) , because Control terminates with \(\textsf {get}_x\) or \(\textsf {put}_z\) under \(\gamma\) , which indicates that Plant terminates with \(\textsf {put}_x\) or \(\textsf {get}_z\) . In a word, \(\gamma \Rightarrow \phi _1\wedge \varphi _1\) . Additionally, it is known that \(\varphi _0\Rightarrow \phi _0\) .
Combining Equations (19) and (20) with the monotonicity of \(\Vert _{\lbrace {ch}_x, {ch}_z\rbrace }\) (Property 11),
\begin{equation*} \begin{array}{rcl} \textsf {Plant}\Vert _{\lbrace \mathit {in}_0, \mathit {out}_1\rbrace }\textsf {Control} &\equiv _{\varphi _0\wedge \gamma }&〚 \textsf {Plant}_\mbox{HCSP}〛 _\mbox{HUTP}\Vert _{\lbrace {ch}_x, {ch}_z\rbrace }〚 \textsf {Control}_\mbox{HCSP}〛 _\mbox{HUTP}, \end{array} \end{equation*}
given that \(\mathit {in}_0\) and \(\mathit {out}_1\) are aliases of \({ch}_x\) and \({ch}_z\) , respectively. In summary, we can conclude
\begin{equation*} \begin{array}{rcl} 〚 \textsf {Diagram}〛 _\mbox{HUTP}&\equiv _{T,S}&〚 〚 \textsf {Diagram}〛 _\mbox{HCSP}〛 _\mbox{HUTP}, \end{array} \end{equation*}
where \(T=\lbrace t\ge 0\mid t\%2=0\vee t\%3=0\rbrace\) and \(S=\lbrace x_0,x_1,y,z_0,z_1,a_0,a_1\rbrace\) .

7 Related Work

Hoare and He’s UTP [28] is the first proposal toward unifying different programming paradigms and models under a common relational calculus suitable for design and verification. Another notable contribution in the same spirit is Milner’s theory of bigraphs [40].
The concept of higher-order UTP dates back to Reference [28], where higher-order variables are introduced to represent the predicates of methods and procedures. Based on Reference [28], Zeyda et al. propose a concept of higher-order UTP to set up a UTP theory for object-orientation [61, 62]. Reference [61] builds a UTP theory for object-orientation by addressing four problems: consistency of the program model, redefinition of methods in subclasses, recursion and mutual recursion, and simplicity. Reference [62] supports recursion, dynamic binding, and compositional method definitions all at the same time. It shows how higher-order programming can be used to reason about object-oriented programs in a compositional manner and exemplifies its use by creating an object-oriented variant of a refinement language for real-time systems. In addition, it introduces higher-order quantification over predicates. Unlike these higher-order extensions to UTP [28, 61, 62], the one presented in this article focuses on higher-order quantification and the semantics of hybrid systems in the UTP theory.
Besides the de facto standards Simulink and Modelica, many more programming paradigms and domain-specific formalisms have been explored with the aim of modelling hybrid systems.
Functional programming variants of data-flow concepts include Yampa [29], an extension of Haskell’s functional reactive programming (FRP) model [46], and Zelus [8], a functional synchronous data-flow language that models ordinary differential equations in a so-called non-standard time model. Like Modelica and Simulink, both lines of formalisms are devoted to simulation and support type-based analysis of basic safety properties, such as causality.
Another line of work concerns earlier attempts to extend rich process algebras like the \(\pi\) -calculus with continuous real variables [7, 11, 30]. These extensions lead to corresponding formulations of bisimulation as a foundation for verification.
Our approach is to define such an algebraically expressive formalism, yet devoted to the primary purpose of verifying hybrid system models, by embedding its definition, verification, and proof capabilities in higher-order logic (Isabelle/HOL).
Hybrid automata [3] are a popular formalism to model hybrid systems, but the composition/decomposition of automata results in an exponential blowup, which is intractable to analyse in practice. I/O hybrid automata [35] are an extension of hybrid automata with explicit inputs and outputs. Assume-guarantee reasoning [26] on such automata tackles composability to prevent state-space explosion. However, its applicability is in practice restricted to linear hybrid automata.
Alternatively, proof-theoretic approaches to hybrid systems verification, such as differential dynamic logic (dL) [47, 50], action systems [4], hybrid Hoare logic together with hybrid communicating sequential processes (HHL/HCSP) [22, 31, 64, 67], and hybrid Event-B [2, 12], have gained much attention by offering powerful abstraction, refinement, and proof mechanisms to scale hybrid system verification to practical use cases. Also note an early model of CPSs in the Coq proof assistant [37], which introduces the VeriDrone library, a foundational framework for reasoning about CPSs at all levels, from high-level models to the C code that implements the system. Empowered by extensions of SAT modulo theories to real variables, which implement the seminal works on \(\delta\) -decidability [19, 20] in tools like dReal and dReach, provers allow for the tactics-guided mechanised verification of logical models of hybrid systems. Reference [41] introduces a verification-driven engineering toolset \(\text{S}\varphi \text{nx}\) , which uses KeYmaera as its hybrid verification tool, for modelling hybrid systems, exchanging and comparing models and proofs, and managing verification tasks.
dL [47, 48] extends dynamic logic [51] to hybrid systems by allowing modalities over hybrid programs, which extend classical sequential programs with ordinary differential equations to model the continuous evolution of physical artifacts. To deal with more complex behaviours of hybrid systems, several variants of dL have been established, e.g., stochastic differential dynamic logic [48] and differential game logic [49].
In Reference [32], Loos and Platzer propose a differential refinement logic to cope with refinement relations among different levels of abstraction of a given simple hybrid system. It remains an open problem whether the approach can be applied to more complex hybrid systems. In Reference [52], the authors investigate how to apply dL to define architectures of CPSs. However, parallelism and communication are not considered in dL and its variants. Recent component-based verification methodologies developed in dL [44, 45, 55] propose composition operators in dL to split the verification of systems into more manageable pieces. References [33, 34] extend them with parallel composition (true concurrency) using design patterns defined with the primitive constructs of dL, which requires a corresponding mechanisation effort as tactics in KeYmaera X.
HCSP is a hybrid extension of Hoare’s Communicating Sequential Processes [27]. It features a native parallel composition operator and communication primitives in addition to standard constructs for hybrid systems (sequences, loops, ODEs) and a proof calculus called hybrid Hoare logic (HHL) [31], as well as several extensions for dealing with more complex behaviours, e.g., fault-tolerance [58] and noise [59]. Also, the notion of approximate bisimulation was proposed in Reference [60], with which refinement relations between continuous models, and between continuous and discrete models, can be handled. However, it is difficult for HHL/HCSP to serve as a semantic foundation for the design of hybrid systems using MBD and DbC, as it is not easy to unify different models and views at different levels of abstraction in HHL/HCSP.
Rönkkö and Ravn, and Back et al. extended action systems [5] to hybrid systems, called hybrid action systems [53] and continuous action systems [4], respectively. As conservative extensions of action systems, both could, in principle, support MBD and DbC. However, both are weak at dealing with continuous behaviours. In particular, they lack refinement relations among different continuous and/or discrete models, which are essential for the MBD and DbC of hybrid systems.

8 Conclusions and Future Work

This article defines a conservative extension of Hoare and He’s UTP theory with higher-order quantification and provides a formal semantics for modelling and verifying hybrid systems, mixing discrete real-time processes and continuous dynamics. To this end, HUTP introduces real-time variables, denoted as functions over time, and allows the derivative of a variable to occur in a predicate in order to define differential relations. These extensions allow us to introduce the concepts of hybrid processes and (normal) hybrid designs.
Our goal is to use HUTP to define a formal semantics for engineering and formally verifying models such as MATLAB’s Simulink/Stateflow and/or the SAE’s AADL, as well as HCSP, and to support the workflow of these environments with formal verification and certification functionalities inherited from its implementation in higher-order logic (Isabelle/HOL). We additionally intend to use HUTP to justify the correctness of the translation between these models and HCSP by translation validation, as an application of HUTP.

Future Work.

Starting from the definition of normal hybrid design contracts presented in this article, we are seeking principled abstractions to support modular analysis and compositional verification of hybrid system models. Our initial intuition was to start from the definition of abstract hybrid processes in Section 4.5 and define a theory of contracts using Abadi and Lamport’s “composing specifications” [1] or Benveniste’s meta-theory of contracts [6]. Indeed, consider a normal hybrid design \(\textsf {HC}\vdash \textsf {HP}\) and \(\mathcal {H}_{\mbox{HP}}^{\rm\small A}\) -healthy abstract hybrid processes C and P such that \(\textsf {HC}\sqsubseteq C\) and \(P\sqsubseteq \textsf {HP}\) . The abstract design \(C\vdash P\) naturally forms a contractual property of \(\textsf {HC}\vdash \textsf {HP}\) , in the sense of Abadi and Lamport or Benveniste.
However, related work in the UTP [15] shows that the algebraic elaboration of such a contract theory requires non-constructive definitions, such as weakest liberal preconditions and weakest relies, which prevent the structural decomposition and abstraction of parallel composition and communications from the abstracted hybrid designs (unless one uses, e.g., Lamport’s encoding of hand-shake communications using shared variables).
As a result, our aim will instead be to elaborate a dependently typed session calculus [36] in the UTP and equip it with abstract model checking capabilities [56].

Footnotes

1
Here we slightly revise the operational semantics of Reference [64].

A Proofs

A.1 Proofs for Timed Trace Model

Property 1 (Associativity). Let \(I_0\) , \(I_1\) and \(I_2\) be the respective common channel sets between timed traces \(\texttt {tt}_0\) and \(\texttt {tt}_1\) , \(\texttt {tt}_1\) and \(\texttt {tt}_2\) , and \(\texttt {tt}_2\) and \(\texttt {tt}_0\) , then \((\texttt {tt}_0\Vert _{I_0}\texttt {tt}_1)\Vert _{I_1\uplus I_2}\texttt {tt}_2=\texttt {tt}_0\Vert _{I_0\uplus I_2}(\texttt {tt}_1\Vert _{I_1}\texttt {tt}_2)\) .
Proof.
We prove this by induction on the length of the traces. Since the traces we consider may be infinite, the proof is coinductive. We start from the cases where \(|\texttt {tt}_0|, |\texttt {tt}_1|, |\texttt {tt}_2|\le 1\) , such as \(\texttt {tt}_0=\texttt {tt}_1=\texttt {tt}_2=\epsilon\) ; then, by the fact that \(I_0\) , \(I_1\) , and \(I_2\) are disjoint, we get
\begin{equation*} \begin{array}{ccc} (\texttt {tt}_0\Vert _{I_0}\texttt {tt}_1)\Vert _{I_1\uplus I_2}\texttt {tt}_2 &=& \texttt {tt}_0\Vert _{I_0\uplus I_2}(\texttt {tt}_1\Vert _{I_1}\texttt {tt}_2), \end{array} \end{equation*}
which indicates the associativity of the parallel composition of timed traces.□
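For finite untimed traces, the associativity property can be cross-checked computationally. The sketch below is our own simplified model (events are bare channel names, timing and communicated values are dropped, and parallel composition returns the set of all merges synchronising on the shared channels):

```python
def merge(t0, t1, I):
    """All interleavings of traces t0 and t1 that synchronise on the
    channels in I (each shared event must occur in both traces, in order)."""
    I = set(I)
    if not t0 and not t1:
        return {()}
    out = set()
    if t0 and t0[0] not in I:                      # private event of t0
        out |= {(t0[0],) + r for r in merge(t0[1:], t1, I)}
    if t1 and t1[0] not in I:                      # private event of t1
        out |= {(t1[0],) + r for r in merge(t0, t1[1:], I)}
    if t0 and t1 and t0[0] in I and t0[0] == t1[0]:  # joint shared event
        out |= {(t0[0],) + r for r in merge(t0[1:], t1[1:], I)}
    return out

# tt_0 and tt_1 share channel a, tt_1 and tt_2 share b, tt_2 and tt_0 share nothing.
tt0, tt1, tt2 = ('a', 'x'), ('a', 'b'), ('b', 'y')
I0, I1, I2 = {'a'}, {'b'}, set()

lhs = set().union(*(merge(t, tt2, I1 | I2) for t in merge(tt0, tt1, I0)))
rhs = set().union(*(merge(tt0, t, I0 | I2) for t in merge(tt1, tt2, I1)))
assert lhs == rhs   # (tt0 ||_I0 tt1) ||_{I1 ⊎ I2} tt2 = tt0 ||_{I0 ⊎ I2} (tt1 ||_I1 tt2)
```

This only exercises the finite, untimed base cases of the coinductive proof; the wait-block synchronisation of timed traces is not modelled.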

A.2 Proofs for Hybrid Processes

Theorem 4.2 (Idempotence and Monotonicity). \(\mathcal {H}_{\mbox{HP}}\) is idempotent and monotonic.
Proof.
First, \(\mathcal {H}_{\mbox{HP}}\) is monotonic, because it is constructed from monotonic operators. Then, we prove that it is also idempotent. It can be checked that \(\mathcal {H}_{\mbox{0}}\) , \(\mathcal {H}_{\mbox{1}}\) , \(\mathcal {H}_{\mbox{2}}\) , \(\mathcal {H}_{\mbox{3}}\) , and \(\mathcal {H}_{\mbox{4}}\) are themselves idempotent and that \(\mathcal {H}_{\mbox{0}}\) and \(\mathcal {H}_{\mbox{4}}\) commute with their partners. Therefore,
\[\begin{eqnarray} \mathcal {H}_{\mbox{HP}}\circ \mathcal {H}_{\mbox{HP}}&=&(\mathcal {H}_{\mbox{0}}\circ \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{2}}\circ \mathcal {H}_{\mbox{3}}\circ \mathcal {H}_{\mbox{4}})\circ (\mathcal {H}_{\mbox{0}}\circ \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{2}}\circ \mathcal {H}_{\mbox{3}}\circ \mathcal {H}_{\mbox{4}}) \nonumber \nonumber\\ &=&(\mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{2}}\circ \mathcal {H}_{\mbox{3}})\circ (\mathcal {H}_{\mbox{0}}\circ \mathcal {H}_{\mbox{0}}\circ \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{2}}\circ \mathcal {H}_{\mbox{3}}\circ \mathcal {H}_{\mbox{4}}\circ \mathcal {H}_{\mbox{4}}) \nonumber \nonumber\\ &=&(\mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{2}}\circ \mathcal {H}_{\mbox{3}})\circ (\mathcal {H}_{\mbox{0}}\circ \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{2}}\circ \mathcal {H}_{\mbox{3}}\circ \mathcal {H}_{\mbox{4}}). \end{eqnarray}\]
(21)
It is not difficult to check that \(\mathcal {H}_{\mbox{3}}\) commutes with \(\mathcal {H}_{\mbox{0}}\) and \(\mathcal {H}_{\mbox{1}}\) ; we now prove that it also commutes with \(\mathcal {H}_{\mbox{2}}\) . For brevity, let \(\textsf {inf}~\widehat{=}~(\mathit {ti}=+\infty \vee |\mathit {tr}|=+\infty)\) and, correspondingly, \(\textsf {inf}^{\prime }~\widehat{=}~(\mathit {ti}^{\prime }=+\infty \vee |\mathit {tr}^{\prime }|=+\infty)\) , and let \(\natural ~\widehat{=}~(\mathit {ti}=\mathit {ti}^{\prime }\wedge \mathit {tr}=\mathit {tr}^{\prime })\) . Then,
\begin{equation*} \begin{array}{rcl} \mathcal {H}_{\mbox{2}}\circ \mathcal {H}_{\mbox{3}}(X)&=&\natural \lhd \textsf {inf}\rhd \left((\exists \mathbf {s}^{\prime }\cdot X)\lhd \textsf {inf}^{\prime }\rhd X\right) \end{array} \end{equation*}
and
\begin{equation*} \begin{array}{rcl} \mathcal {H}_{\mbox{3}}\circ \mathcal {H}_{\mbox{2}}(X)&=&\left(\exists \mathbf {s}^{\prime }\cdot (\natural \lhd \textsf {inf}\rhd X)\right)\lhd \textsf {inf}^{\prime }\rhd (\natural \lhd \textsf {inf}\rhd X)\\ &=&(\natural \lhd \textsf {inf}\rhd \exists \mathbf {s}^{\prime }\cdot X)\lhd \textsf {inf}^{\prime }\rhd (\natural \lhd \textsf {inf}\rhd X)\\ &=&\natural \lhd \textsf {inf}\rhd (\exists \mathbf {s}^{\prime }\cdot X\lhd \textsf {inf}^{\prime }\rhd X). \end{array} \end{equation*}
Therefore, \(\mathcal {H}_{\mbox{2}}\circ \mathcal {H}_{\mbox{3}}=\mathcal {H}_{\mbox{3}}\circ \mathcal {H}_{\mbox{2}}\) , and we can continue Equation (21):
\begin{equation*} \begin{array}{rcl} \mathcal {H}_{\mbox{HP}}\circ \mathcal {H}_{\mbox{HP}}&=&(\mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{2}}\circ \mathcal {H}_{\mbox{3}})\circ (\mathcal {H}_{\mbox{0}}\circ \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{2}}\circ \mathcal {H}_{\mbox{3}}\circ \mathcal {H}_{\mbox{4}})\\ &=&\mathcal {H}_{\mbox{0}}\circ \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{2}}\circ \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{2}}\circ \mathcal {H}_{\mbox{3}}\circ \mathcal {H}_{\mbox{3}}\circ \mathcal {H}_{\mbox{4}}\\ &=&\mathcal {H}_{\mbox{0}}\circ \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{2}}\circ \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{2}}\circ \mathcal {H}_{\mbox{3}}\circ \mathcal {H}_{\mbox{4}}. \end{array} \end{equation*}
Last, we prove the composition \(\mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{2}}\) is idempotent. Concretely,
\begin{equation*} \begin{array}{rcl} \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{2}}(X)&=&\mathcal {H}_{\mbox{1}}(\natural)\lhd \textsf {inf}\rhd \mathcal {H}_{\mbox{1}}(X). \end{array} \end{equation*}
Then,
\begin{equation*} \begin{array}{rcl} \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{2}}\circ \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{2}}(X)&=&\mathcal {H}_{\mbox{1}}(\natural)\lhd \textsf {inf}\rhd \mathcal {H}_{\mbox{1}}\left(\mathcal {H}_{\mbox{1}}(\natural)\lhd \textsf {inf}\rhd \mathcal {H}_{\mbox{1}}(X)\right)\\ &=&\mathcal {H}_{\mbox{1}}(\natural)\lhd \textsf {inf}\rhd \left(\mathcal {H}_{\mbox{1}}(\natural)\lhd \textsf {inf}\rhd \mathcal {H}_{\mbox{1}}(X)\right)\\ &=&\mathcal {H}_{\mbox{1}}(\natural)\lhd \textsf {inf}\rhd \mathcal {H}_{\mbox{1}}(X)\quad =\quad \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{2}}(X). \end{array} \end{equation*}
In summary, \(\mathcal {H}_{\mbox{HP}}\circ \mathcal {H}_{\mbox{HP}}=\mathcal {H}_{\mbox{0}}\circ \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{2}}\circ \mathcal {H}_{\mbox{3}}\circ \mathcal {H}_{\mbox{4}}=\mathcal {H}_{\mbox{HP}}\) , i.e., it is idempotent.□
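The algebraic pattern underlying this proof — idempotent functions that commute compose to an idempotent function — can be illustrated on a toy model. The sketch below substitutes two hypothetical set-extending functions for the healthiness functions; it is an illustration of the pattern, not of the \(\mathcal {H}_{\mbox{i}}\) themselves.

```python
def compose(f, g):
    """Function composition f ∘ g."""
    return lambda x: f(g(x))

# Two idempotent "healthiness functions" on sets: each forces one element in.
h0 = lambda s: frozenset(s) | {0}
h1 = lambda s: frozenset(s) | {1}

samples = [frozenset(), frozenset({0}), frozenset({1, 2}), frozenset({0, 1})]

for s in samples:
    assert h0(h0(s)) == h0(s) and h1(h1(s)) == h1(s)   # each is idempotent
    assert h0(h1(s)) == h1(h0(s))                      # they commute

# Hence the composite is idempotent: (h0∘h1)∘(h0∘h1) = h0∘h0∘h1∘h1 = h0∘h1.
h = compose(h0, h1)
for s in samples:
    assert h(h(s)) == h(s)
```

The same reordering argument is what reduces \(\mathcal {H}_{\mbox{HP}}\circ \mathcal {H}_{\mbox{HP}}\) to \(\mathcal {H}_{\mbox{HP}}\) in the proof above.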
Property 2. If \(\textsf {HP}_0\) and \(\textsf {HP}_1\) are \(\mathcal {H}_{\mbox{HP}}\) -healthy, then \(\textsf {HP}_0\sqcap \textsf {HP}_1\) and \(\textsf {HP}_0\sqcup \textsf {HP}_1\) are \(\mathcal {H}_{\mbox{HP}}\) -healthy.
Proof.
It can be checked that \(\mathcal {H}_{\mbox{4}}\) commutes with the other healthiness functions, thus
\[\begin{eqnarray} \mathcal {H}_{\mbox{HP}}(X)&=&\mathcal {H}_{\mbox{0}}\circ \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{2}}\circ \mathcal {H}_{\mbox{3}}\circ \mathcal {H}_{\mbox{4}}(X) \nonumber \nonumber\\ &=&\mathcal {H}_{\mbox{0}}\circ \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{4}}\circ \mathcal {H}_{\mbox{2}}\circ \mathcal {H}_{\mbox{3}}(X) \nonumber \nonumber\\ &=&\mathcal {H}_{\mbox{014}}\left(\natural \lhd \textsf {inf}\rhd ((\exists \mathbf {s}^{\prime }\cdot X)\lhd \textsf {inf}^{\prime }\rhd X)\right), \end{eqnarray}\]
(22)
where \(\mathcal {H}_{\mbox{014}}~\widehat{=}~\mathcal {H}_{\mbox{0}}\circ \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{4}}\) . Then, \(\mathcal {H}_{\mbox{HP}}(X)\sqcap \mathcal {H}_{\mbox{HP}}(Y)\) and \(\mathcal {H}_{\mbox{HP}}(X)\sqcup \mathcal {H}_{\mbox{HP}}(Y)\) are
\begin{equation*} \begin{array}{rcl} \mathcal {H}_{\mbox{014}}\left(\natural \lhd \textsf {inf}\rhd \left((\exists \mathbf {s}^{\prime }\cdot X\vee Y)\lhd \textsf {inf}^{\prime }\rhd (X\vee Y)\right)\right) &=&\mathcal {H}_{\mbox{HP}}(X\vee Y),\\ \mathcal {H}_{\mbox{014}}\left(\natural \lhd \textsf {inf}\rhd \left((\exists \mathbf {s}^{\prime }\cdot X\wedge Y)\lhd \textsf {inf}^{\prime }\rhd (X\wedge Y)\right)\right) &=&\mathcal {H}_{\mbox{HP}}(X\wedge Y).\\ \end{array} \end{equation*}
Therefore, \(\sqcap\) and \(\sqcup\) are \(\mathcal {H}_{\mbox{HP}}\) -preserving.□
Lemma A.1.
\(\mathcal {H}_{\mbox{014}}(X)⨟ \mathcal {H}_{\mbox{014}}(Y)=\mathcal {H}_{\mbox{014}}(X⨟ Y)\) , where \(\mathcal {H}_{\mbox{014}}~\widehat{=}~\mathcal {H}_{\mbox{0}}\circ \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{4}}\) .
Proof. The computation is performed as follows:
\begin{equation*} \begin{array}{rcl} \mathcal {H}_{\mbox{014}}(X)⨟ \mathcal {H}_{\mbox{014}}(Y)&=&X\wedge \lim (\mathit {tr})=\mathit {ti}\le \mathit {ti}^{\prime }=\lim (\mathit {tr}^{\prime })\wedge \mathit {tr}\le \mathit {tr}^{\prime }\wedge {RC}\wedge {SD}\quad ⨟ \\ Y\wedge \lim (\mathit {tr})=\mathit {ti}\le \mathit {ti}^{\prime }=\lim (\mathit {tr}^{\prime })\wedge \mathit {tr}\le \mathit {tr}^{\prime }\wedge {RC}\wedge {SD}\\ =\exists \mathit {ti}_\ast , \mathbf {s}_\ast , \mathit {tr}_\ast \cdot X[\mathit {ti}_\ast , \mathbf {s}_\ast , \mathit {tr}_\ast /\mathit {ti}^{\prime }, \mathbf {s}^{\prime }, \mathit {tr}^{\prime }]\wedge Y[\mathit {ti}_\ast , \mathbf {s}_\ast , \mathit {tr}_\ast /\mathit {ti}, \mathbf {s}, \mathit {tr}]\\ \wedge \lim (\mathit {tr})=\mathit {ti}\le \mathit {ti}_\ast =\lim (\mathit {tr}_\ast)\le \mathit {ti}^{\prime }=\lim (\mathit {tr}^{\prime })\wedge \mathit {tr}\le \mathit {tr}_\ast \le \mathit {tr}^{\prime }\\ \wedge {RC}[\mathit {ti}_\ast /\mathit {ti}^{\prime }]\wedge {RC}[\mathit {ti}_\ast /\mathit {ti}]\wedge {SD}[\mathit {ti}_\ast /\mathit {ti}^{\prime }]\wedge {SD}[\mathit {ti}_\ast /\mathit {ti}]\\ =(X⨟ Y)\wedge \lim (\mathit {tr})=\mathit {ti}\le \mathit {ti}^{\prime }=\lim (\mathit {tr}^{\prime })\wedge \mathit {tr}\le \mathit {tr}^{\prime }\wedge {RC}\wedge {SD}\\ =\mathcal {H}_{\mbox{014}}(X⨟ Y). \end{array} \end{equation*}
Property 3. If \(\textsf {HP}_0\) and \(\textsf {HP}_1\) are \(\mathcal {H}_{\mbox{HP}}\) -healthy, then so is \(\textsf {HP}_0⨟ \textsf {HP}_1\) .
Proof.
As stated in the proof of Property 2 (see Equation (22)), a hybrid process \(\mathcal {H}_{\mbox{HP}}(X)\) can be represented by \(\mathcal {H}_{\mbox{014}}\left(\natural \lhd \textsf {inf}\rhd (\exists \mathbf {s}^{\prime }\cdot X\lhd \textsf {inf}^{\prime }\rhd X)\right)\) . This form can be transformed into the matrix
\begin{equation*} \begin{pmatrix}\mathcal {H}_{\mbox{014}}(X\wedge \lnot \textsf {inf}\wedge \lnot \textsf {inf}^{\prime }) & \mathcal {H}_{\mbox{014}}(\exists \mathbf {s}^{\prime }\cdot X\wedge \lnot \textsf {inf}\wedge \textsf {inf}^{\prime })\\ \mathbf {false} & \mathcal {H}_{\mbox{014}}(\natural \wedge \textsf {inf}\wedge \textsf {inf}^{\prime }) \end{pmatrix}, \end{equation*}
whose rows are indexed by \(\lnot \textsf {inf}\) and \(\textsf {inf}\) and whose columns by \(\lnot \textsf {inf}^{\prime }\) and \(\textsf {inf}^{\prime }\) . Treating \(⨟\) and \(\vee\) as matrix multiplication and addition, respectively, \(\mathcal {H}_{\mbox{HP}}(X)⨟ \mathcal {H}_{\mbox{HP}}(Y)\) is
\begin{equation*} \begin{pmatrix}\begin{matrix}\mathcal {H}_{\mbox{014}}(X\wedge \lnot \textsf {inf}\wedge \lnot \textsf {inf}^{\prime })⨟ \\ \mathcal {H}_{\mbox{014}}(Y\wedge \lnot \textsf {inf}\wedge \lnot \textsf {inf}^{\prime }) \end{matrix} & \begin{matrix}\mathcal {H}_{\mbox{014}}(X\wedge \lnot \textsf {inf}\wedge \lnot \textsf {inf}^{\prime })⨟ \mathcal {H}_{\mbox{014}}(\exists \mathbf {s}^{\prime }\cdot Y\wedge \lnot \textsf {inf}\wedge \textsf {inf}^{\prime })\\ \vee \mathcal {H}_{\mbox{014}}(\exists \mathbf {s}^{\prime }\cdot X\wedge \lnot \textsf {inf}\wedge \textsf {inf}^{\prime })⨟ \mathcal {H}_{\mbox{014}}(\natural \wedge \textsf {inf}\wedge \textsf {inf}^{\prime }) \end{matrix} \\ & \\ \mathbf {false} & \mathcal {H}_{\mbox{014}}(\natural \wedge \textsf {inf}\wedge \textsf {inf}^{\prime })\\ \end{pmatrix}. \end{equation*}
By Lemma A.1, it can be simplified to
\begin{equation*} \begin{pmatrix}\mathcal {H}_{\mbox{014}}\left((X⨟ Y)\wedge \lnot \textsf {inf}\wedge \lnot \textsf {inf}^{\prime }\right) & \mathcal {H}_{\mbox{014}}\left(\exists \mathbf {s}^{\prime }\cdot (X⨟ Y~\vee ~ X)\wedge \lnot \textsf {inf}\wedge \textsf {inf}^{\prime }\right) \\ \mathbf {false} & \mathcal {H}_{\mbox{014}}(\natural \wedge \textsf {inf}\wedge \textsf {inf}^{\prime })\\ \end{pmatrix}, \end{equation*}
which is equivalent to
\begin{equation} \mathcal {H}_{\mbox{014}}\left(\natural \lhd \textsf {inf}\rhd \left(\exists \mathbf {s}^{\prime }\cdot (X⨟ Y~\vee ~ X)\lhd \textsf {inf}^{\prime }\rhd (X⨟ Y)\right)\right), \end{equation}
(23)
where \(⨟\) takes precedence over \(\vee\) . The predicate \(\natural \lhd \textsf {inf}\rhd \left(\exists \mathbf {s}^{\prime }\cdot (X⨟ Y\vee X)\lhd \textsf {inf}^{\prime }\rhd (X⨟ Y)\right)\) is \(\mathcal {H}_{\mbox{2}}\) -healthy, and we prove it is also \(\mathcal {H}_{\mbox{3}}\) -healthy:
\begin{equation*} \begin{array}{l} \mathcal {H}_{\mbox{3}}\left(\natural \lhd \textsf {inf}\rhd \left(\exists \mathbf {s}^{\prime }\cdot (X⨟ Y\vee X)\lhd \textsf {inf}^{\prime }\rhd (X⨟ Y)\right)\right)\\ \quad =\left(\natural \lhd \textsf {inf}\rhd \left(\exists \mathbf {s}^{\prime }\cdot (X⨟ Y\vee X)\lhd \textsf {inf}^{\prime }\rhd \exists \mathbf {s}^{\prime }\cdot (X⨟ Y)\right)\right)\\ \quad \lhd \textsf {inf}^{\prime }\rhd \quad \left(\natural \lhd \textsf {inf}\rhd \left(\exists \mathbf {s}^{\prime }\cdot (X⨟ Y\vee X)\lhd \textsf {inf}^{\prime }\rhd (X⨟ Y)\right)\right), \end{array} \end{equation*}
which can be simplified to \(\left(\natural \lhd \textsf {inf}\rhd \left(\exists \mathbf {s}^{\prime }\cdot (X⨟ Y\vee X)\right)\right)\lhd \textsf {inf}^{\prime }\rhd \left(\natural \lhd \textsf {inf}\rhd (X⨟ Y)\right)\) , equivalent to \(\natural \lhd \textsf {inf}\rhd \left(\exists \mathbf {s}^{\prime }\cdot (X⨟ Y\vee X)\lhd \textsf {inf}^{\prime }\rhd (X⨟ Y)\right)\) itself. Therefore, Equation (23) is \(\mathcal {H}_{\mbox{0}}\circ \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{4}}\circ \mathcal {H}_{\mbox{2}}\circ \mathcal {H}_{\mbox{3}}\)-healthy, i.e., \(\mathcal {H}_{\mbox{HP}}(X)⨟ \mathcal {H}_{\mbox{HP}}(Y)\) is \(\mathcal {H}_{\mbox{HP}}\) -healthy.□
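The matrix device used in this proof can be replayed on a toy relational model. In the following Python sketch (an illustration under simplifying assumptions, not the HUTP semantics), a process is a relation over states tagged with a finiteness flag; partitioning a relation by the pre- and post-flag turns sequential composition into a \(2\times 2\) matrix product, with \(\cup\) (i.e., \(\vee\)) as addition.

```python
from itertools import product

# States are (inf_flag, value) pairs; a "process" is a relation on states.
STATES = [(flag, v) for flag in (False, True) for v in (0, 1)]

def seq(R, S):
    """Relational sequential composition R ⨟ S."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

def block(R, i, j):
    """The (i, j) block of R: pairs with inf = i before and inf' = j after."""
    return {(a, b) for (a, b) in R if a[0] == i and b[0] == j}

def matrix(R):
    return [[block(R, i, j) for j in (False, True)] for i in (False, True)]

def matmul(A, B):
    """Matrix product with ⨟ as multiplication and ∪ as addition."""
    return [[seq(A[i][0], B[0][j]) | seq(A[i][1], B[1][j])
             for j in range(2)] for i in range(2)]

# Two arbitrary sample relations over STATES.
R = {(a, b) for a, b in product(STATES, STATES) if a[1] <= b[1]}
S = {(a, b) for a, b in product(STATES, STATES) if a[0] == b[0]}

# The blocks of R ⨟ S coincide with the matrix product of the blocks.
assert matrix(seq(R, S)) == matmul(matrix(R), matrix(S))
```

The identity holds because any intermediate state has a definite flag, so composition distributes over the partition — exactly the justification for computing with matrices of predicates above.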
Property 4. The healthiness condition \(\mathcal {H}_{\mbox{3}}\) is equivalent to
\begin{equation*} \begin{array}{ccl} X&=&X⨟ \textsf {skip}, \end{array} \end{equation*}
where \(\textsf {skip}~\widehat{=}~\mathcal {H}_{\mbox{2}}(\mathit {ti}=\mathit {ti}^{\prime }\wedge \mathit {tr}=\mathit {tr}^{\prime }\wedge \mathbf {s}=\mathbf {s}^{\prime })\) .
Proof.
With the aid of the matrix representation, \(\textsf {skip}\) can be transformed into the matrix
\begin{equation*} \begin{pmatrix}\natural \wedge \mathbf {s}=\mathbf {s}^{\prime }\wedge \lnot \textsf {inf}\wedge \lnot \textsf {inf}^{\prime } & \mathbf {false}\\ \mathbf {false} & \natural \wedge \textsf {inf}\wedge \textsf {inf}^{\prime } \end{pmatrix}. \end{equation*}
Then, \(X⨟ \textsf {skip}\) can be computed as
\begin{equation*} \begin{pmatrix}X\wedge \lnot \textsf {inf}\wedge \lnot \textsf {inf}^{\prime } & X\wedge \lnot \textsf {inf}\wedge \textsf {inf}^{\prime }\\ X\wedge \textsf {inf}\wedge \lnot \textsf {inf}^{\prime } & X\wedge \textsf {inf}\wedge \textsf {inf}^{\prime } \end{pmatrix} ⨟ \begin{pmatrix}\natural \wedge \mathbf {s}=\mathbf {s}^{\prime }\wedge \lnot \textsf {inf}\wedge \lnot \textsf {inf}^{\prime } & \mathbf {false}\\ \mathbf {false} & \natural \wedge \textsf {inf}\wedge \textsf {inf}^{\prime } \end{pmatrix}, \end{equation*}
which can be simplified to
\begin{equation*} \begin{pmatrix}X\wedge \lnot \textsf {inf}\wedge \lnot \textsf {inf}^{\prime } & \exists \mathbf {s}^{\prime }\cdot X\wedge \lnot \textsf {inf}\wedge \textsf {inf}^{\prime }\\ X\wedge \lnot \textsf {inf}\wedge \lnot \textsf {inf}^{\prime } & \exists \mathbf {s}^{\prime }\cdot X\wedge \lnot \textsf {inf}\wedge \textsf {inf}^{\prime } \end{pmatrix}. \end{equation*}
The above is equivalent to \((\exists \mathbf {s}^{\prime }\cdot X)\lhd \textsf {inf}^{\prime }\rhd X\) , which denotes \(\mathcal {H}_{\mbox{3}}\) .□
Property 5 (Quantification).
\begin{equation*} \begin{array}{rcl} \exists {x̰}\cdot (\textsf {HP}_0⨟\textsf {HP}_1) &=& (\exists {x̰}\cdot \textsf {HP}_0)⨟ (\exists {x̰}\cdot \textsf {HP}_1), \\ \forall {x̰}\cdot (\textsf {HP}_0⨟ \textsf {HP}_1) &\sqsupseteq& (\forall {x̰}\cdot \textsf {HP}_0)⨟ (\forall {x̰}\cdot \textsf {HP}_1), \\ \exists {x̰}\cdot (\textsf {HP}_0\sqcap \textsf {HP}_1) &=& (\exists {x̰}\cdot \textsf {HP}_0)\sqcap (\exists {x̰}\cdot \textsf {HP}_1),\\ \forall {x̰}\cdot (\textsf {HP}_0\sqcap \textsf {HP}_1) &\sqsupseteq & (\forall {x̰}\cdot \textsf {HP}_0)\sqcap (\forall {x̰}\cdot \textsf {HP}_1), \\ \exists {x̰}\cdot (\textsf {HP}_0\sqcup \textsf {HP}_1) &\sqsubseteq & (\exists {x̰}\cdot \textsf {HP}_0)\sqcup (\exists {x̰}\cdot \textsf {HP}_1), \\ \forall {x̰}\cdot (\textsf {HP}_0\sqcup \textsf {HP}_1) &=& (\forall {x̰}\cdot \textsf {HP}_0)\sqcup (\forall {x̰}\cdot \textsf {HP}_1), \\ \lnot _\wr \exists {x̰}\cdot \textsf {HP} &=& \forall {x̰}\cdot \lnot _\wr \textsf {HP},\\ \lnot _\wr \forall {x̰}\cdot \textsf {HP} &=& \exists {x̰}\cdot \lnot _\wr \textsf {HP}. \end{array} \end{equation*}
Proof. We only prove the second item; the others can be proved similarly.
\begin{equation*} \begin{array}{l} (\forall {x̰}\cdot \textsf {HP}_0)⨟ (\forall {x̰}\cdot \textsf {HP}_1)\\ =\bigwedge \lbrace \textsf {HP}_0[f_0,g_0/{x̰},\dot{{x̰}}]\mid f_0:[\mathit {ti}, \mathit {ti}^{\prime })\rightarrow \mathbb {D}\wedge g_0:(\mathit {ti}, \mathit {ti}^{\prime })\rightharpoonup \mathbb {D}\wedge \text{CON}(f_0,g_0,\mathit {ti},\mathit {ti}^{\prime })\rbrace ⨟ \\ \bigwedge \lbrace \textsf {HP}_1[f_1,g_1/{x̰},\dot{{x̰}}]\mid f_1:[\mathit {ti}, \mathit {ti}^{\prime })\rightarrow \mathbb {D}\wedge g_1:(\mathit {ti}, \mathit {ti}^{\prime })\rightharpoonup \mathbb {D}\wedge \text{CON}(f_1,g_1,\mathit {ti},\mathit {ti}^{\prime })\rbrace \\ =\exists \mathit {ti}_\ast , \mathbf {s}_\ast , \mathit {tr}_\ast \cdot \bigwedge \bigg \lbrace \textsf {HP}_0[\mathit {ti}_\ast , \mathbf {s}_\ast , \mathit {tr}_\ast , f_0,g_0/\mathit {ti}^{\prime }, \mathbf {s}^{\prime }, \mathit {tr}^{\prime }, {x̰}, \dot{{x̰}}]\bigg \vert \begin{matrix}f_0:[\mathit {ti}, \mathit {ti}^{\prime })\rightarrow \mathbb {D}\wedge g_0:(\mathit {ti}, \mathit {ti}^{\prime })\rightharpoonup \mathbb {D}\\ \wedge ~\text{CON}(f_0,g_0,\mathit {ti},\mathit {ti}^{\prime }) \end{matrix} \bigg \rbrace \\ \wedge ~\bigwedge \bigg \lbrace \textsf {HP}_1[\mathit {ti}_\ast , \mathbf {s}_\ast , \mathit {tr}_\ast , f_1,g_1/\mathit {ti}, \mathbf {s}, \mathit {tr}, {x̰}, \dot{{x̰}}]\bigg \vert \begin{matrix}f_1:[\mathit {ti}, \mathit {ti}^{\prime })\rightarrow \mathbb {D}\wedge g_1:(\mathit {ti}, \mathit {ti}^{\prime })\rightharpoonup \mathbb {D}\\ \wedge ~\text{CON}(f_1,g_1,\mathit {ti},\mathit {ti}^{\prime }) \end{matrix} \bigg \rbrace \\ =\exists \mathit {ti}_\ast , \mathbf {s}_\ast , \mathit {tr}_\ast \cdot \bigwedge \Bigg \lbrace \begin{matrix}\textsf {HP}_0[\mathit {ti}_\ast , \mathbf {s}_\ast , \mathit {tr}_\ast , f_0, g_0/\mathit {ti}^{\prime }, \mathbf {s}^{\prime }, \mathit {tr}^{\prime }, {x̰}, \dot{{x̰}}]\\ \wedge \textsf {HP}_1[\mathit {ti}_\ast , \mathbf {s}_\ast , \mathit {tr}_\ast , f_1, g_1/\mathit {ti}, \mathbf {s}, \mathit {tr}, {x̰}, \dot{{x̰}}] \end{matrix} \Bigg \vert \!\! \begin{array}{l} f_0:[\mathit {ti}, \mathit {ti}_\ast)\rightarrow \mathbb {D}\wedge g_0:(\mathit {ti}, \mathit {ti}_\ast)\rightharpoonup \mathbb {D}\wedge \\ f_1:[\mathit {ti}_\ast , \mathit {ti}^{\prime })\rightarrow \mathbb {D}\wedge g_1:(\mathit {ti}_\ast , \mathit {ti}^{\prime })\rightharpoonup \mathbb {D}\wedge \\ \mbox{CON}(f_0, g_0, \mathit {ti}, \mathit {ti}_\ast)\wedge \mbox{CON}(f_1, g_1, \mathit {ti}_\ast , \mathit {ti}^{\prime }) \end{array}\!\!\! \Bigg \rbrace \\ =\exists \mathit {ti}_\ast , \mathbf {s}_\ast , \mathit {tr}_\ast \cdot \bigwedge \Bigg \lbrace \begin{pmatrix}\textsf {HP}_0[\mathit {ti}_\ast , \mathbf {s}_\ast , \mathit {tr}_\ast , f_0, g_0/\mathit {ti}^{\prime }, \mathbf {s}^{\prime }, \mathit {tr}^{\prime }, {x̰}, \dot{{x̰}}]\\ \wedge \textsf {HP}_1[\mathit {ti}_\ast , \mathbf {s}_\ast , \mathit {tr}_\ast , f_1, g_1/\mathit {ti}, \mathbf {s}, \mathit {tr}, {x̰}, \dot{{x̰}}] \end{pmatrix}[f,g/{x̰},\dot{{x̰}}] \Bigg \vert \!\! \begin{array}{l} f:[\mathit {ti}, \mathit {ti}^{\prime })\rightarrow \mathbb {D}\wedge \\ g:(\mathit {ti}, \mathit {ti}^{\prime })\rightharpoonup \mathbb {D}\wedge \\ \mbox{CON}(f, g, \mathit {ti}, \mathit {ti}^{\prime }) \end{array}\!\!\! \Bigg \rbrace \\ \sqsubseteq \bigwedge \Bigg \lbrace \exists \mathit {ti}_\ast , \mathbf {s}_\ast , \mathit {tr}_\ast \cdot \begin{pmatrix}\textsf {HP}_0[\mathit {ti}_\ast , \mathbf {s}_\ast , \mathit {tr}_\ast , f_0, g_0/\mathit {ti}^{\prime }, \mathbf {s}^{\prime }, \mathit {tr}^{\prime }, {x̰}, \dot{{x̰}}]\\ \wedge \textsf {HP}_1[\mathit {ti}_\ast , \mathbf {s}_\ast , \mathit {tr}_\ast , f_1, g_1/\mathit {ti}, \mathbf {s}, \mathit {tr}, {x̰}, \dot{{x̰}}] \end{pmatrix}[f,g/{x̰},\dot{{x̰}}] \Bigg \vert \!\! \begin{array}{l} f:[\mathit {ti}, \mathit {ti}^{\prime })\rightarrow \mathbb {D}\wedge \\ g:(\mathit {ti}, \mathit {ti}^{\prime })\rightharpoonup \mathbb {D}\wedge \\ \mbox{CON}(f, g, \mathit {ti}, \mathit {ti}^{\prime }) \end{array}\!\!\! \Bigg \rbrace \\ =\bigwedge \lbrace (\textsf {HP}_0⨟ \textsf {HP}_1)[f,g/{x̰},\dot{{x̰}}]\mid f:[\mathit {ti}, \mathit {ti}^{\prime })\rightarrow \mathbb {D}\wedge g:(\mathit {ti}, \mathit {ti}^{\prime })\rightharpoonup \mathbb {D}\wedge \text{CON}(f,g,\mathit {ti},\mathit {ti}^{\prime })\rbrace \\ =\forall {x̰}\cdot (\textsf {HP}_0⨟ \textsf {HP}_1), \end{array} \end{equation*}
where f is the concatenation of \(f_0\) and \(f_1\) , g is the concatenation of \(g_0\) and \(g_1\) , and
\begin{equation*} \begin{array}{rcl} \qquad \qquad \quad \ \mbox{CON}(f,g,\mathit {ti},\mathit {ti}^{\prime })&\widehat{=}&\forall t\in (\mathit {ti},\mathit {ti}^{\prime })\cdot \dot{f}(t^-)=g(t^-)\wedge \dot{f}(t^+)=g(t^+). \end{array} \end{equation*}
Property 6 (Well-Definedness). \(M_I\) is well-defined.
Proof.
Conditions (1) and (2) of the well-definedness of \(M_I\) hold by definition. We prove that condition (3) also holds. First, if \(|\mathit {tr}_X|\lt +\infty\) and \(|\mathit {tr}_Y|\lt +\infty\) , then, by the definition of \(\Vert _I\) , \(|\mathit {tr}^{\prime }|\lt +\infty\) . Otherwise, if \(|\mathit {tr}_X|=+\infty\) or \(|\mathit {tr}_Y|=+\infty\) , say \(|\mathit {tr}_X|=+\infty\) , then there are two cases: (a) \(|\mathit {tr}|=+\infty\) ; since \(\mathit {tr}\le \mathit {tr}^{\prime }\) by \(\mathcal {H}_{\mbox{0}}\) , \(|\mathit {tr}^{\prime }|=+\infty\) ; (b) \(|\mathit {tr}|\lt +\infty\) , meaning \(|\mathit {tr}_X-\mathit {tr}|=+\infty\) . According to the definition of \(\Vert _I\) , the composed trace is no shorter than the individual traces, i.e., \(|\mathit {tr}^{\prime }-\mathit {tr}|\ge |\mathit {tr}_X-\mathit {tr}|=+\infty\) , implying \(|\mathit {tr}^{\prime }|=+\infty\) . Therefore, condition (3) holds, and \(M_I\) is well-defined.□
Property 7 (Monotonicity). The operators \(⨟\) , \(\sqcup\) , \(\sqcap\) , \(\lhd \rhd\) , \(\exists\) and \(\Vert _M\) are monotonic and \(\lnot _\wr\) is anti-monotone, with respect to the refinement order \(\sqsubseteq\) .
Proof.
All of these operators except \(\lnot _\wr\) are constructed from monotonic atomic operators, and \(\lnot _\wr\) is anti-monotone directly by its definition.□
Theorem 4.6 (Complete Lattice). Let \(\mathbb {HP}\) be the image of \(\mathcal {H}_{\mbox{HP}}\) , i.e., the set of all hybrid processes. Then it forms a complete lattice with top and bottom:
\begin{equation*} \begin{array}{ccccc} \top _\mbox{HP}&\widehat{=}&\bigsqcup \mathbb {HP}&=&\mathcal {H}_{\mbox{HP}}(\mathbf {false}),\\ \bot _\mbox{HP}&\widehat{=}&\sqcap \mathbb {HP}&=&\mathcal {H}_{\mbox{HP}}(\mathbf {true}).\\ \end{array} \end{equation*}
Proof.
Since \(\mathcal {H}_{\mbox{HP}}\) is idempotent and monotonic by Theorem 4.2, the complete-lattice structure follows directly from the properties of \(\sqcap\) and \(\bigsqcup\) .□
Theorem 4.11 (Miracle). \(\top _\mbox{HP}\) is the only miracle of hybrid processes.
Proof.
As mentioned in the proof of Property 2 (see Equation (22)), a hybrid process \(\mathcal {H}_{\mbox{HP}}(X)\) can be unfolded as \(\mathcal {H}_{\mbox{014}}\left(\natural \lhd \textsf {inf}\rhd ((\exists \mathbf {s}^{\prime }\cdot X)\lhd \textsf {inf}^{\prime }\rhd X)\right)\) , where \(\mathcal {H}_{\mbox{014}}~\widehat{=}~\mathcal {H}_{\mbox{0}}\circ \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{4}}\) . Then,
\begin{equation*} \begin{array}{rcl} \mathcal {H}_{\mbox{HP}}\circ \mathit {MIR}(X)&=&\mathcal {H}_{\mbox{014}}\left(\natural \lhd \textsf {inf}\rhd ((\exists \mathbf {s}^{\prime }\cdot X\wedge \textsf {inf})\lhd \textsf {inf}^{\prime }\rhd (X\wedge \textsf {inf}))\right)\\ &=&\mathcal {H}_{\mbox{014}}\left(\natural \lhd \textsf {inf}\rhd (\mathbf {false}\lhd \textsf {inf}^{\prime }\rhd \mathbf {false})\right)\\ &=&\mathcal {H}_{\mbox{014}}\circ \mathcal {H}_{\mbox{2}}\circ \mathcal {H}_{\mbox{3}}(\mathbf {false})\quad =\quad \mathcal {H}_{\mbox{HP}}(\mathbf {false}), \end{array} \end{equation*}
which is equivalent to \(\top _\text{HP}\) .□
Property 8 (Left Zero of \(⨟\) ). \(\top _\mbox{HP}⨟ \textsf {HP}=\top _\mbox{HP}\) for any hybrid process \(\textsf {HP}\) .
Proof.
As mentioned in the proof of Property 3 (see Equation (23)), \(\top _\mbox{HP}⨟ \mathcal {H}_{\mbox{HP}}(X)\) is equivalent to
\begin{equation*} \begin{array}{rcl} \mathcal {H}_{\mbox{HP}}(\mathbf {false})⨟ \mathcal {H}_{\mbox{HP}}(X)&=& \mathcal {H}_{\mbox{014}}\left(\natural \lhd \textsf {inf}\rhd \left(\exists \mathbf {s}^{\prime }\cdot (\mathbf {false}⨟ X\vee \mathbf {false})\lhd \textsf {inf}^{\prime }\rhd (\mathbf {false}⨟ X)\right)\right), \end{array} \end{equation*}
which can be simplified to \(\mathcal {H}_{\mbox{HP}}(\mathbf {false})=\top _\mbox{HP}\) .□
Property 9 (Zero of \(\Vert _M\) ). \(\textsf {HP}\Vert _M\top _\mbox{HP}=\top _\mbox{HP}\Vert _M\textsf {HP}=\top _\mbox{HP}\) for any hybrid process \(\textsf {HP}\) .
Proof.
As mentioned in the proof of Property 2 (see Equation (22)), \(\top _\mbox{HP}\) can be unfolded as
\begin{equation*} \begin{array}{rcl} \mathcal {H}_{\mbox{014}}(\natural \lhd \textsf {inf}\rhd \mathbf {false})&=&\textsf {inf}\wedge \mathcal {H}_{\mbox{014}}(\natural). \end{array} \end{equation*}
Then, \(\top _{\mbox{HP},X}=\textsf {inf}\wedge \mathcal {H}_{\mbox{014}}(\natural)_X\) (note that \(\textsf {inf}\) has no primed variables). Therefore,
\begin{equation*} \begin{array}{rcl} \top _\mbox{HP}\Vert _M\textsf {HP}&=&\mathcal {H}_{\mbox{HP}}\left(\top _{\mbox{HP},X}\wedge \textsf {HP}_Y\wedge \text{II}\quad ⨟ \quad M\right)\\ &=&\mathcal {H}_{\mbox{HP}}\left(\textsf {inf}\wedge (\mathcal {H}_{\mbox{014}}(\natural)_X\wedge \textsf {HP}_Y\wedge \text{II}\quad ⨟ \quad M)\right)\\ &=&\mathcal {H}_{\mbox{0}}\circ \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{4}}\circ \mathcal {H}_{\mbox{3}}\circ \mathcal {H}_{\mbox{2}}\left(\textsf {inf}\wedge (\mathcal {H}_{\mbox{014}}(\natural)_X\wedge \textsf {HP}_Y\wedge \text{II}\quad ⨟ \quad M)\right), \end{array} \end{equation*}
where \(\text{II}~\widehat{=}~\mathit {ti}=\mathit {ti}^{\prime }\wedge \mathbf {s}=\mathbf {s}^{\prime }\wedge \mathit {tr}=\mathit {tr}^{\prime }\) . Notice that \(\mathcal {H}_{\mbox{2}}\left(\textsf {inf}\wedge (\mathcal {H}_{\mbox{014}}(\natural)_X\wedge \textsf {HP}_Y\wedge \text{II}⨟ M)\right)\) is
\begin{equation*} \begin{array}{ccccc} \natural \lhd \textsf {inf}\rhd (\textsf {inf}\wedge (\mathcal {H}_{\mbox{014}}(\natural)_X\wedge \textsf {HP}_Y\wedge \text{II}⨟ M)) &=& \natural \lhd \textsf {inf}\rhd \mathbf {false} &=& \mathcal {H}_{\mbox{2}}(\mathbf {false}), \end{array} \end{equation*}
which means
\begin{equation*} \begin{array}{ccccc} \top _\mbox{HP}\Vert _M\textsf {HP} &=& \mathcal {H}_{\mbox{0}}\circ \mathcal {H}_{\mbox{1}}\circ \mathcal {H}_{\mbox{4}}\circ \mathcal {H}_{\mbox{3}}\circ \mathcal {H}_{\mbox{2}}(\mathbf {false}) &=& \top _\mbox{HP}. \end{array} \end{equation*}
Similarly, we can prove \(\textsf {HP}\Vert _M\top _\mbox{HP}=\top _\mbox{HP}\) .□
Theorem 4.13 (Idempotence and Monotonicity). \(\mathcal {H}_{\mbox{HP}}^{\rm\small A}\) is idempotent and monotonic.
Proof.
First, \(\mathcal {H}_{\mbox{HP}}^{\rm\small A}\) is monotonic, as it is constructed from monotonic operators. Then, it can be proved that \(\mathcal {H}_{\mbox{0}}^{\rm\small A}\) , \(\mathcal {H}_{\mbox{2}}^{\rm\small A}\) , \(\mathcal {H}_{\mbox{3}}^{\rm\small A}\) , and \(\mathcal {H}_{\mbox{4}}\) are idempotent and commute with each other; thus \(\mathcal {H}_{\mbox{HP}}^{\rm\small A}=\mathcal {H}_{\mbox{0}}^{\rm\small A}\circ \mathcal {H}_{\mbox{2}}^{\rm\small A}\circ \mathcal {H}_{\mbox{3}}^{\rm\small A}\circ \mathcal {H}_{\mbox{4}}\) is idempotent.□

A.3 Proofs for Hybrid Designs

Theorem 5.2 (Operations).
(1)
\((\textsf {HC}_0\vdash \textsf {HP}_0)\sqcap (\textsf {HC}_1\vdash \textsf {HP}_1)=(\textsf {HC}_0\wedge \textsf {HC}_1)\vdash (\textsf {HP}_0\vee \textsf {HP}_1),\)
(2)
\((\textsf {HC}_0\vdash \textsf {HP}_0)\sqcup (\textsf {HC}_1\vdash \textsf {HP}_1)=(\textsf {HC}_0\vee \textsf {HC}_1)\vdash ((\textsf {HC}_0\Rightarrow _\wr \textsf {HP}_0)\wedge (\textsf {HC}_1\Rightarrow _\wr \textsf {HP}_1)),\)
(3)
\((\textsf {HC}_0\vdash \textsf {HP}_0)⨟ (\textsf {HC}_1\vdash \textsf {HP}_1)=(\textsf {HC}_0\wedge \lnot _\wr (\textsf {HP}_0⨟ \lnot _\wr \textsf {HC}_1))\vdash (\textsf {HP}_0⨟ \textsf {HP}_1).\)
Proof.
The matrix representations of \(\textsf {HC}_0\vdash \textsf {HP}_0\) and \(\textsf {HC}_1\vdash \textsf {HP}_1\) are, respectively,
\begin{equation*} \begin{pmatrix}\bot _\mbox{HP} & \bot _\mbox{HP}\\ \lnot _\wr \textsf {HC}_0 & \textsf {HC}_0\Rightarrow _\wr \textsf {HP}_0 \end{pmatrix} \quad {and}\quad \begin{pmatrix}\bot _\mbox{HP} & \bot _\mbox{HP}\\ \lnot _\wr \textsf {HC}_1 & \textsf {HC}_1\Rightarrow _\wr \textsf {HP}_1 \end{pmatrix}. \end{equation*}
Their meet ( \(\sqcap\) , \(\vee\) ) and join ( \(\sqcup\) , \(\wedge\) ) can be computed component-wise as
\begin{equation*} \begin{array}{ccc} \begin{pmatrix}\bot _\mbox{HP}\vee \bot _\mbox{HP} & \bot _\mbox{HP}\vee \bot _\mbox{HP}\\ \lnot _\wr \textsf {HC}_0\vee \lnot _\wr \textsf {HC}_1 & \begin{matrix}(\textsf {HC}_0\Rightarrow _\wr \textsf {HP}_0)\vee \\ (\textsf {HC}_1\Rightarrow _\wr \textsf {HP}_1) \end{matrix} \end{pmatrix} &=& \begin{pmatrix}\bot _\mbox{HP} & \bot _\mbox{HP}\\ \lnot _\wr (\textsf {HC}_0\wedge \textsf {HC}_1) & \begin{matrix}(\textsf {HC}_0\wedge \textsf {HC}_1)\Rightarrow _\wr \\ (\textsf {HP}_0\vee \textsf {HP}_1) \end{matrix} \end{pmatrix},\\ \begin{pmatrix}\bot _\mbox{HP}\wedge \bot _\mbox{HP} & \bot _\mbox{HP}\wedge \bot _\mbox{HP}\\ \lnot _\wr \textsf {HC}_0\wedge \lnot _\wr \textsf {HC}_1 & \begin{matrix}(\textsf {HC}_0\Rightarrow _\wr \textsf {HP}_0)\wedge \\ (\textsf {HC}_1\Rightarrow _\wr \textsf {HP}_1) \end{matrix} \end{pmatrix} &=& \begin{pmatrix}\bot _\mbox{HP} & \bot _\mbox{HP}\\ \lnot _\wr (\textsf {HC}_0\vee \textsf {HC}_1) & \begin{matrix}(\textsf {HC}_0\Rightarrow _\wr \textsf {HP}_0)\wedge \\ (\textsf {HC}_1\Rightarrow _\wr \textsf {HP}_1) \end{matrix} \end{pmatrix}, \end{array} \end{equation*}
respectively, which are equivalent to
\begin{equation*} (\textsf {HC}_0\wedge \textsf {HC}_1)\vdash (\textsf {HP}_0\vee \textsf {HP}_1) ~\text{and}~ (\textsf {HC}_0\vee \textsf {HC}_1)\vdash ((\textsf {HC}_0\Rightarrow _\wr \textsf {HP}_0)\wedge (\textsf {HC}_1\Rightarrow _\wr \textsf {HP}_1)), \end{equation*}
according to Equation (2). Treating \(⨟\) and \(\vee\) as matrix multiplication and addition, respectively, we obtain
\begin{equation*} \begin{array}{l} \begin{pmatrix}\bot _\mbox{HP} & \bot _\mbox{HP}\\ \lnot _\wr \textsf {HC}_0 & \textsf {HC}_0\Rightarrow _\wr \textsf {HP}_0 \end{pmatrix} ⨟ \begin{pmatrix}\bot _\mbox{HP} & \bot _\mbox{HP}\\ \lnot _\wr \textsf {HC}_1 & \textsf {HC}_1\Rightarrow _\wr \textsf {HP}_1 \end{pmatrix}\\ =\begin{pmatrix}(\bot _\mbox{HP}⨟ \bot _\mbox{HP})\vee (\bot _\mbox{HP}⨟ \lnot _\wr \textsf {HC}_1) & (\bot _\mbox{HP}⨟ \bot _\mbox{HP})\vee (\bot _\mbox{HP}⨟ (\textsf {HC}_1\Rightarrow _\wr \textsf {HP}_1))\\ &\\ \begin{matrix}(\lnot _\wr \textsf {HC}_0⨟ \bot _\mbox{HP})\vee \\ ((\textsf {HC}_0\Rightarrow _\wr \textsf {HP}_0)⨟ \lnot _\wr \textsf {HC}_1) \end{matrix} & \begin{matrix}(\lnot _\wr \textsf {HC}_0⨟ \bot _\mbox{HP})\vee \\ ((\textsf {HC}_0\Rightarrow _\wr \textsf {HP}_0)⨟ (\textsf {HC}_1\Rightarrow _\wr \textsf {HP}_1)) \end{matrix} \end{pmatrix}\\ =\begin{pmatrix}\bot _\mbox{HP} & \bot _\mbox{HP}\\ \lnot _\wr \textsf {HC}_0\vee (\textsf {HP}_0⨟ \lnot _\wr \textsf {HC}_1) & \lnot _\wr \textsf {HC}_0\vee (\textsf {HP}_0⨟ \lnot _\wr \textsf {HC}_1)\vee (\textsf {HP}_0⨟ \textsf {HP}_1) \end{pmatrix}\\ =\begin{pmatrix}\bot _\mbox{HP} & \bot _\mbox{HP}\\ \lnot _\wr (\textsf {HC}_0\wedge \lnot _\wr (\textsf {HP}_0⨟ \lnot _\wr \textsf {HC}_1))& (\textsf {HC}_0\wedge \lnot _\wr (\textsf {HP}_0⨟ \lnot _\wr \textsf {HC}_1))\Rightarrow _\wr (\textsf {HP}_0⨟ \textsf {HP}_1) \end{pmatrix}, \end{array} \end{equation*}
which is equivalent to \((\textsf {HC}_0\wedge \lnot _\wr (\textsf {HP}_0⨟ \lnot _\wr \textsf {HC}_1))\vdash (\textsf {HP}_0⨟ \textsf {HP}_1)\) by Equation (2).□
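Item (3) is the hybrid analogue of the classical UTP design-composition law. As a sanity check, the following Python sketch verifies the classical, trace-free version of the law by enumeration over a tiny state space; it is an illustration under simplifying assumptions (the hybrid operators \(\lnot _\wr\) and \(\Rightarrow _\wr\) are not modelled, and all sets and relations are hypothetical examples).

```python
from itertools import product

STATES = range(3)

def design(P, Q):
    """Relational semantics of the classical design P ⊢ Q over (ok, state)
    pairs: ok ∧ s ∈ P implies ok' ∧ (s, s') ∈ Q."""
    return {((ok, s), (ok2, t))
            for ok, s, ok2, t in product((False, True), STATES,
                                         (False, True), STATES)
            if not (ok and s in P) or (ok2 and (s, t) in Q)}

def seq(R, S):
    """Relational sequential composition."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

def compose_designs(P0, Q0, P1, Q1):
    """Theorem-style composite: (P0 ∧ ¬(Q0 ⨟ ¬P1)) ⊢ (Q0 ⨟ Q1), i.e., the
    new precondition keeps states all of whose Q0-successors satisfy P1."""
    pre = {s for s in P0 if all(t in P1 for (u, t) in Q0 if u == s)}
    return pre, seq(Q0, Q1)

P0, Q0 = {0, 1}, {(0, 0), (0, 1), (1, 2)}
P1, Q1 = {1, 2}, {(1, 1), (2, 0)}

# Sequential composition of the two designs equals the composed design.
assert seq(design(P0, Q0), design(P1, Q1)) == design(*compose_designs(P0, Q0, P1, Q1))
```

The enumeration confirms, on this instance, the shape of the precondition in item (3): the first assumption must hold, and the first behaviour must not be able to reach a state violating the second assumption.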
Theorem 5.3 (Parallel Composition).
\begin{equation*} \begin{array}{rcl} (\textsf {HC}_0\vdash \textsf {HP}_0)\Vert _{\mbox{NHD}(M)}(\textsf {HC}_1\vdash \textsf {HP}_1) &=& \begin{pmatrix}\lnot _\wr (\lnot _\wr \textsf {HC}_0\Vert _{\bot _\mbox{HP}}\lnot _\wr \textsf {HC}_1)\\ \wedge \lnot _\wr (\lnot _\wr \textsf {HC}_0\Vert _{\bot _\mbox{HP}}\textsf {HP}_1)\\ \wedge \lnot _\wr (\lnot _\wr \textsf {HC}_1\Vert _{\bot _\mbox{HP}}\textsf {HP}_0) \end{pmatrix} \vdash \textsf {HP}_0\Vert _M\textsf {HP}_1. \end{array} \end{equation*}
Proof.
According to Section 4.2.5, \((\textsf {HC}_0\vdash \textsf {HP}_0)\Vert _{\mbox{NHD}(M)}(\textsf {HC}_1\vdash \textsf {HP}_1)\) can be unfolded as
\begin{equation} \mathcal {H}_{\mbox{HP}}\left((\textsf {HC}_0\vdash \textsf {HP}_0)_X\wedge (\textsf {HC}_1\vdash \textsf {HP}_1)_Y\wedge \text{II}\quad ⨟ \quad \mbox{NHD}(M) \right)\!, \end{equation}
(24)
where \(\text{II}~\widehat{=}~(\mathit {ti}=\mathit {ti}^{\prime }\wedge \mathbf {s}=\mathbf {s}^{\prime }\wedge \mathit {tr}=\mathit {tr}^{\prime })\) . With the aid of the matrix representation, the predicate \((\textsf {HC}_0\vdash \textsf {HP}_0)_X\wedge (\textsf {HC}_1\vdash \textsf {HP}_1)_Y\wedge \text{II}\) can be rewritten as
Similarly, \(\mbox{NHD}(M)\) can also be represented by the transpose of the following matrix:
Then, \((\textsf {HC}_0\vdash \textsf {HP}_0)_X\wedge (\textsf {HC}_1\vdash \textsf {HP}_1)_Y\wedge \text{II}⨟ \mbox{NHD}(M)\) , where \(\wedge\) takes precedence over \(⨟\) , is equivalent to the following matrix, by treating \(⨟\) and \(\vee\) as matrix multiplication and addition, respectively,
This matrix can be simplified as follows:
Therefore, the hybrid process of Equation (24) is equivalent to the following matrix:
which is equivalent to
\begin{equation*} \lnot _\wr \begin{pmatrix}(\lnot _\wr \textsf {HC}_0\Vert _{\bot _\mbox{HP}}\textsf {HP}_1)\vee (\textsf {HP}_0\Vert _{\bot _\mbox{HP}}\lnot _\wr \textsf {HC}_1)\\ \vee (\lnot _\wr \textsf {HC}_0\Vert _{\bot _\mbox{HP}}\lnot _\wr \textsf {HC}_1) \end{pmatrix} \vdash (\textsf {HP}_0\Vert _{M}\textsf {HP}_1), \end{equation*}
by the matrix representation of hybrid designs (see Equation (2)).□
Lemma A.2.
\(\mathcal {H}_{\mbox{HP}}(X)⨟ \bot _\mbox{HP}=\mathcal {H}_{\mbox{HP}}(X⨟ \bot _\mbox{HP})\) .
Proof.
As mentioned in the proof of Property 3 (see Equation (23)), \(\mathcal {H}_{\mbox{HP}}(X)⨟ \bot _\mbox{HP}\) is equivalent to
\begin{equation*} \begin{array}{rcl} \mathcal {H}_{\mbox{HP}}(X)⨟ \mathcal {H}_{\mbox{HP}}(\bot _\mbox{HP})&=& \mathcal {H}_{\mbox{014}}\left(\natural \lhd \textsf {inf}\rhd \left(\exists \mathbf {s}^{\prime }\cdot (X⨟ \bot _\mbox{HP}\vee X)\lhd \textsf {inf}^{\prime }\rhd X⨟ \bot _\mbox{HP}\right)\right), \end{array} \end{equation*}
which can be simplified to \(\mathcal {H}_{\mbox{HP}}(X⨟ \bot _\mbox{HP})\) according to Equation (22).□
Theorem 5.5 (Closure). Normal hybrid designs are closed under \(\sqcap\) , \(\sqcup\) , \(⨟\) , \(\Vert _{\mbox{NHD}(M)}\) , and \(\Vert _M^\mbox{HP}\) .
Proof.
We only prove that normal hybrid designs are closed under \(\Vert _{\text{NHD}(M)}\) . The key is to prove
\begin{equation*} \begin{array}{rcl} \textsf {Assumption}&\widehat{=}&\lnot _\wr (\lnot _\wr \textsf {HC}_0\Vert _{\bot _\mbox{HP}}\textsf {HP}_1)\wedge \lnot _\wr (\textsf {HP}_0\Vert _{\bot _\mbox{HP}}\lnot _\wr \textsf {HC}_1)\wedge \lnot _\wr (\lnot _\wr \textsf {HC}_0\Vert _{\bot _\mbox{HP}}\lnot _\wr \textsf {HC}_1) \end{array} \end{equation*}
is \(\mathcal {H}_{\mbox{HC}}\) -healthy, i.e., \(\lnot _\wr \textsf {Assumption}⨟ \bot _\mbox{HP}=\lnot _\wr \textsf {Assumption}\) . Concretely,
\begin{equation*} \begin{array}{rcl} \lnot _\wr \textsf {Assumption}⨟ \bot _\mbox{HP}&=&\left((\lnot _\wr \textsf {HC}_0\Vert _{\bot _\mbox{HP}}\textsf {HP}_1)\vee (\textsf {HP}_0\Vert _{\bot _\mbox{HP}}\lnot _\wr \textsf {HC}_1)\vee (\lnot _\wr \textsf {HC}_0\Vert _{\bot _\mbox{HP}}\lnot _\wr \textsf {HC}_1)\right)⨟ \bot _\mbox{HP}. \end{array} \end{equation*}
By Lemma A.2, \((\lnot _\wr \textsf {HC}_0\Vert _{\bot _\mbox{HP}}\textsf {HP}_1)⨟ \bot _\mbox{HP}\) is equivalent to
\begin{equation*} \begin{array}{rcl} \mathcal {H}_{\mbox{HP}}(\lnot _\wr \textsf {HC}_0^X\wedge \textsf {HP}_1^Y\wedge \text{II}⨟ \bot _\mbox{HP})⨟ \bot _\mbox{HP} &=& \mathcal {H}_{\mbox{HP}}(\lnot _\wr \textsf {HC}_0^X\wedge \textsf {HP}_1^Y\wedge \text{II}⨟ \bot _\mbox{HP}⨟ \bot _\mbox{HP})\\ &=&\mathcal {H}_{\mbox{HP}}(\lnot _\wr \textsf {HC}_0^X\wedge \textsf {HP}_1^Y\wedge \text{II}⨟ \bot _\mbox{HP})\\ &=&\lnot _\wr \textsf {HC}_0\Vert _{\bot _\mbox{HP}}\textsf {HP}_1. \end{array} \end{equation*}
Similarly, we can prove that \((\textsf {HP}_0\Vert _{\bot _\mbox{HP}}\lnot _\wr \textsf {HC}_1)⨟ \bot _\mbox{HP}\) and \((\lnot _\wr \textsf {HC}_0\Vert _{\bot _\mbox{HP}}\lnot _\wr \textsf {HC}_1)⨟ \bot _\mbox{HP}\) are equivalent to \(\textsf {HP}_0\Vert _{\bot _\mbox{HP}}\lnot _\wr \textsf {HC}_1\) and \(\lnot _\wr \textsf {HC}_0\Vert _{\bot _\mbox{HP}}\lnot _\wr \textsf {HC}_1\) , respectively. Thus, \(\lnot _\wr \textsf {Assumption}⨟ \bot _\mbox{HP}=\lnot _\wr \textsf {Assumption}\) , i.e., \(\textsf {Assumption}\) is \(\mathcal {H}_{\mbox{HC}}\) -healthy.□
Lemma A.3.
\(\textsf {HP}_0\sqsubseteq \textsf {HP}_1\) iff \(\textsf {HP}_0\sqcap \lnot _\wr \textsf {HP}_1=\bot _\mbox{HP}\) , where \(\textsf {HP}_0\) and \(\textsf {HP}_1\) are hybrid processes.
Proof.
\(\textsf {HP}_0\sqsubseteq \textsf {HP}_1\) iff \(\lnot _\wr \textsf {HP}_1\sqsubseteq \lnot _\wr \textsf {HP}_0\) iff
\begin{equation*} \begin{array}{ccccccc} \bot _\mbox{HP}&\sqsubseteq &(\textsf {HP}_0\sqcap \lnot _\wr \textsf {HP}_1)&\sqsubseteq &(\textsf {HP}_0\sqcap \lnot _\wr \textsf {HP}_0)&=&\bot _\mbox{HP}, \end{array} \end{equation*}
according to the monotonicity of \(\sqcap\) (Property 7).□
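Setting the healthiness conditions aside, Lemma A.3 reflects a general fact of Boolean lattices: reading predicates as sets of observations, refinement \(\sqsubseteq\) is reverse implication (set containment), \(\sqcap\) is union, \(\lnot _\wr\) behaves as complement, and \(\bot _\mbox{HP}\) is the everywhere-true predicate. The following brute-force check over a three-element universe is our own illustration of this lattice fact, not HUTP machinery:

```python
from itertools import combinations

U = frozenset(range(3))                     # a tiny universe of observations

def subsets(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def refines(p, q):                          # p ⊑ q: q implies p, i.e. q ⊆ p
    return q <= p

BOT = U                                     # ⊥ = true = all observations

# Lemma A.3 (set reading): p ⊑ q  iff  p ⊓ ¬q = ⊥, with ⊓ as union
assert all(refines(p, q) == ((p | (U - q)) == BOT)
           for p in subsets(U) for q in subsets(U))
```

The check ranges over all 64 pairs of predicates on this universe, so it exercises every case of the lattice-level statement.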
Theorem 5.6 (Refinement). \((\textsf {HC}_0\vdash \textsf {HP}_0)\sqsubseteq (\textsf {HC}_1\vdash \textsf {HP}_1)\) iff
\begin{equation*} \textsf {HC}_1\sqsubseteq \textsf {HC}_0\quad {and}\quad \textsf {HP}_0\sqsubseteq (\textsf {HC}_0\wedge \textsf {HP}_1). \end{equation*}
Proof. According to the matrix representation, \((\textsf {HC}_0\vdash \textsf {HP}_0)\sqsubseteq (\textsf {HC}_1\vdash \textsf {HP}_1)\) iff
\begin{equation*} \begin{array}{ccc} \begin{pmatrix}\bot _\mbox{HP}&\bot _\mbox{HP}\\ \lnot _\wr \textsf {HC}_0&\textsf {HC}_0\Rightarrow _\wr \textsf {HP}_0 \end{pmatrix} &\sqsubseteq & \begin{pmatrix}\bot _\mbox{HP}&\bot _\mbox{HP}\\ \lnot _\wr \textsf {HC}_1&\textsf {HC}_1\Rightarrow _\wr \textsf {HP}_1 \end{pmatrix}, \end{array} \end{equation*}
iff \(\lnot _\wr \textsf {HC}_0\sqsubseteq \lnot _\wr \textsf {HC}_1\) and \((\textsf {HC}_0\Rightarrow _\wr \textsf {HP}_0)\sqsubseteq (\textsf {HC}_1\Rightarrow _\wr \textsf {HP}_1)\) . Then,
\begin{equation*} \begin{array}{rclr} \bot _\mbox{HP} &=& (\textsf {HC}_0\Rightarrow _\wr \textsf {HP}_0)\sqcap \lnot _\wr (\textsf {HC}_1\Rightarrow _\wr \textsf {HP}_1)&[\mbox{Lemma~A.3}]\\ &=&\lnot _\wr \textsf {HC}_0\sqcap \textsf {HP}_0\sqcap (\textsf {HC}_1\sqcup \lnot _\wr \textsf {HP}_1)&\\ &=&\lnot _\wr \textsf {HC}_0\sqcap \textsf {HP}_0\sqcap \lnot _\wr \textsf {HP}_1 & [\lnot _\wr \textsf {HC}_0\sqsubseteq \lnot _\wr \textsf {HC}_1\mbox{ and Lemma~A.3}]. \end{array} \end{equation*}
Applying Lemma A.3 again, the above result is equivalent to
\begin{equation*} \begin{array}{ccccc} \qquad \ \ \ \qquad \qquad \qquad \textsf {HP}_0&\sqsubseteq &\lnot _\wr (\lnot _\wr \textsf {HC}_0\sqcap \lnot _\wr \textsf {HP}_1)&\equiv &(\textsf {HC}_0\wedge \textsf {HP}_1). \end{array} \end{equation*}
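In the same set-based reading, Theorem 5.6 can be replayed exhaustively: a normal hybrid design is represented by the pair \((\lnot \textsf{HC},\ \textsf{HC}\Rightarrow \textsf{HP})\) — the second row of its matrix form — and design refinement is compared componentwise, as in the proof. This sketch uses our own toy encoding (plain sets, no healthiness conditions), so it only confirms the underlying set algebra:

```python
from itertools import combinations, product

U = frozenset(range(3))

def subsets(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def refines(p, q):                 # p ⊑ q iff q ⊆ p (reverse implication)
    return q <= p

def design(hc, hp):                # the pair (¬HC, HC ⇒ HP)
    return (U - hc, (U - hc) | hp)

def design_refines(d0, d1):        # componentwise refinement of the pairs
    return refines(d0[0], d1[0]) and refines(d0[1], d1[1])

SS = subsets(U)
for hc0, hp0, hc1, hp1 in product(SS, repeat=4):
    lhs = design_refines(design(hc0, hp0), design(hc1, hp1))
    rhs = refines(hc1, hc0) and refines(hp0, hc0 & hp1)   # Theorem 5.6
    assert lhs == rhs
```

All 4096 combinations of assumptions and commitments over the three-element universe satisfy the equivalence.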
Property 10 (Contra-variance). If \(\textsf {HC}_0\sqsupseteq \textsf {HC}_1\) and \(\textsf {HP}_0\sqsubseteq \textsf {HP}_1\) , then
\begin{equation*} (\textsf {HC}_0\vdash \textsf {HP}_0)\sqsubseteq (\textsf {HC}_1\vdash \textsf {HP}_1). \end{equation*}
Proof.
It can be proved directly from Theorem 5.6.□
Property 11 (Monotonicity). Operators \(\sqcup\) , \(\sqcap\) , \(⨟\) , \(\Vert _{\mbox{NHD}(M)}\) and \(\Vert _M^\mbox{HP}\) for normal hybrid designs are monotonic with respect to \(\sqsubseteq\) .
Proof.
These operators are constructed from monotonic atomic operators.□
Theorem 5.7 (Complete Lattice). Normal hybrid designs form a complete lattice with top
\begin{equation*} \begin{array}{rcl} \top _\mbox{NHD}&\widehat{=}&\bot _\mbox{HP}\vdash \top _\mbox{HP} \end{array} \end{equation*}
and bottom
\begin{equation*} \begin{array}{rcl} \bot _\mbox{NHD}&\widehat{=}&\top _\mbox{HP}\vdash \bot _\mbox{HP}. \end{array} \end{equation*}
Proof. According to Theorem 5.2, \(\sqcap\) and \(\sqcup\) can be generalised to
\begin{equation*} \begin{array}{rcl} \bigsqcap _i(\textsf {HC}_i\vdash \textsf {HP}_i)&=&(\bigwedge _i\textsf {HC}_i)\vdash (\bigvee _i\textsf {HP}_i),\\ \bigsqcup _i(\textsf {HC}_i\vdash \textsf {HP}_i)&=&(\bigvee _i\textsf {HC}_i)\vdash \left(\bigwedge _i(\textsf {HC}_i\Rightarrow _\wr \textsf {HP}_i)\right)\!. \end{array} \end{equation*}
Therefore, normal hybrid designs form a complete lattice under \(\sqsubseteq\) with
\begin{equation*} \begin{array}{rcl} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \top _\mbox{NHD}&\widehat{=}&\bot _\mbox{HP}\vdash \top _\mbox{HP},\\ \qquad \qquad \qquad \qquad \qquad \qquad \qquad \bot _\mbox{NHD}&\widehat{=}&\top _\mbox{HP}\vdash \bot _\mbox{HP}.\end{array} \end{equation*}
Property 12 (Chaos). \(\bot _\mbox{NHD}=\top _\mbox{HP}\vdash \textsf {HP}\) for any hybrid process \(\textsf {HP}\) .
Proof. By the matrix representation,
\begin{equation*} \begin{array}{ccccccc} \qquad \qquad \top _\mbox{HP}\vdash \textsf {HP} &=& \begin{pmatrix}\bot _\mbox{HP} & \bot _\mbox{HP}\\ \lnot _\wr \top _\mbox{HP} & \top _\mbox{HP}\Rightarrow _\wr \textsf {HP} \end{pmatrix} &=& \begin{pmatrix}\bot _\mbox{HP} & \bot _\mbox{HP}\\ \bot _\mbox{HP} & \bot _\mbox{HP} \end{pmatrix} &=& \bot _\mbox{NHD}. \end{array} \end{equation*}
Property 13 (Non-termination). For any normal hybrid design \(\textsf {NHD}\) ,
\begin{equation*} \bot _\mbox{NHD}⨟ \textsf {NHD}=\bot _\mbox{NHD} \quad {and}\quad \top _\mbox{NHD}⨟ \textsf {NHD}=\top _\mbox{NHD}. \end{equation*}
Proof. Let \(\textsf {NHD}\ \widehat{=}\ \textsf {HC}\vdash \textsf {HP}\) . By matrix representations,
\begin{equation*} \begin{array}{ccccccc} \bot _\textrm {NHD}⨟ \textsf {NHD} &=& \begin{pmatrix}\bot _\mbox{HP} & \bot _\mbox{HP}\\ \bot _\mbox{HP} & \bot _\mbox{HP} \end{pmatrix} ⨟ \begin{pmatrix}\bot _\mbox{HP} & \bot _\mbox{HP}\\ \lnot _\wr \textsf {HC} & \textsf {HC}\Rightarrow _\wr \textsf {HP} \end{pmatrix} &=& \begin{pmatrix}\bot _\mbox{HP} & \bot _\mbox{HP}\\ \bot _\mbox{HP} & \bot _\mbox{HP} \end{pmatrix} &=& \bot _\textrm {NHD},\\ \top _\textrm {NHD}⨟ \textsf {NHD} &=& \begin{pmatrix}\bot _\mbox{HP} & \bot _\mbox{HP}\\ \top _\mbox{HP} & \top _\mbox{HP} \end{pmatrix} ⨟ \begin{pmatrix}\bot _\mbox{HP} & \bot _\mbox{HP}\\ \lnot _\wr \textsf {HC} & \textsf {HC}\Rightarrow _\wr \textsf {HP} \end{pmatrix} &=& \begin{pmatrix}\bot _\mbox{HP} & \bot _\mbox{HP}\\ \top _\mbox{HP} & \top _\mbox{HP} \end{pmatrix} &=& \top _\mbox{NHD}. \end{array} \end{equation*}
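These two zero laws, and Property 12 along the way, can be replayed concretely by modelling hybrid processes as plain binary relations over a two-element state space: \(⨟\) becomes relational composition, \(\lnot _\wr\) plain complement, \(\bot _\mbox{HP}\) the full relation, \(\top _\mbox{HP}\) the empty one, and matrix composition treats \(⨟\) and \(\vee\) as multiplication and addition. All healthiness conditions are ignored, so this is only a consistency check of the matrix algebra, in our own encoding, not of HUTP itself:

```python
from itertools import combinations, product

STATES = (0, 1)
PAIRS = [(a, b) for a in STATES for b in STATES]
BOT = frozenset(PAIRS)                 # ⊥: the everywhere-true relation
TOP = frozenset()                      # ⊤: the everywhere-false relation

def seq(r, s):                         # ⨟ as relational composition
    return frozenset((a, c) for (a, b) in r for (b2, c) in s if b == b2)

def mseq(m, n):                        # matrix product: ⨟ as ×, ∪ as +
    return tuple(tuple(
        seq(m[i][0], n[0][j]) | seq(m[i][1], n[1][j])
        for j in range(2)) for i in range(2))

def nhd(hc, hp):                       # matrix form of HC ⊢ HP (cf. Eq. (2))
    return ((BOT, BOT), (BOT - hc, (BOT - hc) | hp))

BOT_NHD = ((BOT, BOT), (BOT, BOT))
TOP_NHD = ((BOT, BOT), (TOP, TOP))

rels = [frozenset(c) for r in range(len(PAIRS) + 1)
        for c in combinations(PAIRS, r)]
for hc, hp in product(rels, repeat=2):
    d = nhd(hc, hp)
    assert mseq(BOT_NHD, d) == BOT_NHD     # Property 13, first law
    assert mseq(TOP_NHD, d) == TOP_NHD     # Property 13, second law
    assert nhd(TOP, hp) == BOT_NHD         # Property 12: ⊤_HP ⊢ HP = ⊥_NHD
```

The loop exhausts all 256 choices of assumption and commitment relations, so both zero laws hold for every design expressible in this toy model.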
Property 14 (Parallel Composition). For any normal hybrid design \(\textsf {NHD}\) ,
\begin{equation*} \begin{array}{ccccc} \textsf {NHD}\Vert _{\mbox{NHD}(M)}\top _\mbox{NHD} &=& \top _\mbox{NHD}\Vert _{\mbox{NHD}(M)}\textsf {NHD} &=& \top _\mbox{NHD}. \end{array} \end{equation*}
Proof.
It can be proved by Property 9 and Theorem 5.3.□
Property 15 (Parallel Composition). For any normal hybrid design \(\textsf {NHD}\) ,
\begin{equation*} \begin{array}{ccccc} \textsf {NHD}\Vert _M^\mbox{HP}\bot _\mbox{NHD} &=& \bot _\mbox{NHD}\Vert _M^\mbox{HP}\textsf {NHD} &=& \bot _\mbox{NHD}. \end{array} \end{equation*}
Proof.
Let \(\textsf {NHD}~\widehat{=}~\textsf {HC}\vdash \textsf {HP}\) . Then, by Definition 5.4 and Property 12, we obtain
\begin{equation*} \begin{array}{ccccccc} \bot _\mbox{NHD}\Vert _M\textsf {NHD} &=& (\top _\mbox{HP}\vdash \bot _\mbox{HP})\Vert _M(\textsf {HC}\vdash \textsf {HP}) &=& \top _\mbox{HP}\vdash (\bot _\mbox{HP}\Vert _M\textsf {HP}) &=& \bot _\mbox{NHD}. \end{array} \end{equation*}
Similarly, we can prove \(\textsf {NHD}\Vert _M\bot _\mbox{NHD}=\bot _\mbox{NHD}\) .□

A.4 Proofs for Reflection of HCSP and Simulink with HUTP

Property 16 (Unit). \(\lceil \rfloor\) is a unit of hybrid processes w.r.t. \(⨟\) .
Proof.
According to the intermediate result Equation (23) from Property 3,
\begin{equation*} \begin{array}{rcl} \mathcal {H}_{\mbox{HP}}(X)⨟ \lceil \rfloor &=&\mathcal {H}_{\mbox{HP}}(X)⨟ \mathcal {H}_{\mbox{HP}}(\mathit {ti}=\mathit {ti}^{\prime }\lt +\infty \wedge \mathbf {s}=\mathbf {s}^{\prime }\wedge \mathit {tr}=\mathit {tr}^{\prime })\\ &=&\mathcal {H}_{\mbox{014}}(\natural \lhd \textsf {inf}\rhd ((\exists \mathbf {s}^{\prime }\cdot X)\lhd \textsf {inf}^{\prime }\rhd X))\quad =\quad \mathcal {H}_{\mbox{HP}}(X), \end{array} \end{equation*}
by the intermediate result Equation (22) from Property 2. Similarly, we can prove \(\lceil \rfloor ⨟ \mathcal {H}_{\mbox{HP}}(X)=\mathcal {H}_{\mbox{HP}}(X)\) .□
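Stripped of the healthiness conditions, the unit law is the familiar relational fact that skip (state unchanged, no time passing) is a unit of sequential composition. A minimal sketch, with \(⨟\) again read as relational composition in our own encoding of \(\lceil \rfloor\):

```python
STATES = range(3)

def seq(r, s):                          # ⨟ as relational composition
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

SKIP = {(x, x) for x in STATES}         # analogue of ⌈⌋: identity on states

r = {(0, 1), (1, 2), (2, 2)}
assert seq(r, SKIP) == r and seq(SKIP, r) == r
```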
Corollary 1 (Unit). \(\vdash \lceil \rfloor\) is a unit of normal hybrid designs w.r.t. \(⨟\) .
Proof. According to Property 16 and Theorem 5.2(3), for any normal hybrid design \(\textsf {HC}\vdash \textsf {HP}\) ,
\begin{equation*} \begin{array}{ccccl} \qquad \quad \ (\vdash \lceil \rfloor)⨟ (\textsf {HC}\vdash \textsf {HP}) &=&(\bot _\mbox{HP}\wedge \lnot _\wr (\lceil \rfloor ⨟ \lnot _\wr \textsf {HC}))\vdash (\lceil \rfloor ⨟ \textsf {HP})&=&\textsf {HC}\vdash \textsf {HP},\\ \qquad \quad \ (\textsf {HC}\vdash \textsf {HP})⨟ (\vdash \lceil \rfloor) &=&(\textsf {HC}\wedge \lnot _\wr (\textsf {HP}⨟ \lnot _\wr \bot _\mbox{HP}))\vdash (\textsf {HP}⨟ \lceil \rfloor)&=&\textsf {HC}\vdash \textsf {HP}. \end{array} \end{equation*}
Theorem 6.1 (Semantics Consistency). The HUTP semantics of HCSP is consistent with the operational semantics of HCSP.
Proof.
We say the HUTP semantics and the SOS of an HCSP process are consistent iff they agree on time, state, and trace. The proof of consistency is divided into two stages: first, we prove the consistency between the SOS and the HUTP semantics of the syntactic entities in Equation (3); then, we prove semantic consistency for the operators, such as sequential and parallel composition, that connect these entities. For brevity, we show only the key points of the proof in what follows.
The semantic consistency of ODE with communication interruption ( \(\textsf {ODE}_\unrhd\) ) has been proved at the end of Section 6.2. \(\textsf {ODE}_\unrhd\) is representative because it covers the characteristic features of HCSP, such as continuous evolution, communication, and interruption. The semantic consistency of the other syntactic entities of HCSP, like \(\langle F(\dot{\mathbf {s}}, \mathbf {s})=0\& B\rangle\) and \({ch}?x\) , can be proved similarly. We therefore focus on proving the semantic consistency of the parallel composition connector, as it plays a central role in composite systems.
The SOS for parallel composition of HCSP processes is described by the following rules, where \(I\) is the set of channels shared by the parallel operands:
\begin{equation*} \begin{array}{c} \displaystyle \frac{}{(\varepsilon \Vert \varepsilon , (\mathit {ti},\mathbf {s}_P\uplus \mathbf {s}_Q))\rightarrow (\varepsilon , (\mathit {ti},\mathbf {s}_P\uplus \mathbf {s}_Q))}\![\text{Par0}], \frac{(\textsf {P}, (\mathit {ti},\mathbf {s}_P))\xrightarrow {\tau }(\textsf {P}^{\prime }, (\mathit {ti},\mathbf {s}_P^{\prime }))}{(\textsf {P}\Vert \rm {Q}, (\mathit {ti},\mathbf {s}_P\uplus \mathbf {s}_Q))\xrightarrow {\tau }(\textsf {P}^{\prime }\Vert \rm {Q}, (\mathit {ti},\mathbf {s}_P^{\prime }\uplus \mathbf {s}_Q))}\![\text{Par1}],\\ \quad \\ \displaystyle \frac{ \begin{matrix}{ch}\in I\quad (\textsf {P}, \mathbf {s}_P)\xrightarrow {\langle {ch}?, d\rangle }(\textsf {P}^{\prime }, \mathbf {s}_P^{\prime })\\ (\rm {Q}, \mathbf {s}_Q)\xrightarrow {\langle {ch}!, d\rangle }(\rm {Q}^{\prime }, \mathbf {s}_Q^{\prime }) \end{matrix} }{(\textsf {P}\Vert \rm {Q}, \mathbf {s}_P\uplus \mathbf {s}_Q)\xrightarrow {\langle {ch}, d\rangle }(\textsf {P}^{\prime }\Vert \rm {Q}^{\prime }, \mathbf {s}_P^{\prime }\uplus \mathbf {s}_Q^{\prime })}\![\text{Par2}], \quad \frac{{ch}\notin I\quad (\textsf {P}, (\mathit {ti},\mathbf {s}_P))\xrightarrow {\langle {ch}\ast , d\rangle }(\textsf {P}^{\prime }, (\mathit {ti},\mathbf {s}_P^{\prime }))}{(\textsf {P}\Vert \rm {Q}, \mathbf {s}_P\uplus \mathbf {s}_Q)\xrightarrow {\langle {ch}\ast , d\rangle }(\textsf {P}^{\prime }\Vert \rm {Q}, \mathbf {s}_P^{\prime }\uplus \mathbf {s}_Q)}\![\text{Par3}], \end{array} \end{equation*}
\begin{equation*} \begin{array}{c} \displaystyle \frac{ \begin{matrix}(\textsf {P}, (\mathit {ti},\mathbf {s}_P))\xrightarrow {\langle \mathit {ti}^{\prime }-\mathit {ti}, \mathit {RS}_P\rangle }(\textsf {P}^{\prime }, (\mathit {ti}^{\prime },\mathbf {s}_P^{\prime }), {\mathbf {s}̰}_P)\\ (\rm {Q}, (\mathit {ti},\mathbf {s}_Q))\xrightarrow {\langle \mathit {ti}^{\prime }-\mathit {ti}, \mathit {RS}_Q\rangle }(\rm {Q}^{\prime }, (\mathit {ti}^{\prime },\mathbf {s}_Q^{\prime }), {\mathbf {s}̰}_Q)\quad \overline{\mathit {RS}_P}\cap \mathit {RS}_Q=\emptyset \end{matrix} }{(\textsf {P}\Vert \rm {Q}, (\mathit {ti},\mathbf {s}_P\uplus \mathbf {s}_Q))\xrightarrow {\langle \mathit {ti}^{\prime }-\mathit {ti}, \mathit {RS}_P\uplus \mathit {RS}_Q\rangle }(\textsf {P}^{\prime }\Vert \rm {Q}^{\prime }, (\mathit {ti}^{\prime },\mathbf {s}_P^{\prime }\uplus \mathbf {s}_Q^{\prime }), {\mathbf {s}̰}_P\uplus {\mathbf {s}̰}_Q)}\![\text{Par4}]. \end{array} \end{equation*}
Since parallel HCSP processes do not share variables, the HUTP semantics and the SOS of the parallel connector are naturally consistent on state variables. We therefore focus on the parallel composition of timed traces, i.e., on proving consistency of the timed traces. The above five rules correspond to the respective five composition rules of timed traces in Section 3.3: concretely, [Par0], [Par1], [Par2], [Par3], and [Par4] relate to [Empty], [ \(\tau\) -Act], [SynIO], [NoSynIO], and [SynWait], respectively.
Consider two HCSP processes \(\textsf {P}\) and \(\rm {Q}\) with respective states \(\mathbf {s}_P\) and \(\mathbf {s}_Q\) . Their execution histories are recorded by timed traces \(\texttt {tt}_P\) and \(\texttt {tt}_Q\) , respectively, denoted by
\begin{equation*} (\textsf {P},(\mathit {ti},\mathbf {s}_P))\xrightarrow [\ast ]{\texttt {tt}_P}(\varepsilon ,(\mathit {ti}^{\prime },\mathbf {s}_P^{\prime }),{\mathbf {s}̰}_P)\quad \text{and}\quad (\rm {Q},(\mathit {ti},\mathbf {s}_Q))\xrightarrow [\ast ]{\texttt {tt}_Q}(\varepsilon ,(\mathit {ti}^{\prime },\mathbf {s}_Q^{\prime }),{\mathbf {s}̰}_Q), \end{equation*}
where \({\mathbf {s}̰}_P\) and \({\mathbf {s}̰}_Q\) can be omitted if no flow is generated. Since all the traces involved in the semantics of HCSP are finite, the proof proceeds by induction on the length of traces, starting from the base cases with \(|\texttt {tt}_P|, |\texttt {tt}_Q|\le 1\) , such as \(\texttt {tt}_P=\texttt {tt}_Q=\epsilon\) , or \(\texttt {tt}_P\) and \(\texttt {tt}_Q\) being single trace blocks. Then, by induction on the derivation of \(\texttt {tt}_P\Vert _I\texttt {tt}_Q\leadsto \texttt {tt}\) , we can prove that the parallel composition \(\Vert _I\) of timed traces in HUTP can be interpreted by the structural operational semantics of the parallel composition of HCSP. Conversely, we can prove that the SOS of the parallel composition of HCSP can be interpreted by \(\Vert _I\) on timed traces in HUTP, by induction on the derivation of
\begin{equation*} (\textsf {P}\Vert \rm {Q}, (\mathit {ti},\mathbf {s}_P\uplus \mathbf {s}_Q))\xrightarrow [\ast ]{\texttt {tt}}(\varepsilon , (\mathit {ti}^{\prime },\mathbf {s}_P^{\prime }\uplus \mathbf {s}_Q^{\prime })) \end{equation*}
starting from the cases where \(|\texttt {tt}|\le 1\) . In summary, the SOS and the HUTP semantics of the parallel composition of HCSP are consistent on time, state, and trace. The semantic consistency of the other operators can be proved similarly.□
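The interplay of the [SynIO] and [NoSynIO] rules can be made concrete with a much-simplified executable sketch. This is our own toy encoding, not the HUTP trace model: events are (direction, channel) pairs, τ-events and wait blocks are dropped, and the function returns every possible merge of two traces over a set of common channels.

```python
def sync_merges(t1, t2, common):
    """All merges of two simplified traces; events are (dir, ch) with
    dir in {'?', '!'}. On a channel in `common`, an input must pair
    with an output, yielding a single ('comm', ch) event ([SynIO]);
    events on other channels interleave freely ([NoSynIO])."""
    if not t1 and not t2:
        return [[]]
    results = []
    if t1 and t1[0][1] not in common:          # t1's head needs no partner
        results += [[t1[0]] + r for r in sync_merges(t1[1:], t2, common)]
    if t2 and t2[0][1] not in common:          # t2's head needs no partner
        results += [[t2[0]] + r for r in sync_merges(t1, t2[1:], common)]
    if (t1 and t2 and t1[0][1] == t2[0][1] and t1[0][1] in common
            and {t1[0][0], t2[0][0]} == {'?', '!'}):
        results += [[('comm', t1[0][1])] + r   # synchronised communication
                    for r in sync_merges(t1[1:], t2[1:], common)]
    return results

# a pending output on a common channel deadlocks without its partner
assert sync_merges([('!', 'a')], [], {'a'}) == []
assert sync_merges([('!', 'a')], [('?', 'a')], {'a'}) == [[('comm', 'a')]]
```

An unmatched event on a common channel yields no merge at all, mirroring the fact that synchronisation is the only rule applicable to it; events on private channels produce all interleavings.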

Acknowledgments

The authors thank the editors and anonymous reviewers, whose criticisms and suggestions greatly improved the presentation of this work.


Cited By

  • Unified Graphical Co-modelling, Analysis and Verification of Cyber-physical Systems by Combining AADL and Simulink/Stateflow. ACM SIGAda Ada Letters 43, 1 (Oct. 2023), 46–49. DOI: 10.1145/3631483.3631487
  • Translating and verifying Cyber–Physical systems with shared-variable concurrency in SpaceEx. Internet of Things 23 (Oct. 2023), 100864. DOI: 10.1016/j.iot.2023.100864

Published In

ACM Transactions on Software Engineering and Methodology, Volume 32, Issue 1 (January 2023), 954 pages.
ISSN: 1049-331X. EISSN: 1557-7392. DOI: 10.1145/3572890
Editor: Mauro Pezzè

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

  • Published: 13 February 2023 (in TOSEM Volume 32, Issue 1)
  • Online AM: 23 April 2022
  • Accepted: 07 February 2022
  • Revised: 30 January 2022
  • Received: 05 March 2021


Author Tags

  1. UTP
  2. CPS
  3. model-based design
  4. semantic model

Qualifiers

  • Research-article
  • Refereed

Funding Sources

  • NSFC
  • Inria’s joint research project CONVEX
