
Methods to Realize Preemption in Phased Execution Models

Published: 09 September 2023

Abstract

Phased execution models, e.g., the Acquisition-Execution-Restitution (AER) model and the PRedictable Execution Model (PREM), are a good solution to tame the increased complexity and contention of commercial off-the-shelf (COTS) multi-core platforms. Such models separate execution from access to shared resources on the platform to minimize contention. All data and instructions needed during an execution phase are copied into the local memory of the core before starting to execute. Phased execution models are generally used with non-preemptive scheduling to increase predictability. However, the blocking time in non-preemptive systems can reduce schedulability. Therefore, an investigation of preemption methods for phased execution models is warranted. Preemption for phased execution models must, however, be carefully designed to retain their execution semantics, i.e., the handling of local memory during preemption becomes non-trivial.
This paper investigates different methods to realize preemption in phased execution models while preserving their semantics. To the best of our knowledge, this is the first paper to explore different approaches to implement preemption in phased execution models from the perspective of data management. We introduce two strategies to realize preemption of execution phases based on different methods of handling local data of the preempted task. Heuristics are used to create time-triggered schedules for task sets that follow the proposed preemption methods. Additionally, a schedulability-aware preemption heuristic is proposed to reduce the number of preemptions by allowing preemption only when it is beneficial in terms of schedulability. Evaluations on a large number of synthetic task sets are performed to compare the proposed preemption models against each other and against a non-preemptive version. Our schedulability-aware preemption heuristic achieves higher schedulability by a clear margin in all our experiments compared to the non-preemptive and fully-preemptive versions.

1 Introduction

Commercial off-the-shelf (COTS) multi-core platforms offer various benefits, such as high availability of processing resources and high performance. However, the main memory and interconnect are shared resources with several possible points of contention. This makes the timing verification of time-critical applications on such platforms challenging, as their behavior becomes unpredictable [9]. Phased execution models are an established approach to make these complex systems more deterministic and predictable [26]. The Acquisition-Execution-Restitution (AER) model [10] and the PRedictable Execution Model (PREM) [26] fall under the category of phased execution models. These models separate the execution of the tasks from access to shared resources by splitting the total execution of a task into dedicated phases, which makes the execution predictable by minimizing contention. In this paper, we mainly focus on 3-phase models such as the AER model or the 3-phase PREM model [1, 36], while the core concepts are applicable to other phased execution models, such as the 2-phase PREM model, as well. In the 3-phase model, a task is divided into a read, execute, and write phase. The read phase copies all input data and the task’s code to the local memory of the core (for example, scratchpad memory). The execution phase then exclusively operates on the data in local memory, completely avoiding access to shared global memory. Therefore, during execution, successive accesses to the same input data always deliver the same value, irrespective of changes in global memory. Finally, in the write phase, the output data is written to global memory.
Phased execution models are typically used non-preemptively to achieve predictability by avoiding interruption by other tasks [10, 26]. However, the blocking time due to the inflexible nature of non-preemptive scheduling can cause tasks to miss their deadlines while other tasks utilize the resources [5]. Therefore, an investigation of preemption methods for phased execution models is warranted. However, unlike conventional execution, allowing preemption in phased execution models requires additional care. During execution, all data a task operates on is stored in the local memory of the core. Any preemption during the execution phase must therefore not only save the execution context of the task but also consider the local copies of the task’s data. Thus, when preemption is allowed in such execution models, handling the data already stored in the local memory of the core becomes a non-trivial challenge. We refer to the data that needs to be managed in the case of preemption as intermediate data. The existing works that allow preemptive scheduling of phased execution models either do not detail how the handling of intermediate data is considered in their methods [12, 31, 32] or make optimistic assumptions about the local memory size, e.g., that the local memory is sufficient to hold the partitions of all the tasks running on a core [2, 38]. Therefore, we believe that handling the intermediate data of the preempted task, considering the limited local memory size, is a practical and non-trivial challenge that needs to be addressed.
In this paper, two approaches are proposed for implementing preemption for phased execution models, taking the limited size of local memory into account when handling the intermediate data. The two approaches trade off memory and runtime overheads; we explore and discuss the benefits and limitations of each. As analysability and predictability are crucial in safety-critical applications, we further explore time-triggered scheduling to maintain predictability while allowing preemption in the generated schedules. Several previous works have discussed the benefit of splitting task execution to improve schedulability [7, 34]. There exist compiler-based techniques that support the segmentation of tasks that follow phased execution models as well [33]. In contrast to these compiler-based techniques that divide a task into several smaller tasks at design time, our preemption approaches do not require a modification of the original 3-phase tasks. Srinivasan et al. [34] list the types of preemption-related overheads that must be considered in preemptive scheduling, i.e., scheduling overheads, context-switching overheads, and cache-related preemption delays. The cache-related preemption delays are avoided by design in our approaches through the phased execution and the consideration of spatial isolation for local memories. The handling of intermediate data, which is the most time-consuming part of the context switch, is explicitly considered in our proposed methods. Moreover, the scheduling overheads are avoided with the use of time-triggered scheduling. We use heuristics to generate time-triggered schedules for the proposed preemption approaches. Furthermore, we propose a schedulability-aware preemption heuristic that allows preemption only where needed and thus improves schedulability. To the best of our knowledge, this is the first work that introduces and compares different methods of allowing preemption in phased execution models, especially taking the handling of intermediate data and the limited size of the local memory into consideration.
Contributions: The following are the main contributions of this paper.
(1)
Two approaches to implement preemption of the execution phase of tasks that follow phased execution models, such as PREM and AER, while preserving their predictable properties: the waiting time minimizing preemption method (WMPM) and the overhead minimizing preemption method (OMPM).
(2)
The schedulability-aware preemption heuristic (SAPH), a novel heuristic to create time-triggered schedules. This heuristic reduces the number of preemption occurrences by allowing preemption only when it is beneficial in terms of schedulability for the proposed WMPM and OMPM.
(3)
An evaluation of the proposed preemption methods and the SAPH heuristic for phased execution models under different system parameters.

2 Related Work

Timing verification of multi-core real-time systems is a challenging topic and has received significant attention in recent years [16, 19]. Separating tasks into dedicated memory and execution phases to improve predictability by minimizing contention is an effective approach to tackle the complexity of modern COTS platforms. The initial idea of the phased execution model is presented by Pellizzoni et al. [27] and is extended to the PRedictable Execution Model (PREM) [26]. In their paper, periodic tasks are broken into two different types of intervals, predictable intervals and compatible intervals, to increase predictability. This idea has been refined into a three-phase model in their successive works [1, 36]. The AER model falls under this category of phased execution models and is a generalized version of the PREM model [10]. It separates memory operations and the execution of a task by dividing it into three phases. Additionally, it removes contention when accessing the main memory by not allowing two memory phases at the same time. These phased execution models are generally used with non-preemptive scheduling, and much research has been done for phased execution models assuming non-preemptive scheduling [3, 4, 18, 25, 35]. However, we believe that it is important to explore the benefits of allowing preemption for phased execution models.
A memory-centric scheduling algorithm has been proposed by Yao et al. [38] for multi-core systems. Their work uses the PREM task model with global fully-preemptive scheduling. Concurrent accesses to the main memory are allowed up to a certain limit. Task migration after preemption is assumed to be possible using shared last-level caches. However, the space limitation of the last-level cache to accommodate all partitions of the tasks remains a problem. Huang et al. [12] propose a timing analysis for partitioned scheduling of real-time tasks that follow phased execution models with preemptive execution phases and non-preemptive shared resource access on multi-core platforms. However, no information is provided about how the data of the preempted task is handled. Arora et al. [2] have proposed a schedulability analysis for the 3-phase task model under limited preemptive scheduling, where memory phases are considered non-preemptive while execution phases are preemptive. They assume that the local memory can be partitioned for the tasks running on a core and that the local memory space is sufficient to hold all partitions. They suggest dividing the tasks into multiple segments using existing frameworks [33]. However, these task-splitting techniques must be applied at design time, and tasks can only be split at specific places, considering the properties of their programs. In contrast, the methods we propose consider the limitation of the local memory capacity. Thus, we take away the need to alter the source code, enabling preemption at any point of the task. Senoussaoui et al. [32] propose job-level and task-level time-triggered scheduling approaches and an online scheduling approach for tasks that follow the PREM model on partitioned multi-core platforms. While they also consider non-preemptive memory phases and preemptive execution phases, the considered method of data handling during preemption is not explained. In addition to the above-mentioned work that allows preemption of the execution phases in phased execution models, Schwäricke et al. [30] allow the preemption of the memory phases to boost schedulability, while keeping the execution phases non-preemptive, with fixed-priority global scheduling of tasks that follow the PREM model. Also, Rivas et al. [28] present an implementation of a memory-centric scheduler for the phased execution model with preemptive memory phases.
Therefore, we believe there is a need to investigate different methods of handling the intermediate data during preemption of execution phases, which we address and present in this paper, taking necessary factors into account, such as memory limitations. In our work, time-triggered scheduling is used to maintain predictability during runtime, and a novel heuristic algorithm is proposed to create time-triggered schedules for phased execution models where preemption is allowed only where needed and beneficial. Time-triggered scheduling not only helps to ensure certainty during runtime but also simplifies the design, verification, and certification. Thus, it is commonly used in safety-critical applications [22]. Different techniques are used when creating time-triggered schedules for preemptive systems, e.g., integer linear programming (ILP) [17], constraint programming [14], optimization techniques such as genetic algorithms [24] and ant colony optimization, satisfiability modulo theories (SMT) [8], heuristic algorithms [6], etc.
In contrast to existing work, in this paper, we present a detailed investigation of different methods to realize preemption for phased execution models and their implications on memory consumption and runtime overheads. To the best of our knowledge, this is the first paper to propose and discuss methods of implementing preemption for phased execution models. We then leverage preemption in time-triggered schedules that we heuristically create under consideration of preemption overheads. In addition, the approach does not require modification of the task’s code, which can be beneficial during certification.

3 System Overview

3.1 Platform Model

The multiprocessor system we consider has \(m\) identical cores, where each core has a private, local scratch-pad memory (SPM) of size \(S\). This local memory stores the instructions and data that the task running on the core needs to access during its execution. We assume that the local SPM capacity of a core is enough to store the data and instructions of the largest task in the task set. All cores can access the main memory via a shared interconnect (e.g., a bus).
Note: We assume a platform with SPMs as they are more suitable for real-time applications in terms of achieving predictable execution times. Many modern multi-core platforms offer SPMs, e.g., the NXP S32, Renesas R-Car, STM Stellar, and Infineon AURIX platform series. However, what our proposed methods, and phased execution models in general, require to achieve contention-free execution is a private local memory that avoids corruption of data during execution. Private cache partitions created with cache partitioning techniques, e.g., cache coloring, together with cache locking to prevent unpredictable cache evictions, are one way of achieving this on cache-based platforms. Furthermore, some platforms provide the ability to use a part of the cache as tightly coupled memory (TCM) to achieve predictable local memory partitions, e.g., this can be achieved with ARM Cortex-M cache controllers [21]. Moreover, there exist works that present implementations of the phased model on cache-based COTS multi-core platforms [10, 28].

3.2 Application Model

An application is described as a set of \(n\) independent tasks \(\tau = \lbrace \tau _1, \tau _2, \ldots , \tau _n\rbrace\). Each task is divided into three phases: the Acquisition phase, the Execute phase, and the Restitution phase.1 The phases are executed in the given sequence. The A-phase, E-phase, and R-phase are respectively dedicated to the read operations, execution, and write operations of the task. Thus, read operations, execution, and write operations are only performed in their dedicated phases. In the A-phase, all data and instructions needed to execute the task are copied from the main memory into the core’s local memory. Next is the E-phase, in which the computations are performed using the data and instructions that have already been stored in local memory. Thus, the execution is performed without accessing the main memory and, therefore, without any interference. Finally, the R-phase writes the results back to the main memory.
Note: In our paper, we assume that the task programs have already been converted to be compatible with phased execution. Converting the existing task programs to be compatible with the phased execution model is a practical issue that is not always straightforward. These issues have been discussed, and methods have been proposed to automate this process in several existing works and, thus, can be used for program transformation [20, 25, 33].
Each task \(\tau _i\) can be represented by the tuple \((T_i, D_i, M_i, C_i^r, C_i^e, C_i^w)\), where \(T_i\) and \(D_i\) represent the period and deadline of the task, respectively. The period and deadline are relative to the release time, where \(D_i \le T_i\). The maximum memory footprint is represented by \(M_i\), which is considered as the sum of the code size (\(Im_i\)), total data size (\(Dm_i\)), and the maximum stack usage (\(Ms_i\)) of the task. The worst-case times for the read phase, execution phase, and write phase are represented by \(C_i^r\), \(C_i^e\), and \(C_i^w\), respectively. The worst-case total execution time \(C_i\) is their sum, i.e., \(C_i = C_i^{r} + C_i^{e} + C_i^{w}\). The utilization of a task is \(u_i = C_i/ T_i\), and the total utilization of the task set is \(\sum _{i=1}^{n} u_i\).
Systems with tasks that follow the periodic task model repeat after each hyperperiod, which is the least common multiple (LCM) of the task periods in the task set. A task can have multiple instances (jobs) within a hyperperiod. \(\tau _{i,j}\) represents the \(j^{th}\) job of \(\tau _i\) . The release time of the job is represented by \(r_{i,j}\) , and the absolute deadline by \(d_{i,j}\) . We consider a task set to be schedulable if the task set is schedulable within a hyperperiod. For the task set to be schedulable within a hyperperiod, all the jobs of all the tasks within the hyperperiod must meet their deadlines.
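To make these definitions concrete, the following minimal Python sketch (our own illustration; the class and field names are not from the paper) models a 3-phase task by its tuple \((T_i, D_i, M_i, C_i^r, C_i^e, C_i^w)\) and enumerates its jobs over one hyperperiod.

```python
from dataclasses import dataclass
from math import lcm

@dataclass
class Task:
    # Tuple (T_i, D_i, M_i, C_i^r, C_i^e, C_i^w); periods and deadlines as integer time units
    period: int
    deadline: int
    mem_footprint: int      # M_i = Im_i + Dm_i + Ms_i
    c_read: float           # C_i^r
    c_exec: float           # C_i^e
    c_write: float          # C_i^w

    @property
    def wcet(self) -> float:          # C_i = C_i^r + C_i^e + C_i^w
        return self.c_read + self.c_exec + self.c_write

    @property
    def utilization(self) -> float:   # u_i = C_i / T_i
        return self.wcet / self.period

@dataclass
class Job:
    task_idx: int
    job_idx: int
    release: int        # r_{i,j}
    abs_deadline: int   # d_{i,j}

def expand_jobs(tasks: list[Task]) -> list[Job]:
    """Enumerate all jobs of every task within one hyperperiod (LCM of the periods)."""
    hp = lcm(*(t.period for t in tasks))
    return [Job(i, j, j * t.period, j * t.period + t.deadline)
            for i, t in enumerate(tasks)
            for j in range(hp // t.period)]
```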

4 Methods to Preempt Execution Phases

This section presents the preemption approaches we propose for the tasks that follow phased execution models, e.g., PREM, AER.
Among the different phases of a task, the execution phase is considerably longer than the memory phases [29]. Therefore, in the non-preemptive variant of phased execution models, the waiting time of a task while other tasks utilize the computational resource contributes more to the overall waiting time of the task than the waiting time for the memory access resource. Intuitively, allowing preemption of execution phases is therefore more effective than preemption of memory phases. Thus, as a simplifying step, we execute the memory phases non-preemptively, and only the execution phases are allowed to be preempted.
The method of handling the intermediate data of the preempted task is one of the main challenges when preempting execution phases of tasks that follow phased execution models (see Section 1). The preemption approaches we propose in this paper are based on different methods to handle the intermediate data of a preempted task while considering the limitations of the local memory. In a case where a task preempts another task, the task that preempts the other task is referred to as the preempting task, and the task that gets preempted is referred to as the preempted task in this paper.

4.1 Waiting Time Minimizing Preemption Method (WMPM)

The objective of this preemption approach is to minimize the waiting time of a preempting task while other tasks utilize the resources. Therefore, it is ensured that a task always has the ability to preempt a running task. To achieve this, it is crucial to ensure that there is always enough space in the local SPM of the core to store the instructions and data of the preempting task. To satisfy this requirement, the intermediate data of the preempted task stored in the local memory of the core is written back to the main memory before starting to execute the preempting task. To write back the intermediate data of the preempted task, an intermediate write phase is scheduled at the time of preemption (see Figure 1(a)). In this intermediate write phase (\(W_i^*\) in Figure 1(a)), the intermediate stack data as well as the existing input/output data of the job are written back to the main memory. The time for the intermediate write phase of a task is represented by \(C_i^{imw}\). The read phase of the preempting task is triggered right after the completion of the intermediate write phase of the preempted task without any additional scheduling decisions.
Fig. 1.
Fig. 1. Preemption approaches (a) WMPM and (b) OMPM where \(\tau _1\) (in green) is preempted by \(\tau _2\) (in yellow).
We assume that there are separate, dedicated memory sections in the main memory allocated to store the written-back intermediate data of tasks in the case of preemption. Therefore, in a system with \(n\) tasks and \(m\) cores, this preemption approach requires an additional memory capacity of at most \(S_{M}\) in the main memory (see Equation (1)). The maximum possible size of the data that can be written back during the intermediate write phase of \(\tau _i\) is \((Ms_i + Dm_i)\), which is the sum of the maximum stack size (\(Ms_i\)) and the total data size of the task (\(Dm_i\)). \(W\) represents the list of these maximum intermediate write data sizes of all the tasks in the task set, sorted in descending order, and \(W^{(k)}\) represents the \(k^{th}\) item in the list. \((n-m)\) is the maximum possible number of tasks that can be in the preempted state at the same time. However, in a case where a task that has its intermediate data stored in the main memory preempts a running task, there must be enough additional space to store the intermediate data of the newly preempted task as well, since the intermediate data of the preempting task is only copied back to the local memory after the intermediate write phase of the preempted task. Thus, we consider \(S_{M}\) to be the sum of the largest \((n-m+1)\) intermediate write data sizes in the task set.
\begin{equation} S_{M} = \sum ^{n - m + 1}_{k=1} W^{(k)} \end{equation}
(1)
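As a small illustration of Equation (1), the helper below (our own sketch, not part of the paper's tooling) computes the additional main-memory reservation \(S_M\) from the per-task data and stack sizes.

```python
def additional_main_memory(dm: list[int], ms: list[int], m: int) -> int:
    """S_M for WMPM: sum of the (n - m + 1) largest intermediate write sizes.

    dm[i] is the total data size Dm_i and ms[i] the maximum stack usage Ms_i of
    task i; m is the number of cores. Returns 0 if no task can be in the preempted state.
    """
    n = len(dm)
    w = sorted((d + s for d, s in zip(dm, ms)), reverse=True)  # list W, descending
    return sum(w[:max(n - m + 1, 0)])
```

For example, with four tasks on two cores, the three largest \((Dm_i + Ms_i)\) values are summed.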
Note: In a case where the time of preemption is equal to the start time of the preempted task’s execution phase, the intermediate write phase is avoided as there is no intermediate data to write back. In this case, the read phase of the preempting task is started right at the time of preemption.
After its intermediate write phase, the preempted task can continue the rest of its execution on any available core. Thus, with this preemption approach, we allow job-level migration, as opposed to the task-level migration possible in non-preemptive execution. In that case, the preempted task schedules an intermediate read phase (\(R_i^*\) in Figure 1(a)) to copy the intermediate data from the main memory to the SPM of the new core where it continues to execute. The time for the intermediate read phase of a task is represented by \(C_i^{imr}\). With this preemption approach, the waiting time of the preempted task is also minimized, as it can continue its execution on any available core after its intermediate write phase. However, an overhead of \((C_i^{imr} + C_i^{imw})\) occurs for each preemption due to the intermediate write and read phases.
Rules for preemption decision: For \(\tau _2\) to preempt \(\tau _1\) running on \(core\) \(z\) at \(t=t_x\), the conditions below must be satisfied.
(1)
\(\tau _1\) must be in its execution phase
Rules for handling intermediate data: If \(\tau _2\) preempts \(\tau _1\) running on \(core\) \(z\) at \(t=t_x\),
(1)
The shared interconnect is allocated to \(\tau _1\) for a duration of \(C_1^{imw}\) , which allows writing the intermediate data from the SPM of \(core\) \(z\) to the memory partition for intermediate data in the main memory without contention.
(2)
At \(t=t_x + C_1^{imw}\) , the shared interconnect is allocated to \(\tau _2\) for a duration of \(C_2^r\) , which allows to read data and instructions needed for \(\tau _2\) from the main memory without contention and store them in the local SPM of \(core\) \(z\) .
(3)
\(\tau _1\) is allowed to resume execution on any available core. In that case, the original code of the job and the written-back intermediate data are read into the local memory of the new core.
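The rules above can be read as a simple event sequence at the preemption instant; the sketch below (our illustration, with assumed parameter names) lists the memory phases that a WMPM preemption triggers.

```python
def wmpm_preempt(t_x: float, s_e1: float, c_imw1: float, id1: int, id2: int):
    """Memory-phase events when tau_2 (id2) preempts tau_1 (id1) at time t_x under WMPM.

    s_e1 is the start time of tau_1's current execution phase and c_imw1 its
    intermediate write time C_1^imw. The preempting task's read phase follows
    the intermediate write directly, with no further scheduling decision.
    """
    events, t = [], t_x
    if t_x > s_e1:          # intermediate data exists only if tau_1 has already executed
        events.append((t, f"W* (intermediate write) of task {id1}"))
        t += c_imw1
    events.append((t, f"R (read) of task {id2}"))
    return events           # tau_2's execution phase starts after its read phase
```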

4.2 Overhead Minimizing Preemption Method (OMPM)

The objective of this preemption approach is to minimize the overheads during preemption by avoiding additional memory phases to handle the intermediate data. To achieve this, writing back the intermediate data of the preempted task to the main memory must be avoided. Thus, in the case of preemption, the intermediate data of the preempted task is kept in the local memory of the core instead of being written to the main memory. The execution phase of the preempted task is paused until all three phases of the preempting task are completed (see Figure 1(b)). Therefore, the preempted task has a minimum waiting time equal to the total execution time of the preempting task. Once the preempting task has completed its write phase, the execution phase of the preempted task can resume. With this approach, preemption can take place only if the available local memory space at the time of preemption is enough to fit the maximum memory capacity (\(M_i\)) required by the preempting task. The available space on the local SPM of \(core\) \(z\) at \(time = t\) is represented by \(S_z(t)\). At the beginning, at \(t=0\), \(S_z(0) = S\). This approach does not create fragmentation in the SPM, as the memory can be managed as a stack.
Rules for preemption decision: For \(\tau _2\) to preempt \(\tau _1\) running on \(core\) \(z\) at \(t=t_x\), the conditions below must be satisfied.
(1)
\(S_z(t_x)\) > \(M_2\)
(2)
\(\tau _1\) must be in its execution phase
Rules for handling intermediate data: If \(\tau _2\) preempts \(\tau _1\) running on \(core\) \(z\) at \(t=t_x\),
(1)
The execution phase of \(\tau _1\) is paused at \(t=t_x\) .
(2)
The read phase of \(\tau _2\) is scheduled at \(t=t_x\) .
(3)
\(Core\) \(z\) is reserved for \(\tau _2\) until the end of its write phase unless another task preempts it.
(4)
\(\tau _1\) resumes its execution after \(\tau _2\) has completed all three phases.
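The OMPM preemption condition, together with the stack-like SPM bookkeeping it relies on, can be sketched as follows (a simplified illustration under our own naming, not the authors' implementation).

```python
class CoreSPM:
    """Tracks S_z(t), the free SPM space of one core, managed as a stack (no fragmentation)."""

    def __init__(self, size_bytes: int):
        self.free = size_bytes          # S_z(0) = S
        self.resident = []              # jobs stacked in the SPM; the last one is running

    def can_preempt(self, running_in_e_phase: bool, m_next: int) -> bool:
        # OMPM rules: the running job must be in its E-phase and the free
        # space must fit the preempting job's maximum footprint M_2.
        return running_in_e_phase and self.free > m_next

    def push(self, job_id: int, m_job: int) -> None:
        # Read phase of the preempting job: its data and instructions are loaded on top.
        self.free -= m_job
        self.resident.append((job_id, m_job))

    def pop(self) -> int:
        # Write phase finished: the topmost job leaves; the job below it may resume.
        job_id, m_job = self.resident.pop()
        self.free += m_job
        return job_id
```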

4.3 Discussion

The two preemption methods we propose in this paper have their benefits and drawbacks. As the name suggests, the WMPM approach minimizes the waiting time and enables the possibility of preemption at any time. Additionally, the waiting time of the preempted task until it resumes execution after preemption is also minimized by allowing migration to any available core. However, WMPM has an overhead associated with each preemption occurrence. On the other hand, while the OMPM approach minimizes the preemption overhead, as a consequence of being paused on the same core, the preempted task is bound to that core and has a waiting time until it resumes execution. Additionally, with the OMPM approach, it is necessary to keep track of the SPM memory usage, as the preemption decision requires it. Due to the limitation of the SPM capacity, the number of preemptions is limited. In a system where priority-driven scheduling is used, this limitation can result in a priority inversion, e.g., when a medium-priority task preempts a low-priority task and uses all the available space in the SPM, a high-priority task that arrives afterward is prevented from preempting. On the other hand, in the WMPM approach, although the SPM is not a limiting factor, there is an additional memory requirement on the main memory to accommodate the intermediate data of preempted tasks.

5 Creating Time-triggered Schedules for Preemptive Phased Execution Model

We consider time-triggered scheduling for the proposed preemption methods to maintain predictability during runtime. This section presents an algorithm to create time-triggered schedules for the 3-phase task model. We present different variations of the algorithm to create fully preemptive as well as non-preemptive schedules. The preemptive variants can be used with either of the proposed preemption methods for phased execution models. In addition, a heuristic algorithm called the schedulability-aware preemption heuristic (SAPH) is proposed to reduce the number of preemptions in the fully preemptive versions of the proposed preemption approaches (WMPM and OMPM), aiming to allow preemption only when there is a possibility to improve schedulability (Section 5.3). Section 5.1 describes the base algorithm used when creating the time-triggered schedules, regardless of whether non-preemptive scheduling, fully-preemptive scheduling, or SAPH is used.

5.1 Base Algorithm to Generate Time-Triggered Schedules

This section describes the base algorithm (presented in Algorithm 1) used when creating time-triggered schedules for all the scheduling variants we consider, i.e., non-preemptive scheduling, fully preemptive scheduling, and the SAPH. Scheduling the jobs from a memory perspective is an effective and well-known approach for systems with limited access to shared memory [3, 37, 38]. This work uses an earliest-deadline-first (EDF) inspired memory-centric scheduler. State-of-the-art heuristics based on EDF memory-centric scheduling exist to create time-triggered schedules for non-preemptive phased execution models, where a single job is treated as three sub-jobs [3], meaning that each phase is considered a sub-job with an individual deadline. Our work focuses on evaluating different preemption approaches for the phased execution model. To be more suitable for this purpose, we propose a simpler algorithm to be used as the baseline non-preemptive algorithm for introducing preemption. We consider that each job has one absolute deadline (Section 3.2). Thus, our EDF-based scheduler uses a single absolute deadline when making preemption decisions. This avoids making preemption decisions based on two different types of deadlines when the preempting and preempted jobs are in different types of ongoing phases. Furthermore, our evaluation results show that the proposed non-preemptive baseline provides better schedulability than the state-of-the-art non-preemptive algorithm (Section 6.2.1).

5.1.1 Algorithm Overview.

Within a hyperperiod, each task can have several jobs, and each job becomes ready to be scheduled once it is released. In our algorithm, two successive jobs of the same task can execute on different cores, similar to global scheduling, avoiding restrictions on tasks to execute on specific cores. Thus, a single global queue is used in our algorithm to manage the released jobs while waiting to be scheduled. As the algorithm proceeds, memory phases are scheduled from the global waiting queue depending on the availability of the memory access resource. Finally, once the scheduling is completed, the outputs are returned.
We describe our base algorithm, presented in Algorithm 1, in subsections for better readability. The algorithm setup, including the inputs, outputs, and variables, is introduced in Section 5.1.2, which corresponds to Lines 1–5 in Algorithm 1. Section 5.1.3 describes how the released jobs are managed and sorted until they are scheduled and corresponds to Lines 7–10 in Algorithm 1. Section 5.1.4 describes how jobs are scheduled depending on the availability of the interconnect and corresponds to Lines 11–27. Finally, Section 5.1.5 describes how the outputs are returned at the end of the hyperperiod and corresponds to Lines 29–32 in Algorithm 1.

5.1.2 Algorithm Setup.

Algorithm 1 takes all the jobs of the task set for a hyperperiod as input and returns the generated time-triggered schedule and the schedulability status of the task set as outputs. \(\mathit {Sched}\) in our algorithm is a list that saves the time-triggered schedule. Necessary details of each scheduling decision are recorded in \(\mathit {Sched}\) , i.e., the time stamp, task index, job index, type of the scheduled memory phase (read or write), and an indication of whether the scheduled memory phase is an intermediate memory phase. The variable \(ct\) keeps track of the current time as the algorithm progresses from \(ct = 0\) until the hyperperiod to create the time-triggered schedule. The hyperperiod is represented by \(\mathit {HP}\) in Algorithm 1.
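A possible shape for the records stored in \(\mathit {Sched}\), following the fields listed above (the field names are our own assumption):

```python
from dataclasses import dataclass

@dataclass
class SchedEntry:
    timestamp: float      # time at which the scheduled memory phase starts
    task_idx: int
    job_idx: int
    phase: str            # "read" or "write"
    intermediate: bool    # True for R*/W* phases scheduled due to preemption

# Sched is simply the ordered list of such entries produced over one hyperperiod
Sched: list[SchedEntry] = []
```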

5.1.3 Adding Jobs to the Waiting Queue.

Released jobs are added to a global waiting queue, referred to as \(\mathit {WaitingQ}\) in Algorithm 1, until their memory phases (read or write) are scheduled. Before being added to the \(\mathit {WaitingQ}\), each job is checked using the function \(\mathit {AddtoWaitingQ()}\) to identify released memory phases (Lines 7–9).
The execution phase of a job can be divided into segments due to preemption (see Figure 1). However, a job can only have one ongoing execution phase at a time. \(s^e_{i,j}\) represents the start time of the current execution phase of \(\tau _{i,j}\). \(C_{i,j}^{re}\) represents the remaining execution phase time of \(\tau _{i,j}\). In the beginning, before the start of the execution phase, \(C_{i,j}^{re} = C^e_{i,j}\). The value of \(C_{i,j}^{re}\) only changes if \(\tau _{i,j}\) has been preempted, in which case \(C_{i,j}^{re}\) represents the remainder of its execution phase time after preemption. Two conditions are checked in \(\mathit {AddtoWaitingQ()}\) to identify memory phase releases of jobs.
① If a job’s next memory phase is a read phase and the job is not already in the \(\mathit {WaitingQ}\), the current time (\(ct\)) is compared against the release time of the job (\(r_{i,j}\)) to check if the read phase (initial read or intermediate read) of the job is released to be scheduled.
② If a job’s next memory phase is a write phase, the current time (\(ct\)) is compared against the time the write phase becomes released to be scheduled. For a job that has not been preempted, and for a job that has been preempted but has resumed its execution, this is the time by which the job would complete its execution phase, i.e., \(s^e_{i,j} + C_{i,j}^{re}\).
In case ②, after adding the jobs to the \(\mathit {WaitingQ}\), those jobs are removed from \(\mathit {Jobs}\) as their execution is finishing and they would not be added to the \(\mathit {WaitingQ}\) again.
Note: Intermediate write phases are not added to the \(\mathit {WaitingQ}\) before scheduling as they are scheduled as a consequence of preemption by a job that was in the \(\mathit {WaitingQ}\) .
After adding the jobs of the released memory phases to the \(\mathit {WaitingQ}\), the queue is sorted in ascending order of job deadlines (Line 10 of Algorithm 1).
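The release checks performed by \(\mathit {AddtoWaitingQ()}\) can be summarized in the sketch below; it is our reading of conditions ① and ②, with assumed field names rather than the paper's actual data structures.

```python
def add_to_waiting_q(ct, jobs, waiting_q):
    """Move jobs whose next memory phase is released at time ct into the waiting queue."""
    for job in list(jobs):
        if job.next_phase == "read":                        # condition 1: initial or intermediate read
            if job not in waiting_q and ct >= job.release:
                waiting_q.append(job)
        elif job.next_phase == "write":                     # condition 2: write released at E-phase end
            if ct >= job.exec_start + job.exec_remaining:   # s^e_{i,j} + C^{re}_{i,j}
                waiting_q.append(job)
                jobs.remove(job)                            # finishing job: never re-added
    waiting_q.sort(key=lambda j: j.abs_deadline)            # EDF order (Line 10)
```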

5.1.4 Scheduling Memory Phases from the Waiting Queue.

The \(\mathit {WaitingQ}\) consists of jobs that want to schedule their read phases as well as write phases. To avoid contention when accessing the main memory via the interconnect and to increase predictability, only one memory phase (read or write) is allowed to be scheduled at a time. Access to the interconnect must be available for any memory phase to be scheduled. When the \(\mathit {WaitingQ}\) is not empty, and the interconnect is available, the waiting jobs are scheduled (Lines 11–24). However, in a case where the \(\mathit {WaitingQ}\) is empty, the current time ( \(ct\) ) is incremented to the next time a memory phase is released (Lines 25–27). This time is returned by \(\mathit {NextMemoryReleaseTime()}\) . The following conditions are checked to schedule the waiting jobs, starting with the job at the front of the \(\mathit {WaitingQ}\) , which is the job with the earliest deadline.
① If the job at the front wants its write phase to be scheduled, it is scheduled if the interconnect is available (Line 13 of Algorithm 1).
② If the job at the front wants its read phase to be scheduled, it is scheduled if a core and the interconnect are available (Line 15 of Algorithm 1).
③ If the job at the front wants its read phase to be scheduled but a core is not available, and write phases are found in the \(\mathit {WaitingQ}\), the write phase with the earliest deadline in the queue proceeds over the read phase at the front, to free up a core for the read phase to be scheduled and to avoid a possible deadlock [3]. This decision is also based on the intuition that scheduling a write phase that would have been scheduled eventually anyway, to free up a core, is better than preempting an active job to free up a core. This aims to reduce the cost of preemption by minimizing unnecessary preemptions (Line 18 of Algorithm 1).
④ If the job at the front wants its read phase to be scheduled, a core is not available, and no write phase is found in the \(\mathit {WaitingQ}\), possible preemption options are checked depending on the preemption method used (Line 21 of Algorithm 1).
\(\mathit {ScheduleRead()}\) and \(\mathit {ScheduleWrite()}\) are used when scheduling read and write phases, respectively. The start time of each scheduled memory phase, along with the rest of the information about the job (i.e., the task index, job index, type of the memory phase, and an indication if it is an intermediate memory phase), is added as a record to \(\mathit {Sched}\) every time a memory phase (read or write) is scheduled. The variable \(ct\) is incremented by the time of the scheduled memory phase every time a memory phase is scheduled. In the case of intermediate memory phases, \(ct\) is incremented by the intermediate memory phase time (\(C_i^{imr}\) or \(C_i^{imw}\)) of the corresponding job. The availability of cores is also updated accordingly, depending on whether a read or a write phase is scheduled. A read phase is scheduled when condition ② is satisfied. Thus, an available core is assigned to the corresponding job, and that core is made unavailable. When conditions ① and ③ are satisfied, write phases are scheduled, and thus, the core is made available. In a system with \(m\) processing cores, there can only be a maximum of \(m\) active jobs simultaneously. A global queue (\(\mathit {ActiveQ}\)) is used to keep the active jobs (in their execution phases) on all cores at a given time. At the end of each scheduled write phase, \(ct\) is compared against the deadline of that job to check if it has met its deadline. In the case of a deadline miss, the algorithm returns that the task set is unschedulable. After finishing a write phase (except for an intermediate write), it is checked whether there are any paused jobs on the core of the finished write job, as there can be preempted jobs waiting to resume their execution phase when OMPM is used. If such jobs exist on the core, the job with the earliest deadline (which is also the job that started last) among the paused jobs is selected to resume and is added to the \(\mathit {ActiveQ}\). When condition ④ is satisfied (see Line 21 in Algorithm 1), different preemption options are checked for the \(\mathit {NextJob}\), which is the job with the earliest deadline, using \(\mathit {PreemptionOptions()}\) (see Section 5.2).
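A compressed sketch of the dispatch logic behind conditions ①–④ (heavily simplified, with our own function and field names; it is not the paper's Algorithm 1):

```python
def dispatch_step(waiting_q, free_cores, interconnect_free, preemption_options):
    """One dispatch decision over the EDF-sorted WaitingQ; returns the chosen action."""
    if not waiting_q or not interconnect_free:
        return "advance-time"                               # wait for the next memory release
    front = waiting_q[0]                                    # job with the earliest deadline
    if front.next_phase == "write":                         # condition 1
        return ("schedule-write", front)
    if free_cores:                                          # condition 2: a read also needs a core
        return ("schedule-read", front)
    writes = [j for j in waiting_q if j.next_phase == "write"]
    if writes:                                              # condition 3: free a core without preempting
        return ("schedule-write", min(writes, key=lambda j: j.abs_deadline))
    return preemption_options(front)                        # condition 4: try to preempt (Section 5.2)
```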

5.1.5 Returning the Outputs.

At the end of the hyperperiod, Algorithm 1 checks \(\mathit {Jobs}\) , \(\mathit {WaitingQ}\) , and \(\mathit {ActiveQ}\) for unfinished jobs as these queues must be empty in the case where all the jobs are scheduled within the hyperperiod. In a case where such jobs exist in the queues even after the hyperperiod, the algorithm considers the task set to be unschedulable. Also, during the process of creating the schedule, before the end of the hyperperiod, if a situation is found where the \(\mathit {WaitingQ}\) is empty and the function \(\mathit {NextMemoryReleaseTime()}\) does not return any value, the algorithm concludes that the task set is schedulable. Thus, the schedule is returned before the end of the hyperperiod. This is because such a situation implies that all memory phases have already been scheduled.

5.2 Algorithms for Preemptive Scheduling

This section describes the algorithms used to create the time-triggered schedules for the different preemption variants we propose in this paper, i.e., fully preemptive scheduling of WMPM and OMPM, and the SAPH. If non-preemptive scheduling is assumed, which is used in this paper to compare against the proposed preemption approaches, \(\mathit {PreemptionOptions()}\) returns immediately. If preemptive scheduling is used (either fully preemptive scheduling or SAPH), the jobs in \(\mathit {ActiveQ}\) are sorted in descending order of their deadlines. Hence, the preemption possibilities are checked starting with the active job that has the furthest deadline. Depending on whether WMPM or OMPM is used (either with fully preemptive scheduling or SAPH), Algorithm 2 or Algorithm 3 is called from \(\mathit {PreemptionOptions()}\), respectively. The inputs to both these algorithms are the \(\mathit {ActiveQ}\) and the \(\mathit {NextJob}\), which is at the front of the \(\mathit {WaitingQ}\).
To check preemption possibilities for the \(\mathit {NextJob}\), the requirements for preemption are checked for the jobs in the \(\mathit {ActiveQ}\) (see Line 2 in Algorithm 2 and Line 2 in Algorithm 3). While the preemption requirements differ between WMPM and OMPM (see Section 4), they further differ, depending on whether fully preemptive scheduling or the SAPH is used, at Line 4 of Algorithm 2 and Line 4 of Algorithm 3. The requirements corresponding to fully preemptive scheduling and SAPH are described in Tables 1 and 2. The additional requirements related to SAPH in the tables are introduced and explained in Section 5.3.
Table 1. Preemption Requirements for WMPM
Preemption Variant | Preemption Requirement
fully preemptive | \(d_{\mathit {ActiveJob}} \gt d_{\mathit {NextJob}}\)
SAPH | \(\mathit {ActiveJob}\) is preemptable and \(d_{\mathit {ActiveJob}} \gt d_{\mathit {NextJob}}\) and \(X_{\mathit {ActiveJob}} \gt X_{min}^{\mathit {WMPM}}(\mathit {ActiveJob})\)
Table 2. Preemption Requirements for OMPM
Preemption Variant | Preemption Requirement
fully preemptive | \(d_{\mathit {ActiveJob}} \gt d_{\mathit {NextJob}}\) and \(M_{\mathit {NextJob}} \lt S_{\mathit {CurrCore}}(ct)\)
SAPH | \(\mathit {ActiveJob}\) is preemptable and \(d_{\mathit {ActiveJob}} \gt d_{\mathit {NextJob}}\) and \(M_{\mathit {NextJob}} \lt S_{\mathit {CurrCore}}(ct)\) and \(X_{\mathit {ActiveJob}} \gt X_{min}^{\mathit {OMPM}}(\mathit {ActiveJob})\)
If WMPM is used as the preemption approach, once an active job that satisfies the preemption requirements is found in the \(\mathit {ActiveQ}\), an intermediate write phase is scheduled for the preempted job (Line 5 in Algorithm 2). The read phase of the preempting job (\(\mathit {NextJob}\)) is scheduled immediately after the intermediate write phase. If OMPM is used and the preemption requirements are satisfied, the preempted \(\mathit {ActiveJob}\) is paused and its core is made available to start the preempting job (see Algorithm 3).

5.3 The Schedulability Aware Preemption Heuristic (SAPH)

One of the advantages of using time-triggered scheduling is that the schedule is generated before runtime, and no scheduling decisions are taken during runtime. Taking advantage of this, we aim to further improve the schedulability with the proposed SAPH algorithm by reducing the number of preemption occurrences compared to the fully preemptive versions of WMPM and OMPM.
This heuristic is designed to preempt as few jobs as possible while deadlines are still met. It minimizes the number of preemptions by only allowing preemption when needed. The proposed heuristic uses non-preemptive scheduling by default, meaning that no job is permitted to preempt another job by default. This default setting is changed only in the case where a job misses its deadline. The different reasons for a job to miss its deadline are identified as follows.
① Deadline miss due to the unschedulable nature of the task set. Allowing preemption cannot improve schedulability in this case.
② Deadline miss due to waiting time after being released, as a consequence of the unavailability of a free core to start executing. Allowing preemption can help to improve schedulability in these situations.
③ Either a processing core was available (condition ② in Section 5.1.4) or a write phase was scheduled to free up a core to start executing (condition ③ in Section 5.1.4), but the job still misses its deadline due to reasons such as the waiting time to schedule its write phase. Allowing preemption of execution phases cannot improve schedulability in this case.
④ The job misses its deadline due to being preempted by another job during its execution phase. Such a situation can be avoided by disallowing the preemption of the job.
During the process of creating the time-triggered schedule, in a case of a deadline miss, the proposed heuristic attempts to increase the chance of jobs meeting their deadlines by using preemption and preventive measures. These preventive measures involve the permission to preempt other jobs (PermitToPreempt) and the ability to be preempted by other jobs (preemptability). By default, PermitToPreempt of all tasks is set to False and preemptability of all tasks is set to True.
During the process of finding a schedule for a given task set, at the time of finishing the write phase of each job, the current time is compared with the job’s deadline (see Section 5.1). When SAPH is used, if a job finishes its write phase after its deadline (a deadline miss), the heuristic algorithm identifies whether the possible reason is ② or ④ (see Algorithm 4). If the job has been preempted by another job during its execution phase, the algorithm considers ④ as a possible reason for the deadline miss. Thus, the heuristic sets the job’s preemptability to False (Line 8 of Algorithm 4) and attempts to find a new schedule where the job meets its deadline by restarting the generation process of the time-triggered schedule with the updated parameters. Similarly, if a job misses its deadline while its PermitToPreempt is False, the heuristic considers that the reason for the deadline miss might be ②. Thus, the algorithm gives the job permission to preempt other jobs to avoid probable waiting time (see Line 5 of Algorithm 4) and starts to find a new schedule with the new adjustments, aiming for better schedulability.
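The reaction of SAPH to a deadline miss described above can be restated as the following sketch (a simplification of Algorithm 4 under assumed names; the exact check order follows the algorithm in the paper):

```python
def on_deadline_miss(job, task_params) -> bool:
    """Adjust a task's preemption flags after one of its jobs missed its deadline.

    Returns True if a flag was changed, in which case schedule generation restarts
    with the updated parameters; False means SAPH has no remedy left for this job.
    """
    p = task_params[job.task_idx]
    if not p.permit_to_preempt:                  # suspected reason 2: waited too long for a free core
        p.permit_to_preempt = True               # let this task preempt others from now on
        return True
    if job.was_preempted and p.preemptability:   # suspected reason 4: hurt by being preempted
        p.preemptability = False                 # forbid preempting this task from now on
        return True
    return False
```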
As previously described, Algorithm 1 is common for the SAPH as well. However, when checking for preemption options when using either WMPM or OMPM, the preemption requirements differ from fully preemptive scheduling as described in Tables 1 and 2.
To make the process of finding a schedule more efficient, we further use a slack-time-based method to allow preemption only when it is sensible. Slack is the time a job might have after finishing its execution until its deadline. Equation (2) defines the slack time \(X_{i,j}\) of \(\tau _{i,j}\). For a job that has not been preempted during its execution phase, \(s^e_{i,j}\) is the start time of the execution phase. For a job that has been preempted during its execution phase, \(s^e_{i,j}\) represents the start time of the remaining execution phase after preemption. Similarly, \(C^{re}_{i,j}\) represents the remaining execution phase time after preemption for a job that has been preempted, and \(C^{re}_{i,j} = C^e_{i,j}\) for a job that has not been preempted.
\begin{equation} X_{i,j} = d_{i,j} - s^e_{i,j} - C^w_i - C^{re}_{i,j} \end{equation}
(2)
Note: In the slack calculation, the waiting time to schedule the write phase is not accounted for as it cannot be predetermined.
In this slack-time-based method, before making the preemption decision, the slack time of the job is compared with the minimum slack time it needs to still meet its deadline if it is preempted (see the SAPH rows in Tables 1 and 2). Depending on the preemption method (WMPM or OMPM), the minimum needed slack time differs. The minimum required slack time (\(X_{min}\)) a job \(j_1\) must have when getting preempted by a job \(j_2\) with the WMPM and OMPM preemption approaches is presented in Equations (3) and (4), respectively.
\begin{equation} X_{min}^{WMPM}(j_1) = C^{imr}_1 + C^{imw}_1 \end{equation}
(3)
\begin{equation} X_{min}^{OMPM} (j_1) = {\left\lbrace \begin{array}{ll} C^r_2 + C^e_2 + C^w_2 & \text{; if $j_2$ has not been preempted}\\ C^{imr}_2 + C^{re}_2 + C^w_2 & \text{; if $j_2$ has been preempted} \end{array}\right.} \end{equation}
(4)
In WMPM, the preempted job incurs a compulsory, minimum overhead of its own intermediate write and read phases. Therefore, to identify whether the preemption is sensible, we check the slack time of the job that might get preempted against this minimum slack requirement. Similarly, in OMPM, the preempted job incurs a compulsory waiting time equal to the sum of all three phases of the preempting task. If the preempting job has itself been preempted before, the read time and the execution time in Equation (4) must be the intermediate read time and the remaining execution time after preemption (second case in Equation (4)). Therefore, with this slack-based filtering method, we aim to avoid ineffective preemption attempts when finding a schedule.
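The slack-based filter of Equations (2)–(4) translates directly into a few helper functions (our own sketch, with assumed attribute names):

```python
def slack(job) -> float:
    """X_{i,j} = d_{i,j} - s^e_{i,j} - C_i^w - C^{re}_{i,j}  (Equation (2))."""
    return job.abs_deadline - job.exec_start - job.c_write - job.exec_remaining

def x_min_wmpm(preempted) -> float:
    """Equation (3): compulsory overhead of the preempted job's own W* and R* phases."""
    return preempted.c_imr + preempted.c_imw

def x_min_ompm(preempting) -> float:
    """Equation (4): compulsory waiting time equal to the preempting job's remaining phases."""
    if not preempting.was_preempted:
        return preempting.c_read + preempting.c_exec + preempting.c_write
    return preempting.c_imr + preempting.exec_remaining + preempting.c_write

def saph_slack_check(preempted, preempting, method: str) -> bool:
    """Slack condition added by SAPH on top of the basic requirements in Tables 1 and 2."""
    needed = x_min_wmpm(preempted) if method == "WMPM" else x_min_ompm(preempting)
    return slack(preempted) > needed
```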

5.3.1 Discussion.

We have presented how the SAPH heuristic is used with the scheduling algorithm presented in this paper. It can also be integrated into other non-preemptive scheduling algorithms that generate time-triggered schedules, to allow preemption when necessary. This only requires the ability to introduce the basic preemption-related task properties used in SAPH, i.e., PermitToPreempt and preemptability, into the tasks of the existing scheduling algorithm.

6 Evaluation

We have performed experiments to evaluate the proposed SAPH heuristic and the methods of implementing preemption. Synthetic task sets have been used in the evaluation process. All experiments are performed on an Intel(R) i7-10850H CPU (12 logical cores, 2.7 GHz) with 32 GB of RAM.

6.1 Task Set Generation

The common period values \(\lbrace 1, 2, 5, 10, 20, 50, 100, 200, 1000\rbrace ms\) reported by Kramer et al. for automotive applications [13] are used with their associated probability percentages \(\lbrace 3, 2, 2, 25, 25, 3, 20, 1, 4\rbrace\), respectively, to randomly generate period values. For a given total utilization \(U\), the Dirichlet-Rescale (DRS) algorithm [11] is used to randomly generate the utilization values \(u_i\) for each task in the task set. Using the relationship \(C_i = u_i \cdot T_i\), the total execution time of the task is determined. The total size of the data used by a task, \(Dm_i\), is generated based on the distribution of the label sizes for an example engine control application provided by Kramer et al. [13], for a given number of labels (\(l_i\)) of the task. The \(l_i\) values for tasks are generated using a uniform distribution in the range [2, 100] labels per task. The read data size, \(Rm_i\), and write data size, \(Wm_i\), are calculated from \(Dm_i\) using the percentages provided for read-only, write-only, and read-write label data in [13]. Thus, we consider \(Rm_i = 0.9Dm_i\) and \(Wm_i = 0.6Dm_i\). The code size of a task, \(Im_i\), is generated using a uniform distribution in the range [5KB, 20KB]. The maximum stack usage of a task, \(Ms_i\), is selected from a uniform distribution between [1KB, 4KB]. We consider the maximum memory footprint of a task to be \(M_i = Dm_i + Ms_i + Im_i\).
For simplicity, the task generation does not assume a direct relationship between the read and write data sizes and the read and write phase times; only the ratio between the data amounts is used to split the total memory phase time. The ratio between the total time for memory phases and the execution phase, \(\gamma\), is considered to be constant for all the tasks. The total amount of data read during the read phase is considered to be \(Rm_i + Im_i\). The total amount of data written during the write phase is considered to be \(Wm_i\). Therefore, the ratio between the read phase time, \(C_i^r\), and the write phase time, \(C_i^w\), can be expressed as \(\alpha = (Rm_i + Im_i)/ Wm_i\). Also, we know the relationship \(\gamma \cdot C_i = C_i^r + C_i^w\). Using these two relationships, we get the write phase time \(C_i^w = C_i \cdot \gamma /(\alpha +1)\). Accordingly, the read phase time is calculated using \(C_i^r = C_i \cdot \gamma - C_i^w\). The time for the execution phase is calculated using the relationship \(C_i^e = (1- \gamma)\cdot C_i\).
In the case of WMPM, the times for the intermediate memory phases need to be calculated. We assume that the preempted task’s memory usage at the time of preemption is equal to its maximum memory footprint. We consider the amount of data written back during an intermediate write phase of a task to be \(Dm_i + Ms_i\). The amount of data read during an intermediate read phase is considered to be \(Dm_i + Ms_i + Im_i\). The ratio between the intermediate write phase time and the write phase time of a task is \(\beta = (Dm_i + Ms_i)/ Wm_i\). Therefore, the time for the intermediate write phase can be calculated using the relationship \(C_i^{imw} = \beta \cdot C_i^w\). Similarly, the relationship between the intermediate read phase time and the read phase time can be expressed as \(\lambda = (Dm_i + Ms_i + Im_i)/ (Dm_i + Im_i)\). Accordingly, the time for the intermediate read phase can be calculated using the expression \(C_i^{imr} = \lambda \cdot C_i^r\). Table 3 summarizes the equations used for task set generation.
Table 3. Summary of Equations used in Task Generation
Parameter of the task | Equation to derive
\(M_i\) | \(Dm_i + Ms_i + Im_i\)
\(C_i^w\) | \(C_i \cdot \gamma /(\alpha +1)\)
\(C_i^r\) | \(C_i \cdot \gamma - C_i^w\)
\(C_i^e\) | \((1-\gamma) \cdot C_i\)
\(C_i^{imw}\) | \(\beta \cdot C_i^w\)
\(C_i^{imr}\) | \(\lambda \cdot C_i^r\)
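The derivations summarized in Table 3 can be reproduced with a few lines (a sketch of our own using the ratios defined above):

```python
def derive_phase_times(c_total, gamma, dm, ms, im, rm, wm):
    """Split a task's WCET C_i into (C_i^r, C_i^e, C_i^w, C_i^imw, C_i^imr).

    gamma is the memory-to-total time ratio; rm and wm are the read/write data
    sizes (here Rm_i = 0.9*Dm_i, Wm_i = 0.6*Dm_i); dm, ms, im are the data,
    stack and code sizes of the task.
    """
    alpha = (rm + im) / wm              # read-time to write-time ratio
    c_w = c_total * gamma / (alpha + 1)
    c_r = c_total * gamma - c_w
    c_e = (1 - gamma) * c_total
    beta = (dm + ms) / wm               # intermediate write vs. write
    lam = (dm + ms + im) / (dm + im)    # intermediate read vs. read (as in Section 6.1)
    return c_r, c_e, c_w, beta * c_w, lam * c_r
```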
We have evaluated the schedulability of the generated task sets using fully preemptive versions of our proposed preemption methods and the SAPH heuristic. We compare these preemption variants with their non-preemptive variant. A basic configuration of \(U=1, m=4, \gamma = 0.1, S=64KB\) and \(n=32\) is used in our experiments. In the following sections, we present a schedulability evaluation (Section 6.2) and an evaluation of the number of preemptions (Section 6.4).

6.2 Schedulability Evaluation

We have performed five different experiments in our schedulability evaluation by changing one parameter of the basic configuration at a time.

6.2.1 Varying Utilization.

The first experiment varies the total utilization (\(U\)) of the task sets while other parameters are kept constant. In this experiment, we have varied \(U\) from 0.1 to 3.1 with increments of 0.3 (see Figure 2(a)). The percentage of schedulable task sets reduces with the increase of \(U\), as expected, for all evaluated variants. Both versions of SAPH (with WMPM or OMPM) provide more schedulable task sets than the non-preemptive version and the fully preemptive versions of WMPM and OMPM. SAPH performs slightly better with the OMPM than with the WMPM approach throughout the \(U\) range we experimented with. SAPH provides up to an 11.6% (at \(U=1\) with OMPM) improvement over the non-preemptive variant. The fully preemptive version of the OMPM approach performs better than the fully preemptive WMPM approach for all \(U\) values we consider. Moreover, the fully preemptive version of the OMPM approach provides slightly higher schedulability than the non-preemptive variant in the range \(U = [0.7, 1.3]\). In the cases where the task sets have very low utilization (\(U \lt 0.5\)), there is no considerable benefit in using one evaluated scheduling approach over the others. However, using SAPH to create time-triggered schedules can be considerably more beneficial for task sets with higher utilization.
Fig. 2.
Fig. 2. Schedulable percentage of task sets of fully preemptive versions of WMPM and OMPM, SAPH with WMPM and OMPM and non-preemptive version for (a) a varying total utilization per task set, (b) a varying number of cores, and (c) a varying number of tasks. (d) presents the schedulable percentage of task sets of the SOTA non-preemptive algorithm and SAPH with SOTA as the baseline non-preemptive algorithm.
Comparison with an Existing Non-Preemptive Algorithm: The same experiment of varying the total utilization was performed for an existing state-of-the-art (SOTA) non-preemptive scheduling algorithm [3] for comparison (see Figure 2(d)). Our baseline non-preemptive algorithm provides better schedulability compared to the non-preemptive SOTA under our evaluation setup. We also integrated the SAPH heuristic into the SOTA non-preemptive algorithm, which demonstrates the applicability of the proposed SAPH heuristic to other non-preemptive algorithms (Section 5.3.1).

6.2.2 Varying the Number of Cores.

The second experiment varies the number of cores (\(m\)) in the system while all other parameters are kept constant. In this experiment, \(m\) is varied from 1 to 15 in increments of 1 (see Figure 2(b)). Both versions of SAPH perform better than all other variants throughout the considered range of \(m\). The WMPM variant of SAPH provides \(21.6\%\) higher schedulability than the non-preemptive variant at \(m=2\). Between the fully preemptive versions of WMPM and OMPM, the OMPM approach performs better for all experimented \(m\) values. Additionally, the fully preemptive OMPM performs marginally better than the non-preemptive variant in the range \(m = [2,5]\). The schedulability of all variants has saturated by \(m=12\), implying that beyond a certain point, adding cores does not improve schedulability because the memory access resource is limited. The results show that SAPH reaches its saturation point with 3 cores, while the fully preemptive OMPM and the non-preemptive version reach saturation only with twice as many cores (\(m=6\)), and the fully preemptive WMPM only with four times as many cores (\(m=12\)). Thus, with either variant of SAPH, the compute cores on the platform are used more efficiently, even in scenarios where only a few compute cores are available. It is also interesting to note that the non-preemptive variant and both SAPH variants show a slight decrease in schedulability as the number of cores increases further, while neither of the fully preemptive versions is affected. Since the SAPH heuristic schedules non-preemptively by default unless preemption is explicitly permitted, this can be attributed to the inherent unsustainability of non-preemptive scheduling [23].

6.2.3 Varying the Number of Tasks.

The third experiment varies the number of tasks per task set (\(n\)). In this experiment, \(n\) is varied from 2 to 48 in increments of 2 (see Figure 2(c)). As expected, the schedulability of all variants decreases as the number of tasks increases. Both versions of SAPH perform better than all other variants throughout the considered \(n\) range, with SAPH providing up to \(13\%\) higher schedulability than the non-preemptive variant. The fully preemptive version of OMPM performs better than that of WMPM for all values of \(n\), and it also performs better than the non-preemptive variant in the range \(n=[30, 44]\). The SAPH heuristic with either preemption variant performs best, which we attribute to the more efficient usage of the available platform resources, as also indicated by the results in Section 6.2.2.

6.2.4 Varying the Memory Ratio per Task.

The fourth experiment varies the ratio between the memory phases and the execution phase (\(\gamma\)). In this experiment, \(\gamma\) is varied from 0.01 to 0.6 in increments of 0.02 (see Figure 3(a)). The percentage of schedulable task sets decreases as \(\gamma\) increases: a larger \(\gamma\) increases the total memory phase times (\(C^r_i + C^w_i\)) and decreases the execution phase time (\(C^e_i\)), which lowers schedulability because memory access is limited (refer to Section 6.2.2). Both versions of SAPH perform better than the remaining variants for all values of \(\gamma\), with a maximum improvement of \(51.6\%\) over the non-preemptive variant using the WMPM approach at \(\gamma = 0.01\). Furthermore, the fully preemptive OMPM performs marginally better than the non-preemptive variant in the range \(\gamma = [0.07, 0.11]\). It is interesting to note that the fully preemptive WMPM performs better than the fully preemptive OMPM at lower values of \(\gamma\). Even though the fully-preemptive OMPM outperforms the fully-preemptive WMPM in the basic configuration used in our experiments, there are thus ranges of \(\gamma\) where the fully-preemptive WMPM performs better. This can be explained by the resulting overheads: the overhead of the intermediate memory phases is a drawback of the WMPM approach. When \(\gamma\) is high, it becomes harder for task sets to be schedulable, as this overhead grows while memory access remains limited. At lower values of \(\gamma\), the overhead is smaller, which favors the WMPM approach (see Figure 3(b)). The size of the memory phases depends on several factors, such as the amount of data read and written during the phase and the bandwidth of the interconnect, which in turn depends on its bit-width and clock frequency. Thus, on systems where these factors keep the memory phase times small, the WMPM method is preferable.
Fig. 3. Varying the memory to execution phase ratio. (a) depicts the percentage of schedulable task sets and (b) depicts the bus utilization per task set of the fully preemptive WMPM approach.
Figure 3(b) depicts the total bus utilization of the task sets, corresponding to the fully preemptive WMPM curve in Figure 3(a). The orange horizontal strip in each box plot marks the median of the distribution, a box represents the interquartile range, and each dot represents an outlier. While Figure 3(b) depicts the total bus utilization of the system, it also reflects the overheads introduced by the WMPM approach, compared against the corresponding bus utilizations of \(1\%\), \(5\%\), \(15\%\), and \(25\%\) for the non-preemptive variant. The median bus utilization increases and the interquartile range widens, showing that the resulting overheads grow with \(\gamma\).
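The memory phase times scale with the amount of data moved and the effective interconnect bandwidth, as noted above. A minimal back-of-the-envelope sketch of this relation follows; the bus width and clock frequency are illustrative assumptions, not parameters of our evaluation platform, and arbitration or DRAM effects are ignored.

```python
def memory_phase_time(bytes_to_move, bus_width_bits=64, bus_clock_hz=200e6):
    """Estimate a memory phase time from the data volume and the raw
    interconnect bandwidth (illustrative, contention-free model)."""
    bandwidth_bytes_per_s = (bus_width_bits / 8) * bus_clock_hz
    return bytes_to_move / bandwidth_bytes_per_s


# A hypothetical intermediate write phase moving Dm + Ms = 10 KB:
t_imw = memory_phase_time(10 * 1024)
print(f"intermediate write phase ~ {t_imw * 1e6:.1f} us")
```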

6.2.5 Varying the SPM Size.

The fifth experiment varies the size of the private SPM of the core (\(S\)). In this experiment, \(S\) is varied from 16KB to 144KB in increments of 16KB (see Figure 4(a)). As expected, the SPM size has no effect on the fully preemptive and SAPH versions of WMPM, nor on the non-preemptive variant. However, the fully preemptive and SAPH versions of the OMPM approach increase in schedulability as the SPM size grows, and after a certain point their schedulability saturates, each at its own level. Beyond this point, the SPM is large enough to accommodate possible preemptions without limiting their number. Figure 4(b) corresponds to the fully preemptive version of the OMPM method from the same experiment as Figure 4(a). The average of the maximum number of jobs on the same core at a time (paused or active) also increases and saturates (see the upper plot of Figure 4(b)). Furthermore, the average waiting time per preempted job in fully preemptive OMPM is depicted in the lower plot of Figure 4(b) and follows the same trend with increasing \(S\). Moreover, after reaching saturation, the fully preemptive OMPM approach provides higher schedulability than the non-preemptive variant throughout the considered range of \(S\). Additionally, the results of this experiment illustrate how the performance of the two SAPH versions changes relative to each other: the OMPM version of SAPH only performs similarly to the WMPM version when the SPM is large enough not to limit the needed preemptions. This is not visible in our other experiments, as the SPM size in the base configuration (\(S=64KB\)) is already past the saturation point. Thus, the WMPM version of SAPH is preferable for systems with smaller local memory sizes.
Fig. 4. Varying the SPM size. (a) depicts the percentage of schedulable task sets, and (b) depicts the average number of tasks on a core simultaneously and the average waiting time of a preempted task in fully preemptive OMPM approach.

6.3 Evaluation on Priority Inversion in OMPM

The possibility of priority inversion due to the limited SPM size is a drawback of the OMPM preemption approach (Section 4.3). This section presents an evaluation of the number of priority inversion occurrences in the fully preemptive and SAPH versions of the OMPM approach. Such a priority inversion occurs if a job with an earlier deadline is not able to preempt another job with a later deadline due to insufficient SPM space, while all other preemption requirements are satisfied. We recorded the number of task sets that underwent such a priority inversion. Additionally, out of these task sets, we counted how many had jobs that could not preempt a job on any core due to insufficient SPM space while satisfying all other preemption requirements. This evaluation was performed using 100 task sets per data point for the experiments in which the number of cores (Section 6.2.2) and the SPM size (Section 6.2.5) are varied.
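For illustration, the following sketch shows the kind of SPM-capacity check behind the priority inversions counted here. In OMPM the preempted job's data remains resident in the SPM, so a preemption is only possible if the preempting job's footprint also fits. All class and field names below are hypothetical simplifications, not the actual implementation used in our framework.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Job:
    deadline: float
    footprint: int  # SPM bytes occupied while resident (hypothetical field)


@dataclass
class Core:
    resident_jobs: List[Job] = field(default_factory=list)  # running or paused jobs


def can_preempt_on_core(preempting: Job, core: Core, spm_size: int) -> bool:
    """OMPM keeps the preempted job's data resident in the SPM, so a
    preemption is only feasible if the preempting job also fits."""
    used = sum(j.footprint for j in core.resident_jobs)
    return used + preempting.footprint <= spm_size


def blocked_on_all_cores(job: Job, cores: List[Core], spm_size: int) -> bool:
    """The stronger case counted in this section: a job that, with all other
    preemption requirements met, cannot preempt on any core."""
    return all(not can_preempt_on_core(job, c, spm_size) for c in cores)
```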

6.3.1 Varying the Number of Cores.

When varying \(m\) in the range [1, 15] while keeping the SPM size constant at 64KB, priority inversions were observed in 6, 5, and 2 task sets for 1, 2, and 3 cores, respectively, with the fully preemptive OMPM. Similarly, 7 and 3 task sets showed priority inversion for 1 and 2 cores, respectively, for SAPH with OMPM. None of the task sets that faced priority inversion were schedulable. The number of task sets that underwent priority inversion decreased as \(m\) increased, and priority inversion only occurred for systems with a small core count (\(m \le 3\)). The fraction of task sets containing jobs that could not find any job to preempt due to priority inversion, out of all task sets that faced priority inversions, also decreased from 1.0 to 0.5 as \(m\) increased from 1 to 3 for the fully preemptive OMPM; for SAPH, the same fraction decreased from 1 to 0 as \(m\) increased from 1 to 2. This reflects how global scheduling could help minimize priority inversion compared to partitioned scheduling in multi-core systems.

6.3.2 Varying the SPM Size.

When varying \(S\) in the range of [16, 144] KB while \(m\) is kept constant at four cores, we observed priority inversions for \(S = [16, 48]\) KB. The number of task sets that faced priority inversion decreased from 95 to 8 as \(S\) increased from 16KB to 48KB for the fully preemptive OMPM, and no priority inversion was observed for larger SPM sizes. Around \(25\%\) of the task sets that faced priority inversion still made all their deadlines. With SAPH, however, no priority inversion occurred throughout the evaluated \(S\) range.

6.4 Number of Preemptions

This section presents an evaluation of the number of preemptions that occurred in the fully preemptive variants of WMPM and OMPM (see Figure 5(a)) and in the WMPM and OMPM versions of the SAPH heuristic (see Figure 5(b)). The plots in Figures 5(a) and 5(b) are from the experiment presented in Section 6.2.3, where the number of tasks per task set is varied; thus, only the schedulable task sets are considered. In this experiment, we observed that the number of preemption permits given to tasks is equal to the number of preemptions that occurred in the SAPH variants, meaning that all tasks that obtained a permit to preempt due to a deadline miss were able to meet their deadlines after preempting once. In both plots of Figures 5(a) and 5(b), the number of preemptions grows with the number of tasks: when the number of tasks per task set increases while the number of cores remains constant, the computational resources become limited and more preemptions are required. Moreover, it is interesting to note that the average number of preemptions is around 50 times higher in the fully preemptive versions than in their SAPH counterparts, even though SAPH provides around \(10\%\) and \(20\%\) more schedulability than the fully preemptive variants of OMPM and WMPM, respectively, in the presented experiment (see Figure 2(c)). This provides good evidence that preemption should be allowed only when needed, to balance the large preemption costs. In Figure 5(b), we also present the average number of jobs that have been reconfigured as un-preemptable (preemptability = False). These values for SAPH with both WMPM and OMPM are either zero or approximately zero on average. This reflects that, among the reasons for a deadline miss pointed out in Section 5.3, the one that leads to reconfiguring a job as un-preemptable contributes much less than the others.
Fig. 5. Average number of (a) preemptions in fully preemptive scheduling and (b) preemption permits given and un-preemptable tasks in SAPH.
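The two quantities reported in Figure 5(b) can be captured with simple per-job bookkeeping. The sketch below only illustrates the counters involved, with hypothetical field names; the actual permit and preemptability decisions are made by the SAPH heuristic as described in Section 5.3.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class JobRecord:
    preemption_permit: bool = False  # granted by SAPH after a deadline miss
    preemptability: bool = True      # may be reconfigured to False by SAPH
    preemptions: int = 0             # preemptions this job actually performed


def summarize(task_set: List[JobRecord]):
    """Per-task-set counters of the kind plotted in Figure 5(b) (illustrative)."""
    permits = sum(1 for j in task_set if j.preemption_permit)
    unpreemptable = sum(1 for j in task_set if not j.preemptability)
    preemptions = sum(j.preemptions for j in task_set)
    return permits, unpreemptable, preemptions
```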

6.5 Discussion

6.5.1 Discussion on Fully-Preemptive Versions.

Considering the fully preemptive versions of the proposed preemption approaches, WMPM and OMPM, the OMPM approach performs better in most cases for the memory-to-execution phase ratio used in the base configuration. Also, in our experimental setup, the fully-preemptive WMPM provides lower schedulability overall than the non-preemptive version. The fully-preemptive OMPM slightly outperforms the non-preemptive variant in some ranges of our experimental configurations, as mentioned in Section 6.2, but this is not a considerable improvement. Therefore, our results show that the fully-preemptive versions of the proposed preemption approaches do not provide much benefit over the non-preemptive variant. This is an interesting result worth investigating further.
Each preemption method we propose has its drawbacks and limitations (Section 4.3). The WMPM approach's main drawback is the overhead resulting from the additional intermediate memory phases, which increases the utilization of the memory access resource (see Figure 3(b)). This overhead affects not only the preempted task but also the preempting task, which experiences a delay equal to the intermediate write phase time of the preempted task before starting to execute. Moreover, other tasks in the system are indirectly affected by every preemption, as the demand for the limited memory access resource increases, making it harder to find a schedule. Thus, since the memory phase times depend on it, the data transfer speed of the interconnect can be a limiting factor for this preemption method. In contrast, when using the OMPM approach, the overall overhead is zero and only preempted tasks are affected by preemption. However, there is a mandatory waiting time until the preempted task resumes its execution, and the number of preemptions is limited by the local memory size. To further identify the reasons behind the results for the fully-preemptive versions, we evaluated them under the ideal conditions they could have (see Figure 6(a)). Since the drawback of the WMPM method is the overhead caused by the additional intermediate memory phases, the ideal case would be intermediate memory phase times (\(C_i^{imw}\) and \(C_i^{imr}\)) equal to zero. Thus, for evaluation purposes, we assume a very high data transfer speed, which makes the intermediate memory phase times negligible.
Fig. 6. Percentage of schedulable task sets with a varying total utilization for (a) fully-preemptive and ideal fully preemptive version of WMPM and non-preemptive variants, and (b) SAPH with WMPM, OMPM, and ideal WMPM.
Note: Even though such a case would also make the read and write phase times (\(C_i^{r}\) and \(C_i^{w}\)) nearly zero, for the purpose of this experiment only the intermediate memory phases are assumed to be negligible.
We performed the experiment in which the total utilization of the task set is varied, using the usual experimental setup. The results in Figure 6(a) show that the ideal version of the fully-preemptive WMPM outperforms the non-preemptive variant by up to around 4%, which is also a considerable schedulability gain over the actual fully-preemptive WMPM (more than 10%). This helps explain the low performance of the fully-preemptive WMPM: the preemption cost, due to the resulting overhead and increased bus utilization, brings down the schedulability of the task sets. It also highlights that such preemption overheads cannot be neglected, although schedulability could be improved with a high data transfer rate. In contrast, with the OMPM approach, the limitation is the local SPM size. As shown in Figure 4(a), schedulability saturates beyond an SPM size of 32KB under our experimental setup. This means that under the base configuration (where \(S = 64KB\)), the SPM size is not a limiting factor, and the schedulability would remain the same even in an ideal case with a much larger SPM. This provides an interesting insight: while the schedulability of the fully-preemptive WMPM could be improved by minimizing platform limitations (e.g., a high data transfer rate), the schedulability of the fully-preemptive OMPM cannot be improved in this way, because its main obstacle is the waiting time of the preempted task, which must resume execution on the same core. Therefore, we conclude that for systems with lower memory-to-execution phase ratios, the WMPM method is more suitable, as the overheads and the load on the limited memory access resource become lower (see Figure 3(a)). In the other cases, the fully-preemptive OMPM outperformed the fully-preemptive WMPM by a considerable margin.
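To summarize the trade-off discussed in this section, the following sketch contrasts the two preemption costs under a deliberately simplified, illustrative model: a WMPM preemption injects the preempted task's intermediate write and read phases into the bus schedule and delays the preempting task by the write-back time, whereas an OMPM preemption adds no bus traffic but holds SPM space and imposes a waiting time on the preempted job. The function names and the example numbers are hypothetical.

```python
def wmpm_preemption_cost(C_imw, C_imr):
    """One WMPM preemption: the preempted task's state is written back and
    later restored, and the preempting task waits for the write-back."""
    extra_bus_time = C_imw + C_imr
    delay_of_preempting_task = C_imw
    return extra_bus_time, delay_of_preempting_task


def ompm_preemption_cost(footprint_bytes, waiting_time):
    """One OMPM preemption: no extra bus traffic; the cost is the SPM space
    held by the paused job and its waiting time before resuming on the
    same core."""
    extra_bus_time = 0.0
    return extra_bus_time, footprint_bytes, waiting_time


# Hypothetical numbers: C_imw = 0.2 ms, C_imr = 0.3 ms, a 12 KB footprint,
# and a 1.5 ms waiting time for the paused job.
print(wmpm_preemption_cost(0.2, 0.3))
print(ompm_preemption_cost(12 * 1024, 1.5))
```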

6.5.2 Discussion on SAPH.

After exploring the proposed methods to realize preemption and their respective benefits and limitations, the SAPH heuristic is proposed, based on time-triggered scheduling and aiming for predictability and better schedulability. Minimizing the number of preemptions is a way to reduce the associated preemption costs, i.e., the preemption overheads and waiting times of the WMPM and OMPM approaches that stem from the properties of phased execution models. This motivates the proposed SAPH heuristic: it avoids unnecessary preemptions and only allows preemption when there is a possibility to improve schedulability. Based on our results, once it is identified where preemption is needed, the number of preemptions required to improve schedulability considerably is low. For example, a \(50\%\) schedulability improvement over both non-preemptive and fully preemptive scheduling (at \(\gamma = 0.01\) in Figure 3(a)) is achieved with only an average of 6 preemptions per task set, which is around 20x fewer than with fully preemptive scheduling. It is also worth noting that SAPH performs similarly with WMPM and OMPM under our base experimental configuration, despite the fact that the fully-preemptive versions of WMPM and OMPM perform very differently. Furthermore, both the WMPM and OMPM versions of SAPH perform nearly as well as the WMPM version of SAPH under ideal conditions (see Figure 6(b)). Thus, the evaluation results of SAPH with both WMPM and OMPM suggest that our SAPH heuristic can find a feasible schedule with either preemption method, independent of how they perform under fully-preemptive scheduling, provided the SPM size is large enough not to limit the preemptions in OMPM (see Section 6.2.5).
Therefore, with the SAPH heuristic we propose, the schedulability of task sets that follow phased execution models can be improved by a considerable amount while preserving predictability. This also highlights that allowing a small number of preemptions where needed can yield higher schedulability than both the fully-preemptive and non-preemptive variants. Several existing works have evaluated the schedulability gain of limited preemption techniques for single- and multi-core systems with tasks that follow conventional execution models [5, 15]. The results of Buttazzo et al. [5] and Lee et al. [15] show that controlling preemptions helps to improve schedulability over both fully-preemptive and non-preemptive scheduling. It is also interesting to note that in the results presented in those works for conventional execution models, fully-preemptive scheduling clearly outperforms non-preemptive scheduling in terms of schedulability, unlike the results presented in our paper for the phased execution model. This is likely due to the additional cost of handling intermediate data in the phased execution model, and it suggests that limiting preemption is more necessary and beneficial for phased execution models than for conventional execution models. Arora et al. [2] present an analysis that supports preemption points and discuss the impact of preemption point selection on the inter-core memory interference suffered by tasks that follow the phased execution model. However, they do not compare fully-preemptive and limited-preemptive versions, and due to pessimism in their analysis, the benefits of the limited preemption method over the non-preemptive version are limited.
In our experiments, SAPH provided solutions within approximately 1 s per task set on average. Being a heuristic algorithm, SAPH thus produces schedules quickly, making it suitable for iterative design space exploration as well. While heuristic algorithms provide fast solving times, sub-optimality is a drawback. Proposing an optimal scheduling algorithm for multi-core systems is very complex due to the drastic increase in scheduling possibilities. Optimal solutions can be obtained using exhaustive search techniques such as constraint programming [4]; however, these techniques often have long solving times, highlighting the trade-off between optimality and solving time.

7 Conclusions and Future Work

Phased execution models are designed to minimize contention in COTS multi-core systems by separating the total execution of tasks into dedicated memory and execution phases. These models are generally used non-preemptively, aiming for predictability. However, schedulability is reduced due to blocking times in non-preemptive scheduling. In contrast to conventional execution, realizing preemption in phased execution models while maintaining their inherent phased semantics is non-trivial. This paper investigates different methods to realize preemption, aiming for improved schedulability while preserving the semantics of phased execution models and the predictability of the system. In addition, a schedulability-aware preemption heuristic is proposed to create time-triggered schedules for phased execution models; it allows preemption only where beneficial and provides a significant schedulability gain compared to both the non-preemptive and fully-preemptive variants of the proposed preemption methods. Moreover, this work gives rise to several directions for future work, such as investigating the proposed preemptive phased execution model under dynamic fully- and limited-preemptive scheduling and implementing it on a real platform.

Footnote

1. In the remainder of the paper, we refer to A-phase, E-phase, and R-phase interchangeably as read, execute, and write phase.

References

[1] Ahmed Alhammad and Rodolfo Pellizzoni. 2014. Time-predictable execution of multithreaded applications on multicore systems. In 2014 Design, Automation & Test in Europe Conference & Exhibition (DATE’14). 1–6.
[2] Jatin Arora, Syed Aftab Rashid, Cláudio Maia, and Eduardo Tovar. 2022. Analyzing fixed task priority based memory centric scheduler for the 3-phase task model. In 2022 IEEE 28th International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA’22). IEEE, 51–60.
[3] Matthias Becker, Dakshina Dasari, Borislav Nicolic, Benny Akesson, Vincent Nélis, and Thomas Nolte. 2016. Contention-free execution of automotive applications on a clustered many-core platform. In 2016 28th Euromicro Conference on Real-Time Systems (ECRTS’16). 14–24.
[4] Matthias Becker, Saad Mubeen, Dakshina Dasari, Moris Behnam, and Thomas Nolte. 2018. Scheduling multi-rate real-time applications on clustered many-core architectures with memory constraints. In 2018 23rd Asia and South Pacific Design Automation Conference (ASP-DAC’18). 560–567.
[5] Giorgio C. Buttazzo, Marko Bertogna, and Gang Yao. 2013. Limited preemptive scheduling for real-time systems. A survey. IEEE Transactions on Industrial Informatics 9, 1 (2013), 3–15.
[6] Thomas Carle, Dumitru Potop-Butucaru, Yves Sorel, and David Lesens. 2015. From dataflow specification to multiprocessor partitioned time-triggered real-time implementation. Leibniz Transactions on Embedded Systems (2015).
[7] Hyeonjoong Cho, Binoy Ravindran, and E. Douglas Jensen. 2006. An optimal real-time scheduling algorithm for multiprocessors. In 2006 27th IEEE International Real-Time Systems Symposium (RTSS’06). IEEE, 101–110.
[8] Silviu S. Craciunas and Ramon Serna Oliver. 2014. SMT-based task- and network-level static schedule generation for time-triggered networked systems. In Proceedings of the 22nd International Conference on Real-Time Networks and Systems (Versaille, France) (RTNS’14). Association for Computing Machinery, New York, NY, USA, 45–54.
[9] Dakshina Dasari, Benny Akesson, Vincent Nélis, Muhammad Ali Awan, and Stefan M. Petters. 2013. Identifying the sources of unpredictability in COTS-based multicore systems. In 2013 8th IEEE International Symposium on Industrial Embedded Systems (SIES’13). 39–48.
[10] Guy Durrieu, Madeleine Faugère, Sylvain Girbal, Daniel Gracia Pérez, Claire Pagetti, and Wolfgang Puffitsch. 2014. Predictable flight management system implementation on a multicore processor. In Embedded Real Time Software (ERTS’14).
[11] David Griffin, Iain Bate, and Robert I. Davis. 2020. Generating utilization vectors for the systematic evaluation of schedulability tests. In 2020 IEEE Real-Time Systems Symposium (RTSS’20). 76–88.
[12] Wen-Hung Huang, Jian-Jia Chen, and Jan Reineke. 2016. MIRROR: Symmetric timing analysis for real-time tasks on multicore platforms with shared resources. In Proceedings of the 53rd Annual Design Automation Conference. 1–6.
[13] Simon Kramer, Dirk Ziegenbein, and Arne Hamann. 2015. Real world automotive benchmarks for free. In 6th International Workshop on Analysis Tools and Methodologies for Embedded and Real-time Systems (WATERS’15).
[14] Erjola Lalo, Raphael Weber, Andreas Sailer, Juergen Mottok, and Christian Siemers. 2019. On solving task allocation and schedule generation for time-triggered LET systems using constraint programming. In ARCS Workshop 2019; 32nd International Conference on Architecture of Computing Systems. 1–8.
[15] Jinkyu Lee and Kang G. Shin. 2012. Controlling preemption for better schedulability in multi-core systems. In 2012 IEEE 33rd Real-Time Systems Symposium. IEEE, 29–38.
[16] Tamara Lugo, Santiago Lozano, Javier Fernández, and Jesus Carretero. 2022. A survey of techniques for reducing interference in real-time applications on multicore platforms. IEEE Access 10 (2022), 21853–21882.
[17] Martin Lukasiewycz, Reinhard Schneider, Dip Goswami, and Samarjit Chakraborty. 2012. Modular scheduling of distributed heterogeneous time-triggered automotive systems. In 17th Asia and South Pacific Design Automation Conference. 665–670.
[18] Cláudio Maia, Luis Nogueira, Luis Miguel Pinho, and Daniel Gracia Pérez. 2016. A closer look into the AER model. In 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA’16). IEEE, 1–8.
[19] Claire Maiza, Hamza Rihani, Juan M. Rivas, Joël Goossens, Sebastian Altmeyer, and Robert I. Davis. 2019. A survey of timing verification techniques for multi-core real-time systems. ACM Computing Surveys (CSUR) 52, 3 (2019), 1–38.
[20] Joel Matějka, Björn Forsberg, Michal Sojka, Zdeněk Hanzálek, Luca Benini, and Andrea Marongiu. 2018. Combining PREM compilation and ILP scheduling for high-performance and predictable MPSoC execution. In Proceedings of the 9th International Workshop on Programming Models and Applications for Multicores and Manycores. 11–20.
[21] Microchip Technology Inc. 2018. How to Achieve Deterministic Code Performance Using a Cortex™-M Cache Controller. Accessed: May 10, 2023. [Online]. Available: https://www.microchip.com
[22] Anna Minaeva and Zdeněk Hanzálek. 2021. Survey on periodic scheduling for time-triggered hard real-time systems. ACM Computing Surveys (CSUR) 54, 1 (2021), 1–32.
[23] Mitra Nasri and Bjorn B. Brandenburg. 2017. An exact and sustainable analysis of non-preemptive scheduling. In 2017 IEEE Real-Time Systems Symposium (RTSS’17). 12–23.
[24] Roman Nossal. 1998. An evolutionary approach to multiprocessor scheduling of dependent tasks. Future Generation Computer Systems 14, 5 (1998), 383–392.
[25] Claire Pagetti, Julien Forget, Heiko Falk, Dominic Oehlert, and Arno Luppold. 2018. Automated generation of time-predictable executables on multicore. In Proceedings of the 26th International Conference on Real-Time Networks and Systems. 104–113.
[26] Rodolfo Pellizzoni, Emiliano Betti, Stanley Bak, Gang Yao, John Criswell, Marco Caccamo, and Russell Kegley. 2011. A predictable execution model for COTS-based embedded systems. In 2011 17th IEEE Real-Time and Embedded Technology and Applications Symposium. IEEE, 269–279.
[27] Rodolfo Pellizzoni, Bach D. Bui, Marco Caccamo, and Lui Sha. 2008. Coscheduling of CPU and I/O transactions in COTS-based embedded systems. In 2008 Real-Time Systems Symposium. IEEE, 221–231.
[28] Juan M. Rivas, Joël Goossens, Xavier Poczekajlo, and Antonio Paolillo. 2019. Implementation of memory centric scheduling for COTS multi-core real-time systems. In 31st Euromicro Conference on Real-Time Systems (ECRTS’19). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik.
[29] Matheus Schuh, Claire Maiza, Joël Goossens, Pascal Raymond, and Benoît Dupont de Dinechin. 2020. A study of predictable execution models implementation for industrial data-flow applications on a multi-core platform with shared banked memory. In 2020 IEEE Real-Time Systems Symposium (RTSS’20). IEEE, 283–295.
[30] Gero Schwäricke, Tomasz Kloda, Giovani Gracioli, Marko Bertogna, and Marco Caccamo. 2020. Fixed-priority memory-centric scheduler for COTS-based multiprocessors. In 32nd Euromicro Conference on Real-Time Systems (ECRTS’20). Schloss Dagstuhl-Leibniz-Zentrum für Informatik.
[31] Ikram Senoussaoui, Mohammed Kamel Benhaoua, Houssam-Eddine Zahaf, and Giuseppe Lipari. 2022. Toward memory-centric scheduling for PREM task on multicore platforms, when processor assignments are specified. In 2022 3rd International Conference on Embedded & Distributed Systems (EDiS’22). IEEE, 11–15.
[32] Ikram Senoussaoui, Houssam-Eddine Zahaf, Giuseppe Lipari, and Kamel Benhaoua. 2022. Contention-free scheduling of PREM tasks on partitioned multicore platforms. In 2022 IEEE 27th International Conference on Emerging Technologies and Factory Automation (ETFA’22).
[33] Muhammad R. Soliman and Rodolfo Pellizzoni. 2019. PREM-based optimal task segmentation under fixed priority scheduling. In 31st Euromicro Conference on Real-Time Systems (ECRTS’19) (Leibniz International Proceedings in Informatics (LIPIcs), Vol. 133), Sophie Quinton (Ed.). Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, Dagstuhl, Germany, 4:1–4:23.
[34] Anand Srinivasan, Philip Holman, James H. Anderson, and Sanjoy Baruah. 2003. The case for fair multiprocessor scheduling. In Proceedings International Parallel and Distributed Processing Symposium. IEEE, 10 pp.
[35] Thilanka Thilakasiri and Matthias Becker. 2023. An exact schedulability analysis for global fixed-priority scheduling of the AER task model. In Proceedings of the 28th Asia and South Pacific Design Automation Conference (ASP-DAC’23). 326–332.
[36] Saud Wasly and Rodolfo Pellizzoni. 2014. Hiding memory latency using fixed priority scheduling. In 2014 IEEE 19th Real-Time and Embedded Technology and Applications Symposium (RTAS’14). 75–86.
[37] Gang Yao, Rodolfo Pellizzoni, Stanley Bak, Emiliano Betti, and Marco Caccamo. 2012. Memory-centric scheduling for multicore hard real-time systems. Real-Time Systems 48 (2012), 681–715.
[38] Gang Yao, Rodolfo Pellizzoni, Stanley Bak, Heechul Yun, and Marco Caccamo. 2015. Global real-time memory-centric scheduling for multicore systems. IEEE Trans. Comput. 65, 9 (2015), 2739–2751.
