Efficient Development of Airborne Software with SCADE Suite™
Abstract
This white paper addresses the issue of cost and productivity improvement in
the development of safety-critical software for avionics systems. Such
developments, driven by the ED-12/DO-178B guidelines, traditionally require
very demanding and precise development and verification efforts. This paper
first reviews traditional development practices and then the optimization of
the development process with the SCADE Suite methodology and tools.
SCADE Suite supports a “correct by construction” approach and automated production
of the life cycle elements. The resulting savings in the development and
verification activities are presented in detail. Industry examples demonstrate
the efficiency of this approach, and the business benefits are analyzed.
Table of Contents
1 Executive Summary
3 The ARP 4754 and ED-12/DO-178B Guidelines for the Development of Safety-Critical Systems and Software
3.1 What Are DO-178B and ARP 4754?
3.2 DO-178B Development Processes
3.3 DO-178B Verification Processes
3.4 Why Is Traditional Development of Safety-Critical Software So Expensive?
6 Business Benefits
7 Appendix A References
1 Executive Summary
Companies developing avionics software are facing a challenge. Safety is an absolute requirement.
This makes the development of such systems very expensive, as shown by the following figures
observed for avionics software:
• The average development and test effort for 10 KLOC (thousand lines of code) of Level B software is 16
person-years
• The cost of a minor bug is in the range $100K-$500K
• The cost of a major bug is in the range $1M-$500M
The growing complexity of these systems increases the cost and time of their development to a
level that conflicts with business constraints such as time-to-market and competitiveness.
This paper addresses the issue of productivity in the development of software for civil aircraft, as
governed by the ED-12/DO-178B guidelines, and explores the nature of these costs and how to
reduce them by adopting efficient methods and tools.
First, an introduction to the DO-178B guidelines is provided, to illustrate how safety objectives lead to
high costs when traditional development processes are used. In particular, verification activities
are analyzed, including testing, analysis, and review requirements. Also explored is the process for
change or error correction. It is shown that these verification activities are the cost driver and the
bottleneck in the project schedule.
Next, SCADE Suite, an encompassing method and tool, is described. SCADE Suite maintains the
highest quality standards while reducing costs, based on a “correct by construction” approach.
SCADE Suite provides:
• A unique and accurate software description that can be shared among project participants.
• The prevention of many specification or design errors.
• The early identification of most remaining design errors, allowing them to be fixed in the
requirements/design phase rather than in the code testing or integration phase.
• Qualified code generation that saves not only writing the code by hand but also verifying it.
Industrial application examples at Eurocopter and Airbus are described, which demonstrate the
efficiency of the approach.
Finally, the business benefits of the investment in SCADE Suite are analyzed. They include a 50%
cost reduction compared to the traditional approach, a dramatic reduction in the time to implement
a change (from weeks to days), and a significant improvement in the ability to reuse components,
all leading to a strong competitive advantage.
To compound this, the amount and complexity of software increase every year. As an example, the
progression of software size for the Airbus family of avionics software illustrates this growth.
The 1970s, 1980s, and 1990s have also seen increasing competitive pressure and an
explosion of costs.
In order to complete projects within cost and schedule constraints, companies developing safety-
critical software feel they have had no choice but to innovate in their development processes.
In this paper we address the issue of productivity in the development of airborne software, as
guided by ED-12/DO-178B. We will identify the areas of cost and explore how to reduce them by
adopting efficient methods and tools.
3 The ARP 4754 and ED-12/DO-178B Guidelines for the Development of Safety-Critical Systems and Software
3.1 What Are DO-178B and ARP 4754?
3.1.1 ARP 4754
ARP 4754 [ARP4754] was defined in 1996 by the SAE (Society of Automotive Engineers).
This document discusses the certification aspects of highly integrated or complex systems installed
on aircraft, taking into account the overall aircraft-operating environment and functions. The term
“highly-integrated” refers to systems that perform or contribute to multiple aircraft-level functions.
The guidance material in this document was developed in the context of Federal Aviation
Regulations (FAR) and Joint Airworthiness Requirements (JAR) Part 25. In general, this material
is also applicable to engine systems and related equipment.
ARP 4754 addresses the total life cycle for systems that implement aircraft-level functions. It
excludes specific coverage of detailed systems, software and hardware design processes beyond
those of significance in establishing the safety of the implemented system. More detailed coverage
of the software aspects of design is provided in RTCA document DO-178B and its EUROCAE
counterpart, ED-12B. Coverage of complex hardware aspects of design is provided in RTCA
document DO-254 [DO-254].
3.1.2 DO-178B
The information flow between the system processes and the software processes is summarized in
Figure 1.
Figure 1 Information flow between system and software processes: system requirements allocated to software, fault containment boundaries, software level(s), error sources, and design constraints identified/eliminated.
ARP 4754 identifies the relationships with DO-178B in the following terms:
The point where requirements are allocated to hardware and software is also the point where the
guidelines of this document transition to the guidelines of DO-178B (for software), DO-254 (for
complex hardware), and other existing industry guidelines. The following data is passed to the
software and hardware processes as part of the requirements allocation:
a. Requirements allocated to hardware.
b. Requirements allocated to software.
c. Development assurance level for each requirement and a description of associated
failure condition(s), if applicable.
d. Allocated failure rates and exposure interval(s) for hardware failures of significance.
e. Hardware/software interface description (system design).
f. Design constraints, including functional isolation, separation, and partitioning
requirements.
g. System validation activities to be performed at the software or hardware development
level, if any.
h. System verification activities to be performed at the software or hardware development
level.
ARP 4754 defines guidelines for the assignment of so-called “Development Assurance Levels” to
the system, to its components, and to the software, related to the most severe failure condition of the
corresponding part.
The essence of DO-178B is the formulation of appropriate objectives, and the verification that
these objectives have been achieved. The authors of DO-178B acknowledged that objectives are
more essential and stable than specific procedures. The ways of achieving an objective may vary
from one company to another, and may vary over time, with the evolution of methods, techniques
and tools. DO-178B never states that one should use design method X, coding rules Y, or tool
Z. DO-178B does not even impose a specific life cycle.
The general approach is the following:
1. Ensure that appropriate objectives are defined. For instance:
a. The development assurance level of the software.
b. The design standards.
2. Define procedures for the verification of the objectives. The achievement of the
objectives forms the transition criteria from one activity to another. For instance:
a. Design review.
b. Software integration testing.
3. Define procedures for verifying that the above-mentioned verification activities have
been performed satisfactorily. For instance:
a. Remarks from document reviews have been answered.
b. Structural coverage of the code is achieved.
Figure 2 The DO-178B processes: the planning process and the development process, supported by the integral processes (verification and testing, configuration management, quality assurance, and certification liaison), together with standards and environment.
In the remainder of this document we will focus on the development process and on the
corresponding planning and verification activities.
3.2 DO-178B Development Processes
Figure 3 System development processes (ARP 4754) and software development processes (DO-178B): the system requirements process produces the system requirements allocated to software; the software requirements process produces the high-level requirements; the software design process produces the low-level requirements and the software architecture; the coding process produces the source code; the integration process produces the integrated executable; change requests feed back into the earlier processes.
The high-level software requirements (HLR) are produced directly through analysis of system
requirements and system architecture and their allocation to software. They include specifications
of functional and operational requirements, timing and memory constraints, hardware and software
interfaces, failure detection and safety monitoring requirements, partitioning requirements.
The HLR are further developed during the software design process, thus producing the software
architecture and the low-level requirements (LLR). These include descriptions of the input/output,
the data and control flow, resource limitations, scheduling and communications mechanisms, and
software components. If the system contains “deactivated” code (see glossary), a description of the
means to ensure that this code cannot be activated in the target computer is also required.
Through the coding process, the low-level requirements are implemented as source code.
The integration process compiles and links the source code into an executable integrated with
the target environment.
At all stages traceability is required: between system requirements and HLR, between HLR and
LLR, between LLR and code, and also between tests and requirements.
3.3 DO-178B Verification Processes
The purpose of the software verification processes is “to detect and report errors that may have
been introduced during the software development processes”. DO-178B defines verification
objectives, rather than specific verification techniques, since the latter may vary from one project
to another and/or over time.
Testing is part of the verification processes, but verification is not just testing. The verification
processes also rely on reviews and analyses. Reviews are qualitative and generally performed
once, while analyses are more detailed and should be reproducible (e.g., conformance to coding
standards).
Verification activities cover all the processes, from the planning process to the development
process, and there are even verifications of the verification activities.
The objective of reviews and analyses is to confirm that the HLR satisfy the following:
a. Compliance with the system requirements.
b. Accuracy and consistency: each HLR is accurate and unambiguous and sufficiently detailed,
and requirements do not conflict with each other.
c. Compatibility with target computer.
d. Verifiability: Each HLR has to be verifiable.
e. Conformance to standards, as defined by the planning process.
f. Traceability with the system requirements.
g. Algorithm accuracy.
The objective of these reviews and analyses is to detect and report requirements errors that may
have been introduced during the software design process. These reviews and analyses confirm that
the software low-level requirements satisfy these objectives:
a. Compliance with high-level requirements: The software low-level requirements satisfy the
software high-level requirements.
b. Accuracy and consistency: Each low-level requirement is accurate and unambiguous and the
low-level requirements do not conflict with each other.
c. Compatibility with the target computer: No conflicts exist between the software requirements
and the hardware/software features of the target computer; especially, the use of resources
(such as bus loading), system response times, and input/output hardware.
d. Verifiability: Each low-level requirement can be verified.
e. Conformance to standards: The Software Design Standards (defined by the software planning
process) were followed during the software design process, and deviations from the standards
are justified.
f. Traceability: Ensure that the high-level requirements were developed into the low-level
requirements.
g. Algorithm aspects: Ensure the accuracy and behavior of the proposed algorithms, especially in
the area of discontinuities (e.g., mode changes, crossing value boundaries).
h. The SW architecture is compatible with the HLR, consistent, compatible with the target
computer, verifiable, and conforms to standards.
i. Software partitioning integrity is confirmed.
The objective is to detect and report errors that may have been introduced during the software
coding process. These reviews and analyses confirm that the outputs of the software coding
process are accurate, complete, and can be verified. Primary concerns include correctness of the
code with respect to the LLRs and the software architecture, and conformance to the Software
Code Standards. These reviews and analyses are usually confined to the Source Code. The topics
should include:
a. Compliance with the low-level requirements: The Source Code is accurate and complete with
respect to the software low-level requirements, and no Source Code implements an
undocumented function.
b. Compliance with the software architecture: The Source Code matches the data flow and control
flow defined in the software architecture.
c. Verifiability: The Source Code does not contain statements and structures that cannot be
verified and the code does not have to be altered to test it.
d. Conformance to standards: The Software Code Standards (defined by the software planning
process) were followed during the development of the code, especially complexity restrictions
and code constraints that would be consistent with the system safety objectives. Complexity
includes the degree of coupling between software components, the nesting levels for control
structures, and the complexity of logical or numeric expressions. This analysis also ensures that
deviations to the standards are justified.
e. Traceability: The software low-level requirements were developed into Source Code.
f. Accuracy and consistency: The objective is to determine the correctness and consistency of the
Source Code, including stack usage, fixed point arithmetic overflow and resolution, resource
contention, worst-case execution timing, exception handling, use of non-initialized variables or
constants, unused variables or constants, and data corruption due to task or interrupt conflicts.
Testing of avionics software has two complementary objectives. One objective is to demonstrate
that the software satisfies its requirements. The second objective is to demonstrate, with a high
degree of confidence, that all the errors which could lead to unacceptable failure conditions, as
determined by the system safety assessment process, have been removed.
Figure 4 Software requirements-based test generation produces low-level tests, software integration tests, and hardware/software integration tests; software requirements coverage analysis and software structure coverage analysis determine whether additional verification is needed.
As shown in Figure 4, DO-178B requires that all test cases be requirements-based; that means
that test procedures have to be written from the specifications, not from the code. This requirement
applies even to low-level testing.
Case   A       B       C       Outcome
1      FALSE   FALSE   TRUE    FALSE
2      TRUE    FALSE   TRUE    TRUE
3      FALSE   TRUE    TRUE    TRUE
4      FALSE   TRUE    FALSE   FALSE
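The table above matches the classical MC/DC illustration for a decision of the form (A or B) and C, where each of the four cases lets one condition independently toggle the outcome. The hedged C sketch below is our own illustration (not taken from the paper's example) of such a decision and of the four test cases driving it.

/* Illustration only: a decision of the form (A or B) and C, exercised with the
 * four MC/DC cases of the table above. Each condition's independent effect is
 * shown by a pair of cases: A by (1,2), B by (1,3), C by (3,4). */
#include <stdbool.h>
#include <stdio.h>

static bool decision(bool a, bool b, bool c)
{
    return (a || b) && c;
}

int main(void)
{
    /*                         A      B      C      expected */
    const bool cases[4][4] = {
        { false, false, true,  false },   /* case 1 */
        { true,  false, true,  true  },   /* case 2 */
        { false, true,  true,  true  },   /* case 3 */
        { false, true,  false, false },   /* case 4 */
    };
    for (int i = 0; i < 4; i++) {
        bool ok = decision(cases[i][0], cases[i][1], cases[i][2]) == cases[i][3];
        printf("case %d: %s\n", i + 1, ok ? "OK" : "FAIL");
    }
    return 0;
}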
3.4 Why Is Traditional Development of Safety-Critical Software So Expensive?
At each step, the description of the software is rewritten into another form.
This rewriting is not only expensive, it is error-prone. There is a major risk of inconsistencies
between these different descriptions. As a result, a huge amount of effort is devoted to the
verification of the compliance of each level with the previous level. The purpose of many of the
activities described in DO-178B is to detect the errors introduced during transformations from one
written form to another.
Requirements and design specifications are traditionally written in natural language, possibly
complemented by non-formal figures and diagrams. It is an everyday experience that natural
language is subject to interpretation, even when it is constrained by requirements standards. Its
inherent ambiguity can lead to different interpretations depending on the reader.
This is especially true for the dynamic behavior. For instance, how should one interpret several
parallel sentences containing “before X” or “after Y”?
Many specification and design errors are only detected during software integration testing.
One reason is that the requirements/design specification is often ambiguous and subject to
interpretation. The other reason is that it is difficult for a human reader to understand details
regarding dynamic behavior without being able to exercise it. In a traditional process, the first
time one can exercise the software is during integration. This is very late in the process.
When a specification error can only be detected during the software integration phase, the cost of
fixing it is much higher than if it had been detected during the specification phase.
There are many sources of changes in the software, ranging from bug fixing to function
improvement or the introduction of new functions.
When something has to be changed in the software, all products of the software life cycle have to
be updated consistently and all verification must be performed accordingly.
The level of verification for avionics software is much higher than for other commercial software.
For Level A software, the overall verification cost (including testing) may account for up to 80%
of the budget [Amey]. Verification is also a bottleneck for project completion. So, clearly, any
change in the speed and/or cost of verification has a major impact on the project time and budget.
The objective of this paper is to show how to retain a complete and thorough verification process
while dramatically improving its efficiency. The methods we will describe achieve at
least the level of quality achieved by traditional means, by optimizing the whole development
process.
Figure 5 From the software model to the source code.
SCADE Suite enables the saving of a significant amount of verification effort, essentially because
it supports a “correct-by-construction” process.
Tools have to be “qualified” when processes described in DO-178B are eliminated, reduced or
automated (DO-178B, section 12.2). SCADE Suite has been specified and developed with
qualification objectives. This goes far beyond careful development of the tools. It also requires
appropriate definition of the modeling techniques, and of the generated code characteristics such
as traceability and safety. Appendix D provides details about the qualification of the code
generator.
The next section shows how SCADE Suite can be used in the development and verification
process.
SCADE Suite can be used in the following activities, as shown in green on Figure 6:
• Definition of high-level and low-level requirements:
o The Editor: supports the editing, documentation, and verification of models.
o The Simulator: supports interactive or batch simulation of a model.
o The Design Verifier: supports corner-case bug detection and formal verification of
requirements.
o The Simulink to SCADE Gateway: supports translation of discrete-time
Simulink models to SCADE, and S-function generation from SCADE.
• The Code Generators: automatically generate C or Ada code from the model. One of
them (KCG) is qualifiable as a development tool for Level A software.
• The SCADE-DOORS Gateway: supports traceability management between SCADE and
other documents such as requirements or test plans.
Figure 6 The software development process with SCADE Suite: the system requirements (text, Simulink, etc.) are allocated to software; the SCADE Editor captures the high-level requirements and architecture in the software requirements process and the low-level requirements and architecture in the software design process; SCADE KCG generates the source code in the coding process; the software integration process produces the integrated executable.
By continuous control, we mean sampling sensors at regular time intervals, performing signal-
processing computations on their values, and outputting values often using complex mathematical
formulae. Data is continuously subject to the same transformation. In SCADE, continuous control
is graphically specified using block diagrams such as the one depicted in Figure 7.
Boxes compute mathematical functions, filters, and delays, while arrows denote flows of data
between the boxes. Data flow between blocks that continuously compute their outputs from their
inputs. All blocks compute concurrently, and the blocks only communicate through the flows. To
add some flexibility in the control of functioning modes, some flows may carry Boolean or discrete
values that are tested in computational blocks or act on flow switches.
SCADE blocks are fully hierarchical: blocks at a description level can themselves be composed of
smaller blocks interconnected by local flows. In Figure 7, the ExternalConditions block is
hierarchical, and one can zoom into it with the editor. Hierarchy makes it possible to break design
complexity by a divide-and-conquer approach and to design reusable library blocks. Because of
support for hierarchy, the set of primitive blocks can remain very small: there is no need to write
complex blocks directly in C or Ada, since defining them hierarchically from smaller blocks is
semantically better, much more readable, and just as efficient. Compared to other block-diagram
formalisms, hierarchy in SCADE is purely architectural and does not imply complex hierarchical
evaluation rules: a hierarchical block occurring in a higher-level block is simply replaced by its
contents, conceptually removing its boundaries.
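To make the data-flow style concrete, the hedged C sketch below (our own illustration, not a SCADE model and not KCG output) shows the kind of computation a small continuous-control block describes: at each cycle it reads its input flow, updates an internal unit delay, and produces its output flow.

/* Hypothetical cycle function for a first-order low-pass filter block:
 * y = y_prev + alpha * (x - y_prev). The 'state' field plays the role of
 * the block's unit delay; 'init' handles the initial value of that delay. */
typedef struct {
    double alpha;   /* filter coefficient, fixed at design time */
    double state;   /* previous output (the delayed flow)       */
    int    init;    /* set once the delay has been initialized  */
} LowPassFilter;

static double LowPassFilter_cycle(LowPassFilter *self, double input)
{
    if (!self->init) {
        self->state = input;   /* initial value of the delayed flow */
        self->init = 1;
    }
    self->state = self->state + self->alpha * (input - self->state);
    return self->state;
}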
By discrete control we mean changing behavior according to external events originating either
from discrete sensors and user inputs or from internal program events, e.g., value threshold
detection. Discrete control is used when the behavior itself keeps changing, which is characteristic of
modal human-machine interfaces, alarm handling, complex functioning-mode handling, and communication
protocols.
State machines have been very extensively studied in the last 50 years, and their theory is well
understood. However, in practice, flat state machines have not proved adequate even for medium-size applications,
since their size and complexity tend to explode very rapidly. For this reason, a richer concept of
hierarchical state machines has been introduced, the initial one being Statecharts [D. Harel]. The
Esterel Technologies hierarchical state machines are called Safe State Machines (SSMs), see
Figure 8 for an example. These evolved from the Esterel programming language [G. Berry] and
the SyncCharts state machine [C. André] model. SSMs have proved to be scalable in large
avionics systems.
SSMs are hierarchical and concurrent. States can be either simple states or macro states,
themselves recursively containing a full SSM or a concurrent product of SSMs. When a macro
state is active, so are the SSMs it contains. When a transition crossing a macro state's boundary is
taken, the macro state is exited and all the active SSMs it contains are preempted,
whichever state they were in. Concurrent state machines communicate by exchanging signals,
which may be scoped to the macro state that contains them.
The definition of SSMs carefully forbids dubious constructs found in other hierarchical state
machine formalisms: transitions crossing macro state boundaries, transitions that can be taken
halfway and then backtracked, etc. These are non-modular, semantically ill-defined, and very hard
to figure out, hence inappropriate for safety-critical designs. Their use is usually not recommended
by methodological guidelines.
Large applications contain cooperating continuous and discrete control parts. SCADE Suite makes
it possible to seamlessly couple the data-flow and state-machine styles. Most often, one includes
SSMs in a block-diagram design to compute and propagate functioning modes. The discrete
signals to which an SSM reacts, and which it sends back, are then simply transformed back and forth into
Boolean data flows in the block diagram. The computation models are fully compatible.
Figure: at each cycle, driven by the clock and events, the application acquires its inputs, computes the periodic response (the scope of SCADE Suite™), and outputs the results.
In a SCADE block diagram specification, each block has a cycle and all blocks act concurrently.
Blocks can all have the same cycle, or they can have different cycles, which subdivide a master
cycle. At each of its cycles, a block reads its inputs and generates its outputs. If two connected
blocks A and B have the same cycle, the outputs of A are used by B in the same cycle, unless an
explicit delay is added between A and B. This is the essence of the synchronous semantics.
SSMs have the very same notion of a cycle. For a simple state machine, a cycle consists of
performing the adequate transition from the current state and outputting the transition output in the
cycle, if any. Concurrent state machines communicate synchronously with each other, receiving
the signals sent by other machines and possibly sending signals back. Finally, block diagrams and
SSMs in the same design also communicate synchronously at each cycle.
Notice that this cycle-based computation model carefully distinguishes between logical
concurrency and physical concurrency. The application is described in terms of logically
concurrent activities, block-diagram or SSMs. Concurrency is resolved at compile-time, and the
generated code remains standard sequential and deterministic C or Ada code, all contained within
a very simple subset of these languages. What matters is that the final sequential code behaves
exactly as the original concurrent specification, which can be formally guaranteed. Notice that
there is no overhead for communication, which is internally implemented using well-controlled
shared variables without any context switching.
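The hedged sketch below is our own illustration (not actual KCG output) of this compilation scheme: two logically concurrent blocks, a producer filter and a consumer threshold detector connected by a flow, end up as one deterministic, sequential step function in which the flow is an ordinary variable and no operating system task or context switch is involved.

/* Illustration only: logical concurrency resolved into a fixed sequential order. */
#include <stdbool.h>

static double filtered_prev;   /* unit delay of block A, kept across cycles */

void application_step(double sensor, bool *alarm)
{
    /* Block A: averaging filter, producer of the flow "filtered". */
    double filtered = 0.5 * (sensor + filtered_prev);
    filtered_prev = filtered;

    /* Block B: threshold detector, consumer of "filtered". It is scheduled
     * after A because it uses A's output in the same cycle (no explicit delay). */
    *alarm = (filtered > 100.0);
}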
4.2.5 Determinism
4.3.1.1 Introduction
We will illustrate the SCADE Suite development process with a small example: the Altitude
Control System.
Note: this example has been derived from an altitude control system produced by the SafeAir
project¹. It is an element of a case study and was intended to be used for training. The original
specification can be retrieved as a public document from the SafeAir Web site (www.safeair.org).
The Altitude Control System (ACS) of the A/C is defined as all the hardware and software
necessary to control the two-dimensional speed/altitude trajectory of the A/C. The heart of the ACS is the
Flight Control Computer (FCC).
Figure: the FCC receives data from the sensors and drives the throttle actuator and the elevator actuator.
4.3.1.2 Inputs
The flat list of inputs is summarized in the table below.

Name               Type   Units     Range        Purpose
Phase_Button       bool                          Rising edge means switch to next phase
Mode_Switch        bool                          True = manual stick command mode
Pitch_Cmd          real   degrees   -5 to +20    Pitch stick command
Throttle_Command   real             0 to 1       Manual throttle command
Speed_Setpoint     real   knots     120 to 180   Speed setpoint (the hardware driver ensures that this data is updated only when the knob is pressed)
¹ The SafeAir project (IST 1999-10913) was partly funded by the European Community.
4.3.1.3 Outputs
The flat list of outputs is summarized in the table below.
The starting point for our example is the set of system requirements allocated to software (SR1, …,
SRn). This includes the functional requirements of the software, its performance requirements, and its
safety-related requirements.
Figure: the system requirements allocated to software (SR1, SR2, SR3, SR4, SR5, …, SRn) are formalized into high-level requirements (HLR1, HLR2, HLR3, HLR4, …, HLRm), some of which are captured with the SCADE tool while others are not; the high-level requirements are in turn refined into low-level requirements (LLR1, LLR2, …, LLRp) described in the graphical notation of SCADE.
Within the requirements process, we may have manual formalization from some of the system
requirements to high-level requirements described in SCADE. Other high-level requirements are
not described in SCADE and are still described in a natural language or some other notation.
More precisely, a system requirement (SRi) may be formalized into several high-level
requirements (HLRi), as is the case with SR3, which is refined into HLR2 and HLR3. Furthermore, a
given high-level requirement may participate in several system requirements, as is the case of
HLR3 with SR3 and SR4. Finally, there may also exist some “derived” high-level requirements that
are not directly obtained by formalization of a system requirement, but arise from
implementation or safety constraints.
Within the design process, we will only have to consider those high-level requirements that were
not described in SCADE. For those, we may have a manual translation from high-level
requirements to low-level requirements expressed in SCADE. Some of the low-level requirements
may still be expressed in the form of pseudo-code or any other kind of low-level description, as
would typically be the case for low-level executive functions close to the hardware.
The documentation of the modules, their interfaces, data types and flows can be generated and
inserted in the Software Requirements Document.
The top-level SCADE model formalizes the top-level functions, the interfaces of the system and of its high-level functions,
and the data flow between those functions. Note that these flows are strongly typed and structured,
using the data type definitions given below. This provides a clean framework for the design.
Structured types

TYPE           Field name   Field type    Meaning
Sensors        Speed        real          Measured speed
               Alt          real          Measured altitude
Commands       Speed        real          Desired speed
               Alt          real          Desired altitude
PhaseMode      Pitch        real          Pitch angle
               Throttle     real          Throttle command
Buttons        Phase        bool          When pressed, go to next phase
               Manu         bool          When pressed, enter manual mode
StatusLights   Speed        LightColors   Speed error warning light
               Alt          LightColors   Altitude error warning light
PhaseMode      PARK         bool          Parking phase
               T_OFF_Gnd    bool          Take off, ground phase
               T_OFF_UP1    bool          Take off, max climb phase
               T_OFF_UP2    bool          Take off, complete climbing
               M_CRS        bool          Mid cruise phase
               LD           bool          Landing phase
               Manu         bool          Manual command mode

Enumerated types

TYPE          Case     Meaning
LightColors   green    OK
              amber    beware
              red      danger
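For illustration, such interfaces could map onto C declarations like the hypothetical ones below; the actual type names, field names, and numeric representations produced by the code generator may differ.

/* Hypothetical C rendering of some of the types above (our naming, not
 * necessarily the code generator's). 'real' is mapped to double here. */
#include <stdbool.h>

typedef enum { Green, Amber, Red } LightColors;   /* OK, beware, danger */

typedef struct {
    double Speed;   /* measured speed    */
    double Alt;     /* measured altitude */
} Sensors;

typedef struct {
    bool Phase;     /* when pressed, go to next phase  */
    bool Manu;      /* when pressed, enter manual mode */
} Buttons;

typedef struct {
    LightColors Speed;   /* speed error warning light    */
    LightColors Alt;     /* altitude error warning light */
} StatusLights;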
State machines are used to make explicit the states of the system and how it reacts to events. In the
example below, this corresponds to parking, take off (on ground, 1st phase, 2nd phase), mid cruise,
and landing.
Figure: the phase state machine steps through PARK, T_OFF_Gnd, T_OFF_Up1, T_OFF_Up2, M_CRS, and LD; transitions are triggered by NextPhase and by conditions on the sensors (e.g., Sensors.speed >= 120.0, Sensors.alt >= 1000.0, Sensors.alt >= 3000.0).
The following simple diagram, which factors two similar requirements (one for the altitude, the
other for the speed), could directly express this function. Note that this is typically a matter of
project/company culture: some would prefer to do it during requirements capture, others during design.
Figure: the watched value is compared with ReferenceInput against a low threshold (LowThresh) and a high threshold, and the resulting Color output is Green, Amber, or Red.
Then, during the design activity, we formalize SWHLR61 and SWHLR63 in the following
SCADE diagram. The speed from the sensors is first limited to the range 120 to 180, and this value
is then available for memorization. The memorization (MEM block) occurs when the A/C enters the
Auto mode (rising edge).
Figure: Sensors.alt is limited between 1000.0 and 40000.0 and fed to a MEM block (initial value 3000.0) that is written when Auto becomes true.
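A plain-C rendering of this limit-and-memorize pattern could look like the hypothetical sketch below (our illustration, not the SCADE model or its generated code): the input is first clamped to its allowed range, and the memory is written only on the rising edge of the Auto flag.

/* Hypothetical illustration of the "limit then memorize on rising edge" pattern. */
typedef struct {
    double memorized;   /* value held by the MEM block                */
    int    written;     /* becomes 1 once a write has occurred        */
    int    auto_prev;   /* previous value of Auto, for edge detection */
} SetpointMemory;

static double limit_range(double x, double low, double high)
{
    if (x < low)  return low;
    if (x > high) return high;
    return x;
}

/* Called once per cycle; 'init_value' is used until the first rising edge of Auto. */
double setpoint_cycle(SetpointMemory *m, double sensor_speed, int auto_mode, double init_value)
{
    double limited = limit_range(sensor_speed, 120.0, 180.0);   /* range from the text above */
    if (auto_mode && !m->auto_prev) {                           /* rising edge of Auto */
        m->memorized = limited;
        m->written = 1;
    }
    m->auto_prev = auto_mode;
    return m->written ? m->memorized : init_value;
}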
Figure 18 From system analysis in Simulink™ to software with the SCADE Simulink™ Gateway: the SCADE-generated code, produced by the DO-178B Level A qualified code generator, is embedded as a Matlab/Simulink S-function and exercised in the Simulink model of the system (environment, system inputs, actuators, displays, and observers).
Moreover, the user can benefit from the qualified Code Generator:
• The generated code can be executed in the Simulink model of its environment, to validate its
behavior.
• The generated code can be downloaded to a prototype of the target, or to the final target, possibly
with Level A quality objectives thanks to the qualified code generator (see the section on code
generation).
Below is a simple example with a piece of the control loop of the ACS.
Control engineers have provided the definition of a control loop in the form of control engineering
block diagrams (Figure 19).
The translation of the control engineering diagrams to SCADE (Figure 20) is quite easy, either
manually for small models, or by using the Simulink to SCADE gateway for large models. When
formalizing in SCADE, it may be necessary to specify details, such as initial conditions, to remove
any remaining ambiguity.
Figure 20 The control loop formalized in SCADE, built from library blocks such as Look_Up_Table_caller and IntegrTrapez, with explicit gains and initial conditions.
The SCADE Code Generators automatically generate the complete C or Ada code implementing
the requirements and architecture defined in SCADE. It is not just a generation of skeletons: the
complete dynamic behavior is implemented.
Various code generation options can be used to tune the generated code to the user's needs; for
example:
1) Generate one C function per node, or inline the code of that node.
2) Pass external structured data by copy or by reference.
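The effect of these options can be pictured with the hypothetical C fragments below (illustration only; the actual KCG output, naming, and calling conventions differ): a node either becomes a separate C function or is expanded at its point of use, and a structured input is passed either by copy or through a pointer.

/* Hypothetical illustration of the code generation options (not KCG output). */
typedef struct { double speed; double alt; } SensorsData;   /* assumed type */

/* Option "one C function per node": the Limiter node is a separate function,
 * and the structured input is passed by reference. */
static double Limiter(double x, double low, double high)
{
    return (x < low) ? low : (x > high) ? high : x;
}

static double SpeedPath_as_function(const SensorsData *sensors)
{
    return Limiter(sensors->speed, 120.0, 180.0);
}

/* Option "inline the node": the Limiter is expanded at its point of use,
 * and the structured input is passed by copy. */
static double SpeedPath_inlined(SensorsData sensors)
{
    double x = sensors.speed;
    return (x < 120.0) ? 120.0 : (x > 180.0) ? 180.0 : x;
}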
Figure: in the software design process, the SCADE Editor/Simulator captures the low-level requirements and architecture; the generated C source code then feeds the software integration process.
Legacy code
Legacy code can be integrated easily as imported nodes and imported data.
Scheduling
The only scheduling code that the user has to write is the periodic call to the SCADE root function,
typically based on the real time clock. The code generator, based on the data flow, automatically
computes all of the internal scheduling of the model. It is a deterministic, sequential scheduling of
the code. There is no overhead due to scheduling and communication, which one would have if the
model pieces were implemented as tasks managed by an operating system. The generated code is
both deterministic and efficient.
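In practice, the user-written scheduling code reduces to something like the hedged sketch below (all names are hypothetical, and the actual prototype of the root function depends on the KCG version and options): a periodic loop, driven by the real-time clock, that refreshes the inputs, calls the SCADE root function once per cycle, and publishes the outputs.

/* Hypothetical periodic caller of the SCADE root function (names are assumed). */
extern void ACS_root_init(void);        /* assumed initialization entry point   */
extern void ACS_root_cycle(void);       /* assumed cyclic entry point           */
extern void wait_for_next_tick(void);   /* blocks until the next real-time tick */
extern void read_inputs(void);          /* acquires sensor and discrete inputs  */
extern void write_outputs(void);        /* drives the actuators and displays    */

int main(void)
{
    ACS_root_init();
    for (;;) {
        wait_for_next_tick();   /* fixed-rate activation from the real-time clock */
        read_inputs();          /* refresh the root function's inputs             */
        ACS_root_cycle();       /* one synchronous cycle of the model             */
        write_outputs();        /* publish the root function's outputs            */
    }
}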
void WatchVariable (void)
{ _L98_WatchVariable = (WatchedInput - ReferenceInput);
  /*#code for node Abs_1*/
  /* Node Abs is expanded (user option). It is computed before its result is used. */
  if ((_L98_WatchVariable >= (_L98_WatchVariable - _L98_WatchVariable)))
  { _L5_WatchVariable = _L98_WatchVariable;
  } else { _L5_WatchVariable = (-_L98_WatchVariable);
  }
  /*#end code for node Abs_1*/
  if ((_L5_WatchVariable < LowThresh))
  { _L99_WatchVariable = Green;            /* traceable constant names */
  } else { _L99_WatchVariable = Amber;
  }
  if ((_L5_WatchVariable > HighThresh))
  { Color = Red;
  } else
  { Color = _L99_WatchVariable;            /* traceable variable names */
  } /*#end code for node WatchVariable*/
}
Figure 22 Generated code for the WatchVariable node
Safety
The generated code is safe: there is no pointer arithmetic, no dynamic memory allocation, and no
operating system call; the only loops, which are for delays or array handling, have a fixed length.
The generated code is traceable to the model: nodes, variables, constants are traceable by name
and/or comments as shown on Figure 22.
First, one must check the consistency of the requirements. The syntactic and semantic checker of
SCADE Suite performs an in-depth analysis of model consistency, including:
• Detection of missing definitions.
• Warnings on unused definitions.
• Detection of non-initialized variables.
• Coherence of data types and interfaces.
• Coherence of “clocks”, i.e. of production/consumption rates of data.
It is also possible to add custom verification rules, using the programmable interface of the
SCADE editor.
The SCADE requirements have to be reviewed for conformance with system requirements. The
SCADE Suite report generator ensures that the documentation is up-to-date. The advanced “find”
feature of the editor facilitates an in-depth review.
A requirements management tool such as DOORS may also help in managing traceability with
other life cycle data, such as textual requirements or test cases. Since SCADE and DOORS are
integrated, the traceability of SCADE with textual requirements is simple and efficient.
It is also helpful to exercise dynamically the behavior of that specification, to better understand
how it behaves. As soon as a SCADE model (or pieces of it) is available, it can be simulated with
the SCADE simulator. Simulation can be interactive or batch. Scenarios (input/output sequences)
can be recorded, saved and replayed later on the simulator or on the target. Note that all simulation
scenarios, like all testing activities, have to be based on the system requirements.
Robustness and safety must be addressed at each level, as explained by ARP 4754. We
recommend that robustness be addressed differently at the requirements and coding level.
At the requirements/design level, the specification should explicitly identify and manage the
robustness of the software with respect to invalid input data. This requires techniques such as
voting, confirmation, and range checking. At the requirements level, one should explicitly manage
the ranges of variables. For instance, one should generally use integrators with a limiter. Similarly, if
there is a division, the case where the divisor is zero has to be managed explicitly at the
requirements level, in the context calling the division: the division should only be called when the
divisor is not zero, and the action to be taken when the divisor is zero in a foreseeable situation has
to be defined by the writer of the specification, not by the programmer.
In contrast, if an attempt to divide by zero happens at run time in spite of the above-mentioned
design principles, it has to be handled as an abnormal situation, caused by a defect in
the software specification or by a hardware failure. The detection of the event is typically
part of the arithmetic library (the implementation of that library is generally target dependent). The
action to be taken (e.g., raise an exception and disconnect the computer) has to be defined as a
global design decision for the computer.
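As an illustration of handling the division at the requirements level, the hypothetical sketch below guards the division in the calling context and leaves only the truly abnormal case to the arithmetic library; the names and the chosen fallback are our assumptions, not the paper's design.

/* Hypothetical requirements-level protection of a division: the divisor is
 * tested in the calling context, and the behavior when it is zero is an
 * explicit, specified fallback rather than a programmer's improvisation.
 * A division by zero that still occurred at run time would be trapped by the
 * arithmetic library as an abnormal situation (global design decision). */
double protected_ratio(double numerator, double divisor, double specified_fallback)
{
    if (divisor != 0.0) {          /* foreseeable case, handled by the specification */
        return numerator / divisor;
    }
    return specified_fallback;     /* action defined by the specification writer */
}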
It is easy to define libraries of robust blocks, such as voters, confirmators, and limiters. Their
presence in the diagrams is explicit to the reader. It is also recommended to use the same
numeric data types (in particular fixed point, if the application uses this technique) on the host and
on the target, with libraries that have the same behavior.
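Typical robust library blocks could look like the hedged C sketches below; this is our own illustration of a limiter and of a confirmator that only confirms an input held true for a number of consecutive cycles, whereas the real SCADE library operators are defined as model-level blocks rather than C functions.

#include <stdbool.h>

/* Limiter: clamps a flow between two bounds. */
double limiter(double x, double low, double high)
{
    return (x < low) ? low : (x > high) ? high : x;
}

/* Confirmator: the output becomes true only after the input has been true for
 * n_cycles consecutive cycles, a classical way to filter spurious discretes. */
typedef struct { int count; } Confirmator;

bool confirmator_cycle(Confirmator *c, bool input, int n_cycles)
{
    c->count = input ? (c->count + 1) : 0;
    if (c->count > n_cycles) {
        c->count = n_cycles;      /* saturate the counter */
    }
    return c->count >= n_cycles;
}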
Requirements analyses and reviews have to include the above-mentioned rules. These rules ensure
that robustness is effectively managed in the software specifications and in the libraries, rather
than being spread over the code.
The code generator is a “development tool”, in DO-178B terminology. That means that a failure in
the code generator may introduce an error in the code that will be embedded.
As explained in Appendix C, the qualification of a tool may reduce the verification required on
its output. There are two ways of using a SCADE Code Generator for the
development of software, and of presenting it accordingly to the certification authority:
a) Unqualified: the code generator is just a way of writing the code more effectively, but no
reduction of verification is possible. The code is verified as if it were written manually. That means,
among other things, reading the code and ensuring its structural coverage during testing.
b) Qualified: the qualification of the SCADE code generator may reduce or eliminate low-level
testing and structural coverage analysis.
Appendix C provides details about the savings on verification activities, and Appendix D gives
details about the qualification of the code generator.
In practice, we propose to organize the testing process in the following way (Figure 24):
Figure 24 Organization of the testing process: software requirements-based testing covers the whole application; unit (low-level) testing of the code generated by SCADE KCG is saved, except for a sample of the generated code.
1) For the functions that are hand-coded in the Source Code language (e.g., library functions
and/or executives):
The user performs classical verification activities (including low-level testing and
structural coverage analysis at the Source Code level).
The Source to Object Code compiler is used in the same version and with the same
options (no optimization) and in the same execution environment as when it is used to
compile Source Code obtained from the KCG.
Analysis of this Object Code is performed according to CAST Paper P-12 [CAST-12] to
demonstrate that the object code that is not directly traceable to the source code is correct.
2) For the Source Code automatically generated by KCG:
The user performs testing activities of a sample of the generated Source Code that
comprises all used Source Code programming constructs in order to demonstrate that the
Object Code generated from this Source Code is correct and does not introduce erroneous
extra code that is not traceable at the Source Code level (as in CAST Paper P-12).
3) For the whole application:
The user performs extensive systems requirements-based software and hardware/software
integration testing. It is verified at this stage that all systems requirements allocated to
software are covered by those tests.
The stopping criterion is to achieve both coverage of the system requirements and the
structural coverage of the model (see appendix C).
We acknowledge that, by specification, KCG uses only a small subset of the general-purpose
Source Code language, with a low level of complexity (mostly expressions with comparisons, +, -,
etc.), and generates very regular code structures. If the combination of all the above activities does
not detect any error in the object code, then we can have sufficient confidence that the compiler
does not introduce errors in the code generated by KCG for that application.
6 Business Benefits
This chapter analyzes the business benefits of the above described process improvement.
6.1.2 Time to Market and Cost Reduction with the “Y” Cycle
Figure 25 Time and cost along the V cycle (requirements, design, and coding on the descending branch; unit testing, integration testing, and system testing on the ascending branch), comparing lifecycles a, b, and c described below.
If we take the traditional “V cycle” as a reference, Figure 25 summarizes the time and cost
savings, depending on the lifecycle:
• Lifecycle a: Traditional development.
• Lifecycle b: The Code Generator is just used as a productivity tool; it saves the task of
writing the code, and the cost reduction is about 15% if there are no LLR changes at all, and
20% if some changes are made. All verification activities have to be performed.
• Lifecycle c: The Code Generator is qualified as a development tool; compliance of the code
with the low-level requirements is guaranteed, and the corresponding verification activities are
saved. If the risk of undetected error introduction by the compiler is mastered, the compliance
of the executable with the low-level requirements is guaranteed. We then shift from the V
cycle to the “Y cycle”, meaning that the cost of producing the object code and verifying its
conformance to the requirements is close to zero.
The introduction of proof techniques might lead to 10% more savings, as estimated by Airbus
projections [Pilarski]. This has to be confirmed further at a large scale.
As an example, the chart below shows potential cost reductions on a typical project. The figures
concern the part of the application for which SCADE is applicable. Although the effort of writing
the requirements and design can be reduced (for the same level of detail as in a traditional
document), we assume that this is compensated by a more detailed description and more
preparation of test cases at the simulation stage. The effort for coding itself is almost completely
saved. The integration effort is decreased thanks to the higher consistency of the SCADE model
and of its generated code. When SCADE is used, the verification cost is decreased by 20%, due
primarily to the verification of the design by the SCADE checking tools. When using KCG, there
are major supplementary savings: one is the reduction of the “normal” verification, the other is due
to the savings in the verification of code when correcting a requirement and reflecting this change
in the implementation.
Figure: potential cost reductions by activity (requirements, verification, integration, requirement errors, and total), comparing the traditional cost, the cost with the code generator, and the cost with qualified KCG (relative scale 0 to 100).
The effect of the Y cycle is not only to reduce direct costs. Reducing the development time leads
to earlier availability of the product. This is clearly a competitive advantage for the company with
the first version of an aircraft or equipment on the market.
This is even truer with new versions: a company often has to build new versions or variants of a
piece of equipment to adapt to the various requirements defined by its customers. The Y cycle dramatically
decreases the time needed to build the variants. This may favor the selection of the company offering such
a shorter response time.
In the traditional approach, the project produces on one hand a textual description and on the other hand
software code. The code is low-level, hard to read, hard to maintain, and often target
dependent, even if good coding rules have been applied. When new equipment has to be
developed, it is hard to reuse the old software.
A SCADE software model is much more functional and target independent. Experience has shown
that large numbers of SCADE blocks can be reused from one project to another. Libraries of
functional blocks can be defined in one project and reused in another one. The software can be
implemented on new targets by just adapting the bodies of the bottom-level library elements,
without changing the SCADE models.
7 Appendix A References
[C. André] “Representation and Analysis of Reactive Behaviors: A Synchronous Approach”, proc.
CESA'96, IEEE-SMC, Lille, France (1996)
[Amey] “Correctness by Construction: better can also be cheaper,” Peter Amey, Crosstalk, the
Journal of Defense Software Engineering, March 2002
[ARP4754] “Certification considerations for highly integrated or complex aircraft systems”,
Society of Automotive Engineers, 1996
[G. Berry] “The Foundations of Esterel”, in Proofs, Languages, and Interaction: Essays in Honour
of Robin Milner, G. Plotkin, C. Stirling and M. Tofte, eds., MIT Press (2000).
[CAST-12] “Guidelines for Approving Source Code to Object Code Traceability”, CAST-12
Position Paper, December 2002
[Chilenski] “A practical Tutorial on Modified Condition/Decision Coverage”, Kelly J. Hayhurst
(NASA), Dan S. Veerhusen (Rockwell Collins), John J. Chilenski (Boeing), Leanna K. Rierson
(FAA)
[DO-178B/ED-12B]“Software Considerations in Airborne Systems and Equipment Certification,”
RTCA/EUROCAE, December 1992
[DO-248B] Final Report for Clarification of DO-178B “Software Considerations in Airborne
Systems and Equipment Certification”, RTCA Inc, Oct 2001
[DO-254] “Design Assurance Guidance for Airborne Electronic Hardware”, RTCA Inc
[Lustre] “The Synchronous Dataflow Programming Language Lustre”, N. Halbwachs, P. Caspi, P.
Raymond, and D. Pilaud, Proceedings of the IEEE, 79(9):1305-1320, September 1991
[D. Harel] Statecharts: a Visual Approach to Complex Systems. Science of Computer
Programming, vol. 8, pp. 231-274 (1987).
[N81110.91] “Guidelines for the Qualification of Software Tools using RTCA/DO-178B”, FAA
Notice, N81110.91, January 16th, 2001
[Pilarski] “Cost effectiveness of formal methods in the development of avionics systems at
Aerospatiale,” François Pilarski, 17th Digital Avionics Conference, Nov 1st-5th 1998, Seattle, WA.
[SCADE_Lang] “SCADE Language Reference Manual,” Esterel Technologies 2003
8.1 Acronyms
A/C: Aircraft.
COTS: commercial off-the-shelf.
DER: Designated Engineering Representative.
EUROCAE: European Organization for Civil Aviation Equipment.
FAA: Federal Aviation Administration.
HLR: High-level Requirement.
LLR: Low-level Requirement.
JAA: Joint Aviation Authorities.
JAR: Joint Aviation Requirements.
MC/DC: Modified Condition/Decision Coverage.
RTCA: RTCA, Inc.
SCADE: Safety Critical Application Development Environment.
SQA: software quality assurance.
SW: software.
Note: the statements of the next sections apply only to the parts modeled with SCADE and
generated from the SCADE model.
Objective                                                                          Impact
1 Source Code complies with low-level requirements                                Eliminated
2 Source Code complies with software architecture                                 Eliminated
3 Source Code is verifiable                                                       Eliminated
4 Source Code conforms to standards                                               Eliminated
5 Source Code is traceable to low-level requirements                              Eliminated
6 Source Code is accurate and consistent                                          Eliminated
7 Output of software integration process is complete and correct
DO-178B Table A-5

Objective                                                                          Impact
1 Executable Object Code complies with high-level requirements                    Efficiency
2 Executable Object Code is robust with high-level requirements                   Efficiency
3 Executable Object Code complies with low-level requirements                     Efficiency
4 Executable Object Code is robust with low-level requirements                    Efficiency
5 Executable Object Code is compatible with target computer                       Efficiency
DO-178B Table A-6

Objective                                                                          Impact
1 Test procedures are correct
2 Test results are correct and discrepancies explained                            Efficiency
3 Test coverage of high-level requirements is achieved                            Efficiency
4 Test coverage of low-level requirements is achieved                             Eliminated
5 Test coverage of software structure (modified condition/decision) is achieved   Eliminated
6 Test coverage of software structure (decision coverage) is achieved             Eliminated
7 Test coverage of software structure (statement coverage) is achieved            Eliminated
8 Test coverage of software structure (data coupling and control coupling)
  is achieved                                                                      Efficiency
DO-178B Table A-7
respect to system requirements. Contrary to dead code, a “dead requirement” is most often not a
piece of text that has not been purged. It is more frequently a wanted feature, inhibited by a
complex chain of dependencies that the requirements writers did not identify. Traditionally, this
identification could only be achieved by human analysis of the software requirements, where it
was difficult to analyze all possible dynamic situations. With SCADE Suite, it is possible to run
system-requirements-based test cases against the software requirements by using the SCADE Simulator.
We therefore propose to evaluate the coverage of the software requirements in a measurable way.
The definition of requirements coverage criteria is an emerging topic. The objective is to answer
the question: “did I exercise every piece of the software requirements during system requirements
based testing?” (Simulation is considered here to be part of testing).
As a first example, with block diagrams, the criteria could be the activation of nodes and of selectors,
and the characteristic input/output responses of library operators, the definition of these criteria
being part of the library definition. For instance, the confirmation of an input is one of the
characteristic events for a confirmation node.
With state machines, one could typically use state and transition coverage criteria.
As another example, when dealing with a block diagram notation, it is possible to
transpose the classical MC/DC criterion that normally applies to Source Code to these block
diagrams, as is explained in the tutorial on MC/DC [Chilenski]. This is just one possible approach,
and it has not been demonstrated that the same criteria should be used for requirements as for
source code.
Step                               1     2     3     4     5     6     7
Inputs
  WatchedInput:real                1     10    11    100   101   -12   56
  ReferenceInput:real              0     0     0     0     0     0     56
Hidden inputs
  LowThresh:real                   10    10    10    10    10    10    10
  HighThresh:real                  100   100   100   100   100   100   100
Expected outputs
  Color:LightColors                0     1     1     1     2     1     0
Simulated outputs
  Color:LightColors                0     1     1     1     2     1     0
Status                             OK    OK    OK    OK    OK    OK    OK
Figure 28 Requirements-based test scenario for the WatchVariable example
For the example of Figure 27, we may characterize the test cases given by Figure 28 in the
following manner:
• Cases 1, 3 and 5 cover the block diagram from a strict MC/DC standpoint.
• Cases 2 and 4 are added to check the accuracy of the comparator.
• Cases 6 and 7 are added to show the correctness of the absolute value function and are beyond
the coverage analysis of the current block diagram.
Structural coverage analysis of the software requirements, although not required, is a plus of our
methodology, which helps in identifying dead requirements. This was not possible in a more
traditional setting, where one is dealing with informal requirements.
Figure: the software requirements are verified against the system requirements allocated to software through reviews and analyses, simulation, and model coverage.
• ED-12/DO-178B defines development tools as “tools whose output is part of the airborne
software and thus can introduce errors.” If there is a possibility that a tool can generate an
error in the airborne software that would not be detected, then the tool cannot be treated as a
verification tool. An example of this would be a tool that instrumented the code for testing
and then removed the instrumentation code after the tests were completed. If there was no
further verification of the tool’s output, then this tool could have altered the original code in
some unknown way. Typically, the original code prior to instrumentation is what is used in the
product. This example is included to demonstrate that tools used during verification are not
necessarily verification tools. The effect on the final product must be assessed to determine
the tool’s classification.
Section 12.2.1 states that:
a. If a software development tool is to be qualified, the software development processes for the
tool should satisfy the same objectives as the software development processes of airborne
software.
b. The software level assigned to the tool should be the same as that for the airborne software it
produces, unless the applicant can justify a reduction in software level of the tool to the
certification authority.
In summary, the user has to make sure that, if he intends to use a tool, that tool has been developed
in such a way that it can be qualified for its intended role (verification or development) and for the
safety level of the target software. Note that qualification is granted on a per-project basis, although it is
usually simpler and faster to re-qualify a tool in a context similar to the one in which it has already been
qualified.
• Analyses and reviews have been conducted in the same way as for embedded software.
• MC/DC coverage has been achieved.
• Tool Accomplishment Summary: summarizes and concludes what has been achieved (DO-178B Level A objectives, usage conditions, possible limitations).
• Tool Configuration Index: identifies the configuration of the tool and its components.