Understanding and Using The Controller Area Network
Marco Di Natale
October 30, 2008
Chapter 1
Introduction
This book is the result of several years of study and practical experience in the design
and analysis of communication systems based on the Controller Area Network (CAN)
standard. CAN is a multicast-based communication protocol characterized by deterministic resolution of the contention, low cost and simple implementation. It was developed [4] in the mid 1980s by Bosch GmbH to provide a cost-effective communication bus for automotive applications, but it is today also widely used in factory and plant controls, robotics, medical devices, and some avionics systems.
CAN is a broadcast digital bus designed to operate at speeds from 20 kbit/s to 1 Mbit/s, standardized as ISO/DIS 11898 [1] for high speed applications (500 kbit/s) and ISO 11519-2 [2] for lower speed applications (125 kbit/s). The achievable transmission rate depends on the bus length and transceiver speed. CAN is an attractive solution for embedded control systems because of its low cost, light protocol management, deterministic resolution of the contention, and built-in features for error detection and retransmission. Controllers supporting the CAN communication standard are widely available today, as are sensors and actuators manufactured to communicate data over CAN. CAN networks are successfully replacing point-to-point connections in many application domains, including automotive, avionics, plant and factory control, elevators and medical devices.
Commercial and open source implementations of CAN drivers and middleware software are available today from several sources, and support for CAN is included in automotive standards such as OSEK COM and AUTOSAR. The standard was developed with the objective of time determinism and support for reliable communication. With respect to these properties, it has been widely studied by academia and industry, and methods and tools have been developed for predicting the time and reliability characteristics of messages.
This book attempts to provide an encompassing view on the study and use of the CAN bus, with references to theory and analysis methods, but also a description of the issues in the practical implementation of the communication stack for CAN and the implications of design choices at all levels, from the selection of the controller to the decisions made by the SW developer and the architecture designer. We believe such an approach may be of
advantage to those interested in the use of CAN, from students of embedded systems courses to researchers, architecture designers, system developers and all practitioners involved in the deployment and use of a CAN network and its nodes.
As such, the book attempts to cover all aspects of the design and analysis of a CAN communication system. The second chapter contains a short summary of the standard, with emphasis on the bus access protocol and on the protocol features that are related to, or affect, the reliability of the communication. The third chapter focuses on the analysis of message response times or latencies. The fourth chapter addresses reliability issues. The fifth chapter deals with the analysis of message traces. The sixth chapter contains a summary of the main transport-level and application-level protocols that are based on CAN.
Chapter 2
The CAN 2.0B Standard
This chapter introduces version 2.0B of the CAN standard, as described in the official Bosch specification document [4], with its main protocol features. Although the presentation provides enough details, the reader may want to check the freely available official specification document for a complete description, together with the other standard sources referenced throughout this chapter.
The CAN network protocol has been defined to provide deterministic communication in complex distributed systems, with the following features/capabilities:

- Possibility of assigning priority to messages and guaranteed maximum latency times.
- Multicast communication with bit-oriented synchronization.
- System-wide data consistency.
- Multimaster access to the bus.
- Error detection and signalling, with automatic retransmission of corrupted messages.
- Detection of possible permanent failures of nodes and automatic switching off of defective nodes.
Seen in the context of the ISO/OSI reference model, the CAN specification, originally developed by Robert Bosch GmbH, covers only the Physical and Data Link layers. Later, ISO provided its own specification of the CAN protocol, with additional details on the implementation of the physical layer.
The purpose of the Physical Layer is in general to define how bits are encoded into
(electrical or electromagnetic) signals with defined physical characteristics, to be trans-
mitted over wired or wireless links from one node to another. In the Bosch CAN stan-
dard, however, the description is limited to the definition of the bit timing, bit encoding,
and synchronization, which leaves out the specification of the physical transmission
medium, the acceptable (current/voltage) signal levels, the connectors and other char-
acteristics that are necessary for the definition of the driver/receiver stages and the
physical wiring. Other reference documents and implementations have filled this gap,
providing solutions for the practical implementation of the protocol.
The Data Link layer consists of the Logical Link Control (LLC) and Medium Access Control (MAC) sublayers. The LLC sublayer provides the services for the transmission of a stream of bits from a source to a destination; in particular, it covers message acceptance filtering, overload notification and recovery management.
The MAC sublayer is the kernel of the CAN protocol specification. It is responsible for message framing, arbitration of the communication medium, acknowledgement management, and error detection and signalling. For the purpose of fault containment and additional reliability, the MAC operations in CAN are supervised by a control entity that monitors the error status and limits the operations of a node if a possible permanent failure is detected.
The following sections provide more detail on each sublayer, including requirements and operations.
2.1 Physical Layer

Figure 2.1: Signal propagation between the two farthest nodes (A and B) on the bus.
Bit transmission in CAN requires that all nodes stay synchronized with the timing of the transmitted bits. The synchronization protocol uses transition edges to resynchronize nodes; hence, long sequences without bit transitions must be avoided to prevent drifts in the node bit clocks. This is the reason why the protocol employs the so-called bit stuffing (or bit padding) technique, which forces a complemented bit in the stream after five bits of the same value have been transmitted. Stuffing bits are automatically inserted by the transmitting node and removed at the receiving side before processing the frame contents.
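To make the rule concrete, here is a minimal Python sketch (illustrative only, not taken from the specification) of the stuffing and destuffing operations performed at the transmitting and receiving side:

```python
def stuff(bits):
    """Insert a complemented stuff bit after five consecutive bits of the same value."""
    out, run_bit, run_len = [], None, 0
    for b in bits:
        out.append(b)
        run_bit, run_len = (b, run_len + 1) if b == run_bit else (b, 1)
        if run_len == 5:
            s = 1 - b
            out.append(s)               # stuff bit: complement of the last five bits
            run_bit, run_len = s, 1     # the stuff bit itself starts a new run
    return out

def destuff(bits):
    """Remove the stuff bits on the receiving side (inverse of stuff())."""
    out, run_bit, run_len, skip_next = [], None, 0, False
    for b in bits:
        if skip_next:                   # this is a stuff bit: drop it, but it resets the run
            run_bit, run_len, skip_next = b, 1, False
            continue
        out.append(b)
        run_bit, run_len = (b, run_len + 1) if b == run_bit else (b, 1)
        if run_len == 5:
            skip_next = True
    return out

# Example: ten dominant bits in a row are transmitted as twelve bits (two stuff bits).
frame = [0] * 10
assert len(stuff(frame)) == 12 and destuff(stuff(frame)) == frame
```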
Synchronous bit transmission enables the CAN arbitration protocol and simplifies
data-flow management, but also requires a sophisticated synchronization protocol. Bit
synchronization is performed first upon the reception of the start bit available with each
asynchronous transmission. Later, to enable the receiver(s) to correctly read the mes-
sage content, continuous resynchronization is required. Other features of the protocol
influence the definition of the bit timing. For the purpose of bus arbitration, message
acknowledgement and error signalling, the protocol requires that nodes can change the
status of a transmitted bit from recessive to dominant, with all the other nodes in the
network being informed of the change in the bit status before the bit transmission ends.
This means that the bit time must be large enough to accommodate the signal propagation from any sender to any receiver and back to the sender.
The bit time includes a propagation delay segment that takes into account the signal
propagation on the bus as well as signal delays caused by transmitting and receiving
nodes. In practice, this means that the signal propagation is determined by the two
nodes within the system that are farthest apart from each other (Figure 2.1).
The leading bit edge from the transmitting node (node A in the figure) reaches node B after the signal propagates along the whole bus between the two nodes. At this point, B can change the bus value from recessive to dominant, but the new value will not reach A
until the signal propagates all the way back. Only then can the first node decide whether
its own signal level (recessive in this case) is the actual level on the bus or whether it
has been replaced by the dominant level by another node.
Considering the synchronization protocol and the need for all nodes to agree on the bit value, the nominal bit time (the reciprocal of the bit rate or bus speed) can be defined as composed of four segments (Figure 2.2):
- Synchronization segment (SYNC SEG). This segment is used to synchronize the nodes on the bus; a bit edge is expected to lie within it.

- Propagation segment (PROP SEG). This part of the bit time is used to compensate for the (physical) propagation delays within the network. It is twice the sum of the signal propagation time on the bus line, the input comparator delay, and the output driver delay.

- Phase segments (PHASE SEG1 and PHASE SEG2). These segments are time buffers used to compensate for phase errors in the position of the bit edge. They can be lengthened or shortened to resynchronize the position of SYNC SEG with respect to the following bit edge.

- Sample point (SAMPLE POINT). The sample point is the point in time at which the bus level is read and interpreted as the value of the corresponding bit. It lies between PHASE SEG1 and PHASE SEG2.

The quantity INFORMATION PROCESSING TIME is defined as the time required to convert the electrical state of the bus, as read at the SAMPLE POINT, into the corresponding bit value.
All bit segments are multiples of the TIME QUANTUM, a time unit derived from the local oscillator. It is typically obtained by applying a prescaler to a clock whose period is the MINIMUM TIME QUANTUM:

TIME QUANTUM = m * MINIMUM TIME QUANTUM

where m is the value of the prescaler. The TIME QUANTUM is the minimum resolution in the definition of the bit time and the maximum error assumed for the bit-oriented synchronization protocol. The segments are defined as follows: SYNC SEG is 1 TIME QUANTUM long; PROP SEG and PHASE SEG1 are programmable between 1 and 8 TIME QUANTA; PHASE SEG2 is the maximum between PHASE SEG1 and the INFORMATION PROCESSING TIME, which must always be less than or equal to 2 TIME QUANTA.
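As an illustration of these relations, the following sketch derives the resulting bit rate and sample point position from a hypothetical set of settings (the oscillator frequency, prescaler and segment lengths are assumptions chosen only for the example, not tied to any particular controller):

```python
def bit_timing(f_osc_hz, prescaler, prop_seg, phase_seg1, phase_seg2):
    """Derive nominal bit time, bit rate and sample point from the segment lengths.

    Segments are expressed in TIME QUANTA; SYNC_SEG is fixed at 1 quantum.
    """
    tq = prescaler / f_osc_hz                        # TIME QUANTUM = m * MINIMUM TIME QUANTUM
    quanta = 1 + prop_seg + phase_seg1 + phase_seg2  # SYNC + PROP + PHASE1 + PHASE2
    bit_time = quanta * tq
    bit_rate = 1.0 / bit_time
    # the sample point lies between PHASE_SEG1 and PHASE_SEG2
    sample_point = (1 + prop_seg + phase_seg1) / quanta
    return bit_time, bit_rate, sample_point

# Example: 16 MHz oscillator, prescaler 2 -> 125 ns quantum; 16 quanta per bit -> 500 kbit/s.
bt, br, sp = bit_timing(16e6, 2, prop_seg=7, phase_seg1=4, phase_seg2=4)
print(f"bit time {bt*1e6:.2f} us, bit rate {br/1e3:.0f} kbit/s, sample point {sp:.0%}")
```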
The synchronization protocol works as follows. Two types of synchronization are defined: hard synchronization and resynchronization.

- Hard synchronization takes place at the beginning of the frame, when the start of frame bit (see the frame definition in the following section) changes the state of the bus from recessive to dominant. Upon detection of the corresponding edge, the bit time is restarted at the end of the sync segment; therefore, the edge of the start bit lies within the sync segment of the restarted bit time.
Figure 2.2: Segments of the nominal bit time (in time quanta), position of the sample point and phase error.
- Resynchronization takes place during transmission. The phase segments are shortened or lengthened so that the following bit edge falls within the SYNC SEG portion of the following bit time: in detail, PHASE SEG1 may be lengthened or PHASE SEG2 may be shortened. Damping is applied to the synchronization protocol: the amount of lengthening or shortening of the phase buffer segments has an upper bound given by the programmable parameter RESYNCHRONIZATION JUMP WIDTH (between 1 and min(4, PHASE SEG1) TIME QUANTA).
Synchronization information may only be derived from transitions from one bit value to the other. Therefore, the possibility of resynchronizing a bus unit to the bit stream during a frame depends on the property that a maximum interval of time exists between any two bit transitions (enforced by the bit stuffing protocol).
The device designer may program the bit-timing parameters in the CAN controller by means of the appropriate registers. Note that, depending on the size of the propagation delay segment, the maximum possible bus length at a given data rate (or the maximum possible data rate at a given bus length) can be determined.
ISO 11898-2
ISO 11898-2 is the most used physical layer standard for CAN networks. The data rate is defined up to 1 Mbit/s, with a maximum bus length of 40 m at 1 Mbit/s. The high-speed standard specifies a two-wire differential bus whereby the number of nodes is limited by the electrical bus load. The two wires are identified as CAN_H and CAN_L. The characteristic line impedance is 120 Ohm and the common mode voltage ranges from -2 V on CAN_L to +7 V on CAN_H. The nominal propagation delay of the two-wire bus line is specified at 5 ns/m. For automotive applications, SAE published the SAE J2284 specification. For industrial and other non-automotive applications, the system designer may use the CiA 102 recommendation, which defines the bit timing for rates from 10 kbit/s to 1 Mbit/s and also provides recommendations for bus lines, connectors and pin assignment.
ISO 11898-3
This standard is mainly used for body electronics in the automotive industry. Since a short network was assumed for this specification, the problem of signal reflection is not as important as for long bus lines, which makes the use of an open bus line topology possible. Bus drivers with very low power consumption can be used, and the bus topology is no longer limited to a linear structure. In case of an electrical failure of one of the bus lines, it is possible to transmit data asymmetrically over the remaining line. ISO 11898-3 defines data rates up to 125 kbit/s, with the maximum bus length depending on the data rate used and on the bus load. Up to 32 nodes per network are specified. The common mode voltage ranges between -2 V and +7 V, and the power supply is defined at 5 V. The fault-tolerant transceivers support the complete error management, including the detection of bus errors and automatic switching to asymmetrical signal transmission.
SAE J2411
An unshielded single wire is defined as the bus medium. A linear bus topology structure is not necessary. The standard includes a selective node sleep capability, which allows regular communication to take place among several nodes while others are left in a sleep state.
Others
Fiber-optic transmission of CAN signals is not standardized. With optical media, the recessive level is represented by dark and the dominant level by light. Because of the directed coupling into the optical media, the transmitting and receiving lines must be provided separately, and each receiving line must be externally coupled with each transmitting line in order to ensure bit monitoring. A star coupler can implement this. The use of a passive star coupler is possible only with a small number of nodes, so this kind of network is limited in size. The extension of a CAN network with optical media is limited by the light power, the power attenuation along the line and the star coupler, rather than by the signal propagation as in electrical lines. Advantages of optical media are emission- and immission-free transmission and complete galvanic decoupling. The electrically neutral behavior is important for applications in explosive or electromagnetically disturbed environments.
Bus length
The maximum achievable bus length essentially depends on the propagation delays along the transmission path, contributed by the CAN controller (50 ns to 62 ns), the optocoupler, if present (40 ns to 140 ns), the transceiver (120 ns to 250 ns), and the cable (about 5 ns/m).
For short, high speed networks, the biggest limitation to bus length is the transceiver propagation delay. The parameters of the electrical medium become important when the bus length is increased: signal propagation, line resistance and wire cross sections are factors when dimensioning a network. In order to achieve the highest possible bit rate at a given length, a high signal speed is required. Figure 2.3 plots the correspondence between bus length and bit rate when the delays of the controller, the optocoupler and the transceiver add up to 250 ns, the cable delay is 5 ns/m and the bit time is divided into, respectively, 21, 17 or 13 TIME QUANTA, of which 8 (the maximum allowed) represent the propagation delay.
Figure 2.3: Relationship between bus length and bit rate for some possible configurations.
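The curves of Figure 2.3 can be approximated with the following sketch, which assumes, as stated above, 250 ns of cumulative device delay, 5 ns/m of cable delay and at most 8 of the bit-time quanta assigned to the propagation segment. It is a simplified model for illustration, not a dimensioning rule:

```python
def max_bus_length_m(bit_rate_bps, quanta_per_bit, prop_quanta=8,
                     device_delay_s=250e-9, cable_delay_s_per_m=5e-9):
    """Maximum bus length for a given bit rate and bit-time partition.

    The propagation segment must cover twice the one-way delay
    (device delays plus cable delay), as required for in-bit arbitration.
    """
    bit_time = 1.0 / bit_rate_bps
    prop_seg = bit_time * prop_quanta / quanta_per_bit   # time reserved for propagation
    one_way_budget = prop_seg / 2 - device_delay_s       # what is left for the cable
    return max(0.0, one_way_budget / cable_delay_s_per_m)

for rate_kbps in (1000, 500, 250, 125):
    print(rate_kbps, "kbit/s ->", round(max_bus_length_m(rate_kbps * 1000, 17)), "m")
```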
Some reference (not mandatory) value pairs for bus length and transmission speed are shown in Table 2.1 ([?]) and represented in Figure 2.3 as plots. The CANopen consortium has a similar table of correspondences (see Chapter ??).

Table 2.1: Typical transmission speeds and corresponding bus lengths (CANopen)
In addition, for long bus lines the voltage drops over the length of the line. The necessary wire cross section is calculated from the permissible voltage drop of the signal level between the two nodes farthest apart in the system and from the overall input resistance of all connected receivers. The permissible voltage drop must be such that the signal level can be reliably interpreted at any receiving node. The consideration of electromagnetic compatibility and the choice of cables and connectors are also among the tasks of a system integrator. Table 2.2 shows possible cable cross sections and types for selected network configurations.
Bus termination
Electrical signals on the bus are reflected at the ends of the electrical line unless countermeasures are taken. For a node to read the bus level correctly, it is important that signal reflections are avoided. This is done by terminating the bus line with a termination resistor at both ends of the bus and by avoiding unnecessarily long stub lines. The highest possible product of transmission rate and bus length is achieved by keeping as close as possible to a single line structure and by terminating both ends of the line. Specific recommendations can be found in the corresponding standards (e.g., ISO 11898-2 and -3). The method of terminating the CAN hardware varies depending on the physical layer: High-Speed, Low-Speed, Single-Wire, or Software-Selectable. For High-Speed CAN, both ends of the pair of signal wires (CAN_H and CAN_L) must be terminated. The termination resistors on a cable should match the nominal impedance of the cable: ISO 11898 requires a cable with a nominal impedance of 120 Ohm, and therefore 120 Ohm resistors should be used for termination. If multiple devices are placed along the cable, only the devices at the ends of the cable need termination resistors. Figure 2.4 gives an example of how to terminate a high-speed network.
For Low-Speed CAN, each device on the network needs a termination resistor for each data line: R(RTH) for CAN_H and R(RTL) for CAN_L. Unlike High-Speed CAN, Low-Speed CAN requires termination at the transceiver rather than on the cable. Figure 2.5 indicates where the termination resistors should be placed on a low-speed network (as a reference, the National Instruments Single-Wire CAN hardware includes a built-in 9.09 kOhm load resistor).
Figure 2.4: Terminating a High-Speed Network (120 Ohm resistors at both ends of the CAN_H/CAN_L pair).
Figure 2.5: Placement of Termination Resistors on a Low-Speed Network.
It is possible to overcome the limitations of the basic line topology by using repeaters, bridges or gateways. A repeater transfers an electrical signal from one physical bus segment to another segment: the signal is only refreshed, and the repeater can be regarded as a passive component comparable to a cable. The repeater divides a bus into two physically independent segments, which causes an additional signal propagation time; logically, however, it is still just one bus system. A bridge connects two logically separated networks at the data link layer (OSI layer 2), so that the CAN identifiers only need to be unique within each of the two bus systems. Bridges implement a storage function and can forward messages, or parts thereof, independently and with a time delay. Bridges differ from repeaters in that they forward only those messages that are not local, while repeaters forward all electrical signals, including the CAN identifier. A gateway provides the connection of networks with different higher-layer protocols; it therefore performs the translation of protocol data between two communication systems. This translation takes place at the application layer (OSI layer 7).
[Figure: nominal CAN_H and CAN_L output and input voltage ranges, and differential voltage, for the dominant and recessive bus levels.]
ISO 11898-2 transceivers must withstand voltages on the CAN bus inputs from -3 V to +32 V and transient voltages from -150 V to +100 V. Table 2.3 shows the major ISO 11898-2 electrical requirements.

Table 2.3: Acceptable voltage ranges for CAN transmitters and receivers
OVERLOAD FRAMEs are used for flow control, to request an additional time delay before the transmission of a DATA or REMOTE frame.
Identifier field
The CAN protocol requires that all contending messages have a unique identifier. The
identifier field consists of 11 (+1) bits in standard format and 29 (+3) bits in extended
format, following the scheme of Figure 2.8. In both cases, the field starts with the 11
bits (the most significant bits, in the extended format) of the identifier, followed by
the RTR (Remote Transmission Request) bit in the standard format and by the SRR
(Substitute Remote Request) in the extended format. The RTR bit distinguishes data
frames from remote request frames. It is dominant for data frames, recessive for re-
mote frames. The SRR is only a placeholder (always recessive) for guaranteeing the
deterministic resolution of the arbitration between standard and extended frames.
The extended frame continues with a single IDE bit (IDentifier Extension, always
recessive), then the remaining 18 least significant identifier bits and finally, the RTR bit.
The IDE bit is part of the control field in standard frames (where it is always dominant).
Figure 2.8: Arbitration and control fields in the standard (11-bit identifier) and extended (29-bit identifier) frame formats.
Control field
The control field contains 6 bits; the first two bits are reserved or predefined in content. In the standard message format the first bit is the IDE (Identifier Extension) bit, followed by a reserved bit. The IDE bit is dominant in the standard format and recessive in the extended format, and ensures the deterministic resolution of the contention (in favor of standard frames) when the first eleven identifier bits of two messages (one standard, one extended) are the same. In the extended format there are two reserved bits. The standard specifies that reserved bits are to be sent as dominant, but receivers accept any value (dominant or recessive). The following four bits define the length of the data content in bytes (Data Length Code, or DLC). With the usual convention that a dominant bit is read as 0 and a recessive bit as 1, the four DLC bits are the unsigned binary coding of the length.
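As a small illustration (a sketch, not part of the standard text), the DLC decoding can be written as follows, with dominant read as 0 and recessive as 1; DLC values above 8, which are not allowed for data frames, are clamped to 8 here, the common controller behavior:

```python
def dlc_to_length(dlc_bits):
    """Interpret the four DLC bits (DLC3..DLC0, dominant=0, recessive=1) as a data length."""
    value = 0
    for b in dlc_bits:
        value = (value << 1) | b
    return min(value, 8)   # CAN 2.0 data frames carry at most 8 data bytes

assert dlc_to_length([1, 0, 0, 0]) == 8   # r d d d -> 8 data bytes
assert dlc_to_length([0, 1, 1, 0]) == 6
```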
Interframe space
Data frames and remote frames are separated from preceding frames by an INTERFRAME SPACE (at least 3 recessive bits, the intermission field) on the bus.
The frame segments START OF FRAME, ARBITRATION FIELD, CONTROL
FIELD, DATA FIELD and CRC SEQUENCE are subject to bit stuffing. The remain-
ing bit fields of the DATA FRAME or REMOTE FRAME (CRC DELIMITER, ACK
FIELD, and END OF FRAME) are of fixed form and not stuffed.
In a REMOTE FRAME, used to request the transmission of a message:

- the identifier field indicates the identifier of the requested message;
- the data field is always empty (0 bytes);
- the DLC field indicates the data length of the requested message (not of the transmitted one);
- the RTR bit in the arbitration field is always set to recessive.
An ongoing transmission cannot be preempted by messages queued after the transmission has started. The CAN bus [?] is essentially a wired AND channel connecting all nodes. The media access protocol works by alternating contention and transmission phases. The time axis is divided into slots, which must be larger than or equal to the time it takes for the signal to travel back and forth along the channel. The contention and transmission phases take place during the digital transmission of the frame bits.
If a node wishing to transmit finds the shared medium in an idle state, it waits for the next slot and starts an arbitration phase by issuing a start-of-frame bit. At this point, each node with a message to be transmitted (e.g., a message placed in a peripheral register called TxObject) starts racing for access to the shared medium by serially transmitting the identifier (priority) bits of the message in the arbitration slots, one bit for each slot, starting from the most significant. Collisions among identifier bits are resolved by the logical AND semantics: if a node reads its priority bits back from the medium without any change, it realizes it is the winner of the contention and is granted access for transmitting the rest of the message, while the other nodes switch to a listening mode. If, instead, one of the bits is changed when reading it back from the medium, this means that a higher priority message (with a dominant bit) is contending for the medium, and the node withdraws.
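A minimal model of the contention phase (a sketch; real controllers perform this bit by bit in hardware, interleaved with the synchronization described earlier) shows why the wired-AND semantics always lets the lowest identifier win:

```python
def arbitrate(contending_ids, id_bits=11):
    """Simulate bitwise CAN arbitration: dominant (0) wins on each identifier bit."""
    still_in = set(contending_ids)
    for bit in range(id_bits - 1, -1, -1):                     # MSB of the identifier first
        bus_level = min((mid >> bit) & 1 for mid in still_in)  # wired AND: any 0 forces 0
        # nodes that wrote a recessive bit (1) but read a dominant bit (0) withdraw
        still_in = {mid for mid in still_in if (mid >> bit) & 1 == bus_level}
    (winner,) = still_in
    return winner

assert arbitrate({0x2A8, 0x180, 0x340}) == 0x180    # the lowest identifier wins
```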
[Figure: mask-based acceptance filtering. Each incoming identifier is compared with the programmed filters only in the bit positions selected by the mask (1 = compare, 0 = don't care); on a match, the frame is stored in the corresponding RxObject.]
The CRC SEQUENCE is computed by dividing the frame bit stream (from the START OF FRAME through the data field, without stuff bits), interpreted as a polynomial, by the generator polynomial

X^15 + X^14 + X^10 + X^8 + X^7 + X^4 + X^3 + 1.

The remainder of this division is the CRC SEQUENCE portion of the frame. Details on the computation of the CRC SEQUENCE field (including the generating algorithm) can be found in the Bosch specification. The CAN CRC has the following properties: it detects all errors with 5 or fewer modified bits, all burst errors up to 15 bits long, and all errors affecting an odd number of bits. The specification states that multi-bit errors outside these classes (6 or more disturbed bits or bursts longer than 15 bits) go undetected with a probability of 3x10^-5. Unfortunately, this evaluation does not take into account the effect of bit stuffing. It is shown in [?] that, in reality, the protocol is much more vulnerable to bit errors, and even 2-bit errors can go undetected. More details will be provided in the Reliability chapter.
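The following sketch is a bit-serial transcription of the shift-register formulation used in the specification (written for clarity, not speed); it computes the 15-bit CRC over a destuffed bit stream:

```python
CAN_CRC15_POLY = 0x4599  # x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1 (x^15 term implicit)

def crc15(bits):
    """Compute the 15-bit CRC SEQUENCE over a destuffed bit stream."""
    crc = 0
    for b in bits:
        crc_next = b ^ ((crc >> 14) & 1)   # incoming bit XOR current register MSB
        crc = (crc << 1) & 0x7FFF          # shift left, keep 15 bits
        if crc_next:
            crc ^= CAN_CRC15_POLY
    return crc

# Usage: feed the bits of SOF, arbitration, control and data fields (before stuffing).
print(hex(crc15([1, 0, 1, 1, 0, 0, 1, 0])))
```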
2.5.2 Acknowledgement
The protocol requires that receivers acknowledge reception of the message by changing
the content of the ACK FIELD. The ACK FIELD is two bits long and contains the ACK
SLOT and the ACK DELIMITER. In the ACK FIELD the transmitting station sends
two recessive bits. All receiver nodes that detect a correct message (after CRC checks),
inform the sender by changing the recessive bit of the ACK SLOT into a dominant bit.
Error conditions detected by the protocol include the following.

- BIT ERROR. A unit that is sending a bit on the bus also monitors the bus. A BIT ERROR is detected at the bit time in which the monitored bit value differs from the bit value that is sent. Exceptions are the recessive bits sent as part of the arbitration process or of the ACK SLOT.

- CRC ERROR. The CRC computed by the receiver differs from the one stored in the message frame.

- FORM ERROR. A fixed-form bit field contains one or more illegal bits. (Note that, for a receiver, a dominant bit during the last bit of END OF FRAME is not treated as a FORM ERROR; this is a possible cause of inconsistent message omissions and duplicates, as further explained in the chapter on reliability.)
[Figure: structure of the error frame. An active error flag (6 dominant bits) or a passive error flag (6 recessive bits) may be followed by superimposed error flags from other stations and by the 8-bit error delimiter; error passive transmitters also append an 8-bit suspend transmission field. After the interframe space, a new transmission may start.]
The sequence of dominant bits that can actually be monitored on the bus results from the superposition of the different ERROR FLAGs transmitted by the individual stations. The total length of this sequence varies between a minimum of six and a maximum of twelve bits. The PASSIVE ERROR FLAG sent by error passive nodes has no effect on the bus; however, the signalling node still has to wait for six consecutive bits of equal polarity, beginning at the start of the PASSIVE ERROR FLAG, before continuing.
The recovery time from the detection of an error to the start of the next message is at most 31 bit times, if there is no further error.
With respect to fault confinement, each unit can be in one of three states:

- Error active: units in this state are assumed to function properly. They can transmit on the bus and signal errors with an ACTIVE ERROR FLAG.

- Error passive: units suspected of faulty behavior (in transmission or reception). They can still transmit on the bus, but their error signalling capability is restricted to the transmission of a PASSIVE ERROR FLAG.

- Bus off: units that are very likely corrupted and cannot have any influence on the bus (e.g., their output drivers are switched off).
Units change their state according to the value of two integer counters, the TRANSMIT ERROR COUNT and the RECEIVE ERROR COUNT, which give a measure of the likelihood of faulty behavior in transmission and reception, respectively. The state transitions are represented in Figure 2.13.
The transition from bus off to error active is subject to the additional condition that 128 occurrences of 11 consecutive recessive bits have been monitored on the bus.
The counters are updated according to the rules summarized in Table 2.4.
A special condition may occur during start-up or wake-up. If only one node is online during start-up and this node transmits a message, it will get no acknowledgment, detect an error and repeat the message. For this reason, it can become error passive, but not bus off.
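The resulting behavior can be summarized with the following sketch of the fault-confinement state machine; only the state thresholds are modeled, while the detailed increment and decrement rules of the counters (Table 2.4) are left out:

```python
ERROR_ACTIVE, ERROR_PASSIVE, BUS_OFF = "error active", "error passive", "bus off"

class FaultConfinement:
    """Sketch of the CAN fault-confinement state machine (threshold values from the specification)."""

    def __init__(self):
        self.tec = 0            # TRANSMIT ERROR COUNT
        self.rec = 0            # RECEIVE ERROR COUNT
        self.state = ERROR_ACTIVE

    def update(self):
        """Re-evaluate the state after the counters have been incremented or decremented."""
        if self.state == BUS_OFF:
            return              # only the recovery sequence can leave bus off
        if self.tec > 255:
            self.state = BUS_OFF
        elif self.tec > 127 or self.rec > 127:
            self.state = ERROR_PASSIVE
        else:
            self.state = ERROR_ACTIVE

    def bus_off_recovery(self):
        """Called after 128 occurrences of 11 consecutive recessive bits have been monitored."""
        self.tec = self.rec = 0
        self.state = ERROR_ACTIVE
```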
Figure 2.13: Fault confinement states (error active, error passive, bus off) and their transitions.

Table 2.4: Rules for updating the TRANSMIT ERROR COUNT and RECEIVE ERROR COUNT.
Chapter 3
Time Analysis of CAN Messages
The CAN protocol adopts a collision detection and resolution scheme, where the mes-
sage to be transmitted is chosen according to its identifier. When multiple nodes need
to transmit over the bus, the lowest identifier message is selected for transmission. This
MAC arbitration protocol allows encoding the message priority into the identifier field
and implementing priority-based real-time scheduling of periodic and aperiodic mes-
sages. Predictable scheduling of real-time messages on the CAN bus is then made
possible by adapting existing real-time scheduling algorithms to the MAC arbitration
protocol or by superimposing a higher-level purposely designed scheduler.
Starting from the early 1990s, solutions have been derived for the worst case latency evaluation. The analysis method, commonly known in the automotive world as Tindell's analysis (from [14]), is very popular in the academic community and had a substantial impact on the development of industrial tools and automotive communication architectures.
The original paper has been cited more than 200 times; its results influenced the design of on-chip CAN controllers like the Motorola msCAN and have been included in the development of the early versions of Volcano's Network Architect tool. Volvo used these tools and the timing analysis results from Tindell's theory to evaluate communication latency in several car models, including the Volvo S80, XC90, S60, V50 and S40 [5].
The analysis was later found to be flawed, although only under quite high network load conditions that are very unlikely to occur in practical systems. Davis and Bril provided evidence of the problem, as well as a set of formulas for the exact or approximate evaluation of the worst case message response times (or latencies), in [6].
The real problem with the existing analysis methods, however, is a set of assumptions on the architecture of the CAN communication stack that are seldom true in practical automotive systems. These assumptions include the existence of a perfect priority-based queue at each node for the outgoing messages, the availability of one output register for each message (or the preemptability of the transmit registers), and the immediate (zero-time) or very fast copy of the highest priority message from the software queue used at the driver or middleware level to the transmit register(s), or TxObjects, at the source node as soon as one is made available by the transmission of a message.
When these assumptions do not hold, as unfortunately is the case for many systems, the latency of the messages can be significantly larger than what is predicted by the analysis. A relatively limited number of studies have attempted to analyze the effect of the additional priority inversion and blocking caused by limited TxObject availability at the CAN adapter. The problem was first discussed in [16], where two peripheral chips (Intel 82527 and Philips 82C200) are compared with respect to the availability of TxObjects. The possibility of unbounded priority inversion is shown for the single TxObject implementation with preemption and message copy times larger than the time interval required for the transmission of the interframe bits. The effect of multiple TxObject availability is discussed in [11], where it is shown that, even when the hardware transmits the messages in the TxObjects according to the priorities (identifiers) of the messages, the availability of a second TxObject is not sufficient to bound priority inversion. It is only by adding a third TxObject, under these assumptions, that priority inversion can be avoided.
However, sources of priority inversion go well beyond the limited availability of
TxObjects at the bus adapter. In this chapter we review the time analysis of the ideal
system configuration and later, the possible causes of priority inversion, as related to
implementation choices at all levels in the communication stack. We will detail and
analyze all possible causes of priority inversion and describe their expected impact on
message latencies. Examples derived from actual message traces will be provided to
show typical occurrences of the presented cases.
3.1 Ideal behavior and worst case response time analysis

3.1.1 Notation
We assume the system is composed of a set of periodic or sporadic messages with queuing jitter; that is, messages are enqueued at periodic time instants, or two consecutive instances of the same message are enqueued with a minimum interarrival time. In addition, the actual queuing instants can be delayed with respect to the reference periodic event stream by a variable amount of time, upper bounded by the queuing jitter.
For the purpose of schedulability analysis, a periodic or sporadic message stream M_i is characterized by the tuple

M_i = {m_i, id_i, N_i, T_i, J_i, D_i}
where m_i is the length of the message in bits. CAN frames can only contain a data content that is a multiple of 8 bits; hence, m_i is the smallest multiple of 8 bits that is larger than or equal to the actual payload. id_i is the CAN identifier of the message, N_i the index of its sending node, T_i its period or minimum interarrival time, J_i its queuing jitter (defined for periodic messages; J_i = 0 if the message is sporadic) and D_i its deadline (relative to the release time of the message). In almost all cases of practical interest it must be D_i < T_i, because a new instance of a message will overwrite the data content of the old one if it has not been transmitted yet. Messages are indexed according to their identifier priority, that is, i < j if and only if id_i < id_j. B_r is the bit rate of the bus and p is the number of protocol bits in a frame: for the standard frame format (11-bit identifier) p = 46, for the extended format (29-bit identifier) p = 65. In the following we assume the standard frame format, but the formulas can easily be adapted to the other case. The worst case message length needs to account for bit stuffing (only 34 of the 46 protocol bits are subject to stuffing), and e_i is the time required to transmit the message, provided that it wins the contention on the bus:
e_i = (m_i + p + ⌊(m_i + 34) / 4⌋) / B_r
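A direct transcription of this formula, using the notation just introduced (the function name and arguments are illustrative), is:

```python
def transmission_time(payload_bytes, bit_rate_bps, extended=False):
    """Worst-case frame transmission time e_i, including worst-case bit stuffing."""
    m = 8 * payload_bytes                 # data bits (payload padded to whole bytes)
    p = 65 if extended else 46            # protocol bits per frame
    stuff_bits = (m + 34) // 4            # worst case: one stuff bit every four stuffable bits
    return (m + p + stuff_bits) / bit_rate_bps

# Example: an 8-byte standard frame at 500 kbit/s (134 bits in the worst case).
print(transmission_time(8, 500e3) * 1e6, "us")
```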
Finally, wi is the queuing delay, that is, the worst case interval, from the time
message Mi is enqueued (at the middleware level) to the time it starts transmission on
the bus.
Please note that, in reality, messages are enqueued by tasks. In most cases, it is
actually a single task, conceptually at the middleware level, that is called TxTask. If
this is the case, the queuing jitter of each message can be assumed to be the worst case
response time of the queuing TxTask.
In modeling the scheduling problem for CAN, one assumption is commonly used.
Each time a new bus arbitration starts, the CAN controller at each node enters the high-
est priority message that is available. In other words, it is assumed that the peripheral
TxObjects always contain the highest priority messages in the middleware queue, and
the controller selects the highest priority (lowest id) among them.
If this is the case, the worst case response time of Mi is given by
Ri = Ji + wi + ei (3.1)
The queuing delay w_i consists of two terms: a blocking term B_i, due to the possible ongoing transmission of a lower priority message when M_i is queued, and an interference term I_i, due to the higher priority messages that are transmitted before M_i (see Figure 3.1).
The queuing delay w_i is part of the level-i busy period, defined as follows: a contiguous interval of time that starts at a time ts when a message of priority i or higher is queued for transmission and there is no other message of priority i or higher, queued before ts, still waiting to be transmitted. During the busy period, only messages with priority i or higher are transmitted. It ends at the earliest time te when the bus becomes idle or a message with priority lower than i is transmitted.
Figure 3.1: Worst case response time, busy period and critical instant for M_i.
The worst-case queuing delay for message M_i occurs for some instance of M_i queued within a level-i busy period that starts immediately after the longest lower priority frame has started its transmission on the bus. The busy period that corresponds to the worst case response time must start at the critical instant for M_i [10], that is, when M_i is queued at the same time as all the other higher priority messages in the network. Subsequently, these messages are enqueued at their highest possible rate (if sporadic). The situation is represented in Figure 3.1.
Formally, this requires the evaluation of the following fixed-point formula for the
worst-case queuing delay:
w_i = B_i + Σ_{k∈hp(i)} ⌈(w_i + J_k + τ_bit) / T_k⌉ e_k        (3.2)
where hp(i) is the set of messages with priority higher than i and τ_bit is the time for the transmission of one bit on the network. The solution of the fixed-point equation can be computed considering that the right hand side is a monotonic non-decreasing function of w_i, and can be found iteratively using the following recurrence:
w_i^(m+1) = B_i + Σ_{k∈hp(i)} ⌈(w_i^(m) + J_k + τ_bit) / T_k⌉ e_k        (3.3)

A suitable starting value is w_i^(0) = B_i. The recurrence iterates until either J_i + w_i^(m+1) + e_i > D_i, in which case the message is not schedulable, or w_i^(m+1) = w_i^(m), in which case the worst-case queuing delay is w_i^(m) and the worst-case response time follows from (3.1).
The flaw in the above analysis is that, given the constraint D_i <= T_i, it implicitly assumes that, if M_i is schedulable, then the priority level-i busy period will end at or before T_i. Please note that this assumption fails only when the network is loaded to the point that the response time of M_i is very close to its period. Given the typical load of CAN networks (seldom beyond 65%), this is almost never the case.
When there is such a possibility, the analysis needs to be corrected as follows.
To explain the problem and the fix, we refer to the same example in [6] with three
messages (A, B and C) with very high utilization. All messages have a transmission time of 1 unit and periods of, respectively, 2.5, 3.5 and 3.5 units. The critical instant is represented in Figure 3.2. Because of the impossibility of preempting the transmission of the first instance of C, the second instance of message A is delayed and, as a result, it pushes back the transmission of the second instance of C. The result is that the worst case response time for message C does not occur for its first instance (3 time units), but for the second one, with an actual response time of 3.5 time units. The fix can be found by observing that the worst case response time always occurs inside the level-i busy period. Hence, to find the correct worst case, the formula to be applied is a small modification of (3.2) that checks all the instances q of message M_i that fall inside the busy period starting from the critical instant. Analytically, the worst-case queuing delay for the q-th instance in the busy period is:
w_i(q) = B_i + q e_i + Σ_{k∈hp(i)} ⌈(w_i(q) + J_k + τ_bit) / T_k⌉ e_k        (3.4)
and its response time is R_i(q) = J_i + w_i(q) - q T_i + e_i. The worst-case response time R_i is the maximum of R_i(q) over all the instances q = 0, 1, 2, ... that fall within the level-i busy period.
Figure 3.2: An example showing the need for a fix in the evaluation of the worst-case response time.
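The following sketch puts Eqs. (3.2)-(3.4) together: it computes the length of the level-i busy period, evaluates the queuing delay of every instance of M_i inside it, and returns the largest response time. The list-based message description and the helper names are illustrative, and the code assumes a total bus utilization below 100% so that the fixed-point iterations converge.

```python
import math

def worst_case_response_time(i, e, T, J, B, tau_bit):
    """Worst-case response time of message i (index 0 = highest priority).

    e, T, J are lists of transmission times, periods and jitters, indexed by priority;
    B[i] is the blocking term of M_i; tau_bit is the transmission time of one bit.
    """
    hp = range(i)                                   # higher priority messages

    def fixed_point(init, f):
        x = init
        while True:
            x_next = f(x)
            if x_next == x:
                return x
            x = x_next

    # length of the level-i busy period (includes M_i itself and all higher priorities)
    t_busy = fixed_point(e[i], lambda t: B[i] + sum(
        math.ceil((t + J[k]) / T[k]) * e[k] for k in list(hp) + [i]))
    n_instances = math.ceil((t_busy + J[i]) / T[i])

    R = 0.0
    for q in range(n_instances):
        w_q = fixed_point(B[i] + q * e[i], lambda w: B[i] + q * e[i] + sum(
            math.ceil((w + J[k] + tau_bit) / T[k]) * e[k] for k in hp))
        R = max(R, J[i] + w_q - q * T[i] + e[i])
    return R

# The example of Figure 3.2: three messages with e = 1 and periods 2.5, 3.5, 3.5.
e, T, J, B = [1, 1, 1], [2.5, 3.5, 3.5], [0, 0, 0], [0, 0, 0]
print(worst_case_response_time(2, e, T, J, B, tau_bit=0.001))   # 3.5, not 3.0
```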
Please note that the previous analysis methods are based on the assumption that the
highest priority active message at each node is considered for bus arbitration. This sim-
ple statement has many implications. Before getting into details, however, the defini-
tion of priority inversion, as opposed to blocking, must be provided. Blocking is defined
as the amount of time a message Mi must wait because of the ongoing transmission of
a lower priority message on the network (at the time Mi is queued). Blocking cannot
be avoided and derives from the impossibility of preempting an ongoing transmission.
Priority inversion is defined as the amount of time Mi must wait for the transmission
of lower priority messages because of other causes, including queuing policies, active
waits or other events. Priority inversion may occur after the message is queued.
The configuration and management of the peripheral transmit and receive objects is of utmost importance in the evaluation of the priority inversion at the adapter and of the worst case blocking times for real-time messages. A set of CAN controllers is considered in this chapter with respect to the availability of message objects, the priority ordering when multiple messages are ready for transmission inside the TxObjects, and the possibility of aborting a transmission request and changing the content of a TxObject. The last option can be used when a higher priority message becomes ready and all the TxObjects are used by lower priority data.
There is a large number of CAN controllers available on the market. In this chapter, seven controller types, from major chip manufacturers, are analyzed. The results are summarized in Table 3.1. The chips listed in the table are MCUs with integrated controllers or simple bus controllers; when both options are available, controller product codes are shown in parentheses. All controllers allow both polling-based and interrupt-based management of both transmission and reception of messages.
Some of these chips have a fixed number of TxObjects and RxObjects; others give the programmer freedom in allocating the available objects to the role of transmission or reception registers. When multiple messages are available in the TxObjects of the adapter, most chips select for transmission the object with the lowest index, not necessarily the one containing the message with the lowest identifier. Finally, in most chips, a message that is currently inside a TxObject can be evicted from it (the object preempted, or the transmission request aborted), unless the transmission is actually taking place. In the following sections we will see how these decisions can affect the timing predictability and become a possible source of priority inversion.
Figure 3.3: Static allocation of TxObjects (each outgoing message stream is statically assigned to a TxObject, with buffer positions ordered by message identifier).
This solution is possible in some cases. In [16] it is argued that, since the total number of available objects for transmission and reception of messages can be as high as 14 (for the Intel 82527) or 32 (for most TouCAN-based devices), the device driver can assign a buffer to each outgoing stream and preserve the identifier order in the assignment of buffer numbers. Unfortunately, this is not always possible in current automotive architectures, where message input is typically polling-based rather than interrupt-based and a relatively large number of buffers must be reserved for input streams in order to avoid message losses by overwriting. Furthermore, for some ECUs, such as gateways, the number of outgoing streams can be very large. Of course, in the development of automotive embedded solutions, the selection of the controller chip is not always an option, and designers should be ready to deal with all possible HW configurations.
The other possible solution, when the number of available TxObjects is not sufficient, is to put all outgoing messages, or a subset of them, in a software queue (Figure 3.4). When a TxObject is available, a message is extracted from the queue and copied into it. This is the solution used by the Vector CAN drivers. Preserving the priority order would require that:

- The queue is sorted by message priority (message CAN identifier).

- When a TxObject becomes free, the highest priority message in the queue is immediately extracted and copied into the TxObject (interrupt-driven management of message transmissions).
- If, at any time, a new message is placed in the queue and its priority is higher than the priority of any message in the TxObjects, then the lowest priority message holding a TxObject is evicted, placed back in the queue, and the newly enqueued message is copied in its place.

- Finally, messages in the TxObjects must be sent in order of their identifiers (priorities); if not, the position of the messages in the TxObjects should be dynamically rearranged.
When any of these conditions does not hold, priority inversion occurs and the worst case timing analysis fails, meaning that the actual worst case can be larger than what is predicted by Eq. (3.4). Each of these causes of priority inversion will now be analyzed in detail. Before delving into the details, however, it is necessary to recall another, more subtle cause of priority inversion that may occur even when all the previous conditions are met. This problem arises because of the necessarily finite copy time between the queue and the TxObjects.
Figure 3.4: Temporary queuing of outgoing messages (a software queue at the driver or middleware level feeds the TxObjects of the adapter).

Figure 3.5: Priority inversion at the adapter. When a message (0x340 or 0x2A8 in the examples) is evicted from its TxObject to make room for a newly arrived higher priority message, a lower priority message from another node can win the arbitration before the higher priority message is copied, and the evicted message suffers a priority inversion.
Single buffer with preemption. This case was discussed first in [16]. Figure 3.5
shows the possible cause of priority inversion. Suppose message 0x2A8 is in the only
available TxObject when higher priority message 0x180 arrives. Transmission of
0x2A8 is aborted and the message is evicted from the TxObject. However, after evic-
tion, and before 0x180 is copied, a new contention can start on the bus and, possibly,
a low priority message can win, resulting in a priority inversion for 0x2A8.
The priority inversion illustrated in Figure 3.5 can happen multiple times while a medium priority message (like 0x2A8 in our example) is enqueued and waits for transmission. The combination of these multiple priority inversions can result in a quite nasty worst case scenario. Figure 3.6 shows the sequence of events that results in
the worst case delay. The top of the figure shows the mapping of messages to nodes,
the bottom part, the timeline of the message transmission on the bus and the state of
the buffer. The message experiencing the priority inversion is labeled as M, indicating
a medium priority message.
Suppose message M arrives at the queue right after message L1 started being trans-
mitted on the network (from its node). In this case, M needs to wait for L1 to complete
its transmission. This is unavoidable and considered as part of the blocking term Bi .
Right after L1 ends its transmission, M starts being copied into the TxObject. If the
message copy time is larger than the interframe bits, a new transmission of a lower
priority message L2 from another node can start while M is being copied. Before L2
ends its transmission, a new higher priority message Hn arrives at the same node as M and evicts M from the TxObject. Unfortunately, while Hn is being copied into the buffer, another lower priority message from another node, L3, can be transmitted. This priority inversion can happen multiple times, considering that the copy of Hn can in turn be preempted by another message Hn-1, and so on, until the highest priority message from the node, H1, is written into the buffer and eventually transmitted.
Figure 3.6: Worst-case sequence of priority inversions for the single buffer case (messages H1...Hn, M and L1...Ln+4, with the corresponding buffer states).
The priority inversion may continue even after the transmission of H1: while H2 is being copied into the buffer, another lower priority message from another node can be transmitted on the bus, and so on. In the end, each message with a priority higher than M allows for the transmission of two messages with priority lower than M: one while it preempts the buffer and the other after its transmission. Consider that, during
the time in which the queue of higher priority messages is emptied (in the right part
of the scheduling timeline of Figure 3.6), a new chain of message preemptions may be
activated because of new arrivals of high priority messages. All these factors lead to a
very pessimistic evaluation of the worst case transmission times.
Please note that, if message copy times are smaller than the transmission time of the interframe bits, then it is impossible for a lower priority message to start its transmission while a message is being copied right after the transmission of another message, and the second set of priority inversions cannot possibly happen. In this case, the possibility
of additional priority inversion is limited to the event of a high priority message per-
forming preemption during the time interval in which interframe bits are transmitted.
To avoid this event, it is sufficient to disable preemption from the end of the message
transmission to the start of a new contention phase. Since the message transmission
is signalled by an interrupt, the implementation of such a policy should not be diffi-
cult. In this case, even availability of a single buffer does not prevent implementation
of priority based scheduling and the use of the feasibility formula in [15], with the
only change that, in the computation of the blocking factor Bi , the transmission time
of the interframe bits must be added to the transmission time of local messages with
lower priority. In the case of a single message buffer and copy times longer than the
transmission time of the interframe bits, avoiding buffer preemption can improve the
situation by breaking the chain of priority inversions resulting from message preemp-
tion in the first stage. However, additional priority inversion must be added considering
that lower priority messages from other nodes can be transmitted in between any two
higher priority message from the same node as Mi .
Dual buffer with preemption. In [11], the discussion of the case of single buffer management with preemption was extended to the case of two buffers. Figure 3.5 shows a priority inversion event and Figure 3.7 shows a combination of multiple priority inversions in a worst-case scenario that can lead to large delays. In the case of Figure 3.5, a priority inversion occurs when 0x340 is evicted from its TxObject while the message in the other TxObject (0x180) is finishing its transmission on the network. At this time, before the newly arrived message is copied into the TxObject, both buffers are empty and a lower priority message from a remote node can win the arbitration and be transmitted; 0x340 experiences in this case a priority inversion. To describe the effect of multiple instances of this event, the top half of Figure 3.7 represents the scheduling on the bus (the allocation of messages to nodes is the same as in Figure 3.6), while the bottom half shows the state of the two buffers. We assume the peripheral hardware selects for transmission the higher priority message between the two buffers.
The initial condition is the same as in the previous case. Message M arrives at the middleware queue right after message L1 has started being transmitted and is copied into the second available buffer. Before L1 completes its transmission, message Hn arrives. The transmitting buffer cannot be preempted, and the only possible option is to preempt the buffer holding message M. Unfortunately, during the time it takes to copy Hn into this buffer, the bus becomes available and a lower priority message from another node (L2) can be transmitted, causing a priority inversion. This scenario can repeat multiple times, considering that a new higher priority message Hn-1 can preempt M right before Hn ends its transmission, therefore allowing the transmission of another lower priority message from a remote node.
Figure 3.7: Worst-case sequence of priority inversions for the two buffer case.
When the adapter transmits from the TxObjects in order of their index rather than by message identifier, the hardware may, even if only temporarily, subvert the priority order of CAN messages. If, at some point, a higher priority message is copied into a higher-index TxObject, it will have to wait for the transmission of the lower priority messages in the lower-index TxObjects, thereby inheriting their latency. This type of priority inversion is, however, unlikely, and restricted to the case in which there is dynamic allocation of TxObjects to messages, as when the message extracted from the message queue can be copied into any TxObject of a set. In almost all cases of practical interest, there is either a 1-to-1 mapping of messages to TxObjects, or a message queue associated with a single TxObject.
Figure 3.8: Priority inversion when the TxObject cannot be revoked (message 0x6F8, already copied into the TxObject, delays the higher priority message 0x130, which experiences a priority inversion).
In this case, there is a possible priority inversion when a lower priority message (0x6F8 in the figure), enqueued by a previous instance of the TxTask, is still waiting for transmission in the TxObject when the next instance of the TxTask arrives and enqueues higher priority messages (like 0x130 in the figure). This type of priority inversion clearly violates the assumptions under which Eq. (3.4) is derived. The determination of the worst case response time of messages becomes extremely difficult because of the need to understand what the critical instant configuration is for this case. A way of separating concerns and alleviating the problem could be to separate the outgoing messages into two queues: a high priority queue, linked to the highest priority (lowest index) TxObject, and a lower priority queue, linked to a lower priority object.
Besides the possibility that the same type of priority inversion described in Figure 3.8 still occurs for one or both queues, there is also the possibility of a slight priority inversion between the messages in the two queues, because of the finite copy times between the queues and the TxObjects. In this case, a variation of the single-available TxObject case occurs.
Figure 3.9: Double software queue (a high priority and a low priority software queue, each feeding its own TxObject).
Figure 3.10 shows a CAN trace where a message from the lower priority queue manages to get in between two messages from the higher priority queue, taking advantage of the copy time from the end of the transmission of 0x138 to the copy of the following message 0x157 from the high priority queue into the TxObject. In this case (not uncommon), the copy time is larger than the transmission time of the interframe bits on the bus.
1.857316  1  11A  Rx  d 8  xx xx xx xx xx xx xx xx
1.857548  1  138  Rx  d 8  xx xx xx xx xx xx xx xx
1.857696  1  20A  Rx  d 3  xx xx xx
1.858256  1  157  Rx  d 5  xx xx xx xx xx
.....
3.877361  1  11A  Rx  d 8  xx xx xx xx xx xx xx xx
3.877597  1  138  Rx  d 8  xx xx xx xx xx xx xx xx
3.877819  1  335  Rx  d 7  xx xx xx xx xx xx xx
3.878309  1  157  Rx  d 5  xx xx xx xx xx

Figure 3.10: Trace of a priority inversion for a double software queue.
Figure 3.11: The Tx polling task introduces delays between transmissions (application-layer TxTask, message queue at the driver/middleware layers, polling task and CAN bus).

Figure 3.12: Priority inversion for polling-based output: between two activations of the polling task the bus may be idle, or low priority messages from remote nodes may get the bus ahead of the queued messages.
An example is shown in Figure 3.13. The left side of the figure shows (on the Y axis) the cumulative fraction of messages from a given medium priority stream that are transmitted with the latency values shown on the X axis. Despite the medium priority, there are instances that are delayed by more than 15 ms (the X scale is in microseconds). This is quite unusual compared to the typical shape of the latency distribution for a message with a similar priority: these messages should not be delayed by more than a few ms (Tindell's analysis for this case predicted a worst case latency of approximately 8 ms). The problem is that the node whose distribution is shown on the left hand side transmits by polling, and the period of the polling task is 2.5 ms (as revealed by the steps in the latency distribution).
Figure 3.13: Latency distribution for a message with polling-based output compared to
interrupt-driven output.
Indeed, the message is enqueued together with other messages in a node with polling-based output. The number of messages in front of it in the queue varies from 1 to 7, which is the cause of the correspondingly large worst case latency and also of the substantial jitter.
Bibliography

[1] Road vehicles - Interchange of digital information - Controller Area Network (CAN) for high-speed communication. ISO 11898, 1993.

[2] Road vehicles - Low-speed serial data communication - Part 2: Low-speed Controller Area Network (CAN). ISO 11519-2, 1994.

[5] L. Casparsson, A. Rajnak, K. Tindell, and P. Malmberg. Volcano: a revolution in on-board communications. Volvo Technology Report, 1998.

[6] R. I. Davis, A. Burns, R. J. Bril, and J. J. Lukkien. Controller Area Network (CAN) schedulability analysis: refuted, revisited and revised. Real-Time Systems, 35(3):239-272, 2007.

[10] C. L. Liu and J. W. Layland. Scheduling algorithms for multiprogramming in a hard-real-time environment. Journal of the ACM, 20(1), 1973.

[11] A. Meschi, M. Di Natale, and M. Spuri. Priority inversion at the network adapter when scheduling messages with earliest deadline techniques. In Proceedings of the Euromicro Conference on Real-Time Systems, June 1996.

[12] Microchip. MCP2515 Datasheet: Stand-Alone CAN Controller with SPI Interface. http://ww1.microchip.com/downloads/en/devicedoc/21801d.pdf

[13] Philips. P8xC592: 8-bit microcontroller with on-chip CAN. Datasheet. http://www.semiconductors.philips.com/acrobat download/datasheets/P8XC592 3.pdf

[15] K. Tindell and A. Burns. Guaranteeing message latencies on Control Area Network (CAN). In Proceedings of the 1st International CAN Conference, 1994.

[16] K. Tindell, H. Hansson, and A. J. Wellings. Analysing real-time communications: Controller Area Network (CAN). In Proceedings of the 15th IEEE Real-Time Systems Symposium (RTSS), pages 259-263, December 1994.
List of Figures

3.1 Worst case response time, busy period and critical instant for M_i.
3.2 An example showing the need for a fix in the evaluation of the worst-case response time.
3.3 Static allocation of TxObjects.
3.4 Temporary queuing of outgoing messages.
3.5 Priority inversion for the two buffer case.
3.6 Priority inversion for the single buffer case.
3.7 Priority inversion for the two buffer case.
3.8 Priority inversion when the TxObject cannot be revoked.
3.9 Double software queue.
3.10 Trace of a priority inversion for a double software queue.
3.11 The Tx polling task introduces delays between transmissions.
3.12 Priority inversion for polling-based output.
3.13 Latency distribution for a message with polling-based output compared to interrupt-driven output.