Joint Quality Management in the Supply Chain: Automotive SPICE
Automotive SPICE®
Guidelines
Copyright 2017 by Verband der Automobilindustrie e.V. (VDA), Germany
Printing house:
Henrich Druck + Medien GmbH
Schwanheimer Straße 110, 60528 Frankfurt am Main, Germany
Exclusion of liability
VDA volumes are recommendations available for general use. Anyone applying them is responsible for ensuring that they are used correctly in each case.
This VDA volume takes into account the state of the art at the time of issue. Implementation of VDA recommendations relieves no one of responsibility for their own actions. In this respect everyone acts at their own risk. The VDA and those involved in VDA recommendations shall bear no liability.
If, during the use of VDA recommendations, errors or the possibility of misinterpretation are found, it is requested that these be notified to the VDA immediately so that any possible faults can be corrected.
Copyright
This document and all of its constituent parts are subject to copyright. Use
outside of the strict limits of copyright law without the consent of the VDA is
prohibited; such use constitutes a criminal offense.
This applies in particular to copying, translation, microfilming, and storing or
processing in electronic systems.
All rights reserved. Unless specified otherwise, it is prohibited to reproduce
this document, in part or in full, to store this document electronically or by
any other means, or to transmit, photocopy, or record this document in any
way without prior written consent by the publisher.
Assessing organization: The organization which performs the assessment. Usually the lead assessor and other assessment team members are part of the assessing organization.

Assessment scope: Definition of the boundaries of the assessment, provided as part of the assessment input, encompassing the boundaries of the organizational unit for the assessment, the processes to be included, the quality level for each process to be assessed, and the context within which the processes operate. → [ISO/IEC 33001:2015, 3.2.8]

Assessment team: One or more individuals who jointly perform a process assessment. → [ISO/IEC 33001:2015, 3.2.10]

Automotive SPICE: A process assessment and reference model conformant to the requirements of ISO/IEC 33002:2015. It primarily addresses the development of embedded software-based systems in the automotive domain. It can be downloaded free of charge from www.automotivespice.com.

AUTOSAR domains: Categories used to classify electronic control units by their area of application, e.g. chassis, powertrain, telematics, body.

Capability level: Point on a scale of achievement of process capability derived from the process attribute ratings for an assessed process.

Certification body: A central body which administers the certification information of trained assessors and classifies them by their qualifications and practical experience according to a certification scheme.

Certification scheme: A set of rules and procedures used by a certification body to certify assessors.

Evidence repository: Repository for storing the evidence which has been obtained.

Feedback presentation: A process step at the end of the assessment, when the assessment team provides early feedback on the results of the assessment. It usually covers the main strengths and potential improvements. The set of provisional process capability profiles is also presented if appropriate.

HIS process scope: A selected set of processes from Automotive SPICE which are assessed (where applicable) in every assessment carried out by the automotive manufacturers represented in the HIS. Due to the termination of the HIS work, the HIS scope has been replaced by the → VDA process scope.

Organization assessed: The organizational unit which is assessed. This usually refers to projects in one or more departments in the assessed organization.

Practice level: Lowest level of granularity within the Automotive SPICE process assessment model, determined by the “base practices” and “generic practices” of the processes. Strengths and potential improvements should be traceable to this level and are derived from expectations regarding a state-of-the-art implementation of the practices. Although these expectations constitute good practices in engineering, their achievement might not be satisfied in all cases because “state of the art” is highly dependent on the context and on individual interpretation.

Process assessment model (PAM): Model suitable for the purpose of assessing a specified process quality characteristic, based on one or more process reference models. → [ISO/IEC 33001:2015, 3.3.9]

Process context: Set of factors, documented in the assessment input, that influence the judgment, comprehension, and comparability of process attribute ratings. → [ISO/IEC 33001:2015, 3.2.16]

Set of process (capability) profiles: The collective representation of the capability profiles of each process in the scope of the assessment.

Tier 1…n: The term “Tier 1…n” refers to suppliers at various levels in the supply chain. Direct suppliers to the OEM are referred to as “Tier 1”, a supplier to a Tier 1 supplier is referred to as “Tier 2”, etc.

VDA process scope: Standard set of processes to be considered in the automotive domain. The VDA scope is based on release 3.1 of the Automotive SPICE process reference and assessment model [Automotive SPICE].
VDA Scope
This process set correlates with the former HIS scope. The additional process was necessary to reflect the structural changes in the engineering processes.
The purpose of part one of the current publication is to support the assessors
in interpreting the Automotive SPICE process reference and assessment
models and rating the process attributes for the given target capability level.
Since most of the assessments in the automotive domain do not address
capability levels higher than 3, no guidelines are provided for level 4 or 5.
Chapter 1, “Application of interpretation and rating guidelines”, provides an overall guideline on rating in an assessment. It introduces a clearer definition of how to set up and consider the assessment scope and how to rate based on this assessment scope.
An integral part of the interpretation and rating guidelines are rules and recommendations addressing specific key concepts, application environments, and the different capability levels.
In chapter 2, “Key concepts and overall guidelines”, rules and recommendations related to key concepts introduced or modified with version 3.1 of Automotive SPICE are given. Furthermore, rules and recommendations for rating in specific application environments are provided.
Chapter 3, “Rating guidelines on process performance (level 1)”, relates to the process-specific outcomes, base practices, and work products associated with capability level 1. In this chapter, specific rating rules and recommendations are given for each process of the VDA Scope.
In chapter 4, “Rating guidelines on process capability level 2”, and chapter 5, “Rating guidelines on process capability level 3”, specific rating rules and recommendations for each process attribute of level 2 and 3 are given.
Ratings of L (2), L (2), F (3) yield (2+2+3) / 3 = 2.33, i.e. L (2.33).
Ratings of N (0), P (1), F (3) yield (0+1+3) / 3 = 1.33, i.e. P (1.33).
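For illustration, a minimal sketch of such an aggregation, assuming the numeric mapping shown above (N = 0, P = 1, L = 2, F = 3) and, for mapping the average back to a rating, the ISO/IEC 33020 achievement bands (N up to 15 %, P up to 50 %, L up to 85 %, F above 85 %) applied to the average expressed as a percentage of the maximum value 3:

```python
# Minimal sketch: aggregate practice ratings to a single N/P/L/F rating.
# Assumptions: numeric mapping N=0, P=1, L=2, F=3 (as in the examples
# above) and the ISO/IEC 33020 achievement bands for the way back.

VALUE = {"N": 0, "P": 1, "L": 2, "F": 3}

def aggregate(ratings):
    """Average the numeric values and map the result back to N/P/L/F."""
    avg = sum(VALUE[r] for r in ratings) / len(ratings)
    pct = avg / 3 * 100  # average as percentage of the maximum value 3
    if pct <= 15:
        label = "N"
    elif pct <= 50:
        label = "P"
    elif pct <= 85:
        label = "L"
    else:
        label = "F"
    return label, round(avg, 2)

print(aggregate(["L", "L", "F"]))  # -> ('L', 2.33)
print(aggregate(["N", "P", "F"]))  # -> ('P', 1.33)
```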
1.4.1.2 Recommendations
The formal definition of the term “Recommendation” from Oxford dictionaries [Oxford] is as follows:
“A suggestion or proposal as to the best course of action, especially one put forward by an authoritative body.”
The aim of giving rating recommendations is to provide proposals for the best course of action, as stated in this formal definition. In an assessment, the assessor may or may not consider a recommendation, depending on his objective judgment whether the recommendation is applicable in the context of the rating decision. Nevertheless, recommendations should also provide the best approach in the majority of assessment situations.
1.4.2 Terminology
For the formulation of rules and recommendations, a defined terminology is used in this document. An overview of the terminology used, with additional explanations where applicable, is given in the following table:
1. If …, PA x.y must not be rated F.
   If …, the indicator … must not be rated F.
   Explanation: Any rating other than F might be chosen, depending on the impact of the detected weakness.
3. If …, the indicator … must not be downrated.
   Explanation: The found issue shall not lead to a downrating.
4. If …, the indicator … shall be downrated.
   If …, the corresponding indicators … shall be downrated.
   Explanation: The indicator(s) shall be downrated by at least one step of the rating scale. It is the decision of the assessor whether a further downrating is necessary to reflect the identified weakness.
5. If … the indicator A is downrated / rated N / P / L, the indicator B must not be rated higher.
   Explanation: See Rule 2. This rule is used to ensure consistency within the rating.
6. If … the indicator A is downrated / rated N / P / L, the indicator B shall be downrated.
   Explanation: See Rule 4. This rule is used to ensure consistency within the rating.
7. If … the indicator A is downrated / rated N / P / L due to …, the indicator B shall be downrated.
   Explanation: See Rule 6, in case a specific aspect of indicator A was the root cause for its downrating.
8. If …, this must not be used to downrate the … indicator ….
   Explanation: See Rule 3, in case a specific aspect shall be excluded as a root cause for downrating.
In general, the term “downrate” means that the initial rating of the indicator(s) without applying the rule shall be reduced. The degree of downrating depends on the significance and number of identified weaknesses.
1. If …, the indicator … should not be rated F.
   Explanation: A rating other than F should be chosen, depending on the impact of the detected weakness.
3. If …, the indicator … should not be downrated.
   Explanation: The found issue should not lead to a downrating.
4. If …, the indicator … should be downrated.
   Explanation: The indicator should be downrated by at least one step of the rating scale.
5. If …, this should not be used to downrate the … indicator ….
   Explanation: See Recommendation 3, in case a specific aspect should be excluded as a root cause for downrating.
9. If … the indicator A is downrated / rated N / P / L, it should have no influence on …
   Explanation: This rule is used to support consistency within the rating.
10. If … the indicator A is downrated / rated N / P / L due to …, this should be in line with the rating of the indicator …
    Explanation: “To be in line” does not mean that the ratings should be the same. It should be checked whether both ratings have been performed based on the same insight. This is especially related to the findings obtained during the assessment. Ratings which differ by more than one step of the rating scale might be an indicator of inconsistency.
In case a blue target box is shown, the dependency is located within the
same process or capability level. A green box represents a target indicator
which is located outside the rated process or capability level.
[Figure: overview of the work products on system and software level (requirements, architecture, detailed design, units) and their test specifications, test cases, and test results, together with the base practices establishing bidirectional traceability and consistency between them (SYS.2 BP6/BP7, SYS.3 BP6/BP7, SYS.4 BP7/BP8, SYS.5 BP5/BP6, SWE.1 BP6/BP7, SWE.2 BP7/BP8, SWE.3 BP5/BP6, SWE.4 BP5/BP6, SWE.5 BP7/BP8, SWE.6 BP5/BP6), including software units and static verification results.]
Granularity of traceability
The granularity of traceability is required to be at least the lowest granularity mentioned in the PAM:
single stakeholder requirement
single system requirement
single system architecture element
single software requirement
single software architecture component
single software detailed design element
single software unit
single verification criterion
single test case
single test result
single change request
single problem record
Recommendations and rules:
[TAC.RC.1] If the granularity is not at least on the lowest granularity
mentioned above, the traceability indicator should be downrated.
Related to:
- SYS.2.BP6 “Establish bidirectional traceability”
- SYS.3.BP6 “Establish bidirectional traceability”
- SYS.4.BP7 “Establish bidirectional traceability”
- SYS.5.BP5 “Establish bidirectional traceability”
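For illustration, a minimal sketch of a bidirectional traceability check at this granularity, i.e. every single requirement traces to at least one test case and vice versa; the identifiers and the data layout are invented for the example:

```python
# Minimal sketch: bidirectional traceability check on the level of
# single requirements and single test cases. IDs are illustrative only.

requirements = {"SYS-REQ-1", "SYS-REQ-2", "SYS-REQ-3"}
test_cases = {"TC-10", "TC-11"}
links = {("SYS-REQ-1", "TC-10"), ("SYS-REQ-2", "TC-11")}  # (requirement, test case)

# Forward direction: every requirement needs at least one test case.
untraced_requirements = requirements - {req for req, _ in links}
# Backward direction: every test case needs at least one requirement.
untraced_test_cases = test_cases - {tc for _, tc in links}

print("Requirements without test case:", untraced_requirements)  # {'SYS-REQ-3'}
print("Test cases without requirement:", untraced_test_cases)    # set()
```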
Purpose of consistency
Consistency
- addresses content and semantics by ensuring that all project-related work products are in line with each other across affected parties and not in contradiction to each other, and
- reduces the risk of misinterpretation and faults.
2.1.1.3 Redundancy
In the engineering processes for SWE.1 and SWE.3, parallel paths for traceability and consistency are established, e.g. for SWE.1:
- First path from system requirements to system architecture (in SYS.3) to software requirements (in SWE.1), and
- Second path from system requirements directly to software requirements (in SWE.1).
Risk management
Customer, company, or project requirements often require risk management for the development projects, and this risk management needs to be integrated into the agile project.
For example, if the customer requires the management of project and technical risks, then the project has to identify, mitigate, and manage project risks at
Software architecture
A software architectural design has to be defined that identifies the elements of the software, and software requirements have to be allocated to these elements.
Agile projects have to ensure that a software architecture is developed and maintained and that traceability is established between requirements and architecture, between architecture and design, and between architecture and integration tests.
An example of a procedure for creating a software architecture within an agile environment is that a basic architecture and architecture rules are defined at project start and the architecture is incrementally completed within sprints (for SCRUM-based projects). For all architectural modifications, an impact analysis is performed.
[AGE.RC.5] If no software architecture is developed and maintained, the base practice SWE.2.BP1 should be downrated.
[AGE.RC.6] If the software architecture is modified incrementally including impact analysis, this should not be used to downrate the indicator SWE.2.BP1.
Software testing
Software unit verification, software integration test, and software qualification test need to be established in software development projects which require all three levels of testing.
Agile methods may combine these test levels with other methods or levels. For example, testing can be integrated into sprints in SCRUM-based projects. The agile project then has to ensure that the process purposes of all three software testing processes (SWE.4, SWE.5, and SWE.6) are fulfilled by the activities defined in the project sprints.
Pair programming
Agile methods may use pair programming, in which two software developers work together at one computer. One writes code while the other reviews each line of code as it is typed in. The developers frequently switch roles.
[AGE.RC.11] If the pair programming method used is not in conflict with code review requirements (e.g. an inspection is required due to the safety context), the base practices SUP.1.BP2 and SWE.4.BP3 should not be downrated.
Software architecture
The third-party software and its interfaces (e.g. external API) have to be part of the software architecture.
For example, a purchased operating system has to be defined in the software architecture together with its interfaces and how the operating system is connected to the relevant software architecture elements.
Requirements changes
Requirements changes may have an impact on whether the platform software and/or legacy software used by the assessed project still fits. As a consequence, change requests which may have a relation to the platform software and/or legacy software should be analyzed and assessed accordingly.
[PLS.RC.6] If change requests are not analyzed with respect to an impact on the used platform software and/or legacy software and this aspect is significant in the context of SUP.10.BP4, the indicator SUP.10.BP4 (analyze change requests) should be downrated.
Example 2:
The customer wants features F1 and F2 only. Therefore, it was decided to choose variant V1. Correspondingly, parameters X and Y were set. However, during requirements reviews, design reviews, and code reviews it remained unnoticed that parameter Y also activates feature F3, which was never wanted.
The purpose of the Supplier Monitoring Process is to track and assess the performance of the supplier against agreed requirements.
The customer has to introduce a supplier monitoring process for the following relationships with suppliers:
- The supplier develops a component on the basis of the customer requirements.
- The supplier delivers and maintains a component which is provided off the shelf to the customer (e.g. operating system, device drivers, a system with hardware and software).
- The supplier delivers a component with off-the-shelf sub-components and development on the basis of customer requirements.
Excluded are suppliers which deliver products without any support (e.g. open-source software).
Interfaces between supplier and customer have to be established for exchanging, monitoring, and tracking all relevant information between both parties. Even for a small number of deliveries (e.g. a commercial off-the-shelf component), interfaces have to be set up and maintained at least for component deliveries and for managing changes and problem reports.
[Figure: relationships between the base practices of the Supplier Monitoring Process.]
These relationships are used as the basis for the rating rules and recommendations defined in the following subchapters.
Generic aspects regarding communication (2.1.2) shall also be considered for rating.
3.2.1.3 Analyze
The analysis of system requirements is the basis for a correct implementation. Even though requirements are sometimes very detailed or their implementation seems to be very simple, a well-founded analysis has to be conducted for those requirements, too. The scope and appropriateness of the analysis and its documentation depend on the context of the product (e.g. platform) and the organization. The result of an analysis can vary from a simple attribute in a list to a complex simulation or the building of a demonstrator to evaluate the feasibility of system requirements. Doubts about the feasibility of functionality have to be reflected in MAN.5.
Recommendations and rules:
[SYS.2.RL.6] If the system requirements and their interdependencies are not evaluated in terms of correctness, technical feasibility, and verifiability, the indicator BP3 must not be rated F.
[SYS.2.RC.2] If the analysis of impact on cost and schedule is covered by the estimation of work packages in the project planning, this should not be used to downrate the indicator BP3.
Related to:
- BP3 “Analyze system requirements”
- Output WP 15-01 “Analysis report”
- Output WP 17-50 “Verification criteria”
[Figure: relationships between SYS.2 BP1 (specify system requirements), BP3 (analyze system requirements), BP5 (develop verification criteria), BP6 (establish bidirectional traceability), and BP7 (ensure consistency), together with SYS.1 PA 1.1 (requirements elicitation: stakeholder requirements), MAN.3 BP5 (define, monitor and adjust project estimates and resources), and MAN.5 BP3 (identify risks).]
These relationships are used as the basis for the rating rules and recommendations defined in the following subchapters.
Generic aspects regarding traceability and consistency (2.1.1), communication (2.1.2), and verification criteria (2.1.3) shall also be considered for rating.
For technical projects, in most cases the solution space for an architecture is manifold and not unique. In addition, the solution for the architecture is influenced by several other, not necessarily technical, drivers (non-functional technical requirements).
Possible system requirements for the definition of an architecture are e.g.:
Non-functional technical requirements
- Performance (response time, cycle time, deadline, flow)
- Safety (non-functional safety aspects, e.g. a two-microcontroller system)
- Security
- COTS (Commercial Off-The-Shelf) elements with defined interfaces
- etc.
Maintainability requirements
- Usability
- Simplicity
- Maximum cohesion and minimum coupling
- Testability
- Analyzability
- Modifiability
- etc.
Business requirements
- Costs
- Portability (reuse, platform, legacy interfaces)
- Scalability
- etc.
Some of these aspects contradict each other, so that in most cases the finally selected architecture is a compromise between these criteria.
[Figure: relationships between SYS.3 BP1 (develop system architectural design), BP3 (define interfaces of system elements), BP4 (describe dynamic behavior), BP5 (evaluate alternative system architectures based on defined criteria), and BP8 (communicate agreed system architectural design).]
These relationships are used as the basis for the rating rules and recommendations defined in the following subchapters.
Generic aspects regarding traceability and consistency (2.1.1), and communication (2.1.2) shall also be considered for rating.
The purpose of the System Integration and Integration Test Process is to integrate the system items to produce an integrated system consistent with the system architectural design and to ensure that the system items are tested to provide evidence for compliance of the integrated system items with the system architectural design, including the interfaces between system items.
[Figure: relationships between the base practices of SYS.4, including BP2 (develop system integration test strategy including regression test strategy) and BP8 (ensure consistency).]
These relationships are used as the basis for the rating rules and recommendations defined in the following subchapters.
Generic aspects regarding traceability and consistency (2.1.1), summarize and communication (2.1.2), and strategy and plan (2.1.4) shall also be considered for rating.
The purpose of the System Qualification Test Process is to ensure that the
integrated system is tested to provide evidence for compliance with the
system requirements and that the system is ready for delivery.
[Figure: relationships between SYS.5 BP2 (develop specification for system qualification test, based on the system requirements from SYS.2 PA 1.1), BP3 (select test cases), BP4 (test integrated system), BP5 (establish bidirectional traceability), and BP7 (summarize and communicate results).]
These relationships are used as the basis for the rating rules and recommendations defined in the following subchapters.
Generic aspects regarding traceability and consistency (2.1.1), summarize and communication (2.1.2), and strategy and plan (2.1.4) shall also be considered for rating.
3.6.1.3 Analyze
The analysis of software requirements is the basis for a correct implementation. Even though requirements are sometimes very detailed or their implementation seems to be very simple, a well-founded analysis has to be conducted for those requirements, too. The scope and appropriateness of the analysis and its documentation depend on the context of the product (e.g. platform) and the organization. The result of an analysis can vary from a simple attribute in a list to a complex simulation or the building of a demonstrator to evaluate the feasibility of software requirements. Doubts about the feasibility of functionality have to be reflected in MAN.5.
Recommendations and rules:
[SWE.1.RL.6] If the software requirements and their interdependencies are not evaluated in terms of correctness, technical feasibility, and verifiability, the indicator BP3 must not be rated F.
[SWE.1.RC.2] If the analysis of impact on cost and schedule is covered by the estimation of work packages in the project planning, this should not be used to downrate the indicator BP3.
Related to:
- BP3 “Analyze software requirements”
- Output WP 15-01 “Analysis report”
- Output WP 17-50 “Verification criteria”
Related to:
- BP4 “Analyze the impact on the operating environment”
- Output WP 15-01 “Analysis report”
- Output WP 17-08 “Interface requirements specification”
[Figure: relationships between SWE.1 BP3 (analyze software requirements), BP4 (analyze the impact on the operating environment), and BP5 (develop verification criteria).]
These relationships are used as the basis for the rating rules and recommendations defined in the following subchapters.
Generic aspects regarding traceability and consistency (2.1.1), communication (2.1.2), and verification criteria (2.1.3) shall also be considered for rating.
For technical projects, in most cases the solution space for an architecture is manifold and not unique. In addition, the solution for the architecture is influenced by several other, not necessarily technical, drivers (non-functional technical requirements).
Possible software requirements for the definition of an architecture are e.g.:
Non-functional technical requirements
- Performance (response time, sample time, cycle time, deadline, flow)
- Safety (non-functional safety aspects, e.g. a fault-tolerant software architecture)
- Security
- COTS (Commercial Off-The-Shelf) elements with defined interfaces
- etc.
Maintainability requirements
- Usability
- Simplicity
- Maximum cohesion and minimum coupling
- Testability
- Analyzability
- Modifiability
- Application interface, coder
- etc.
Business requirements
- Costs
- Portability (reuse, platform, legacy interfaces)
- Scalability
- etc.
[Figure: relationships between SWE.2 BP1 (develop software architectural design, defining the software elements), BP3 (define interfaces of software elements), BP5 (define resource consumption objectives), BP6 (evaluate alternative software architectures according to defined criteria), BP7 (establish bidirectional traceability), BP8 (ensure consistency), and BP9 (communicate agreed software architectural design), together with SWE.1 PA 1.1 (software requirements analysis: software requirements).]
These relationships are used as the basis for the rating rules and recommendations defined in the following subchapters.
Generic aspects regarding traceability and consistency (2.1.1), and communication (2.1.2) shall also be considered for rating.
The purpose of the Software Detailed Design and Unit Construction Process is to provide an evaluated detailed design for the software components and to specify and produce the software units.
The software detailed design refines the components specified in the Software Architectural Design process into software units and their interfaces. These software units, which are not further refined on the design level, and their interfaces are the basis for generating or developing the source code for the derived software units.
The detailed design for a component shall describe the approach to satisfying the mapped software requirements by describing how the code will be organized both statically and dynamically. It shall also describe how the different modules will interact.
For technical projects, in most cases the solution space for a detailed design is not unique. In addition, the solution for the detailed design is influenced by several other, not necessarily technical, drivers (non-functional technical requirements).
Possible software requirements for the definition of a detailed design are e.g.:
Non-functional technical requirements
- Performance (response time, sample time, cycle time, deadline, flow)
- Safety (non-functional safety aspects, e.g. program flow monitoring)
- Security
- COTS (Commercial Off-The-Shelf) elements with defined interfaces
- etc.
Maintainability requirements
- Usability
- Simplicity
- Maximum cohesion and minimum coupling
- Testability
- Analyzability
- Modifiability
[Figure: relationships between SWE.3 BP1 (develop software detailed design, defining the software units), BP2 (define interfaces of software units), BP4 (evaluate software detailed design), and BP7 (communicate agreed software detailed design), as well as BP3 and BP8.]
These relationships are used as the basis for the rating rules and recommendations defined in the following subchapters.
Generic aspects regarding traceability and consistency (2.1.1), and communication (2.1.2) shall also be considered for rating.
The software unit verification process covers not only software unit testing
aspects but also unit verification aspects e.g. static verification of units.
[Figure: relationships between SWE.4 BP1 (develop software unit verification strategy including regression strategy), BP2 (develop criteria for unit verification), and BP5 (establish bidirectional traceability), together with SWE.3 PA 1.1 (software detailed design and unit construction), for which evidence for compliance is shown.]
These relationships are used as the basis for the rating rules and recommendations defined in the following subchapters.
Generic aspects regarding traceability and consistency (2.1.1), summarize and communication (2.1.2), and strategy and plan (2.1.4) shall also be considered for rating.
The purpose of the Software Integration and Integration Test Process is to integrate the software units into larger software items up to a complete integrated software consistent with the software architectural design and to ensure that the software items are tested to provide evidence for compliance of the integrated software items with the software architectural design, including the interfaces between the software units and between the software items.
[Figure: relationships between the base practices of SWE.5, including BP2 (develop software integration test strategy including regression test strategy) and BP8 (ensure consistency).]
These relationships are used as the basis for the rating rules and recommendations defined in the following subchapters.
Generic aspects regarding traceability and consistency (2.1.1), summarize and communication (2.1.2), and strategy and plan (2.1.4) shall also be considered for rating.
[Figure: relationships between SWE.6 BP4 (test integrated software), BP5 (establish bidirectional traceability), and BP7 (summarize and communicate results).]
These relationships are used as the basis for the rating rules and recommendations defined in the following subchapters.
Generic aspects regarding traceability and consistency (2.1.1), summarize and communication (2.1.2), and strategy and plan (2.1.4) shall also be considered for rating.
Related to:
- SUP.1.BP1 “Develop a project quality assurance strategy”
- Output WP 08-13 “Quality plan”
- Output WP 18-07 “Quality criteria”
3.12.1.4 Escalation
Based on the established independence (see chapter 3.12.1.2), an escalation mechanism has to be established. The mechanism should cover all relevant stakeholders (e.g. technical and quality management, management, customer, suppliers). After escalations, these stakeholders shall drive corrective actions.
[SUP.1.RL.8] If escalations are not followed up by corrective actions, the indicator BP6 must not be rated higher than P.
Related to:
- SUP.1.BP6 “Implement an escalation mechanism”
- Output WP 13-04 “Communication record”
- Output WP 14-02 “Corrective action register”
- Output WP 13-07 “Problem record”
[Figure: relationships between SUP.1 BP5 (ensure resolution of non-conformances) and BP6 (implement an escalation mechanism); non-conformances and escalations may be treated as problems in SUP.9 PA 1.1 (Problem Resolution Management).]
These relationships are used as the basis for the rating rules and recommendations defined in the following subchapters.
Generic aspects regarding summarize and communication (2.1.2), and strategy and plan (2.1.4) shall also be considered for rating.
3.13.1.2 Baselines
The expectations for establishing baselines cover these aspects:
a) Definition of the items that are to be controlled in which kind of baseline.
b) Internal and external baselines are created for all events as defined in the strategy (required or optional).
c) Overall baselines are created across different disciplines, sites, processes, etc. and have to be consistent.
d) The baselines contain complete and consistent sets of items necessary to reproduce the work products.
e) The baselines are created according to the naming convention defined in the strategy.
Recommendations and rules:
[SUP.8.RL.5] If it is not defined for each kind of baseline which configuration items are to be controlled, the indicator BP6 must not be rated higher than P.
[SUP.8.RL.6] If required baselines do not exist for events defined in the strategy, the indicator BP6 shall be downrated.
[SUP.8.RL.7] If established baselines for different disciplines, sites, processes, etc. (according to c) are not consistent or if overall baselines do not exist, the indicator BP6 shall be downrated.
[SUP.8.RL.8] If the content of a baseline is not verified (e.g. by a baseline or configuration management audit), the indicator BP8 shall be downrated.
[Figure: relationships between SUP.8 BP1 (develop a configuration management strategy), BP2 (identify configuration items), BP3 (establish a configuration management system), BP6 (establish baselines), BP7 (report configuration status), BP8 (verify the information about configured items), and BP9 (manage the storage of configuration items and baselines), together with SPL.2 BP3 (establish a product release classification and numbering schema) and SPL.2 BP13 (baselines support the external delivery).]
These relationships are used as the basis for the rating rules and recommendations defined in the following subchapters.
Generic aspects regarding strategy and plan (2.1.4) shall also be considered for rating.
[Figure: relationships between SUP.9 BP2 (identify and record the problem), BP4 (diagnose the cause and determine the impact of the problem), BP5 (authorize urgent resolution action), BP6 (raise alert notifications), BP7 (initiate problem resolution), and BP8 (track problems to closure).]
These relationships are used as the basis for the rating rules and recommendations defined in the following subchapters.
Generic aspects regarding traceability (2.1.1), and strategy and plan (2.1.4) shall also be considered for rating.
The Change Request Management Process does not cover the actual implementation of change requests, since this is done according to the standard engineering processes (see also SUP.10.BP6). Therefore, any issues in rating this process also have to be considered carefully when rating those engineering processes.
The initiation of change requests (CRs) might come from a problem report (see also SUP.9.BP7). Any issues in rating due to improper handling of these change requests also have to be considered when rating the Problem Resolution Management Process.
Furthermore, traceability between change requests, problems, affected work products, and corresponding baselines has to be ensured over all affected disciplines and all affected domains, considering the project-specific complexity.
[Figure: relationships between SUP.10 BP1 (develop a change request management strategy), BP2 (identify and record the change requests), BP3 (record the status of change requests), BP4 (analyze and assess change requests), and BP5 (approve change requests before implementation).]
These relationships are used as the basis for the rating rules and recommendations defined in the following subchapters.
Generic aspects regarding traceability (2.1.1), and strategy and plan (2.1.4) shall also be considered for rating.
[MAN.3.RC.1] If the scope of work (BP1) does not address the responsibilities of all affected parties regarding the project and product, the indicator BP1 should not be rated higher than L.
[MAN.3.RL.2] If the scope of work (BP1) is not appropriately documented at project start, the indicator BP1 must not be rated higher than L.
Related to:
- BP1 “Define the scope of work”
- Output WP 08-12 “Project plan”
[MAN.3.RC.3] If the activities are not described with input and output artifacts, the indicator BP4 should not be rated higher than P.
[MAN.3.RC.4] If the dependencies between activities are not identified, the indicator BP4 should not be rated higher than L.
[MAN.3.RC.5] If the work packages are too big (e.g. longer than the update cycle for the schedule), the indicator BP8 should be downrated.
Related to:
- BP4 “Define, monitor and adjust project activities”
- BP8 “Define, monitor and adjust project schedule”
- Output WP 14-06 “Schedule”
- Output WP 14-09 “Work breakdown structure”
[Figure: relationships between MAN.3 BP4 (define, monitor and adjust project activities), BP5 (define, monitor and adjust project estimates and resources), BP7 (identify, monitor and adjust project interfaces and agreed commitments), BP8 (define, monitor and adjust project schedule), and BP9 (ensure consistency), as well as BP3, BP6, and other processes (see following figures).]
These relationships are used as the basis for the rating rules and recommendations defined in the following subchapters.
Generic aspects regarding consistency (2.1.1) shall also be considered for rating.
[Figure: relationships between MAN.3 and other processes: the “communicate agreed …” and “summarize and communicate …” base practices of all SYS and SWE processes are related to MAN.3 BP7 (identify, monitor and adjust project interfaces and agreed commitments); SYS.2 BP3 (analyze system requirements) and SWE.1 BP3 (analyze software requirements) are related to MAN.3 BP3 (evaluate feasibility of the project).]
The objectives for the performance of the process, including required activities, tasks, responsibilities, resources, and involved stakeholders, have to be defined on level 2 for the project in order to ensure proper planning, monitoring, and adjusting of the activities of the corresponding process. This also includes the planning, monitoring, and adjusting of all activities related to work product management as required by PA 2.2, e.g. work product reviews (see chapter 4.3). An explicit process description is not necessarily required for fulfilling PA 2.1 as long as all generic practices are accomplished.
Organizations do not need to structure the activities to be planned and monitored in the same way as is done in the PAM and can use their own process naming conventions. Process assessors are in charge of mapping planning and monitoring related data to the right processes. It is up to the project to define its own structure and, consequently, to use this structure for its planning, monitoring, and adjusting activities (which might also cover more than one PAM process). Furthermore, it might not even be reasonable to plan all single activities explicitly (e.g. requiring explicitly planned check-in and check-out tasks in the project plan is not reasonable when assessing the configuration management process SUP.8).
Also important for process attribute 2.1 is the identification of objectives (e.g. planning goals or milestone conditions) for the planning. It is not required that these are described on an organizational level. However, if the objectives are described on an organizational level, this may support the practices on capability level 2.
Generic practices of PA 2.1 are used to evaluate the capability of a project to plan and monitor activities related to a certain process, and not the degree to which planning and monitoring of particular processes are consistent regarding the overall project (which is the main focus of the MAN.3 process, see also chapter 3.16). However, there is a strong relationship between PA 2.1 and MAN.3 (see also chapter 4.3.2.2, “Rating consistency to processes at level 1”).
Relevant work products of the process are those that are required to fully achieve capability level 1 and, additionally, evidence (work products) to prove successful implementation of the process attributes 2.1 and 2.2.
A work product may not only be a document but could also be a record or a database entry in a tool (e.g. change requests or problem reports implemented in a workflow tool are also work products).
Not included in the term “work product” are all process-related documents such as process descriptions, procedures, method descriptions, or role descriptions. Any weaknesses in handling these process assets that are not related to the content (e.g. improper versioning) must not be reflected in the process attribute 2.2 of the process under investigation. However, if organizational process documents are available, they can support the implementation of process attribute 2.2.
Work products are defined as output work products in the Automotive SPICE PAM 3.1. Each of the output work products is associated with one or more outcomes of the process and further detailed by work product characteristics in Annex B of the PAM. These work products and their characteristics can be used as a starting point for considering whether, given the context, they are contributing to the intended purpose of the process.
4.3.1.3 Identify, document and control the work products (GP 2.2.3)
All identified work products must be documented and controlled (indicator GP 2.2.3) according to their requirements (indicator GP 2.2.2). Because of this dependency, the corresponding rule is defined in chapter 4.3.2.1, “Rating consistency within PA 2.2”.
[Figure: relationships between GP 2.2.2 (define the requirements for documentation and control of the work products) and GP 2.2.3 (identify, document and control the work products) on the one hand and, on the other hand, SUP.8 BP1 (develop a configuration management strategy), BP2 (identify configuration items), BP3 (establish a configuration management system), BP5 (control modifications and releases), and BP6 (establish baselines), as well as SUP.10 BP1 (develop a change request management strategy), BP2 (identify and record the change requests), BP3 (record the status of change requests), and BP7 (track change requests to closure).]
GP 2.2.4 Review and adjust work products to meet the defined requirements
[CL2.RL.41] If the indicator for defining requirements for the work products (GP 2.2.1) is downrated due to non-appropriate review and approval criteria, the indicator GP 2.2.4 shall be downrated.
On capability level 2, all projects may use “their” own process as long as the requirements of Automotive SPICE are fulfilled.
On capability level 3, the projects have to use a standard process. A possibility to cover variations between projects is to describe tailoring guidelines. This derived process is the so-called “defined” process. The defined process has to cover all activities and work products of capability levels 1 and 2 for the assessed project.
Large organizations would have problems with only one standard process. The organization may define several different standard processes (e.g. one standard process for each development site, or one standard process for each business unit). The other possibility to cover variations between projects is the aforementioned description of tailoring guidelines. Based on predefined criteria, the process may be tailored to the needs of the project.
Waivers for the standard process may be used exceptionally (which should not be the rule); assessors should check whether these exceptions have a rationale and are approved by appropriate organizational roles.
It has to be kept in mind that the advantage of organizational processes is to standardize the approach in order to, e.g.:
- establish processes known by the stakeholders
- establish interfaces to facilitate cooperation (also between different locations)
- facilitate the introduction of new personnel or the exchange of personnel between projects
- facilitate reuse of assets and work products
- establish benchmarking
The aim of establishing processes might be missed if there are too many variations of the processes. This should be reflected by the assessment result.
The rating of process attribute 3.2 should reflect the degree to which the process is using the standard process under consideration of the tailoring guidelines.
[Figure: relationships between the generic practices of PA 3.1 and PA 3.2: GP 3.1.2 (determine the sequence and interaction between processes so that they work as an integrated system of processes); GP 3.1.3 (identify the roles, competencies, responsibilities, and authorities for performing the standard process), on which GP 3.2.2 (assign and communicate roles, responsibilities, and authorities for performing the defined process) and GP 3.2.3 (ensure necessary competencies for performing the defined process) are based; GP 3.1.4 (identify the required infrastructure and work environment for performing the standard process), on which GP 3.2.4 (provide resources and information to support the performance of the defined process) and GP 3.2.5 (provide adequate process infrastructure to support the performance of the defined process) are based; and GP 3.1.5 (determine suitable methods and measures to monitor the effectiveness and suitability of the standard process), on which GP 3.2.6 (collect and analyze data about performance of the process to demonstrate its suitability and effectiveness) is based.]
[Figure: GP 2.1.6 (identify, prepare, and make available resources to perform the process according to plan) relates to GP 3.2.4 and GP 3.2.5, based on the identification and availability of resources.]
For the description of the responsibilities, the following abbreviations are used:
R: Responsible
Those who do the work to achieve the task. There is at least one role with a participation type of responsible, although others can be delegated to assist in the work required (see also RACI below for separately identifying those who participate in a supporting role).
A: Accountable (also approver or final approving authority)
The one ultimately answerable for the correct and thorough completion of the deliverable or task, and the one who delegates the work to those responsible [7]. In other words, an accountable must sign off (approve) the work that the responsible provides. There must be only one accountable specified for each task or deliverable.
C: Consulted (sometimes counsel)
Those whose opinions are sought, typically subject matter experts, and with whom there is two-way communication.
I: Informed
Those who are kept up to date on progress, often only on completion of the task or deliverable, and with whom there is just one-way communication.
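As an illustration, a hypothetical RACI assignment for a single task (the task and the roles are invented for the example):

Task: Review the software architecture
R: Software architect
A: Project manager
C: Quality assurance
I: Development team

Note that exactly one role is accountable, while several roles may be responsible, consulted, or informed.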
[Figure: contents of the assessment agreement (assessment purpose, assessment scope, process outputs, time frame, contact persons in both organizations, assessment team list, assessment plan) and of the assessment plan (assessment scope, time frame, process inputs, assessment team list).]
The introduction should give all those involved an overview of the organization assessed, the project, and the assessment methodology and sequence.
[Figure: contents of the assessment report, including the process inputs and a list of immediate actions, if applicable.]
[Figure: process improvement activities, including performing the improvement actions; the process improvement actions are monitored and any necessary adjustments are made, taking risks into account.]
Organization
- Company name
- Organizational / business unit
- Assessed sites
- Assessed departments
- Local assessment coordinator: name of the local assessment coordinator
Constraints (if applicable)
e.g.
- Somebody was not available (e.g. off, sick)
- Separated development areas have been included via video/WebEx (no on-site assessment)
- Disclaimer (e.g. that the assessment results do not allow conclusions about the complete organization or about other departments of the organization that have not been assessed)
- Confidentiality constraints, e.g. access to evidence or to infrastructure and sites may be subject to legal access rights
Company name / organizational unit: Name(s) of the assessed companies and the assessed organizational unit(s)
Assessment purpose
e.g.
Starting point for process improvement, process improvement progress check, supplier evaluation, process-related risk determination
Process context
e.g.
- A subset of stakeholder requirements valid for a specific product release, or
- All stakeholder requirements valid for a specific product release, or
- All changes between two defined project milestones, or
- All software requirements implemented by changed processes.
[Table: example process scope for the overall project; e.g. SUP.8 with target capability level 2.]
The following examples for process context categories A and B show use cases of assessment requests with different assessment scopes and give guidance on how the assessment scope and the process instances can be documented in an assessment report.
Company name / organizational unit: TIERX AG
Assessment purpose: Starting point for process improvement for the project and the organization.
Process context category: A (part of product/delivery)
Process context:
- All changes and affected stakeholder requirements in the delta project developing additional functionalities based on the existing software.
- All third-party software used in the project.
- Legacy and platform software is out of the scope of the assessment.
2.2.3 Distributed development: NO
2.2.6 Application parameters: NO
Company name / organizational unit: TIERX AG
Assessment purpose
Process context:
- All changes and affected stakeholder requirements of the TIERX project with the former customer, excluding legacy software.
- All changes and affected stakeholder requirements of the TIERX platform project, including the management of legacy and third-party software.
- All changes and affected stakeholder requirements of the TIERX previous sub-project, excluding legacy, platform, and third-party software.
Company name / organizational unit: TIERX AG
Assessment purpose: Determine process-related risks for the quality of the product. Set a starting point for process improvement to reduce the identified risks.
Process context:
- All changes and affected stakeholder requirements valid for the recent release to the customer.
- The management of platform, interface, and third-party software.
[Figure: process instances of the newly developed TIERX OEM project, of the legacy software, and of the platform project.]
Company name / organizational unit: TIERX AG
Assessment purpose
Process context:
All changes and affected stakeholder requirements valid for the recent release to the customer. This includes the legacy and platform software developed.
NOTE: The English edition of the process assessment model Automotive SPICE
can be obtained free of charge via http://www.automotivespice.com.
Available from: