

Opportunities for Explainable Artificial Intelligence in Aerospace
Predictive Maintenance
Bibhudhendu Shukla1, Ip-Shing Fan2, and Ian Jennions3

1,2,3 IVHM Centre, Building 70, Cranfield University, Cranfield, Bedford, MK43 0AL, UK
bib.shukla@cranfield.ac.uk
i.s.fan@cranfield.ac.uk
i.jennions@cranfield.ac.uk

ABSTRACT

This paper looks at the value and the necessity of XAI (Explainable Artificial Intelligence) when using DNNs (Deep Neural Networks) in PM (Predictive Maintenance). The context is the field of Aerospace IVHM (Integrated Vehicle Health Management) when using DNNs. An XAI system is necessary so that the result of an AI (Artificial Intelligence) solution is clearly explained and understood by a human expert. This would allow an IVHM system to use XAI-based PM to improve the effectiveness of predictive models. An IVHM system would be able to use this information to assess the health of the subsystems and their effect on the aircraft. Even when the underlying mathematical principles of DNNs are understood, they lack an understandable insight and hence have difficulty generating the underlying explanatory structures (i.e. they act as black boxes). This calls for a process, or system, that enables decisions to be explainable, transparent, and understandable. It is argued that research in XAI would generally help to accelerate the implementation of AI/ML (Machine Learning) in the aerospace domain, and specifically help to facilitate compliance, transparency, and trust. This paper explains the following areas:

• Challenges and benefits of AI-based PM in aerospace
• Why XAI is required for DNNs in aerospace PM
• Evolution of XAI models and industry adoption
• A framework for XAI using XPA (Explainability Parameters)
• Discussion of future research in adopting XAI and DNNs to improve IVHM

1. INTRODUCTION

XAI is an AI system that explains, in simple human language and with high prediction accuracy, how the decision-making rationale of the system operates (DARPA, 2017). XAI is human-centric and provides an understandable explanation of how an AI application produces its outputs (EASA, 2020).

The rise of the IoT (Internet of Things) and new analytical tools has given aircraft operators and airlines new ways to realize significant benefits from the terabytes of data generated by their aircraft. Engine and airframe manufacturers have been installing sensors in their products for decades, but the few data points these sensors produced have traditionally been used only for diagnostics. Today's aircraft carry thousands of sensors; the Airbus A350 has nearly 250,000 of them, generating about 2.5 TB of data per day (Airbus, 2020). Sifting manually through all that data for actionable information would be overwhelming.

Airlines face the challenge of enhancing the availability of their fleet by avoiding flight delays and cancellations, and consequently reducing costs, in order to support the forecasted growth of 38,000 aircraft by 2025 (Lufthansa Technik, 2020).

With the expansion of business in the commercial aviation industry, the MRO (maintenance, repair, and overhaul) market that supports it is also expected to grow: total MRO spend is expected to rise to $116 billion by 2029, up from $81.9 billion in 2019 (Cooper et al. 2019).

Figure 1 below shows the different categories of maintenance policies used by various organizations.

Bibhudhendu Shukla et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 United States License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.


Figure 1 – Types of Maintenance Policies

The estimate is that predictive maintenance will improve technical dispatch reliability, drive a reduction in no fault found, support a reduced inventory, and improve labor productivity. That could generate about $3 billion of savings for the MRO industry (Cambier, 2018). When we add the other indirect benefits, such as a reduction in customer delay compensation and an increase in customer satisfaction, the impact on the airline is much higher and much more beneficial. IATA estimates the global cost of irregular airline operations (delays, cancellations, in-flight turn backs, etc.) at $28B. These events are costly, drive many inefficiencies across an airline's operation, and negatively impact passenger experience.

Past studies reported by the US Department of Energy estimated that a predictive maintenance program could realize an 8% to 12% saving over a preventative-only program (U.S. Department of Energy, 2010). The survey projected an ROI of 10 times the investment for a predictive maintenance program.

According to another paper by ARC, only 18% of assets have an age-related failure pattern, while a full 82% of asset failures occur randomly (Rio, 2015). Even where rigorous maintenance is in place, the preventive maintenance performed on assets is therefore often ineffective. Predictive maintenance, by contrast, uses condition-monitoring equipment to evaluate an asset's performance in real time. A key element in this process is the IoT, which provides an infrastructure that allows rapid transmission of data, so that different assets and systems can connect, work together, and share and analyze data to obtain actionable insight.

The aviation industry has come up with solutions to store, sort, analyze, understand, and translate data into meaningful MRO measures using complex machine learning models. As the latest aircraft types produce 50 times more data than older generations, the resulting increase in data volume leads to growing complexity in the business of MRO providers on one side, but also to opportunities to increase efficiency and safety on the other (Lufthansa Technik, 2020). For example, vibration sensors combined with machine learning help to estimate the remaining life of component assets, allowing aviation planning managers to schedule maintenance operations efficiently.

IVHM is the transformation of system data on a complex vehicle or system (such as a luxury car or a commercial airplane) into information to support operational decisions and optimize maintenance (Cranfield, 2008). IVHM was initially introduced by NASA (National Aeronautics and Space Administration) in 1992 as a capability to efficiently perform timely status determination, diagnostics, and prognostics, to support fault-tolerant responses including system/subsystem reconfiguration to prevent catastrophic failures, and to support the planning and scheduling of post-operational maintenance (NASA, 1992). The main aim of IVHM in the aircraft industry is to enable better planning of maintenance activities, reduce MRO costs, reduce delays, and increase the availability of aircraft through better prediction of failures and integrated health monitoring.

OEM offering: Airbus Skywise; Boeing Analytix; Embraer AHEAD-PRO.
Non-OEM offering: AFKL PROGNOS; Lufthansa Technik AVIATAR; Honeywell FORGE; SITAONAIR.

Figure 2 – Machine Learning Tools available using Aircraft Big Data

Figure 2 above shows the different offerings being developed by OEMs and non-OEMs adopting ML (machine learning) in aerospace health monitoring. The availability of computing power and analytical tools has fueled the insights produced from the terabytes of data generated by aircraft. It has allowed AI (Artificial Intelligence) to make machines capable of performing tasks that usually require human intelligence. AI comprises all ML techniques as well as other techniques such as search, symbolic and logical reasoning, statistical techniques, and behavior-based approaches. As technology and, more importantly, our understanding of how our minds work and interact with all that surrounds us have progressed, our concept of AI has changed. We have seen an evolution of machine learning models from rules-based to more sophisticated deep models and meta-learning models, as shown later in Figure 3.

Nowadays, there is a paradigm shift by engine manufacturers to sell flight hours instead of selling engines and spare parts (EASA, 2020). This shift implies that, to avoid penalties for delays, engine dispatch reliability and safety become part of the same concept. AI-based predictive maintenance, powered by an enormous amount of fleet data, makes it possible to anticipate failures and provide preventive remedies (EASA, 2020).

1.1. Deep Neural Networks

ANNs (Artificial Neural Networks), especially DNNs, have shown better results on use cases like speech recognition,


text recognition, and image classification. As in other applications, the data assembled for predictive maintenance are sensor parameters collected over time. Utilizing deep models could reduce manual feature engineering effort and automatically construct relevant factors, and the health factors that indicate the health state of the aircraft or its components and its estimated remaining runtime before the next upcoming downtime (Jalali et al. 2019). This allows aircraft operators to be better prepared by reducing the surprises of random asset failures.
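As a concrete illustration of how such time-series sensor data might be assembled for a deep model, the sketch below slices a multivariate sensor log into fixed-length training windows. It is a minimal sketch under stated assumptions: the window length, array shapes, and the use of remaining cycles as an RUL label are illustrative choices, not details of any system described in this paper.

```python
import numpy as np

def make_windows(sensor_log: np.ndarray, window: int = 50):
    """Slice a (time_steps, n_sensors) log into overlapping windows.

    Each window becomes one training example for a deep model; the
    label here is simply the remaining cycles after the window ends,
    a common proxy for remaining useful life (RUL).
    """
    n_steps = sensor_log.shape[0]
    X, y = [], []
    for start in range(n_steps - window):
        X.append(sensor_log[start:start + window])
        y.append(n_steps - (start + window))  # cycles left until end of log
    return np.stack(X), np.array(y, dtype=np.float32)

# Illustrative data: 1,000 time steps from 14 sensors on one engine run.
log = np.random.randn(1000, 14).astype(np.float32)
X, y = make_windows(log)
print(X.shape, y.shape)  # (950, 50, 14) (950,)
```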
Despite the promising features of DNNs, their complex
Linear Models and
Heuristics/Rules
Decision Trees
Deep Models and
Ensembles
Meta-Learning architecture results in a lack of transparency. In their
0.94*last_years_sale +
conventional form, DNNs are considered as black-box
If temperature < 200 C:
start heater
1.16*this_month_sales +
0.28*product_age = total_sales
models – they are controlled by complex nonlinear
interactions between many parameters that are difficult to
If animal has feathers:
classify as bird
understand. It is very complicated to interpret and explain
their outcome, which is a severe issue that currently
prevents their adoption in the critical applications and
manufacturing domain (Jalali et al. 2019).
Figure 3 – Evolution of AI/Machine Learning (Google, For AI systems operating in black-box, XAI for simpler use
2020) cases like AI-powered chatbots or sentiment analysis of
The shift from explicitly programmed rules to using computers to optimise models (deep models) to fit the data has opened new opportunities in predictive accuracy. The advanced and more accurate models have resulted in a paradigm shift along multiple dimensions (Google, 2020):

• Expressiveness enables fitting a wide range of functions in an increasing number of domains like forecasting, ranking, autonomous driving, particle physics, drug discovery, etc.
• Versatility unlocks data modalities (image, audio, speech, text, tabular, time series, etc.) and enables joint/multi-modal applications.
• Adaptability to small data regimes through transfer and multi-task learning.
• Custom optimized hardware like GPUs and TPUs (Tensor Processing Units) has increased efficiency, enabling practitioners to train complex models faster and more cheaply with large volumes of data.

Some examples of DNNs in predictive maintenance include:

• Analysis of technical parameters to optimize maintenance and operating processes and prevent business interruptions (Jalali et al. 2019).
• A reliability-based methodology to support decision-making regarding the operational performance of equipment (Nadai et al. 2017).
• Deep learning, GPUs, and the concept of "Digital Twins", which offer enormous potential benefits for predictive maintenance in oil and gas (Modi, 2020).
• A novel DNN-based intelligent method proposed to overcome the deficiencies of existing intelligent diagnosis methods (Jia et al. 2016).
• DNN architectures based on convolutional layers that can classify the operating state of a wind turbine in terms of its load and speed without ex-ante feature engineering (Stetco et al. 2019).
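To make the idea concrete, below is a minimal sketch of one such deep model: a small 1D-convolutional network that maps a window of raw sensor readings (such as those produced by the windowing sketch in Section 1.1) to an RUL estimate, with no manual feature engineering. The architecture, layer sizes, and training data are illustrative assumptions, not the configuration of any deployed system.

```python
import torch
import torch.nn as nn

class RulNet(nn.Module):
    """Small 1D-CNN mapping a (n_sensors, window) input to an RUL estimate."""
    def __init__(self, n_sensors: int = 14, window: int = 50):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_sensors, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # pool over time -> (batch, 64, 1)
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 1))

    def forward(self, x):                     # x: (batch, n_sensors, window)
        return self.head(self.features(x)).squeeze(-1)

model = RulNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One illustrative training step on stand-in data. Windows from the earlier
# sketch would first be moved to channels-first layout:
#   x = torch.from_numpy(X).transpose(1, 2)
x = torch.randn(8, 14, 50)                    # batch of 8 sensor windows
rul = torch.rand(8) * 100                     # stand-in RUL labels, in cycles
opt.zero_grad()
loss = loss_fn(model(x), rul)
loss.backward()
opt.step()
```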


2. WHY XAI IS REQUIRED FOR DNNS?

Despite the promising features of DNNs, their complex architecture results in a lack of transparency. In their conventional form, DNNs are considered black-box models – they are controlled by complex nonlinear interactions between many parameters that are difficult to understand. It is very complicated to interpret and explain their outcomes, which is a severe issue that currently prevents their adoption in critical applications and the manufacturing domain (Jalali et al. 2019).

For AI systems operating as black boxes, XAI may not be that important in simpler use cases like AI-powered chatbots or sentiment analysis of social feeds. But being able to understand the decision-making process is mission-critical for heavily regulated, high-human-impact use cases like aircraft maintenance, military applications, autonomous vehicles, aerial navigation, and drones. As people rely more and more on AI in their everyday lives, understanding and interpreting AI models becomes paramount; this would allow changes and improvements to be made to these models over time. It is important to look at the role of the human in adopting the models and to increase their trust in a model or prediction; otherwise, they will not use it. For example, the BBB (British Business Bank) implemented Temenos' XAI platform, which allows it to explain in plain language to its customers and regulators how AI-based decisions are taken. The bank has successfully reduced its exposure to risk, eliminated time-consuming manual working, and increased its pass rate by 20% (Temenos, 2020).

The true value of an AI solution is realized when the user changes their behavior or takes action based on the AI output or prediction, and this trust is built when users feel empowered and know how the AI system came up with the recommendation or output (Casey, 2019).

Complex models have become increasingly opaque, and as these models are still fundamentally built around correlation and association, this has resulted in several challenges (Google, 2020):

• Loss of debuggability and transparency in testing – this leads to low trust as well as the inability to fix or improve the models and/or outcomes.
• Lack of control – the model user has a reduced ability to locally adjust model behavior in problematic instances, due to the lack of visibility into the hidden layers of complex deep learning models.
• Biased outcomes – undesirable data amplification reflecting biases that do not agree with our societal norms and principles.
• Exceptional situations – is there an exceptional situation where the system may fail?
• Incorrect correlations learned from the data – this often inhibits the model's ability to generalize, leading to poor real-world results. Incorrect alarms issued by a predictive maintenance model could be very expensive.
• Proxy objectives – these result in large differences between how models perform offline, often on matching proxy metrics, and how they perform when deployed in applications.

Figure 4 below summarizes the four key reasons for explaining complex models and why such challenges need to be addressed.

[Figure 4 shows four reasons: to justify, to control, to improve, and to discover.]

Figure 4 – Reasons to explain complex algorithms (Adadi & Berrada, 2018)

Especially in aerospace PM, the reason to justify is critical. The lack of answers from AI systems leads to muted trust and limited large-scale adoption. This lack of explainability has hindered the adoption of these models, especially in regulated industries, e.g. aerospace, banking, finance, and healthcare.

The European Union introduced a right to explanation in the GDPR (General Data Protection Regulation) as an attempt to deal with the potential problems stemming from the rising importance of algorithms (ICO, 2018). The implementation of the regulation began in 2018, and the right to explanation in the GDPR covers only the local aspect of interpretability (ICO, 2018).

In addition to needing to probe the internals of increasingly complex models, which in and of itself is a challenging computational problem, a successful XAI system must provide explanations to people, meaning that the field must draw on lessons from philosophy, cognitive psychology, HCI (Human-Computer Interaction) and the social sciences (Google, 2020).

A final challenge that XAI methods for DL (Deep Learning) need to address is providing explanations that are accessible to society, policymakers, and the law. Conveying explanations that require no technical expertise will be paramount both to handling ambiguities and to developing the social right to an explanation in the EU GDPR (Wachter et al. 2017).

The scope of interpretability can be divided into two categories – global and local. Global interpretations help us understand the entire conditional distribution modeled by the trained response function, based on average values, while local interpretations promote understanding of small regions of the conditional distribution, such as clusters of input records and their corresponding predictions, or deciles of predictions and their corresponding input rows (Hall, 2017).

Deep learning models can identify and abstract complex patterns that humans may not be able to see in data. However, there are many situations where, by introducing a-priori expert domain knowledge into the features, or by abstracting key patterns identified in the deep learning models as actual features, it would be possible to break the model down into subsequent, more explainable pieces (Ethical Institute, 2019). Recalling that a good explanation needs to influence the mental model of the user, i.e. the representation of external reality using, among other things, symbols, it seems natural that the symbolic learning paradigm is appropriate for producing an explanation, and could provide convincing explanations while keeping or improving generic performance (Donadello et al. 2017).

3. EVOLUTION OF XAI MODELS FOR DNNS

The last six years have seen a big push to understand the decisions made by complex multi-layered DNNs and to build trust in those models.

The model-independent approach applies to all classes of algorithms or learning techniques, and the internal workings of the model are treated as an unknown black box. The model-specific approach is used only for specific techniques, or narrow classes of techniques, and the internal workings of the model are treated as a white box.
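A simple example of the model-independent family is a global surrogate: an interpretable model trained to mimic the predictions of the black box. The sketch below is illustrative only; it distils an arbitrary predict function into a shallow decision tree, with a stand-in model, data, and feature names that are assumptions for the example, in the same spirit as (though far simpler than) the distillation methods listed later in Table 1.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

def global_surrogate(black_box_predict, X, max_depth: int = 3):
    """Fit a shallow decision tree that mimics a black-box model.

    The tree is trained on the black box's *outputs*, not the true labels,
    so its splits describe how the black box behaves across the data --
    a global (whole-distribution) explanation.
    """
    y_hat = black_box_predict(X)
    tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, y_hat)
    fidelity = tree.score(X, y_hat)   # R^2 of the mimic vs. the black box
    return tree, fidelity

# Illustrative use with a stand-in "black box".
X = np.random.randn(500, 4)
opaque = lambda X: X[:, 0] * X[:, 1] + np.sin(X[:, 2])
tree, fidelity = global_surrogate(opaque, X)
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(tree, feature_names=["s1", "s2", "s3", "s4"]))
```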


The model-independent XAI models may apply to any model, but they may be more limited compared with model-specific models (Carvalho, 2019). There is increasing interest in model-specific XAI models, as seen in papers published at the CVPR (Conference on Computer Vision and Pattern Recognition) workshop on XAI (CVPR, 2019).

Below is a list of existing XAI models that have looked at different aspects of DNNs to improve explainability.

Year | XAI Model | Reference | Model-agnostic or model-specific | Global or local
2014 | Guided backpropagation | Springenberg et al. 2014 | CNN | Global
2015 | Distilling the knowledge in a neural network | Hinton et al. 2015 | Agnostic | Global
2015-2016 | DeepR (Deep Record) | Wickramasinghe et al. 2016 | CNN | Global
2016 | RETAIN (Reversed Time Attention Model) | Choi et al. 2016 | RNN | Local
2016 | MMD (Maximum Mean Discrepancy) Critic | Kim et al. 2016 | K-medoid clustering | Global
2016-2018 | LIME (Local Interpretable Model-Agnostic Explanation) | Ribeiro et al. 2016; Guidotti et al. 2018; Mishra et al. 2017 | Agnostic | Local
2017 | Anchors | Ribeiro et al. 2018 | Agnostic | Local
2017 | LOCO (Leave One Covariate Out) | Lei et al. 2017 | Agnostic | Local
2017 | SHAP (SHapley Additive exPlanations) | Lundberg & Lee 2017 | Agnostic | Local
2017 | DeepLift | Shrikumar et al. 2017 | RNN | Global
2017 | Integrated Gradients | Sundararajan et al. 2017 | Agnostic | Global
2017 | TCAV (Testing with Concept Activation Vectors) | Kim et al. 2018 | Agnostic | Global
2017 | Distilling a neural network into a soft decision tree | Frosst & Hinton 2017 | Agnostic | Global
2018-2019 | Attention-based prototypical learning | Li et al. 2018; Arik & Pfister 2019 | Agnostic | Global
2019 | XRAI | Kapishnikov et al. 2019 | Agnostic | Global

Table 1 – Evolution of XAI models dedicated to explaining DNNs

One of the key columns in the table above shows whether an XAI model has global or local interpretability. This bears on accuracy: because small sections of the conditional distribution are more likely to be linear, monotonic, or otherwise well-behaved, local explanations can be more accurate than global explanations (Hall, 2017).
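As a concrete example of the local, model-agnostic kind of explanation catalogued above, the sketch below applies the model-agnostic KernelExplainer from the open-source shap library (Lundberg & Lee, 2017) to a single prediction. The stand-in model, data, and feature names are assumptions for illustration; any model exposing a predict function could take their place.

```python
import numpy as np
import shap                                    # open-source SHAP library
from sklearn.ensemble import RandomForestRegressor

# Stand-in "black box": any model exposing predict() works here.
X = np.random.randn(200, 4)
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] + np.random.randn(200) * 0.1
model = RandomForestRegressor(n_estimators=50).fit(X, y)

# KernelExplainer is model-agnostic: it only queries predict() around
# the instance, using a background sample to marginalise features out.
background = shap.sample(X, 50)
explainer = shap.KernelExplainer(model.predict, background)

# Local explanation: additive feature contributions for ONE prediction.
instance = X[0:1]
shap_values = explainer.shap_values(instance)
for name, contrib in zip(["s1", "s2", "s3", "s4"], shap_values[0]):
    print(f"{name}: {contrib:+.3f}")
```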
4. INDUSTRY ADOPTION OF XAI

One of the most notable entities in this research field is DARPA (Defense Advanced Research Projects Agency), which, funded by the U.S. Department of Defense, created the XAI program to fund academic and military research, resulting in funding for 11 U.S. research laboratories (DARPA, 2018). Google has made public its research and practices in different AI-related areas, one of which is entirely focused on explainability (What If Tool, 2020). Apart from strategies and recommended practices, explainability is also one of the main focuses of currently commercialized AI solutions and products. Facebook and Georgia Tech published a paper showing an interactive visual exploration tool for industry-scale DNN models (Kahng et al. 2018). The EASA AI roadmap has highlighted the importance of XAI in the aviation domain (EASA, 2020).

Several recent open-source XAI platforms have been developed to help build trust in AI and provide transparency, i.e. IBM AI Fairness 360, Microsoft Model Interpretability in Azure ML, Google's What-If Tool, H2O.ai's H2O Platform, Distill, and Oracle's Skater:


• IBM AI Fairness 360: "The AI Fairness 360 toolkit (AIF360) is an open-source software toolkit that can help detect and remove bias in machine learning models. It enables developers to use state-of-the-art algorithms to regularly check for unwanted biases from their machine learning pipeline and to mitigate any biases that are discovered. AIF360 enables AI developers and data scientists to easily check for biases at multiple points along their machine learning pipeline, using the appropriate bias metric for their circumstances. It also provides a range of state-of-the-art bias mitigation techniques that enable the developer or data scientist to reduce any discovered bias. These bias detection techniques can be deployed automatically to enable an AI development team to perform systematic checking for biases, like checks for development bugs or security violations in a continuous integration pipeline" (IBM, 2020).

• Microsoft Model Interpretability in Azure ML: "Understanding what AI models are doing is super important both from a functional as well as ethical aspects" (Microsoft, 2020).

• Google's What-If Tool: "Building effective machine learning models means asking a lot of questions. Look for answers using the What-If Tool, an interactive visual interface designed to probe your models better" (What If Tool, 2020).

• H2O.ai's H2O Platform: "H2O Driverless AI does explainable AI today with its MLI (Machine Learning Interpretability) module. This capability in H2O Driverless AI employs a unique combination of techniques and methodologies, such as LIME, Shapley, surrogate decision trees, partial dependence and more, in an interactive dashboard to explain the results of both Driverless AI models and external models" (H2O.ai, 2020).

• Distill: "Machine learning will fundamentally change how humans and computers interact. It's important to make those techniques transparent, so we can understand and safely control how they work" (Distill, 2020).

• Oracle's Skater: "Skater is a unified framework to enable Model Interpretation for all forms of models to help one build an Interpretable machine learning system often needed for real-world use-cases" (Skater, 2020).

5. COMPLIANCE CHALLENGES IN AEROSPACE MRO

Compliance is a never-ending process in the aerospace industry, and the regulatory requirements across most industries are constantly evolving. A commercial aircraft must be serviced after a certain number of flight hours to remain compliant with FAA (Federal Aviation Administration), EASA (European Union Aviation Safety Agency), and ICAO (International Civil Aviation Organization) standards.

As airworthiness authorities, OEMs (Original Equipment Manufacturers), and airlines come to depend on AI-based dynamic systems, clearer accountability will be required for decision-making processes to ensure trust and transparency. Evidence of this requirement gaining momentum can be seen in the launch of the first global conference exclusively dedicated to this emerging discipline, the International Joint Conference on Artificial Intelligence: Workshop on XAI (IJCAI, 2017).

It is important that people come to trust and buy into these new AI systems and adapt the way they work to optimize the benefit from these systems. XAI is meant to do that, and this needs to be understood by organizational leadership. It is also necessary to work with the regulatory authorities to show sufficient evidence of how certain service schedules are decided based on predictive maintenance DNN models. Sharing data between different aviation organizations is still a challenge. With sustainability and carbon neutrality at the top of these organizations' agendas, there will be a push towards becoming more efficient in maintenance, with IVHM as a core piece in optimizing the usage of the aircraft and its components.

On the other hand, the different aviation organizations need to come together with the airworthiness authorities to support an XAI model framework and policies as a mandatory design principle, to support the adoption of DNNs in predictive maintenance.

Bringing the maintenance schedule forward based on failures forecasted by the AI model will only increase safety and help to plan maintenance tasks better. But deferring maintenance (with instruments and equipment still operative) beyond the schedules recommended by the OEMs would require increased trust in the PM models and more collaboration between OEMs, airlines/operators, and airworthiness authorities (FAA, EASA, ICAO, etc.). XAI models would help to increase transparency and trust in the AI models, and thus increase adoption.

6. LEVELS OF EXPLAINABILITY

The main aim of XAI models is to explain AI models. The challenge is to have a consistent measurement framework for explainability. There is also the challenge of how much testing is required, and what the success criteria are, for a consistent explanation of the models. Some key XPA are defined in the table below:

Parameter | Definition | Measurement
A. Depth of Explanation | Ability to explain at both the local level (a specific region of the conditional distribution) and the global level (the entire conditional distribution). | Local, Global, Both
B. Predictive Accuracy | Ability to predict for future data based on the learned patterns; the measurement depends on the accuracy of the DNN model. | High, Medium, Low
C. Approximation | Ability to closely explain the DNN model output; an explanation with low approximation is useless. | High, Medium, Low
D. Consistency | Ability to explain consistently between different models. | High, Medium, Low
E. Stability | Ability to compare explanations between similar instances and obtain a consistent outcome for the same model. | High, Medium, Low
F. Feature Importance | Ability to identify the importance of a specific feature. | %
G. Model Coverage | How many instances are covered by the explanation: the entire model (e.g. interpretation of the weights in a linear regression model) or only an individual prediction. | All, Individual
H. Bias in Prediction | Ability to explain bias in the prediction. | High, Medium, Low
I. Abnormality Detection | Ability to explain abnormalities in the prediction. | High, Medium, Low
J. Decomposability | Ability to explain the DNN model, including input, output & prediction. | High, Medium, Low
K. Privacy | Ensures that sensitive and personal information is protected. | Yes, No

Table 2 – Possible XPA with measurement criteria

Certain heavily regulated industries like aerospace, medical, etc. would need a domain-specific weightage for each parameter to reflect the importance of certain aspects of explainability. For example, the XPA parameter B (predictive accuracy) is far more important in aerospace predictive maintenance, whereas K (privacy) would be much more important in predicting customer buying behaviour on a website. This would help in defining thresholds for the testing and help in planning the work.

The table below shows a possible theoretical measurement framework.

DNN Model | XAI Model | A | B | C | D | E | F | G | H | I | J | K
DNN Model1 | XAI Model1 | L | M | H | M | H | 50% | Ind. | H | M | M | Y
DNN Model2 | XAI Model2 | G | M | H | L | M | 75% | All | L | H | H | N

Table 3 – Possible XPA measurement framework example

Further research and development are required to accurately measure the XPA for each different model and to define baseline benchmarks for certain DNN/XAI models. The theoretical example above could be a way for model-specific XAI models to measure effectiveness. This also emphasizes the need to consider XAI design at the same time as the DNN models, as part of the architecture.
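One way such domain-specific weightage could be operationalised is as a simple weighted score, sketched below. This is purely illustrative: the parameter letters follow Table 2 and the grades follow the first row of Table 3, but the numeric grade mapping, weights, and any threshold are invented assumptions, not part of the proposed framework.

```python
# Illustrative XPA scoring: map Table 2 grades to numbers, weight by domain.
GRADE = {"L": 0.33, "M": 0.66, "H": 1.0, "Y": 1.0, "N": 0.0}

# Hypothetical weights for aerospace PM: predictive accuracy (B) dominates,
# privacy (K) matters less than it would for, say, consumer analytics.
AEROSPACE_WEIGHTS = {"B": 0.30, "C": 0.15, "D": 0.10, "E": 0.10,
                     "H": 0.15, "I": 0.10, "J": 0.05, "K": 0.05}

def xpa_score(assessment: dict, weights: dict) -> float:
    """Weighted explainability score in [0, 1] for one DNN/XAI pairing."""
    total = sum(weights.values())
    return sum(GRADE[assessment[p]] * w for p, w in weights.items()) / total

# Letter-graded parameters for "DNN Model1 / XAI Model1" from Table 3.
model1 = {"B": "M", "C": "H", "D": "M", "E": "H",
          "H": "H", "I": "M", "J": "M", "K": "Y"}
score = xpa_score(model1, AEROSPACE_WEIGHTS)
print(f"XPA score: {score:.2f}")  # compare against a domain threshold
```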
7. CLOUD-BASED IVHM SYSTEM FRAMEWORK FOR XAI

Deploying DNN models requires integrating multiple software platforms with different programming languages and several GPU processors; thus, executing DNN models is difficult for even the most experienced developers. In addition, organizations need cloud infrastructure that can maintain high availability to accommodate spikes in demand for the DNN models.

The diagram below shows the possible system components of XAI for DNN models.


[Figure 5 depicts wireless IoT data collection feeding a big-data store of component condition, measurement history, and historical faults; DNN models and XAI models with XPA producing a prognosis; and an XAI interface, all hosted on cloud computing. Through the interface, users can say: "I understand why", "I understand how accurate the model is", "I understand when the failure may occur", and "I understand when to take corrective action".]

Figure 5 – Possible cloud-based IVHM system components required for XAI

The figure above highlights that the DNN models, XAI models, and XPA are integral parts of any IVHM framework, and shows how they could fit into the overall system architecture. Thinking about and building XAI models while developing the DNN models would help increase the adoption of these models, and XPA should be part of the process used to measure the XAI models. Cloud infrastructure would also speed up the adoption of an IVHM framework for implementing DNN models: recent advancements in cloud computing have opened up easy access to computing power for XAI.
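A minimal sketch of how these components might be wired together behind one user-facing interface is given below. The class, field, and method names are invented for illustration and do not correspond to any specific platform; the DNN and explainer are assumed to be supplied as callables (for example, a trained model such as the RulNet sketch in Section 1.1 and a SHAP/LIME-style local explainer).

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class IvhmPipeline:
    """Illustrative wiring of the Figure 5 components: a DNN prognosis
    model, an XAI explainer, and a pre-measured XPA score behind one
    user-facing interface. All names here are hypothetical."""
    dnn_predict: Callable[[Any], float]   # e.g. a trained RUL model, wrapped
    explain: Callable[[Any], Any]         # e.g. a local explainer over one window
    xpa_score: float                      # measured explainability of this pairing

    def assess(self, sensor_window: Any) -> dict:
        """Return the prediction with its explanation, mirroring the
        'I understand why / how accurate / when' interface in Figure 5."""
        return {
            "predicted_rul": self.dnn_predict(sensor_window),
            "explanation": self.explain(sensor_window),
            "xpa_score": self.xpa_score,
        }
```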
Cambier, Yann (2018). Big Data: Racing to platform
8. CONCLUSION

Further research and development are needed to make XAI models more mainstream and able to produce useful results. The inability to trust DNNs has reduced the usability of these complex models in aerospace predictive maintenance applications.

More investigation is required into using DNNs to predict maintenance measures such as remaining useful life (RUL) and time-to-failure (TTF). Research in XAI would generally help to accelerate the implementation of AI/ML in the aerospace domain, and specifically help to facilitate compliance, transparency, and trust.

Our future research focuses on the following key areas:

• Using DNNs to improve the effectiveness of prediction models in aerospace;
• Understanding the behaviour of DNNs by clarifying the conditions for specific outcomes in aerospace predictive maintenance;
• Advancing the possible XPA measurement framework for model-specific XAI to support compliance and adoption.

REFERENCES

DARPA (2017). Explainable artificial intelligence. Defense Advanced Research Projects Agency. Viewed 08 April 2020, https://www.darpa.mil/program/explainable-artificial-intelligence.
EASA (2020). A human-centric approach to AI in aviation. EASA. Viewed 21 April 2020, https://www.easa.europa.eu/newsroom-and-events/news/easa-artificial-intelligence-roadmap-10-published.
Jalali, A., Heistracher, C., Schindler, A. & Haslhofer, B. (2019). Understandable deep neural networks for predictive maintenance in the manufacturing industry. ERCIM News. Viewed 18 April 2020, https://ercim-news.ercim.eu/en116/r-i/understandable-deep-neural-networks-for-predictive-maintenance-in-the-manufacturing-industry.
Airbus (2020). Data revolution in aviation. Airbus. Viewed 08 April 2020, https://www.airbus.com/public-affairs/brussels/our-topics/innovation/data-revolution-in-aviation.html.
Lufthansa Technik (2020). AVIATAR, the digital operations suite. Lufthansa Technik. Viewed 11 April 2020, https://www.lufthansa-technik.com/aviatar.
Cooper, T., Reagan, I., Porter, C. & Precourt, C. (2019). Global fleet & MRO market forecast commentary 2019–2029. Viewed 11 April 2020, https://www.oliverwyman.com/our-expertise/insights/2019/jan/global-fleet-mro-market-forecast-commentary-2019-2029.html.
Cambier, Y. (2018). Big data: racing to platform maturity. AircraftIT. Viewed 11 April 2020, https://www.aircraftit.com/articles/big-data-racing-to-platform-maturity/.
U.S. Department of Energy (2010). Operations & maintenance best practices – a guide to achieving operational efficiency. U.S. Department of Energy. Viewed 18 April 2020, https://www.energy.gov/sites/prod/files/2013/10/f3/omguide_complete.pdf.
Rio, R. (2015). Optimize asset performance with industrial IoT and analytics. ARC Advisory Group. Viewed 11 April 2020, https://www.arcweb.com/blog/optimize-asset-performance-industrial-iot-and-analytics-0.
Cranfield (2008). Integrated Vehicle Health Management Centre (IVHM). Cranfield University. Viewed 11 April 2020, https://www.cranfield.ac.uk/centres/integrated-vehicle-health-management-ivhm-centre.
NASA (1992). Research and technology goals and objectives for Integrated Vehicle Health Management (IVHM). NASA-CR-192656. Viewed 19 April 2020, https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19930013844.pdf.


Google (2020). AI explainability whitepaper. Google. Viewed 11 April 2020, https://storage.googleapis.com/cloud-ai-whitepapers/AI%20Explainability%20Whitepaper.pdf.
Jalali, A., Heistracher, C., Schindler, A., Haslhofer, B., Nemeth, T., Glawar, R., Sihn, W. & De Boer, P. (2019). Predicting time-to-failure of plasma etching equipment using machine learning. In Proceedings of the IEEE International Conference on Prognostics and Health Management (PHM 2019), June 17–19, 2019, San Francisco, USA.
Nadai, N., Melani, A., Souza, G. & Nabeta, S. (2017). Equipment failure prediction based on neural network analysis incorporating maintainers inspection findings. Annual Reliability and Maintainability Symposium (RAMS), Orlando, FL, pp. 1–7. doi: 10.1109/RAM.2017.7889684.
Modi, P. (2020). How AI is providing digital twins for predictive maintenance in oil and gas. Forbes. Viewed 17 April 2020, https://www.forbes.com/sites/nvidia/2018/06/21/how-ai-is-providing-digital-twins-for-predictive-maintenance-in-oil-and-gas/#395b27384780.
Stetco, A., Mohammed, A., Djurović, S., Nenadic, G. & Keane, J. (2019). Wind turbine operational state prediction: towards featureless, end-to-end predictive maintenance. IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, pp. 4422–4430.
Temenos (2020). British Business Bank success story. Temenos. Viewed 27 June 2020, https://www.temenos.com/community/success-stories/british-business-bank-success-story/.
Casey, K. (2019). What is explainable AI? The Enterprisers Project. Viewed 11 April 2020, https://enterprisersproject.com/article/2019/5/what-explainable-ai?page=1.
Adadi, A. & Berrada, M. (2018). Peeking inside the black-box: a survey on Explainable Artificial Intelligence (XAI). IEEE Access. doi: 10.1109/ACCESS.2018.2870052.
ICO (2018). General Data Protection Regulation. ICO. Viewed 11 April 2020, https://gdpr-info.eu/.
Wachter, S., Mittelstadt, B. & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), pp. 76–99. https://doi.org/10.1093/idpl/ipx005.
IJCAI (2017). Workshop on Explainable Artificial Intelligence (XAI). IJCAI. Viewed 18 April 2020, http://home.earthlink.net/~dwaha/research/meetings/ijcai17-xai/.
Hall, P., Ambati, S. & Phan, W. (2017). Ideas on interpreting machine learning. O'Reilly. Viewed 18 April 2020, https://www.oreilly.com/radar/ideas-on-interpreting-machine-learning/.
Ethical Institute (2019). The 8 machine learning principles. Ethical Institute. Viewed 18 April 2020, https://ethical.institute/index.html#contact.
Carvalho, D., Pereira, E. & Cardoso, J. (2019). Machine learning interpretability: a survey on methods and metrics. Electronics, 8(8), pp. 1–34. doi: 10.3390/electronics8080832.
CVPR (2019). CVPR-19 workshop on explainable AI. CVPR. Viewed 08 April 2020, https://explainai.net/.
Springenberg, J., Dosovitskiy, A., Brox, T. & Riedmiller, M. (2014). Striving for simplicity: the all convolutional net. arXiv:1412.6806.
Hinton, G., Vinyals, O. & Dean, J. (2015). Distilling the knowledge in a neural network. arXiv:1503.02531.
Wickramasinghe, N., Nguyen, P., Truyen, T. & Venkatesh, S. (2016). Deepr: a convolutional net for medical records. IEEE Journal of Biomedical and Health Informatics, 21(1), pp. 22–30.
Choi, E., Bahadori, M. T., Kulas, J. A., Schuetz, A., Stewart, W. F. & Sun, J. (2016). RETAIN: an interpretable predictive model for healthcare using reverse time attention mechanism. arXiv:1608.05745v4.
Kim, B., Khanna, R. & Koyejo, O. (2016). Examples are not enough, learn to criticize! Criticism for interpretability. In Proceedings of the Conference on Advances in Neural Information Processing Systems (NIPS 2016), pp. 2280–2288.
Ribeiro, M. T., Singh, S. & Guestrin, C. (2016). "Why should I trust you?": explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144.
Guidotti, R., Monreale, A., Ruggieri, S., Pedreschi, D., Turini, F. & Giannotti, F. (2018). Local rule-based explanations of black box decision systems. https://arxiv.org/abs/1805.10820.
Mishra, S., Sturm, B. L. & Dixon, S. (2017). Local interpretable model-agnostic explanations for music content analysis. In Proceedings of ISMIR, pp. 537–543.
Ribeiro, M. T., Singh, S. & Guestrin, C. (2018). Anchors: high-precision model-agnostic explanations. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 1–9.
Lei, J., G'Sell, M., Rinaldo, A., Tibshirani, R. J. & Wasserman, L. (2017). Distribution-free predictive inference for regression. Journal of the American Statistical Association. https://doi.org/10.1080/01621459.2017.1307116.
Lundberg, S. & Lee, S. (2017). A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems 30 (NIPS 2017).
Shrikumar, A., Greenside, P. & Kundaje, A. (2017). Learning important features through propagating activation differences. In Proceedings of the 34th International Conference on Machine Learning, Volume 70 (ICML'17), pp. 3145–3153.


Sundararajan, M., Taly, A. & Yan, Q. (2017). Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning, Volume 70 (ICML'17), pp. 3319–3328.
Kim, B., Wattenberg, M., Gilmer, J., Cai, C., Wexler, J. & Viegas, F. (2018). Interpretability beyond feature attribution: quantitative testing with concept activation vectors (TCAV). In International Conference on Machine Learning, pp. 2673–2682.
Frosst, N. & Hinton, G. (2017). Distilling a neural network into a soft decision tree. arXiv:1711.09784.
Li, O., Liu, H., Chen, C. & Rudin, C. (2018). Deep learning for case-based reasoning through prototypes: a neural network that explains its predictions. In AAAI 2018.
Arik, S. O. & Pfister, T. (2019). Attention-based prototypical learning towards interpretable, confident and robust deep neural networks. arXiv:1902.06292.
Kapishnikov, A., Bolukbasi, T., Viegas, F. & Terry, M. (2019). XRAI: better attributions through regions. In Proceedings of ICCV 2019.
Kahng, M., Andrews, P. Y., Kalro, A. & Chau, D. H. P. (2018). ActiVis: visual exploration of industry-scale deep neural network models. IEEE Transactions on Visualization and Computer Graphics, 24, pp. 88–97.
IBM (2020). AI Fairness 360. IBM. Viewed 08 April 2020, https://developer.ibm.com/open/projects/ai-fairness-360/.
Microsoft (2020). Model interpretability in Azure Machine Learning. Microsoft. Viewed 08 April 2020, https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability.
What If Tool (2020). What If. Google. Viewed 08 April 2020, https://pair-code.github.io/what-if-tool/.
H2O.ai (2020). Explaining explainable AI. H2O. Viewed 08 April 2020, https://www.h2o.ai/explainable-ai/.
Distill (2020). Machine learning research should be clear, dynamic and vivid. Distill. Viewed 08 April 2020, https://distill.pub/about/.
Skater (2020). Oracle Skater. Oracle. Viewed 08 April 2020, https://github.com/oracle/Skater.
Donadello, I., Serafini, L. & Garcez, A. D. (2017). Logic tensor networks for semantic image interpretation. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI 2017), pp. 1596–1602.

BIOGRAPHIES

Bibhudhendu Shukla. Bib was born in India and graduated with First Class with Distinction in Production Engineering, completing his engineering degree at the National Institute of Technology, Jamshedpur, India. Bib has worked for Amadeus, British Airways, Lufthansa Group, Thomas Cook, TUI, and Virgin Atlantic in several technology roles, gaining experience in cloud big-data technologies, data science, MRO systems, and IVHM. Bib completed his MSc in Six Sigma (Service) at Southampton Solent University, UK. Currently, Bib is a part-time research student at Cranfield University, researching the use of DNNs in IVHM. Bib also works as a Principal Enterprise Solution Architect at Virgin Atlantic and is involved in designing maintenance & engineering systems, big data analytics, and cloud-native solutions.

Dr Ip-Shing Fan. Fan was born and studied in Hong Kong, graduating with First Class Honours in Industrial Engineering. He completed his graduate engineer training at Qualidux Industrial Co Ltd in Hong Kong. He was awarded a Commonwealth Scholarship and completed his PhD in Computer Integrated Manufacturing at Cranfield. In 1990, Fan started work in The CIM Institute, endowed by IBM at Cranfield, to carry out research, education, and consultancy in new applications of computers in manufacturing. He led many European and UK funded research programmes creating new tools and methods in knowledge-based engineering design, business performance, quality management, supply chain, and complexity science. The complex dynamics of the people factor in technology implementation prompted him to create a European research consortium for the Framework 5 research project BEST (Better Enterprise System Implementation). The 12-partner, €4 million project created a body of knowledge that Fan worked to translate into a Masters-level teaching curriculum. This holistic thinking also influences research developments that bring together business, technology, and organization factors. Since 2010, Fan has spent time in the IVHM Centre leading the Integrated Vehicle Health Management (IVHM) Design System project, which has delivered industry-relevant solutions to partners and applied projects, with tools to carry out cost-benefit analysis and design methods and tools to add IVHM capability to Unmanned Air Vehicles. He is the Course Director of the MSc in Management and Information Systems at Cranfield University, developing postgraduates who understand the interaction between IT, organization, and people behaviour. Fan is the Chairman of the Bedford Branch of the BCS and sits on the BCS Council. He is also a member of the IFIP (International Federation for Information Processing) Working Group 5.8 on Enterprise Interoperability.

Professor Ian Jennions. Ian's career spans over 40 years, working mostly for a variety of gas turbine companies. He has a Mechanical Engineering degree and a PhD in CFD, both from Imperial College, London. He has worked for Rolls-Royce (twice), General Electric, and Alstom in several technical roles, gaining experience in aerodynamics, heat transfer, fluid systems, mechanical design, combustion, services, and IVHM.


Ian moved to Cranfield in July 2008 as Professor and Director of the newly formed IVHM Centre. The Centre is funded by a number of industrial companies, including Boeing, BAE Systems, Thales, Meggitt, MOD, and Alstom Transport. He has led the development and growth of the Centre, in research and education, since its inception. The Centre offers an IVHM short course each year and has offered an IVHM MSc. Ian is on the editorial board of the International Journal of Condition Monitoring, a Director of the PHM Society, Vice-chairman of SAE's IVHM Steering Group, a contributing member of the SAE HM-1 IVHM committee, a Chartered Engineer, and a Fellow of IMechE, RAeS, and ASME. He is the editor of five recent SAE books: 1. IVHM – Perspectives on an Emerging Field; 2. IVHM – Business Case Theory and Practice; 3. IVHM – The Technology; 4. IVHM – Essential Reading; 5. IVHM – Implementation and Lessons Learned; and a co-author of the book 'No Fault Found – The Search for the Root Cause'.
