Opportunities For Explainable Artificial Intelligence in Aerospace Predictive Maintenance
Bibhudhendu Shukla1, Ip-Shing Fan2, and Ian Jennions3
1,2,3 IVHM Centre, Building 70, Cranfield University, Cranfield, Bedford, MK43 0AL, UK
bib.shukla@cranfield.ac.uk
i.s.fan@cranfield.ac.uk
i.jennions@cranfield.ac.uk
ABSTRACT

This paper aims to look at the value and the necessity of XAI (Explainable Artificial Intelligence) when using DNNs (Deep Neural Networks) in PM (Predictive Maintenance). The context is the field of Aerospace IVHM (Integrated Vehicle Health Management) when using DNNs. An XAI system is necessary so that the result of an AI (Artificial Intelligence) solution is clearly explained and understood by a human expert, which would allow the IVHM system to use XAI-based PM to improve the effectiveness of the predictive model. An IVHM system would be able to utilize that information to assess the health of the subsystems and their effect on the aircraft. Even if the underlying mathematical principles of DNNs are understood, the models lack an understandable insight and have difficulty generating the underlying explanatory structures (i.e. they are black boxes). This calls for a process, or system, that enables decisions to be explainable, transparent, and understandable. It is argued that research in XAI would generally help to accelerate the implementation of AI/ML (Machine Learning) in the aerospace domain, and specifically help to facilitate compliance, transparency, and trust. This paper covers the following areas:

• Challenges and benefits of AI-based PM in aerospace
• Why XAI is required for DNNs in aerospace PM
• Evolution of XAI models and industry adoption
• Framework for XAI using XPA (Explainability Parameters)
• Discussion of future research in adopting XAI and DNNs to improve IVHM

1. INTRODUCTION

XAI is an AI system that explains how the decision-making rationale of the system operates, in simple human language, while retaining high prediction accuracy (DARPA, 2017). XAI is human-centric and provides an understandable explanation of how an AI application produces its outputs (EASA, 2020).

The rise of the IoT (Internet of Things) and new analytical tools has given aircraft operators and airlines new ways to realize significant benefits from the terabytes of data generated by their aircraft. Engine and airframe manufacturers have been installing various sensors in their products for decades, but the few data points these sensors produced have traditionally been used only for diagnostics. Today's aircraft carry thousands of sensors; the Airbus A350 has nearly 250,000 of them, generating about 2.5 TB of data per day (Airbus, 2020). Sifting manually through all that data to extract actionable information would be overwhelming.

Airlines face the challenge of enhancing the availability of their fleet by avoiding flight delays and cancellations, and consequently reducing costs, to be able to support the forecasted growth of 38,000 aircraft by 2025 (Lufthansa Technik, 2020).

With the expansion of business in the commercial aviation industry, the MRO (maintenance, repair, and overhaul) market that supports it is also expected to grow: total MRO spend is expected to rise to $116 billion by 2029, up from $81.9 billion in 2019 (Cooper et al. 2019).

The figure below shows the different categories of maintenance policies used by various organizations.
text recognition, image classification, etc. Like other applications, the data assembled for predictive maintenance are sensor parameters collected over time. Utilizing deep models could reduce the manual feature engineering effort and automatically construct the relevant factors, including the health factors that indicate the health state of the aircraft or its components and the estimated remaining runtime before the next downtime (Jalali et al. 2019). This allows aircraft operators to be better prepared by reducing the surprise of random asset failures.
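To make this concrete, below is a minimal sketch (not taken from the paper) of a DNN that maps fixed-length windows of multivariate sensor readings to an RUL-style health estimate. The window length, sensor count, layer sizes, and the random stand-in data are all illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

WINDOW, N_SENSORS = 50, 14   # assumed window length and sensor count

# An LSTM over each sensor window learns temporal health features;
# the final dense layer regresses a single remaining-runtime value.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(WINDOW, N_SENSORS)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),   # predicted remaining runtime (hours/cycles)
])
model.compile(optimizer="adam", loss="mse")

# Random arrays stand in for windowed engine sensor histories and labels.
X = np.random.rand(256, WINDOW, N_SENSORS).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0))   # health estimate for one window
```

In practice the windows would come from the aircraft's condition-monitoring data stream, with labels derived from run-to-failure histories.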
There has been rapid advancement of DNNs because of the ready availability of low-cost GPUs (Graphics Processing Units), high-quality data in real time, and highly scalable cloud infrastructure. AI has evolved from linear models to deep models and meta-learning models, as shown in Figure 3 below.

[Figure 3 diagram: a progression from Heuristics/Rules (e.g. "If temperature < 200 C: start heater"), to Linear Models and Decision Trees (e.g. "0.94*last_years_sale + 1.16*this_month_sales + 0.28*product_age = total_sales"; "If animal has feathers: classify as bird"), to Deep Models and Ensembles, to Meta-Learning.]

Figure 3 – Evolution of AI/Machine Learning (Google, 2020)
The shift from explicitly programmed rules to using computers to optimise models (deep models) to fit the data has opened new opportunities in predictive accuracy. The advanced and more accurate models have resulted in a paradigm shift along multiple dimensions (Google, 2020):

• Expressiveness enables fitting a wide range of functions in an increasing number of domains like forecasting, ranking, autonomous driving, particle physics, drug discovery, etc.
• Versatility unlocks data modalities (image, audio, speech, text, tabular, time series, etc.) and enables joint/multi-modal applications.
• Adaptability to small data regimes comes through transfer and multi-task learning.
• Custom optimized hardware like GPUs and TPUs (Tensor Processing Units) has increased efficiency, enabling practitioners to train complex models faster and more cheaply on large volumes of data.

Some examples of DNNs in predictive maintenance include:

• Analysis of technical parameters to optimize maintenance and operating processes and prevent business interruptions (Jalali et al. 2019).
• A reliability-based methodology to support decision-making regarding the operational performance of equipment (Nadani et al. 2017).
• Deep learning, GPUs, and the concept of "Digital Twins", which offer enormous potential benefits for predictive maintenance in oil and gas (Modi, 2020).
• A novel intelligent method based on DNNs, proposed to overcome the deficiencies of existing intelligent diagnosis methods (Jia et al. 2016).
• DNN architectures based on convolutional layers that can classify the operating state of a wind turbine in terms of its load and speed without the use of ex-ante feature engineering (Stetco et al. 2019).

2. WHY XAI IS REQUIRED FOR DNNS?

Despite the promising features of DNNs, their complex architecture results in a lack of transparency. In their conventional form, DNNs are considered black-box models: they are controlled by complex nonlinear interactions between many parameters that are difficult to understand. It is very complicated to interpret and explain their outcomes, which is a severe issue that currently prevents their adoption in critical applications and the manufacturing domain (Jalali et al. 2019).

For AI systems operating as black boxes, XAI may not be that important in simpler use cases like AI-powered chatbots or sentiment analysis of social feeds. But being able to understand the decision-making process is mission-critical for heavily regulated, high-human-impact use cases like aircraft maintenance, military applications, autonomous vehicles, aerial navigation, and drones. As people rely more and more on AI in their everyday lives, understanding and interpreting AI models becomes paramount, as it allows these models to be changed and improved over time. It is important to look at the role of humans in adopting the models and to increase their trust in a model or prediction; otherwise, they will not use it. For example, BBB (British Business Bank) implemented Temenos' XAI platform, which allows them to explain in plain language to their customers and regulators how AI-based decisions are taken. The bank has successfully reduced its exposure to risk, eliminated time-consuming manual work, and increased its pass rate by 20% (Temenos, 2020).

The true value of an AI solution is realized when the user changes their behavior or takes action based on the AI output or prediction, and this trust is built when users feel empowered and know how the AI system came up with the recommendation or output (Casey, 2019).

Complex models have become increasingly opaque, and as these models are still fundamentally built around correlation and association, this has resulted in several challenges (Google, 2020):
• Loss of debuggability and transparency in testing: this leads to low trust as well as the inability to fix or improve the models and/or outcomes.
• Lack of control: the model user has a reduced ability to locally adjust model behavior in problematic instances, due to the lack of visibility into the hidden layers of complex deep learning models.
• Biased outcomes: undesirable data amplification reflecting biases that do not agree with our societal norms and principles.
• Exceptional situations: is there an exceptional situation where the system may fail?
• Incorrect correlations learned from the data: these often inhibit the model's ability to generalize, leading to poor real-world results. Incorrect alarms issued by the predictive maintenance model could be very expensive.
• Proxy objectives: these end up producing large differences between how models perform offline, often on matching proxy metrics, compared to how they perform when deployed in applications.

The figure below shows the four key reasons for explaining complex models and why such challenges need to be answered.

[Figure 4 diagram: four reasons to explain: To Justify, To Control, To Improve, To Discover.]

Figure 4 – Reasons to explain complex algorithms (Adadi, Amina & Berrada, 2018)
Especially in aerospace PM, the reason to justify is very critical. The lack of answers from AI systems leads to muted trust and limited large-scale adoption. This lack of explainability has hindered the adoption of these models, especially in regulated industries, e.g. aerospace, banking, finance, and healthcare.

The European Union introduced a right to explanation in the GDPR (General Data Protection Regulation) as an attempt to deal with the potential problems stemming from the rising importance of algorithms (ICO, 2018). The implementation of the regulation began in 2018, and the right to explanation in the GDPR covers only the local aspect of interpretability (ICO, 2018).

In addition to needing to probe the internals of increasingly complex models, which in and of itself is a challenging computational problem, a successful XAI system must provide explanations to people, meaning that the field must draw on lessons from philosophy, cognitive psychology, HCI (Human-Computer Interaction) and the social sciences (Google, 2020).

A final challenge that XAI methods for DL (Deep Learning) need to address is providing explanations that are accessible to society, policymakers, and the law. Conveying explanations in non-technical terms will be paramount both to handle ambiguities and to develop the social right to an explanation in the EU GDPR (Wachter et al. 2017).

The scope of interpretability can be divided into two categories: global and local. Global interpretations help us understand the entire conditional distribution modeled by the trained response function, based on average values, while local interpretations promote understanding of small regions of the conditional distribution, such as clusters of input records and their corresponding predictions, or deciles of predictions and their corresponding input rows (Hall, 2017). A simple contrast between the two views is sketched below.
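The following is a minimal illustration of that global/local split, using a stand-in scikit-learn model and synthetic data (both assumptions, not the paper's setup): a permutation-importance score summarizes behavior globally, while a finite-difference probe around one instance gives a local view.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

X, y = make_regression(n_samples=500, n_features=6, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Global view: average effect of shuffling each feature on model performance.
global_imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("global importances:", global_imp.importances_mean.round(3))

# Local view: finite-difference sensitivity of one prediction to each feature.
x0 = X[0].copy()
base = model.predict(x0.reshape(1, -1))[0]
local = [model.predict((x0 + eps).reshape(1, -1))[0] - base
         for eps in np.eye(6) * 0.1]
print("local sensitivities:", np.round(local, 3))
```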
Deep learning models can identify and abstract complex patterns in data that humans may not be able to see. However, there are many situations where, by introducing a-priori expert domain knowledge into the features, or by abstracting key patterns identified in the deep learning models as actual features, it would be possible to break down the model into subsequent, more explainable pieces (Ethical Institute, 2019). Recalling that a good explanation needs to influence the mental model of the user, i.e., the representation of external reality using, among other things, symbols, it seems natural that the symbolic learning paradigm is appropriate for producing an explanation, and it could provide convincing explanations while keeping or improving generic performance (Donadello et al. 2017).

3. EVOLUTION OF XAI MODELS FOR DNNS

The last six years have seen a big push to understand the decisions made by complex multi-layered DNNs and to build trust in those models.

The model-independent (model-agnostic) approach applies to all classes of algorithms or learning techniques, with the internal workings of the model treated as an unknown black box. The model-specific approach is used only for specific techniques or narrow classes of techniques, with the internal workings of the model treated as a white box. A model-agnostic local method can be sketched in a few lines, as shown below.
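Here is a minimal local surrogate in the spirit of LIME (Ribeiro et al. 2016), listed in Table 1 below: perturb one instance, query the black box, and fit a distance-weighted linear model around it. The black-box model, data, and kernel choice are illustrative stand-ins.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=400, n_features=5, random_state=1)
black_box = RandomForestRegressor(random_state=1).fit(X, y)

x0 = X[0]
rng = np.random.default_rng(1)
Z = x0 + rng.normal(scale=0.5, size=(500, 5))      # perturbed neighbours
preds = black_box.predict(Z)                        # black-box answers
weights = np.exp(-np.linalg.norm(Z - x0, axis=1))   # nearer = more weight

# The weighted linear fit is the local, human-readable explanation.
surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
print("local linear coefficients:", surrogate.coef_.round(3))
```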
The model-independent XAI models may apply to any model, but they may be more limited compared to the model-specific models (Carvalho, 2019). There is an increasing interest in model-specific XAI models, as seen in papers published in the CVPR (Conference on Computer Vision and Pattern Recognition) workshop on XAI (CVPR, 2019).

Below is a list of existing XAI models which have looked at different aspects of DNNs to improve explainability.

Year      | XAI Model                                              | Reference                                                     | Model-Agnostic or Model-Specific | Global or Local
----------|--------------------------------------------------------|---------------------------------------------------------------|----------------------------------|----------------
2014      | Guided backpropagation                                 | Springenberg et al. 2014                                      | CNN-specific                     | Global
2015      | Distilling the knowledge in a neural network           | Hinton et al. 2015                                            | Agnostic                         | Global
2015-2016 | DeepR (Deep Record)                                    | Wickramasinghe et al. 2016                                    | CNN-specific                     | Global
2016      | RETAIN (Reverse Time Attention Model)                  | Choi et al. 2016                                              | RNN-specific                     | Local
2016      | MMD (Maximum Mean Discrepancy) Critic                  | Kim et al. 2016                                               | K-medoid clustering              | Global
2016-2018 | LIME (Local Interpretable Model-Agnostic Explanation)  | Ribeiro et al. 2016; Guidotti et al. 2018; Mishra et al. 2017 | Agnostic                         | Local
2017      | Anchors                                                | Ribeiro et al. 2018                                           | Agnostic                         | Local
2017      | LOCO (Leave One Covariate Out)                         | Lei et al. 2017                                               | Agnostic                         | Local
2017      | SHAP (SHapley Additive exPlanations)                   | Lundberg & Lee 2017                                           | Agnostic                         | Local
2017      | DeepLift                                               | Shrikumar et al. 2017                                         | RNN-specific                     | Global
2017      | Integrated Gradients                                   | Sundararajan et al. 2017                                      | Agnostic                         | Global
2017      | TCAV (Testing with Concept Activation Vectors)         | Kim et al. 2018                                               | Agnostic                         | Global
2017      | Distilling a neural network into a soft decision tree  | Frosst & Hinton 2017                                          | Agnostic                         | Global
2018-2019 | Attention-Based Prototypical Learning                  | Li et al. 2018; Arik & Pfister 2019                           | Agnostic                         | Global
2019      | XRAI                                                   | Kapishnikov et al. 2019                                       | Agnostic                         | Global

Table 1 – Evolution of typical XAI models dedicated to explaining DNNs

One of the key columns in the above table shows whether an XAI model has global or local interpretability. This distinction matters for accuracy: because small sections of the conditional distribution are more likely to be linear, monotonic, or otherwise well-behaved, local explanations can be more accurate than global explanations (Hall, 2017). One of the attribution methods from the table, Integrated Gradients, is sketched below.
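Integrated Gradients (Sundararajan et al. 2017) attributes a prediction to each input by averaging gradients along a straight-line path from a baseline to the input. Below is a minimal sketch; the tiny Keras model, the all-zero baseline, and the step count are illustrative assumptions.

```python
import tensorflow as tf

# Placeholder differentiable model standing in for a trained PM network.
model = tf.keras.Sequential([tf.keras.layers.Dense(8, activation="relu"),
                             tf.keras.layers.Dense(1)])
x = tf.random.normal([1, 4])          # input to explain
b = tf.zeros([1, 4])                  # baseline (e.g. all-zero sensors)
steps = 50

# Points along the straight-line path from baseline b to input x.
alphas = tf.reshape(tf.linspace(0.0, 1.0, steps), [steps, 1])
path = b + alphas * (x - b)
with tf.GradientTape() as tape:
    tape.watch(path)
    preds = model(path)
grads = tape.gradient(preds, path)    # dF/dx at each path point

# attribution_i = (x_i - b_i) * average gradient along the path
ig = (x - b) * tf.reduce_mean(grads, axis=0, keepdims=True)
print("attributions:", ig.numpy().round(3))
```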
4. INDUSTRY ADOPTION OF XAI

One of the most notable entities in this research field is DARPA (the Defense Advanced Research Projects Agency), which, funded by the U.S. Department of Defense, created the XAI program for funding academic and military research, resulting in funding for 11 U.S. research laboratories (DARPA, 2018). Google has made public its research and practices in different AI-related areas, one of which is entirely focused on explainability (What if tool, 2020). Apart from strategies and recommended practices, explainability is also one of the main focuses in currently commercialized AI solutions and products. Facebook and Georgia Tech published a paper presenting an interactive visual exploration tool for industry-scale DNN models (Kahng et al. 2018). The EASA AI roadmap has highlighted the importance of XAI in the aviation domain (EASA, 2020).
Some of the recent open-source XAI platforms have been developed to help build trust in AI and provide transparency, i.e. IBM AI Fairness 360, Microsoft Model interpretability in Azure ML, Google's What-If Tool, H2O.ai's H2O Platform, Distill, and Oracle's Skater:

• IBM AI Fairness 360: "The AI Fairness 360 toolkit (AIF360) is an open-source software toolkit that can help detect and remove bias in machine learning models. It enables developers to use state-of-the-art algorithms to regularly check for unwanted biases from their machine learning pipeline and to mitigate any biases that are discovered. AIF360 enables AI developers and data scientists to easily check for biases at multiple points along their machine learning pipeline, using the appropriate bias metric for their circumstances. It also provides a range of state-of-the-art bias mitigation techniques that enable the developer or data scientist to reduce any discovered bias. These bias detection techniques can be deployed automatically to enable an AI development team to perform systematic checking for biases like checks for development bugs or security violations in a continuous integration pipeline" (IBM, 2020).
• Microsoft Model interpretability in Azure ML: "Understanding what AI models are doing is super important both from a functional as well as ethical aspects" (Microsoft, 2020).
• Google's What-If Tool: "Building effective machine learning models means asking a lot of questions. Look for answers using the What-if Tool, an interactive visual interface designed to probe your models better" (What if tool, 2020).
• H2O.ai's H2O Platform: "H2O Driverless AI does explainable AI today with its MLI (Machine Learning Interpretability) module. This capability in H2O Driverless AI employs a unique combination of techniques, and methodologies, such as LIME, Shapley, surrogate decision trees, partial dependence and more, in an interactive dashboard to explain the results of both Driverless AI models and external models" (H2O.ai, 2020). A brief sketch of the surrogate decision tree idea follows this list.
• Distill: "Machine learning will fundamentally change how humans and computers interact. It's important to make those techniques transparent, so we can understand and safely control how they work" (Distill, 2020).
• Oracle's Skater: "Skater is a unified framework to enable Model Interpretation for all forms of models to help one build an Interpretable machine learning system often needed for real-world use-cases" (Skater, 2020).
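The surrogate decision tree named in the H2O description can be sketched with plain scikit-learn: fit an interpretable tree to a black-box model's predictions and read it like a rulebook. The data and models below are illustrative stand-ins, not any platform's implementation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train a shallow tree to mimic the black box's outputs, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the readable tree agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate))   # the human-readable rule set
```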
5. COMPLIANCE CHALLENGES IN AEROSPACE MRO

Compliance is a never-ending process in the aerospace industry, and the regulatory requirements across most industries are constantly evolving. A commercial aircraft must be serviced after a certain number of flight hours to remain compliant with FAA (Federal Aviation Administration), EASA (European Union Aviation Safety Agency), and ICAO (International Civil Aviation Organization) standards.

As airworthiness authorities, OEMs (Original Equipment Manufacturers), and airlines come to depend on AI-based dynamic systems, clearer accountability will be required in decision-making processes to ensure trust and transparency. Evidence of this requirement gaining momentum can be seen in the launch of the first global conference exclusively dedicated to this emerging discipline, the International Joint Conference on Artificial Intelligence: Workshop on XAI (IJCAI, 2017).

It is important that people come to trust and buy into these new AI systems and adapt the way they work to get the most benefit out of them. XAI is meant to enable that, and this needs to be understood by organizational leadership. Organizations also need to work with the regulatory authorities to show sufficient evidence of how particular service schedules are decided based on predictive maintenance DNN models. Sharing data between different aviation organizations is still a challenge. With sustainability and carbon neutrality at the top of these organizations' agendas, there will be a push towards becoming more efficient in maintenance and using IVHM as a core piece to optimize the usage of the aircraft and its components.

On the other hand, the different aviation organizations need to come together with airworthiness authorities to support an XAI model framework and policies as a mandatory design principle, to support the adoption of DNNs in predictive maintenance.

Bringing the maintenance schedule forward based on failures forecasted by the AI model will only increase safety and help to plan maintenance tasks better. But deferring maintenance (on still-operative instruments and equipment) beyond the schedules recommended by OEMs would require increased trust in the PM models and more collaboration between OEMs, airlines/operators, and airworthiness authorities (FAA, EASA, ICAO, etc.). XAI models would help to increase transparency and trust in the AI models, and thus increase adoption.

6. LEVELS OF EXPLAINABILITY

The main aim of XAI models is to explain the AI models. The challenge is to have a consistent measurement framework to measure that explainability. There is also a challenge in how much testing is required, and in the success criteria for a consistent explanation of the models. Some key XPA are defined in the table below.
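As a purely hypothetical illustration of what recording and scoring such explainability parameters might look like in code, consider the sketch below. The parameter names (fidelity, stability, comprehensibility) and thresholds are assumptions for illustration, not the paper's XPA definitions.

```python
from dataclasses import dataclass

@dataclass
class XPARecord:
    # Hypothetical explainability parameters, not the paper's definitions.
    model_id: str
    fidelity: float         # how faithfully the explanation matches the model
    stability: float        # do similar inputs yield similar explanations?
    comprehensibility: int  # e.g. number of rules/factors a user must absorb

    def acceptable(self, min_fidelity: float = 0.9,
                   max_factors: int = 10) -> bool:
        # A consistent, testable success criterion for an explanation.
        return (self.fidelity >= min_fidelity
                and self.comprehensibility <= max_factors)

print(XPARecord("engine-rul-dnn-v1", 0.93, 0.88, 7).acceptable())
```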
[Figure 5 diagram: a cloud-based IVHM architecture linking big data sources (IoT database, wireless data collection, measurement history, historical faults, component condition) to DNN models for prognosis, XAI models with XPA, and an XAI interface through which users can say "I understand why", "I understand how accurate the model is", "I understand when the failure may occur", and "I understand when to take corrective action", all running on cloud computing.]

Figure 5 – Possible cloud-based IVHM system components required for XAIs

The figure above highlights how DNN models, XAI models, and XPA are integrated parts of any IVHM framework, and how they could fit into the overall system architecture. Thinking about and building XAI models while developing the DNN models would help increase the adoption of those models, and XPA should be part of the process used to measure the XAI models. Cloud infrastructure would also speed the adoption of an IVHM framework that implements DNN models: recent advancements in cloud computing have opened up easy access to the computing power needed for XAI.
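The figure's components suggest a simple runtime contract. Below is a hypothetical wiring of those blocks (DNN model, XAI model, XPA, XAI interface) as a single prediction-plus-explanation step; every name and signature here is illustrative, as the paper does not prescribe an API.

```python
from typing import Callable, Dict
import numpy as np

def ivhm_step(window: np.ndarray,
              dnn_predict: Callable[[np.ndarray], float],
              explain: Callable[[np.ndarray], Dict[str, float]]) -> Dict:
    rul = dnn_predict(window)               # DNN model: health prognosis
    attributions = explain(window)          # XAI model: why this estimate
    xpa = {"n_factors": len(attributions)}  # XPA: measure the explanation
    return {"rul_hours": rul,               # XAI interface: shown to users
            "top_factors": sorted(attributions, key=attributions.get)[-3:],
            "xpa": xpa}

# Toy stand-ins for the trained DNN and its explainer.
out = ivhm_step(np.zeros((50, 14)),
                dnn_predict=lambda w: 123.0,
                explain=lambda w: {"EGT": 0.7, "N1": 0.2, "FuelFlow": 0.1})
print(out)
```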
8. CONCLUSION

Further research and development are needed to make XAI models more mainstream and able to produce useful results. The inability to trust DNNs has reduced the usability of these complex models in aerospace predictive maintenance applications.

More investigation is required into DNNs that predict maintenance measures such as remaining useful life (RUL) and time-to-failure (TTF). Research in XAI would generally help to accelerate the implementation of AI/ML in the aerospace domain, and specifically help to facilitate compliance, transparency, and trust.

Our future research is focusing on the following key areas:

• Using DNNs to improve the effectiveness of prediction models in aerospace,
• Understanding the behaviour of DNNs by clarifying the conditions for specific outcomes in aerospace predictive maintenance,
• Advancing a possible XPA measurement framework for model-specific XAI to support compliance and adoption.

REFERENCES

DARPA (2017). Explainable artificial intelligence. Defense Advanced Research Projects Agency. Viewed 08 April 2020, https://www.darpa.mil/program/explainable-artificial-intelligence.
EASA (2020). A human-centric approach to AI in aviation. EASA. Viewed 21 April 2020, https://www.easa.europa.eu/newsroom-and-events/news/easa-artificial-intelligence-roadmap-10-published.
Jalali, A., Heistracher, C., Schindler, A. & Haslhofer, B. (2019). Understandable deep neural networks for predictive maintenance in the manufacturing industry. ERCIM News. Viewed 18 April 2020, https://ercim-news.ercim.eu/en116/r-i/understandable-deep-neural-networks-for-predictive-maintenance-in-the-manufacturing-industry.
Airbus (2020). Data revolution in aviation. Airbus. Viewed 08 April 2020, https://www.airbus.com/public-affairs/brussels/our-topics/innovation/data-revolution-in-aviation.html.
Lufthansa Technik (2020). Aviatar, the digital operations suite. Lufthansa Technik. Viewed 11 April 2020, https://www.lufthansa-technik.com/aviatar.
Cooper, T., Reagan, I., Porter, C. & Precourt, C. (2019). Global fleet & MRO market forecast commentary 2019-2029. Viewed 11 April 2020, https://www.oliverwyman.com/our-expertise/insights/2019/jan/global-fleet-mro-market-forecast-commentary-2019-2029.html.
Cambier, Y. (2018). Big data: racing to platform maturity. AircraftIT. Viewed 11 April 2020, https://www.aircraftit.com/articles/big-data-racing-to-platform-maturity/.
U.S. Department of Energy (2010). Operations & maintenance best practices: a guide to achieving operational efficiency. U.S. Department of Energy. Viewed 18 April 2020, https://www.energy.gov/sites/prod/files/2013/10/f3/omguide_complete.pdf.
Rio, R. (2015). Optimize asset performance with industrial IoT and analytics. ARC Advisory Group. Viewed 11 April 2020, https://www.arcweb.com/blog/optimize-asset-performance-industrial-iot-and-analytics-0.
Cranfield (2008). Integrated Vehicle Health Management Centre (IVHM). Cranfield University. Viewed 11 April 2020, https://www.cranfield.ac.uk/centres/integrated-vehicle-health-management-ivhm-centre.
NASA (1992). Research and technology goals and objectives for Integrated Vehicle Health Management (IVHM). NASA-CR-192656. Viewed 19 April 2020, https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19930013844.pdf.
Google (2020). AI Explainability Whitepaper. Google. Viewed 11 April 2020,
Sundararajan, M., Taly, A. & Yan, Q. (2017). Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning, Volume 70 (ICML'17). JMLR.org, 3319-3328.
Kim, B., Wattenberg, M., Gilmer, J., Cai, C., Wexler, J. & Viegas, F. (2018). Interpretability beyond feature attribution: quantitative testing with Concept Activation Vectors (TCAV). In International Conference on Machine Learning, 2673-2682.
Frosst, N. & Hinton, G. (2017). Distilling a neural network into a soft decision tree. arXiv:1711.09784, 2017.
Li, O., Liu, H., Chen, C. & Rudin, C. (2018). Deep learning for case-based reasoning through prototypes: a neural network that explains its predictions. In AAAI, 2018.
Arik, S. O. & Pfister, T. (2019). Attention-based prototypical learning towards interpretable, confident and robust deep neural networks. arXiv preprint arXiv:1902.06292, 2019.
Kapishnikov, A., Bolukbasi, T., Viegas, F. & Terry, M. (2019). XRAI: better attributions through regions. In Proc. ICCV, 2019.
Kahng, M., Andrews, P. Y., Kalro, A. & Chau, D. H. P. (2018). ActiVis: visual exploration of industry-scale deep neural network models. IEEE Trans. Vis. Comput. Gr. 2018, 24, 88-97.
IBM (2020). AI Fairness 360. IBM. Viewed 08 April 2020, https://developer.ibm.com/open/projects/ai-fairness-360/.
Microsoft (2020). Model interpretability in Azure Machine Learning. Microsoft. Viewed 08 April 2020, https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability.
What if Tool (2020). What If. Google. Viewed 08 April 2020, https://pair-code.github.io/what-if-tool/.
H2O.ai (2020). Explaining explainable AI. H2O. Viewed 08 April 2020, https://www.h2o.ai/explainable-ai/.
Distill (2020). Machine learning research should be clear, dynamic and vivid. Distill. Viewed 08 April 2020, https://distill.pub/about/.
Skater (2020). Oracle Skater. Oracle. Viewed 08 April 2020, https://github.com/oracle/Skater.
Donadello, I., Serafini, L. & Garcez, A. D. (2017). Logic tensor networks for semantic image interpretation. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI (2017), 1596-1602.
BIOGRAPHIES

Bibhudhendu Shukla. Bib was born and graduated in India with First Class with distinction in Production Engineering. He completed his engineering degree at the National Institute of Technology, Jamshedpur, India. Bib has worked for Amadeus, British Airways, Lufthansa Group, Thomas Cook, TUI & Virgin Atlantic in several technology roles, gaining experience in big data technologies in the cloud, data science, MRO systems and IVHM. Bib completed his MSc in Six Sigma (Service) at Southampton Solent University, UK. Currently Bib is a part-time research student at Cranfield University, working on research into using DNNs in IVHM. Bib also works as a Principal Enterprise Solution Architect at Virgin Atlantic and is involved in designing maintenance & engineering systems, big data analytics and cloud-native solutions.

Dr Ip-Shing Fan. Fan was born and studied in Hong Kong, graduating with First Class Honours in Industrial Engineering. He completed his graduate engineer training at Qualidux Industrial Co Ltd in Hong Kong. He was awarded a Commonwealth Scholarship and completed his PhD in Computer Integrated Manufacturing at Cranfield. In 1990, Fan started to work in The CIM Institute, endowed by IBM in Cranfield, to carry out research, education, and consultancy in new applications of computers in manufacturing. He led many European and UK funded research programs to create new tools and methods in knowledge-based engineering design, business performance, quality management, supply chain, and complexity science.

The complex dynamics of the people factor in technology implementation prompted him to create a European research consortium for the Framework 5 research project BEST (Better Enterprise System Implementation). The 12-partner, €4 million project created a body of knowledge that Fan worked to translate into Masters-level teaching curriculum. This holistic thinking also influences research developments that bring together business, technology and organization factors.

Since 2010, Fan has spent time in the IVHM Centre leading the Integrated Vehicle Health Management (IVHM) Design System project. This has delivered industry-relevant solutions to partners, and applied projects with tools to carry out cost-benefit analysis and design methods and tools to add IVHM capability to Unmanned Air Vehicles. He is the Course Director of the MSc in Management and Information Systems at Cranfield University, developing postgraduates who understand the interaction between IT, organization and people behaviour. Fan is the Chairman of the Bedford Branch of BCS and sits on the BCS Council. He is also a member of the IFIP (International Federation for Information Processing) Working Group 5.8 on Enterprise Interoperability.

Professor Ian Jennions. Ian's career spans over 40 years, working mostly for a variety of gas turbine companies. He has a Mechanical Engineering degree and a PhD in CFD, both from Imperial College, London. He has worked for Rolls-Royce (twice), General Electric and Alstom in several technical roles, gaining experience in aerodynamics, heat transfer, fluid systems, mechanical design, combustion, services and IVHM. Ian moved to Cranfield in July 2008 as Professor and Director of the newly formed IVHM Centre. The Centre is funded by a number of industrial companies, including Boeing, BAE Systems, Thales, Meggitt, MOD and Alstom Transport. He has led the development and growth of the Centre, in research and education, since its inception. The Centre offers an IVHM short course each year and has offered an IVHM MSc. Ian is on the editorial board of the International Journal of Condition Monitoring, a Director of the PHM Society, Vice-chairman of SAE's IVHM Steering Group, a contributing member of the SAE HM-1 IVHM committee, a Chartered Engineer and a Fellow of IMechE, RAeS and ASME. He is the editor of five recent SAE books: 1. IVHM - Perspectives on an Emerging Field; 2. IVHM - Business Case Theory and Practice; 3. IVHM - The Technology; 4. IVHM - Essential Reading; 5. IVHM - Implementation and Lessons Learned; and a co-author of the book 'No Fault Found - The Search for the Root Cause'.