Interpretability of Causal Discovery in Tracking Deterioration in a Highly Dynamic Process
Abstract
1. Introduction
- Proposing a novel approach that uses causal discovery, adapting the FCI algorithm to time series data to track degradation processes, together with proposed distance measures to quantify these changes;
- Developing visualizations that illustrate dynamic changes as a tool for communicating with domain experts, thereby achieving interpretable results;
- Comparing our degradation monitoring results with those obtained using the state-of-the-art LSTM-based autoencoder, Deep Transformer Networks for Anomaly Detection in Multivariate Time Series Data (TranAD), and UnSupervised Anomaly Detection on Multivariate Time Series (USAD) methods, none of which is considered interpretable.
2. Background and Related Work
2.1. Causal Discovery
- CD algorithms for independent and identically distributed (i.i.d.) data, i.e., non-time series data;
- CD algorithms for time series data.
- Independent: Each observation is not influenced by or dependent on any other observation. The occurrence or value of one data point does not affect the occurrence or value of another.
- Identically Distributed: All observations come from the same probability distribution. This implies that underlying statistical properties, such as mean, variance, and other distributional characteristics, do not change.
- Skeleton Construction: The PC algorithm begins by constructing an undirected graph, called the skeleton, based on conditional independence tests.
- Conditional Independence Tests: It tests for conditional independence between variables to identify potential causal relationships.
- V-Structure Identification: It identifies V-structures, which are indicative of potential causal relationships, in the undirected graph.
- Edge Orientation: The PC algorithm orients edges in the graph to form a partially directed acyclic graph (PDAG) by exploiting the identified V-structures.
- Causal discovery algorithms for time series data: Among the most popular causal discovery algorithms for time series data are tsFCI and PCMCI. The time series Fast Causal Inference (tsFCI) algorithm, adapted from the Fast Causal Inference (FCI) algorithm for non-temporal variables, is designed to infer causal relationships from time series data. It operates in two distinct phases: (i) an adjacency phase and (ii) an orientation phase. Leveraging temporal priority and consistency across time, it employs these phases to orient edges and constrain conditioning sets. The tsFCI algorithm yields a window causal graph, offering the advantage of detecting lagged hidden confounders; however, it cannot model cyclic contemporaneous causation or instantaneous relationships [7]. The viscose fiber production process described in Section 4, in contrast, exhibits cyclic behavior involving two phases, namely the rejection and filtration phases. Moreover, as described in Section 4.2, the data form a multivariate time series and thus contain instantaneous relationships between the features/variables. Due to these limitations, the tsFCI algorithm was not employed in our analysis.
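The skeleton-construction and conditional-independence steps of the PC algorithm listed above can be sketched in a few dozen lines. The following is a minimal illustration, not the implementation used in the paper: it tests conditional independence with a Fisher z-test on partial correlations (a common choice for continuous data) and prunes a complete undirected graph accordingly.

```python
from itertools import combinations

import numpy as np
from scipy import stats


def fisher_z_ci_test(data, i, j, cond, alpha=0.05):
    """Test X_i independent of X_j given cond, via partial correlation
    and Fisher's z-transform. Returns True if independence is accepted."""
    idx = [i, j] + list(cond)
    corr = np.corrcoef(data[:, idx], rowvar=False)
    prec = np.linalg.pinv(corr)  # precision matrix of the submatrix
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])  # partial correlation
    r = np.clip(r, -0.999999, 0.999999)
    n = data.shape[0]
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(cond) - 3)
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return p > alpha


def pc_skeleton(data, alpha=0.05, max_cond=2):
    """Skeleton phase of PC: start from a complete undirected graph and
    remove edges between conditionally independent variable pairs."""
    d = data.shape[1]
    adj = {i: set(range(d)) - {i} for i in range(d)}
    sepset = {}
    for size in range(max_cond + 1):
        for i in range(d):
            for j in sorted(adj[i]):
                if j < i:
                    continue
                others = adj[i] - {j}
                for cond in combinations(sorted(others), size):
                    if fisher_z_ci_test(data, i, j, cond, alpha):
                        adj[i].discard(j)
                        adj[j].discard(i)
                        sepset[(i, j)] = set(cond)  # used later for V-structures
                        break
    return adj, sepset
```

On a simulated chain X → Y → Z, the skeleton keeps the X–Y and Y–Z edges and removes X–Z, since X and Z are d-separated given Y; the stored separating sets then feed the V-structure identification step.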
2.2. Approaches to Unsupervised Degradation Monitoring
2.3. Causal Discovery in Manufacturing Industry
2.4. Interpreting Complex Systems: Explainable AI vs. Causal Interpretability
3. Approach
3.1. Data Preprocessing
3.2. Adapting the Causal Discovery Method (FCI)
- Initial Setup: Begin with a set of variables or features, given as V = {X, Y, Z, A, B}, where X, Y, Z, A, and B are the column vectors representing the variables or features in the data.
- Data Modification: Modify the data to include lagged versions of the features to capture temporal dependencies. This yields V' = {X_t, X_{t-1}, ..., X_{t-40}, ..., B_t, B_{t-1}, ..., B_{t-40}}, i.e., the data augmented with lagged versions of the original features up to 40 lags as additional features.
- Graph Formation: Create a complete undirected graph using the variables as vertices.
- Iterative Process: Test pairs of variables for conditional independence given subsets of other variables. Remove edges between variables that are conditionally independent.
- Graph Orientation: Orient edges based on certain criteria, such as the absence of direct causal influence between certain pairs of variables.
- Edge Removal: Further refine the graph by removing edges between pairs of variables that are d-separated given subsets of other variables.
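The data-modification step above, appending lagged copies of every feature, is straightforward with pandas. This is a sketch under the stated assumption of 40 lags; the column-naming scheme (`<feature>_lag<k>`) is our own, not taken from the paper:

```python
import pandas as pd


def add_lagged_features(df: pd.DataFrame, max_lag: int = 40) -> pd.DataFrame:
    """Append lag-1 ... lag-max_lag copies of every column, then drop the
    leading rows left incomplete by the shifting."""
    lagged = {
        f"{col}_lag{k}": df[col].shift(k)
        for col in df.columns
        for k in range(1, max_lag + 1)
    }
    out = pd.concat([df, pd.DataFrame(lagged, index=df.index)], axis=1)
    return out.dropna()
```

The resulting frame has (1 + max_lag) columns per original feature, so the causal discovery step can treat temporal dependencies as ordinary variable-to-variable edges.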
3.3. Similarity Measures
Algorithm 1: Jaccard similarity and Jaccard distance calculation [62]
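The Jaccard measures of Algorithm 1 reduce to set operations once each causal graph is represented by its edge set. A minimal sketch (treating edges as undirected via frozensets is our representation choice, not prescribed by the paper):

```python
def jaccard(edges_a: set, edges_b: set) -> tuple[float, float]:
    """Jaccard similarity and distance between two graphs, each given as a
    set of edges. Similarity = |intersection| / |union|; distance = 1 - similarity."""
    inter = len(edges_a & edges_b)
    union = len(edges_a | edges_b)
    sim = inter / union if union else 1.0  # two empty graphs are identical
    return sim, 1.0 - sim


# Undirected edges as frozensets, so ("p1", "pdiff") == ("pdiff", "p1").
g_ref = {frozenset({"p1", "pdiff"}), frozenset({"p2", "p3"})}
g_week = {frozenset({"p1", "pdiff"}), frozenset({"p1", "fm"})}
sim, dist = jaccard(g_ref, g_week)  # sim = 1/3, dist = 2/3
```

A distance of 0 means the weekly graph matches the reference exactly; values approaching 1 indicate that the causal structure has drifted substantially.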
4. Case Study
4.1. Process Description
- Filtration Machine: Filtration and Rejection Phase
4.2. Data Description
4.3. Degradation Monitoring
- Causal Discovery: After completing the preprocessing step, the rejection group data were available at a sampling frequency of 1 s. These data were then partitioned on a monthly basis, with each month further divided into four distinct weeks, as shown in Figure 8. This segmentation strategy was implemented to facilitate weekly monitoring of degradation in the viscose fiber production process. The decision to operate at a weekly frequency was motivated by the computational cost and time-consuming nature of causal graph computation; the computational complexity of the causal graphs using FCI is discussed below. Daily monitoring was deemed impractical, while monthly intervals were considered too infrequent, risking potential losses in the efficiency of the entire viscose fiber production system. As a result, the weekly basis provided a balanced and effective approach for timely degradation assessment.
- Causal Graphs and Reference Causal Graph: With the approach mentioned above, a total of 19 causal graphs were generated, each representing a specific week of each month from August (after the sieve was changed) to December 2022 as shown in the Causal Graphs Stage in Figure 9.
- Graph Comparison: Once the reference graph was chosen, a comparative analysis was conducted against the graphs of subsequent time intervals using the Jaccard distance, as illustrated in the graph comparison stage in Figure 9. The Jaccard distance was selected over the Jaccard similarity score because it directly quantifies the differences between causal graphs over time, as detailed in Section 3.3. These differences stem from variations in the dynamics of the sieve due to its degradation or deterioration during its operational span. Figure 11 visually presents the comparison between the causal graphs and the reference graph (chosen to be the one from 9–11 August) using the Jaccard distance for the rejection phase. Given the dynamic nature of the process, which is susceptible to variations over time, a trend analysis was performed on the computed Jaccard distances to monitor degradation in the production process. The observed positive trend indicates increasing degradation over time following the change in the sieve.
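The trend analysis on the weekly distances can be as simple as an ordinary least-squares fit. The distance values below are purely illustrative placeholders, not the paper's measurements; a positive fitted slope corresponds to the increasing degradation described above:

```python
import numpy as np

# Hypothetical weekly Jaccard distances to the reference graph (illustrative only).
distances = [0.05, 0.08, 0.07, 0.12, 0.15, 0.14, 0.19, 0.22]
weeks = np.arange(len(distances))

# Least-squares linear trend: distance ~ slope * week + intercept.
slope, intercept = np.polyfit(weeks, distances, deg=1)
if slope > 0:
    print(f"Degradation trend: +{slope:.4f} Jaccard distance per week")
```

In practice, one would recompute the fit as each new weekly graph arrives, so the slope serves as a running indicator of how quickly the causal structure is drifting away from the post-sieve-change reference.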
- Interpretability: Our approach not only facilitates the continuous monitoring of degradation in the viscose fiber production process but also empowers domain experts to integrate their knowledge into the creation and interpretation of causal graphs. As shown in Figure 12, this section focuses on interpreting the observed variations in the dynamics of the production process during degradation monitoring, employing two distinct methods.
- Visual Inspection of Causal Graphs for Root Cause Analysis: The initial method involves visually examining causal graphs to discern changes at specific time points. By setting a degradation threshold for the Jaccard distance, as demonstrated in Figure 11, domain experts can scrutinize changes and analyze the causal graph of the ongoing production process.
- Monitoring Changes in Feature Relations Over Time: The second approach involves monitoring changes in the relationship between specific pairs of desired features over time. As previously mentioned, the connections between features p1 and pdiff play a crucial role in initiating the rejection and filtration phases. Therefore, observing the dynamics of these features over time can provide valuable insights before a significant event occurs.
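Monitoring a specific feature pair, such as p1 and pdiff, amounts to checking whether the corresponding edge survives in each weekly graph. A minimal sketch, where the weekly edge sets are hypothetical examples rather than graphs from the case study:

```python
def edge_present(graph_edges: set, a: str, b: str) -> bool:
    """Check whether the causal graph links features a and b, in either direction."""
    return (a, b) in graph_edges or (b, a) in graph_edges


# Hypothetical weekly graphs as sets of directed edges (feature names from Section 4.2).
weekly_graphs = {
    "week1": {("p1", "pdiff"), ("p2", "pdiff")},
    "week2": {("p1", "pdiff")},
    "week3": {("pdiff", "p2")},
}

# Track the p1-pdiff relation over time; its disappearance may warrant inspection.
history = {week: edge_present(g, "p1", "pdiff") for week, g in weekly_graphs.items()}
```

A domain expert can then be alerted when an edge that is known to drive the rejection and filtration phases vanishes (or appears) between consecutive weeks, before a significant event occurs.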
5. Evaluation
5.1. LSTM-Based Autoencoder
5.1.1. Procedure and Results
5.2. TranAD and USAD
5.2.1. Procedure and Results
6. Conclusions and Future Scope
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Degradation Monitoring—Filtration Phase
Appendix A.1. Causal Graphs and Reference Causal Graph for Filtration Phase
Appendix A.2. Graph Comparison
Appendix B. Evaluation—Filtration Phase
References
- Surucu, O.; Gadsden, S.A.; Yawney, J. Condition Monitoring using Machine Learning: A Review of Theory, Applications, and Recent Advances. Expert Syst. Appl. 2023, 221, 119738. [Google Scholar] [CrossRef]
- Lee, J. Measurement of machine performance degradation using a neural network model. Comput. Ind. 1996, 30, 193–209. [Google Scholar] [CrossRef]
- Glymour, C.; Zhang, K.; Spirtes, P. Review of Causal Discovery Methods Based on Graphical Models. Front. Genet. 2019, 10, 524. [Google Scholar] [CrossRef] [PubMed]
- Xu, F.; Uszkoreit, H.; Du, Y.; Fan, W.; Zhao, D.; Zhu, J. Explainable AI: A brief survey on history, research areas, approaches and challenges. In Proceedings of the Natural Language Processing and Chinese Computing: 8th CCF International Conference, NLPCC 2019, Dunhuang, China, 9–14 October 2019; Proceedings, Part II 8. Springer: Dunhuang, China, 2019; pp. 563–574. [Google Scholar]
- Zanga, A.; Stella, F. A Survey on Causal Discovery: Theory and Practice. arXiv 2023, arXiv:cs.AI/2305.10032. [Google Scholar] [CrossRef]
- Assaad, C.K.; Devijver, E.; Gaussier, E. Survey and Evaluation of Causal Discovery Methods for Time Series. J. Artif. Int. Res. 2022, 73, 767–819. [Google Scholar] [CrossRef]
- Hasan, U.; Hossain, E.; Gani, M.O. A Survey on Causal Discovery Methods for I.I.D. and Time Series Data. arXiv 2023, arXiv:cs.AI/2303.15027. [Google Scholar]
- Arafeh, M.; Hammoud, A.; Otrok, H.; Mourad, A.; Talhi, C.; Dziong, Z. Independent and Identically Distributed (IID) Data Assessment in Federated Learning. In Proceedings of the GLOBECOM 2022—2022 IEEE Global Communications Conference, Rio de Janeiro, Brazil, 4–8 December 2022; pp. 293–298. [Google Scholar]
- Dafoe, A.; Zhang, B.; Caughey, D. Confounding in survey experiments. In Proceedings of the Annual Meeting of The Society for Political Methodology, University of Rochester, Rochester, NY, USA, 23–25 July 2015; Volume 23. [Google Scholar]
- Amer, M.; Goldstein, M.; Abdennadher, S. Enhancing one-class Support Vector Machines for unsupervised anomaly detection. In Proceedings of the KDD’ 13: The 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Chicago, IL, USA, 11 August 2013; pp. 8–15. [Google Scholar]
- Diez-Olivan, A.; Pagan Rubio, J.; Nguyen, K.; Sanz, R.; Sierra, B. Kernel-based support vector machines for automated health status assessment in monitoring sensor data. Int. J. Adv. Manuf. Technol. 2018, 95, 327–340. [Google Scholar] [CrossRef]
- Li, Z.; Li, X. Fault Detection in the Closed-loop System Using One-Class Support Vector Machine. In Proceedings of the 2018 IEEE 7th Data Driven Control and Learning Systems Conference (DDCLS), Enshi, China, 25–27 May 2018; pp. 251–255. [Google Scholar]
- Ma, J.; Perkins, S. Time-series novelty detection using one-class support vector machines. In Proceedings of the International Joint Conference on Neural Networks, Portland, OR, USA, 20–24 July 2003; Volume 3, pp. 1741–1745. [Google Scholar]
- Shawe-Taylor, J.; Žličar, B. Novelty Detection with One-Class Support Vector Machines. In Advances in Statistical Models for Data Analysis; Springer International Publishing: Cham, Switzerland, 2015; pp. 231–257. [Google Scholar]
- Chevrot, A.; Vernotte, A.; Legeard, B. CAE: Contextual Auto-Encoder for multivariate time-series anomaly detection in air transportation. Comput. Secur. 2022, 116, 102652. [Google Scholar] [CrossRef]
- Tziolas, T.; Papageorgiou, K.; Theodosiou, T.; Papageorgiou, E.; Mastos, T.; Papadopoulos, A. Autoencoders for Anomaly Detection in an Industrial Multivariate Time Series Dataset. Eng. Proc. 2022, 18, 23. [Google Scholar] [CrossRef]
- Li, G.; Jung, J.J. Deep learning for anomaly detection in multivariate time series: Approaches, applications, and challenges. Inf. Fusion 2023, 91, 93–102. [Google Scholar] [CrossRef]
- González-Muñiz, A.; Díaz, I.; Cuadrado, A.A.; García-Pérez, D. Health indicator for machine condition monitoring built in the latent space of a deep autoencoder. Reliab. Eng. Syst. Saf. 2022, 224, 108482. [Google Scholar] [CrossRef]
- Hasani, R.; Wang, G.; Grosu, R. A Machine Learning Suite for Machine Components’ Health-Monitoring. Proc. AAAI Conf. Artif. Intell. 2019, 33, 9472–9477. [Google Scholar] [CrossRef]
- Choi, K.; Yi, J.; Park, C.; Yoon, S. Deep Learning for Anomaly Detection in Time-Series Data: Review, Analysis, and Guidelines. IEEE Access 2021, 9, 120043–120065. [Google Scholar] [CrossRef]
- Tran, K.P.; Nguyen, H.D.; Thomassey, S. Anomaly detection using Long Short Term Memory Networks and its applications in Supply Chain Management. IFAC-PapersOnLine 2019, 52, 2408–2412. [Google Scholar] [CrossRef]
- Hsieh, R.J.; Chou, J.; Ho, C.H. Unsupervised Online Anomaly Detection on Multivariate Sensing Time Series Data for Smart Manufacturing. In Proceedings of the 2019 IEEE 12th Conference on Service-Oriented Computing and Applications (SOCA), Kaohsiung, Taiwan, 18–21 November 2019; pp. 90–97. [Google Scholar]
- Abbracciavento, F.; Formentin, S.; Balocco, J.; Rota, A.; Manzoni, V.; Savaresi, S.M. Anomaly detection via distributed sensing: A VAR modeling approach. IFAC-PapersOnLine 2021, 54, 85–90. [Google Scholar] [CrossRef]
- Diao, W.; Naqvi, I.H.; Pecht, M. Early detection of anomalous degradation behavior in lithium-ion batteries. J. Energy Storage 2020, 32, 101710. [Google Scholar] [CrossRef]
- Mejri, N.; Lopez-Fuentes, L.; Roy, K.; Chernakov, P.; Ghorbel, E.; Aouada, D. Unsupervised Anomaly Detection in Time-series: An Extensive Evaluation and Analysis of State-of-the-art Methods. arXiv 2023, arXiv:cs.LG/2212.03637. [Google Scholar]
- Huang, K.; Zhu, H.; Wu, D.; Yang, C.; Gui, W. EaLDL: Element-aware lifelong dictionary learning for multimode process monitoring. In IEEE Transactions on Neural Networks and Learning Systems; IEEE: Piscataway, NJ, USA, 2023. [Google Scholar]
- Huang, K.; Tao, Z.; Liu, Y.; Sun, B.; Yang, C.; Gui, W.; Hu, S. Adaptive Multimode Process Monitoring Based on Mode-Matching and Similarity-Preserving Dictionary Learning. IEEE Trans. Cybern. 2023, 53, 3974–3987. [Google Scholar] [CrossRef] [PubMed]
- Darban, Z.Z.; Webb, G.I.; Pan, S.; Aggarwal, C.C.; Salehi, M. Deep Learning for Time Series Anomaly Detection: A Survey. arXiv 2022, arXiv:cs.LG/2211.05244. [Google Scholar]
- Tuli, S.; Casale, G.; Jennings, N.R. TranAD: Deep Transformer Networks for Anomaly Detection in Multivariate Time Series Data. arXiv 2022, arXiv:cs.LG/2201.07284. [Google Scholar] [CrossRef]
- Biriukova, K.; Bhattacherjee, A. Using Transformer Models for Stock Market Anomaly Detection. J. Data Sci. 2023, 2023, 1–8. [Google Scholar]
- Kumar, A.S.; Raja, S.; Pritha, N.; Raviraj, H.; Lincy, R.B.; Rubia, J.J. An adaptive transformer model for anomaly detection in wireless sensor networks in real-time. Meas. Sens. 2023, 25, 100625. [Google Scholar] [CrossRef]
- Audibert, J.; Michiardi, P.; Guyard, F.; Marti, S.; Zuluaga, M.A. USAD: UnSupervised Anomaly Detection on Multivariate Time Series. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’20, New York, NY, USA, 6–10 July 2020; pp. 3395–3404. [Google Scholar]
- Abdulaal, A.; Liu, Z.; Lancewicki, T. Practical Approach to Asynchronous Multivariate Time Series Anomaly Detection and Localization. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, New York, NY, USA, 14–18 August 2021; KDD ’21. pp. 2485–2494. [Google Scholar]
- Albanese, A. Deep Anomaly Detection: An Experimental Comparison of Deep Learning Algorithms for Anomaly Detection in Time Series Data. Ph.D. Thesis, Politecnico di Torino, Turin, Italy, 2023. [Google Scholar]
- Fan, C.; Wang, Y.; Zhang, Y.; Ouyang, W. Interpretable Multi-Scale Neural Network for Granger Causality Discovery. In Proceedings of the ICASSP 2023—2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023; pp. 1–5. [Google Scholar]
- Nadim, K.; Ragab, A.; Ouali, M.S. Data-driven dynamic causality analysis of industrial systems using interpretable machine learning and process mining. J. Intell. Manuf. 2023, 34, 57–83. [Google Scholar] [CrossRef]
- Bi, X.; Wu, D.; Xie, D.; Ye, H.; Zhao, J. Large-scale chemical process causal discovery from big data with transformer-based deep learning. Process Saf. Environ. Prot. 2023, 173, 163–177. [Google Scholar] [CrossRef]
- Mehling, C.W.; Pieper, S.; Ihlenfeldt, S. Concept of a causality-driven fault diagnosis system for cyber-physical production systems. In Proceedings of the 2023 IEEE 21st International Conference on Industrial Informatics (INDIN), Lemgo, Germany, 18–20 July 2023; pp. 1–8. [Google Scholar]
- Xu, Z.; Dang, Y. Data-driven causal knowledge graph construction for root cause analysis in quality problem solving. Int. J. Prod. Res. 2023, 61, 3227–3245. [Google Scholar] [CrossRef]
- Wang, H.; Xu, Y.; Peng, T.; Agbozo, R.S.K.; Xu, K.; Liu, W.; Tang, R. Two-stage approach to causality analysis-based quality problem solving for discrete manufacturing systems. J. Eng. Des. 2023, 1–25. [Google Scholar] [CrossRef]
- Vuković, M.; Thalmann, S. Causal discovery in manufacturing: A structured literature review. J. Manuf. Mater. Process. 2022, 6, 10. [Google Scholar] [CrossRef]
- Ahang, M.; Charter, T.; Ogunfowora, O.; Khadivi, M.; Abbasi, M.; Najjaran, H. Intelligent Condition Monitoring of Industrial Plants: An Overview of Methodologies and Uncertainty Management Strategies. arXiv 2024, arXiv:2401.10266. [Google Scholar]
- Wuest, T.; Weimer, D.; Irgens, C.; Thoben, K.D. Machine learning in manufacturing: Advantages, challenges, and applications. Prod. Manuf. Res. 2016, 4, 23–45. [Google Scholar] [CrossRef]
- Moraffah, R.; Karami, M.; Guo, R.; Raglin, A.; Liu, H. Causal interpretability for machine learning-problems, methods and evaluation. ACM SIGKDD Explor. Newsl. 2020, 22, 18–33. [Google Scholar] [CrossRef]
- Saeed, W.; Omlin, C. Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities. Knowl.-Based Syst. 2023, 263, 110273. [Google Scholar] [CrossRef]
- Gade, K.; Geyik, S.C.; Kenthapadi, K.; Mithal, V.; Taly, A. Explainable AI in Industry. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’19, New York, NY, USA, 4–8 August 2019; pp. 3203–3204. [Google Scholar]
- Chaddad, A.; Peng, J.; Xu, J.; Bouridane, A. Survey of Explainable AI Techniques in Healthcare. Sensors 2023, 23, 634. [Google Scholar] [CrossRef] [PubMed]
- Galhotra, S.; Pradhan, R.; Salimi, B. Explaining black-box algorithms using probabilistic contrastive counterfactuals. In Proceedings of the 2021 International Conference on Management of Data, Virtual Event, China, 20–25 June 2021; pp. 577–590. [Google Scholar]
- Chattopadhyay, A.; Manupriya, P.; Sarkar, A.; Balasubramanian, V.N. Neural Network Attributions: A Causal Perspective. In Proceedings of the 36th International Conference on Machine Learning; Chaudhuri, K., Salakhutdinov, R., Eds.; PMLR: London, UK, 2019; Volume 97, pp. 981–990. [Google Scholar]
- Harradon, M.; Druce, J.; Ruttenberg, B.E. Causal Learning and Explanation of Deep Neural Networks via Autoencoded Activations. arXiv 2018, arXiv:1802.00541. [Google Scholar]
- Parafita, Á.; Vitrià, J. Explaining visual models by causal attribution. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), Seoul, Korea, 27–28 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 4167–4175. [Google Scholar]
- Narendra, T.; Sankaran, A.; Vijaykeerthy, D.; Mani, S. Explaining Deep Learning Models using Causal Inference. arXiv 2018, arXiv:1802.00541. [Google Scholar]
- Wachter, S.; Mittelstadt, B.; Russell, C. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harv. JL Tech. 2017, 31, 841. [Google Scholar] [CrossRef]
- Grath, R.M.; Costabello, L.; Van, C.L.; Sweeney, P.; Kamiab, F.; Shen, Z.; Lecue, F. Interpretable Credit Application Predictions With Counterfactual Explanations. arXiv 2018, arXiv:cs.AI/1811.05245. [Google Scholar]
- Mothilal, R.K.; Sharma, A.; Tan, C. Explaining machine learning classifiers through diverse counterfactual explanations. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* ’20, Barcelona, Spain, 27–30 January 2020; ACM: New York, NY, USA, 2020. [Google Scholar]
- Moore, J.; Hammerla, N.; Watkins, C. Explaining Deep Learning Models with Constrained Adversarial Examples. arXiv 2019, arXiv:cs.LG/1906.10671. [Google Scholar]
- Xu, G.; Duong, T.D.; Li, Q.; Liu, S.; Wang, X. Causality learning: A new perspective for interpretable machine learning. arXiv 2020, arXiv:2006.16789. [Google Scholar]
- Wang, J.; Dong, Y. Measurement of Text Similarity: A Survey. Information 2020, 11, 421. [Google Scholar] [CrossRef]
- Varma, S.; Shivam, S.; Thumu, A.; Bhushanam, A.; Sarkar, D. Jaccard Based Similarity Index in Graphs: A Multi-Hop Approach. In Proceedings of the 2022 IEEE Delhi Section Conference (DELCON), New Delhi, India, 11–13 February 2022; pp. 1–4. [Google Scholar]
- Cheng, L.; Guo, R.; Moraffah, R.; Sheth, P.; Candan, K.S.; Liu, H. Evaluation Methods and Measures for Causal Learning Algorithms. arXiv 2022, arXiv:cs.LG/2202.02896. [Google Scholar] [CrossRef]
- Shen, X.; Ma, S.; Vemuri, P.; Simon, G.; Alzheimer’s Disease Neuroimaging Initiative. Challenges and opportunities with causal discovery algorithms: Application to Alzheimer’s pathophysiology. Sci. Rep. 2020, 10, 2975. [Google Scholar] [CrossRef] [PubMed]
- Niwattanakul, S.; Singthongchai, J.; Naenudorn, E.; Wanapu, S. Using of Jaccard Coefficient for Keywords Similarity. In Proceedings of the International Multiconference of Engineers and Computer Scientists, Hong Kong, 13–15 March 2013. [Google Scholar]
- Hasan, M.J.; Sohaib, M.; Kim, J.M. An Explainable AI-Based Fault Diagnosis Model for Bearings. Sensors 2021, 21, 4070. [Google Scholar] [CrossRef] [PubMed]
- Salih, A.; Raisi-Estabragh, Z.; Galazzo, I.B.; Radeva, P.; Petersen, S.E.; Menegaz, G.; Lekadir, K. Commentary on explainable artificial intelligence methods: SHAP and LIME. arXiv 2023, arXiv:stat.ML/2305.02012. [Google Scholar]
Abbrv. | Description | Min | Max | Avg. | Unit | Avg. Sampling Period
---|---|---|---|---|---|---
p1 | Pressure measured before the machine | 0.61 | 9.90 | 6.79 | Bar | 85 ms
p2 | Pressure measured after the filtration | 0.00 | 6.36 | 5.54 | Bar | 1 s 60 ms
p3 | Pressure measured after the rejection | 0.00 | 2.92 | 0.19 | Bar | 1 s 60 ms
pdiff | Pressure difference between before and after filtration | 0.33 | 4.58 | 1.23 | Bar | 1 s 60 ms
fm | Amount of fluid that passed through the filter | 0.00 | 8.55 | 0.68 | m3/h | 90 ms
rm | Amount of fluid which was rejected | 0.00 | 10.0 | 0.02 | m3/h | 85 ms
current | Current used to move the rejection unit motor | 0.16 | 6.11 | 0.25 | Ampere | 1 s
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Choudhary, A.; Vuković, M.; Mutlu, B.; Haslgrübler, M.; Kern, R. Interpretability of Causal Discovery in Tracking Deterioration in a Highly Dynamic Process. Sensors 2024, 24, 3728. https://doi.org/10.3390/s24123728