
Self-adaptive Machine Learning Systems: Research Challenges and Opportunities

  • Conference paper
  • First Online:
Software Architecture (ECSA 2021)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13365)

Included in the following conference series: European Conference on Software Architecture (ECSA)

Abstract

Today’s world is witnessing a shift from human-written software to machine-learned software, with the rise of systems that rely on machine learning. These systems typically operate in non-static environments, which are prone to unexpected changes, as is the case with self-driving cars and enterprise systems. In this context, machine-learned software can misbehave. Thus, it is paramount that these systems are capable of detecting problems with their machine-learned components and adapting themselves to maintain desired qualities. For instance, a fraud detection system that cannot adapt its machine-learned model to efficiently cope with emerging fraud patterns or changes in the volume of transactions is subject to losses of millions of dollars. In this paper, we take a first step towards the development of a framework for self-adaptation of systems that rely on machine-learned components. We describe: (i) a set of causes of machine-learned component misbehavior and a set of adaptation tactics inspired by the literature on machine learning, motivating them with the aid of two running examples from the enterprise systems and cyber-physical systems domains; (ii) the required changes to the MAPE-K loop, a popular control loop for self-adaptive systems; and (iii) the challenges associated with developing this framework. We conclude with a set of research questions to guide future work.
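To make the underlying idea concrete, the sketch below shows, in plain Python, how a MAPE-K style loop might wrap a machine-learned component: the monitor summarizes recent inputs, the analyzer compares them against the knowledge base to detect drift, and the planner/executor apply a retraining tactic. This is only an illustration under assumed names (MLComponent, retrain, a mean-comparison drift check); it is not the framework proposed in the paper, which considers a broader set of misbehavior causes and adaptation tactics.

```python
# Minimal, hypothetical sketch of a MAPE-K loop wrapped around an ML component.
# Names such as MLComponent and retrain are illustrative only, not the paper's framework.
import statistics


class MLComponent:
    """Stands in for a deployed machine-learned model."""

    def __init__(self):
        self.training_mean = 0.0  # summary of the data the model was trained on

    def predict(self, x):
        return x > self.training_mean

    def retrain(self, recent_inputs):
        # Adaptation tactic: retrain on recent data (one of several possible tactics,
        # e.g. component replacement, unlearning, or a human-in-the-loop fallback).
        self.training_mean = statistics.mean(recent_inputs)


def mape_k_iteration(component, recent_inputs, knowledge, drift_threshold=1.0):
    # Monitor: collect statistics about the inputs the component is currently seeing.
    observed_mean = statistics.mean(recent_inputs)

    # Analyze: compare against what the model was trained on (stored in the knowledge base).
    drift = abs(observed_mean - knowledge["training_mean"])
    if drift <= drift_threshold:
        return "no adaptation needed"

    # Plan: pick an adaptation tactic; here, naively, always retrain.
    tactic = "retrain"

    # Execute: apply the tactic and update the knowledge base.
    if tactic == "retrain":
        component.retrain(recent_inputs)
        knowledge["training_mean"] = component.training_mean
    return f"applied tactic: {tactic}"


if __name__ == "__main__":
    model = MLComponent()
    kb = {"training_mean": model.training_mean}
    print(mape_k_iteration(model, [2.3, 2.7, 3.1], kb))  # drift detected -> retrain
```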


Notes

  1. In ML, data cleaning corresponds to the process of identifying and correcting errors in a dataset that may negatively impact a predictive model (a toy illustration follows these notes).

  2. Fraud detection systems normally rely on a fixed pool of human analysts at any given time, which determines the maximum load of transactions that can be processed with a human in the loop.
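As a toy illustration of the data-cleaning note above, the snippet below identifies and corrects two common kinds of dataset errors: invalid numeric values and inconsistent categorical codes. The column names and correction rules are made up for illustration and are not taken from the paper.

```python
# Toy illustration of footnote 1: identifying and correcting dataset errors
# that could hurt a predictive model. Columns and rules are hypothetical.
import pandas as pd

raw = pd.DataFrame({
    "transaction_amount": [12.5, -3.0, None, 89.9],  # negative and missing values are errors
    "country": ["PT", "pt", "US", "??"],              # inconsistent casing, unknown code
})

cleaned = raw.copy()
# Correct: treat impossible negative amounts as missing, then impute with the median.
cleaned.loc[cleaned["transaction_amount"] < 0, "transaction_amount"] = None
cleaned["transaction_amount"] = cleaned["transaction_amount"].fillna(
    cleaned["transaction_amount"].median()
)
# Correct: normalise casing and drop rows whose country code is not recognised.
cleaned["country"] = cleaned["country"].str.upper()
cleaned = cleaned[cleaned["country"].isin({"PT", "US"})]

print(cleaned)
```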


Acknowledgements

Support for this research was provided by Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through the Carnegie Mellon Portugal Program under Grant SFRH/BD/150643/2020 and via projects with references POCI-01-0247-FEDER-045915, POCI-01-0247-FEDER-045907, and UIDB/50021/2020. The contributions of Gabriel Moreno and Mark Klein are based upon work funded and supported by the Department of Defense under Contract No. FA8702-15-D-0002 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center. Such contributions are used with permission, but ownership of the underlying intellectual property embodied within such contributions is retained by Carnegie Mellon University. DM22-0149.

Author information


Corresponding author

Correspondence to Maria Casimiro.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Casimiro, M., Romano, P., Garlan, D., Moreno, G.A., Kang, E., Klein, M. (2022). Self-adaptive Machine Learning Systems: Research Challenges and Opportunities. In: Scandurra, P., Galster, M., Mirandola, R., Weyns, D. (eds) Software Architecture. ECSA 2021. Lecture Notes in Computer Science, vol 13365. Springer, Cham. https://doi.org/10.1007/978-3-031-15116-3_7


  • DOI: https://doi.org/10.1007/978-3-031-15116-3_7

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-15115-6

  • Online ISBN: 978-3-031-15116-3

  • eBook Packages: Computer Science, Computer Science (R0)
