
veriFIRE: Verifying an Industrial, Learning-Based Wildfire Detection System

  • Conference paper
  • Published in: Formal Methods (FM 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14000)

Abstract

In this short paper, we present our ongoing work on the veriFIRE project—a collaboration between industry and academia, aimed at using verification to increase the reliability of a real-world, safety-critical system. The system we target is an airborne platform for wildfire detection, which incorporates two deep neural networks. We describe the system and its properties of interest, and discuss our attempts to verify the system’s consistency, i.e., its ability to continue to classify a given input correctly, even if the wildfire it depicts increases in intensity. We regard this work as a step towards the incorporation of academic-oriented verification tools into real-world systems of interest.

All authors contributed equally.



References

  1. Amir, G., et al.: Verifying Learning-Based Robotic Navigation Systems, Technical report (2022). https://arxiv.org/abs/2205.13536

  2. Amir, G., Schapira, M., Katz, G.: Towards scalable verification of deep reinforcement learning. In: Proceedings 21st International Conference on Formal Methods in Computer-Aided Design (FMCAD), pp. 193–203 (2021)


  3. Amir, G., Wu, H., Barrett, C., Katz, G.: An SMT-based approach for verifying binarized neural networks. In: TACAS 2021. LNCS, vol. 12652, pp. 203–222. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-72013-1_11


  4. Amir, G., Zelazny, T., Katz, G., Schapira, M.: Verification-aided deep ensemble selection. In: Proceedings of the 22nd International Conference on Formal Methods in Computer-Aided Design (FMCAD), pp. 27–37 (2022)


  5. Baluta, T., Shen, S., Shinde, S., Meel, K.S., Saxena, P.: Quantitative verification of neural networks and its security applications. In: Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS), pp. 1249–1264 (2019)


  6. Bassan, S., Katz, G.: Towards Formal Approximated Minimal Explanations of Neural Networks, Technical report (2022). https://arxiv.org/abs/2210.13915

  7. Bojarski, M., et al.: End to End Learning for Self-Driving Cars, Technical report (2016). http://arxiv.org/abs/1604.07316

  8. Casadio, M., et al.: Neural network robustness as a verification property: a principled case study. In: Shoham, S., Vizel, Y. (eds.) CAV 2022. Lecture Notes in Computer Science, vol. 13371, pp. 219–231. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-13185-1_11


  9. Corsi, D., Yerushalmi, R., Amir, G., Farinelli, A., Harel, D., Katz, G.: Constrained Reinforcement Learning for Robotics via Scenario-Based Programming, Technical report (2022). https://arxiv.org/abs/2206.09603

  10. Dong, S., Wang, P., Abbas, K.: A survey on deep learning and its applications. Comput. Sci. Rev. 40, 100379 (2021)


  11. Dutta, S., Jha, S., Sankaranarayanan, S., Tiwari, A.: Output range analysis for deep feedforward neural networks. In: Dutle, A., Muñoz, C., Narkawicz, A. (eds.) NFM 2018. LNCS, vol. 10811, pp. 121–138. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-77935-5_9


  12. Ehlers, R.: Formal verification of piece-wise linear feed-forward neural networks. In: D’Souza, D., Narayan Kumar, K. (eds.) ATVA 2017. LNCS, vol. 10482, pp. 269–286. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-68167-2_19


  13. Elboher, Y.Y., Cohen, E., Katz, G.: Neural network verification using residual reasoning. In: Schlingloff, B.H., Chai, M. (eds.) Software Engineering and Formal Methods. SEFM 2022. Lecture Notes in Computer Science, vol. 13550, pp. 173–189. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-17108-6_11

  14. Elboher, Y.Y., Gottschlich, J., Katz, G.: An abstraction-based framework for neural network verification. In: Lahiri, S.K., Wang, C. (eds.) CAV 2020. LNCS, vol. 12224, pp. 43–65. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-53288-8_3


  15. Gehr, T., Mirman, M., Drachsler-Cohen, D., Tsankov, P., Chaudhuri, S., Vechev, M.: AI2: safety and robustness certification of neural networks with abstract interpretation. In: Proceedings of the 39th IEEE Symposium on Security and Privacy (S&P) (2018)


  16. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press, Cambridge (2016)


  17. Goodfellow, I., Shlens, J., Szegedy, C.: Explaining and Harnessing Adversarial Examples, Technical report (2014). http://arxiv.org/abs/1412.6572

  18. Gopinath, D., Katz, G., Păsăreanu, C.S., Barrett, C.: DeepSafe: a data-driven approach for assessing robustness of neural networks. In: Lahiri, S.K., Wang, C. (eds.) ATVA 2018. LNCS, vol. 11138, pp. 3–19. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01090-4_1


  19. Huang, X., Kwiatkowska, M., Wang, S., Wu, M.: Safety verification of deep neural networks. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10426, pp. 3–29. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63387-9_1


  20. Isac, O., Barrett, C., Zhang, M., Katz, G.: Neural network verification with proof production. In: Proceedings of the 22nd International Conference on Formal Methods in Computer-Aided Design (FMCAD), pp. 38–48 (2022)


  21. Ivanov, R., Carpenter, T.J., Weimer, J., Alur, R., Pappas, G.J., Lee, I.: Verifying the safety of autonomous systems with neural network controllers. ACM Trans. Embedded Comput. Syst. (TECS) 20(1), 1–26 (2020)


  22. Jin, P., Tian, J., Zhi, D., Wen, X., Zhang, M.: Trainify: A CEGAR-driven training and verification framework for safe deep reinforcement learning. In: Shoham, S., Vizel, Y. (eds.) Computer Aided Verification (CAV), CAV 2022. Lecture Notes in Computer Science, vol. 13371, pp. 193–218. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-13185-1_10


  23. Jumper, J., et al.: Highly accurate protein structure prediction with AlphaFold. Nature 596(7873), 583–589 (2021)


  24. Katz, G., Barrett, C., Dill, D.L., Julian, K., Kochenderfer, M.J.: Reluplex: an efficient SMT solver for verifying deep neural networks. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10426, pp. 97–117. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63387-9_5


  25. Katz, G., Barrett, C., Dill, D.L., Julian, K., Kochenderfer, M.J.: Reluplex: a calculus for reasoning about deep neural networks. Formal Methods Syst. Des., 1–30 (2021). https://doi.org/10.1007/s10703-021-00363-7

  26. Katz, G., et al.: The Marabou framework for verification and analysis of deep neural networks. In: Dillig, I., Tasiran, S. (eds.) CAV 2019. LNCS, vol. 11561, pp. 443–452. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-25540-4_26


  27. Könighofer, B., Lorber, F., Jansen, N., Bloem, R.: Shield synthesis for reinforcement learning. In: Margaria, T., Steffen, B. (eds.) ISoLA 2020. LNCS, vol. 12476, pp. 290–306. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-61362-4_16


  28. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Proceedings of 26th Conference on Neural Information Processing Systems (NeurIPS), pp. 1097–1105 (2012)


  29. Kuper, L., Katz, G., Gottschlich, J., Julian, K., Barrett, C., Kochenderfer, M.: Toward Scalable Verification for Safety-Critical Deep Networks, Technical report (2018). https://arxiv.org/abs/1801.05950

  30. Kurakin, A., Goodfellow, I.J., Bengio, S.: Adversarial examples in the physical world. In: Artificial Intelligence Safety and Security, pp. 99–112 (2018)


  31. Lee, W., Kim, S., Lee, Y.T., Lee, H.W., Choi, M.: Deep neural networks for wild fire detection with unmanned aerial vehicle. In: Proceedings of 2017 IEEE International Conference on Consumer Electronics (ICCE), pp. 252–253 (2017)


  32. Lekharu, A., Moulii, K. Y., Sur, A., Sarkar, A.: Deep learning based prediction model for adaptive video streaming. In: Proceedings of International Conference on Communication Systems & Networks (COMSNETS), pp. 152–159 (2020)


  33. Levy, N., Katz, G.: RoMA: a Method for Neural Network Robustness Measurement and Assessment, Technical report (2021). https://arxiv.org/abs/2110.11088

  34. Li, P., Zhao, W.: Image fire detection algorithms based on convolutional neural networks. Case Stud. Therm. Eng. 19, 100625 (2020)


  35. Lomuscio, A., Maganti, L.: An Approach to Reachability Analysis for Feed-Forward ReLU Neural Networks, Technical report (2017). http://arxiv.org/abs/1706.07351

  36. Lyu, Z., Ko, C. Y., Kong, Z., Wong, N., Lin, D., Daniel, L.: Fastened crown: tightened neural network robustness certificates. In: Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI), pp. 5037–5044 (2020)


  37. Mnih, V., et al.: Playing Atari with Deep Reinforcement Learning. Technical report (2013). http://arxiv.org/abs/1312.5602

  38. Moosavi-Dezfooli, S., Fawzi, A., Fawzi, O., Frossard, P.: Universal adversarial perturbations. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1765–1773 (2017)


  39. Nassif, A., Shahin, I., Attili, I., Azzeh, M., Shaalan, K.: Speech recognition using deep neural networks: a systematic review. IEEE Access 7, 19143–19165 (2019)


  40. Ostrovsky, M., Barrett, C., Katz, G.: An abstraction-refinement approach to verifying convolutional neural networks. In: Bouajjani, A., Holík, L., Wu, Z. (eds.) Automated Technology for Verification and Analysis. ATVA 2022. Lecture Notes in Computer Science, vol. 13505, pp. 391–396 (2022). https://doi.org/10.1007/978-3-031-19992-9_25

  41. Refaeli, I., Katz, G.: Minimal multi-layer modifications of deep neural networks. In: Isac, O., Ivanov, R., Katz, G., Narodytska, N., Nenzi, L. (eds.) Software Verification and Formal Methods for ML-Enabled Autonomous Systems. NSV (FoMLAS) 2022. Lecture Notes in Computer Science, vol. 13466, pp. 46–66. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-21222-2_4

  42. Sharma, J., Granmo, O.-C., Goodwin, M., Fidje, J.T.: Deep convolutional neural networks for fire detection in images. In: Boracchi, G., Iliadis, L., Jayne, C., Likas, A. (eds.) EANN 2017. CCIS, vol. 744, pp. 183–193. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-65172-9_16


  43. Silver, D., et al.: Mastering the game of go with deep neural networks and tree search. Nature 529(7587), 484–489 (2016)


  44. Simonyan, K., Zisserman, A.: Very Deep Convolutional Networks for Large-Scale Image Recognition, Technical report (2014). http://arxiv.org/abs/1409.1556

  45. Strong, C.A., et al.: Global optimization of objective functions represented by ReLU networks. J. Mach. Learn., 1–28 (2021). https://doi.org/10.1007/s10994-021-06050-2

  46. Tjeng, V., Xiao, K., Tedrake, R.: Evaluating Robustness of Neural Networks with Mixed Integer Programming, Technical report (2017). http://arxiv.org/abs/1711.07356

  47. Wang, S., Pei, K., Whitehouse, J., Yang, J., Jana, S.: Formal security analysis of neural networks using symbolic intervals. In: Proceedings of the 27th USENIX Security Symposium, pp. 1599–1614 (2018)


  48. Weng, T.: Towards Fast Computation of Certified Robustness for ReLU Networks, Technical report (2018). http://arxiv.org/abs/1804.09699

  49. Zelazny, T., Wu, H., Barrett, C., Katz, G.: On reducing over-approximation errors for neural network verification. In: Proceedings of the 22nd International Conference on Formal Methods in Computer-Aided Design (FMCAD), pp. 17–26 (2022)


  50. Zhang, H., Shinn, M., Gupta, A., Gurfinkel, A., Le, N., Narodytska, N.: Verification of recurrent neural networks for cognitive tasks via reachability analysis. In: Proceedings of the 24th European Conference on Artificial Intelligence (ECAI), pp. 1690–1697 (2020)


  51. Zhang, Q., Xu, J., Xu, L., Guo, H.: Deep convolutional neural networks for forest fire detection. In: Proceedings of the International Forum on Management, Education and Information Technology Application (IFMEITA), pp. 568–575 (2016)



Acknowledgement

This work was supported by a grant from the Israel Innovation Authority. The work of Amir was also supported by a scholarship from the Clore Israel Foundation.

Author information


Corresponding author

Correspondence to Guy Amir.


Appendices

A Background: DNNs and Their Verification

Deep Neural Networks. A deep neural network (DNN) [16] is a computational directed graph, composed of layers. The network computes a value by receiving inputs and propagating them through its layers until reaching the final (output) layer. The output values can be interpreted as a classification label or as a regression value, depending on the kind of network in question. The actual computation depends on each layer’s type. For example, a node y in a rectified linear unit (ReLU) layer computes the value \(y=\text {ReLU} {}(x)=\max (0,x)\), where x is the value of one of the nodes in the preceding layer. Additional layer types include weighted-sum layers, as well as layers with various non-linear activations. Here, we focus on feed-forward neural networks, i.e., DNNs in which each layer is connected only to the layer that follows it.
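The propagation scheme described above can be sketched as follows. This is a generic illustration only: the weight matrices are placeholders, not those of the veriFIRE networks, and it assumes layers that alternate weighted sums with ReLU activations.

```python
import numpy as np

def relu(x):
    # Rectified linear unit, applied element-wise: ReLU(x) = max(0, x)
    return np.maximum(0.0, x)

def feed_forward(x, weight_matrices):
    """Propagate input x through a feed-forward DNN whose layers
    alternate weighted sums (one matrix per layer) with ReLU
    activations on every hidden layer."""
    for i, W in enumerate(weight_matrices):
        x = W @ x                          # weighted-sum layer
        if i < len(weight_matrices) - 1:   # no activation after the output layer
            x = relu(x)
    return x
```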

Fig. 2. A toy DNN.

Figure 2 depicts a toy DNN. For input \(V_1=[1, 3]^T\), the second layer computes the values \(V_2=[13,-6]^T\). In the third layer, the ReLU functions are applied, producing \(V_3=[13,0]^T\). Finally, the network’s single output value is \(V_4=[65]\).
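The computation can be reproduced with, e.g., the following hypothetical weights; they are chosen only so that the intermediate values match those stated for Fig. 2 (the figure’s actual weights do not appear in the text).

```python
import numpy as np

relu = lambda v: np.maximum(0.0, v)

# Hypothetical weights, consistent with the values stated for Fig. 2.
W1 = np.array([[4.0, 3.0],
               [-6.0, 0.0]])   # input layer -> weighted-sum layer
w2 = np.array([5.0, 1.0])      # ReLU layer -> single output

v1 = np.array([1.0, 3.0])      # input V1 = [1, 3]^T
v2 = W1 @ v1                   # weighted sums: [13, -6]
v3 = relu(v2)                  # ReLU layer:    [13, 0]
v4 = w2 @ v3                   # output:        65
```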

DNN Verification. A DNN verification engine [15, 19, 24, 36, 47] receives a DNN N, a precondition P that defines a subspace of the network’s inputs, and a postcondition Q that limits the network’s output values. The verification engine then searches for an input \(x_0\) that satisfies \(P(x_0) \wedge Q(N(x_0))\). If such an input exists, the engine returns SAT and a concrete input that satisfies the constraints; otherwise, it returns UNSAT, indicating that no such input exists. The postcondition Q usually encodes the negation of the desired property, and hence a SAT answer indicates that the property is violated, and that the returned \(x_0\) triggers a bug. Conversely, an UNSAT result indicates that the property holds.

For example, suppose we wish to verify that the simple DNN depicted in Fig. 2 always outputs a value strictly larger than 25; i.e., for any input \(x=\langle v_1^1,v_1^2\rangle \), it holds that \(N(x)=v_4^1 > 25\). This property is encoded as a verification query by choosing a precondition that does not restrict the input, i.e., \(P=(true)\), and by setting a postcondition \(Q=(v_4^1\le 25)\). For this verification query, a sound verification engine will return SAT, alongside a feasible counterexample such as \(x=\langle 1, 0\rangle \), which produces \(v_4^1=20 \le 25\), proving that the property does not hold for this DNN.
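The structure of this query can be illustrated with a naive search over the toy network. Note the heavy caveat: a real engine such as Marabou reasons exactly over the entire input space, whereas this sketch only samples a small grid, so it can report SAT but can never soundly conclude UNSAT. The weights below are hypothetical, chosen to be consistent with the values stated for the toy DNN.

```python
import numpy as np

relu = lambda v: np.maximum(0.0, v)

# Hypothetical weights, consistent with the toy DNN's stated values.
W1 = np.array([[4.0, 3.0],
               [-6.0, 0.0]])
w2 = np.array([5.0, 1.0])
N = lambda x: float(w2 @ relu(W1 @ x))

# Property: N(x) > 25 for every input x.  Encoded with precondition
# P = true and postcondition Q = (N(x) <= 25); any input satisfying Q
# is a counterexample to the property.
def naive_query(grid):
    for x1 in grid:
        for x2 in grid:
            x = np.array([float(x1), float(x2)])
            if N(x) <= 25.0:        # Q holds: property violated
                return "SAT", x
    return "unknown", None          # grid sampling cannot prove UNSAT

verdict, cex = naive_query(range(4))
# The text's counterexample x = <1, 0> likewise yields N(x) = 20 <= 25.
```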

In our work, we used Marabou [26]—a sound and complete DNN-verification engine, which has recently been used in a variety of applications [1, 2, 4, 6, 9, 13, 14, 20, 40, 41].


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Amir, G., Freund, Z., Katz, G., Mandelbaum, E., Refaeli, I. (2023). veriFIRE: Verifying an Industrial, Learning-Based Wildfire Detection System. In: Chechik, M., Katoen, JP., Leucker, M. (eds) Formal Methods. FM 2023. Lecture Notes in Computer Science, vol 14000. Springer, Cham. https://doi.org/10.1007/978-3-031-27481-7_38


  • DOI: https://doi.org/10.1007/978-3-031-27481-7_38


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-27480-0

  • Online ISBN: 978-3-031-27481-7

  • eBook Packages: Computer Science (R0)
