
Statistical Relational Extension of Answer Set Programming

  • Chapter in: Reasoning Web. Causality, Explanations and Declarative Knowledge

Abstract

This tutorial presents \(\textrm{LP}^\mathrm{{MLN}}\), a statistical relational extension of the answer set programming language that incorporates weighted rules into the stable model semantics following the log-linear models of Markov Logic. An \(\textrm{LP}^\mathrm{{MLN}}\) program defines a probability distribution over “soft” stable models, which may not satisfy all rules; the more rules with larger weights a soft stable model satisfies, the higher its probability. This allows for an intuitive and elaboration tolerant representation of problems that require both logical and probabilistic reasoning. The extension provides a natural way to overcome the deterministic nature of the stable model semantics: it can resolve inconsistencies in answer set programs, associate probabilities with stable models, and support statistical inference and learning with probabilistic stable models. We also present formal relations between \(\textrm{LP}^\mathrm{{MLN}}\) and other related formalisms, which yield ways of performing inference and learning in \(\textrm{LP}^\mathrm{{MLN}}\).
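The distribution described above can be written out concretely. The following is a sketch of the standard \(\textrm{LP}^\mathrm{{MLN}}\) definition from Lee and Wang [19], using notation chosen here rather than reproduced from the chapter body:

```latex
% Weight of an interpretation I under an LP^MLN program \Pi,
% where \Pi_I = \{ w\!:\!R \in \Pi \mid I \models R \} is the set of
% weighted rules of \Pi that I satisfies:
W_\Pi(I) =
\begin{cases}
  \exp\Bigl(\,\sum_{w:R \,\in\, \Pi_I} w\Bigr) & \text{if } I \text{ is a stable model of } \Pi_I,\\[4pt]
  0 & \text{otherwise},
\end{cases}
\qquad
P_\Pi(I) = \frac{W_\Pi(I)}{\sum_{J} W_\Pi(J)}.
```

So an interpretation need not satisfy every rule to receive positive probability; it only has to be a stable model of the rules it does satisfy, and satisfying more rules with larger weights increases its (unnormalized) weight.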


Notes

  1.

    It is straightforward to extend \(\textrm{LP}^\mathrm{{MLN}}\) to allow function constants of positive arity as long as the program is finitely groundable.

  2.

    F in the weak constraint here is an arbitrary formula, which is more general than the form reviewed in Sect. 2. We can use a tool like f2lp [17] to turn formulas into the input language of clingo.

  3.

    http://alchemy.cs.washington.edu.

  4.

    Note that although any local maximum is a global maximum for the log-likelihood function, there can be multiple combinations of weights that achieve the maximum probability of the training data.

  5.

    A Markov chain is ergodic if there is a number m such that any state can be reached from any other state in any number of steps greater than or equal to m.

    Detailed balance means \(P_{\varPi }(X)Q(X\rightarrow Y) = P_{\varPi }(Y)Q(Y\rightarrow X)\) for any samples X and Y, where \(Q(X\rightarrow Y)\) denotes the probability that the next sample is Y given that the current sample is X.

  6.

    Note that \(\varPi ^{neg}\) is only used in MC-ASP. The output of Algorithm 2 may have positive weights.

  7.

    https://github.com/azreasoners/lpmln.

  8.

    Note that here “=” is just a part of the symbol for propositional atoms, and is not equality in first-order logic.
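The claim in note 4 that every local maximum of the log-likelihood is a global one follows the standard log-linear picture, as in Markov Logic [27]. A sketch of that picture (not the chapter's own derivation): for a log-linear model with weight vector \(w\) and feature counts \(n_i\),

```latex
\frac{\partial}{\partial w_i} \log P_w(x) \;=\; n_i(x) \;-\; \mathbb{E}_{X \sim P_w}\bigl[\, n_i(X) \,\bigr],
```

where \(n_i(x)\) counts the groundings of rule \(i\) that are satisfied in the training data \(x\). The log-likelihood is concave in \(w\), so gradient ascent cannot get stuck in a strictly inferior local maximum; but because the Hessian may be singular, the maximizer need not be unique, which is exactly why multiple weight combinations can achieve the same maximum probability of the training data.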
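The detailed-balance condition in note 5 can be checked numerically for a simple Metropolis kernel. The three-state target distribution and the uniform proposal below are illustrative assumptions for the sketch, not part of the chapter's MC-ASP algorithm:

```python
import math

# Target probabilities over three states, built log-linear style from weights.
weights = [0.0, 1.0, 2.0]
Z = sum(math.exp(w) for w in weights)
P = [math.exp(w) / Z for w in weights]

def Q(x, y):
    """Probability that the next sample is y given that the current sample
    is x, under a Metropolis kernel with a uniform proposal over all states."""
    n = len(P)
    if x == y:
        # Stay probability: 1 minus the total mass that moves to other states.
        return 1.0 - sum(Q(x, z) for z in range(n) if z != x)
    propose = 1.0 / n                 # uniform proposal
    accept = min(1.0, P[y] / P[x])    # Metropolis acceptance ratio
    return propose * accept

# Detailed balance: P(X) Q(X -> Y) = P(Y) Q(Y -> X) for all pairs of samples.
for x in range(len(P)):
    for y in range(len(P)):
        assert abs(P[x] * Q(x, y) - P[y] * Q(y, x)) < 1e-12
```

Because the chain is also ergodic in the sense of note 5 (every state can reach every other in enough steps), satisfying detailed balance with respect to \(P\) guarantees that \(P\) is the chain's stationary distribution.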

References

  1. Ahmadi, N., Lee, J., Papotti, P., Saeed, M.: Explainable fact checking with probabilistic answer set programming. In: Conference for Truth and Trust Online (2019)

  2. Babb, J., Lee, J.: Action language \(\cal{BC}\)+: preliminary report. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2015)

  3. Babb, J., Lee, J.: Action language \(\cal{BC}\)+. J. Logic Comput. 30(4), 899–922 (2020)

  4. Balai, E., Gelfond, M.: On the relationship between P-log and LPMLN. In: IJCAI (2016)

  5. Buccafurri, F., Leone, N., Rullo, P.: Enhancing disjunctive datalog by constraints. IEEE Trans. Knowl. Data Eng. 12(5), 845–860 (2000)

  6. Calimeri, F., et al.: ASP-Core-2: Input language format. ASP Standardization Working Group, Technical report (2012)

  7. Eiter, T., Kaminski, T.: Exploiting contextual knowledge for hybrid classification of visual objects. In: Michael, L., Kakas, A. (eds.) JELIA 2016. LNCS (LNAI), vol. 10021, pp. 223–239. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-48758-8_15

  8. Erdem, E., Lifschitz, V.: Tight logic programs. Theory Pract. Logic Program. 3, 499–518 (2003)

  9. Ferraris, P., Lee, J., Lifschitz, V.: Stable models and circumscription. Artif. Intell. 175, 236–263 (2011)

  10. Fierens, D., et al.: Inference and learning in probabilistic logic programs using weighted Boolean formulas. Theory Pract. Logic Program. 15(03), 358–401 (2015)

  11. Gebser, M., Schaub, T., Marius, S., Thiele, S.: xorro: near uniform sampling of answer sets by means of XOR (2016). https://potassco.org/labs/2016/09/20/xorro.html

  12. Gelfond, M., Lifschitz, V.: The stable model semantics for logic programming. In: Kowalski, R., Bowen, K. (eds.) Proceedings of International Logic Programming Conference and Symposium, pp. 1070–1080. MIT Press (1988)

  13. Hahn, S., Janhunen, T., Kaminski, R., Romero, J., Rühling, N., Schaub, T.: Plingo: a system for probabilistic reasoning in clingo based on \(LP^{MLN}\). In: Governatori, G., Turhan, A.Y. (eds.) RuleML+RR 2022. LNCS, vol. 13752, pp. 54–62. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-21541-4_4

  14. Katzouris, N., Artikis, A.: WOLED: a tool for online learning weighted answer set rules for temporal reasoning under uncertainty. In: Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning, vol. 17, pp. 790–799 (2020)

  15. Lee, J., Meng, Y., Wang, Y.: Markov logic style weighted rules under the stable model semantics. In: CEUR Workshop Proceedings, vol. 1433 (2015)

  16. Lee, J., Lifschitz, V.: Loop formulas for disjunctive logic programs. In: Palamidessi, C. (ed.) ICLP 2003. LNCS, vol. 2916, pp. 451–465. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-24599-5_31

  17. Lee, J., Palla, R.: System f2lp – computing answer sets of first-order formulas. In: Erdem, E., Lin, F., Schaub, T. (eds.) LPNMR 2009. LNCS (LNAI), vol. 5753, pp. 515–521. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-04238-6_51

  18. Lee, J., Talsania, S., Wang, Y.: Computing LPMLN using ASP and MLN solvers. Theory Pract. Logic Program. (2017). https://doi.org/10.1017/S1471068417000400

  19. Lee, J., Wang, Y.: Weighted rules under the stable model semantics. In: Proceedings of International Conference on Principles of Knowledge Representation and Reasoning (KR), pp. 145–154 (2016)

  20. Lee, J., Wang, Y.: A probabilistic extension of action language \(\cal{BC}\)+. Theory Pract. Logic Program. 18(3–4), 607–622 (2018)

  21. Lee, J., Wang, Y.: Weight learning in a probabilistic extension of answer set programs. In: Proceedings of International Conference on Principles of Knowledge Representation and Reasoning (KR), pp. 22–31 (2018)

  22. Lee, J., Yang, Z.: LPMLN, weak constraints, and P-log. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pp. 1170–1177 (2017)

  23. Lifschitz, V., Pearce, D., Valverde, A.: Strongly equivalent logic programs. ACM Trans. Comput. Log. 2, 526–541 (2001)

  24. Lin, F., Zhao, Y.: ASSAT: computing answer sets of a logic program by SAT solvers. Artif. Intell. 157, 115–137 (2004)

  25. Luo, M., Lee, J.: Strong equivalence for LPMLN programs. In: ICLP (Technical Communications) (2019)

  26. Poon, H., Domingos, P.: Sound and efficient inference with probabilistic and deterministic dependencies. In: AAAI, vol. 6, pp. 458–463 (2006)

  27. Richardson, M., Domingos, P.: Markov logic networks. Mach. Learn. 62(1–2), 107–136 (2006)

  28. Sato, T.: A statistical learning method for logic programs with distribution semantics. In: Proceedings of the 12th International Conference on Logic Programming (ICLP), pp. 715–729 (1995)

  29. Wang, B., Zhang, Z., Xu, H., Shen, J.: Splitting an LPMLN program. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1 (2018)

  30. Wang, Y., Lee, J.: Elaboration tolerant representation of Markov decision process via decision-theoretic extension of probabilistic action language \(p\cal{BC}+\). In: Balduccini, M., Lierler, Y., Woltran, S. (eds.) LPNMR 2019. LNAI, vol. 11481, pp. 224–238. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-20528-7_17

  31. Wang, Y., Zhang, S., Lee, J.: Bridging commonsense reasoning and probabilistic planning via a probabilistic action language. Theory Pract. Logic Program. 19(5–6), 1090–1106 (2019)

  32. Wu, W., et al.: LPMLNModels: a parallel solver for LPMLN. In: 2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI), pp. 794–799. IEEE (2018)

  33. Yang, Z., Ishay, A., Lee, J.: NeurASP: embracing neural networks into answer set programming. In: Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), pp. 1755–1762 (2020). https://doi.org/10.24963/ijcai.2020/243

Acknowledgements

This work was partially supported by the National Science Foundation under Grants IIS-1815337 and IIS-2006747.

Author information


Correspondence to Joohyung Lee.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

Cite this chapter

Lee, J., Yang, Z. (2023). Statistical Relational Extension of Answer Set Programming. In: Bertossi, L., Xiao, G. (eds) Reasoning Web. Causality, Explanations and Declarative Knowledge. Lecture Notes in Computer Science, vol 13759. Springer, Cham. https://doi.org/10.1007/978-3-031-31414-8_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-31413-1

  • Online ISBN: 978-3-031-31414-8
