Abstract
This tutorial presents a statistical relational extension of answer set programming called \(\textrm{LP}^\mathrm{{MLN}}\), which incorporates weighted rules into the stable model semantics, following the log-linear models of Markov Logic. An \(\textrm{LP}^\mathrm{{MLN}}\) program defines a probability distribution over “soft” stable models, which are not required to satisfy all rules; the more rules with larger weights a stable model satisfies, the higher its probability. This allows for an intuitive and elaboration-tolerant representation of problems that require both logical and probabilistic reasoning. The extension provides a natural way to overcome the deterministic nature of the stable model semantics, for example by resolving inconsistencies in answer set programs, associating probabilities with stable models, and applying statistical inference and learning to probabilistic stable models. We also present formal relations between \(\textrm{LP}^\mathrm{{MLN}}\) and other related formalisms, which yield ways of performing inference and learning in \(\textrm{LP}^\mathrm{{MLN}}\).
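To make the semantics sketched above concrete, here is a brief restatement of the core \(\textrm{LP}^\mathrm{{MLN}}\) definition (a simplified sketch in our own notation; hard rules, which carry the special weight \(\alpha\), need the additional treatment given in the chapter). An \(\textrm{LP}^\mathrm{{MLN}}\) program \(\varPi\) is a finite set of weighted rules \(w : R\). For an interpretation \(I\), let \(\varPi_I\) be the set of rules in \(\varPi\) whose rule part \(R\) is satisfied by \(I\). Then

\[ W_{\varPi}(I) = \begin{cases} \exp\Bigl(\sum_{w:R\,\in\,\varPi_I} w\Bigr) & \text{if } I \text{ is a stable model of } \varPi_I,\\ 0 & \text{otherwise,} \end{cases} \qquad P_{\varPi}(I) = \frac{W_{\varPi}(I)}{\sum_{J} W_{\varPi}(J)}, \]

so stable models that satisfy more rules, and rules with larger weights, receive exponentially higher probability.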
Notes
- 1.
It is straightforward to extend \(\textrm{LP}^\mathrm{{MLN}}\) to allow function constants of positive arity as long as the program is finitely groundable.
- 2.
- 3.
- 4.
Note that although any local maximum of the log-likelihood function is a global maximum, there can be multiple combinations of weights that achieve the maximum probability of the training data (a small worked illustration follows these notes).
- 5.
A Markov chain is ergodic if there is a number m such that any state can be reached from any other state in any number of steps greater than or equal to m.
Detailed balance means \(P_{\varPi }(X)Q(X\rightarrow Y) = P_{\varPi }(Y)Q(Y\rightarrow X)\) for any samples X and Y, where \(Q(X\rightarrow Y)\) denotes the probability that the next sample is Y given that the current sample is X (a minimal sampler sketch illustrating these conditions follows these notes).
- 6.
Note that \(\varPi ^{neg}\) is only used in MC-ASP. The output of Algorithm 2 may have positive weights.
- 7.
- 8.
Note that here “=” is just a part of the symbol for propositional atoms, and is not equality in first-order logic.
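As a small worked illustration of the non-uniqueness mentioned in note 4 (a constructed example, not one taken from the chapter; the notation \(\varPi[w_1, w_2]\) simply marks the program with those weight values): if two weighted rules \(w_1 : R_1\) and \(w_2 : R_2\) are satisfied by exactly the same soft stable models, then the weight of every such model contains the factor \(\exp(w_1 + w_2)\) and depends on the two weights only through their sum. Hence

\[ P_{\varPi[w_1,\,w_2]}(I) = P_{\varPi[w_1 + c,\;w_2 - c]}(I) \quad \text{for every interpretation } I \text{ and every } c \in \mathbb{R}, \]

so infinitely many weight vectors attain the same (maximal) likelihood of the training data, even though that maximal value itself is unique.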
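The ergodicity and detailed-balance conditions recalled in note 5 are the standard guarantees that the samples produced by an MCMC procedure converge to its target distribution. The following minimal Metropolis-style sketch in Python (a generic illustration over a toy three-state distribution, not the MC-ASP algorithm itself; the names target, propose, and metropolis are ours) shows a transition rule that satisfies detailed balance with respect to its target:

import random

# Toy target distribution, standing in for P_Pi over soft stable models.
target = {"a": 0.2, "b": 0.3, "c": 0.5}
states = list(target)

def propose(x):
    """Symmetric proposal: move to one of the other states uniformly."""
    return random.choice([s for s in states if s != x])

def metropolis(n_samples, start="a"):
    """Metropolis sampler whose transitions satisfy detailed balance."""
    x = start
    counts = dict.fromkeys(states, 0)
    for _ in range(n_samples):
        y = propose(x)
        # With a symmetric proposal, accepting with probability
        # min(1, target(y)/target(x)) gives P(X)Q(X->Y) = P(Y)Q(Y->X).
        if random.random() < min(1.0, target[y] / target[x]):
            x = y
        counts[x] += 1
    return {s: counts[s] / n_samples for s in states}

print(metropolis(100_000))  # empirical frequencies approach `target`

Because the chain is also ergodic, the printed empirical frequencies approach the target probabilities as the number of samples grows.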
Acknowledgements
This work was partially supported by the National Science Foundation under Grants IIS-1815337 and IIS-2006747.