
Declarative Approaches to Counterfactual Explanations for Classification

Published online by Cambridge University Press:  27 December 2021

LEOPOLDO BERTOSSI*
Affiliation:
Universidad Adolfo Ibáñez, Faculty of Engineering and Sciences, Santiago, Chile, and Millennium Institute for Foundational Research on Data (IMFD), Santiago, Chile (e-mail: [email protected])

Abstract


We propose answer-set programs that specify and compute counterfactual interventions on entities that are input to a classification model. In relation to the outcome of the model, the resulting counterfactual entities serve as a basis for the definition and computation of causality-based explanation scores for the feature values in the entity under classification, namely responsibility scores. The approach and the programs can be applied with black-box models, and also with models that can be specified as logic programs, such as rule-based classifiers. The main focus of this study is on the specification and computation of best counterfactual entities, that is, those that lead to maximum responsibility scores. From them one can read off the explanations as maximum-responsibility feature values in the original entity. We also extend the programs to bring semantic or domain knowledge into the picture. We show how the approach could be extended by means of probabilistic methods, and how the underlying probability distributions could be modified through the use of constraints. Several examples of programs written in the syntax of the DLV ASP solver, and run with it, are shown.
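As a rough illustration of the kind of program the abstract describes, the following is a minimal sketch in DLV-style ASP syntax; it is not taken from the paper. The entity e, the binary features f1, f2, f3, the predicates val, val_cf, cls and changed, and the toy rule-based classifier are all assumptions introduced for this example.

% Binary feature values and their complements (illustrative domain).
compl(0,1). compl(1,0).

% A hypothetical input entity e with three binary features f1, f2, f3.
ent(e).
val(e, f1, 1). val(e, f2, 1). val(e, f3, 0).

% Counterfactual version of the entity: each feature value is either
% kept or flipped (a counterfactual intervention).
val_cf(E, F, V) v val_cf(E, F, W) :- val(E, F, V), compl(V, W).

% A toy rule-based classifier (an assumption for this sketch):
% label 1 iff f1 = 1 and f2 = 1, otherwise label 0.
cls(E, 1) :- val_cf(E, f1, 1), val_cf(E, f2, 1).
cls(E, 0) :- ent(E), not cls(E, 1).

% The counterfactual entity must receive the opposite label (here, 0 instead of 1).
:- cls(e, 1).

% Record which feature values were changed by the intervention.
changed(E, F) :- val(E, F, V), val_cf(E, F, W), V != W.

% Weak constraint (classic DLV syntax [weight:level]): minimize the number
% of changed features, so optimal answer sets encode best counterfactuals.
:~ changed(E, F). [1:1]

In this sketch the optimal answer sets flip a single feature value (f1 or f2), and the flipped values would be read off as maximum-responsibility explanations for the original classification. Under ASP-Core-2 solvers such as DLV2 or clingo, the weak constraint would instead be written as :~ changed(E,F). [1@1, E, F].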

Type: Original Article
Copyright: © The Author(s), 2021. Published by Cambridge University Press

Footnotes

* In memory of Prof. Jack Minker (1927–2021), a scientist, a scholar, a visionary; a generous, wise and committed man.
