Counterfactual Collaborative Reasoning

Authors

Jianchao Ji, Zelong Li, Shuyuan Xu, Max Xiong, Juntao Tan, Yingqiang Ge, Hao Wang, Yongfeng Zhang

Abstract

Causal reasoning and logical reasoning are two important types of reasoning ability for human intelligence. However, their relationship has not been extensively explored in the context of machine intelligence. In this paper, we explore how the two reasoning abilities can be jointly modeled to enhance both the accuracy and the explainability of machine learning models. More specifically, by integrating two important types of reasoning ability, counterfactual reasoning and (neural) logical reasoning, we propose Counterfactual Collaborative Reasoning (CCR), which conducts counterfactual logic reasoning to improve performance. In particular, we use recommender systems as an example to show how CCR alleviates data scarcity, improves accuracy, and enhances transparency. Technically, we leverage counterfactual reasoning to generate "difficult" counterfactual training examples for data augmentation, which, together with the original training examples, can enhance model performance. Since the augmented data is model-agnostic, it can be used to enhance any model, enabling the wide applicability of the technique. Besides, most existing data augmentation methods focus on "implicit data augmentation" over users' implicit feedback, whereas our framework conducts "explicit data augmentation" over users' explicit feedback based on counterfactual logic reasoning. Experiments on three real-world datasets show that CCR achieves better performance than non-augmented models and implicitly augmented models, and also improves model transparency by generating counterfactual explanations.
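The abstract gives only a high-level description of the augmentation idea. The sketch below is a minimal, hypothetical illustration of that kind of explicit-feedback counterfactual augmentation, not the paper's actual algorithm or API: it assumes a trained scoring model exposed as a callable, a swap-based intervention on one item in the user's history, and a filter that keeps only "difficult" counterfactuals near the decision boundary. All names (score, similar_item, margin) are illustrative assumptions.

```python
# Hypothetical sketch of counterfactual data augmentation over explicit feedback.
# Assumptions (not from the paper): a trained model exposed as
# score(history, target_item) -> float in [0, 1], an item-similarity lookup,
# and training examples of the form (item history, target item, explicit label).

import random
from typing import Callable, List, Tuple

Example = Tuple[List[int], int, int]  # (item history, target item, label in {0, 1})

def counterfactual_augment(
    examples: List[Example],
    score: Callable[[List[int], int], float],
    similar_item: Callable[[int], int],
    margin: float = 0.1,
) -> List[Example]:
    """Generate 'difficult' counterfactual examples by minimally intervening
    on one item in the user's history and relabeling with the model's score."""
    augmented: List[Example] = []
    for history, target, _label in examples:
        if not history:
            continue
        # Intervene: replace one randomly chosen history item with a similar item.
        idx = random.randrange(len(history))
        cf_history = list(history)
        cf_history[idx] = similar_item(history[idx])

        cf_score = score(cf_history, target)
        # Keep only counterfactuals near the decision boundary ("difficult" cases).
        if abs(cf_score - 0.5) < margin:
            cf_label = int(cf_score >= 0.5)
            augmented.append((cf_history, target, cf_label))
    return augmented
```

Because the augmented examples are plain (history, item, label) triples, any downstream recommender can be trained on the union of the original and augmented data, which is the sense in which such augmentation is model-agnostic.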

Publication
In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining
Juntao Tan
PhD candidate

My research interests are mainly in Explainable AI, Recommender Systems, and other subfields of AI and Machine Learning.