The FacT: Taming latent factor models for explainability with factorization trees

Y Tao, Y Jia, N Wang, H Wang - Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, 2019 - dl.acm.org
Latent factor models have achieved great success in personalized recommendation, but they are also notoriously difficult to explain. In this work, we integrate regression trees to guide the learning of latent factor models for recommendation, and use the learnt tree structure to explain the resulting latent factors. Specifically, we build regression trees on users and items, respectively, from user-generated reviews, and associate a latent profile with each node on the trees to represent users and items. As the regression trees grow, the latent factors are gradually refined under the regularization imposed by the tree structure. As a result, we can track how each latent profile is created by following its path on the regression trees, and this path serves as an explanation for the resulting recommendations. Extensive experiments on two large collections of Amazon and Yelp reviews demonstrate the advantage of our model over several competitive baseline algorithms. In addition, an extensive user study confirms the practical value of the explainable recommendations generated by our model.
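The abstract only outlines the mechanism at a high level. As a rough, illustrative sketch of that idea (not the authors' FacT implementation), the following Python example grows a regression tree over users from binary review-feature indicators, attaches a latent profile to every tree node, regularizes each child's profile toward its parent's, and explains a user's profile by the sequence of feature splits along the tree path. All specifics here (the split criterion, the form of regularization, and the toy opinion features such as "battery life") are assumptions made purely for illustration.

```python
# Illustrative sketch only: tree-guided latent profiles with path-based explanations.
# The split criterion, regularization, and feature set are assumptions, not the
# method from the paper.
import numpy as np

RNG = np.random.default_rng(0)
N_FACTORS = 4          # dimensionality of latent profiles (assumed)
MIN_NODE_SIZE = 2      # stop splitting below this many users per child (assumed)
PARENT_REG = 0.1       # strength of the pull toward the parent profile (assumed)


class TreeNode:
    def __init__(self, user_ids, parent=None, split_feature=None, branch=None):
        self.user_ids = user_ids            # users routed to this node
        self.parent = parent
        self.split_feature = split_feature  # feature the parent split on
        self.branch = branch                # "mentions" / "does not mention"
        self.children = []
        # Start the child's profile from the parent's, so deeper nodes refine it.
        base = parent.profile if parent is not None else np.zeros(N_FACTORS)
        self.profile = base + 0.01 * RNG.standard_normal(N_FACTORS)

    def refine(self, user_factors):
        """Fit this node's profile to its users, regularized toward the parent."""
        target = user_factors[self.user_ids].mean(axis=0)
        anchor = self.parent.profile if self.parent is not None else target
        self.profile = (target + PARENT_REG * anchor) / (1.0 + PARENT_REG)


def grow(node, features, feature_names, user_factors, depth=0, max_depth=3):
    """Greedily split users by whether their reviews mention an opinion feature,
    picking the split that best separates the users' latent factors (a stand-in
    criterion; the abstract does not specify the actual one)."""
    node.refine(user_factors)
    if depth >= max_depth or len(node.user_ids) < 2 * MIN_NODE_SIZE:
        return
    best = None
    for j, name in enumerate(feature_names):
        yes = [u for u in node.user_ids if features[u, j] > 0]
        no = [u for u in node.user_ids if features[u, j] == 0]
        if len(yes) < MIN_NODE_SIZE or len(no) < MIN_NODE_SIZE:
            continue
        gap = np.linalg.norm(user_factors[yes].mean(0) - user_factors[no].mean(0))
        if best is None or gap > best[0]:
            best = (gap, name, yes, no)
    if best is None:
        return
    _, name, yes, no = best
    for ids, branch in ((yes, "mentions"), (no, "does not mention")):
        child = TreeNode(ids, parent=node, split_feature=name, branch=branch)
        node.children.append(child)
        grow(child, features, feature_names, user_factors, depth + 1, max_depth)


def explain(root, user_id):
    """Return the path of review-feature splits that produced this user's profile."""
    leaf = root
    while leaf.children:
        leaf = next(c for c in leaf.children if user_id in c.user_ids)
    path = []
    while leaf.parent is not None:
        path.append(f"{leaf.branch} '{leaf.split_feature}'")
        leaf = leaf.parent
    return list(reversed(path))


if __name__ == "__main__":
    feature_names = ["battery life", "screen", "price"]        # toy opinion phrases
    features = RNG.integers(0, 2, size=(12, len(feature_names)))
    user_factors = RNG.standard_normal((12, N_FACTORS))         # stand-in factors
    root = TreeNode(list(range(12)))
    grow(root, features, feature_names, user_factors)
    print("User 0 profile explained by:", explain(root, 0))
```

Running the sketch prints a path such as "mentions 'battery life', does not mention 'price'", which is the kind of path-based explanation for a latent profile that the abstract describes.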