Conet: Collaborative cross networks for cross-domain recommendation
Proceedings of the 27th ACM International Conference on Information and Knowledge Management (CIKM '18), 2018. dl.acm.org
The cross-domain recommendation technique is an effective way of alleviating the data sparsity issue in recommender systems by leveraging knowledge from relevant domains. Transfer learning is a class of algorithms underlying these techniques. In this paper, we propose a novel transfer learning approach for cross-domain recommendation that uses neural networks as the base model. In contrast to matrix factorization-based cross-domain techniques, our method is deep transfer learning, which can learn complex user-item interaction relationships. We assume that hidden layers in two base networks are connected by cross mappings, leading to collaborative cross networks (CoNet). CoNet enables dual knowledge transfer across domains by introducing cross connections from one base network to the other and vice versa. CoNet is realized in multi-layer feedforward networks by adding dual connections and a joint loss function, and it can be trained efficiently by back-propagation. The proposed model is thoroughly evaluated on two large real-world datasets. It outperforms baselines by a relative improvement of 7.84% in NDCG. We demonstrate the necessity of adaptively selecting representations to transfer. Our model can save tens of thousands of training examples compared with non-transfer methods while achieving competitive performance.
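No code accompanies the abstract; the following PyTorch sketch is a rough, hypothetical illustration of the architecture it describes, not the authors' implementation. It couples two domain-specific feedforward towers through cross-connection matrices (here a single matrix H_l shared by both transfer directions, one possible design choice) and trains them with a joint loss. All layer sizes, embedding dimensions, and names are assumptions.

```python
import torch
import torch.nn as nn

class CoNet(nn.Module):
    """Sketch of collaborative cross networks: two domain-specific
    feedforward towers whose hidden layers exchange information
    through cross-connection matrices H_l (dual knowledge transfer)."""

    def __init__(self, n_users, n_items_a, n_items_b, dims=(64, 32, 16)):
        super().__init__()
        emb = dims[0] // 2
        # Shared user embedding; one item embedding table per domain.
        self.user_emb = nn.Embedding(n_users, emb)
        self.item_emb_a = nn.Embedding(n_items_a, emb)
        self.item_emb_b = nn.Embedding(n_items_b, emb)
        # Domain-specific feedforward weights W_l.
        self.fc_a = nn.ModuleList([nn.Linear(dims[i], dims[i + 1])
                                   for i in range(len(dims) - 1)])
        self.fc_b = nn.ModuleList([nn.Linear(dims[i], dims[i + 1])
                                   for i in range(len(dims) - 1)])
        # Cross-connection matrices H_l (no bias), shared by both
        # transfer directions in this sketch.
        self.cross = nn.ModuleList([nn.Linear(dims[i], dims[i + 1], bias=False)
                                    for i in range(len(dims) - 1)])
        self.out_a = nn.Linear(dims[-1], 1)
        self.out_b = nn.Linear(dims[-1], 1)

    def forward(self, u, i_a, i_b):
        # Input layer: concatenated user/item embeddings per domain.
        a = torch.cat([self.user_emb(u), self.item_emb_a(i_a)], dim=-1)
        b = torch.cat([self.user_emb(u), self.item_emb_b(i_b)], dim=-1)
        for fa, fb, h in zip(self.fc_a, self.fc_b, self.cross):
            # a_{l+1} = relu(W_l^A a_l + H_l b_l), and symmetrically for b;
            # both updates read the previous layer's activations of both towers.
            a, b = torch.relu(fa(a) + h(b)), torch.relu(fb(b) + h(a))
        # Interaction logits for each domain.
        return self.out_a(a).squeeze(-1), self.out_b(b).squeeze(-1)

# Joint loss over both domains, optimized end-to-end by back-propagation.
model = CoNet(n_users=1000, n_items_a=500, n_items_b=800)
u = torch.randint(0, 1000, (32,))
i_a = torch.randint(0, 500, (32,))
i_b = torch.randint(0, 800, (32,))
logits_a, logits_b = model(u, i_a, i_b)
crit = nn.BCEWithLogitsLoss()
y_a = torch.randint(0, 2, (32,)).float()
y_b = torch.randint(0, 2, (32,)).float()
loss = crit(logits_a, y_a) + crit(logits_b, y_b)
loss.backward()
```

The abstract's "adaptively selecting representations to transfer" presumably corresponds to a sparsity-inducing penalty (e.g., an L1 term) on the cross matrices H_l; that regularization is omitted from this sketch.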