Fairness-aware federated matrix factorization

S Liu, Y Ge, S Xu, Y Zhang, A Marian. Proceedings of the 16th ACM Conference on Recommender Systems (RecSys 2022). dl.acm.org
Achieving fairness across different user groups in recommender systems is an important problem. The majority of existing works achieve fairness through constrained optimization that combines the recommendation loss with a fairness constraint. To enforce fairness, the algorithm usually needs to know each user's group affiliation, such as gender or race. However, such group information is usually sensitive and requires protection. In this work, we seek a federated learning solution to the fair recommendation problem and identify the main challenge as an algorithmic conflict between the global fairness objective and the localized federated optimization process. On one hand, the fairness objective usually requires access to all users' group information. On the other hand, federated learning systems confine personal data to each user's local space. To resolve this conflict, we propose communicating group statistics during federated optimization and using differential privacy techniques to avoid exposing users' group information when they require privacy protection. We establish theoretical bounds on the noisy signal used in our method, which aims to enforce privacy without overwhelming the aggregated statistics. Empirical results show that federated learning may naturally improve user group fairness, and that the proposed framework can effectively control this fairness at low communication overhead.
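The mechanism the abstract describes lends itself to a short sketch: each client keeps its ratings and group label local, shares only item-embedding gradients, and reports per-group loss statistics perturbed with Laplace noise for differential privacy; the server aggregates the noised statistics into a fairness signal that reweights the next round's updates. The Python sketch below is a minimal illustration under these assumptions, not the paper's implementation; every name (Client, local_round, the fairness weight lam, the privacy parameter eps) and the gradient-reweighting rule are choices made here for concreteness.

```python
# Minimal sketch of fairness-aware federated matrix factorization with
# differentially private group statistics. Illustrative only: the fairness
# rule, noise calibration, and all identifiers are assumptions, not the
# authors' method or API.
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 50, 8
lam, lr, eps = 0.5, 0.05, 1.0   # fairness weight, step size, DP epsilon (assumed)

class Client:
    def __init__(self, group):
        self.group = group                      # sensitive; never sent in the clear
        self.u = rng.normal(0, 0.1, dim)        # private user embedding
        self.items = rng.choice(n_items, 10, replace=False)
        self.r = rng.uniform(1, 5, 10)          # private ratings

    def local_round(self, V, group_gap):
        """One local step; returns item gradients plus noised group statistics."""
        err = self.r - self.u @ V[self.items].T          # prediction error
        self.u += lr * err @ V[self.items]               # update private embedding
        g = np.zeros_like(V)
        g[self.items] = -np.outer(err, self.u)           # item-embedding gradient
        # Fairness signal: upweight updates for the currently worse-off group.
        g[self.items] *= 1.0 + lam * group_gap[self.group]
        # Report loss and membership as one-hot vectors with Laplace noise, so
        # the server never sees the raw group label (eps-DP under an assumed
        # sensitivity bound of 1; calibration is purely illustrative).
        loss_vec = np.zeros(2); loss_vec[self.group] = np.mean(err ** 2)
        cnt_vec = np.zeros(2); cnt_vec[self.group] = 1.0
        noise = rng.laplace(0, 1.0 / eps, size=(2, 2))
        return g, loss_vec + noise[0], cnt_vec + noise[1]

# Server: aggregates gradients and the noised per-group statistics.
clients = [Client(group=i % 2) for i in range(40)]
V = rng.normal(0, 0.1, (n_items, dim))
group_gap = np.zeros(2)
for rnd in range(20):
    grads = []
    loss_sum, cnt_sum = np.zeros(2), np.zeros(2)
    for c in clients:
        g, loss_vec, cnt_vec = c.local_round(V, group_gap)
        grads.append(g)
        loss_sum += loss_vec
        cnt_sum += cnt_vec
    V -= lr * np.mean(grads, axis=0)
    mean = loss_sum / np.maximum(cnt_sum, 1e-8)          # noised per-group mean loss
    # Positive gap: this group's loss exceeds the other's, so weight it up.
    group_gap = np.array([mean[0] - mean[1], mean[1] - mean[0]])
print("noised group loss gap after training:", round(abs(mean[0] - mean[1]), 3))
```

Note the design trade-off the abstract alludes to: the Laplace scale 1/eps must be small enough that the aggregated per-group means remain informative, yet large enough to protect each client's group membership, which is what the paper's theoretical bounds on the noisy signal characterize.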