Positive Distribution Pollution: Rethinking Positive Unlabeled Learning from a Unified Perspective

Authors

  • Qianqiao Liang Zhejiang University
  • Mengying Zhu Zhejiang University
  • Yan Wang Macquarie University, Australia
  • Xiuyuan Wang Zhejiang University
  • Wanjia Zhao Zhejiang University
  • Mengyuan Yang Zhejiang University
  • Hua Wei MYbank, Ant Group
  • Bing Han MYbank, Ant Group
  • Xiaolin Zheng Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v37i7.26051

Keywords:

ML: Semi-Supervised Learning, ML: Deep Generative Models & Autoencoders, ML: Unsupervised & Self-Supervised Learning

Abstract

Positive Unlabeled (PU) learning, which has a wide range of applications, is becoming increasingly prevalent. However, in real scenarios it suffers from problems such as data imbalance, selection bias, and an agnostic class prior. Existing studies address only part of these problems and fail to provide a unified perspective for understanding them. In this paper, we first rethink these problems by analyzing a typical PU scenario and arrive at an insightful point of view: all these problems are inherently connected to a single one, i.e., positive distribution pollution, which refers to the inaccuracy in estimating the positive data distribution when very little labeled data is available. Inspired by this insight, we devise a variational model named CoVPU, which addresses all three problems from a unified perspective by targeting the positive distribution pollution problem. CoVPU not only accurately separates positive data from unlabeled data based on discrete normalizing flows, but also effectively approximates the positive distribution based on our derived unbiased rebalanced risk estimator and supervises the approximation with a novel prior-free variational loss. Rigorous theoretical analysis proves the convergence of CoVPU to an optimal Bayesian classifier, and extensive experiments demonstrate the superiority of CoVPU over state-of-the-art PU learning methods under these problems.
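For context on the "unbiased risk estimator" the abstract refers to: in standard PU learning, the negative-class risk cannot be measured directly, so it is rewritten in terms of positive and unlabeled samples and the class prior. The sketch below shows the well-known non-negative PU risk estimator in the style of Kiryo et al.; it is a generic illustration of this family of estimators, not the paper's rebalanced estimator (the function name, the logistic loss, and the toy scores are all assumptions for illustration).

```python
import numpy as np

def logistic_loss(scores, y):
    """Logistic surrogate loss l(s, y) = log(1 + exp(-y * s))."""
    return np.log1p(np.exp(-y * scores))

def nn_pu_risk(scores_pos, scores_unl, prior):
    """Non-negative PU risk estimate (generic sketch, not CoVPU's estimator).

    scores_pos: classifier scores on labeled positive samples
    scores_unl: classifier scores on unlabeled samples
    prior: assumed positive class prior pi = P(y = +1)
    """
    # Positive part of the risk: pi * E_p[l(g(x), +1)]
    risk_pos = prior * logistic_loss(scores_pos, +1).mean()
    # Negative part, rewritten via unlabeled data:
    # E_u[l(g(x), -1)] - pi * E_p[l(g(x), -1)],
    # clipped at zero so the empirical estimate cannot go negative
    risk_neg = logistic_loss(scores_unl, -1).mean() \
        - prior * logistic_loss(scores_pos, -1).mean()
    return risk_pos + max(0.0, risk_neg)

# Toy usage with made-up scores: positives score high, unlabeled mixed
risk = nn_pu_risk(np.array([2.0, 3.0]), np.array([-1.0, 0.0]), prior=0.5)
```

The clipping step is what distinguishes the non-negative variant from the plain unbiased estimator, whose empirical negative-risk term can dip below zero and cause overfitting with flexible models.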

Published

2023-06-26

How to Cite

Liang, Q., Zhu, M., Wang, Y., Wang, X., Zhao, W., Yang, M., Wei, H., Han, B., & Zheng, X. (2023). Positive Distribution Pollution: Rethinking Positive Unlabeled Learning from a Unified Perspective. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8737-8745. https://doi.org/10.1609/aaai.v37i7.26051

Section

AAAI Technical Track on Machine Learning II