Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models
Yuxin Wen, Leo Marchyok, Sanghyun Hong, Jonas Geiping, Tom Goldstein, Nicholas Carlini · Apr 1, 2024

The paper considers application-specific models built by fine-tuning large pre-trained models on a small dataset, and shows that an adversary who can poison a training dataset can cause models trained on this data to become far more vulnerable to membership inference. Since the poisoning involves minimizing the loss on target data points, there is also no increase in validation loss for the poisoned models, which makes the tampering hard to detect.
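To make the quoted mechanism concrete, here is a minimal, hypothetical PyTorch sketch of a poisoning loop that minimizes the loss on a handful of target points before a checkpoint is released. The function name, the optimizer choice, and all hyperparameters are illustrative assumptions, not the paper's actual procedure.

import torch
import torch.nn.functional as F

def poison_checkpoint(model, target_points, lr=1e-4, steps=100):
    # Hypothetical sketch: drive the loss on the targeted (x, y) pairs
    # toward zero. Because this is ordinary loss minimization on
    # in-distribution data, validation loss on held-out data does not
    # rise, consistent with the snippet above.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        for x, y in target_points:  # batches of inputs and integer labels
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            optimizer.step()
    return model

A victim who later fine-tunes this checkpoint on data containing the target points would, per the paper's thesis, leak their membership at a much higher rate than with a clean checkpoint.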
A recurring observation in this area is that there is a connection between security and privacy attacks: poisoning the training data of a model can strengthen membership inference against it (see also EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning).
Related results:
- Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning
- AgrEvader: Poisoning Membership Inference against Byzantine-robust Federated Learning
- Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks
- Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning (ICLR 2023)
- Clean-image Backdoor: Attacking Multi-label Models with Poisoned Labels
One of these works formalizes poisoning and membership inference as attack games and reports (3.10, 10⁻⁶)-differential privacy on poisoned models (b_poison = 1,000).
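For context, the pair (3.10, 10⁻⁶) refers to the standard definition of (ε, δ)-differential privacy; in LaTeX notation, a randomized training mechanism M is (ε, δ)-differentially private if, for every pair of datasets D, D' differing in a single record and every set S of possible outputs,

\Pr[M(D) \in S] \le e^{\varepsilon} \, \Pr[M(D') \in S] + \delta

Here ε = 3.10 bounds the multiplicative leakage and δ = 10⁻⁶ the additive slack, so the snippet's claim is that the poisoned models still carry a formal privacy guarantee of this form.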