CANO: Context-Aware Noise Optimization for Adversarial Privacy Protection

Adaptive noise injection that concentrates protection where it matters most

BubblePrivacy Research

Abstract

We present CANO (Context-Aware Noise Optimization), an adaptive noise injection system that optimizes the privacy-utility tradeoff in adversarial privacy protection. Unlike uniform noise strategies that apply identical perturbations across all features, CANO allocates noise proportionally to each feature's contribution to re-identification, concentrating protection where it matters most while preserving utility on low-impact features.

We additionally train a Deep Q-Network (DQN) policy through adversarial co-evolutionary training, where the defense adapts against an attacker that continuously retrains on protected data. We evaluate CANO against five baseline strategies (Gaussian, FGSM, PGD, Carlini-Wagner, and Laplace noise) across 4 datasets, 3 attack models, and 6 noise budgets, totaling 3,770 experimental configurations.

Keywords: privacy protection · adversarial noise · reinforcement learning · browser fingerprinting · context-aware optimization

Key Results

- 74.8%: baseline attacker accuracy
- 20.8%: attacker accuracy after CANO (30 rounds)
- 3,770: experimental configurations
- 0.190: mean accuracy reduction

Strategy Comparison

| Strategy | Approach | Accuracy Reduction | Robust to Adaptive Attackers |
|---|---|---|---|
| Gaussian | Uniform across features | Moderate | Low |
| FGSM | Gradient-based, single step | High | Low |
| PGD | Iterative gradient | High | Moderate |
| Carlini-Wagner | Optimization-based | Highest | Moderate |
| Laplace | DP-style uniform | Low | Low |
| CANO (Ours) | Context-aware adaptive | High | High |

Method

CANO resolves the fundamental tension in privacy-preserving noise injection: too little noise allows re-identification, too much breaks legitimate services, and uniform noise wastes budget on low-impact features while under-protecting critical ones.

  1. Feature Importance Analysis — Train a classifier on fingerprint samples and measure each feature's contribution to re-identification via permutation importance. High-impact features like canvas_hash and webgl_hash are identified.
  2. Proportional Noise Allocation — Concentrate noise budget on features that contribute most to tracking. Low-impact hardware attributes receive minimal perturbation.
  3. Reinforcement Learning — A DQN agent learns to allocate noise budgets dynamically based on observed attacker behavior and feature sensitivity.
  4. Adversarial Co-Evolution — Attacker and defender train against each other over 30 rounds. The defense adapts as the attacker retrains on protected data, producing robust long-term privacy guarantees.
δᵢ = ε · (wᵢ · n_features) · sign(zᵢ)

where ε is the per-feature noise budget, wᵢ is the importance weight of feature i, n_features is the number of features (so that, when the weights sum to 1, the average per-feature magnitude remains ε), and zᵢ is drawn from N(0, 1), or taken from the gradient direction when a target model is available.
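The allocation rule can be sketched in a few lines of NumPy. Everything below is illustrative: the feature names and importance weights are hypothetical, and in the full method the weights would come from permutation importance on a trained classifier (step 1), not be hard-coded.

```python
import numpy as np

def cano_noise(x, weights, eps, rng=None):
    """Importance-weighted perturbation: delta_i = eps * (w_i * n_features) * sign(z_i).

    x       : (n_features,) fingerprint feature vector
    weights : (n_features,) importance weights, assumed to sum to 1
    eps     : overall per-feature noise budget
    """
    if rng is None:
        rng = np.random.default_rng()
    n = len(weights)
    z = rng.standard_normal(n)               # stand-in for the gradient direction
    delta = eps * (weights * n) * np.sign(z) # budget concentrates on high-weight features
    return x + delta

# Hypothetical weights: canvas/webgl hashes dominate re-identification,
# so they receive most of the budget; hardware attributes get almost none.
weights = np.array([0.45, 0.40, 0.10, 0.05])  # canvas, webgl, screen, cores
x = np.zeros(4)
protected = cano_noise(x, weights, eps=0.1, rng=np.random.default_rng(0))
```

Note that the sign(zᵢ) factor fixes only the direction, so the per-feature magnitude is exactly ε · wᵢ · n_features, and with normalized weights the mean magnitude across features stays at ε.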

Evaluation

We evaluate across a comprehensive experimental grid: 4 datasets, 3 attack models, 6 noise budgets, and the five baseline strategies plus CANO, for 3,770 experimental configurations in total.

In adversarial co-evolutionary training, CANO reduces attacker re-identification accuracy from 74.8% to 20.8% within 30 rounds, demonstrating sustained robustness against adaptive adversaries that retrain on protected data.
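The co-evolution dynamic can be illustrated with a small NumPy simulation. This is a toy, not the paper's setup: a nearest-centroid classifier stands in for the retrained attack models, a leakage-proportional reweighting stands in for the DQN policy, and all sizes and budgets are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_feat, n_samples, eps = 5, 4, 40, 1.5
centers = rng.normal(0.0, 1.0, (n_users, n_feat))   # true per-user fingerprints

def sample():
    """Draw raw (unprotected) fingerprint observations for every user."""
    X = np.repeat(centers, n_samples, axis=0)
    X += 0.1 * rng.standard_normal(X.shape)
    y = np.repeat(np.arange(n_users), n_samples)
    return X, y

def protect(X, w):
    """Apply delta_i = eps * (w_i * n_features) * sign(z_i) per feature."""
    return X + eps * (w * n_feat) * np.sign(rng.standard_normal(X.shape))

def attacker_accuracy(Xtr, ytr, Xte, yte):
    """Nearest-centroid attacker, retrained on whatever data it observes."""
    cents = np.stack([Xtr[ytr == u].mean(axis=0) for u in range(n_users)])
    pred = ((Xte[:, None, :] - cents[None]) ** 2).sum(-1).argmin(axis=1)
    return float((pred == yte).mean())

Xtr, ytr = sample()
Xte, yte = sample()
baseline = attacker_accuracy(Xtr, ytr, Xte, yte)    # no protection

w = np.full(n_feat, 1.0 / n_feat)                   # start with uniform budget
for _ in range(10):                                 # the paper runs 30 rounds
    Xp = protect(Xtr, w)
    # Defender: shift budget toward features whose protected per-user
    # means still separate users, i.e. still leak identity.
    cents = np.stack([Xp[ytr == u].mean(axis=0) for u in range(n_users)])
    leak = cents.var(axis=0)
    w = leak / leak.sum()

# Attacker retrains on protected data; evaluation is also on protected data.
acc = attacker_accuracy(protect(Xtr, w), ytr, protect(Xte, w), yte)
```

Even in this stripped-down setting, the retrained attacker's accuracy on protected data ends up below the unprotected baseline, mirroring the qualitative trend (74.8% to 20.8%) reported above.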

Contact

For questions about this research, please reach out via the project repository or email the research team.