Adaptive noise injection that concentrates protection where it matters most
We present CANO (Context-Aware Noise Optimization), an adaptive noise injection system that optimizes the privacy-utility tradeoff in adversarial privacy protection. Unlike uniform noise strategies that apply identical perturbations across all features, CANO allocates noise proportionally to each feature's contribution to re-identification, concentrating protection where it matters most while preserving utility on low-impact features.
We additionally train a Deep Q-Network (DQN) policy through adversarial co-evolutionary training, where the defense adapts against an attacker that continuously retrains on protected data. We evaluate CANO against five baseline strategies (Gaussian, FGSM, PGD, Carlini-Wagner, and Laplace noise) across 4 datasets, 3 attack models, and 6 noise budgets, totaling 3,770 experimental configurations.
| Strategy | Noise Mechanism | Utility Preserved | Adaptive Robustness |
|---|---|---|---|
| Gaussian | Uniform across features | Moderate | Low |
| FGSM | Gradient-based, single step | High | Low |
| PGD | Iterative gradient | High | Moderate |
| Carlini-Wagner | Optimization-based | Highest | Moderate |
| Laplace | DP-style uniform | Low | Low |
| CANO (Ours) | Context-aware adaptive | High | High |
CANO resolves the fundamental tension in privacy-preserving noise injection: too little noise allows re-identification, too much breaks legitimate services, and uniform noise wastes budget on low-impact features while under-protecting critical ones.
Feature importance analysis identifies canvas_hash and webgl_hash as the dominant re-identification signals, so they receive the largest share of the noise budget.
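The text does not spell out how feature importance is computed, so the following is a hedged leave-one-feature-out sketch: a toy overlap matcher stands in for the attack model, and a feature's weight is the re-identification accuracy lost when it is withheld. The gallery data and helper names are illustrative, not the paper's pipeline.

```python
def reident_accuracy(gallery, probes, features):
    """Toy matcher: link each probe to the gallery record that shares
    the most feature values (a stand-in for a real attack model)."""
    hits = 0
    for i, probe in enumerate(probes):
        scores = [sum(probe[f] == g[f] for f in features) for g in gallery]
        hits += scores.index(max(scores)) == i
    return hits / len(probes)

def importance_weights(gallery, probes):
    """Leave-one-feature-out importance (illustrative assumption):
    weight each feature by the accuracy drop when it is withheld."""
    feats = list(gallery[0])
    base = reident_accuracy(gallery, probes, feats)
    drop = {f: base - reident_accuracy(gallery, probes,
                                       [g for g in feats if g != f])
            for f in feats}
    total = sum(drop.values()) or 1.0
    return {f: d / total for f, d in drop.items()}

gallery = [
    {"canvas_hash": "a1", "webgl_hash": "x", "timezone": "UTC"},
    {"canvas_hash": "b2", "webgl_hash": "y", "timezone": "UTC"},
    {"canvas_hash": "c3", "webgl_hash": "y", "timezone": "UTC"},
]
weights = importance_weights(gallery, probes=list(gallery))
```

In this tiny example only canvas_hash uniquely separates the records, so it absorbs all the weight; on real fingerprint data the weights would spread across several features.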
The per-feature perturbation is

δ_i = ε · w_i · z_i,

where ε is the noise budget, w_i is the feature importance weight, and z_i is drawn from N(0, 1) or from the gradient direction when a target model is available.
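A minimal sketch of this allocation, treating features as numeric scores for illustration and assuming (as the proportional-allocation description suggests) that the weights are normalized so the whole budget ε is distributed across features:

```python
import random

def allocate_noise(features, weights, epsilon, rng=None):
    """Perturb each feature by delta_i = epsilon * w_i * z_i with
    z_i ~ N(0, 1), normalizing the weights so they sum to 1."""
    rng = rng or random.Random(0)
    total = sum(weights.values())
    return {
        name: value + epsilon * (weights[name] / total) * rng.gauss(0.0, 1.0)
        for name, value in features.items()
    }

features = {"canvas_hash": 0.42, "webgl_hash": 0.91, "timezone": 0.10}
weights  = {"canvas_hash": 0.6,  "webgl_hash": 0.3,  "timezone": 0.1}
protected = allocate_noise(features, weights, epsilon=0.5)
```

High-weight features such as canvas_hash receive proportionally larger perturbations, while low-weight features are left nearly intact, which is the utility-preserving half of the tradeoff.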
We evaluate across a comprehensive experimental grid:

- 4 datasets
- 3 attack models
- 6 noise budgets
- 6 noise strategies (Gaussian, FGSM, PGD, Carlini-Wagner, Laplace, and CANO)
In adversarial co-evolutionary training, CANO reduces attacker re-identification accuracy from 74.8% to 20.8% within 30 rounds, demonstrating sustained robustness against adaptive adversaries that retrain on protected data.
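The co-evolutionary dynamic can be illustrated with a deliberately simplified loop. This is not the paper's DQN policy: the signal values, the signal-to-noise accuracy proxy, and the 10% budget-reallocation rule are all illustrative assumptions, chosen only to show the attacker-adapts / defender-reallocates cycle.

```python
def coevolve(signal, epsilon, rounds):
    """Toy defender/attacker loop: each round the attacker exploits the
    feature with the best signal-to-noise ratio, and the defender shifts
    part of the noise budget from the least- to the most-exploited feature."""
    noise = {f: epsilon / len(signal) for f in signal}  # start uniform
    history = []
    for _ in range(rounds):
        residual = {f: s / (1.0 + noise[f]) for f, s in signal.items()}
        # accuracy proxy: strongest remaining signal, relative to the unprotected best
        history.append(max(residual.values()) / max(signal.values()))
        hot = max(residual, key=residual.get)
        cold = min(residual, key=residual.get)
        shift = 0.1 * noise[cold]          # reallocate 10% of the donor's budget
        noise[cold] -= shift
        noise[hot] += shift
    return history

history = coevolve({"canvas_hash": 1.0, "webgl_hash": 0.6, "timezone": 0.2},
                   epsilon=3.0, rounds=30)
```

Even this crude rule drives the attacker's proxy accuracy down monotonically as budget concentrates on the most-exploited feature; the DQN defender described above learns a richer version of the same reallocation policy.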
For questions about this research, please reach out via the project repository or email the research team.