Published in AAAI Conference on Artificial Intelligence (AAAI; Oral presentation), 2023

  Code   Slides   Poster

tl;dr: A more sensible training method for randomized smoothing by incorporating a sample-wise control of target robustness.

  • Also appeared at ECCV AROW Workshop 2022

Abstract

Any classifier can be "smoothed out" under Gaussian noise to build a new classifier that is provably robust to ℓ2-adversarial perturbations, viz., by averaging its predictions over the noise via randomized smoothing. In this paper, we propose a simple training method that leverages the fundamental trade-off between accuracy and (adversarial) robustness to obtain more robust smoothed classifiers, in particular through a sample-wise control of robustness over the training samples. We make this control feasible by using "accuracy under Gaussian noise" as an easy-to-compute proxy for the adversarial robustness of an input: specifically, we differentiate the training objective depending on this proxy to filter out samples that are unlikely to benefit from the worst-case (adversarial) objective. Our experiments show that the proposed method, despite its simplicity, consistently improves certified robustness over state-of-the-art training methods. Somewhat surprisingly, we find that these improvements persist even for other notions of robustness, e.g., robustness to various types of common corruptions.
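To illustrate the two ingredients described above, here is a minimal, self-contained sketch (not the paper's implementation): a smoothed prediction obtained by majority vote over Gaussian-noised copies of an input, and "accuracy under Gaussian noise" computed for a single sample as a proxy of its robustness. The toy classifier, the noise level `sigma`, the sample count `n`, and the filtering threshold are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def smoothed_predict(classifier, x, sigma=0.25, n=100):
    """Randomized smoothing: majority vote of the base classifier's
    predictions over Gaussian perturbations of x."""
    noisy = x + sigma * rng.standard_normal((n, *x.shape))
    preds = np.array([classifier(z) for z in noisy])
    return np.bincount(preds).argmax()

def noise_accuracy(classifier, x, label, sigma=0.25, n=100):
    """'Accuracy under Gaussian noise' for one sample: the fraction
    of noisy copies of x that the base classifier labels correctly.
    An easy-to-compute proxy of adversarial robustness at x."""
    noisy = x + sigma * rng.standard_normal((n, *x.shape))
    preds = np.array([classifier(z) for z in noisy])
    return (preds == label).mean()

# Toy 1-D base classifier: label 1 iff the input is positive.
clf = lambda z: int(z.sum() > 0.0)

x = np.array([1.0])              # far from the decision boundary
pred = smoothed_predict(clf, x)  # -> 1

# Sample-wise control: only samples whose noise-accuracy is high
# enough would receive the worst-case (adversarial) objective;
# the 0.9 threshold here is a hypothetical choice.
use_adv_objective = noise_accuracy(clf, x, label=1) > 0.9
```

The key point of the sketch is that the proxy needs only forward passes on noisy inputs, so it is cheap to evaluate per sample during training.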

BibTeX

@inproceedings{jeong2023catrs,
  title={Confidence-aware Training of Smoothed Classifiers for Certified Robustness},
  author={Jeong, Jongheon and Kim, Seojin and Shin, Jinwoo},
  booktitle={AAAI Conference on Artificial Intelligence},
  year={2023}
}
