Paper

Metadata Normalization, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021), Nashville, TN, June 19–25, 2021 (arXiv)

Bibtex

@inproceedings{lu2021metadata,
   title={Metadata Normalization},
   author={Mandy Lu and Qingyu Zhao and Jiequan Zhang and Kilian M. Pohl and Li Fei-Fei and Juan Carlos Niebles and Ehsan Adeli},
   year={2021},
   booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021)},
   publisher={IEEE/CVF}
}

Abstract
Batch Normalization (BN) and its variants have delivered tremendous success in combating the covariate shift induced by the training step of deep learning methods. While these techniques normalize feature distributions by standardizing with batch statistics, they do not correct the influence on features from extraneous variables or multiple distributions. Such extra variables, referred to as metadata here, may create bias or confounding effects (e.g., race when classifying gender from face images). We introduce the Metadata Normalization (MDN) layer, a new batch-level operation which can be used end-to-end within the training framework, to correct the influence of metadata on feature distributions. MDN adopts a regression analysis technique traditionally used for preprocessing to remove (regress out) the metadata effects on model features during training. We utilize a metric based on distance correlation to quantify the distribution bias from the metadata and demonstrate that our method successfully removes metadata effects on four diverse settings: one synthetic, one 2D image, one video, and one 3D medical image dataset.
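The core regress-out idea can be sketched in a few lines of NumPy. This is a minimal illustration of batch-level residualization against metadata, not the authors' implementation (see the linked repo for that): fit a least-squares model F ≈ Xβ on the batch, then subtract only the metadata-explained component. The function name `metadata_normalize` and the choice to retain the intercept term are assumptions made for this sketch.

```python
import numpy as np

def metadata_normalize(features, metadata):
    """Residualize a batch of features against metadata via least squares.

    A minimal sketch of the regress-out operation (assumed interface, not
    the official MDN layer). `features` is (N, d), `metadata` is (N, m).
    An intercept column is added so that only the metadata-driven part of
    each feature is removed, leaving the batch means intact.
    """
    n = features.shape[0]
    X = np.column_stack([np.ones(n), metadata])        # design matrix with intercept
    beta, *_ = np.linalg.lstsq(X, features, rcond=None)  # fit F ~ X beta on the batch
    explained = X[:, 1:] @ beta[1:]                    # metadata-explained component only
    return features - explained

# Usage: features confounded by a metadata variable become uncorrelated with it.
rng = np.random.default_rng(0)
meta = rng.normal(size=(64, 1))
feat = 2.0 * meta + rng.normal(size=(64, 1))
resid = metadata_normalize(feat, meta)
```

After the call, each residualized feature column is (within the batch) linearly uncorrelated with the metadata columns, which is the effect the MDN layer enforces during training.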

Code

https://github.com/mlu355/MetadataNorm