The ICLR 2022 paper Learning to Generalize across Domains on Single Test Samples by Zehao Xiao, Xiantong Zhen, Ling Shao, and Cees G. M. Snoek is now available. We strive to learn a model from a set of source domains that generalizes well to unseen target domains. The main challenge in this domain generalization scenario is that no target domain data are available during training, so the learned model is never explicitly adapted to the unseen target domains. We propose learning to generalize across domains on single test samples. We leverage a meta-learning paradigm to train our model to adapt with single samples at training time, so that at test time it can further adapt itself to each individual test sample. We formulate the adaptation to a single test sample as a variational Bayesian inference problem, which incorporates the test sample as a conditioning signal into the generation of model parameters. Adapting to each test sample requires only one feed-forward computation at test time, without any fine-tuning or self-supervised training on additional data from the unseen domains. Extensive ablation studies demonstrate that our model learns to adapt by mimicking domain shift during training. Further, our model achieves performance that is at least comparable to, and often better than, state-of-the-art methods on multiple domain generalization benchmarks.
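To make the test-time procedure concrete, the adaptation can be pictured as a parameter generator conditioned on the single test sample: the sample's features are fed to a network that outputs the (posterior mean of the) classifier weights, and the same features are then classified with those freshly generated weights, all in one feed-forward pass. The sketch below is a toy numpy illustration under assumed shapes and names (`features`, `generate_classifier`, the linear forms, the dimensions); it is not the paper's actual architecture or training objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the paper uses a deep backbone instead.
FEAT_DIM, NUM_CLASSES = 8, 3

# Stand-ins for meta-learned parameters (fixed after training).
W_feat = rng.normal(size=(FEAT_DIM, FEAT_DIM))                      # feature extractor
W_mu = rng.normal(size=(FEAT_DIM, FEAT_DIM * NUM_CLASSES))          # posterior-mean head
W_logvar = 0.01 * rng.normal(size=(FEAT_DIM, FEAT_DIM * NUM_CLASSES))  # posterior log-variance head


def features(x):
    """Shared feature extractor applied to a single sample x."""
    return np.tanh(x @ W_feat)


def generate_classifier(z, sample=False):
    """Variational parameter generator: infer classifier weights
    conditioned on one sample's features z. At test time the
    posterior mean is used; sampling illustrates the stochastic
    side of the variational formulation."""
    mu = z @ W_mu
    if sample:
        std = np.exp(0.5 * (z @ W_logvar))
        mu = mu + std * rng.normal(size=mu.shape)
    return mu.reshape(FEAT_DIM, NUM_CLASSES)


def predict(x):
    """One feed-forward pass: adapt the classifier to x, then classify x."""
    z = features(x)
    W_cls = generate_classifier(z)  # classifier generated from this very sample
    logits = z @ W_cls
    return int(np.argmax(logits))


x_test = rng.normal(size=FEAT_DIM)
print(predict(x_test))  # a class index in {0, 1, 2}
```

Note that no gradient step or auxiliary data is involved at test time: the "adaptation" is entirely the conditioning of the generated weights on the test sample itself, which is what makes the per-sample cost a single forward pass.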
