Pattern Recognition (IF 7.5) Pub Date: 2022-10-23, DOI: 10.1016/j.patcog.2022.109115. Mattia Segu, Alessio Tonioni, Federico Tombari
Domain generalization aims at training machine learning models to perform robustly across different and unseen domains. Several methods train models on multiple datasets to extract domain-invariant features, hoping to generalize to unseen domains. Instead, we first explicitly train domain-dependent representations, leveraging ad-hoc batch normalization layers to collect each domain's statistics independently. Then, we propose to use these statistics to map domains into a shared latent space, where membership in a domain is measured by means of a distance function. At test time, we project samples from an unknown domain into the same space and infer the properties of their domain as a linear combination of the known ones. We apply the same mapping strategy at training and test time, learning both a latent representation and a powerful but lightweight ensemble model. We show a significant increase in classification accuracy over current state-of-the-art techniques on popular domain generalization benchmarks: PACS, Office-31, and Office-Caltech.
Title: Batch Normalization Embeddings for Deep Domain Generalization
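The core idea in the abstract — representing each domain by its accumulated batch-normalization statistics and inferring a test sample's domain membership from distances in that latent space — can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the embedding function, distance choice (Euclidean), and softmax weighting are simplifying assumptions made here for clarity.

```python
import numpy as np

def domain_embedding(activations):
    # Embed a batch of layer activations of shape (N, C) as the
    # concatenated per-channel [mean, variance] — the statistics a
    # batch-norm layer would accumulate for that domain.
    return np.concatenate([activations.mean(axis=0), activations.var(axis=0)])

def membership_weights(test_emb, domain_embs, temperature=1.0):
    # Soft domain membership: softmax over negative distances to each
    # known domain's embedding (distance choice is an assumption here).
    dists = np.array([np.linalg.norm(test_emb - e) for e in domain_embs])
    logits = -dists / temperature
    logits -= logits.max()  # numerical stability
    w = np.exp(logits)
    return w / w.sum()

rng = np.random.default_rng(0)
# Three synthetic "source domains" with different activation statistics
# (C = 4 channels, 256 samples each; purely illustrative data).
domains = [rng.normal(loc=m, scale=1.0, size=(256, 4)) for m in (0.0, 2.0, 5.0)]
domain_embs = [domain_embedding(a) for a in domains]

# A test batch whose statistics lie near the second domain's.
test = rng.normal(loc=2.1, scale=1.0, size=(64, 4))
w = membership_weights(domain_embedding(test), domain_embs)
print(w)  # weights concentrate on the second domain

# An ensemble prediction would then combine domain-specific predictors
# f_d(x) as sum_d w[d] * f_d(x), as the abstract's "linear combination
# of the known ones" suggests.
```

The weights sum to one by construction, so they can directly serve as mixing coefficients for a lightweight ensemble of domain-specific heads.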