Self-Training-Based Unsupervised Domain Adaptation for Object Detection in Remote Sensing Imagery
IEEE Transactions on Geoscience and Remote Sensing (IF 7.5). Pub Date: 2024-09-11. DOI: 10.1109/tgrs.2024.3457789. Authors: Sihao Luo 1, Li Ma 1, Xiaoquan Yang 2, Dapeng Luo 1, Qian Du 3
We propose a novel two-stage cross-domain self-training (CDST) framework for unsupervised domain-adaptive object detection in remote sensing. The first stage introduces a generative adversarial network (GAN)-based domain transfer strategy that uses CycleGAN to transfer source-domain images to match the target domain, preliminarily mitigating the domain shift and yielding higher-quality initial pseudo-labeled images. The key issue in tailoring self-training (ST) to unsupervised domain-adaptive detection lies in the quality of the pseudo-labeled images. To select high-quality pseudo-labeled images under domain shift, we propose hard example selection-based self-training (HES-ST) with three key steps: 1) detector-based example division (DED), which divides the detected examples into easy and hard examples according to their confidence level; 2) confidence and relation joint score (CRJS)-based hard example selection, which combines two reliability scores, computed respectively by the detector and by a relation network (RN) module, to mine reliable hard examples; and 3) union example (UE)-based training image selection, which combines both easy and reliable hard examples to choose target-domain images that are likely to contain fewer detection errors. Experimental results on several remote sensing datasets demonstrate the effectiveness of the proposed framework. Compared with a baseline detector trained only on the source dataset, our approach consistently improves detection performance on the target dataset by 15.7%–16.8% mean average precision (mAP) and achieves state-of-the-art (SOTA) results under various domain adaptation scenarios.
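For readers who want a concrete picture of the selection logic described above, the following Python sketch walks through the three HES-ST steps (DED, CRJS, UE) under stated assumptions: the confidence thresholds, the score-fusion weight alpha, the image-selection ratio, and the relation_score callable are all illustrative placeholders, not values or code from the paper.

```python
"""A minimal sketch of the HES-ST pseudo-label selection steps named in the
abstract (DED -> CRJS -> UE). All thresholds, the fusion weight alpha, and the
relation_score callable are hypothetical placeholders, not the authors' code."""

from dataclasses import dataclass


@dataclass
class Detection:
    image_id: str
    box: tuple          # (x1, y1, x2, y2) in pixels
    label: int          # predicted class index
    confidence: float   # detector confidence in [0, 1]


def detector_based_example_division(dets, t_easy=0.9, t_low=0.5):
    """DED: split detections into easy and hard examples by confidence.
    Detections below t_low are discarded as likely false positives."""
    easy = [d for d in dets if d.confidence >= t_easy]
    hard = [d for d in dets if t_low <= d.confidence < t_easy]
    return easy, hard


def crjs_hard_example_selection(hard, relation_score, alpha=0.5, t_joint=0.7):
    """CRJS: fuse detector confidence with a relation-network reliability
    score and keep the hard examples whose joint score is high enough."""
    reliable = []
    for d in hard:
        joint = alpha * d.confidence + (1.0 - alpha) * relation_score(d)
        if joint >= t_joint:
            reliable.append(d)
    return reliable


def union_example_image_selection(all_dets, easy, reliable_hard, min_ratio=0.8):
    """UE: keep target-domain images whose detections are mostly covered by
    the union of easy and reliable hard examples (fewer likely label errors)."""
    union = easy + reliable_hard
    selected = []
    for image_id in sorted({d.image_id for d in all_dets}):
        n_total = sum(d.image_id == image_id for d in all_dets)
        n_kept = sum(d.image_id == image_id for d in union)
        if n_kept / n_total >= min_ratio:
            selected.append(image_id)
    return selected, union


if __name__ == "__main__":
    # Toy detections on two target-domain images from a source-trained detector;
    # the constant relation score stands in for the RN module's output.
    dets = [
        Detection("img_001", (10, 10, 50, 50), 0, 0.95),
        Detection("img_001", (60, 20, 90, 70), 0, 0.62),
        Detection("img_002", (15, 30, 40, 80), 1, 0.55),
    ]
    easy, hard = detector_based_example_division(dets)
    reliable_hard = crjs_hard_example_selection(hard, relation_score=lambda d: 0.8)
    images, pseudo_labels = union_example_image_selection(dets, easy, reliable_hard)
    print("selected images:", images)                  # ['img_001']
    print("pseudo-labels kept:", len(pseudo_labels))   # 2
```

In a full self-training round, the selected images and their retained detections would serve as the pseudo-labeled target set used to retrain the detector; that retraining stage, and the CycleGAN-based domain transfer of the first stage, are outside the scope of this sketch.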
Updated: 2024-09-11