SAGN: Semantic-Aware Graph Network for Remote Sensing Scene Classification
IEEE Transactions on Image Processing (IF 10.8). Pub Date: 2023-01-24. DOI: 10.1109/tip.2023.3238310
Yuqun Yang, Xu Tang, Yiu-Ming Cheung, Xiangrong Zhang, Licheng Jiao

Scene classification of remote sensing (RS) images plays an essential role in the RS community, aiming to assign semantics to different RS scenes. With the increasing spatial resolution of RS images, high-resolution RS (HRRS) image scene classification has become a challenging task, because the contents within HRRS images are diverse in type, various in scale, and massive in volume. Recently, deep convolutional neural networks (DCNNs) have provided promising results for HRRS scene classification. Most of them regard HRRS scene classification as a single-label problem, in which the semantics represented by the manual annotation directly decide the final classification results. Although this is feasible, the various semantics hidden in HRRS images are ignored, resulting in inaccurate decisions. To overcome this limitation, we propose a semantic-aware graph network (SAGN) for HRRS images. SAGN consists of a dense feature pyramid network (DFPN), an adaptive semantic analysis module (ASAM), a dynamic graph feature update module, and a scene decision module (SDM). Their functions are, respectively, to extract multi-scale information, mine the various semantics, exploit the unstructured relations between diverse semantics, and make the decision for HRRS scenes. Instead of transforming single-label problems into multi-label ones, our SAGN elaborates suitable mechanisms to make full use of the diverse semantics hidden in HRRS images to accomplish scene classification. Extensive experiments are conducted on three popular HRRS scene data sets, and the results show the effectiveness of the proposed SAGN. Our source codes are available at https://github.com/TangXu-Group/SAGN.
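The abstract describes a pipeline in which mined semantic features become graph nodes whose unstructured relations drive a dynamic feature update before the scene decision. The sketch below is a minimal, hypothetical illustration of that idea (a similarity-based adjacency plus one graph-convolution-style propagation step, then pooling for a scene score); all names, shapes, and the specific update rule are assumptions, not the authors' implementation.

```python
import numpy as np

def build_adjacency(semantics: np.ndarray) -> np.ndarray:
    """Row-normalized adjacency from pairwise dot-product similarity.

    semantics: (n, d) array, one row per mined semantic feature (node).
    """
    sim = semantics @ semantics.T                   # (n, n) similarity
    sim = sim - sim.max(axis=1, keepdims=True)      # stabilize the softmax
    exp = np.exp(sim)
    return exp / exp.sum(axis=1, keepdims=True)     # each row sums to 1

def graph_update(semantics: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """One propagation step: aggregate neighbors, project, apply ReLU."""
    adj = build_adjacency(semantics)
    return np.maximum(adj @ semantics @ weight, 0.0)

def scene_logits(updated: np.ndarray, classifier: np.ndarray) -> np.ndarray:
    """Mean-pool the updated nodes into one scene vector, then score classes."""
    pooled = updated.mean(axis=0)                   # (d_out,)
    return pooled @ classifier                      # (n_classes,)

# Illustrative run with random data: 6 semantic nodes, 16-d features, 10 classes.
rng = np.random.default_rng(0)
nodes = rng.normal(size=(6, 16))
updated = graph_update(nodes, rng.normal(size=(16, 16)))
logits = scene_logits(updated, rng.normal(size=(16, 10)))
```

In the actual SAGN, the node features would come from the ASAM on top of DFPN multi-scale features, and the decision would be made by the SDM rather than a single linear classifier.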

Updated: 2024-08-28