A fine crop classification model based on multitemporal Sentinel-2 images
International Journal of Applied Earth Observation and Geoinformation ( IF 7.6 ) Pub Date : 2024-09-27 , DOI: 10.1016/j.jag.2024.104172
Tengfei Qu, Hong Wang, Xiaobing Li, Dingsheng Luo, Yalei Yang, Jiahao Liu, Yao Zhang

Information on crop sowing areas and yields is important for ensuring food security and advancing agricultural modernization, and crop classification and identification are the core problems in acquiring such information. Obtaining crop planting area and yield information in a timely and accurate manner is therefore highly important for optimizing crop planting structures, formulating agricultural policies, and supporting national economic development. In this paper, a fine crop classification model based on multitemporal Sentinel-2 images, CTANet, is proposed. It comprises a convolutional attention architecture (CAA) and a temporal attention architecture (TAA), which incorporate spatial attention modules, channel attention modules and temporal attention modules. These modules adaptively weight each pixel, channel and temporal phase of the given feature map to mitigate the intraclass spatial heterogeneity, spectral variability and temporal variability of crops. Additionally, the auxiliary features of significant importance for each crop category are identified using the random forest-SHAP algorithm, enabling the construction of classification datasets containing spectral bands, spectral bands with auxiliary features, and spectral bands with optimized auxiliary features. Evaluations on these three crop classification datasets revealed that the proposed CTANet approach and its key CANet component delivered superior crop classification performance on the dataset consisting of spectral bands and optimized auxiliary features compared with the other tested models. On this dataset, CTANet achieved higher validation accuracy and lower validation loss than the other methods, and during testing it attained the highest overall accuracy (93.9 %) and MIoU (87.5 %). When identifying rice, maize, and soybeans, the F1 scores of CTANet reached 95.6 %, 95.7 %, and 94.7 %, and the IoU scores were 91.6 %, 91.7 %, and 89.9 %, respectively, significantly exceeding those of several commonly used deep learning models. These results indicate the potential of the proposed method for distinguishing between different crop types.
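The adaptive weighting of pixels, channels and acquisition dates described in the abstract can be illustrated with a minimal sketch. This is not the authors' CTANet implementation; the module designs, reduction ratio, tensor layout and the simple averaging fusion at the end are assumptions used only to show how per-channel, per-pixel and per-date weights can be applied to a multitemporal Sentinel-2 feature stack.

```python
# Minimal sketch (not the paper's code) of channel, spatial and temporal
# attention applied to a feature stack of shape [batch, time, channels, H, W].
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Weights each channel via global average pooling + a small MLP."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
    def forward(self, x):                        # x: [B, C, H, W]
        w = self.mlp(x.mean(dim=(2, 3)))         # [B, C] channel weights
        return x * w[:, :, None, None]

class SpatialAttention(nn.Module):
    """Weights each pixel from pooled channel statistics."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)
    def forward(self, x):                        # x: [B, C, H, W]
        stats = torch.cat([x.mean(1, keepdim=True),
                           x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(stats))

class TemporalAttention(nn.Module):
    """Weights each acquisition date of a multitemporal stack."""
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Linear(channels, 1)
    def forward(self, x):                        # x: [B, T, C, H, W]
        scores = self.fc(x.mean(dim=(3, 4)))     # [B, T, 1]
        w = torch.softmax(scores, dim=1)         # attention over dates
        return x * w[:, :, :, None, None], w

if __name__ == "__main__":
    x = torch.randn(2, 6, 10, 64, 64)            # 6 Sentinel-2 dates, 10 bands
    x_t, date_weights = TemporalAttention(10)(x)
    ca, sa = ChannelAttention(10), SpatialAttention()
    per_date = [sa(ca(x_t[:, t])) for t in range(x_t.shape[1])]
    fused = torch.stack(per_date, dim=1).mean(dim=1)   # naive temporal fusion
    print(fused.shape)                           # torch.Size([2, 10, 64, 64])
```

In the paper the CAA and TAA modules are presumably embedded in a full classification backbone and trained end to end; the sketch isolates only the weighting mechanics.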

Updated: 2024-09-27