RUSNet: Robust fish segmentation in underwater videos based on adaptive selection of optical flow
Frontiers in Marine Science (IF 2.8), Pub Date: 2024-11-11, DOI: 10.3389/fmars.2024.1471312. Peng Zhang, Zongyi Yang, Hong Yu, Wan Tu, Chencheng Gao, Yue Wang
Fish segmentation in underwater videos can be used to accurately determine the silhouette size of fish, which provides key information for fish population monitoring and fishery resource surveys. Some researchers have utilized underwater optical flow to improve fish segmentation accuracy in underwater videos. However, in existing works the underwater optical flow is neither evaluated nor screened, and its predictions are easily disturbed by non-fish motion. Therefore, in this paper, by analyzing underwater optical flow data, we propose a robust underwater segmentation network, RUSNet, with adaptive screening and fusion of input information. First, to enhance the robustness of the segmentation model to low-quality optical flow inputs, a global optical flow quality evaluation module is proposed to evaluate and align the underwater optical flow. Second, a decoder is designed that first roughly localizes the fish object and then applies the proposed multidimension attention (MDA) module to iteratively refine the rough localization map along the spatial and edge dimensions of the fish. Finally, a multioutput selective fusion method is proposed for the testing stage, in which the mean absolute error (MAE) of the prediction obtained from a single input is compared with that obtained from multisource inputs; the information with the highest confidence is then selected for predictive fusion, which yields the final underwater fish segmentation result. To verify the effectiveness of the proposed model, we trained and evaluated it on a publicly available joint underwater video dataset and on the separate public DeepFish dataset. Compared with advanced underwater fish segmentation models, the proposed model is more robust to low-quality background optical flow on the DeepFish dataset, with mean pixel accuracy (mPA) and mean intersection over union (mIoU) reaching 98.77% and 97.65%, respectively.
On the joint dataset, the mPA and mIoU of the proposed model are 92.61% and 90.12%, respectively, which are 0.72% and 1.21% higher than those of the advanced underwater video object segmentation model MSGNet. The results indicate that the proposed model can adaptively select the input and accurately segment fish in complex underwater scenes, which provides an effective solution for investigating fishery resources.
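The MAE comparison and the reported metrics can be sketched as follows. This is a minimal illustration of the ideas described in the abstract, assuming binary (fish/background) masks; the function names and the choice of reference map are our assumptions, not the authors' implementation.

```python
import numpy as np

def mae(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute error between two prediction maps."""
    return float(np.abs(a - b).mean())

def selective_fusion(pred_single: np.ndarray,
                     pred_multi: np.ndarray,
                     reference: np.ndarray) -> np.ndarray:
    """Keep whichever prediction has the lower MAE against a reference
    map (assumed here to be, e.g., an averaged prediction), mirroring
    the test-time selection described in the abstract."""
    if mae(pred_single, reference) <= mae(pred_multi, reference):
        return pred_single
    return pred_multi

def mpa_miou(pred: np.ndarray, gt: np.ndarray, num_classes: int = 2):
    """Mean pixel accuracy (mPA) and mean IoU (mIoU) over classes,
    here background (0) and fish (1)."""
    pas, ious = [], []
    for c in range(num_classes):
        pred_c, gt_c = pred == c, gt == c
        inter = np.logical_and(pred_c, gt_c).sum()
        union = np.logical_or(pred_c, gt_c).sum()
        if gt_c.sum() > 0:
            pas.append(inter / gt_c.sum())   # per-class pixel accuracy
        if union > 0:
            ious.append(inter / union)        # per-class IoU
    return float(np.mean(pas)), float(np.mean(ious))
```

In this sketch, a perfect prediction yields mPA = mIoU = 1.0, and `selective_fusion` falls back to the multisource prediction only when it is closer to the reference.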
Updated: 2024-11-11