"点击查看英文标题和摘要"
Towards ROXAS AI: Deep learning for faster and more accurate conifer cell analysis
Quantitative wood anatomy (QWA) has proven to be a powerful method for extracting relevant environmental information from tree rings. Although classical image-analysis tools such as ROXAS have greatly improved and facilitated measurements of anatomical features, producing QWA datasets remains challenging and time-consuming. In recent years, deep learning techniques have drastically improved the performance of most computer vision tasks. We therefore investigate three different deep learning models (U-Net, Mask-RCNN, Panoptic Deeplab) to improve the main bottleneck, cell detection. To this end, we create a Conifer Lumen Segmentation (CoLuS) dataset for training and evaluation. It consists of manual outlines of each cell lumen from anatomical images of several conifer species, covering a wide range of sample qualities. We furthermore apply our deep learning model to a previously published high-quality QWA chronology from Northern Finland to compare the warm-season (AMJJAS) temperature reconstruction skill of our deep learning method with that of the current ROXAS implementation, which is based on classical image analysis. Based on our evaluation dataset, our best-performing deep learning model (U-Net) improves the computer vision metrics mean Intersection over Union (mIoU) and Panoptic Quality (PQ) by 7.6% and 8.1%, respectively, compared to automatic ROXAS segmentation, in addition to being much faster. Furthermore, compared to automatic ROXAS analysis, which tends to systematically underestimate lumen area, U-Net reduces the percentage error by 57.8% for lumen area, by 63.2% for average cell wall thickness, and by 54.1% for cell count. In addition, we show higher performance for the U-Net compared to the Mask-RCNN previously used for tree cell segmentation. These improvements are independent of sample quality.
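The mIoU metric reported above can be illustrated with a minimal sketch: for each class, the Intersection over Union is the overlap of predicted and reference masks divided by their union, and mIoU averages this over all classes. The toy 4x4 label maps below are hypothetical and not taken from the CoLuS dataset.

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU for one binary mask (e.g. lumen vs. background)."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(inter / union) if union else 1.0

def mean_iou(pred: np.ndarray, target: np.ndarray, classes) -> float:
    """mIoU: average the per-class IoU over all classes."""
    return float(np.mean([iou(pred == c, target == c) for c in classes]))

# Toy label maps: 0 = background, 1 = cell lumen (illustrative only).
pred   = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
target = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(round(mean_iou(pred, target, classes=[0, 1]), 3))  # → 0.837
```

PQ additionally matches predicted and reference cell instances (counting a match at IoU > 0.5) and so penalizes missed or spurious cells, which is why it complements mIoU for instance-level lumen detection.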
For the Northern Finland QWA chronology, our U-Net model matches or outperforms ROXAS with and without manual post-processing, showing a common signal (Rbar) of 0.72 and an AMJJAS temperature correlation of 0.81 for maximum radial cell wall thickness. A clear improvement is especially visible for anatomical latewood density, likely due to the better detection of small cell lumina. Our results demonstrate the potential of deep learning for higher-quality segmentation with less manual post-processing time, saving weeks to months of tedious work without compromising data quality. We thus plan to implement deep learning in a future version of ROXAS.
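The common-signal statistic Rbar used above is the mean pairwise Pearson correlation between the individual tree series of a chronology. A minimal sketch, using synthetic series (a shared signal plus noise) rather than the Finnish data:

```python
import numpy as np

def rbar(series: np.ndarray) -> float:
    """Mean inter-series correlation; `series` has shape (n_trees, n_years)."""
    r = np.corrcoef(series)                 # n_trees x n_trees correlation matrix
    iu = np.triu_indices_from(r, k=1)       # upper triangle, excluding the diagonal
    return float(r[iu].mean())

# Three synthetic "trees" sharing a common signal plus individual noise.
rng = np.random.default_rng(0)
common = rng.normal(size=100)
trees = np.stack([common + 0.5 * rng.normal(size=100) for _ in range(3)])
print(0.0 < rbar(trees) < 1.0)  # → True
```

With stronger noise the pairwise correlations, and hence Rbar, drop toward zero, which is why Rbar serves as a measure of how coherently the trees record a common environmental signal.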