International Journal of Computer Vision (IF 11.6). Pub Date: 2024-11-22. DOI: 10.1007/s11263-024-02264-8. Authors: Marcos Roberto e Souza, Helena de Almeida Maia, Helio Pedrini.
Multiple deep learning-based stabilization methods have been proposed recently. Some of them directly predict the optical flow that warps each unstable frame into its stabilized version, an approach we call direct warping. These methods primarily perform online or semi-online stabilization, prioritizing low computational cost while achieving satisfactory results in some scenarios. However, they fail to smooth intense instabilities and produce considerably inferior results compared with other approaches. To improve their quality and narrow this gap, we propose: (a) NAFT, a new direct-warping semi-online stabilization method that adapts RAFT to video by adding a neighborhood-aware update mechanism, called IUNO. Using our training approach together with IUNO, the network learns the characteristics that contribute to video stability from data patterns, without requiring an explicit definition of stability. We also demonstrate how an off-the-shelf video inpainting method can be leveraged to achieve full-frame stabilization; (b) SynthStab, a new synthetic dataset of paired videos that enables supervision by camera motion rather than pixel similarity. To build SynthStab, we modeled camera motion using kinematic concepts; in addition, the unstable motion respects scene constraints, such as depth variation. We performed several experiments on SynthStab to develop and validate NAFT, and compared our results with five other methods from the literature that have publicly available code. Our experimental results show that we were able to stabilize intense camera motion, outperforming other direct-warping methods and bringing their performance closer to state-of-the-art methods. In terms of computational resources, our smallest network has only about 7% of the model size and trainable parameter count of the smallest competing method.
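NAFT's warping operator is learned and is not reproduced here; as a minimal, hypothetical illustration of the "direct warping" idea described above, a stabilized frame can be produced by backward-warping the unstable frame with a dense per-pixel flow field (nearest-neighbor sampling for brevity; the function name and shapes are assumptions, not the paper's API):

```python
import numpy as np

def warp_frame(frame: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Backward-warp a frame with a dense flow field (nearest-neighbor).

    frame: (H, W) or (H, W, C) array.
    flow:  (H, W, 2) array giving, for each output pixel, the (dx, dy)
           offset of the source pixel to sample in the input frame.
    """
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Round source coordinates and clamp them to the image bounds.
    src_x = np.clip(np.rint(xs + flow[..., 0]), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(ys + flow[..., 1]), 0, h - 1).astype(int)
    return frame[src_y, src_x]

# A constant flow of (+1, 0) samples one pixel to the right, shifting
# image content one pixel to the left.
frame = np.arange(16, dtype=np.float32).reshape(4, 4)
flow = np.zeros((4, 4, 2), dtype=np.float32)
flow[..., 0] = 1.0
warped = warp_frame(frame, flow)
```

Real direct-warping stabilizers predict a different flow field per frame and use bilinear rather than nearest-neighbor sampling; clamping at the border is also where the frame cropping problem arises that the full-frame inpainting step mentioned above addresses.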
Title: NAFT and SynthStab: A RAFT-Based Network and a Synthetic Dataset for Digital Video Stabilization
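SynthStab's actual generation procedure is detailed in the paper, not in this abstract; the following is only a toy, hypothetical sketch of what "modeling camera motion using kinematic concepts" can look like: integrating random accelerations into velocity and then position yields motion that is smooth frame-to-frame yet unstable overall, unlike independent per-frame jitter (the function name and scale parameter are assumptions):

```python
import numpy as np

def synthesize_shaky_path(n_frames: int, accel_scale: float = 0.05,
                          seed: int = 0) -> np.ndarray:
    """Toy 1-D kinematic camera path.

    Draw a random acceleration per frame, then integrate twice
    (acceleration -> velocity -> position) so consecutive positions
    are correlated, mimicking hand-held camera shake.
    """
    rng = np.random.default_rng(seed)
    accel = rng.normal(0.0, accel_scale, n_frames)
    velocity = np.cumsum(accel)   # first integration
    position = np.cumsum(velocity)  # second integration
    return position

path = synthesize_shaky_path(120)
```

A paired "stable" trajectory could then be obtained by smoothing or replacing this path, giving the camera-motion supervision signal the abstract contrasts with pixel-similarity losses.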