Exploration of Learned Lifting-Based Transform Structures for Fully Scalable and Accessible Wavelet-Like Image Compression
IEEE Transactions on Image Processing (IF 10.8) | Pub Date: 2024-10-23 | DOI: 10.1109/tip.2024.3482877 | Xinyue Li, Aous Naman, David Taubman
This paper provides a comprehensive study of the features and performance of different ways to incorporate neural networks into lifting-based wavelet-like transforms, within the context of fully scalable and accessible image compression. Specifically, we explore different arrangements of lifting steps, as well as various network architectures for the learned lifting operators. Moreover, we examine the impact of the number of learned lifting steps, the number of channels, the number of layers, and the support of the kernels in each learned lifting operator. To facilitate the study, we investigate two generic training methodologies that are simultaneously appropriate to the wide variety of lifting structures considered. Experimental results suggest that retaining fixed lifting steps from the base wavelet transform is highly beneficial. Moreover, we demonstrate that employing more learned lifting steps or more layers in each learned lifting operator does not contribute strongly to compression performance; benefits can, however, be obtained by utilizing more channels in each learned lifting operator. Ultimately, the learned wavelet-like transform proposed in this paper achieves over 25% bit-rate savings compared to JPEG 2000 while maintaining compact spatial support.
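To make the structure being studied concrete, the sketch below shows one way to combine fixed lifting steps from a base wavelet with additional learned lifting operators implemented as small CNNs. It is a minimal illustration under stated assumptions, not the authors' exact architecture: the class names, the default channel/layer/kernel hyper-parameters, the use of the 5/3 wavelet as the fixed base, and the periodic boundary extension are all illustrative choices.

```python
# Minimal sketch of a lifting-based wavelet-like transform with learned
# lifting operators. Assumptions (not from the paper): class names, default
# hyper-parameters, 5/3 base wavelet, periodic boundary handling.
import torch
import torch.nn as nn


class LearnedLiftingStep(nn.Module):
    """A learned lifting operator: a small CNN whose hidden width (`channels`),
    depth (`layers`) and spatial support (`kernel`) are the hyper-parameters
    the paper studies."""

    def __init__(self, channels: int = 16, layers: int = 2, kernel: int = 3):
        super().__init__()
        pad = kernel // 2
        net = [nn.Conv2d(1, channels, kernel, padding=pad), nn.ReLU()]
        for _ in range(layers - 1):
            net += [nn.Conv2d(channels, channels, kernel, padding=pad), nn.ReLU()]
        net += [nn.Conv2d(channels, 1, kernel, padding=pad)]
        self.net = nn.Sequential(*net)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class LiftingTransform1D(nn.Module):
    """One level of a 1-D lifting transform along the last (width) axis:
    fixed 5/3 predict/update steps, followed by learned predict/update
    steps that refine the detail and approximation bands."""

    def __init__(self):
        super().__init__()
        self.learned_predict = LearnedLiftingStep()
        self.learned_update = LearnedLiftingStep()

    def forward(self, x: torch.Tensor):
        even, odd = x[..., 0::2], x[..., 1::2]
        # Fixed lifting steps retained from the base (5/3) wavelet transform;
        # torch.roll implements periodic extension at the boundaries.
        detail = odd - 0.5 * (even + torch.roll(even, -1, dims=-1))
        approx = even + 0.25 * (detail + torch.roll(detail, 1, dims=-1))
        # Additional learned lifting steps.
        detail = detail - self.learned_predict(approx)
        approx = approx + self.learned_update(detail)
        return approx, detail


# Usage: one row-wise decomposition level of a single-channel image.
x = torch.randn(1, 1, 32, 64)            # (batch, channel, height, width)
approx, detail = LiftingTransform1D()(x)
print(approx.shape, detail.shape)        # both torch.Size([1, 1, 32, 32])
```

Because every learned operator is used only inside a lifting step, the transform in this sketch remains exactly invertible regardless of the networks' weights, which is the property that makes lifting structures attractive for scalable and accessible compression.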
Updated: 2024-10-23