Learning Accurate Low-bit Quantization towards Efficient Computational Imaging
International Journal of Computer Vision ( IF 11.6 ) Pub Date : 2024-10-14 , DOI: 10.1007/s11263-024-02250-0
Sheng Xu, Yanjing Li, Chuanjian Liu, Baochang Zhang

Recent advances in deep neural networks (DNNs) have promoted low-level vision applications in real-world scenarios, e.g., image enhancement and dehazing. Nevertheless, DNN-based methods face high computational and memory requirements, especially when deployed on real-world devices with limited resources. Quantization is one of the most effective compression techniques: it significantly reduces computational and memory costs by employing low-bit parameters and bit-wise operations. However, low-bit quantization for computational imaging (Q-Imaging) remains largely unexplored and usually suffers a significant performance drop compared with real-valued counterparts. In this work, through empirical analysis, we identify that the main factors behind this drop are the large gradient estimation error introduced by non-differentiable weight quantization and the information degeneration of activations under activation quantization. To address these issues, we introduce a differentiable quantization search (DQS) method to learn the quantized weights and an information boosting module (IBM) for network activation quantization. DQS treats the discrete weights of a quantized neural network as searchable variables and searches for them accurately with a differentiable approach. Specifically, each weight is represented as a probability distribution over a set of discrete values; during training these probabilities are optimized, and the values with the highest probabilities are selected to construct the final quantized network. Moreover, the IBM module rectifies the activation distribution before quantization to maximize its self-information entropy, retaining the maximum amount of information through the quantization process.
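The two mechanisms in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the quantization levels, the softmax-over-logits parameterization of DQS, and the grid search standing in for the learned IBM rectification are all assumptions made for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# --- DQS idea: each weight is a probability distribution over discrete levels ---
levels = np.array([-1.0, -0.5, 0.5, 1.0])      # hypothetical 2-bit level set
rng = np.random.default_rng(0)
logits = rng.normal(size=(6, len(levels)))     # trainable logits, one row per weight

probs = softmax(logits)                        # distribution over levels (optimized in training)
soft_w = probs @ levels                        # differentiable surrogate used during training
hard_w = levels[probs.argmax(axis=-1)]         # highest-probability levels form the quantized net

# --- IBM idea: transform activations before quantization to maximize code entropy ---
def code_entropy(x, n_bins):
    """Self-information entropy (bits) of x discretized into n_bins codes."""
    hist, _ = np.histogram(x, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

acts = rng.exponential(size=1000)              # toy skewed activation distribution
# A learned rectification is approximated here by a grid search over a pre-quantization scale.
scales = np.linspace(0.1, 2.0, 20)
best = max(scales, key=lambda s: code_entropy(np.clip(acts * s, 0.0, 1.0), n_bins=16))
```

The soft weight `soft_w` keeps the forward pass differentiable with respect to the logits, which is what lets the discrete choice be optimized with gradients; `hard_w` is what would be deployed. The entropy search mimics the goal of IBM, since a 16-level code carries at most 4 bits and a flatter histogram gets closer to that bound.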
Extensive experiments across a range of image processing tasks, including enhancement, super-resolution, denoising, and dehazing, validate the effectiveness of Q-Imaging, which outperforms a variety of state-of-the-art quantization methods. In particular, Q-Imaging also generalizes well when used to build a detection network for the dark object detection task.



