Nature Electronics (IF 33.7) · Pub Date: 2024-07-25 · DOI: 10.1038/s41928-024-01213-0 · Yuanbo Guo, Zheyu Yan, Xiaoting Yu, Qingpeng Kong, Joy Xie, Kevin Luo, Dewen Zeng, Yawen Wu, Zhenge Jia, Yiyu Shi
Ensuring the fairness of neural networks is crucial when applying deep learning techniques to critical applications such as medical diagnosis and vital-signal monitoring. However, maintaining fairness becomes increasingly challenging when deploying these models on platforms with limited hardware resources, as existing fairness-aware neural network designs typically overlook the impact of resource constraints. Here we analyse the impact of the underlying hardware on the task of pursuing fairness, using neural network accelerators with a compute-in-memory architecture as examples. We first investigate the relationship between the hardware platform and fairness-aware neural network design. We then discuss how hardware advancements in emerging compute-in-memory devices, in terms of on-chip memory capacity and device variability management, affect neural network fairness. We also identify challenges in designing fairness-aware neural networks on such resource-constrained hardware and consider potential approaches to overcome them.
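To make the device-variability point concrete, the following is a minimal, self-contained sketch (not taken from the paper) of how weight perturbations of the kind caused by compute-in-memory conductance variation can change a model's group-wise accuracy gap, a common fairness proxy. The synthetic data, the two-group split, the least-squares "trained" weights, and the multiplicative Gaussian noise model are all illustrative assumptions, not the authors' experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-group dataset: group 1 has a mild distribution shift,
# standing in for demographic subpopulations in, e.g., medical data.
n = 2000
group = rng.integers(0, 2, n)                 # sensitive attribute: 0 or 1
x = rng.normal(0.0, 1.0, (n, 8))
x[group == 1] += 0.3
true_w = rng.normal(0.0, 1.0, 8)
y = (x @ true_w + 0.1 * rng.normal(0.0, 1.0, n) > 0).astype(int)

# Least-squares fit stands in for a trained linear classifier.
w, *_ = np.linalg.lstsq(x, 2 * y - 1, rcond=None)

def group_accuracy_gap(weights):
    """Absolute accuracy difference between the two groups."""
    pred = (x @ weights > 0).astype(int)
    acc0 = (pred[group == 0] == y[group == 0]).mean()
    acc1 = (pred[group == 1] == y[group == 1]).mean()
    return abs(acc0 - acc1)

def perturb(weights, sigma, rng):
    """Multiplicative Gaussian noise, a crude stand-in for device variability."""
    return weights * (1.0 + rng.normal(0.0, sigma, weights.shape))

clean_gap = group_accuracy_gap(w)
noisy_gaps = [group_accuracy_gap(perturb(w, 0.5, rng)) for _ in range(200)]

print(f"clean gap:       {clean_gap:.3f}")
print(f"mean noisy gap:  {np.mean(noisy_gaps):.3f}")
print(f"worst noisy gap: {np.max(noisy_gaps):.3f}")
```

Under this toy model, individual noise draws can widen the accuracy gap well beyond its clean-weight value, which is the kind of fairness degradation the abstract attributes to unmanaged device variability.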
Hardware design and the fairness of a neural network