Field-Programmable Gate Array Architecture for Deep Learning: Survey & Future Directions
arXiv - CS - Hardware Architecture Pub Date : 2024-04-15 , DOI: arxiv-2404.10076
Andrew Boutros, Aman Arora, Vaughn Betz

Deep learning (DL) is becoming the cornerstone of numerous applications both in datacenters and at the edge. Specialized hardware is often necessary to meet the performance requirements of state-of-the-art DL models, but the rapid pace of change in DL models and the wide variety of systems integrating DL make it impossible to create custom computer chips for all but the largest markets. Field-programmable gate arrays (FPGAs) present a unique blend of reprogrammability and direct hardware execution that makes them suitable for accelerating DL inference. They offer the ability to customize processing pipelines and memory hierarchies to achieve lower latency and higher energy efficiency compared to general-purpose CPUs and GPUs, at a fraction of the development time and cost of custom chips. Their diverse high-speed IOs also enable directly interfacing the FPGA to the network and/or a variety of external sensors, making them suitable for both datacenter and edge use cases. As DL has become an ever more important workload, FPGA architectures are evolving to enable higher DL performance. In this article, we survey both academic and industrial FPGA architecture enhancements for DL. First, we give a brief introduction to the basics of FPGA architecture and how its components lead to strengths and weaknesses for DL applications. Next, we discuss different styles of DL inference accelerators on FPGA, ranging from model-specific dataflow styles to software-programmable overlay styles. We survey DL-specific enhancements to traditional FPGA building blocks such as logic blocks, arithmetic circuitry, and on-chip memories, as well as new in-fabric DL-specialized blocks for accelerating tensor computations. Finally, we discuss hybrid devices that combine processors and coarse-grained accelerator blocks with FPGA-like interconnect and networks-on-chip, and highlight promising future research directions.

Updated: 2024-04-17