Resource-efficient Algorithms and Systems of Foundation Models: A Survey
ACM Computing Surveys (IF 23.8) · Pub Date: 2024-11-29 · DOI: 10.1145/3706418 · Mengwei Xu, Dongqi Cai, Wangsong Yin, Shangguang Wang, Xin Jin, Xuanzhe Liu
Large foundation models, including large language models, vision transformers, diffusion models, and LLM-based multimodal models, are revolutionizing the entire machine learning lifecycle, from training to deployment. However, the substantial advancements in versatility and performance these models offer come at a significant cost in terms of hardware resources. To support the growth of these large models in a scalable and environmentally sustainable way, there has been a considerable focus on developing resource-efficient strategies. This survey delves into the critical importance of such research, examining both algorithmic and systemic aspects. It offers a comprehensive analysis and valuable insights gleaned from existing literature, encompassing a broad array of topics from cutting-edge model architectures and training/serving algorithms to practical system designs and implementations. The goal of this survey is to provide an overarching understanding of how current approaches are tackling the resource challenges posed by large foundation models and to potentially inspire future breakthroughs in this field.
Last updated: 2024-11-29