Large Multi-Modal Models (LMMs) as Universal Foundation Models for AI-Native Wireless Systems
IEEE Network (IF 6.8), Pub Date: 2024-07-15, DOI: 10.1109/mnet.2024.3427313
Shengzhe Xu, Christo Kurisummoottil Thomas, Omar Hashash, Nikhil Muralidhar, Walid Saad, Naren Ramakrishnan

Large language models (LLMs) and foundation models have recently been touted as a game-changer for 6G systems. However, recent efforts on LLMs for wireless networks are limited to a direct application of existing language models that were designed for natural language processing (NLP) applications. To address this challenge and create wireless-centric foundation models, this paper presents a comprehensive vision of how to design universal foundation models that are tailored towards the unique needs of next-generation wireless systems, thereby paving the way towards the deployment of artificial intelligence (AI)-native networks. Diverging from NLP-based foundation models, the proposed framework promotes the design of large multi-modal models (LMMs) fostered by three key capabilities: 1) processing of multi-modal sensing data, 2) grounding of physical symbol representations in real-world wireless systems using causal reasoning and retrieval-augmented generation (RAG), and 3) enabling instructibility from wireless environment feedback to facilitate dynamic network adaptation, thanks to logical and mathematical reasoning facilitated by neuro-symbolic AI. In essence, these properties enable the proposed LMM framework to build universal capabilities that cater to various cross-layer networking tasks and to align intents across different domains. Preliminary results from experimental evaluation demonstrate the efficacy of grounding using RAG in LMMs and showcase the alignment of LMMs with wireless system designs. Furthermore, the enhanced rationale exhibited in the responses of LMMs to mathematical questions, compared to vanilla LLMs, demonstrates the logical and mathematical reasoning capabilities inherent in LMMs. Building on those results, we present a series of open questions and challenges for LMMs. We then conclude with a set of recommendations that chart the path towards LMM-empowered AI-native systems.
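The grounding capability described above relies on retrieval-augmented generation: before the model answers, relevant wireless-domain facts are retrieved and prepended to the prompt. As a rough illustration of that pipeline, the sketch below uses a toy bag-of-words retriever over a hypothetical knowledge base of wireless snippets (the documents, the scoring, and the prompt template are all illustrative assumptions, not the paper's actual implementation, which would use a learned encoder and a real corpus):

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": lowercase token counts. A real RAG system would use
    # a learned dense encoder; this stand-in only illustrates the pipeline.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical wireless-domain knowledge base (illustrative snippets only).
KNOWLEDGE_BASE = [
    "3GPP TS 38.214 defines the modulation and coding scheme tables for 5G NR.",
    "Path loss in free space grows with the square of the carrier frequency.",
    "Beamforming steers the gain of an antenna array toward a target user.",
]

def retrieve(query, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(KNOWLEDGE_BASE, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_grounded_prompt(query):
    # Prepend retrieved wireless facts so the model's answer is grounded in
    # domain knowledge rather than generic NLP pretraining alone.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

In a full LMM, the augmented prompt produced by `build_grounded_prompt` would be fed to the model, and the retrieved context is what anchors its physical symbol representations to the real wireless system.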

Updated: 2024-07-15