InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks
arXiv - CS - Computer Vision and Pattern Recognition
Pub Date: 2023-12-21, DOI: arxiv-2312.14238
Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Zhong Muyan, Qinglong Zhang, Xizhou Zhu, Lewei Lu, Bin Li, Ping Luo, Tong Lu, Yu Qiao, Jifeng Dai
The exponential growth of large language models (LLMs) has opened up numerous
possibilities for multi-modal AGI systems. However, the progress in vision and
vision-language foundation models, which are also critical elements of
multi-modal AGI, has not kept pace with LLMs. In this work, we design a
large-scale vision-language foundation model (InternVL), which scales up the
vision foundation model to 6 billion parameters and progressively aligns it
with the large language model, using web-scale image-text data from various
sources. The model can be broadly applied to, and achieves state-of-the-art
performance on, visual perception tasks such as image-level or pixel-level
recognition and vision-language tasks such as zero-shot image/video
classification and zero-shot image/video-text retrieval; it can also be linked
with LLMs to create multi-modal dialogue systems. We hope that our research can
contribute
to the development of multi-modal large models. Code and models are available
at https://github.com/OpenGVLab/InternVL.
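
The zero-shot classification the abstract mentions follows the CLIP paradigm: aligned vision and text encoders embed an image and a set of candidate label prompts into a shared space, and the label whose text embedding is most similar to the image embedding wins. Below is a minimal sketch of that paradigm using the generic Hugging Face CLIP API; the checkpoint name and image path are illustrative stand-ins, not InternVL's actual loading code, which is documented in the repository linked above.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Stand-in checkpoint: InternVL's own weights and loading code live in the
# linked repo; a public CLIP checkpoint keeps this sketch self-contained.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # hypothetical local image
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

# Encode the image and all candidate label prompts in one batch.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them
# into a probability distribution over the candidate labels.
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")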
Updated: 2023-12-25