Learning to Generate Structured Code Summaries From Hybrid Code Context
IEEE Transactions on Software Engineering (IF 6.5), Pub Date: 2024-08-13, DOI: 10.1109/tse.2024.3439562
Ziyi Zhou, Mingchen Li, Huiqun Yu, Guisheng Fan, Penghui Yang, Zijie Huang

Code summarization aims to automatically generate natural language descriptions for code, and has become a rapidly expanding research area over the past decades. Unfortunately, existing approaches mainly focus on the "one-to-one" mapping from methods to short descriptions, which hinders them from becoming practical tools: 1) The program context is ignored, so they have difficulty predicting keywords outside the target method; 2) They are typically trained to generate brief function descriptions only one sentence in length, and therefore have difficulty providing specific information. These drawbacks are partially due to the limitations of public code summarization datasets. In this paper, we first build a large code summarization dataset that includes different code contexts and summary content annotations, and then propose a deep learning framework, named StructCodeSum, that learns to generate structured code summaries from hybrid program context. It provides both an LLM-based approach and a lightweight approach, which are suitable for different scenarios. Given a target method, StructCodeSum predicts its function description, return description, parameter description, and usage description from the hybrid code context, and ultimately builds a Javadoc-style code summary. The hybrid code context consists of the path context, class context, documentation context, and call context of the target method. Extensive experimental results demonstrate: 1) The hybrid context covers more than 70% of the summary tokens on average and significantly boosts model performance; 2) When generating function descriptions, StructCodeSum outperforms the state-of-the-art approaches by a large margin; 3) According to human evaluation, the quality of the structured summaries generated by our approach is better than that of the documentation generated by Code Llama.
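To make the target output concrete, the sketch below assembles the kind of Javadoc-style structured summary the abstract describes, from the four predicted parts (function, parameter, return, and usage descriptions). The method name, descriptions, and `buildJavadoc` helper are hypothetical illustrations, not part of StructCodeSum itself.

```java
// Hypothetical illustration of a Javadoc-style structured summary assembled
// from the four description parts named in the abstract. The example method
// and all description strings are invented for demonstration.
public class StructuredSummaryDemo {

    // Combine the four predicted descriptions into one Javadoc-style comment.
    static String buildJavadoc(String function, String usage,
                               String param, String ret) {
        StringBuilder sb = new StringBuilder();
        sb.append("/**\n");
        sb.append(" * ").append(function).append("\n");   // function description
        sb.append(" * <p>").append(usage).append("\n");   // usage description
        sb.append(" * @param ").append(param).append("\n"); // parameter description
        sb.append(" * @return ").append(ret).append("\n");  // return description
        sb.append(" */");
        return sb.toString();
    }

    public static void main(String[] args) {
        String summary = buildJavadoc(
                "Parses the given decimal string into an integer.",
                "Typically called when reading numeric configuration values.",
                "s the decimal string to parse",
                "the parsed integer value");
        System.out.println(summary);
    }
}
```

A plain "one-to-one" summarizer would emit only the first line of this comment; the structured form adds the usage, parameter, and return information that the paper argues short single-sentence summaries cannot convey.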

Updated: 2024-08-13