Follow-Up Attention: An Empirical Study of Developer and Neural Model Code Exploration
IEEE Transactions on Software Engineering (IF 6.5) · Pub Date: 2024-08-23 · DOI: 10.1109/tse.2024.3445338
Matteo Paltenghi, Rahul Pandita, Austin Z. Henley, Albert Ziegler

Recent neural models of code, such as OpenAI Codex and AlphaCode, have demonstrated remarkable proficiency at code generation thanks to the underlying attention mechanism. However, it often remains unclear how these models actually process code, and to what extent their reasoning and the way their attention mechanism scans the code match the patterns of developers. A poor understanding of the model's reasoning process limits how current neural models are leveraged today, so far mostly for their raw predictions. To fill this gap, this work studies how the processed attention signal of three open large language models (CodeGen, InCoder, and GPT-J) agrees with how developers look at and explore code when each answers the same sensemaking questions about code. Furthermore, we contribute an open-source eye-tracking dataset comprising 92 manually labeled sessions from 25 developers engaged in sensemaking tasks. We empirically evaluate five heuristics that do not use attention and ten attention-based post-processing approaches of CodeGen's attention signal against our ground truth of developers exploring code, including the novel concept of follow-up attention, which exhibits the highest agreement between model and human attention. Our follow-up attention method predicts the next line a developer will look at with 47% accuracy, outperforming the 42.3% accuracy of a baseline that uses the session histories of other developers to recommend the next line. These results demonstrate the potential of leveraging the attention signal of pre-trained models for effective code exploration.
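To make the idea of attention-based next-line suggestion more concrete, the sketch below shows one possible way to post-process a transformer's token-level self-attention into line-level scores and use them to suggest the next line to inspect, given the line currently in focus. This is a minimal illustration under stated assumptions, not the paper's exact follow-up attention algorithm; the function names (aggregate_line_attention, predict_follow_up_line) and the toy attention matrix are hypothetical.

```python
# Minimal sketch (an assumption-laden illustration, not the paper's exact method):
# turn token->token self-attention into line->line scores and suggest the next
# line to look at, given the line currently in focus.
import numpy as np


def aggregate_line_attention(token_attention: np.ndarray,
                             token_to_line: list[int],
                             num_lines: int) -> np.ndarray:
    """Average token->token attention into a line->line matrix.

    token_attention: (num_tokens, num_tokens) attention weights, rows sum to 1.
    token_to_line:   line index of each token.
    """
    line_attention = np.zeros((num_lines, num_lines))
    counts = np.zeros((num_lines, num_lines))
    for i, li in enumerate(token_to_line):
        for j, lj in enumerate(token_to_line):
            line_attention[li, lj] += token_attention[i, j]
            counts[li, lj] += 1
    counts[counts == 0] = 1  # avoid division by zero for line pairs with no tokens
    return line_attention / counts


def predict_follow_up_line(line_attention: np.ndarray, current_line: int) -> int:
    """Suggest the line (other than the current one) that the current line
    attends to most strongly."""
    scores = line_attention[current_line].copy()
    scores[current_line] = -np.inf  # exclude staying on the same line
    return int(np.argmax(scores))


if __name__ == "__main__":
    # Toy example: 6 tokens spread over 3 lines, with a synthetic attention matrix.
    rng = np.random.default_rng(0)
    token_attention = rng.random((6, 6))
    token_attention /= token_attention.sum(axis=1, keepdims=True)  # normalize rows
    token_to_line = [0, 0, 1, 1, 2, 2]

    line_att = aggregate_line_attention(token_attention, token_to_line, num_lines=3)
    print("Suggested next line after line 0:", predict_follow_up_line(line_att, 0))
```

In the study itself, such line-level signals are compared against developers' eye-tracking scanpaths; the sketch above only conveys the general flavor of attention-based post-processing.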

Updated: 2024-08-23