Diversity and Standards in Writing for Publication in the Age of AI—Between a Rock and a Hard Place
Applied Linguistics (IF 4.155), Pub Date: 2024-04-06, DOI: 10.1093/applin/amae025
Maria Kuteeva, Marta Andersson

Research communities across disciplines recognize the need to diversify and decolonize knowledge. While artificial intelligence-supported large language models (LLMs) can help with access to knowledge generated in the Global North and demystify publication practices, they remain biased toward dominant norms and knowledge paradigms. LLMs lack agency, metacognition, knowledge of the local context, and an understanding of how human language works. These limitations raise doubts about their ability to develop the kind of rhetorical flexibility needed to adapt writing to ever-changing contexts and demands. LLMs are therefore likely to drive both language use and knowledge construction towards homogeneity and uniformity, reproducing existing biases and structural inequalities. Because their output rests on shallow statistical associations, these models cannot match humans in linguistic creativity, particularly across languages, registers, and styles. This is the area where key stakeholders in academic publishing—authors, reviewers, and editors—have the upper hand, as our applied linguistics community strives to increase multilingual practices in knowledge production.

Updated: 2024-04-06