Learners restrict their linguistic generalizations using preemption but not entrenchment: Evidence from artificial-language-learning studies with adults and children.
Psychological Review (IF 5.1), Pub Date: 2024-06-06, DOI: 10.1037/rev0000463
Anna Samara, Elizabeth Wonnacott, Gaurav Saxena, Ramya Maitreyee, Judit Fazekas, Ben Ambridge

A central goal of research into language acquisition is explaining how, when learners generalize to new cases, they appropriately restrict their generalizations (e.g., to avoid producing ungrammatical utterances such as *the clown laughed the man; "*" indicates an ungrammatical form). The past 30 years have seen an unresolved debate between statistical preemption and entrenchment as explanations. Under preemption, the use of a verb in a particular construction (e.g., *the clown laughed the man) is probabilistically blocked by hearing the verb only in other constructions with similar meanings (e.g., the clown made the man laugh). Under entrenchment, such errors (e.g., *the clown laughed the man) are probabilistically blocked by hearing any utterance that includes the relevant verb (e.g., by both the clown made the man laugh and the man laughed). Across five artificial-language-learning studies, we designed a training regime such that learners received evidence for the ungrammaticality (under the relevant hypothesis) of a particular unattested verb/noun + particle combination (e.g., *chila + kem; *squeako + kem) via either preemption only or entrenchment only. Across all five studies, participants in the preemption condition (as per our preregistered prediction) rated unattested verb/noun + particle combinations as less acceptable for restricted verbs/nouns (which appeared during training) than for unrestricted, novel-at-test verbs/nouns (which did not appear during training) — that is, strong evidence for preemption. Participants in the entrenchment condition showed no evidence of such an effect (and, in 3/5 experiments, positive evidence for the null). We conclude that a successful model of learning linguistic restrictions must instantiate competition between different forms only where they express the same (or similar) meanings. (PsycInfo Database Record (c) 2024 APA, all rights reserved.)
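The contrast between the two accounts can be made concrete with a minimal sketch (not the authors' model; the particle names and their meaning assignments are illustrative assumptions). Preemption counts, as evidence against an unattested combination, only occurrences of the verb in a competing construction with the same meaning; entrenchment counts any occurrence of the verb at all:

```python
from collections import Counter

# Hypothetical meaning assignments for illustration only:
# "kem" and "pell" express the same meaning; "tomber" a different one.
MEANING = {"kem": "A", "pell": "A", "tomber": "B"}

def evidence(training, verb, particle):
    """Return (preemption, entrenchment) evidence against verb+particle."""
    counts = Counter(training)
    # Preemption: the verb heard with a *different* particle that expresses
    # the *same* meaning as the target particle.
    preemption = sum(
        n for (v, p), n in counts.items()
        if v == verb and p != particle and MEANING[p] == MEANING[particle]
    )
    # Entrenchment: any occurrence of the verb, regardless of construction.
    entrenchment = sum(n for (v, p), n in counts.items() if v == verb)
    return preemption, entrenchment

# Preemption-style regime: "chila" heard with a same-meaning competitor.
pre_only = [("chila", "pell")] * 3
# Entrenchment-only regime: "chila" heard, but never in a same-meaning competitor.
ent_only = [("chila", "tomber")] * 3

evidence(pre_only, "chila", "kem")   # (3, 3): both accounts predict blocking
evidence(ent_only, "chila", "kem")   # (0, 3): only entrenchment predicts blocking
evidence(pre_only, "blicko", "kem")  # (0, 0): novel-at-test verb, no evidence
```

On this sketch, the reported finding corresponds to the (0, 3) case producing no blocking: hearing the verb without a same-meaning competitor (entrenchment evidence alone) did not lower acceptability ratings, whereas preemption-style evidence did.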

Updated: 2024-06-06