Evaluating the robustness of parameter estimates in cognitive models: A meta-analytic review of multinomial processing tree models across the multiverse of estimation methods.
Psychological Bulletin ( IF 17.3 ) Pub Date : 2024-06-27 , DOI: 10.1037/bul0000434
Henrik Singmann, Daniel W. Heck, Marius Barth, Edgar Erdfelder, Nina R. Arnold, Frederik Aust, Jimmy Calanchini, Fabian E. Gümüsdagli, Sebastian S. Horn, David Kellen, Karl C. Klauer, Dora Matzke, Franziska Meissner, Martha Michalkiewicz, Marie Luisa Schaper, Christoph Stahl, Beatrice G. Kuhlmann, Julia Groß

Researchers have become increasingly aware that data-analysis decisions affect results. Here, we examine this issue systematically for multinomial processing tree (MPT) models, a popular class of cognitive models for categorical data. Specifically, we examine the robustness of MPT model parameter estimates with respect to two important decisions: the level of data aggregation (complete-pooling, no-pooling, or partial-pooling) and the statistical framework (frequentist or Bayesian). These decisions span a multiverse of estimation methods. We synthesized the data from 13,956 participants (164 published data sets) with a meta-analytic strategy and analyzed the magnitude of divergence between estimation methods for the parameters of nine popular MPT models in psychology (e.g., process-dissociation, source monitoring). We further examined moderators as potential sources of divergence. We found that the absolute divergence between estimation methods was small on average (<.04, with MPT parameters ranging between 0 and 1); in some cases, however, divergence amounted to nearly the maximum possible range (.97). Divergence was partly explained by a few moderators (e.g., the specific MPT model parameter, uncertainty in parameter estimation), but not by other plausible candidate moderators (e.g., parameter trade-offs, parameter correlations) or their interactions. Partial-pooling methods showed the smallest divergence within and across levels of pooling and thus seem to be an appropriate default method. Using MPT models as an example, we show how transparency and robustness can be increased in the field of cognitive modeling. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
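To make the model class concrete, here is a minimal sketch of complete-pooling maximum-likelihood estimation for one of the simplest MPT models, the one-high-threshold (1HT) recognition model. The counts below are hypothetical illustration data, not from the paper; the model equations (hit probability = d + (1 - d)·g, false-alarm probability = g) are the standard 1HT parameterization.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical aggregated (complete-pooling) response counts.
old_counts = np.array([75, 25])  # old items: ["old", "new"] responses
new_counts = np.array([30, 70])  # new items: ["old", "new"] responses

def neg_log_lik(params):
    """Negative multinomial log-likelihood of the 1HT model."""
    d, g = params                 # d = detection, g = guessing "old"
    p_hit = d + (1 - d) * g       # old item correctly called "old"
    p_fa = g                      # new item incorrectly called "old"
    probs_old = np.array([p_hit, 1 - p_hit])
    probs_new = np.array([p_fa, 1 - p_fa])
    eps = 1e-10                   # guard against log(0)
    return -(old_counts @ np.log(probs_old + eps)
             + new_counts @ np.log(probs_new + eps))

# MPT parameters are probabilities, so constrain them to (0, 1).
res = minimize(neg_log_lik, x0=[0.5, 0.5],
               bounds=[(0.001, 0.999)] * 2, method="L-BFGS-B")
d_hat, g_hat = res.x
print(f"d = {d_hat:.3f}, g = {g_hat:.3f}")
```

With these counts, g is recovered as the false-alarm rate (.30) and d follows from the hit rate (.75 = d + (1 - d)·.30, so d ≈ .64). No-pooling would fit this model separately per participant, and partial-pooling (the default recommended above) would place a hierarchical distribution over the individual d and g parameters.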

Updated: 2024-06-27