Simple Behavioral Analysis (SimBA) as a platform for explainable machine learning in behavioral neuroscience
Nature Neuroscience (IF 21.2), Pub Date: 2024-05-22, DOI: 10.1038/s41593-024-01649-9
Nastacia L Goodwin 1,2,3; Jia J Choong 1,4; Sophia Hwang 1; Kayla Pitts 1; Liana Bloom 1; Aasiya Islam 1; Yizhe Y Zhang 1,2,3; Eric R Szelenyi 1,3; Xiaoyu Tong 5; Emily L Newman 6; Klaus Miczek 7; Hayden R Wright 8,9; Ryan J McLaughlin 8,9; Zane C Norville 10; Neir Eshel 11; Mitra Heshmati 1,2,3,12; Simon R O Nilsson 1; Sam A Golden 1,2,3
The study of complex behaviors is often challenging when using manual annotation due to the absence of quantifiable behavioral definitions and the subjective nature of behavioral annotation. Integration of supervised machine learning approaches mitigates some of these issues through the inclusion of accessible and explainable model interpretation. To decrease barriers to access, and with an emphasis on accessible model explainability, we developed the open-source Simple Behavioral Analysis (SimBA) platform for behavioral neuroscientists. SimBA introduces several machine learning interpretability tools, including SHapley Additive exPlanation (SHAP) scores, that aid in creating explainable and transparent behavioral classifiers. Here we show how the addition of explainability metrics allows for quantifiable comparisons of aggressive social behavior across research groups and species, reconceptualizing behavior as a sharable reagent and providing an open-source framework. We provide an open-source, graphical user interface (GUI)-driven, well-documented package to facilitate the movement toward improved automation and sharing of behavioral classification tools across laboratories.
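The abstract's central technical point, SHAP scores as a route to transparent behavioral classifiers, can be illustrated with a minimal sketch. The snippet below is not SimBA's own code: the feature names, toy labels, and data are hypothetical, and it simply assumes the widely used scikit-learn and shap packages to show how per-frame SHAP values for a random-forest "attack" classifier reduce to a feature-importance profile that can be inspected and compared.

```python
# Minimal sketch (not SimBA's pipeline): explaining a supervised behavioral
# classifier with SHAP values. Feature names and data are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical frame-wise features derived from pose estimation
# (e.g., inter-animal distance, movement speed, body-part angle).
feature_names = ["animal_distance", "resident_speed",
                 "intruder_speed", "nose_to_nose_angle"]
X = rng.normal(size=(1000, len(feature_names)))
# Toy binary "attack" labels driven mostly by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Train a random-forest classifier (the model family SimBA supports).
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer gives per-frame, per-feature SHAP values: how much each
# feature pushed the predicted attack probability up or down in that frame.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X)

# Depending on the shap version, the result is a list (one array per class)
# or a 3-D array; take the positive-class values in either case.
sv_pos = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Mean absolute SHAP value per feature = a transparent importance score
# that can be reported alongside the classifier.
importance = np.abs(sv_pos).mean(axis=0)
for name, score in zip(feature_names, importance):
    print(f"{name}: {score:.3f}")
```

In the spirit of the abstract, it is this kind of per-feature profile, rather than an opaque prediction alone, that makes classifiers built in different laboratories quantitatively comparable.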


