Making Sense of Citations
ACS Sensors (IF 8.2). Pub Date: 2024-11-22. DOI: 10.1021/acssensors.4c03076
Andrew J. deMello

This month, I would like to share a few personal thoughts about bibliometric indicators and, specifically, citations. As any scientist, publisher or journal editor will likely admit, the number of downloads, reads or citations associated with a journal publication is, for better or worse, a ubiquitous metric in modern-day scientific publishing. But what does a citation tell us? If an author cites a publication, they are simply declaring that a piece of work has relevance to their activities or interests and is worthy of comment. A citation makes no judgment on the “quality” of the cited work, but rather informs the reader that the prior study is worth inspection. That said, to many, the number of citations does provide a measure of the relative “importance” or “impact” of an article to the wider community. My intention here is not to settle that argument, although I would say that broad-brush citation counting clearly fails to assess impact at the article level, ignoring the influence of the research field or time of publication, and that more nuanced metrics, such as the relative citation ratio (1), are far more instructive (a simplified form of that ratio is sketched after this paragraph).

Rather, I would like to recount an incident in my own research group. In the course of his studies, one of my graduate students realized that he needed an optical sensor for Pd2+ quantification. The sensor needed to be accessible and simple to implement, to provide good analytical sensitivity and detection limits, and to work in aqueous media. He performed a literature search and soon came across a number of optical sensors that, on paper, looked promising. One of these looked especially interesting, since it was based on measuring the fluorescence of a readily available coumarin laser dye. The authors claimed that their “turn-off” sensor was cheap, provided excellent (nM) detection limits, could sense Pd2+ in aqueous environments and could detect Pd2+ in live cells. The study had been published in a well-respected journal specializing in photophysical and photochemical research and had garnered over 20 citations in the four years since publication. All looked fine, so we decided to adopt the sensor and use it for the problem at hand.

After a few weeks of testing and experimentation, we realized that the sensor might not be as useful as we had been led to believe. Through systematic reproduction of the experimental procedures reported in the original paper, together with a number of additional experiments, we came to the (correct) conclusion that the coumarin derivative was in fact not a fluorescence sensor for Pd2+ but rather an extremely poor pH sensor able to operate over a restricted range of 1.5 pH units. This was clearly disappointing, but scientific research is rarely straightforward, and setbacks of this kind are not uncommon. Far more worrisome was the fact that a number of the experimental procedures reported in the original paper were inaccurately or incompletely presented. This hindered our assessment of the sensor and meant that much effort was required to pinpoint the earlier mistakes. This personal anecdote, rather than being an opportunistic diatribe, is intended to highlight the importance of providing an accurate and complete description of the experimental methods used to generate the data presented in a scientific publication, and the consequences of publishing inaccurate or erroneous findings. Fortunately for us, we developed an alternative Pd2+ sensor and additionally reported our “re-evaluation” of the original work in the same peer-reviewed journal.
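As an aside, and for readers unfamiliar with it, the relative citation ratio normalizes an article's citation rate against the citation behavior of its own field. In simplified form (the published method of ref (1) further calibrates the denominator against a benchmark set of papers):

```latex
% Simplified form of the relative citation ratio (RCR); the published
% method calibrates the denominator against a benchmark paper set.
\mathrm{RCR} = \frac{\mathrm{ACR}}{\mathrm{FCR}}, \qquad
\mathrm{ACR} = \frac{\text{citations received}}{\text{years since publication}}
```

Here FCR is the field citation rate, estimated from the article's co-citation network, so an RCR near 1 indicates a paper cited at a rate typical for its own field and age: precisely the field and time normalization that raw citation counts lack.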
However, this experience made me think more deeply about how we use the literature to inform and underpin contemporary science. The most obvious problem faced by all researchers, whatever their field of expertise, is the sheer number of peer-reviewed papers published each year. To give some idea of the scale, over 2.8 million new papers were published and indexed in the Scopus and Web of Science databases in 2022: a number 47% higher than in 2016 (2). Even the most dedicated researcher can read only a minuscule fraction of the papers relevant to their interests, so how should one prioritize which papers to look at and which to pass over? There is obviously no single correct answer, but for many, the strategy of choice will involve scientific abstract and citation databases, such as Web of Science, Scopus, PubMed, SciFinder and The Lens, to find publications relevant to their area of interest.

A citation index or database is simply an ordered register of cited articles along with a register of citing articles. Its utility lies in its ability to connect or associate scientific concepts and ideas. Put simply, if an author cites a previously published piece of work in their own paper, they have created an unambiguous link between their science and the prior work. Science citation indexing in its modern form was introduced by Eugene Garfield in the 1950s, with the primary goal of simplifying information retrieval rather than identifying “important” or “impactful” publications (3). Interestingly, a stated driver of his original science citation index was also to “eliminate the uncritical citation of fraudulent, incomplete, or obsolete data by making it possible for the conscientious scholar to be aware of criticisms of earlier papers”. Indeed, Garfield opined that “even if there were no other use for the citation index than that of minimizing the citation of poor data, the index would be well worth the effort”.

This particular comment takes me back to my “palladium problem”. Perhaps, had I looked more closely at the articles that cited the original paper, I would have uncovered concerns regarding the method and its sensing utility. So, having a spare hour, I did exactly that. Of course, this is one paper among many millions, but the results were instructive, to me at least. In broad terms, almost all citations of the original paper appeared in introductory sections and simply stated that a Pd2+ sensor based on a coumarin dye had been reported. 80% made no comment on the quality (in terms of performance metrics) or utility of the work, 15% were self-citations by the authors, and only one paper commented on an aspect of the original data. Based on this analysis, I do not think we can be too hard on ourselves for believing that the Pd2+ sensor would be fit for purpose. Nonetheless, how could we have leveraged the tools and features of modern electronic publishing to make a better analysis? One possible strategy could be to discriminate between citations based on their origin. For example, references in review articles may often be cited without any meaningful analysis of the veracity of the work, while references cited in the results section of a research article are more likely to have been scrutinized by the authors in relation to their own work, whether the citation highlights a strength or a flaw.
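To make that idea concrete, here is a minimal sketch of how citations tagged with their section of origin might be weighted when ranking references. All names, weights and the self-citation discount are hypothetical, invented for illustration rather than drawn from any real database schema:

```python
from dataclasses import dataclass

# Hypothetical section weights: a citation appearing in a results or
# methods section is more likely to reflect genuine scrutiny of the
# cited work than one in an introduction or a review's background survey.
SECTION_WEIGHTS = {
    "results": 3.0,
    "methods": 2.0,
    "discussion": 2.0,
    "introduction": 1.0,
    "review_background": 0.5,
}

@dataclass
class Citation:
    citing_doi: str
    cited_doi: str
    section: str         # section of the citing paper where the reference appears
    self_citation: bool  # citing and cited papers share an author

def scrutiny_score(citations: list[Citation], cited_doi: str) -> float:
    """Sum section weights over all citations to one paper,
    discounting self-citations by half."""
    score = 0.0
    for c in citations:
        if c.cited_doi != cited_doi:
            continue
        weight = SECTION_WEIGHTS.get(c.section, 1.0)
        if c.self_citation:
            weight *= 0.5
        score += weight
    return score
```

Under such a weighting, the coumarin paper's 20-odd introduction-only citations would have scored far lower than a handful of results-section citations, flagging exactly the distinction that a raw citation count hides.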
Providing the reader with such information would clearly add discriminating power to the citation metric and aid their ability to identify articles “important” to their work. Fortunately, the advent of AI is beginning to make valuable contributions in this regard, and a number of “smart citation” tools are being introduced. For example, citation analysis platforms such as Scite (4) leverage AI to better understand and utilize scientific citations. Rather than simply recording that a citation occurred, such platforms classify citations by their contextual usage, for example, through the number of supporting, contrasting and mentioning citation statements (a toy sketch of such a tally follows below). This allows researchers to evaluate the utility and importance of a reference and, ultimately, strengthens the scientific method. It would be especially useful in our field of sensor science, where knowing which sensors or sensing methods have been used successfully in a given scenario would be invaluable when deciding whether to improve an existing sensor or develop a new one. It will be some time before “smart citation metrics” are widely adopted by the scientific community. However, it is already clear that not all citations are equal, and that we should be smarter in both the way we cite the literature and the way we use literature citations.
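To close with a concrete, if deliberately naive, illustration: the keyword lists below are invented stand-ins for the trained language models that platforms such as Scite actually use, and the sketch shows only the shape of a statement-level citation profile, not any real tool's behavior:

```python
# Toy classifier: real "smart citation" platforms use trained language
# models; this keyword heuristic only illustrates the three categories.
SUPPORTING = ("consistent with", "confirms", "in agreement with", "reproduced")
CONTRASTING = ("in contrast to", "could not reproduce", "contrary to", "disputes")

def classify_statement(sentence: str) -> str:
    s = sentence.lower()
    if any(k in s for k in CONTRASTING):
        return "contrasting"
    if any(k in s for k in SUPPORTING):
        return "supporting"
    return "mentioning"

def citation_profile(statements: list[str]) -> dict[str, int]:
    """Count supporting/contrasting/mentioning statements for one cited paper."""
    profile = {"supporting": 0, "contrasting": 0, "mentioning": 0}
    for s in statements:
        profile[classify_statement(s)] += 1
    return profile
```

For the coumarin Pd2+ sensor paper, such a profile would have shown some twenty “mentioning” statements and essentially no “supporting” ones: a warning sign that no raw citation count conveys.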
