Should We Publish Fewer Papers?
ACS Energy Letters ( IF 19.3 ) Pub Date : 2024-08-09 , DOI: 10.1021/acsenergylett.4c01991
Song Jin

It is perhaps an understatement to say that all of us in modern society, especially academic researchers, are overwhelmed. There are always more papers to write, more grant applications to submit, more administrative reports to file, more committees to serve on, more conferences to attend, and more manuscript (and grant) reviews to perform. Regarding academic papers in particular, the number of peer-reviewed research publications has been growing rapidly, by about 8–9% each year. (1,2) As one of the fastest-growing research areas, renewable energy research has likewise witnessed significant growth in the number of publications, as well as in the number of energy research journals. It seems that we are on a perpetual treadmill that is getting faster and faster, and, worse, that we are powerless to change course. It is clear that everyone is working harder and harder, trying to make an impact with their research efforts, but it is not clear that we are making more progress or getting better. This phenomenon seems to fit well with the classic economic theory of "involution". (3) Therefore, I venture to ask the academic research community the following question: Should we all publish fewer papers? By this proposition, I am not advocating that we all "slack off", but rather that we work harder to ensure that we publish fewer but higher-quality papers with new and significant scientific insights, instead of reporting routine or incremental studies across large numbers of papers. This is by no means a new problem of our time: the issue of "salami" papers has long been known, and my editorial on "sandwich papers" also seemed to resonate with many readers. (4) However, emerging generative artificial intelligence (AI) tools based on large language models, such as ChatGPT, are making this problem even more acute.
With the aid of such AI tools, seemingly reasonable research papers (both original research papers and reviews) can now be prepared very quickly (and guidelines are being developed on such practice). (5,6) Alarmingly, some research manuscripts must already have been reviewed using such AI tools. (7,8) Meanwhile, some of us are probably relying on AI to summarize research papers so that we can read more papers faster. (9) With all of this going on, it is pointless for us Earthly beings to compete with AI machines in generating more and more research papers. Instead, we must focus on doing what AI tools cannot do well: creative, original, and significant new research that truly answers scientific questions and solves previously unsolvable problems. As far as I can tell, language AI tools have not (yet) been very good at addressing such challenges.

Figure 1. Researchers are overwhelmed by the number of research papers being published these days. (Source: iStock.com/aldegonde)

It is also quite curious why so many new academic research journals keep popping up every week. As a journal editor myself, I still cannot keep up with the dizzying pace of announcements of new journals, nor can I make sense of all of them. Of course, I am not disputing that some research fields experience rapid growth and that new publication venues, distinct from those of the traditional disciplines, are indeed needed to accommodate the explosive growth of research papers; but this is clearly not always the case when one examines the names and scopes of many new journals. Especially now, when most of these new journals are online only, what are the differences and benefits of setting up more and more small journals on highly specialized and specific topics (some of which might publish just a few dozen papers a year), instead of simply archiving the papers together in one place?
If the growth rate of academic publications is slower than the growth rate of academic journals, published papers will be archived in an increasingly fragmented fashion across different journal websites. This is without even mentioning the flood of predatory Open Access journals and publishers that have led to academic fraud and the retraction of a massive number of papers. (10) With the proliferation of new journals, a manuscript can bounce between more and more journals through repeated resubmissions, which means more editors and reviewers evaluating the same research work over and over again as voluntary service. This undoubtedly contributes to the increased workload of all academic researchers but yields very little real benefit for the community as a whole. Every paper written eventually gets published in some journal, so the fewer rounds of submission it has to go through, the less work for the whole community. This issue of journal proliferation is also related to the issue of having more papers: if the number of publications (regardless of how meaningful and valuable they are) keeps growing quickly, publishers see more content to capture and will be motivated to create more journals. Simply put, if there is more demand, there will be more supply. It is quite easy for us to say "no" to manuscript review requests, but would we resist the temptation to publish in yet another new journal, especially if invited to do so? Perhaps the research community (we, the practicing scientists) as a whole needs to have some honest and healthy conversations among ourselves about how to approach this issue. Since we are all rational scientists, one needs to ask why we cannot get out of such unproductive cycles. The conclusion must be that we are all motivated by the evaluation and incentive systems we work within.
It is always simpler and easier to just use quantitative metrics to evaluate research output and researchers' productivity and impact: the number of publications, the impact factors of the journals in which the work is published, the citations, and so on. So long as evaluation processes exist, it is probably impossible to completely avoid some form of "bean-counting". Therefore, maybe we could try to improve the ways we "count the beans", to provide more incentive for publishing fewer (and hopefully more well-thought-out and higher-impact) papers. As flawed as it might be, the Hirsch index (h-index) (11) has become one of the most commonly used metrics to characterize the scientific output of an individual researcher. Here I argue that the ratio of the h-index to the total number of publications (N) of a given researcher reveals more about what fraction of that researcher's papers have truly made an impact (at least in terms of citations). We could use this ratio as a weighting factor on the original h-index to calculate a "weighted h-index" (= h²/N) that reflects the difference between someone with a very large number of publications of which only a small fraction are highly influential and someone with a smaller number of publications of which a larger fraction are highly influential, and thus incentivize the latter case. Of course, we could further debate or refine how heavily this fraction should weigh, for example by taking the square root of, or applying some other operation to, the fraction h/N. I must admit that, as a scientist who is considered reasonably "productive", I have published quite a few papers myself and might well be part of the problem here. If we assume that half of the submitted manuscripts eventually get published in a given journal, and that each submission receives at least two peer reviews, then each published manuscript represents, on average, at least four peer reviews.
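The weighted h-index proposed above can be sketched in a few lines of code (a minimal illustration only; the function names and the example citation counts are hypothetical, not taken from the editorial):

```python
def h_index(citations):
    """Classic Hirsch index: the largest h such that the researcher
    has at least h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def weighted_h_index(citations):
    """Weighted h-index = h * (h / N) = h**2 / N, where N is the total
    number of publications; rewards a higher *fraction* of influential
    papers rather than sheer publication count."""
    n = len(citations)
    if n == 0:
        return 0.0
    h = h_index(citations)
    return h * h / n

# Two hypothetical researchers with the same h-index (20) but very
# different output sizes:
prolific = [50] * 20 + [1] * 80   # 100 papers, 20 highly cited
focused  = [50] * 20 + [1] * 5    # 25 papers, 20 highly cited

print(weighted_h_index(prolific))  # 20**2 / 100 = 4.0
print(weighted_h_index(focused))   # 20**2 / 25  = 16.0
```

The example makes the incentive concrete: both researchers share h = 20, but the weighted index is four times higher for the one whose influential papers make up the larger fraction of total output.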
I am not sure that I have contributed peer reviews commensurate with the number of papers I have published. Feeling this "guilt", I have been increasingly asking myself: Do I really need to write that paper? What difference would my paper make? What scientific (or engineering) problems would my paper help to solve, and what new questions could it answer? Are these interesting, meaningful, and significant advances? Do I need to write a review manuscript when I do not have something burning to say and reviews on similar topics already exist? Of course, not all of these questions, especially those about the significance and potential impact of a given research paper, can be fully answered a priori, but at least I need to engage in some internal debate about them. Furthermore, with the same body of experimental results, could I summarize them in more concise and efficient, yet clear and accurate, ways so as to publish fewer papers, making future readers' time and effort in reading my papers more worthwhile (instead of driving them to use AI tools to process my papers)? I am not sure that I have been making progress toward any of these goals, but I must try, because the alternative is not good. I am also not sure whether I am making useful points in this editorial, but I hope to persuade you to join me in this pursuit of publishing fewer papers, because I am convinced that otherwise we are on an unsustainable path. Finally, putting my editor's hat for ACS Energy Letters back on, you might ask: if everyone tries to write fewer papers, would we see fewer manuscript submissions to ACS Energy Letters? I do not have a crystal ball. That could well happen, but we could end up publishing a similar (or maybe even higher) number of high-quality papers with a higher manuscript acceptance rate (note that most journals do not disclose their manuscript acceptance rates).
We might have a lower journal impact factor (JIF), (12) as many peer journals likely would as well, but if that means that authors write fewer papers, editors and reviewers have to evaluate fewer manuscripts, and thus all of us have a lighter workload, wouldn't this be a bargain that we should all be happy to take? I thank you for reading this editorial and thinking about these issues, and I welcome any debate and discussion. We at ACS Energy Letters look forward to receiving your next exciting renewable energy research work! The author sincerely thanks Dr. Prashant Kamat for providing valuable feedback.
