Know where to go: Make LLM a relevant, responsible, and trustworthy searcher
Decision Support Systems ( IF 6.7 ) Pub Date : 2024-10-28 , DOI: 10.1016/j.dss.2024.114354
Xiang Shi, Jiawei Liu, Yinpeng Liu, Qikai Cheng, Wei Lu

The advent of Large Language Models (LLMs) has shown the potential to improve relevance and provide direct answers in web search. However, validating the reliability of generated results and the credibility of contributing sources remains challenging due to the limitations of traditional information retrieval algorithms and the LLM hallucination problem. In response to these challenges, we aim to transform the LLM into a relevant, responsible, and trustworthy searcher. Rather than following the traditional generative retrieval approach, which simply lets the LLM summarize the search results, we propose a novel generative retrieval framework that leverages the knowledge of LLMs to foster a direct link between queries and web sources. This framework reforms the retrieval process of the traditional generative retrieval framework by integrating an LLM retriever, and it redesigns the validator while adding an optimizer to ensure the reliability of the retrieved web sources and evidence sentences. Extensive experiments show that our method outperforms several SOTA methods in relevance, responsibility, and trustworthiness. It improves search result validity and precision by 2.54% and 1.05% over larger-parameter-scale LLM-based systems. Furthermore, it demonstrates significant advantages over traditional frameworks in question answering and downstream tasks.
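The retriever–validator–optimizer pipeline the abstract describes can be sketched as a minimal toy in Python. All function names, the stubbed "parametric knowledge," and the evidence data below are illustrative assumptions for exposition, not the authors' implementation:

```python
# Hypothetical sketch of the retrieve -> validate -> optimize loop from the
# abstract. Every name and datum here is an illustrative assumption.

def llm_retrieve(query, knowledge):
    """Stand-in for the LLM retriever: map the query directly to candidate
    web sources via the model's parametric knowledge (stubbed as a dict)."""
    return [src for src, topics in knowledge.items()
            if any(t in query for t in topics)]

def validate(sources, evidence):
    """Stand-in for the validator: keep only sources backed by evidence
    sentences, guarding against hallucinated (unsupported) citations."""
    return {s: evidence[s] for s in sources if evidence.get(s)}

def optimize(validated, query):
    """Stand-in for the optimizer: rank validated sources by a crude
    relevance score (word overlap between query and evidence sentences)."""
    def score(item):
        _, sentences = item
        return sum(sent.count(word)
                   for sent in sentences
                   for word in query.split())
    return sorted(validated.items(), key=score, reverse=True)

# Toy run with made-up sources and evidence.
knowledge = {
    "example.org/a": ["retrieval"],
    "example.org/b": ["retrieval", "llm"],
    "example.org/c": ["vision"],
}
evidence = {
    "example.org/a": ["Generative retrieval links queries to sources."],
    "example.org/b": [],  # no supporting evidence -> dropped by the validator
}
query = "llm generative retrieval"
results = optimize(validate(llm_retrieve(query, knowledge), evidence), query)
print([src for src, _ in results])  # only the evidence-backed source survives
```

The point of the sketch is the division of labor: the retriever proposes sources from model knowledge alone, the validator filters out anything without supporting evidence sentences, and the optimizer orders what remains.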

Updated: 2024-10-28