Is "my favorite new movie" my favorite movie? Probing the Understanding of Recursive Noun Phrases
arXiv - CS - Computation and Language. Pub Date: 2021-12-15, DOI: arxiv-2112.08326
Qing Lyu, Hua Zheng, Daoxin Li, Li Zhang, Marianna Apidianaki, Chris Callison-Burch

Recursive noun phrases (NPs) have interesting semantic properties. For example, "my favorite new movie" is not necessarily "my favorite movie", whereas "my new favorite movie" is. This is common sense to humans, yet it is unknown whether pre-trained language models have such knowledge. We introduce the Recursive Noun Phrase Challenge (RNPC), a challenge set targeting the understanding of recursive NPs. When evaluated on our dataset, state-of-the-art Transformer models achieve only around chance-level performance. Still, we show that such knowledge is learnable with appropriate data. We further probe the models for relevant linguistic features that can be learned from our tasks, including modifier semantic category and modifier scope. Finally, models trained on RNPC achieve strong zero-shot performance on an extrinsic Harm Detection task, showing the usefulness of understanding recursive NPs in downstream applications. All code and data will be released at https://github.com/veronica320/Recursive-NPs.
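To make the kind of probe described above concrete, the following is a minimal sketch (not the authors' evaluation code; their tasks and scripts live in the linked repository) of testing whether an off-the-shelf NLI model distinguishes the two recursive NPs from the abstract's example. The entailment framing and the publicly available "roberta-large-mnli" checkpoint are assumptions chosen for illustration.

```python
# Minimal sketch: probe an off-the-shelf NLI model with the recursive-NP
# example from the abstract. This is NOT the RNPC evaluation setup; the
# checkpoint and premise/hypothesis framing are illustrative assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # assumed public checkpoint, not from the paper
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

pairs = [
    # (premise, hypothesis): the first pair should NOT be judged entailment,
    # the second should, per the example in the abstract.
    ("This is my favorite new movie.", "This is my favorite movie."),
    ("This is my new favorite movie.", "This is my favorite movie."),
]

for premise, hypothesis in pairs:
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze()
    # For roberta-large-mnli: 0 = contradiction, 1 = neutral, 2 = entailment
    label = model.config.id2label[int(probs.argmax())]
    print(f"{premise!r} -> {hypothesis!r}: {label} ({probs.max():.2f})")
```

A model that truly understands modifier scope should label only the second pair as entailment; around chance-level behavior on pairs like these is what the abstract reports for state-of-the-art Transformers on RNPC.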

Updated: 2021-12-16