Description:
Entity disambiguation is a core task in NLP, yet LLMs do not natively have a concept of entities: for example, “Joe Biden”, “Josef Biden” and “President Biden” are not formally recognized as references to the same entity.
Although several proposals for making LLMs entity-aware exist [1,2,3], none has seen widespread adoption.
The goal of this thesis is to investigate a lightweight alternative that does not modify the LLM itself but instead combines it with a traditional named entity disambiguation (NED) tool, such as Wikidata’s entity retrieval API, and to evaluate the effectiveness of this combination.
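As a rough illustration of what such a combination might look like, the sketch below retrieves candidate entities for a mention from Wikidata’s public search endpoint (the wbsearchentities action of the MediaWiki API) and formats them into a disambiguation prompt for an LLM. This is only a minimal, assumed pipeline, not a prescribed design: the prompt format, function names, and the choice of Python with the requests library are illustrative, and the actual LLM call is left out.

```python
import requests

WIKIDATA_API = "https://www.wikidata.org/w/api.php"


def retrieve_candidates(mention: str, limit: int = 5) -> list[dict]:
    """Query Wikidata's entity search (wbsearchentities) for candidates matching a mention."""
    params = {
        "action": "wbsearchentities",
        "search": mention,
        "language": "en",
        "format": "json",
        "limit": limit,
    }
    response = requests.get(WIKIDATA_API, params=params, timeout=10)
    response.raise_for_status()
    return [
        {
            "id": hit["id"],
            "label": hit.get("label", ""),
            "description": hit.get("description", ""),
        }
        for hit in response.json().get("search", [])
    ]


def build_disambiguation_prompt(mention: str, context: str, candidates: list[dict]) -> str:
    """Format retrieved candidates into a prompt asking the LLM to pick the best match."""
    candidate_lines = [
        f"{c['id']}: {c['label']} - {c['description']}" for c in candidates
    ]
    return (
        f"Context: {context}\n"
        f"Mention: {mention}\n"
        "Candidate Wikidata entities:\n"
        + "\n".join(candidate_lines)
        + "\nAnswer with the single Wikidata ID that best matches the mention."
    )


if __name__ == "__main__":
    # Hypothetical usage: candidate retrieval via Wikidata, disambiguation left to an LLM.
    mention = "Joe Biden"
    context = "President Biden signed the bill on Tuesday."
    candidates = retrieve_candidates(mention)
    prompt = build_disambiguation_prompt(mention, context, candidates)
    print(prompt)  # this prompt would then be passed to an LLM of choice (not shown)
```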
References:
[1] De Cao, Nicola, et al. “Autoregressive Entity Retrieval.” 9th International Conference on Learning Representations (ICLR 2021). 2021.
[2] Heinzerling, Benjamin, and Kentaro Inui. “Language Models as Knowledge Bases: On Entity Representations, Storage Capacity, and Paraphrased Queries.” Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. 2021.
[3] Ding, Yifan, et al. “EntGPT: Linking Generative Large Language Models with Knowledge Bases.” arXiv preprint arXiv:2402.06738 (2024).