Speaker: Simon Razniewski, Max Planck Institute for Informatics, Germany
Venue and date: KI 2020, September 21 or 22, 2020, Bamberg, Germany
Description of topic and goal. Machine-readable commonsense knowledge (CSK) is fundamental for automated reasoning about the general world, and is relevant for downstream applications such as question answering and dialogue. This tutorial focuses on the construction and consolidation of large repositories of commonsense knowledge. After briefly surveying crowdsourcing approaches to commonsense knowledge compilation, the main parts of the tutorial investigate (i) automated text extraction of CSK, including the relevant choices of extraction methodology and corpora, and (ii) knowledge consolidation techniques that aim to canonicalize, clean, or enrich initial extraction results. The tutorial ends with an outlook on application scenarios and on the promise of deep pretrained language models.
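As a minimal, purely illustrative sketch of the pipeline described above (not part of the tutorial materials; the data, relation names, and helper functions are hypothetical), CSK assertions are commonly represented as subject-predicate-object triples with confidence scores, and a consolidation step merges uncanonicalized duplicates from raw extraction output:

```python
from collections import defaultdict

# Raw extraction results: (subject, predicate, object, score).
# Duplicates and uncanonicalized surface forms are typical of text extraction.
raw_triples = [
    ("elephant", "HasProperty", "big", 0.9),
    ("elephants", "HasProperty", "big", 0.7),   # uncanonicalized plural
    ("elephant", "HasPart", "trunk", 0.95),
    ("elephant", "HasProperty", "big", 0.8),    # duplicate assertion
]

def canonicalize(subject: str) -> str:
    """Toy canonicalization: strip a plural 's'. A real system would
    lemmatize and disambiguate word senses."""
    return subject[:-1] if subject.endswith("s") else subject

def consolidate(triples):
    """Merge triples sharing a canonical (s, p, o) key, keeping the
    maximum confidence -- a naive stand-in for real consolidation."""
    best = defaultdict(float)
    for s, p, o, score in triples:
        key = (canonicalize(s), p, o)
        best[key] = max(best[key], score)
    return sorted((s, p, o, c) for (s, p, o), c in best.items())

for s, p, o, c in consolidate(raw_triples):
    print(f"{s} --{p}--> {o}  (confidence {c})")
```

Real systems such as Quasimodo or Dice replace the toy canonicalization and max-score merging with statistical ranking and joint logical reasoning, but the input/output shape is the same.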
Planned duration. 2 × 90 minutes.
- Introduction (10 minutes)
  - Definition of commonsense knowledge
  - Types of CSK: properties of concepts, subparts, events, processes
- Crowdsourcing (15 minutes)
  - General considerations
  - ConceptNet, Atomic, Verbosity
- Extraction (90 minutes)
  - Properties of concepts: TupleKB, Quasimodo
  - Parts of concepts: hasPartKB, WebChild
  - Visual knowledge: Visual Genome, NEIL
- Consolidation (40 minutes)
  - Dice, COMET (and the consolidation phase of Quasimodo)
  - Quality evaluation (all methods)
- Outlook (25 minutes)
  - Using CSK: WorldTree, KBQA
  - Deep pretrained language models: BERT et al.
Target audience and expected prerequisite knowledge. The target audience of this tutorial consists of researchers and practitioners in artificial intelligence areas such as automated reasoning, planning, question answering, or dialogue who are interested in learning about techniques for acquiring structured knowledge to bootstrap their methods. The tutorial provides an overview of extraction and consolidation paradigms that can help attendees acquire commonsense knowledge for their own use cases, as well as a survey of existing repositories that may be relevant for reuse.
Attendees are expected to have basic knowledge of knowledge representation. No previous knowledge of natural language processing is required.
Organizer’s background. The organizer has considerable experience in knowledge base construction and has more recently ventured into extraction and reasoning methods for commonsense knowledge. Two particularly relevant projects are Quasimodo (Romero et al., CIKM 2019) and Dice (Chalier et al., 2020). The former focuses on extracting salient commonsense knowledge from question corpora such as Reddit and Google Autocomplete, and provides the most extensive collection of general-world knowledge to date. The latter aims at fighting sparsity and incoherence in existing commonsense knowledge repositories, using a logical reasoning framework and taxonomic information to consolidate and complete them.
J. Romero, S. Razniewski, K. Pal, J. Z. Pan, A. Sakhadeo, and G. Weikum, “Commonsense Properties from Query Logs and Question Answering Forums,” CIKM, 2019.
Y. Chalier, S. Razniewski, and G. Weikum, “Joint Reasoning for Multi-Faceted Commonsense Knowledge,” arXiv preprint, under review at AKBC, 2020.
R. Speer and C. Havasi, “Representing General Relational Knowledge in ConceptNet 5,” LREC, 2012.
B. Dalvi, N. Tandon, and P. Clark, “Domain-Targeted, High Precision Knowledge Extraction,” TACL, 2017.
N. Tandon, G. de Melo, F. Suchanek, and G. Weikum, “WebChild: Harvesting and Organizing Commonsense Knowledge from the Web,” WSDM, 2014.
P. Jansen, “Multi-hop Inference for Sentence-level TextGraphs: How Challenging is Meaningfully Combining Information for Science Question Answering?,” TextGraphs, 2018.
M. Sap et al., “Atomic: An Atlas of Machine Commonsense for If-Then Reasoning,” AAAI, 2019.
A. Bosselut, H. Rashkin, M. Sap, C. Malaviya, A. Celikyilmaz, and Y. Choi, “COMET: Commonsense Transformers for Automatic Knowledge Graph Construction,” ACL, 2019.