B06

Predicting the limits of variability in discourse using neural models

PI(s): Prof. Dr. David Schlangen & Prof. Dr. Manfred Stede

We investigate the notion of utterance acceptability in context and, through it, the underlying competence notion of coherence, understanding it quite directly as a limit on the variability of discourse follow-ups. We will assemble a test suite of relevant cases (in English) and collect acceptability judgements. Building on this, we will study which aspects of this notion, if any, neural language models capture. Finally, we will use the results of these studies to investigate whether the models can be improved by providing inductive biases that introduce discourse-structural knowledge. A particular focus of these studies is coherence in dialogue.
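To make the probing idea concrete, the sketch below scores how acceptable a language model finds candidate follow-ups to a short dialogue context by summing the log-probabilities the model assigns to the follow-up tokens. This is only an illustration of the general approach, not the project's actual method; the choice of GPT-2 via the Hugging Face transformers library, the example dialogue, and the scoring function are assumptions made for this sketch.

# Illustrative sketch only (assumptions: GPT-2 as the model, summed
# log-probability of the follow-up as the acceptability score).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def followup_score(context: str, followup: str) -> float:
    """Sum of log-probabilities the model assigns to the follow-up tokens,
    conditioned on the preceding discourse context."""
    context_ids = tokenizer(context, return_tensors="pt").input_ids
    followup_ids = tokenizer(" " + followup, return_tensors="pt").input_ids
    full_ids = torch.cat([context_ids, followup_ids], dim=1)
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    score = 0.0
    # The logits at position i predict the token at position i + 1,
    # so each follow-up token is scored from the preceding position.
    for pos in range(context_ids.shape[1], full_ids.shape[1]):
        score += log_probs[0, pos - 1, full_ids[0, pos]].item()
    return score

context = "A: Shall we meet at the station tomorrow? B: Sure, what time suits you?"
print(followup_score(context, "A: How about six o'clock?"))          # plausible follow-up
print(followup_score(context, "A: Penguins cannot fly backwards."))  # incoherent follow-up

With such a scorer, one can check whether a model systematically prefers coherent over incoherent follow-ups across a test suite; how well such preferences align with human acceptability judgements is the kind of question the project investigates.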

Members

Anne Beyer
Campus Golm, Haus 14, Room 2.28
Prof. Dr. David Schlangen
Campus Golm, Haus 14, Room 2.19
(+49) 331 977-2692
Prof. Dr. Manfred Stede
Campus Golm, Haus 14, Room 2.31
(+49) 331 977-2691

Publications

  • Peer-Reviewed: peer-reviewed papers, journal articles, books, and articles of the CRC
  • Talk or Presentation: talks, presentations, and posters of the CRC
  • SFB-Related: not produced in connection with the CRC, but thematically appropriate
  • Other: papers, journal articles, books, and articles of the CRC that are not peer-reviewed
Author(s): Loáiciga, S., Beyer, A., & Schlangen, D.
Title: New or Old? Exploring How Pre-Trained Language Models Represent Discourse Entities
Year: 2022
Published in: Proceedings of the 29th International Conference on Computational Linguistics (pp. 875-886). Gyeongju, Republic of Korea.
Type: Peer-Reviewed