Predicting the limits of variability in discourse using neural models
PI(s): Prof. Dr. David Schlangen & Prof. Dr. Manfred Stede
We investigate the notion of utterance acceptability in context and, through it, the underlying competence notion of coherence, understanding it quite directly as a limit on the variability of discourse follow-ups. We will assemble a test suite of relevant cases (in English) and collect acceptability judgements. Building on this, we will study which aspects of this notion, if any, neural language models capture. Finally, we will use the results of these studies to investigate whether the models can be improved by inductive biases that introduce discourse-structural knowledge. A particular focus of these studies is on coherence in dialogue.
Loáiciga, S., Beyer, A., & Schlangen, D. (2022). New or Old? Exploring How Pre-Trained Language Models Represent Discourse Entities. In Proceedings of the 29th International Conference on Computational Linguistics (pp. 875–886). Gyeongju, Republic of Korea. Peer-reviewed. Project B06.