LIN389C: Topic list, Fall 2019

Computational Semantics

Semantic role labeling
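
  • A small PropBank-style example for orientation (bracketing and sense number are illustrative); numbered roles such as ARG0 (the giver) and ARG1 (the thing given) are defined per predicate sense, with ARGM-* for general modifiers:
      [The teacher]ARG0 [gave]give.01 [the students]ARG2 [a quiz]ARG1 [yesterday]ARGM-TMP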

Abstract Meaning Representation

  • Introducing AMR:
    L. Banarescu, C. Bonial, S. Cai, M. Georgescu, K. Griffitt, U. Hermjakob, K. Knight, P. Koehn, M. Palmer, and N. Schneider, 2013. Abstract Meaning Representation for Sembanking. Proc. Linguistic Annotation Workshop, 2013. https://amr.isi.edu/a.pdf
  • A more recent data paper:
    Tim O'Gorman, Michael Regan, Kira Griffitt, Martha Palmer, Ulf Hermjakob and Kevin Knight, 2018. AMR Beyond the Sentence: the Multi-sentence AMR corpus. Proceedings of COLING. https://aclweb.org/anthology/C18-1313
  • Recent computational approaches: see the paper under "Using AMR, using semantic roles" just below.

Using AMR, using semantic roles

  • Kexin Liao, Logan Lebanoff and Fei Liu, 2018. Abstract Meaning Representation for Multi-Document Summarization. Proceedings of COLING 2018.
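  • For concreteness, a small illustrative AMR in PENMAN notation for the sentence "The boy wants the girl to believe him" (frame senses are PropBank-style and shown only for illustration); the reentrant variable b encodes that the boy is both the wanter and the one believed:
      (w / want-01
         :ARG0 (b / boy)
         :ARG1 (b2 / believe-01
                  :ARG0 (g / girl)
                  :ARG1 b))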

Semantic parsing

Groningen Meaning Bank

Semantic decomposition

Compositionality in machines

  • What kind of compositionality can we expect to see in neural models?
    Marco Baroni 2019, Linguistic generalization and compositionality in modern artificial neural networks. To appear in the Philosophical Transactions of the Royal Society B, https://arxiv.org/abs/1904.00157
  • The SCAN dataset (a few example command-to-action pairs appear after this list):
    • How do machines do at a task that requires them to learn systematic composition?
      Brenden M. Lake, Marco Baroni 2018. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. Proceedings of ICML, https://arxiv.org/abs/1711.00350
    • How do humans do at this task?
      Brenden M. Lake, Tal Linzen, Marco Baroni 2019. Human few-shot learning of compositional instructions. Proceedings of the 41st Annual Conference of the Cognitive Science Society. https://arxiv.org/abs/1901.04587
    • Meta-learning approach to the SCAN dataset:
      Brenden M. Lake, 2019. Compositional generalization through meta sequence-to-sequence learning https://arxiv.org/abs/1906.05381
  • Analysis of compositional structure of representations learned in a messaging game:
    Jacob Andreas, Dan Klein 2017. Analogs of Linguistic Structure in Deep Representations. EMNLP 2017, https://arxiv.org/abs/1707.08139
  • How compositional are the representations learned by neural models?
    Jacob Andreas 2019. Measuring Compositionality in Representation Learning. Proceedings of ICLR 2019, https://arxiv.org/abs/1902.07181
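  • A few SCAN-style command-to-action pairs, roughly following the dataset's interpretation grammar (notation simplified here for illustration):
      jump                ->  JUMP
      jump twice          ->  JUMP JUMP
      walk left           ->  LTURN WALK
      jump and walk left  ->  JUMP LTURN WALK
    In the much-discussed "add jump" split, models see "jump" only in isolation during training and are tested on composed commands such as "jump twice", which is what makes SCAN a probe of systematic composition.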


Context-aware language models, transformers and self-attention
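
  • As background for this topic, a minimal NumPy sketch of scaled dot-product self-attention, the core operation in the transformer architecture (Vaswani et al., 2017); the names, shapes, and single-head setup below are illustrative rather than any particular library's API:

        import numpy as np

        def self_attention(X, Wq, Wk, Wv):
            """X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: (d_model, d_k) projections."""
            Q, K, V = X @ Wq, X @ Wk, X @ Wv                # queries, keys, values
            scores = Q @ K.T / np.sqrt(K.shape[-1])         # how strongly each position attends to every other
            scores -= scores.max(axis=-1, keepdims=True)    # stabilize the softmax
            weights = np.exp(scores)
            weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax over positions
            return weights @ V                              # each output mixes all value vectors

        rng = np.random.default_rng(0)
        X = rng.normal(size=(5, 16))                        # 5 tokens, 16-dimensional embeddings
        Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
        print(self_attention(X, Wq, Wk, Wv).shape)          # (5, 8)

    Multi-head attention, masking, and positional information are omitted for brevity.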

Ethics in NLP


Further topics suggested by students.

Linguistics-related topics

  • Implicature and presupposition
  • Audience modeling
  • Crosslingual representation learning

AI and general NLP topics

  • knowledge graphs and commonsense reasoning (we did some graph-NNs a while back; did we also do commonsense reasoning?)
  • language & vision

Annotation

  • Subjectivity in annotation

Machine learning

  • out-of-distribution, outlier detection
  • Compressing NLP models, including BERT
  • explainability/interpretability