LIN 389C: Topic list Fall 2017

Machine learning and general NLP

Reporting Score Distributions Makes a Difference: Performance Study of LSTM-networks for Sequence Tagging (EMNLP 2017), Nils Reimers and Iryna Gurevych

Recurrent neural network grammars. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros and Noah A. Smith. Proc. NAACL.

Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision

Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, Ni Lao

Measuring Thematic Fit with Distributional Feature Overlap (EMNLP 2017)
Enrico Santus, Emmanuele Chersoni, Alessandro Lenci and Philippe Blache
Lenci’s paper on modeling thematic fit with a distributional model
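
The underlying idea is easy to sketch: thematic fit is commonly scored as the similarity between a candidate filler and a prototype built from typical fillers of a role, and a feature-overlap variant instead compares the top-weighted dimensions of the two vectors. A minimal sketch with toy numpy vectors (the vectors and the top-k cutoff are illustrative assumptions, not the paper's actual setup):

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def thematic_fit_centroid(candidate, typical_fillers):
    """Prototype baseline: cosine between the candidate vector and
    the centroid of vectors for typical fillers of the role."""
    centroid = np.mean(typical_fillers, axis=0)
    return cosine(candidate, centroid)

def thematic_fit_overlap(candidate, typical_fillers, k=2):
    """Feature-overlap idea: fraction of shared dimensions among the
    top-k highest-weighted dimensions of candidate and prototype."""
    centroid = np.mean(typical_fillers, axis=0)
    top_cand = set(np.argsort(candidate)[-k:])
    top_proto = set(np.argsort(centroid)[-k:])
    return len(top_cand & top_proto) / k
```

For example, with typical patient fillers of "eat" as the prototype, an edible candidate should score higher than an inedible one under both measures.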

World Knowledge for Reading Comprehension: Rare Entity Prediction with Hierarchical LSTMs Using External Descriptions (EMNLP 2017)
Teng Long, Emmanuel Bengio, Ryan Lowe, Jackie Chi Kit Cheung and Doina Precup
Predicts rare entities from the Wikilinks dataset using a cloze-like setting, similar to Pengxiang's task

A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks (EMNLP 2017)
Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher
Socher’s paper on jointly learning multiple NLP tasks via a multi-layer bi-LSTM with attention-like shortcut connections; might provide some insight into jointly learning coreference and event scripts

Lexical Features in Coreference Resolution: To be Used With Caution (ACL 2017)
Nafise Sadat Moosavi and Michael Strube
Analysis of the lexical features used in current coreference systems; might be useful when designing features for the event model

Salience Rank: Efficient Keyphrase Extraction with Topic Modeling (ACL 2017)
Nedelina Teneva, Weiwei Cheng
A faster PageRank-like algorithm for computing the salience rank of words in a document; might be related to extracting salience features
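
For orientation, PageRank-style keyphrase extractors run power iteration over a word co-occurrence graph; the distinctive twist here is biasing the restart (teleport) distribution by topic-derived word salience. A minimal sketch, with the salience weights assumed rather than estimated from topic models as in the paper:

```python
import numpy as np

def salience_pagerank(adj, salience, d=0.85, iters=50):
    """Personalized PageRank over a word co-occurrence graph.
    `adj` is a symmetric co-occurrence count matrix; `salience` is a
    per-word restart weight (uniform weights recover plain TextRank)."""
    n = adj.shape[0]
    col_sums = adj.sum(axis=0)
    col_sums[col_sums == 0] = 1.0
    M = adj / col_sums                 # column-stochastic transitions
    p = salience / salience.sum()      # salience-biased restart vector
    r = np.full(n, 1.0 / n)
    for _ in range(iters):             # power iteration
        r = d * (M @ r) + (1 - d) * p
    return r
```

With uniform salience, a hub word co-occurring with everything receives the highest rank; skewing the restart weights pulls rank toward topically salient words.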

Pay Attention to the Ending: Strong Neural Baselines for the ROC Story Cloze Task (ACL 2017)
Zheng Cai, Lifu Tu, Kevin Gimpel
Analysis of the systematic bias in the story cloze task; might provide some insight

Probabilistic programming languages

Discourse processing

Document-level Sentiment Inference with Social, Faction, and Discourse Context
Eunsol Choi, Hannah Rashkin, Luke Zettlemoyer and Yejin Choi
Association for Computational Linguistics (ACL), 2016.

Words over time

Question answering and in-depth processing

Learning Structured Natural Language Representations for Semantic Parsing

J. Cheng, S. Reddy, V. Saraswat and M. Lapata

Dynamic Entity Representations in Neural Language Models, Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi and Noah A. Smith : tracking how entities evolve over a long document

Narrative Schemas,  Generalized Event Knowledge, and prediction of implicit arguments

Logical Metonymy in a Distributional Model of Sentence Comprehension, Emmanuele Chersoni, Alessandro Lenci, and Philippe Blache : using generalized event knowledge to analyze logical metonymy ("she began a book")

Integrating Order Information and Event Relation for Script Event Prediction (EMNLP 2017)
Zhongqing Wang, Yue Zhang and Ching-Yun Chang
Script learning that combines pairwise comparison (Granroth-Wilding's work) and LSTM sequence modeling (Karl's work) via multi-layer attention over a memory network. Pretty similar to one of Pengxiang's planned models.
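
The pairwise-comparison half of this setup is easy to sketch: score each candidate next event by its average similarity to the events already in the chain, and pick the best. The toy embeddings below stand in for learned event representations:

```python
import numpy as np

def choose_next_event(context_vecs, candidate_vecs):
    """Pairwise scoring for script event prediction: return the index
    of the candidate whose embedding is most similar, on average, to
    the events already observed in the chain."""
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    scores = [np.mean([cos(c, x) for x in context_vecs])
              for c in candidate_vecs]
    return int(np.argmax(scores))
```

The LSTM half would instead encode the chain in order and score candidates against the final hidden state; the paper's contribution is attending over both kinds of evidence rather than choosing one.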

Reference-Aware Language Models (EMNLP 2017)
Zichao Yang, Phil Blunsom, Chris Dyer, and Wang Ling
DeepMind paper on explicitly modeling entity mentions in language models; should be similar to the Dynamic Entity Representations paper listed above

Ontology-Aware Token Embeddings for Prepositional Phrase Attachment (ACL 2017)
Pradeep Dasigi, Waleed Ammar, Chris Dyer and Eduard Hovy
Paper on using type-level word embeddings grounded in WordNet to improve PP-attachment prediction

Deep Semantic Role Labeling: What Works and What’s Next (ACL 2017)
Luheng He, Kenton Lee, Mike Lewis, Luke Zettlemoyer
UW paper using a BiLSTM to achieve state-of-the-art SRL results; might be closely related to the argument prediction task

Revisiting Selectional Preferences for Coreference Resolution (EMNLP 2017)
Benjamin Heinzerling, Nafise Sadat Moosavi, Michael Strube
Uses selectional-preference embeddings to improve coreference systems; might be related

Learning about the world from data

Verb Physics: Relative Physical Knowledge of Actions and Objects
Maxwell Forbes and Yejin Choi
Association for Computational Linguistics (ACL), 2017.

Connotation Frames: A Data-Driven Investigation
Hannah Rashkin, Sameer Singh and Yejin Choi
Association for Computational Linguistics (ACL), 2016.

Learning Prototypical Event Structure from Photo Albums
Antoine Bosselut, Jianfu Chen, David Warren, Hannaneh Hajishirzi, and Yejin Choi
Association for Computational Linguistics (ACL), 2016.

Apples to Apples: Learning Semantics of Common Entities Through a Novel Comprehension Task (ACL 2017)
Omid Bakhshandeh, James F. Allen
Learns representations of common entities; might also be interesting to the property-learning people

Distributional models

Detecting Asymmetric Semantic Relations in Context: A Case-Study on Hypernymy Detection, Yogarshi Vyas and Marine Carpuat : hypernymy in context

Philosophy of the lexicon

Check out the workshop on Meaning in Context for pointers on many issues that we have been discussing.

On what is in a distributional space:

On semantic primitives: All these papers are available online in the UT library.

  • Katz, J. J., & Fodor, J. A. (1963). The structure of a semantic theory. Language, 39(2), 170–210. They propose the structure that a semantic theory should have. Among other things, it involves semantic markers, the systematic part of the lexicon.

  • Fodor, J. D., Fodor, J. A., & Garrett, M. F. (1975). The Psychological Unreality of Semantic Representations. Linguistic Inquiry, 6(4), 515–531. If the mental lexicon were based on definitions, then it should be possible to detect processing differences based on the complexity of definitions.

  • Fodor, J., Garrett, M. F., Walker, E. C. T., & Parkes, C. H. (1980). Against definitions. Cognition, 8(3), 263–367. An argument against a definition-based mental lexicon where definitions are built up of semantic primitives.

Other topics from recent conferences