LIN 389C Schedule

Course outline Fall 2018

This semester we have one meeting per week. Each week, unless noted otherwise, we spend one half of the class discussing the topic of the week, and the other half on a round-table discussion of people's research.

See this page for suggested topics of the week.

Week 1: Aug 29

  • Plan for the semester: Which topics to focus on, who gives talks where.
  • Extended round table:
    • Research results from the summer
    • Research plans for the fall, publication plans

Week 2: Sep 5

Graph-convolutional networks: We are reading Marcheggiani and Titov, http://aclweb.org/anthology/D17-1159, who compute embeddings for nodes.
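
As background for the discussion, here is a minimal sketch (in PyTorch; my illustration, not the authors' code) of one graph-convolutional layer in the Kipf and Welling style that the Marcheggiani/Titov model builds on: each node's new embedding is a nonlinear function of the averaged embeddings of its graph neighbors.

    import torch
    import torch.nn as nn

    class GCNLayer(nn.Module):
        """One graph-convolutional layer: a node's new embedding is a
        nonlinear function of the mean of its neighbors' embeddings."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)

        def forward(self, h, adj):
            # h:   (num_nodes, in_dim) current node embeddings
            # adj: (num_nodes, num_nodes) adjacency matrix, self-loops included
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
            h_neigh = adj @ h / deg          # average over graph neighbors
            return torch.relu(self.linear(h_neigh))

    # Stacking k such layers lets information flow along paths of length k.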

Additional readings (let me know if you'd like to discuss one of them in class):

  • Gilmer et al. 2017 provide an overview of graph-convolutional (or neural message passing) models and point out that we can modify the graph structure to support additional inference, adding nodes wherever we want to allow for a potential influence.
  • Duvenaud et al. is an early paper on graph-convolutional models, and the one that describes its model through pseudocode.
  • Kipf and Welling are the basis for the Marcheggiani and Titov model. They apply their model to citation graphs and the NELL knowledge graph.
  • Marcheggiani et al. apply the Marcheggiani/Titov SRL model to machine translation.
  • Diego pointed me to an overview paper on graph embeddings by Goyal and Ferrara.

Week 3: Sep 12

Computing embeddings for paths in a graph: We are reading Das et al., http://aclweb.org/anthology/E17-1013, and we may also want to take a look at the more recent http://www.akbc.ws/2017/papers/24_paper.pdf
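
To make the idea concrete, here is a minimal sketch of the core mechanism in path-based models of this kind: embed each relation along a knowledge-base path, run an RNN over the sequence, and use the final hidden state as the path embedding. This is illustrative PyTorch, not the paper's implementation; the names and dimensions are made up.

    import torch
    import torch.nn as nn

    class PathEncoder(nn.Module):
        def __init__(self, num_relations, rel_dim=50, hidden_dim=100):
            super().__init__()
            self.rel_embed = nn.Embedding(num_relations, rel_dim)
            self.rnn = nn.LSTM(rel_dim, hidden_dim, batch_first=True)

        def forward(self, paths):
            # paths: (batch, path_length) IDs of the relations along each path
            _, (h_final, _) = self.rnn(self.rel_embed(paths))
            return h_final.squeeze(0)   # (batch, hidden_dim) path embeddings

    # A path embedding can then be scored, e.g. by dot product, against the
    # embedding of a query relation to decide whether the path implies it.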

Laura leads our discussion.

Week 4: Sep 19

We are reading Li et al, Learning Deep Generative Models of Graphs, https://arxiv.org/pdf/1803.03324v1.pdf

Eric leads the discussion.

Week 5: Sep 26

Label propagation: We are reading the original Zhu and Ghahramani paper, https://pdfs.semanticscholar.org/8a6a/114d699824b678325766be195b0e7b564705.pdf

As an example of a use of label propagation, we are taking a look at a paper from our very own Dan Garrette, Jason Mielens, and Jason Baldridge:
"Real-World Semi-Supervised Learning of POS-Taggers for Low-Resource Languages". You don't have to read this paper in depth; focus on its use of label propagation to make the most of a small amount of annotation.
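
For intuition, here is a minimal numpy sketch of label propagation in the Zhu and Ghahramani style (a simplification, not their code): label distributions are repeatedly averaged over graph neighbors, with the known labels clamped back in after every step.

    import numpy as np

    def label_propagation(W, Y, labeled, n_iters=100):
        # W:       (n, n) symmetric, nonnegative edge-weight matrix
        # Y:       (n, k) initial label distributions; zero rows where unknown
        # labeled: boolean mask marking the nodes whose labels we know
        T = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
        Y = Y.copy()
        Y_known = Y[labeled]
        for _ in range(n_iters):
            Y = T @ Y                 # each node averages its neighbors' labels
            Y[labeled] = Y_known      # clamp the known labels
        return Y / np.maximum(Y.sum(axis=1, keepdims=True), 1e-12)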

Diego leads the discussion.

Week 6: Oct 3

Semi-supervised learning: 

Week 7: Oct 10

Our main paper is "Improved Techniques for Training GANs", https://arxiv.org/pdf/1606.03498.pdf
While this paper does talk about improved techniques for training GANs, it also shows how to use them in a semi-supervised setting. In your reading, please focus on that aspect.
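
As a hedged PyTorch sketch of that semi-supervised idea (my illustration, not the authors' code): the discriminator becomes a K-class classifier, and the probability that an input is real rather than generated is derived from the total unnormalized mass Z(x) that the classifier assigns to the K real classes.

    import torch
    import torch.nn.functional as F

    def ssl_gan_loss(logits_lab, labels, logits_unl, logits_fake):
        # logits_*: (batch, K) class logits from the discriminator for
        # labeled real, unlabeled real, and generated inputs respectively.
        loss_sup = F.cross_entropy(logits_lab, labels)   # ordinary K-way loss
        # With Z(x) = sum_k exp(logit_k):  log Z = logsumexp(logits),
        # log D(x) = log(Z/(Z+1)) = logZ - softplus(logZ),
        # log(1 - D(x)) = log(1/(Z+1)) = -softplus(logZ).
        log_z_unl = torch.logsumexp(logits_unl, dim=1)
        log_z_fake = torch.logsumexp(logits_fake, dim=1)
        loss_unsup = -(log_z_unl - F.softplus(log_z_unl)).mean() \
                     + F.softplus(log_z_fake).mean()
        return loss_sup + loss_unsup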

As a secondary paper, we are also looking at "Semi-supervised Learning on Graphs with Generative Adversarial Nets", https://arxiv.org/pdf/1809.00130.pdf
This is a secondary paper because it is a bit of a harder read, but we are including it because it is also a more recent take on GANs for semi-supervised learning.

Jialin leads the discussion.

First/second year students: please submit a short text (2 pages) describing the current status and plan for your first/second year paper. You can either hand me a hard copy in class or submit it to me via email.

Week 8: Oct 17

Transfer learning

We read the original Rich Caruana paper, "Learning many related tasks at the same time with backpropagation".

As a secondary paper, we look at Yoshua Bengio's "Deep Learning of Representations for Unsupervised and Transfer Learning".
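
The core mechanism in the Caruana paper is easy to state in code. Here is a minimal PyTorch sketch (my illustration, not the paper's original setup): several tasks share one hidden layer, each task has its own output head, and backpropagating all the task losses trains the shared representation.

    import torch
    import torch.nn as nn

    class MultiTaskNet(nn.Module):
        def __init__(self, in_dim, hidden_dim, task_out_dims):
            super().__init__()
            self.shared = nn.Sequential(nn.Linear(in_dim, hidden_dim),
                                        nn.ReLU())
            self.heads = nn.ModuleList(
                [nn.Linear(hidden_dim, d) for d in task_out_dims])

        def forward(self, x):
            h = self.shared(x)          # representation shared by all tasks
            return [head(h) for head in self.heads]  # one prediction per task

    # Summing the per-task losses and backpropagating trains the shared layer
    # on all tasks at once, which is the effect Caruana describes.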

Pengxiang leads the discussion.

Here is another recent paper on transfer learning. We are not discussing it in class, but you may want to check it out: Chen et al, "Zero-resource multilingual model transfer: learning what to share", https://arxiv.org/pdf/1810.03552.pdf

Week 9: Oct 24

Multi-task learning


Elisa leads the discussion.

Week 10: Oct 31

Week 12: Nov 14

Variational autoencoders

Tutorials to draw on:

  • Tutorial 1: This is quite good all around, including an intuition of what the latent Gaussians do
  • Tutorial 2 does not go into any of the math and gives a cursory overview
  • Tutorial 3 is good at separating the neural ideas from the probabilistic ideas, but don't go to this one for the math
General information on variational inference:

More on autoencoders:
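
To tie the tutorials together, here is a minimal PyTorch sketch of a variational autoencoder for continuous inputs (illustrative code, not from any of the readings): the encoder outputs the mean and log-variance of a Gaussian over latents, a sample is drawn with the reparameterization trick, and the loss is reconstruction error plus a KL penalty toward the standard normal prior.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VAE(nn.Module):
        def __init__(self, x_dim, z_dim=16, h_dim=128):
            super().__init__()
            self.enc = nn.Linear(x_dim, h_dim)
            self.mu = nn.Linear(h_dim, z_dim)
            self.logvar = nn.Linear(h_dim, z_dim)
            self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, x_dim))

        def forward(self, x):
            h = torch.relu(self.enc(x))
            mu, logvar = self.mu(h), self.logvar(h)
            # reparameterization trick: sample z = mu + sigma * epsilon
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
            return self.dec(z), mu, logvar

    def vae_loss(x, x_recon, mu, logvar):
        recon = F.mse_loss(x_recon, x, reduction="sum")  # reconstruction term
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL to N(0, I)
        return recon + kl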

Week 13: Nov 21

Thanksgiving break

Week 14: Nov 28


Variational Autoencoders and Probabilistic Programming Languages with Pyro.

Readings:


Also, there is a neat blog post about probabilistic programming with webppl here.
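
For a first taste of Pyro itself, here is a tiny example (mine, not from the readings) that fits the mean of a Gaussian with stochastic variational inference; it uses only the documented pyro.sample / pyro.param / SVI API.

    import torch
    import pyro
    import pyro.distributions as dist
    from pyro.distributions import constraints
    from pyro.infer import SVI, Trace_ELBO
    from pyro.optim import Adam

    def model(data):
        mu = pyro.sample("mu", dist.Normal(0., 10.))    # latent mean
        with pyro.plate("data", len(data)):
            pyro.sample("obs", dist.Normal(mu, 1.), obs=data)

    def guide(data):
        loc = pyro.param("loc", torch.tensor(0.))
        scale = pyro.param("scale", torch.tensor(1.),
                           constraint=constraints.positive)
        pyro.sample("mu", dist.Normal(loc, scale))      # variational posterior

    svi = SVI(model, guide, Adam({"lr": 0.01}), loss=Trace_ELBO())
    data = torch.randn(100) + 3.0                       # toy observations
    for _ in range(1000):
        svi.step(data)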


Week 15: Dec 5

Variational autoencoders: Isabel and Laura talk about entity mention generation conditioned on type, using variational autoencoders.

Final course project papers due: Dec 13, end of day.