This semester we meet once per week. Each week, unless noted otherwise, we use one half of the class to discuss the topic of the week, and the other half for a round table on people's research.

### Week 1: Aug 29

- Plan for the semester: Which topics to focus on, who gives talks where.

- Extended round table:
  - Research results from the summer
  - Research plans for the fall, publication plans

### Week 2: Sep 5

Graph-convolutional networks: We are reading Marcheggiani and Titov, who compute embeddings for nodes: http://aclweb.org/anthology/D17-1159

Additional readings (let me know if you'd like to discuss one of them in class):

- Gilmer et al. 2017 provide an overview of graph-convolutional (or neural message passing) models and point out that we can modify the graph structure to allow for additional inference, adding nodes wherever we want to allow for a potential influence.

- Duvenaud et al. is an early paper on graph-convolutional models; they are the ones who describe their model through pseudocode.
- Kipf and Welling are the basis for the Marcheggiani and Titov model. They apply their model to citation graphs and the NELL knowledge graph.
- Marcheggiani et al apply the Marcheggiani/Titov SRL model to machine translation.
- Diego pointed me to an overview paper on graph embeddings by Goyal and Ferrara.
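
To make the node-embedding idea concrete, here is a minimal NumPy sketch of a single graph-convolutional layer in the Kipf and Welling style, H' = ReLU(Â H W) with Â the symmetrically normalized adjacency with self-loops. The function name `gcn_layer` and the toy graph are my own illustration, not code from any of the papers above:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: A is an (n, n) adjacency matrix, H an (n, d_in)
    node-feature matrix, W a (d_in, d_out) weight matrix."""
    A_tilde = A + np.eye(A.shape[0])            # add self-loops
    d = A_tilde.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # D^{-1/2}
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # normalized adjacency
    return np.maximum(0.0, A_hat @ H @ W)       # ReLU activation

# Toy example: a 3-node path graph (0-1-2), 2-dim input features,
# 4-dim output embeddings. Each node's new embedding mixes in its
# neighbors' features, weighted by the normalized adjacency.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.randn(3, 2)
W = np.random.randn(2, 4)
H_next = gcn_layer(A, H, W)
```

Stacking k such layers lets information propagate along paths of length k, which is the "message passing" view Gilmer et al. emphasize.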

### Week 3: Sep 12

*Laura leads our discussion.*

*Diego leads the discussion.*

### Week 6: Oct 3

Semi-supervised learning

### Week 7: Oct 10

Our main paper is "Improved Techniques for Training GANs",
https://arxiv.org/pdf/1606.03498.pdf

While this paper does talk about improved techniques for training GANs, it also shows how to use them in a semi-supervised setting. In your reading, please focus on that aspect.

As a secondary paper, we are also looking at "Semi-supervised Learning on Graphs with Generative Adversarial Nets",
https://arxiv.org/pdf/1809.00130.pdf

This is a secondary paper because it is a bit of a harder read, but we are including it because it is also a more recent take on GANs for semi-supervised learning.
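
To preview the semi-supervised aspect you should focus on: the discriminator in Salimans et al. outputs K class logits, and the probability that an input is real is modeled as Z(x)/(Z(x)+1) with Z(x) = Σ_k exp(l_k), which amounts to fixing the logit of an extra "fake" class to zero. Below is a hedged NumPy sketch of the resulting discriminator loss; the function names are my own illustration, not code from the paper:

```python
import numpy as np

def log_sum_exp(logits):
    # Numerically stable log Z(x) over the K class logits.
    m = logits.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True))).squeeze(1)

def d_loss(logits_lab, labels, logits_unl, logits_fake):
    """Semi-supervised discriminator loss (sketch).
    logits_*: (batch, K) class logits; labels: (batch,) gold classes."""
    # Supervised part: ordinary K-class cross-entropy on labeled data.
    lse_lab = log_sum_exp(logits_lab)
    sup = -(logits_lab[np.arange(len(labels)), labels] - lse_lab).mean()
    # Unsupervised part: with D(x) = Z/(Z+1),
    #   log D(x)        = lse(x) - softplus(lse(x))
    #   log(1 - D(G(z))) = -softplus(lse(G(z)))
    lse_unl = log_sum_exp(logits_unl)      # real but unlabeled inputs
    lse_fake = log_sum_exp(logits_fake)    # generator samples
    unsup = (-(lse_unl - np.logaddexp(0.0, lse_unl))
             + np.logaddexp(0.0, lse_fake)).mean()
    return sup + unsup

rng = np.random.default_rng(0)
loss = d_loss(rng.standard_normal((5, 3)), np.array([0, 2, 1, 0, 2]),
              rng.standard_normal((8, 3)), rng.standard_normal((8, 3)))
```

The point to notice while reading: unlabeled real data contributes through the real-vs-fake term only, yet it still shapes the K-class decision boundaries because both terms share the same logits.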

*Jialin leads the discussion.*

**First/second year students:** please submit a **short text (2 pages)** describing the current status and plan for your first/second year paper. You can either hand me a hard copy in class or submit it to me via email.

### Week 9: Oct 24

Multi-task learning

*Elisa leads the discussion.*

Variational autoencoders

Tutorials to draw on:

- Tutorial 1 is quite good all around, including an intuition for what the latent Gaussians do.
- Tutorial 2 does not go into any of the math and gives only a cursory overview.
- Tutorial 3 is good at separating the neural ideas from the probabilistic ideas, but don't go to this one for the math.

General information on variational inference:

More on autoencoders:

- The Autoencoder chapter from Goodfellow, Bengio, Courville: Deep Learning
- The vanishing KL problem for VAE:

- Applications of autoencoders (thanks to Jessy and Juan Diego for this list):
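
As a minimal illustration of two ingredients the tutorials cover, here is a NumPy sketch of the reparameterization trick and the closed-form KL term for a diagonal Gaussian posterior q(z|x) = N(μ, σ²) against the N(0, I) prior. The function names are mine, not taken from any of the tutorials:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps: sampling stays differentiable w.r.t. mu and sigma
    # because the randomness is pushed into the parameter-free noise eps.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# When the posterior equals the prior (mu = 0, log_var = 0), the KL is 0;
# the vanishing-KL problem is exactly the decoder pushing training there.
mu = np.zeros((1, 4))
log_var = np.zeros((1, 4))
assert np.allclose(kl_to_standard_normal(mu, log_var), 0.0)
z = reparameterize(mu, log_var)
```

The ELBO is then the reconstruction log-likelihood under the decoder minus this KL term, averaged over the data.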

### Week 13: Nov 21

*Thanksgiving break*

### Week 14: Nov 28

Variational Autoencoders and Probabilistic Programming Languages with Pyro.

Readings:

Variational autoencoders: Isabel and Laura talk about entity mention generation conditioned on type, using variational autoencoders.

**Final course project papers due: Dec 13, end of day.**