
LIN350 Computational semantics


Fall 2018 | Instructor: Katrin Erk | Tuesday, Thursday 11-12:30 | SZB 524


How can we describe the meaning of words and sentences in such a way that we can process them automatically? That seems like a huge task. There are so many words, all with their individual nuances of meaning -- do we have to define them all by hand? And there are so many things we want to do with sentences: Translate them. Answer questions. Extract important pieces of information. Figure out people's opinions. Can we even use one single meaning description for all these tasks?

In this course, we discuss methods for automatically learning what words mean (at least to some extent) from huge amounts of text -- for example, from all the text that people have made available on the web. And we discuss ways of representing the meaning of words and sentences in such a way that we can use them in language technology tasks.

Our focus will be on two particular kinds of general representations, one centered on words and one centered on sentences. The first kind is distributional representations, or embeddings. They have been most successful for words and short phrases. The main idea behind embeddings is that similar words appear in similar contexts -- so if we observe that two words appear in similar contexts, we can conclude that they are similar in meaning. Embeddings can be obtained either by simply counting co-occurring words or by using neural models; we will discuss both methods. The second kind of representation is logic-based semantics, which focuses on representing sentences. It translates sentences into a format in which we can automatically reason with them and draw conclusions.
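To give a flavor of the counting approach, here is a minimal sketch (not taken from the course materials; the toy corpus, window size, and word choices are illustrative assumptions). Each word is represented by counts of the words that appear near it, and similarity in meaning is estimated by cosine similarity between those count vectors:

from collections import Counter, defaultdict
import math

# Toy corpus: three short sentences, already tokenized.
corpus = [
    "the cat chased the mouse".split(),
    "the dog chased the cat".split(),
    "the mouse ate the cheese".split(),
]

window = 2  # how many neighbors on each side count as "context"
vectors = defaultdict(Counter)

# Count, for each word, how often every other word appears within the window.
for sentence in corpus:
    for i, word in enumerate(sentence):
        for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
            if i != j:
                vectors[word][sentence[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in set(u) & set(v))
    norm = math.sqrt(sum(c * c for c in u.values())) * math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

# Words that occur in similar contexts ("cat" and "mouse" both get chased)
# end up with higher cosine similarity than words that do not.
print(cosine(vectors["cat"], vectors["mouse"]))
print(cosine(vectors["cat"], vectors["cheese"]))

On the logic-based side, by contrast, a sentence such as "Every dog barks" would be translated into a formula like ∀x(dog(x) → bark(x)), which automated reasoning tools can then use to draw conclusions.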

Prerequisites: Upper-division standing.

Textbook: Patrick Blackburn and Johan Bos, "Representation and Inference for Natural Language: A First Course in Computational Semantics", CSLI Publications, ISBN 1575864967

Additional readings will be made available for download from the course website.

Flags: Quantitative Reasoning, Independent Inquiry