LIN350 Computational semantics


Fall 2016 | Instructor: Katrin Erk | Tuesday, Thursday 11-12:30 | CLA 1.108



How can we describe the meaning of words and sentences in such a way that we can process them automatically? That seems like a huge task. There are so many words, all with their individual nuances of meaning -- do we have to define them all by hand? And there are so many things we want to do with sentences: Translate them. Answer questions. Extract important pieces of information. Figure out people's opinions. Can we even use one single meaning representation for all these tasks?

In this course, we discuss methods for automatically learning what words mean (at least to some extent) from huge amounts of text -- for example, from all the text that people have made available on the web. And we discuss ways of representing the meaning of words and sentences in such a way that we can use them in language technology tasks.

Our focus will be on two particular kinds of general representations, one that focuses on words and short phrases and one that focuses on sentences. The first is distributional semantics, which has been most successful for words and short phrases. The main idea behind distributional semantics is that similar words appear in similar contexts -- so if two words appear in similar contexts, we can conclude that they are similar in meaning. The second is logic-based semantics, which focuses on representing sentences: it translates sentences into a format in which we can draw conclusions from them.
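To give a rough feel for the distributional idea, here is a minimal sketch in Python. The toy corpus, the window size, and all names in it are invented for illustration -- real distributional models are trained on millions of sentences. It counts which words occur near each other, then compares words by the cosine similarity of their co-occurrence count vectors:

```python
from collections import Counter
from math import sqrt

# Toy corpus; a real distributional model would use huge amounts of text.
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the cat ate the fish",
    "the dog ate the bone",
]

WINDOW = 2  # count words up to 2 positions away as context

# Build a co-occurrence vector (a Counter of context words) for each word.
vectors = {}
for sentence in corpus:
    tokens = sentence.split()
    for i, word in enumerate(tokens):
        context = tokens[max(0, i - WINDOW):i] + tokens[i + 1:i + 1 + WINDOW]
        vectors.setdefault(word, Counter()).update(context)

def cosine(v1, v2):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(v1[w] * v2[w] for w in v1 if w in v2)
    norm1 = sqrt(sum(c * c for c in v1.values()))
    norm2 = sqrt(sum(c * c for c in v2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

# "cat" and "dog" occur in similar contexts, so their vectors are similar;
# "cat" and "bone" much less so.
print(cosine(vectors["cat"], vectors["dog"]))
print(cosine(vectors["cat"], vectors["bone"]))
```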
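And a rough illustration of the logic-based side, here sketched with NLTK's logic and inference modules. NLTK and the example formulas are my own choices for illustration -- the textbook develops these techniques in Prolog. A sentence is translated into a first-order logic formula, and a theorem prover checks what follows from it:

```python
from nltk.sem import Expression
from nltk.inference import ResolutionProver

parse = Expression.fromstring

# Premises: "Every student reads a book" and "Kim is a student."
p1 = parse("all x.(student(x) -> exists y.(book(y) & reads(x, y)))")
p2 = parse("student(kim)")

# Conclusion to check: "Kim reads a book."
goal = parse("exists y.(book(y) & reads(kim, y))")

# The resolution prover confirms that the conclusion follows.
print(ResolutionProver().prove(goal, [p1, p2]))  # True
```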

Prerequisites: Upper-division standing.

Textbook: Patrick Blackburn and Johan Bos, "Representation and Inference for Natural Language. A First Course in Computational Semantics", CSLI Publications, ISBN 1575864967

Additional readings will be made available for download from the course website.

Flags: Quantitative Reasoning, Independent Inquiry