Computational Semantics (graduate)

Spring 2014 | Instructor: Katrin Erk | Tuesday, Thursday 2-3:30 | CLA 0.108

How can we represent the meaning of natural language words and sentences in a way that supports automatic Natural Language Understanding and inference? Representations can be defined specifically for a particular language technology task, or we can strive for general representations that are helpful across tasks. In this course, we focus on two classes of general, task-independent representations: one centered on sentence semantics, and one centered on words and short phrases.

Logic-based semantics focuses on sentence semantics. It represents the meaning of a sentence as a formula in a logic (in computational semantics, typically first-order logic or something weaker, for reasons of tractability), so that general-purpose theorem provers can be used to draw inferences over natural language statements.

Distributional semantics has been most successful for words and short phrases. It represents the meaning of a word or phrase through the contexts in which it has been observed in text, and it draws inferences using distributional similarity, that is, similarity of observed contexts.

During most of the course, we discuss these two approaches separately. At the end of the course, we bring the two together.
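To make the logic-based idea concrete, here is a minimal sketch of inference over logical representations. The predicate and constant names (`dog`, `barks`, `fido`) are illustrative assumptions, not course material, and the tiny forward chainer stands in for the general-purpose theorem provers mentioned above.

```python
# Toy first-order representations: unary predicates applied to constants,
# plus universally quantified rules of the form "forall x. P(x) -> Q(x)".
# "Fido is a dog" becomes the ground atom dog(fido); "every dog barks"
# becomes the rule dog(x) -> barks(x).
facts = {("dog", "fido"), ("cat", "felix")}      # dog(fido), cat(felix)
rules = [("dog", "barks"), ("cat", "meows")]     # dog(x)->barks(x), cat(x)->meows(x)

def forward_chain(facts, rules):
    """Apply modus ponens until no new ground atoms can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            for pred, const in list(derived):
                if pred == antecedent and (consequent, const) not in derived:
                    derived.add((consequent, const))
                    changed = True
    return derived

# The prover derives barks(fido) from dog(fido) and dog(x)->barks(x).
print(("barks", "fido") in forward_chain(facts, rules))  # True
```

Real computational-semantics pipelines hand the formulas to an off-the-shelf prover rather than a hand-rolled loop, but the shape of the inference is the same.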
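The distributional idea can likewise be sketched in a few lines: count each word's context words in a corpus, then compare words by the cosine similarity of their count vectors. The four-sentence corpus and the one-word context window below are illustrative assumptions chosen to keep the example small.

```python
from collections import Counter
from math import sqrt

# Toy corpus; in practice these counts come from very large text collections.
corpus = [
    "the cat chased the mouse".split(),
    "the dog chased the cat".split(),
    "the dog ate the bone".split(),
    "the cat ate the fish".split(),
]

def context_vector(target, sentences, window=1):
    """Count the words observed within `window` positions of `target`."""
    counts = Counter()
    for sent in sentences:
        for i, w in enumerate(sent):
            if w == target:
                lo, hi = max(0, i - window), min(len(sent), i + window + 1)
                counts.update(sent[lo:i] + sent[i + 1:hi])
    return counts

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    norm = lambda x: sqrt(sum(c * c for c in x.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

cat, dog, bone = (context_vector(w, corpus) for w in ("cat", "dog", "bone"))
# "cat" and "dog" occur in similar contexts (chased, ate), so they score
# higher than the unrelated pair "cat" and "bone".
print(cosine(cat, dog) > cosine(cat, bone))  # True
```

Real distributional models refine this basic recipe with larger windows, association weighting, and dimensionality reduction, but similarity of observed contexts remains the core signal.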

Prerequisites: Graduate standing, introduction to computational linguistics

Readings will be made available for download from the course website.