Overview

Modern NLP started with methods based on purely symbolic analysis of language. Statistical methods, introduced in the 1980s and 1990s, allowed "soft" reasoning about language and made NLP more data-driven. Over the last decade another step has been taken in this direction: representing and analyzing language in vector spaces. Nowadays, symbolic, high-dimensional representations are often augmented with relatively low-dimensional vector-space representations. Vector-space representations have been used successfully in many areas of NLP, including syntax and semantics.
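
To make the contrast concrete, here is a minimal sketch in Python with NumPy of how a dense vector-space representation supports graded, "soft" similarity judgments that a purely symbolic representation cannot express. The toy 3-dimensional vectors are invented for illustration; real embeddings are learned from corpora and are typically much higher-dimensional.

import numpy as np

# Toy dense vectors, invented for illustration only; learned embeddings
# usually have tens to hundreds of dimensions.
vectors = {
    "cat":   np.array([0.9, 0.1, 0.3]),
    "dog":   np.array([0.8, 0.2, 0.4]),
    "piano": np.array([0.1, 0.9, 0.2]),
}

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: close to 1.0 means
    # similar direction (related words), close to 0.0 means orthogonal.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine_similarity(vectors["cat"], vectors["dog"]))    # high: related words
print(cosine_similarity(vectors["cat"], vectors["piano"]))  # lower: unrelated words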

This workshop is an opportunity to explore the state of the art in the use of vector spaces for the computational analysis of natural language. The focus of the workshop will be on the use of vector spaces to learn latent representations.

The goal of the workshop is to bring together researchers from areas such as deep learning and representation learning, spectral learning, distributional compositional semantics, and others, in order to explore the relevance of these areas to one another and to learn about the state of the art in each.

For a list of topics this workshop seeks to explore, see the call for papers.

Submission deadline: March 8, 2015

We thank the following sponsors:
Google DeepMind
Textkernel: machine learning for matching people and jobs