Friday, May 6, 2016

Lecture 9: Neural Networks, word embeddings and deep learning

Motivation. The perceptron. Input encoding, sum and activation functions; objective function. Linearity of the perceptron. Neural networks. Training. Backpropagation. Connection to Maximum Entropy. Connection to language. Vector representations. NN for the bigram language model. Word2vec: CBOW and skip-gram. Word embeddings. Deep learning. Language modeling with NN. The big picture.
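As a companion to the perceptron topics above (input encoding, weighted sum, activation, and linearity), here is a minimal, self-contained sketch of a perceptron with a step activation and the classic error-driven update rule. It is illustrative only, not code from the lecture; the function names and learning-rate value are my own choices.

```python
def perceptron(weights, bias, inputs):
    """Step-activated perceptron: 1 if the weighted sum plus bias is positive, else 0.
    The decision boundary is linear, which is why a single perceptron
    cannot represent non-linearly-separable functions like XOR."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

def train(data, lr=0.1, epochs=20):
    """Perceptron learning rule: nudge weights toward misclassified examples."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = y - perceptron(w, b, x)          # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# AND is linearly separable, so the perceptron converges on it.
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(and_data)
print([perceptron(w, b, x) for x, _ in and_data])  # → [0, 0, 0, 1]
```

Replacing the hard step with a differentiable activation (sigmoid, tanh, ReLU) and stacking layers is what turns this into a trainable neural network via backpropagation, the next topic in the list.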
