Using Neural Networks for Modeling and Representing Natural Language
This course will introduce basic concepts in natural language processing and in the area of artificial neural networks and deep learning. It will present basic algorithms that can be used to learn features from text and to build neural language models. Finally, it will describe the algorithms used in fastText, which can classify text extremely efficiently. This tutorial does not assume any extensive prior knowledge of neural networks and is intended mainly for beginners in the area of NLP.
- Prerequisites: basic knowledge of machine learning and mathematics
Level of complexity of the course: introductory
Dr. Tomas Mikolov
Researcher at CIIRC, Czech Technical University
Tomas works on artificial intelligence research and is interested in building advanced models of language. His best-known results are the recurrent neural network language models (RNNLM), vector representations of words (word2vec), and efficient text classification (fastText). Currently, Tomas is interested in developing less supervised models of language (and intelligence) that can develop over time through evolutionary mechanisms.
Fields of interest: Deep Learning, NLP, Complex Systems, Artificial Intelligence
Contacts: [email protected]