Natural Language Processing 2019
- Text preprocessing
- Feature extraction
- Text classification
- Part-of-speech tagging, parsing
- Sequential models (HMM, CRF, RNN, LSTM, etc.)
- Knowledge representation and relations (ontology, taxonomy, KG, KB, etc.)
- Topic modeling
- Entities, entity linking, entity disambiguation
- Language modeling
- Word embeddings
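As a taste of the first few topics (preprocessing, feature extraction, and text classification), here is a minimal sketch using the course's toolchain. The corpus and labels are invented purely for illustration; they are not course material.

```python
# Minimal text-classification sketch: TF-IDF features + logistic regression.
# The tiny corpus and labels below are made up for demonstration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "the movie was great",
    "a wonderful film",
    "terrible acting",
    "the plot was awful",
]
train_labels = ["pos", "pos", "neg", "neg"]

# TfidfVectorizer handles basic preprocessing (lowercasing, tokenization)
# and turns each document into a TF-IDF feature vector; LogisticRegression
# then learns a linear classifier over those features.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

print(clf.predict(["what a great film"]))
```

The same pipeline object can be swapped to other vectorizers or classifiers covered in the course without changing the surrounding code.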
Prerequisites
Basic knowledge of Python.
Familiarity with mathematical notation and scientific formalization.
Familiarity with basic probability theory and Bayesian statistics.
Familiarity with basic concepts of information retrieval (precision and recall).
Tools: Colab/IPython Notebook, NLTK, sklearn, gensim, numpy, scipy, TensorFlow
About the lecturer
Dr. Julia Proskurnia
Currently working in Apps Intelligence at Google Zurich, with particular interest in text analysis, summarization, and autocompletion. Prior to Google, she obtained her Ph.D. from EPFL, Lausanne, where she worked on profiling, modeling, and facilitating online activism on social media. During her Ph.D. she filed a patent and published multiple papers at top-tier conferences such as WWW, CIKM, and ICWSM. She holds two master's degrees: one specializing in Distributed Systems, obtained jointly from UPC, Barcelona and KTH, Stockholm, and one specializing in Applied System Analysis and Decision Making from NTUU KPI, Ukraine. She received numerous scholarships during her studies, including the Anita Borg Scholarship, a GHC grant, and the President's scholarship of NTUU KPI. Her current interests are applied machine learning, NLP, time-series prediction, and text mining.