Security and Privacy of Machine Learning 2019

Machine learning has become ubiquitous in the technology industry and beyond. As a result, ML models are often deployed in settings where adversaries can disrupt their operation, backdoor them, or otherwise modify their behavior. If the training data is privacy-sensitive, as in medical domains, adversaries can infer sensitive information present in the data. Moreover, the rising efficiency of ML techniques enables malicious actors to build powerful tools for antisocial goals, such as invading people's privacy or manipulating behavior at scale.

This course will survey such security-critical settings, the corresponding attack scenarios, and defenses. First, we will outline the security principles, definitions, threat models, and attack vectors relevant to machine learning. Second, we will introduce concrete implementations of the attacks and some proposed defenses against them; we will go over the theory behind the attacks and defenses, and implement some of them in practice. Third, we will zoom out and take a broader look at the different contexts where machine learning is deployed and their implications for privacy and security.
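As an illustration of the kind of attack the course covers, below is a minimal sketch (not part of the course materials) of the Fast Gradient Sign Method (Goodfellow et al., 2015), a classic evasion attack, written in PyTorch; the model, x, y, and eps arguments are assumed placeholders for a trained classifier, an input batch in [0, 1], its labels, and a perturbation budget.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps):
        # Fast Gradient Sign Method: perturb each input by eps in the
        # direction of the sign of the loss gradient, aiming to flip
        # the model's prediction with an imperceptible change.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Step in the direction that increases the loss, then clamp
        # back to the valid input range [0, 1].
        x_adv = x + eps * x.grad.sign()
        return x_adv.clamp(0, 1).detach()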

Course keywords

Adversarial Machine Learning, Security and Privacy

Course tools

Python, PyTorch

About the lecturer

Bogdan Kulynych

Bogdan is a Ph.D. student at EPFL working at the intersection of privacy, security, and machine learning. He holds a Master’s degree in Software and Systems from the Polytechnic University of Madrid and a Bachelor’s in Applied Mathematics from Kyiv-Mohyla Academy. During his studies, he interned at Google and CERN.
