Fairness and privacy in machine learning
Teacher: BUTUCEA Cristina
Department: Statistics
ECTS: 4
Course Hours: 24
Tutorial Hours: 0
Language: English
Examination Modality: dissertation and continuous assessment (mém + CC)
Objective
With the ubiquitous deployment of machine learning algorithms in nearly every area of our lives, the problem of unethical or discriminatory algorithm-based decisions has become increasingly prevalent. To partially address these concerns, a new sub-field of machine learning has emerged. The goal of this course is to introduce the audience to recent developments in fairness-aware algorithms. The emphasis will be placed on methods that are supported by statistical guarantees and can be implemented in practice. We will study classification and regression problems under the so-called demographic parity constraint, a popular way to define the fairness of an algorithm. Several research directions will be proposed throughout the course.
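For illustration, demographic parity asks that a classifier's positive-prediction rate be the same across the groups defined by a sensitive attribute S, i.e. that P(f(X) = 1 | S = s) is constant in s. The following minimal Python sketch computes the empirical version of this criterion; the function name and the toy data are illustrative, not part of the official course material.

import numpy as np

def demographic_parity_gap(y_pred, s):
    # Empirical positive-prediction rate within each sensitive group:
    # estimates P(f(X) = 1 | S = 0) and P(f(X) = 1 | S = 1).
    y_pred, s = np.asarray(y_pred), np.asarray(s)
    rate_0 = y_pred[s == 0].mean()
    rate_1 = y_pred[s == 1].mean()
    # A gap of 0 means the classifier satisfies demographic parity exactly.
    return abs(rate_1 - rate_0)

# Toy example: the classifier accepts group 1 more often than group 0.
print(demographic_parity_gap([1, 0, 1, 1], [0, 0, 1, 1]))  # prints 0.5

A gap of zero means exact demographic parity; the course studies how enforcing such constraints affects statistical risk in classification and regression.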
References
E. Chzhen et al. (2019) Leveraging labeled and unlabeled data for consistent fair binary classification. NeurIPS 32
E. Chzhen and N. Schreuder (2022) A minimax framework for quantifying risk-fairness trade-off in regression. Ann. Statist.
T.B. Berrett and C. Butucea (2020) Locally private non-asymptotic testing of discrete distributions is faster using interactive mechanisms. NeurIPS 33
C. Butucea, A. Dubois, M. Kroll and A. Saumard (2020) Local differential privacy: elbow effect in optimal density estimation and adaptation over Besov ellipsoids. Bernoulli, vol. 26, no. 3, 1727-1764