Causal Fairness Analysis

This page provides information and materials about "Causal Fairness Analysis", following the tutorial presented at ICML 2022. For the corresponding material, check out the links below.

We list some additional resources below, and we expect to add more in the future. Stay tuned!


About the Speakers

Elias Bareinboim is an associate professor in the Department of Computer Science and the director of the Causal Artificial Intelligence (CausalAI) Laboratory at Columbia University. His research focuses on causal and counterfactual inference and their applications to data-driven fields in the health and social sciences as well as artificial intelligence and machine learning. His work was the first to propose a general solution to the problem of "data-fusion," providing practical methods for combining datasets generated under different experimental conditions and plagued with various biases. More recently, Bareinboim has been exploring the intersection of causal inference with decision-making (including reinforcement learning) and explainability (including fairness analysis). Bareinboim received his Ph.D. from the University of California, Los Angeles, where he was advised by Judea Pearl. Bareinboim was named one of "AI's 10 to Watch" by IEEE, and is a recipient of the NSF CAREER Award, the ONR Young Investigator Award, the Dan David Prize Scholarship, the 2014 AAAI Outstanding Paper Award, and the 2019 UAI Best Paper Award.




Drago Plečko is a PhD student in the Seminar for Statistics at ETH Zürich. His research interests lie in causal inference methodology for observational data. In particular, Drago is interested in methods that can be applied to problems in the social and health sciences, as well as in writing open-source software to support such applications. Drago has worked on fair machine learning and explainability, on epidemiological questions of causation in intensive care unit (ICU) research, and on applications of AI tools in the ICU. Before joining ETH, Drago obtained his Bachelor's and Master's degrees in Mathematics at the University of Cambridge.

Tutorial Overview

AI plays an increasingly prominent role in modern society, since decisions that were once made by humans are now delegated to automated systems. These systems are currently in charge of deciding on bank loans, incarceration, and the hiring of new employees, and it is not hard to envision that they will soon underpin most of society's decision infrastructure. Despite the high stakes entailed by this task, there is still a lack of formal understanding of some basic properties of such systems, including issues of fairness, accountability, and transparency.

In this tutorial, we introduce a framework of causal fairness analysis with the intent of filling this gap, i.e., understanding, modelling, and possibly solving issues of fairness in decision-making settings. The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, causal mechanisms that generate the disparity in the first place. We study the problem of decomposing variations, which results in the construction of empirical measures of fairness that attribute such variations to the causal mechanisms that generated them. This attribution of disparity to specific causal mechanisms allows us to propose a formal and practical framework for assessing the legal doctrines of disparate treatment and disparate impact, while also accommodating considerations of business necessity.

Finally, through the newly developed framework, we draw important connections with the previous literature, both within and outside the causal inference arena. This effort culminates in the "Fairness Map", the first cohesive and systematic classification of multiple measures of fairness in terms of their causal properties, including admissibility, decomposability, and power. We hope this new set of principles, measures, and tools can help guide AI researchers and engineers when analyzing and/or developing decision-making systems that will be aligned with society's goals, expectations, and aspirations.
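
To give a flavor of the theory of decomposing variations, consider the total variation (TV) measure, i.e., the observed disparity in outcomes between the groups x0 and x1 of a protected attribute X:

    TV_{x0,x1}(y) = P(y | x1) - P(y | x0).

As a sketch (the notation here is simplified; the precise counterfactual definitions are given in the tutorial material), Causal Fairness Analysis shows how this single number can be attributed to the underlying causal mechanisms, schematically

    TV_{x0,x1}(y) = (direct: X -> Y) + (indirect: X -> W -> Y) + (spurious: X <- Z -> Y),

where W denotes mediators between X and the outcome Y, and Z denotes their common causes.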

Outline

Part 1: Foundations of Causal Fairness Analysis
  • Outline of Causal Fairness Analysis
  • Introduction to Structural Causal Models and Causal Graphs
  • Fundamental Problem of Causal Fairness Analysis (FPCFA)
  • Theory of Decomposing Variations

Part 2: Causal Fairness Analysis in Practice
  • Bias Detection: Fairness Cookbook
  • Fair Prediction: Fair Prediction Theorem
  • Failure of Optimal Transport Methods
  • Causal Optimal Transport (Causal Individual Fairness)

Target Audience

This tutorial is targeted at researchers working on the foundations of fairness analysis, transparency, and explainability. It is also well-suited for researchers in applied areas, such as data scientists, statisticians, and ML engineers. Familiarity with causal inference and fair machine learning is desirable, but not necessary to attend: the tutorial is intended to be self-contained, covering all necessary preliminaries.

Goals of the Tutorial

The goals of the tutorial are (1) to introduce the modern theory of causal inference and its tools to the ICML audience, and (2) to show its implications for fairness analysis, in both theory and practice. After the tutorial, attendees should be familiar with the basic concepts, principles, and algorithms needed to solve modern problems of bias analysis, and should understand when and how biases can be detected and possibly corrected. In particular, they should also be aware of the differences between typical (a-causal) fairness analysis and the new generation of tools based on causal inference, including how causal methods can help bridge the gap between legal and statistical notions of fairness.

Fairness Tasks

There are three main tasks studied in Causal Fairness Analysis, namely:

TASK 1
Bias Detection & Quantification
determining if unfairness is present

Detecting and measuring different types of bias (direct, indirect, spurious) in an observed dataset.
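
As a minimal illustration of the first step of such an analysis, a short Python sketch (all data and variable names here are hypothetical) computes the total variation, the raw disparity in outcomes between groups, which the Fairness Cookbook then decomposes into direct, indirect, and spurious components:

    import numpy as np
    import pandas as pd

    # Hypothetical data: protected attribute X, confounder Z,
    # mediator W (influenced by X), and binary outcome Y.
    rng = np.random.default_rng(0)
    n = 10_000
    X = rng.integers(0, 2, n)
    Z = rng.normal(size=n)
    W = 0.8 * X + rng.normal(size=n)
    Y = (0.5 * W + 0.5 * Z + rng.normal(size=n) > 0).astype(int)
    df = pd.DataFrame({"X": X, "Z": Z, "W": W, "Y": Y})

    # Total variation: TV_{x0,x1}(y) = E[Y | X = 1] - E[Y | X = 0].
    tv = df.loc[df.X == 1, "Y"].mean() - df.loc[df.X == 0, "Y"].mean()
    print(f"TV (observed disparity): {tv:+.3f}")

The TV alone does not reveal where the disparity comes from; attributing it to direct, indirect, and spurious mechanisms requires the causal machinery covered in the tutorial.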

TASK 2
Fair Predictions
removing an observed bias

Constructing ML predictors of the outcome that are fair with respect to the legal doctrines of fairness (that is, predictors that contain no effects other than those explicitly allowed by considerations of business necessity).
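
For contrast, a naive baseline for this task (sometimes called "fairness through unawareness") simply drops the protected attribute before training. The sketch below, reusing the hypothetical data from the Task 1 sketch, illustrates why this is insufficient: a disallowed indirect effect can still reach the predictions through a mediator.

    from sklearn.linear_model import LogisticRegression

    # Naive baseline: train without the protected attribute X.
    model = LogisticRegression().fit(df[["Z", "W"]], df["Y"])
    pred = model.predict_proba(df[["Z", "W"]])[:, 1]

    # The group disparity in predictions persists, since the indirect
    # path X -> W -> Y is still picked up through the mediator W.
    print(f"{pred[df.X == 1].mean() - pred[df.X == 0].mean():+.3f}")

Causal approaches instead remove the specific disallowed path-specific effects, rather than merely hiding X from the predictor.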

TASK 3
Fair Decision-Making
controlling discrimination over time

Introducing fair policies (such as affirmative action) that reduce racial, gender, or religious disparities as they are implemented over time.

Acknowledgements

This research was supported in part by the NSF, ONR, AFOSR, DoE, Amazon, JP Morgan, and The Alfred P. Sloan Foundation.