Causal Fairness Analysis

This page provides information for the tutorial on "Causal Fairness Analysis" that will be presented at the European Conference on Artificial Intelligence (ECAI 2024) on 20 October 2024 in Santiago de Compostela (starting at 2pm in D06 Pontevedra, School of Philology). Full tutorial materials will be made available on the day of the tutorial. We look forward to seeing you.
Past versions of the tutorial were presented at ICML 2022 and AAAI 2024. For the materials used, check out the links:

We list some additional resources below and expect to add more in the future; stay tuned!


About the Speakers

Elias Bareinboim is an associate professor in the Department of Computer Science and the director of the Causal Artificial Intelligence (CausalAI) Laboratory at Columbia University. His research focuses on causal and counterfactual inference and their applications to artificial intelligence, machine learning, and the empirical sciences. His work was the first to propose a general solution to the problem of "causal data-fusion," providing practical methods for combining datasets generated under different experimental conditions and plagued with various biases. More recently, Elias has been exploring the intersection of causal inference with decision-making (including reinforcement learning) and explainability (including fairness analysis). Before joining Columbia, he was an assistant professor at Purdue University and received his Ph.D. in Computer Science from the University of California, Los Angeles. Bareinboim was named one of "AI's 10 to Watch" by IEEE, and is a recipient of an NSF CAREER Award, the Dan David Prize Scholarship, the 2014 AAAI Outstanding Paper Award, and the 2019 UAI Best Paper Award.

Drago Plečko is a postdoctoral scholar in the CausalAI Lab at Columbia University. His research focuses on causal inference methodology for observational data. In particular, Drago is interested in methods that can be applied to problems in the health sciences, as well as in writing open-source software to support such applications. He has worked on fairness, algorithmic recourse, and explainability, as well as on epidemiological questions of causation in intensive care unit (ICU) research, including applications of AI tools in the ICU. Before joining Columbia, Drago obtained his PhD from the Seminar for Statistics at ETH Zürich.

Tutorial Overview

Decision-making systems based on AI and machine learning are used across a wide range of real-world settings, including healthcare, law enforcement, education, and finance. It is no longer far-fetched to envision a future where autonomous systems drive entire business decisions and, more broadly, support large-scale decision-making infrastructure to solve society’s most challenging problems. Issues of unfairness and discrimination are pervasive when decisions are made by humans, and they remain (or are potentially amplified) when decisions are made by machines with little transparency, accountability, and fairness.

In this tutorial, we describe the framework of causal fairness analysis with the intent of filling this gap, i.e., understanding, modeling, and possibly solving issues of fairness in decision-making settings. The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, often unobserved, collection of causal mechanisms that generate the disparity in the first place, a challenge we call the Fundamental Problem of Causal Fairness Analysis (FPCFA). In order to solve the FPCFA, we study the problem of decomposing variations and empirical measures of fairness that attribute such variations to structural mechanisms and different units of the population. Our effort culminates in the Fairness Map, the first systematic attempt to organize and explain the relationship between various criteria found in the literature. Finally, we discuss which causal assumptions are minimally needed for performing causal fairness analysis and propose the Fairness Cookbook, which allows data scientists to assess the existence of disparate impact and disparate treatment in practice.
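
For concreteness, here is the flavor of decomposition result the tutorial builds toward (a simplified statement; precise definitions of each term are given in the tutorial materials). Writing x_0, x_1 for the two levels of the protected attribute, the total variation TV_{x_0,x_1}(y) = P(y | x_1) - P(y | x_0), i.e., the disparity observed in the data, decomposes into counterfactual direct, indirect, and spurious components (in LaTeX notation):

\mathrm{TV}_{x_0,x_1}(y) \,=\, \text{Ctf-DE}_{x_0,x_1}(y \mid x_0) \,-\, \text{Ctf-IE}_{x_1,x_0}(y \mid x_0) \,-\, \text{Ctf-SE}_{x_1,x_0}(y)

Each term on the right-hand side is a counterfactual quantity attributable to a distinct collection of causal mechanisms (direct, mediated, and confounded paths, respectively), which is exactly the kind of attribution the FPCFA calls for.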

Outline

Part 1: Foundations of Causal Fairness Analysis
  • Outline of Causal Fairness Analysis
  • Introduction to Structural Causal Models and Causal Graphs
  • Fundamental Problem of Causal Fairness Analysis (FPCFA)
  • Theory of Decomposing Variations
  • Fairness Tasks

Part 2: Causal Fairness Analysis in Practice
  • Bias Detection: Fairness Cookbook
  • Fair Prediction: Fair Prediction Theorem
  • Fair Decision-Making: Benefit Fairness & Fairness of Benefit
  • Case Studies: Mortality in the ICU, Respirators in the ICU

Target Audience

This tutorial is targeted at researchers working on the foundations of fairness analysis, transparency, and explainability. It is also well-suited for researchers in applied areas, such as data scientists. Familiarity with causal inference and fair machine learning is desirable but not necessary to attend: the tutorial is intended to be self-contained and covers all necessary preliminaries.

Goals of the Tutorial

The goals of the tutorial are (1) to introduce the modern theory of causal inference and its tools to the ECAI audience, and (2) to show their implications for fairness analysis, in both theory and practice. After the tutorial, attendees should be familiar with the basic concepts, principles, and algorithms for solving modern problems involving bias analysis, and should understand when and how biases can be detected and perhaps corrected. In particular, they should also be aware of the differences between typical (a-causal) fairness analysis and the new generation of tools based on causal inference, including how causal methods can help bridge the gap between legal and statistical notions of fairness.

Fairness Tasks

There are three main tasks that can be distinguished within Causal Fairness Analysis:

TASK 1
Bias Detection
determining if unfairness is present

Detecting and measuring different types of bias (direct, indirect, spurious) in an observed dataset.
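
As a toy illustration of this task, the sketch below simulates data from a simple structural causal model and computes the observed disparity (TV) together with its counterfactual direct, indirect, and spurious components. Everything here is an illustrative assumption (the model, its coefficients, and the variable names); in practice these counterfactual quantities must be identified and estimated from data under causal assumptions, which is precisely what the Fairness Cookbook addresses. We sidestep that here by simulating from a fully known model, so ground-truth counterfactuals are available.

import numpy as np

rng = np.random.default_rng(0)
n = 500_000

# Exogenous noise terms of the (toy) structural causal model
u_z = rng.normal(size=n)
u_x = rng.uniform(size=n)
u_w = rng.normal(size=n)
u_y = rng.normal(size=n)

# Structural equations: Z -> X (spurious), X -> W -> Y (indirect),
# X -> Y (direct), Z -> Y
z = u_z
x = (u_x < 1.0 / (1.0 + np.exp(-1.5 * z))).astype(float)

def f_w(x_val):                 # mediator mechanism
    return 0.8 * x_val + u_w

def f_y(x_val, w_val):          # outcome mechanism
    return 1.0 * x_val + 0.7 * w_val + 0.6 * z + u_y

# Factual and counterfactual outcomes, unit by unit
y        = f_y(x, f_w(x))       # observed outcome
y_x0     = f_y(0.0, f_w(0.0))   # Y_{x0}
y_x1     = f_y(1.0, f_w(1.0))   # Y_{x1}
y_x1_wx0 = f_y(1.0, f_w(0.0))   # Y_{x1, W_{x0}} (nested counterfactual)

g0, g1 = (x == 0), (x == 1)

# Y is continuous here, so TV is a difference of means rather than probabilities
tv     = y[g1].mean() - y[g0].mean()             # TV_{x0,x1}(y)
ctf_de = y_x1_wx0[g0].mean() - y_x0[g0].mean()   # Ctf-DE_{x0,x1}(y | x0)
ctf_ie = y_x1_wx0[g0].mean() - y_x1[g0].mean()   # Ctf-IE_{x1,x0}(y | x0)
ctf_se = y_x1[g0].mean() - y[g1].mean()          # Ctf-SE_{x1,x0}(y)

print(f"TV     = {tv:.4f}")
print(f"Ctf-DE = {ctf_de:.4f}")
print(f"Ctf-IE = {ctf_ie:.4f}")
print(f"Ctf-SE = {ctf_se:.4f}")
print(f"DE - IE - SE = {ctf_de - ctf_ie - ctf_se:.4f} (matches TV)")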

TASK 2
Fair Predictions
removing an observed bias

Constructing ML predictors of the outcome that are fair with respect to the legal doctrines on fairness, that is, predictors that carry only those effects explicitly allowed by considerations of business necessity.
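
To see why this task requires causal tools, consider the deliberately naive baseline below ("fairness through unawareness"); this is a sketch under the same toy model as in the Task 1 example, not the Fair Prediction Theorem covered in the tutorial. Dropping the protected attribute X from the inputs removes its explicit, direct use, yet the predictor's disparity does not vanish, because indirect (via the mediator W) and spurious (via the confounder Z) variations remain.

import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# Same toy structural causal model as in the Task 1 sketch
z = rng.normal(size=n)
x = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-1.5 * z))).astype(float)
w = 0.8 * x + rng.normal(size=n)
y = 1.0 * x + 0.7 * w + 0.6 * z + rng.normal(size=n)

def fit_predict(*features):
    # Ordinary least squares with an intercept; returns in-sample predictions
    A = np.column_stack(features + (np.ones(n),))
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ beta

yhat_aware   = fit_predict(x, z, w)   # uses X directly
yhat_unaware = fit_predict(z, w)      # "fairness through unawareness": X dropped

def disparity(pred):                  # E[pred | x1] - E[pred | x0]
    return pred[x == 1].mean() - pred[x == 0].mean()

print(f"disparity of X-aware predictor:   {disparity(yhat_aware):.3f}")
print(f"disparity of X-unaware predictor: {disparity(yhat_unaware):.3f}  # still far from zero")

A causally-grounded approach instead targets the specific direct, indirect, and spurious components that are disallowed, which is the subject of the Fair Prediction Theorem discussed in Part 2.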

TASK 3
Fair Decision-Making
controlling discrimination over time

Introducing fair policies (such as affirmative action) that reduce racial, gender, or religious disparities as they are implemented over time.