The Conflict Graph Design: Estimating Causal Effects Under Interference
November 1 @ 11:00 am - 12:00 pm
Christopher Harshaw, Columbia University
E18-304
Abstract:
From clinical trials to corporate strategy, randomized experiments are a reliable methodological tool for estimating causal effects. In recent years, there has been growing interest in causal inference under interference, where treatment given to one unit can affect the outcomes of other units. While the literature on interference has focused primarily on unbiased and consistent estimation, designing randomized network experiments to ensure tight rates of convergence is relatively under-explored. Not only are the optimal rates of estimation for different causal effects under interference an open question, but previously proposed designs have been constructed in an ad hoc fashion.
In this talk, we present the Conflict Graph Design, a new approach for constructing experimental designs to estimate causal effects under interference. Given a particular causal estimand (e.g., the total treatment effect, direct effect, or spillover effect), we construct a so-called “conflict graph” which captures the fundamental unobservability associated with the estimand on the underlying network. The Conflict Graph Design randomly assigns treatment by first assigning “desired” exposures and then resolving conflicts among these desired exposures according to an algorithmically constructed importance ordering. In this way, the proposed experimental design depends on both the underlying network and the causal estimand under investigation. We show that a modified Horvitz–Thompson estimator attains a variance of $O( \lambda / n )$ under the design, where $\lambda$ is the largest eigenvalue of the adjacency matrix of the conflict graph, a global measure of its connectivity. These rates improve upon the best known rates for a variety of estimands (e.g., total treatment effects and direct effects), and we conjecture that they are optimal. Finally, we provide consistent variance estimators and asymptotically valid confidence intervals, which facilitate inference about the causal effect under investigation.
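To make the key quantities concrete, here is a minimal sketch in Python. It is not the Conflict Graph Design itself: the conflict graph, the design, and the outcome model are all toy placeholders. It only illustrates (a) computing $\lambda$, the largest eigenvalue of a conflict-graph adjacency matrix, which appears in the $O(\lambda/n)$ variance bound, and (b) a plain Horvitz–Thompson (inverse-probability-weighted) estimate under a simple Bernoulli assignment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 6-node "conflict graph" adjacency matrix (illustrative only,
# not the paper's construction from an estimand and a network).
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 0],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 0, 1, 0],
], dtype=float)

# lambda: largest eigenvalue of the (symmetric) adjacency matrix --
# the global connectivity measure in the O(lambda / n) variance rate.
lam = np.max(np.linalg.eigvalsh(A))

n = A.shape[0]
p = 0.5                                  # Bernoulli(p) assignment, a stand-in design
Z = rng.binomial(1, p, size=n)           # random treatment vector
Y = 1.0 + 0.5 * Z                        # hypothetical outcomes; true effect = 0.5

# Horvitz-Thompson estimator of the average treatment effect:
# weight each observed outcome by the inverse of its assignment probability.
ate_ht = np.mean(Z * Y / p - (1 - Z) * Y / (1 - p))
print(f"lambda = {lam:.3f}, HT estimate = {ate_ht:.3f}")
```

Under this Bernoulli stand-in the estimator is unbiased for the effect of 0.5; the point of the talk's design is that a careful assignment, driven by the conflict graph, controls the estimator's variance at the rate $O(\lambda/n)$.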
Joint work with Vardis Kandiros, Charis Pipis, and Costis Daskalakis at MIT.
Bio:
Christopher Harshaw is an Assistant Professor in the Department of Statistics at Columbia University. He received a PhD from Yale University and was a FODSI postdoc hosted jointly by UC Berkeley and MIT. His research lies at the interface of causal inference, algorithm design, and machine learning, with a particular focus on the design and analysis of randomized experiments. His work appears in the Journal of the American Statistical Association, the Electronic Journal of Statistics, COLT, ICML, and NeurIPS, and won a Best Paper Award at the NeurIPS 2022 workshop CML4Impact.