How to select predictive models for causal inference?
M Doutreligne, G Varoquaux - arXiv preprint arXiv:2302.00370, 2023 - arxiv.org
As predictive models -- e.g., from machine learning -- give likely outcomes, they may be used to reason on the effect of an intervention, a causal-inference task. The increasing complexity of health data has opened the door to a plethora of models, but also the Pandora's box of model selection: which of these models yields the most valid causal estimates? Here we highlight that classic machine-learning model selection does not select the best outcome models for causal inference. Indeed, causal model selection should control both outcome errors for each individual, treated or not treated, whereas only one outcome is observed. Theoretically, the simple risks used in machine learning do not control causal effects when the treated and non-treated populations differ too much. More elaborate risks build proxies of the causal error, using "nuisance" re-weighting to compute it on the observed data. But does computing these nuisances add noise to model selection? Drawing from an extensive empirical study, we outline a good causal model-selection procedure: using the so-called R-risk; using flexible estimators to compute the nuisance models on the train set; and splitting out 10% of the data to compute risks.
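The selection procedure outlined above can be illustrated with a minimal sketch. The example below assumes the R-risk of Nie and Wager, which scores a candidate effect estimate tau(x) as the mean of ((Y - m(X)) - (A - e(X)) * tau(X))^2, where the nuisances m (mean outcome) and e (propensity) are fit with flexible learners on the train split and the risk is computed on a held-out 10% split. The synthetic data, model choices, and function names here are illustrative assumptions, not the paper's exact benchmark.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic observational data (illustrative, not the paper's benchmark):
# confounded treatment A, heterogeneous effect tau(x) = 1 + x1.
rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))
propensity = 1.0 / (1.0 + np.exp(-X[:, 0]))   # treatment depends on X -> confounding
A = rng.binomial(1, propensity)
tau_true = 1.0 + X[:, 1]
Y = X[:, 0] + A * tau_true + rng.normal(size=n)

# Split out 10% of the data to compute risks; fit nuisances on the rest.
X_tr, X_val, A_tr, A_val, Y_tr, Y_val = train_test_split(
    X, A, Y, test_size=0.1, random_state=0)

# Flexible nuisance models: mean outcome m(x) = E[Y|X] and propensity e(x) = P(A=1|X).
m_hat = GradientBoostingRegressor(random_state=0).fit(X_tr, Y_tr)
e_hat = GradientBoostingClassifier(random_state=0).fit(X_tr, A_tr)

def r_risk(tau_pred, X, A, Y):
    """R-risk: mean of ((Y - m(X)) - (A - e(X)) * tau(X))^2 on held-out data."""
    res_y = Y - m_hat.predict(X)
    res_a = A - e_hat.predict_proba(X)[:, 1]
    return np.mean((res_y - res_a * tau_pred) ** 2)

# Compare two candidate effect estimates on the 10% validation split:
# a naive "no effect" model versus the oracle heterogeneous effect.
risk_naive = r_risk(np.zeros(len(X_val)), X_val, A_val, Y_val)
risk_oracle = r_risk(1.0 + X_val[:, 1], X_val, A_val, Y_val)
print(f"R-risk, tau=0:      {risk_naive:.3f}")
print(f"R-risk, oracle tau: {risk_oracle:.3f}")
```

In this setup the R-risk ranks the oracle effect model below the naive one, which is the behavior that makes it usable for causal model selection where plain predictive risks on the factual outcome can fail.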