Considerations for Distribution Shift Robustness in Health
Authors: Arno Blaas*, Andrew C. Miller*, Luca Zappella, Jörn-Henrik Jacobsen, Christina Heinze-Deml
*=Equal Contributors
This paper was accepted at the Trustworthy Machine Learning for Healthcare Workshop at ICLR 2023.
When analyzing the robustness of predictive models under distribution shift, many works focus on generalization in the presence of spurious correlations. In that setting, one typically uses covariates or environment indicators to enforce independencies in the learned model, which guarantees generalization under various distribution shifts. In this work, we analyze a class of distribution shifts where such independencies are undesirable because there is a causal association between covariates and the outcomes of interest. This case is common in the health space, where covariates can be causally, as opposed to spuriously, related to outcomes of interest. We formalize this setting and relate it to common distribution shift settings from the literature. We show theoretically why standard supervised learning and invariant learning will not yield robust predictors in this case, while including the causal covariates in the prediction model can recover robustness. We demonstrate our theoretical findings in experiments on both synthetic and real data.
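To make the contrast concrete, below is a minimal synthetic sketch of this failure mode. It is our own illustration, not the paper's experimental setup: a covariate Z causally drives the outcome Y, and the association between Z and a feature X flips between the training and test environments. All variable names and coefficients are assumptions chosen for illustration. A linear model that excludes Z, as an independence-enforcing approach would, absorbs Z's causal effect through the training-time correlation and breaks under the shift, while a model that includes Z stays robust.

```python
# Minimal synthetic sketch (illustrative assumptions, not the paper's setup):
# Z causally affects Y, and the X-Z association changes across environments.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def sample_env(n, corr):
    """Draw (X, Z, Y) with Y = 1.0*X + 2.0*Z + noise, where `corr`
    controls the (environment-specific) association between X and Z."""
    X = rng.normal(size=n)
    Z = corr * X + rng.normal(scale=0.5, size=n)
    Y = 1.0 * X + 2.0 * Z + rng.normal(scale=0.1, size=n)
    return X, Z, Y

# Training environment: strong positive X-Z association.
X_tr, Z_tr, Y_tr = sample_env(5000, corr=0.8)
# Shifted test environment: the association is reversed.
X_te, Z_te, Y_te = sample_env(5000, corr=-0.8)

# Predictor that ignores the causal covariate Z: it soaks up Z's effect
# via the training correlation and is miscalibrated after the shift.
m_x = LinearRegression().fit(X_tr.reshape(-1, 1), Y_tr)
# Predictor that includes Z: recovers the causal coefficients (~1 and ~2)
# and transfers across environments.
m_xz = LinearRegression().fit(np.column_stack([X_tr, Z_tr]), Y_tr)

def mse(model, feats, y):
    return np.mean((model.predict(feats) - y) ** 2)

print("X-only  test MSE:", mse(m_x, X_te.reshape(-1, 1), Y_te))
print("X and Z test MSE:", mse(m_xz, np.column_stack([X_te, Z_te]), Y_te))
```

With these assumed coefficients, the X-only model should incur a much larger test error than the model that includes Z, mirroring the claim that including causal covariates, rather than enforcing independence from them, recovers robustness.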