Imai, Kosuke; Jiang, Zhichao
Principal fairness for human and algorithmic decision-making. (English) Zbl 07708434
Stat. Sci. 38, No. 2, 317-328 (2023).

Summary: Using the concept of principal stratification from the causal inference literature, we introduce a new notion of fairness, called principal fairness, for human and algorithmic decision-making. Principal fairness states that one should not discriminate among individuals who would be similarly affected by the decision. Unlike the existing statistical definitions of fairness, principal fairness explicitly accounts for the fact that individuals can be impacted by the decision. This causal fairness formulation also enables online or post-hoc fairness evaluation and policy learning. We also explain how principal fairness relates to the existing causality-based fairness criteria. In contrast to the counterfactual fairness criteria, for example, principal fairness considers the effects of the decision in question rather than those of the protected attributes of interest. Finally, we discuss how to conduct empirical evaluation and policy learning under the proposed principal fairness criterion.

MSC: 62-XX Statistics

Keywords: algorithmic fairness; causal inference; potential outcomes; principal stratification
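To make the criterion concrete: for a binary decision D, a protected attribute A, and potential outcomes Y(0) and Y(1) under each decision, principal fairness requires D to be independent of A within each principal stratum (Y(0), Y(1)). The following is a minimal simulation sketch, not code from the paper; the variable names and the decision rule are illustrative, and the potential outcomes are observable only because the data are simulated. In real data the strata are latent, which is precisely what makes the empirical evaluation discussed in the paper nontrivial.

```python
# Sketch: auditing a decision rule for principal fairness on simulated data.
# Assumed setup: binary decision D, binary protected attribute A, and
# binary potential outcomes Y(0), Y(1); the stratum (Y0, Y1) is known
# here only because we simulate it.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 100_000

A = rng.integers(0, 2, size=n)      # protected attribute (two groups)
y0 = rng.binomial(1, 0.5, size=n)   # potential outcome under decision d = 0
y1 = rng.binomial(1, np.where(y0 == 1, 0.7, 0.2))  # outcome under d = 1

# A stochastic decision rule that depends on A directly,
# so it should violate principal fairness.
d = rng.binomial(1, 0.3 + 0.2 * A)

df = pd.DataFrame({"A": A, "Y0": y0, "Y1": y1, "D": d})

# Decision rate within each principal stratum (Y0, Y1), by group.
rates = df.groupby(["Y0", "Y1", "A"])["D"].mean().unstack("A")
rates["disparity"] = (rates[1] - rates[0]).abs()
print(rates)  # principal fairness holds when every disparity is ~0
```

In this simulation the within-stratum disparity is roughly 0.2 in every stratum, flagging the violation; a rule whose decision probability depends on (Y0, Y1) but not on A would yield disparities near zero.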