4. Kim K, Yang H, Yi J, Son HE, Ryu JY, Kim YC, et al. Real-time clinical decision support based on recurrent neural networks for in-hospital acute kidney injury: external validation and model interpretation. J Med Internet Res. 2021; 23(4):e24120. https://doi.org/10.2196/24120.
5. Tidjon LN, Khomh F. Never trust, always verify: a roadmap for Trustworthy AI? [Internet]. Ithaca (NY): arXiv.org; 2022 [cited at 2023 Oct 31]. Available from: https://arxiv.org/abs/2206.11981.
6. Neff G. Talking to bots: symbiotic agency and the case of Tay. Int J Commun. 2016; 10:4915–31.
8. Jaspers MW, Smeulers M, Vermeulen H, Peute LW. Effects of clinical decision-support systems on practitioner performance and patient outcomes: a synthesis of high-quality systematic review findings. J Am Med Inform Assoc. 2011; 18(3):327–34. https://doi.org/10.1136/amiajnl-2011-000094.
10. Arrieta AB, Diaz-Rodriguez N, Del Ser J, Bennetot A, Tabik S, Barbado A, et al. Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf Fusion. 2020; 58:82–115. https://doi.org/10.1016/j.inffus.2019.12.012.
11. Antoniadi AM, Du Y, Guendouz Y, Wei L, Mazo C, Becker BA, et al. Current challenges and future opportunities for XAI in machine learning-based clinical decision support systems: a systematic review. Appl Sci. 2021; 11(11):5088. https://doi.org/10.3390/app11115088.
13. Das A, Rad P. Opportunities and challenges in explainable artificial intelligence (XAI): a survey [Internet]. Ithaca (NY): arXiv.org; 2020 [cited at 2023 Oct 31]. Available from: https://arxiv.org/abs/2006.11371.
14. van der Veer SN, Riste L, Cheraghi-Sohi S, Phipps DL, Tully MP, Bozentko K, et al. Trading off accuracy and explainability in AI decision-making: findings from 2 citizens’ juries. J Am Med Inform Assoc. 2021; 28(10):2128–38. https://doi.org/10.1093/jamia/ocab127.
16. Moosavi-Dezfooli SM, Fawzi A, Frossard P. DeepFool: a simple and accurate method to fool deep neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2016 Jun 27–30; Las Vegas, NV. p. 2574–82. https://doi.org/10.1109/CVPR.2016.282.
17. Goodfellow IJ, Shlens J, Szegedy C. Explaining and harnessing adversarial examples [Internet]. Ithaca (NY): arXiv.org; 2014 [cited at 2023 Oct 31]. Available from: https://arxiv.org/abs/1412.6572.
18. Papernot N, McDaniel P, Jha S, Fredrikson M, Celik ZB, Swami A. The limitations of deep learning in adversarial settings. In: Proceedings of the 2016 IEEE European Symposium on Security and Privacy (EuroS&P); 2016 Mar 21–24; Saarbruecken, Germany. p. 372–87. https://doi.org/10.1109/EuroSP.2016.36.
20. Chromik M, Butz A. Human-XAI interaction: a review and design principles for explanation user interfaces. In: Ardito C, Lanzilotti R, Malizia A, editors. Human-computer interaction–INTERACT 2021. Cham, Switzerland: Springer; 2021. p. 619–40. https://doi.org/10.1007/978-3-030-85616-8_36.
21. Grgic-Hlaca N, Lima G, Weller A, Redmiles EM. Dimensions of diversity in human perceptions of algorithmic fairness [Internet]. Ithaca (NY): arXiv.org; 2022 [cited at 2023 Oct 31]. Available from: https://arxiv.org/abs/2005.00808.
22. Baniecki H, Kretowicz W, Piatyszek P, Wisniewski J, Biecek P. Dalex: responsible machine learning with interactive explainability and fairness in Python. J Mach Learn Res. 2021; 22(1):9759–65.
29. Awasthi P, Beutel A, Kleindessner M, Morgenstern J, Wang X. Evaluating fairness of machine learning models under uncertain and incomplete information. In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency; 2021 Mar 3–10; Virtual Event, Canada. p. 206–14. https://doi.org/10.1145/3442188.3445884.
30. Hinnefeld JH, Cooman P, Mammo N, Deese R. Evaluating fairness metrics in the presence of dataset bias [Internet]. Ithaca (NY): arXiv.org; 2018 [cited at 2023 Oct 31]. Available from: https://arxiv.org/abs/1809.09245.
31. Madaio M, Egede L, Subramonyam H, Wortman Vaughan J, Wallach H. Assessing the fairness of AI systems: AI practitioners’ processes, challenges, and needs for support. Proc ACM Hum Comput Interact. 2022; 6(CSCW1):1–26. https://doi.org/10.1145/3512899.
32. Hardt M, Price E, Srebro N. Equality of opportunity in supervised learning. Adv Neural Inf Process Syst. 2016; 29:3315–23.
33. Srivastava M, Heidari H, Krause A. Mathematical notions vs. human perception of fairness: a descriptive approach to fairness for machine learning. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining; 2019 Aug 4–8; Anchorage, AK. p. 2459–68. https://doi.org/10.1145/3292500.3330664.
34. Saravanakumar KK. The impossibility theorem of machine fairness: a causal perspective [Internet]. Ithaca (NY): arXiv.org; 2020 [cited at 2023 Oct 31]. Available from: https://arxiv.org/abs/2007.06024.
35. Dwork C, Ilvento C. Fairness under composition [Internet]. Ithaca (NY): arXiv.org; 2018 [cited at 2023 Oct 31]. Available from: https://arxiv.org/abs/1806.06122.
36. Binns R. On the apparent conflict between individual and group fairness. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency; 2020 Jan 27–30; Barcelona, Spain. p. 514–24. https://doi.org/10.1145/3351095.3372864.
40. Huq AZ. Racial equity in algorithmic criminal justice. Duke Law J. 2019; 68(6):1043.
41. Hu L, Kohler-Hausmann I. What’s sex got to do with fair machine learning? [Internet]. Ithaca (NY): arXiv.org; 2020 [cited at 2023 Oct 31]. Available from: https://arxiv.org/abs/2006.01770.
42. Chohlas-Wood A, Nudell J, Yao K, Lin Z, Nyarko J, Goel S. Blind justice: algorithmically masking race in charging decisions. In: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society; 2021 May 19–21; Virtual Event, USA. p. 35–45. https://doi.org/10.1145/3461702.3462524.
49. Scheibner J, Raisaro JL, Troncoso-Pastoriza JR, Ienca M, Fellay J, Vayena E, et al. Revolutionizing medical data sharing using advanced privacy-enhancing technologies: technical, legal, and ethical synthesis. J Med Internet Res. 2021; 23(2):e25120. https://doi.org/10.2196/25120.
50. Bai T, Luo J, Zhao J, Wen B, Wang Q. Recent advances in adversarial training for adversarial robustness [Internet]. Ithaca (NY): arXiv.org; 2021 [cited at 2023 Oct 31]. Available from: https://arxiv.org/abs/2102.01356.
53. Taghanaki SA, Das A, Hamarneh G. Vulnerability analysis of chest X-ray image classification against adversarial attacks. In: Stoyanov D, Taylor Z, Kia SM, editors. Understanding and interpreting machine learning in medical image computing applications. Cham, Switzerland: Springer; 2018. p. 87–94. https://doi.org/10.1007/978-3-030-02628-8_10.