Human–machine teaming is key to AI adoption: clinicians' experiences with a deployed ... .

  • Obermeyer, Z. & Emanuel, E. J. Artificial intelligence and the augmentation of health care decision-making. N. Engl. J. Med. 375, 1216–1219 (2016).

  • Bates, D. W., Saria, S., Ohno-Machado, L., Shah, A. & Escobar, G. Big data in health care: using analytics to identify and manage high-risk and high-cost patients. Health Aff. 33, 1123–1131 (2014).

  • Topol, E. J. High-performance medicine: the convergence of human and artificial intelligence. Nat. Med. 25, 44–56 (2019).

  • Khan, S. et al. Improving provider adoption with adaptive clinical decision support surveillance: An observational study. JMIR Hum. Factors 6, 1–10 (2019).

  • Kwan, J. L. et al. Computerised clinical decision support systems and absolute improvements in care: Meta-analysis of controlled clinical trials. BMJ 370, 1–11 (2020).

  • Mann, D. et al. Adaptive design of a clinical decision support tool: What the impact on utilization rates means for future CDS research. Digit. Health 5, 1–12 (2019).

  • Chen, J. H. & Asch, S. M. Machine learning and prediction in medicine—beyond the peak of inflated expectations. N. Engl. J. Med. 376, 2507–2509 (2017).

  • Shortliffe, E. H. & Sepúlveda, M. J. Clinical decision support in the era of artificial intelligence. JAMA 320, 2199–2200 (2018).

  • Jacobs, M. et al. How machine-learning recommendations influence clinician treatment selections: the example of the antidepressant selection. Transl. Psychiatry 11, 1–9 (2021).

  • Tonekaboni, S., Joshi, S., McCradden, M. D. & Goldenberg, A. What clinicians want: contextualizing explainable machine learning for clinical end use. In proc. Machine Learning Research 106, 359–380 (2019).

  • Narayanan, M. et al. How do humans understand explanations from machine learning systems? An evaluation of the human-interpretability of explanation. arXiv preprint. arXiv:1802.00682, 1–21 (2018).

  • Jacobs, M. et al. Designing AI for trust and collaboration in time-constrained medical decisions: a sociotechnical lens. In proc. CHI’21. https://doi.org/10.1145/3411764.3445385 (2021).

  • Dietvorst, B. J., Simmons, J. P. & Massey, C. Algorithm aversion: people erroneously avoid algorithms after seeing them err. J. Exp. Psychol. Gen. 144, 114–126 (2015).

  • Gaube, S. et al. Do as AI say: susceptibility in deployment of clinical decision-aids. npj Digit. Med. https://doi.org/10.1038/s41746-021-00385-9 (2021).

  • Walter, Z. & Lopez, M. S. Physician acceptance of information technologies: role of perceived threat to professional autonomy. Decis. Support Syst. 46, 206–215 (2008).

  • Lee, J. D. & See, K. A. Trust in automation: designing for appropriate reliance. Hum. Factors 46, 50–80 (2004).

  • Rhee, C. et al. Incidence and trends of sepsis in US hospitals using clinical vs claims data, 2009–2014. JAMA 318, 1241–1249 (2017).

  • Liu, V. et al. Hospital deaths in patients with sepsis from 2 independent cohorts. JAMA 312, 90–92 (2014).

  • Paoli, C. J., Reynolds, M. A., Sinha, M., Gitlin, M. & Crouser, E. Epidemiology and costs of sepsis in the United States—an analysis based on timing of diagnosis and severity level. Critical Care Medicine 46, 1889–1897 (2018).

  • Singer, M. et al. The third international consensus definitions for sepsis and septic shock (sepsis-3). JAMA 315, 801–810 (2016).

  • Henry, K. E. et al. Factors driving provider adoption of the TREWS machine learning-based early warning system and its effects on sepsis treatment timing. Nat. Med. https://doi.org/10.1038/s41591-022-01895-z (2022).

  • Adams, R. et al. Prospective, multi-site study of patient outcomes after implementation of the TREWS machine learning-based early warning system for sepsis. Nat. Med. https://doi.org/10.1038/s41591-022-01894-0 (2022).

  • Greenes, R. A. et al. Clinical decision support models and frameworks: seeking to address research issues underlying implementation successes and failures. J. Biomed. Inform. 78, 134–143 (2018).

  • Ruppel, H. & Liu, V. To catch a killer: electronic sepsis alert tools reaching a fever pitch? BMJ Qual. Saf. https://doi.org/10.1136/bmjqs-2019-009463 (2019).

  • Mertz, L. From Annoying to Appreciated: turning clinical decision support systems into a medical professional’s best friend. IEEE Pulse 6, 4–9 (2015).

  • Centers for Medicare and Medicaid Services. CMS announces update on SEP-1 validation, public reporting for Hospital Inpatient Quality Reporting. https://qualitynet.cms.gov/news/5d014bfc1543e8002ceb1d45. (2016).

  • Sendak, M. et al. ‘The Human Body is a Black Box’: Supporting Clinical Decision-Making with Deep Learning. In proc. of the 2020 Conference on Fairness, Accountability, and Transparency (2020).

  • Shortreed, S. M., Cook, A. J., Coley, R. Y., Bobb, J. F. & Nelson, J. C. Challenges and opportunities for using big health care data to advance medical science and public health. Am. J. Epidemiol. 188, 851–861 (2019).

  • Wang, F., Casalino, L. P. & Khullar, D. Deep learning in medicine—promise, progress, and challenges. JAMA Intern. Med. 179, 293–294 (2019).

  • Wisniewski, H., Gorrindo, T., Rauseo-Ricupero, N., Hilty, D. & Torous, J. The role of digital navigators in promoting clinical care and technology integration into practice. Digit. Biomarkers 4, 119–135 (2020).

  • Schwartz, J. M., Moy, A. J., Rossetti, S. C., Elhadad, N. & Cato, K. D. Clinician involvement in research on machine learning-based predictive clinical decision support for the hospital setting: a scoping review. J. Am. Med. Inf. Assoc. 28, 653–663 (2021).

  • Stirman, S. W. et al. The sustainability of new programs and innovations: a review of the empirical literature and recommendations for future research. Implement. Sci. 7, 1–19 (2012).

  • Sebo, S. S., Dong, L. L., Chang, N. & Scassellati, B. Strategies for the inclusion of human members within human-robot teams. In proc. ACM/IEEE Int. Conf. Human-Robot Interact. 309–317 (2020).

  • de Visser, E. J. et al. Towards a theory of longitudinal trust calibration in human–robot teams. Int. J. Soc. Robot. 12, 459–478 (2020).

  • Demir, M., McNeese, N. J. & Cooke, N. J. Understanding human-robot teams in light of all-human teams: Aspects of team interaction and shared cognition. Int. J. Hum. Comput. Stud. 140, 102436 (2020).

  • Henry, K. E., Hager, D. N., Pronovost, P. J. & Saria, S. A targeted real-time early warning score (TREWScore) for septic shock. Sci. Transl. Med. 7, 1–9 (2015).

  • Soleimani, H., Hensman, J. & Saria, S. Scalable joint models for reliable uncertainty-aware event prediction. IEEE Trans. Pattern Anal. Mach. Intell. 40, 1948–1963 (2018).

  • Henry, K. E., Hager, D. N., Osborn, T. M., Wu, A. W. & Saria, S. Comparison of Automated Sepsis Identification Methods and Electronic health record-based Sepsis Phenotyping (ESP): improving case identification accuracy by accounting for confounding comorbid conditions. Crit. Care Explor. 1:e0053, 1–8 (2019).

  • Bhattacharjee, P., Edelson, D. P. & Churpek, M. M. Identifying patients with sepsis on the hospital wards. Chest 151, 898–907 (2017).

  • Harrison, A. M., Gajic, O., Pickering, B. W. & Herasevich, V. Development and implementation of sepsis alert systems. Clin. Chest Med. 37, 219–229 (2017).

  • Edmondson, A. C. & McManus, S. E. Methodological fit in management field research. Acad. Manag. Rev. 32, 1246–1264 (2007).

  • Strauss, A. & Corbin, J. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. (Sage Publications, 1998).

  • O’Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A. & Cook, D. A. Standards for reporting qualitative research: a synthesis of recommendations. Acad. Med. 89, 1245–1251 (2014).

  • McDonald, N., Schoenebeck, S. & Forte, A. Reliability and inter-rater reliability in qualitative research: norms and guidelines for CSCW and HCI practice. Proc. ACM Hum. Comput. Interact. 3, 1–23 (2019).

  • Hill, C. E., Thompson, B. J. & Williams, E. N. A guide to conducting consensual qualitative research. Couns. Psychol. 25, 517–572 (1997).
