Estimated read time: 5 mins
Artificial intelligence (AI) is increasingly being used in healthcare, with growing potential to improve clinical decision-making. Many AI systems now provide treatment recommendations [1], although human clinicians remain responsible for the final decision about whether to implement an AI's recommendation [2]. A particular area of interest is where the recommendation deviates from clinical judgement [3] and there is disagreement about the best course of action.
To explore this further, a project funded by the MPS Foundation is investigating how clinicians interact with AI in a variety of clinical scenarios, with a focus on diabetes and obstetrics.
At the start of the project, the team developed six scenarios of clinical consultations in which an AI decision-support system was used. A patient panel was then asked to review these scenarios and give their views on the impact of AI.
When the clinician disagreed with the AI, the panel felt this negatively affected the patient’s perception of the clinician and led to uncertainty, anxiety and confusion among patients.
Concerns focused on three aspects:
The panel reflected that they would not know who to trust if there was a split decision between the clinician and the AI: “our confidence has always been in people, in the clinicians (...) and suddenly this machine’s coming in and saying something different. I’m going to lose confidence all round.” The panel acknowledged that the clinician is ultimately responsible for making decisions; nevertheless, a clinician disagreeing with the AI’s recommendation created uncertainty. The panel questioned the AI’s purpose in a clinical setting if the clinician was not going to adhere to its recommendations, which could cause uncertainty about who is ‘right’, worry about who to believe, and a loss of faith in the clinician.
The panel was presented with a scenario in which a patient had suffered an adverse health event, there was no clear national guidance on the best course of treatment, and the patient had no strong preference. The panel felt the clinician would be blamed, and therefore liable, particularly if the clinician had rejected the AI’s suggestion, but also if the clinician had agreed with the AI: “you [the clinician] can’t blame anyone. If it’s recorded all the information the clinician’s given and the AI’s decision, they wouldn’t have much grounds would they?”
A certified AI system should reduce the risk of unsafe recommendations, but it can still recommend options that are suboptimal for the patient. For example, in a treatment decision, option A might be the right choice 80% of the time and option B 20% of the time. The clinician should choose option A, knowing that they could still be wrong. However, if the AI recommends B, the clinician may consciously choose what they believe to be the worse option as an act of defensive medicine [4]: not quite ‘avoidance’ or ‘reassurance’, but an act of ‘deference’ to the AI. While it is generally assumed that the human will act to prevent AI errors from harming patients, systems designed with a ‘human in the loop’ risk turning the clinician into a ‘liability sink’ [5]. It appears that clinicians receiving advice from an AI can reduce their own liability risk by accepting that advice, even when the AI recommends non-standard care. We are used to the concept of unconscious automation bias, where a clinician may over-trust an AI [6]. But the issue of ‘algorithmic deference’ [7] goes further: a clinician may consciously avoid disagreement when they feel the AI’s recommendation is suboptimal but not outright wrong, a form of conscious emergent bias [8].
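To make the trade-off concrete, the short sketch below (illustrative only) works through the expected-outcome arithmetic of the hypothetical 80%/20% example; the figures are the invented ones above, and the idea that agreeing with the AI shields the clinician from liability is the panel’s perception, not an established legal rule.

```python
# Illustrative sketch only: toy numbers from the hypothetical 80%/20% example above.
# The "liability shield" of agreeing with the AI is an assumption reflecting the
# perception described in the text, not an established legal position.

P_CORRECT = {
    "A (clinician's preferred option)": 0.80,
    "B (AI-recommended option)": 0.20,
}

def expected_benefit(option: str) -> float:
    """Chance that the chosen option turns out to be the right one for the patient."""
    return P_CORRECT[option]

for option in P_CORRECT:
    print(f"{option}: {expected_benefit(option):.0%} chance of the better outcome")

# Algorithmic deference: if disagreeing with the AI is perceived to carry the
# liability risk, a clinician may choose option B despite its lower expected
# benefit to the patient.
```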
The panel perceived it as dangerous for clinicians to disagree with the AI, both because of a potential loss of trust in clinicians and because of the difficulty of justifying a decision that went against the AI in the event of harm to the patient. Clinicians may therefore opt for a suboptimal choice to avoid disagreeing with the AI, because of the apparent shielding effect of agreeing with it. As healthcare policy moves towards greater integration of AI into clinical pathways and consultations, these factors need to be examined and addressed so that the impact of AI is positively transformative for clinicians, patients and the wider healthcare system.
References
1. Tran T, Felfernig A, Trattner C, Holzinger A. Recommender systems in the healthcare domain: state-of-the-art and research issues. Journal of Intelligent Information Systems. 2021;57:171-201.
2. NHS England. Information Governance Guidance: Artificial Intelligence. NHS England - Transformation Directorate; 2022. Available from: Artificial Intelligence - NHS Transformation Directorate (england.nhs.uk)
3. Tobia K, Nielsen A, Stremitzer A. When does physician use of AI increase liability? Journal of Nuclear Medicine. 2021;62(1):17-21.
4. Katz ED. Defensive medicine: a case and review of its status and possible solutions. Clinical Practice and Cases in Emergency Medicine. 2019;3(4):329-332. https://doi.org/10.5811/cpcem.2019.9.43975
5. Lawton T, et al. Clinicians risk becoming ‘liability sinks’ for artificial intelligence. Future Healthcare Journal. 2024;11(1):100007. https://doi.org/10.1016/j.fhj.2024.100007
6. Lyell D, Coiera E. Automation bias and verification complexity: a systematic review. Journal of the American Medical Informatics Association. 2017;24(2):423-431. https://doi.org/10.1093/jamia/ocw105
7. Crootof R, et al. Humans in the Loop. Vanderbilt Law Review. 2023;76:429. https://cdn.vanderbilt.edu/vu-wordpress-0/wp-content/uploads/sites/278/2023/03/19121604/Humans-in-the-Loop.pdf
8. Friedman B, Nissenbaum H. Bias in computer systems. ACM Transactions on Information Systems. 1996;14(3):330-347.