MPS Foundation
The white paper, Avoiding the AI ‘Off-Switch’: Make AI Work for Clinicians, to Deliver for Patients, was a collaboration between the MPS Foundation, the Centre for Assuring Autonomy at the University of York, and the Improvement Academy hosted at the Bradford Institute for Health Research.
The paper provides essential insights into how AI can be used to make healthcare systems more efficient and responsive. It aims to ensure that AI tools are integrated into healthcare delivery in a way that is usable, useful, and safe for both patients and the clinicians using them.
The authors state that the greatest threat to AI uptake in healthcare is the ‘off switch’. If frontline clinicians see the technology as burdensome or unfit for purpose, or are wary about how it will affect their decision-making, their patients, and their licences, they may refuse to use it.
Among the main concerns raised in the research is that clinicians could become ‘liability sinks’, absorbing all the legal responsibility for flawed AI-driven decisions.
The white paper is based on findings from the Shared CAIRE (Shared Care AI Role Evaluation) research project, which ran in partnership with the Centre for Assuring Autonomy and with the support of the MPS Foundation, established by the Medical Protection Society to support research of this kind. The project assessed the impact of a range of AI decision-support tools on clinicians, bringing together experts in safety, medicine, AI, human-computer interaction, ethics, and law.
The authors concluded that to unlock the potential benefits of AI technologies for patients, more needs to be done to generate confidence in AI among the clinicians who use these tools.
The white paper presents a range of recommendations intended to guide clinicians on AI use, applicable to doctors and dentists around the world. The guidance recommends that clinicians should:
Ask for training on the AI tools they are expected to use. This will help them to navigate their AI tool use more skilfully and know when confidence in an AI’s outputs would be justified, supporting their autonomy. This training should cover the AI tool’s scope, limitations, and decision thresholds, as well as how the model was trained and how it reaches its outputs.
Only use AI tools within areas of their existing expertise. Where a clinician’s knowledge of a specific case is limited, they should seek the advice of a colleague who understands the area well and can oversee the AI tool, rather than relying on the tool to fill the knowledge gap.
Feel confident to reject an AI output that they believe to be wrong, or even suboptimal for the patient. They should resist any temptation to defer to an AI’s output to avoid or reduce the likelihood of being held responsible for negative outcomes.
Regard the input from an AI tool as one part of a wider, holistic picture concerning the patient, rather than the most important input into the decision-making process. They should be aware that AI tools are fallible, and that a tool which performs well for an ‘average’ patient may not perform well for the individual in front of them.
Feel empowered to trust their instincts and judgement about appropriate disclosure of the use of an AI tool, as part of a holistic, shared decision-making process with individual patients. However, in some critical situations patients should be made aware that an AI tool has been used, and clinicians should ask their healthcare organisations for explicit guidance on this point.
Engage with healthcare AI developers, when asked and where possible, to ensure that AI tools are user-focused and fit for purpose for their intended contexts.
The white paper also states that healthcare organisations should ensure either that any procured AI system carries product liability cover for loss to a patient arising from an incorrect or harmful AI recommendation, or that the contract with the AI company includes an indemnity or loss-sharing mechanism for cases where a patient alleges harm from an AI recommendation implemented by a clinician and the clinician is subsequently held liable.
The white paper’s authors urge the Government, AI developers, and regulators to urgently consider these recommendations.
The potential opportunities of AI in healthcare are growing with each new development, and it is an exciting time to be in the healthcare profession. The recommendations set out in the white paper could help significantly in generating greater confidence in AI among healthcare professionals. With the healthcare sector one of the largest areas for AI investment globally, it is important that we understand and adapt to these changes in a responsible and reasonable way.