It has been said that we are in the “information age”: an age where technology governs our day-to-day lives in ways we did not think possible when we were children. The world is evolving faster than ever before.
One evolution is the use of AI. Gone are the days when AI was something we saw only in science fiction films. It is now non-fiction and becoming more and more a part of everyday life.
And this is certainly the case in medicine. When I wrote this article, I had just read in the news about an AI breast cancer screening trial. The objective is to explore using AI for breast screening radiology in order to free up radiologists and cut waiting lists.
This is just one example of AI being used in medicine. Around the world we are seeing AI being tested and used in a variety of medical settings with the potential to revolutionise medicine and health care in momentous ways.
However, like every development, while there are benefits there are also risks. Medicine and law go hand in hand, and while medicine is evolving faster than ever before, the law is taking a little longer to catch up. I can foresee that over the next few years we are going to see several claims head to trial where existing legal principles will be applied to the new age of medicine and the AI that is being used. While I do not foresee fundamental clinical negligence principles changing, I do think the context in which they are applied will change.
This is without a doubt uncharted territory for clinical negligence law and, while we do not know what the future holds, I highlight below some of the legal risks that clinicians should consider when using AI.
One of the big questions will be whether you should or should not use AI. The legal test has always been that you will have a defence if a responsible body of medical opinion would support your decision and that support rests on a logical basis. This raises two questions:
When will a responsible body of medical opinion support the use of AI for what I wish to use it for?
When will no responsible body of medical opinion support my decision not to use AI for what I wish to use it for?
Unfortunately, there is no hard and fast rule to answer these questions. At some point, not using AI will go from being reasonable to being unreasonable; it is just a question of when. We saw this in the metal-on-metal hip claims, where claims were brought not only against the manufacturers but also against clinicians, on the basis that they should have known the implants were unsafe yet kept using them.
While there is no definitive answer to these questions, a clinician should always consider:
What are your peers/colleagues doing? When are they using AI and when are they not?
What articles/publications are there to support/contradict your use of AI in general and in this clinical scenario?
Does your hospital/college have any guidelines/protocols on the use of AI in general and in this clinical scenario?
Does your regulator have any guidelines on the use of AI in general and in this clinical scenario?
Informed consent, which I will address below
The decision to deploy AI will always be a clinical judgment, and the above considerations will help you in your approach, because at some point I do feel there will be a successful claim brought against a clinician for not using AI when it was available.
Consent is a cornerstone of patient care. Given AI is relatively new to the world of medicine, I cannot emphasise enough how important it is that you take informed consent when using AI. This is not an exhaustive list but some of the things you should consider explaining are:
That AI is new and the reasons you wish to use AI
The benefits and limitations of AI
The safeguards you have in place for the AI's limitations
The alternatives to the use of AI and the risks and benefits of each one
A comparison of the AI and non-AI approaches
A comprehensive consent discussion will need to be undertaken and documented.
We have all done it. We have pressed the wrong key on our keyboard and suddenly our message takes on a new meaning. But if we consider this through an AI lens, putting the wrong data into the AI system may lead to the wrong interpretation by the AI. This may lead to the wrong diagnosis and treatment.
Let me reinforce this with a case study.
I have broken this down into four subcomponents:
AI will have a pre-programmed set of data/assumptions that it will use when analysing the data inputted and providing conclusions. These pre-programmed assumptions may lead to incorrect interpretations being delivered by the AI system.
Let’s take a hypothetical scenario: you work in Country A and have bought an AI program from Country B to interpret radiology. In Country B's population it is normal for a shadow to appear on the lung when an x-ray is performed; in Country A it is not. When using the AI to interpret an x-ray there is a real risk that it would not alert a clinician to a lung shadow, because its pre-existing data set deems lung shadows to be normal.
A clinician must therefore be aware that AI will have pre-programmed assumptions, and these may contradict the clinical picture. The golden rule to remember is that, just like humans, AI is not always right.
It is fair enough having AI to assist you, but you must always consider whether it is appropriate for the scenario. For example, you would not use an AI tool that diagnoses lung cancer to interpret prostate scans. It is important that AI is only used for the purpose for which it has been developed. Likewise, when purchasing an AI product, the purchasing clinic must make sure that the product achieves what they are seeking.
Coupled with this it is important for clinicians and clinics to understand the limitations of AI. AI is not going to be a magic wand and will have situations/scenarios that limit its performance. Make sure you understand these so you can make an informed choice regarding the use of AI.
Following on from this, we must also consider the bigger picture. The AI will only hold so much data regarding the patient when making its interpretations; it will not hold all of the clinician's knowledge. You must therefore be mindful of any facts you hold that the AI does not, and which could have affected its interpretation.
We are not at the stage of fully functioning driverless cars. We have speed limiters on cars, but we are not yet at the stage where we can let the car drive us from A to B. Medicine is no different.
While we have AI, it is merely a tool. It is not a replacement clinician. A clinician's judgment is still central to the patient's care, and it is your judgment the patient will be seeking to rely on, not the AI's. A clinician must always remember that AI is a tool to assist them and that the final call is theirs, based on all the evidence they have.
If we consider the case study above, the radiology scan was just one piece of the differential diagnosis. It was for the GP to decide on the diagnosis and what further investigations should have been done. Irrespective of what the AI says, the clinician remains in the driving seat regarding the patient's care.
I can foresee a scenario where the clinician relies completely on AI and the AI gets it wrong. This would lead to the clinician and the AI manufacturer being subject to a claim. Unless the claimant can prove the AI malfunctioned or was defective, I think it would be difficult to bring a claim against the AI manufacturer.
With regards to the clinician, they will be judged under clinical negligence law: whether a responsible body of medical opinion would support their decision and that support rests on a logical basis. I do not feel a judge would accept a clinician simply saying "the AI told me this was the answer". A judge would want to see that the AI outcome was weighed against the clinical picture at the clinician's disposal. A clinician who does this would have much stronger grounds to show that a responsible body of medical opinion would act the same way, even if the diagnosis was subsequently shown to be incorrect.
We then need to consider the other scenario: what happens if a clinician ignores the AI outcome? I do not foresee the fundamental clinical negligence legal tests being changed as a result of AI. Therefore, as long as a clinician can demonstrate that it was reasonable to disregard the AI's outcome, they should have a defence. As always, though, the key for any defence will be clear documentation explaining the reason the clinician disregarded the AI's conclusion and showing that this was clearly relayed to the patient.
With the use of AI and processing of data comes the inherent risk of data protection. Clinicians need to be aware of data protection laws and comply with these when obtaining, processing, and retaining patient data and information that is processed by AI. Likewise, clinic managers and owners also need to be aware of the same and have suitable training and safeguards in place to protect the data. A failure to do so could expose the clinician and the clinic to civil, regulatory, and criminal liabilities. If you are using AI, you should seek specialist data protection advice.
Part of data protection concerns data retention and making sure you have the facilities in place to retain any data you are required to. We are seeing that some AI tools require vast storage facilities due to what they are processing (histopathology slides being a prime example). It is therefore important that clinics/hospitals have the facilities in place to handle the processing and storage that is required. A failure to do so could expose the clinician and the clinic/hospital to civil, regulatory, and criminal liabilities. Again, if you are using AI you should seek specialist data protection advice.
We have all faced the blue screen on our computers: the point where it crashes and we call IT begging for assistance. AI is no different. It can crash, it can have glitches, things can go wrong. So make sure you have safeguards in place for when it does go wrong. For example, if the AI is not working, what will you do in its place? While AI glitches are for the manufacturer to fix, safeguarding against such an eventuality falls on the clinician and the clinic/hospital.
While safeguards apply to glitches, they also apply to the inherent limitations of the AI software. We have all bought a product that is 99.9% effective. If we apply this in an AI setting, what safeguards do you have for the 0.1% that could be incorrect? As I discussed earlier, a judge will not accept a clinician's or clinic's defence that they relied solely on the AI. It is likely a judge will want to see some independent judgment exercised when AI is being used. At the very least, the patient will need to be told about the limitations as part of the consent process.
If you are a clinic manager/owner, you will need to have governance in place for the AI. This will need to include a risk assessment of the AI and keep its performance and risks under constant review, with appropriate safeguards to handle those risks.
AI is evolutionary. It learns over the passage of time. It is therefore important that clinicians and clinic owners/managers keep the AI product up to date with the latest software.
If we go back to my hypothetical scenario regarding lung shadows, let's presume there is a software update to reflect Country A's demographic. This would have a significant impact on the interpretations given by the AI. It is therefore important for clinicians and clinics to keep the software up to date, because failing to do so could lead to potential claims if the update would have altered the subsequent events.
This is going to be an interesting area of law, as there will be a need to separate who owns the AI product, who owns the data being placed into the AI product, and who owns the information that comes out of the AI product and the knowledge it brings. If you are a clinic owner, make sure that this is identified and handled with specialist legal advice.
Before you use AI, you need to make sure you have undergone the appropriate training and, if you have not, make this clear to your employer. You will want training not only on how to use the AI but also on what sort of matters to use the AI for and what the limitations of the AI are.
If you are the practice manager or owner, you will need to provide not only training for your employees and contractors but also protocols governing the use of AI.
If you have the responsibility of purchasing or hiring an AI tool, make sure you get professional advice on the content of the contract. Should there be a dispute regarding liability, it is likely the contract will be the focus of the dispute. Things you should consider are the division of responsibility/liability and what protections the AI provider offers for the known risks.
This is an area of concern highlighted in a recent white paper supported by The MPS Foundation, which points to the potential for clinicians to become a "liability sink", ending up solely responsible for AI-driven decisions even if the AI system itself is flawed or malfunctions. The paper recommends that healthcare organisations ensure that any AI system they procure that makes recommendations to the clinician is covered by product liability for loss to a patient arising from an incorrect or harmful AI recommendation, or that the contract with the AI company includes an indemnity or loss-sharing mechanism for cases where a patient alleges harm from an AI recommendation implemented by a clinician and the clinician is subsequently held liable.
Saving the most important until last: documentation. This is key in any clinical negligence dispute, and it remains so in AI settings. Clear, detailed documentation must be kept at all times throughout the treatment of the patient. If you are using AI, a clear record of the consent discussion must be kept. Likewise, clear documentation of your review of the AI's output must be kept, and any discrepancies between the AI and your own opinion must be clearly explained.
AI is no longer the future but the present. These are exciting times, but we are in uncharted territory. Clinical negligence law has never had to grapple with medicine in an AI setting before, and it will be interesting to see how this area of law develops. My prediction is that, however it develops, it will progress rapidly to keep pace with the evolving role of AI in medicine, while the core principles of clinical negligence law remain unchanged.