News and views
The dental profession stands at the threshold of unprecedented technological transformation. With it come challenges for regulators, indemnifiers, and practitioners to ensure that the deployment of AI respects patient safety. This introductory article explores the evolving dentolegal landscape of AI in dentistry, highlighting key concerns around liability, data governance, and consent.
AI models are often described as ‘black-box’ systems. The term black box refers to a model whose internal workings are not visible, interpretable, or understandable – even to the people who designed it. While the inputs and outputs of AI may be accessible, the internal mechanisms by which conclusions are drawn often remain opaque. The black box presents a significant transparency challenge.
The clinician may be unable to fully account for how a recommendation was generated – raising concerns around informed consent, legal defensibility, and professional accountability. To put it another way, we need to understand the ‘why’ behind the ‘what’ of AI decisions.
To address these concerns, increasing emphasis is being placed on the development and deployment of explainable AI (XAI) systems – tools designed to produce outputs that are not only accurate but also interpretable by clinicians.
There is undoubtedly a growing need for clinician training in AI literacy. This goes beyond technical training; it should include legal and professional dimensions, ensuring that dentists are not only confident in using AI, but also competent in navigating the dentolegal responsibilities it introduces. Training should also address ethical considerations, including fairness, transparency, and the risk of algorithmic bias. We shall discuss the ethical aspects in a future article.
The dentist remains legally and ethically responsible for any decisions made when treating patients, regardless of whether those decisions were informed or influenced by AI.
It’s professional judgement, not technological output, that underpins the duty of care we owe to our patients. The responsibility to act in the patient’s best interests and to exercise reasonable skill and care cannot be delegated to an algorithm.
It is important to remember that clinical decisions must remain under clinician control. A human-in-the-loop approach ensures that AI output is reviewed and assessed by human expertise, with the clinician retaining primary responsibility for interpreting outputs and making judgements. In contrast, human-out-of-the-loop AI describes fully autonomous systems that make decisions with minimal or no human oversight. At the time of writing, no fully autonomous AI systems are known to be in routine clinical use in dental practice (although we are aware of research and development in this area).
As AI remains in the early stages of adoption in dentistry, clinicians must undertake a critical appraisal of any system prior to its use. This includes understanding its intended purpose and limitations. Clinical records should include information on the use of AI (see Table 1).
Table 1  Information on the use of AI to include in the clinical record
1. Record of AI use – Record that an AI tool was used and in what context.
2. AI output documented – Note AI findings and/or diagnostic output.
3. Clinical interpretation of AI output – Describe how the AI output was considered.
4. Reasoning for agreement or divergence – Record the rationale for accepting or rejecting the AI output.
5. Clinician oversight applied – Confirm the final decision was made by the clinician.
6. Patient informed – Note that AI use was discussed with the patient.
7. AI limitations – Document any known limitations of the AI system.
In England, the NHS Transformation Directorate's recent guidance1 on artificial intelligence emphasises that while AI can support healthcare professionals in their roles, it should not replace the clinician's expertise or the shared decision-making process with patients. Clinicians are advised to use AI tools as supportive aids, ensuring that any final decisions regarding patient care are made collaboratively with the patient and based on professional clinical judgement.
The integration of AI into clinical workflows and diagnostics raises a number of consent-related issues. Patients should be informed when AI is used, understand its role and limitations, and know that the final decision rests with the clinician, not the algorithm (items 5 and 6 in Table 1).
For example, the patient should be informed that AI-based radiographic software is being used to detect interproximal caries and that there is a risk of false positives. The patient should also be made aware of the potential for bias and error, and of the black box’s lack of explainability. Given this lack of explainability, the challenge is how to communicate algorithmic reasoning to the patient, and how confident we can be that the patient fully understands it.
This raises an important question: are our current approaches to obtaining consent still fit for purpose, or does the integration of AI into clinical care require us to rethink — or at the very least, refine — the consent process?
AI systems often rely on large datasets to function. Using clinical data (radiographs, intraoral images, chart notes) requires appropriate consent and data governance. Patients must be told if their data will be used beyond direct care, especially if used commercially or for machine learning.
Explicit consent is needed if identifiable patient data is used outside of clinical care — e.g., for training future versions of the AI.
Anonymised data might raise issues under GDPR or equivalent laws if re-identification is possible.
Patients should understand whether their data may be monetised.
A particular challenge concerns sensitive personal data since, once that data enters an AI model, it’s virtually impossible to remove it – the so-called unlearning issue. (Machine unlearning is the subject of AI research focused on enabling models to “forget” specific data, a discussion of which is beyond the scope of this article.)
The vexed question of liability for AI-related errors is currently under global debate, with a notable lack of established case law to provide clear guidance. Two types of liability could arise from the use of AI: clinical negligence and product liability.
In 2022, The MPS Foundation funded a project led by Tom Lawton and Zoe Porter testing different Human-Machine Interaction models for shared decision-making, and their ethical and legal implications. An introductory paper exploring some of the challenges that artificial intelligence poses for clinicians was published in Future Healthcare Journal.2 The authors point out that ‘Product liability claims are notoriously difficult, and forensically expensive, compared with ordinary professional negligence claims. The claimant needs to identify and prove the defect in the product. Within the context of opaque and interconnected AI, this may be an extremely difficult and costly exercise requiring significant expertise.’
Given the opaque nature of the ‘black box’, it is not easy to demonstrate that an incorrect output resulted from the alleged defect.
This is an area that governments and legislatures around the world are grappling with. The European Union's Product Liability Directive (EU) 2024/2853, which will be implemented by EU member states in the coming years, reduces the burden of proof, making it easier for claimants alleging injury from AI to bring successful claims against manufacturers, suppliers and other entities.
Companies might try to mitigate their liability by including disclaimers or by requiring users to agree to terms and conditions that limit the company’s responsibility for AI errors.
Clinicians using AI are likely to be the first target for claimants seeking compensation. In his lecture given at the first MPS Foundation AI Symposium in April this year, Tom Lawton referred to what he calls a ‘liability sink’. He explains that ‘Analogous to the way a ‘heat sink’ takes up unwanted heat from a system, the human clinician risks being used here as a ‘liability sink’, where they absorb liability for the consequences of the AI's recommendation whilst being disenfranchised from its decision-making process…’
In cases involving the use of AI, the court is likely to examine the origin of the error, identify who was responsible, and evaluate the extent of clinician involvement as the human-in-the-loop in the decision-making process.
When a healthcare professional delegates tasks to an AI system, it could be argued that they still retain a non-delegable duty of care — meaning they remain responsible for the outcomes.
A non-delegable duty of care refers to a legal obligation that cannot be passed on to another party — even if tasks are delegated, the duty remains with the healthcare professional.
It might also be argued that if the AI operates in a way similar to an employee, the clinician or healthcare provider could be held vicariously liable for its actions.
As a global organisation, we are aware that different countries and regions are taking various approaches to liability (and regulation). Ultimately, legal liability will likely be determined on a case-by-case basis. As AI systems become more autonomous and self-learning, legal reform may be necessary to address emerging risks and ensure clear accountability for clinical outcomes.
The introduction of new and complex technologies always comes with the shadow of risk. At the time of writing, there are too many unanswered questions when it comes to AI applications in everyday dental practice. Dentists should apply the precautionary principle — using AI as a support, not a substitute, and ensuring that its use does not compromise patient safety, consent, or professional responsibility.
Looking ahead, as AI systems improve, excessive caution may be harder to justify.