AI Arriving? Mind the Liability Gap
16/12/25

The increased use of AI in healthcare is well documented (Accelerating AI in the NHS | Capsticks). Benefits such as improved care, prevention and management of disease away from hospital, greater administrative efficiency and faster analysis of information must be set against risks such as data bias or AI hallucination. Most importantly, when AI does go wrong, or a patient suffers harm because the AI was not used in the right way, the question of who is liable is a difficult one. AI is not a legal entity and cannot be sued, creating a "liability gap". Instead it is the doctor, the medical institution or the developer who could each be potentially liable to some extent. This article considers the current legal position, potential future changes and what any healthcare organisation must do to minimise the litigation risk associated with AI adoption.
Legal position
Whichever defendant a claimant chooses to sue - and there could be many, as there will be separate data providers, software developers, AI developers, systems integration specialists and suppliers in addition to the doctor and the hospital - they first need to show that a duty of care was owed by that defendant and that there has been a breach of that duty causing harm. Trying to decipher opacity - the "black box" reasoning of AI and how it reached the decision it did - will not be easy. Disclosure of sensitive commercial information is not straightforward, and a wide range of medical and non-medical experts is likely to be needed, making it very difficult and costly for claimants and defendants alike to prove their case. A defendant may successfully argue that the harm caused by the AI was not reasonably foreseeable and that all reasonable precautions were taken, assuming they can evidence this. This is different to product liability claims, where such a defence is not available and a claimant simply has to show that a product is defective and that it caused the harm complained of. There is currently no strict liability for AI.
These and other issues were considered this summer in a Law Commission report (https://cdn.websitebuilder.service.justice.gov.uk/uploads/sites/54/2025/07/AI-paper-PDF.pdf). The challenges which autonomous and adaptive AI bring, and the issues around opacity and causation, were felt to make it more difficult for claimants to bring a successful claim. The report looked at other jurisdictions, noting in particular how the revised European Product Liability Directive issued in 2024 requires changes to EU Member State law by December 2026 which grant claimants harmed by AI in healthcare stronger rights to disclosure. In particular, once a claimant has presented a plausible case, it is the defendant who must produce documentation that is "necessary and proportionate", failing which the product is presumed defective. The Law Commission is not proposing any change to product liability law in the UK just yet, but it recognises that the Consumer Protection Act 1987, which sets out product liability law, is over thirty years old and was put on the statute book before AI or even the internet was in use. It is gathering opinion and building the conceptual and legal foundations for potential reform.
Practical steps
Until legal change is enacted, any healthcare provider bringing AI into their organisation needs to be able to evidence a compliance roadmap showing that they have taken all reasonable precautions when procuring and implementing AI. This includes:
- Due diligence looking at the background of the developers, other customers' experience, randomised controlled trials, and audits and articles on its use
- Mapping the AI supply chain and asking vendors to show who builds, trains and integrates the systems being offered.
- Reviewing contracts carefully to set standards, terms and definitions, user expectations, indemnities and liability limits.
- Thoroughly testing, monitoring and documenting the AI pathway with audit trails and version histories, whether that is your own tailor-made AI or an "off the shelf" product
- Ensuring staff are fully trained, that protocols and guidelines are developed around the use of AI, and that a multi-disciplinary AI committee oversees the use and integration of any AI.
By preparing for the risk of AI failure, an organisation can identify and control the risks associated not only with the "liability gap" but also with patient care delivered using AI.
A more detailed analysis of these issues is contained in a webinar available here.
How Capsticks can help
Capsticks' digital healthcare team can advise on all aspects of purchasing and integrating AI into your organisation, or developing your own AI, including issues ranging from data management and information sharing through to procurement, employment issues and litigation risk management. For further information please contact Majid Hassan or Andrew Latham.






