Technological development changes every field, and healthcare is no exception. Historically, new technologies have been used to assist licensed healthcare professionals, who remain in charge of providing services to patients. Under that model, it was always clear that the healthcare professionals, not the assistive technologies, were liable for any medical malpractice claims. However, artificial intelligence (AI) is advancing past a mere assistive tool and moving toward recommending or providing care itself. As AI is put to use in an expanding variety of ways, the legal question of who bears liability in a malpractice claim becomes blurrier.
AI in the medical field is anything but new. For example, in 1971, AI was used to create an artificial medical consultant. Using a search algorithm, the AI took a patient's symptoms as input and returned a diagnosis, which allowed physicians to cross-check their own diagnosis against this "consultant." Even these artificial consultants, however, were merely assistive tools: the clinician still had an active role in collecting the patient's symptoms, reaching a final diagnosis, and providing treatment. Liability for care involving these kinds of AI remained with the clinician, whose judgment took precedence over the AI's recommendation.
After more than 50 years of AI in the medical field, malpractice liability has become blurry because of the shift toward autonomous AI. When one thinks of an "AI model," one is likely to think of models like ChatGPT. Those models are generative AI: they create something new based on the prompts they receive. Autonomous AI, on the other hand, refers to systems "designed to perform tasks and make real-time decisions without human intervention."
Generative medical AI models have advanced to the point that at least one has scored at an "expert" doctor level on the U.S. medical licensing exam, and they can complete a variety of tasks at a higher level than some human clinicians. AI is being used as more than an assistive tool: beyond analyzing data, it can now extract relevant information and help facilitate personalized treatment recommendations. An AMA survey also revealed that a majority of physicians see the value of AI, indicating that its use is likely to increase. Questions of liability in medical malpractice claims are becoming less clear as AI takes a more advanced role in the delivery of care.
In a medical malpractice claim, the plaintiff is usually a patient or the patient's family member, and the defendant is typically the clinician or the institution providing care. To succeed, the plaintiff must prove four elements. First, the defendant owed the plaintiff a duty of care. In cases involving assistive AI, this element is met because assistive AI is no different from other tools a physician relies on, such as a radiology test that may be faulty: the clinician owes the patient a duty to confirm that the information produced by those tools is accurate and to make the final decision. With recommendations generated by assistive AI, the clinician typically knows more about the patient, including the medications the patient is taking, how those medications may interact, and any other factors that influence whether a proposed treatment meets the standard of care. Second, the defendant breached that duty. This element is satisfied when the treatment provided falls below the "standard of care," a determination usually made by a jury after hearing expert testimony. Third, the breach caused the plaintiff's damages, usually some form of injury, harm, or death; this element is satisfied if the plaintiff can show a causal link between the clinician's care and the injuries. Finally, the plaintiff must prove damages, showing actual harm resulting from the breach of duty.
The question of liability becomes more complicated with the shift from generative AI to autonomous AI, where the AI is the main provider of medical advice. As one doctor asks, "[W]ho is liable when patients rely on generative AI's medical advice without consulting a doctor? Or what if a clinician encourages a patient to use an at-home AI tool for help with interpreting wearable device data, and the AI's advice leads to a serious health issue?" To date, there are no medical malpractice cases on public record involving autonomous AI that answer the question of who is, or should be, liable. With these AI models, liability could vary depending on how the AI was involved: a patient's harm could stem from faulty programming, a clinician's failure to supervise the use of the AI, or the algorithm itself.
Some argue that autonomous AI companies should be fully liable if their products were used properly, while clinicians should remain liable when using generative AI models in an assistive capacity. Others argue that liability should be apportioned, though apportionment is difficult because AI algorithms are not designed to explain their outputs, making it hard for a clinician to assess the soundness of an AI's recommendation. Major AI developers have made their positions clear by denying liability for medical harm in their terms of service. To date, there has been no litigation testing whether these terms are enforceable, and the answer will likely vary by jurisdiction. Despite the lack of precedent, some argue that the terms should be unenforceable as against public policy, given the limited bargaining power of the patients and healthcare providers who use the models and the lack of insight into how the models generate their recommendations.
Others have argued for a liability approach between the two extremes, coined the "enterprise liability" model. Under this model, healthcare organizations that use autonomous AI would bear the majority of liability, while AI companies and creators would have a legal duty of explainability, which addresses the difficulties of apportioning liability. This duty would require developers of autonomous AI to build interpretable models so that clinicians can understand the reasoning behind the AI's recommendations.
Despite the lack of litigation specifically addressing autonomous AI in malpractice cases, courts do have some history of holding clinicians liable for the mistakes of others. For instance, courts have often declined to limit a doctor's liability and responsibility even when a drug company failed to adequately warn about potential adverse side effects. Courts have also held clinicians liable when they provided care based on incomplete or mistaken medical literature or patient intake forms. Does this mean courts will continue this trend and refuse to limit a physician's liability when the physician uses autonomous AI? Only time will tell.