Artificial Intelligence (“AI”) tools are increasingly integrated into healthcare, from reading diagnostic images and triaging emergency care to recommending treatment plans and managing medical records. These technologies promise efficiency and precision, but when they fail, the consequences can be life-altering.
AI-powered robotic surgical systems are now used in everything from gallbladder removals to joint replacements. Proponents argue that AI improves precision, reduces complications, and shortens recovery times. AI can be a useful surgical tool, but when these systems fail due to software glitches or inadequate training, patients can be seriously injured. If you or a loved one were injured, call a surgical error lawyer at Plakas Mannos.
In fact, there have been thousands of adverse events associated with robotic surgical systems. Examples of harm include: organ perforation or laceration; thermal burns from electrosurgical instruments overheating; delayed or failed procedures caused by system malfunctions; and severe post-op infections due to extended surgical times from complications.
If you or a loved one were injured during an AI-assisted surgery, different legal avenues may be available depending on how the failure occurred. A surgical errors attorney can evaluate whether you have a compensation claim.
A surgeon who mishandles a robotic surgical system can still be liable under traditional medical malpractice principles, including negligent operation of the device, inadequate training on the system, and failure to respond appropriately to a malfunction during the procedure.
Working with an experienced medical malpractice lawyer can help you pursue compensation for these errors.
If the issue stems from the surgical robot itself, the manufacturer may be liable under product liability theories such as design defect, manufacturing defect, or failure to warn.
We may soon see shared liability between physicians, hospitals, and AI developers, similar to trends in self-driving vehicle litigation.
AI tools can also cause harm outside of the operating room. AI poses serious risks when training data is biased or when certain groups are insufficiently represented. These flaws can lead to misdiagnosis, especially for underrepresented groups. A disturbing case of AI bias was documented in a 2019 study published in Science. A widely used algorithm that helped health systems prioritize patients for follow-up care was found to assign lower health risk scores to Black patients than to White patients with similar conditions. As a result, Black patients received fewer follow-up services, not because of clinical need, but because of flawed data.
Legal implications for relying on biased software could include: potential claims under civil rights laws, negligence by providers relying on biased tools, and even possible institutional liability for adopting algorithms without proper vetting.
AI holds promise for patients, but it isn't infallible. When AI leads to injury or missed diagnoses, patients have rights. At Plakas Mannos, our medical malpractice lawyers fight to hold responsible parties accountable, whether the error stemmed from human hands or a machine. We want to ensure our injured clients receive justice, even in this evolving legal landscape.
If you think AI played a role in your medical injury, call us for a confidential consultation. We understand the intersection of medicine, technology, and the law, and we’re here to help you navigate it. Contact us today to get started!
Elisabeth Jackson is an associate attorney with our firm and a member of our Medical Malpractice team. Her areas of practice include personal injury, wrongful death, criminal law, and general litigation.