Experts say artificial intelligence may make it more difficult to determine who is responsible for medical failures
Experts have warned that the use of artificial intelligence in healthcare could create a legally complex blame game when it comes to assigning responsibility for medical failings.
The development of artificial intelligence for clinical use has boomed, with researchers creating a range of tools, from algorithms that help interpret scans to systems that aid diagnosis. Artificial intelligence is also being developed to help run hospitals, from optimizing bed capacity to managing supply chains.
But while experts say the technology could bring countless benefits to healthcare, they also see cause for concern, from the limited testing of AI tools' effectiveness to questions about who is responsible if a patient has a poor outcome.
Professor Derek Angus, from the University of Pittsburgh, said: “There are certainly going to be instances where there is a perception that something has gone wrong, and people will look around to blame someone.”
The JAMA Summit on AI, hosted by the Journal of the American Medical Association last year, brought together a wide range of experts including doctors, technology companies, regulatory bodies, insurance companies, ethicists, lawyers and economists.
The resulting report, of which Angus is the first author, looks not only at the nature of AI tools and the healthcare fields in which they are used, but also examines the challenges they present, including legal concerns.
Professor Glenn Cohen of Harvard Law School, who co-authored the report, said patients may have difficulty showing fault in the use or design of an AI product. There may be barriers to obtaining information about its inner workings, and it may also be difficult to suggest a plausible alternative design for the product or to prove that a poor outcome was caused by the AI system.
“The interaction between parties may also pose challenges to bringing a lawsuit — they may point to each other as the at-fault party, they may have an existing agreement to contractually reallocate liability or have claims for damages,” he said.
Professor Michelle Mello, another author of the report, from Stanford Law School, said the courts are well equipped to resolve legal issues. “The problem is that it takes time and will involve inconsistencies in the early days, and this uncertainty increases costs for everyone in the AI innovation and adoption ecosystem,” she said.
The report also raises concerns about how AI tools are evaluated, noting that many of them fall outside the oversight of regulators such as the US Food and Drug Administration (FDA).
“For doctors, effectiveness usually means improved health outcomes, but there is no guarantee that the regulatory authority will require proof [of that],” Angus said. “Then, once they are released, AI tools can be deployed in a variety of unpredictable ways, in different clinical settings, with different types of patients, by users with different skill levels. There is no guarantee that what sounds like a good idea in a pre-approval package is actually what you get in practice.”
The report explains that at present there are many barriers to evaluating AI tools, including that they often need to be in clinical use before they can be fully evaluated, and that current evaluation methods are expensive and cumbersome.
Angus said it was important to provide funding to properly evaluate the performance of AI tools in healthcare, with investment in digital infrastructure a key area. “One of the things that came up during the summit is [that] the tools that were best evaluated were the least adopted, while the most widely adopted tools were the least evaluated.”