A trio of health policy analysts from the University of Pennsylvania, UC Berkeley, and the Crescenz VA Medical Center in Philadelphia is calling for more stringent rules governing the introduction of AI medical applications. Incorporating AI into medicine will prove immensely useful only if it is properly regulated, argue researchers Ravi B. Parikh, Ziad Obermeyer and Amol S. Navathe.
“Regulatory standards for assessing algorithms’ safety and impact have not existed until recently. Furthermore, evaluations of these algorithms, which are not as readily understandable by clinicians as previous algorithms, are not held to traditional clinical trial standards,” they write in an editorial published in the journal Science. Unlike drugs or devices, AI algorithms are not products with static behavior. Their performance depends on the data they receive, and “their predictions may change over time as the algorithms are exposed to more data,” they argue.
To safeguard patients whose medical treatment involves AI applications or devices, the researchers suggest implementing five standards. The first, termed ‘endpoints’, requires AI-enabled healthcare systems to demonstrate clearly identifiable benefits. The second involves establishing ‘benchmarks’ appropriate to the clinical area. The third ensures that variable input specifications are clearly defined, so as to achieve ‘interoperability and generalization’. The fourth addresses the ‘specific interventions’ tied to an AI system’s findings, and whether those interventions are appropriate and successful. And because the data that shape an AI system’s predictive abilities change over time, the fifth standard calls for regular, rigorous ‘audits’ of AI applications.