In black boxes we trust

Machine learning algorithms are inherently opaque; they offer little transparency into their inner workings. Yet, in the recent wave of "AI is the future" enthusiasm, clinicians and patients are asked to entrust their health to poorly understood black boxes. Some people are ready to accept this, but most are not yet. This reluctance may seem like a contradiction; after all, trusting black boxes is nothing new.

In medicine, almost everything is a black box: doctors prescribe, and patients accept, medicines with unknown mechanisms of action and implants so complex that no single person can grasp the entirety of their workings. Healthcare systems themselves are so complex that they have become a scientific research area in their own right.

So why do we trust even new chemical technologies, but not new software technologies?

The difference is in the evidence. Medicines and, to a lesser extent, medical devices undergo clinical trials, in several stages, to show how well they work and at what risk. Trials then get replicated, and the replication studies are summarised. Without evidence, using new technologies is like rolling dice.

Even in conventional medicine there is a huge shortage of evidence (that is, not enough studies are done), but far less still has been done on the use of machine learning and AI in medicine. More evaluation of AI in medicine is needed before any of it can be accepted into clinical practice. Evidence builds trust.

Evaluations need to be clearly documented and explained so they can be replicated and the strength of the evidence established. Clear documentation enables replication, and replicability builds trust.

Further reading: only 1 multi-site randomized controlled trial ever in medical AI
