AI Doc Episode 3: Do you want an AI Doctor?

The promise of AI doctors is to provide error-free, affordable healthcare for all: in other words, to be available, to be cheap, and to perform at least at the level of the best human doctors. Assuming technology may one day deliver an AI capable of serving our medical needs, we would still need to trust it. To earn that trust, an AI would need:

  • Evidence: Like all new innovations in medicine, it needs to be backed by evidence. Clinical trials and systematic reviews are traditionally how new drugs or devices are introduced into practice, but they might not be feasible for complicated devices such as AI. We would need evidence accumulated over a long period of real-world use.
  • Control: It is not enough to watch machines perform a task. If we can't understand machines well enough to control and improve them over time, we won't trust them. People trust machines when humans remain in control of them.
  • Flexibility: The healthcare system is incredibly complex, with many moving parts and with innovation built into it. An AI would need to keep working well even when the parts of the healthcare system around it change. For example, you might have a good algorithm for detecting cancer nodules in lung X-rays, but it should perform at least as well when given X-ray images of higher resolution (a check along the lines of the sketch after this list).
  • Resilience and security: Humans are very good at exploiting cognitive biases in other humans. Machine learning algorithms are much more easily manipulated, because they (so far) lack critical thinking, as the adversarial example below illustrates.
  • Expendable: Importantly, AI should be expendable. The only legitimate reason to keep an AI in use is that it improves outcomes for patients better than anything else on the market. If it doesn't, it should not be used.
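
To make the flexibility point concrete, here is a minimal sketch of what such a check might look like. The classifier and X-ray data are hypothetical stand-ins (an untrained network and random tensors), not a real nodule detector; the only point is the pattern of evaluating one model on the same cases delivered at several resolutions.

```python
# Minimal sketch, with hypothetical stand-ins: an untrained classifier and
# random tensors take the place of a real nodule detector and real X-rays.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

model = models.resnet18(num_classes=2)  # stand-in for a trained nodule classifier
model.eval()

# Hypothetical test set: 8 "X-rays" as 3-channel tensors, with binary labels
images = torch.rand(8, 3, 1024, 1024)
labels = torch.randint(0, 2, (8,))

# Same model, same cases, several input resolutions: a trustworthy system's
# accuracy should not collapse when the scans arrive sharper than before.
for size in (224, 512, 1024):
    resized = TF.resize(images, [size, size])
    with torch.no_grad():
        preds = model(resized).argmax(dim=1)
    accuracy = (preds == labels).float().mean().item()
    print(f"{size}x{size}: accuracy {accuracy:.2f}")
```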

MIT fooled Google's AI into believing a cat was guacamole
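
That cat-to-guacamole trick is an adversarial example: tiny, carefully chosen pixel changes that flip a model's prediction. As a rough illustration of how little it takes, here is a minimal sketch of the classic fast gradient sign method against a stock image classifier. The pretrained ResNet and the file name cat.jpg are assumptions for the sake of the example, not the actual MIT/Google setup, and preprocessing details such as input normalisation are omitted.

```python
# Minimal sketch of a fast-gradient-sign adversarial perturbation.
# Assumptions: a stock pretrained ResNet-18 and a local image "cat.jpg";
# input normalisation and other preprocessing details are omitted.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
image = preprocess(Image.open("cat.jpg")).unsqueeze(0)
image.requires_grad_(True)

# What the model thinks the image is before any tampering
logits = model(image)
label = logits.argmax(dim=1)

# Gradient of the loss with respect to the input pixels
loss = F.cross_entropy(logits, label)
loss.backward()

# Nudge every pixel a tiny step in the direction that increases the loss;
# the change is barely visible to a human but can flip the prediction.
epsilon = 0.02
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

with torch.no_grad():
    new_label = model(adversarial).argmax(dim=1)
print("before:", label.item(), "after:", new_label.item())
```

Whether an attack this simple succeeds depends on the model and the image, but the broader point stands: a classifier with no notion of critical thinking can be steered by changes a human would never notice.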


Comments, questions or queries? Let us know!