• Question: Since AI is now used for medical purposes, and it's very often wrong, is it really safe to use it yet?

    Asked by anon-381472 on 23 Jan 2024. This question was also asked by anon-380805.
    • Photo: Rosemary J Thomas

      Rosemary J Thomas answered on 23 Jan 2024:


      AI is a way of using computers to help doctors and nurses take care of people who are sick or hurt. AI can do many things, such as detecting diseases, helping design medicines, and giving advice.

      Sometimes, AI can be wrong, because it is not perfect and it does not know everything. But AI can also be very helpful, because it can learn from its mistakes and it can do some tasks much faster than humans. This can reduce the workload of our medical and healthcare professionals.

      AI is safe to use if we use it carefully and responsibly, and if we always check that it is doing the right thing. AI should support doctors, nurses, and healthcare professionals, not replace them.

    • Photo: Arpita Saggar

      Arpita Saggar answered on 24 Jan 2024:


      AI is safe to use as long as it is transparent (i.e., its capabilities and limitations have been accurately established) and humans do not become completely reliant on it. Most AI solutions in healthcare are best utilised for assisting doctors rather than replacing them.

      AI deployed for any real-world application, especially in high-stakes domains like healthcare, must be trustworthy. A lot of very capable AI models are black boxes, meaning that it is difficult to determine their inner mechanisms. However, there is a very interesting research area called explainable AI (also called XAI), which aims to decipher why a model makes a certain prediction for a given input. For example, suppose an AI system classifies an X-ray image as showing symptoms of pneumonia. An XAI algorithm would then try to determine which regions of the X-ray that system focused on before arriving at its prediction.

      XAI methods can help build trust in AI systems, making it easier to judge when a model's prediction can be relied on and when it should be discarded.
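      The X-ray idea above can be sketched in code. What follows is a minimal, hypothetical illustration of one simple XAI technique, occlusion-based saliency: we blank out one patch of the image at a time and measure how much the model's score drops. The "model" here is an invented stand-in (it just responds to the brightness of the top-left quadrant), not a real medical classifier; the function names are made up for this sketch.

      ```python
      import numpy as np

      def toy_model(image):
          """Hypothetical classifier score: this stand-in model only
          'looks at' the top-left 4x4 quadrant of the image."""
          return float(image[:4, :4].mean())

      def occlusion_map(model, image, patch=2):
          """Slide a zeroed-out patch over the image. A large drop in the
          model's score marks a region the model relies on, which is the
          kind of evidence an XAI method tries to surface."""
          base = model(image)
          heat = np.zeros(image.shape, dtype=float)
          for i in range(0, image.shape[0], patch):
              for j in range(0, image.shape[1], patch):
                  occluded = image.copy()
                  occluded[i:i + patch, j:j + patch] = 0.0
                  heat[i:i + patch, j:j + patch] = base - model(occluded)
          return heat

      image = np.ones((8, 8))      # toy 8x8 "X-ray"
      heat = occlusion_map(toy_model, image)
      # heat is positive only over the top-left quadrant, the region this
      # toy model actually uses -- exactly what an explanation should reveal.
      ```

      Real XAI methods for images (such as gradient-based saliency maps) are more sophisticated, but the goal is the same: highlight which regions drove the prediction so a clinician can check whether the model looked at the right place.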
