This is a great and important question. Yes, there are certainly ethical dilemmas. One big dilemma is making sure that any AI is safe, effective, reproducible, and an improvement on existing methods. We need to know that AI works in expected ways and that we can understand how it works. Clinical trials are used to check this.
Another dilemma is ensuring that AI algorithms do not contain bias, for example that a model does not behave differently for different patient groups.
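Purely as an illustration (not part of the original discussion), one simple way to probe for this kind of bias is to compare how often a model flags cases in different patient groups, sometimes called a demographic parity check. The predictions and group labels in the sketch below are made-up assumptions.

```python
# Minimal sketch of a demographic parity check: compare the model's
# positive-prediction rate across two (hypothetical) patient groups.

def selection_rate(predictions, groups, group):
    """Fraction of cases in `group` that the model flagged positive."""
    in_group = [p for p, g in zip(predictions, groups) if g == group]
    return sum(in_group) / len(in_group)

# Hypothetical model outputs (1 = flagged) and group labels, for illustration only.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rate_a = selection_rate(predictions, groups, "A")
rate_b = selection_rate(predictions, groups, "B")

# A large gap between the two rates is one warning sign of bias, although
# in practice fairness has many competing definitions and checks.
print(f"Group A rate: {rate_a:.2f}, Group B rate: {rate_b:.2f}, gap: {abs(rate_a - rate_b):.2f}")
```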
Another dilemma concerns data sharing. AI requires a large amount of data to train a model, and patient data must be shared safely and securely.
Medical research must have ethical approvals in place.
Comments
Gareth commented:
Emmiline has outlined some of the main ethical dilemmas. Another one is the extent to which you would allow AI to replace people in the military. Doing this can save lives, and giving AIs more autonomy can save more. On the other hand, allowing AI to make life-and-death decisions without human input could be very dangerous and is highly controversial.
On data sharing, is it OK to train models on data owned by other people (such as authors and artists), especially since the models may then compete with those same people? There are important test cases about this going through the courts at the moment.
Muhammad commented:
Another one is to ensure “equitable and fair distribution of benefits for all societal factions”.
Arunav commented:
The ethics of AI debate presents several challenges when viewed as an assessment of trade-offs. For example, transparency vs. security: would a higher degree of transparency about how AI systems are designed and deployed make them vulnerable to hacking and so compromise their security? Similarly, explainability vs. utility: would demanding deeper explainability of AI systems lengthen their development and make them unavailable at the point when their applications would be most beneficial? These are not binary yes/no trade-offs; the answer often lies along a spectrum and varies from one sector to another.