• Question: What are the risks involved in AI?

    Asked by anon-379137 on 11 Jan 2024.

      Andrew Maynard answered on 11 Jan 2024:


      There are plenty of possible risks associated with AI (although the destruction of life as we know it probably isn’t one — at least at the moment), but many of them depend on different types of AI and where the technology goes. Some of them include risks associated with fake news, people trusting machines too much, relying on imperfect AI leading to problems in financial and legal systems, the breakdown of supply chains because of glitches in AI, and even problems around bias and fairness within society.


      Muhammad Malik answered on 11 Jan 2024:


      Bias in AI systems can lead to unfair outcomes and raises genuine concerns. There are ethical dilemmas too, and security risks and potential over-reliance on AI are also on the list. The challenge lies in responsibly harnessing AI’s potential while addressing these concerns and ensuring it enhances, rather than hinders, our society and lives.


      Rosemary J Thomas answered on 11 Jan 2024:


      AI can be very helpful and amazing, but it can also have some risks or dangers, such as:

      – AI can do some jobs faster, cheaper, or better than humans, so some humans may lose their jobs or have to learn new skills.
      – AI can sometimes make wrong or unfair decisions, because it uses data or rules that are not good or complete.
      – AI can be used to make weapons or machines that can hurt people or damage nature, either on purpose or by accident.

      We can try to prevent or reduce these risks by making sure that AI is used for good purposes, that it is tested and checked carefully, and that it respects the laws and values of humans.


      Carl Peter Robinson answered on 12 Jan 2024:


      There are many risks regarding the use of AI, as there are with all systems we create, develop, and then deploy for general use or for a specific purpose. On a lighter note, one of my bugbears is that we’re currently in the Gimmicky AI era, where anything and everything is having “AI put into it” because it’s the current buzzword and there is money to be made (e.g., “Hey, let’s put AI in a toaster!”, “Hey, AI would be cool in your bookshelves!”). What I’m trying to highlight is the unnecessary application of AI just to make some quick cash, which creates a whole lot of consumer waste once the novelty wears off: not good for the planet!

      My main concerns are:
      1. The exploitation of open-source AI models and applications by criminals to cause harm to individuals or the general populace.
      2. Mismanaged and rushed development and rollout of AI systems, driven by the heavy competition we’re seeing right now, leading to big mistakes and fallout that cause harm.
      3. Military applications of AI that involve weaponisation without a human in the loop. A human should always be in command of any weaponised AI system and have the final decision on that system’s actions. Ideally, AI would not be used here at all, but sadly that Pandora’s box has already been opened.
      4. The ethical choice and use of AI, and the data used to train it, for certain applications (e.g., military, medical, anything personal that could discriminate).

      Focusing on ethics, as one of my colleagues mentioned, I would highlight bias in AI algorithms as a major risk. AI algorithms (models) are usually trained on large amounts of data, most often acquired from the internet. A lot of this data contains biases, which means that elements of that data show favouritism towards certain things (e.g., people, beliefs, points of view).

      This introduces problems, as such data then over-represents particular dominant groups or viewpoints in our society, either locally or globally. This unfairness could relate to, for example, gender, race, or social background. If that data is poorly gathered and not well prepared before being used to train an AI model, that model is in danger of reflecting the unfair viewpoints found in the data it was trained on; this has already happened several times!

      The result is that the model is likely to produce skewed output when used for its intended purpose. For example, when such an AI model is used as a tool for interviewing candidates for a job, it might dismiss certain candidates simply because of something very specific that is unfair to them. A couple of made-up examples: the model might have been trained to “believe” that “people with green eyes can’t be trusted”, or that “people who like cats are cleverer than people who like dogs”. How would you feel if you had green eyes and applied for that job, only for an AI algorithm to decide you could not have it simply because of your eye colour?!
      Dr Hannah Fry’s book, Hello World (https://hannahfry.co.uk/book/hello-world/), contains lots of great examples of bias and other ethical concerns.
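
      To make the green-eyes example above a little more concrete, here is a short, made-up Python sketch (not part of the original answer; the data, the feature names, and the use of scikit-learn are all invented purely for illustration). It shows how a model trained on biased past decisions can latch onto an irrelevant feature such as eye colour.

          # Toy illustration with hypothetical data: a model trained on biased
          # past hiring decisions learns to rely on an irrelevant feature.
          import numpy as np
          from sklearn.linear_model import LogisticRegression

          # Each row is a candidate: [years_of_experience, has_green_eyes]
          X = np.array([
              [5, 0], [7, 0], [2, 0], [6, 0],   # candidates without green eyes
              [5, 1], [7, 1], [2, 1], [6, 1],   # candidates with green eyes
          ])
          # The past decisions (labels) were biased: green-eyed candidates were
          # rejected regardless of their experience.
          y = np.array([1, 1, 0, 1,
                        0, 0, 0, 0])

          model = LogisticRegression().fit(X, y)

          # Two new candidates with identical experience, differing only in eye colour.
          candidates = np.array([[6, 0], [6, 1]])
          print(model.predict(candidates))  # likely [1 0]: the old bias is reproduced
          print(model.coef_)                # expect a clearly negative weight on eye colour

      In real systems the irrelevant feature is usually buried in the data (a postcode, the wording of a CV) rather than listed this plainly, which is part of what makes this kind of bias hard to spot.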
