• Question: With AI becoming more prevalent in everyday life, are you concerned about how criminals could exploit this?

    Asked by anon-377327 on 10 Jan 2024.
    • Photo: Carl Peter Robinson

      Carl Peter Robinson answered on 10 Jan 2024:


      Yes, I am, to the extent that I sometimes think about stuff like this as part of my work. Criminals are already using this technology for underhanded ends. For example, groups are using deepfake audio to scam money out of people by exploiting their vulnerabilities (https://www.theguardian.com/lifeandstyle/2023/aug/04/experience-scammers-used-ai-to-fake-my-daughters-kidnap).
      It’s a case of trying to stay ahead of those with a specific negative goal who want to use technology to accomplish it. That’s difficult, and these newer AI tools and open-source repositories aren’t going to make it any easier to nullify such threats. Perhaps the answer is public education, for a better general awareness of what AI is now capable of and how it’s being exploited. Additional safety checks, like greater use of multi-factor authentication to confirm identities, wouldn’t go amiss either. Believe me, research is going on, in part using machine learning and data science, that attempts to counter the negative use of AI tools for criminal purposes. But the net can only be cast so wide, and the technology is moving so rapidly, that you are still going to get reports at times of criminals successfully “getting away” with something or other by using an AI tool.

    • Photo: Kevin Tsang

      Kevin Tsang answered on 10 Jan 2024: last edited 10 Jan 2024 9:16 pm


      Although AI has been used to accelerate creativity for many positive purposes, it can certainly be exploited for illegal activities. Just as it becomes easier to find information that can harm others, it also becomes easier to find information that can protect people.

      Using current methods, people can create realistic-looking videos of family members, and very convincing audio, from photos and videos found on the internet and social media. These deepfakes have major implications for identity theft and, as in the article Carl linked, for scamming families. The technology is also improving every day.

      We have researchers in world-leading AI teams, like those at Meta and Google, working to develop methods for identifying generated images, such as watermarking, but it will be a constant battle as people find ways to dodge security.

      I think having an informed public can mitigate some of the risks posed by deepfakes.

      You can have a go yourself at identifying real vs fake images from this BBC quiz: https://www.bbc.co.uk/bitesize/articles/zg78239

    • Photo: Vian Bakir

      Vian Bakir answered on 11 Jan 2024:


      Yes, I am concerned. Deepfakes, for instance, can be used for scamming, fraud, and manipulation of people for political, commercial or ideological ends. Audio deepfakes (voice only) can be incredibly convincing, and studies show that people are very bad at distinguishing deepfakes from real content, even when they are told to look out for the deepfake.

    • Photo: Alexander Coles

      Alexander Coles answered on 11 Jan 2024:


      Criminals certainly will exploit AI and will cause problems for us. Rather more worrying, though, are the individuals, companies, and groups that may not be considered criminals using AI for things that may not be considered criminal behavior, like the mass production of fake information, or uses of AI that disproportionately affect a minority group of people. This kind of behavior is hard to detect and can cause great harm before action is taken to prevent it.

    • Photo: Andrew Maynard

      Andrew Maynard answered on 11 Jan 2024:


      Absolutely — we already depend on simpler forms of AI in our lives, and with the latest developments AI is beginning to impact most things we do in some way.

      And of course, where there are opportunities people are going to try and work out how to use them to benefit themselves, even where it isn’t legal. There’s growing concern around how people can use AI to fool you, to con you, to persuade you to believe things that aren’t true, to break into and control computer systems, to undermine elections, and even to disrupt the systems that society relies on — like supply chains.

      Fortunately there are also people working on how to prevent harmful or criminal uses.