• Question: How far do you AI should go?

    Asked by anon-379277 on 18 Jan 2024.
    • Photo: Andrew Maynard

      Andrew Maynard answered on 18 Jan 2024:


      This is a really difficult question as it means deciding what we’re OK with as humans. Personally, I don’t like the idea of military AIs that decide on their own who to kill. I worry about AI that takes the place of politicians and the Prime Minister (although maybe that would be a good thing!). I’m uncomfortable with AI that decides who is a criminal and who is not.

    • Photo: Carl Peter Robinson

      Carl Peter Robinson answered on 18 Jan 2024:


      I agree, it is a difficult question and one that is being asked regularly at the moment at all levels of the scientific community. I think it’s why a pause in AI development was called for last year after ChatGPT came out. People were concerned with how fast AI developments were occurring and where they could lead, before we really got a handle on what was happening and how we could legislate for it. Personally, I don’t want to see any kind of AI system that is used in a critical area of society without having a human being aware of what “decisions” it makes and that human having the final say in any decision required. I also worry about the amount of energy and carbon emissions these larger AI models use and give off. I worry that our current belief is that we need to keep building bigger models that require more energy to keep progressing AI development. Surely, we have to start to consider how far we want to push the sizes of these models what with the carbon impact and energy use they are having.

    • Photo: Demetris Soukeras

      Demetris Soukeras answered on 19 Jan 2024:


      This is a really difficult question, one that the best minds working on AI often disagree on.

      One of the problems people worry about is that its hard to predict what an AI would do that was smarter than us, particularly as AI “behaves” very differently to how humans do.

      There’s a scary thought experiment where an AI researcher tells a super powerful AI to make stamps.

      Seems fine right?

      So it makes stamps.

      It keeps on making stamps.

      It makes more and MORE and MORE!

      Until it runs out of paper.

      So the AI thinks, ok, I’ve run out of paper BUT I’ve been told to make stamps. So I have to find a way to keep on making stamps.
      So it takes the cat and turns it into stamps, then the carpet, then the floor then the house,
      then the street
      on and on and on until the WHOLE WORLD IS STAMPS!

      You see how it was given an instruction but it behaved in a way we didn’t expect it to.

      Now this is quite scary, but I don’t think a disaster scenario like this is very likely. But there are people much smarter than me who spend a lot of time working out how to make AI safe, align it with our human morals/rules and prevent that kind of disaster.

Comments