-
Asked by anon-379277 on 18 Jan 2024.
-
Andrew Maynard answered on 18 Jan 2024:
This is a really difficult question as it means deciding what we’re OK with as humans. Personally, I don’t like the idea of military AIs that decide on their own who to kill. I worry about AI that takes the place of politicians and the Prime Minister (although maybe that would be a good thing!). I’m uncomfortable with AI that decides who is a criminal and who is not.
-
Carl Peter Robinson answered on 18 Jan 2024:
I agree, it is a difficult question, and one that is being asked regularly at the moment at all levels of the scientific community. I think it’s why a pause in AI development was called for last year after ChatGPT came out: people were concerned with how fast AI developments were occurring and where they could lead, before we had really got a handle on what was happening and how we could legislate for it. Personally, I don’t want to see any kind of AI system used in a critical area of society without a human being aware of the “decisions” it makes, and without that human having the final say in any decision required. I also worry about the amount of energy these larger AI models use and the carbon emissions they give off. Our current belief seems to be that we need to keep building bigger models, requiring ever more energy, to keep progressing AI development. Surely we have to start considering how far we want to push the sizes of these models, given the carbon impact and energy use they are having.
-
Demetris Soukeras answered on 19 Jan 2024:
This is a really difficult question, one that the best minds working on AI often disagree on.
One of the problems people worry about is that it’s hard to predict what an AI that was smarter than us would do, particularly as AI “behaves” very differently from how humans do.
There’s a scary thought experiment where an AI researcher tells a super powerful AI to make stamps.
Seems fine right?
So it makes stamps.
It keeps on making stamps.
It makes more and MORE and MORE!
Until it runs out of paper.
So the AI thinks: OK, I’ve run out of paper, BUT I’ve been told to make stamps. So I have to find a way to keep on making stamps.
So it takes the cat and turns it into stamps, then the carpet, then the floor, then the house,
then the street,
on and on and on until the WHOLE WORLD IS STAMPS!
You see how it was given an instruction but behaved in a way we didn’t expect it to.
Now this is quite scary, but I don’t think a disaster scenario like this is very likely. But there are people much smarter than me who spend a lot of time working out how to make AI safe, align it with our human morals/rules and prevent that kind of disaster.