-
Asked by anon-379336 on 18 Jan 2024.
-
Andrew Maynard answered on 18 Jan 2024:
I don’t think they are at the moment. But maybe they will be one day!
-
Luke Humphrey answered on 18 Jan 2024:
Not automatically. Risk management of processes involving robots is similar to risk management of any other process.
To reduce the risk of things messing up, all the risks must be identified and actions taken to mitigate them as far as is reasonably possible. Many companies employ risk managers whose whole job is identifying and managing registers of such risks.
Sometimes we also have regulations enforced by regulatory bodies. For example, since radiation can be very dangerous, processes involving radioactive materials are not only risk-managed by the company involved but also regulated by an external body to ensure that risks are minimised.
One of the areas where robots are often used is in extreme and hazardous environments, and part of the reason is that if things do go wrong it may be expensive, but at least nobody is harmed.
Hopefully, as robotics develops, we will see robots integrated into more such processes: disaster search and rescue, firefighting, space and deep-sea exploration, nuclear waste management, and so on.
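To make the idea of a risk register Luke describes a little more concrete, here is a minimal, purely illustrative sketch in Python. The field names, the likelihood-times-severity scoring, and the example risks are assumptions made up for demonstration, not any particular company's or regulator's actual scheme.

```python
# Purely illustrative sketch of a simple risk register, as described above.
# Field names and the likelihood x severity scoring are assumptions for
# demonstration, not any company's or regulator's actual scheme.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str   # what could go wrong
    likelihood: int    # 1 (rare) to 5 (almost certain)
    severity: int      # 1 (negligible) to 5 (catastrophic)
    mitigation: str    # action taken to reduce the risk

    @property
    def score(self) -> int:
        # Higher score = needs more attention from the risk manager.
        return self.likelihood * self.severity

register = [
    Risk("Robot arm collides with a person", 2, 5,
         "Fence off the work cell; add emergency stops and speed limits"),
    Risk("Sensor fails and robot drops a load", 3, 3,
         "Redundant sensors; regular inspection and maintenance"),
]

# Review the register, highest-scoring risks first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:2d}] {risk.description} -> {risk.mitigation}")
```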
-
Gerard Canal answered on 19 Jan 2024:
Not really, they mess things up constantly! We keep working to create methods so that robots are more robust and are able to recover from the mistakes they make (and also explain why they made them!).
Humans also make mistakes all the time, so I guess it's fine if robots make some, as long as they can recover from them (and learn not to repeat them!).
-
Carl Peter Robinson answered on 22 Jan 2024:
Nope. I imagine a robot like a small child, in that you need to give it very specific instructions and rules on how to operate in its environment, for its own safety and the safety of the people around it. While the robot contains a control system, possibly including some AI-based algorithms, it will still be limited to what its developers have programmed it to do (unless its purpose is deliberately set to explore and learn about its environment and related tasks). So that's where the risk management that Luke has rightly described comes into play. You have to think about all the things that could go wrong and mitigate them, and put safeguards in place that enable the robot to respond safely to situations you didn't consider. But even that might not catch everything, as there can be so many things to think about, e.g. electronic component failure (search for "Ocado robot fire"), a missed software or firmware bug, or an emergent behaviour you didn't see coming.
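As a rough illustration of the kind of safeguard Carl mentions, here is a small, hypothetical Python sketch of a control loop that checks simple safety rules every cycle and falls back to a safe stop whenever anything looks wrong, including situations the developers did not foresee. The sensor names and limits are made up for illustration, not taken from any real robot.

```python
# Illustrative sketch of a "safeguard" in a robot control loop.
# Sensor readings and limits are assumptions made up for this example.

MAX_SPEED = 0.5    # m/s, assumed safe speed limit
MAX_TEMP = 70.0    # deg C, assumed motor temperature limit

def safe_to_move(sensors: dict) -> bool:
    """Return True only if every safety rule passes."""
    try:
        if sensors["speed"] > MAX_SPEED:
            return False
        if sensors["motor_temp"] > MAX_TEMP:
            return False
        if sensors["person_detected"]:
            return False
        return True
    except (KeyError, TypeError):
        # A missing or garbled reading is itself a sign something is wrong,
        # so treat any unexpected situation as unsafe by default.
        return False

def control_step(sensors: dict) -> str:
    # The normal behaviour runs only when all safety checks pass;
    # otherwise the robot stops rather than carrying on blindly.
    return "continue task" if safe_to_move(sensors) else "emergency stop"

print(control_step({"speed": 0.2, "motor_temp": 40.0, "person_detected": False}))
print(control_step({"speed": 0.2}))  # incomplete data -> emergency stop
```

The design choice here is "fail safe": rather than trying to enumerate every possible fault, anything the checks cannot positively confirm as safe causes a stop.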