• Question: Are robots reliable not to mess up?

    Asked by user498and on 18 Jan 2024.
    • Photo: Andrew Maynard

      Andrew Maynard answered on 18 Jan 2024:


      I don’t think they are at the moment. But maybe they will be one day!

    • Photo: Luke Humphrey

      Luke Humphrey answered on 18 Jan 2024:


      Not automatically. Risk management of processes involving robots is similar to risk management of any other process.

      To reduce the risk of things messing up, all the risks must be identified and actions taken to mitigate them as far as reasonably possible. Many companies employ risk managers whose whole job is identifying and managing registers of such risks.

      Sometimes we also have regulations enforced by regulatory bodies. For example, since radiation can be very dangerous, processes involving radioactive materials are not just risk managed by the company involved but also regulated by an external body to ensure that risks are minimised.

      One of the areas where robots are often used is in extreme and hazardous environments, and part of the reason is that if things do mess up it may be expensive, but at least nobody is harmed.

      Hopefully, as robotics develops, we will see robots integrated into more such processes: disaster search and rescue operations, firefighting, space and deep-sea exploration, nuclear waste management, etc.

    • Photo: Gerard Canal

      Gerard Canal answered on 19 Jan 2024:


      Not really, they mess things up constantly! We keep working to create methods so that robots are more robust and are able to recover from the mistakes they make (and also explain why they made them!).

      Humans also make mistakes all the time, so I guess it’s fine if robots make some, as long as they are able to recover from them (and learn not to repeat them!).

    • Photo: Carl Peter Robinson

      Carl Peter Robinson answered on 22 Jan 2024:


      Nope. I imagine a robot like a small child, in that you need to give it very specific instructions and rules on how to operate in its environment, for its own safety and the safety of the people around it. While the robot contains a control system, possibly including some AI-based algorithms, it will still be limited to what its developers have programmed it to do (unless its purpose is deliberately set to explore and learn about its environment and related tasks).

      So that’s where the risk management Luke has rightly described comes into play. You have to think about all the things that could go wrong and mitigate them, and put safeguards in place that enable the robot to respond safely to situations you didn’t consider. But even that might not catch everything, as there can be so many things to think about, e.g. electronic component failure (search for “Ocado robot fire”), a missed software or firmware bug, or an emergent behaviour you didn’t see coming.