Nico van Dijk, product manager, maintenance management

Imagine that you’ve deployed a robot to carry out a maintenance job on a property, or on a piece of equipment inside it, and the robot breaks something during its work. Who is liable? It’s an emerging question now that robots are being deployed professionally with increasing frequency. Whereas the debate so far has centred mainly on the dangers inherent in artificial intelligence, and has therefore remained largely hypothetical and speculative, the discussion is increasingly turning to the concrete risks of using robots, and to that very question: who is held liable when a robot breaks something while performing its tasks?

The good news and the bad news

I wrote in a previous blog that deploying robots to assist us with complex tasks is coming steadily closer. After all, a great deal of money can be saved by using sensors, robotics and simulations for inspections, repairs and major maintenance. So it’s no longer a question of whether it’s going to happen, but when. That means many organisations are now considering concrete applications for robots, along with the associated opportunities and dangers.
Even Google has weighed in on this discussion, using a cleaning robot as its running example. In the paper 'Concrete Problems in AI Safety', Google’s research team raises five practical problems:

  1. What if a cleaning robot does perform its duties, but knocks over a vase in doing so?
  2. What if a robot prioritises earning its reward over actually performing its task, for example by covering a small pile of rubbish rather than clearing it away? (A short sketch after this list illustrates the point.)
  3. What if a robot keeps asking for the same instructions over and over, and fails to adapt its duties to the instructions it is given?
  4. What if robots are not careful enough and, for instance, insert a wet mop into an electrical socket?
  5. What if robots fail to adapt when the environment is just a little different from the one they were trained for?
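To make the first two problems more concrete, here is a minimal, purely illustrative sketch (not taken from the paper; all names and numbers are invented) of how a naively specified reward can favour exactly the behaviour we don’t want, and how an explicit penalty for side effects changes the outcome:

```python
# Illustrative-only sketch: a naive reward that only "sees" visible cleanliness
# prefers hiding rubbish; adding a side-effect penalty restores sensible behaviour.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    visible_cleanliness: float  # what the naive reward measures
    effort: float               # cost to the robot
    side_effect_damage: float   # broken vase, rubbish merely hidden, etc.

actions = [
    Action("clear rubbish properly",   visible_cleanliness=1.0, effort=0.6, side_effect_damage=0.0),
    Action("cover rubbish with a box", visible_cleanliness=1.0, effort=0.1, side_effect_damage=0.8),
    Action("clean fast, knock vase",   visible_cleanliness=1.0, effort=0.2, side_effect_damage=1.0),
]

def naive_reward(a: Action) -> float:
    # Rewards only what is easy to measure: visible cleanliness minus effort.
    return a.visible_cleanliness - a.effort

def penalised_reward(a: Action, penalty_weight: float = 2.0) -> float:
    # Adds an explicit penalty for side effects the designer cares about.
    return naive_reward(a) - penalty_weight * a.side_effect_damage

print(max(actions, key=naive_reward).name)      # -> "cover rubbish with a box"
print(max(actions, key=penalised_reward).name)  # -> "clear rubbish properly"
```

The catch, of course, is that someone has to anticipate and price in every side effect in advance, which is precisely why these problems are hard.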

Who would be responsible?

The more duties robots can perform, the more issues like these will come to the fore. At the same time, robots are going to work ever more autonomously, which has implications of its own for legal liability. As things stand, only a natural person can commit a punishable offence. In the debate on self-driving cars, the outcome appears to be that liability can also rest with the owner or driver, just as it can in the case of damage caused by animals, for example. What does this mean for robots that perform maintenance or cleaning?

It’s conceivable that robots will soon have to make choices. Could a robot weigh up the risks and pass judgement on them? Is breaking a vase, for instance, legitimised if the place is indeed cleaned well as a result? And what if bodily harm is caused to a person?

In my experience, humans are far better at making these kinds of judgement calls than robots are. So I think we should aim for a situation in which a robot facing such a conflict can ask a person for help in making the call. That also means an actual person becomes responsible for the decision. It’s a practical solution to a practical problem: we are nowhere near having a fully autonomous robot yet, so we don’t yet need a solution in place for every one of these issues.
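As a rough sketch of what that escalation could look like in practice: the risk scores, threshold and operator interface below are all assumptions invented for illustration, not a real robot API, but they show the basic idea of the robot deferring high-risk decisions to a person who then owns the outcome.

```python
# Minimal sketch of the "ask a human" escalation idea described above.
RISK_THRESHOLD = 0.5  # above this, the robot must defer to a person

def estimated_risk(action: str) -> float:
    # Stand-in for whatever risk model the robot might use (values are made up).
    known_risks = {
        "vacuum empty hallway": 0.05,
        "mop near electrical socket": 0.9,
        "dust shelf holding a vase": 0.7,
    }
    return known_risks.get(action, 1.0)  # unknown actions are treated as risky

def ask_human_operator(action: str) -> bool:
    # In a real deployment this would notify a named operator, who thereby
    # becomes the responsible decision-maker; here we simply ask on stdin.
    answer = input(f"Approve risky action '{action}'? [y/N] ")
    return answer.strip().lower() == "y"

def decide(action: str) -> bool:
    """Return True if the robot may proceed with the action."""
    if estimated_risk(action) <= RISK_THRESHOLD:
        return True  # low risk: the robot proceeds autonomously
    return ask_human_operator(action)  # high risk: a person decides, and is accountable

if __name__ == "__main__":
    for task in ["vacuum empty hallway", "mop near electrical socket"]:
        print(task, "->", "proceed" if decide(task) else "skip")
```

The important design choice is not the threshold itself but the audit trail it creates: every high-risk action traces back to a human approval.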

This post was originally published on the Planon blog. To learn more about their software to support Smart Buildings, visit the Planon website.