The Department for Work and Pensions recently stated that the makers of self-driving cars and other artificial intelligence systems will be responsible if their inventions harm workers. Government spokesperson Baroness Buscombe confirmed that existing health and safety law applies to artificial intelligence and machine learning software.

While fears of out-of-control computers tend to focus on autonomous vehicles and intelligent robots, the Health and Safety Executive is more likely to rule on cases involving less obvious artificial intelligence, such as systems that direct factory workers or plan their schedules. Systems like these can have a major impact on employee health. And while the effects are not as obvious as being hit by a car, they can still have long-lasting repercussions.

In America, the justice system has already had to wrestle with the consequences of lethal software after an Uber self-driving car struck and killed a pedestrian in Arizona. Despite being equipped with cameras, radar and LIDAR, the modified Volvo XC90 failed to stop. The failure has been blamed on the perception software, which had been tuned to be less sensitive in order to avoid false positives, cases in which the car brakes or swerves to avoid dangers that don't really exist, such as a plastic bag blowing across the road.

After a four-month break and a raft of major changes to its programme, Uber is finally returning to American roads. But the case is a stark reminder of the dangers inherent in this bold push for convenience.

Inside contact centres we don't need to worry about driverless cars, but when we deploy AI we are still trusting software to make decisions and carry out tasks. We want to take our hands off the wheel and give the software enough responsibility to reduce our workloads, and that naturally incurs a degree of risk. The risks can be minimised, but companies will need to weigh them carefully before putting artificial intelligence in customer-facing positions.

What happens if the AI offends a customer? Or provides the wrong data?

While these questions about liability and risk may seem a distant concern for companies considering automation in the contact centre, they are likely to become more pressing as the technology grows more advanced and able to tackle more complex queries. This is a story we will follow with interest in the coming months and years.