Are you using artificial intelligence to boost your digital security?

While AI and machine learning are useful tools for cybersecurity professionals, they may already be in action on the other side of the front line.

Hackers and digital criminals are never far behind the latest technological innovations, and any tech that can make systems smarter and more secure can also be engineered to weaken and undermine those same systems.

Two recent reports highlight a growing recognition among cybersecurity professionals that their use of AI is a double-edged sword: what's working for them will soon be working for their adversaries.

Digital security firm Webroot released a survey of 400 security professionals in the US and Japan, focused on the use of AI and machine learning. In Game Changers: AI and Machine Learning in Cybersecurity, the firm reports that these technologies are already widespread in the security community, with 88% of cybersecurity professionals in the US on board.

AI and machine learning are deployed for a range of time-critical threat detection purposes, such as the following (a brief code sketch appears after the list):

  • Malware detection
  • Malicious IP blocking
  • Website classification

Machine learning, in which a computer learns from inputs and decides how to behave without being explicitly programmed, is seen as critical by 95% of security professionals. And 74% think their organisations will become dependent on AI within three years.

It’s clear that AI and machine learning have a role in digital security – but of course this raises questions about how criminals can use the same tools.

The bad guys have AI too

In the Game Changers report, 86% of respondents said they were concerned about hackers using AI to facilitate breaches.

And a joint report from researchers at Cambridge, Oxford and Yale universities warns that AI research often overlooks the potential for this technology to be exploited by hackers and criminals.

In The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation, the authors include several recommendations for restricting the potential of AI as a malicious tool. Outlining the criminal threats that AI may power, they suggest that attacks could become cheaper to organise, and therefore more appealing to criminals. AI could also enable a new breed of attacks by facilitating tasks that no human could complete. And the nature of cyber attacks may change as AI enables attacks that are more refined, more targeted – and harder to attribute.

The report’s authors recommend that AI researchers consider the darker side of their innovations, and involve stakeholders from the security community, so that potential risks can be detected and mitigated.

In future, AI researchers may want to assess whether their research can be safely published openly – or whether their findings pose a threat to digital security.

