While virtual reality headsets have stolen some of the limelight, artificial intelligence remains a headline-topping technology, and for good reason: consumers and businesses alike have voiced concerns about its security.
In conversations with Axios, Google has expressed concern that AI cybersecurity is being treated as an afterthought, drawing parallels with social media, which was developed with good intentions but has been used maliciously by many.
As the world rushes to advance artificial intelligence, the company also calls out those chasing advanced capabilities before nailing the basics.
How to make artificial intelligence safe
Google Cloud CISO Phil Venables told Axios: “Even while people are searching for the more advanced approaches, people should really remember that you’ve got to have the basics right as well.”
The tech giant’s six-pillar approach begins with assessing which existing security controls can be applied to artificial intelligence – tweaks to existing work can help lay the foundation as cybersecurity experts analyze and respond to threats. Second, Google wants to expand threat intelligence work to include AI-specific research.
Next, in response to the potential severity and scale of threats, the company calls for automation in response processes, regular reviews of security measures, and penetration testing to check how robust a response would be.
Finally, Venables hints at a sizeable opening in the job market that could see thousands of workers hired: he urges companies to build teams of people who deeply understand both the risks and the approaches to addressing them.
While many enterprises handling AI models may already be using some of these strategies, few are taking a holistic approach. Google says it is working with its own customers and with governments to address the matter.
In a parting message, Venables told Axios: “We think we’re pretty advanced on these topics in our history, but we’re not so arrogant to assume that people can’t give us suggestions for improvements.”
Clearly, far more consideration needs to go into the security and privacy concerns surrounding artificial intelligence to protect everybody, and frankly, the gap between where we are and where we need to be, given how quickly AI is accelerating, is alarming.