ICO tells businesses to adopt privacy enhancing technologies – Tech Monitor


The Information Commissioner’s Office (ICO) has urged companies to exercise caution when dealing with AI and to deploy privacy enhancing technologies (PETs) when handling personal information. The new guidance from the data watchdog comes as the government pushes for faster adoption of AI to grow the economy and boost productivity.

The ICO says privacy enhancing technologies can make it easier to share information securely (Photo: Ascannio/Shutterstock)

One of the measures designed to improve data security practices is a new set of guidelines for the use of privacy enhancing technologies (PETs). Published by the ICO, the guidelines are aimed at data protection officers and others working with large personal data sets across finance, healthcare, research and government.

PETs can make it easier to share personal and sensitive information safely, securely and anonymously. They work by creating versions of the data that can be anonymised, shared, linked to and analysed without giving direct access to the information itself. An example use case could be financial institutions sharing data with a third party that monitors for financial crimes, including fraud and money laundering.
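To make the idea concrete, here is a minimal sketch of one simple privacy enhancing technique: pseudonymising customer identifiers with a salted hash before a dataset leaves an institution. This is an illustration only, not the ICO's guidance or any specific product; the field names and salt-sharing arrangement are hypothetical. Institutions that agree on the same salt can still link records across datasets (for example, for fraud monitoring) without exposing the raw identifiers.

```python
import hashlib
import secrets

# Hypothetical salt agreed between the sharing parties in advance.
SHARED_SALT = secrets.token_hex(16)

def pseudonymise(customer_id: str, salt: str = SHARED_SALT) -> str:
    """Return a stable pseudonym for a customer identifier."""
    return hashlib.sha256((salt + customer_id).encode()).hexdigest()

# Example records held by a financial institution (illustrative data).
records = [
    {"customer_id": "AC-1001", "amount": 950.00},
    {"customer_id": "AC-1002", "amount": 12000.00},
]

# Replace direct identifiers before the data is shared externally.
shared = [
    {"customer": pseudonymise(r["customer_id"]), "amount": r["amount"]}
    for r in records
]
```

Because the hash is deterministic for a given salt, a third party receiving `shared` can still spot the same customer appearing across multiple datasets, while never seeing the underlying account identifier. Real deployments would layer further protections (key management, access controls, or stronger techniques such as secure multi-party computation) on top.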

PETs are for life, says ICO

John Edwards, UK Information Commissioner, said any organisation that shares large volumes of data, particularly special category data, should move towards using PETs over the next five years. “PETs enable safe data sharing and allow organisations to make the best use of the personal data they hold, driving innovation.”

These tools effectively build a secure environment for data from the ground up, allowing as little information as possible to be shared, gathered and retained. They do so while still complying with data protection laws and fraud prevention guidelines.

Edwards is meeting with other G7 data protection specialists to discuss how these sorts of techniques can be used to improve the flow of information across borders. “Together with our G7 counterparts, we are focused on facilitating and driving international support for responsible and innovative adoption of PETs by researching and addressing barriers to adoption with clear guidance and examples of best practice,” he said.

This includes an exploration of other emerging technologies, including the rapid development and deployment of generative AI. The aim is to ensure organisations across the world are innovating in a way that respects people’s information and privacy.

ICO warns against AI deployment

This isn’t the first time the ICO has spoken out about the potential privacy risks of AI and large-scale data gathering. Last week, the watchdog warned that businesses need to address the privacy risks of generative AI before rushing to adopt the technology.

Stephen Almond, executive director of regulatory risk at the ICO, said the organisation would be monitoring the situation. This includes regular and tougher checks on whether groups deploying generative AI are compliant with data protection laws. “Businesses are right to see the opportunity that generative AI offers, whether to create better services for customers or to cut the cost of their services, but they must not be blind to the privacy risks.”

He added: “We will be checking whether businesses have tackled privacy risks before introducing generative AI – and taking action where there is risk of harm to people through poor use of their data. There can be no excuse for ignoring risks to people’s rights and freedoms before rollout.”

The announcements come as governments around the world grapple with how to handle artificial intelligence in a safe and secure way. Italy and other parts of Europe have previously looked to deploy GDPR against companies like OpenAI and other chatbot providers over the way data is collected and included in both training and output.

The UK has established the Foundation Model AI Taskforce. The £100m group is chaired by investor Ian Hogarth and has been tasked with developing and exploring new tools for safe AI. Some of this work is likely to include rules around data security and privacy, similar to work carried out by the ICO.

The government is keen to drive the adoption of artificial intelligence throughout the economy, particularly in public services. Chancellor Jeremy Hunt is said to be keen to deploy AI in such a way that it increases the productivity of civil servants without increasing costs. This, according to a report in the FT, would allow him to reduce taxes before the next general election in 2024.
