CPPA releases initial automated decision-making rules proposal – International Association of Privacy Professionals


The long-awaited first draft of the California Privacy Protection Agency’s rulemaking on automated decision-making technologies dropped 27 Nov., setting the stage for some of the most consequential state-level U.S. artificial intelligence laws.

Automated decision-making technology includes systems that use machine learning, statistics or data processing to evaluate personal information to help humans make decisions. Such technology can also include individual profiling capabilities.

According to a draft provided to the IAPP, the proposed rules are broken down into three major sections: how to provide notice of the technology’s use, when and how opting out is allowed, and how consumers can access information used by the business. It also carves out key areas of discussion for the CPPA Board, including how businesses might approach profiling children under 16 and how consumer information can be used to train a given system.

The proposal will be discussed by the CPPA board at its 8 Dec. meeting. The formal rulemaking procedure is not expected to start until next year.

In a statement, CPPA Board member Vinhcent Le said the proposed rules are another instance where California is leading the country in privacy protection. “These draft regulations support the responsible use of automated decision-making while providing appropriate guardrails with respect to privacy, including employees’ and children’s privacy,” he said.

Pre-use notice requirements

The proposed rules would require businesses that use personal information in their automated decision-making systems to clearly explain to consumers how that information is used and their right to opt out. If the initial notice is insufficient, businesses must make more information available through a hyperlink that explains why the information is important to the business’s systems.

The additional information must also include a description of whether the technology has been evaluated for reliability or fairness, and the outcome of any such evaluation.

There are a few exceptions to the disclosure requirements, including businesses that use automated decision-making software for security reasons, to detect fraudulent or illegal actions, to protect consumer safety, or because the information is critical to the business’s ability to provide its goods or services.

However, businesses claiming an exception must inform consumers why they cannot opt out of the use of decision-making technology. Businesses that use such systems for behavioral advertising are not covered by the exception and must provide an opt-out mechanism.

Opting out requirements

The proposal makes it clear there are several instances where consumers have the right to opt out. Those cases include legal determinations, evaluation of their performance as a student, job applicant or employee, and when they are in public places. 

In the workplace, this includes when employers deploy keystroke loggers, attention monitors, location trackers or web-browsing monitoring tools for productivity tracking. Businesses that operate in publicly accessible places, like shopping malls and stadiums, and profile consumers using technologies like Wi-Fi or facial recognition must also provide opt-out options.

A critical point of discussion for the board will be how to handle instances where consumers are children. If a business knows a consumer it is profiling is under 13, it must provide a mechanism for parents to consent to that monitoring. Children between 13 and 16 must be informed of their right to opt out of profiling in the future.

Access right requirements

Under the proposed rules, consumers have the right to ask businesses what automated decision-making technology is used for and how decisions affecting them were made. Businesses must provide details on the system’s logic and the possible range of outcomes, as well as how human decision-making influenced the final outcome.

That information can be provided only if a consumer’s identity can be verified; otherwise, a request may be rejected.

As elsewhere in the rules, exceptions to that disclosure can be made if an access request could compromise a consumer’s safety, or if a business uses the information for security purposes or fraud prevention. But consumers have the right to appeal that decision to the CPPA and the California attorney general’s office, and businesses must provide links to those complaint websites.

Absent legislation

The CPPA’s rulemaking comes as policymakers around the world feel pressure to regulate AI. In the U.S., President Joe Biden released an executive order on AI at the end of October, marking the first significant federal action on the technology. The EU is also in negotiations to finalize the proposed AI Act, which could serve as a global model for AI regulation.

While the executive order put in place some strong requirements — such as requiring developers to share safety test results with the government before going public — and spurred most U.S. agencies to examine and issue guidance around AI use, its most striking feature was a call for Congress to pass privacy legislation, noting the two go hand in hand.

Absent federal legislation, U.S. states are having their own discussions about AI. California laid the groundwork for ambitious AI regulation when Gov. Gavin Newsom signed the state’s own AI executive order in September.
