
A view from DC: The FTC says 'Let It Go,' don't hold that data anymore


Fifty years ago, the original Fair Information Practice Principles were enshrined as one of the first major markers along the road to what would become the data privacy profession. Among those FIPPs were the seeds of ideas that would grow into the foundational privacy principles we still return to today. Ideas like individual autonomy, purpose specification and data minimization are rooted in these originally formulated “safeguards against the potential adverse effects of automated personal-data systems.”

The concerns that were salient in 1973, as bureaucrats pondered the risks of automated record-keeping systems, are echoed in 2023, as policymakers grapple with emerging risks of increasingly advanced artificial intelligence systems.

It is good we have fifty years of lessons to learn from.

With twin enforcement actions against Amazon this week, the U.S. Federal Trade Commission reminds us not to unlearn these lessons. If individual autonomy, purpose specification and retention limits are at the core of our approach to privacy — as they should be in both law and practice — we must keep them at the center of our practice, even as we build powerful new technologies. This is especially true when dealing with relatively sensitive or intimate data, such as, say, video recordings in our homes or voice recordings of our children.

To that point, Commissioner Alvaro Bedoya released a concurring statement this week, signed by both of his peers, with a timely warning:

“Machine learning is no excuse to break the law. Claims from businesses that data must be indefinitely retained to improve algorithms do not override legal bans on indefinite retention of data. The data you use to improve your algorithms must be lawfully collected and lawfully retained. Companies would do well to heed this lesson.”

The enforcement action in question involves Amazon’s Alexa voice assistant service and smart speaker product line. Although affirmative retention limits are rare in U.S. consumer privacy law, the FTC alleges that Amazon violated the Children’s Online Privacy Protection Act (COPPA) Rule “by retaining children’s personal information longer than reasonably necessary to fulfill the purpose for which the information was collected.” As one example of when affirmative deletion is appropriate, the proposed FTC order would require Amazon to delete personal information from inactive child profiles within 90 days.
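
To make the retention requirement concrete, here is a minimal sketch of what a purpose-bound deletion job might look like. All of the names here (the `is_child_profile` and `last_active` fields, the `delete_profile_data` helper) are hypothetical illustrations, not anything drawn from the order itself.

```python
from datetime import datetime, timedelta, timezone

# Retention window from the proposed order: personal information
# from inactive child profiles must be deleted within 90 days.
RETENTION_WINDOW = timedelta(days=90)


def find_expired_child_profiles(profiles):
    """Return child profiles inactive for longer than the window.

    `profiles` is assumed to be an iterable of dicts with
    `is_child_profile` (bool) and `last_active` (timezone-aware
    datetime) fields; a real system would query a datastore.
    """
    cutoff = datetime.now(timezone.utc) - RETENTION_WINDOW
    return [
        p for p in profiles
        if p["is_child_profile"] and p["last_active"] < cutoff
    ]


def purge_expired(profiles, delete_profile_data):
    """Delete personal information for each expired child profile.

    `delete_profile_data` is a hypothetical callable; in practice it
    must reach recordings, transcripts, backups and any downstream
    copies, or the deletion is not meaningful.
    """
    for profile in find_expired_child_profiles(profiles):
        delete_profile_data(profile["id"])
```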

The Alexa settlement, which imposes a monetary penalty of USD25 million for the alleged COPPA violations, also stems from general deception and unfairness charges related to Amazon’s alleged failure to fully delete voice recordings, transcripts of voice recordings and geolocation data, even after users requested deletion.

The second settlement of the week involves Ring, the security camera service Amazon acquired in 2018. Although it deals with in-home video footage rather than audio files, the Ring matter includes a number of parallels to the Alexa matter. The FTC alleges Ring violated the privacy and security of its customers by:

  • Failing to properly limit employee access to recorded videos. According to the FTC, Ring allowed employees and contractors to view live and stored videos from customers’ cameras with impunity and without user consent or knowledge.
  • Failing to properly disclose the use of in-home videos for product improvement. The FTC claims Ring used videos from customers’ cameras to develop new features and test its facial recognition technology, without adequately informing customers or obtaining their consent.
  • Failing to implement reasonable privacy and security practices. The FTC accuses Ring of not taking sufficient measures to protect customers’ videos from unauthorized access, such as by providing employee training, encrypting videos at rest (sketched in basic form below) and guarding against common data security attack vectors.
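
On the encryption-at-rest point, the sketch below shows the basic shape of the control, using the symmetric Fernet recipe from the widely used Python `cryptography` package. The file paths and key handling are placeholder assumptions; a production deployment would hold keys in a dedicated key management service, never on the same disk as the ciphertext, and would encrypt large video files in chunks rather than in a single read.

```python
from cryptography.fernet import Fernet

# Placeholder key handling: in production, fetch the key from a
# key management service (KMS/HSM) rather than generating it here.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a stored recording so the file at rest is ciphertext.
with open("recording.mp4", "rb") as f:  # hypothetical path
    ciphertext = fernet.encrypt(f.read())

with open("recording.mp4.enc", "wb") as f:
    f.write(ciphertext)

# Without the key, the stored file reveals nothing about its
# contents; with it, the original bytes are recoverable.
plaintext = fernet.decrypt(ciphertext)
```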

It is worth noting that most of these alleged practices ended either as Ring was positioning itself for acquisition or immediately after Amazon acquired the company, with the notable exception of the data security issues, some of which allegedly continued until recently. As part of the settlement, Amazon’s Ring has agreed to pay USD5.8 million in consumer refunds.

But much more impactful are the algorithm disgorgement terms in the settlement. The company will be required to delete any customer videos and face embeddings obtained prior to March 2018, as well as “any work products it derived from these videos, including any models or algorithms identified or reasonably identifiable by the Defendant as having been developed in whole or in part from review and annotation” of the files. If the deletion of these models is “technically infeasible,” the company must provide a sworn statement by its principal executive officer “certifying that such deletion or destruction is technically infeasible and providing a reasonable explanation for that determination.”

As many have predicted, algorithm disgorgement is becoming an increasingly common tool in privacy enforcement. To avoid the costly implications of such orders, privacy professionals would be well advised to check early and often on how personal information is used to train or improve algorithms. Personal data should not be incorporated into machine learning models unless it was collected in keeping with legal requirements and best practices, including any limitations on the purposes for which it can be used.
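
As a minimal sketch of what checking "early and often" could look like in a pipeline, consider a gate that filters records on consent, deletion requests and retention status before any training run. The record fields and the `model_improvement` purpose label are hypothetical, standing in for whatever provenance metadata a real system tracks.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Iterable, Iterator


@dataclass
class TrainingRecord:
    """Hypothetical provenance metadata carried with each record."""
    data: bytes
    consented_purposes: frozenset  # purposes the user agreed to
    deletion_requested: bool       # has the user asked for deletion?
    retain_until: datetime         # end of the lawful retention window


def eligible_for_training(
    records: Iterable[TrainingRecord],
    purpose: str = "model_improvement",  # hypothetical purpose label
) -> Iterator[TrainingRecord]:
    """Yield only records that may lawfully feed a training run.

    A record is dropped if the user never consented to this purpose,
    has requested deletion, or its retention window has lapsed; these
    are the failure modes at issue in the Alexa and Ring matters.
    """
    now = datetime.now(timezone.utc)
    for record in records:
        if purpose not in record.consented_purposes:
            continue
        if record.deletion_requested:
            continue
        if record.retain_until < now:
            continue
        yield record
```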

Missteps now could have long-lasting implications and downstream effects as regulators catch up with AI innovations.

Here’s what else I’m thinking about:

  • What’s the state of privacy legislation in the 118th Congress? The IAPP’s Research and Insights team updated our resource tracking all introduced privacy legislation this term, including sectoral and cross-sectoral proposals alike. The resource also provides a quick visual summary of which legislative vehicles have bipartisan support.
  • Perhaps it is time to look into the Texas and Florida privacy bills, which are now sitting on their respective governors’ desks. The IAPP published a helpful analysis by Doug Kilby, CIPP/US, of the “Data Bill of Rights” portion of Florida’s SB 262, which includes some widely applicable amendments, even though the majority of the law will only apply to a handful of companies. The Future of Privacy Forum’s analysis of the same bill also includes an explanation of its youth privacy and safety components, along with a helpful chart analyzing its major differences from California’s Age-Appropriate Design Code. Meanwhile, in Texas, the Data Privacy and Security Act is slated to become the tenth state comprehensive consumer privacy law, a remarkable doubling of the state consumer privacy landscape in a single year.
  • Maybe the Data Privacy Framework is, in fact, aligned with EU law. A new paper from American University Professor Alex Joel, CIPP/G, CIPP/US, provides a detailed analysis of why the use of the terms “necessary” and “proportionate” in Executive Order 14086, which orders the implementation of the U.S. promises baked into the DPF, constrains signals intelligence activities in a manner comparable to how those terms are used under European Union law and the European Convention on Human Rights. As he summarizes, “both the U.S. and EU legal frameworks establish controls in law with enforceable rights, approach the identification of national security objectives in similar fashion, and accept the need for bulk collection when accompanied by appropriate safeguards.”
  • Yet another widely signed warning letter about the existential threat of general AI was met with consternation by many in the policy community, who counter-warned that such warnings belie the immediate threats of AI systems related to equity, discrimination and privacy, while steering the conversation about solutions away from those with the power to adopt responsible corporate governance practices today. Speaking of governance, many discussions of guardrails for AI development envision the possibility of an international body to coordinate global standard practices. A new review paper provides a helpful overview of some of the cross-national governance efforts to date, such as “ethical councils, industry governance, contracts and licensing, standards, international agreements, and domestic legislation with extraterritorial impact.”

Upcoming happenings

  • 6 June at 13:00 EDT, the Information Technology & Innovation Foundation’s Center for Data Innovation and R Street host a webinar titled “Does the U.S. Need a New AI Regulator?”
  • 8 June at 17:00 EDT, the monthly informal Tech Policy Happy Hour will take place (Wunder Garten).
  • 10 June at 13:00 EDT, the Future of Privacy Forum and friends will march in the 2023 Capital Pride Parade. All are welcome to join, but you must register.
  • 11 June at 23:59 EDT, speaker submissions are due for IAPP’s AI Governance Global in Boston on 2-3 November.
  • 12 June at 17:30 EDT, the Internet Law & Policy Foundry presents its annual Foundry Trivia (Terrell Place).

Please send feedback, updates and security cam footage to cobun@iapp.org.


