Lessons for AI from the ad-tech era: 'We're living in a memory-less … – Digiday


Until Facebook’s Cambridge Analytica crisis sent shockwaves through the worlds of social media and politics, data privacy wasn’t necessarily on everyone’s radar as a priority. In the years since, companies have rushed to upgrade the ad-tech ecosystem, but as the industry knows, it’s not easy to retrofit an entire industry, inside or outside of a sandbox.

As the generative AI race gains momentum, it’s worth asking how lessons from ad tech could also apply to the new era of innovation. During its laissez-faire days, ad tech didn’t prioritize privacy until “cookie” became a dirty word. So what should companies do earlier this time with genAI to avoid past mistakes and mitigate unseen risks?

Future problems aren’t always clear from a platform’s intended use. For example, APIs like Facebook’s Open Graph were seen as a way to micro-target ads, but the risks were exposed only after the Cambridge Analytica scandal surfaced.

Katie Harbath, who spent a decade overseeing election security teams at Facebook, posed this question: What decisions are being made by companies that seem okay right now but that down the road might turn out to be problematic? (Harbath now works in tech policy through her think tank Anchor Change and various other groups.)

“If there’s a lesson being learned, these issues are layers of an onion being peeled back and it’ll be a lot more complicated than we think,” Harbath said. “We’re retrofitting the plane even as it’s in the air because the elections are already happening.”

An unwieldy industry learns from the past

To develop responsible AI, experts say it’s important to see how current internet problems like toxic content and data breaches have grown without early safeguards. Tiffany Xingyu Wang, CMO at OpenWeb, which develops engagement platforms for publishers, noted that a lack of diversity within tech and advertising companies has also contributed to other problems like hate speech and biased data. 

“If we don’t think about the data used to power AI, then I won’t be surprised to see in five years that these issues will get aggravated,” said Wang, who was previously CMO of contextual AI platform Spectrum Labs.

As co-founder of Oasis Consortium, an industry-backed think tank focused on designing ethical tech standards, Wang has spent years thinking about ways to develop healthier frameworks for the internet. For example, building content and data tools earlier can disincentivize users who might want to write hateful content into AI prompts.

The power of privacy rules

Academics are also looking at how recent privacy laws could help inform new rules for AI. In a paper published earlier this month, researchers from Tufts, MIT and Penn explored how Europe’s GDPR strategy could provide lessons for mitigating AI risks by focusing on transparency, accountability and data protection. The authors also point out that AI, like data, can be hard to classify because its level of risk varies based on context — which they say could undermine frameworks used in the EU’s AI Act.

“The European Parliament chose not to classify foundation models like ChatGPT as high-risk AI systems but still imposed roughly the same set of requirements on the operators of those models as it does on the operators of high-risk AI systems,” the authors wrote. “This suggests that while regulators may be struggling to come to terms with the broad range of risks presented by AI and the variety of AI systems and applications they want to address, they do not have a very diverse set of regulatory mechanisms or proposals to use to address those risks.”

Many say ad tech’s complexities are part of why governments and industry players alike have struggled to come up with new laws or even industry standards. (For example, one agency exec said ad-tech representatives still say their companies “own cookies” that actually belong to users and their devices.) If AI is even harder to grasp, education will be even more important so regulators don’t fall behind yet again.

The utopia that wasn’t

Others point out that the web was originally meant to be stateless. Andy Parsons, senior director of Adobe’s Content Authenticity Initiative, noted that cookies weren’t even intended for advertising when they were first invented in 1994. He thinks building transport-level security into online systems and giving users control over their data earlier might have left platforms healthier, even as privacy risks grew to what they are now.

“It would be foolish for me to say everything would be fine [or that] we wouldn’t have misinformation,” Parsons said. “Motivated state and government and [other] well compensated actors will always find ways around this. None of what we’re talking about today is perfect, but it’s a huge step in the right direction.”

Dangers of bad actors using AI are “absolutely true,” Zeta Global chief technology officer Chris Monberg told Digiday back in May, not long after AI experts published a letter warning about the tech’s potential risks.

“A lot of people say AI is a joke,” Monberg said. “But since when is technology ever a joke?…Technology is a path forward and we need the right people to put the right regulatory compliance and ethical considerations in place. And that ethical piece is a thing we’ve really been challenged with as a technology community.”

Avoiding the next walled gardens

After years of walled gardens governed by giants like Google and Facebook, others are pressing for far more transparency with AI. However, experts say the large language models being developed by OpenAI and others are each black boxes in their own right — which raises new questions about AI models’ training data and overall performance.

Kannan Srinivasan, a business professor at Carnegie Mellon University who’s researching AI, said it’s important to understand how systematic biases can undermine the data used to train LLMs. He also pointed out that the same tech giants that have dominated digital advertising — Google, Facebook and Microsoft — are now the same set of companies racing to develop the largest AI models.

“It seems as if we’ve started LLMs using the same product development process,” Srinivasan said. “Now we’re seeing the limitations around privacy and bias … We are operating in a memory-less world. We don’t seem to carry over what we learned.”
