
Can an AI chatbot be convicted of an illegal wiretap? A case against Gap's Old Navy may answer that



With generative AI tools like ChatGPT poised to power personal assistants that can take over the role of customer service agents, the privacy of your online shopping chat conversations is becoming a focus of court challenges.

Generative AI relies on massive amounts of underlying data — books, research reports, news articles, videos, music and more — to operate. That reliance on data, including copyrighted material, has already spawned numerous lawsuits from writers and others who discovered their work was used, without permission or compensation, to train AI. Now, as companies adopt gen AI-powered chatbots, another legal can of worms has been opened over consumer privacy.

Can an AI be convicted of illegal wiretapping?

That’s a question currently playing out in court for Gap‘s Old Navy brand, which is facing a lawsuit alleging that its chatbot participates in illegal wiretapping by logging, recording and storing conversations. The suit, filed in the Central District of California, alleges that the chatbot “convincingly impersonates an actual human that encourages consumers to share their personal information.”  

In the filing, the plaintiff says he communicated with what he believed to be a human Old Navy customer service representative and was unaware that the chatbot was recording and storing the “entire conversation,” including keystrokes, mouse clicks and other data about how users navigate the site. The suit also alleges that Old Navy unlawfully shares consumer data with third parties without informing consumers or seeking consent.

Old Navy, through its parent company Gap, declined to comment.

Old Navy isn’t the only company facing such claims. Dozens of lawsuits have popped up in California against Home Depot, General Motors, Ford, JCPenney and others, making similar allegations of illegal wiretapping of private online chat conversations, albeit not always involving an AI-powered chatbot.

According to AI experts, the likely outcome of the lawsuit is less intriguing than the charges: Old Navy and other companies will add a warning informing customers that their data may be recorded and shared, much as customer service calls already disclose that conversations may be recorded for training purposes. But the lawsuit also highlights salient privacy questions about chatbots that need to be sorted out before AI becomes a personal assistant we can trust.


“One of the concerns about these tools is that we don’t know very much about what data was actually used to train them,” said Irina Raicu, director of the Internet Ethics Program at the Markkula Center for Applied Ethics at Santa Clara University.

Companies haven’t been straightforward about their information sources, and with AI-powered chatbots encouraging users to interact and enter information, personal data can presumably end up in the system. In fact, researchers have been able to pull personal information out of AI models using specific prompts, Raicu said. To be sure, companies are generally concerned about the data going into generative AI models and the guardrails placed on their use as AI is deployed across corporate enterprise systems, a new example of the “firewall” issues that have always been core to technology compliance. Companies including JPMorgan and Verizon have cited the risk of employees leaking trade secrets or other proprietary information that shouldn’t be shared with large language models.

Lawsuits show US lagging on AI regulations

When it comes to regulating AI, and online interactions more generally, the U.S. lags behind Europe and Canada, a fact highlighted by the Old Navy lawsuit. The suit is based on a wiretapping law from the 1960s, when the chief concern was potential privacy violations over rotary phones.

For now, states have differing privacy rules, but there is no unifying federal-level regulation of online privacy. California has the most robust laws, having implemented the California Consumer Privacy Act, modeled after Europe’s GDPR. Colorado, Connecticut, Utah and Virginia have comprehensive consumer data privacy laws that give consumers the right to access and delete personal information and to opt out of the sale of personal information. Earlier this year, eight more states, including Delaware, Florida and Iowa, followed suit. But rules vary state by state, resulting in a patchwork system that makes it hard for companies to do business.


With no comprehensive federal online privacy legislation, companies are free to charge ahead without putting privacy protections in place. Generative AI, powered by natural language processing and analytics, is “very much imperfect,” said Ari Lightman, professor of digital media at Carnegie Mellon University’s Heinz College. The models get better over time as more people interact with them, but “it’s still a gray area here in terms of legislation,” he added.


Personal information opt-out and ‘delete data’ issues  

While the emerging regulations offer consumers varying levels of protection, it’s unclear whether companies can even delete personal information once it has been used: large language models can’t retroactively modify their training data.

“The argument has been made that they can’t really delete it, that if it’s already been used to train the model, you kind of can’t un-train them,” Raicu said.

California recently passed the Delete Act, which makes it possible for California residents to use a single request to ask all data brokers to delete their personal data or forbid them from selling or sharing it. The legislation builds on the California Consumer Privacy Act, which gives residents the same rights but requires them to contact 500 data brokers individually. The California Privacy Protection Agency has until January 2026 to make the streamlined delete process possible.

Overseas regulators have been grappling with the same problem. Earlier this year, the Italian Data Protection Authority temporarily disabled ChatGPT in the country and launched an investigation into the AI’s suspected breach of privacy rules. The ban was lifted after OpenAI agreed to change its online notices and privacy policy.


Privacy disclosures and liability 

Privacy warnings are often long and tedious to read. Prepare for more of them. AI-powered chatbots appeal to companies because they work 24/7, can enhance the work of human agents and, in the long run, can be cheaper than hiring people. They can also appeal to consumers if it means avoiding long wait times to speak with a human representative.

Chet Wisniewski, director and global field CTO at cybersecurity firm Sophos, sees the Old Navy case as “a bit superficial” because, regardless of the outcome, website operators will likely put up more banners “to absolve themselves of any liability.”

But issues around privacy will get stickier. As chatbots become more adept at conversation, it will be harder and harder to tell whether you’re speaking with a human or a computer.  

Privacy experts say that data privacy is not necessarily more of an issue when interacting with a chatbot than with a human or an online form. Wisniewski says basic precautions, like not publicly posting information that can’t be changed, such as your birthdate, still apply. But consumers should know that the data can and will be used to train AI. That may not matter much if the chat is about a return or an out-of-stock item. But the ethical questions get more complicated as conversations become more personal, whether the topic is mental health or love.

“We don’t have norms for these things, yet they’re already part of our society. If there’s one common thread that I’m seeing in the conversations, it’s the need for disclosure,” Raicu said. And repeated disclosure, “because people forget.”
