
Section 230 Won't Protect ChatGPT – Lawfare


The emergence of products fueled by generative artificial intelligence (AI) such as ChatGPT will usher in a new era in the platform liability wars. Previous waves of new communication technologies—from websites and chat rooms to social media apps and video sharing services—have been shielded from legal liability for content posted on their platforms, enabling these digital services to rise to prominence. But with products like ChatGPT, critics of that legal framework are likely to get what they have long wished for: a regulatory model that makes tech platforms responsible for online content. 

The question is whether the benefits of this new reality outweigh its costs. Will this regulatory framework minimize the volume and distribution of harmful and illegal content? Or will it stunt the growth of ChatGPT and other large language models (LLMs), litigating them out of mainstream use before their capacity to have a transformational impact on society can be understood? Will it tilt the playing field toward larger companies that can afford to hire massive teams of lawyers and bear steep legal fees, making it difficult for smaller companies to compete?

In this article, I explain why current speech liability protections do not apply to certain generative AI use cases, explore the implications of this legal exposure for the future deployment of generative AI products, and provide an overview of options for regulators moving forward.

Liability Protections in Current Law

The source of this conundrum is the text of Section 230, the law that scholars such as Jeff Kosseff cite as the basis for the rise of the internet. The law enables platforms to host speech from users without facing legal liability for it. If a user posts something illegal on a website, you can hold the user liable, but not the website. 

In recent years, Republicans and Democrats alike have criticized Section 230, dragging tech leaders to congressional hearings to lambast them about their content moderation practices and proposing dozens of bills with the goal of expanding accountability for tech platforms. Policymakers outside the United States have pursued similar reforms—in Europe, for example, the Digital Services Act heralds a new era in expanded accountability for internet platforms that host user content. 

Under current law, an interactive computer service (a content host) is not liable for content posted by an information content provider (a content creator): Section 230 stipulates that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 

When a platform uses Section 230 to defend against liability in court, the judge uses a three-part test to evaluate whether the case should be dismissed: 

  1. Is the defendant a provider or user of an interactive computer service?
  2. Is the plaintiff attempting to hold the defendant liable as a publisher or speaker?
  3. Is the plaintiff’s claim based on content posted by another information content provider?

If the answer to any one of these questions is no, then the defendant cannot use Section 230 as a defense to liability.
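To make the structure of that test concrete, here is a minimal Python sketch that models it as an all-or-nothing check. The class name, field names, and example values are hypothetical illustrations of the logic described above, not a representation of how any court actually reasons.

```python
from dataclasses import dataclass


@dataclass
class Section230Defense:
    """Illustrative model of the three-part test described above (hypothetical names)."""
    is_interactive_computer_service: bool   # 1. Is the defendant a provider or user of an interactive computer service?
    treated_as_publisher_or_speaker: bool   # 2. Does the claim treat the defendant as a publisher or speaker?
    content_from_another_provider: bool     # 3. Was the content provided by another information content provider?

    def applies(self) -> bool:
        # The defense is available only if all three questions are answered "yes";
        # a "no" to any one of them removes Section 230 as a defense.
        return (
            self.is_interactive_computer_service
            and self.treated_as_publisher_or_speaker
            and self.content_from_another_provider
        )


# Example: under the article's analysis, content drafted by the LLM itself is not
# "provided by another information content provider," so the third prong fails.
llm_generated = Section230Defense(
    is_interactive_computer_service=True,
    treated_as_publisher_or_speaker=True,
    content_from_another_provider=False,
)
print(llm_generated.applies())  # False -> the platform cannot invoke Section 230
```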

Section 230 defines an “interactive computer service” as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.” 

The statute defines an “information content provider” as “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.” The law is not entirely clear on the meaning of “development” of content “in part,” but courts have held that a platform may not use Section 230 as a defense if it provides “pre-populated answers” so that it is “much more than a passive transmitter of information provided by others.”

Will ChatGPT be considered an information content provider?

Given this definition, courts will likely find that ChatGPT and other LLMs are information content providers. The result is that the companies that deploy these generative AI tools—like OpenAI, Microsoft, and Google—will be precluded from using Section 230 in cases arising from AI-generated content.

The relevant question will be whether LLMs “develop” content, at least “in part.” It is difficult to imagine that a court would find otherwise if an LLM drafts text on a topic in response to a user request or develops text to summarize the results of a search inquiry (as ChatGPT can do). In contrast, Twitter does not draft tweets for its users, and most Google Search results simply identify existing websites in response to user queries.


Of course, determining whether a defendant is entitled to Section 230 protection is a fact-intensive analysis that depends on how a particular technology is used in a particular case. ChatGPT could be found to be an information content provider in some product integrations and not in others. If, for instance, the technology is used to simply determine which search results to prioritize or which text to highlight from underlying search results, then a court might find that it is entitled to Section 230 protection. That said, many of the LLM use cases that have been discussed in the weeks since ChatGPT was released seem likely to sit outside the current boundaries of Section 230 protection. 

Finding LLMs liable for content is consistent with the outcome that many policymakers are hoping for. Members of Congress have proposed dozens of reforms to Section 230 in recent years, and the underlying theme is consistent: Lawmakers want to increase platform liability for content moderation decisions.

It may also align with an evolution in courts’ interpretation of existing law. Earlier this week, the Supreme Court heard arguments in two cases that address whether platforms should be held accountable for the content they recommend to users. Just as with content generated by LLMs, the question lies in how to apportion liability when platforms use algorithmic decision-making to determine what information people see. If the Court finds in favor of the plaintiffs, it will establish a beachhead for liability when a platform uses an algorithm to surface content to users. 

In the course of the argument, the justices seemed to make the connection between the case before them and the emergence of new technologies like LLMs. Justice Neil Gorsuch suggested that LLMs would likely not receive Section 230 protections: “Artificial intelligence generates poetry. It generates polemics today that would be content that goes beyond picking, choosing, analyzing, or digesting content. And that is not protected. Let’s assume that’s right. Then the question becomes, what do we do about recommendations?”

Should the law shield LLMs from legal liability?

Courts will likely find that ChatGPT and other LLMs are excluded from Section 230 protections because they are information content providers, rather than interactive computer services. But is that a good thing?

In some cases, liability may seem desirable. An LLM could make it easier to break the law, for instance by providing instructions on how to build a bomb or manufacture and distribute illegal drugs. It could make it easier for a student to cheat. It could use racist or misogynistic language, or help fuel disinformation campaigns. Making LLMs liable for problematic content they generate in response to a user prompt will push engineers to design the products to minimize the likelihood they produce harmful content.

But this liability regime would also come with costs. If a company that deploys an LLM can be dragged into lengthy, costly litigation any time a user prompts the tool to generate text that creates legal risk, companies will narrow the scope and scale of deployment dramatically. Without Section 230 protection, the risk is vast: Platforms using LLMs would be subject to a wide array of suits under federal and state law. Section 230 was designed to allow internet companies to offer uniform products throughout the country, rather than needing to offer a different search engine in Texas and New York or a different social media app in California and Florida. In the absence of liability protections, platforms seeking to deploy LLMs would face a compliance minefield, potentially requiring them to alter their products on a state-by-state basis or even pull them out of certain states entirely.

With such legal risk, platforms would deploy LLMs only in situations where they could bear the potential costs. This could cause platforms to limit the integration of LLMs into certain products or impose significant restrictions on LLM functionality to ensure that it cannot be used to generate text that might land the platform in court. Platforms might restrict usage to pre-vetted users in an effort to exercise additional control over content. The result would be to limit expression: platforms seeking to limit legal risk will inevitably censor legitimate speech as well. Historically, limits on expression have frustrated both liberals and conservatives, with those on the left concerned that censorship disproportionately harms marginalized communities, and those on the right concerned that censorship disproportionately restricts conservative viewpoints.


The risk of liability could also impact competition in the LLM market. Because smaller companies lack the resources that Google and Microsoft have to absorb legal costs, it is reasonable to assume that this risk would reduce startup activity. Large tech companies already have more resources to pour into engineering complex AI technologies, and giving them a compliance advantage in emerging technologies increases the likelihood that the tech sector will become even more concentrated.

Critics of current Section 230 jurisprudence will consider these limitations to be welcome. Yet regardless of one’s stance on that jurisprudence, it is important to recognize that a liability regime limiting the deployment of LLMs will come with costs of its own. If LLMs have the potential to expand expression, broaden access to information, improve productivity, and enhance the quality of a wide range of existing tech products, then there are costs to a liability regime that threatens to constrain their development and deployment.

How should LLMs be regulated?

If Section 230 is left intact as LLMs emerge, then critics of the liability protection will get what they have long desired: cracks in the shield, since content generated by LLMs will fall outside its protections. But to explore the real potential benefits of LLMs and to understand their capacity both for harm and for good, Section 230 reform is necessary.

The rapid emergence of ChatGPT will likely put pressure on lawmakers to reform Section 230 in a way that is largely antithetical to their recent efforts: That is, it may push Congress to limit liability for generative AI products that generate content in response to user inputs. In order to flourish, LLMs will need a liability regime that allows them to be deployed without crippling them with lawsuits and court fees.

Ideal reform would not immunize LLM companies entirely but would instead align the costs they bear with social harm created by their products. That goal is easy to articulate but incredibly difficult to achieve.

One option would be to reform Section 230 to remove the “in part” language from the definition of an information content provider: “any person or entity that is responsible, in whole, for the creation or development of information provided through the Internet or any other interactive computer service.” That change would immunize LLMs in cases where they generate illegal content in response to a user prompt. The obvious downside is that it would provide even broader liability protection for tech platforms, contrary to the aim of most policymakers.

An alternative approach would be to reform Section 230 to explicitly carve out LLMs from the definition of an information content provider: “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service. A large language model shall not be considered an information content provider when it generates text in response to a query from another information content provider.” While this revision would maintain the existing scope of liability for the tech sector, it would provide too broad an immunity for LLMs: The law should provide sufficient incentive for LLMs to avoid generating illegal content.

These two alternatives might be more palatable to policymakers if they were combined with other Section 230 reforms that have been pushed by both parties. For instance, Democrats might be more willing to consider an LLM carveout if it were accompanied by provisions that prohibit tech platforms from using Section 230 in cases involving state civil rights law. Republicans might be more willing to consider LLM-related reforms if they were combined with revisions that expand liability when platforms remove conservative speech. While such pairings might make reform more politically feasible, they come with their own downsides. Experts contend they are likely to increase censorship in some cases and increase the proliferation of harmful content in others.


Perhaps the optimal approach at this stage would be to pass an LLM-specific liability law that provides narrow, time-limited immunity for LLM platforms, while enabling judges, policymakers, and academics to gather data about the kinds of content generated by LLMs and its impact on society. The objective would be to learn about the costs and benefits of LLMs, to better understand the types of situations where legal liability might arise, and to use this information to develop a smarter, stronger liability regime in the future. A new law could be crafted to provide some temporary immunity for LLMs, while also ensuring that LLMs could be held liable in certain cases, such as when they are actively involved in violations of federal criminal law. To facilitate sufficient transparency, the law should enable platforms to provide data to researchers and should protect both platforms and researchers when data is shared consistent with privacy and security best practices. Such a law would not open up the text of Section 230 and therefore would not alter current law for the rest of the internet.

The risks of this approach could be minimized by making it time bound and transparent. The law could sunset in two years so as to ensure that the limited liability for LLMs does not continue indefinitely, and it could create government-funded audit committees composed of technologists, academics, and community organizations to review the impact of LLMs as they are deployed more broadly. These committees might be required to report on specific aspects of LLM performance, such as whether they disproportionately harm marginalized communities or disproportionately censor specific political viewpoints. They might also be required to produce policy recommendations at the end of the two-year period so as to provide lawmakers with a road map for reform.

Of course, none of these options is likely to be politically feasible. In the current political environment, lawmakers are unlikely to support legislation that would provide liability protections for new technologies. It is far more likely that Congress will hold hearings about use cases that receive negative media attention and propose messaging bills that are aimed more toward generating headlines than toward establishing tangible new rules to guide the emergence of generative AI products. 

If Congress does not take action and LLMs are held liable as information content providers, then it will fall to the tech research ecosystem to ensure that the impact of this regulatory regime is quantified in some form. 

In many situations, we measure the impact of action we take, but not the impact of inaction. In the case of LLMs, a failure to act will have costs. The latest technology will be deployed more narrowly than it would be otherwise. Valuable features will never be built. Fearing legal fees and burdens, startups will be less likely to enter the market. Academics and tech experts should measure those costs, with the aim of developing a better understanding of the costs and benefits of the status quo that can then inform future policy. 

What will the future hold?

Policymakers have a range of policy tools they might use to manage the emergence of LLMs, but it’s hard to be optimistic that they will do it well. Tech reform has stalled in Congress for years, despite increasing political consensus that reform is necessary. Given partisan disagreements on the core objectives of Section 230 reform, political compromise is likely to remain elusive.

The most likely outcome is that Congress will stand still while courts take the lead in dictating the liability regime for LLMs. Because they must decide cases based on existing law rather than craft a more desirable liability regime, courts are not well positioned to design an approach that adequately balances the potential benefits of LLMs against their potential costs. As a result, Section 230’s critics will get what they have long wanted and its proponents what they have long feared: expanded liability for new technologies, alongside diminished opportunity for expression and innovation.




