
Insiders' View of the January 6th Committee's Social Media … – Just Security


This article is co-published with Tech Policy Press.

In the waning days of 2022, the Select Committee to Investigate the January 6th Attack on the United States Capitol released its final report. It focused largely on the conduct of former President Donald Trump in the weeks before the attack, outlining a range of efforts to overturn the results of the 2020 election through legal chicanery and outright violence. But the report’s emphasis on Trump meant important context was left on the cutting room floor. While Trump played an instrumental role in driving the attack, right-wing networks – comprising everyone from mainstream talking heads to extremist armed groups – drove the mass spread of conspiracy theories and far-right content on social media.

That spread could not have occurred without corporate policies of social media platforms and other enterprises that allowed dangerous rhetoric to proliferate online and ultimately contribute to violence in the real world. As investigators for the Committee, we were charged with examining the role these factors played in the events of January 6th. In this essay, we aim to call attention to the role that social media activity irrefutably played in the insurrection and to highlight the continuing threats to American democracy, as well as to share lessons learned that may help prevent future political violence. 

Indeed, the lack of an official Committee report chapter or appendix dedicated exclusively to these matters does not mean our investigation absolved social media companies of their failure to confront violent rhetoric. Nor does it mean that Trump was the sole enabler of what ultimately became an attempted coup against the U.S. government and the peaceful transition of power. The events of January 6th resulted from many interrelated factors, including increasingly radical rhetoric from leading conservative voices both on- and offline, and an alarming uptick in people who believe political violence to be a viable solution.

These threats to U.S. democracy are likely to outlive Trump’s political relevance, and they were not outside the purview of the Committee. According to its enabling legislation, the purposes of the Select Committee included:

“To investigate and report upon the facts, circumstances, and causes relating to the January 6, 2021, domestic terrorist attack upon the United States Capitol Complex … as well as the influencing factors that fomented such an attack on American representative democracy while engaged in a constitutional process.”

Following this broad mandate, in January 2022 the Committee subpoenaed four of the largest social media companies in the United States: Alphabet (Google and YouTube’s parent company), Meta (Facebook and Instagram’s parent company), Reddit, and Twitter. As members of the Committee’s Purple Team, we also investigated other platforms including (but not limited to) Gab, Parler, the “Dot Win” communities (e.g., TheDonald[.]win), 4chan, Discord, TikTok, 8kun, Telegram, MeWe, and Bitchute. Our findings suggest that the intersection between social media and violent extremism remains pervasive and, in fact, a central component of the Internet. Recognizing and confronting the threat posed by this phenomenon is key to preventing another January 6th. 

Social Media Companies’ Role in Enabling January 6th

On the day of the attack, observers quickly drew a causal link to the circulation of election-related conspiracy theories on various platforms. Social media companies responded defensively. For example, days after the attack, Facebook’s then-COO Sheryl Sandberg said that the January 6th Stop the Steal rally and subsequent attack were “largely organized on platforms that [didn’t have Facebook’s] abilities to stop hate.” In truth, many platforms, including Facebook, were used to orchestrate the insurrection in ways that varied as much as the platforms themselves. Some were used to broadcast conspiracies and calls for violence directly to mass audiences, while others were used to plot criminal activity in exclusive chat groups. While skeptics of social media’s role in the attack might say that the causal relationship between these tools and the violence itself is tenuous, this is a strawman argument. Regardless of whether online activity is a direct cause of extremist violence, social media platforms are responsible for conducting due diligence to prevent abuse of their services.


The largest platforms, including those subpoenaed by the Committee, have developed increasingly complex policies in recent years for moderating content including hate speech, violent incitement, and misleading claims affecting elections, public health, and public safety. Meanwhile, most fringe and alt-tech platforms like Gab or 8kun exist in opposition to prevailing norms and standards around content moderation, often aligning with right-wing figures who believe larger platforms unduly censor conservative speech. 

At the outset of the investigation, we believed we might find evidence that large platforms like Facebook, Twitter, and YouTube resisted taking proactive steps to limit the spread of violent and misleading content during the election out of concern for their profit margins. These large platforms ultimately derive revenue from keeping users engaged with their respective services so that they can show those users more advertisements. Analysts have argued that this business model rewards and incentivizes divisive, negative, misleading, and sometimes hateful or violent content. It would make sense, then, that platforms had reason to pull punches out of concern for their bottom line.

While it is possible this is true more generally, our investigation found little direct evidence for this motivation in the context of the 2020 election. Advocates for bold action within these companies – such as Facebook’s “break glass” measures or Twitter’s policies for handling implicit incitement to violence – were more likely to meet resistance for political reasons than explicitly financial ones. For example, after President Trump told the Proud Boys to “stand back and stand by” during the first presidential debate in 2020, implicit and explicit calls for violence spread across Twitter. Former members of Twitter’s Trust and Safety team told the Select Committee that a draft policy to address such coded language was blocked by then-Vice President for Trust & Safety Del Harvey because she believed some of the more implicit phrases, like “locked and loaded,” could refer to self-defense. That phrase was much discussed in internal policy debates, and it was not chosen out of thin air – it had been frequently invoked following the shooting by Kyle Rittenhouse in Kenosha the previous summer. But the fact that it appeared in only a small fraction of the hundreds of tweets used to inform the policy led staff to conclude that Harvey’s decision was meant to avoid a controversial crackdown on violent speech among right-wing users. Ironically, elements of this policy were later used to guide the removal of a crescendo of violent tweets during the January 6th attack, when the Trust & Safety team was forced to act without leadership from their manager, whose directive to them was, according to one witness, to “stop the insurrection.”

At Facebook, many of that company’s “break glass” measures were implemented effectively before the election, but when conspiracy theories denying the election’s outcome began to spread through groups using the catchphrase “Stop the Steal,” the company declined to take decisive action. Leaked documents show that between the election and January 6th, nearly all of the fastest growing groups on Facebook were related to Stop the Steal, and these were rife with hate speech and calls for violence. The growth of these groups was not the result of Facebook’s recommendation algorithm; rather, a small number of organizers abused Facebook to invite huge numbers of individuals to the groups. Many of these super-inviters used backup groups to reconstitute those Facebook did remove. 
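The distinction between algorithm-driven growth and invitation-driven growth is, at bottom, a question of concentration: did a handful of accounts generate most of the joins? As a purely illustrative sketch – assuming hypothetical access to join logs with inviter information, not any data format Facebook actually provides – an analyst might quantify that concentration like this:

```python
from collections import Counter

def invite_concentration(join_events, top_n=10):
    """Share of invite-driven joins attributable to the top_n inviters.

    join_events is an iterable of dicts with hypothetical fields
    "inviter_id" and "joined_via"; these names are illustrative only.
    """
    invites = [e["inviter_id"] for e in join_events
               if e.get("joined_via") == "invite" and e.get("inviter_id")]
    if not invites:
        return 0.0
    counts = Counter(invites)
    return sum(c for _, c in counts.most_common(top_n)) / len(invites)

# Toy data: three "super-inviters" drive 900 of 1,000 joins; the rest
# arrive via recommendations.
events = (
    [{"inviter_id": f"organizer_{i % 3}", "joined_via": "invite"} for i in range(900)]
    + [{"inviter_id": None, "joined_via": "recommendation"} for _ in range(100)]
)
print(f"Top 10 inviters account for {invite_concentration(events):.0%} of invited joins")
```

A share heavily concentrated in a few accounts points to the mass-invitation abuse described above rather than to recommendation systems.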


Facebook staff saw this activity and called for stronger action. Brian Fishman, who headed Facebook’s Dangerous Organizations Policy team, told the Select Committee he thought the platform should have responded more forcefully to the Stop the Steal movement, the organizers of which were “absolutely” organizing on Facebook. He suggested the company could have taken action using a policy against coordinated social harm, meant to target networks that fall short of formal organization but nevertheless promote violence, hate speech, bullying, harassment, health misinformation, or other harms. The company did not do so, nor did it implement a broader policy against election delegitimization; Fishman told the Committee that so much of the right’s media ecosystem was complicit in election denial that such a policy would have been indefensible (see especially pp. 93-94 of his interview transcript). These decisions were not necessarily about immediate revenue considerations; they were part of a larger pattern of tolerating bad behavior in order to avoid angering the political right, regardless of how dangerous its messages might become.

One clear conclusion from our investigation is that proponents of the recently released “Twitter Files,” who claim that platform suspensions of the former President are evidence of anti-conservative bias, have it completely backward. Platforms did not hold Trump to a higher standard by removing his account after January 6th. Rather, for years they wrote rules to avoid holding him and his supporters accountable; it took an attempted coup d’état for them to change course. Evidence and testimony provided by members of Twitter’s Trust & Safety team make clear that those arguing Trump was held to an unfair double standard are willfully neglecting or overlooking the significance of January 6th in the context of his ban from major platforms. In the words of one Twitter employee who came forward to the Committee, if Trump had been “any other user on Twitter, he would have been permanently suspended a very long time ago.” 

Lessons Learned

Our investigation left us with several observations about how Congress and other stakeholders can respond to these continuing technological, social, and political threats to U.S. democracy. First, greater transparency in the realm of social media is essential. Social media experts who spoke with us during the investigation bemoaned the lack of insight policymakers have into how platforms work and their effect on American society today. If scholars, policymakers, and practitioners had access to more data from the platforms, they could generate more accurate empirical findings about the relationship between social media and extremism. Consider YouTube’s boasts about changes to its recommendation algorithm to reduce consumption of fringe and “borderline” content: without more detail, external observers cannot evaluate the veracity of these claims. It is possible YouTube does not even preserve the data necessary to study the impact of borderline content on users. These barriers to accountability research are a problem, but there are legislative solutions waiting to be passed.

Similarly, during the 2020 election and its aftermath, platform executives made decisions with huge consequences for political discourse, entirely outside of public view. While the available evidence contradicts simplistic complaints that Big Tech censors conservatives, the platforms’ opacity makes such suspicion understandable.

While some decisions seem arbitrary and wrongheaded – like Twitter senior management’s apparent refusal to take a stand against implicit incitement despite evidence brought by its staff – others involve difficult tradeoffs that the public rarely appreciates. These tradeoffs can go far beyond a binary choice between taking down and allowing different categories of content; for example, several of Facebook’s “break glass” measures involved changing how machine learning systems detect and downrank, rather than remove, content that might violate policy. These systems are not perfect and cannot always reach a confident conclusion, but they are necessary to moderate content at scale. Depending on how aggressively platforms deploy them, such mechanisms generate false positives – permissible speech that is removed – or false negatives – harmful content that remains online.
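To make that tradeoff concrete, here is a minimal, hypothetical sketch of a threshold-based enforcement layer. The classifier score, thresholds, and action names are invented for illustration; they do not describe any platform’s actual system.

```python
# Purely illustrative: a toy enforcement layer over a hypothetical classifier score.
# Real systems combine many signals; these thresholds are made up to show the tradeoff.

REMOVE_THRESHOLD = 0.95    # high confidence of a violation -> take the post down
DOWNRANK_THRESHOLD = 0.70  # uncertain -> leave it up but reduce its distribution

def moderate(violation_score: float) -> str:
    """Map a model's estimated probability of a policy violation to an action."""
    if violation_score >= REMOVE_THRESHOLD:
        return "remove"
    if violation_score >= DOWNRANK_THRESHOLD:
        return "downrank"  # the demote-rather-than-delete style of intervention
    return "allow"

# Lowering the thresholds catches more harmful posts (fewer false negatives) but
# suppresses more permissible speech (more false positives); raising them does the
# reverse. Where to draw these lines is a policy judgment, not a purely technical one.
for score in (0.50, 0.75, 0.97):
    print(f"score={score:.2f} -> {moderate(score)}")
```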


Presently, platforms balance these alternatives almost entirely behind closed doors. But these decisions are too important to society to be left to corporations that lack democratic legitimacy. Congress should lead a robust conversation about the role of content moderation and artificial intelligence in the public square, drawing on experts and advocates as well as technology companies to promote awareness of how these processes work and greater consensus on how online spaces should be governed. 

Beyond transparency, there are no quick or simple legislative solutions for untangling social media from the complex phenomenon of rising extremism. Instead we must pursue a host of pro-democracy and counter-extremism reforms both on and offline. As a starting point, a recent paper by Rachel Kleinfeld offers “Five Strategies to Support US Democracy” that government, civil society, and politicians can take to reduce demand for anti-democratic action, reinforce democratic norms and institutions, and rebuild public trust.

Finally, there appears to be a legislative hyperfocus on regulating algorithms. While algorithmic transparency could cultivate a better, more accurate understanding of how users interact with one another and with content on various platforms, algorithms are not always the boogeyman that the public and policymakers make them out to be. Though scholar Becca Lewis was referring to YouTube in her writing on this topic, the sentiment holds for social media platforms writ large: social media companies could remove all recommendation algorithms tomorrow, and the platforms would still be libraries of far-right extremism and conspiratorial content, easily searchable by those motivated to look.

The mainstreaming of far-right conspiracy theories and disinformation – which has been supercharged by years of inadequate content moderation efforts by platforms – existed long before Trump’s presidency and will exist long after he disappears from public life. Trump would not have been able to spread the Big Lie and mobilize thousands of his followers without social media platforms allowing him to do so. In fact, developments since January 6th have left us more vulnerable to political violence stoked by radical rhetoric online. There are fewer resources and trust & safety personnel to deal with these problems and more platforms across which to monitor the spread of extremist activity. If social media companies are not held to account, other demagogues willing to trample on democratic norms to preserve their power will surely exploit the platforms to spread the next big lie. 

 

Photo credit: Committee members attend the fifth hearing held by the Select Committee to Investigate the January 6th Attack on the U.S. Capitol on June 23, 2022 (Doug Mills-Pool/Getty Images)


