
How businesses can shape the (safer) future of social media




“A profound risk of harm to the mental health and well-being of children and adolescents.” This was the verdict of the U.S. Surgeon General Vivek Murthy in his recent Advisory on social media and youth mental health.

As a former senior member of the independent Meta/Facebook Oversight Board staff, I find this Advisory, which draws on years of research, a welcome elevation of the use of social media by youth to a national public health issue. It’s also an important call to action for companies and investors in shaping the responsible future of the internet. As I’ll explain, its findings reflect the difficulty for governments in taking effective action, the technical challenges in balancing age-appropriate content with privacy rights, and the uncharted ethical and regulatory territory of virtual environments. It also points to the huge opportunities in developing online trust and safety as a core business function.

The report is an antidote to both the unrepentant defense of social media platforms and the exaggerated critiques that attribute myriad social ills to social media's influence. Murthy takes a "safety-first" approach because of the widespread use of social media; it's also a sensible approach, given the lack of clarity in the literature on harm.

Murthy is at pains to assert that social media — used by 95% of teens — has positive impacts on a meaningful percentage of youth. These include social connection or support, and validation for marginalized groups, including ethnic and gender minorities. This is an absolutely critical point that doesn’t receive enough attention, especially given the increasing violence and vitriol directed against these communities in recent years.


However, the Advisory also provides some sobering statistics on social media use and the "ample indicators" of its harmful effects on many young users. For example, "nearly 40% of children ages 8–12 … a highly sensitive period of brain development" use social media, and frequent use may be associated with changes in the brain related to emotional regulation and impulse control. Cyberbullying is also a major problem, with nearly 20% of teens reporting that they have been cyberbullied. And teens who use social media for more than three hours per day are more likely to experience depression and anxiety. The Advisory also references "a nationally representative survey of girls aged 11–15" in which "one-third or more say they feel 'addicted' to a social media platform."

The report is understandably focused on the U.S. It's worth noting that research tells a different story elsewhere: studies in Europe find a more negative association overall between social media use and well-being, while research in Asia finds an overall positive impact. This is an important distinction, because the public policy debate in the digital age sometimes paints with broad brushstrokes even as policies are conceived at multiple scales: in corporate boardrooms, in states and nations, and in supranational organizations such as the EU.


Easier said than done

So while the Advisory’s analysis is even-handed, implementing some of its recommendations, such as limiting young people’s access to social media and to harmful content on it, is a tall order. I’ve seen how difficult it is to find practical solutions for parents, policymakers and companies across geographies, cultures and age groups.

Take “strengthening and enforcing age minimums” as one example where nuance is easily lost. The goal itself is laudable, but we need to strike a tricky balance: verifying identity to keep young people safe without requiring personal information that can be aggregated and used for harm by others. For example, scanning a child’s face to verify their age is increasingly de rigueur given the lack of better alternatives, but it’s incredibly privacy-invasive, especially when data breaches at many websites are all but certain to happen.

This is where a national U.S. data privacy framework would be helpful, both to add legal weight to valid arguments about the national security implications of data sharing on social media platforms and to encourage a more coordinated approach, especially for social media companies and new platforms hoping to scale globally. In the absence of a privacy framework, state legislatures are taking the lead in developing a patchwork of privacy and social media laws, which are widely variable and sometimes heavy-handed.

Consider Utah’s law preventing children under 18 from using social networks without parental consent, or Montana’s blanket ban of TikTok. To put it bluntly, there’s a big difference between an eight-year-old and a 15-year-old. The latter has far greater agency and can legally learn to drive a car in most states.

We need to find a way to bring children at that stage of adolescence into the conversation and respect their views, both in family settings when defining shared rules and in public discourse. If we don’t, it will likely result in the same climate of mutual suspicion, acrimonious discourse and intergenerational polarization that we find on the online platforms these laws are supposed to moderate, not emulate.

A recent Pew poll bears this out, finding that 54% of Americans aged 50–64 favor banning TikTok, compared with 29% of those under 50. If we don’t get serious about bringing young people into the conversation, any social media ban will backfire just as the explicit shock tactics of early anti-smoking, anti-drinking and anti-drug campaigns did. Moreover, blanket bans or government powers to block specific classes of content risk being abused by political actors seeking to co-opt the youth safety movement to further their own agendas.


Getting the data

To avoid the spread of ineffective and divisive legislation, which promotes the perception of overt censorship by paternalistic elites, the empirical evidence for each policy intervention must be more robust. Murthy acknowledges knowledge gaps in our understanding of the relationship between social media and youth mental health. As such, the key question he poses — “What type of content, and at what frequency and intensity, generates the most harm?” — should be an open invitation for further research from academia, philanthropic groups and relevant public health agencies.

But the quality of the evidence to inform this research depends on greater transparency from social media companies. Only when they provide researchers with access to data can more practical solutions be created.

Data transparency mandates, such as the EU’s Digital Services Act, are a step in the right direction. On U.S. soil, the Platform Accountability and Transparency Act would, in the words of Stanford Professor Nate Persily, who informed its creation, allow researchers “to get access to the data that will shed light on the most pressing questions related to the effects of social media on society.” Mandating data access for researchers is a critical priority, especially on the heels of Twitter not only making its data feed prohibitively expensive for academic researchers moving forward but also threatening legal action if they do not delete all data lawfully gathered to date.

Even with nuanced public policy, we need to overcome technical challenges to regulate social media effectively. A key dilemma facing trust and safety efforts for children and adolescents is the limited ability of current tools to detect and act on harmful online behavior in real time, especially in live video, audio and other formats where text is not dominant. In addition, current text-monitoring tools are mainly trained on English-language text, a major shortcoming for the globalized marketplace of social media platforms. In the U.S., regulating online speech is extremely challenging without infringing on current conceptions of First Amendment rights.

Add to this the challenge of evaluating not just content but the behavior of actors in immersive or augmented reality virtual environments. For instance, how will Apple ensure the beneficial use of the new Apple Vision Pro “mixed reality” headset?  And how will all of the new apps being created to make use of the headset comply with Apple’s App Store requirements for strong, app-level content moderation? Hopefully, Apple will find innovative ways to moderate harmful behavior and conduct, a task that’s much more context-intensive and technically complicated than detecting and blocking harmful content.


Holding social media platforms accountable

Ultimately, we should ask more of the companies building these platforms. We should insist on safety by design, not as a retroactive adjustment. We should expect age-appropriate health and safety standards, stricter data privacy for children, and algorithmic transparency and oversight.

One recommendation I would add is appointing a chief trust officer to the C-suite of every online company, or otherwise truly empowering the executive responsible for trust and safety. This role would be responsible for minimizing the risk of harm to youth; working closely with academic researchers to provide relevant data; and providing a counterpoint to the dominant internal motivators of maximizing engagement, virality and scale. Professionalization of the trust and safety field is a key step in this regard. Right now, there’s very little formal training or accreditation in this area, at universities or otherwise. That needs to change if we are to educate a future generation of C-suite trust officers.

An eagerly awaited report from the Atlantic Council’s Task Force for a Trustworthy Future Web provides even more concrete recommendations to help ensure a more positive online and offline future for youth. Not least is the need to cultivate a more robust and diverse talent pipeline to support the expansion of trust and safety practices. The report should be required reading for industry leaders who care about safer, more trustworthy online spaces.

New legal standards and systems-level, risk-based governance of social media are nascent but are also a major opportunity. In terms of societal significance and investment prospects, online trust and safety will be the new cybersecurity. Youth, parents, policymakers, companies and philanthropies should all have a seat at the table to share the responsibility for shaping this future. 

Eli Sugarman is a Senior Fellow at Schmidt Futures and serves as Interim Director of the Hewlett Foundation Cyber Initiative. Previously, he was Vice President of Content (Moderation) at the Meta/Facebook Oversight Board.
