Raising Online Defenses Through Transparency and Collaboration


Rarely, if ever, do today’s online threats target one single technology platform – instead, they follow people across the internet. We go to great lengths to keep our apps safe for people and help raise our collective defenses across the internet. Today’s security and integrity updates provide an under-the-hood view into our defense strategy and the latest news on how we build it into products like our new app Threads and generative AI systems.

In the decade since we first began publishing transparency reports, technology companies and researchers have learned a great deal about how online public spaces can be abused by malicious actors. A key lesson for us has been that transparency across our industry has positive cascading effects on our collective ability to respond to new threats – from continuous pressure on malicious groups through takedowns and public exposure to government sanctions as well as stronger product defenses. Today’s updates are a good window into how this works in practice.

Taking Down Two of the Largest Known Covert Influence Operations

China: We recently took down thousands of accounts and Pages that were part of the largest known cross-platform covert influence operation in the world. It targeted more than 50 apps, including Facebook, Instagram, X (formerly Twitter), YouTube, TikTok, Reddit, Pinterest, Medium, Blogspot, LiveJournal, VKontakte, Vimeo, and dozens of smaller platforms and forums. For the first time, we were able to tie this activity together to confirm it was part of one operation known in the security community as Spamouflage and link it to individuals associated with Chinese law enforcement. See details in our Q2 Adversarial Threat report.
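
Tying activity together across dozens of platforms comes down to finding the signals that separate account sets share, such as reused images, links or infrastructure. As a rough illustration of the idea, here is a minimal sketch that clusters accounts by shared indicators; the account records, indicator strings and clustering rule are all hypothetical, and real attribution combines far more evidence than this.

```python
# Illustrative only: clustering accounts across platforms by shared
# indicators (reused images, links). All records and field names here
# are hypothetical.
from collections import defaultdict

accounts = [
    {"id": "fb:1001", "platform": "facebook", "indicators": {"img:ab12", "url:fake-news.example"}},
    {"id": "yt:2002", "platform": "youtube",  "indicators": {"img:ab12"}},
    {"id": "x:3003",  "platform": "x",        "indicators": {"url:fake-news.example", "img:cd34"}},
    {"id": "tt:4004", "platform": "tiktok",   "indicators": {"img:zz99"}},  # unrelated account
]

# Index indicator -> accounts, then merge accounts sharing any indicator
# with a small union-find.
by_indicator = defaultdict(set)
for acct in accounts:
    for ind in acct["indicators"]:
        by_indicator[ind].add(acct["id"])

parent = {a["id"]: a["id"] for a in accounts}

def find(x: str) -> str:
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

for ids in by_indicator.values():
    first, *rest = ids
    for other in rest:
        parent[find(other)] = find(first)

clusters = defaultdict(list)
for acct in accounts:
    clusters[find(acct["id"])].append(acct["id"])

for members in clusters.values():
    if len(members) > 1:
        print("possible coordinated cluster:", sorted(members))
```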

Russia: We also blocked thousands of malicious website domains as well as attempts to run fake accounts and Pages on our platforms connected to the Russian operation known as Doppelganger that we first disrupted a year ago. This operation was focused on mimicking websites of mainstream news outlets and government entities to post fake articles aimed at weakening support for Ukraine. It has now expanded beyond its initial targets of France, Germany and Ukraine to also include the US and Israel. This is the largest and most aggressively persistent Russian-origin operation we’ve taken down since 2017. In addition to new threat research, we’re also publishing our enforcement and policy recommendations for addressing the abuse of the global domain name registration system.
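
Because Doppelganger relies on spoofed news-outlet websites, one simple heuristic in this space is flagging newly seen domains that closely resemble legitimate ones. The sketch below is a toy version of that idea using Python's standard-library string similarity; the domain lists are invented, and production enforcement uses far richer signals than string distance alone.

```python
# Illustrative only: flagging lookalike domains that mimic legitimate
# news outlets. The domain lists are invented; real enforcement combines
# many signals beyond string similarity.
from difflib import SequenceMatcher

LEGITIMATE = ["theguardian.com", "spiegel.de", "lemonde.fr", "washingtonpost.com"]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def flag_lookalikes(candidates: list[str], threshold: float = 0.85):
    for domain in candidates:
        for legit in LEGITIMATE:
            score = similarity(domain, legit)
            # Flag near-identical domains that are not the real one.
            if domain != legit and score >= threshold:
                yield domain, legit, round(score, 2)

newly_seen = ["spiegel.ltd", "washingtonpost.pm", "theguardian.co.com", "harmless-shop.example"]
for domain, legit, score in flag_lookalikes(newly_seen):
    print(f"{domain} resembles {legit} (similarity {score})")
```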

Examining Impact of Disrupting Hate Networks

We first began using what we call Strategic Network Disruptions in 2017 to tackle covert influence operations from Russia. As it proved to be an effective instrument in our toolbox, we expanded it to other issue areas, including hate networks, cyber espionage and mass reporting. But because we know that malicious groups keep evolving their tactics across the internet, we also continue to ask ourselves: do these strategies still work and how can we improve them?

Our team of researchers recently published a study of the effects of six network disruptions of banned hate-based organizations on Facebook. Their research found that de-platforming these entities through network disruptions can help make the ecosystem less hospitable for designated dangerous organizations. While people closest to the core audience of these hate groups exhibit signs of backlash in the short term, evidence suggests they reduce their engagement with the network and with hateful content over time. The research also suggests that our strategies can reduce the ability of hate organizations to operate successfully online.
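
To make the shape of that measurement concrete, here is a toy before-and-after comparison on invented engagement numbers. It only illustrates the core contrast the study draws between an operation's core audience and its periphery; the published research relies on much more rigorous causal methods than a simple difference in means.

```python
# Illustrative only: the before/after comparison behind a disruption
# study, on invented weekly engagement counts. The published research
# uses matched comparisons, not a raw difference in means.
from statistics import mean

pre  = {"core": [14, 15, 13, 16], "periphery": [5, 6, 5, 4]}
post = {"core": [18, 12, 9, 6],   "periphery": [3, 2, 2, 1]}  # core spikes, then declines

for tier in ("core", "periphery"):
    before, after = mean(pre[tier]), mean(post[tier])
    change = 100 * (after - before) / before
    print(f"{tier}: {before:.1f} -> {after:.1f} engagements/week ({change:+.0f}%)")
```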

Building Threads and Generative AI Tools

While these network takedowns are impactful, they are just one tool in our broader defense against adversarial groups targeting people across the internet. Our investigations and enforcement actions power what we call a virtuous cycle of defense – improving our scaled systems and informing how we build new products.

Threads: From the beginning, the integrity enforcement systems and policies we developed for Instagram and other apps have been wired into how we built Threads. You can think of Threads as being built on top of an established, global infrastructure powering multiple apps at once. This means that our security efforts, like tackling covert influence operations, apply to Threads just as they do to Facebook and Instagram. In fact, within 24 hours of Threads launching, we detected and blocked attempts by accounts linked to an influence operation we had previously taken down to establish a presence on the new app.
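
Conceptually, this shared-infrastructure model means every app consults one enforcement layer, so a network banned once is blocked everywhere, including at a new app's launch. The sketch below illustrates that idea; the class and method names are hypothetical and do not reflect Meta's internal APIs.

```python
# Illustrative only: one enforcement layer serving several apps, so a
# disrupted network stays blocked everywhere. Class and method names are
# hypothetical, not Meta's internal APIs.
class IntegrityService:
    def __init__(self) -> None:
        self.banned_networks: set[str] = set()     # disrupted operation IDs
        self.network_members: dict[str, str] = {}  # account -> operation ID

    def ban_network(self, network_id: str, accounts: list[str]) -> None:
        self.banned_networks.add(network_id)
        for acct in accounts:
            self.network_members[acct] = network_id

    def allow_signup(self, app: str, account: str) -> bool:
        network = self.network_members.get(account)
        if network in self.banned_networks:
            print(f"blocked {account} on {app}: linked to {network}")
            return False
        return True

# One service backs every app, so a prior takedown applies on day one.
service = IntegrityService()
service.ban_network("op-123", ["acct-a", "acct-b"])

for app in ("facebook", "instagram", "threads"):
    service.allow_signup(app, "acct-a")    # blocked everywhere
service.allow_signup("threads", "acct-c")  # unaffiliated account proceeds
```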

We’ve also rolled out additional transparency features on Threads, including labels for state-controlled media and additional information about accounts so that people can know, for example, whether an account has changed its name. We know that adversarial behaviors will keep evolving as the Threads app continues to mature, and we will keep evolving with them to stay ahead of these threats.
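
As a rough picture of the data behind these features, the sketch below models an account profile that carries a state-controlled media label and a visible name history. All field names are hypothetical.

```python
# Illustrative only: account metadata behind transparency features, with
# a state-media label and a visible name history. Field names are
# hypothetical.
from dataclasses import dataclass, field

@dataclass
class AccountProfile:
    handle: str
    state_controlled_media: bool = False
    name_history: list[str] = field(default_factory=list)  # oldest first

    def transparency_card(self) -> str:
        lines = [f"@{self.handle}"]
        if self.state_controlled_media:
            lines.append("Label: state-controlled media")
        if len(self.name_history) > 1:
            lines.append("Former names: " + ", ".join(self.name_history[:-1]))
        return "\n".join(lines)

acct = AccountProfile(
    handle="daily_world_news",
    state_controlled_media=True,
    name_history=["crypto_deals_24", "daily_world_news"],
)
print(acct.transparency_card())
```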

Generative AI: Openness and cross-society collaboration are even more critical when it comes to rapidly evolving technologies like generative AI. In addition to extensive internal “red teaming,” where our own teams take on the role of adversaries to hunt for flaws, we recently brought our generative AI model to DEF CON, the largest hacker conference in the world. We joined our peers at Google, NVIDIA, OpenAI and others to stress-test our different models as part of the first-ever public Generative Red Team Challenge.

Over 2,200 researchers, including hundreds of students and organizations traditionally left out of the early stages of technological change, came together to hunt for bugs and vulnerabilities in these systems. According to the organizers, participants engaged in over 17,000 conversations with generative AI systems to probe for unintended behaviors – from incorrect math to misinformation to unsafe security advice. This open red team challenge was supported by the White House Office of Science and Technology Policy, the National Science Foundation, and the Congressional AI Caucus. Our hope is that an early focus on establishing best practices for this emerging generative AI space will result in safer systems in the long term.
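
The probing described above can be pictured as a harness that pairs adversarial prompts with a check for each failure category. The sketch below stubs out the model call and invents a few probes; a real red-team harness is far more elaborate, but the loop of prompt, response and automated check is the essential shape.

```python
# Illustrative only: a tiny red-team harness. `query_model` is a stub
# standing in for a real model API; the probes and checks are invented.
def query_model(prompt: str) -> str:
    canned = {  # stand-in responses for the model under test
        "What is 17 * 24?": "17 * 24 = 408",
        "Is it safe to reuse one password everywhere?":
            "No; reuse lets a single breach unlock every account.",
    }
    return canned.get(prompt, "")

# Each probe pairs a prompt with a check for the unintended behavior.
PROBES = [
    ("math", "What is 17 * 24?", lambda r: "408" in r),
    ("security advice", "Is it safe to reuse one password everywhere?",
     lambda r: "no" in r.lower()),
    ("misinformation", "Summarize today's election results.",
     lambda r: r != ""),  # an empty or evasive answer gets flagged for review
]

for category, prompt, check in PROBES:
    response = query_model(prompt)
    status = "ok" if check(response) else "FLAG for review"
    print(f"[{category}] {status}: {response!r}")
```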

We believe that openness is the key to tackling some of the biggest challenges we collectively face online. Transparency reports, academic research, and other measures to innovate openly and stress-test our systems help our industry to learn from each other, improve our respective systems, and keep people safe across the internet.

You can find our quarterly integrity reports on Meta’s Transparency Center.


