
Could C2PA Cryptography be the Key to Fighting AI-Driven … – TechRepublic


Adobe, Arm, Intel, Microsoft and Truepic put their weight behind C2PA, an alternative to watermarking AI-generated content.

Image: A colorful robot head representing generative AI. (Sascha/Adobe Stock)

With generative AI proliferating throughout the enterprise software space, standards for its use are still being developed at both the governmental and organizational levels. One of these standards is a certification for generative AI content known as C2PA.

C2PA has been around for two years, but it’s gained attention recently as generative AI becomes more common. Membership in the organization behind C2PA has doubled in the last six months.


What is C2PA?

The C2PA specification is an open technical standard that outlines how to add provenance statements, also known as assertions, to a piece of content. Provenance statements might appear as buttons viewers could click to see whether the piece of media was created partially or totally with AI.

Simply put, provenance data is cryptographically bound to the piece of media, meaning any alteration to either the media or the provenance data invalidates the binding and signals that the media can no longer be authenticated. You can learn more about how this cryptography works by reading the C2PA technical specifications.
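
To make the binding idea concrete, here is a minimal hash-and-sign sketch in Python using the pyca/cryptography library. It illustrates the general technique only, under simplified assumptions: real C2PA manifests are embedded in the media file itself and signed with certificate-backed credentials rather than a bare key pair, and the helper names here (make_manifest, verify_manifest) are hypothetical.

    import hashlib
    import json

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def make_manifest(media_bytes, assertions, key):
        # Bind the assertions to the media by signing a claim that
        # contains the media's hash alongside the provenance data.
        claim = {
            "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
            "assertions": assertions,  # e.g. {"created_with": "generative AI"}
        }
        payload = json.dumps(claim, sort_keys=True).encode()
        return {"claim": claim, "signature": key.sign(payload).hex()}

    def verify_manifest(media_bytes, manifest, public_key):
        # Any change to the media or to the claim breaks verification.
        claim = manifest["claim"]
        if hashlib.sha256(media_bytes).hexdigest() != claim["media_sha256"]:
            return False  # the media was altered after signing
        payload = json.dumps(claim, sort_keys=True).encode()
        try:
            public_key.verify(bytes.fromhex(manifest["signature"]), payload)
            return True
        except InvalidSignature:
            return False  # the provenance data was altered after signing

    key = Ed25519PrivateKey.generate()
    media = b"...raw image bytes..."
    manifest = make_manifest(media, {"created_with": "generative AI"}, key)
    assert verify_manifest(media, manifest, key.public_key())
    assert not verify_manifest(media + b"tampered", manifest, key.public_key())

Tampering with either the media bytes or the provenance claim makes verification fail, which is the property C2PA relies on to flag media that can no longer be authenticated.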

This protocol was created by the Coalition for Content Provenance and Authenticity, also known as C2PA. Adobe, Arm, Intel, Microsoft and Truepic all support C2PA, which is a joint project that brings together the Content Authenticity Initiative and Project Origin.

The Content Authenticity Initiative is an organization founded by Adobe to encourage the provision of provenance and context information for digital media. Project Origin, created by Microsoft and the BBC, is a standardized approach to digital provenance technology meant to ensure that information, particularly news media, has a provable source and hasn’t been tampered with.


Together, the groups that make up C2PA aim to stop misinformation, specifically AI-generated content that could be mistaken for authentic photographs and video.

How can AI content be marked?

In July 2023, the U.S. government and leading AI companies released a voluntary agreement to disclose when content is created by generative AI. The C2PA standard is one possible way to meet this requirement. Watermarking and AI detection are two other distinct methods that can flag computer-generated images. In January 2023, OpenAI debuted its own AI classifier for this purpose, but shut it down in July “… due to its low rate of accuracy.”

Meanwhile, Google is working to provide watermarking services alongside its own AI. The PaLM 2 LLM hosted on Google Cloud will be able to label machine-generated images, the tech giant said in May 2023.

SEE: Cloud-based contact centers are riding the wave of generative AI’s popularity. (TechRepublic)

There are a handful of generative AI detection products on the market now. Many, such as Writefull’s GPT Detector, are made by organizations that also offer generative AI writing tools. These detectors work much the way the AI models themselves do: GPTZero, which advertises itself as an AI content detector for education, is described as a “classifier” that uses the same pattern recognition as the generative pretrained transformer models it detects.
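
As a rough illustration of the classifier idea, the sketch below scores text by its perplexity under a small language model, one of the signals detectors such as GPTZero reportedly use; the model choice and threshold here are illustrative assumptions, not any vendor’s actual method.

    # Sketch of perplexity-based AI text detection, assuming the
    # Hugging Face transformers and torch packages are installed.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text):
        # Exponentiated mean negative log-likelihood of the text under
        # the model; lower values mean the text is more predictable.
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss
        return torch.exp(loss).item()

    THRESHOLD = 50.0  # illustrative cutoff, not a calibrated value
    sample = "The quick brown fox jumps over the lazy dog."
    verdict = "likely AI-generated" if perplexity(sample) < THRESHOLD else "likely human"
    print(verdict)

Highly predictable text (low perplexity) is one signal of machine generation, though such heuristics are error-prone, which is part of why OpenAI retired its own classifier.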

The importance of watermarking to prevent malicious uses of AI

Business leaders should encourage their employees to look out for AI-generated content, which may or may not be labeled as such, to promote proper attribution and trustworthy information. It’s also important that AI-generated content created within the organization be labeled as such.


Dr. Alessandra Sala, senior director of artificial intelligence and data science at Shutterstock, said in a press release, “Joining the CAI and adopting the underlying C2PA standard is a natural step in our ongoing effort to protect our artist community and our users by supporting the development of systems and infrastructure that create greater transparency and help our users to more easily identify what is an artist’s creation versus AI-generated or modified art.”

And it all comes back to making sure people don’t use this technology to spread misinformation.

“As this technology becomes widely implemented, people will come to expect Content Credentials information attached to most content they see online,” said Andy Parsons, senior director of the Content Authenticity Initiative at Adobe. “That way, if an image didn’t have Content Credentials information attached to it, you might apply extra scrutiny in a decision on trusting and sharing it.”

Content attribution also helps artists retain ownership of their work

For businesses, detecting AI-generated content and marking their own content when appropriate can increase trust and avoid misattribution. Plagiarism, after all, goes both ways. Artists and writers using generative AI to plagiarize need to be detected. At the same time, artists and writers producing original work need to ensure that work won’t crop up in someone else’s AI-generated project.

For graphic design teams and independent artists, Adobe is working on a Do Not Train tag for the content provenance panels in Photoshop and Adobe Firefly to ensure original art isn’t used to train AI models.
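
For a sense of what such a tag could look like, the sketch below shows a provenance assertion expressing a do-not-train preference, modeled loosely on the training-and-data-mining assertion in recent C2PA drafts; the field names and values are assumptions for illustration, not Adobe’s final format.

    # Illustrative sketch of a "do not train" provenance assertion.
    # Modeled loosely on C2PA's training-and-data-mining assertion;
    # field names and values are assumptions, not a final schema.
    do_not_train_assertion = {
        "label": "c2pa.training-mining",
        "data": {
            "entries": {
                "c2pa.ai_training": {"use": "notAllowed"},
                "c2pa.ai_generative_training": {"use": "notAllowed"},
            }
        },
    }

An assertion like this would travel with the image as part of its signed manifest, so a model trainer honoring the standard could check the creator’s preference before ingesting the file.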




