
How You Can Tell the AI Images of Trump's Arrest Are Deepfakes – WIRED


The viral, AI-generated images of Donald Trump’s arrest you may be seeing on social media are definitely fake. But some of these photorealistic creations are pretty convincing. Others look more like stills from a video game or a lucid dream. A Twitter thread by Eliot Higgins, a founder of Bellingcat, that shows Trump getting swarmed by synthetic cops, running around on the lam, and picking out a prison jumpsuit was viewed over 3 million times on the social media platform.

What does Higgins think viewers can do to tell the difference between fake AI images, like the ones in his post, and real photographs that may come out of the former president’s potential arrest?

“Having created a lot of images for the thread, it’s apparent that it often focuses on the first object described—in this case, the various Trump family members—with everything around it often having more flaws,” Higgins said over email. Look outside of the image’s focal point. Does the rest of the image appear to be an afterthought?

Even though the newest versions of AI-image tools, like Midjourney (version 5 of which was used for the aforementioned thread) and Stable Diffusion, are making considerable progress, mistakes in the smaller details remain a common sign of fake images. As AI art grows in popularity, many artists point out that the algorithms still struggle to replicate the human body in a consistent, natural manner. 

In the AI images of Trump from the Twitter thread, the face looks fairly convincing in many of the posts, as do the hands, but his body proportions can look contorted or melted into a nearby police officer. Even though such flaws are obvious now, the algorithm may learn to avoid these peculiar-looking body parts with more training and refinement.


Need another tell? Look for odd writing on the walls, clothing, or other visible items. Higgins points to messy text as a way to differentiate fake images from real photos. For example, in the fake images of officers arresting Trump, the police wear badges, hats, and other gear that appears, at first glance, to have lettering. Upon closer inspection, the words are nonsensical.

An additional way you can sometimes tell an image is generated by AI is by noticing over-the-top facial expressions. “I’ve also noticed that if you ask for expressions, Midjourney tends to render them in an exaggerated way, with skin creases from things like smiling being very pronounced,” Higgins said. The pained expression on Melania Trump’s face looks more like a re-creation of Edvard Munch’s The Scream or a still from some unreleased A24 horror movie than a snapshot from a human photographer.

Keep in mind that world leaders, celebrities, social media influencers, and anyone else with large quantities of photos circulating online may look more convincing in AI-generated images than people with less of a visible internet presence. “It’s clear that the more famous a person is, the more images the AI has had to learn from,” Higgins said. “So very famous people are rendered extremely well, while less famous people are usually a bit wonky.” If you want more peace of mind about the algorithm’s ability to re-create your face, it might be worth thinking twice before posting a photo dump of selfies after a fun night out with friends. (Though it’s likely that AI generators have already scraped your image data from the web.)


In the lead-up to the next US presidential election, what is Twitter’s policy on AI-generated images? The social media platform’s current policy reads, in part, “You may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm (‘misleading media’).” Twitter carves out multiple exceptions for memes, commentary, and posts not created with the intention to mislead viewers.

Just a few years ago, it was almost unfathomable that the average person would soon be able to fabricate photorealistic deepfakes of world leaders at home. As AI images become harder to differentiate from the real deal, social media platforms may need to reevaluate their approach to synthetic content and attempt to find ways of guiding users through the complex and often unsettling world of generative AI.





