The Thorny Art of Deepfake Labeling


Last week, the Republican National Committee put out a video advertisement against Biden featuring a small disclaimer in the top left of the frame: “Built entirely with AI imagery.” Critics questioned the disclaimer’s small size and suggested it offered limited value, particularly because the ad marks the first substantive use of AI in political attack advertising. As AI-generated media become more mainstream, many have argued that text-based labels, captions, and watermarks are crucial for transparency.

But do these labels actually work? Maybe not.

For a label to work, it needs to be legible. Is the text big enough to read? Are the words accessible? It should also provide audiences with meaningful context on how the media has been created and used. And in the best cases, it also discloses intent: Why has this piece of media been put into the world?

Journalism, documentary media, industry, and scientific publications have long relied on disclosures to provide audiences and users with necessary context. Journalistic and documentary films generally use overlay text to cite sources. Warning labels and tags are ubiquitous on manufactured goods, foods, and drugs. In scientific reporting, it’s essential to disclose how data were captured and analyzed. But labeling synthetic media, AI-generated content, and deepfakes is often seen as an unwelcome burden, especially on social media platforms: a slapped-on afterthought, a boring compliance exercise in an age of mis/disinformation.

As such, many existing AI media disclosure practices, like watermarks and labels, can be easily removed. Even when they are present, audience members’ eyes, now trained on rapid-fire visual input, seem to unsee watermarks and disclosures. For example, in September 2019, the well-known Italian satirical TV show Striscia la Notizia posted on social media a low-fidelity face-swap video of former prime minister Matteo Renzi sitting at a desk, insulting his then coalition partner Matteo Salvini with exaggerated hand gestures. Despite a Striscia watermark and a clear text-based disclaimer, according to deepfakes researcher Henry Ajder, some viewers believed the video was genuine.
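To see how fragile a metadata-based disclosure is, consider what happens to one in ordinary resharing. The sketch below is a hypothetical illustration using the Pillow imaging library (the file names are invented): simply re-saving a JPEG drops its EXIF block, including any AI-disclosure tag stored there, unless the sharer deliberately carries it over.

    # A minimal sketch of how easily metadata-based labels vanish:
    # re-saving an image with Pillow writes only the pixels, so any
    # AI-disclosure tag stored in the EXIF block is silently dropped.
    # "labeled.jpg" is a hypothetical input file.
    from PIL import Image

    img = Image.open("labeled.jpg")
    print("EXIF bytes before:", len(img.info.get("exif", b"")))

    # A plain save omits EXIF unless it is passed back in explicitly.
    img.save("reshared.jpg")
    print("EXIF bytes after:", len(Image.open("reshared.jpg").info.get("exif", b"")))

Every screenshot, re-encode, or platform upload is a chance for this kind of disclosure to disappear without anyone intending to remove it.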

The Striscia video illustrates what’s called context shift: once any piece of media, even one that is labeled and watermarked, is distributed across politicized and closed social media groups, its creators lose control of how it is framed, interpreted, and shared. As we found in a joint research study between Witness and MIT, when satire mixes with deepfakes it often creates exactly this kind of confusion. Simple text-based labels can also create the additional misconception that anything without a label is unmanipulated, which is often not the case.

Technologists are working on ways to quickly and accurately trace the origins of synthetic media, like cryptographic provenance and detailed file metadata. When it comes to alternative labeling methods, artists and human rights activists are offering promising new ways to better identify this kind of content by reframing labeling as a creative act rather than an add-on.
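As a rough illustration of what cryptographic provenance means in practice, the sketch below signs a hash of a media file so that anyone holding the creator’s public key can later confirm the file hasn’t been altered. This is a minimal toy example using Python’s cryptography package, not the actual C2PA/Content Credentials standard; the file name and keys are hypothetical.

    # A minimal sketch of the idea behind cryptographic provenance:
    # the creator signs a hash of the media file, and anyone holding
    # the public key can later verify the file is unaltered. This is
    # an illustration only, not a real provenance standard; "ad.mp4"
    # and the key pair are hypothetical.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def file_digest(path: str) -> bytes:
        """Return the SHA-256 digest of the file's bytes."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.digest()

    # The creator signs the digest at publication time...
    private_key = Ed25519PrivateKey.generate()
    signature = private_key.sign(file_digest("ad.mp4"))

    # ...and a viewer holding the creator's public key verifies it
    # later. Any edit to the file changes the digest, and verify()
    # raises cryptography.exceptions.InvalidSignature.
    public_key = private_key.public_key()
    public_key.verify(signature, file_digest("ad.mp4"))
    print("provenance check passed: file matches the signed original")

The signature travels with the file rather than depending on a platform’s goodwill, which is why approaches like this are attracting so much attention.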

When a disclosure is baked into the media itself, it can’t be removed, and it can actually be used as a tool to push audiences to understand how a piece of media was created and why. For example, in David France’s documentary Welcome to Chechnya, vulnerable interviewees were digitally disguised with the help of inventive synthetic media tools like those used to create deepfakes. In addition, subtle halos appeared around their faces, a clue for viewers that the images they were watching had been manipulated and that these subjects were taking an immense risk in sharing their stories. And in Kendrick Lamar’s 2022 music video “The Heart Part 5,” the directors used deepfake technology to transform Lamar’s face into those of deceased and living celebrities such as Will Smith, O. J. Simpson, and Kobe Bryant. This use of the technology is written directly into the song’s lyrics and choreography, as when Lamar swipes his hand over his face to clearly signal a deepfake edit. The resulting video is a meta-commentary on deepfakes themselves.
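In the simplest terms, baking a disclosure in means drawing it into the pixels themselves rather than attaching it as metadata or an overlay track. The hypothetical sketch below, again using Pillow with invented file names, writes a label directly into a frame’s pixel data, so removing it would mean re-editing the image rather than deleting a tag.

    # A minimal sketch of a disclosure "baked into" the media itself:
    # the label becomes part of the pixel data, so unlike a metadata
    # field it survives re-saves and re-encodes. "frame.png" and the
    # label placement are hypothetical.
    from PIL import Image, ImageDraw

    frame = Image.open("frame.png").convert("RGB")
    draw = ImageDraw.Draw(frame)

    # Paint the disclosure straight into the frame's pixels.
    label = "Built entirely with AI imagery"
    draw.rectangle([(10, 10), (330, 40)], fill=(0, 0, 0))
    draw.text((18, 18), label, fill=(255, 255, 255))

    # The saved file carries the label in its pixel data; removing it
    # now requires inpainting the image, not deleting a metadata field.
    frame.save("frame_labeled.png")

France’s halos and Lamar’s face-swipe go a step further: the disclosure is not just indelible but expressive, part of the work’s meaning.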
