
The crackdown on face-swapped porn shows we aren’t powerless against AI fakes

Yesterday, Reddit banned face-swapped celebrity porn, ruling that the content, made using the latest AI techniques, falls under the company’s restrictions on “involuntary pornography.” Neither the celebrities whose faces are pasted into the clips nor the original performers are asked if they consent, and so, says Reddit, it isn’t kosher.

Other web platforms agree; Twitter, Pornhub, and video-sharing site Gfycat all introduced similar bans in recent weeks. And although the legal status of AI fake porn is murky, web companies are clear: the realism of AI fakes and the ease with which they can be fabricated make them a potentially dangerous tool that will certainly be used for bullying, harassment, and misinformation. Bans won't stop this content from being made, of course, but they will limit its spread.

All this should be welcome news, not just for people worried about AI fake porn but also for anyone concerned about high-quality AI fakes more generally.

This topic has been much discussed in AI communities over the past few years and will hit the mainstream sooner rather than later. The algorithms used to swap faces in pornographic “deepfakes” are only part of a bigger AI toolkit that lets people manipulate images, video, and audio more easily than ever before. Combine face-swapping tech with the ability to mimic someone’s voice, and you have the potential for misinformation on a catastrophic scale. Donald Trump declaring war on North Korea. Hillary Clinton caught praising the Illuminati. As a recent New York Times opinion piece put it, our political future is “hackable.” Other coverage is less even-handed, and claims that AI fakes “may facilitate the end of reality as we know it.”

We need to counter this sort of alarmist thinking, and the recent clampdown on AI fake porn is a salutary example in this fight. It shows that gatekeepers' authority to police content doesn't evaporate just because the content was made with machine learning. And unlike other categories of questionable content, which platforms have had trouble defining and therefore limiting (e.g., hate speech and abuse), AI fakes present a comparatively clear-cut category.

Experts taking a more pessimistic view say that this technology will improve to the point where humans can't tell the difference between AI fakes and real footage. This is probably true, and it will certainly make moderation harder, but only for humans. Researchers are already working on the problem of detecting "synthetic media" using the same AI tools that create it. And with video platforms like YouTube and Facebook ramping up their ability to tag and categorize content using machine learning, once decent methods of detecting AI fakes exist, it should in theory be straightforward to integrate this sort of safety check into the sites where such content could do the most damage. In a way, AI fakes might be easier to detect than bad reporting: the manipulation of pixels by an algorithm is quantifiable in a way that spin and bias are not.
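To make that idea concrete, here is a minimal, purely illustrative sketch of what a learned fake-detector could look like: a small image classifier fine-tuned to label face crops as real or fake. Everything in it is an assumption made for the example, including the folder layout, the choice of a pretrained ResNet-18 backbone, and the hyperparameters; it is not a description of any platform's or research group's actual detection system.

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical dataset layout (an assumption for this sketch):
#   data/train/real/*.jpg  and  data/train/fake/*.jpg
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Fine-tune a small pretrained backbone to output a real/fake score.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # a single pass over the data, for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

In practice, a platform would run something like this at upload time, flagging suspect clips for human review rather than relying on moderators to spot every fake by eye.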

None of this is to say that the problem of AI fakery is solved, or that the issue doesn't need serious scrutiny in the years to come. AI fake porn, for example, will certainly continue to be used as a tool for harassment despite the bans, and the underlying tech will certainly have all sorts of creative applications. But these events show that our current institutions and checks and balances aren't completely outmatched by new technology. They can be adapted instead.

More importantly, though, there's a danger that by hyping the threat of AI fakes, we increase their influence. Think about how the label "fake news" was applied overzealously by the media, becoming a buzzword without a clear meaning. Pretty soon, it was turned against outlets by the same populist and partisan forces whose power it was intended to blunt. In the short term, the actual technology of AI fakery might be less of a threat than the perception of it. Like "fake news," it will become a shield for liars and conspiracy theorists, used to dismiss any evidence that runs counter to their own beliefs. In the age of AI, the next "grab them by the pussy" video will be even more easily shrugged off as a fake under a miasma of reasonable doubt.

This breeds a sort of media nihilism, a belief that no audiovisual content can ever be definitively said to be “real.” This attitude was visible on the r/deepfakes subreddit, where users who pasted celebrity faces onto pornographic clips argued that they were, in a way, helping people. By improving the face-swapping technology, they were eroding definitions of real and fake, which would stop people from being targeted by actual revenge porn. “Legitimate homemade sex movies used as revenge porn can be waved off as fakes,” said one popular post. “If anything can be real, nothing is real.”

It’s a self-serving argument, but it doesn’t have to be true. Even if AI fakes become indistinguishable from real life, it won’t mean reality goes away. AI researchers, web platforms, and media outlets all have a duty to prove that’s true.

