Taylor Swift image controversy fuels push to ban ‘deep fakes’ - The Boston Globe

Taylor Swift celebrates on the field after the Kansas City Chiefs' 17-10 victory against the Baltimore Ravens in the AFC Championship Game at M&T Bank Stadium on Jan. 28 in Baltimore, Maryland. (Patrick Smith/Getty)

Thousands of women have had intimate pictures circulated online without their permission, a violation made worse by new AI applications that can turn ordinary images posted on social media into fabricated pornographic videos.

But the effort to combat deep fakes and revenge porn went into overdrive this week after offensive AI-created images of Taylor Swift went viral on X, formerly called Twitter. The social-media service, which has sharply cut back its content moderation staff, blocked searches for the pop star’s name after dozens of images accumulated millions of views.

The problem goes beyond fake porn images. The new AI apps can be used to spoof all manner of images and voices. Ahead of the New Hampshire primary, some voters received robocalls with a fake recording of President Biden telling them not to vote. And a growing scam targeting the elderly relies on deep-fake phone calls that impersonate younger relatives.

Only a handful of states, including Virginia and California, have enacted laws to fight deep fakes and revenge porn, and it's not clear that the legal bans have had much impact yet. In Massachusetts, State Senator Barry Finegold, an Andover Democrat, has introduced a bill to prohibit deep fakes used in political campaign advertising. These laws have yet to be tested at the Supreme Court, which could limit such protections on First Amendment grounds. (In 2002, the court struck down a ban on computer-generated child pornography.)

Almost 100,000 deep-fake videos circulated online last year, 98 percent pornographic, according to a report by the cybersecurity group Home Security Heroes. That’s more than a six-fold increase since 2019, the group said.

Rep. Joe Morelle, a New York Democrat, has been pushing federal legislation he authored to ban the circulation of non-consensual, digitally altered intimate images nationwide. The bill was introduced before the Taylor Swift attacks, but Morelle jumped on the news to build support.

“Deepfakes don’t just happen to celebrities—they’re happening to women and girls everywhere, every day,” Morelle wrote in a post on X.

Another approach is to rely on technological solutions. The website StopNCII.org was originally set up to help people remove real intimate images posted online without their consent. Nine social media services, including Facebook, Reddit, and Snap, as well as the porn site Pornhub, have partnered with the site to help take down non-consensual images.

The same technology can also be used against deep-fake porn created by AI. A victim can use the site to create a digital fingerprint, or hash, of an offending image. Social media sites can then use that fingerprint to identify and take down copies of the image.
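How that works can be sketched in a few lines of code. The example below is a minimal illustration of image fingerprinting using a simple "average hash." It is not StopNCII's actual algorithm, which relies on more robust perceptual hashing, and the file names and match threshold shown are hypothetical.

```python
# Minimal sketch of image fingerprinting via a simple "average hash".
# This illustrates the general technique only; it is NOT StopNCII's
# actual algorithm, which relies on more robust perceptual hashes.
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Shrink the image to an 8x8 grayscale grid and encode each cell
    as 1 if it is brighter than the grid's mean, else 0."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits  # a 64-bit fingerprint


def hamming_distance(h1: int, h2: int) -> int:
    """Count differing bits; a small distance suggests a near-duplicate image."""
    return bin(h1 ^ h2).count("1")


# Hypothetical usage: a platform compares an upload's fingerprint against
# a victim-submitted fingerprint and flags close matches for review.
# if hamming_distance(average_hash("upload.jpg"), reported_hash) <= 5:
#     flag_for_review()
```

A notable property of this kind of design is that only the short fingerprint, not the intimate image itself, ever needs to be shared with the platforms.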

Since 2015, the service said, it has helped take down over 200,000 images.

The Biden White House has another idea. It’s pushing for AI companies to add a digital watermark to all AI-created content so it can be easily identified as fake. That might have helped with the Swift fakes, which were produced with Microsoft’s Designer app, according to a report from 404 Media.

But bad actors might find a way to avoid the watermarks — or even add the digital warning to real media.
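A toy example shows why. The sketch below is not how any production watermarking or provenance scheme works; it simply hides a hypothetical "AI-generated" tag in the least significant bits of an image's pixels, and demonstrates that a single lossy re-encode erases it, while a forger could just as easily write the same bits into a genuine photo.

```python
# Toy illustration of why naive watermarks are fragile. This is NOT how
# production watermarking or provenance schemes work; it only shows how
# easily a simple mark can be stripped (or forged onto real media).
import numpy as np
from PIL import Image

MARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit "AI-generated" tag


def embed(pixels: np.ndarray, bits=MARK) -> np.ndarray:
    """Write the tag into the least significant bits of the first pixels."""
    out = pixels.copy()
    flat = out.reshape(-1)
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b
    return out


def read_mark(pixels: np.ndarray, n: int = len(MARK)) -> list:
    """Read the tag back out of the least significant bits."""
    return [int(v) & 1 for v in pixels.reshape(-1)[:n]]


img = np.array(Image.open("generated.png").convert("L"))  # hypothetical file
tagged = embed(img)
print(read_mark(tagged) == MARK)   # True: the mark survives a lossless copy

# One lossy re-encode scrambles the low-order bits and erases the mark.
Image.fromarray(tagged).save("copy.jpg", quality=85)
print(read_mark(np.array(Image.open("copy.jpg"))) == MARK)  # almost certainly False
```

More sophisticated watermarks are designed to survive re-encoding and cropping, but the same cat-and-mouse dynamic applies.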

So for now, social media sites and their users will have to remain hypervigilant.


Aaron Pressman can be reached at aaron.pressman@globe.com. Follow him @ampressman.
