Fake explicit images of Taylor Swift prove laws haven’t kept pace with technology, experts say

Explicit AI-generated photos of one of the world’s most famous artists quickly spread across social media this week, once again highlighting what experts describe as an urgent need to crack down on technology and platforms that allow the sharing of harmful images.

Fake photos of Taylor Swift, depicting the singer-songwriter in sexually suggestive positions, were viewed tens of millions of times on X, formerly known as Twitter, before being deleted.

One photo, shared by a single user, was viewed more than 45 million times before the account was suspended. But in the meantime, the widely shared photo had been immortalized elsewhere on the Internet.

The situation showed how advanced – and easily accessible – AI has become, and it reignited calls in Canada and the United States for better laws.

“If I can quote Taylor Swift, X marks the spot where we fell apart,” said Kristen Thomasen, assistant professor at the University of British Columbia.

“Where we should be paying more attention in the law now is also on the designers who create the tools that make this so easy, and (on) the websites that allow that image to appear … and then be seen by millions of people,” Thomasen said.

This image, made from a fake video featuring former US President Barack Obama, shows elements of facial mapping technology that allows anyone to make videos of real people appearing to say things they never said. (The Associated Press)

After pornographic photos depicting Swift began to surface, the artist’s fans flooded the platform with “Protect Taylor Swift” messages, aiming to bury the images to make them harder to find through search.

In a post, X said its teams were “closely monitoring” the site to see whether the photos would continue to appear.

“Our teams are actively removing any identified images and taking appropriate action against the accounts responsible for posting them,” the message said.

Neither Swift nor her publicist have commented on the images.

As the AI industry continues to grow, companies seeking a share of the profits have designed tools that allow less-experienced users to create images and videos from simple prompts. These tools have been popular and beneficial in some industries, but they also make it easier to create so-called deepfakes – images that show a person doing something they did not actually do.

Deepfake detection group Reality Defender said it had tracked a deluge of non-consensual pornographic material depicting Swift, particularly on X. Some images were also distributed on Facebook, owned by Meta, and other social media platforms.

“Unfortunately, they spread to millions and millions of users by the time some of them were removed,” said Mason Allen, Reality Defender’s head of growth.

The researchers found several dozen unique AI-generated images. The most widely shared were football-related, showing a painted or bloodied Swift in images that objectified her and, in some cases, suggested violent harm to her.

Tools pave way for ‘new era’ of cybercrime

“One of the biggest issues is that it’s just an incredible tool … and now everyone can use it,” said Steve DiPaola, a professor of artificial intelligence at Simon Fraser University.

A 2019 study by DeepTrace Labs, an Amsterdam-based cybersecurity company, found that 96 percent of deepfake video content online was non-consensual pornographic material. It also found that the top four websites dedicated to deepfake pornography received more than 134 million views on videos targeting hundreds of female celebrities around the world.

WATCH | Young people need better education on risks of online sexual violence, report says:

Technology-facilitated sexual violence and harassment is on the rise in Canada, and a new report suggests schools could do more to educate young people about the risks.

In Canada, police opened an investigation in December after fake nude photos of students at a French immersion school for grades 7 to 12 in Winnipeg were shared online. Earlier that year, a Quebec man was sentenced to prison for using AI to create seven deepfake child pornography videos – believed to be the first sentence of its kind handed down by a Canadian court.

“The police have clearly entered a new era of cybercrime,” Court of Quebec judge Benoit Gagnon wrote in his judgment.

Canadian judges work with outdated laws

After Swift’s targeting this week, US politicians called for new laws to criminalize the creation of deepfake images.

Canada could also use this kind of legislation, said UBC’s Thomasen.

Some Canadian laws address the broader issue of non-consensual distribution of intimate images, but most of them do not explicitly refer to deepfakes because the technology was not an issue when they were written.

Fake images of Swift, shown here in October 2023, circulated to millions of social media users before being deleted. (Valérie Macon/AFP/Getty Images)

That means judges in deepfake cases must decide how to apply old laws to new technologies.

“This is such a blatant violation of a person’s dignity, of control of their body, of control of their information, that it is difficult for me to imagine it could not be interpreted that way,” Thomasen said. “But there is some legal disagreement on this, and we are awaiting clarification from the courts.”

The new Intimate Images Protection Act, which takes effect Monday in British Columbia, includes references to deepfakes and will give prosecutors more power to pursue people who post intimate images of others online without consent – but it does not include references to the people who create the images or to social media companies.
