This article is real, but AI-generated deepfakes look damn close and are scamming people


Is it any wonder that Canadian celebrities like television chef Mary Berg, crooner Michael Bublé, comedian Rick Mercer and hockey megastar Sidney Crosby were finally revealing their secrets to financial success? That is, until the Bank of Canada tried to stop them.

Of course, none of this is true. But this was the figurative bag of magic beans that apparent scammers on social media were trying to sell, baiting users into clicking on sensational posts (Berg under arrest, Bublé hauled away) that led to what looks, on the surface, like a legitimate news story on the CTV News website.

If you're intrigued enough by what appears to be an AI-generated article, you'll have ample opportunity to click on the numerous links (around 225 on a single page) inviting you to register and submit a first investment of $350, which is promised to multiply more than tenfold in just seven days.

These are just the latest in a series of fake ads, articles and videos exploiting the names, images, footage and even voices of prominent Canadians to promote investment or cryptocurrency projects.

Lawyers specializing in deepfakes and AI-generated content warn that they currently have little legal recourse and that Canadian laws have not advanced as quickly as the technology itself.

Financial scams and schemes appropriating the likeness of famous people are nothing new, but the use of rapidly evolving generative AI technology brings "a new twist to a fairly old concept," said Molly Reynolds, a partner at Torys LLP in Toronto who specializes in this area.

And it’s going to get worse before it gets better. Developing the tools and laws to prevent this from happening is a game of catch-up that we are already losing, she said.

LISTEN | What you need to know about deepfake ads on social media:

Information Morning – Nova Scotia | 7:01 | Implications of fraudulent fake ads

You've probably seen them if you spend any time online: ads that show a CBC host, or a personality like Elon Musk, promoting some sort of get-rich-quick scheme. They fall into the category of "deepfakes," videos generated by AI. Our tech columnist Nur Zincir-Heywood looks at the issue.

Deepfake detection

While there is plenty of content on the internet that shows clear signs of being generated by AI, University of Ottawa computer science professor WonSook Lee said some of it is so good now that it is becoming much more difficult to discern what is real.

She said that as recently as a few years ago, she could immediately spot an AI-generated image or deepfake video of a person just by glancing at it and noticing differences in pixelation or composition. But some programs can now create near-perfect photos and videos.

What’s not perfectly generated can be further edited with photo and video editing software, she added.

As we learn more about AI, it also becomes smarter.

“If we find a way to detect deepfakes, we help deepfakes get better,” she said.

WATCH | The National’s Ian Hanomansing takes on deepfakes himself:

Anyone can be faked in a fraudulent ad. Even Ian Hanomansing

Scammers are using fakes of trusted public figures to get your money through fraudulent online ads. The National's Ian Hanomansing is among those being impersonated. He found out what the law says and what social media companies are doing about it.

Star Power

It appears that X has reduced the swarm of fraudulent Canadian celebrity ads to some extent and suspended some – but not all – of the accounts that shared them. CBC News attempted to contact a spokesperson for X Corp., the social media platform’s parent company, but received only an automated response.

X and other social media and website hosting companies may have policies aimed at preventing spam and financial scams on their platforms. But Reynolds said they faced a “question of moral versus legal obligations.”

That’s because there aren’t many legal requirements that incentivize platforms to remove fraudulent materials, she explained.

"There are individuals who are deeply affected, without legal recourse, without help from tech companies and perhaps without a large social network, you know, to rely on, like Taylor Swift," Reynolds said.

After all, prominent Canadians don’t wield as much influence as Taylor Swift. If they did, perhaps the story would unfold differently.

The rapid spread of AI-generated sexualized images of the pop music superstar last month prompted social media companies to act almost immediately. Even the White House has spoken out.

X quickly removed the images and blocked searches for Swift's name. Within days, American lawmakers introduced a bill to combat such deepfake pornography.

WATCH | Role of social media companies in combating the spread of sexualized deepfakes:

White House ‘alarmed’ by explicit AI-generated images of Taylor Swift on social media

White House press secretary Karine Jean-Pierre responded to a reporter's question about fake, explicit AI-generated images of Taylor Swift spreading on social media, saying social media companies have a clear role in enforcing policies to prevent this type of content from being distributed on their platforms.

But Reynolds said it’s not just situations involving sexualized, nonconsensual images that can cause harm, especially when it comes to people whose names and faces are their trademarks.

CBC News requested interviews with Berg and Mercer to find out if either had taken any action in response to the false ads appropriating their images. Mercer declined to be interviewed for this story. Berg’s publicist forwarded the request to CTV’s parent company, Bell Media, which denied it.

LISTEN | How Taylor Swift deepfakes will impact AI laws:

1:52 | Will Taylor Swift AI deepfakes finally spur governments to act?

Last week, explicit AI-generated images of Taylor Swift were shared on X, formerly known as Twitter, without her consent. These photos were viewed millions of times before being removed. Journalists Sam Cole and Melissa Heikkilä – who have been following the rise of deepfakes for years – explain why this story struck a chord in Hollywood and Washington.

New legal landscape

It doesn't matter whether someone is famous: if their image is used in a way they didn't consent to, the law still applies, said Pablo Tseng, a Vancouver-based intellectual property lawyer at McMillan LLP.

"You control how you should be presented," Tseng said. "The law will still view this as a wrong committed against you. Of course, the question is: do you think it's worth pursuing this in court?"

Canada hasn't followed the U.S. lead on new deepfake legislation, but there are some existing torts – laws primarily established by judges with the aim of compensating people for wrongdoing – that could potentially be applied in a lawsuit involving AI-generated deepfakes, according to Tseng.

The tort of misappropriation of personality, he said, could apply because it often involves an image of someone that is digitally manipulated or grafted onto another image.

The tort of false light, which concerns the public misrepresentation of a person, is a more recent option drawn from American law, first recognized in Canada by the Ontario Superior Court in 2019. But so far it has only been recognized in two provinces (British Columbia being the other).

WATCH | The consequences of falsification of celebrity photos:

When AI tampering fooled us | About that

Andrew Chang details the consequences of high-profile photo falsification after recent AI-generated images went viral: the Pope in a puffer jacket and former U.S. president Donald Trump being arrested.

Play the long game

Anyone who wants to take legal action over the production and distribution of deepfakes will have to be in it for the long haul, Reynolds said. Any case would take time to go through the court system – and could be costly.

The fight can, however, pay off.

Reynolds pointed to a recent class action against Meta over "Sponsored Stories" ads on Facebook between 2011 and 2014, which generated endorsements by using users' names and profile photos to promote products without their consent.

Meta offered a $51 million settlement to users in Canada last month. Lawyers estimate that 4.3 million people whose real name or photo was used in a sponsored post could qualify.

“It’s not a particularly quick solution for individuals, but it may be more cost-effective when it comes to a class action,” Reynolds said.

But the quest for justice or damages also requires knowing who bears responsibility for these deepfake scams. Lee, of the University of Ottawa, said what is already a challenge will become nearly impossible as generative AI technology advances further.

Much of the published research on artificial intelligence includes freely available source code, she explained, meaning that anyone with the know-how can create their own program without any sort of traceable markers.

WATCH | What happens when deepfakes are used to interfere in elections:

Can you spot a deepfake? How AI threatens elections

Fake AI-generated videos are being used for internet scams and gags, but what happens when they are created to interfere in elections? CBC's Catharine Tunney explains how the technology can be weaponized and examines whether Canada is prepared.
