Is it any wonder that Canadian celebrities like television chef Mary Berg, crooner Michael Bublé, comedian Rick Mercer and hockey megastar Sidney Crosby are finally revealing their secrets to financial success? That is, until the Bank of Canada tried to stop them.
Of course, none of this is true, but this was the figurative bag of magic beans that apparent scammers on social media were trying to sell, tricking users into clicking on sensational posts – Berg under arrest, Bublé hauled away – and leading them to what looks, on the surface, like a legitimate news story on the CTV News website.
If you're intrigued enough by what appears to be an AI-generated article, you'll have ample opportunity to click on the numerous links – around 225 on a single page – inviting you to register and submit a first investment of $350, which, the page claims, would grow more than tenfold in just seven days.
These are just the latest in a series of fake ads, articles and videos exploiting the names, images, footage and even voices of prominent Canadians to promote investment or cryptocurrency projects.
Lawyers specializing in deepfakes and AI-generated content warn that they currently have little legal recourse and that Canadian laws have not advanced as quickly as the technology itself.
Financial scams and schemes appropriating the likeness of famous people are nothing new, but the use of rapidly evolving generative AI technology brings "a new twist to a fairly old concept," said Molly Reynolds, a lawyer and partner at Torys LLP in Toronto.
And it's going to get worse before it gets better, she said. Developing the tools and laws to prevent this is a game of catch-up that we are already losing.
While there is plenty of content on the internet that shows clear signs of being generated by AI, University of Ottawa computer science professor WonSook Lee said some of it is so good now that it is becoming much more difficult to discern what is real.
She said that, as recently as a few years ago, she could immediately detect an AI-generated image or deepfake video of a person just by glancing at it and noticing differences in pixelation or composition. But some programs can now create near-perfect photos and videos.
What’s not perfectly generated can be further edited with photo and video editing software, she added.
As we learn more about AI, it also becomes smarter.
“If we find a way to detect deepfakes, we help deepfakes get better,” she said.
It appears that X has reduced the swarm of fraudulent Canadian celebrity ads to some extent and suspended some – but not all – of the accounts that shared them. CBC News attempted to contact a spokesperson for X Corp., the social media platform’s parent company, but received only an automated response.
X and other social media and website hosting companies may have policies aimed at preventing spam and financial scams on their platforms. But Reynolds said they face a "question of moral versus legal obligations."
That’s because there aren’t many legal requirements that incentivize platforms to remove fraudulent materials, she explained.
“There are individuals who are deeply affected, without legal recourse, without help from tech companies and perhaps without a large social network, you know, to rely on like Taylor Swift,” Reynolds said.
After all, prominent Canadians don’t wield as much influence as Taylor Swift. If they did, perhaps the story would unfold differently.
The rapid spread of AI-generated sexualized images of the pop music superstar last month prompted social media companies to act almost immediately. Even the White House has spoken out.
X quickly removed the images and blocked searches on Swift's name. Within days, American lawmakers introduced a bill to combat such deepfake pornography.
But Reynolds said it’s not just situations involving sexualized, nonconsensual images that can cause harm, especially when it comes to people whose names and faces are their trademarks.
CBC News requested interviews with Berg and Mercer to find out if either had taken any action in response to the false ads appropriating their images. Mercer declined to be interviewed for this story. Berg’s publicist forwarded the request to CTV’s parent company, Bell Media, which denied it.
New legal landscape
Whether or not someone is famous doesn't matter if their image is used in a way they didn't consent to, said Pablo Tseng, a Vancouver-based intellectual property lawyer and partner at McMillan LLP.
"You control how you should be presented," Tseng said. "The law will still view this as a wrong committed against you. Of course, the question is: do you think it's worth pursuing this in court?"
Canada hasn't followed the U.S. lead on new deepfake legislation, but there are some existing torts – laws primarily established by judges with the aim of compensating people for wrongdoing – that could potentially be applied in a trial involving AI-generated deepfakes, according to Tseng.
The tort of misappropriation of personality, he said, could apply because it often involves an image of someone that is digitally manipulated or grafted onto another image.
The tort of false light, which concerns the public misrepresentation of a person, is a more recent option rooted in American law that was first recognized in Canada by the Ontario Superior Court in 2019. So far, it has been recognized in only two provinces, with British Columbia being the other.
Play the long game
Anyone who wants to take legal action over the production and distribution of deepfakes will have to be in it for the long haul, Reynolds said. Any case would take time to go through the court system – and could be costly.
The fight can, however, pay off.
Reynolds pointed to a recent class action against Meta over "Sponsored Stories," ads that ran on Facebook between 2011 and 2014 and used users' names and profile photos to promote products without their consent.
Meta offered a $51 million settlement to users in Canada last month. Lawyers estimate that 4.3 million people whose real name or photo was used in a sponsored post could qualify.
“It’s not a particularly quick solution for individuals, but it may be more cost-effective when it comes to a class action,” Reynolds said.
But the quest for justice or damages also requires knowing who bears responsibility for these deepfake scams. Lee, of the University of Ottawa, said identifying those responsible, already a challenge, will become almost impossible with further advances in generative AI technology.
Much of the published research on artificial intelligence includes freely available source code, she explained, meaning that anyone with the know-how can create their own program without any sort of traceable markers.