Meta will label AI-generated images from OpenAI, Google and other companies


Meta Platforms will begin detecting and labeling images generated by other companies’ artificial intelligence services in the coming months, using a set of invisible markers embedded in the files, its top policymaker said Tuesday.

Meta will apply the labels to any tagged content posted to its Facebook, Instagram and Threads services, in an effort to signal to users that the images – which in many cases look like real photos – are actually digital creations, the company's president of global affairs, Nick Clegg, wrote in a blog post.

The company already labels all content generated using its own AI tools.

Once the new system is up and running, Meta will do the same for images created on services run by OpenAI, Microsoft, Adobe, Midjourney, Shutterstock and Alphabet's Google, Clegg said.

The announcement provides a first look at an emerging system of standards that tech companies are developing to mitigate the potential harms associated with generative AI technologies, which can spit out fake but realistic-looking content in response to simple prompts.

The approach builds on a model established over the past decade by some of the same companies to coordinate the removal of banned content across platforms, including depictions of mass violence and child exploitation.

Audio and video tagging technology still in development

In an interview, Clegg told Reuters he was confident companies could reliably label AI-generated images at this point, but said tools for tagging audio and video content were more complicated and still under development.

“Even though the technology is not yet completely mature, especially when it comes to audio and video, the hope is that we can create a sense of momentum and incentive for the rest of the industry to follow,” Clegg said.

WATCH | How AI-generated videos can be weaponized in elections:

Can you spot a deepfake? How AI threatens elections

Fake AI-generated videos are being used for internet scams and gags, but what happens when they are created to interfere in elections? CBC’s Catharine Tunney explains how the technology can be weaponized and examines whether Canada is prepared for such interference.

In the meantime, he added, Meta would begin requiring users to label their own altered audio and video content and would apply penalties if they failed to do so. Clegg did not describe the penalties.

He added that there was currently no viable mechanism for labeling written text generated by AI tools like ChatGPT.

“That ship has sailed,” Clegg said.

A Meta spokesperson declined to comment on whether the company would apply labels to generative AI content shared on its encrypted messaging service WhatsApp.

Meta’s independent oversight board on Monday rebuked the company’s policy on deceptively doctored videos, saying it was too narrow and that content should be labeled rather than removed. Clegg said he generally agreed with these criticisms.

The board was right, he said, that Meta’s current policy “is simply not suited to an environment where you’re going to have a lot more synthetic content and hybrid content than before.”

He cited the new labeling partnership as evidence that Meta was already moving in the direction proposed by the board.
