Meta to Identify AI-Generated Images to Prevent Misinformation


Meta said on Tuesday that it will begin identifying images that may have been generated or altered by AI, an effort to curb the spread of misinformation and deepfakes ahead of upcoming elections.

AI-generated images and deepfakes have repeatedly been used to spread false information online, and the problem is growing. Meta says it will expand its tools to detect AI-generated images across Facebook, Instagram, and Threads. The company already labels images created with its own AI tools; going forward, images generated with tools from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock will also be labeled.


Nick Clegg, Meta's president of global affairs, said the company will label AI-generated images from these outside sources and will “continue working on the problem in the coming months.”

Clegg also said that more work is needed to “align on common technical standards that signal when a piece of content has been created using AI.”
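One concrete example of such a standard is photo metadata: the IPTC standards body defines a “digital source type” vocabulary whose trainedAlgorithmicMedia value marks an image as fully AI-generated. The sketch below is a minimal, purely illustrative Python check for that marker in a file's embedded metadata; it is not Meta's actual detection pipeline, and the function name and script usage are placeholders.

```python
# Minimal sketch: scan an image file's raw bytes for the IPTC
# "trainedAlgorithmicMedia" digital-source-type URI, one standardized
# way to signal that an image was generated by AI. Illustrative only;
# this is not Meta's detection pipeline.
from pathlib import Path

# IPTC NewsCodes URI identifying fully AI-generated media.
AI_SOURCE_TYPE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def has_ai_metadata_marker(image_path: str) -> bool:
    """Return True if the file's embedded XMP/IPTC metadata
    contains the 'trainedAlgorithmicMedia' marker."""
    return AI_SOURCE_TYPE in Path(image_path).read_bytes()

if __name__ == "__main__":
    import sys
    path = sys.argv[1]  # e.g. python check_marker.py photo.jpg
    verdict = "carries" if has_ai_metadata_marker(path) else "does not carry"
    print(f"{path} {verdict} a standard AI-generation metadata marker")
```

Metadata like this is easy to strip when an image is re-saved or screenshotted, which is why it can only ever be one signal among several.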

The trouble with AI-generated content is that it is sometimes obvious and other times nearly impossible to tell whether an image is real or has been altered in some way.

“We are putting in a lot of effort to create classifiers that will help us automatically find content made by AI, even if it doesn’t have any invisible markers. We’re also looking for ways to make it harder to remove or alter invisible watermarks,” Clegg wrote.
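Invisible watermarking covers a range of techniques. As a deliberately naive toy, not the scheme Meta or any image generator actually uses, the sketch below hides a short marker in the least-significant bits of pixel values. Re-encoding the image as a JPEG destroys such a mark, which illustrates why making watermarks hard to remove or alter is the difficult part.

```python
# Toy "invisible watermark": hide and recover a short marker in the
# least-significant bit of each pixel's red channel. Purely illustrative;
# production watermarks are designed to survive compression and editing,
# which this naive scheme does not.
from PIL import Image

MARK = "AIGEN"  # hypothetical marker string for this sketch

def embed_watermark(src: str, dst: str, mark: str = MARK) -> None:
    img = Image.open(src).convert("RGB")
    bits = [int(b) for ch in mark.encode() for b in f"{ch:08b}"]
    pixels = list(img.getdata())
    for i, bit in enumerate(bits):
        r, g, b = pixels[i]
        pixels[i] = ((r & ~1) | bit, g, b)  # overwrite red-channel LSB
    img.putdata(pixels)
    img.save(dst, "PNG")  # lossless format, so the hidden bits survive

def read_watermark(path: str, length: int = len(MARK)) -> str:
    img = Image.open(path).convert("RGB")
    lsbs = [r & 1 for r, _, _ in list(img.getdata())[: length * 8]]
    chars = [int("".join(map(str, lsbs[i:i + 8])), 2)
             for i in range(0, len(lsbs), 8)]
    return bytes(chars).decode(errors="replace")
```

Saving the watermarked file as a JPEG and reading it back yields garbage, since lossy compression rewrites exactly the low-order bits the mark lives in; robust schemes spread the signal so it survives such transformations.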

Meta also acknowledged that AI-generated video and audio are becoming increasingly difficult to detect. Users will be given a way to disclose whether their content was made with AI, and if someone shares an AI-generated image without labeling it, the “company may apply penalties.”
