Meta is moving to identify and label AI-generated images shared on its platforms, including those produced by third-party tools. The initiative comes ahead of the 2024 election season, as increasingly capable AI tools threaten to muddy the information ecosystem.
In the coming months, Meta will begin adding “AI generated” labels to images created with tools from companies including Google, Microsoft, OpenAI, Adobe, Midjourney, and Shutterstock. The labeling effort is part of Meta’s commitment to transparency; the company already applies a similar “imagined with AI” label to photorealistic images produced by its own AI generator.
To make this labeling workable at scale, Meta is collaborating with leading companies in the AI industry to establish common technical standards. These standards embed invisible metadata or watermarks within images, allowing Meta’s systems to recognize AI-generated images produced by those tools.
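The article does not specify which marker formats Meta's detection relies on, but industry standards in this space include the IPTC `DigitalSourceType` value `trainedAlgorithmicMedia` and C2PA Content Credentials manifests. The sketch below is a simplified illustration of the idea, not Meta's actual pipeline: it merely scans an image file's raw bytes for those marker strings, whereas a production detector would parse the XMP and JUMBF structures properly and verify cryptographic signatures.

```python
# Illustrative sketch only: detect provenance markers by substring scan.
# The marker strings are real identifiers from the IPTC and C2PA standards,
# but real detection parses the metadata containers rather than raw bytes.

AI_METADATA_MARKERS = (
    b"trainedAlgorithmicMedia",  # IPTC DigitalSourceType for AI-generated media
    b"c2pa",                     # label used by C2PA Content Credentials manifests
)

def looks_ai_generated(image_bytes: bytes) -> bool:
    """Return True if any known AI-provenance marker appears in the file bytes."""
    return any(marker in image_bytes for marker in AI_METADATA_MARKERS)
```

A byte-level scan like this is trivially defeated by stripping metadata, which is why the article notes Meta is also working on watermarks that cannot simply be removed.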
Meta’s labeling system is set to be introduced across Facebook, Instagram, and Threads, spanning multiple languages.
The unveiling of Meta’s labels comes amid growing concern from online information experts, lawmakers, and some tech leaders. The worry is that advanced AI tools capable of generating lifelike images, combined with the rapid spread enabled by social media, could fuel a wave of misleading information, particularly ahead of the 2024 elections in the United States and many other countries.
In addition, Meta is working on preventing users from removing the imperceptible watermarks embedded in AI-generated images, according to Nick Clegg, Meta’s president of global affairs.
Clegg emphasized the significance of this effort, anticipating an increasingly adversarial landscape in the coming years: people and organizations seeking to deceive others with AI-generated content will likely try to circumvent protective measures. He advised users to weigh several factors when judging whether content is AI-generated, such as the trustworthiness of the account sharing it or details that look or sound unnatural.
Separately, Meta announced on Tuesday expanded support for “Take It Down,” an anti-sextortion tool developed with the National Center for Missing & Exploited Children. The tool lets teens or parents securely generate a unique identifier for intimate images they fear may spread online; platforms like Meta can then use that identifier to find and remove the images without the images themselves ever being uploaded.
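The key privacy property described above is that only a fingerprint of the image leaves the user's device, never the image itself. The article does not say which hashing scheme Take It Down uses, so the sketch below substitutes SHA-256 purely for illustration:

```python
# Illustrative sketch: fingerprint an image locally so only the digest,
# never the image, is shared with platforms for matching and removal.
# SHA-256 is an assumption here, not the scheme Take It Down actually uses.
import hashlib

def image_fingerprint(image_bytes: bytes) -> str:
    """Hash the image on-device; only this hex digest would be submitted."""
    return hashlib.sha256(image_bytes).hexdigest()
```

Note that an exact cryptographic hash like this only matches byte-identical copies; catching resized or re-encoded versions of an image requires a perceptual hash, which is one reason real matching systems are more involved than this sketch.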