
OpenAI's new tool can detect its own DALL-E 3 AI images, but there's a catch

May 7, 2024 | Hi-network.com

DALL-E 3 in ChatGPT (Image: Maria Diaz)

AI-generated images can be used to trick you into believing fake content is real. As such, ChatGPT maker OpenAI has developed a tool that aims to predict whether an image was created with its own DALL-E 3 image generator. The image detection classifier's success rate, however, depends on whether and how the image was modified.

On Tuesday, OpenAI gave the first group of testers access to its new image detection tool. The goal is to enlist independent researchers to weigh in on the tool's effectiveness, analyze its usefulness in the real world, determine how it could be used, and examine the characteristics of AI-generated content. Interested researchers can apply for access on the DALL-E Detection Classifier Access Program webpage.

Also: The best AI image generators to try right now

OpenAI has been testing the tool internally, and the results so far have been promising in some ways and disappointing in others. When analyzing images generated by DALL-E 3, the tool identified them correctly around 98% of the time. Furthermore, when analyzing images that weren't created by DALL-E 3, the tool misidentified them as being made by DALL-E 3 only around 0.5% of the time.
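For context on what those percentages mean in practice, the share of flagged images that really are DALL-E 3 outputs also depends on how common such images are in the pool being checked. The short Python sketch below works through that arithmetic using the reported rates; the prevalence values are illustrative assumptions, not figures from OpenAI.

```python
# Minimal sketch: how the reported ~98% true-positive rate and ~0.5%
# false-positive rate translate into precision under assumed prevalence
# values (the prevalence figures below are illustrative, not from OpenAI).

TPR = 0.98   # fraction of DALL-E 3 images correctly flagged (reported)
FPR = 0.005  # fraction of non-DALL-E 3 images wrongly flagged (reported)

for prevalence in (0.5, 0.1, 0.01):  # assumed share of DALL-E 3 images in the pool
    true_pos = TPR * prevalence
    false_pos = FPR * (1 - prevalence)
    precision = true_pos / (true_pos + false_pos)
    print(f"prevalence {prevalence:>4.0%}: "
          f"{precision:.1%} of flagged images are actually DALL-E 3")
```

The takeaway: even a very low false-positive rate starts to matter when genuine photos vastly outnumber AI-generated ones in the content being scanned.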

Minor modifications to an image also had little impact, according to OpenAI. Internal testers were able to compress, crop, and apply changes in saturation to an image created by DALL-E 3, and the tool showed a lower but still relatively high success rate. So far, so good.  
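For readers who want a concrete sense of what "minor modifications" look like, the following Python sketch (using the Pillow library) applies the kinds of edits described above: cropping, a saturation change, and JPEG re-compression. The file names are placeholders, and this is only an illustration of the test conditions, not OpenAI's actual test harness.

```python
# Rough sketch of the benign edits described above (cropping, saturation
# changes, re-compression), using Pillow. File names are placeholders.
from PIL import Image, ImageEnhance

img = Image.open("dalle3_output.png").convert("RGB")

# Crop to the central 80% of the image.
w, h = img.size
cropped = img.crop((int(w * 0.1), int(h * 0.1), int(w * 0.9), int(h * 0.9)))

# Reduce saturation to 60% of the original.
desaturated = ImageEnhance.Color(cropped).enhance(0.6)

# Re-encode as a heavily compressed JPEG.
desaturated.save("modified.jpg", format="JPEG", quality=40)
```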

Unfortunately, the tool didn't fare as well with images that underwent more extensive changes. In its blog post, OpenAI didn't reveal the success rate in these instances, other than to simply say that "other modifications, however, can reduce performance."

The tool's effectiveness dropped under conditions such as changing the hue of an image, OpenAI researcher Sandhini Agarwal told The Wall Street Journal (subscription required). OpenAI hopes to fix these types of issues by giving external testers access to the tool, Agarwal added.

Also: How to use DALL-E 3 in ChatGPT

Internal testing also challenged the tool to analyze images created using AI models from other companies. In these cases, OpenAI's tool was able to identify only 5% to 10% of the images from these outside models. Making modifications to such images, such as changing the hue, also led to a sharp decline in effectiveness, Agarwal told the Journal. This is another limitation that OpenAI hopes to correct with further testing.

One plus for OpenAI's detection tool: it doesn't rely on watermarks. Other companies use watermarks to indicate that an image was generated by their own AI tools, but watermarks can be removed fairly easily, rendering them ineffective.

AI-generated images are especially problematic in an election year. Hostile parties, both inside and outside of a country, can easily use such images to paint political candidates or causes in a negative light. Given the ongoing advancements in AI image generators, figuring out what's real and what's fake is becoming more and more of a challenge.

Also: We're not ready for the impact of generative AI on elections

With this threat in mind, OpenAI and Microsoft have launched a $2 million Societal Resilience Fund to expand AI education and literacy among voters and vulnerable communities. Given that 2 billion people around the world have already voted or will vote in democratic elections this year, the goal is to ensure that individuals can better navigate digital information and find reliable resources.

OpenAI also said that it's joining the Steering Committee of C2PA (Coalition for Content Provenance and Authenticity). Used as proof that content came from a specific source, C2PA is a standard for digital content certification adopted by software providers, camera manufacturers, and online platforms. OpenAI says that C2PA metadata is included in all images created and edited in DALL-E 3 and ChatGPT, and will soon appear in videos created by OpenAI's Sora video generator.
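In JPEG files, C2PA manifests are carried in APP11 marker segments as JUMBF boxes. As a rough illustration, the Python sketch below scans a JPEG for such a segment to see whether a C2PA manifest is present at all; it does not verify the manifest's signatures, which requires dedicated C2PA tooling such as the open-source c2patool. The file name is a placeholder.

```python
# Quick-and-dirty sketch: check whether a JPEG contains C2PA metadata by
# scanning its APP11 segments for a JUMBF payload labelled "c2pa". This only
# detects the presence of a manifest; it does NOT cryptographically verify it.
# File name is a placeholder.
import struct

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\xff\xd8"):            # not a JPEG
        return False
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:   # APP11 segment with C2PA JUMBF
            return True
        if marker == 0xDA:                          # start of scan: no more headers
            break
        i += 2 + length
    return False

print(has_c2pa_manifest("dalle3_output.jpg"))
```

Note that provenance metadata like this can be stripped when an image is re-saved or re-uploaded, which is why OpenAI pairs it with the detection classifier rather than relying on either approach alone.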

