
OpenAI launches a tool for monitoring images generated by artificial intelligence


OpenAI, the creator of the generative artificial intelligence programs ChatGPT and DALL·E, has introduced a tool that allows researchers to detect whether a digital image was generated by artificial intelligence.

Verifying the authenticity of online content has become a pressing concern with the proliferation of generative artificial intelligence tools, which can produce fake photos or recordings of people from a simple prompt and be used for malicious purposes such as fraud.

OpenAI announced that it has created a program that detects images generated by its DALL·E 3 tool.

The California-based company said in an online statement that in internal tests, an earlier version of the tool "correctly detected up to 98%" of images generated by DALL·E 3, while less than 0.5% of images not generated by artificial intelligence were incorrectly flagged as DALL·E 3 output.
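To see what those two figures imply together, here is a minimal sketch computing how many flagged images would actually be AI-generated at different prevalence levels. The 98% detection rate and 0.5% false-positive rate come from the article; the base rates of AI-generated images are hypothetical, chosen purely for illustration.

```python
# Sketch: combining the reported 98% true-positive rate and <0.5%
# false-positive rate into a precision estimate via Bayes' rule.
# The base rates below are assumptions, not figures from OpenAI.

def precision(tpr: float, fpr: float, base_rate: float) -> float:
    """Fraction of flagged images that really are AI-generated."""
    true_pos = tpr * base_rate
    false_pos = fpr * (1.0 - base_rate)
    return true_pos / (true_pos + false_pos)

for base in (0.01, 0.10, 0.50):  # assumed share of DALL-E 3 images in the pool
    print(f"base rate {base:.0%}: precision {precision(0.98, 0.005, base):.1%}")
```

The takeaway is that even a very low false-positive rate matters: when AI-generated images are rare in the pool being scanned, a meaningful share of flagged images can still be false alarms.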

The company, which is heavily funded by Microsoft, acknowledged that the program's effectiveness is reduced when images generated by DALL·E 3 are later modified, or when images come from other models.

OpenAI also announced that it will add tags to images generated by artificial intelligence in accordance with the standards of the C2PA (Coalition for Content Provenance and Authenticity).

This alliance is a technology-industry initiative to develop technical standards for determining the source and authenticity of digital content.

Last month, Meta, the parent company of Facebook and Instagram, announced that starting in May it would begin labeling AI-generated content based on the alliance's standards. Google has also joined the initiative.
