Google Has a Strategy to Fight Back as AI-Generated Fakes Proliferate

During Google I/O 2023, Google unveiled three features aimed at detecting AI-generated fake images in search results. The new tools, as reported by Bloomberg, include identifying the origins of an image, adding metadata to Google-generated AI images, and labeling other AI-generated images in search results. The proliferation of AI image synthesis models has made it increasingly easy to create realistic fake images, posing risks of misinformation, political propaganda, and damage to the integrity of the historical record.

To combat these challenges, Google plans to bring these features to its image search product in the coming months. Google notes that a significant share of people regularly encounter misinformation, and it therefore aims to develop user-friendly tools that help people identify and evaluate visual content. The first feature, “About this image,” will give users additional information about an image’s history, including when it was first indexed by Google, where it first appeared, and where else it has been seen online, such as on news, social, or fact-checking sites.

By offering this context, users can make more informed judgments about an image’s reliability and determine if it requires further scrutiny. For instance, users may discover that an image depicting a fabricated Moon landing was flagged by news outlets as an AI-generated creation through the “About this image” feature. The second feature focuses on AI tools used in image creation. Google plans to label all images generated by its AI tools with special metadata that clearly indicates their AI origins.

An example of someone using "About this image" to gain context about an image through Google search.

Additionally, Google is collaborating with other platforms such as Midjourney and Shutterstock, encouraging them to embed similar labels in their AI-generated images. Google Image Search will recognize these labels and display them to users within search results. While this approach is not foolproof, since metadata can be altered or removed, it represents a significant effort to address the spread of deepfakes online.
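Labels like these are typically embedded directly in the image file as metadata. As a rough illustration of the general mechanism only (the article does not detail Google's actual format, and the `AI-Generated` keyword below is an invented example), this sketch writes a custom `tEXt` metadata chunk into a minimal 1×1 PNG using only the Python standard library, then reads it back the way a search indexer might:

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: length, type, data, CRC-32 over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_labeled_png(key: str, value: str) -> bytes:
    """Create a minimal 1x1 grayscale PNG carrying a key/value tEXt label."""
    signature = b"\x89PNG\r\n\x1a\n"
    # IHDR: width=1, height=1, bit depth 8, grayscale, default methods
    ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
    # tEXt: Latin-1 keyword, NUL separator, Latin-1 text
    text = chunk(b"tEXt",
                 key.encode("latin-1") + b"\x00" + value.encode("latin-1"))
    # IDAT: one scanline = filter byte + one pixel, zlib-compressed
    idat = chunk(b"IDAT", zlib.compress(b"\x00\x00"))
    iend = chunk(b"IEND", b"")
    return signature + ihdr + text + idat + iend

def read_text_chunks(png: bytes) -> dict:
    """Walk the chunk stream and collect all tEXt key/value labels."""
    labels = {}
    pos = 8  # skip the 8-byte PNG signature
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        data = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = data.partition(b"\x00")
            labels[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return labels

png = make_labeled_png("AI-Generated", "true")
print(read_text_chunks(png))  # {'AI-Generated': 'true'}
```

As the article notes, the weakness of any such scheme is visible here: a re-encode or a stripped-metadata copy simply drops the `tEXt` chunk, and the label disappears.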

As more images become AI-generated or AI-augmented over time, the boundary between “real” and “fake” may become increasingly blurred, influenced by evolving cultural norms. Ultimately, our trust in the source of information, regardless of its creation method, will continue to be crucial. However, solutions like those provided by Google can serve as valuable tools to assist users in evaluating source credibility as technology advances.
